
Nonlinear programming (NLP) is crucial for solving optimization problems in real-world scenarios where linear assumptions are not applicable, providing effective solutions for complex and diverse systems. NLP problems are commonly found across various domains such as engineering, economics, machine learning, and finance, where the relationships between variables are nonlinear. These problems are generally more challenging than linear programming problems due to issues like multiple local optima, non-convexities, and the difficulty of applying gradient-based optimization methods.

The main components of nonlinear programming involve defining the objective function, setting up constraints, and choosing the right solution techniques. These methods can vary from simpler gradient-based approaches, such as steepest descent or Newton’s method, to more sophisticated algorithms, including interior-point methods or evolutionary algorithms, depending on the specific characteristics and needs of the problem.

A nonlinear optimization problem can be mathematically described as follows, where $f$ is the objective, $g$ collects the equality constraints, and $h$ the inequality constraints. Assume that all functions and their derivatives are well defined.

\[\begin{gather*} \min_{x}\ f(x)\\ \text{subject to}\quad g(x)= 0\\ h(x)\leq 0 \end{gather*}\]

Unlike an unconstrained problem, where setting the gradient of the objective function to zero characterizes a minimum, the optimality conditions for the constrained counterpart are more involved.

The Karush-Kuhn-Tucker (KKT) conditions are essential criteria for determining the optimality of a solution in constrained optimization problems, especially in nonlinear programming. They extend the concept of Lagrange multipliers to account for inequality constraints. The KKT conditions include four key elements:

  1. the stationarity condition, which requires the gradient of the Lagrangian to be zero
  2. the primal feasibility condition, ensuring the satisfaction of all constraints
  3. the dual feasibility condition, which demands that the Lagrange multipliers for inequality constraints be non-negative
  4. the complementary slackness condition, stating that the product of each multiplier and its corresponding inequality constraint must equal zero, meaning either the constraint is active or the multiplier is zero.

These conditions, written out formally below, play a crucial role in efficiently solving optimization problems with constraints.
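For the problem stated earlier, with Lagrangian $\mathcal{L}(x,\lambda,\mu)=f(x)+\lambda^{T}g(x)+\mu^{T}h(x)$ (using the sign convention $h(x)\leq 0$), the KKT conditions read:

\[\begin{gather*} \nabla f(x^*) + \nabla g(x^*)^{T}\lambda + \nabla h(x^*)^{T}\mu = 0 \quad \text{(stationarity)}\\ g(x^*) = 0,\quad h(x^*) \leq 0 \quad \text{(primal feasibility)}\\ \mu \geq 0 \quad \text{(dual feasibility)}\\ \mu_i\, h_i(x^*) = 0 \ \ \forall i \quad \text{(complementary slackness)} \end{gather*}\]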

Numerical solvers implement algorithms that attempt to compute a solution satisfying the KKT conditions. For equality-constrained optimization, the KKT conditions are a set of nonlinear equations, and Newton-type methods can be applied directly. Inequality-constrained problems are solved broadly using two families of methods: interior-point methods and active-set methods.
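As a small illustration of the equality-constrained case (a standalone sketch, not tied to any particular solver), for a quadratic objective with linear equality constraints the KKT conditions are linear, so a single Newton step solves them exactly. The matrices Q, c, A, b below are made-up example data.

import numpy as np

# Equality-constrained QP (made-up data): minimize 0.5*x'Qx + c'x  subject to  Ax = b
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])          # single constraint: x0 + x1 = 1
b = np.array([1.0])

# KKT conditions: Qx + c + A'lam = 0 and Ax = b form one linear system
n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
z = np.linalg.solve(K, rhs)
x_opt, lam_opt = z[:n], z[n:]
print('x*:', x_opt, 'lambda*:', lam_opt)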

In interior-point methods, a barrier function is used to approximate the inequality constraints, and a sequence of transformed equality-constrained subproblems is solved as the barrier parameter is driven to zero. Each iteration is relatively expensive, but the algorithm typically converges in fewer iterations than active-set methods. Another selling point is that for convex optimization problems the run-time can be bounded deterministically, which is critical for real-time applications.
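To make the barrier idea concrete, here is a minimal hand-rolled sketch (illustrative only, not how Ipopt is implemented) that solves minimize $(x-2)^2$ subject to $x \leq 1$ by applying Newton's method to the log-barrier subproblem and shrinking the barrier parameter mu:

# Log-barrier sketch (illustrative only): minimize (x - 2)**2 subject to x <= 1
# Barrier subproblem: phi(x) = (x - 2)**2 - mu*log(1 - x)

def solve_barrier_subproblem(x, mu, iters=50):
    for _ in range(iters):
        grad = 2*(x - 2) + mu/(1 - x)      # d(phi)/dx
        hess = 2 + mu/(1 - x)**2           # d2(phi)/dx2
        step = -grad/hess                  # Newton step
        if x + step >= 1:                  # never step past the boundary x = 1
            step = 0.99*(1 - x)
        x += step
    return x

x, mu = 0.0, 1.0
for _ in range(10):
    x = solve_barrier_subproblem(x, mu)    # solve the barrier subproblem for the current mu
    mu *= 0.1                              # drive the barrier weight to zero
print('approximate minimizer:', x)         # approaches the constrained optimum x* = 1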

Active-set methods take a combinatorial approach: they maintain a working set of inequality constraints assumed to be active, solve the resulting equality-constrained subproblem, and add or drop constraints based on feasibility and the signs of the multipliers, as in the toy sketch below.
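A toy version of that loop for minimize $(x-2)^2$ subject to $x \leq 1$ (a hand-written illustration, not a production active-set solver):

# Toy active-set illustration: minimize (x - 2)**2 subject to x - 1 <= 0

def solve_subproblem(active):
    """Minimize the objective with the constraints in `active` treated as equalities."""
    if active:                      # constraint x = 1 enforced
        x = 1.0
        lam = -2.0*(x - 2.0)        # from stationarity: 2(x - 2) + lam = 0
    else:                           # unconstrained minimum
        x, lam = 2.0, 0.0
    return x, lam

active = set()
while True:
    x, lam = solve_subproblem(active)
    if x - 1.0 > 1e-9:              # working set too small: constraint violated
        active.add('x<=1')
    elif lam < -1e-9:               # negative multiplier: drop the constraint
        active.discard('x<=1')
    else:                           # primal and dual feasible: KKT satisfied
        break
print('x* =', x, 'lambda* =', lam)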

For small problems, active-set methods are generally preferred, while interior-point methods are favored for large problems.

Software used

  • python 3.11.4
  • casadi 3.6.3

Consider the function given below, which has a global minimum at $(5,8)$ with a minimum value of 0.

$f(x,y)=(x-5)^2+(y-8)^2$

It's worth noting the following features of the Opti class:

  1. A modelling language for formulating nonlinear optimization problems
  2. An interface to several solvers
  3. JIT evaluation
import casadi as cs

p = cs.Opti()                      # Opti stack: a helper class for modelling NLPs
w = p.variable(2)                  # decision variables
x = w[0]
y = w[1]
obj = (x - 5)**2 + (y - 8)**2      # objective with minimum at (5, 8)
p.minimize(obj)
p.solver('ipopt')                  # use the Ipopt interior-point solver
sol = p.solve()
print('x*: ', sol.value(x), 'y*: ', sol.value(y))
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.11, running with linear solver MUMPS 5.4.1.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        2

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  8.9000000e+01 0.00e+00 1.60e+01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  0.0000000e+00 0.00e+00 0.00e+00  -1.0 8.00e+00    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 1

                                   (scaled)                 (unscaled)
Objective...............:   0.0000000000000000e+00    0.0000000000000000e+00
Dual infeasibility......:   0.0000000000000000e+00    0.0000000000000000e+00
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   0.0000000000000000e+00    0.0000000000000000e+00


Number of objective function evaluations             = 2
Number of objective gradient evaluations             = 2
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 1
Total seconds in IPOPT                               = 0.023

EXIT: Optimal Solution Found.
      solver  :   t_proc      (avg)   t_wall      (avg)    n_eval
       nlp_f  |        0 (       0)  10.00us (  5.00us)         2
  nlp_grad_f  |        0 (       0) 290.00us ( 96.67us)         3
  nlp_hess_l  |        0 (       0)   6.00us (  6.00us)         1
       total  |  40.00ms ( 40.00ms)  31.65ms ( 31.65ms)         1
x*:  5.0 y*:  8.0
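The same problem can be extended with constraints. The sketch below (an assumed extension, not part of the run above) adds an equality and an inequality constraint through Opti's subject_to method and re-solves:

import casadi as cs

p = cs.Opti()
w = p.variable(2)
x, y = w[0], w[1]
p.minimize((x - 5)**2 + (y - 8)**2)
p.subject_to(x + y == 10)          # equality constraint
p.subject_to(x >= 0)               # inequality constraint
p.solver('ipopt')
sol = p.solve()
print('x*: ', sol.value(x), 'y*: ', sol.value(y))   # the minimizer shifts to (3.5, 6.5)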