Using CONOPT with AMPL
AMPL/CONOPT is an optimization system for nonlinear mathematical programs in
continuous variables. It has been developed jointly by AMPL Optimization LLC, responsible for
the AMPL/CONOPT interface, and ARKI Consulting & Development A/S, responsible for the
CONOPT solution engine. Sales are handled by AMPL Optimization LLC.
This supplement to AMPL: A Modeling Language for Mathematical Programming
describes the most important features of CONOPT for users of AMPL.
Section 1 describes the kinds of problems to which CONOPT is applicable, and section 2
explains how solver specific options can be passed from AMPL to CONOPT. Section 3 describes
the iteration log, section 4 describes the many types of termination messages that CONOPT can
produce, and section 5 discusses Function Evaluation Errors. Appendix A gives a short overview
over the CONOPT algorithm. Appendix B gives a list of the options that control the interface
between AMPL and CONOPT and Appendix C gives a list of the options that control the
CONOPT algorithm. The solve_result_num values and the corresponding text strings in the
solve_message returned by a solve statement are given in Appendix D. References are
given in Appendix E.
1 Applicability
CONOPT is designed to solve “smooth” nonlinear programs as described in chapter 13 of
AMPL: A Modeling Language for Mathematical Programming (in the following referred to as
The AMPL Book). CONOPT can accommodate smooth nonlinear functions both in the objective
and in the constraints. When no objective is specified, CONOPT seeks a point that satisfies the
given constraints and can thus be used to solve systems of nonlinear equations and/or inequalities.
Non-smooth nonlinearities will also be accepted, but CONOPT is not designed for them; the
algorithm is based on the assumption that all functions are smooth and it will in general not
produce reliable results when they are not.
CONOPT can also be used to solve linear programs, but other solvers specifically designed
for linear programs will in general be more efficient.
CONOPT does not solve integer programs as described in Chapter 15 of the AMPL Book.
When presented with an integer program, CONOPT ignores the integrality restrictions on the
variables, as indicated by a message such as
CONOPT 3.014R: ignoring integrality of 10 variables
It then solves the continuous relaxation of the resulting problem, and returns a solution in which
some of the integer variables may have fractional values. Solvers that can handle integrality
constraints can be found at the AMPL website.
Like most other nonlinear programming solvers, CONOPT will attempt to find a local
optimum, i.e., a solution that is the best in a neighborhood around it. Better solutions
may in general exist elsewhere in the solution space. Some models have the property that a local
optimum is also a global optimum, i.e., a solution that is the best among all solutions. Among
these models are convex optimization models, where the set of feasible solutions is convex and
the objective function is convex (minimization). It is the responsibility of the modeler to ensure
that the model is convex, that the model for some other reason only has one local solution, or that
the solution produced by CONOPT is an acceptable local solution.
Models with piecewise-linear terms as described in the AMPL Book (chapter 17 of the
second edition, 14 of the first) can be used with CONOPT. AMPL will automatically convert the
piecewise-linear terms into a sum of linear terms (assuming option pl_linearize has its default
value 1), send the latter to CONOPT, and convert the solution back; the conversion has the effect
of adding a variable to correspond to each linear piece. The piecewise-linear terms are
mathematically non-smooth, but AMPL’s expansion converts them into a sum of smooth terms
that satisfies the smoothness assumptions in CONOPT. If you use option pl_linearize 0,
AMPL will send the non-smooth model to CONOPT and CONOPT will attempt to solve it as if it
were smooth; the result is a high likelihood that CONOPT will get stuck at one of the non-smooth
points.
Piecewise-linear terms can still give rise to difficulties due to non-convexities. They are
only safe provided they satisfy certain convexity rules: Any piecewise-linear term in a minimized
objective must be convex, i.e., its slopes must form an increasing sequence as in:
<<-1,1,3,5; -5,-1,0,1.5,3>> x[j]
Any piecewise-linear term in a maximized objective must be concave, i.e., its slopes must form a
decreasing sequence as in:
<<1,3; 1.5,0.5,0.25>> x[j]
In constraints, any piecewise-linear term must be either convex and on the left-hand side of a ≤
constraint (or equivalently, the right-hand side of a ≥ constraint), or else concave and on the left-
hand side of a ≥ constraint (or equivalently, the right-hand side of a ≤ constraint). Piecewise-linear
programs that violate the above rules are converted using integer variables, but CONOPT will
ignore the integrality, just issue a message like the one shown above, and may put the pieces
together in the wrong order, returning a solution that does not correspond to AMPL’s definition
of the piecewise-linear term.
Piecewise-linear terms can be used to convert certain non-smooth functions into a form
more suitable for CONOPT. Examples are abs, min, and max where abs(x) can be replaced by
<<0;-1,1>>x, min(0,x) by <<0;-1,0>>x, and max(0,x) by <<0;0,1>>x.
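For illustration, here is a minimal sketch of such a replacement; the model, the set J, and the
parameter target are hypothetical, and only the piecewise-linear notation is taken from above:
set J := 1..3;
param target {J} default 5;
var x {J} >= -10, <= 10;
var dev {J};                               # deviation from the target
subject to DevDef {j in J}: dev[j] = x[j] - target[j];
# minimizes sum {j in J} abs(dev[j]) without using the non-smooth abs():
minimize TotalDev: sum {j in J} <<0; -1,1>> dev[j];
Because each piecewise-linear term is convex and appears in a minimized objective, the
convexity rules above are satisfied and AMPL's expansion produces a smooth model.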
2 Controlling CONOPT from AMPL
In many instances, you can successfully use CONOPT by simply specifying a model and
data, setting the solver option to conopt, and typing solve. For larger or more difficult
nonlinear models, however, you may need to pass specific options to CONOPT to obtain the
desired results. You will also need options to get more detailed messages and iteration output that
may help you to diagnose problems.
To give directives to CONOPT, you must first assign an appropriate character string to the
AMPL option called conopt_options. When solve invokes CONOPT, CONOPT breaks this
string into a series of individual options. Here is an example:
ampl: model pindyck.mod;
ampl: data pindyck.dat;
ampl: option solver conopt;
ampl: option conopt_options 'workmeg=2 \
ampl? maxiter=2000';
ampl: solve;
CONOPT 3.14R: workmeg=2
maxiter=2000
CONOPT 3.14R: Locally optimal; objective 1170.486285
16 iterations; evals: nf = 35, ng = 0, nc = 58, nJ = 15, nH = 2, nHv = 13
All the directives described below have the form of an identifier, an = sign, and a value. You may
store any number of concatenated options in conopt_options. The example above shows how to
type all directives in one long string, using the \ character to indicate that the string continues on
the next line. Alternatively, you can list several strings, which AMPL will automatically
concatenate:
ampl: option conopt_options 'workmeg=2'
ampl? ' maxiter=2000';
In this form, you must take care to supply the space that goes between the options; here we have
put it before maxiter. Alternatively, when specifying only a few options, you can simply put
them all on one line:
ampl: option conopt_options 'workmeg=2 maxiter=2000';
If you have specified the directives above and then want to set maxtime to 500, you might
be tempted to type:
ampl: option conopt_options
ampl? ' maxtime=500';
This will replace the previous conopt_options string, however; the other previously specified
options such as workmeg and maxiter will revert to their default values. (CONOPT supplies a
default value for every option not explicitly specified; the defaults are indicated in the discussion
below and in Appendices B and C.) To append new options to conopt_options, use this form:
ampl: option conopt_options $conopt_options
ampl? ' maxtime=500';
The $ in front of an option name denotes the current value of that option, so this statement
appends more options to the current option string. Note the space before maxtime.
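If you are unsure what will actually be sent to CONOPT, you can display the current value of
the option; continuing the example above, AMPL would show something like:
ampl: option conopt_options;
option conopt_options 'workmeg=2 maxiter=2000 maxtime=500';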
The available options can be divided into two groups: Interface options are related to the
way AMPL and CONOPT communicate with each other; they are described in Appendix B.
Algorithmic options control aspects of the CONOPT solution algorithm; they are described in
Appendix C.
3 Iteration Output
Following the AMPL conventions, CONOPT will by default display very little output. You
may get something like:
CONOPT 3.14R: Locally optimal; objective 1170.486285
16 iterations; evals: nf = 35, ng = 0, nc = 58, nJ = 15, nH = 2, nHv = 13
The first line gives an identification of the solver, a classification of the solution status, and
the final objective function value. The second line gives some statistics: number of iterations and
number of function calls of various types: nf = number of objective function evaluations, ng =
number of gradient evaluations (this particular model has a linear objective function so the
gradient is constant and is not evaluated repeatedly), nc = number of constraint evaluations, nJ =
number of evaluations of the Jacobian of the constraints, nH = number of evaluations of the
Hessian of the Lagrangian function, and nHv = number of evaluations of the Hessian of the
Lagrangian multiplied by a vector.
To get more information from CONOPT you must increase the output level, outlev. With
outlev=2 general messages from CONOPT (except the iteration log) will go to stdout and the
model above may give something like:
CONOPT 3.14R: outlev=2
The model has 112 variables and 97 constraints
with 332 Jacobian elements, 80 of which are nonlinear.
The Hessian of the Lagrangian has 32 elements on the diagonal,
48 elements below the diagonal, and 64 nonlinear variables.
** Optimal solution. Reduced gradient less than tolerance.
CONOPT time Total 0.141 seconds
of which: Function evaluations 0.000 = 0.0%
1st Derivative evaluations 0.000 = 0.0%
2nd Derivative evaluations 0.000 = 0.0%
Directional 2nd Derivative 0.000 = 0.0%
CONOPT 3.14R: Locally optimal; objective 1170.486285
16 iterations; evals: nf = 35, ng = 0, nc = 58, nJ = 15, nH = 2, nHv = 13
The time spent on the various types of function evaluations is listed. Function evaluations correspond
to nf + nc, 1st Derivative evaluations correspond to ng + nJ, 2nd Derivative evaluations correspond
to nH, and Directional 2nd Derivatives correspond to nHv. On problems that are solved quickly,
some times may be reported as zero, as in this example, due to the granularity of the timer.
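The output levels are passed like any other CONOPT directive; for example, to reproduce output
like the above for the model from section 2, you could type something like:
ampl: option conopt_options 'outlev=2';
ampl: solve;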
With outlev=3 you will also get information about the individual iterations similar to the
following:
CONOPT 3.14R: outlev=3
The model has 112 variables and 97 constraints
with 332 Jacobian elements, 80 of which are nonlinear.
The Hessian of the Lagrangian has 32 elements on the diagonal,
48 elements below the diagonal, and 64 nonlinear variables.
iter phase numinf suminf/objval nsuper rgmax step initr mx ok
0 0 2368.91092
1 0 14.8848226
2 0 1.97454016
3 0 0 0.00414943373 1 0 1
4 0 0 1.10604059e-05 1 0 1
5 0 0 1.81205579e-08 1 0 1
6 3 1150.89301 16 42 0.32 1 0 1
7 3 1165.06202 16 1e+02 0.31 1 0 1
8 3 1168.00681 16 28 0.0041 0 1
9 3 1169.17253 16 12 0.0075 0 1
10 4 1170.07584 16 11 1 0 1
11 4 1170.48623 16 1.8 1 6 0 1
12 4 1170.48628 16 0.021 1 5 0 1
13 4 1170.48628 16 0.00014 1 4 0 1
14 4 1170.48629 16 6.3e-07 12 0 1
15 4 1170.48629 16 5.2e-07 1 1 0 1
16 4 1170.48629 16 4.8e-08 1 1 0 1
** Optimal solution. Reduced gradient less than tolerance.
CONOPT time Total 0.031 seconds
of which: Function evaluations 0.000 = 0.0%
1st Derivative evaluations 0.000 = 0.0%
2nd Derivative evaluations 0.000 = 0.0%
Directional 2nd Derivative 0.000 = 0.0%
CONOPT 3.14R: Locally optimal; objective 1170.486285
16 iterations; evals: nf = 35, ng = 0, nc = 58, nJ = 15, nH = 2, nHv = 13
The ten columns in the iterations output have the following meaning (more details on the
algorithm can be found in Appendix A):
• Iter: The iteration counter. The first three lines have special meaning: iteration 0 refers to
the initial point as received from AMPL, iteration 1 refers to the point obtained after
preprocessing, and iteration 2 refers to the same point but after scaling.
• Phase: The remaining iterations are characterized by the Phase in column 2. During Phases
0, 1, and 2 the model is infeasible and an artificial “Sum of infeasibilities” objective
function is minimized. During Phases 3 and 4 the model is feasible and the actual Objective
function is optimized.
• Numinf: Number of infeasibilities. During Phases 1 and 2 Numinf indicates the number of
infeasible constraints. CONOPT adds “Artificial” variables to these constraints and
minimizes a “Sum of Infeasibility” objective function; this is similar to the standard phase-
1 procedure known from Linear Programming. During this process CONOPT will ensure
that the feasible constraints remain feasible. The distinction between Phases 1 and 2 is
related to the degree of nonlinearity: during Phase 1 CONOPT will minimize the
infeasibilities using a completely linear approximation to the model. During Phase 2 some
degree of second order information is included. Phase 1 iterations are therefore cheaper
than Phase 2 iterations. Phase 0 indicates a special cheap method for finding a feasible
solution: A promising set of basic variables is selected and the basic variables are changed
using a Newton method. Newton’s method may not converge quickly if the constraints are
very nonlinear; in this case CONOPT removes some of the more nonlinear constraints or
constraints with large infeasibilities from the process and tries to satisfy a smaller set of
constraints. The number of constraints that have been removed from the Newton process is
listed in Numinf and it will most likely grow during Phase 0.
• Suminf/Objval: During Phases 0, 1, and 2 this column will list the sum of absolute values
of the infeasibilities, i.e., the artificial objective function that is being minimized. During
Phases 3 and 4 this column shows the user’s objective function that is being optimized.
• Nsuper: Number of super-basic variables. This number measures the dimension of the
space over which CONOPT is optimizing or the degrees of freedom in the model at the
current point, taking into account the lower and upper bounds that are active. The second
order information that CONOPT uses during Phases 2 and 4 is of dimension Nsuper.
• Rgmax: The absolute value of the largest reduced gradient, taken over the super-basic
variables. The optimality conditions require the reduced gradient to be zero so this number
is a measure of the non-optimality of the current point.
• Step: The steplength for the iteration. The interpretation depends on several things: During
Phase 0 steplength 1 corresponds to a full Newton step. During iterations in which Initr (see
next) is positive, steplength 1 corresponds to full use of the solution from the LP or QP
submodel. Finally, in Phase 3 iterations with no value in the Initr column, steplength 1
corresponds to a full Quasi-Newton step.
• Initr: A number in column eight is a counter for “Inner Iterations”. If column eight is empty,
CONOPT will use steepest descent iterations during Phases 1 and 3 and Quasi-Newton
iterations during Phases 2 and 4. Otherwise CONOPT will use an iterative procedure to find
a good search direction and Initr indicates the number of iterations in this procedure:
During Phases 1 and 3, where CONOPT uses first order information only, the feasibility or
optimality problem is formulated as an LP model and CONOPT uses an approximate
solution to this model as a search direction. Similarly, during Phases 2 and 4, where second
order information is used, the feasibility or optimality problem is formulated as a QP model
and CONOPT uses an approximate solution to this model as a search direction.
• Mx is 1 if the step was limited by a variable reaching a bound and 0 otherwise.
• Ok is 1 if the step was “well behaved” and 0 if the step had to be limited because
nonlinearities of the constraints prevented CONOPT from maintaining feasibility.
In the particular log file shown above the cheap Newton-based Phase 0 procedure finds a
feasible solution after iteration 5 without introducing any artificial variables. CONOPT continues
in Phase 3 which means that the iterations are based on linear information only. From iteration 10
CONOPT switches to Phase 4 which means that it takes into account second order information. In
several of the iterations there is an entry in the Initr column, indicating that CONOPT performs
several inner iterations trying to get a good search direction from a QP-based approximation to
the overall model.
4 CONOPT Termination Messages
CONOPT may terminate in a number of ways. Standard AMPL messages such as
CONOPT 3.14R: Locally optimal; objective 1170.486285
have only a few classifications that are sufficient for most purposes (see Appendix D for a list of
the solve_message text strings and the associated numerical solve_result_num values). More
detailed messages are available from CONOPT and this section will show most of them and
explain their meaning. Note that the detailed messages are only available if you use outlev=2 or
3. The first four messages are used for optimal solutions:
** Optimal solution. There are no superbasic variables.
The solution is a locally optimal corner solution. The solution is determined by constraints only,
and it is usually very accurate.
** Optimal solution. Reduced gradient less than tolerance.
The solution is a locally optimal interior solution. The largest component of the reduced gradient
is less than the optimality tolerance rtredg with default value around 1.e-7. The value of the
objective function is very accurate while the values of the variables are less accurate due to a flat
objective function in the interior of the feasible area.
** Optimal solution. The error on the optimal objective function
value estimated from the reduced gradient and the estimated
Hessian is less than the minimal tolerance on the objective.
The solution is a locally optimal interior solution. The largest component of the reduced gradient
is larger than the optimality tolerance rtredg. However, when the reduced gradient is scaled with
information from the estimated Hessian of the reduced objective function the solution seems
optimal. The objective must be large or the reduced objective must have large second derivatives
for this message to appear, so it is advisable to scale the model.
In this note, scaling the model means changing the units of variables and equations.
You should try to select units for the variables such that the solution values of variables that are
not at an active bound are expected to be somewhere between 0.01 and 100. Similarly, you should
try to select units for the equations such that the activities described by the equations are
somewhere between 0.01 and 100. These two scaling rules should as a by-product result in first
derivatives in the model that are not too far from 1.
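As a small, hypothetical illustration of these rules, a cash balance that is naturally measured in
dollars (values around 1e7) can instead be measured in millions of dollars, so that the variable,
the constraint activity, and the first derivatives all stay close to 1-10:
param revenue := 2.7e7;    # dollars
param cost    := 1.6e7;    # dollars
# unscaled:  var cash >= 0;  subject to Balance: cash = revenue - cost;
var cash_mln >= 0;         # the same quantity in millions of dollars
subject to Balance: cash_mln = (revenue - cost)/1e6;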
The last termination message for an optimal solution is:
** Optimal solution. Convergence too slow. The change in
objective has been less than xx.xx for xx consecutive
iterations.
CONOPT stopped with a solution that seems optimal. The solution process is stopped because of
slow progress. The largest component of the reduced gradient is greater than the optimality
tolerance rtredg, but less than rtredg multiplied by a scaling factor equal to the largest Jacobian
element divided by 100. Again, the model must have large derivatives for this message to appear,
so it is advisable to scale the model.
The four messages above all exist in versions where “Optimal” is replaced by
“Infeasible”. The infeasible messages indicate that a Sum of Infeasibility objective function is
locally minimal, but positive. If the model is convex it does not have a feasible solution; if the
model is non-convex it may have a feasible solution in a different region and you should try to
start CONOPT from a different starting point.
** Feasible solution. Convergence too slow. The change in
objective has been less than xx.xx for xx consecutive
iterations.
** Feasible solution. The tolerances are minimal and
there is no change in objective although the reduced
gradient is greater than the tolerance.
The two messages above indicate that CONOPT stopped with a feasible solution. In the first case the
solution process is very slow and in the second there is no progress at all. However, the optimality
criteria have not been satisfied. The problem can be caused by discontinuities if the model has
discontinuous functions such as abs, min, or max. In this case you should consider alternative,
smooth formulations. The problem can also be caused by a poorly scaled model. Finally, it can be
caused by stalling as described in section A14 in Appendix A. The two messages also exist in
versions where “Feasible” is replaced by “Infeasible”. These versions indicate that CONOPT cannot
make progress towards feasibility, but the Sum of Infeasibility objective function does not have a
well-defined local minimum.
Variable <var>: The variable has reached 'infinity'
** Unbounded solution. A variable has reached 'Infinity'.
Largest legal value (Rtmaxv) is xx.xx
CONOPT considers a solution to be unbounded if a variable exceeds the indicated value. Check
whether the solution appears unbounded or the problem is caused by the scaling of the unbounded
variable <var> mentioned in the first line of the message. If the model seems correct you are
advised to scale it. There is also a lazy solution: you can increase the largest legal value, rtmaxv,
as mentioned in the section on options. However, you will pay through reduced reliability or
increased solution times. Unlike LP models, where an unbounded model is recognized by an
unbounded ray and the iterations are stopped at a solution far from “infinity”, CONOPT will have
to follow a curved path all the way to “infinity” and it will actually return a feasible solution with
very large values for some of the variables. On the way to “infinity” terms in the model may
become very large and CONOPT may run into numerical problems and may not be able to
maintain feasibility.
The variable names will by default appear as “_svar[xx]” where xx is the internal index in
CONOPT. You can request AMPL to pass the proper names to CONOPT by defining
option conopt_auxfiles cr;
before the solve statement in your AMPL program and CONOPT will then use these names in
the messages.
The message above exists in a version where “Unbounded” is replaced by “Infeasible”.
You may also see a message like
Variable <var>: Free variable becomes too large
** Infeasible solution. A free variable exceeds the allowable
range. Current value is 1.10E+10 and current upper bound
(Rtmaxv) is 1.00E+10
These messages indicate that some variable became very large before a feasible solution was
found. You should again check whether the problem is caused by the scaling of the unbounded
variable <var> mentioned in the first line of the message. If the model seems correct you should
scale it. You can also experiment with alternative starting points; the path towards a feasible point
depends heavily on the initial point.
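One way to try an alternative starting point is to assign new initial values with let statements
before solving again; this is only a sketch, and the variable names below are hypothetical:
ampl: let {i in I} x[i] := 1;     # move away from the default start of 0
ampl: let y := 100;               # keep the free variable well inside its bounds
ampl: solve;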
CONOPT may also stop on various resource limits:
** The time limit has been reached.
The time limit defined by option maxftime (default value 999999 seconds) has been reached.
** The iteration limit has been reached.
The iteration limit defined by option iterlim (default value 1000000 iterations) has been
reached.
** Domain errors in nonlinear functions.
Check bounds on variables.
The number of function evaluation errors has reached the limit defined by option errlim. See
section 5 for more details on “Function Evaluation Errors”.
The next two messages appear if a derivative (Jacobian element) is very large, either in the
initial point or in a later intermediate point.
** An initial derivative is too large (larger than x.xExx)
Scale the variables and/or equations or add bounds.
variable <var> appearing in constraint <equ>:
Initial Jacobian element too large = xx.xx
and
** A derivative is too large (larger than x.xExx).
Scale the variables and/or equations or add bounds.
variable <var> appearing in constraint <equ>:
Jacobian element too large = xx.xx
A large derivative means that the constraint function changes very rapidly with changes in the
variable and it will most likely create numerical problems for many parts of the optimization
algorithm. Instead of attempting to solve a model that most likely will fail, CONOPT will stop
and you are advised to adjust the model if at all possible. The relevant variable and equation
pair(s) will show you where to look.
If the offending derivative is associated with a log(x) or 1/x term you may try to increase
the lower bound on x. If the offending derivative is associated with an exp(x) term you must
decrease the upper bound on x. You may also try to scale the model. There is also in this case a
lazy solution: increase the limit on Jacobian elements, rtmaxj; however, you will most likely
pay through reduced reliability or longer solution times.
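A minimal sketch of such bound adjustments follows; the variables and the objective are
hypothetical, and only the advice above on bounding log and exp arguments is used:
var x >= 1e-3;     # keeps the derivative of log(x), 1/x, below 1000
var y <= 20;       # keeps the derivative of exp(y) below about 5e8
minimize Obj: x - log(x) + exp(y) - 10*y;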
In addition to the messages shown above you may see messages like
** An equation in the pre-triangular part of the model cannot be
solved because the critical variable is at a bound.
** An equation in the pre-triangular part of the model cannot be
solved because of too small pivot.
or
** An equation is inconsistent with other equations in the
pre-triangular part of the model.
These messages containing the word “Pre-triangular” are all related to infeasibilities identified by
CONOPT’s pre-processing stage and they are explained in detail in section A4 in Appendix A.
Usually, CONOPT will be able to estimate the amount of memory needed for the model
based on size statistics provided by AMPL. However, in some cases with unusual models, e.g.,
very dense models or very large models, the estimate will be too small and you must request more
memory yourself using options like workfactor=x.x or workmeg=x.x. Workfactor will
multiply the default estimate by a factor and this is usually the preferred method. Workmeg will
allocate a fixed amount of memory, measured in Mbytes. The memory related messages you will
see are similar to the following:
** Fatal Error ** Insufficient memory to continue the
optimization.
You must request more memory.
Current CONOPT space = 0.29 Mbytes
Estimated CONOPT space = 0.64 Mbytes
Minimum CONOPT space = 0.33 Mbytes
The text after “Insufficient memory to” may be different; it says something about where
CONOPT ran out of memory. If the memory problem appears during model setup you will not get
any solution value back. If the memory problem appears later during the optimization CONOPT
will usually return primal solution values. The marginal values of both equations and variables
will be zero.
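For example, to double the memory estimate you could append a workfactor directive to any
options you have already set (a sketch following the pattern from section 2):
ampl: option conopt_options $conopt_options ' workfactor=2';
ampl: solve;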
5 Function Evaluation Errors
Many of the nonlinear functions available with AMPL are not defined for all values of
their arguments. log is not defined for negative arguments, exp overflows for large arguments,
and division by zero is illegal. To avoid evaluating functions outside their domain of definition
you should add reasonable bounds on your variables. CONOPT will in return guarantee that the
nonlinear functions will never be evaluated with variables outside their bounds.
In some cases bounds are not sufficient, e.g., in the expression log(sum{i in I} x[i])
where each individual x should be allowed to become zero, but the sum must be strictly positive.
In this case you should introduce an intermediate variable bounded away from zero and an extra
equation, e.g.,
var xsum >= 0.01;
and
subject to xsumdef: xsum = sum {i in I} x[i];
and use xsum as the argument to the log function.
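Putting the pieces together, a minimal sketch could look as follows; the set I, the bounds, and the
objective are hypothetical, while xsum and xsumdef are taken from above:
set I := 1..5;
var x {I} >= 0, <= 10;
var xsum >= 0.01;                       # bounded away from zero
subject to xsumdef: xsum = sum {i in I} x[i];
maximize LogUtility: log(xsum);         # instead of log(sum {i in I} x[i])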
Whenever a nonlinear function is called outside its domain of definition, AMPL's function
evaluator will intercept the function evaluation error and prevent the system from crashing. AMPL