Saturday, November 9, 2013

LP Cutting Planes in CPLEX

Cut generation, as used in what follows, refers to generating constraints for a mathematical program "on the fly" (based on intermediate solutions), rather than adding all relevant constraints at the outset of the problem. It is typically used when there is an astronomical number of possible constraints, most of which will turn out not to be very useful.

Someone recently asked me which is faster in CPLEX when solving a linear program with cut generation:
  • solving the LP, adding cuts, solving the LP again ...; or
  • using a callback to add the cuts on the fly.
My initial guess (which I think proved correct) was that the first method is likely to be faster, or at least not notably slower, than the second method, but I suppose it might be possible for the second method to win if problems are large. (I'm not at all confident about that last point.)

What makes the "solve - cut - repeat" loop competitive is that CPLEX can, and with default settings does, "warm start" the solution of a linear program based on the most recent previous solution of the model. The model can be changed between calls to the solver, so long as CPLEX recognizes that it is the same problem. Generally, warm starting after adding (or removing) constraints expedites the solution process (relative to starting from scratch) if the number of changes is fairly modest.
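To make the first method concrete, here is a stripped-down sketch of the loop in the Java API. This is purely illustrative (not the code from my tests), and the helper findViolatedCuts() is hypothetical and problem-specific:

import ilog.concert.IloException;
import ilog.concert.IloRange;
import ilog.cplex.IloCplex;
import java.util.ArrayList;
import java.util.List;

public class SolveCutRepeat {

  // Hypothetical, problem-specific helper: evaluate each g_i at the current
  // LP solution and return a gradient cut for every violated constraint
  // (an empty list means the solution satisfies g(x) <= h).
  static List<IloRange> findViolatedCuts(IloCplex cplex) throws IloException {
    List<IloRange> cuts = new ArrayList<IloRange>();
    // ... build cuts with cplex.le(...) from values obtained via cplex.getValue(...) ...
    return cuts;
  }

  // The "solve - cut - repeat" loop.
  static void solveCutRepeat(IloCplex cplex) throws IloException {
    while (cplex.solve()) {
      List<IloRange> cuts = findViolatedCuts(cplex);
      if (cuts.isEmpty()) {
        break;                          // LP optimum satisfies g(x) <= h; done
      }
      for (IloRange cut : cuts) {
        cplex.add(cut);                 // modify the model ...
      }
      // ... and loop: with default settings, the next solve() warm starts
      // from the final basis of the previous solve rather than from scratch.
    }
  }
}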

The second method involves attaching a lazy constraint callback to the model. Each time a potential new optimum for the LP is found, the solution will be passed to the callback, which can either reject the solution by adding one or more constraints (cuts), at least one of which renders the candidate solution infeasible, or accept the solution by refraining from adding any new cuts. The original LP now becomes the root node "relaxation" of the problem. CPLEX will never attempt to branch, since there are no integer variables in the LP (and thus none that will take on fractional values), so once the root node is optimized, we're done.
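In the Java API, a lazy constraint callback for this purpose might look roughly like the following sketch. It is purely illustrative: for concreteness it enforces a single convex constraint $x_1^2 + x_2 \le 1$ (the nonlinear constraint of the first test problem below), and the variables x1 and x2 are assumed to already exist in the model.

import ilog.concert.IloException;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

class ConvexCutCallback extends IloCplex.LazyConstraintCallback {
  private final IloCplex cplex;       // used only to build cut expressions
  private final IloNumVar x1, x2;
  private static final double TOL = 1e-6;

  ConvexCutCallback(IloCplex cplex, IloNumVar x1, IloNumVar x2) {
    this.cplex = cplex;
    this.x1 = x1;
    this.x2 = x2;
  }

  @Override
  protected void main() throws IloException {
    double v1 = getValue(x1);           // candidate solution values
    double v2 = getValue(x2);
    if (v1 * v1 + v2 > 1.0 + TOL) {
      // Candidate violates x1^2 + x2 <= 1: reject it with a gradient cut.
      add(cplex.le(cplex.sum(cplex.prod(2.0 * v1, x1), x2), 1.0 + v1 * v1));
    }
    // Adding nothing accepts the candidate solution.
  }
}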

A bit of trickery is required for the second method, since the lazy constraint callback is available only for integer and mixed integer programs. We can spoof CPLEX by adding a single integer or binary variable to the model, but not including it in any constraints or in the objective function. This works in the various programming APIs, where there is a method to add an object (in this case our dummy variable) directly to the model. In the interactive optimizer, it may be necessary to add a vacuous constraint as well. For example, if we add a dummy binary variable $z$, we can use the constraint $z \le 1$ to force $z$ into the model (and convince CPLEX the model is an integer program) without affecting the optimal solution.
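Putting the pieces together in Java might look something like this (again just a sketch, using the first test problem below minus its nonlinear constraint, plus the hypothetical callback sketched above):

import ilog.concert.IloException;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

public class LazyCutDemo {
  public static void main(String[] args) throws IloException {
    IloCplex cplex = new IloCplex();
    IloNumVar x1 = cplex.numVar(0.0, Double.MAX_VALUE, "x1");
    IloNumVar x2 = cplex.numVar(0.0, Double.MAX_VALUE, "x2");
    cplex.addMaximize(cplex.sum(cplex.prod(3.0, x1), x2));
    cplex.addLe(cplex.sum(x1, x2), 3.0);   // the linear constraint
    // The dummy binary variable: appears in no constraint and not in the
    // objective, but convinces CPLEX that the model is an integer program.
    cplex.add(cplex.boolVar("dummy"));
    // Attach the callback and solve once; CPLEX invokes the callback at each
    // candidate solution and accumulates the cuts as lazy constraints.
    cplex.use(new ConvexCutCallback(cplex, x1, x2));
    if (cplex.solve()) {
      System.out.println("objective = " + cplex.getObjValue());
    }
    cplex.end();
  }
}

With the callback attached, a single call to solve() takes the place of the explicit loop from the first method.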

I tested this with a couple of small convex optimization problems, using a rather old school cutting plane method. The problems take the form\[ \begin{array}{cccc} \mathrm{opt} & c'x\\ \mathrm{s.t.} & Ax & \le & b\\ & g(x) & \le & h\\ & x & \in & \Re^n_+ \end{array} \]where $g:\Re^n_+\rightarrow\Re^m$ is convex and differentiable and "opt" denotes either maximization or minimization. The initial linear program omits the constraint(s) $g(x)\le h$. When a candidate solution $\hat{x}$ is obtained, we plug it into each $g_i()$. If $g_i(\hat{x})\gt h_i$ by more than some rounding tolerance, we add the constraint\[\nabla g_i(\hat{x})x \le h_i-g_i(\hat{x}) + \nabla g_i(\hat{x})\hat{x}.\]If multiple constraints are violated, we can add just one cut or (as I did in my tests) simultaneously add a cut for each violated constraint.
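For instance, if the nonlinear constraint is $g(x)=x_{1}^{2}+x_{2}\le 1$ (as in the first test problem below) and the candidate solution is $\hat{x}$, then $\nabla g(\hat{x})=(2\hat{x}_{1},\,1)$ and the cut reduces to\[ 2\hat{x}_{1}x_{1}+x_{2}\le 1-(\hat{x}_{1}^{2}+\hat{x}_{2})+(2\hat{x}_{1}^{2}+\hat{x}_{2})=1+\hat{x}_{1}^{2}. \]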

I wrote a test program in Java (for which the source code is available), using two test problems:\[ \begin{array}{cccc} \mathrm{max} & 3x_{1}+x_{2}\\ \mathrm{s.t.} & x_{1}+x_{2} & \le & 3\\ & x_{1}^{2}+x_{2} & \le & 1\\ & x & \ge & 0 \end{array} \]and\[ \begin{array}{cccc} \mathrm{max} & 3x_{1}+x_{2}+4x_{3}\\ \mathrm{s.t.} & x_{1} & \le & 5\\ & x_{2} & \le & 5\\ & x_{3} & \le & 5\\ & x_{1}^{2}-x_{2} & \le & 3\\ & -2x_{2}+x_{3}^{4} & \le & 0\\ & x & \ge & 0. \end{array} \]My code recorded execution times for the CPLEX 12.5.1 solver using both methods, excluding time spent setting up the models, time spent reporting the results, and time spent generating the cuts. (Time spent adding the cuts to the models was included.)

The problems are both very small, so their execution times (which varied significantly during repeated runs of the same models) may not be indicative of what happens on more difficult problems. On the first problem, the first method usually beat the second method by a small margin (typically about 6 milliseconds v. 7 milliseconds), but I did see an occasional win for the second method (by about 1 ms.). On the second problem, the first method consistently beat the second method, usually by one or two milliseconds, occasionally by a bit more. The number of cuts generated (five in the first problem, eleven in the second) did not vary between methods, nor did the optimal solution reported.

4 comments:

  1. Dear Prof. Rubin,

    I have a question regarding the first method above ("solve - cut - repeat").
    How do I calculate the total time?

    I tried the "solve - cut - repeat" concept on my MIP, but I had difficulty getting the running time, because when I repeat the solving process the optimization seems to start from the beginning again, so each iteration adds time.

    I used the MIPINFOCALLBACK to stop the solving process when it found a new incumbent solution, added/removed the constraint, and then re-solved.

    I compared this method with plain CPLEX and the time gap is large (plain CPLEX is far better). I used the same parameter settings on both.

    Do you have any suggestion regarding this issue?

    best regards,

    Replies
    1. To calculate total time for the first approach, you can use IloCplex.getCplexTime() to get a time stamp at the start and end of each pass through the loop, or just once before starting the loop and once after exiting the loop. I don't know if that's any more accurate than doing the same with Java's System.currentTimeMillis() method.
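      For example (just a sketch):

      double start = cplex.getCplexTime();             // stamp before the loop
      // ... the solve - cut - repeat loop goes here ...
      double elapsed = cplex.getCplexTime() - start;   // wall-clock seconds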

      Please note that my second approach in the post used a single call to the solver (with a lazy constraint callback). If you use an info callback to stop the (MIP) solver, add a constraint, then restart the MIP solver, I think you force CPLEX to start with a new search tree (because you have modified the problem that generated the old tree). With an LP, you can make modifications and CPLEX can repair the final basis of the starting run to get a basis for the new run, usually fairly quickly. Warm-starting a MIP that has changed is different, since the changes invalidate the previous search tree. So I would expect solve MIP - stop - modify - restart MIP to be *much* slower, relative to a single MIP solve, than solve LP - modify - resolve LP is relative to a single LP solve.

  2. Hello,
    I'm working with the Benders method, and what I'm doing now is single constraint generation.
    The problem is that at each iteration the solver deletes the previously added constraint and adds a new one.
    Do you know how I can fix it?
    Thank you

    Replies
    1. Sounds like a bug in your code. I suggest asking on the CPLEX help forum or on OR Exchange. There are links in the right margin of this page under "Places to Ask Questions"; for the CPLEX forum, you will need to drill down a level or two.

