Draft:Crossover (linear programming)
Procedure for converting an interior-point solution of a linear program into an optimal basic feasible solution
From Wikipedia, the free encyclopedia
Crossover is a procedure in linear programming used to convert a solution obtained by an interior-point method into an optimal basic feasible solution (BFS), also called a vertex solution.[1][2] It is commonly used in barrier-based and other interior-point solvers for linear programs, because such methods typically return a solution in the relative interior of the optimal face rather than a basis-associated vertex.[1][3]
Crossover combines features of interior-point and simplex methods: the interior-point phase is used to approach optimality efficiently, while the crossover phase identifies a basis and produces a vertex solution suitable for post-optimal analysis, warm starts, and downstream algorithms that require a basis.[3][1]
Motivation
For a linear program in standard form, an optimal solution need not be basic, even though an optimal basic feasible solution exists whenever an optimum exists. Interior-point methods usually return primal-dual solutions that are approximately complementary and approximately feasible, but not attached to an explicit basis.[4]
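The distinction can be seen in a small standard-form example (the numbers below are illustrative, not taken from the cited sources): when the optimal face is an edge, an interior-point method tends to return its midpoint, which has more positive components than any basis allows.

```python
import numpy as np

# min -x1 - x2  s.t.  x1 + x2 + s = 1,  x1, x2, s >= 0  (one row, so m = 1)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])

x_interior = np.array([0.5, 0.5, 0.0])  # center of the optimal edge
x_vertex = np.array([1.0, 0.0, 0.0])    # a basic feasible solution

# both are optimal (objective -1), but only x_vertex is basic:
# a BFS has at most m = 1 positive components
print(c @ x_interior, c @ x_vertex)   # -1.0 -1.0
print(np.count_nonzero(x_interior), np.count_nonzero(x_vertex))   # 2 1
```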
A basic feasible solution is often desirable in practice for several reasons:
- it provides an explicit basis for the constraint matrix;
- it can be used to warm-start the simplex method;
- it is useful for sensitivity analysis and postsolve procedures;
- in integer programming, an optimal BFS of the LP relaxation may be preferable because it is a vertex of the relaxation polyhedron.[3][1]
Overview
In practical LP solvers, crossover is usually performed after an interior-point solve and before termination.[1][3] A common high-level decomposition is:
- obtaining a near-optimal primal-dual solution by an interior-point method;
- identifying a candidate basis from that solution;
- performing pivoting steps to obtain a primal-feasible or dual-feasible basis;
- reoptimizing, typically with primal or dual simplex, until an optimal BFS is reached.[3]
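As a rough illustration, the first two steps above can be sketched numerically (a toy, assuming the candidate basis happens to be nonsingular, primal feasible, and optimal; real solvers follow this with pivoting and safeguards):

```python
import numpy as np

def crossover_sketch(A, b, x_ip):
    """Toy basis identification: take the m components of the
    interior-point iterate x_ip with the largest values as the
    candidate basis, then recompute x from that basis alone."""
    m, n = A.shape
    basis = np.argsort(-x_ip)[:m].tolist()      # m largest entries of x_ip
    x = np.zeros(n)
    x[basis] = np.linalg.solve(A[:, basis], b)  # solve B x_B = b
    return sorted(basis), x

# min -x1 - x2  s.t.  x1 + x2 + s = 1; the interior-point method returns
# the midpoint of the optimal edge, crossover recovers a vertex
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])
basis, x = crossover_sketch(A, b, np.array([0.5, 0.5, 0.0]))
print(basis, x)   # [0] [1. 0. 0.]  -- same objective value, now a vertex
```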
Some commercial solvers use related terminology such as push and cleanup for parts of this process.[3]
Theoretical background
The theoretical study of crossover is closely related to the problem of recovering an optimal basis from optimal primal and dual solutions.
In 1991, Nimrod Megiddo showed that, given optimal primal and dual solutions, one can find a basis that is optimal for both problems in strongly polynomial time.[5] This result is often cited as a foundational theoretical basis for crossover methods.[3]
Also in 1991, Sanjay Mehrotra proposed an approach for obtaining a vertex solution while using an interior-point method for linear programming.[6]
In 1994, Robert E. Bixby and Michael J. Saltzman described a practical method for recovering an optimal LP basis from an interior-point solution, linking a high-performance interior-point code with a simplex code.[7]
In 1996, Erling D. Andersen and Yinyu Ye proposed a method combining interior-point and pivoting algorithms by constructing a related linear program for which the available primal-dual pair is complementary, enabling a pivot-based recovery of an optimal basis.[8]
Practical approaches
Practical crossover algorithms differ across solvers and implementations, but usually rely on one or more of the following ideas:
Basis identification
A candidate basis is inferred from the approximate primal-dual solution returned by the interior-point method.[3] This may be done by examining which primal variables are close to their bounds, which dual slacks are small, or which constraints appear active.[7][8]
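A common heuristic compares, for each variable, the primal value against the dual slack, exploiting approximate complementarity (the thresholds and tie-breaking below are illustrative; solver implementations are more refined):

```python
import numpy as np

def classify(x, s, tol=1e-6):
    """Partition variable indices using approximate complementarity:
    x_j large, s_j tiny -> candidate basic (away from its bound);
    s_j large, x_j tiny -> candidate nonbasic (at its bound);
    everything else     -> ambiguous/degenerate, needs tie-breaking."""
    n = len(x)
    basic = [j for j in range(n) if x[j] > tol >= s[j]]
    nonbasic = [j for j in range(n) if s[j] > tol >= x[j]]
    ambiguous = [j for j in range(n) if j not in basic and j not in nonbasic]
    return basic, nonbasic, ambiguous

x = np.array([0.7, 1e-9, 1e-8])   # primal values from the interior point
s = np.array([1e-9, 0.3, 1e-8])   # corresponding dual slacks
print(classify(x, s))             # ([0], [1], [2])
```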
Push or pivot phases
After a candidate basis is chosen, pivot operations similar to those used in simplex methods are applied in order to move variables to bounds while maintaining, or restoring, feasibility and optimality as much as possible.[3][5]
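One push step can be pictured as a simplex-style ratio test: a variable that should be nonbasic is driven toward its bound while the current basic variables absorb the change (a minimal numpy sketch, assuming a lower bound of zero on every variable):

```python
import numpy as np

def push_to_bound(A, basis, x, j):
    """Drive variable j (assumed lower bound 0) toward zero, adjusting the
    basic variables to keep A x = b; stop early if a basic variable would
    go negative, at which point a real solver would pivot."""
    B = A[:, basis]
    d = np.linalg.solve(B, A[:, j])   # change in x_B per unit decrease of x_j
    t = x[j]                          # try to push x_j all the way to 0
    neg = d < 0
    if neg.any():                     # ratio test: keep x_B >= 0
        t = min(t, float(np.min(-x[basis][neg] / d[neg])))
    x = x.copy()
    x[basis] = x[basis] + t * d
    x[j] -= t
    return x

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = np.array([0.4, 0.6, 0.0])         # feasible but not a vertex
x_new = push_to_bound(A, [0], x, 1)   # push the second variable to its bound
print(x_new)                          # [1. 0. 0.]  (still satisfies A x = b)
```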
Cleanup or reoptimization
Because the basis obtained from approximate interior-point data may still be infeasible, or only nearly optimal in finite-precision arithmetic, a final primal or dual simplex phase is often used to complete the recovery of an optimal BFS.[7][3]
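This final phase can be any standard simplex implementation started from the recovered basis; a minimal primal simplex loop (Dantzig pricing, no anti-cycling safeguards, for illustration only) might look like:

```python
import numpy as np

def cleanup_simplex(A, b, c, basis, tol=1e-9, max_iter=1000):
    """Minimal primal simplex reoptimization from a primal-feasible basis."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        y = np.linalg.solve(B.T, c[basis])   # dual prices
        red = c - A.T @ y                    # reduced costs
        red[basis] = 0.0
        j = int(np.argmin(red))
        if red[j] >= -tol:                   # dual feasible: basis is optimal
            x = np.zeros(n)
            x[basis] = x_B
            return basis, x
        d = np.linalg.solve(B, A[:, j])      # entering column direction
        if np.all(d <= tol):
            raise ValueError("LP is unbounded")
        ratios = np.where(d > tol, x_B / np.where(d > tol, d, 1.0), np.inf)
        basis[int(np.argmin(ratios))] = j    # pivot: leaving row, entering col
    raise RuntimeError("iteration limit reached")

# finish from the all-slack basis of  min -x1 - x2  s.t.  x1 + x2 + s = 1
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])
basis, x = cleanup_simplex(A, b, c, [2])
print(basis, x)   # [0] [1. 0. 0.]
```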
Numerical issues
In exact arithmetic, crossover can be described cleanly in terms of complementarity and optimal bases. In practice, however, interior-point solvers produce only approximately feasible and approximately complementary solutions, so crossover must contend with roundoff error, degeneracy, and ambiguity in active-set identification.[3][8]
These issues are one reason why crossover can represent a significant fraction of the total running time of a barrier solve on some instances.[3][2]
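The identification difficulty can be seen on the central path itself, where every product x_j s_j equals the barrier parameter mu: for a nondegenerate variable one factor is O(1) and the other O(mu), but a degenerate variable can have both factors near sqrt(mu), so no single magnitude threshold separates active from inactive indices (illustrative numbers):

```python
import numpy as np

mu = 1e-8                        # barrier parameter at termination
# on the central path, x_j * s_j = mu for every j
x = np.array([1.0, mu, np.sqrt(mu)])
s = mu / x                       # approximately [mu, 1.0, sqrt(mu)]
print(x * s)                     # every product equals mu = 1e-8
# variable 0 is clearly basic (x >> s), variable 1 clearly nonbasic
# (s >> x), but variable 2 has x = s = 1e-4 and cannot be classified
# by magnitude alone
```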
Beyond classical barrier methods
Although crossover is most commonly associated with barrier or primal-dual interior-point methods, related basis-recovery ideas have also been studied in other contexts, including methods that exploit symmetry or reduced formulations, where a non-vertex optimal solution must later be converted into a vertex solution of the original problem.[2]
More recent work has also studied crossover-like procedures for first-order methods and large-scale LP algorithms that do not naturally return a basis.[3]
