Sleipnir
A linearity-exploiting sparse nonlinear constrained optimization problem solver that uses the interior-point method.
Sparsity and Linearity-Exploiting Interior-Point solver - Now Internally Readable
Named after Odin's eight-legged horse from Norse mythology, Sleipnir is a linearity-exploiting sparse nonlinear constrained optimization problem solver that uses the interior-point method.
Sleipnir's internals are intended to be readable by those who aren't domain experts, with links to explanatory material for its algorithms.
Flywheel and cart-pole scalability results: flywheel-scalability-results-casadi.csv, flywheel-scalability-results-sleipnir.csv, cart-pole-scalability-results-casadi.csv, cart-pole-scalability-results-sleipnir.csv.
Generated by tools/generate-scalability-results.sh from benchmarks/scalability source.
The following third-party software was used in the benchmarks:
Ipopt uses the MUMPS linear solver by default because MUMPS has free licensing. Commercial linear solvers may be much faster.
See benchmark details for more.
Sleipnir requires a relatively recent operating system and C++ runtime because it uses std::print().
On Linux: sudo apt install g++-14
On macOS: xcode-select --install
See the build instructions.
See the C++ API docs and Python API docs.
See the examples, C++ optimization unit tests, and Python optimization unit tests.
C++ compiler: sudo apt install g++-14 (Linux) or xcode-select --install (macOS)
CMake: sudo apt install cmake (Linux) or brew install cmake (macOS)
Python: sudo apt install python (Linux) or brew install python (macOS)
Library dependencies which aren't installed locally will be automatically downloaded and built by CMake.
The benchmark executables require CasADi to be installed locally.
On Windows, open a Developer PowerShell. On Linux or macOS, open a Bash shell.
The following build types can be specified via -DCMAKE_BUILD_TYPE during CMake configure:
On Windows, open a Developer PowerShell. On Linux or macOS, open a Bash shell.
Passing the --enable-diagnostics flag to the test executable enables solver diagnostic prints.
Some test problems generate CSV files containing their solutions. These can be plotted with tools/plot_test_problem_solutions.py.
Benchmark projects are in the benchmarks folder. To compile and run them, run the following in the repository root:
See the contents of ./tools/generate-scalability-results.sh for how to run specific benchmarks.
During problem setup, equality and inequality constraints are encoded as different types, so the appropriate setup behavior can be selected at compile time via operator overloads.
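As a rough sketch of the idea (the type and function names below are illustrative, not Sleipnir's actual API), the comparison operators can return distinct constraint types so that overload resolution picks the right handling at compile time:

    // Illustrative sketch only; Sleipnir's real types and method names may differ.
    #include <utility>
    #include <vector>

    struct Variable {};  // stand-in for an autodiff expression node

    // Distinct constraint types produced by the comparison operators.
    struct EqualityConstraints { std::vector<Variable> constraints; };
    struct InequalityConstraints { std::vector<Variable> constraints; };

    // x == y builds an equality constraint from the expression lhs - rhs...
    inline EqualityConstraints operator==(const Variable& lhs, const Variable& rhs) {
      (void)lhs;
      (void)rhs;
      return EqualityConstraints{{Variable{}}};
    }

    // ...while x <= y builds an inequality constraint from rhs - lhs.
    inline InequalityConstraints operator<=(const Variable& lhs, const Variable& rhs) {
      (void)lhs;
      (void)rhs;
      return InequalityConstraints{{Variable{}}};
    }

    struct Problem {
      std::vector<Variable> equality_constraints;
      std::vector<Variable> inequality_constraints;

      // Overload resolution selects the right storage at compile time.
      void SubjectTo(EqualityConstraints c) {
        for (auto& v : c.constraints) {
          equality_constraints.push_back(std::move(v));
        }
      }
      void SubjectTo(InequalityConstraints c) {
        for (auto& v : c.constraints) {
          inequality_constraints.push_back(std::move(v));
        }
      }
    };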
The autodiff library automatically records the linearity of every node in the computational graph. Linear functions have constant first derivatives, and quadratic functions have constant second derivatives. The constant derivatives are computed in the initialization phase and reused for all solver iterations. Only nonlinear parts of the computational graph are recomputed during each solver iteration.
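For instance (a minimal sketch; the enum values and combination rules below are illustrative rather than Sleipnir's exact implementation), each node's type can be derived from its operands as the graph is built:

    #include <algorithm>

    // Linearity classification for a node in the computational graph.
    enum class ExpressionType { kConstant, kLinear, kQuadratic, kNonlinear };

    // A sum is as nonlinear as its most nonlinear operand.
    constexpr ExpressionType AddType(ExpressionType lhs, ExpressionType rhs) {
      return std::max(lhs, rhs);
    }

    // A product adds polynomial degrees: constant factors don't raise the degree,
    // linear * linear is quadratic, and anything beyond that is nonlinear.
    constexpr ExpressionType MulType(ExpressionType lhs, ExpressionType rhs) {
      if (lhs == ExpressionType::kConstant) {
        return rhs;
      }
      if (rhs == ExpressionType::kConstant) {
        return lhs;
      }
      if (lhs == ExpressionType::kLinear && rhs == ExpressionType::kLinear) {
        return ExpressionType::kQuadratic;
      }
      return ExpressionType::kNonlinear;
    }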
For quadratic problems, we compute the Lagrangian Hessian and constraint Jacobians once with no problem structure hints from the user.
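In sketch form (the helper names here are hypothetical, building on the illustrative enum above), the decision reduces to checking the recorded expression types:

    // Repeats the illustrative ExpressionType from the previous sketch so this
    // snippet stands alone.
    enum class ExpressionType { kConstant, kLinear, kQuadratic, kNonlinear };

    // The Hessian of an at-most-quadratic Lagrangian is constant, and the
    // Jacobian of at-most-linear constraints is constant, so both can be
    // evaluated once during initialization and reused for every iteration.
    constexpr bool HessianIsConstant(ExpressionType lagrangian_type) {
      return lagrangian_type <= ExpressionType::kQuadratic;
    }

    constexpr bool JacobianIsConstant(ExpressionType constraint_type) {
      return constraint_type <= ExpressionType::kLinear;
    }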
Eigen provides the required sparse matrix types and sparse linear solvers. It also has no required dependencies, which makes cross compilation much easier.
Expression nodes are allocated from a pool allocator, which promotes fast allocation/deallocation and good memory locality.
We could further mitigate the solver's high last-level-cache miss rate (~42% on the machine above) by splitting the expression nodes into separate arrays of fields that are commonly iterated together. We previously used a tape, which gave computational graph updates linear access patterns, but tapes are monotonic buffers with no way to reclaim storage.
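As a rough sketch of that layout change (the node fields below are hypothetical, not Sleipnir's actual data layout), the idea is to move from one array of full node structs to separate arrays grouped by access pattern:

    #include <array>
    #include <cstdint>
    #include <vector>

    // Current-style layout: each node stores hot and cold fields together, so
    // loops that only touch values or adjoints still pull cold data into cache.
    struct Node {
      double value;                 // hot: updated every forward evaluation
      double adjoint;               // hot: updated every backward pass
      std::array<int32_t, 2> args;  // colder: graph topology
      uint8_t type;                 // cold: linearity classification, set at setup
    };

    // Possible layout: fields that are iterated together live in their own
    // contiguous arrays, which keeps the hot loops cache-friendly.
    struct NodeStorage {
      std::vector<double> values;
      std::vector<double> adjoints;
      std::vector<std::array<int32_t, 2>> args;
      std::vector<uint8_t> types;
    };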