OptimLib is a lightweight C++ library of numerical optimization methods for nonlinear functions.
Features:

- Available as a single precision (`float`) or double precision (`double`) library.

A list of currently available algorithms includes:

- BFGS and L-BFGS
- Newton's method
- Gradient descent variants (basic, momentum, Adam, AdaMax, Nadam, NadaMax)
- Nonlinear Conjugate Gradient
- Nelder-Mead
- Differential Evolution (DE)
- Particle Swarm Optimization (PSO)
- Broyden's method (for root finding)
Full documentation is available online: https://optimlib.readthedocs.io/en/latest/

A PDF version of the documentation is also available.
The OptimLib API follows a relatively simple convention, with most algorithms called in the following manner:
```
algorithm_id(<initial/final values>, <objective function>, <objective function data>);
```
The inputs, in order, are:

- a vector of initial values, which is overwritten by the solution vector upon successful completion of the algorithm;
- the objective function to be minimized; and
- an optional pointer to any additional data used by the objective function.
For example, the BFGS algorithm is called using:

```cpp
bfgs(ColVec_t& init_out_vals, std::function<double (const ColVec_t& vals_inp, ColVec_t* grad_out, void* opt_data)> opt_objfn, void* opt_data);
```

where `ColVec_t` is used to represent a column vector type, e.g., `arma::vec` or `Eigen::VectorXd`.
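As a minimal sketch of this convention, the following program defines a simple sphere objective and hands it to `optim::bfgs`; the function name `sphere_fn` and the objective itself are illustrative, not part of the library:

```cpp
#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

// f(x) = x'x, with gradient 2x; grad_out is only filled in when the algorithm asks for it
double sphere_fn(const Eigen::VectorXd& vals_inp, Eigen::VectorXd* grad_out, void* opt_data)
{
    if (grad_out) {
        *grad_out = 2.0 * vals_inp;
    }

    return vals_inp.dot(vals_inp);
}

int main()
{
    Eigen::VectorXd x = Eigen::VectorXd::Ones(3); // initial values, overwritten by the solution

    bool success = optim::bfgs(x, sphere_fn, nullptr); // nullptr: no extra objective function data

    return success ? 0 : 1;
}
```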
OptimLib is available as a compiled shared library, or as a header-only library, for Unix-alike systems only (e.g., popular Linux-based distros, as well as macOS). Use of this library with Windows-based systems, with or without MSVC, is not supported.
OptimLib requires either the Armadillo or Eigen C++ linear algebra libraries. (Note that Eigen version 3.4.0 requires a C++14-compatible compiler.)
Before including the header files, define one of the following:

```cpp
#define OPTIM_ENABLE_ARMA_WRAPPERS
#define OPTIM_ENABLE_EIGEN_WRAPPERS
```
Example:

```cpp
#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"
```

Installation Method 1: Shared Library
The library can be installed on Unix-alike systems via the standard `./configure && make` method.
First clone the library and any necessary submodules:
```bash
# clone optim into the current directory
git clone https://github.com/kthohr/optim ./optim

# change directory
cd ./optim

# clone necessary submodules
git submodule update --init
```
Set (one) of the following environment variables before running `configure`:

```bash
export ARMA_INCLUDE_PATH=/path/to/armadillo
export EIGEN_INCLUDE_PATH=/path/to/eigen
```
Finally:

```bash
# build and install with Eigen
./configure -i "/usr/local" -l eigen -p
make
make install
```
The final command will install OptimLib into `/usr/local`.
Configuration options (see `./configure -h`):

Primary:

- `-h` print help
- `-i` installation path; default: the build directory
- `-f` floating-point precision mode; default: `double`
- `-l` specify the choice of linear algebra library; choose `arma` or `eigen`
- `-m` specify the BLAS and LAPACK libraries to link with; for example, `-m "-lopenblas"` or `-m "-framework Accelerate"`
- `-o` compiler optimization options; defaults to `-O3 -march=native -ffp-contract=fast -flto -DARMA_NO_DEBUG`
- `-p` enable OpenMP parallelization features (recommended)

Secondary:

- `-c` a coverage build (used with Codecov)
- `-d` a 'development' build
- `-g` a debugging build (optimization flags set to `-O0 -g`)

Special:

- `--header-only-version` generate a header-only version of OptimLib (see below)

Installation Method 2: Header-Only Library

OptimLib is also available as a header-only library (i.e., without the need to compile a shared library). Simply run `configure` with the `--header-only-version` option:
```bash
./configure --header-only-version
```
This will create a new directory, `header_only_version`, containing a copy of OptimLib modified to work on an inline basis. With this header-only version, simply include the header files (`#include "optim.hpp"`) and set the include path to the `header_only_version` directory (e.g., `-I/path/to/optimlib/header_only_version`).
To use OptimLib with an R package, first generate a header-only version of the library (see above). Then simply add one of the following compiler definitions before including the OptimLib files.

For RcppArmadillo:

```cpp
#define OPTIM_USE_RCPP_ARMADILLO
#include "optim.hpp"
```

For RcppEigen:

```cpp
#define OPTIM_USE_RCPP_EIGEN
#include "optim.hpp"
```
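As a rough sketch of how this fits into an Rcpp source file (the exported function and objective here are hypothetical, and this assumes that defining `OPTIM_USE_RCPP_ARMADILLO` makes `optim.hpp` pull in the RcppArmadillo headers):

```cpp
// [[Rcpp::depends(RcppArmadillo)]]

#define OPTIM_USE_RCPP_ARMADILLO
#include "optim.hpp" // header-only version, assumed to be on the include path

// hypothetical objective: f(x) = x'x, minimized at the origin
double sphere_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    if (grad_out) {
        *grad_out = 2.0 * vals_inp;
    }

    return arma::dot(vals_inp, vals_inp);
}

// [[Rcpp::export]]
arma::vec optim_sphere_min(arma::vec x)
{
    optim::bfgs(x, sphere_fn, nullptr); // x is overwritten with the solution
    return x;
}
```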
To illustrate OptimLib at work, consider searching for the global minimum of the Ackley function:

$$ f(x, y) = 20 + \exp(1) - 20 \exp\left( -0.2 \sqrt{0.5 (x^2 + y^2)} \right) - \exp\left( 0.5 \left( \cos(2 \pi x) + \cos(2 \pi y) \right) \right) $$

This is a well-known test function with many local minima. Newton-type methods (such as BFGS) are sensitive to the choice of initial values and will perform rather poorly here. As such, we will employ a global search method; in this case, Differential Evolution.
Code:
```cpp
#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

#include <iostream>

#define OPTIM_PI 3.14159265358979

double ackley_fn(const Eigen::VectorXd& vals_inp, Eigen::VectorXd* grad_out, void* opt_data)
{
    const double x = vals_inp(0);
    const double y = vals_inp(1);

    const double obj_val = 20 + std::exp(1) - 20*std::exp( -0.2*std::sqrt(0.5*(x*x + y*y)) )
                           - std::exp( 0.5*(std::cos(2 * OPTIM_PI * x) + std::cos(2 * OPTIM_PI * y)) );

    return obj_val;
}

int main()
{
    Eigen::VectorXd x = 2.0 * Eigen::VectorXd::Ones(2); // initial values: (2,2)

    bool success = optim::de(x, ackley_fn, nullptr);

    if (success) {
        std::cout << "de: Ackley test completed successfully." << std::endl;
    } else {
        std::cout << "de: Ackley test completed unsuccessfully." << std::endl;
    }

    std::cout << "de: solution to Ackley test:\n" << x << std::endl;

    return 0;
}
```
On x86-based computers, this example can be compiled using:

```bash
g++ -Wall -std=c++14 -O3 -march=native -ffp-contract=fast -I/path/to/eigen -I/path/to/optim/include optim_de_ex.cpp -o optim_de_ex.out -L/path/to/optim/lib -loptim
```
Output:

```
de: Ackley test completed successfully.
elapsed time: 0.028167s
de: solution to Ackley test:
  -1.2702e-17
  -3.8432e-16
```
On a standard laptop, OptimLib will compute a solution to within machine precision in a fraction of a second.
The Armadillo-based version of this example:

```cpp
#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"

#include <iostream>

#define OPTIM_PI 3.14159265358979

double ackley_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    const double x = vals_inp(0);
    const double y = vals_inp(1);

    const double obj_val = 20 + std::exp(1) - 20*std::exp( -0.2*std::sqrt(0.5*(x*x + y*y)) )
                           - std::exp( 0.5*(std::cos(2 * OPTIM_PI * x) + std::cos(2 * OPTIM_PI * y)) );

    return obj_val;
}

int main()
{
    arma::vec x = arma::ones(2,1) + 1.0; // initial values: (2,2)

    bool success = optim::de(x, ackley_fn, nullptr);

    if (success) {
        std::cout << "de: Ackley test completed successfully." << std::endl;
    } else {
        std::cout << "de: Ackley test completed unsuccessfully." << std::endl;
    }

    arma::cout << "de: solution to Ackley test:\n" << x << arma::endl;

    return 0;
}
```
Compile and run:

```bash
g++ -Wall -std=c++11 -O3 -march=native -ffp-contract=fast -I/path/to/armadillo -I/path/to/optim/include optim_de_ex.cpp -o optim_de_ex.out -L/path/to/optim/lib -loptim
./optim_de_ex.out
```
Check the `/tests` directory for additional examples, and https://optimlib.readthedocs.io/en/latest/ for a detailed description of each algorithm.
For a data-based example, consider maximum likelihood estimation of a logit model, common in statistics and machine learning. In this case we have closed-form expressions for the gradient and Hessian. We will employ a popular gradient descent method, Adam (Adaptive Moment Estimation), and compare it to a pure Newton-based algorithm.
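For reference, the quantities coded below are the sample-averaged negative log-likelihood and its closed-form derivatives:

$$ \ell(\theta) = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log \mu_i + (1 - y_i) \log(1 - \mu_i) \right], \qquad \mu_i = \frac{1}{1 + \exp(-x_i^\top \theta)} $$

$$ \nabla \ell(\theta) = \frac{1}{n} X^\top (\mu - y), \qquad \nabla^2 \ell(\theta) = \frac{1}{n} X^\top S X, \qquad S = \operatorname{diag}\left( \mu_i (1 - \mu_i) \right) $$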
```cpp
#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"

#include <chrono>
#include <iostream>

// sigmoid function
inline arma::mat sigm(const arma::mat& X)
{
    return 1.0 / (1.0 + arma::exp(-X));
}

// log-likelihood function data
struct ll_data_t
{
    arma::vec Y;
    arma::mat X;
};

// log-likelihood function with Hessian
double ll_fn_whess(const arma::vec& vals_inp, arma::vec* grad_out, arma::mat* hess_out, void* opt_data)
{
    ll_data_t* objfn_data = reinterpret_cast<ll_data_t*>(opt_data);

    arma::vec Y = objfn_data->Y;
    arma::mat X = objfn_data->X;

    arma::vec mu = sigm(X*vals_inp);

    const double norm_term = static_cast<double>(Y.n_elem);

    const double obj_val = - arma::accu( Y%arma::log(mu) + (1.0-Y)%arma::log(1.0-mu) ) / norm_term;

    if (grad_out) {
        *grad_out = X.t() * (mu - Y) / norm_term;
    }

    if (hess_out) {
        arma::mat S = arma::diagmat( mu%(1.0-mu) );
        *hess_out = X.t() * S * X / norm_term;
    }

    return obj_val;
}

// log-likelihood function for Adam
double ll_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    return ll_fn_whess(vals_inp, grad_out, nullptr, opt_data);
}

int main()
{
    int n_dim = 5;     // dimension of parameter vector
    int n_samp = 4000; // sample length

    arma::mat X = arma::randn(n_samp, n_dim);
    arma::vec theta_0 = 1.0 + 3.0*arma::randu(n_dim, 1);

    arma::vec mu = sigm(X*theta_0);

    arma::vec Y(n_samp);

    for (int i=0; i < n_samp; i++) {
        Y(i) = ( arma::as_scalar(arma::randu(1)) < mu(i) ) ? 1.0 : 0.0;
    }

    // fn data and initial values
    ll_data_t opt_data;
    opt_data.Y = std::move(Y);
    opt_data.X = std::move(X);

    arma::vec x = arma::ones(n_dim, 1) + 1.0; // initial values

    // run Adam-based optim
    optim::algo_settings_t settings;

    settings.gd_method = 6; // gradient descent method 6 corresponds to Adam
    settings.gd_settings.step_size = 0.1;

    std::chrono::time_point<std::chrono::system_clock> start = std::chrono::system_clock::now();

    bool success = optim::gd(x, ll_fn, &opt_data, settings);

    std::chrono::time_point<std::chrono::system_clock> end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed_seconds = end - start;

    if (success) {
        std::cout << "Adam: logit_reg test completed successfully.\n"
                  << "elapsed time: " << elapsed_seconds.count() << "s\n";
    } else {
        std::cout << "Adam: logit_reg test completed unsuccessfully." << std::endl;
    }

    arma::cout << "\nAdam: true values vs estimates:\n" << arma::join_rows(theta_0, x) << arma::endl;

    // run Newton-based optim
    x = arma::ones(n_dim, 1) + 1.0; // initial values

    start = std::chrono::system_clock::now();

    success = optim::newton(x, ll_fn_whess, &opt_data);

    end = std::chrono::system_clock::now();
    elapsed_seconds = end - start;

    if (success) {
        std::cout << "newton: logit_reg test completed successfully.\n"
                  << "elapsed time: " << elapsed_seconds.count() << "s\n";
    } else {
        std::cout << "newton: logit_reg test completed unsuccessfully." << std::endl;
    }

    arma::cout << "\nnewton: true values vs estimates:\n" << arma::join_rows(theta_0, x) << arma::endl;

    return 0;
}
```
Output:

```
Adam: logit_reg test completed successfully.
elapsed time: 0.025128s

Adam: true values vs estimates:
   2.7850   2.6993
   3.6561   3.6798
   2.3379   2.3860
   2.3167   2.4313
   2.2465   2.3064

newton: logit_reg test completed successfully.
elapsed time: 0.255909s

newton: true values vs estimates:
   2.7850   2.6993
   3.6561   3.6798
   2.3379   2.3860
   2.3167   2.4313
   2.2465   2.3064
```
Automatic Differentiation
By combining Eigen with the Autodiff library, OptimLib provides experimental support for automatic differentiation.
Example using forward-mode automatic differentiation with BFGS for the Sphere function:
```cpp
#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

#include <autodiff/forward/real.hpp>
#include <autodiff/forward/real/eigen.hpp>

#include <iostream>

autodiff::real opt_fnd(const autodiff::ArrayXreal& x)
{
    return x.cwiseProduct(x).sum();
}

double opt_fn(const Eigen::VectorXd& x, Eigen::VectorXd* grad_out, void* opt_data)
{
    autodiff::real u;
    autodiff::ArrayXreal xd = x.eval();

    if (grad_out) {
        Eigen::VectorXd grad_tmp = autodiff::gradient(opt_fnd, autodiff::wrt(xd), autodiff::at(xd), u);

        *grad_out = grad_tmp;
    } else {
        u = opt_fnd(xd);
    }

    return u.val();
}

int main()
{
    Eigen::VectorXd x(5);
    x << 1, 2, 3, 4, 5;

    bool success = optim::bfgs(x, opt_fn, nullptr);

    if (success) {
        std::cout << "bfgs: forward-mode autodiff test completed successfully.\n" << std::endl;
    } else {
        std::cout << "bfgs: forward-mode autodiff test completed unsuccessfully.\n" << std::endl;
    }

    std::cout << "solution: x = \n" << x << std::endl;

    return 0;
}
```
Compile with:

```bash
g++ -Wall -std=c++17 -O3 -march=native -ffp-contract=fast -I/path/to/eigen -I/path/to/autodiff -I/path/to/optim/include optim_autodiff_ex.cpp -o optim_autodiff_ex.out -L/path/to/optim/lib -loptim
```
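Reverse-mode differentiation can be wired up in the same way. Below is a minimal sketch of the objective wrapper, usable as a drop-in replacement for `opt_fn` in the example above, assuming autodiff's reverse-mode headers (`autodiff/reverse/var.hpp` and `autodiff/reverse/var/eigen.hpp`) are available:

```cpp
#include <autodiff/reverse/var.hpp>
#include <autodiff/reverse/var/eigen.hpp>

// the same Sphere function, written over reverse-mode variables
autodiff::var opt_fnd(const autodiff::ArrayXvar& x)
{
    return x.cwiseProduct(x).sum();
}

double opt_fn(const Eigen::VectorXd& x, Eigen::VectorXd* grad_out, void* opt_data)
{
    autodiff::ArrayXvar xd = x.eval();

    autodiff::var y = opt_fnd(xd); // forward sweep builds the expression tree

    if (grad_out) {
        *grad_out = autodiff::gradient(y, xd); // reverse sweep
    }

    return autodiff::val(y);
}
```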
See the documentation for more details on this topic.
Author: Keith O'Hara

License: Apache Version 2