#include <boost/math/differentiation/autodiff.hpp>

namespace boost {
namespace math {
namespace differentiation {

template <typename RealType, size_t Order, size_t... Orders>
autodiff_fvar<RealType, Order, Orders...> make_fvar(RealType const& ca);

template <typename RealType, size_t... Orders, typename... RealTypes>
auto make_ftuple(RealTypes const&... ca);

template <typename RealType, typename... RealTypes>
using promote = typename detail::promote_args_n<RealType, RealTypes...>::type;

namespace detail {

template <typename RealType, size_t Order>
class fvar {
 public:
  template <typename... Orders>
  get_type_at<RealType, sizeof...(Orders) - 1> derivative(Orders... orders) const;

  template <typename RealType2, size_t Order2>
  fvar& operator+=(fvar<RealType2, Order2> const&);

  fvar& operator+=(root_type const&);
};

template <typename RealType, size_t Order>
fvar<RealType, Order> floor(fvar<RealType, Order> const&);

template <typename RealType, size_t Order>
fvar<RealType, Order> exp(fvar<RealType, Order> const&);

}  // namespace detail
}  // namespace differentiation
}  // namespace math
}  // namespace boost

Description
Autodiff is a header-only C++ library that facilitates the automatic differentiation (forward mode) of mathematical functions of single and multiple variables.
This implementation is based upon the Taylor series expansion of an analytic function f at the point x0:

f(x0 + ε) = f(x0) + f'(x0)·ε + f''(x0)/2!·ε^2 + f'''(x0)/3!·ε^3 + ...
The essential idea of autodiff is the substitution of numbers with polynomials in the evaluation of f(x0). By substituting the number x0 with the first-order polynomial x0+ε, and using the same algorithm to compute f(x0+ε), the resulting polynomial in ε contains the function's derivatives f'(x0), f''(x0), f'''(x0), ... within the coefficients. Each coefficient is equal to the derivative of its respective order, divided by the factorial of the order.
In greater detail, assume one is interested in calculating the first N derivatives of f at x0. Without loss of precision to the calculation of the derivatives, all terms O(ε^(N+1)) that include powers of ε greater than N can be discarded. (This is due to the fact that each term in a polynomial depends only upon equal and lower-order terms under arithmetic operations.) Under these truncation rules, f provides a polynomial-to-polynomial transformation:

f : x0 + ε  →  y0 + y1·ε + y2·ε^2 + ... + yN·ε^N
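For a concrete illustration of these truncation rules, take N = 2 and f(x) = x^3, computed as two multiplications:

(x0 + ε)·(x0 + ε) = x0^2 + 2·x0·ε + ε^2
(x0^2 + 2·x0·ε + ε^2)·(x0 + ε) = x0^3 + 3·x0^2·ε + 3·x0·ε^2 + O(ε^3)

The ε^3 term is discarded because only derivatives up to order 2 were requested, and it cannot influence the lower-order coefficients. The surviving coefficients give f(x0) = x0^3, f'(x0) = 1!·3x0^2, and f''(x0) = 2!·3x0 = 6x0, as expected.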
C++'s ability to overload operators and functions allows for the creation of a class fvar (forward-mode autodiff variable) that represents polynomials in ε. Thus the same algorithm f that calculates the numeric value of y0 = f(x0), when written to accept and return variables of a generic (template) type, is also used to calculate the polynomial y0 + y1·ε + y2·ε^2 + ... + yN·ε^N = f(x0 + ε). The derivatives f^(n)(x0) are then found from the product of the respective factorial n! and coefficient yn:

f^(n)(x0) = n!·yn
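The mechanics can be sketched with a hand-rolled first-order "dual number" type, shown below purely as an illustration; it is not the library's fvar, which generalizes the idea to arbitrary truncation order and to nested polynomials for multiple variables:

#include <iostream>

// Illustration only: a value plus one derivative coefficient, i.e. a
// degree-1 polynomial v + d*e truncated at O(e^2).
struct dual {
  double v;  // f(x0)
  double d;  // f'(x0)
};

dual operator*(dual const& a, dual const& b) {
  // (a.v + a.d*e)(b.v + b.d*e) = a.v*b.v + (a.v*b.d + a.d*b.v)*e + O(e^2)
  return {a.v * b.v, a.v * b.d + a.d * b.v};
}

template <typename T>
T fourth_power(T const& x) {  // same style of generic algorithm as Example 1 below
  T x4 = x * x;
  x4 = x4 * x4;
  return x4;
}

int main() {
  dual const x{2.0, 1.0};  // represents the polynomial 2 + e
  dual const y = fourth_power(x);
  std::cout << y.v << ' ' << y.d << '\n';  // prints 16 32, i.e. f(2) and f'(2)
  return 0;
}

Passing a dual through the unmodified generic algorithm propagates the first derivative automatically; fvar does the same for all N requested derivatives at once.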
Examples

Example 1: Single-variable derivatives

Calculate derivatives of f(x) = x^4 at x = 2.
In this example, make_fvar<double, Order>(2.0) instantiates the polynomial 2 + ε. The Order = 5 means that enough space is allocated (on the stack) to hold a polynomial of up to degree 5 during the subsequent computation. Internally, this is modeled by a std::array<double, 6> whose elements {2, 1, 0, 0, 0, 0} correspond to the 6 coefficients of the polynomial upon initialization. Its fourth power, at the end of the computation, is a polynomial with coefficients y = {16, 32, 24, 8, 1, 0}. The derivatives are obtained using the formula f^(n)(2) = n!·y[n].
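As a check on those coefficients, expanding the fourth power directly gives

(2 + ε)^4 = 16 + 32·ε + 24·ε^2 + 8·ε^3 + ε^4

so, for instance, f'''(2) = 3!·y[3] = 6·8 = 48 and f''''(2) = 4!·y[4] = 24·1 = 24.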
#include <boost/math/differentiation/autodiff.hpp>
#include <iostream>

template <typename T>
T fourth_power(T const& x) {
  T x4 = x * x;
  x4 *= x4;
  return x4;
}

int main() {
  using namespace boost::math::differentiation;
  constexpr unsigned Order = 5;                  // Highest order derivative to be calculated.
  auto const x = make_fvar<double, Order>(2.0);  // Find derivatives at x = 2.
  auto const y = fourth_power(x);
  for (unsigned i = 0; i <= Order; ++i)
    std::cout << "y.derivative(" << i << ") = " << y.derivative(i) << std::endl;
  return 0;
}
The above calculates

y.derivative(0) = 16
y.derivative(1) = 32
y.derivative(2) = 48
y.derivative(3) = 48
y.derivative(4) = 24
y.derivative(5) = 0

matching f(x) = x^4, f'(x) = 4x^3, f''(x) = 12x^2, f'''(x) = 24x, f''''(x) = 24, and f'''''(x) = 0 evaluated at x = 2.
Example 2: Multi-variable mixed partial derivatives with multi-precision data type

Calculate the mixed partial derivative ∂^12 f / (∂w^3 ∂x^2 ∂y^4 ∂z^3) with a precision of about 50 decimal digits, where

f(w, x, y, z) = exp(w·sin(x·log(y)/z) + sqrt(w·z/(x·y))) + w^2/tan(z)

evaluated at (w, x, y, z) = (11, 12, 13, 14).
In this example, make_ftuple<float50, Nw, Nx, Ny, Nz>(11, 12, 13, 14) returns a std::tuple of 4 independent fvar variables with values 11, 12, 13, and 14, whose maximum derivative orders are 3, 2, 4, and 3, respectively. The order of the variables is important, as it is the same order used when calling v.derivative(Nw, Nx, Ny, Nz) in the example below.
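A smaller two-variable sketch (the function and values here are chosen only for illustration) shows the same ordering rule before the full example:

#include <boost/math/differentiation/autodiff.hpp>
#include <iostream>
#include <tuple>

int main() {
  using namespace boost::math::differentiation;
  // Orders follow the template argument list: x up to order 2, y up to order 1.
  auto const variables = make_ftuple<double, 2, 1>(3.0, 4.0);
  auto const& x = std::get<0>(variables);
  auto const& y = std::get<1>(variables);
  auto const v = x * x * y;  // f(x, y) = x^2 * y
  // derivative() takes the orders in the same variable order as make_ftuple:
  // d^3 f / (dx^2 dy) = 2 everywhere.
  std::cout << v.derivative(2, 1) << '\n';  // prints 2
  return 0;
}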
#include <boost/math/differentiation/autodiff.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iomanip>
#include <iostream>

using namespace boost::math::differentiation;

template <typename W, typename X, typename Y, typename Z>
promote<W, X, Y, Z> f(const W& w, const X& x, const Y& y, const Z& z) {
  using namespace std;
  return exp(w * sin(x * log(y) / z) + sqrt(w * z / (x * y))) + w * w / tan(z);
}

int main() {
  using float50 = boost::multiprecision::cpp_bin_float_50;

  constexpr unsigned Nw = 3;  // Maximum derivative order for w.
  constexpr unsigned Nx = 2;  // Maximum derivative order for x.
  constexpr unsigned Ny = 4;  // Maximum derivative order for y.
  constexpr unsigned Nz = 3;  // Maximum derivative order for z.
  auto const variables = make_ftuple<float50, Nw, Nx, Ny, Nz>(11, 12, 13, 14);
  auto const& w = std::get<0>(variables);
  auto const& x = std::get<1>(variables);
  auto const& y = std::get<2>(variables);
  auto const& z = std::get<3>(variables);
  auto const v = f(w, x, y, z);
  float50 const answer("1976.319600747797717779881875290418720908121189218755");
  std::cout << std::setprecision(std::numeric_limits<float50>::digits10)
            << "mathematica : " << answer << '\n'
            << "autodiff : " << v.derivative(Nw, Nx, Ny, Nz) << '\n'
            << std::setprecision(3)
            << "relative error: " << (v.derivative(Nw, Nx, Ny, Nz) / answer - 1) << '\n';
  return 0;
}

Example 3: Black-Scholes Option Pricing with Greeks Automatically Calculated

Calculate greeks directly from the Black-Scholes pricing function.
Below is the standard Black-Scholes pricing function written as a function template, where the price, volatility (sigma), time to expiration (tau), and interest rate are template parameters. This means that any greek based on these 4 variables can be calculated using autodiff. The example below calculates delta and gamma, where the variable of differentiation is only the price. For examples of more exotic greeks, see example/black_scholes.cpp.
#include <boost/math/differentiation/autodiff.hpp>
#include <iostream>

using namespace boost::math::constants;
using namespace boost::math::differentiation;

template <typename X>
X Phi(X const& x) {
  return 0.5 * erfc(-one_div_root_two<X>() * x);
}

enum class CP { call, put };

template <typename Price, typename Sigma, typename Tau, typename Rate>
promote<Price, Sigma, Tau, Rate> black_scholes_option_price(CP cp,
                                                            double K,
                                                            Price const& S,
                                                            Sigma const& sigma,
                                                            Tau const& tau,
                                                            Rate const& r) {
  using namespace std;
  auto const d1 = (log(S / K) + (r + sigma * sigma / 2) * tau) / (sigma * sqrt(tau));
  auto const d2 = (log(S / K) + (r - sigma * sigma / 2) * tau) / (sigma * sqrt(tau));
  switch (cp) {
    case CP::call:
      return S * Phi(d1) - exp(-r * tau) * K * Phi(d2);
    case CP::put:
      return exp(-r * tau) * K * Phi(-d2) - S * Phi(-d1);
  }
}

int main() {
  double const K = 100.0;                    // Strike price.
  auto const S = make_fvar<double, 2>(105);  // Stock price; differentiate with respect to S up to order 2.
  double const sigma = 5;                    // Volatility.
  double const tau = 30.0 / 365;             // Time to expiration in years (30 days).
  double const r = 1.25 / 100;               // Interest rate (1.25%).
  auto const call_price = black_scholes_option_price(CP::call, K, S, sigma, tau, r);
  auto const put_price = black_scholes_option_price(CP::put, K, S, sigma, tau, r);

  std::cout << "black-scholes call price = " << call_price.derivative(0) << '\n'
            << "black-scholes put price = " << put_price.derivative(0) << '\n'
            << "call delta = " << call_price.derivative(1) << '\n'
            << "put delta = " << put_price.derivative(1) << '\n'
            << "call gamma = " << call_price.derivative(2) << '\n'
            << "put gamma = " << put_price.derivative(2) << '\n';
  return 0;
}
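The call delta and gamma also have well-known closed forms, Phi(d1) and phi(d1)/(S·sigma·sqrt(tau)) respectively, where phi is the standard normal density. A short sketch along these lines can be used to cross-check the autodiff values printed above:

#include <boost/math/constants/constants.hpp>
#include <cmath>
#include <iostream>

int main() {
  using namespace boost::math::constants;
  // Same market data as in the autodiff example above.
  double const K = 100.0, S = 105.0, sigma = 5.0, tau = 30.0 / 365, r = 1.25 / 100;
  double const d1 = (std::log(S / K) + (r + sigma * sigma / 2) * tau) / (sigma * std::sqrt(tau));
  double const delta = 0.5 * std::erfc(-one_div_root_two<double>() * d1);     // Phi(d1)
  double const pdf = one_div_root_two_pi<double>() * std::exp(-d1 * d1 / 2);  // phi(d1)
  double const gamma = pdf / (S * sigma * std::sqrt(tau));
  std::cout << "closed-form call delta = " << delta << '\n'
            << "closed-form call gamma = " << gamma << '\n';
  return 0;
}

Its output should agree with the call delta and call gamma reported by the autodiff example.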
Advantages of Automatic Differentiation

The above examples illustrate some of the advantages of using autodiff, such as computing derivatives of any order directly from the original function code, rather than from separately hand-coded derivative functions or finite-difference approximations. Additional details are in the autodiff manual.