GaussianProcess—Wolfram Documentation
- Method for Predict.
- Infers values by conditioning a Gaussian process on the training data.
Details & Suboptions
- The "GaussianProcess" method assumes that the function to be modeled has been generated by a Gaussian process. The Gaussian process is defined by its covariance function (also called kernel). In the training phase, the method estimates the parameters of this covariance function. The Gaussian process is then conditioned on the training data and used to infer the value of a new example using Bayesian inference.
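The basic usage can be sketched as follows, with a small illustrative dataset (the data here is made up for the example):

```wolfram
(* any list of input -> value rules works as training data *)
data = {1 -> 1.3, 2 -> 2.4, 3 -> 3.1, 4 -> 4.3};
p = Predict[data, Method -> "GaussianProcess"];
p[2.5]  (* infer the value at a new point by conditioning the GP on data *)
```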
- The following options can be given:
- "AssumeDeterministic" | False | whether the function should be assumed to be deterministic
- "CovarianceType" | Automatic | the covariance type to use
- "EstimationMethod" | "MaximumPosterior" | the method used to infer values
- "OptimizationMethod" | Automatic | the optimization method used to estimate parameters
- Possible settings for "CovarianceType" include:
- "SquaredExponential" | squared exponential kernel
- "HammingDistance" | exponential kernel for nominal variables
- "Periodic" | periodic kernel
- "RationalQuadratic" | rational quadratic kernel
- "Linear" | linear kernel
- "Matern5/2" | Matérn kernel with exponent 5/2
- "Matern3/2" | Matérn kernel with exponent 3/2
- "Composite" | a composition of the previous kernels
- assoc | specify a different kernel for each feature type
- In Method -> {"GaussianProcess", "CovarianceType" -> assoc}, assoc needs to be of the form <|"Numerical" -> kernel1, "Nominal" -> kernel2|>.
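A sketch of this per-feature-type form, using hypothetical mixed-type data with one numerical and one nominal feature:

```wolfram
(* illustrative data: each example has a numerical and a nominal feature *)
data = {{1.2, "a"} -> 1.0, {2.3, "b"} -> 2.1, {3.1, "a"} -> 2.9, {4.0, "b"} -> 4.2};
p = Predict[data,
  Method -> {"GaussianProcess",
    "CovarianceType" -> <|"Numerical" -> "SquaredExponential",
      "Nominal" -> "HammingDistance"|>}];
```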
- Possible settings for "EstimationMethod" include:
- "MaximumPosterior" | maximize the posterior distribution
- "MaximumLikelihood" | maximize the likelihood
- "MeanPosterior" | use the mean of the posterior distribution
- Possible settings for "OptimizationMethod" include:
- "SimulatedAnnealing" | use simulated annealing to find the minimum
- "FindMinimum" | use FindMinimum to find the minimum
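Both suboptions can be combined in the same Method specification; for example (illustrative data):

```wolfram
p = Predict[{1 -> 1., 2 -> 4., 3 -> 9., 4 -> 16.},
  Method -> {"GaussianProcess",
    "EstimationMethod" -> "MaximumLikelihood",
    "OptimizationMethod" -> "SimulatedAnnealing"}];
```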
Examples

Basic Examples (2)
Train a predictor function on labeled examples:
Obtain information about the predictor:
Predict a new example:
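The three steps above can be sketched together as follows (the data is illustrative):

```wolfram
p = Predict[{1 -> 1.2, 2 -> 2.3, 3 -> 3.1, 4 -> 4.2}, Method -> "GaussianProcess"];
PredictorInformation[p]  (* method, feature types and other properties *)
p[5]                     (* predicted value for a new example *)
```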
Train a predictor on labeled examples:
Compare the data with the predicted values and look at the standard deviation:
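One way to sketch this comparison, assuming noisy samples of a sine curve and using the "Distribution" property of the predictor to obtain the posterior standard deviation:

```wolfram
data = Table[x -> Sin[x] + RandomVariate[NormalDistribution[0, 0.1]], {x, 0, 6, 0.25}];
p = Predict[data, Method -> "GaussianProcess"];
(* p[x, "Distribution"] gives the predictive distribution at x *)
sd[x_] := StandardDeviation[p[x, "Distribution"]]
Show[
 Plot[{p[x], p[x] + sd[x], p[x] - sd[x]}, {x, 0, 6}],
 ListPlot[List @@@ data]]
```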
Options (4)

AssumeDeterministic (2)
Assume deterministic data and train a predictor on it:
Generate some labeled data normally distributed around a polynomial function:
Train a predictor by assuming the data is not deterministic:
Train a predictor by assuming the data is deterministic:
Compare the results:
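The workflow above can be sketched as follows, with noisy polynomial data generated for illustration:

```wolfram
(* labeled data normally distributed around a polynomial *)
data = Table[x -> x^2 - x + RandomVariate[NormalDistribution[0, 1]], {x, -3, 3, 0.5}];
pNoisy = Predict[data, Method -> {"GaussianProcess", "AssumeDeterministic" -> False}];
pExact = Predict[data, Method -> {"GaussianProcess", "AssumeDeterministic" -> True}];
(* the deterministic predictor interpolates the data; the other smooths it *)
Plot[{pNoisy[x], pExact[x]}, {x, -3, 3}]
```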
"CovarianceType" (2)
Use a specific covariance type to train a predictor:
Generate a labeled training set and visualize it:
Train two predictors using different covariance types:
Train a third predictor using the "Composite" covariance type:
Look at the kernel type that has been found:
Get its internal parameters:
Compare the three results:
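A sketch of training predictors with different covariance types on the same illustrative dataset:

```wolfram
data = Table[x -> Sin[2 x] + 0.1 x, {x, 0, 8, 0.3}];
p1 = Predict[data, Method -> {"GaussianProcess", "CovarianceType" -> "Periodic"}];
p2 = Predict[data, Method -> {"GaussianProcess", "CovarianceType" -> "RationalQuadratic"}];
p3 = Predict[data, Method -> {"GaussianProcess", "CovarianceType" -> "Composite"}];
Plot[{p1[x], p2[x], p3[x]}, {x, 0, 8}]
```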