The Wolfram Language offers a rich selection of machine learning methods to perform regression, classification, clustering, dimensionality reduction and more.
Classification (Classify)

"ClassDistributions" — use learned distributions
"DecisionTree" — use a decision tree
"GradientBoostedTrees" — use an ensemble of trees trained with gradient boosting
"LogisticRegression" — use probabilities from linear combinations of features
"Markov" — use a Markov model on the feature sequence (only for text, bag of tokens, ...)
"NaiveBayes" — classify by assuming probabilistic independence of features
"NearestNeighbors" — use nearest-neighbor examples
"NeuralNetwork" — use artificial neural networks
"RandomForest" — use Breiman–Cutler ensembles of decision trees
"SupportVectorMachine" — use a support vector machine
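Any of the method names above can be passed to Classify via its Method option. A minimal sketch, using made-up toy data (the inputs, labels and method choice here are illustrative, not from the guide):

```wolfram
(* train a classifier on labeled examples; Method selects the algorithm *)
classifier = Classify[
  {1.0 -> "A", 1.2 -> "A", 3.8 -> "B", 4.1 -> "B"},
  Method -> "RandomForest"
];

(* classify a new input *)
classifier[1.1]
```

Omitting Method lets Classify choose an algorithm automatically.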
Regression (Predict)

"DecisionTree" — use a decision tree
"GradientBoostedTrees" — use an ensemble of trees trained with gradient boosting
"LinearRegression" — use a linear combination of features
"NearestNeighbors" — use nearest-neighbor examples
"NeuralNetwork" — use artificial neural networks
"RandomForest" — use Breiman–Cutler ensembles of decision trees
"GaussianProcess" — use a Gaussian process prior over functions
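Predict takes the same kind of Method option for regression. A minimal sketch with invented input-output pairs:

```wolfram
(* learn a numeric function from input -> output examples *)
predictor = Predict[
  {1 -> 1.9, 2 -> 4.1, 3 -> 6.0, 4 -> 8.2},
  Method -> "LinearRegression"
];

(* predict the value at a new input *)
predictor[5]
```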
Clustering (FindClusters)

"Agglomerate" — single-linkage clustering algorithm
"DBSCAN" — density-based spatial clustering of applications with noise
"GaussianMixture" — use a mixture of Gaussian (normal) distributions
"JarvisPatrick" — Jarvis–Patrick clustering algorithm
"KMeans" — k-means clustering algorithm
"KMedoids" — partitioning around medoids
"MeanShift" — mean-shift clustering algorithm
"NeighborhoodContraction" — shift data points toward high-density regions
"SpanningTree" — minimum spanning tree–based clustering algorithm
"Spectral" — spectral clustering algorithm
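A clustering method is chosen the same way with FindClusters; this sketch uses made-up 2D points and requests two clusters:

```wolfram
(* group 2D points into 2 clusters with k-means *)
FindClusters[
  {{1., 1.}, {1.2, 0.9}, {8., 8.}, {8.1, 7.9}},
  2,
  Method -> "KMeans"
]
```

Methods such as "DBSCAN" and "MeanShift" determine the number of clusters themselves, so the explicit cluster count can be omitted.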
Distribution Modeling (LearnDistribution)

"ContingencyTable" — discretize data and store each possible probability
"DecisionTree" — use a decision tree
"GaussianMixture" — use a mixture of Gaussian (normal) distributions
"KernelDensityEstimation" — use a kernel mixture distribution
"Multinormal" — use a multivariate normal (Gaussian) distribution
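LearnDistribution returns a learned distribution object that works with the usual distribution functions. A minimal sketch on a synthetic normal sample:

```wolfram
(* model the underlying distribution of a numeric sample *)
dist = LearnDistribution[
  RandomVariate[NormalDistribution[0, 1], 200],
  Method -> "GaussianMixture"
];

(* evaluate the learned density and draw new samples from it *)
PDF[dist, 0.5]
RandomVariate[dist, 3]
```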
Dimensionality Reduction (DimensionReduction)

"Autoencoder" — use a trainable autoencoder
"Hadamard" — project data using a Hadamard matrix
"Isomap" — isometric mapping
"LatentSemanticAnalysis" — latent semantic analysis method
"Linear" — automatically choose the best linear method
"LLE" — locally linear embedding
"PrincipalComponentsAnalysis" — principal components analysis method
"MultidimensionalScaling" — metric multidimensional scaling
"TSNE" — t-distributed stochastic neighbor embedding algorithm
"UMAP" — uniform manifold approximation and projection
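DimensionReduction learns a reducer function that can then be applied to new data. A minimal sketch, reducing random 3D vectors to 2 dimensions (the data and method choice are illustrative):

```wolfram
(* learn a reduction of 3D vectors to 2 dimensions *)
reducer = DimensionReduction[
  RandomReal[1, {50, 3}],
  2,
  Method -> "PrincipalComponentsAnalysis"
];

(* apply the learned reducer to a new vector *)
reducer[{0.3, 0.7, 0.1}]
```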