TECH NOTE
Numerical Operations on Data

Basic descriptive statistics operations.

The standard deviation StandardDeviation[list] is defined to be Sqrt[Variance[list]], the square root of the sample variance. If the elements in list are thought of as being selected at random according to some probability distribution, then the mean gives an estimate of where the center of the distribution is located, while the standard deviation gives an estimate of how wide the dispersion in the distribution is.

The median Median[list] effectively gives the value at the halfway point in the sorted version of list. It is often considered a more robust measure of the center of a distribution than the mean, since it depends less on outlying values.

The quantile Quantile[list,q] gives the value that is a fraction q of the way through the sorted version of list. There are, however, about 10 other definitions of quantile in use, all potentially giving slightly different results. The Wolfram Language covers the common cases by introducing four quantile parameters in the form Quantile[list,q,{{a,b},{c,d}}]. The parameters a and b in effect define where in the list a fraction q of the way through should be considered to fall. If this corresponds to an integer position, then the element at that position is taken to be the q th quantile. If it is not an integer position, then a linear combination of the elements on either side is used, as specified by c and d.

{{0,0},{1,0}}  inverse empirical CDF (default)
{{0,0},{0,1}}  linear interpolation (California method)
{{1/2,0},{0,0}}  element numbered closest to q n
{{1/2,0},{0,1}}  linear interpolation (hydrologist method)
{{0,1},{0,1}}  mean-based estimate (Weibull method)
{{1,-1},{0,1}}  mode-based estimate
{{1/3,1/3},{0,1}}  median-based estimate
{{3/8,1/4},{0,1}}  normal distribution estimate

Common choices for quantile parameters.

Mean[{x1,x2,…}]  the mean of the xi
Mean[{{x1,y1,…},{x2,y2,…},…}]  a list of the means of the xi,yi,…

Handling multidimensional data.

Sometimes each item in your data may involve a list of values. The basic statistics functions in the Wolfram Language automatically apply to all corresponding elements in these lists. This separately finds the mean of each "column" of data: Note that you can extract the elements in the i th "column" of a multidimensional list using list[[All,i]].

Descriptive statistics refers to properties of distributions, such as location, dispersion, and shape. The functions described here compute descriptive statistics of lists of data. You can calculate some of the standard descriptive statistics for various known distributions by using the functions described in "Continuous Distributions" and "Discrete Distributions".

This finds the mean and median of the data: This is the mean when the smallest entry in the list is excluded. TrimmedMean allows you to describe the data with outliers removed:

Dispersion statistics summarize the scatter or spread of the data. Most of these functions describe deviation from a particular location. For instance, variance is a measure of deviation from the mean, and standard deviation is just the square root of the variance. This gives an unbiased estimate for the variance of the data, with n-1 as the divisor: This compares three types of deviation:

Covariance[v1,v2]  covariance coefficient between lists v1 and v2
Covariance[m]  covariance matrix for the matrix m
Covariance[m1,m2]  covariance matrix for the matrices m1 and m2
Correlation[v1,v2]  correlation coefficient between lists v1 and v2
Correlation[m]  correlation matrix for the matrix m
Correlation[m1,m2]  correlation matrix for the matrices m1 and m2

Covariance and correlation statistics.

Covariance is the multivariate extension of variance.
For two vectors of equal length, the covariance is a number. For a single matrix m, the (i,j) th element of the covariance matrix is the covariance between the i th and j th columns of m. For two matrices m1 and m2, the (i,j) th element of the covariance matrix is the covariance between the i th column of m1 and the j th column of m2.

While covariance measures dispersion, correlation measures association. The correlation between two vectors is equivalent to the covariance between the vectors divided by the standard deviations of the vectors. Likewise, the elements of a correlation matrix are equivalent to the elements of the corresponding covariance matrix scaled by the appropriate column standard deviations.

This gives the covariance between data and a random vector: This is the correlation matrix for the matrix m: This is the covariance matrix: Scaling the covariance matrix terms by the appropriate standard deviations gives the correlation matrix:

You can get some information about the shape of a distribution using shape statistics. Skewness describes the amount of asymmetry. Kurtosis measures the concentration of data around the peak and in the tails versus the concentration in the flanks. Skewness is calculated by dividing the third central moment by the cube of the population standard deviation. Kurtosis is calculated by dividing the fourth central moment by the square of the population variance of the data. (The population variance is the second central moment, CentralMoment[data,2], and the population standard deviation is its square root.)

Here is the second central moment of the data: A negative value for skewness indicates that the distribution underlying the data has a long left-sided tail:

Expectation[f[x],x\[Distributed]list]  expected value of the function f of x with respect to the values of list

Here is the expected value of the Log of the data:

The functions described here are among the most commonly used discrete univariate statistical distributions. You can compute their densities, means, variances, and other related properties. The distributions themselves are represented in the symbolic form name[param1,param2,…]. Functions such as Mean, which give properties of statistical distributions, take the symbolic representation of the distribution as an argument. "Continuous Distributions" describes many continuous statistical distributions.
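Before turning to the individual distributions, here is a minimal sketch pulling together the descriptive statistics described above. The data values are made up purely for illustration and do not correspond to the original example cells.

  data = {6.5, 3.8, 6.6, 5.7, 6.0, 6.4, 5.3};
  {Mean[data], Median[data], StandardDeviation[data]}
  Quantile[data, 1/4]                          (* first quartile, default parameters *)
  Quantile[data, 1/4, {{1/2, 0}, {0, 1}}]      (* hydrologist-method quartile *)
  TrimmedMean[data, 0.2]                       (* mean after trimming a fraction 0.2 from each end *)

  v1 = {2.1, 3.3, 4.0, 5.2, 6.8};
  v2 = {1.0, 2.2, 2.9, 4.5, 5.1};
  Covariance[v1, v2]                           (* a single number for two vectors *)
  Correlation[v1, v2]                          (* covariance scaled by the two standard deviations *)
  {Skewness[v1], Kurtosis[v1], CentralMoment[v1, 2]}
  Expectation[Log[x], x \[Distributed] v1]     (* expected value of Log with respect to the data *)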
BernoulliDistribution[p] Bernoulli distribution with mean p BetaBinomialDistribution[α,β,n] binomial distribution where the success probability is a BetaDistribution[α,β] random variable BetaNegativeBinomialDistribution[α,β,n] negative binomial distribution where the success probability is a BetaDistribution[α,β] random variable BinomialDistribution[n,p] binomial distribution for the number of successes that occur in n trials, where the probability of success in a trial is p DiscreteUniformDistribution[{imin,imax}] discrete uniform distribution over the integers from imin to imax GeometricDistribution[p] geometric distribution for the number of trials before the first success, where the probability of success in a trial is p HypergeometricDistribution[n,nsucc,ntot] hypergeometric distribution for the number of successes out of a sample of size n, from a population of size ntot containing nsucc successes LogSeriesDistribution[θ] logarithmic series distribution with parameter θ NegativeBinomialDistribution[n,p] negative binomial distribution with parameters n and p PoissonDistribution[μ] Poisson distribution with mean μ ZipfDistribution[ρ] Zipf distribution with parameter ρ Discrete statistical distributions. Most of the common discrete statistical distributions can be understood by considering a sequence of trials, each with two possible outcomes, for example, success and failure. The Bernoulli distribution BernoulliDistribution[p] is the probability distribution for a single trial in which success, corresponding to value 1, occurs with probability p, and failure, corresponding to value 0, occurs with probability 1-p. The binomial distribution BinomialDistribution[n,p] is the distribution of the number of successes that occur in n independent trials, where the probability of success in each trial is p. The negative binomial distribution NegativeBinomialDistribution[n,p] for positive integer n is the distribution of the number of failures that occur in a sequence of trials before n successes have occurred, where the probability of success in each trial is p. The distribution is defined for any positive n, though the interpretation of n as the number of successes and p as the success probability no longer holds if n is not an integer. The beta binomial distribution BetaBinomialDistribution[α,β,n] is a mixture of binomial and beta distributions. A BetaBinomialDistribution[α,β,n] random variable follows a BinomialDistribution[n,p] distribution, where the success probability p is itself a random variable following the beta distribution BetaDistribution[α,β]. The beta negative binomial distribution BetaNegativeBinomialDistribution[α,β,n] is a similar mixture of the beta and negative binomial distributions. The geometric distribution GeometricDistribution[p] is the distribution of the total number of trials before the first success occurs, where the probability of success in each trial is p. The hypergeometric distribution HypergeometricDistribution[n,nsucc,ntot] is used in place of the binomial distribution for experiments in which the n trials correspond to sampling without replacement from a population of size ntot with nsucc potential successes. The discrete uniform distribution DiscreteUniformDistribution[{imin,imax}] represents an experiment with multiple equally probable outcomes represented by integers imin through imax. The Poisson distribution PoissonDistribution[μ] describes the number of events that occur in a given time period where μ is the average number of events per period. 
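The distribution objects above are symbolic, so their properties can be computed exactly or numerically. Here is a minimal sketch using a few of the distributions just described; the parameter values are arbitrary.

  bdist = BinomialDistribution[10, 0.3];    (* 10 trials, success probability 0.3 *)
  {Mean[bdist], Variance[bdist]}
  PDF[bdist, 4]                             (* probability of exactly 4 successes *)
  CDF[bdist, 4]                             (* probability of at most 4 successes *)
  Mean[PoissonDistribution[\[Mu]]]          (* symbolic parameters give symbolic results *)
  Mean[GeometricDistribution[p]]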
The Zipf distribution ZipfDistribution[ρ], sometimes referred to as the zeta distribution, was first used in linguistics and its use has been extended to model rare events.

PDF[dist,x]  probability mass function at x
CDF[dist,x]  cumulative distribution function at x
InverseCDF[dist,q]  the largest integer x such that CDF[dist,x] is at most q
Quantile[dist,q]  q th quantile
Mean[dist]  mean
Variance[dist]  variance
StandardDeviation[dist]  standard deviation
Skewness[dist]  coefficient of skewness
Kurtosis[dist]  coefficient of kurtosis
CharacteristicFunction[dist,t]  characteristic function
Expectation[f[x],x\[Distributed]dist]  expectation of f[x] for x distributed according to dist
Median[dist]  median
Quartiles[dist]  list of the 1/4, 1/2, 3/4 quantiles for dist
InterquartileRange[dist]  difference between the first and third quartiles
QuartileDeviation[dist]  half the interquartile range
QuartileSkewness[dist]  quartile-based skewness measure
RandomVariate[dist]  pseudorandom number with specified distribution
RandomVariate[dist,dims]  pseudorandom array with dimensionality dims, and elements from the specified distribution

Some functions of statistical distributions.

Distributions are represented in symbolic form. PDF[dist,x] evaluates the mass function at x if x is a numerical value, and otherwise leaves the function in symbolic form whenever possible. Similarly, CDF[dist,x] gives the cumulative distribution and Mean[dist] gives the mean of the specified distribution. The table above gives a sampling of some of the more common functions available for distributions. For a more complete description of these functions, see the description of their continuous analogues in "Continuous Distributions".

Here is a symbolic representation of the binomial distribution for 34 trials, each having probability 0.3 of success: This is the mean of the distribution: You can get the expression for the mean by using symbolic variables as arguments: Here is the 50% quantile, which is equal to the median: This gives the expected value of an expression with respect to the binomial distribution: The elements of this matrix are pseudorandom numbers from the binomial distribution:

The functions described here are among the most commonly used continuous univariate statistical distributions. You can compute their densities, means, variances, and other related properties. The distributions themselves are represented in the symbolic form name[param1,param2,…]. Functions such as Mean, which give properties of statistical distributions, take the symbolic representation of the distribution as an argument. "Discrete Distributions" describes many common discrete univariate statistical distributions.

Distributions related to the normal distribution.

The lognormal distribution LogNormalDistribution[μ,σ] is the distribution followed by the exponential of a normally distributed random variable. This distribution arises when many independent random variables are combined in a multiplicative fashion. The half-normal distribution HalfNormalDistribution[θ] is proportional to the distribution NormalDistribution[0,1/(θ Sqrt[2/π])] limited to the domain x≥0. The inverse Gaussian distribution InverseGaussianDistribution[μ,λ], sometimes called the Wald distribution, is the distribution of first passage times in Brownian motion with positive drift.

Distributions related to normally distributed samples.

Distributions that are derived from normal distributions with nonzero means are called noncentral distributions.

Piecewise linear distributions.
The uniform distribution UniformDistribution[{min,max}], commonly referred to as the rectangular distribution, characterizes a random variable whose value is everywhere equally likely. An example of a uniformly distributed random variable is the location of a point chosen randomly on a line from min to max.

BetaDistribution[α,β]  continuous beta distribution with shape parameters α and β
CauchyDistribution[a,b]  Cauchy distribution with location parameter a and scale parameter b
ChiDistribution[ν]  χ distribution with ν degrees of freedom
ExponentialDistribution[λ]  exponential distribution with scale inversely proportional to parameter λ
ExtremeValueDistribution[α,β]  extreme maximum value (Fisher–Tippett) distribution with location parameter α and scale parameter β
GammaDistribution[α,β]  gamma distribution with shape parameter α and scale parameter β
GumbelDistribution[α,β]  Gumbel minimum extreme value distribution with location parameter α and scale parameter β
InverseGammaDistribution[α,β]  inverse gamma distribution with shape parameter α and scale parameter β
LaplaceDistribution[μ,β]  Laplace (double exponential) distribution with mean μ and scale parameter β
LevyDistribution[μ,σ]  Lévy distribution with location parameter μ and dispersion parameter σ
LogisticDistribution[μ,β]  logistic distribution with mean μ and scale parameter β
MaxwellDistribution[σ]  Maxwell (Maxwell–Boltzmann) distribution with scale parameter σ
ParetoDistribution[k,α]  Pareto distribution with minimum value parameter k and shape parameter α
RayleighDistribution[σ]  Rayleigh distribution with scale parameter σ
WeibullDistribution[α,β]  Weibull distribution with shape parameter α and scale parameter β

Other continuous statistical distributions.

The Laplace distribution LaplaceDistribution[μ,β] is the distribution of the difference of two independent random variables with identical exponential distributions. The logistic distribution LogisticDistribution[μ,β] is frequently used in place of the normal distribution when a distribution with longer tails is desired. The Pareto distribution ParetoDistribution[k,α] may be used to describe income, with k representing the minimum income possible. The Weibull distribution WeibullDistribution[α,β] is commonly used in engineering to describe the lifetime of an object.

The extreme value distribution ExtremeValueDistribution[α,β] is the limiting distribution for the largest values in large samples drawn from a variety of distributions, including the normal distribution. The limiting distribution for the smallest values in such samples is the Gumbel distribution, GumbelDistribution[α,β]. The names "extreme value" and "Gumbel distribution" are sometimes used interchangeably because the distributions of the largest and smallest extreme values are related by a linear change of variable. The extreme value distribution is also sometimes referred to as the log-Weibull distribution because of logarithmic relationships between an extreme value-distributed random variable and a properly shifted and scaled Weibull-distributed random variable.
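As with the discrete case, these symbolic distribution objects can be queried directly. Here is a minimal sketch with arbitrary parameter values.

  {Mean[WeibullDistribution[2, 3]], Variance[WeibullDistribution[2, 3]]}
  PDF[LaplaceDistribution[\[Mu], \[Beta]], x]        (* symbolic density *)
  Mean[ParetoDistribution[k, \[Alpha]]]              (* defined for \[Alpha] > 1 *)
  RandomVariate[ExtremeValueDistribution[0, 1], 5]   (* five pseudorandom extreme-value numbers *)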
PDF[dist,x]  probability density function at x
CDF[dist,x]  cumulative distribution function at x
InverseCDF[dist,q]  the value of x such that CDF[dist,x] equals q
Quantile[dist,q]  q th quantile
Mean[dist]  mean
Variance[dist]  variance
StandardDeviation[dist]  standard deviation
Skewness[dist]  coefficient of skewness
Kurtosis[dist]  coefficient of kurtosis
CharacteristicFunction[dist,t]  characteristic function
Expectation[f[x],x\[Distributed]dist]  expectation of f[x] for x distributed according to dist
Median[dist]  median
Quartiles[dist]  list of the 1/4, 1/2, 3/4 quantiles for dist
InterquartileRange[dist]  difference between the first and third quartiles
QuartileDeviation[dist]  half the interquartile range
QuartileSkewness[dist]  quartile-based skewness measure
RandomVariate[dist]  pseudorandom number with specified distribution
RandomVariate[dist,dims]  pseudorandom array with dimensionality dims, and elements from the specified distribution

Some functions of statistical distributions.

The preceding table gives a list of some of the more common functions available for distributions in the Wolfram Language. The inverse CDF InverseCDF[dist,q] gives the value of x at which CDF[dist,x] reaches q. The median is given by InverseCDF[dist,1/2]. Quartiles, deciles, and percentiles are particular values of the inverse CDF. Quartile skewness is equivalent to (q1-2 q2+q3)/(q3-q1), where q1, q2, and q3 are the first, second, and third quartiles, respectively. Inverse CDFs are used in constructing confidence intervals for statistical parameters. InverseCDF[dist,q] and Quantile[dist,q] are equivalent for continuous distributions. RandomVariate[dist] gives pseudorandom numbers from the specified distribution.

Here is the cumulative distribution function evaluated at 10: This is the cumulative distribution function. It is given in terms of the built-in function GammaRegularized: Here is a plot of the cumulative distribution function: This is a pseudorandom array with elements distributed according to the gamma distribution:

Partitioning Data into Clusters

Cluster analysis is an unsupervised learning technique used for classification of data. Data elements are partitioned into groups called clusters that represent proximate collections of data elements based on a distance or dissimilarity function. Identical element pairs have zero distance or dissimilarity, and all others have positive distance or dissimilarity.

FindClusters[data]  partition data into lists of similar elements
FindClusters[data,n]  partition data into at most n lists of similar elements

General clustering function.

The data argument of FindClusters can be a list of data elements, associations, or rules indexing elements and labels.

{e1,e2,…}  data specified as a list of data elements ei
{e1->v1,e2->v2,…}  data specified as a list of rules between data elements ei and labels vi
{e1,e2,…}->{v1,v2,…}  data specified as a rule mapping data elements ei to labels vi
<|key1->e1,key2->e2,…|>  data specified as an association of labels keyi and data elements ei

FindClusters works for a variety of data types, including numerical, textual, and image, as well as Boolean vectors, dates and times. All data elements ei must have the same dimensions.

Here is a list of numbers: The rule-based data syntax allows for clustering data elements and returning labels for those elements. Here two-dimensional points are clustered and labeled with their positions in the data list: The rule-based data syntax can also be used to cluster data based on parts of each data entry.
For instance, you might want to cluster data in a data table while ignoring particular columns in the table. Here is a list of data entries: This clusters the data while ignoring the first two elements in each data entry:

In principle, it is possible to cluster points given in an arbitrary number of dimensions. However, it is difficult at best to visualize the clusters above two or three dimensions. To compare the optional methods in this documentation, an easily visualizable set of two-dimensional data will be used. The following commands define a set of 300 two-dimensional data points chosen to group into four somewhat nebulous clusters:

This clusters the data based on the proximity of points: Here is a plot of the clusters: With the default settings, FindClusters has found the four clusters of points. You can also direct FindClusters to find a specific number of clusters. This shows the effect of choosing 3 clusters: This shows the effect of choosing 5 clusters:

In principle, clustering techniques can be applied to any set of data. All that is needed is a measure of how far apart each element in the set is from other elements, that is, a function giving the distance between elements. FindClusters[{e1,e2,…},DistanceFunction->f] treats pairs of elements as being less similar when their distances f[ei,ej] are larger. The function f can be any appropriate distance or dissimilarity function. A dissimilarity function f satisfies f[ei,ei]=0, f[ei,ej]>=0, and f[ei,ej]=f[ej,ei].

If the ei are vectors of numbers, FindClusters by default uses a squared Euclidean distance. If the ei are lists of Boolean True and False (or 0 and 1) elements, FindClusters by default uses a dissimilarity based on the normalized fraction of elements that disagree. If the ei are strings, FindClusters by default uses a distance function based on the number of point changes needed to get from one string to another.

Distance functions for numerical data.

This shows the clusters in datapairs found using a Manhattan distance:

Dissimilarity functions for Boolean data.

Here is some Boolean data: These are the clusters found using the default dissimilarity for Boolean data:

Dissimilarity functions for string data.

The edit distance is determined by counting the number of deletions, insertions, and substitutions required to transform one string into another while preserving the ordering of characters. In contrast, the Damerau–Levenshtein distance counts the number of deletions, insertions, substitutions, and transpositions, while the Hamming distance counts only the number of substitutions. Here is some string data: This clusters the string data using the edit distance:

The Method option can be used to specify different methods of clustering.

Explicit settings for the Method option.

By default, FindClusters tries different methods and selects the best clustering. The methods "KMeans" and "KMedoids" determine how to cluster the data for a particular number of clusters k. The methods "DBSCAN", "JarvisPatrick", "MeanShift", "SpanningTree", "NeighborhoodContraction", and "GaussianMixture" determine how to cluster the data without assuming any particular number of clusters. The methods "Agglomerate", "Spectral" and "SpanningTree" can be used in both cases.

This shows the clusters in datapairs found using the "KMeans" method: Additional Method suboptions are available to allow for more control over the clustering. Available suboptions depend on the Method chosen.
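Before looking at the individual suboptions, here is a minimal sketch of the calls described above. The points are generated randomly here and simply stand in for the datapairs data used in the original examples.

  SeedRandom[1234];
  pts = Join[RandomReal[{0, 1}, {50, 2}], RandomReal[{2, 3}, {50, 2}]];  (* two loose groups of 2D points *)
  FindClusters[pts]                                         (* default method and cluster count *)
  FindClusters[pts, 3]                                      (* ask for at most 3 clusters *)
  FindClusters[pts, DistanceFunction -> ManhattanDistance]  (* non-default distance function *)
  FindClusters[pts, Method -> "KMeans"]                     (* explicit clustering method *)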
"NeighborhoodRadius" specifies the average radius of a neighborhood of a point "NeighborsNumber" specifies the average number of points in a neighborhood "InitialCentroids" specifies the initial centroids/medoids "SharedNeighborsNumber" specifies the minimum number of shared neighbors "MaxEdgeLength" specifies the pruning length threshold ClusterDissimilarityFunction specifies the intercluster dissimilarity Suboption for all methods. The suboption "NeighborhoodRadius" can be used in methods "DBSCAN", "MeanShift", "JarvisPatrick", "NeighborhoodContraction", and "Spectral". The suboptions "NeighborsNumber" and "SharedNeighborsNumber" can be used in methods "DBSCAN" and "JarvisPatrick", respectively. The suboption "MaxEdgeLength" can be used in the method "SpanningTree". The suboption "InitialCentroids" can be used in methods "KMeans" and "KMedoids". The suboption ClusterDissimilarityFunction can be used in the method "Agglomerate". The "NeighborhoodRadius" suboption can be used to control the average radius of the neighborhood of a generic point. This shows different clusterings of datapairs found using the "NeighborhoodContraction" method by varying the "NeighborhoodRadius": The "NeighborsNumber" suboption can be used to control the number of neighbors in the neighborhood of a generic point. This shows different clusterings of datapairs found using the "DBSCAN" method by varying the "NeighborsNumber": The "InitialCentroids" suboption can be used to change the initial configuration in the "KMeans" and "KMedoids" methods. Bad initial configurations may result in bad clusterings. This shows different clusterings of datapairs found using the "KMeans" method by varying the "InitialCentroids": With Method->{"Agglomerate",ClusterDissimilarityFunction->f}, the specified linkage function f is used for agglomerative clustering. "Single" smallest intercluster dissimilarity "Average" average intercluster dissimilarity "Complete" largest intercluster dissimilarity "WeightedAverage" weighted average intercluster dissimilarity "Centroid" distance from cluster centroids "Median" distance from cluster medians "Ward" Ward's minimum variance dissimilarity f a pure function Linkage methods determine this intercluster dissimilarity, or fusion level, given the dissimilarities between member elements. With ClusterDissimilarityFunction->f, f is a pure function that defines the linkage algorithm. Distances or dissimilarities between clusters are determined recursively using information about the distances or dissimilarities between unmerged clusters to determine the distances or dissimilarities for the newly merged cluster. The function f defines a distance from a cluster k to the new cluster formed by fusing clusters i and j. The arguments supplied to f are dik, djk, dij, ni, nj, and nk, where d is the distance between clusters and n is the number of elements in a cluster. The CriterionFunction option can be used to select both the method to use and the best number of clusters. "StandardDeviation" root-mean-square standard deviation "RSquared" R-squared "Dunn" Dunn index "CalinskiHarabasz" Calinski–Harabasz index "DaviesBouldin" Davies–Bouldin index Automatic internal index These are the clusters found using the default CriterionFunction with automatically selected number of clusters: These are the clusters found using the "CalinskiHarabasz" index: Nearest is used to find elements in a list that are closest to a given data point. 
Nearest[{elem1,elem2,…},x] give the list of elemi to which x is nearest Nearest[{elem1->v1,elem2->v2,…},x] give the vi corresponding to the elemi to which x is nearest Nearest[{elem1,elem2,…}->{v1,v2,…},x] give the same result Nearest[{elem1,elem2,…}->Automatic,x] take the vi to be the integers 1, 2, 3, … Nearest[data,x,n] give the n nearest elements to x Nearest[data,x,{n,r}] give up to the n nearest elements to x within a radius r Nearest[data] generate a NearestFunction[…] which can be applied repeatedly to different x Nearest works with numeric lists, tensors, or a list of strings. This finds the elements nearest to 4.5: This finds 3 elements nearest to 4.5: This finds all elements nearest to 4.5 within a radius of 2: This finds the points nearest to {1,2} in 2D: This finds the nearest string to "cat": The rule-based data syntax lets you use nearest elements to return their labels. Here two-dimensional points are labeled: This labels the elements using successive integers: If Nearest is to be applied repeatedly to the same numerical data, you can get significant performance gains by first generating a NearestFunction. This finds points in the set that are closest to the 10 target points: For numerical data, by default Nearest uses the EuclideanDistance. For strings, EditDistance is used. Manipulating Numerical Data When you have numerical data, it is often convenient to find a simple formula that approximates it. For example, you can try to "fit" a line or curve through the points in your data. Fit[{y1,y2,…},{f1 , f2,…},x] fit the values yn to a linear combination of functions fi Fit[{{x1,y1},{x2,y2},…},{f1 , f2,…},x] fit the points (xn,yn) to a linear combination of the fi Fitting curves to linear combinations of functions. This finds a fit of the form : This finds a fit to the new data, of the form : FindFit[data,form,{p1,p2,…},x] find a fit to form with parameters pi Fitting data to general forms. This finds the best parameters for a linear fit: This does a nonlinear fit: One common way of picking out "signals" in numerical data is to find the Fourier transform, or frequency spectrum, of the data. Here is a simple square pulse: This takes the Fourier transform of the pulse: Note that the Fourier function in the Wolfram Language is defined with the sign convention typically used in the physical sciences—opposite to the one often used in electrical engineering. "Discrete Fourier Transforms" gives more details. There are many situations where one wants to find a formula that best fits a given set of data. One way to do this in the Wolfram Language is to use Fit. Fit[{f1,f2,…},{fun1,fun2,…},x] find a linear combination of the funi that best fits the values fi Here is a table of the first 20 primes: Here is a plot of this "data": This gives a linear fit to the list of primes. The result is the best linear combination of the functions 1 and x: Here is a plot of the fit: Here is the fit superimposed on the original data: This gives a quadratic fit to the data: Here is a plot of the quadratic fit: This shows the fit superimposed on the original data. 
The quadratic fit is better than the linear one:

{f1,f2,…}  data points obtained when a single coordinate takes on values 1, 2, …
{{x1,f1},{x2,f2},…}  data points obtained when a single coordinate takes on values x1, x2, …
{{x1,y1,…,f1},{x2,y2,…,f2},…}  data points obtained with values of a sequence of coordinates
Fit[data,{fun1,fun2,…},{x,y,…}]  fit to a function of several variables

This produces a fit to a function of two variables:

Fit takes a list of functions, and uses a definite and efficient procedure to find what linear combination of these functions gives the best least-squares fit to your data. Sometimes, however, you may want to find a nonlinear fit that does not just consist of a linear combination of specified functions. You can do this using FindFit, which takes a function of any form, and then searches for values of parameters that yield the best fit to your data.

FindFit[data,form,{par1,par2,…},x]  search for values of the pari that make form best fit data
FindFit[data,form,pars,{x,y,…}]  fit multivariate data

Searching for general fits to data.

This fits the list of primes to a simple linear combination of terms: The result is the same as from Fit: This fits to a nonlinear form, which cannot be handled by Fit: This uses the ∞-norm, which minimizes the maximum distance between the fit and the data. The result is slightly different from least-squares:

FindFit works by searching for values of parameters that yield the best fit. Sometimes you may have to tell it where to start in doing this search. You can do this by giving parameters in the form {par,start}. FindFit also has various options that you can set to control how it does its search.

FindFit[data,{form,cons},pars,vars]  finds a best fit subject to the parameter constraints cons

Searching for general fits to data.

This gives a best fit subject to constraints on the parameters:

Statistical Model Analysis

When fitting models to data, it is often useful to analyze how well the model fits the data and how well the fitting meets the assumptions of the model. For a number of common statistical models, this is accomplished in the Wolfram System by way of fitting functions that construct FittedModel objects.

Object for fitted model information.

FittedModel objects can be evaluated at a point or queried for results and diagnostic information. Diagnostics vary somewhat across model types. Available model fitting functions fit linear, generalized linear, and nonlinear models.

This fits a linear model assuming x values are 1, 2, …: Here is the functional form of the fitted model: This evaluates the model at a particular value of x: Here is a shortened list of available results for the linear fitted model:

The major difference between model fitting functions such as LinearModelFit and functions such as Fit and FindFit is the ability to easily obtain diagnostic information from the FittedModel objects. The results are accessible without refitting the model.

This gives the residuals for the fitting: Here multiple results are obtained at once in a list:

Fitting options relevant to property computations can be passed to FittedModel objects to override defaults. This gives default 95% confidence intervals: Here 90% intervals are obtained:

Typical data for these model-fitting functions takes the same form as data in other fitting functions such as Fit and FindFit.
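Here is a minimal sketch of Fit and FindFit along the lines of the prime-fitting examples above; the nonlinear model form, the noise, and the starting values below are made up for illustration.

  fp = Table[Prime[n], {n, 20}];     (* the first 20 primes *)
  Fit[fp, {1, x}, x]                 (* best linear combination of 1 and x *)
  Fit[fp, {1, x, x^2}, x]            (* quadratic fit *)

  (* a hypothetical nonlinear model fitted with FindFit, with explicit starting values *)
  data = Table[{t, 2.3 Exp[-0.7 t] + RandomReal[{-0.05, 0.05}]}, {t, 0, 5, 0.25}];
  FindFit[data, a Exp[b t], {{a, 2}, {b, -1}}, t]
  FindFit[data, {a Exp[b t], b < 0}, {{a, 2}, {b, -1}}, t]   (* the same fit with a parameter constraint *)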
{y1,y2,…} data points with a single predictor variable taking values 1, 2, … {{x11,x12,…,y1},{x21,x22,…,y2},…} data points with explicit coordinates Linear Models Linear models with assumed independent normally distributed errors are among the most common models for data. Models of this type can be fitted using the LinearModelFit function. LinearModelFit[{y1,y2,…},{f1,f2,…},x] obtain a linear model with basis functions fi and a single predictor variable x LinearModelFit[{{x11,x12,…,y1},{x21,x22,…,y2}},{f1,f2,…},{x1,x2,…}] obtain a linear model of multiple predictor variables xi LinearModelFit[{m,v}] obtain a linear model based on a design matrix m and a response vector v This fits a linear model to the first 20 primes: Options for model specification and for model analysis are available. The Weights option specifies weight values for weighted linear regression. The NominalVariables option specifies which predictor variables should be treated as nominal or categorical. With NominalVariables->All, the model is an analysis of variance (ANOVA) model. With NominalVariables->{x1,…,xi-1,xi+1,…,xn} the model is an analysis of covariance (ANCOVA) model with all but the th predictor treated as nominal. Nominal variables are represented by a collection of binary variables indicating equality and inequality to the observed nominal categorical values for the variable. ConfidenceLevel, VarianceEstimatorFunction, and WorkingPrecision are relevant to the computation of results after the initial fitting. These options can be set within LinearModelFit to specify the default settings for results obtained from the FittedModel object. These options can also be set within an already constructed FittedModel object to override the option values originally given to LinearModelFit. Here are the default and mean-squared error variance estimates: IncludeConstantBasis, LinearOffsetFunction, NominalVariables, and Weights are relevant only to the fitting. Setting these options within an already constructed FittedModel object will have no further impact on the result. A major feature of the model-fitting framework is the ability to obtain results after the fitting. The full list of available results can be obtained using "Properties". This is the number of properties available for linear models: The properties include basic information about the data, fitted model, and numerous results and diagnostics. "BasisFunctions" list of basis functions "BestFit" fitted function "BestFitParameters" parameter estimates "Data" the input data or design matrix and response vector "DesignMatrix" design matrix for the model "Function" best-fit pure function "Response" response values in the input data Properties related to data and the fitted function. 
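Here is a minimal sketch of constructing a linear FittedModel and extracting a few of the properties listed above, again using the first 20 primes as data.

  lm = LinearModelFit[Table[Prime[n], {n, 20}], x, x];   (* x values are taken to be 1, 2, ... *)
  Normal[lm]                        (* the fitted function, the same as lm["BestFit"] *)
  lm["BestFitParameters"]
  lm[15]                            (* evaluate the model at x = 15 *)
  lm["FitResiduals"]
  lm["ParameterConfidenceIntervals", ConfidenceLevel -> 0.9]   (* override the default 95% level *)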
"FitResiduals" difference between actual and predicted responses "StandardizedResiduals" fit residuals divided by the standard error for each residual "StudentizedResiduals" fit residuals divided by single deletion error estimates "ANOVATable" analysis of variance table "ANOVATableDegreesOfFreedom" degrees of freedom from the ANOVA table "ANOVATableEntries" unformatted array of values from the table "ANOVATableFStatistics" F‐statistics from the table "ANOVATableMeanSquares" mean square errors from the table "ANOVATablePValues" ‐values from the table "ANOVATableSumsOfSquares" sums of squares from the table "CoefficientOfVariation" response mean divided by the estimated standard deviation "EstimatedVariance" estimate of the error variance "PartialSumOfSquares" changes in model sum of squares as nonconstant basis functions are removed "SequentialSumOfSquares" the model sum of squares partitioned componentwise Properties related to the sum of squared errors. "ANOVATable" gives a formatted analysis of variance table for the model. "ANOVATableEntries" gives the numeric entries in the table and the remaining ANOVATable properties give the elements of columns in the table so individual parts of the table can easily be used in further computations. This gives a formatted ANOVA table for the fitted model: Here are the elements of the MS column of the table: Properties and diagnostics for parameter estimates. "ParameterTable" and "ParameterConfidenceIntervalTable" contain information about the individual parameter estimates, tests of parameter significance, and confidence intervals. This fits a model using both predictor variables: These are the formatted parameter and parameter confidence interval tables: Here 99% confidence intervals are used in the table: "EigenstructureTable" gives the eigenvalues, condition indices, and variance partitions for the nonconstant basis functions. The Index column gives the square root of the ratios of the eigenvalues to the largest eigenvalue. The column for each basis function gives the proportion of variation in that basis function explained by the associated eigenvector. "EigenstructureTablePartitions" gives the values in the variance partitioning for all basis functions in the table. Properties related to influence measures. This plots the Cook distances for the bivariate model: "MeanPredictionBands" confidence bands for mean predictions "MeanPredictionConfidenceIntervals" confidence intervals for the mean predictions "MeanPredictionConfidenceIntervalTable" table of confidence intervals for the mean predictions "MeanPredictionConfidenceIntervalTableEntries" unformatted array of values from the table "MeanPredictionErrors" standard errors for mean predictions "PredictedResponse" fitted values for the data "SinglePredictionBands" confidence bands based on single observations "SinglePredictionConfidenceIntervals" confidence intervals for the predicted response of single observations "SinglePredictionConfidenceIntervalTable" table of confidence intervals for the predicted response of single observations "SinglePredictionConfidenceIntervalTableEntries" unformatted array of values from the table "SinglePredictionErrors" standard errors for the predicted response of single observations Properties of predicted values. Tabular results for confidence intervals are given by "MeanPredictionConfidenceIntervalTable" and "SinglePredictionConfidenceIntervalTable". 
These include the observed and predicted responses, standard error estimates, and confidence intervals for each point. Mean prediction confidence intervals are often referred to simply as confidence intervals and single prediction confidence intervals are often referred to as prediction intervals. "MeanPredictionBands" and "SinglePredictionBands" give formulas for mean and single prediction confidence intervals as functions of the predictor variables.

Here is the mean prediction table: This gives the 90% mean prediction intervals:

Goodness-of-fit measures.

Goodness-of-fit measures are used to assess how well a model fits or to compare models. The coefficient of determination "RSquared" is the ratio of the model sum of squares to the total sum of squares. "AdjustedRSquared" penalizes for the number of parameters in the model and is given by 1-(1-R^2)(n-1)/(n-p), where n is the number of data points and p is the number of fitted parameters.

Generalized Linear Models

GeneralizedLinearModelFit[{y1,y2,…},{f1,f2,…},x]  obtain a generalized linear model with basis functions fi and a single predictor variable x
GeneralizedLinearModelFit[{{x11,x12,…,y1},{x21,x22,…,y2}},{f1,f2,…},{x1,x2,…}]  obtain a generalized linear model of multiple predictor variables xi
GeneralizedLinearModelFit[{m,v}]  obtain a generalized linear model based on a design matrix m and response vector v

Generalized linear model fitting.

This fits a linear regression model: This fits a canonical gamma regression model to the same data: Here are the functional forms of the models:

LogitModelFit[data,funs,vars]  obtain a logit model with basis functions funs and predictor variables vars
LogitModelFit[{m,v}]  obtain a logit model based on a design matrix m and response vector v
ProbitModelFit[data,funs,vars]  obtain a probit model fit to data
ProbitModelFit[{m,v}]  obtain a probit model fit to a design matrix m and response vector v

Logit and probit model fitting.

Parameter estimates are obtained via iteratively reweighted least squares with weights obtained from the variance function of the assumed distribution. Options for GeneralizedLinearModelFit include options for iteration fitting such as PrecisionGoal, options for model specification such as LinkFunction, and options for further analysis such as ConfidenceLevel.

The options for LogitModelFit and ProbitModelFit are the same as for GeneralizedLinearModelFit except that ExponentialFamily and LinkFunction are defined by the logit or probit model and so are not options to LogitModelFit and ProbitModelFit.

ExponentialFamily, IncludeConstantBasis, LinearOffsetFunction, LinkFunction, NominalVariables, and Weights all define some aspect of the model structure and optimization criterion and can only be set within GeneralizedLinearModelFit. All other options can be set either within GeneralizedLinearModelFit or passed to the FittedModel object when obtaining results and diagnostics. Options set in evaluations of FittedModel objects take precedence over settings given to GeneralizedLinearModelFit at the time of the fitting.

This gives 95% and 99% confidence intervals for the parameters in the gamma model:

"BasisFunctions"  list of basis functions
"BestFit"  fitted function
"BestFitParameters"  parameter estimates
"Data"  the input data or design matrix and response vector
"DesignMatrix"  design matrix for the model
"Function"  best fit pure function
"LinearPredictor"  fitted linear combination
"Response"  response values in the input data

Properties related to data and the fitted function.
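Here is a minimal sketch of a generalized linear fit; the data values and the choice of a gamma family below are made up, loosely following the gamma regression example described above.

  glmdata = {{1, 2.3}, {2, 3.1}, {3, 4.9}, {4, 7.3}, {5, 10.8}, {6, 16.0}};
  glm = GeneralizedLinearModelFit[glmdata, x, x, ExponentialFamily -> "Gamma"];
  Normal[glm]                                    (* functional form of the fitted model *)
  glm["ParameterConfidenceIntervals"]            (* default 95% intervals *)
  glm["ParameterConfidenceIntervals", ConfidenceLevel -> 0.99]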
"Deviances" deviances "DevianceTable" deviance table "DevianceTableDegreesOfFreedom" degrees of freedom differences from the table "DevianceTableDeviances" deviance differences from the table "DevianceTableEntries" unformatted array of values from the table "DevianceTableResidualDegreesOfFreedom" residual degrees of freedom from the table "DevianceTableResidualDeviances" residual deviances from the table "EstimatedDispersion" estimated dispersion parameter "NullDeviance" deviance for the null model "NullDegreesOfFreedom" degrees of freedom for the null model "ResidualDeviance" difference between the deviance for the fitted model and the deviance for the full model "ResidualDegreesOfFreedom" difference between the model degrees of freedom and null degrees of freedom Properties related to dispersion and model deviances. Here is some data with two predictor variables: This fits the data to an inverse Gaussian model: Here is the deviance table for the model: As with sums of squares, deviances are additive. The Deviance column of the table gives the increase in the model deviance when the given basis function is added. The Residual Deviance column gives the difference between the model deviance and the deviance for the submodel containing all previous terms in the table. For large samples, the increase in deviance is approximately distributed with degrees of freedom equal to that for the basis function in the table. "NullDeviance" is the deviance for the null model, the constant model equal to the mean of all observed responses for models including a constant, or if a constant term is not included. As with "ANOVATable", a number of properties are included to extract the columns or unformatted array of entries from "DevianceTable". "AnscombeResiduals" Anscombe residuals "DevianceResiduals" deviance residuals "FitResiduals" difference between actual and predicted responses "LikelihoodResiduals" likelihood residuals "PearsonResiduals" Pearson residuals "StandardizedDevianceResiduals" standardized deviance residuals "StandardizedPearsonResiduals" standardized Pearson residuals "WorkingResiduals" working residuals "FitResiduals" is the list of residuals, differences between the observed and predicted responses. Given the distributional assumptions, the magnitude of the residuals is expected to change as a function of the predicted response value. Various types of scaled residuals are employed in the analysis of generalized linear models. This plots the residuals and Anscombe residuals for the inverse Gaussian model: Properties and diagnostics for parameter estimates. "CorrelationMatrix" is the associated correlation matrix for the parameter estimates. "ParameterErrors" is equivalent to the square root of the diagonal elements of the covariance matrix. "ParameterTable" and "ParameterConfidenceIntervalTable" contain information about the individual parameter estimates, tests of parameter significance, and confidence intervals. The test statistics for generalized linear models asymptotically follow normal distributions. "CookDistances" list of Cook distances "HatDiagonal" diagonal elements of the hat matrix Properties related to influence measures. "CookDistances" and "HatDiagonal" extend the leverage measures from linear regression to generalized linear models. The hat matrix from which the diagonal elements are extracted is defined using the final weights of the iterative fitting. "PredictedResponse" fitted values for the data Properties of predicted values. Goodness-of-fit measures. 
Nonlinear Models

A nonlinear least-squares model is an extension of the linear model where the model need not be a linear combination of basis functions. The errors are still assumed to be independent and normally distributed. Models of this type can be fitted using the NonlinearModelFit function.

NonlinearModelFit[{y1,y2,…},form,{β1,…},x]  obtain a nonlinear model of the function form with parameters βi and a single predictor variable x
NonlinearModelFit[{{x11,…,y1},{x21,…,y2}},form,{β1,…},{x1,…}]  obtain a nonlinear model as a function of multiple predictor variables xi
NonlinearModelFit[data,{form,cons},{β1,…},{x1,…}]  obtain a nonlinear model subject to the constraints cons

This fits a nonlinear model to a sequence of square roots:

Options for model fitting and for model analysis are available. General numeric options such as AccuracyGoal, Method, and WorkingPrecision are the same as for FindFit. The Weights option specifies weight values for weighted nonlinear regression. The optimal fit is for a weighted sum of squared errors.

All other options can be relevant to computation of results after the initial fitting. They can be set within NonlinearModelFit for use in the fitting and to specify the default settings for results obtained from the FittedModel object. These options can also be set within an already constructed FittedModel object to override the option values originally given to NonlinearModelFit.

"BestFit"  fitted function
"BestFitParameters"  parameter estimates
"Data"  the input data
"Function"  best fit pure function
"Response"  response values in the input data

Properties related to data and the fitted function.

Basic properties of the data and fitted function for nonlinear models behave like the same properties for linear and generalized linear models, with the exception that "BestFitParameters" returns a list of rules, as is done for the result of FindFit.

This gives the fitted function and rules for the parameter estimates:

Many diagnostics for nonlinear models extend or generalize concepts from linear regression. These extensions often rely on linear approximations or large sample approximations.

"FitResiduals"  difference between actual and predicted responses
"StandardizedResiduals"  fit residuals divided by the standard error for each residual
"StudentizedResiduals"  fit residuals divided by single deletion error estimates

As in linear regression, "FitResiduals" gives the differences between the observed and fitted values, and "StandardizedResiduals" and "StudentizedResiduals" are scaled forms of these differences.

"ANOVATable"  analysis of variance table
"ANOVATableDegreesOfFreedom"  degrees of freedom from the ANOVA table
"ANOVATableEntries"  unformatted array of values from the table
"ANOVATableMeanSquares"  mean square errors from the table
"ANOVATableSumsOfSquares"  sums of squares from the table
"EstimatedVariance"  estimate of the error variance

Properties related to the sum of squared errors.

"ANOVATable" provides a decomposition of the variation in the data attributable to the fitted function and to the errors or residuals. This gives the ANOVA table for the nonlinear model: The uncorrected total sums of squares gives the sum of squared responses, while the corrected total gives the sum of squared differences between the responses and their mean value.

Properties and diagnostics for parameter estimates.
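Here is a minimal sketch following the square-root example described above; the noise added to the data and the model form Sqrt[a x] are chosen here for illustration.

  sqdata = Table[Sqrt[n] + RandomReal[{-0.1, 0.1}], {n, 20}];
  nlm = NonlinearModelFit[sqdata, Sqrt[a x], a, x];   (* x values are taken to be 1, 2, ... *)
  Normal[nlm]
  nlm["BestFitParameters"]            (* a list of rules, as with FindFit *)
  nlm["ANOVATable"]
  nlm["ParameterConfidenceIntervalTable"]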
"ParameterTable" and "ParameterConfidenceIntervalTable" contain information about the individual parameter estimates, tests of parameter significance, and confidence intervals obtained using the error estimates. "CurvatureConfidenceRegion" confidence region for curvature diagnostics "FitCurvatureTable" table of curvature diagnostics "FitCurvatureTableEntries" unformatted array of values from the table "MaxIntrinsicCurvature" measure of maximum intrinsic curvature "MaxParameterEffectsCurvature" measure of maximum parameter effects curvature The first-order approximation used for many diagnostics is equivalent to the model being linear in the parameters. If the parameter space near the parameter estimates is sufficiently flat, the linear approximations and any results that rely on first-order approximations can be deemed reasonable. Curvature diagnostics are used to assess whether the approximate linearity is reasonable. "FitCurvatureTable" is a table of curvature diagnostics. "MaxIntrinsicCurvature" and "MaxParameterEffectsCurvature" are scaled measures of the normal and tangential curvatures of the parameter spaces at the best-fit parameter values. "CurvatureConfidenceRegion" is a scaled measure of the radius of curvature of the parameter space at the best-fit parameter values. If the normal and tangential curvatures are small relative to the value of the "CurvatureConfidenceRegion", the linear approximation is considered reasonable. Some rules of thumb suggest comparing the values directly, while others suggest comparing with half the "CurvatureConfidenceRegion". Here is the curvature table for the nonlinear model: "HatDiagonal" diagonal elements of the hat matrix "SingleDeletionVariances" list of variance estimates with the th data point omitted Properties related to influence measures. "MeanPredictionBands" confidence bands for mean predictions "MeanPredictionConfidenceIntervals" confidence intervals for the mean predictions "MeanPredictionConfidenceIntervalTable" table of confidence intervals for the mean predictions "MeanPredictionConfidenceIntervalTableEntries" unformatted array of values from the table "MeanPredictionErrors" standard errors for mean predictions "PredictedResponse" fitted values for the data "SinglePredictionBands" confidence bands based on single observations "SinglePredictionConfidenceIntervals" confidence intervals for the predicted response of single observations "SinglePredictionConfidenceIntervalTable" table of confidence intervals for the predicted response of single observations "SinglePredictionConfidenceIntervalTableEntries" unformatted array of values from the table "SinglePredictionErrors" standard errors for the predicted response of single observations Properties of predicted values. Tabular results for confidence intervals are given by "MeanPredictionConfidenceIntervalTable" and "SinglePredictionConfidenceIntervalTable". These results are analogous to those for linear models obtained via LinearModelFit, again with first-order approximations used for the design matrix. "MeanPredictionBands" and "SinglePredictionBands" give functions of the predictor variables. Here the fitted function and mean prediction bands are obtained: This plots the fitted curve and confidence bands: Goodness-of-fit measures. Approximate Functions and Interpolation In many kinds of numerical computations, it is convenient to introduce approximate functions. Approximate functions can be thought of as generalizations of ordinary approximate real numbers. 
While an approximate real number gives the value to a certain precision of a single numerical quantity, an approximate function gives the value to a certain precision of a quantity which depends on one or more parameters. The Wolfram Language uses approximate functions, for example, to represent numerical solutions to differential equations obtained with NDSolve, as discussed in "Numerical Differential Equations". Approximate functions in the Wolfram Language are represented by InterpolatingFunction objects. These objects work like the pure functions discussed in "Pure Functions". The basic idea is that when given a particular argument, an InterpolatingFunction object finds the approximate function value that corresponds to that argument. The InterpolatingFunction object contains a representation of the approximate function based on interpolation. Typically it contains values and possibly derivatives at a sequence of points. It effectively assumes that the function varies smoothly between these points. As a result, when you ask for the value of the function with a particular argument, the InterpolatingFunction object can interpolate to find an approximation to the value you want. Interpolation[{f1,f2,…}] construct an approximate function with values fi at successive integers Interpolation[{{x1,f1},{x2,f2},…}] construct an approximate function with values fi at points xi Constructing approximate functions. Here is a table of the values of the sine function: This constructs an approximate function which represents these values: The approximate function reproduces each of the values in the original table: It also allows you to get approximate values at other points: In this case the interpolation is a fairly good approximation to the true sine function: You can work with approximate functions much as you would with any other Wolfram Language functions. You can plot approximate functions, or perform numerical operations such as integration or root finding. If you give a non‐numerical argument, the approximate function is left in symbolic form: Here is a numerical integral of the approximate function: Here is the same numerical integral for the true sine function: A plot of the approximate function is essentially indistinguishable from the true sine function: If you differentiate an approximate function, the Wolfram Language will return another approximate function that represents the derivative. This finds the derivative of the approximate sine function, and evaluates it at : The result is close to the exact one: InterpolatingFunction objects contain all the information the Wolfram Language needs about approximate functions. In standard Wolfram Language output format, however, only the part that gives the domain of the InterpolatingFunction object is printed explicitly. The lists of actual parameters used in the InterpolatingFunction object are shown only in iconic form. In standard output format, the only parts of an InterpolatingFunction object printed explicitly are its domain and output type: If you ask for a value outside of the domain, the Wolfram Language prints a warning, then uses extrapolation to find a result: The more information you give about the function you are trying to approximate, the better the approximation the Wolfram Language constructs can be. You can, for example, specify not only values of the function at a sequence of points, but also derivatives. 
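Here is a minimal sketch of both forms, using the sine function as in the discussion above; the sample points are chosen arbitrarily.

  sindata = Table[{x, Sin[x]}, {x, 0., 2 Pi, Pi/6}];
  sinInterp = Interpolation[sindata];       (* an InterpolatingFunction object *)
  sinInterp[1.25]                           (* interpolated value between the sample points *)
  sinInterp'[1.25]                          (* derivative of the approximate function *)
  NIntegrate[sinInterp[x], {x, 0, Pi}]      (* numerical operations work as for ordinary functions *)

  (* values and first derivatives specified together *)
  withDerivs = Interpolation[Table[{{x}, Sin[x], Cos[x]}, {x, 0., 2 Pi, Pi/2}]];
  withDerivs'[1.25]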
Interpolation[{{{x1},f1,df1,ddf1,…},…}] construct an approximate function with specified derivatives at points xi Constructing approximate functions with specified derivatives. This interpolates through the values of the sine function and its first derivative: This finds a better approximation to the derivative than the previous interpolation: Interpolation works by fitting polynomial curves between the points you specify. You can use the option InterpolationOrder to specify the degree of these polynomial curves. The default setting is InterpolationOrder->3, yielding cubic curves. This makes a table of values of the cosine function: This creates an approximate function using linear interpolation between the values in the table: The approximate function consists of a collection of straight‐line segments: With the default setting InterpolationOrder->3, cubic curves are used, and the function looks smooth: Increasing the setting for InterpolationOrder typically leads to smoother approximate functions. However, if you increase the setting too much, spurious wiggles may develop. ListInterpolation[{{f11,f12,…},{f21,…},…}] construct an approximate function from a two‐dimensional grid of values at integer points ListInterpolation[list,{{xmin,xmax},{ymin,ymax}}] assume the values are from an evenly spaced grid with the specified domain ListInterpolation[list,{{x1,x2,…},{y1,y2,…}}] assume the values are from a grid with the specified grid lines Interpolating multidimensional arrays of data. This interpolates an array of values from integer grid points: Here is the value at a particular position: Here is another array of values: To interpolate this array you explicitly have to tell the Wolfram Language the domain it covers: ListInterpolation works for arrays of any dimension, and in each case it produces an InterpolatingFunction object which takes the appropriate number of arguments. This interpolates a three‐dimensional array: The Wolfram Language can handle not only purely numerical approximate functions, but also ones which involve symbolic parameters. This shows how the interpolated value at 2.2 depends on the parameters: With the default setting for InterpolationOrder used, the value at this point no longer depends on a: In working with approximate functions, you can quite often end up with complicated combinations of InterpolatingFunction objects. You can always tell the Wolfram Language to produce a single InterpolatingFunction object valid over a particular domain by using FunctionInterpolation. FunctionInterpolation[expr,{x,xmin,xmax}] construct an approximate function by evaluating expr with x ranging from xmin to xmax FunctionInterpolation[expr,{x,xmin,xmax},{y,ymin,ymax},…] construct a higher‐dimensional approximate function Constructing approximate functions by evaluating expressions. Discrete Fourier Transforms A common operation in analyzing various kinds of data is to find the discrete Fourier transform (or spectrum) of a list of values. The idea is typically to pick out components of the data with particular frequencies or ranges of frequencies. Fourier[{u1,u2,…,un}] discrete Fourier transform InverseFourier[{v1,v2,…,vn}] inverse discrete Fourier transform Discrete Fourier transforms. Here is some data, corresponding to a square pulse: Here is the discrete Fourier transform of the data. 
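The data itself is not shown in this text; a sketch of such a square pulse and its transform, with an arbitrary pulse length, might be:

data = Join[Table[1, {32}], Table[0, {32}]];   (* a square pulse of length 64 *)
ft = Fourier[data];                            (* discrete Fourier transform; the entries are complex *)
InverseFourier[ft]                             (* recovers the original data, up to numerical rounding *)
Chop[InverseFourier[ft] - data]                (* the differences are numerically zero *)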
It involves complex numbers: Here is the inverse discrete Fourier transform: Fourier works whether or not your list of data has a length which is a power of two: This generates a list of 200 elements containing a periodic signal with random noise added: The data looks fairly random if you plot it directly: In different scientific and technical fields different conventions are often used for defining discrete Fourier transforms. The option FourierParameters allows you to choose any of these conventions you want. Fourier[{{u11,u12,…},{u21,u22,…},…}] two‐dimensional discrete Fourier transform Two‐dimensional discrete Fourier transform. One issue with the usual discrete Fourier transform for real data is that the result is complex-valued. There are variants of real discrete Fourier transforms that have real results. The Wolfram Language has commands for computing the discrete cosine transform and the discrete sine transform. FourierDCT[list] Fourier discrete cosine transform of a list of real numbers FourierDST[list] Fourier discrete sine transform of a list of real numbers Discrete real Fourier transforms. Here is some data corresponding to a square pulse: Here is the Fourier discrete cosine transform of the data: Here is the Fourier discrete sine transform of the data: There are four types each of Fourier discrete sine and cosine transforms typically in use, denoted by number or sometimes roman numeral as in "DCTII" for the discrete cosine transform of type 2. FourierDCT[list,m] Fourier discrete cosine transform of type m FourierDST[list,m] Fourier discrete sine transform of type m Discrete real Fourier transforms of different types. The default is type 2 for both FourierDCT and FourierDST. The Wolfram Language does not need InverseFourierDCT or InverseFourierDST functions because FourierDCT and FourierDST are their own inverses when used with the appropriate type. The inverse transforms for types 1, 2, 3, 4 are types 1, 3, 2, 4, respectively. Check that the type 3 transform is the inverse of the type 2 transform: The discrete real transforms are convenient to use for data or image compression. Here is data that might be like a front or an edge: The discrete cosine transform has most of the information in the first few modes: Reconstruct the front from only the first 20 modes (1/10 of the original data size). The oscillations are a consequence of the truncation and are known to show up in image processing applications as well: Convolutions and Correlations Convolution and correlation are central to many kinds of operations on lists of data. They are used in such areas as signal and image processing, statistical data analysis, and approximations to partial differential equations, as well as operations on digit sequences and power series. ListConvolve[kernel,list] form the convolution of kernel with list ListCorrelate[kernel,list] form the correlation of kernel with list Convolution and correlation of lists. This forms the convolution of the kernel {x,y} with a list of data: This forms the correlation: In this case reversing the kernel gives exactly the same result as ListConvolve: This forms successive differences of the data: In forming sublists to combine with a kernel, there is always an issue of what to do at the ends of the list of data. By default, ListConvolve and ListCorrelate never form sublists which would "overhang" the ends of the list of data. This means that the output you get is normally shorter than the original list of data. 
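A sketch of this behavior with purely symbolic entries (the kernel and data below are illustrative) might be:

kernel = {x, y};
data = {a, b, c, d, e, f};
ListConvolve[kernel, data]      (* convolution: {b x + a y, c x + b y, d x + c y, e x + d y, f x + e y} *)
ListCorrelate[kernel, data]     (* correlation: {a x + b y, b x + c y, c x + d y, d x + e y, e x + f y} *)
ListCorrelate[Reverse[kernel], data] === ListConvolve[kernel, data]   (* reversing the kernel relates the two *)
ListCorrelate[{-1, 1}, data]    (* successive differences of the data *)
Length[ListCorrelate[{x, y, z}, data]]   (* length-3 kernel, length-6 data: only 4 complete sublists fit *)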
With an input list of length 6, the output is in this case of length 4: In practice one often wants to get output that is as long as the original list of data. To do this requires including sublists that overhang one or both ends of the list of data. The additional elements needed to form these sublists must be filled in with some kind of "padding". By default, the Wolfram Language takes copies of the original list to provide the padding, thus effectively treating the list as being cyclic. ListCorrelate[kernel,list] do not allow overhangs on either side (result shorter than list ) ListCorrelate[kernel,list,1] allow an overhang on the right (result same length as list ) ListCorrelate[kernel,list,-1] allow an overhang on the left (result same length as list ) ListCorrelate[kernel,list,{-1,1}] allow overhangs on both sides (result longer than list ) ListCorrelate[kernel,list,{kL,kR}] allow particular overhangs on left and right Controlling how the ends of the list of data are treated. The default involves no overhangs: The last term in the last element now comes from the beginning of the list: Now the first term of the first element and the last term of the last element both involve wraparound: In the general case ListCorrelate[kernel,list,{kL,kR}] is set up so that in the first element of the result, the first element of list appears multiplied by the element at position kL in kernel, and in the last element of the result, the last element of list appears multiplied by the element at position kR in kernel. The default case in which no overhang is allowed on either side thus corresponds to ListCorrelate[kernel,list,{1,-1}]. With a kernel of length 3, alignments {-1,2} always make the first and last elements of the result the same: For many kinds of data, it is convenient to assume not that the data is cyclic, but rather that it is padded at either end by some fixed element, often 0, or by some sequence of elements. ListCorrelate[kernel,list,klist,p] pad with element p ListCorrelate[kernel,list,klist,{p1,p2,…}] pad with cyclic repetitions of the pi ListCorrelate[kernel,list,klist,list] pad with cyclic repetitions of the original data Controlling the padding for a list of data. This pads with element p: A common case is to pad with zero: When the padding is indicated by {p,q}, the list {a,b,c} overlays {…,p,q,p,q,…} with a p aligned under the a: Different choices of kernel allow ListConvolve and ListCorrelate to be used for different kinds of computations. This finds a moving average of data: Here is a Gaussian kernel: This generates some "data": Here is a plot of the data: This convolves the kernel with the data: The result is a smoothed version of the data: You can use ListConvolve and ListCorrelate to handle symbolic as well as numerical data. This forms the convolution of two symbolic lists: The result corresponds exactly with the coefficients in the expanded form of this product of polynomials: ListConvolve and ListCorrelate work on data in any number of dimensions. This imports image data from a file: This convolves the data with a two‐dimensional kernel: This shows the image corresponding to the data: Cellular automata provide a convenient way to represent many kinds of systems in which the values of cells in an array are updated in discrete steps according to a local rule. Generating a cellular automaton evolution. 
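The cellular automaton inputs referred to below are not reproduced in this text; a minimal sketch of this kind of evolution, with an arbitrary initial list and step counts, might be:

CellularAutomaton[30, {1, 0, 0, 1, 1, 0, 1, 0}, 4]             (* rule 30 for four steps from an explicit, cyclic list *)
ArrayPlot[CellularAutomaton[30, RandomInteger[1, 250], 100]]   (* 100 steps of rule 30 from random initial conditions *)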
This starts with the list given, then evolves rule 30 for four steps: This shows 100 steps of rule 30 evolution from random initial conditions: {a1,a2,…} explicit list of values ai {{a1,a2,…},b} values ai superimposed on a b background {{a1,a2,…},blist} values ai superimposed on a background of repetitions of blist {{{{a11,a12,…},{d1}},…},blist} values aij at offsets di Ways of specifying initial conditions for one-dimensional cellular automata. If you give an explicit list of initial values, CellularAutomaton will take the elements in this list to correspond to all the cells in the system, arranged cyclically. The right neighbor of the cell at the end is the cell at the beginning: It is often convenient to set up initial conditions in which there is a small "seed" region, superimposed on a constant "background". By default, CellularAutomaton automatically fills in enough background to cover the size of the pattern that can be produced in the number of steps of evolution you specify. This shows rule 30 evolving from an initial condition containing a single black cell: This shows rule 30 evolving from an initial condition consisting of a {1,1} seed on a background of repeated {1,0,1,1} blocks: Particularly in studying interactions between structures, you may sometimes want to specify initial conditions for cellular automata in which certain blocks are placed at particular offsets. This sets up an initial condition with black cells at particular offsets: n elementary rule (k=2, r=1) {n,k} general nearest-neighbor rule with k colors {n,k,r} general rule with k colors and range r {n,{k,1}} k-color nearest-neighbor totalistic rule {n,{k,1},r} k-color range-r totalistic rule {n,{k,{wt1,wt2,…}},r} rule in which neighbor i is assigned weight wti {n,kspec,{{off1},{off2},…,{offs}}} rule with neighbors at specified offsets {lhs1->rhs1,lhs2->rhs2,…} explicit replacements for lists of neighbors {fun,{},rspec} rule obtained by applying function fun to each neighbor list Specifying rules for one-dimensional cellular automata. In the simplest cases, a cellular automaton allows k possible values or "colors" for each cell, and has rules that involve up to r neighbors on each side. The digits of the "rule number" n then specify what the color of a new cell should be for each possible configuration of the neighborhood. This evolves a single neighborhood for 1 step: This shows the new color of the center cell for each of the 8 neighborhoods: For rule 30, this sequence corresponds to the base-2 digits of the number 30: It is sometimes convenient to consider totalistic cellular automata, in which the new value of a cell depends only on the total of the values in its neighborhood. One can specify totalistic cellular automata by rule numbers or "codes" in which each digit refers to neighborhoods with a given total value, obtained for example from neig.{1,1,1}. In general, CellularAutomaton allows one to specify rules using any sequence of weights. Another choice sometimes convenient is {k,1,k}, which yields outer totalistic rules. Any cellular automaton rule can be thought of as corresponding to a Boolean function. In the simplest case, basic Boolean functions like And or Nor take two arguments. These are conveniently specified in a cellular automaton rule as being at offsets {{0},{1}}. Note that for compatibility with handling higher-dimensional cellular automata, offsets must always be given in lists, even for one-dimensional cellular automata.
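As a sketch of two equivalent ways of specifying the same elementary rule (the seed and step count here are arbitrary):

CellularAutomaton[{30, 2, 1}, {{1}, 0}, 4]   (* rule 30 in the general form {n, k, r}: 2 colors, range 1 *)

(* the same rule given as explicit replacements for each 3-cell neighborhood *)
rule30 = {{1,1,1}->0, {1,1,0}->0, {1,0,1}->0, {1,0,0}->1,
          {0,1,1}->1, {0,1,0}->1, {0,0,1}->1, {0,0,0}->0};
CellularAutomaton[rule30, {{1}, 0}, 4] === CellularAutomaton[{30, 2, 1}, {{1}, 0}, 4]   (* the two forms agree *)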
This generates the truth table for 2-cell-neighborhood rule number 7, which turns out to be the Boolean function Nand: Rule numbers provide a highly compact way to specify cellular automaton rules. But sometimes it is more convenient to specify rules by giving an explicit function that should be applied to each possible neighborhood. This runs an additive cellular automaton whose rule adds all values in each neighborhood modulo 4: The function is given the step number as a second argument: When you specify rules by functions, the values of cells need not be integers: They can even be symbolic: CellularAutomaton[rnum,init,t] evolve for t steps, keeping all steps CellularAutomaton[rnum,init,{{t}}] evolve for t steps, keeping only the last step CellularAutomaton[rnum,init,{spect}] keep only steps specified by spect CellularAutomaton[rnum,init] evolve rule for one step, giving only the last step Selecting which steps to keep. This runs rule 30 for 5 steps, keeping only the last step: This keeps the last 2 steps: The step specification spect works very much like taking elements from a list with Take. One difference, though, is that the initial condition for the cellular automaton is considered to be step 0. Note that any step specification of the form {…} must be enclosed in an additional list. u steps 0 through u {u} step u {u1,u2} steps u1 through u2 {u1,u2,du} steps u1, u1+du, … Cellular automaton step specifications. This evolves for 100 steps, but keeps only every other step: Selecting steps and cells to keep. Much as you can specify which steps to keep in a cellular automaton evolution, so also you can specify which cells to keep. If you give an initial condition such as {{a1,a2,…},blist}, then a1 is taken to have offset 0 for the purpose of specifying which cells to keep. All all cells that can be affected by the specified initial condition Automatic all cells in the region that differs from the background (default) 0 cell aligned with beginning of aspec x cells at offsets up to x on the right -x cells at offsets up to x on the left {x} cell at offset x to the right {-x} cell at offset x to the left {x1,x2} cells at offsets x1 through x2 {x1,x2,dx} cells x1, x1+dx, … Cellular automaton cell specifications. This keeps all steps, but drops cells at offsets more than 20 on the left: This keeps just the center column of cells: If you give an initial condition such as {{a1,a2,…},blist}, then CellularAutomaton will always effectively carry out the cellular automaton evolution as if there were an infinite number of cells. By using a specx such as {x1,x2} you can tell CellularAutomaton to include only cells at specific offsets x1 through x2 in its output. CellularAutomaton by default includes cells out just far enough that their values never simply stay the same as in the background blist. By default, only the parts that are not constant black are kept: Using All for specx includes all cells that could be affected by a cellular automaton with this range: CellularAutomaton generalizes quite directly to any number of dimensions. Above two dimensions, however, totalistic and other special types of rules tend to be more useful, since the number of entries in the rule table for a general rule rapidly becomes astronomical.
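A sketch of these step and cell specifications, applied to rule 30 from a single seed; the counts and offsets are arbitrary, and the combined {steps,cells} form of the third argument is assumed from the tables above:

CellularAutomaton[30, {{1}, 0}, {{100}}]          (* keep only the last of 100 steps *)
CellularAutomaton[30, {{1}, 0}, {{0, 100, 2}}]    (* keep every other step *)
CellularAutomaton[30, {{1}, 0}, {100, {-20, 20}}] (* all 100 steps, but only cells at offsets -20 through 20 *)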
{n,k,{r1,r2,…,rd}} d-dimensional rule with a (2r1+1)×(2r2+1)×…×(2rd+1) neighborhood {n,{k,1},{1,1}} two-dimensional 9-neighbor totalistic rule {n,{k,{{0,1,0},{1,1,1},{0,1,0}}},{1,1}} two-dimensional 5-neighbor totalistic rule {n,{k,{{0,k,0},{k,1,k},{0,k,0}}},{1,1}} two-dimensional 5-neighbor outer totalistic rule Higher-dimensional rule specifications. This is the rule specification for the two-dimensional 9-neighbor totalistic cellular automaton with code 797: This gives steps 0 and 1 in its evolution: This shows step 70 in the evolution: This shows all steps in a slice along one axis:
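The inputs behind these examples are not shown in this text, but a sketch of the code 797 evolution, assuming a single-cell seed and using ArrayPlot for display, might be:

rule = {797, {2, 1}, {1, 1}};                      (* two-dimensional 9-neighbor totalistic rule, code 797 *)
steps = CellularAutomaton[rule, {{{1}}, 0}, 70];   (* 71 two-dimensional configurations, steps 0 through 70 *)
Take[steps, 2]                                     (* steps 0 and 1 *)
ArrayPlot[Last[steps]]                             (* step 70 *)
ArrayPlot[steps[[All, Ceiling[Length[First[steps]]/2]]]]   (* all steps in a slice along one axis: the middle row of every step *)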