Solve a matrix equation numerically or symbolically—Wolfram Documentation

BUILT-IN SYMBOL

LinearSolve

LinearSolve[m,b]

finds an x that solves the matrix equation m.x==b.

LinearSolve[a,b]

finds an x that solves the array equation a.x==b.

Details and Options

Examples

Basic Examples  (3)

Solve the matrix-vector equation m.x==b for a given matrix m and vector b:

Verify the solution:
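
For illustration, a minimal sketch with an assumed 2×2 matrix m and vector b (these values are not the original example data):

m = {{1, 2}, {3, 4}};
b = {5, 6};
x = LinearSolve[m, b]   (* {-4, 9/2} *)
m.x == b                (* True *)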

Solve the matrix equation m.x==b for given matrices m and b:

Verify the solution:
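
A sketch where the assumed right-hand side b is itself a matrix, so each of its columns is solved for:

m = {{1, 2}, {3, 4}};
b = {{5, 7}, {6, 8}};
x = LinearSolve[m, b];
m.x == b   (* True *)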

Solve a rectangular matrix equation:

Verify the solution:
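
For a rectangular system LinearSolve returns one particular solution; a sketch with an assumed 2×3 matrix:

m = {{1, 2, 3}, {4, 5, 6}};
b = {6, 15};
x = LinearSolve[m, b];
m.x == b   (* True; x is one of infinitely many solutions *)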

Scope  (16)

Basic Uses  (9)

Solve at machine precision:

Solve a case where x is a matrix:

Solve for a complex matrix:

Find a solution for an exact, rectangular matrix:

Compute a solution at arbitrary precision:

Solve for a symbolic matrix:

Solve the system when b is a matrix:

Solve over a finite field:

Solve for CenteredInterval matrices:

Find random representatives mrep and brep of m and b:

Verify that sol contains LinearSolve[mrep,brep]:

Solve for x when b is a matrix of different dimensions:

When no right-hand side b is given, a LinearSolveFunction is returned:

This contains data to solve the problem quickly for a few values of b:
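
A minimal sketch of this usage with an assumed random matrix; the returned LinearSolveFunction stores a factorization that is reused for each right-hand side:

m = RandomReal[1, {4, 4}];
f = LinearSolve[m];            (* a LinearSolveFunction *)
b1 = RandomReal[1, 4];
b2 = RandomReal[1, {4, 3}];    (* right-hand sides of different shapes *)
{Norm[m.f[b1] - b1], Norm[m.f[b2] - b2]}   (* both residuals are small *)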

Special Matrices  (6)

Solve with sparse matrices:

As the solution is typically not sparse, it is returned as an ordinary list:

Sparse methods are used to efficiently solve sparse matrices:

Visualize the result:
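
An illustrative sparse solve with an assumed tridiagonal SparseArray:

n = 10000;
m = SparseArray[{Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];
b = ConstantArray[1., n];
x = LinearSolve[m, b];      (* sparse methods are used automatically; x is an ordinary list *)
Norm[m.x - b]/Norm[b]       (* small relative residual *)
ListLinePlot[x]             (* visualize the solution *)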

Solve a system with structured matrices:

Use a different type of matrix structure:

An identity matrix always produces a trivial solution:

Solve a linear system whose coefficient matrix is a Hilbert matrix:
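
For example, with an assumed 6×6 Hilbert matrix and an all-ones right-hand side:

h = HilbertMatrix[6];
b = ConstantArray[1, 6];
x = LinearSolve[h, b];   (* exact rational solution *)
h.x == b                 (* True *)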

Solve a system whose coefficients are univariate polynomials:

Arrays  (1)

Solve a.x==b with a 2×3×6 array a and a 2×3×4×5 array b:

The result is a 6×4×5 array:

Verify the solution:
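
A sketch of the array case with assumed random data; this relies on LinearSolve accepting general array arguments, as described above:

a = RandomReal[1, {2, 3, 6}];
b = RandomReal[1, {2, 3, 4, 5}];
x = LinearSolve[a, b];
Dimensions[x]              (* {6, 4, 5} *)
Norm[Flatten[a.x - b]]     (* small residual *)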

Options  (7)

Method  (6)

"Banded"  (1)

Solve using a banded matrix method:

Check the relative error of the computed solution:
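
A sketch of a banded solve with an assumed tridiagonal system:

n = 100000;
m = SparseArray[{Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];
b = RandomReal[1, n];
x = LinearSolve[m, b, Method -> "Banded"];
Norm[m.x - b]/Norm[b]   (* relative error of the computed solution *)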

"Cholesky"  (1)

Solve using the Cholesky decomposition:

Check the relative error of the computed solution:
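
A sketch with an assumed symmetric positive definite matrix, which is the structure the "Cholesky" method requires:

a = RandomReal[{-1, 1}, {500, 500}];
m = Transpose[a].a + IdentityMatrix[500];   (* symmetric positive definite by construction *)
b = RandomReal[1, 500];
x = LinearSolve[m, b, Method -> "Cholesky"];
Norm[m.x - b]/Norm[b]   (* relative error of the computed solution *)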

"Krylov"  (2)

The following suboptions can be specified for the method "Krylov":

  • "BasisSize": the size of the Krylov basis (GMRES only)
  • "MaxIterations": the maximum number of iterations
  • "Method": the method to use
  • "Preconditioner": which preconditioner to apply
  • "PreconditionerSide": how to apply the preconditioner ("Left" or "Right")
  • "ResidualNormFunction": the norm function used to compute the norm of the residual of the solution
  • "StartingVector": the initial vector from which to start iterations
  • "Tolerance": the tolerance used to terminate iterations

Possible settings for "Method" include:

  • "BiCGSTAB": iterative method for arbitrary square matrices
  • "ConjugateGradient": iterative method for Hermitian positive definite matrices
  • "GMRES": iterative method for arbitrary square matrices

Possible settings for "Preconditioner" include:

  • "ILU0": a preconditioner based on an incomplete LU factorization of the original matrix without fill-in
  • "ILUT": a variant of ILU0 with fill-in
  • "ILUTP": a variant of ILUT with column permutation

Possible suboptions for "Preconditioner" include:

  • "FillIn": an upper bound on the number of additional nonzero elements in a row introduced by the ILUT preconditioner
  • "PermutationTolerance": when to permute columns
  • "Tolerance": the drop tolerance (any element of magnitude smaller than this tolerance is treated as zero)

Solve using a Krylov method:

Check the relative error of the computed solution:
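
A sketch of a Krylov solve; the matrix, right-hand side and suboption choices here are assumptions made for illustration:

n = 100000;
m = SparseArray[{Band[{1, 1}] -> 3., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];
b = ConstantArray[1., n];
x = LinearSolve[m, b,
  Method -> {"Krylov", "Method" -> "BiCGSTAB", "Preconditioner" -> "ILU0"}];
Norm[m.x - b]/Norm[b]   (* relative error of the computed solution *)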

    "Multifrontal"  (1)

    Solve using a direct multifrontal method:

    Check a relative error of the computed solution:
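
For example, with an assumed sparse system:

n = 100000;
m = SparseArray[{Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];
b = RandomReal[1, n];
x = LinearSolve[m, b, Method -> "Multifrontal"];   (* direct sparse factorization *)
Norm[m.x - b]/Norm[b]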

    "Pardiso"  (1)

    Solve using Pardiso:

    Check a relative error of the computed solution:
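
The call differs from the multifrontal sketch above only in the method name; again the system is assumed for illustration:

n = 100000;
m = SparseArray[{Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];
b = RandomReal[1, n];
x = LinearSolve[m, b, Method -> "Pardiso"];
Norm[m.x - b]/Norm[b]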

Modulus  (1)

Find the solution x to m.x==b modulo 47:

Verify the solution:
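
A sketch with assumed integer data:

m = {{1, 7, 31}, {25, 4, 2}, {3, 3, 41}};
b = {8, 9, 10};
x = LinearSolve[m, b, Modulus -> 47];
Mod[m.x - b, 47]   (* {0, 0, 0} *)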

Applications  (11)

Spans and Linear Independence  (3)

The following three vectors are not linearly independent:

The equation with a generic right-hand side does not have a solution:

Equivalently, the equation with the identity matrix on the right-hand side has no solution:

The following three vectors are linearly independent:

The equation with a generic right-hand side has a solution:

Equivalently, the equation with the identity matrix on the right-hand side has a solution:

The solution is the inverse of the matrix:

Determine if the following vectors are linearly independent or not:

As the equation does not have a solution for an arbitrary right-hand side, the vectors are not linearly independent:
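
A sketch of such a test with assumed vectors; here v3 == 2 v2 - v1, so the vectors are dependent:

v1 = {1, 2, 3}; v2 = {4, 5, 6}; v3 = {7, 8, 9};
m = Transpose[{v1, v2, v3}];
LinearSolve[m, {c1, c2, c3}]   (* issues a message: no solution for a generic right-hand side *)
NullSpace[m]                   (* nonempty, confirming the dependence *)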

Equation Solving and Invertibility  (6)

Solve the following system of equations:

Rewrite the system in matrix form:

Use LinearSolve to find a solution:

Show that the solution is unique using NullSpace:

Verify the result using SolveValues:
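
A minimal sketch with an assumed 2×2 system:

m = {{2, 1}, {1, -3}}; b = {5, -1};
LinearSolve[m, b]                                     (* {2, 1} *)
NullSpace[m]                                          (* {}: the solution is unique *)
SolveValues[{2 p + q == 5, p - 3 q == -1}, {p, q}]    (* {{2, 1}} *)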

Find all solutions of the following system of equations:

First, write the coefficient matrix, the variable vector and the constant vector:

Verify the rewrite:

LinearSolve gives a particular solution:

NullSpace gives a basis for solutions to the homogeneous equation m.x==0:

Define an arbitrary linear combination of the null space basis elements:

The general solution is the sum of the particular solution and this combination:
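
A sketch with an assumed underdetermined system:

m = {{1, 1, 1}, {1, 2, 3}}; b = {6, 14};
xp = LinearSolve[m, b];       (* a particular solution *)
ns = NullSpace[m];            (* basis for solutions of m.x == 0 *)
gen = xp + c First[ns];       (* general solution with an arbitrary parameter c *)
Simplify[m.gen == b]          (* True *)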

Determine if the following matrix has an inverse:

Since the system has no solution, the matrix does not have an inverse:

Verify the result using Inverse:

Determine if the following matrix has a nonzero determinant:

Since the system has a solution, the determinant of the matrix must be nonzero:

Confirm the result using Det:

Find the inverse of the following matrix:

To find the inverse, solve the system with the identity matrix as the right-hand side:

Verify the result using Inverse:
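
For example, with an assumed 2×2 matrix:

m = {{2, 1}, {7, 4}};
inv = LinearSolve[m, IdentityMatrix[2]]   (* {{4, -1}, {-7, 2}} *)
inv == Inverse[m]                         (* True *)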

Solve the system m.x==b with several different right-hand sides b by computing a LinearSolveFunction:

Perform the computation by inverting the matrix and multiplying by the inverse:

The results are practically identical, while the LinearSolveFunction approach is several times faster:
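
A sketch of the comparison; the matrix size and the number of right-hand sides are assumed:

m = RandomReal[1, {1000, 1000}];
bs = RandomReal[1, {20, 1000}];                                   (* 20 right-hand sides *)
{t1, s1} = RepeatedTiming[Map[LinearSolve[m], bs]];               (* factor once, then apply *)
{t2, s2} = RepeatedTiming[Transpose[Inverse[m].Transpose[bs]]];   (* explicit inverse *)
Max[Abs[s1 - s2]]   (* the two sets of solutions agree to roundoff *)
{t1, t2}            (* compare the timings *)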

Calculus  (2)

Newton's method for finding a root of a multivariate function:

Compare with the answer found by FindRoot:
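
A minimal sketch of a Newton iteration built on LinearSolve, using an assumed pair of equations:

fun[{u_, v_}] := {u^2 + v^2 - 1, u^3 - v};
jac[{u_, v_}] := {{2 u, 2 v}, {3 u^2, -1}};        (* Jacobian of fun *)
newtonStep[p_] := p - LinearSolve[jac[p], fun[p]];
FixedPoint[newtonStep, {1., 1.}, 20]
FindRoot[{u^2 + v^2 == 1, u^3 == v}, {{u, 1}, {v, 1}}]   (* agrees with the Newton result *)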

Approximately solve a boundary value problem using discrete differences:

Show the error compared with the exact solution:
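
A sketch for an assumed problem u''(t) == Sin[Pi t] with u(0) == u(1) == 0, whose exact solution is -Sin[Pi t]/Pi^2:

n = 100; h = 1./(n + 1);
t = h Range[n];   (* interior grid points *)
d2 = SparseArray[{Band[{1, 1}] -> -2., Band[{1, 2}] -> 1., Band[{2, 1}] -> 1.}, {n, n}]/h^2;
sol = LinearSolve[d2, Sin[Pi t]];     (* d2 is the discrete second-derivative matrix *)
Max[Abs[sol + Sin[Pi t]/Pi^2]]        (* O(h^2) error against the exact solution *)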

Properties & Relations  (9)

For an invertible matrix m, LinearSolve[m,b] gives the same result as SolveValues for the corresponding system of equations:

Create the corresponding system of linear equations:

Confirm that SolveValues gives the same result:

LinearSolve always returns the trivial solution to the homogeneous equation m.x==0:

Use NullSpace to get the complete spanning set of solutions if m is singular:

Compare with the result of SolveValues:

If m is nonsingular, the solution of m.x==b is the inverse of m when b is the identity matrix:

In this case there is no solution to m.x==b:

Use LeastSquares to minimize Norm[m.x-b]:

Compare to general minimization:

If m.x==b can be solved, LeastSquares is equivalent to LinearSolve:
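
A sketch with an assumed overdetermined system:

m = {{1, 1}, {1, 2}, {1, 3}}; b = {2, 3, 5};
LeastSquares[m, b]                            (* minimizes Norm[m.x - b]; m.x == b has no solution here *)
b2 = m.{1, 1};                                (* a consistent right-hand side *)
LeastSquares[m, b2] == LinearSolve[m, b2]     (* True *)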

For a square matrix, LinearSolve[m,b] has a solution for a generic b iff Det[m]!=0:

For a square matrix, LinearSolve[m,b] has a solution for a generic b iff m has full rank:

For a square matrix, LinearSolve[m,b] has a solution for a generic b iff m has an inverse:

For a square matrix, LinearSolve[m,b] has a solution for a generic b iff m has a trivial null space:
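
These criteria can be checked side by side; a sketch with an assumed singular matrix:

m = {{1, 2}, {2, 4}};                   (* singular *)
{Det[m], MatrixRank[m], NullSpace[m]}   (* zero determinant, rank 1, nontrivial null space *)
LinearSolve[m, {c1, c2}]                (* issues a message: no solution for a generic right-hand side *)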

Possible Issues  (3)

The solution found for an underdetermined system is not unique:

All solutions are found by Solve:

LinearSolve gave the solution corresponding to one particular choice of the free parameters:

With ill-conditioned matrices, numerical solutions may not be sufficiently accurate:

The solution is more accurate if sufficiently high precision is used:
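
A sketch using an assumed Hilbert matrix, which is notoriously ill-conditioned:

h = HilbertMatrix[12]; b = ConstantArray[1, 12];
exact = LinearSolve[h, b];                  (* exact rational solution *)
machine = LinearSolve[N[h], N[b]];
Norm[machine - exact]/Norm[exact]           (* large relative error at machine precision *)
prec = LinearSolve[N[h, 50], N[b, 50]];
Norm[prec - exact]/Norm[exact]              (* far more accurate *)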

Some of the linear solvers available are not deterministic. Set up a system of equations:

Create a solver function:

The "Pardiso" solver is not deterministic:

The Automatic solver method is deterministic:

Neat Examples  (3)

Solve 100,000 equations using a direct method:

Solve a million equations using an iterative method:

Check the relative error of the solution:

Solve the same system of equations using a banded matrix method:

Check the relative error of the solution:
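
A sketch at this scale with an assumed tridiagonal system of a million equations, comparing an iterative and a banded solve:

n = 10^6;
m = SparseArray[{Band[{1, 1}] -> 3., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];
b = ConstantArray[1., n];
x1 = LinearSolve[m, b, Method -> "Krylov"];
x2 = LinearSolve[m, b, Method -> "Banded"];
{Norm[m.x1 - b]/Norm[b], Norm[m.x2 - b]/Norm[b]}   (* relative errors of the two solutions *)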

Text

Wolfram Research (1988), LinearSolve, Wolfram Language function, https://reference.wolfram.com/language/ref/LinearSolve.html (updated 2024).

CMS

Wolfram Language. 1988. "LinearSolve." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2024. https://reference.wolfram.com/language/ref/LinearSolve.html.

APA

Wolfram Language. (1988). LinearSolve. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LinearSolve.html

BibTeX

@misc{reference.wolfram_2025_linearsolve, author="Wolfram Research", title="{LinearSolve}", year="2024", howpublished="\url{https://reference.wolfram.com/language/ref/LinearSolve.html}", note={Accessed: 12-July-2025}}

BibLaTeX

@online{reference.wolfram_2025_linearsolve, organization={Wolfram Research}, title={LinearSolve}, year={2024}, url={https://reference.wolfram.com/language/ref/LinearSolve.html}, note={Accessed: 12-July-2025}}

