Warning
This documentation is a work in progress and not yet fully organized.
Common interface for performing matrix-vector products
Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x = b. Such solvers only require the computation of matrix-vector products A*v, where v is a dense vector. This class serves as an abstract interface between iterative solvers and matrix-like objects.
Parameters:  shape : tuple
                 Matrix dimensions (M, N).
             matvec : callable f(v)
                 Returns the matrix-vector product A * v.


See also
Notes
The user-defined matvec() function must properly handle the case where v has shape (N,) as well as the (N,1) case. The shape of the return type is handled internally by LinearOperator.
Examples
>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator
>>> def mv(v):
...     return np.array([2*v[0], 3*v[1]])
...
>>> A = LinearOperator((2, 2), matvec=mv)
>>> A
<2x2 LinearOperator with unspecified dtype>
>>> A.matvec(np.ones(2))
array([ 2.,  3.])
>>> A * np.ones(2)
array([ 2.,  3.])
Methods
matmat(X)  Matrix-matrix multiplication 
matvec(x)  Matrix-vector multiplication 
Matrix-matrix multiplication
Performs the operation Y = A*X, where A is an MxN linear operator and X is a dense NxK matrix or ndarray.
Parameters:  X : {matrix, ndarray}


Returns:  Y : {matrix, ndarray}

Notes
This matmat wraps any user-specified matmat routine to ensure that Y has the correct type.
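A minimal sketch of this behavior, assuming the operator supplies only a matvec (in that case the wrapper computes the matrix-matrix product column by column):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

# Diagonal-like operator defined only through its matvec; matmat
# is derived from it automatically.
def mv(v):
    return np.array([2 * v[0], 3 * v[1]])

A = LinearOperator((2, 2), matvec=mv)

X = np.ones((2, 3))   # dense N x K block of vectors
Y = A.matmat(X)       # shape (2, 3): A applied to each column of X
```

Each column of Y equals A applied to the corresponding column of X, and Y comes back as an ndarray because X was one.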
Matrix-vector multiplication
Performs the operation y = A*x, where A is an MxN linear operator and x is a column vector or rank-1 array.
Parameters:  x : {matrix, ndarray}


Returns:  y : {matrix, ndarray}

Notes
This matvec wraps the user-specified matvec routine to ensure that y has the correct shape and type.
Return A as a LinearOperator.
See the LinearOperator documentation for additional information.
Examples
>>> from numpy import matrix
>>> from scipy.sparse.linalg import aslinearoperator
>>> M = matrix([[1, 2, 3], [4, 5, 6]], dtype='int32')
>>> aslinearoperator(M)
<2x3 LinearOperator with dtype=int32>
Use BIConjugate Gradient iteration to solve A x = b
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}


Use BIConjugate Gradient STABilized iteration to solve A x = b
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}


Use Conjugate Gradient iteration to solve A x = b
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
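A minimal sketch of a cg call through the current scipy.sparse.linalg interface (the small symmetric positive definite test matrix is an arbitrary example; CG requires A to be symmetric positive definite):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import cg

# Small symmetric positive definite system.
A = csc_matrix(np.array([[4.0, 1.0],
                         [1.0, 3.0]]))
b = np.array([1.0, 2.0])

x, info = cg(A, b)   # info == 0 signals successful convergence
```

The same call accepts a dense matrix or a LinearOperator in place of the sparse A.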


Use Conjugate Gradient Squared iteration to solve A x = b
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}


Find k eigenvalues and eigenvectors of the square matrix A.
Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].
Parameters:  A : matrix, array, or object with matvec(x) method
k : integer


Returns:  w : array
v : array
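A minimal sketch, assuming the routine is exposed as scipy.sparse.linalg.eigs (the name used in current SciPy; older releases exposed it differently):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# 100x100 diagonal matrix with eigenvalues 1..100; ask for the
# 3 eigenvalues of largest magnitude and their eigenvectors.
A = diags(np.arange(1.0, 101.0))
w, v = eigs(A, k=3)   # w: 3 eigenvalues (complex dtype), v: (100, 3)
```

For a general (non-symmetric) A the returned eigenvalues may be complex, which is why w has a complex dtype even in this real example.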

See also
Find k eigenvalues and eigenvectors of the real symmetric square matrix A.
Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for w[i] eigenvalues with corresponding eigenvectors x[i].
Parameters:  A : matrix or array with real entries or object with matvec(x) method
k : integer


Returns:  w : array
v : array
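A minimal sketch for the symmetric case, assuming the routine is exposed as scipy.sparse.linalg.eigsh (the name used in current SciPy):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Symmetric (here diagonal) matrix with eigenvalues 1..100.
A = diags(np.arange(1.0, 101.0))
w, v = eigsh(A, k=3)   # default: 3 eigenvalues of largest magnitude
```

Because A is symmetric, the eigenvalues come back real, unlike in the general routine above.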

See also
Return a function for solving a sparse linear system, with A pre-factorized.
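A minimal sketch of the pre-factorized pattern with scipy.sparse.linalg.factorized (the test matrix is an arbitrary example):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import factorized

A = csc_matrix(np.array([[3.0, 2.0],
                         [1.0, 4.0]]))

solve = factorized(A)             # LU-factorize A once...
x = solve(np.array([1.0, 1.0]))   # ...then solve cheaply for each new b
```

This pays the factorization cost once, which is the point when many right-hand sides share the same A.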
Use Generalized Minimal RESidual iteration to solve A x = b
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}


See also
Solve a matrix equation using the LGMRES algorithm.
The LGMRES algorithm [BJM] [BPh] is designed to avoid some convergence problems of restarted GMRES, and often converges in fewer iterations.
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
x0 : {array, matrix}
tol : float
maxiter : integer
M : {sparse matrix, dense matrix, LinearOperator}
callback : function


Returns:  x : array or matrix
info : integer

Notes
The LGMRES algorithm [BJM] [BPh] is designed to avoid the slowing of convergence in restarted GMRES that is caused by alternating residual vectors. It typically outperforms GMRES(m) of comparable memory requirements, or is at least not much worse.
Another advantage of this algorithm is that you can supply it with ‘guess’ vectors in the outer_v argument that augment the Krylov subspace. If the solution lies close to the span of these vectors, the algorithm converges faster. This can be useful if several very similar matrices need to be inverted one after another, such as in Newton-Krylov iteration, where the Jacobian matrix often changes little between nonlinear steps.
References
[BJM]  A.H. Baker, E.R. Jessup and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005). 
[BPh]  A.H. Baker, PhD thesis, University of Colorado (2003). http://amath.colorado.edu/activities/thesis/allisonb/Thesis.ps 
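A minimal sketch of a basic lgmres call (the outer_v augmentation described in the Notes is omitted here; the test matrix is an arbitrary example):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import lgmres

A = csc_matrix(np.array([[3.0, 2.0],
                         [0.0, 1.0]]))
b = np.array([2.0, 4.0])

x, info = lgmres(A, b)   # info == 0 signals successful convergence
```

With no outer_v supplied this behaves like an augmented restarted GMRES; the payoff of outer_v comes when solving a sequence of similar systems.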
Solve symmetric partial eigenproblems with optional preconditioning
This function implements the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
X : array_like


Returns:  w : array
v : array

Notes
If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the following format: (lambda, V, lambda history, residual norms history).
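A minimal sketch of a lobpcg call (the tol/maxiter values and the random initial block are illustrative choices, not requirements):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)

# Symmetric matrix with eigenvalues 1..100.
A = diags(np.arange(1.0, 101.0)).tocsr()

# Random initial block of 3 vectors; its width sets how many
# eigenpairs are computed.
X = rng.standard_normal((100, 3))

# largest=False requests the smallest eigenvalues instead.
w, v = lobpcg(A, X, largest=False, tol=1e-8, maxiter=200)
```

A preconditioner can be passed via the M argument to accelerate convergence, which is the method's main strength.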
Use MINimum RESidual iteration to solve Ax=b
MINRES minimizes norm(A*x - b) for the symmetric matrix A. Unlike the Conjugate Gradient method, A can be indefinite or singular.
If shift != 0, then the method solves (A - shift*I)x = b.
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}
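A minimal sketch of a minres call (the test matrix is an arbitrary example, chosen symmetric but indefinite, a case plain CG cannot handle):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import minres

# Symmetric but indefinite matrix (one positive, one negative eigenvalue).
A = csc_matrix(np.array([[1.0,  2.0],
                         [2.0, -1.0]]))
b = np.array([3.0, 1.0])

x, info = minres(A, b)   # info == 0 signals successful convergence
```

Given the experimental status noted below, it is worth checking info and the residual explicitly after each call.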


Notes
THIS FUNCTION IS EXPERIMENTAL AND SUBJECT TO CHANGE!
References
Use QuasiMinimal Residual iteration to solve A x = b
Parameters:  A : {sparse matrix, dense matrix, LinearOperator}
b : {array, matrix}


See also
A linear solver for a sparse, square matrix A, using LU decomposition, where L is a lower triangular matrix and U is an upper triangular matrix.
Returns a factored_lu object. (scipy.sparse.linalg.dsolve._superlu.SciPyLUType)
See scipy.sparse.linalg.dsolve._superlu.dgstrf for more info.
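A minimal sketch, assuming the public entry point is scipy.sparse.linalg.splu as in current SciPy (the internal _superlu types referenced above are implementation details):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[4.0, 1.0],
                         [2.0, 3.0]]))

lu = splu(A)                       # SuperLU factorization object
x = lu.solve(np.array([1.0, 2.0])) # reuse lu for any number of right-hand sides
```

Like factorized above, the returned object amortizes one factorization over many solves.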
Run tests for module using nose.
Parameters:  label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional
verbose : int, optional
extra_argv : list, optional
doctests : bool, optional
coverage : bool, optional


Returns:  result : object

Notes
Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:
>>> np.lib.test()
Examples
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
OK
>>> result.errors
[]
>>> result.knownfail
[]
The default sparse solver is UMFPACK when available. This can be changed by passing useUmfpack = False, which then causes the always-available SuperLU-based solver to be used.
UMFPACK requires a CSR/CSC matrix to have sorted column/row indices. If you are sure the matrix fulfills this condition, pass assumeSortedIndices=True to gain some speed.
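A minimal sketch of switching backends, assuming the switch is exposed as scipy.sparse.linalg.use_solver as in current SciPy:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve, use_solver

# Force the always-available SuperLU backend instead of UMFPACK.
use_solver(useUmfpack=False)

A = csc_matrix(np.array([[4.0, 1.0],
                         [2.0, 3.0]]))
x = spsolve(A, np.array([1.0, 2.0]))
```

Note that use_solver changes a module-level setting, so it affects all subsequent spsolve calls, not just the next one.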