Linear algebra is the backbone of scientific computing: systems of equations, transformations, dimensionality reduction, and differential equations all reduce to matrix operations. NumPy covers the basics, but `scipy.linalg` goes further — it uses LAPACK under the hood for numerically stable factorizations, exposes lower-level control over pivoting and overwrite behavior, and adds functions NumPy doesn't have at all, like the matrix exponential. Use `scipy.linalg` whenever accuracy or a specific decomposition matters.

### Solving Linear Systems

`scipy.linalg.solve` solves Ax = b for x. It's faster and more numerically stable than computing the matrix inverse explicitly (`inv(A) @ b`), because it uses LU factorization internally rather than constructing the full inverse.

```python
import scipy.linalg as la
import numpy as np

A = np.array([[4, 3, 2],
              [3, 5, 1],
              [2, 1, 6]], dtype=float)
b = np.array([1, 2, 3], dtype=float)

x = la.solve(A, b)
print("Solution x:", x.round(6))
print("Residual Ax - b:", (A @ x - b).round(10))

# Compare: never do this in practice
x_inv = la.inv(A) @ b
print("Via inverse:", x_inv.round(6))
```

- `la.solve(A, b)` solves without ever computing A⁻¹ — it's more accurate because matrix inversion amplifies rounding errors.
- The residual `A @ x - b` should be near zero (at floating-point precision, ≈ 1e-15) even for ill-conditioned matrices where `inv` might fail.
- If A is symmetric positive definite (e.g., a covariance matrix), pass `assume_a="pos"` to use the faster Cholesky-based solver.

### LU Decomposition

LU decomposition factors A = P @ L @ U, where P is a permutation matrix, L is lower triangular, and U is upper triangular. It's the factorization that `solve` uses internally, but exposing it directly lets you reuse the factorization for multiple right-hand sides efficiently.
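Under the hood, solving a system from an LU factorization is just two triangular solves. A minimal hand-rolled sketch on a hypothetical 2×2 system (using `la.solve_triangular`, not the higher-level helpers shown next):

```python
import numpy as np
import scipy.linalg as la

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# A = P @ L @ U, so A x = b becomes L y = P.T @ b, then U x = y
P, L, U = la.lu(A)
y = la.solve_triangular(L, P.T @ b, lower=True)
x = la.solve_triangular(U, y)  # upper triangular by default

print(np.allclose(A @ x, b))  # True
```

Each triangular solve is O(n²), which is why reusing a stored factorization pays off once the O(n³) factorization has been done.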
```python
import scipy.linalg as la
import numpy as np

A = np.array([[4, 3, 2],
              [3, 5, 1],
              [2, 1, 6]], dtype=float)

P, L, U = la.lu(A)
print("L:\n", L.round(4))
print("U:\n", U.round(4))
print("P @ L @ U == A:", np.allclose(P @ L @ U, A))

# Reuse the factorization for two right-hand sides
lu_piv = la.lu_factor(A)
x1 = la.lu_solve(lu_piv, [1, 2, 3])
x2 = la.lu_solve(lu_piv, [4, 5, 6])
print("x1:", x1.round(4))
print("x2:", x2.round(4))
```

- `la.lu(A)` returns the three matrices; `P @ L @ U` should exactly reproduce A (up to floating-point noise).
- `la.lu_factor(A)` computes and stores the factorization once; `la.lu_solve(lu_piv, b)` then solves for any b in O(n²) instead of O(n³) — a big saving when solving many systems with the same A.
- L has ones on the diagonal (unit lower triangular) and zeros above; U has zeros below the diagonal.

### Eigenvalues and Eigenvectors

An eigenvector v of a matrix A is a direction that doesn't rotate under the transformation — it only gets scaled: Av = λv. The scalar λ is the corresponding eigenvalue. Eigendecomposition reveals the fundamental modes of a system and is used in PCA, stability analysis, graph theory, and quantum mechanics.
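For the common special case of symmetric matrices, `la.eigh` guarantees real eigenvalues, returned in ascending order. A minimal sketch on a hypothetical 2×2 symmetric matrix:

```python
import numpy as np
import scipy.linalg as la

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric
vals, vecs = la.eigh(S)     # real eigenvalues, ascending order
print("Eigenvalues:", vals)  # ≈ [1. 3.]

# Columns of vecs are orthonormal eigenvectors: S @ v == λ * v
print(np.allclose(S @ vecs, vecs * vals))  # True
```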
```python
import scipy.linalg as la
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[3, 1],
              [0, 2]], dtype=float)

vals, vecs = la.eig(A)
print("Eigenvalues:", vals.real)
print("Eigenvectors (columns):\n", vecs.real.round(4))

# Verify: A @ v == λ * v for each eigenvector
for i in range(len(vals)):
    v = vecs[:, i].real
    lam = vals[i].real
    print(f"A·v{i+1} = {(A @ v).round(4)}, λ·v{i+1} = {(lam * v).round(4)}")

# Visualize the action of A on the unit circle
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])
transformed = A @ circle

plt.figure(figsize=(6, 6))
plt.plot(*circle, color="steelblue", linewidth=1.5, label="Unit circle")
plt.plot(*transformed, color="crimson", linewidth=1.5, linestyle="--", label="A × circle")
for i, (val, color) in enumerate(zip(vals.real, ["darkblue", "darkred"])):
    v = vecs[:, i].real
    plt.annotate("", xy=v * val, xytext=[0, 0],
                 arrowprops=dict(arrowstyle="->", color=color, lw=2))
    plt.text(*(v * val * 1.15), f"λ={val:.0f}", color=color, fontsize=10, ha="center")
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.gca().set_aspect("equal")
plt.title("Eigenvalues: Stretching the Unit Circle")
plt.legend()
plt.tight_layout()
plt.show()
```

- `la.eig(A)` returns complex arrays; `.real` extracts the real part (for real symmetric matrices all eigenvalues are real).
- The transformed circle (red dashed) shows how A stretches space: eigenvectors are the axes that don't rotate, and eigenvalues are the stretch factors along those axes.
- For symmetric matrices (like covariance matrices), use `la.eigh` instead — it's faster and guarantees real eigenvalues.

### SVD and Dimensionality Reduction

The Singular Value Decomposition A = U @ diag(s) @ Vt decomposes any matrix into orthogonal factors and non-negative singular values. It's the foundation of Principal Component Analysis (PCA): the rows of Vt are the principal directions, and the singular values quantify variance along each direction.
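Two identities are worth checking before using SVD on real data: the factors reproduce the original matrix, and the squared singular values equal the eigenvalues of AᵀA (a small hypothetical matrix for illustration):

```python
import numpy as np
import scipy.linalg as la

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, s, Vt = la.svd(A, full_matrices=False)

# The factors reproduce A exactly (up to rounding)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True

# Squared singular values are the eigenvalues of A.T @ A
# (s is descending, eigvalsh returns ascending, hence the sort)
print(np.allclose(np.sort(s**2), la.eigvalsh(A.T @ A)))  # True
```

The second identity is why SVD-based PCA on a centered data matrix agrees with eigendecomposition of the covariance matrix.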
```python
import scipy.linalg as la
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0], [[4, 2.5], [2.5, 2]], size=300)
data -= data.mean(axis=0)  # center

U, s, Vt = la.svd(data, full_matrices=False)
print("Singular values:", s.round(2))
print("PC1 (explains most variance):", Vt[0].round(3))
print("Variance explained: {:.1f}% / {:.1f}%".format(
    100 * s[0]**2 / (s**2).sum(), 100 * s[1]**2 / (s**2).sum()))

plt.figure(figsize=(6, 6))
plt.scatter(data[:, 0], data[:, 1], s=10, alpha=0.3, color="steelblue")
for i, (scale, color) in enumerate(zip(s, ["crimson", "orange"])):
    direction = Vt[i] * scale / 15
    plt.annotate("", xy=direction, xytext=[0, 0],
                 arrowprops=dict(arrowstyle="->", color=color, lw=2.5))
    plt.text(*(direction * 1.15), f"PC{i+1}", color=color, fontsize=11)
plt.title("SVD: Principal Components of Correlated Data")
plt.xlabel("x")
plt.ylabel("y")
plt.gca().set_aspect("equal")
plt.tight_layout()
plt.show()
```

- `la.svd(data, full_matrices=False)` returns the compact (thin) SVD — U is (n, 2), s is (2,), Vt is (2, 2) — appropriate for rectangular data matrices.
- The variance explained by each component is proportional to `s[i]²`; PC1 captures the direction of maximum variance.
- The arrows point along the principal component directions, scaled by the corresponding singular value — the longer arrow explains more variance.

### Matrix Exponential

The matrix exponential `expm(A)` computes e^A — the matrix equivalent of the scalar exponential function. It arises directly in the solution to linear ODEs: if dy/dt = Ay, then y(t) = expm(A * t) @ y(0).
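Two quick sanity checks on `expm` before using it on an ODE, both on small hypothetical inputs: on a diagonal matrix it reduces to the elementwise `exp` of the diagonal, and e^0 is the identity:

```python
import numpy as np
import scipy.linalg as la

# Diagonal case: expm(diag(d)) == diag(exp(d))
D = np.diag([1.0, 2.0])
print(np.allclose(la.expm(D), np.diag(np.exp([1.0, 2.0]))))  # True

# Zero matrix: e^0 is the identity
print(np.allclose(la.expm(np.zeros((2, 2))), np.eye(2)))     # True
```

Note that for non-diagonal matrices `expm(A)` is *not* the elementwise exponential — it's the matrix power series I + A + A²/2! + ….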
```python
import scipy.linalg as la
import numpy as np
import matplotlib.pyplot as plt

# 2D linear ODE: dy/dt = A @ y (rotation + decay)
A = np.array([[-0.2, 2.0],
              [-2.0, -0.2]])
y0 = np.array([1.0, 0.0])

t_vals = np.linspace(0, 5, 200)
trajectory = np.array([la.expm(A * t) @ y0 for t in t_vals])

plt.figure(figsize=(7, 5))
plt.plot(trajectory[:, 0], trajectory[:, 1], color="steelblue", linewidth=2)
plt.plot(*y0, "o", color="crimson", markersize=10, zorder=5, label="Start y(0)")
plt.plot(*trajectory[-1], "s", color="orange", markersize=10, zorder=5, label="End y(5)")
plt.title("Matrix Exponential: Spiral ODE Trajectory")
plt.xlabel("y₁(t)")
plt.ylabel("y₂(t)")
plt.legend()
plt.tight_layout()
plt.show()
```

- `la.expm(A * t)` evaluates e^(At) at a given t; multiplying by y0 gives the exact solution to the linear ODE at time t.
- The off-diagonal ±2 entries in A create rotation (the spiral), while the −0.2 diagonal entries create exponential decay (the inward spiral).
- `expm` is also used in Markov chain analysis: if Q is a rate matrix, `expm(Q * t)` gives the transition probability matrix at time t.

`scipy.linalg` is the right choice when you need more than basic matrix operations — especially for factorizations, eigenproblems, or matrix functions. For large sparse systems, see [sparse matrices with scipy.sparse](/tutorials/sparse-matrices-with-scipy).
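As a final sketch, the Markov-chain use of `expm` mentioned above can be checked on a hypothetical two-state rate matrix (off-diagonals are transition rates, each row sums to zero):

```python
import numpy as np
import scipy.linalg as la

# Hypothetical 2-state rate matrix Q; rows sum to zero
Q = np.array([[-0.5,  0.5],
              [ 0.3, -0.3]])

P_t = la.expm(Q * 2.0)  # transition probability matrix at t = 2
print(P_t.round(4))
print(np.allclose(P_t.sum(axis=1), 1.0))  # True: each row is a distribution
```

Because Q's rows sum to zero, `expm(Q * t)` always has non-negative entries and rows summing to one — a valid stochastic matrix for any t ≥ 0.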