Tutorials

SciPy Optimize Basics

Optimization is the task of finding the input value that makes a function as small (or large) as possible. You'd use it to fit a model to data, find the most efficient parameter set for a simulation, or solve any problem that can be framed as "minimize some cost". SciPy's `optimize` module covers single-variable minimization, multivariate minimization, curve fitting, and constrained problems — all with a consistent, easy-to-use interface.

### Minimizing a 1D Function

For functions of a single variable, `minimize_scalar` finds the minimum without needing you to supply a starting point. It uses Brent's method by default, which is fast and reliable for smooth unimodal functions.

```python
from scipy.optimize import minimize_scalar
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: (x - 3) ** 2 + 2

result = minimize_scalar(f)
print(f"Minimum at x = {result.x:.4f}, f(x) = {result.fun:.4f}")

x = np.linspace(0, 6, 200)
plt.figure(figsize=(7, 4))
plt.plot(x, f(x), color="steelblue", linewidth=2, label="f(x) = (x−3)² + 2")
plt.plot(result.x, result.fun, "o", color="crimson", markersize=10, zorder=5,
         label=f"Minimum at x = {result.x:.2f}")
plt.title("Minimizing a 1D Function")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.tight_layout()
plt.show()
```

```
Minimum at x = 3.0000, f(x) = 2.0000
```

- `minimize_scalar(f)` requires only the function — no gradient and no starting point.
- `result.x` is the location of the minimum and `result.fun` is the function value there.
- You can restrict the search with `bounds=(a, b)` if you know the minimum lies in a specific range.
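As a quick sketch of the `bounds` option mentioned above — the function here is made up for illustration, chosen because it has two minima so the bounds decide which one you get:

```python
from scipy.optimize import minimize_scalar

# Two local minima, at x = -1 and x = +1
f = lambda x: (x**2 - 1) ** 2

# With bounds, the "bounded" method searches only the given interval
res = minimize_scalar(f, bounds=(0, 2), method="bounded")
print(f"x = {res.x:.4f}")  # the minimum near x = 1, not the one at x = -1
```

Without bounds, Brent's method could wander into either basin depending on its internal bracketing; the bounds make the choice explicit.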

### Minimizing a Multivariate Function

For functions of multiple variables, `minimize` takes a starting point `x0` and a method name. `L-BFGS-B` is a good default for smooth functions: it uses gradient information internally via finite differences and scales well to high dimensions.

```python
from scipy.optimize import minimize
import numpy as np

# Rosenbrock function — classic test case, minimum at (1, 1)
def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

result = minimize(rosenbrock, x0=[0.0, 0.0], method="L-BFGS-B")
print(f"Minimum at x = {result.x.round(5)}")
print(f"f(x) = {result.fun:.2e}")
print(f"Converged: {result.success}")
```

```
Minimum at x = [1.      0.99999]
f(x) = 9.14e-12
Converged: True
```

- `x0=[0.0, 0.0]` is the starting point — the optimizer walks downhill from here.
- The Rosenbrock function has a curved, narrow valley that makes it difficult for simple methods; `L-BFGS-B` handles it well.
- `result.success` tells you whether the optimizer converged. Always check this — a failed minimization can return a local minimum or garbage.
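Since `L-BFGS-B` estimates the gradient by finite differences when none is supplied, you can often speed things up and improve accuracy by passing an analytic gradient via `jac`. A sketch for the same Rosenbrock problem (the gradient is derived by hand from the formula above):

```python
from scipy.optimize import minimize
import numpy as np

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    # Partial derivatives of the Rosenbrock function
    dx0 = -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2)
    dx1 = 200 * (x[1] - x[0] ** 2)
    return np.array([dx0, dx1])

result = minimize(rosenbrock, x0=[0.0, 0.0], jac=rosenbrock_grad,
                  method="L-BFGS-B")
print(result.x, result.success)
```

With `jac` supplied, the optimizer skips the extra function evaluations it would otherwise spend approximating the gradient — worthwhile whenever the derivative is cheap to write down.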

### Curve Fitting with curve_fit

`curve_fit` adjusts the parameters of a user-defined model function to minimize the sum of squared residuals between the model and observed data. It's the standard tool for fitting any parametric model: exponential decay, Gaussian peaks, power laws, and more.

```python
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt

def model(x, a, b, c):
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
x_data = np.linspace(0, 5, 50)
y_data = model(x_data, 3.0, 0.8, 0.5) + 0.2 * rng.standard_normal(50)

popt, pcov = curve_fit(model, x_data, y_data, p0=[1, 1, 0])
print(f"Fitted:  a={popt[0]:.3f}, b={popt[1]:.3f}, c={popt[2]:.3f}")
print(f"True:    a=3.000, b=0.800, c=0.500")

plt.figure(figsize=(7, 4))
plt.scatter(x_data, y_data, s=20, alpha=0.6, color="steelblue", label="Noisy data")
plt.plot(x_data, model(x_data, *popt), color="crimson", linewidth=2, label="Fitted curve")
plt.title("Curve Fitting with curve_fit")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.tight_layout()
plt.show()
```

```
Fitted:  a=2.963, b=1.003, c=0.673
True:    a=3.000, b=0.800, c=0.500
```

- `p0=[1, 1, 0]` is the initial parameter guess — a reasonable starting point speeds convergence and avoids local minima.
- `popt` contains the best-fit parameters; `pcov` is the covariance matrix — its diagonal gives the variance of each parameter estimate.
- `*popt` unpacks the array as positional arguments, which is a clean way to pass fitted parameters back into the model.
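To turn the `pcov` diagonal mentioned above into something readable, take its square root to get one standard deviation per parameter. A sketch reusing the same exponential model and synthetic data:

```python
from scipy.optimize import curve_fit
import numpy as np

def model(x, a, b, c):
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
x_data = np.linspace(0, 5, 50)
y_data = model(x_data, 3.0, 0.8, 0.5) + 0.2 * rng.standard_normal(50)

popt, pcov = curve_fit(model, x_data, y_data, p0=[1, 1, 0])

# sqrt of the diagonal gives the standard error of each parameter
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip("abc", popt, perr):
    print(f"{name} = {val:.3f} ± {err:.3f}")
```

A parameter whose standard error is comparable to its value is effectively unconstrained by the data — a useful sanity check before reporting fit results.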

### Constrained Optimization

Real problems often have constraints: equality constraints like "the weights must sum to 1", or bound constraints like "x must be non-negative". `minimize` supports both through its `constraints` and `bounds` arguments.

```python
from scipy.optimize import minimize
import numpy as np

# Minimize x² + y² subject to x + y = 1 and x ≥ 0.6
def objective(x):
    return x[0] ** 2 + x[1] ** 2

constraints = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1}
bounds = [(0.6, None), (None, None)]

result = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
                  constraints=constraints, bounds=bounds)
print(f"x = {result.x.round(4)}")
print(f"f(x) = {result.fun:.4f}")
print(f"x + y = {result.x.sum():.4f}")
```

```
x = [0.6 0.4]
f(x) = 0.5200
x + y = 1.0000
```

- `{"type": "eq", "fun": ...}` defines an equality constraint: the function must return 0 at the solution (so `x[0] + x[1] - 1 = 0` means `x + y = 1`).
- `bounds=[(0.6, None), (None, None)]` constrains `x[0] ≥ 0.6` while leaving `x[1]` unbounded.
- `method="SLSQP"` (Sequential Least Squares Programming) handles equality constraints, inequality constraints, and bounds, making it a solid default whenever your problem has constraints.
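`minimize` also accepts `"ineq"` constraints, whose function must be non-negative at the solution, and you can pass several constraints as a list. A sketch with a made-up second constraint `x − y ≥ 0.2` added to the equality above:

```python
from scipy.optimize import minimize
import numpy as np

def objective(x):
    return x[0] ** 2 + x[1] ** 2

constraints = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 1},      # x + y = 1
    {"type": "ineq", "fun": lambda x: x[0] - x[1] - 0.2},  # x - y >= 0.2
]

result = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
                  constraints=constraints)
print(result.x.round(4))
```

On the line `x + y = 1`, the inequality forces `x ≥ 0.6`, so the solver lands on `(0.6, 0.4)` — the same point the bound-based version finds, reached here purely through constraint functions.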

`scipy.optimize` covers the full optimization workflow from simple 1D minimization to constrained multivariate problems. For finding roots instead of minima, learn about [root finding with SciPy](/tutorials/root-finding-with-scipy).