# Taylor Series with SymPy

A Taylor series approximates a function near a point using an infinite sum of polynomial terms. Truncating the series at a finite order gives a polynomial approximation — useful for simplifying complex functions, deriving numerical methods, computing limits, and understanding how functions behave near a specific value. SymPy's `series` function computes this symbolically to any order you choose, returning the exact coefficients as fractions rather than floating-point approximations.
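As a quick sanity check of what `series` computes, the truncated expansion can be rebuilt directly from its definition — the sum of fᵏ(x₀)/k! · (x − x₀)ᵏ. This sketch does that for `sin(x)` at x₀ = 0 and confirms it matches SymPy's result:

```python
from sympy import symbols, series, sin, diff, factorial

x = symbols('x')
f = sin(x)

# Degree-7 Taylor polynomial at x0 = 0, built term by term from derivatives:
# sum_{k=0}^{7} f^(k)(0)/k! * x**k
manual = sum(diff(f, x, k).subs(x, 0) / factorial(k) * x**k for k in range(8))

# series(...).removeO() gives the same polynomial
assert (series(f, x, 0, n=8).removeO() - manual).expand() == 0
print(manual)
```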

### Basic Series Expansion

`sympy.series(f, x, x0, n)` expands `f` around the point `x0` up to (but not including) order `n`. The result includes a Big-O term showing the truncation error.

```python
from sympy import symbols, series, sin, cos, exp

x = symbols('x')

for f in [sin(x), cos(x), exp(x)]:
    s = series(f, x, 0, n=8)
    print(f"series({f}, x=0, n=8):")
    print(" ", s)
    print()
```

```
series(sin(x), x=0, n=8):
  x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)

series(cos(x), x=0, n=8):
  1 - x**2/2 + x**4/24 - x**6/720 + O(x**8)

series(exp(x), x=0, n=8):
  1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + x**6/720 + x**7/5040 + O(x**8)
```

- `series(sin(x), x, 0, n=8)` expands around x=0 (Maclaurin series) up to order 7.
- The `O(x**8)` term at the end is the Big-O remainder — it tells you the truncation error is proportional to x⁸ for small x.
- The even powers vanish in `sin(x)` (it's an odd function) and the odd powers vanish in `cos(x)` (even function) — SymPy captures this automatically.
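Individual coefficients can be read off the polynomial part with `Expr.coeff` — a small sketch verifying the x³ coefficient of `sin(x)` is −1/3! and that the even powers really are absent:

```python
from sympy import symbols, series, sin

x = symbols('x')
poly = series(sin(x), x, 0, n=8).removeO()

# Coefficient of x**3 is -1/3! = -1/6; even powers have coefficient 0
print(poly.coeff(x, 3))   # -1/6
print(poly.coeff(x, 4))   # 0
```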

### Removing the Big-O and Plotting

`.removeO()` strips the Big-O term, leaving a pure polynomial that can be converted to a NumPy function with `lambdify` for plotting.

```python
from sympy import symbols, series, sin, lambdify
import numpy as np
import matplotlib.pyplot as plt

x = symbols('x')
f = sin(x)

orders = [3, 5, 7, 9]
x_vals = np.linspace(-2 * np.pi, 2 * np.pi, 400)
f_num = lambdify(x, f, modules='numpy')

plt.figure(figsize=(10, 5))
plt.plot(x_vals, f_num(x_vals), color="black", linewidth=2.5, label="sin(x)")

colors = ["steelblue", "crimson", "orange", "green"]
for n, color in zip(orders, colors):
    poly = series(f, x, 0, n=n+1).removeO()
    poly_num = lambdify(x, poly, modules='numpy')
    y = poly_num(x_vals)
    y = np.clip(y, -3, 3)  # clip wild values far from 0
    plt.plot(x_vals, y, color=color, linewidth=1.5, linestyle="--",
             label=f"Order {n}")

plt.axhline(0, color="gray", linewidth=0.5)
plt.ylim(-2.5, 2.5)
plt.title("Taylor Approximations of sin(x)")
plt.xlabel("x")
plt.legend()
plt.tight_layout()
plt.show()
```
- `.removeO()` on a series object returns the polynomial part as a regular SymPy expression — necessary before passing it to `lambdify`.
- `lambdify(x, poly, modules='numpy')` converts the symbolic polynomial to a vectorized NumPy function.
- Higher-order polynomials (order 9) stay close to the true `sin(x)` over a wider range; lower-order ones diverge quickly away from x=0.
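Rather than eyeballing the plot, you can quantify how far each truncation strays from the true function. This sketch reuses the same `lambdify` pattern to measure the maximum absolute error of each order over [−π, π]:

```python
from sympy import symbols, series, sin, lambdify
import numpy as np

x = symbols('x')
x_vals = np.linspace(-np.pi, np.pi, 400)
exact = np.sin(x_vals)

# Maximum absolute error of each truncated series on [-pi, pi]
for n in [3, 5, 7, 9]:
    poly = series(sin(x), x, 0, n=n+1).removeO()
    approx = lambdify(x, poly, modules='numpy')(x_vals)
    print(f"order {n}: max |error| = {np.max(np.abs(approx - exact)):.2e}")
```

Each additional pair of terms shrinks the worst-case error substantially; the order-9 polynomial is already within about 0.01 of `sin(x)` over the whole interval.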

### Expanding Around a Non-Zero Point

You can expand around any point, not just zero. This is useful when you want a local approximation near a specific value — for example, near π/2 for cosine.

```python
from sympy import symbols, series, cos, pi

x = symbols('x')
f = cos(x)

# Expand around x = π/2
s = series(f, x, pi/2, n=6)
print("cos(x) around x = π/2:")
print(s)

# Substitute to verify: cos(π/2) should be ≈ 0
print("\nAt x = π/2:", s.removeO().subs(x, pi/2))
```

```
cos(x) around x = π/2:
pi/2 + (x - pi/2)**3/6 - (x - pi/2)**5/120 - x + O((x - pi/2)**6, (x, pi/2))

At x = π/2: 0
```
- `series(f, x, pi/2, n=6)` expands `cos(x)` around the point x = π/2 — the coefficients involve the function's derivatives at that point.
- Near π/2, the leading term is `-(x − π/2)` since `cos(π/2) = 0` and `cos'(π/2) = −sin(π/2) = −1`.
- `.subs(x, pi/2)` substitutes the expansion point back in and confirms the constant term is zero.
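SymPy prints the expansion with `pi/2` and `-x` as separate terms, which obscures its shape. To see it in the familiar form, you can rewrite it in terms of the offset from the expansion point — a sketch, where `h` is a helper symbol introduced here for h = x − π/2:

```python
from sympy import symbols, series, cos, pi

x, h = symbols('x h')
s = series(cos(x), x, pi/2, n=6).removeO()

# Substitute x = pi/2 + h to express the polynomial in the offset h
in_h = s.subs(x, pi/2 + h).expand()
print(in_h)  # equals -h + h**3/6 - h**5/120
```

This makes the structure obvious: it is exactly the series of −sin(h), consistent with the identity cos(π/2 + h) = −sin(h).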

### Series Approximation Error

The Big-O term quantifies how quickly the approximation deteriorates. You can evaluate the series at a specific point and compare to the exact value to see the actual error.

```python
from sympy import symbols, series, exp, Rational, N

x = symbols('x')
f = exp(x)

# Approximate exp(0.5) using increasingly high order
exact = float(N(f.subs(x, Rational(1, 2))))
print(f"Exact exp(0.5) = {exact:.10f}\n")

print(f"{'Order':>6}  {'Approx':>14}  {'Error':>12}")
for n in range(1, 9):
    poly = series(f, x, 0, n=n+1).removeO()
    approx = float(poly.subs(x, Rational(1, 2)))
    error = abs(approx - exact)
    print(f"  {n:4d}  {approx:14.10f}  {error:.2e}")
```

```
Exact exp(0.5) = 1.6487212707

 Order          Approx         Error
     1    1.5000000000  1.49e-01
     2    1.6250000000  2.37e-02
     3    1.6458333333  2.89e-03
     4    1.6484375000  2.84e-04
     5    1.6486979167  2.34e-05
     6    1.6487196181  1.65e-06
     7    1.6487211682  1.03e-07
     8    1.6487212650  5.66e-09
```
- `Rational(1, 2)` is the exact fraction ½ — substituting it into the polynomial gives an exact rational result before converting to float.
- The error shrinks by roughly an order of magnitude with each additional term at x=0.5 — and the ratio keeps improving, because each new term is divided by a growing factorial.
- At x=0.5, a 5th-order Taylor polynomial is already accurate to about four decimal places — the series converges rapidly near the expansion point.
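The errors in the table can also be bounded without knowing the exact answer. For `exp` on [0, 0.5], every derivative is at most e^0.5, so the standard Lagrange remainder after order n is at most e^0.5 · 0.5^(n+1)/(n+1)!. A sketch comparing the measured error against that bound, using plain `math` arithmetic:

```python
from math import exp, factorial

x0 = 0.5
exact = exp(x0)

approx = 0.0
term = 1.0
for n in range(9):
    approx += term          # running sum now includes x0**n / n!
    term *= x0 / (n + 1)    # next term x0**(n+1) / (n+1)!
    # Lagrange remainder bound: |R_n| <= e^x0 * x0**(n+1) / (n+1)!
    bound = exact * x0 ** (n + 1) / factorial(n + 1)
    print(f"order {n}: error = {abs(approx - exact):.2e}, bound = {bound:.2e}")
```

The measured error stays below the bound at every order, which is what the remainder theorem guarantees.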

Taylor series are foundational in numerical methods. To evaluate related symbolic computations, see [symbolic integration with SymPy](/tutorials/symbolic-integration-with-sympy) and [symbolic differentiation with SymPy](/tutorials/symbolic-differentiation-with-sympy).