Numerical methods: numerical solutions of equations, iterative methods

Numerical Methods (P3)

This section covers methods for finding numerical solutions to equations, particularly focusing on iterative techniques. These methods are essential when analytical solutions are difficult or impossible to obtain.

1. Solving Equations Numerically

Many equations, especially those involving transcendental functions (like sin, cos, e^x), do not have closed-form algebraic solutions. Numerical methods provide approximate solutions.

1.1 Bisection Method

The bisection method is a simple and robust method. It requires an initial interval [a, b] where f(a) and f(b) have opposite signs. The method repeatedly halves the interval, ensuring the root lies within the remaining subinterval.

Algorithm:

  1. Find an interval [a, b] such that f(a)f(b) < 0.
  2. Calculate the midpoint c = (a + b) / 2.
  3. If f(c) = 0, then c is the root.
  4. If f(a)f(c) < 0, the root lies in [a, c]. Set b = c.
  5. If f(c)f(b) < 0, the root lies in [c, b]. Set a = c.
  6. Repeat until the interval [a, b] is sufficiently small.

Example: Find the root of f(x) = x³ - 2 in [1, 2].

| Iteration | a | b | c | f(a) | f(b) | f(c) |
|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 1.5 | -1 | 6 | 1.375 |
| 2 | 1 | 1.5 | 1.25 | -1 | 1.375 | -0.046875 |
| 3 | 1.25 | 1.5 | 1.375 | -0.046875 | 1.375 | 0.599609 |

After three iterations the root (∛2 ≈ 1.2599) is bracketed in [1.25, 1.375].
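The algorithm can be sketched in Python as follows; the tolerance, iteration cap, and the use of the half-interval width as the stopping test are illustrative choices:

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection method: requires f(a) and f(b) to have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c  # root lies in [a, c]
        else:
            a = c  # root lies in [c, b]
    return (a + b) / 2

# The example above: f(x) = x^3 - 2 on [1, 2]; the exact root is 2^(1/3)
root = bisection(lambda x: x**3 - 2, 1, 2)
```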

1.2 False Position Method (Regula Falsi)

The false position method is similar to the bisection method but uses linear interpolation: it replaces the midpoint with the point where the chord joining (a, f(a)) and (b, f(b)) crosses the x-axis. It usually converges faster than the bisection method.

Formula for c:

$$c = \frac{a f(b) - b f(a)}{f(b) - f(a)}$$
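Under the same sign-change assumption as bisection, the formula above gives this sketch in Python; the stopping test on successive estimates and the tolerance are illustrative choices:

```python
def false_position(f, a, b, tol=1e-10, max_iter=100):
    """Regula falsi: x-intercept of the chord through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c_prev = c
        c = (a * fb - b * fa) / (fb - fa)  # chord's x-intercept
        fc = f(c)
        if fc == 0 or abs(c - c_prev) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc  # root lies in [a, c]
        else:
            a, fa = c, fc  # root lies in [c, b]
    return c

# Same example as bisection: f(x) = x^3 - 2 on [1, 2]
root = false_position(lambda x: x**3 - 2, 1, 2)
```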

1.3 Newton-Raphson Method

The Newton-Raphson method is a powerful iterative method that uses the derivative of the function. It generally converges very quickly, but it requires the derivative to be calculated and can be sensitive to the initial guess.

Formula:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Example: Find the root of f(x) = x² - 2 in [1, 2], with f'(x) = 2x. Starting from x₀ = 1.5, the formula gives x₁ = 1.5 - 0.25/3 ≈ 1.41667 and x₂ ≈ 1.414216, already close to √2 ≈ 1.414214.
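A minimal sketch of the iteration in Python, applied to this example; the starting value x₀ = 1.5, tolerance, and iteration cap are assumed choices:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = x^2 - 2, f'(x) = 2x; the exact root is sqrt(2)
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```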

2. Iterative Methods

Iterative methods involve generating a sequence of approximations that converge to the solution. The convergence depends on the method and the function being solved.

2.1 Fixed-Point Iteration

Fixed-point iteration involves rewriting the equation f(x) = 0 in the form x = g(x) and then iterating $x_{n+1} = g(x_n)$. The convergence of this method depends on the derivative of g(x) at the fixed point.

Convergence Condition: |g'(x*)| < 1, where x* is the fixed point.
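As an illustration, the equation x = cos x satisfies the condition, since |g'(x*)| = |sin x*| ≈ 0.67 < 1 at the fixed point. A minimal sketch, with the starting value and tolerance as assumed choices:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n); converges when |g'(x*)| < 1."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x = cos(x); the fixed point is approximately 0.739085
x_star = fixed_point(math.cos, 1.0)
```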

2.2 Successive Approximations (e.g., for finding square roots)

Successive approximations can be used to find square roots or other values. For example, to find the square root of a number S, we can use the iteration

$$x_{n+1} = \frac{1}{2}\left(x_n + \frac{S}{x_n}\right)$$

This is Heron's method, which is the Newton-Raphson method applied to f(x) = x² - S.
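A sketch of the square-root iteration in Python; the starting value x₀ = 1 (which works for any S > 0) and the tolerance are illustrative choices:

```python
def heron_sqrt(S, tol=1e-12, max_iter=100):
    """Approximate sqrt(S) via x_{n+1} = (x_n + S / x_n) / 2."""
    x = 1.0  # any positive starting value converges for S > 0
    for _ in range(max_iter):
        x_new = (x + S / x) / 2
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root2 = heron_sqrt(2)
```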

3. Error Estimation

It's important to estimate the error in the approximate solution. Common methods include:

  • Bisection Method: the error is bounded by half the length of the current interval; after n iterations, $|e_n| \le \frac{b-a}{2^{n+1}}$.
  • False Position Method: the bracketing interval need not shrink to zero, so the error is usually estimated by the difference between successive approximations, $|x_{n+1} - x_n|$.
  • Newton-Raphson Method: near a simple root the convergence is quadratic, i.e. the error roughly squares at each step:

$$|e_{n+1}| \approx \frac{|f''(x^*)|}{2\,|f'(x^*)|}\, e_n^2$$
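Since the bisection method halves the interval at every step, its error bound can be inverted to predict how many iterations guarantee a given accuracy ε on an initial interval [a, b]:

$$\frac{b-a}{2^{n+1}} \le \varepsilon \quad\Longrightarrow\quad n \ge \log_2\!\left(\frac{b-a}{\varepsilon}\right) - 1$$

For example, on [1, 2] with ε = 10⁻⁶ this gives n ≥ log₂(10⁶) − 1 ≈ 18.9, so 19 iterations suffice.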