Differential Equations

A differential equation is an equation relating some unknown function to its derivatives. Solving such an equation means finding all functions for which the equation holds true. Differential equations pop up in most areas of science and technology, specifically whenever a relation involving some continuously varying quantities (modeled by functions) and their rates of change in space or time (expressed as derivatives) are known. Isaac Newton was, together with his German counterpart Gottfried Leibniz, one of the first to demonstrate the effectiveness of this approach in physics, using it to establish his laws of motion and subsequent model of gravitation.

Topics:

First-Order Differential Equations

First-Order Differential Equations

A first-order differential equation is a differential equation which involves only the first derivative of the unknown function, and no higher derivatives. Several important types of such equations can be solved explicitly.

Separable Equations
A separable equation can be "separated", with the dependent variable on one side of the equation and the independent variable on the other. It has the general form $\frac{dy}{dx}=f(x)g(y),$ and can be solved by formally writing $\frac{dy}{g(y)}=f(x)dx$ (wherever $g(y)\neq 0$), and then integrating both sides, the left with respect to $y$ and the right with respect to $x$.
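As a concrete illustration, take $f(x)=x$ and $g(y)=y$, so the equation is $\frac{dy}{dx}=xy$. Separating gives $\int\frac{dy}{y}=\int x\,dx$, hence $\ln|y|=\frac{x^2}{2}+\text{const}$ and $y=C\mathrm{e}^{x^2/2}$. The sketch below (plain Python; the helper names are my own) checks this closed form numerically with a central-difference derivative.

```python
import math

def y(x, C=3.0):
    # Solution of dy/dx = x*y found by separation of variables:
    # ln|y| = x**2/2 + const, so y = C * exp(x**2 / 2).
    return C * math.exp(x**2 / 2)

def residual(x, h=1e-6):
    # Central-difference estimate of dy/dx, compared with x*y(x).
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - x * y(x))

# The residual should be tiny at any test point.
print(residual(0.7))
```

Any value of the constant $C$ passes the same check, which reflects that separation produces a one-parameter family of solutions.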

First-order linear equations
These are equations of the type $\frac{dy}{dx}+p(x)y=q(x).$ If $q(x)$ is identically zero, then the equation is called homogeneous. Otherwise, it is called nonhomogeneous. The homogeneous equation is separable and has the general solution $y=K\mathrm{e}^{-\mu(x)}$, where $\mu'(x)=p(x)$ and $K$ is an arbitrary constant. Here, $\mathrm{e}^{\mu(x)}$ is called an integrating factor.

The nonhomogeneous equation can be solved by the use of such an integrating factor. It can be checked that the general solution is given by $y(x)=\mathrm{e}^{-\mu(x)}\int \mathrm{e}^{\mu(x)}q(x)\,dx,$ where the constant of integration supplies the homogeneous term $K\mathrm{e}^{-\mu(x)}$.
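As a worked instance of the integrating-factor formula, take $p(x)=2$ and $q(x)=4$, so $\mu(x)=2x$ and $y=\mathrm{e}^{-2x}\int 4\mathrm{e}^{2x}\,dx=\mathrm{e}^{-2x}\left(2\mathrm{e}^{2x}+C\right)=2+C\mathrm{e}^{-2x}$. The minimal sketch below (function names are my own) verifies that this family satisfies $y'+2y=4$.

```python
import math

def y(x, C=5.0):
    # General solution of dy/dx + 2y = 4 via the integrating factor e^{2x}:
    # y = e^{-2x} * (2*e^{2x} + C) = 2 + C * e^{-2x}.
    return 2.0 + C * math.exp(-2.0 * x)

def residual(x, h=1e-6):
    # Check the ODE y' + 2y = 4 with a central-difference derivative.
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx + 2.0 * y(x) - 4.0)

print(residual(1.3))
```

Setting $C=0$ recovers the constant particular solution $y=2$, and the $C\mathrm{e}^{-2x}$ term is exactly the homogeneous solution from the previous paragraph.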

Existence and Uniqueness of the First-Order Initial Value Problem

When, in addition to the differential equation, an initial value $y(x_0) = y_0$ is prescribed, the two types of differential equations mentioned above are examples of a first-order initial value problem, $\frac{dy}{dx} = f(x,y) \, , \qquad y(x_0) = y_0. \qquad (*)$

Theorem 3: Existence and Uniqueness of the First-Order Initial Value Problem
If $f$ is continuously differentiable with respect to both of its arguments, then the equation $(*)$ has a unique solution $\phi$, defined on some interval $(a,b)$ containing $x_0$. In other words, $\phi'(x) = f(x,\phi(x))$ on $(a,b)$, $\phi(x_0) = y_0$, and $\phi$ is the only continuously differentiable function which satisfies this.

Relevant parts of the book: 18.3

Numerical Solutions

Although we know that, under mild conditions on $f$, the initial-value problem $(*)$ introduced above has a unique solution $\phi$, it is often not possible to find $\phi$ using explicit formulas. However, one can still find numerical approximations of the solution. While there are many approaches to this, they all start by choosing a small step-length $h>0$, and then iteratively finding approximations at the points $x_0, \, x_1 = x_0+h, \, x_2 = x_0+2h, \ldots$.

Euler's Method
Letting $x_n = x_0+nh$, the numerical approximations $y_n$ of $\phi(x_n)$ are given by $y_{n+1} = y_n + f(x_n,y_n) h, \qquad n=0,1,2,\ldots$ where $y_0$ is the prescribed initial value associated with the problem.

Improved Euler's Method
In this method the numerical approximations $y_n$ of $\phi(x_n)$ are found by first calculating the Euler method approximation $u_n$, and then using this to refine the approximation. \begin{aligned} x_{n+1} & = x_n+h, \\ u_{n+1} & = y_n + hf(x_n,y_n), \\ y_{n+1} & = y_n + \frac{f(x_n,y_n)+f(x_{n+1},u_{n+1})}{2}h. \end{aligned}
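The predictor-corrector structure above can be sketched in the same style as the plain Euler code. The implementation below (names my own) runs the same test problem $y'=y$, $y(0)=1$; the error at $x=1$ is dramatically smaller than plain Euler's with the same step length.

```python
import math

def improved_euler(f, x0, y0, h, n):
    """Improved Euler (Heun) method: Euler predictor, then trapezoidal corrector."""
    x, y = x0, y0
    for _ in range(n):
        u = y + h * f(x, y)                        # predictor u_{k+1} (plain Euler step)
        y = y + (f(x, y) + f(x + h, u)) / 2 * h    # corrector: average the two slopes
        x = x + h
    return y

# y' = y, y(0) = 1, stepping to x = 1 where the exact value is e.
approx = improved_euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(abs(approx - math.e))
```

The payoff for the extra slope evaluation is a global error proportional to $h^2$ instead of $h$: halving the step length cuts the error by a factor of about four.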

Other methods
There is a wealth of other numerical schemes. The most widely used is the classical fourth-order Runge-Kutta method (described in the book), which combines four intermediate slope approximations into each step. It is both more complex, and considerably more accurate, than the two methods described above.
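As a sketch of the classical fourth-order Runge-Kutta scheme (the standard textbook formulation; the function name is my own), each step evaluates four slopes $k_1,\ldots,k_4$ and takes their weighted average:

```python
import math

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta: four slope evaluations per step."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)                        # slope at the start of the step
        k2 = f(x + h / 2, y + h / 2 * k1)   # slope at the midpoint, using k1
        k3 = f(x + h / 2, y + h / 2 * k2)   # slope at the midpoint, using k2
        k4 = f(x + h, y + h * k3)           # slope at the end of the step
        y = y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return y

# y' = y, y(0) = 1: just 10 steps of h = 0.1 already match e to about six digits.
print(abs(rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10) - math.e))
```

Its global error is proportional to $h^4$, which is why much larger steps suffice here than for the Euler variants.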