Method of separation of variables


In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

Ordinary differential equations (ODE)

Suppose a differential equation can be written in the form

$\frac{d}{dx} f(x) = g(x)h(f(x)),\qquad\qquad (1)$

which we can write more simply by letting $y = f(x)$:

$\frac{dy}{dx}=g(x)h(y).$

As long as h(y) ≠ 0, we can rearrange terms to obtain:

${dy \over h(y)} = {g(x)\,dx},$

so that the two variables x and y have been separated. dx (and dy) can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of dx as a differential (infinitesimal) is somewhat advanced.

Alternative notation

Some who dislike Leibniz's notation may prefer to write this as

$\frac{1}{h(y)} \frac{dy}{dx} = g(x),$

but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to $x$, we have

$\int \frac{1}{h(y)} \frac{dy}{dx} \, dx = \int g(x) \, dx, \qquad\qquad (2)$

or equivalently,

$\int \frac{1}{h(y)} \, dy = \int g(x) \, dx$

because of the substitution rule for integrals.

If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative $\frac{dy}{dx}$ as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below.

(Note that we do not need to use two constants of integration in equation (2), as in

$\int \frac{1}{h(y)} \, dy + C_1 = \int g(x) \, dx + C_2,$

because a single constant $C = C_2 - C_1$ is equivalent.)
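The general recipe above is implemented, for example, in the symbolic ODE solver of Python's sympy library. As an illustrative sketch (the particular choice $g(x) = x$, $h(y) = y$ is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# A separable equation: dy/dx = g(x) * h(y) with g(x) = x and h(y) = y
eq = sp.Eq(y(x).diff(x), x * y(x))

# dsolve recognizes the equation as separable and integrates each side,
# producing y(x) = C1 * exp(x**2 / 2)
sol = sp.dsolve(eq, y(x))
print(sol)
```

Here $\int dy/y = \int x\,dx$ gives $\ln|y| = x^2/2 + C$, hence $y = C_1 e^{x^2/2}$, matching the solver's output.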

Example (I)

The ordinary differential equation

$\frac{d}{dx}f(x)=f(x)(1-f(x))$

may be written as

$\frac{dy}{dx}=y(1-y).$

If we let $g(x) = 1$ and $h(y) = y(1-y)$, we can write the differential equation in the form of equation (1) above. Thus, the differential equation is separable.

As shown above, we can treat $dy$ and $dx$ as separate values, so that both sides of the equation may be multiplied by $dx$. Subsequently dividing both sides by $y(1 - y)$, we have

$\frac{dy}{y(1-y)}=dx.$

At this point we have separated the variables x and y from each other, since x appears only on the right side of the equation and y only on the left.

Integrating both sides, we get

$\int\frac{dy}{y(1-y)}=\int dx,$

which, via partial fractions, becomes

$\int\frac{1}{y} \, dy + \int\frac{1}{1-y}\,dy=\int 1 \, dx,$

and then

$\ln |y| -\ln |1-y|=x+C$

where C is the constant of integration. A bit of algebra gives a solution for y:

$y=\frac{1}{1+Be^{-x}}.$

Here B is an arbitrary constant. One may check the solution by taking the derivative with respect to x of the function we found; the result should equal the right-hand side of the original equation. (One must be careful with the absolute values when solving the equation above: the two signs of the absolute value contribute the positive and negative values of B, respectively, and the case B = 0 arises from the constant solution y = 1, as discussed below.)

Note that since we divided by $y$ and $(1 - y)$ we must check to see whether the solutions $y(x) = 0$ and $y(x) = 1$ solve the differential equation (in this case they are both solutions). See also: singular solutions.
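The check described above can be carried out symbolically; a minimal sketch with sympy, where the symbol B plays the role of the arbitrary constant:

```python
import sympy as sp

x, B = sp.symbols('x B')

# Candidate solution obtained from the separation argument above
y = 1 / (1 + B * sp.exp(-x))

# The residual dy/dx - y*(1 - y) should simplify to zero
residual = sp.simplify(sp.diff(y, x) - y * (1 - y))
print(residual)  # 0
```

The same computation with the constant solutions y = 0 and y = 1 also gives a zero residual, confirming the two singular solutions noted above.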

Example (II)

Population growth is often modeled by the differential equation

$\frac{dP}{dt}=kP\left(1-\frac{P}{K}\right)$

where $P$ is the population with respect to time $t$, $k$ is the rate of growth, and $K$ is the carrying capacity of the environment.

Separation of variables may be used to solve this differential equation.

$\frac{dP}{dt}=kP\left(1-\frac{P}{K}\right)$

$\int\frac{dP}{P\left(1-\frac{P}{K}\right)}=\int k\,dt$

To evaluate the integral on the left side, we simplify the fraction

$\frac{1}{P\left(1-\frac{P}{K}\right)}=\frac{K}{P\left(K-P\right)}$

and then, we decompose the fraction into partial fractions

$\frac{K}{P\left(K-P\right)}=\frac{1}{P}+\frac{1}{K-P}$

Thus we have

$\int\left(\frac{1}{P}+\frac{1}{K-P}\right)\,dP=\int k\,dt$

$\ln\left|P\right|-\ln\left|K-P\right|=kt+C$

$\ln\left|K-P\right|-\ln\left|P\right|=-kt-C$

$\ln\left|\frac{K-P}{P}\right|=-kt-C$

$\left|\frac{K-P}{P}\right|=e^{-kt-C}$

$\left|\frac{K-P}{P}\right|=e^{-C}e^{-kt}$

$\frac{K-P}{P}=\pm e^{-C}e^{-kt}$

Let $A=\pm e^{-C}$.

$\frac{K-P}{P}=Ae^{-kt}$

$\frac{K}{P}-1=Ae^{-kt}$

$\frac{K}{P}=1+Ae^{-kt}$

$\frac{P}{K}=\frac{1}{1+Ae^{-kt}}$

$P=\frac{K}{1+Ae^{-kt}}$

Therefore, the solution to the logistic equation is

$P\left(t\right)=\frac{K}{1+Ae^{-kt}}$

To find $A$, let $t=0$ and $P(0)=P_0$. Then we have

$P_0=\frac{K}{1+Ae^0}$

Noting that $e^0=1$ and solving for $A$, we get

$A=\frac{K-P_0}{P_0}$
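The closed-form logistic solution can be checked against a direct numerical integration of the differential equation; a sketch with numpy, where the parameter values k, K and P0 are arbitrary:

```python
import numpy as np

k, K, P0 = 0.5, 100.0, 10.0        # arbitrary growth rate, capacity, initial population
A = (K - P0) / P0                  # constant from the initial condition P(0) = P0

def P_exact(t):
    """Closed-form logistic solution P(t) = K / (1 + A e^{-kt})."""
    return K / (1 + A * np.exp(-k * t))

# Integrate dP/dt = k * P * (1 - P/K) with a small forward-Euler step up to t = 10
n, dt = 100_000, 1e-4
P = P0
for _ in range(n):
    P += dt * k * P * (1 - P / K)

print(P, P_exact(10.0))  # the two values agree to a few decimal places
```

The agreement improves as the step size dt shrinks, since forward Euler has first-order accuracy.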

Partial differential equations

The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, the wave equation, the Laplace equation and the Helmholtz equation.

Homogeneous case

Consider the one-dimensional heat equation. The equation is

$\frac{\partial u}{\partial t}-\alpha\frac{\partial^{2}u}{\partial x^{2}}=0. \qquad\qquad (3)$

The boundary condition is homogeneous, that is

$u\big|_{x=0}=u\big|_{x=L}=0. \qquad\qquad (4)$

Let us attempt to find a solution which is not identically zero satisfying the boundary conditions but with the following property: u is a product in which the dependence of u on x, t is separated, that is:

$u(x,t) = X(x) T(t). \qquad\qquad (5)$

Substituting u back into the heat equation and using the product rule,

$\frac{T'(t)}{\alpha T(t)} = \frac{X''(x)}{X(x)}. \qquad\qquad (6)$

Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:

$T'(t) = -\lambda \alpha T(t), \qquad\qquad (7)$

and

$X''(x) = -\lambda X(x). \qquad\qquad (8)$

Here −λ is the eigenvalue for both differential operators, and T(t) and X(x) are corresponding eigenfunctions.

We will now show that nontrivial solutions for X(x) cannot occur for values of λ ≤ 0:

Suppose that λ < 0. Then there exist real numbers B, C such that

$X(x) = B e^{\sqrt{-\lambda} \, x} + C e^{-\sqrt{-\lambda} \, x}.$

From the boundary conditions we get

$X(0) = 0 = X(L),$

and therefore B = 0 = C, which implies u is identically 0.

Suppose that λ = 0. Then there exist real numbers B, C such that

$X(x) = Bx + C.$

From the boundary conditions we conclude, in the same manner as in case 1, that u is identically 0.

Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that

$T(t) = A e^{-\lambda \alpha t},$

and

$X(x) = B \sin(\sqrt{\lambda} \, x) + C \cos(\sqrt{\lambda} \, x).$

From the boundary conditions we get C = 0 and that for some positive integer n,

$\sqrt{\lambda} = n \frac{\pi}{L}.$

This solves the heat equation in the special case that u has the product form $u(x,t)=X(x)T(t)$.
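The eigenvalue computation above is easy to confirm symbolically; a sketch with sympy for the eigenfunction with C = 0 and B = 1:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', positive=True, integer=True)

lam = (n * sp.pi / L) ** 2            # eigenvalue lambda = (n pi / L)^2
X = sp.sin(sp.sqrt(lam) * x)          # eigenfunction X(x) = sin(sqrt(lambda) x)

# X'' + lambda X should simplify to zero, and X should vanish at both endpoints
print(sp.simplify(X.diff(x, 2) + lam * X))      # 0
print(X.subs(x, 0), X.subs(x, L))               # 0 0
```

The endpoint value at x = L vanishes precisely because n is declared an integer, mirroring the condition $\sqrt{\lambda} = n\pi/L$ derived above.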

In general, a sum of solutions of this product form which satisfy the boundary conditions also satisfies the heat equation and the boundary conditions. Hence a complete solution can be given as

$u(x,t) = \sum_{n = 1}^{\infty} D_n \sin \frac{n\pi x}{L} \exp\left(-\frac{n^2 \pi^2 \alpha t}{L^2}\right),$

where $D_n$ are coefficients determined by the initial condition.

Given the initial condition

$u\big|_{t=0}=f(x),$

we can get

$f(x) = \sum_{n = 1}^{\infty} D_n \sin \frac{n\pi x}{L}.$

This is the sine series expansion of f(x). Multiplying both sides by $\sin \frac{n\pi x}{L}$ and integrating over $[0,L]$ results in

$D_n = \frac{2}{L} \int_0^L f(x) \sin \frac{n\pi x}{L} \, dx.$

This method requires that the eigenfunctions of x, here $\left\{\sin \frac{n\pi x}{L}\right\}_{n=1}^{\infty}$, are orthogonal and complete. In general this is guaranteed by Sturm-Liouville theory.
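The coefficient formula can be tested numerically; a sketch with numpy, using the arbitrary initial profile f(x) = x(L − x): compute the $D_n$ by numerical integration and check that the truncated sine series reproduces f at t = 0.

```python
import numpy as np

L, N = 1.0, 50                          # interval length and number of series terms
x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]
f = x * (L - x)                         # arbitrary initial condition f(x)

# D_n = (2/L) * integral over [0, L] of f(x) sin(n pi x / L) dx,
# approximated here by a Riemann sum on the uniform grid
D = [2.0 / L * np.sum(f * np.sin(n * np.pi * x / L)) * dx for n in range(1, N + 1)]

# The truncated sine series at t = 0 should approximate f(x)
series = sum(Dn * np.sin(n * np.pi * x / L) for n, Dn in enumerate(D, start=1))
print(np.max(np.abs(series - f)))       # small truncation error
```

For this f the odd coefficients decay like $n^{-3}$ and the even ones vanish, so 50 terms already give a very accurate reconstruction.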

Nonhomogeneous case

Suppose the equation is nonhomogeneous,

$\frac{\partial u}{\partial t}-\alpha\frac{\partial^{2}u}{\partial x^{2}}=h(x,t), \qquad\qquad (9)$

with the boundary condition the same as before.

Expand h(x,t), u(x,t) and f(x) into

$h(x,t)=\sum_{n=1}^{\infty}h_{n}(t)\sin\frac{n\pi x}{L},$

$u(x,t)=\sum_{n=1}^{\infty}u_{n}(t)\sin\frac{n\pi x}{L},$

$f(x)=\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi x}{L},$

where $h_{n}(t)$ and $b_{n}$ can be calculated by integration, while $u_{n}(t)$ is to be determined.

Substituting the expansions of u and h back into the equation and using the orthogonality of the sine functions, we get

$u'_{n}(t)+\alpha\frac{n^{2}\pi^{2}}{L^{2}}u_{n}(t)=h_{n}(t),$

which form a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform or an integrating factor. Finally, we can get

$u_{n}(t)=e^{-\alpha\frac{n^{2}\pi^{2}}{L^{2}} t} \left(b_{n}+\int_{0}^{t}h_{n}(s)e^{\alpha\frac{n^{2}\pi^{2}}{L^{2}} s} \, ds \right).$
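The formula for $u_n(t)$ can be checked symbolically; a sketch with sympy, where the symbol a stands for $\alpha n^2\pi^2/L^2$ and the forcing $h_n(t)=\sin t$ is an arbitrary choice:

```python
import sympy as sp

t, s = sp.symbols('t s')
a = sp.symbols('a', positive=True)     # stands for alpha * n^2 * pi^2 / L^2
b_n = sp.symbols('b_n')                # coefficient from the initial condition
h = sp.sin(s)                          # arbitrary choice of forcing h_n

# u_n(t) = e^{-a t} * (b_n + integral from 0 to t of h_n(s) e^{a s} ds)
u = sp.exp(-a * t) * (b_n + sp.integrate(h * sp.exp(a * s), (s, 0, t)))

# The residual of u_n' + a u_n = h_n(t) should simplify to zero,
# and the initial value u_n(0) should simplify to b_n
residual = sp.simplify(sp.diff(u, t) + a * u - sp.sin(t))
initial = sp.simplify(u.subs(t, 0))
print(residual, initial)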

If the boundary condition is nonhomogeneous, then the expansions above are no longer valid. One has to find a function v that satisfies the boundary condition only, and subtract it from u. The function u − v then satisfies the homogeneous boundary condition, and can be solved with the above method.

In orthogonal curvilinear coordinates, separation of variables can still be used, but some details differ from the Cartesian case. For instance, regularity or a periodic condition may determine the eigenvalues in place of boundary conditions. See spherical harmonics for an example.

Matrices

The matrix form of the separation of variables is the Kronecker sum.

As an example we consider the 2D discrete Laplacian on a regular grid:

$L = \mathbf{D_{xx}}\oplus\mathbf{D_{yy}}=\mathbf{D_{xx}}\otimes\mathbf{I}+\mathbf{I}\otimes\mathbf{D_{yy}}, \,$

where $\mathbf{D_{xx}}$ and $\mathbf{D_{yy}}$ are 1D discrete Laplacians in the x- and y-directions, respectively, and $\mathbf{I}$ are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
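The Kronecker-sum structure means the eigenvalues of L are exactly the pairwise sums of the 1D eigenvalues, the discrete analogue of the separated eigenvalue problems above. A numerical sketch with numpy on a small grid (the grid sizes 4 and 5 are arbitrary):

```python
import numpy as np

def laplacian_1d(n):
    """Standard 1D discrete Laplacian (second-difference matrix) on n points."""
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

Dxx, Dyy = laplacian_1d(4), laplacian_1d(5)

# Kronecker sum: L = Dxx (x) I + I (x) Dyy
L2 = np.kron(Dxx, np.eye(5)) + np.kron(np.eye(4), Dyy)

# Eigenvalues of L2 are the pairwise sums of the 1D eigenvalues
ex, ey = np.linalg.eigvalsh(Dxx), np.linalg.eigvalsh(Dyy)
pairwise = np.sort(np.add.outer(ex, ey).ravel())
print(np.allclose(np.sort(np.linalg.eigvalsh(L2)), pairwise))  # True
```

The corresponding eigenvectors are Kronecker products of the 1D eigenvectors, just as the continuous eigenfunctions are products X(x)T(t).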
