### Linear homogeneous differential equation

In mathematics, **linear differential equations** are differential equations whose homogeneous solutions can be added together to form other homogeneous solutions. They can be ordinary or partial. The homogeneous solutions of a linear equation form a vector space, which is not the case for non-linear differential equations.

## Introduction

Linear **differential equations** are of the form

- $Ly\; =\; f$

where the differential operator *L* is a linear operator, *y* is the unknown function (such as a function of time *y*(*t*)), and the right hand side *f* is a given function of the same nature as *y* (called the **source term**). For a function dependent on time we may write the equation more expressly as

- $L\; y(t)\; =\; f(t)$

and, even more precisely by bracketing

- $L\; [y(t)]\; =\; f(t)$

The linear operator *L* may be considered to be of the form^{[1]}

- $L\_n(y)\; \backslash equiv\; \backslash frac\{d^n\; y\}\{dt^n\}\; +\; A\_1(t)\backslash frac\{d^\{n-1\}y\}\{dt^\{n-1\}\}\; +\; \backslash cdots\; +\; A\_\{n-1\}(t)\backslash frac\{dy\}\{dt\}\; +\; A\_n(t)y$

The linearity condition on *L* rules out operations such as taking the square of the derivative of *y*; but permits, for example, taking the second derivative of *y*.
It is convenient to rewrite this equation in an operator form

- $L\_n(y)\; \backslash equiv\; \backslash left[\backslash ,D^n\; +\; A\_\{1\}(t)D^\{n-1\}\; +\; \backslash cdots\; +\; A\_\{n-1\}(t)\; D\; +\; A\_n(t)\backslash right]\; y$

where *D* is the differential operator *d/dt* (i.e. *Dy* = *y*′, *D*^{2}*y* = *y*″, ...), and the *A _{i}* are given functions.

Such an equation is said to have **order** *n*, the index of the highest derivative of *y* that is involved.

A typical simple example is the linear differential equation used to model radioactive decay.^{[2]} Let *N*(*t*) denote the number of radioactive atoms remaining in some sample of material^{[3]} at time *t*. Then for some constant *k* > 0, the rate at which the atoms decay can be modelled by

- $\backslash frac\{dN\}\{dt\}\; =\; -k\; N$
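As a quick sanity check, the analytic solution of this decay equation is *N*(*t*) = *N*(0)*e*^{−kt}; a short script can confirm numerically that it satisfies the ODE (the values of *N*(0) and *k* below are illustrative):

```python
import math

def N(t, N0=1000.0, k=0.5):
    # Analytic solution of dN/dt = -k N with N(0) = N0 (illustrative values).
    return N0 * math.exp(-k * t)

# Verify dN/dt ~= -k N(t) with a central finite difference.
t, h, k = 2.0, 1e-6, 0.5
dNdt = (N(t + h) - N(t - h)) / (2 * h)
assert abs(dNdt - (-k * N(t))) < 1e-3
```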

If *y* is assumed to be a function of only one variable, one speaks about an ordinary differential equation, else the derivatives and their coefficients must be understood as (contracted) vectors, matrices or tensors of higher rank, and we have a (linear) partial differential equation.

The case where *f* = 0 is called a **homogeneous equation** and its solutions are called **complementary functions**. It is particularly important to the solution of the general case, since any complementary function can be added to a solution of the inhomogeneous equation to give another solution (by a method traditionally called *particular integral and complementary function*). When the *A _{i}* are numbers, the equation is said to have *constant coefficients*.

## Homogeneous equations with constant coefficients

The first method of solving linear ordinary differential equations with constant coefficients is due to Euler, who realized that solutions have the form *e ^{zx}*, for possibly-complex values of *z*. The exponential function is one of the few functions that keep their shape under differentiation. For the sum of multiple derivatives of a function to equal zero, the derivatives must cancel each other out, and the only way for them to do so is for the derivatives to have the same form as the initial function. Thus, to solve

- $y^\{(n)\}\; +\; A\_\{1\}y^\{(n-1)\}\; +\; \backslash cdots\; +\; A\_\{n\}y\; =\; 0$

we set *y* = *e ^{zx}*, leading to

- $z^n\; e^\{zx\}\; +\; A\_1\; z^\{n-1\}\; e^\{zx\}\; +\; \backslash cdots\; +\; A\_n\; e^\{zx\}\; =\; 0.$

Division by *e ^{zx}* gives the *n*th-order polynomial

- $F(z)\; =\; z^\{n\}\; +\; A\_\{1\}z^\{n-1\}\; +\; \backslash cdots\; +\; A\_n\; =\; 0.\backslash ,$

This algebraic equation *F*(*z*) = 0, is the characteristic equation considered later by Gaspard Monge and Augustin-Louis Cauchy.
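For low-degree cases the roots of the characteristic polynomial can be found by hand; more generally they can be computed numerically, for example with NumPy (assumed available here). A sketch for the equation *y*″ − 4*y*′ + 5*y* = 0, which reappears in the examples below:

```python
import numpy as np

# Characteristic polynomial F(z) = z^2 - 4z + 5, entered as [1, A_1, A_2].
coeffs = [1, -4, 5]
roots = np.roots(coeffs)  # expected roots: 2 + i and 2 - i
```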

Formally, the terms

- $y^\{(k)\}\backslash quad\backslash quad(k\; =\; 1,\; 2,\; \backslash dots,\; n).$

of the original differential equation are replaced by *z ^{k}*. Solving the polynomial gives *n* values of *z*: *z*_{1}, ..., *z*_{n}. Substitution of any of those values for *z* into *e*^{zx} gives a solution *e*^{z_{i}x}. Since homogeneous linear differential equations obey the superposition principle, any linear combination of these functions also satisfies the differential equation.

When these roots are all distinct, we have *n* distinct solutions to the differential equation. It can be shown, by applying the Vandermonde determinant, that these are linearly independent, and together they form a basis of the space of all solutions of the differential equation.


The preceding gave a solution for the case when all zeros are distinct, that is, each has multiplicity 1. For the general case, if *z* is a (possibly complex) zero (or root) of *F*(*z*) having multiplicity *m*, then, for $k\backslash in\backslash \{0,1,\backslash dots,m-1\backslash \}\; \backslash ,$, $y=x^ke^\{zx\}$ is a solution of the ODE. Applying this to all roots gives a collection of *n* distinct and linearly independent functions, where *n* is the degree of *F*(*z*). As before, these functions make up a basis of the solution space.
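The repeated-root rule can be checked on a small case. For *y*″ − 2*y*′ + *y* = 0 the characteristic polynomial is (*z* − 1)², a double root at *z* = 1, so the rule predicts that both *e^{x}* and *xe^{x}* are solutions. A finite-difference check (a numerical sketch, not a proof) confirms the second:

```python
import math

def y(x):
    # Candidate solution x * e^x from the double root z = 1 (m = 2, k = 1).
    return x * math.exp(x)

def residual(x, h=1e-5):
    # y'' - 2y' + y via central finite differences; should be ~0 everywhere.
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return d2 - 2 * d1 + y(x)

assert abs(residual(0.7)) < 1e-4
```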

If the coefficients *A _{i}* of the differential equation are real, then real-valued solutions are generally preferable. Since non-real roots *z* then come in conjugate pairs, so do their corresponding basis functions *x*^{k}*e*^{zx}; the desired result is obtained by replacing each pair with their real-valued linear combinations Re(*y*) and Im(*y*), where *y* is one of the pair.

A case that involves complex roots can be solved with the aid of Euler's formula.

### Examples

Given $y''-4y'+5y=0$. The characteristic equation is $z^2-4z+5=0$, which has roots 2 ± *i*. Thus the solution basis $\{y_1,y_2\}$ is $\{e^{(2+i)x},e^{(2-i)x}\}$. Now *y* is a solution if and only if $y=c_1y_1+c_2y_2$ for some $c_1,c_2\in\mathbf{C}$.

Because the coefficients are real,

- we are likely not interested in the complex solutions
- our basis elements are mutual conjugates

The linear combinations

- $u\_1=\backslash mbox\{Re\}(y\_1)=\backslash tfrac\{1\}\{2\}\; (y\_1+y\_2)\; =e^\{2x\}\backslash cos(x),$
- $u\_2=\backslash mbox\{Im\}(y\_1)=\backslash tfrac\{1\}\{2i\}\; (y\_1-y\_2)\; =e^\{2x\}\backslash sin(x),$

will give us a real basis in $\backslash \{u\_1,u\_2\backslash \}$.
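These real combinations can be verified symbolically; assuming SymPy is available, a minimal check that *u*₁ and *u*₂ satisfy *y*″ − 4*y*′ + 5*y* = 0:

```python
import sympy as sp

x = sp.symbols('x')
u1 = sp.exp(2 * x) * sp.cos(x)  # Re(y1)
u2 = sp.exp(2 * x) * sp.sin(x)  # Im(y1)

for u in (u1, u2):
    # Substitute into y'' - 4y' + 5y and simplify; each residual must vanish.
    assert sp.simplify(sp.diff(u, x, 2) - 4 * sp.diff(u, x) + 5 * u) == 0
```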

#### Simple harmonic oscillator

The second order differential equation

- $D^2\; y\; =\; -k^2\; y,$

which represents a simple harmonic oscillator, can be restated as

- $(D^2\; +\; k^2)\; y\; =\; 0.$

The expression in parentheses can be factored, yielding

- $(D\; +\; i\; k)\; (D\; -\; i\; k)\; y\; =\; 0,$

and a pair of linearly independent solutions can be obtained, one from each first-order factor:

- $(D\; -\; i\; k)\; y\; =\; 0$
- $(D\; +\; i\; k)\; y\; =\; 0.$

The solutions are, respectively,

- $y\_0\; =\; A\_0\; e^\{i\; k\; x\}$

and

- $y\_1\; =\; A\_1\; e^\{-i\; k\; x\}.$

These solutions provide a basis for the two-dimensional solution space of the second order differential equation, meaning that linear combinations of these solutions will also be solutions. In particular, the following solutions can be constructed:

- $y\_\{0\text{'}\}\; =\; \{C\_0\; e^\{i\; k\; x\}\; +\; C\_0\; e^\{-i\; k\; x\}\; \backslash over\; 2\}\; =\; C\_0\; \backslash cos\; (k\; x)$

and

- $y\_\{1\text{'}\}\; =\; \{C\_1\; e^\{i\; k\; x\}\; -\; C\_1\; e^\{-i\; k\; x\}\; \backslash over\; 2\; i\}\; =\; C\_1\; \backslash sin\; (k\; x).$

These last two trigonometric solutions are linearly independent, so they can serve as another basis for the solution space, yielding the following general solution:

- $y\_H\; =\; C\_0\; \backslash cos\; (k\; x)\; +\; C\_1\; \backslash sin\; (k\; x).$
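That this general solution satisfies the oscillator equation for arbitrary constants can be confirmed symbolically (a sketch assuming SymPy):

```python
import sympy as sp

x, k, C0, C1 = sp.symbols('x k C0 C1')
y_H = C0 * sp.cos(k * x) + C1 * sp.sin(k * x)

# (D^2 + k^2) y_H should vanish identically in x, k, C0, C1.
assert sp.simplify(sp.diff(y_H, x, 2) + k**2 * y_H) == 0
```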

#### Damped harmonic oscillator

Given the equation for the damped harmonic oscillator:

- $\backslash left(D^2\; +\; \backslash frac\{b\}\{m\}\; D\; +\; \backslash omega\_0^2\backslash right)\; y\; =\; 0,$

the expression in parentheses can be factored: first obtain the characteristic equation by replacing *D* with λ. This equation must be satisfied for all *y*, thus:

- $\backslash lambda^2\; +\; \backslash frac\{b\}\{m\}\; \backslash lambda\; +\; \backslash omega\_0^2\; =\; 0.$

Solve using the quadratic formula:

- $\backslash lambda\; =\; \backslash tfrac\{1\}\{2\}\; \backslash left\; (-\backslash frac\{b\}\{m\}\; \backslash pm\; \backslash sqrt\{\backslash frac\{b^2\}\{m^2\}\; -\; 4\; \backslash omega\_0^2\}\; \backslash right\; ).$

Use these roots to factor the original differential equation:

- $\backslash left(D\; +\; \backslash frac\{b\}\{2m\}\; -\; \backslash sqrt\{\backslash frac\{b^2\}\{4\; m^2\}\; -\; \backslash omega\_0^2\}\; \backslash right)\; \backslash left(D\; +\; \backslash frac\{b\}\{2m\}\; +\; \backslash sqrt\{\backslash frac\{b^2\}\{4\; m^2\}\; -\; \backslash omega\_0^2\}\backslash right)\; y\; =\; 0.$

This implies a pair of solutions, one corresponding to

- $\backslash left(D\; +\; \backslash frac\{b\}\{2m\}\; -\; \backslash sqrt\{\backslash frac\{b^2\}\{4\; m^2\}\; -\; \backslash omega\_0^2\}\; \backslash right)\; y\; =\; 0$
- $\backslash left(D\; +\; \backslash frac\{b\}\{2m\}\; +\; \backslash sqrt\{\backslash frac\{b^2\}\{4\; m^2\}\; -\; \backslash omega\_0^2\}\backslash right)\; y\; =\; 0$

The solutions are, respectively,

- $y\_0\; =\; A\_0\; e^\{-\backslash omega\; x\; +\; \backslash sqrt\{\backslash omega^2\; -\; \backslash omega\_0^2\}\; x\}\; =\; A\_0\; e^\{-\backslash omega\; x\}\; e^\{\backslash sqrt\{\backslash omega^2\; -\; \backslash omega\_0^2\}\; x\}$
- $y\_1\; =\; A\_1\; e^\{-\backslash omega\; x\; -\; \backslash sqrt\{\backslash omega^2\; -\; \backslash omega\_0^2\}\; x\}\; =\; A\_1\; e^\{-\backslash omega\; x\}\; e^\{-\backslash sqrt\{\backslash omega^2\; -\; \backslash omega\_0^2\}\; x\}$

where ω = *b*/2*m*. From this linearly independent pair of solutions can be constructed another linearly independent pair which thus serve as a basis for the two-dimensional solution space:

- $y\_H\; (A\_0,\; A\_1)\; (x)\; =\; \backslash left(A\_0\; \backslash sinh\; \backslash left\; (\backslash sqrt\{\backslash omega^2\; -\; \backslash omega\_0^2\}\; x\; \backslash right\; )\; +\; A\_1\; \backslash cosh\; \backslash left\; (\; \backslash sqrt\{\backslash omega^2\; -\; \backslash omega\_0^2\}\; x\; \backslash right\; )\; \backslash right)\; e^\{-\backslash omega\; x\}.$

However, if |ω| < |ω_{0}| then the square roots become imaginary, and it is preferable to express the general solution in real form as

- $y\_H\; (A\_0,\; A\_1)\; (x)\; =\; \backslash left(A\_0\; \backslash sin\; \backslash left(\backslash sqrt\{\backslash omega\_0^2\; -\; \backslash omega^2\}\; x\; \backslash right\; )\; +\; A\_1\; \backslash cos\; \backslash left\; (\backslash sqrt\{\backslash omega\_0^2\; -\; \backslash omega^2\}\; x\; \backslash right\; )\; \backslash right)\; e^\{-\backslash omega\; x\}.$

This latter solution corresponds to the underdamped case, whereas the former one corresponds to the overdamped case: the solutions for the underdamped case oscillate whereas the solutions for the overdamped case do not.
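The underdamped solution can be spot-checked for concrete parameter values with |ω| < |ω₀| (the numbers below are illustrative; SymPy assumed):

```python
import sympy as sp

x = sp.symbols('x')
b, m, w0 = sp.Rational(1, 2), 1, 2   # illustrative values, so w = b/(2m) < w0
w = b / (2 * m)                      # w = b/(2m), as in the text
wd = sp.sqrt(w0**2 - w**2)           # underdamped oscillation frequency
y = sp.exp(-w * x) * sp.sin(wd * x)  # one basis solution from the general form

# Substitute into (D^2 + (b/m) D + w0^2) y = 0 and check the residual vanishes.
assert sp.simplify(sp.diff(y, x, 2) + (b / m) * sp.diff(y, x) + w0**2 * y) == 0
```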

## Nonhomogeneous equation with constant coefficients

To obtain the solution to the **nonhomogeneous equation** (sometimes called the **inhomogeneous equation**), find a particular integral *y _{P}*(*x*) by either the method of undetermined coefficients or the method of variation of parameters; the general solution to the linear differential equation is the sum of the general solution of the related homogeneous equation and the particular integral. Alternatively, when the initial conditions are set, use the Laplace transform to obtain the particular solution directly.

Suppose we face

- $\backslash frac\; \{d^\{n\}y(x)\}\; \{dx^\{n\}\}\; +\; A\_\{1\}\backslash frac\; \{d^\{n-1\}y(x)\}\; \{dx^\{n-1\}\}\; +\; \backslash cdots\; +\; A\_\{n\}y(x)\; =\; f(x).$

For later convenience, define the characteristic polynomial

- $P(v)=v^n+A\_1v^\{n-1\}+\backslash cdots+A\_n.$

We find a solution basis $\{y_1(x),y_2(x),\ldots,y_n(x)\}$ for the homogeneous (*f*(*x*) = 0) case. We now seek a **particular integral** *y _{p}*(*x*) by the method of **variation of parameters**. Let the coefficients of the linear combination be functions of *x*:

- $y\_p(x)\; =\; u\_1(x)\; y\_1(x)\; +\; u\_2(x)\; y\_2(x)\; +\; \backslash cdots\; +\; u\_n(x)\; y\_n(x).$

For ease of notation we will drop the dependency on *x* (i.e. the various *(x)*). Using the operator notation *D* = *d/dx*, the ODE in question is *P*(*D*)*y* = *f*; so

- $f=P(D)y\_p=P(D)(u\_1y\_1)+P(D)(u\_2y\_2)+\backslash cdots+P(D)(u\_ny\_n).$

With the constraints

- $0=u\text{'}\_1y\_1+u\text{'}\_2y\_2+\backslash cdots+u\text{'}\_ny\_n$
- $0=u\text{'}\_1y\text{'}\_1+u\text{'}\_2y\text{'}\_2+\backslash cdots+u\text{'}\_ny\text{'}\_n$
- $\backslash cdots$
- $0=u\text{'}\_1y^\{(n-2)\}\_1+u\text{'}\_2y^\{(n-2)\}\_2+\backslash cdots+u\text{'}\_ny^\{(n-2)\}\_n$

the parameters commute out,

- $f=u\_1P(D)y\_1+u\_2P(D)y\_2+\backslash cdots+u\_nP(D)y\_n+u\text{'}\_1y^\{(n-1)\}\_1+u\text{'}\_2y^\{(n-1)\}\_2+\backslash cdots+u\text{'}\_ny^\{(n-1)\}\_n.$

But *P*(*D*)*y _{j}* = 0, therefore

- $f=u\text{'}\_1y^\{(n-1)\}\_1+u\text{'}\_2y^\{(n-1)\}\_2+\backslash cdots+u\text{'}\_ny^\{(n-1)\}\_n.$

This, with the constraints, gives a linear system for the *u′ _{j}*. It can always be solved; indeed, combining Cramer's rule with the Wronskian,

- $u\text{'}\_j=(-1)^\{n+j\}\backslash frac\{W(y\_1,\backslash ldots,y\_\{j-1\},y\_\{j+1\}\backslash ldots,y\_n)\_\{0\; \backslash choose\; f\}\}\{W(y\_1,y\_2,\backslash ldots,y\_n)\}.$

In the very non-standard notation used above, one should take the (*n*, *j*)-minor of *W* and multiply it by *f*; that is where the sign (−1)^{n+j} comes from. Alternatively, forget about the minus sign and just compute the determinant of the matrix obtained by substituting the *j*-th column of *W* with (0, 0, ..., *f*).

The rest is a matter of integrating *u′ _{j}*.

The particular integral is not unique; $y\_p+c\_1y\_1+\backslash cdots+c\_ny\_n$ also satisfies the ODE for any set of constants *c _{j}*.

### Example

Suppose $y''-4y'+5y=\sin(kx)$. We take the solution basis found above, $\{y_1(x)=e^{(2+i)x},\; y_2(x)=e^{(2-i)x}\}$.

- $\begin{aligned} W &= \begin{vmatrix}e^{(2+i)x}&e^{(2-i)x} \\ (2+i)e^{(2+i)x}&(2-i)e^{(2-i)x}\end{vmatrix} = e^{4x}\begin{vmatrix}1&1\\ 2+i&2-i\end{vmatrix} = -2ie^{4x}\\ u'_1 &= \frac{1}{W}\begin{vmatrix}0&e^{(2-i)x}\\ \sin(kx)&(2-i)e^{(2-i)x}\end{vmatrix} = -\tfrac{i}{2}\sin(kx)e^{(-2-i)x}\\ u'_2 &= \frac{1}{W}\begin{vmatrix}e^{(2+i)x}&0\\ (2+i)e^{(2+i)x}&\sin(kx)\end{vmatrix} = \tfrac{i}{2}\sin(kx)e^{(-2+i)x}. \end{aligned}$

Using the list of integrals of exponential functions

- $u\_1=-\backslash tfrac\{i\}\{2\}\backslash int\backslash sin(kx)e^\{(-2-i)x\}\backslash ,dx\; =\backslash frac\{ie^\{(-2-i)x\}\}\{2(3+4i+k^2)\}\backslash left((2+i)\backslash sin(kx)+k\backslash cos(kx)\backslash right)$
- $u\_2=\backslash tfrac\{i\}\{2\}\backslash int\backslash sin(kx)e^\{(-2+i)x\}\backslash ,dx=\backslash frac\{ie^\{(i-2)x\}\}\{2(3-4i+k^2)\}\backslash left((i-2)\backslash sin(kx)-k\backslash cos(kx)\backslash right).$

And so

- $\begin{aligned} y_p &= u_1(x) y_1(x) + u_2(x) y_2(x) = \frac{i}{2(3+4i+k^2)}\left((2+i)\sin(kx)+k\cos(kx)\right) + \frac{i}{2(3-4i+k^2)}\left((i-2)\sin(kx)-k\cos(kx)\right) \\ &= \frac{(5-k^2)\sin(kx)+4k\cos(kx)}{(3+k^2)^2+16}. \end{aligned}$

(Notice that *u*_{1} and *u*_{2} had factors that canceled *y*_{1} and *y*_{2}; that is typical.)

For interest's sake, this ODE has a physical interpretation as a driven damped harmonic oscillator; *y _{p}* represents the steady state, and $c\_1y\_1+c\_2y\_2$ is the transient.
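The closed form for *y _{p}* can be verified by substituting it back into the ODE (SymPy assumed; the check holds identically in *k*):

```python
import sympy as sp

x, k = sp.symbols('x k')
y_p = ((5 - k**2) * sp.sin(k * x) + 4 * k * sp.cos(k * x)) / ((3 + k**2)**2 + 16)

# y_p'' - 4 y_p' + 5 y_p should reproduce the source term sin(kx) exactly.
residual = sp.diff(y_p, x, 2) - 4 * sp.diff(y_p, x) + 5 * y_p - sp.sin(k * x)
assert sp.simplify(residual) == 0
```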

## Equation with variable coefficients

A linear ODE of order *n* with variable coefficients has the general form

- $p\_\{n\}(x)y^\{(n)\}(x)\; +\; p\_\{n-1\}(x)\; y^\{(n-1)\}(x)\; +\; \backslash cdots\; +\; p\_0(x)\; y(x)\; =\; r(x).$

### Examples

A simple example is the Cauchy–Euler equation often used in engineering

- $x^n\; y^\{(n)\}(x)\; +\; a\_\{n-1\}\; x^\{n-1\}\; y^\{(n-1)\}(x)\; +\; \backslash cdots\; +\; a\_0\; y(x)\; =\; 0.$
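For the second-order case, substituting the trial solution *y* = *x*^{r} reduces a Cauchy–Euler equation to a polynomial (indicial) equation in *r*. A sketch for the illustrative equation *x*²*y*″ − 2*y* = 0, whose indicial equation *r*(*r* − 1) − 2 = 0 has roots 2 and −1 (SymPy assumed):

```python
import sympy as sp

x, r = sp.symbols('x r')
roots = sp.solve(r * (r - 1) - 2, r)  # indicial roots: -1 and 2

for rt in roots:
    y = x**rt
    # Each power solution must satisfy x^2 y'' - 2 y = 0.
    assert sp.simplify(x**2 * sp.diff(y, x, 2) - 2 * y) == 0
```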

## First order equation

### Example

Solve the equation

- $y\text{'}(x)+3y(x)=2$

with the initial condition

- $y(0)=2.$

Using the general solution method given below,

- $y=e^{-3x}\left(\int 2 e^{3x}\, dx + \kappa\right).$

Evaluating the indefinite integral gives

- $y=e^{-3x}\left(\tfrac{2}{3} e^{3x} + \kappa\right),$

which reduces to

- $y=\tfrac{2}{3} + \kappa e^{-3x},$

where κ = 4/3 from the initial condition.
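The result above can be double-checked numerically: the claimed solution should reproduce the initial condition and satisfy the ODE (a sketch using a finite-difference derivative):

```python
import math

def y(x):
    # Claimed solution y = 2/3 + (4/3) e^{-3x} of y' + 3y = 2, y(0) = 2.
    return 2.0 / 3.0 + (4.0 / 3.0) * math.exp(-3.0 * x)

assert abs(y(0.0) - 2.0) < 1e-12                 # initial condition
h = 1e-6
for x0 in (0.0, 0.5, 1.0):
    dydx = (y(x0 + h) - y(x0 - h)) / (2 * h)
    assert abs(dydx + 3.0 * y(x0) - 2.0) < 1e-3  # y' + 3y = 2
```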

A linear ODE of order 1 with variable coefficients has the general form

- $Dy(x)\; +\; f(x)\; y(x)\; =\; g(x).$

where *D* is the differential operator *d/dx*. Equations of this form can be solved by multiplying by the integrating factor

- $e^\{\backslash int\; f(x)\backslash ,dx\}$

throughout to obtain

- $Dy(x)e^\{\backslash int\; f(x)\backslash ,dx\}+f(x)y(x)e^\{\backslash int\; f(x)\backslash ,dx\}=g(x)e^\{\backslash int\; f(x)\; \backslash ,\; dx\},$

which simplifies due to the product rule to

- $D\backslash left\; (y(x)e^\{\backslash int\; f(x)\backslash ,dx\}\; \backslash right\; )=g(x)e^\{\backslash int\; f(x)\backslash ,dx\}$

which, on integrating both sides and solving for *y*(*x*), gives:

- $y(x)\; =\; \backslash frac\{\backslash int\; g(x)e^\{\backslash int\; f(x)\backslash ,dx\}\; \backslash ,dx+c\}\{e^\{\backslash int\; f(x)\backslash ,dx\}\}.$

In other words: The solution of a first-order linear ODE

- $y\text{'}(x)\; +\; f(x)\; y(x)\; =\; g(x),$

with coefficients that may or may not vary with *x*, is:

- $y=e^\{-a(x)\}\backslash left(\backslash int\; g(x)\; e^\{a(x)\}\backslash ,\; dx\; +\; \backslash kappa\backslash right)$

where κ is the constant of integration, and

- $a(x)=\backslash int\{f(x)\backslash ,dx\}.$
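The closed-form recipe above translates directly into a few lines of SymPy (assumed available); `solve_first_order` below is a hypothetical helper name, not a library function:

```python
import sympy as sp

x, kappa = sp.symbols('x kappa')

def solve_first_order(f, g):
    # Hypothetical helper: y = e^{-a(x)} ( integral of g e^{a(x)} dx + kappa ),
    # with a(x) = integral of f dx, as in the general solution above.
    a = sp.integrate(f, x)
    return sp.exp(-a) * (sp.integrate(g * sp.exp(a), x) + kappa)

# Reproduce the boxed example: y' + 3y = 2  ->  y = 2/3 + kappa e^{-3x}.
y = solve_first_order(sp.Integer(3), sp.Integer(2))
assert sp.simplify(y - (sp.Rational(2, 3) + kappa * sp.exp(-3 * x))) == 0
```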

A compact form of the general solution (see J. Math. Chem. 48 (2010) 175) is

- $y(x) = e^{-a(x)}\left(\int_{x_0}^x g(t)\, e^{a(t)}\,dt + C\right), \qquad a(x)=\int_{x_0}^x f(t)\,dt;$

for constant coefficients $f(x)=b$ and $g(x)=1$, for example, this reduces to $y(x) = \tfrac{1}{b} + C\, e^{-bx}$ (with constants absorbed into *C*).

## Systems of linear differential equations

An arbitrary linear ordinary differential equation or even a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. A linear system can be viewed as a single equation with a vector-valued variable. The general treatment is analogous to the treatment above of ordinary first order linear differential equations, but with complications stemming from noncommutativity of matrix multiplication.

To solve

- $\backslash left\backslash \{\backslash begin\{array\}\{rl\}\backslash mathbf\{y\}\text{'}(x)\; \&=\; A(x)\backslash mathbf\{y\}(x)+\backslash mathbf\{b\}(x)\backslash \backslash \backslash mathbf\; y(x\_0)\&=\backslash mathbf\; y\_0\backslash end\{array\}\backslash right.$

(here $\backslash mathbf\{y\}\; (x)$ is a vector or matrix, and $A(\; x\; )$ is a matrix), let $U(\; x\; )$ be the solution of $\backslash mathbf\; y\text{'}(x)\; =\; A(x)\backslash mathbf\; y(x)$ with $U(x\_0)\; =\; I$ (the identity matrix). $U$ is a fundamental matrix for the equation — the columns of $U$ form a complete linearly independent set of solutions for the homogeneous equation. After substituting $\backslash mathbf\; y(x)\; =\; U(x)\backslash mathbf\; z(x)$, the equation $\backslash mathbf\; y\text{'}(x)\; =\; A(x)\backslash mathbf\; y(x)+\backslash mathbf\; b(x)$ simplifies to $U(x)\backslash mathbf\; z\text{'}(x)\; =\; \backslash mathbf\; b(x).$ Thus,

- $\mathbf{y}(x) = U(x)\mathbf{y}_0 + U(x)\int_{x_0}^x U^{-1}(t)\mathbf{b}(t)\,dt.$

If $A(x\_1)$ commutes with $A(x\_2)$ for all $x\_1$ and $x\_2$, then

- $U(x) = e^{\int_{x_0}^x A(t)\,dt}$

and thus

- $U^{-1}(x) = e^{-\int_{x_0}^x A(t)\,dt},$

but in the general case there is no closed form solution, and an approximation method such as Magnus expansion may have to be used. Note that the exponentials are matrix exponentials.
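For a constant matrix *A* the fundamental matrix is simply the matrix exponential *U*(*x*) = *e*^{Ax}. A minimal numerical sketch (assuming NumPy and SciPy are available), writing the simple harmonic oscillator *y*″ = −*y* as a first-order system:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # y1' = y2, y2' = -y1  (i.e. y1'' = -y1)
y0 = np.array([1.0, 0.0])    # y1(0) = 1, y1'(0) = 0

def y(x):
    # y(x) = U(x) y0 with U(x) = e^{A x}, valid because A is constant.
    return expm(A * x) @ y0

# The known solution is y1 = cos(x), y2 = -sin(x).
t = 0.8
assert np.allclose(y(t), [np.cos(t), -np.sin(t)])
```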

## See also

- Matrix differential equation
- Partial differential equation
- Continuous-repayment mortgage
- Fourier transform
- Laplace transform
- List of differentiation identities, Nth Derivatives Section

## External links

- http://eqworld.ipmnet.ru/en/solutions/ode.htm