Distributions (or generalized functions) are objects that generalize the classical notion of functions in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative. Distributions are widely used in the theory of partial differential equations, where it may be easier to establish the existence of distributional solutions than classical solutions, or appropriate classical solutions may not exist. Distributions are also important in physics and engineering where many problems naturally lead to differential equations whose solutions or initial conditions are distributions, such as the Dirac delta function (which is historically called a "function" even though it is not considered a genuine function mathematically).
According to Kolmogorov & Fomin (1999), generalized functions originated in the work of Sergei Sobolev (1936) on second-order hyperbolic partial differential equations, and the ideas were developed in somewhat extended form by Laurent Schwartz in the late 1940s. According to his autobiography, Schwartz introduced the term "distribution" by analogy with a distribution of electrical charge, possibly including not only point charges but also dipoles and so on. Gårding (1997) comments that although the ideas in the transformative book by Schwartz (1951) were not entirely new, it was Schwartz's broad attack and conviction that distributions would be useful almost everywhere in analysis that made the difference.
The basic idea in distribution theory is to reinterpret functions as linear functionals acting on a space of test functions. Standard functions act by integration against a test function, but many other linear functionals do not arise in this way, and these are the "generalized functions". There are different possible choices for the space of test functions, leading to different spaces of distributions. The basic space of test functions consists of smooth functions with compact support, leading to standard distributions. Use of the space of smooth, rapidly decreasing test functions gives instead the tempered distributions, which are important because they have a well-defined distributional Fourier transform. Every tempered distribution is a distribution in the normal sense, but the converse is not true: in general the larger the space of test functions, the more restrictive the notion of distribution. On the other hand, the use of spaces of analytic test functions leads to Sato's theory of hyperfunctions; this theory has a different character from the previous ones because there are no analytic functions with nonempty compact support.
Basic idea
A typical test function, the bump function Ψ(x): it is smooth (infinitely differentiable) and has compact support (it is zero outside an interval, in this case [−1, 1]).
Distributions are a class of linear functionals that map a set of test functions (conventional and well-behaved functions) into the set of real numbers. In the simplest case, the set of test functions considered is D(R), which is the set of functions φ : R → R having two properties:

φ is smooth (infinitely differentiable);

φ has compact support (is identically zero outside some bounded interval).
A distribution T is a linear mapping T : D(R) → R. Instead of writing T(φ), it is conventional to write \langle T,\varphi \rangle for the value of T acting on a test function φ. A simple example of a distribution is the Dirac delta δ, defined by

\left\langle \delta, \varphi \right\rangle = \varphi(0),
meaning that δ evaluates a test function at 0. Its physical interpretation is as the density of a point source.
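Viewed computationally, a distribution can be modeled as a callable that takes a test function and returns a number. The following minimal Python sketch (the names `delta` and `bump` are illustrative choices, not a standard API) shows the Dirac delta acting on a bump function:

```python
import math

def bump(x):
    """A standard bump test function: exp(-1/(1 - x^2)) on (-1, 1), zero elsewhere."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def delta(phi):
    """Dirac delta as a linear functional: <delta, phi> = phi(0)."""
    return phi(0.0)

print(delta(bump))  # phi(0) = exp(-1) ≈ 0.36788
```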
As described next, there are straightforward mappings from both locally integrable functions and Radon measures to corresponding distributions, but not all distributions can be formed in this manner.
Functions and measures as distributions
Suppose that f : R → R is a locally integrable function. Then a corresponding distribution T_{f} may be defined by

\left\langle T_{f}, \varphi \right\rangle = \int_\mathbf{R} f(x) \varphi(x) \,dx\qquad \text{for} \quad \varphi\in D(\mathbf{R}).
This integral is a real number which depends linearly and continuously on φ. Conversely, the values of the distribution T_{f} on test functions in D(R) determine the pointwise almost everywhere values of the function f on R. In a conventional abuse of notation, f is often used to represent both the original function f and the corresponding distribution T_{f}. This example suggests the definition of a distribution as a linear and, in an appropriate sense, continuous functional on the space of test functions D(R).
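As a rough numerical illustration of the pairing ⟨T_f, φ⟩, one can approximate the integral by quadrature. This is only a sketch: the interval [−2, 2] and the grid size are arbitrary choices that merely need to cover the support of φ.

```python
import math

def bump(x):
    """Smooth test function supported in [-1, 1]."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def T(f, a=-2.0, b=2.0, n=4000):
    """Return the distribution T_f as a functional: phi -> integral of f*phi,
    approximated by a midpoint Riemann sum over [a, b]."""
    h = (b - a) / n
    def pairing(phi):
        return sum(f(a + (i + 0.5) * h) * phi(a + (i + 0.5) * h)
                   for i in range(n)) * h
    return pairing

T_one = T(lambda x: 1.0)   # distribution induced by the constant function 1
print(T_one(bump))         # approximates the integral of bump over [-1, 1]
# The pairing is linear in phi, as a distribution must be:
print(T_one(lambda x: 2.0 * bump(x)) - 2.0 * T_one(bump))
```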
Similarly, if μ is a Radon measure on R, then a corresponding distribution R_{μ} may be defined by

\left\langle R_\mu, \varphi \right\rangle = \int_{\mathbf{R}} \varphi\, d\mu\qquad \text{for} \quad \varphi\in D(\mathbf{R}).
This integral also depends linearly and continuously on φ, so that R_{μ} is a distribution. If μ is absolutely continuous with respect to Lebesgue measure with density f and dμ = f dx, then this definition for R_{μ} is the same as the previous one for T_{f}, but if μ is not absolutely continuous, then R_{μ} is a distribution that is not associated with a function. For example, if P is the point-mass measure on R that assigns measure one to the singleton set {0} and measure zero to sets that do not contain zero, then

\int_{\mathbf{R}} \varphi\, dP = \varphi(0),
so that R_{P} = δ is the Dirac delta.
Adding and multiplying distributions
Distributions may be multiplied by real numbers and added together, so they form a real vector space. Distributions may also be multiplied by infinitely differentiable functions, but it is not possible to define a product of general distributions that extends the usual pointwise product of functions and has the same algebraic properties.
Derivatives of distributions
It is desirable to choose a definition for the derivative of a distribution which, at least for distributions derived from smooth functions, has the property that T'_f = T_{f'}. If φ is a test function, we can use integration by parts to see that

\left\langle f', \varphi\right\rangle = \int_{\mathbf{R}} f'\varphi \,dx = \left[ f(x) \varphi(x) \right]_{-\infty}^{\infty} - \int_{\mathbf{R}} f\varphi' \,dx = -\left\langle f, \varphi' \right\rangle
where the last equality follows from the fact that φ is zero outside of a bounded set. This suggests that if T is a distribution, we should define its derivative T′ by

\left\langle T', \varphi \right\rangle = -\left\langle T, \varphi' \right\rangle.
It turns out that this is the proper definition; it extends the ordinary definition of derivative, every distribution becomes infinitely differentiable and the usual properties of derivatives hold.
Example: Recall that the Dirac delta (so-called Dirac delta function) is the distribution defined by the equation

\left\langle \delta, \varphi \right\rangle = \varphi(0).
It is the derivative of the distribution corresponding to the Heaviside step function H: For any test function φ,

\left\langle H', \varphi \right\rangle = -\left\langle H, \varphi' \right\rangle = -\int_{-\infty}^{\infty} H(x) \varphi'(x)\, dx = -\int_{0}^{\infty} \varphi'(x)\, dx = \varphi(0) - \varphi(\infty) = \varphi(0) = \left\langle \delta, \varphi \right\rangle,
so H′ = δ. Note that φ(∞) = 0 because φ has compact support. Similarly, the derivative of the Dirac delta is the distribution defined by the equation

\langle\delta',\varphi\rangle = -\varphi'(0).
This latter distribution is an example of a distribution that is not derived from a function or a measure. Its physical interpretation is as the density of a dipole source.
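The computation H′ = δ above can be checked numerically. The sketch below assumes the explicit bump function and a midpoint quadrature rule; the grid size is an illustrative choice.

```python
import math

def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def dbump(x):
    """Classical derivative of the bump: -2x/(1 - x^2)^2 * bump(x)."""
    return -2.0 * x / (1.0 - x * x) ** 2 * bump(x) if abs(x) < 1 else 0.0

# <H', bump> = -<H, bump'> = -integral of bump'(x) over [0, infinity);
# the integrand vanishes outside [0, 1], so a midpoint rule on [0, 1] suffices.
n = 4000
h = 1.0 / n
lhs = -sum(dbump((i + 0.5) * h) for i in range(n)) * h

print(lhs, bump(0.0))  # both ≈ exp(-1): <H', phi> = phi(0) = <delta, phi>
```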
Test functions and distributions
In the following, real-valued distributions on an open subset U of R^{n} will be formally defined. With minor modifications, one can also define complex-valued distributions, and one can replace R^{n} by any (paracompact) smooth manifold.
The first object to define is the space D(U) of test functions on U. Once this is defined, it is then necessary to equip it with a topology by defining the limit of a sequence of elements of D(U). The space of distributions will then be given as the space of continuous linear functionals on D(U).
Test function space
The space D(U) of test functions on U is defined as follows. A function φ : U → R is said to have compact support if there exists a compact subset K of U such that φ(x) = 0 for all x in U \ K. The elements of D(U) are the infinitely differentiable functions φ : U → R with compact support – also known as bump functions. This is a real vector space. It can be given a topology by defining the limit of a sequence of elements of D(U). A sequence (φ_{k}) in D(U) is said to converge to φ ∈ D(U) if the following two conditions hold (Gelfand & Shilov 1966–1968, v. 1, §1.2):

There is a compact set K ⊂ U containing the supports of all φ_{k}:


\bigcup\nolimits_k \operatorname{supp}(\varphi_k)\subset K.

For each multi-index α, the sequence of partial derivatives \partial^\alpha \varphi_k tends uniformly to \partial^\alpha\varphi.
With this definition, D(U) becomes a complete locally convex topological vector space satisfying the Heine–Borel property (Rudin 1991, §6.4–5).
This topology can be placed in the context of the following general construction: let

X = \bigcup\nolimits_i X_i
be a countable increasing union of locally convex topological vector spaces and ι_{i} : X_{i} → X be the inclusion maps. In this context, the inductive limit topology, or final topology, τ on X is the finest locally convex vector space topology making all the inclusion maps \iota_i continuous. The topology τ can be explicitly described as follows: let β be the collection of convex balanced subsets W of X such that W ∩ X_{i} is open for all i. A base for the inductive limit topology τ then consists of the sets of the form x + W, where x in X and W in β.
The proof that τ is a vector space topology makes use of the assumption that each X_{i} is locally convex. By construction, β is a local base for τ. Any locally convex vector space topology on X making the inclusions continuous is necessarily contained in τ, so τ is the finest such topology. One can also show that, for each i, the subspace topology X_{i} inherits from τ coincides with its original topology. When each X_{i} is a Fréchet space, (X, τ) is called an LF space.
Now let U be the union of an increasing sequence {U_{i}} of open subsets of U with compact closures K_{i} = \overline{U_i} \subset U. Then we have the countable increasing union

\mathrm{D}(U) = \bigcup\nolimits_i \mathrm{D}_{K_i}
where D_{Ki} is the set of all smooth functions on U with support lying in K_{i}. On each D_{Ki}, consider the topology given by the seminorms

\| \varphi \|_{\alpha} = \max_{x \in K_i} \left| \partial^{\alpha} \varphi(x) \right| ,
i.e. the topology of uniform convergence of derivatives of arbitrary order. This makes each D_{Ki} a Fréchet space. The resulting LF space structure on D(U) is the topology described in the beginning of the section.
On D(U), one can also consider the topology given by the seminorms

\| \varphi \|_{\alpha, K_i} = \max_{x \in K_i} \left| \partial^{\alpha} \varphi(x) \right| .
However, this topology has the disadvantage of not being complete. On the other hand, because of the particular features of the spaces D_{K_i}, a set is bounded with respect to τ if and only if it is contained in some D_{K_i} and bounded there. The completeness of (D(U), τ) then follows from that of the spaces D_{K_i}.
The topology τ is not metrizable by the Baire category theorem, since D(U) is the union of subspaces of the first category in D(U) (Rudin 1991, §6.9).
Distributions
A distribution on U is a continuous linear functional T : D(U) → R (or T : D(U) → C). That is, a distribution T assigns to each test function φ a real (or complex) scalar T(φ) such that

T(c_1\varphi_1 + c_2\varphi_2) = c_1 T(\varphi_1) + c_2 T(\varphi_2)
for all test functions φ_{1}, φ_{2} and scalars c_{1}, c_{2}. Moreover, T is continuous if and only if

\lim_{k\to\infty}T(\varphi_k)= T\left(\lim_{k\to\infty}\varphi_k\right)
for every convergent sequence φ_{k} in D(U). (Even though the topology of D(U) is not metrizable, a linear functional on D(U) is continuous if and only if it is sequentially continuous.) Equivalently, T is continuous if and only if for every compact subset K of U there exists a positive constant C_{K} and a nonnegative integer N_{K} such that

|T(\varphi)| \le C_K \sup \left\{ \left| \partial^\alpha\varphi(x) \right| : x \in K,\ |\alpha| \le N_K \right\}
for all test functions φ with support contained in K (Grubb 2009, p. 14).
The space of distributions on U is denoted by D′(U) and it is the continuous dual space of D(U). No matter what dual topology is placed on D′(U), a sequence of distributions converges in this topology if and only if it converges pointwise (although this need not be true of a net), which is why the topology is sometimes defined to be the weak* topology. But often the topology of bounded convergence, which in this case is the same as the topology of uniform convergence on compact sets, is placed on D′(U), since it is with this topology that D′(U) becomes a nuclear Montel space and it is with this topology that the kernels theorem of Schwartz holds. No matter which topology is chosen, D′(U) will be a non-metrizable, locally convex topological vector space.
The duality pairing between a distribution T in D′(U) and a test function φ in D(U) is denoted using angle brackets by

\begin{cases} \mathrm{D}'(U) \times \mathrm{D}(U) \to \mathbf{R} \\ (T, \varphi) \mapsto \langle T, \varphi \rangle, \end{cases}
so that ⟨T,φ⟩ = T(φ). One interprets this notation as the distribution T acting on the test function φ to give a scalar, or symmetrically as the test function φ acting on the distribution T.
A sequence of distributions (T_{k}) converges with respect to the weak* topology on D′(U) to a distribution T if and only if

\langle T_k, \varphi\rangle \to \langle T, \varphi\rangle
for every test function φ in D(U). For example, if f_{k} : R → R is the function

f_k(x) = \begin{cases} k & \text{if}\ 0\le x \le 1/k \\ 0 & \text{otherwise} \end{cases}
and T_{k} is the distribution corresponding to f_{k}, then

\langle T_k, \varphi\rangle = k\int_0^{1/k} \varphi(x)\, dx \to \varphi(0) = \langle \delta, \varphi\rangle
as k → ∞, so T_{k} → δ in D′(R). Thus, for large k, the function f_{k} can be regarded as an approximation of the Dirac delta distribution.
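This convergence can be observed directly; in the small sketch below (the quadrature resolution is an arbitrary choice), the pairing ⟨T_k, φ⟩ = k∫_0^{1/k} φ is evaluated for growing k:

```python
import math

def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def pair(k, phi, n=20000):
    """<T_k, phi> = k * integral of phi over [0, 1/k], midpoint rule."""
    h = 1.0 / (k * n)
    return k * sum(phi((i + 0.5) * h) for i in range(n)) * h

for k in (1, 10, 100, 1000):
    print(k, pair(k, bump))
# the values approach bump(0) = exp(-1) ≈ 0.36788 as k grows
```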
Functions as distributions
The function f : U → R is called locally integrable if it is Lebesgue integrable over every compact subset K of U. This is a large class of functions which includes all continuous functions and all L^{p} functions. The topology on D(U) is defined in such a fashion that any locally integrable function f yields a continuous linear functional on D(U) – that is, an element of D′(U) – denoted here by T_{f}, whose value on the test function φ is given by the Lebesgue integral:

\langle T_f,\varphi \rangle = \int_U f\varphi\,dx.
Conventionally, one abuses notation by identifying T_{f} with f, provided no confusion can arise, and thus the pairing between T_{f} and φ is often written

\langle f, \varphi\rangle = \langle T_f,\varphi\rangle.
If f and g are two locally integrable functions, then the associated distributions T_{f} and T_{g} define the same element of D′(U) if and only if f and g are equal almost everywhere (see, for instance, Hörmander (1983, Theorem 1.2.5)). In a similar manner, every Radon measure μ on U defines an element of D′(U) whose value on the test function φ is ∫φdμ. As above, it is conventional to abuse notation and write the pairing between a Radon measure μ and a test function φ as ⟨μ, φ⟩. Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is nonnegative on nonnegative functions is of this form for some (positive) Radon measure.
The test functions are themselves locally integrable, and so define distributions. As such they are dense in D′(U) with respect to the topology on D′(U) in the sense that for any distribution T ∈ D′(U), there is a sequence φ_{n} ∈ D(U) such that

\langle\varphi_n,\psi\rangle\to \langle T,\psi\rangle
for all ψ ∈ D(U). This fact follows from the Hahn–Banach theorem, since the dual of D′(U) with its weak* topology is the space D(U) (Rudin 1991, Theorem 3.10), and it can also be proven more constructively by a convolution argument.
Operations on distributions
Many operations which are defined on smooth functions with compact support can also be defined for distributions. In general, if A : D(U) → D(U) is a linear mapping of vector spaces which is continuous with respect to the weak* topology, then it is possible to extend A to a mapping A : D′(U) → D′(U) by passing to the limit. (This approach works for nonlinear mappings as well, provided they are assumed to be uniformly continuous.)
In practice, however, it is more convenient to define operations on distributions by means of the transpose (Strichartz 1994, §2.3; Trèves 1967). If A : D(U) → D(U) is a continuous linear operator, then the transpose is an operator A^{t} : D(U) → D(U) such that

\int_U A\varphi(x)\cdot \psi(x) \,dx = \int_U \varphi(x) \cdot A^t\psi(x)\, dx\qquad \text{for all}\ \varphi,\psi\in D(U).
(For operators acting on spaces of complexvalued test functions, the transpose A^{t} differs from the adjoint A^{*} in that it does not include a complex conjugate.)
If such an operator A^{t} exists and is continuous on D(U), then the original operator A may be extended to D′(U) by defining AT for a distribution T as

\langle AT, \varphi\rangle = \langle T, A^t\varphi\rangle\qquad \text{for all}\ \varphi\in D(U).
Differentiation
Suppose A : D(U) → D(U) is the partial derivative operator

A\varphi = \frac{\partial\varphi}{\partial x_k}.
If φ and ψ are in D(U), then an integration by parts gives

\int_U \frac{\partial\varphi}{\partial x_k} \psi \, dx = \int_U\varphi \frac{\partial\psi}{\partial x_k}\, dx,
so that A^{t} = −A. This operator is a continuous linear transformation on D(U). So, if T ∈ D′(U) is a distribution, then the partial derivative of T with respect to the coordinate x_{k} is defined by the formula

\left\langle \frac{\partial T}{\partial x_{k}}, \varphi \right\rangle =  \left\langle T, \frac{\partial \varphi}{\partial x_{k}} \right\rangle \qquad \text{for all}\ \varphi\in D(U).
With this definition, every distribution is infinitely differentiable, and the derivative in the direction x_{k} is a linear operator on D′(U).
More generally, if α = (α_{1}, ..., α_{n}) is an arbitrary multiindex and ∂^{α} is the associated partial derivative operator, then the partial derivative ∂^{α}T of the distribution T ∈ D′(U) is defined by

\left\langle \partial^{\alpha} T, \varphi \right\rangle = (-1)^{|\alpha|} \left\langle T, \partial^{\alpha} \varphi \right\rangle \qquad \text{for all}\ \varphi \in \mathrm{D}(U).
Differentiation of distributions is a continuous operator on D′(U); this is an important and desirable property that is not shared by most other notions of differentiation.
Multiplication by a smooth function
If m : U → R is an infinitely differentiable function and T is a distribution on U, then the product mT is defined by

\langle mT, \varphi\rangle = \langle T, m\varphi \rangle\qquad \text{for all}\ \varphi\in D(U).
This definition coincides with the transpose definition since if M : D(U) → D(U) is the operator of multiplication by the function m (i.e., Mφ = m φ), then

\int_U M\varphi(x)\cdot \psi(x)\,dx = \int_U m(x)\varphi(x)\cdot \psi(x)\,dx = \int_U \varphi(x)\cdot m(x)\psi(x)\,dx = \int_U \varphi(x)\cdot M\psi(x)\,dx,
so that M^{t} = M.
Under multiplication by smooth functions, D′(U) is a module over the ring C^{∞}(U). With this definition of multiplication by a smooth function, the ordinary product rule of calculus remains valid. However, a number of unusual identities also arise. For example, if δ is the Dirac delta distribution on R, then mδ = m(0)δ, and if δ′ is the derivative of the delta distribution, then

m\delta' = m(0)\delta' - m'\delta = m(0)\delta' - m'(0)\delta.
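The identity for mδ′ can be sanity-checked with finite differences, since ⟨mδ′, φ⟩ = ⟨δ′, mφ⟩ = −(mφ)′(0). The multiplier m and the test function below are arbitrary illustrative choices:

```python
import math

def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

m = lambda x: 2.0 + math.sin(x)     # a smooth multiplier
phi = lambda x: bump(x - 0.5)       # test function with phi'(0) != 0

def d(f, x, h=1e-5):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Left side: <m*delta', phi> = -(m*phi)'(0)
lhs = -d(lambda x: m(x) * phi(x), 0.0)
# Right side: <m(0)*delta' - m'(0)*delta, phi> = -m(0)*phi'(0) - m'(0)*phi(0)
rhs = -m(0.0) * d(phi, 0.0) - d(m, 0.0) * phi(0.0)

print(lhs, rhs)  # agree up to finite-difference error
```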
These definitions of differentiation and multiplication also make it possible to define the operation of a linear differential operator with smooth coefficients on a distribution. A linear differential operator P takes a distribution T ∈ D′(U) to another distribution PT given by a sum of the form

PT = \sum\nolimits_{|\alpha|\le k} p_\alpha \partial^\alpha T,
where the coefficients p_{α} are smooth functions on U. The action of the distribution PT on a test function φ is given by

\left\langle \sum\nolimits_{|\alpha|\le k} p_\alpha \partial^\alpha T,\varphi\right\rangle = \left\langle T,\sum\nolimits_{|\alpha|\le k} (-1)^{|\alpha|} \partial^\alpha(p_\alpha\varphi)\right\rangle.
The minimum integer k for which such an expansion holds for every distribution T is called the order of P. The space D′(U) is a Dmodule with respect to the action of the ring of linear differential operators.
Composition with a smooth function
Let T be a distribution on an open set U ⊂ R^{n}. Let V be an open set in R^{n}, and let F : V → U be smooth. Then provided F is a submersion, it is possible to define

T\circ F \in \mathrm{D}'(V).
This is the composition of the distribution T with F, and is also called the pullback of T along F, sometimes written

F^\sharp : T\mapsto F^\sharp T = T\circ F.
The pullback is often denoted F*, although this notation should not be confused with the use of '*' to denote the adjoint of a linear mapping.
The condition that F be a submersion is equivalent to the requirement that the Jacobian derivative dF(x) of F is a surjective linear map for every x ∈ V. A necessary (but not sufficient) condition for extending F^{#} to distributions is that F be an open mapping (Hörmander 1983, Theorem 6.1.1). The inverse function theorem ensures that a submersion satisfies this condition.
If F is a submersion, then F^{#} is defined on distributions by finding the transpose map. Uniqueness of this extension is guaranteed since F^{#} is a continuous linear operator on D(U). Existence, however, requires using the change of variables formula, the inverse function theorem (locally) and a partition of unity argument; see Hörmander (1983, Theorem 6.1.2).
In the special case when F is a diffeomorphism from an open subset V of R^{n} onto an open subset U of R^{n} change of variables under the integral gives

\int_V\varphi\circ F(x)\, \psi(x)\,dx = \int_U\varphi(x)\, \psi \left (F^{-1}(x) \right ) \left| \det dF^{-1}(x) \right| \,dx.
In this particular case, then, F^{#} is defined by the transpose formula:

\left \langle F^\sharp T,\varphi \right \rangle = \left \langle T, \left| \det d(F^{-1}) \right| \, \varphi\circ F^{-1} \right \rangle.
Localization of distributions
There is no way to define the value of a distribution in D′(U) at a particular point of U. However, as is the case with functions, distributions on U restrict to give distributions on open subsets of U. Furthermore, distributions are locally determined in the sense that a distribution on all of U can be assembled from a distribution on an open cover of U satisfying some compatibility conditions on the overlap. Such a structure is known as a sheaf.
Restriction
Let U and V be open subsets of R^{n} with V ⊂ U. Let E_{VU} : D(V) → D(U) be the operator which extends by zero a given smooth function compactly supported in V to a smooth function compactly supported in the larger set U. Then the restriction mapping ρ_{VU} is defined to be the transpose of E_{VU}. Thus for any distribution T ∈ D′(U), the restriction ρ_{VU}T is a distribution in the dual space D′(V) defined by

\langle \rho_{VU}T,\varphi\rangle = \langle T, E_{VU}\varphi\rangle
for all test functions φ ∈ D(V).
Unless U = V, the restriction to V is neither injective nor surjective. Lack of surjectivity follows since distributions can blow up towards the boundary of V. For instance, if U = R and V = (0, 2), then the distribution

T(x) = \sum_{n=1}^\infty n\,\delta\left(x-\frac{1}{n}\right)
is in D′(V) but admits no extension to D′(U).
Support of a distribution
Let T ∈ D′(U) be a distribution on an open set U. Then T is said to vanish on an open subset V of U if T lies in the kernel of the restriction map ρ_{VU}. Explicitly, T vanishes on V if

\langle T,\varphi\rangle = 0
for all test functions φ ∈ C^{∞}(U) with support in V. Let V be a maximal open set on which the distribution T vanishes; i.e., V is the union of every open set on which T vanishes. The support of T is the complement of V in U. Thus

\operatorname{supp}\,T = U \setminus \bigcup\left\{V \mid \rho_{VU}T = 0\right\}.
The distribution T has compact support if its support is a compact set. Explicitly, T has compact support if there is a compact subset K of U such that for every test function φ whose support is completely outside of K, we have T(φ) = 0. Compactly supported distributions define continuous linear functionals on the space C^{∞}(U); the topology on C^{∞}(U) is defined such that a sequence of test functions φ_{k} converges to 0 if and only if all derivatives of φ_{k} converge uniformly to 0 on every compact subset of U. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. The embedding of C_{c}^{∞}(U) into C^{∞}(U), where the spaces are given their respective topologies, is continuous and has dense image. Thus compactly supported distributions can be identified with those distributions that can be extended from C_{c}^{∞}(U) to C^{∞}(U).
Tempered distributions and Fourier transform
By using a larger space of test functions, one can define the tempered distributions, a subspace of D′(R^{n}). These distributions are useful if one studies the Fourier transform: all tempered distributions have a Fourier transform, but not all distributions in D′(R^{n}) have one.
The space of test functions employed here, the so-called Schwartz space S(R^{n}), is the function space of all infinitely differentiable functions that are rapidly decreasing at infinity along with all partial derivatives. Thus φ : R^{n} → R is in the Schwartz space provided that any derivative of φ, multiplied by any power of x, converges to 0 as |x| → ∞. These functions form a complete topological vector space with a suitably defined family of seminorms. More precisely, let

p_{\alpha , \beta} (\varphi) = \sup_{x \in \mathbf{R}^n} \left| x^\alpha D^\beta \varphi(x) \right|
for multi-indices α and β. Then φ is a Schwartz function if all the values satisfy

p_{\alpha, \beta} (\varphi) < \infty.
The family of seminorms p_{α, β} defines a locally convex topology on the Schwartz space. The seminorms are, in fact, norms on the Schwartz space, since Schwartz functions are smooth. The Schwartz space is metrizable and complete. Because the Fourier transform exchanges differentiation D^{α} with multiplication by x^{α} (up to constant factors), this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function.
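As a concrete instance, the seminorm p_{2,0} of the Gaussian e^{−x²} is sup_x |x² e^{−x²}| = 1/e, attained at x = ±1. The grid search below is an illustrative numerical approximation, not an exact method:

```python
import math

phi = lambda x: math.exp(-x * x)   # the Gaussian, a model Schwartz function

def seminorm(alpha, f, lo=-10.0, hi=10.0, n=200001):
    """Approximate p_{alpha,0}(f) = sup |x^alpha f(x)| on a uniform grid
    (adequate here because x^alpha * f decays rapidly)."""
    h = (hi - lo) / (n - 1)
    return max(abs((lo + i * h) ** alpha * f(lo + i * h)) for i in range(n))

print(seminorm(2, phi))  # sup x^2 e^{-x^2} = 1/e ≈ 0.36788, attained at x = ±1
```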
The space of tempered distributions is defined as the (continuous) dual of the Schwartz space. In other words, a distribution T is a tempered distribution if and only if

\lim_{m\to\infty} T(\varphi_m) = 0
holds whenever

\lim_{m\to\infty} p_{\alpha , \beta} (\varphi_m) = 0
holds for all multi-indices α, β.
The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of L^{p}(R^{n}) for p ≥ 1 are tempered distributions.
The tempered distributions can also be characterized as slowly growing. This characterization is dual to the rapidly falling behaviour, e.g. \propto x^n \exp(-x^2), of the test functions.
To study the Fourier transform, it is best to consider complexvalued test functions and complexlinear distributions. The ordinary continuous Fourier transform F yields then an automorphism of Schwartz function space, and we can define the Fourier transform of the tempered distribution T by (FT)(ψ) = T(Fψ) for every Schwartz function ψ. FT is thus again a tempered distribution. The Fourier transform is a continuous, linear, bijective operator from the space of tempered distributions to itself. This operation is compatible with differentiation in the sense that

F\dfrac{dT}{dx}=ixFT
and also with convolution: if T is a tempered distribution and ψ is a slowly increasing infinitely differentiable function on R^{n} (meaning that all derivatives of ψ grow at most as fast as polynomials), then ψT is again a tempered distribution and

F(\psi T)=F\psi*FT\,
is the convolution of FT and Fψ. In particular, the Fourier transform of the constant function equal to 1 is the δ distribution.
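For instance, with the convention Fφ(ξ) = ∫ φ(x) e^{−iξx} dx (other normalizations differ by constant factors), the Gaussian e^{−x²/2} transforms to √(2π) e^{−ξ²/2}. The quadrature sketch below illustrates this; the truncation interval and grid are arbitrary choices:

```python
import cmath
import math

phi = lambda x: math.exp(-x * x / 2.0)   # Schwartz function

def fourier(f, xi, lo=-20.0, hi=20.0, n=40000):
    """F f(xi) = integral of f(x) e^{-i xi x} dx, midpoint rule on [lo, hi]
    (the tails of a Schwartz function are negligible beyond that)."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) * cmath.exp(-1j * xi * (lo + (i + 0.5) * h))
               for i in range(n)) * h

for xi in (0.0, 1.0, 2.0):
    exact = math.sqrt(2.0 * math.pi) * math.exp(-xi * xi / 2.0)
    print(xi, fourier(phi, xi).real, exact)  # numeric vs sqrt(2*pi)*exp(-xi^2/2)
```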
Convolution
Under some circumstances, it is possible to define the convolution of a function with a distribution, or even the convolution of two distributions.

Convolution of a test function with a distribution
If f ∈ D(R^{n}) is a compactly supported smooth test function, then convolution with f,

\begin{cases} C_f : \mathrm{D}(\mathbf{R}^n)\to \mathrm{D}(\mathbf{R}^n) \\ C_f : g \mapsto f * g \end{cases}
defines a linear operator which is continuous with respect to the LF space topology on D(R^{n}).
Convolution of f with a distribution T ∈ D′(R^{n}) can be defined by taking the transpose of C_{f} relative to the duality pairing of D(R^{n}) with the space D′(R^{n}) of distributions (Trèves 1967, Chapter 27). If f, g, φ ∈ D(R^{n}), then by Fubini's theorem

\left \langle C_fg, \varphi \right \rangle = \int_{\mathbf{R}^n}\varphi(x)\int_{\mathbf{R}^n}f(x-y)g(y)\,dy\,dx = \left \langle g, C_{\widetilde{f}}\varphi \right \rangle
where \widetilde{f}(x) = f(-x). Extending by continuity, the convolution of f with a distribution T is defined by

\langle f*T, \varphi\rangle = \left \langle T, \widetilde{f}*\varphi \right \rangle
for all test functions φ ∈ D(R^{n}).
An alternative way to define the convolution of a function f and a distribution T is to use the translation operator τ_{x} defined on test functions by

\tau_x \varphi(y) = \varphi(y-x)
and extended by the transpose to distributions in the obvious way (Rudin 1991, §6.29). The convolution of the compactly supported function f and the distribution T is then the function defined for each x ∈ R^{n} by

(f*T)(x) = \left \langle T, \tau_x\widetilde{f} \right \rangle.
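With this formula, convolution with δ acts as the identity: (f∗δ)(x) = ⟨δ, τ_x f̃⟩ = f̃(−x) = f(x). A minimal sketch, with `delta` and `bump` as illustrative stand-ins:

```python
import math

def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

delta = lambda phi: phi(0.0)   # Dirac delta as a functional

def conv_delta(f, x):
    """(f * delta)(x) = <delta, tau_x f~>, where f~(y) = f(-y),
    so (tau_x f~)(y) = f~(y - x) = f(x - y)."""
    return delta(lambda y: f(x - y))

for x in (-0.5, 0.0, 0.7):
    print(x, conv_delta(bump, x), bump(x))  # (f*delta)(x) equals f(x)
```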
It can be shown that the convolution of a compactly supported function and a distribution is a smooth function. If the distribution T has compact support as well, then f∗T is a compactly supported function, and the Titchmarsh convolution theorem (Hörmander 1983, Theorem 4.3.3) implies that

\operatorname{ch}(f*T) = \operatorname{ch}f + \operatorname{ch}T
where ch denotes the convex hull of the support.

Distribution of compact support
It is also possible to define the convolution of two distributions S and T on R^{n}, provided one of them has compact support. Informally, in order to define S∗T where T has compact support, the idea is to extend the definition of the convolution ∗ to a linear operation on distributions so that the associativity formula

S*(T*\varphi) = (S*T)*\varphi
continues to hold for all test functions φ. Hörmander (1983, §IV.2) proves the uniqueness of such an extension.
It is also possible to provide a more explicit characterization of the convolution of distributions (Trèves 1967, Chapter 27). Suppose that it is T that has compact support. For any test function φ in D(R^{n}), consider the function

\psi(x) = \langle T, \tau_{x}\varphi\rangle.
It can be readily shown that this defines a smooth function of x, which moreover has compact support. The convolution of S and T is defined by

\langle S * T,\varphi\rangle = \langle S, \psi\rangle.
This generalizes the classical notion of convolution of functions and is compatible with differentiation in the following sense:

\partial^\alpha(S*T)=(\partial^\alpha S)*T=S*(\partial^\alpha T).
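This compatibility can be illustrated on a grid: discrete convolution and the central-difference derivative are both linear, translation-invariant operations, so they commute up to discretization and boundary error. The functions below are arbitrary smooth stand-ins for S and T, not distributions in any essential sense.

```python
import numpy as np

# Smooth, rapidly decaying stand-ins for S and T (arbitrary choices)
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
S = np.exp(-x**2)
T = np.exp(-(x - 1.0)**2)

# Discrete convolution approximating (S*T)(x); dx makes it a Riemann sum
conv = np.convolve(S, T, mode="same") * dx

# d/dx (S*T) versus (dS/dx)*T, both via central differences
lhs = np.gradient(conv, dx)
rhs = np.convolve(np.gradient(S, dx), T, mode="same") * dx

# The two agree up to discretization and boundary error
print(np.max(np.abs(lhs - rhs)))
```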
This definition of convolution remains valid under less restrictive assumptions about S and T; see for instance Gel'fand & Shilov (1966–1968, v. 1, pp. 103–104) and Benedetto (1997, Definition 2.5.8).
Distributions as derivatives of continuous functions
The formal definition of distributions exhibits them as a subspace of a very large space, namely the topological dual of D(U) (or S(R^{d}) for tempered distributions). It is not immediately clear from the definition how exotic a distribution might be. To answer this question, it is instructive to see distributions built up from a smaller space, namely the space of continuous functions. Roughly, any distribution is locally a (multiple) derivative of a continuous function. A precise version of this result, given below, holds for distributions of compact support, tempered distributions, and general distributions. Generally speaking, no proper subset of the space of distributions contains all continuous functions and is closed under differentiation. This says that distributions are not particularly exotic objects; they are only as complicated as necessary.
Tempered distributions
If f ∈ S′(R^{n}) is a tempered distribution, then there exists a constant C > 0, and positive integers M and N such that for all Schwartz functions φ ∈ S(R^{n})

\left|\langle f, \varphi\rangle\right| \le C\sum\nolimits_{|\alpha|\le N,\,|\beta|\le M}\sup_{x\in\mathbf{R}^n} \left| x^\alpha D^\beta \varphi(x) \right| = C\sum\nolimits_{|\alpha|\le N,\,|\beta|\le M}p_{\alpha,\beta}(\varphi).
This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function F and a multi-index α such that

f = D^\alpha F.
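A concrete instance of this structure result: the Dirac delta on R is the second distributional derivative of the continuous ramp F(x) = max(x, 0), since two integrations by parts give ⟨F, φ''⟩ = φ(0). The sketch below checks this pairing numerically for a Gaussian test function; the grid resolution is an arbitrary choice.

```python
import numpy as np

# Gaussian test function phi and its second derivative (hand-computed)
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
phi2 = (4.0 * x**2 - 2.0) * np.exp(-x**2)   # phi''(x) for phi(x) = exp(-x^2)

# F(x) = max(x, 0) is continuous; the pairing <D^2 F, phi> := <F, phi''>
F = np.maximum(x, 0.0)
val = np.sum(F * phi2) * dx   # Riemann sum for the integral of F * phi''

# Distributionally D^2 max(x, 0) = delta, so the pairing equals phi(0) = 1
print(val)   # approximately 1.0
```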
Restriction of distributions to compact sets
If f ∈ D′(R^{n}), then for any compact set K ⊂ R^{n}, there exists a continuous function F compactly supported in R^{n} (possibly on a larger set than K itself) and a multi-index α such that f = D^{α}F on C_{c}^{∞}(K). This follows from the previously quoted result on tempered distributions by means of a localization argument.
Distributions with point support
If f has support at a single point {P}, then f is in fact a finite linear combination of distributional derivatives of the δ function at P. That is, there exists an integer m and complex constants a_{α} for multi-indices |α| ≤ m such that

f = \sum\nolimits_{|\alpha|\le m}a_\alpha D^\alpha(\tau_P\delta)
where τ_{P} is the translation operator.
General distributions
A version of the above theorem holds locally in the following sense (Rudin 1991). Let T be a distribution on U. Then one can find for every multi-index α a continuous function g_{α} such that

\displaystyle T = \sum\nolimits_{\alpha} D^{\alpha} g_{\alpha}
and any compact subset K of U intersects the supports of only finitely many g_{α}. Therefore, to evaluate T at a given smooth function f compactly supported in U, only finitely many g_{α} are needed, and hence the infinite sum above is well-defined as a distribution. If the distribution T is of finite order, then the g_{α} can be chosen so that only finitely many of them are nonzero.
Using holomorphic functions as test functions
The success of the theory led to investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals.
Problem of multiplication
It is easy to define the product of a distribution with a smooth function, or more generally the product of two distributions whose singular supports are disjoint. With more effort it is possible to define a well-behaved product of several distributions provided their wave front sets at each point are compatible. A limitation of the theory of distributions (and hyperfunctions) is that there is no associative product of two distributions extending the product of a distribution by a smooth function, as was proved by Laurent Schwartz in the 1950s. For example, if p.v. 1/x is the distribution obtained by the Cauchy principal value

\left(\operatorname{p.v.}\frac{1}{x}\right)[\phi] = \lim_{\epsilon\to 0^+} \int_{|x|\ge\epsilon} \frac{\phi(x)}{x}\, dx
for all φ ∈ S(R), and δ is the Dirac delta distribution then

\left(\delta \times x \right) \times \operatorname{p.v.} \frac{1}{x} = 0
but

\delta \times \left( x \times \operatorname{p.v.} \frac{1}{x} \right) = \delta
so the product of a distribution by a smooth function (which is always well-defined) cannot be extended to an associative product on the space of distributions.
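The two factors appearing in parentheses can be computed directly from the definitions, which makes the failure of associativity transparent: for any test function φ,

```latex
% x \delta = 0, since multiplication by the smooth function x gives
\langle x\,\delta, \phi\rangle
  = \langle \delta, x\phi\rangle
  = 0\cdot\phi(0) = 0,
% while x \cdot \operatorname{p.v.} \tfrac{1}{x} = 1, since the x's cancel:
\left\langle x\,\operatorname{p.v.}\frac{1}{x}, \phi\right\rangle
  = \lim_{\epsilon\to 0^+}\int_{|x|\ge\epsilon}\frac{x\,\phi(x)}{x}\,dx
  = \int_{\mathbf{R}}\phi(x)\,dx
  = \langle 1, \phi\rangle.
```

Hence the first grouping is 0 × p.v. 1/x = 0, while the second is δ × 1 = δ.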
Thus, nonlinear problems cannot in general be posed, let alone solved, within distribution theory alone. In the context of quantum field theory, however, solutions can be found. In more than two spacetime dimensions the problem is related to the regularization of divergences. Here Henri Epstein and Vladimir Glaser developed the mathematically rigorous (but extremely technical) causal perturbation theory. This does not solve the problem in other situations. Many other interesting theories are nonlinear, for example the Navier–Stokes equations of fluid dynamics.
In some cases a solution of the multiplication problem is dictated by the path integral formulation of quantum mechanics. Since this is required to be equivalent to the Schrödinger theory of quantum mechanics which is invariant under coordinate transformations, this property must be shared by path integrals. This fixes some products of distributions as shown by Kleinert & Chervyakov (2001). The result is equivalent to what can be derived from dimensional regularization (Kleinert & Chervyakov 2000).
Several not entirely satisfactory theories of algebras of generalized functions have been developed, among which Colombeau's (simplified) algebra is maybe the most popular in use today.
References

Benedetto, J.J. (1997), Harmonic Analysis and Applications, CRC Press.

Gårding, L. (1997), Some Points of Analysis and their History, American Mathematical Society.

Gel'fand, I.M.; Shilov, G.E. (1966–1968), Generalized Functions 1–5, Academic Press.

Grubb, G. (2009), Distributions and Operators, Springer.

Schaefer, Helmuth H.; Wolff, M.P. (1999), Topological Vector Spaces.

Sobolev, S.L. (1936), "Méthode nouvelle à résoudre le problème de Cauchy pour les équations linéaires hyperboliques normales", Mat. Sbornik 1: 39–72.

Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press.

Trèves, François (1967), Topological Vector Spaces, Distributions and Kernels, Academic Press, pp. 126 ff.
Further reading

M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. ISBN 0521091284 (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals)

H. Kleinert, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2006). See Chapter 11 for defining products of distributions from the physical requirement of coordinate invariance.

V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 0415273560

Oberguggenberger, Michael (2001), "Generalized function algebras", in Hazewinkel, Michiel.