Asymptotic analysis

In mathematical analysis, asymptotic analysis is a method of describing limiting behavior. The methodology has applications across science. Examples include:

  • in applied mathematics, the construction of numerical methods that approximate the solutions of equations;
  • in computer science, the analysis of algorithms, which considers the performance of algorithms on very large inputs;
  • in physics, the behavior of very large physical systems, statistical mechanics being a prime example;
  • in accident analysis, the identification of crash causation through count modelling of large numbers of crash counts over a given time and space.

The simplest example arises when considering a function f(n) whose properties we wish to describe as n becomes very large. Thus, if f(n) = n^2 + 3n, then the term 3n becomes insignificant compared to n^2 when n is very large. The function f(n) is said to be "asymptotically equivalent to n^2 as n → ∞", and this is written symbolically as f(n) ~ n^2.
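As a quick numerical sketch of this example (an illustration added here, not part of the formal development), the ratio f(n)/n^2 approaches 1 as n grows:

```python
# The ratio f(n)/n^2 for f(n) = n^2 + 3n tends to 1 as n grows,
# illustrating f(n) ~ n^2 in the sense defined below.
def f(n):
    return n**2 + 3*n

for n in (10, 100, 1000, 10**6):
    print(n, f(n) / n**2)   # ratios: 1.3, 1.03, 1.003, 1.000003
```

The relative contribution of the 3n term shrinks like 3/n, which is exactly what the definition below captures.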


  • Definition
  • Asymptotic expansion
  • Use in applied mathematics
  • Method of dominant balance
  • References


Definition

Formally, given functions f and g of a natural number variable n, one defines a binary relation

f \sim g \quad (\text{as } n\to\infty)

if and only if (according to Erdélyi, 1956)

\lim_{n \to \infty} \frac{f(n)}{g(n)} = 1 ~.

This relation is an equivalence relation on the set of functions of n. The equivalence class of f informally consists of all functions g which are approximately equal to f in a relative sense, in the limit.

Asymptotic expansion

An asymptotic expansion of a function f(x) is in practice an expression of that function in terms of a series, the partial sums of which do not necessarily converge, but such that taking any initial partial sum provides an asymptotic formula for f. The idea is that successive terms provide an increasingly accurate description of the order of growth of f. An example is Stirling's approximation.
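Stirling's example can be checked numerically (a minimal sketch, added for illustration): the leading term n! ~ √(2πn)(n/e)^n already has relative error of roughly 1/(12n), which is the size of the first omitted term of the full expansion.

```python
import math

# Leading term of Stirling's asymptotic expansion: n! ~ sqrt(2*pi*n) * (n/e)^n.
# The relative error behaves like 1/(12n), the first correction term of the
# full expansion n! ~ sqrt(2*pi*n) * (n/e)^n * (1 + 1/(12n) + ...).
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 50):
    rel_err = math.factorial(n) / stirling(n) - 1
    print(n, rel_err, 1 / (12 * n))   # rel_err closely tracks 1/(12n)
```

Taking more terms of the expansion improves the approximation at each fixed large n, which is the behaviour formalized next.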

In symbols, an asymptotic expansion means we have

f \sim g_1

but also

f - g_1 \sim g_2

and so on, up to

f - g_1 - \cdots - g_{k-1} \sim g_{k}

for each fixed k. In view of the definition of the \sim symbol, the last equation means

f - (g_1 + \cdots + g_k) = o(g_k)

in the little-o notation, i.e., f - (g_1 + \cdots + g_k) is much smaller than g_k.

The relation f - g_1 - \cdots - g_{k-1} \sim g_{k} takes its full meaning if g_{k+1} = o(g_k) for all k, which means the g_k form an asymptotic scale. In that case, some authors may abusively write

f \sim g_1 + \cdots + g_{k}

to denote the statement

f - (g_1 + \cdots + g_k) = o(g_k)\,.

One should however be careful that this is not a standard use of the \sim symbol, and that it does not correspond to the definition given in section Definition.

In the present situation, the relation g_{k} = o(g_{k-1}) actually follows from combining steps k and (k-1): by subtracting f - g_1 - \cdots - g_{k-2} = g_{k-1} + o(g_{k-1}) from f - g_1 - \cdots - g_{k-2} - g_{k-1} = g_{k} + o(g_{k}) one gets g_{k} + o(g_{k}) = o(g_{k-1})\,, i.e. g_{k} = o(g_{k-1}).

When the asymptotic expansion does not converge, then for any particular value of the argument there is a particular partial sum which provides the best approximation, and adding further terms decreases the accuracy. However, this optimal partial sum will usually contain more terms as the argument approaches the limit value.
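This optimal-truncation behaviour can be seen concretely in the classical divergent expansion of the complementary error function, erfc(x) ~ e^{-x^2}/(x\sqrt{\pi}) \sum_{n \ge 0} (-1)^n (2n-1)!!/(2x^2)^n. The sketch below (added for illustration, using Python's math.erfc as the reference value) shows the error of the partial sums first shrinking, reaching a minimum near the smallest term, and then growing:

```python
import math

def erfc_partial_sum(x, last_n):
    """Partial sum (terms 0..last_n) of the divergent asymptotic series
    erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_n (-1)^n (2n-1)!! / (2x^2)^n."""
    s, term = 0.0, 1.0
    for n in range(last_n + 1):
        s += term
        term *= -(2 * n + 1) / (2 * x * x)   # ratio of consecutive terms
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

x = 2.0
errors = [abs(erfc_partial_sum(x, n) - math.erfc(x)) for n in range(12)]
best = min(range(12), key=errors.__getitem__)
# Terms shrink in magnitude while (2n+1) < 2x^2 = 8, so the best truncation
# sits at an intermediate order; summing all 12 terms is much worse.
print(best, errors[best], errors[11])
```

For larger x the terms keep shrinking for longer, so the optimal partial sum contains more terms, matching the remark above.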

Asymptotic expansions typically arise in the approximation of certain integrals (Laplace's method, saddle-point method, method of steepest descent) or in the approximation of probability distributions (Edgeworth series). The famous Feynman graphs in quantum field theory are another example of asymptotic expansions which often do not converge.

Use in applied mathematics

Asymptotic analysis is a key tool for exploring the ordinary and partial differential equations which arise in the mathematical modelling of real-world phenomena.[1] An illustrative example is the derivation of the boundary layer equations from the full Navier–Stokes equations governing fluid flow. In many cases, the asymptotic expansion is in powers of a small parameter, ε: in the boundary layer case, this is the nondimensional ratio of the boundary layer thickness to a typical length scale of the problem. Indeed, applications of asymptotic analysis in mathematical modelling often[1] centre around a nondimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand.

Method of dominant balance

The method of dominant balance is used to determine the asymptotic behavior of solutions to an ODE without fully solving it. The process is iterative, in that the result obtained by performing the method once can be used as input when the method is repeated, to obtain as many terms in the asymptotic expansion as desired.[2]

The process goes as follows:

  1. Assume that the asymptotic behavior has the form
y(x) \sim e^{S(x)}~.
  2. Make an informed guess as to which terms in the ODE might be negligible in the limit of interest.
  3. Drop these terms and solve the resulting simpler ODE.
  4. Check that the solution is consistent with step 2. If this is the case, then one has the controlling factor of the asymptotic behavior; otherwise, one needs to try dropping different terms in step 2 instead.
  5. Repeat the process to higher orders, relying on the above result as the leading term in the solution.

Example. For arbitrary constants c and a, consider

x y'' + (c - x) y' - a y = 0~.

This differential equation cannot be solved exactly. However, it is useful to consider how the solutions behave for large x: it turns out that y behaves like e^{x} as x → ∞. More rigorously, we will have \log(y) \sim x, not y \sim e^{x}. Since we are interested in the behavior of y in the large-x limit, we change variables to y = \exp(S(x)), and re-express the ODE in terms of S(x),

x S'' + x S'^2 + (c - x) S' - a = 0~,

or, dividing through by x,

S'' + S'^2 + \left(\frac{c}{x} - 1\right) S' - \frac{a}{x} = 0~,

where we have used the product rule and chain rule to evaluate the derivatives of y.

Now suppose first that a solution to this ODE satisfies

S'^2\sim S' ~,

as x → ∞, so that

S'' = o(S'^2)~, \qquad \frac{c}{x}\,S' = o(S'^2)~, \qquad \frac{a}{x} = o(S'^2)

as x → ∞. Obtain then the dominant asymptotic behaviour by setting

S_0'^2 = S_0'~, \qquad \text{that is,} \quad S_0' = 1~.

If S_0 satisfies the above asymptotic conditions, then the above assumption is consistent. The terms we dropped will have been negligible with respect to the ones we kept.

S_0 is not a solution to the ODE for S, but it represents the dominant asymptotic behaviour, which is what we are interested in. Check that this choice for S_0 = x is consistent,

S_0'^2 = 1~, \qquad S_0'' = 0 = o(S_0'^2)~, \qquad \frac{c}{x}\,S_0' = \frac{c}{x} = o(S_0'^2)~, \qquad \frac{a}{x} = o(S_0'^2)~.

Everything is indeed consistent.

Thus the dominant asymptotic behaviour of a solution to our ODE has been found,

S_0\sim x\,
\log(y)\sim x~.

By convention, the full asymptotic series is written as

y \sim A x^p e^{\lambda x^r} \left(1 + \frac{u_1}{x} + \frac{u_2}{x^2} + \cdots + \frac{u_k}{x^k} + o\!\left(\frac{1}{x^{k}}\right)\right)~,

so to obtain at least the first term of this series we must take a further step and check whether there is a power of x in front of the exponential.

We proceed by introducing a new subleading dependent variable,

S(x)\equiv S_0(x)+C(x)\,

and then seek asymptotic solutions for C(x). Substituting into the above ODE for S(x), we find

C'' + C'^2 + C' + \frac{c}{x}\,C' + \frac{c - a}{x} = 0~.
Repeating the same process as before, we keep C' and (c-a)/x: balancing these two terms gives C' \sim (a-c)/x, and integrating yields

C_0 = (a - c)\log x = \log x^{a-c}~.

The leading asymptotic behaviour is then

y\sim x^{a-c}e^x~.
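As a sanity check of this result (a sketch with illustrative values a = 1, c = 2, for which one can verify directly that y(x) = (e^x - 1)/x solves x y'' + (c - x) y' - a y = 0), the ratio y(x)/(x^{a-c} e^x) indeed tends to a constant:

```python
import math

# Check the leading asymptotics y ~ x^(a-c) e^x for the illustrative case
# a = 1, c = 2, where y(x) = (exp(x) - 1)/x is an exact solution of
#     x y'' + (c - x) y' - a y = 0.
# Here x^(a-c) e^x = e^x / x, and y(x) / (e^x / x) = 1 - exp(-x) -> 1.
a, c = 1, 2

def y(x):
    return (math.exp(x) - 1) / x

def leading(x):
    return x ** (a - c) * math.exp(x)

for x in (5.0, 10.0, 20.0, 30.0):
    print(x, y(x) / leading(x))   # ratio -> 1 as x grows
```

For other choices of a and c the constant in front of x^{a-c} e^x differs, but the controlling factor found by dominant balance is the same.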

References

  1. ^ S. Howison, Practical Applied Mathematics, Cambridge University Press, Cambridge, 2005. ISBN 0-521-60369-2
  2. ^ Boyd, John P. (March 1999). "The Devil's Invention: Asymptotic, Superasymptotic and Hyperasymptotic Series". Acta Applicandae Mathematicae 56 (1): 1–98.
  • Erdélyi, A. (1956). Asymptotic Expansions. Dover Books on Mathematics.