Markov process

[Figure: Markov process example]

In probability theory and statistics, a Markov process or Markoff process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property. A Markov process can be thought of as 'memoryless': loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process's full history; that is, conditional on the present state of the system, its future and past are independent.[1]

Contents

  • 1 Introduction
  • 2 Markov property
    • 2.1 The general case
    • 2.2 For discrete-time Markov chains
  • 3 Examples
    • 3.1 Gambling
    • 3.2 A birth-death process
    • 3.3 A non-Markov example
  • 4 Markovian representations
  • 5 In popular culture
  • 6 References

Introduction

A Markov process is a stochastic model that has the Markov property. It can be used to model a random system that changes states according to a transition rule that depends only on the current state. This article describes the Markov process in a very general sense, a concept that is usually specified further. In particular, the system's state space and time index need to be specified. The following overview lists the instances of Markov processes for different levels of state space generality and for discrete time versus continuous time.

  • Discrete time, countable state space: Markov chain on a countable or finite state space
  • Discrete time, continuous or general state space: Harris chain (Markov chain on a general state space)
  • Continuous time, countable state space: continuous-time Markov process
  • Continuous time, continuous or general state space: any continuous stochastic process with the Markov property, e.g. the Wiener process

Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. For example, the term "Markov chain" is often used to indicate a Markov process with a finite or countable state space, but Markov chains on a general state space also fall under this description. Similarly, a Markov chain is usually defined for a discrete set of times (i.e. a discrete-time Markov chain),[2] although some authors use the same terminology where "time" can take continuous values.[3] In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not be real-valued; as with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general-state-space continuous-time Markov chain is so general that it has no designated term.

Markov processes arise in probability and statistics in one of two ways. A stochastic process, defined via a separate argument, may be shown mathematically to have the Markov property, and as a consequence to have the properties that can be deduced from it for all Markov processes. Alternatively, in modelling a process, one may assume the process to be Markov and take this as the basis for a construction. In modelling terms, assuming that the Markov property holds is one of a limited number of simple ways of introducing statistical dependence into a model for a stochastic process in a way that allows the strength of dependence at different lags to decline as the lag increases.

Markov property

The general case

Let (\Omega,\mathcal{F},\mathbb{P}) be a probability space with a filtration (\mathcal{F}_t,\ t \in T), for some (totally ordered) index set T, and let (S,\mathcal{S}) be a measurable space. An S-valued stochastic process X=(X_t,\ t\in T) adapted to the filtration is said to possess the Markov property with respect to \{\mathcal{F}_t\} if, for each A\in \mathcal{S} and each s,t\in T with s < t,

\mathbb{P}(X_t \in A |\mathcal{F}_s) = \mathbb{P}(X_t \in A| X_s).[4]

A Markov process is a stochastic process which satisfies the Markov property with respect to its natural filtration.

For discrete-time Markov chains

In the case where S is a discrete set with the discrete sigma algebra and T = \mathbb{N}, this can be reformulated as follows:

\mathbb{P}(X_n=x_n|X_{n-1}=x_{n-1},X_{n-2}=x_{n-2}, \dots, X_0=x_0)=\mathbb{P}(X_n=x_n|X_{n-1}=x_{n-1}).
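
To make this concrete, the following minimal Python sketch simulates a discrete-time Markov chain on a finite state space. The two-state transition matrix, the starting state, and the seed are illustrative choices, not taken from the article; row i of the matrix gives the distribution of the next state given that the current state is i, so each simulated step looks only at the current state.

    import numpy as np

    # Illustrative two-state chain; row i of P is the distribution of the
    # next state given current state i (each row sums to 1).
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    def simulate_chain(P, x0, n_steps, rng):
        """Return a sample path X_0, ..., X_{n_steps}; each step uses only
        the current state, which is exactly the Markov property above."""
        path = [x0]
        for _ in range(n_steps):
            path.append(int(rng.choice(len(P), p=P[path[-1]])))
        return path

    rng = np.random.default_rng(0)
    print(simulate_chain(P, x0=0, n_steps=10, rng=rng))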

Examples

Gambling

Suppose that you start with $10 and repeatedly wager $1 on a fair coin toss, either indefinitely or until you lose all of your money. If X_n represents the number of dollars you have after n tosses, with X_0 = 10, then the sequence \{X_n : n \in \mathbb{N} \} is a Markov process. If you have $12 now, then with even odds you will have either $11 or $13 after the next toss. This prediction is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12.

The process described here is a Markov chain on a countable state space that follows a random walk.
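
As a sketch, the gambling chain can be simulated directly; the $10 start and $1 stake follow the example, while the seed and the number of displayed steps are arbitrary.

    import itertools
    import random

    def gamble(start=10, stake=1, seed=None):
        """Yield the bankroll X_0, X_1, ... for repeated $1 wagers on a
        fair coin toss, stopping at ruin; each step depends only on the
        current bankroll, not on how it was reached."""
        rng = random.Random(seed)
        x = start
        yield x
        while x > 0:
            x += stake if rng.random() < 0.5 else -stake
            yield x

    # A fair game can run for a very long time before ruin, so show only
    # the first 20 values of one sample path.
    print(list(itertools.islice(gamble(seed=42), 20)))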

A birth-death process

Suppose that you are popping one hundred kernels of popcorn, and each kernel pops at an independent, exponentially distributed time. Let X_t denote the number of kernels that have popped up to time t. Then this is a continuous-time Markov process. If, after some amount of time, I want to guess how many kernels will pop in the next second, I need to know only how many kernels have popped so far. It will not help me to know when they popped, so knowing X_t for previous times t will not inform my guess.

The process described here is an approximation of a Poisson process; Poisson processes are also Markov processes.
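
A minimal sketch of the popcorn process, assuming an arbitrary per-kernel popping rate of 1.0 (the article fixes only the number of kernels): each kernel gets an independent exponential popping time, and X_t simply counts how many of those times have passed.

    import random

    def pop_times(n_kernels=100, rate=1.0, seed=0):
        """Draw an independent Exponential(rate) popping time per kernel."""
        rng = random.Random(seed)
        return sorted(rng.expovariate(rate) for _ in range(n_kernels))

    def popped_by(times, t):
        """X_t: the number of kernels that have popped up to time t."""
        return sum(1 for s in times if s <= t)

    times = pop_times()
    for t in (0.5, 1.0, 2.0):
        print(f"popped by t={t}: {popped_by(times, t)}")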

A non-Markov example

Suppose that you have a coin purse containing five quarters (each worth 25c), five nickels (each worth 5c), and five dimes (each worth 10c), and one by one, you randomly draw coins from the purse and set them on a table. If X_n represents the total value of the coins set on the table after n draws, with X_0 = 0, then the sequence \{X_n : n\in\mathbb{N}\} is not a Markov process.

To see why this is the case, suppose that in your first six draws you draw all five nickels and then a quarter, so X_6 = \$0.50. If we know not just X_6 but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel, so we can determine that X_7 \geq \$0.60 with probability 1. But if we do not know the earlier values, then based only on the value X_6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X_7 are influenced by our knowledge of values prior to X_6.
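
This failure of the Markov property can be checked by simulation. The sketch below (coin counts from the example; the trial count and seed are arbitrary) estimates the probability that X_7 >= $0.60 given only X_6 = $0.50, and given the specific five-nickels-then-a-quarter history; the latter comes out as 1, while the former is visibly smaller.

    import random
    from itertools import accumulate

    COINS = [25] * 5 + [5] * 5 + [10] * 5   # coin values in cents

    rng = random.Random(1)
    given_value, given_history = [], []
    for _ in range(100_000):
        coins = COINS[:]
        rng.shuffle(coins)
        x = list(accumulate(coins))          # x[n-1] is X_n in cents; X_0 = 0
        if x[5] == 50:                       # condition on X_6 = $0.50
            given_value.append(x[6] >= 60)
            if x[:6] == [5, 10, 15, 20, 25, 50]:   # five nickels, then a quarter
                given_history.append(x[6] >= 60)

    print("P(X_7 >= 0.60 | X_6 = 0.50)   ~", sum(given_value) / len(given_value))
    if given_history:                        # this exact history is rare
        print("P(X_7 >= 0.60 | full history) ~", sum(given_history) / len(given_history))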

Markovian representations

In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the 'current' and 'future' states. For example, let X be a non-Markovian process. Then define a process Y such that each state of Y represents a time interval of states of X. Mathematically, this takes the form:

Y(t) = \big\{ X(s): s \in [a(t), b(t)] \, \big\}.

If Y has the Markov property, then it is a Markovian representation of X.

An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.[5]
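
As a sketch of this construction with arbitrary illustrative coefficients: an AR(2) series X_t = a_1 X_{t-1} + a_2 X_{t-2} + \epsilon_t is not Markov in its scalar values, since X_t depends on two past values, but the stacked vector Y_t = (X_t, X_{t-1}) evolves by a rule that uses only Y_{t-1}, so Y is a Markovian representation of X.

    import numpy as np

    # Illustrative AR(2): X_t = a1*X_{t-1} + a2*X_{t-2} + eps_t.
    a1, a2 = 0.6, 0.3

    # Companion form: Y_t = (X_t, X_{t-1}) satisfies Y_t = A @ Y_{t-1} + (eps_t, 0),
    # so the stacked process Y is Markov even though X alone is not.
    A = np.array([[a1, a2],
                  [1.0, 0.0]])

    rng = np.random.default_rng(0)
    y = np.zeros(2)                  # Y_0 = (X_0, X_{-1}) = (0, 0)
    xs = []
    for _ in range(10):
        y = A @ y + np.array([rng.standard_normal(), 0.0])
        xs.append(float(y[0]))
    print(xs)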

In popular culture

The band Bad Religion has a song titled "The Markovian Process" on their album Stranger Than Fiction.

References

  1. ^ "Markov process (mathematics)". Britannica Online Encyclopedia.
  2. ^ Everitt, B. S. (2002). The Cambridge Dictionary of Statistics. CUP. ISBN 0-521-81099-X.
  3. ^ Dodge, Y. The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9.
  4. ^
  5. ^ Doblinger, G. (1998). "Smoothing of noisy AR signals using an adaptive Kalman filter". EUSIPCO 98, pp. 781–784. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.251.3078 [Accessed January 15, 2015].
