Draft:Doob's h-transform
Probabilistic concept
From Wikipedia, the free encyclopedia
In mathematics, more specifically in stochastics, Doob's h-transform[1][2][3] is a method to transform a Markov process into a new Markov process that exhibits certain prescribed properties. Most prominently, for a homogeneous Markov process $X$ with (random) lifetime $\zeta$, the time-reversed process and the process conditioned to terminate at a single point $y$ can both be expressed as an $h$-transform of the original process via some deterministic function $h$.
The key advantage of Doob's $h$-transform is that the transition probabilities of the transformed process are explicitly given as functions of $h$. If $X$ is an Itô diffusion, then under relatively mild conditions the $h$-transformed process is again an Itô diffusion, with an explicit drift term depending on $h$.[2]
Motivation
Consider a $d$-dimensional Brownian motion $W = (W_t)_{t \ge 0}$ started at a point $x$ in the open unit ball $B \subset \mathbb{R}^d$ and stopped at the time $\tau$ when it first hits the boundary sphere $\partial B$, and suppose one is interested in the process $W^z$, that is, $W$ conditioned on hitting some point $z \in \partial B$ at time $\tau$. In law, $W^z$ is given by

$$\mathbb{P}(W^z_t \in A) = \mathbb{P}_x(W_t \in A \mid W_\tau = z)$$

for any Borel set $A \subseteq B$.
According to Bayes' formula, for some fixed $z \in \partial B$, conditioning on $\{W_\tau = z\}$ can be expressed in terms of the distribution of $W_\tau$ given $W_t$ and the distribution of $W_t$, namely for $t < \tau$

$$\mathbb{P}_x(W_t \in dy \mid W_\tau = z) = \frac{h(y, z)}{h(x, z)}\, \mathbb{P}_x(W_t \in dy)$$

with

$$\mathbb{P}_y(W_\tau \in dz) = h(y, z)\, \sigma(dz),$$

where $\sigma$ denotes the uniform distribution on the sphere $\partial B$. In this context, the function $h(\cdot, z)$ happens to be the well-known Poisson kernel, which has the explicit formula

$$h(y, z) = \frac{1 - |y|^2}{|y - z|^d}.$$

The distribution of $W_t$ under $\mathbb{P}_x$ is explicit as well, since Brownian motion is a Gaussian process, and so the Bayes formula for $\mathbb{P}_x(W_t \in dy \mid W_\tau = z)$ gives an explicit way to compute the transition probabilities of $W^z$. The Bayes formula from above is precisely an $h$-transform of $W$ with $h = h(\cdot, z)$.
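The key analytic fact behind this example is that the Poisson kernel $h(\cdot, z)$ is harmonic inside the ball. The following sketch (function names are illustrative, not from the cited references) evaluates the kernel for $d = 2$ and checks its harmonicity numerically with a finite-difference Laplacian:

```python
import numpy as np

def poisson_kernel(x, z, d=2):
    """Poisson kernel K(x, z) = (1 - |x|^2) / |x - z|^d of the unit ball,
    for x inside the ball and z on the unit sphere."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    return (1.0 - x @ x) / np.linalg.norm(x - z) ** d

def laplacian(f, x, eps=1e-4):
    """Second-order central finite-difference Laplacian of f at x."""
    x = np.asarray(x, float)
    total = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        total += (f(x + e) - 2.0 * f(x) + f(x - e)) / eps ** 2
    return total

z = np.array([1.0, 0.0])            # boundary point on the unit circle
x = np.array([0.3, -0.2])           # interior point
h = lambda y: poisson_kernel(y, z)  # the (harmonic) h-function of the example

print(poisson_kernel(x, z))  # ~ 1.64
print(laplacian(h, x))       # ~ 0: K(., z) is harmonic inside the ball
```

The vanishing Laplacian is what makes $h(\cdot, z)$ excessive for killed Brownian motion in the ball.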
For the simulation of $W^z$, one is interested in a stochastic differential equation for the latter. With the Bayes formula from above, the infinitesimal generator of $W^z$ can be related to the generator $\tfrac{1}{2}\Delta$ (one half the Laplace operator) of Brownian motion: for $f \in C_c^2(B)$, and by denoting the Feller semigroups of $W^z$ and $W$ by $(P^h_t)_{t \ge 0}$ and $(P_t)_{t \ge 0}$ respectively, it holds for $h = h(\cdot, z)$[3]

$$\lim_{t \downarrow 0} \frac{P^h_t f(x) - f(x)}{t} = \frac{1}{h(x)} \cdot \frac{1}{2} \Delta (h f)(x) = \frac{1}{2} \Delta f(x) + \frac{\nabla h(x)}{h(x)} \cdot \nabla f(x)$$

because $\Delta h = 0$. Comparing with the generator of a general Itô diffusion, one can conclude that $W^z$ solves the stochastic differential equation

$$dW^z_t = \nabla \log h(W^z_t)\, dt + dB_t$$

with a standard Brownian motion $B$.
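For $h(x) = (1 - |x|^2)/|x - z|^d$ the drift has the closed form $\nabla \log h(x) = -2x/(1 - |x|^2) - d\,(x - z)/|x - z|^2$. The sketch below (our own illustration, assuming this kernel; names are not from the references) cross-checks the closed form against a finite-difference gradient and takes one Euler–Maruyama step of the conditioned SDE:

```python
import numpy as np

def drift(x, z, d=2):
    """Drift b(x) = grad log h(x) of Brownian motion conditioned to exit
    the unit ball at z, where h(x) = (1 - |x|^2) / |x - z|^d."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    return -2.0 * x / (1.0 - x @ x) - d * (x - z) / ((x - z) @ (x - z))

def num_grad_log_h(x, z, d=2, eps=1e-6):
    """Central finite-difference gradient of log h, for cross-checking."""
    log_h = lambda y: np.log(1.0 - y @ y) - d * np.log(np.linalg.norm(y - z))
    x = np.asarray(x, float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (log_h(x + e) - log_h(x - e)) / (2.0 * eps)
    return g

z = np.array([0.0, 1.0])
x = np.array([0.1, 0.2])
print(drift(x, z))  # agrees with num_grad_log_h(x, z)

# One Euler-Maruyama step of dX = drift(X) dt + dB for the conditioned SDE:
rng = np.random.default_rng(0)
dt = 1e-3
x_next = x + drift(x, z) * dt + np.sqrt(dt) * rng.standard_normal(2)
```

Note that the drift pushes the path toward $z$ and away from the rest of the boundary, as one would expect of the conditioned process.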
Definition
Homogeneous Markov processes
Let $X = (X_t)_{t \ge 0}$ be a homogeneous Markov process with state space $E$, which is often assumed to be a locally compact Polish space to guarantee the existence of regular conditional expectations, and denote its transition semigroup by $(P_t)_{t \ge 0}$. Let $h \colon E \to [0, \infty)$ be an excessive function, that is,[1]

$$P_t h \le h \quad \text{for all } t \ge 0, \qquad \text{and} \qquad P_t h \to h \text{ pointwise as } t \downarrow 0.$$

Then the $h$-transform $X^h$ is defined as the Markov process with transition semigroup $(P^h_t)_{t \ge 0}$ defined by

$$P^h_t(x, dy) = \frac{h(y)}{h(x)}\, P_t(x, dy), \qquad x \in E_h := \{h > 0\}.$$

This definition implies that $P^h_t(x, E) = P_t h(x)/h(x) \le 1$, which means that the process need not stay inside its state space for all times. For such processes, one usually defines an additional cemetery state $\Delta \notin E$ that the process enters with probability $1 - P^h_t(x, E)$ for all $t$ and that it never leaves, i.e., $P^h_t(\Delta, \{\Delta\}) = 1$. In other words, the process is allowed to be killed at some finite time. On the extended state space $E_\Delta := E \cup \{\Delta\}$, the $P^h_t$ with the previous definitions are probability measures for all $x \in E_h$ by construction.
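In discrete time the same construction reads $P^h(x, y) = h(y)\,P(x, y)/h(x)$. As a small illustrative sketch (our own example, not from the cited references): for simple random walk on $\{0, \dots, N\}$ absorbed at the endpoints, $h(x) = x/N$ is harmonic, hence excessive, and its $h$-transform is the walk conditioned to reach $N$ before $0$:

```python
import numpy as np

N = 5
# Transition matrix of simple random walk on {0, ..., N}, absorbed at 0 and N.
P = np.zeros((N + 1, N + 1))
for x in range(1, N):
    P[x, x - 1] = P[x, x + 1] = 0.5
P[0, 0] = P[N, N] = 1.0

h = np.arange(N + 1) / N   # h(x) = x/N satisfies P h = h (harmonic)

# h-transform on the states where h > 0: P^h(x, y) = h(y) P(x, y) / h(x)
Ph = np.zeros_like(P)
for x in range(1, N + 1):
    for y in range(N + 1):
        Ph[x, y] = h[y] * P[x, y] / h[x]

print(Ph[2, 3], Ph[2, 1])  # 0.75 0.25: the conditioned walk drifts upward
```

Because $h$ is harmonic here, every row of the transformed matrix on $\{h > 0\}$ sums to one, so no cemetery state is needed in this example.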
Inhomogeneous Markov processes
Let $X = (X_t)_{t \ge 0}$ be an inhomogeneous Markov process with values in $E$ and inhomogeneous transition probabilities $p_{s,t}(x, dy)$ for $s \le t$, and let $h \colon [0, \infty) \times E \to [0, \infty)$ be space-time regular, that is, for $s \le t$ and $x \in E$,[2]

$$\int_E p_{s,t}(x, dy)\, h(t, y) = h(s, x).$$

Then the $h$-transform $X^h$ is defined as the (inhomogeneous) Markov process with transition probabilities $p^h_{s,t}$ defined by

$$p^h_{s,t}(x, dy) = \frac{h(t, y)}{h(s, x)}\, p_{s,t}(x, dy), \qquad h(s, x) > 0.$$
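A classical example of a space-time regular function, included here for illustration (for one-dimensional Brownian motion; this specific example is not taken from the draft's sources), is the exponential martingale, whose transform adds a constant drift:

```latex
% Space-time regularity of h(t,x) = exp(theta x - theta^2 t / 2) for
% one-dimensional Brownian motion with Gaussian kernel p_{s,t}(x,dy):
h(t,x) = e^{\theta x - \theta^2 t/2}, \qquad
\int_{\mathbb{R}} p_{s,t}(x, \mathrm{d}y)\, h(t,y)
  = e^{\theta x + \theta^2 (t-s)/2}\, e^{-\theta^2 t/2}
  = h(s,x), \quad s \le t.
% The resulting h-transform
p^h_{s,t}(x, \mathrm{d}y)
  = \frac{h(t,y)}{h(s,x)}\, p_{s,t}(x, \mathrm{d}y)
  = e^{\theta (y - x) - \theta^2 (t-s)/2}\, p_{s,t}(x, \mathrm{d}y)
% is the transition kernel of a Brownian motion with constant drift theta.
```

This recovers the Cameron–Martin/Girsanov change of drift as an $h$-transform.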
Properties
In the theory of $h$-transforms for homogeneous Markov processes $X$ on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$, it is assumed that[1]
- $X_0$ is distributed according to some distribution $\mu$ on $E$,
- $X$ has a lifetime $\zeta$, that is, a random variable (not necessarily a stopping time) such that $X_t \in E$ for $t < \zeta$ and $X_t = \Delta$ for $t \ge \zeta$,
- $X$ is a strong Markov process with càdlàg paths on $[0, \zeta)$.
For simplicity, the underlying probability space is usually chosen as the canonical space of càdlàg paths with values in $E_\Delta$ and $X$ as the identity (coordinate) map.
Probabilities
The transition semigroup $(P^h_t)_{t \ge 0}$ generates a family of probability measures $(\mathbb{P}^h_x)_{x \in E_h}$ on the canonical space of càdlàg paths. For any stopping time $T$, it holds for $x \in E_h$ and $A \in \mathcal{F}_T$[1]

$$\mathbb{P}^h_x(A \cap \{T < \zeta\}) = \frac{1}{h(x)}\, \mathbb{E}_x\!\left[ h(X_T)\, \mathbf{1}_A\, \mathbf{1}_{\{T < \zeta\}} \right].$$

For a family $(h_y)_{y \in S}$ of excessive functions on $E$ over an index set $S$ (and measurable w.r.t. the product $\sigma$-algebra on $E \times S$), if $h$ is of the form

$$h(x) = \int_S h_y(x)\, \nu(dy)$$

with some finite measure $\nu$ on $S$, then for $A \in \mathcal{F}_T$ the probability simplifies to

$$\mathbb{P}^h_x(A \cap \{T < \zeta\}) = \frac{1}{h(x)} \int_S h_y(x)\, \mathbb{P}^{h_y}_x(A \cap \{T < \zeta\})\, \nu(dy).$$
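The stopping-time identity can be checked by hand in discrete examples. The sketch below (our own illustration, assuming the gambler's-ruin setting with $h(x) = x/N$; not from the cited references) verifies $\mathbb{P}^h_x(A) = \mathbb{E}_x[h(X_T)\,\mathbf{1}_A]/h(x)$ for the deterministic time $T = 2$ and the path event $A$ of two upward steps:

```python
import numpy as np

N = 5
h = np.arange(N + 1) / N   # h(x) = x/N, harmonic for the absorbed walk
x = 2
p_up = 0.5                 # step probability of the original walk

# Left-hand side: probability of two up-steps under the h-transform,
# P^h(x, x+1) * P^h(x+1, x+2) with P^h(a, b) = h(b) P(a, b) / h(a).
ph = lambda a, b: h[b] * p_up / h[a]
lhs = ph(x, x + 1) * ph(x + 1, x + 2)

# Right-hand side: E_x[ h(X_2) 1_A ] / h(x) = p_up^2 * h(x+2) / h(x).
rhs = p_up ** 2 * h[x + 2] / h[x]

print(lhs, rhs)  # both ~ 0.5
```

The intermediate values of $h$ telescope out of the product, which is exactly why the formula holds for general stopping times.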
Killing
If $L$ is a co-optional time, that is, if for any $t \ge 0$ it holds that $L \circ \theta_t = (L - t)^+$, where $(\theta_t)_{t \ge 0}$ is the shift semigroup, then one can define

$$h(x) = \mathbb{P}_x(L > 0),$$

for which it holds that the process $X$ killed at time $L$ is an $h$-transform of $X$.[1]
Time-reversal
The time-reversal of $X$ at its lifetime $\zeta$ is defined as $\hat{X}_t := X_{(\zeta - t)-}$ for $0 < t < \zeta$. It is again a homogeneous Markov process, however now with left-continuous paths.[1]
If $X^h$ has initial measure $\mu_h(dx) := h(x)\, \mu(dx)/\mu(h)$, then the reverses of $X$ and $X^h$ have the same transition probabilities. In particular, for a co-optional time $L$ (for example a last hitting time), if $X$ starts in a fixed point almost surely, then the reverses of $X$ and of $X$ killed at $L$ have the same transition probabilities.[1]
Moreover, if $X$ possesses a dual process $\hat{X}$ (defined below), then its reverse can be expressed as an $h$-transform of its dual. To formulate this statement, we define for $X$ and $\hat{X}$ respectively the transition semigroups $(P_t)_{t \ge 0}$ and $(\hat{P}_t)_{t \ge 0}$, and the resolvents

$$U^\alpha(x, dy) = \int_0^\infty e^{-\alpha t}\, P_t(x, dy)\, dt, \qquad \hat{U}^\alpha(x, dy) = \int_0^\infty e^{-\alpha t}\, \hat{P}_t(x, dy)\, dt, \qquad \alpha > 0.$$

As the dual of $X$, $\hat{X}$ must fulfill, for some $\sigma$-finite measure $\xi$ on $E$:
- $\hat{X}$ is a strong Markov process with càdlàg paths,
- for all $\alpha > 0$ and measurable $f, g \ge 0$:

$$\int_E g(x)\, U^\alpha f(x)\, \xi(dx) = \int_E f(x)\, \hat{U}^\alpha g(x)\, \xi(dx),$$

- for all $\alpha > 0$ and $x \in E$, both $U^\alpha(x, \cdot)$ and $\hat{U}^\alpha(x, \cdot)$ are absolutely continuous w.r.t. $\xi$, with densities $u^\alpha(x, y)$ and $\hat{u}^\alpha(x, y)$.
Then, by setting

$$h(y) = \int_E u(x, y)\, \mu(dx),$$

where $u = \lim_{\alpha \downarrow 0} u^\alpha$ denotes the potential density, the reverse of $X$ and the $h$-transform of the dual $\hat{X}$ have the same transition probabilities.
Conditioning
Doob's $h$-transform can also be used to condition the process to hit a specific point or, more generally, to attain a pre-defined distribution at its lifetime.
Let $z \in E$ and let $u$ be the potential density of $X$ (the density of the resolvent for $\alpha = 0$) as defined above. Then with

$$h(x) = u(x, z)$$

it holds that $\lim_{t \uparrow \zeta} X^h_t = z$ almost surely on $\{\zeta < \infty\}$.
If $\mu = \delta_x$, that is, $X$ starts in $x$ almost surely, then one can achieve $X^h_{\zeta-} \sim \nu$ for any distribution $\nu$ on $E$ by choosing[1]

$$h(y) = \int_E \frac{u(y, z)}{u(x, z)}\, \nu(dz).$$
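A concrete instance of conditioning at a fixed time is the Brownian bridge (an inhomogeneous example, included here for illustration and not taken from the draft's sources): Brownian motion on $[0, T]$ conditioned to end at $z$ is the $h$-transform with $h(t, x) = p(T - t, x, z)$, the Gaussian transition density, which yields the familiar bridge drift $(z - x)/(T - t)$. The sketch checks this identity numerically:

```python
import numpy as np

def log_h(t, x, z, T):
    """log of h(t, x) = Gaussian transition density p(T - t, x, z),
    the space-time harmonic function that conditions on B_T = z."""
    s = T - t
    return -0.5 * np.log(2.0 * np.pi * s) - (z - x) ** 2 / (2.0 * s)

def bridge_drift(t, x, z, T):
    """Closed-form drift of the Brownian bridge: d/dx log h = (z - x)/(T - t)."""
    return (z - x) / (T - t)

t, x, z, T, eps = 0.3, -0.4, 1.0, 1.0, 1e-6
fd = (log_h(t, x + eps, z, T) - log_h(t, x - eps, z, T)) / (2.0 * eps)
print(bridge_drift(t, x, z, T), fd)  # both ~ 2.0
```

As $t \uparrow T$ the drift blows up and forces the path into $z$, the discrete analogue of $X^h_{\zeta-} = z$ almost surely.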
