Grundlehren der
mathematischen Wissenschaften 293
A Series of Comprehensive Studies in Mathematics
Editors
S. S. Chern B. Eckmann P. de la Harpe
H. Hironaka F. Hirzebruch N. Hitchin
L. Hörmander M.-A. Knus A. Kupiainen
J. Lannes G. Lebeau M. Ratner D. Serre
Ya.G. Sinai N.J.A. Sloane J. Tits
M. Waldschmidt S. Watanabe
Managing Editors
M. Berger J. Coates S. R. S. Varadhan
Daniel Revuz
Marc Yor
Continuous Martingales
and Brownian Motion
Corrected Third Printing of the Third Edition
With 8 Figures
Springer
Daniel Revuz
Université Paris VII
Département de Mathématiques
2, place Jussieu
75251 Paris Cedex 05, France
e-mail: [email protected]
Marc Yor
Université Pierre et Marie Curie
Laboratoire de Probabilités
4, place Jussieu, Boîte courrier 188
75252 Paris Cedex 05, France
[email protected]
3rd edition 1999
Corrected 3rd printing 2005
The Library of Congress has catalogued the original printing as follows:
Revuz, D. Continuous Martingales and Brownian motion / Daniel Revuz, Marc Yor.
- 3rd ed. p. cm. - (Grundlehren der mathematischen Wissenschaften; 293) Includes
bibliographical references and index.
ISBN 978-3-662-06400-9 (eBook)
ISBN 978-3-642-08400-3
DOI 10.1007/978-3-662-06400-9
1. Martingales (Mathematics) 2. Brownian motion processes.
I. Yor, Marc. II. Title. III. Series. QA274.5.R48 1999 519.2'87 - dc21
Mathematics Subject Classification (2000): 60G07, 60H05
ISSN 0072-7830
ISBN 978-3-642-08400-3
This work is subject to copyright. All rights are reserved, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and
storage in data banks. Duplication of this publication or parts thereof is permitted only
under the provisions of the German Copyright Law of September 9, 1965, in its current
version, and permission for use must always be obtained from Springer-Verlag Berlin
Heidelberg GmbH. Violations are liable for prosecution under the German Copyright Law.
springeronline.com
© Springer-Verlag Berlin Heidelberg 1991, 1994, 1999
Originally published by Springer-Verlag Berlin Heidelberg New York in 1999
Softcover reprint of the hardcover 3rd edition 1999
Cover design: MetaDesign plus GmbH, Berlin
Typesetting: Typeset by Ingeborg Jebram, Heiligkreuzsteinach, and reformatted
by Kurt Mattes, Heidelberg, using a Springer LaTeX macro-package
Printed on acid-free paper
41/3142-543210
Preface
Since the first edition of this book (1991), interest in Brownian motion and
related stochastic processes has not abated in the least. This is probably due to
the fact that Brownian motion lies at the intersection of many fundamental classes
of processes. It is a continuous martingale, a Gaussian process, a Markov process
or, more specifically, a process with independent increments; it can actually be
defined, up to simple transformations, as the real-valued, centered process with
stationary independent increments and continuous paths. It is therefore no surprise
that a vast array of techniques may be successfully applied to its study, and we
consequently chose to organize the book in the following way.
After a first chapter where Brownian motion is introduced, each of the following
ones is devoted to a new technique or notion and to some of its applications
to Brownian motion. Among these techniques, two are of paramount importance:
stochastic calculus, the use of which pervades the whole book, and the powerful
excursion theory, both of which are introduced in a self-contained fashion and
with a minimum of apparatus. They have made much easier the proofs of many
results found in the epoch-making book of Itô and McKean: Diffusion Processes
and their Sample Paths, Springer (1965).
These two techniques can both be taught, as we did several times, in a pair
of one-semester courses. The first one, devoted to Brownian motion and stochastic
integration and centered around the famous Itô formula, would cover Chapters I
through V, with possibly the early parts of Chapters VIII and IX. The second
course, more advanced, would begin with the local times of Chapter VI and the
extension of stochastic calculus to convex functions, and work towards such topics
as time reversal, Bessel processes and the Ray-Knight theorems which describe
the Brownian local times in terms of Bessel processes. Chapter XII on excursion
theory plays a basic role in this second course. Finally, Chapter XIII describes the
asymptotic behavior of additive functionals of Brownian motion in dimensions 1
and 2, and especially of the winding numbers around a finite number of points for
planar Brownian motion.
The text is complemented at the end of each section by a large selection of
exercises, the more challenging being marked with the sign * or even **. On
the one hand, they should enable the reader to improve his understanding of the
notions introduced in the text. On the other hand, they deal with many results
without which the text might seem a bit "dry" or incomplete; their inclusion in the
text, however, would have increased forbiddingly the size of the book and deprived
the reader of the pleasure of working things out by himself. As it is, the text is
written with the assumption that the reader will try a good proportion of them,
especially those marked with the sign #, and in a few proofs we even indulged in
using the results of foregoing exercises.
The text is practically self-contained but for a few results of measure theory.
Besides classical calculus, we only ask the reader to have a good knowledge of
basic notions of integration and probability theory, such as almost-sure and mean
convergences, conditional expectations, independence and the like. Chapter 0
contains a few complements on these topics. Moreover, the early chapters include
some classical material on which the beginner can hone his skills.
Each chapter ends with notes and comments where, in particular, references
and credits are given. In view of the enormous literature which has been devoted
in the past to Brownian motion and related topics, we have in no way tried to
draw a historical picture of the subject and apologize in advance to those who
may feel slighted.
Likewise our bibliography is not even remotely complete and leaves out the
many papers which relate Brownian motion with other fields of Mathematics such
as Potential Theory, Harmonic Analysis, Partial Differential Equations and Geometry. A number of excellent books have been written on these subjects, some of
which we discuss in the notes and comments.
This leads us to mention some of the manifold offshoots of the Brownian
studies which have sprouted since the beginning of the nineties and are bound to
be still very much alive in the future:
- the profound relationships between branching processes, random trees and
Brownian excursions initiated by Neveu and Pitman and furthered by Aldous,
Le Gall, Duquesne, ...
- the important advances in the studies of Lévy processes which benefited from
the results found for Brownian motion or more generally diffusions and from the
deep understanding of the general theory of processes developed by P. A. Meyer
and his "École de Strasbourg". Bertoin's book: Lévy processes (Cambridge
Univ. Press, 1996) is a basic reference in these matters; so is the book of Sato:
Lévy processes and infinitely divisible distributions (Cambridge Univ. Press,
1999), although it is written in a different spirit and stresses the properties of
infinitely divisible laws.
- in a somewhat similar fashion, the deep understanding of Brownian local times
has led to intersection local times which serve as a basic tool for the study of
multiple points of the three-dimensional Brownian motion. The excellent lecture
course of Le Gall (Saint-Flour, 1992) spares us any regret we might have of
omitting this subject in our own book. One should also mention the results on
the Brownian curve due to Lawler-Schramm-Werner who initiated the study of
the Stochastic Loewner Equations.
- stochastic integration and Itô's formula have seen the extension of their domains
of validity beyond semimartingales to, for instance, certain Dirichlet processes
i.e. sums of a martingale and of a process with a vanishing quadratic variation
(Bertoin, Yamada). Let us also mention the anticipative stochastic calculus
(Skorokhod, Nualart, Pardoux). However, a general unifying theory is not yet
available; such research is justified by the interest in fractional Brownian
motion (Cheridito, Feyel-De la Pradelle, Valkeila, ... )
Finally, it is a pleasure to thank all those who, along the years, have helped us
to improve our successive drafts: J. Jacod, B. Maisonneuve, J. Pitman, A. Adhikari,
J. Azéma, M. Émery, H. Föllmer and the late P. A. Meyer, to whom we owe so
much. Our special thanks go to J. F. Le Gall, who put us straight on an inordinate
number of points, and Shi Zhan, who has helped us with the exercises.
Paris, August 2004
Daniel Revuz
Marc Yor
Table of Contents
Chapter 0. Preliminaries ................................................ 1
§1. Basic Notation ...................................................... 1
§2. Monotone Class Theorem .............................................. 2
§3. Completion .......................................................... 3
§4. Functions of Finite Variation and Stieltjes Integrals ................ 4
§5. Weak Convergence in Metric Spaces ................................... 9
§6. Gaussian and Other Random Variables ................................ 11
Chapter I. Introduction ................................................ 15
§ 1. Examples of Stochastic Processes. Brownian Motion .................. 15
§2. Local Properties of Brownian Paths ................................. 26
§3. Canonical Processes and Gaussian Processes ......................... 33
§4. Filtrations and Stopping Times ..................................... 41
Notes and Comments .................................................. 48
Chapter II. Martingales ................................................ 51
§1. Definitions, Maximal Inequalities and Applications ................. 51
§2. Convergence and Regularization Theorems ............................ 60
§3. Optional Stopping Theorem .......................................... 68
Notes and Comments ..................................................... 77
Chapter III. Markov Processes .......................................... 79
§ 1. Basic Definitions ................................................... 79
§2. Feller Processes .................................................... 88
§3. Strong Markov Property ............................................ 102
§4. Summary of Results on Levy Processes .............................. 114
Notes and Comments .................................................. 117
Chapter IV. Stochastic Integration ...................................... 119
§ 1. Quadratic Variations ................................................ 119
§2. Stochastic Integrals ................................................ 137
§3. Ito's Formula and First Applications ................................. 146
§4. Burkholder-Davis-Gundy Inequalities ................................ 160
§5. Predictable Processes ............................................... 171
Notes and Comments .................................................. 176
Chapter V. Representation of Martingales ............................... 179
§ 1. Continuous Martingales as Time-changed Brownian Motions .......... 179
§2. Conformal Martingales and Planar Brownian Motion ................. 189
§3. Brownian Martingales .............................................. 198
§4. Integral Representations ............................................ 209
Notes and Comments .................................................. 216
Chapter VI. Local Times ............................................... 221
§1. Definition and First Properties ...................................... 221
§2. The Local Time of Brownian Motion ................................ 239
§3. The Three-Dimensional Bessel Process .............................. 251
§4. First Order Calculus ................................................ 260
§5. The Skorokhod Stopping Problem ................................... 269
Notes and Comments .................................................. 277
Chapter VII. Generators and Time Reversal ............................. 281
§ 1. Infinitesimal Generators ............................................ 281
§2. Diffusions and Ito Processes ........................................ 294
§3. Linear Continuous Markov Processes ................................ 300
§4. Time Reversal and Applications ..................................... 313
Notes and Comments .................................................. 322
Chapter VIII. Girsanov's Theorem and First Applications ............... 325
§1. Girsanov's Theorem ................................................ 325
§2. Application of Girsanov's Theorem to the Study of Wiener's Space .... 338
§3. Functionals and Transformations of Diffusion Processes ............... 349
Notes and Comments .................................................. 362
Chapter IX. Stochastic Differential Equations ............................ 365
§ 1. Formal Definitions and Uniqueness .................................. 365
§2. Existence and Uniqueness in the Case of Lipschitz Coefficients ........ 375
§3. The Case of Holder Coefficients in Dimension One ................... 388
Notes and Comments .................................................. 399
Chapter X. Additive Functionals of Brownian Motion .................... 401
§ 1. General Definitions ................................................ 401
§2. Representation Theorem for Additive Functionals
of Linear Brownian Motion ......................................... 409
§3. Ergodic Theorems for Additive Functionals .......................... 422
§4. Asymptotic Results for the Planar Brownian Motion .................. 430
Notes and Comments .................................................. 436
Chapter XI. Bessel Processes and Ray-Knight Theorems .................. 439
§ 1. Bessel Processes ................................................... 439
§2. Ray-Knight Theorems .............................................. 454
§3. Bessel Bridges ..................................................... 463
Notes and Comments .................................................. 469
Chapter XII. Excursions ................................................ 471
§ 1. Prerequisites on Poisson Point Processes ............................. 471
§2. The Excursion Process of Brownian Motion .......................... 480
§3. Excursions Straddling a Given Time ................................. 488
§4. Descriptions of Ito's Measure and Applications ...................... 493
Notes and Comments .................................................. 511
Chapter XIII. Limit Theorems in Distribution ............................ 515
§ 1. Convergence in Distribution ........................................ 515
§2. Asymptotic Behavior of Additive Functionals of Brownian Motion .... 522
§3. Asymptotic Properties of Planar Brownian Motion .................... 531
Notes and Comments .................................................. 541
Appendix ............................................................. 543
§1. Gronwall's Lemma ................................................. 543
§2. Distributions ....................................................... 543
§3. Convex Functions .................................................. 544
§4. Hausdorff Measures and Dimension ................................. 547
§5. Ergodic Theory .................................................... 548
§6. Probabilities on Function Spaces .................................... 548
§7. Bessel Functions ................................................... 549
§8. Sturm-Liouville Equation ........................................... 550
Bibliography .......................................................... 553
Index of Notation ..................................................... 595
Index of Terms ........................................................ 599
Catalogue ............................................................. 605
Chapter 0. Preliminaries
In this chapter, we review a few basic facts, mainly from integration and classical
probability theories, which will be used throughout the book without further ado.
Some other prerequisites, usually from calculus, which will be used in some special
parts are collected in the Appendix at the end of the book.
§1. Basic Notation
Throughout the sequel, ℕ will denote the set of integers, namely ℕ = {0, 1, ...},
ℝ the set of real numbers, ℚ the set of rational numbers, ℂ the set of complex
numbers. Moreover ℝ_+ = [0, ∞[ and ℚ_+ = ℚ ∩ ℝ_+. By positive we will always
mean ≥ 0 and say strictly positive for > 0.
Likewise a real-valued function f defined on an interval of ℝ is increasing
(resp. strictly increasing) if x < y entails f(x) ≤ f(y) (resp. f(x) < f(y)).
If a, b are real numbers, we write
a ∧ b = min(a, b),    a ∨ b = max(a, b).
If E is a set and f a real-valued function on E, we use the notation
f^+ = f ∨ 0,    f^- = -(f ∧ 0),    ‖f‖ = sup_{x ∈ E} |f(x)|.
We will write a_n ↓ a (a_n ↑ a) if the sequence (a_n) of real numbers decreases
(increases) to a.
If (E, ℰ) and (F, ℱ) are measurable spaces, we write f ∈ ℰ/ℱ to say that
the function f : E → F is measurable with respect to ℰ and ℱ. If (F, ℱ) is
the real line endowed with the σ-field of Borel sets, we write simply f ∈ ℰ and
if, in addition, f is positive, we write f ∈ ℰ_+. The characteristic function of a set
A is written 1_A; thus, the statements A ∈ ℰ and 1_A ∈ ℰ have the same meaning.
If Ω is a set and f_i, i ∈ I, is a collection of maps from Ω to measurable
spaces (E_i, ℰ_i), the smallest σ-field on Ω for which the f_i's are measurable is
denoted by σ(f_i, i ∈ I). If 𝒞 is a collection of subsets of Ω, then σ(𝒞) is the
smallest σ-field containing 𝒞; we say that σ(𝒞) is generated by 𝒞. The σ-field
σ(f_i, i ∈ I) is generated by the family 𝒞 = {f_i^{-1}(A_i), A_i ∈ ℰ_i, i ∈ I}. Finally if
ℱ_i, i ∈ I, is a family of σ-fields on Ω, we denote by ⋁_i ℱ_i the σ-field generated
by ⋃_i ℱ_i. It is the union of the σ-fields generated by the countable sub-families
of ℱ_i, i ∈ I.
A measurable space (E, ℰ) is separable if ℰ is generated by a countable
collection of sets. In particular, if E is a LCCB space, i.e. a locally compact space
with countable basis, the σ-field of its Borel sets is separable; it will often be
denoted by ℬ(E). For instance, ℬ(ℝ^d) is the σ-field of Borel subsets of the
d-dimensional euclidean space.
For a measure m on (E, ℰ) and f ∈ ℰ, the integral of f with respect to m,
if it makes sense, will be denoted by any of the symbols
∫ f dm,   ∫ f(x) dm(x),   ∫ f(x) m(dx),   m(f),   ⟨m, f⟩,
and in case E is a subset of a euclidean space and m is the Lebesgue measure,
∫ f(x) dx.
If (Ω, ℱ, P) is a probability space, we will as usual use the words random
variable and expectation in lieu of measurable function and integral and write
E[X] = ∫_Ω X dP.
We will often write r.v. as shorthand for random variable. The law of the r.v. X,
namely the image of P by X, will be denoted by P_X or X(P). Two r.v.'s defined
on the same space are P-equivalent if they are equal P-a.s.
If 𝒢 is a sub-σ-field of ℱ, the conditional expectation of X with respect to 𝒢,
if it exists, is written E[X | 𝒢]. If X = 1_A, A ∈ ℱ, we may write P(A | 𝒢). If
𝒢 = σ(X_i, i ∈ I) we also write E[X | X_i, i ∈ I] or P(A | X_i, i ∈ I). As is well-known,
conditional expectations are defined up to P-equivalence, but we will often
omit the qualifying P-a.s. When we apply conditional expectation successively,
we shall abbreviate E[E[X | ℱ_1] | ℱ_2] to E[X | ℱ_1 | ℱ_2].
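On a finite probability space, conditional expectation with respect to the σ-field generated by a partition is just a blockwise P-weighted average, and the successive-conditioning abbreviation above reflects the tower rule E[E[X | ℱ_1] | ℱ_2] = E[X | ℱ_2] when ℱ_2 ⊂ ℱ_1. The following sketch is purely illustrative and not from the text; the space, the partitions and the helper cond_exp are our own:

```python
from fractions import Fraction

# A finite probability space: outcomes 0..5 with the uniform probability P.
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}
X = {w: Fraction(w) for w in omega}   # the random variable X(w) = w

def cond_exp(Z, partition):
    """E[Z | sigma(partition)]: on each block, the P-weighted average of Z."""
    Y = {}
    for block in partition:
        pb = sum(P[w] for w in block)
        avg = sum(P[w] * Z[w] for w in block) / pb
        for w in block:
            Y[w] = avg
    return Y

G1 = [{0, 1}, {2, 3}, {4, 5}]    # finer partition
G2 = [{0, 1, 2, 3}, {4, 5}]      # coarser: each block is a union of G1 blocks

Y1 = cond_exp(X, G1)             # E[X | F1]
Y12 = cond_exp(Y1, G2)           # E[X | F1 | F2] = E[E[X | F1] | F2]
Y2 = cond_exp(X, G2)             # tower rule: this equals E[X | F1 | F2]
assert Y12 == Y2
```

Exact rational arithmetic (Fraction) makes the equality check literal rather than approximate.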
We recall that if Ω is a Polish space (i.e. a metrizable complete topological
space with a countable dense subset), ℱ the σ-field of its Borel subsets and if 𝒢
is separable, then there is a regular conditional probability distribution given 𝒢.
If μ and ν are two σ-finite measures on (E, ℰ), we write μ ⊥ ν to mean that
they are mutually singular, μ ≪ ν to mean that μ is absolutely continuous with
respect to ν, and μ ≈ ν if they are equivalent, namely if μ ≪ ν and ν ≪ μ. The
Radon-Nikodym derivative of the absolutely continuous part of μ with respect to
ν is written dμ/dν|_ℰ and ℰ is dropped when there is no risk of confusion.
§2. Monotone Class Theorem
We will use several variants of this theorem which we state here without proof.
(2.1) Theorem. Let 𝒮 be a collection of subsets of Ω such that
i) Ω ∈ 𝒮,
ii) if A, B ∈ 𝒮 and A ⊂ B, then B∖A ∈ 𝒮,
iii) if {A_n} is an increasing sequence of elements of 𝒮, then ⋃_n A_n ∈ 𝒮.
If 𝒮 ⊃ 𝒞 where 𝒞 is closed under finite intersections, then 𝒮 ⊃ σ(𝒞).
The above version deals with sets. We turn to the functional version.
(2.2) Theorem. Let ℋ be a vector space of bounded real-valued functions on Ω
such that
i) the constant functions are in ℋ,
ii) if {h_n} is an increasing sequence of positive elements of ℋ such that h =
sup_n h_n is bounded, then h ∈ ℋ.
If 𝒞 is a subset of ℋ which is stable under pointwise multiplication, then ℋ
contains all the bounded σ(𝒞)-measurable functions.
The above theorems will be used, especially in Chap. III, in the following
set-up. We have a family f_i, i ∈ I, of mappings of a set Ω into measurable spaces
(E_i, ℰ_i). We assume that for each i ∈ I there is a subclass 𝒩_i of ℰ_i, closed under
finite intersections and such that σ(𝒩_i) = ℰ_i. We then have the following results.
(2.3) Theorem. Let 𝒩 be the family of sets of the form ⋂_{i ∈ J} f_i^{-1}(A_i) where A_i
ranges through 𝒩_i and J ranges through the finite subsets of I; then σ(𝒩) =
σ(f_i, i ∈ I).
(2.4) Theorem. Let ℋ be a vector space of real-valued functions on Ω, containing
1_Ω, satisfying property ii) of Theorem (2.2) and containing all the functions
1_Γ for Γ ∈ 𝒩. Then ℋ contains all the bounded, real-valued, σ(f_i, i ∈ I)-measurable
functions.
§3. Completion
If (E, ℰ) is a measurable space and μ a probability measure on ℰ, the completion
ℰ^μ of ℰ with respect to μ is the σ-field of subsets B of E such that there exist
B_1 and B_2 in ℰ with B_1 ⊂ B ⊂ B_2 and μ(B_2∖B_1) = 0. If 𝒴 is a family of
probability measures on ℰ, the σ-field
ℰ^𝒴 = ⋂_{μ ∈ 𝒴} ℰ^μ
is called the completion of ℰ with respect to 𝒴. If 𝒴 is the family of all probability
measures on ℰ, then ℰ^𝒴 is denoted by ℰ* and is called the σ-field of universally
measurable sets.
If ℱ is a sub-σ-algebra of ℰ^𝒴, we define the completion of ℱ in ℰ^𝒴 with
respect to 𝒴 as the family of sets A with the following property: for each μ ∈ 𝒴,
there is a set B in ℱ such that AΔB is in ℰ^𝒴 and μ(AΔB) = 0. This family will be
denoted ℱ^𝒴; the reader will show that it is a σ-field which is larger than ℱ.
Moreover, it has the following characterization.
(3.1) Proposition. A set A is in ℱ^𝒴 if and only if for every μ ∈ 𝒴 there is a set
B_μ in ℱ and two μ-negligible sets N_μ and M_μ in ℰ such that
B_μ ∖ N_μ ⊂ A ⊂ B_μ ∪ M_μ.
Proof. Left to the reader as an exercise. □
The following result gives a means of checking the measurability of functions
with respect to σ-algebras of the ℱ^𝒴-type.
(3.2) Proposition. For i = 1, 2, let (E_i, ℰ_i) be a measurable space, 𝒴_i a family of
probability measures on ℰ_i and ℱ_i a sub-σ-algebra of ℰ_i^{𝒴_i}. If f is a map which
is both in ℰ_1/ℰ_2 and ℱ_1/ℱ_2, and if f(μ) ∈ 𝒴_2 for every μ ∈ 𝒴_1, then f is in
ℱ_1^{𝒴_1}/ℱ_2^{𝒴_2}.
Proof. Let A be in ℱ_2^{𝒴_2}. For μ ∈ 𝒴_1, since ν = f(μ) is in 𝒴_2, there is a set
B_ν ∈ ℱ_2 and two ν-negligible sets N_ν and M_ν in ℰ_2 such that
B_ν ∖ N_ν ⊂ A ⊂ B_ν ∪ M_ν.
The set B_μ = f^{-1}(B_ν) belongs to ℱ_1, the sets N_μ = f^{-1}(N_ν) and M_μ = f^{-1}(M_ν)
are μ-negligible sets of ℰ_1 and
B_μ ∖ N_μ ⊂ f^{-1}(A) ⊂ B_μ ∪ M_μ.
This entails that f^{-1}(A) ∈ ℱ_1^{𝒴_1}, which completes the proof. □
§4. Functions of Finite Variation and Stieltjes Integrals
This section is devoted to a set of properties which will be used constantly throughout the book.
We deal with real-valued, right-continuous functions A with domain [0, ∞[.
The results may be easily extended to the case of ℝ. The value of A at t is denoted
A_t or A(t). Let Δ be a subdivision of the interval [0, t] with 0 = t_0 < t_1 < ... <
t_n = t; the number |Δ| = sup_i |t_{i+1} - t_i| is called the modulus or mesh of Δ. We
consider the sum
S_t^Δ = Σ_i |A_{t_{i+1}} - A_{t_i}|.
If Δ' is another subdivision which is a refinement of Δ, that is, every point t_i of
Δ is a point of Δ', then plainly S_t^{Δ'} ≥ S_t^Δ.
(4.1) Definition. The function A is of finite variation if for every t
S_t = sup_Δ S_t^Δ < +∞.
The function t → S_t is called the total variation of A and S_t is the variation of A on
[0, t]. The function S is obviously positive and increasing and, if lim_{t→∞} S_t < +∞,
the function A is said to be of bounded variation.
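The subdivision sums S_t^Δ above are easy to compute numerically, and refining the subdivision can only increase them, with supremum the total variation. The following sketch (our own illustration, not from the text) uses A = cos on [0, 2π], whose variation is ∫_0^{2π} |sin s| ds = 4:

```python
import math

def subdivision_sum(A, ts):
    """S_t^Delta = sum of |A(t_{i+1}) - A(t_i)| over the subdivision ts."""
    return sum(abs(A(ts[i + 1]) - A(ts[i])) for i in range(len(ts) - 1))

A = math.cos          # a C^1 function, hence of finite variation on compacts
t = 2 * math.pi

# Each grid below refines the previous one (the n's divide each other),
# so the sums increase; their supremum is the total variation S_t = 4.
prev = 0.0
for n in (1, 2, 4, 8, 1024):
    ts = [t * k / n for k in range(n + 1)]
    s = subdivision_sum(A, ts)
    assert s >= prev - 1e-12      # refining never decreases S_t^Delta
    prev = s
assert abs(prev - 4.0) < 1e-9     # the grid contains the extrema of cos
```

Since the grid points include the extrema 0, π, 2π of cos, the sum telescopes exactly to 4 on each monotone piece.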
The same notions could be defined on any interval [a, b]. We shall say that a
function A on the whole line is of finite variation if it is of finite variation on any
compact interval, but not necessarily of bounded variation on the whole of ℝ.
Let us observe that C^1-functions are of finite variation. Monotone finite functions
are of finite variation, and conversely we have the
(4.2) Proposition. Any function of finite variation is the difference of two increasing
functions.
Proof. The functions (S + A)/2 and (S - A)/2 are increasing, as the reader can
easily show, and A is equal to their difference. □
This decomposition is moreover minimal in the sense that if A = F - G where
F and G are positive and increasing, then (S + A)/2 ≤ F and (S - A)/2 ≤ G.
As a result, the function A has left limits at any t ∈ ]0, ∞[. We write A_{t-} or
A(t-) for lim_{s↑t} A_s and we set A_{0-} = 0. We moreover set ΔA_t = A_t - A_{t-}; this
is the jump of A at t.
The importance of these functions lies in the following
(4.3) Theorem. There is a one-to-one correspondence between Radon measures μ
on [0, ∞[ and right-continuous functions A of finite variation given by
A_t = μ([0, t]).
Consequently A_{t-} = μ([0, t[) and ΔA_t = μ({t}). Moreover, if μ({0}) = 0, the
variation S of A corresponds to the total variation |μ| of μ, and the decomposition
in the proof of Proposition (4.2) corresponds to the minimal decomposition of μ
into positive and negative parts.
If f is a locally bounded Borel function on ℝ_+, its Stieltjes integral with
respect to A, denoted
∫_0^t f(s) dA_s   or   ∫_{]0,t]} f(s) dA_s,
is the integral of f with respect to μ on the interval ]0, t]. The reader will observe
that the jump of A at zero does not come into play and that ∫_0^t dA_s = A_t - A_0. If
we want to consider the integral on [0, t], we will write ∫_{[0,t]} f(s) dA_s. The integral
on ]0, t] is also denoted by (f · A)_t. We point out that the map t → (f · A)_t is
itself a right-continuous function of finite variation.
A consequence of the Radon-Nikodym theorem applied to μ and to the
Lebesgue measure λ is the
(4.4) Theorem. A function A of finite variation is λ-a.e. differentiable and there
exists a function B of finite variation such that B' = 0 λ-a.e. and
A_t = B_t + ∫_0^t A'_s ds.
The function A is said to be absolutely continuous if B = 0. The corresponding
measure μ is then absolutely continuous with respect to λ.
We now turn to a series of notions and properties which are very useful in
handling Stieltjes integrals.
(4.5) Proposition (Integration by parts formula). If A and B are two functions
of finite variation, then for any t,
A_t B_t = A_0 B_0 + ∫_0^t A_s dB_s + ∫_0^t B_{s-} dA_s.
Proof. If μ (resp. ν) is associated with A (resp. B), both sides of the equality
are equal to μ ⊗ ν([0, t]^2); indeed ∫_0^t A_s dB_s is the measure of the upper triangle
including the diagonal, ∫_0^t B_{s-} dA_s the measure of the lower triangle excluding the
diagonal, and A_0 B_0 = μ ⊗ ν({(0, 0)}). □
To reestablish the symmetry, the above formula can also be written
A_t B_t = ∫_0^t A_{s-} dB_s + ∫_0^t B_{s-} dA_s + Σ_{s≤t} ΔA_s ΔB_s.
The sum on the right is meaningful as A and B have only countably many discontinuities.
In fact, A can be written uniquely as A_t = A_t^c + Σ_{s≤t} ΔA_s where A^c
is continuous and of finite variation.
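For pure-jump functions the Stieltjes integrals above reduce to finite sums, so the integration by parts formula can be verified exactly. The representation of A and B by finitely many jumps and the helper names are our own illustrative choices, not from the text:

```python
from fractions import Fraction as F

# Pure-jump right-continuous functions of finite variation on [0, oo[,
# each given by its jumps DeltaA_s at finitely many times s (A_{0-} = 0).
jumps_A = {0: F(2), 1: F(-1), 3: F(5)}
jumps_B = {0: F(1), 2: F(4), 3: F(-2)}

def value(jumps, t):        # A_t = sum of DeltaA_s over s <= t
    return sum(dx for s, dx in jumps.items() if s <= t)

def left_value(jumps, t):   # A_{t-} = sum of DeltaA_s over s < t
    return sum(dx for s, dx in jumps.items() if s < t)

def stieltjes(f, jumps, t):
    """Integral over ]0, t] of f dA: the jump of A at zero does not count."""
    return sum(f(s) * dx for s, dx in jumps.items() if 0 < s <= t)

t = 3
lhs = value(jumps_A, t) * value(jumps_B, t)
rhs = (value(jumps_A, 0) * value(jumps_B, 0)
       + stieltjes(lambda s: value(jumps_A, s), jumps_B, t)       # A_s dB_s
       + stieltjes(lambda s: left_value(jumps_B, s), jumps_A, t)) # B_{s-} dA_s
assert lhs == rhs   # A_t B_t = A_0 B_0 + int A_s dB_s + int B_{s-} dA_s
```

Exact rational arithmetic makes the identity hold literally; with these jumps both sides equal 18.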
The next result is a "chain rule" formula.
(4.6) Proposition. If F is a C^1-function and A is of finite variation, then F(A) is
of finite variation and
F(A_t) = F(A_0) + ∫_0^t F'(A_{s-}) dA_s + Σ_{s≤t} {F(A_s) - F(A_{s-}) - F'(A_{s-}) ΔA_s}.
Proof. The result is true for F(x) = x, and if it is true for F, it is true for xF(x),
as one can deduce from the integration by parts formula; consequently the result
is true for polynomials. The proof is completed by approximating a C^1-function
by a sequence of polynomials. □
As an application of the notions introduced thus far, let us prove the useful
(4.7) Proposition. If A is a right-continuous function of finite variation, then
Y_t = Y_0 Π_{s≤t} (1 + ΔA_s) exp(A_t^c - A_0^c)
is the only locally bounded solution of the equation
Y_t = Y_0 + ∫_0^t Y_{s-} dA_s.
Proof. By applying the integration by parts formula to Y_0 Π_{s≤t} (1 + ΔA_s) and
exp(∫_0^t dA_s^c), which are both of finite variation, it is easily seen that Y is a solution
of the above equation.
Let Z be the difference of two locally bounded solutions and M_t = sup_{s≤t} |Z_s|.
It follows from the equality Z_t = ∫_0^t Z_{s-} dA_s that |Z_t| ≤ M_t S_t where S is the
variation of A; then, thanks to the integration by parts formula,
|Z_t| ≤ M_t ∫_0^t S_{s-} dS_s ≤ M_t S_t^2/2,
and inductively,
|Z_t| ≤ M_t ∫_0^t (S_{s-}^n/n!) dS_s ≤ M_t S_t^{n+1}/(n + 1)!,
which proves that Z = 0. □
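For a pure-jump A (so that A^c = 0), the solution of Proposition (4.7) reduces to the product Y_0 Π_{s≤t} (1 + ΔA_s), and the equation can be checked jump by jump since the integral is a finite sum. A sketch with hypothetical jump data of our own choosing:

```python
from fractions import Fraction as F

# A pure-jump, right-continuous A of finite variation (A^c = 0),
# given by its jumps DeltaA_s at finitely many times s > 0.
jumps = {1: F(1, 2), 2: F(-1, 3), 4: F(3)}
times = sorted(jumps)
Y0 = F(1)

def Y(t):
    """Candidate solution of (4.7): Y_t = Y_0 * product of (1 + DeltaA_s), s <= t."""
    y = Y0
    for s in times:
        if s <= t:
            y *= 1 + jumps[s]
    return y

# Check Y_t = Y_0 + integral over ]0, t] of Y_{s-} dA_s at every jump time.
# The integral is the sum of Y_{s-} DeltaA_s, and Y_{s-} equals the value of Y
# just after the previous jump (Y is constant between jumps).
integral = F(0)
y_left = Y0
for s in times:
    integral += y_left * jumps[s]
    assert Y(s) == Y0 + integral
    y_left = Y(s)
```

This is the finite-variation case of what is elsewhere called the Doléans-Dade exponential; here the check is exact in rational arithmetic.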
We close this section with a study of the fundamental technique of time changes,
which allows the explicit computation of some Stieltjes integrals. We consider
now an increasing, possibly infinite, right-continuous function A and, for s ≥ 0,
we define
C_s = inf{t : A_t > s},
where, here and below, it is understood that inf ∅ = +∞. We will also say that
C is the (right-continuous) inverse of A.
To understand what follows, it is useful to draw Figure 1 (see below) showing
the graph of A and the way to find C. The function C is obviously increasing, so
that
C_{s-} = lim_{u↑s} C_u
is well-defined for every s. It is easily seen that
C_{s-} = inf{t : A_t ≥ s}.
In particular, if A has a constant stretch at level s, then C_s will be at the right end
and C_{s-} at the left end of the stretch; moreover C_{s-} ≠ C_s only if A has a constant
stretch at level s. By convention C_{0-} = 0.
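The inverse C_s = inf{t : A_t > s} can be computed by bisection for a concrete A, and a constant stretch of A then visibly becomes a jump of C, as described above. The particular A below (constant, equal to 1, on [1, 2]) is our own illustrative choice:

```python
import math

# An increasing continuous A with a constant stretch: A_t = t for t <= 1,
# A_t = 1 on [1, 2], and A_t = 1 + (t - 2) for t >= 2.
def A(t):
    return min(t, 1.0) + max(t - 2.0, 0.0)

def C(s, hi=100.0, n=60):
    """C_s = inf{t : A_t > s}, by bisection; inf of the empty set is +inf."""
    if A(hi) <= s:
        return math.inf
    lo = 0.0
    for _ in range(n):           # invariant: A(lo) <= s < A(hi)
        mid = (lo + hi) / 2
        if A(mid) > s:
            hi = mid
        else:
            lo = mid
    return hi

# The level stretch of A at height 1 becomes a jump of C at s = 1:
assert abs(C(0.5) - 0.5) < 1e-9          # before the stretch, C_s = s
assert abs(C(1.0) - 2.0) < 1e-9          # C_1 sits at the right end
assert abs(C(1.0 - 1e-9) - 1.0) < 1e-3   # C_{1-} sits at the left end
```

The strict inequality A_t > s in the definition is what the bisection maintains, which is exactly the source of the right continuity of C noted in the remarks below the lemma.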
(4.8) Lemma. The function C is right-continuous. Moreover A(C_s) ≥ s and
A_t = inf{s : C_s > t}.
Proof. That A(C_s) ≥ s is obvious. Moreover, the set {t : A_t > s} is the union of the
sets {t : A_t > s + ε} for ε > 0, which proves the right continuity of C.
If furthermore C_s > t, then t ∉ {u : A_u > s} and A_t ≤ s. Consequently,
A_t ≤ inf{s : C_s > t}. On the other hand, C(A_t) ≥ t for every t, hence C(A_{t+ε}) ≥
t + ε > t, which forces
A_{t+ε} ≥ inf{s : C_s > t}
and, because of the right continuity of A,
A_t ≥ inf{s : C_s > t}. □
Fig. 1. [The graph of A and the construction of its inverse C; figure not reproduced.]
Remarks. Thus A and C play symmetric roles. But if A is continuous, C is still
only right-continuous in general; in that case, however, A(C_s) = s but C(A_s) > s
if s is in an interval of constancy of A. As already observed, the jumps of C
correspond to the level stretches of A and vice-versa; thus C is continuous iff
A is strictly increasing. The right continuity of C does not stem from the right
continuity of A but from its definition with a strict inequality; likewise, C_{s-} is
left-continuous.
We now state a "change of variables" formula.
(4.9) Proposition. If f is a positive Borel function on [0, ∞[,

∫_{[0,∞[} f(u) dA_u = ∫_0^∞ f(C_s) 1_{(C_s<∞)} ds.

Proof. If f = 1_{[0,v]}, the formula reads

A_v = ∫_0^∞ 1_{(C_s≤v)} ds

and is then a consequence of the definition of C. By taking differences, the equality holds for the indicators of sets ]u, v], and by the monotone class theorem for any f with compact support. Taking increasing limits yields the result in full generality. □
In the same way, we also have

∫_{[0,∞[} f(u) dC_u = ∫_0^∞ f(A_s) 1_{(A_s<∞)} ds.
The right member in the proposition may also be written

∫_0^{A_∞} f(C_s) ds,

because C_s < ∞ if and only if A_∞ > s.
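Proposition (4.9) can be checked numerically on a simple example. The sketch below is our own illustration (not part of the text): take A_t = t² on [0, 1] and constant afterwards, so that C_s = √s for s < 1 and C_s = +∞ for s ≥ 1; with f(u) = u both sides of the formula equal 2/3.

```python
import math

# Left side:  integral over [0, 1] of f(u) dA_u = int_0^1 u * 2u du = 2/3.
# Right side: int_0^infty f(C_s) 1_{C_s < infty} ds = int_0^1 sqrt(s) ds = 2/3.
n = 100_000
h = 1.0 / n

# Midpoint evaluation of f against the increments of A_t = t^2.
left = sum(((i + 0.5) * h) * (((i + 1) * h) ** 2 - (i * h) ** 2)
           for i in range(n))
# The time-changed integrand f(C_s) = sqrt(s), integrated over s in [0, 1];
# the indicator kills the contribution of s >= 1 where C_s = +infinity.
right = sum(math.sqrt((i + 0.5) * h) * h for i in range(n))
```

Both Riemann sums converge to 2/3 as the mesh h tends to 0, illustrating how the time change turns a Stieltjes integral into a Lebesgue integral.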
The last result is closely related to time changes.
(4.10) Proposition. If u is a continuous, nondecreasing function on the interval [a, b], then for a nonnegative Borel function f on [u(a), u(b)],

∫_{[a,b]} f(u(s)) dA_{u(s)} = ∫_{[u(a),u(b)]} f(t) dA_t.

The integral on the left is with respect to the measure associated with the right-continuous increasing function s → A(u(s)).

Proof. We define v_t = inf{s : u(s) > t}; then u(v_t) = t and v is a measurable mapping from [u(a), u(b)] into [a, b]. Let dA be the measure on [u(a), u(b)] associated with A and ν the image of dA by v. Then dA is the image of ν by u and therefore

∫_{[a,b]} f(u(s)) dν(s) = ∫_{[u(a),u(b)]} f(t) dA_t.

In particular, A(u(b)) − A(u(a)−) = ν([a, b]), which proves that ν is associated with the increasing function s → A(u(s)). The proposition is established. □
§5. Weak Convergence in Metric Spaces
Let E be a metric space with metric d and call ℰ the Borel σ-algebra on E. We want to recall a few facts about the weak convergence of probability measures on (E, ℰ). If P is such a measure, we say that a subset A of E is a P-continuity set if P(∂A) = 0 where ∂A is the boundary of A.
(5.1) Proposition. For probability measures P_n, n ∈ ℕ, and P, the following conditions are equivalent:
(i) for every bounded continuous function f on E, lim_n ∫ f dP_n = ∫ f dP;
(ii) for every bounded uniformly continuous function f on E, lim_n ∫ f dP_n = ∫ f dP;
(iii) for every closed subset F of E, lim sup_n P_n(F) ≤ P(F);
(iv) for every open subset G of E, lim inf_n P_n(G) ≥ P(G);
(v) for every P-continuity set A, lim_n P_n(A) = P(A).
(5.2) Definition. If Pn and P satisfy the equivalent conditions of the preceding
proposition, we say that (Pn) converges weakly to P.
If Π is a family of probability measures on (E, ℰ), we will say that it is weakly relatively compact if every sequence of elements of Π contains a weakly convergent subsequence. To prove weak convergence, one needs a criterion for weak compactness, which is the raison d'être of the following

(5.3) Definition. A family Π is tight if for every ε ∈ ]0, 1[, there exists a compact set K_ε such that

P[K_ε] ≥ 1 − ε  for every P ∈ Π.

With this definition we have the

(5.4) Theorem (Prokhorov's criterion). If a family Π is tight, then it is weakly relatively compact. If E is a Polish space, then a weakly relatively compact family is tight.
(5.5) Definition. If (X_n)_{n∈ℕ} and X are random variables taking their values in a metric space E, we say that (X_n) converges in distribution or in law to X if their laws P_{X_n} converge weakly to the law P_X of X. We will then write X_n →(d) X. We also write X =(d) Y to mean that X and Y have the same distribution or the same law.

We stress the fact that the X_n's and X need not be defined on the same probability space. If they are defined on the same probability space (Ω, 𝓕, P), we may set the

(5.6) Definition. The sequence (X_n) converges in probability to X if, for every ε > 0,

lim_{n→∞} P[d(X_n, X) > ε] = 0.

We will then write P-lim X_n = X.

In a Polish space, if P-lim X_n = X, then X_n →(d) X. The converse is not true in general, nor even meaningful since, as already observed, the X_n's need not be defined on the same probability space. However, if the X_n's are defined on the same space and converge weakly to a constant c, then they converge in probability to the constant r.v. c, as the reader can easily check.
The following remarks will be important in Chap. XIII.

(5.7) Lemma. If (X_n, Y_n) is a sequence of r.v.'s with values in separable metric spaces E and F and such that
(i) (X_n, Y_n) converges in distribution to (X, Y),
(ii) the law of Y_n does not depend on n,
then, for every Borel function φ : F → G where G is a separable metric space, the sequence (X_n, φ(Y_n)) converges in distribution to (X, φ(Y)).

Proof. It is enough to prove that if h, k are bounded continuous functions,

lim_n E[h(X_n) k(φ(Y_n))] = E[h(X) k(φ(Y))].

Set p = k ∘ φ; if ν is the common law of the Y_n's, there is a bounded continuous function p̃ in L¹(ν) such that ∫ |p − p̃| dν < ε for any preassigned ε. Then,

|E[h(X_n)p(Y_n)] − E[h(X)p(Y)]|
  ≤ |E[h(X_n)(p(Y_n) − p̃(Y_n))]| + |E[h(X_n)p̃(Y_n)] − E[h(X)p̃(Y)]| + |E[h(X)(p̃(Y) − p(Y))]|
  ≤ 2‖h‖_∞ ε + |E[h(X_n)p̃(Y_n)] − E[h(X)p̃(Y)]|.

By taking n large, the last term can be made arbitrarily small since h and p̃ are continuous. The proof is complete. □
(5.8) Corollary. Let (X_n^1, ..., X_n^k) be a sequence of k-tuples of r.v.'s with values in separable metric spaces S_j, j = 1, ..., k, which converges in distribution to (X^1, ..., X^k). If for each j the law of X_n^j does not depend on n, then for any Borel functions φ_j : S_j → U_j, where U_j is a separable metric space, the sequence (φ_1(X_n^1), ..., φ_k(X_n^k)) converges in distribution to (φ_1(X^1), ..., φ_k(X^k)).

Proof. The above lemma applied to X_n = (X_n^1, ..., X_n^{k−1}), Y_n = X_n^k permits to replace X_n^k by φ_k(X_n^k); one then takes X_n = (X_n^1, ..., X_n^{k−2}, φ_k(X_n^k)), Y_n = X_n^{k−1}, and so on and so forth.
§6. Gaussian and Other Random Variables
We will write X ∼ 𝒩(m, σ²) to mean that the r.v. X is Gaussian with mean m and variance σ². In particular, X ∼ 𝒩(0, 1) means that X is a Gaussian centered r.v. with unit variance or, in other words, a reduced Gaussian r.v. In what follows the constant r.v.'s are considered to be a particular case of Gaussian r.v.'s, namely those with σ² = 0. If X ∼ 𝒩(m, σ²), σ > 0, we recall that X has the density (√(2π) σ)⁻¹ exp(−(x − m)²/2σ²) and that its characteristic function (abbreviated c.f. in the sequel) is given by

E[exp(iλX)] = exp(iλm − λ²σ²/2).
We recall the
(6.1) Proposition. If (X_n) is a sequence of Gaussian r.v.'s which converges in probability to a r.v. X, then X is a Gaussian r.v., the family {|X_n|^p} is uniformly integrable and the convergence holds in L^p for every p ≥ 1.

Thus the set of Gaussian r.v.'s defined on a given probability space (Ω, 𝓕, P) is a closed subset of L²(Ω, 𝓕, P). More specifically we set the

(6.2) Definition. A Gaussian space is a closed linear subspace of a space L²(Ω, 𝓕, P) consisting only of centered Gaussian r.v.'s.

If G is a Gaussian space and X_1, ..., X_d are in G, then the d-dimensional r.v. X = (X_1, ..., X_d) is a Gaussian r.v., in other words a(X) is a real Gaussian r.v. for every linear form a on ℝ^d. Let us also recall that if K is a symmetric positive semi-definite d × d matrix (i.e. (x, Kx) ≥ 0 for every x ∈ ℝ^d), it is the covariance matrix of a d-dimensional centered Gaussian r.v.

We recall the
(6.3) Proposition. Let G_i, i ∈ I, be a family of closed subspaces of a given Gaussian space; then the σ-fields σ(G_i) are independent if and only if the spaces G_i are pairwise orthogonal.

In particular, the components of an ℝ^d-valued centered Gaussian variable are independent if and only if they are uncorrelated.
A few probability distributions on the line will occur throughout the book and
we recall some of their relationships.
A random variable Y follows the arcsine law if it has the density (π√(x(1−x)))⁻¹ on [0, 1]; then log Y has a characteristic function equal to Γ(½ + iλ)/(√π Γ(1 + iλ)).

If N ∼ 𝒩(0, 1) then log(½N²) has c.f. Γ(½ + iλ)/√π and if e is an exponential r.v. with parameter 1 the c.f. of log(e) is Γ(1 + iλ). It follows that

N² =(d) 2eY

with e and Y independent. This can also be seen directly by writing

N² = (N² + N′²) · (N²/(N² + N′²)),

where N′ ∼ 𝒩(0, 1) is independent of N, and showing that the two factors are independent and have respectively the exponential (mean 2) and arcsine laws.
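The identity N² =(d) 2eY is easy to probe by simulation. The Python sketch below is our own illustration (not from the text); it compares only the first two moments, which equal 1 = E[N²] and 3 = E[N⁴] on both sides, and samples the arcsine law as Y = sin²(πU/2) with U uniform on [0, 1].

```python
import math
import random

rng = random.Random(0)
n = 200_000

# Left side: N^2 with N a reduced Gaussian.
lhs = [rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]
# Right side: 2 e Y with e ~ Exp(1) and Y ~ arcsine, independent;
# Y = sin^2(pi U / 2) has distribution function (2/pi) arcsin(sqrt(y)).
rhs = [2.0 * rng.expovariate(1.0) * math.sin(math.pi * rng.random() / 2.0) ** 2
       for _ in range(n)]

m1_l = sum(lhs) / n                   # ~ E[N^2] = 1
m1_r = sum(rhs) / n                   # ~ E[2eY] = 1
m2_l = sum(x * x for x in lhs) / n    # ~ E[N^4] = 3
m2_r = sum(x * x for x in rhs) / n    # ~ E[(2eY)^2] = 3
```

Matching moments do not prove equality in law, of course; the proof is the c.f. computation in the text.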
The above identity is in fact but a particular case of an identity on Gamma and Beta r.v.'s. Let us call γ_a, a > 0, the Gamma r.v. with density x^{a−1}e^{−x}/Γ(a) on ℝ₊, and β_{a,b}, a and b > 0, the Beta r.v. with density x^{a−1}(1−x)^{b−1}/B(a, b) on [0, 1]. Classical computations show that if γ_a and γ_b are independent, then

i) γ_a + γ_b and γ_a/(γ_a + γ_b) are independent,
ii) γ_a + γ_b =(d) γ_{a+b},
iii) γ_a/(γ_a + γ_b) =(d) β_{a,b}.

From these properties follows the bi-dimensional equality in law

(γ_a, γ_b) =(d) (γ_{a+b} β_{a,b}, γ_{a+b}(1 − β_{a,b})),

where, on the right-hand side, γ_{a+b} and β_{a,b} are assumed independent.
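The beta-gamma identities i)–iii) can likewise be illustrated by simulation. The sketch below is ours (not from the text): it checks that γ_a + γ_b has the mean and variance of γ_{a+b}, that γ_a/(γ_a + γ_b) has the mean of β_{a,b}, and that the empirical covariance of the sum and the ratio is near zero, as independence requires.

```python
import random

rng = random.Random(1)
a, b, n = 2.0, 3.0, 200_000

ga = [rng.gammavariate(a, 1.0) for _ in range(n)]
gb = [rng.gammavariate(b, 1.0) for _ in range(n)]
s = [x + y for x, y in zip(ga, gb)]       # claimed to be Gamma(a+b)
r = [x / t for x, t in zip(ga, s)]        # claimed to be Beta(a, b)

mean_s = sum(s) / n                             # ~ a + b = 5
var_s = sum((x - mean_s) ** 2 for x in s) / n   # ~ a + b = 5 (Gamma variance)
mean_r = sum(r) / n                             # ~ a/(a+b) = 0.4
# Independence of the two factors shows up as near-zero covariance.
cov = sum((x - mean_s) * (y - mean_r) for x, y in zip(s, r)) / n
```

Zero covariance is only a necessary condition for i); the full independence is the classical computation alluded to in the text.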
Further, if N and N′ are independent reduced Gaussian random variables, then N/N′ and N/|N′| are Cauchy r.v.'s, i.e. have the density (π(1 + x²))⁻¹ on the real line. If C is a Cauchy r.v., then Y =(d) (1 + C²)⁻¹. Next, if (e′, Y′) is an independent copy of (e, Y), then the c.f.'s of log(e/e′), log(Y/Y′) and log C² are respectively equal to

πλ/sinh(πλ),  tanh(πλ)/πλ  and  (cosh(πλ))⁻¹ = Γ(½ + iλ)Γ(½ − iλ)/π.

Thus, log C² =(d) log(e/e′) + log(Y/Y′). Finally, the density of log C² is (2π cosh(x/2))⁻¹.
The above hyperbolic functions occur in many computations. If Φ is such a function, we give below series representations for Φ, as well as for the probability densities φ and f defined by

Φ(λ) = ∫_{−∞}^{+∞} exp(iλx) φ(x) dx = ∫_0^∞ exp(−λ²y/2) f(y) dy,

and the distribution function F(x) = ∫_0^x f(y) dy.

a) If Φ(λ) = tanh(πλ)/πλ, then also

Φ(λ) = 2π⁻² Σ_{n=1}^∞ (λ² + (n − ½)²)⁻¹,

and φ(x) = −π⁻² log tanh(x/4), f(y) = π⁻² Σ_{n=1}^∞ exp(−(n − ½)² y/2), from which F is obtained by term-by-term integration.

b) Likewise, if Φ(λ) = πλ/sinh(πλ), then also

Φ(λ) = 1 + 2λ² Σ_{n=1}^∞ (−1)ⁿ (λ² + n²)⁻¹,

and φ(x) = (2 cosh(x/2))⁻², F(x) = Σ_{n=−∞}^{+∞} (−1)ⁿ exp(−n²x/2).

c) Furthermore, if Φ(λ) = (cosh πλ)⁻¹, then also

Φ(λ) = π⁻¹ Σ_{n=0}^∞ (−1)ⁿ (2n + 1) (λ² + (n + ½)²)⁻¹,

and

φ(x) = (2π cosh(x/2))⁻¹,  f(y) = π⁻¹ Σ_{n=0}^∞ (−1)ⁿ (n + ½) exp(−(n + ½)² y/2).

d) Finally, if Φ(λ) = (πλ/sinh πλ)², then also

Φ(λ) = 1 + 2λ² Σ_{n=1}^∞ (λ² − n²)(λ² + n²)⁻²,

and

φ(x) = ((x/2) cosh(x/2) − sinh(x/2)) / (2 sinh(x/2)³).
Chapter I. Introduction
§1. Examples of Stochastic Processes. Brownian Motion
A stochastic process is a phenomenon which evolves in time in a random way.
Nature, everyday life, science offer us a huge variety of such phenomena or at
least of phenomena which can be thought of as a function both of time and of a
random factor. Such are for instance the price of certain commodities, the size of
some populations, or the number of particles registered by a Geiger counter.
A basic example is the Brownian motion of pollen particles in a liquid. This
phenomenon, which owes its name to its discovery by the Scottish botanist R. Brown in 1827, is due to the incessant hitting of pollen by the much smaller
molecules of the liquid. The hits occur a large number of times in any small
interval of time, independently of each other and the effect of a particular hit is
small compared to the total effect. The physical theory of this motion was set up
by Einstein in 1905. It suggests that this motion is random, and has the following
properties:
i) it has independent increments;
ii) the increments are gaussian random variables;
iii) the motion is continuous.
Property i) means that the displacements of a pollen particle over disjoint time
intervals are independent random variables. Property ii) is not surprising in view
of the central-limit theorem.
Much of this book will be devoted to the study of a mathematical model of
this phenomenon.
The goal of the theory of stochastic processes is to construct and study mathematical models of physical systems which evolve in time according to a random
mechanism, as in the above example. Thus, a stochastic process will be a family
of random variables indexed by time.
(1.1) Definition. Let T be a set, (E, ℰ) a measurable space. A stochastic process indexed by T, taking its values in (E, ℰ), is a family of measurable mappings X_t, t ∈ T, from a probability space (Ω, 𝓕, P) into (E, ℰ). The space (E, ℰ) is called the state space.
The set T may be thought of as "time". The most usual cases are T = ℕ and T = ℝ₊, but they are by no means the only interesting ones. In this book, we
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
deal mainly with the case T = ℝ₊, and E will usually be ℝ^d or a Borel subset of ℝ^d and ℰ the Borel σ-field on E.
For every ω ∈ Ω, the mapping t → X_t(ω) is a "curve" in E which is referred to as a trajectory or a path of X. We may think of a path as a point chosen randomly in the space 𝓕(T, E) of all functions from T into E, or, as we shall see later, in a reasonable subset of this space.
To set up our basic example of a stochastic process, namely the mathematical
model of Brownian motion, we will use the following well-known existence result.
(1.2) Theorem. Given a probability measure μ on ℝ, there exist a probability space (Ω, 𝓕, P) and a sequence of independent random variables X_n, defined on Ω, such that X_n(P) = μ for every n.
As a consequence, we get the
(1.3) Proposition. Let H be a separable real Hilbert space. There exist a probability space (Ω, 𝓕, P) and a family X(h), h ∈ H, of random variables on this space, such that
i) the map h → X(h) is linear;
ii) for each h, the r.v. X(h) is Gaussian centered and E[X(h)²] = ‖h‖²_H.

Proof. Pick an orthonormal basis {e_n} in H. By Theorem (1.2), there is a probability space (Ω, 𝓕, P) on which one can define a sequence of independent reduced real Gaussian variables g_n. The series Σ_n (h, e_n)_H g_n converges in L²(Ω, 𝓕, P) to a r.v. which we call X(h). The proof is then easily completed. □

We may observe that the above series also converges almost surely, and X(h) is actually an equivalence class of random variables rather than a random variable. Moreover, the space {X(h), h ∈ H} is a Gaussian subspace of L²(Ω, 𝓕, P) which is isomorphic to H. In particular, E[X(h)X(h′)] = (h, h′)_H. Because in a Gaussian space independence is equivalent to orthogonality, this shows that X(h) and X(h′) are independent if and only if h and h′ are orthogonal in H.
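A finite-dimensional sketch of this construction may help fix ideas: take H = ℝ^d, so that the orthonormal expansion reduces to a finite sum. The code and names below are our own illustration, not the book's.

```python
import random

rng = random.Random(2)
d = 4  # H = R^d with the standard basis as {e_n}

def make_X(rng, d):
    """One realization of h -> X(h) = sum_n <h, e_n> g_n, the g_n being
    independent reduced Gaussians; linearity in h holds pathwise."""
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    return lambda h: sum(hi * gi for hi, gi in zip(h, g))

X = make_X(rng, d)
h1, h2 = [1.0, 2.0, 0.0, -1.0], [0.5, 0.0, 3.0, 1.0]
# Property i): X(h1 + h2) = X(h1) + X(h2) for this realization.
lin_gap = X([x + y for x, y in zip(h1, h2)]) - (X(h1) + X(h2))

# Across many realizations, E[X(h)X(h')] should approach <h, h'>_H.
n = 100_000
acc = 0.0
for _ in range(n):
    Xi = make_X(rng, d)
    acc += Xi(h1) * Xi(h2)
emp_cov = acc / n
true_cov = sum(x * y for x, y in zip(h1, h2))  # <h1, h2> = -0.5
```

In infinite dimensions the sum becomes the L²-convergent series of the proof; the finite truncation here is only meant to exhibit properties i) and ii).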
(1.4) Definition. Let (A, 𝒜, μ) be a separable σ-finite measure space. If in Proposition (1.3) we choose H = L²(A, 𝒜, μ), the mapping X is called a Gaussian measure with intensity μ on (A, 𝒜). When F ∈ 𝒜 and μ(F) < ∞, we shall write X(F) instead of X(1_F).

The term "measure" is warranted by the fact that if F ∈ 𝒜, μ(F) < ∞ and F = Σ_n F_n, then X(F) = Σ_n X(F_n) a.s. and in L²(Ω, 𝓕, P); however, the exceptional set depends on F and on the sequence (F_n) and, consequently, there is usually no true measure m(ω, ·) depending on ω such that almost surely X(F)(ω) = m(ω, F) for every F ∈ 𝒜.

Let us also observe that for any two sets F and G such that μ(F) < ∞, μ(G) < ∞,
E[X(F)X(G)] = μ(F ∩ G);
if F and G are disjoint sets, X(F) and X(G) are uncorrelated, hence independent.
We now take a first step towards the construction of Brownian motion. The method we use may be extended to other examples of processes as is shown in Exercises (3.9) and (3.11). We take the space (A, 𝒜, μ) to be ℝ₊ = [0, ∞[, endowed with the σ-field of Borel sets and the Lebesgue measure. For each t ∈ ℝ₊, we pick a random variable B_t within the equivalence class X([0, t]). We now study the properties of the process B thus defined.

1°) The process B has independent increments, i.e. for any sequence 0 = t_0 < t_1 < ... < t_k the random variables B_{t_i} − B_{t_{i−1}}, i = 1, 2, ..., k, are independent. Indeed, B_{t_i} − B_{t_{i−1}} is in the class X(]t_{i−1}, t_i]) and these classes are independent because the corresponding intervals are pairwise disjoint.

2°) The process B is a Gaussian process, that is: for any sequence 0 = t_0 < t_1 < ... < t_n, the vector r.v. (B_{t_0}, ..., B_{t_n}) is a vector Gaussian r.v. This follows from the independence of the increments and the fact that the individual variables are Gaussian.
3°) For each t, we have obviously E[B_t²] = t; in particular, P[B_0 = 0] = 1. This implies that for a Borel subset A of the real line and t > 0,

P[B_t ∈ A] = ∫_A g_t(x) dx,

where g_t(x) = (2πt)^{−1/2} exp(−x²/2t). Likewise, the increment B_t − B_s has variance t − s. Furthermore, the covariance E[B_sB_t] is equal to inf(s, t); indeed, using the independence of increments and the fact that all the B_t's are centered, we have for s < t,

E[B_sB_t] = E[B_s(B_s + B_t − B_s)] = E[B_s²] + E[B_s(B_t − B_s)] = s.
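These first two moments can be verified by simulating the pair (B_s, B_t) from independent increments; the sketch below is our own illustration (not from the text) and estimates E[B_sB_t] for s = 0.7, t = 1.5, which should come out near min(s, t) = 0.7.

```python
import random

rng = random.Random(3)
s, t, n = 0.7, 1.5, 200_000

acc = 0.0
for _ in range(n):
    bs = rng.gauss(0.0, s ** 0.5)             # B_s ~ N(0, s)
    bt = bs + rng.gauss(0.0, (t - s) ** 0.5)  # add an independent N(0, t-s) increment
    acc += bs * bt
emp = acc / n  # Monte Carlo estimate of E[B_s B_t]
```

The decomposition bt = bs + increment is exactly the one used in the computation above, so the estimate reproduces E[B_s²] = s.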
If we refer to our idea of what a model of the physical Brownian motion ought to be, we see that we have got everything but the continuity of paths. There is no reason why an arbitrary choice of B_t within the class X([0, t]) will yield continuous maps t → B_t(ω). On the other hand, since we can pick any function within the class, we may wonder whether we can do it so as to get a continuous function for almost all ω's. We now address ourselves to this question.
We first need to make a few general observations and give some definitions.
From now on, unless otherwise stated, all the processes we consider are indexed
by ℝ₊.
(1.5) Definition. Let E be a topological space and ℰ the σ-algebra of its Borel subsets. A process X with values in (E, ℰ) is said to be a.s. continuous if, for almost all ω's, the function t → X_t(ω) is continuous.
We would like our process B above to have this property; we could then, by discarding a negligible set, get a process with continuous paths. However, whether in discarding the negligible set or in checking that a process is a.s. continuous, we encounter the following problem: there is no reason why the set

{ω : t → X_t(ω) is continuous}

should be measurable. Since we want to construct a process with state space ℝ, it is tempting, as we hinted at before Theorem (1.2), to use as probability space the set 𝓕(ℝ₊, ℝ) = ℝ^{ℝ₊} of all possible paths, and as r.v. X_t the coordinate mapping over t, namely X_t(ω) = ω(t). The smallest σ-algebra for which the X_t's are measurable is the product σ-algebra, say 𝓕. Each set in 𝓕 depends only on a countable set of coordinates and therefore the set of continuous ω's is not in 𝓕.

This problem of continuity is only one of many similar problems. We will, for instance, want to consider, for an ℝ-valued process X, expressions such as

T(ω) = inf{t : X_t(ω) > 0},  lim_{s↑t} X_s,  sup_{s≤t} |X_s|,

and there is no reason why these expressions should be measurable or even meaningful if the only thing we know about X is that it satisfies Definition (1.1). This difficulty will be overcome by using the following notions.
(1.6) Definition. Two processes X and X′ defined respectively on the probability spaces (Ω, 𝓕, P) and (Ω′, 𝓕′, P′), having the same state space (E, ℰ), are said to be equivalent if for any finite sequence t_1, ..., t_n and sets A_i ∈ ℰ,

P[X_{t_1} ∈ A_1, ..., X_{t_n} ∈ A_n] = P′[X′_{t_1} ∈ A_1, ..., X′_{t_n} ∈ A_n].

We also say that each one is a version of the other or that they are versions of the same process.

The image of P by (X_{t_1}, ..., X_{t_n}) is a probability measure on (Eⁿ, ℰⁿ) which we denote by P_{t_1,...,t_n}. The family obtained by taking all the possible finite sequences (t_1, ..., t_n) is the family of finite-dimensional distributions (abbreviated f.d.d.) of X. The processes X and X′ are equivalent if they have the same f.d.d.'s. We observe that the f.d.d.'s of X form a projective family, that is, if (s_1, ..., s_k) is a subset of (t_1, ..., t_n) and if π is the corresponding canonical projection from Eⁿ onto Eᵏ, then

P_{s_1,...,s_k} = π(P_{t_1,...,t_n}).

This condition appears in the Kolmogorov extension Theorem (3.2).

We shall denote by 𝓜_X the indexed family of f.d.d.'s of the process X. With this notation, X and Y are equivalent if and only if 𝓜_X = 𝓜_Y.
It is usually admitted that, most often, when faced with a physical phenomenon, statistical experiments or physical considerations can only give information about the f.d.d.'s of the process. Therefore, when constructing a mathematical model, we may, if we can, choose, within the class of equivalent processes, a version for which expressions such as those above Definition (1.6) are meaningful. We now work toward this goal in the case of Brownian motion.
(1.7) Definition. Two processes X and X′ defined on the same probability space are said to be modifications of each other if for each t,

X_t = X′_t a.s.

They are called indistinguishable if for almost all ω,

X_t(ω) = X′_t(ω) for every t.

Clearly, if X and X′ are modifications of each other, they are versions of each other. We may also observe that if X and X′ are modifications of each other and are a.s. continuous, they are indistinguishable.
In the next section, we will prove the following
(1.8) Theorem (Kolmogorov's continuity criterion). A real-valued process X for which there exist three constants α, β, C > 0 such that

E[|X_{t+h} − X_t|^α] ≤ C h^{1+β}

for every t and h, has a modification which is almost-surely continuous.

In the case of the process B above, the r.v. B_{t+h} − B_t is Gaussian centered and has variance h, so that

E[|B_{t+h} − B_t|⁴] = 3h².

The Kolmogorov criterion applies and we get
(1.9) Theorem. There exists an almost-surely continuous process B with independent increments such that for each t, the random variable B_t is centered, Gaussian, and has variance t.
Such a process is called a standard linear Brownian motion or simply Brownian
motion (which we will often abbreviate to BM) and will be our main interest
throughout the sequel.
The properties stated in Theorem (1.9) imply those we already know. For instance, for s < t, the increments B_t − B_s are Gaussian centered with variance t − s; indeed, we can write

B_t = B_s + (B_t − B_s)

and, using the independence of B_s and B_t − B_s, we get, taking characteristic functions,

exp(−tu²/2) = exp(−su²/2) E[exp(iu(B_t − B_s))],

whence E[exp(iu(B_t − B_s))] = exp(−(t − s)u²/2) follows. It is then easy to see that B is a Gaussian process (see Definition (3.5)) with covariance inf(s, t). We leave as an exercise to the reader the task of showing that conversely we could have stated Theorem (1.9) as: there exists an a.s. continuous centered Gaussian process with covariance inf(s, t).
By discarding a negligible set, we may, and often will, consider that all the
paths of B are continuous.
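For purposes of illustration only (this is not part of the text), a path of the process of Theorem (1.9) can be approximated on a regular grid by summing independent 𝒩(0, dt) increments:

```python
import random

def brownian_path(rng, T=1.0, n=1000):
    """Approximate values of a Brownian path on a regular grid of [0, T],
    built from independent N(0, dt) increments, with B_0 = 0."""
    dt = T / n
    b, path = 0.0, [0.0]
    for _ in range(n):
        b += rng.gauss(0.0, dt ** 0.5)
        path.append(b)
    return path

rng = random.Random(4)
path = brownian_path(rng)

# Sanity check over many paths: Var(B_1) should be close to 1.
var_end = sum(brownian_path(rng)[-1] ** 2 for _ in range(2000)) / 2000
```

The grid construction uses exactly the independent Gaussian increments of the theorem; continuity of the true paths is of course not visible at a fixed mesh.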
As soon as we have constructed the standard linear BM of Theorem (1.9), we
can construct a host of other interesting processes. We begin here with a few.
1°) For any x ∈ ℝ, the process X^x_t = x + B_t is called the Brownian motion started at x, or in abbreviated form a BM(x). Obviously, for any A ∈ 𝓑(ℝ), t > 0,

P[X^x_t ∈ A] = (2πt)^{−1/2} ∫_A e^{−(y−x)²/2t} dy = ∫_A g_t(y − x) dy.

2°) If B¹_t, B²_t, ..., B^d_t are d independent copies of B_t, we define a process X with state space ℝ^d by stipulating that the i-th component of X_t is B^i_t. This process is called the d-dimensional Brownian motion. It is a continuous Gaussian process vanishing at time zero. Again as above, by adding x, we can make it start from x ∈ ℝ^d and we will use the abbreviation BM^d(x).

3°) The process X_t = (t, B_t) with state space ℝ₊ × ℝ is a continuous process known as the heat process. We can replace B_t by the d-dimensional BM to get the heat process in ℝ₊ × ℝ^d.

4°) Because of the continuity of paths,

sup{B_s, 0 ≤ s ≤ t} = sup{B_s, 0 ≤ s ≤ t, s ∈ ℚ}.

Therefore, we can define another process S by setting S_t = sup_{s≤t} B_s. In similar fashion, we can consider the processes |B_t|, B*_t = sup_{s≤t} |B_s| or B⁺_t = sup(0, B_t).

5°) Finally, because of the continuity of paths, for any Borel set A, the map

(ω, s) → 1_A(B_s(ω))

is measurable on the product (Ω × ℝ₊, 𝓕 ⊗ 𝓑(ℝ₊)) and therefore

X_t = ∫_0^t 1_A(B_s) ds

is meaningful and defines yet another process, the occupation time of A by the Brownian motion B.
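All of the derived processes in 4°) and 5°) are plain functionals of a single path, and so can be computed from a simulated one. The sketch below is our own illustration (not from the text), with the occupation set taken to be A = [0, ∞[:

```python
import random

rng = random.Random(5)
T, n = 1.0, 2000
dt = T / n

# One discretized Brownian path from N(0, dt) increments.
b, path = 0.0, [0.0]
for _ in range(n):
    b += rng.gauss(0.0, dt ** 0.5)
    path.append(b)

# Running supremum S_t = sup_{s <= t} B_s, read off along the path.
S, m = [], path[0]
for x in path:
    m = max(m, x)
    S.append(m)

abs_path = [abs(x) for x in path]        # the reflected process |B_t|
b_star = max(abs_path)                   # B*_T = sup_{s <= T} |B_s|
# Occupation time of A = [0, infty) up to T, as a Riemann sum.
occupation = sum(dt for x in path[1:] if x >= 0.0)
```

S is nondecreasing by construction, dominates the path, and B*_T dominates S_T; the occupation time lies in [0, T], as it must.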
We finally close this section by describing a few geometrical invariance properties of BM, which are of paramount importance in the sequel, especially property (iii).

(1.10) Proposition. Let B be a standard linear BM. Then, the following properties hold:
(i) (time-homogeneity) For any s > 0, the process B_{t+s} − B_s, t ≥ 0, is a Brownian motion independent of σ(B_u, u ≤ s);
(ii) (symmetry) The process −B_t, t ≥ 0, is a Brownian motion;
(iii) (scaling) For every c > 0, the process cB_{t/c²}, t ≥ 0, is a Brownian motion;
(iv) (time-inversion) The process X defined by X_0 = 0, X_t = tB_{1/t} for t > 0, is a Brownian motion.

Proof. (i) It is easily seen that X_t = B_{t+s} − B_s is a centered Gaussian process, with continuous paths, independent increments and variance t, hence a Brownian motion. Property (ii) is obvious and (iii) is obtained just as (i).

To prove (iv), one checks that X is a centered Gaussian process with covariance inf(s, t); thus, it will be a BM if its paths are continuous and, since they are clearly continuous on ]0, ∞[, it is enough to prove that lim_{t→0} X_t = 0. But X_t, t ∈ ]0, ∞[, is equivalent to B_t, t ∈ ]0, ∞[, and since lim_{t→0, t∈ℚ} B_t = 0, it follows that lim_{t→0, t∈ℚ} X_t = 0 a.s. Because X is continuous on ]0, ∞[, we have lim_{t→0, t∈ℝ₊} X_t = 0 a.s. □
Remarks. 1°) Once we have constructed a BM on a space (Ω, 𝓕, P), this proposition gives us a host of other versions of the BM on this same space.
2°) A consequence of (iv) is the law of large numbers for the BM, namely P[lim_{t→∞} t⁻¹B_t = 0] = 1.
3°) These properties are translations in terms of BM of invariance properties of the Lebesgue measure, as is hinted at in Exercise (1.14) 2°).
(1.11) Exercise. Let B be the standard linear BM on [0, 1], i.e. we consider only B_t, 0 ≤ t ≤ 1. Prove that the process B̂_t, 0 ≤ t ≤ 1, defined by

B̂_t = B_1 − B_{1−t},

is another version of B, in other words, a standard BM on [0, 1].
(1.12) Exercise. We denote by H the subspace of C([0, 1]) of functions h such that h(0) = 0, h is absolutely continuous and its derivative h′ (which exists a.e.) satisfies

∫_0^1 h′(s)² ds < +∞.

1°) Prove that H is a Hilbert space for the scalar product

(g, h) = ∫_0^1 g′(s)h′(s) ds.

2°) For any bounded measure μ on [0, 1], show that there exists an element h in H such that for every f ∈ H,

∫_0^1 f(x) dμ(x) = (f, h),

and that h′(s) = μ(]s, 1]).
[Hint: The canonical injection of H into C([0, 1]) is continuous; use Riesz's theorem.]
3°) Let B be a standard linear BM, μ and ν two bounded measures associated as in 2°) with h and g. Prove that

X^μ(ω) = ∫_0^1 B_s(ω) dμ(s)  and  X^ν(ω) = ∫_0^1 B_s(ω) dν(s)

are random variables, that the pair (X^μ, X^ν) is Gaussian and that E[X^μX^ν] = (h, g).
4°) Prove also that, with the notation of Exercise (1.14) below,

X^μ = ∫_0^1 μ(]u, 1]) dB_u.

This will be taken up in Sect. 2, Chap. VIII.
* (1.13) Exercise. 1°) Let B be the standard linear BM. Prove that lim sup_{t→∞}(B_t/√t) is a.s. > 0 (it is in fact equal to +∞, as will be seen in Chap. II).
2°) Prove that B is recurrent, namely: for any real x, the set {t : B_t = x} is unbounded.
3°) Prove that the Brownian paths are a.s. nowhere locally Hölder continuous of order α if α > 1/2 (see Sect. 2).
[Hint: Use the invariance properties of Proposition (1.10).]
# (1.14) Exercise. 1°) With the notation of Proposition (1.3) and its sequel, we set, for f bounded or more generally in L²_loc(ℝ₊),

Y_t = X(f 1_{[0,t]}),  t ≥ 0.

Prove that the process Y has a continuous version. This is a particular case of the stochastic integral to be defined in Chapter IV, and we will see that the result is true for more general integrands.
2°) For c > 0 and f ∈ L²(ℝ₊), set X^c(f) = cX(f^c) where f^c(t) = f(c²t). Prove that X^c is also a Gaussian measure with intensity the Lebesgue measure on (ℝ₊, 𝓑(ℝ₊)). Derive therefrom another proof of Proposition (1.10) iii). Give similar proofs for properties i) and ii) of Proposition (1.10) as well as for Exercise (1.11).
(1.15) Exercise. Let B be a standard linear BM. Prove that

X(ω) = ∫_0^1 B_s²(ω) ds

is a random variable and compute its first two moments.
# (1.16) Exercise. Let B be a standard linear BM.
1°) Prove that the f.d.d.'s of B are given, for 0 < t_1 < t_2 < ... < t_n, by

P[B_{t_1} ∈ A_1, B_{t_2} ∈ A_2, ..., B_{t_n} ∈ A_n]
  = ∫_{A_1} g_{t_1}(x_1) dx_1 ∫_{A_2} g_{t_2−t_1}(x_2 − x_1) dx_2 ··· ∫_{A_n} g_{t_n−t_{n−1}}(x_n − x_{n−1}) dx_n.

2°) Prove that for t_1 < t_2 < ... < t_n < t,

P[B_t ∈ A | B_{t_1}, ..., B_{t_n}] = ∫_A g_{t−t_n}(y − B_{t_n}) dy.

More generally, for s < t,

P[B_t ∈ A | σ(B_u, u ≤ s)] = ∫_A g_{t−s}(y − B_s) dy.
(1.17) Exercise. Let B be the standard BM² and let λ be the Lebesgue measure on ℝ².
1°) Prove that the sets

γ_1(ω) = {B_t(ω), 0 ≤ t ≤ 1},  γ_2(ω) = {B_t(ω), 0 ≤ t ≤ 2},
γ_3(ω) = {B_{1−t}(ω) − B_1(ω), 0 ≤ t ≤ 1},  γ_4(ω) = {B_{1+t}(ω) − B_1(ω), 0 ≤ t ≤ 1},

are a.s. Borel subsets of ℝ² and that the maps ω → λ(γ_i(ω)) are random variables.
[Hint: To prove the second point, use the fact that for instance

γ_1(ω) = {z : inf_{u≤1} |z − B_u(ω)| = 0}.]

2°) Prove that E[λ(γ_2)] = 2E[λ(γ_1)], and E[λ(γ_1)] = E[λ(γ_3)] = E[λ(γ_4)].
3°) Deduce from these equalities that E[λ(γ_1)] = 0, hence that the Brownian curve has a.s. zero Lebesgue measure. One may have to use the fact (Proposition (3.7) Chap. III) that for a BM¹ β and every t, the r.v. S_t = sup_{s≤t} β_s is integrable.
* (1.18) Exercise. Let B be the standard linear BM. Using the scaling invariance property, prove that

t^{−1/2} log(∫_0^t exp(B_s) ds)

converges in law, as t tends to infinity, to S_1 = sup_{s≤1} B_s.
[Hint: Use the Laplace method, namely ‖l‖_p converges to ‖l‖_∞ as p tends to +∞, where ‖ ‖_p is the L^p-norm with respect to the Lebesgue measure on [0, 1].]
The law of S_1 is found in Proposition (3.7) Chap. III.
# (1.19) Exercise. 1°) If X is a BM^d, prove that for every x ∈ ℝ^d with ‖x‖ = 1, the process (x, X_t) is a linear BM.
2°) Prove that the converse is false. One may use a suitable example built from a BM² B = (B¹, B²).
(1.20) Exercise (Polar functions and points). A continuous function f from ℝ₊ into ℝ² is said to be polar for BM² if for any x ∈ ℝ², P[Γ_x] = 0, where Γ_x = {∃t > 0 : B_t + x = f(t)} (the measurability of the set Γ_x will follow from results in Sect. 4). The set of polar functions is denoted by Π.
1°) Prove that f is polar if and only if

P[∃t > 0 : B_t = f(t)] = 0.

[Hint: See 1°) in Exercise (3.14).]
2°) Prove that Π is left invariant by the transformations:
a) f → −f;
b) f → T∘f where T is a rigid motion of ℝ²;
c) f → (t → tf(1/t)).
3°) Prove that if f ∉ Π, then E[λ({B_t + f(t), t ≥ 0})] > 0, where λ is the Lebesgue measure on ℝ². Use the result in Exercise (1.17) to show that one-point sets are polar, i.e. for any x ∈ ℝ²,

P[∃t > 0 : B_t = x] = 0.

Extend the result to BM^d with d ≥ 3. Another proof of this important result will be given in Chap. V.
4°) Prove that almost all paths of BM² are in Π.
[Hint: Use the independent copies B¹ and B² of BM², consider B¹ − B² and apply the result in 3°).]
**
(1.21) Exercise. Let X = B⁺ or |B| where B is the standard linear BM, let p be a
real number > 1 and q its conjugate exponent (p⁻¹ + q⁻¹ = 1).
1°) Prove that the r.v. J_p = sup_{t≥0} (X_t − t^{p/2}) is a.s. strictly positive and finite
and has the same law as (sup_{t≥0} X_t/(1 + t^{p/2}))^q.
2°) Using time-inversion, show that
    sup_{t≤1} (X_t/(1 + t^{p/2})) (d)= sup_{u≥1} (u^{(p/2)−1} X_u/(1 + u^{p/2}))
and conclude that E[J_p] < ∞.
[Hint: Use Theorem (2.1).]
3°) Prove that there exists a constant C_p(X) such that for any positive r.v. L,
    E[X_L] ≤ C_p(X) ‖L^{1/2}‖_p.
§1. Examples of Stochastic Processes. Brownian Motion
[Hint: For μ > 0, write E[X_L] = E[X_L − μL^{p/2}] + μE[L^{p/2}] and, using
scaling properties, show that the first term on the right is less than μ^{−(q/p)} E[J_p].]
4°) Let L_μ be a random time such that
    X_{L_μ} − μL_μ^{p/2} = sup_{t≥0} (X_t − μt^{p/2}).
Prove that L_μ is a.s. unique and that the constant C_p(X) = p^{1/p}(qE[J_p])^{1/q} is
the best possible.
5°) Prove that
    E[L_1^{p/2} | J_p] = (q/p) J_p.
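As a numerical illustration of the law identity in 1°), the following sketch (NumPy assumed; the choice X = |B| with p = q = 2, the grid, the truncation of the supremum to [0, 20] and the sample size are all arbitrary choices, not from the text) compares Monte Carlo estimates of the means of the two sides.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, dt, t_max = 4000, 0.01, 20.0
n_steps = int(t_max / dt)
t = dt * np.arange(1, n_steps + 1)

# Simulate |B| on the grid for all paths at once.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
abs_b = np.abs(np.cumsum(increments, axis=1))

p = 2.0
q = p / (p - 1.0)
# J_p = sup_t (X_t - t^{p/2}); the sup includes t = 0, where the term is 0.
j_p = np.maximum((abs_b - t ** (p / 2)).max(axis=1), 0.0)
# Right-hand side of the identity: (sup_t X_t / (1 + t^{p/2}))^q.
rhs = (abs_b / (1.0 + t ** (p / 2))).max(axis=1) ** q

mean_lhs, mean_rhs = j_p.mean(), rhs.mean()
```

Up to Monte Carlo and discretization error the two sample means agree, consistent with the claimed equality in law.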
**
(1.22) Exercise. A continuous process X is said to be self-similar (of order 1) if
for every λ > 0
    (X_{λt}, t ≥ 0) (d)= (λX_t, t ≥ 0),
i.e. the two processes have the same law (see Sect. 3).
1°) Prove that if B is the standard BM then X_t = B_{t²} is self-similar.
2°) Let henceforth X be self-similar and positive and, for p > 1, set
    S_p = sup_{s≥0} (X_s − s^p)    and    X*_t = sup_{s≤t} X_s.
Prove that there is a constant c_p depending only on p such that for any a > 0
    P[S_p ≥ a] ≥ P[(c_p X*_1)^q ≥ a]
where q⁻¹ + p⁻¹ = 1.
[Hint: S_p = sup_{t≥0} (X*_t − t^p); apply the self-similarity property to X*_t.]
3°) Let k > 1; prove that for any a > 0
    P[S_p ≥ a] ≤ 2P[(kX*_1)^q ≥ a] + Σ_{n=1}^∞ P[(kX*_1)^q ≥ k^{np} a].
[Hint: Observe that P[S_p ≥ a] ≤ sup {P[X*_L − L^p ≥ a]; L positive random
variable} and write Ω = {L ≤ a^{1/p}} ∪ ∪_{n≥0} {k^n a^{1/p} < L ≤ k^{n+1} a^{1/p}}.]
4°) Prove that if g is a positive convex function on ℝ₊ and g(0) = 0, then
    E[g(S_p)] ≤ (2 + (k^p − 1)⁻¹) E[g((kX*_1)^q)].
[Hint: If X and Y are two positive r.v.'s such that P[X ≥ a] ≤ Σ_n α_n P[Y ≥ β_n a],
then
    E[g(X)] = ∫₀^∞ P[X ≥ a] dg(a) ≤ Σ_n (α_n/β_n) E[g(Y)].]
Chapter I. Introduction
§2. Local Properties of Brownian Paths
Our first task is to figure out what the Brownian paths look like. It is helpful, when
reasoning on Brownian paths, to have in mind a picture of a path of the process
or, in other words, of the graph of the mapping t → B_t(ω) (Fig. 2 below). This
graph should be very "wiggly". Just how wiggly it is, is the content of this section,
which deals with the local behavior of Brownian paths.
Fig. 2. (Graph of a Brownian path t → B_t.)
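A picture in the spirit of Fig. 2 is easy to produce by simulation. The sketch below (NumPy assumed; the grid size is an arbitrary choice) samples a Brownian path on a grid by cumulating independent Gaussian increments, as in the grid construction of Sect. 1.

```python
import numpy as np

def brownian_path(n_steps, t_max, rng):
    """Return B on the grid 0, t_max/n_steps, ..., t_max: B_0 = 0 and the
    increments B_{t_{i+1}} - B_{t_i} are independent N(0, t_max/n_steps)."""
    dt = t_max / n_steps
    steps = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(steps)))

rng = np.random.default_rng(0)
path = brownian_path(10_000, 1.0, rng)   # one sample of the graph t -> B_t
```

Plotting path against the grid reproduces the qualitative "wiggly" picture.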
We begin with a more general version of the Kolmogorov criterion which was
stated in the preceding section. We consider a Banach-valued process X indexed
by a d-dimensional parameter. The norm we use on ℝ^d is |t| = sup_i |t_i| and we
also denote by | | the norm of the state space of X. We recall that a Banach-valued
function f on ℝ^d is locally Hölder continuous of order α if, for every L > 0,
    sup {|f(t) − f(s)|/|t − s|^α; |t|, |s| ≤ L, t ≠ s} < ∞.
(2.1) Theorem. Let X_t, t ∈ [0,1[^d, be a Banach-valued process for which there
exist three strictly positive constants γ, c, ε such that
    E[|X_t − X_s|^γ] ≤ c|t − s|^{d+ε};
then, there is a modification X̃ of X such that
    E[(sup_{s≠t} |X̃_t − X̃_s|/|t − s|^α)^γ] < ∞
for every α ∈ [0, ε/γ[. In particular, the paths of X̃ are Hölder continuous of
order α.
Proof. For m ∈ ℕ, let D_m be the set of d-tuples
    s = (2^{−m}l₁, ..., 2^{−m}l_d)
where each l_i is an integer in the interval [0, 2^m[, and set D = ∪_m D_m. Let further
Δ_m be the set of pairs (s, t) in D_m such that |s − t| = 2^{−m}; there are fewer than
2^{(m+1)d} such pairs. Finally, for s and t in D, we say that s ≤ t if each component
of s is less than or equal to the corresponding component of t.
Let us now set K_i = sup_{(s,t)∈Δ_i} |X_s − X_t|. The hypothesis entails that for a
constant J,
    E[K_i^γ] ≤ Σ_{(s,t)∈Δ_i} E[|X_s − X_t|^γ] ≤ 2^{(i+1)d} · c · 2^{−i(d+ε)} = J 2^{−iε}.
For a point s (resp. t) in D, there is an increasing sequence (s_n) (resp. (t_n))
of points in D such that s_n (resp. t_n) is in D_n, s_n ≤ s (t_n ≤ t) and s_n = s (t_n = t)
from some n on.
Let now s and t be in D with |s − t| ≤ 2^{−m}; either s_m = t_m or (s_m, t_m) ∈ Δ_m,
and in any case
    |X_s − X_t| ≤ Σ_{i=m}^∞ |X_{s_{i+1}} − X_{s_i}| + |X_{s_m} − X_{t_m}| + Σ_{i=m}^∞ |X_{t_{i+1}} − X_{t_i}|,
where the series are actually finite sums. It follows that
    |X_s − X_t| ≤ K_m + 2 Σ_{i=m+1}^∞ K_i ≤ 2 Σ_{i=m}^∞ K_i.
As a result, setting M_α = sup {|X_t − X_s|/|t − s|^α; s, t ∈ D, s ≠ t}, we have
    M_α ≤ sup_{m∈ℕ} [2^{(m+1)α} sup {|X_t − X_s|; s, t ∈ D, s ≠ t, |t − s| ≤ 2^{−m}}]
        ≤ 2^{α+1} Σ_{i=0}^∞ 2^{iα} K_i.
For γ ≥ 1 and α < ε/γ, we get, with J′ = 2^{α+1} J^{1/γ},
    ‖M_α‖_γ ≤ 2^{α+1} Σ_{i=0}^∞ 2^{iα} ‖K_i‖_γ ≤ J′ Σ_{i=0}^∞ 2^{i(α−ε/γ)} < ∞.
For γ < 1, the same reasoning applies to E[(M_α)^γ] instead of ‖M_α‖_γ.
It follows in particular that for almost every ω, X.(ω) is uniformly continuous on
D and it makes sense to set
    X̃_t(ω) = lim_{s→t, s∈D} X_s(ω).
By Fatou's lemma and the hypothesis, X̃_t = X_t a.s. and X̃ is clearly the desired
modification.
Remark. Instead of using the unit cube, we could have used any cube whatsoever.
In the case of Brownian motion, we have
    E[(B_t − B_s)²] = |t − s|
and because the increments are Gaussian, for every p > 0,
    E[|B_t − B_s|^{2p}] = C_p|t − s|^p
for some constant C_p. From this result we deduce the
(2.2) Theorem. The linear Brownian motion is locally Hölder continuous of order
α for every α < 1/2.
Proof. As we have already observed, a process has at most one continuous
modification (up to indistinguishability). Theorem (2.1) tells us that BM has a
modification which is locally Hölder continuous of order α for α < (p − 1)/2p =
1/2 − 1/2p. Since p can be taken arbitrarily large, the result follows.
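As a numerical sanity check of the exponent 1/2 (a sketch assuming NumPy; grid size, lags and tolerances are arbitrary choices), one can regress the log of the maximal increment over lag δ against log δ on a simulated path; the slope should sit near 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 17                      # grid points on [0, 1]
b = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(1 / n), n))))

lags = 2 ** np.arange(0, 11)     # time lags delta = lag / n
max_incr = np.array([np.abs(b[lag:] - b[:-lag]).max() for lag in lags])

# Empirical Hoelder exponent: slope of log max-increment vs log delta.
slope = np.polyfit(np.log(lags / n), np.log(max_incr), 1)[0]
```

The fitted slope comes out slightly below 1/2, reflecting the logarithmic factor that Theorem (2.7) identifies in the exact modulus of continuity.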
From now on, we may, and will, suppose that all the paths of linear BM are
locally Hölder continuous of order α for every α < 1/2. We shall prove that the
Brownian paths cannot have a Hölder continuity of order α for α ≥ 1/2 (see also
Exercise (1.13)). We first need a few definitions, for which we retain the notation
of Sect. 4 Chap. 0.
For a real-valued function X defined on ℝ₊, we set
    T_t^Δ = Σ_i (X_{t_{i+1}} − X_{t_i})².
At variance with the S_t^Δ of Chap. 0, it is no longer true that T_t^Δ ≤ T_t^{Δ′} if Δ′ is a
refinement of Δ, and we introduce the following definition.
(2.3) Definition. A real-valued process X is of finite quadratic variation if there
exists a finite process ⟨X, X⟩ such that for every t and every sequence {Δ_n} of
subdivisions of [0, t] such that |Δ_n| goes to zero,
    P-lim T_t^{Δ_n} = ⟨X, X⟩_t.
The process ⟨X, X⟩ is called the quadratic variation of X.
Of course, we may consider intervals [s, t] and, with obvious notation, we will
then have
    P-lim T_{s,t}^{Δ_n} = ⟨X, X⟩_t − ⟨X, X⟩_s;
thus, ⟨X, X⟩ is an increasing process.
Remark. We stress that a process may be of finite quadratic variation in the sense
of Definition (2.3) and its paths be nonetheless a.s. of infinite quadratic variation
in the classical sense, i.e. sup_Δ T_t^Δ = ∞ for every t > 0; this is in particular the
case for BM. In this book the words "quadratic variation" will be used only in the
sense of Definition (2.3).
(2.4) Theorem. Brownian motion is of finite quadratic variation and ⟨B, B⟩_t = t
a.s. More generally, if X is a Gaussian measure with intensity μ and F is a set
such that μ(F) < ∞, for every sequence {F_k^n}, n = 1, 2, ..., of finite partitions of
F such that sup_k μ(F_k^n) → 0 as n → ∞,
    lim_n Σ_k X(F_k^n)² = μ(F)
in the L²-sense.
Proof. Because of the independence of the X(F_k^n)'s, and the fact that E[X(F_k^n)²]
= μ(F_k^n),
    E[(Σ_k X(F_k^n)² − μ(F))²] = Σ_k E[(X(F_k^n)² − μ(F_k^n))²],
and since for a centered Gaussian r.v. Y, E[Y⁴] = 3E[Y²]², this is equal to
    2 Σ_k μ(F_k^n)² ≤ 2 μ(F) sup_k μ(F_k^n) → 0,
which completes the proof.
Remarks. 1°) This result will be generalized to semimartingales in Chap. IV.
2°) By extraction of a subsequence, one can always choose a sequence (Δ_n)
such that the above convergence holds almost surely; in the case of BM, one can
actually show that the a.s. convergence holds for any refining (i.e. Δ_n ⊂ Δ_{n+1})
sequence (see Proposition (2.12) in Chap. II and Exercise (2.8) in this section).
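Theorem (2.4) is easy to observe numerically. The sketch below (NumPy assumed; the dyadic grid and seed are arbitrary choices) evaluates T_1^Δ for dyadic subdivisions of [0, 1] on a single simulated path: the sums concentrate around t = 1 as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16                      # finest dyadic grid on [0, 1]
b = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(1 / n), n))))

def quadratic_variation(path, step):
    """T_1^Delta for the subdivision using every `step`-th grid point."""
    sub = path[::step]
    return float(np.sum(np.diff(sub) ** 2))

# qv[m] uses a subdivision of [0, 1] into m intervals; it should approach 1.
qv = {2 ** k: quadratic_variation(b, 2 ** (16 - k)) for k in (4, 8, 12, 16)}
```

The fluctuation of T_1^Δ around 1 has variance 2Σ(t_{i+1} − t_i)², which explains the concentration as |Δ| → 0.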
(2.5) Corollary. The Brownian paths are a.s. of infinite variation on any interval.
Proof. By the foregoing result, there is a set Ω₀ ⊂ Ω such that P(Ω₀) = 1 and
for any pair of rationals p < q there exists a sequence (Δ_n) of subdivisions of
[p, q] such that |Δ_n| → 0 and
    lim_n Σ_{t_i∈Δ_n} (B_{t_{i+1}}(ω) − B_{t_i}(ω))² = q − p
for every ω ∈ Ω₀.
Let V(ω) ≤ +∞ be the variation of t → B_t(ω) on [p, q]. We have
    Σ_i (B_{t_{i+1}}(ω) − B_{t_i}(ω))² ≤ (sup_i |B_{t_{i+1}}(ω) − B_{t_i}(ω)|) V(ω).
By the continuity of the Brownian path, the right-hand side would converge to 0
as n → ∞ if V(ω) were finite. Hence, V(ω) = +∞ a.s.
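One can watch Corollary (2.5) happen on a grid. The sketch below (NumPy assumed; levels and tolerances are arbitrary choices) computes Σ|B_{t_{i+1}} − B_{t_i}| over dyadic subdivisions of [0, 1]: instead of converging, the sums grow like 2^{m/2} with the refinement level m.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
b = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(1 / n), n))))

def total_variation(path, step):
    """Sum of |increments| along the subdivision using every `step`-th point."""
    return float(np.abs(np.diff(path[::step])).sum())

# With 2^m intervals the sum is about 2^{m/2} * sqrt(2/pi): it diverges.
tv = {m: total_variation(b, 2 ** (16 - m)) for m in (10, 12, 14, 16)}
```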
In the following, we will say that a function is nowhere locally Hölder continuous
of order α if there is no interval on which it is Hölder continuous of order α.
(2.6) Corollary. The Brownian paths are a.s. nowhere locally Hölder continuous
of order α for α > 1/2.
Proof. It is almost the same as that of Corollary (2.5). If |B_t(ω) − B_s(ω)| ≤
k|t − s|^α for p ≤ s, t ≤ q and α > 1/2, then
    Σ_i (B_{t_{i+1}}(ω) − B_{t_i}(ω))² ≤ k²(q − p) sup_i |t_{i+1} − t_i|^{2α−1}
and we conclude as in the previous proof.
Theorem (2.2) and Corollary (2.6) leave open the case α = 1/2. The next
result shows in particular that the Brownian paths are not Hölder continuous of
order 1/2 (see also Exercise (2.31) Chap. III).
(2.7) Theorem (Lévy's modulus of continuity). If h(t) = (2t log(1/t))^{1/2}, then
    P[lim_{δ↓0} sup_{|t−s|≤δ; s,t∈[0,1]} |B_t − B_s|/h(δ) = 1] = 1.
Proof. Pick a number δ in ]0, 1[ and consider the quantity
    L_n = P[max_{1≤k≤2^n} |B_{k2^{−n}} − B_{(k−1)2^{−n}}| ≤ (1 − δ)h(2^{−n})].
By the independence of the increments, L_n is less than
    [1 − 2 ∫_{(1−δ)√(2n log 2)}^∞ (e^{−x²/2}/√(2π)) dx]^{2^n}.
By integrating by parts the left side of the inequality
    ∫_a^∞ b^{−2} exp(−b²/2) db < a^{−2} ∫_a^∞ exp(−b²/2) db,
the reader can check that
    ∫_a^∞ exp(−b²/2) db > (a/(a² + 1)) exp(−a²/2).
Using this and the inequality 1 − s < e^{−s}, we see that there exists a constant
C > 0 such that
    L_n ≤ exp(−C 2^{n(1−(1−δ)²)} n^{−1/2}).
It follows from the Borel–Cantelli lemma that the lim in the statement is a.s.
≥ 1 − δ, and it is a.s. ≥ 1 since δ is arbitrary. We shall now prove the reverse
inequality.
Again, we pick δ ∈ ]0, 1[ and ε > 0 such that (1 + ε)²(1 − δ) > 1 + δ. Let K
be the set of pairs (i, j) of integers such that 0 ≤ i < j < 2^n and 0 < j − i ≤ 2^{nδ},
and for such a pair set k = j − i. Using the inequality
    ∫_a^∞ exp(−b²/2) db < ∫_a^∞ exp(−b²/2)(b/a) db = a^{−1} exp(−a²/2),
and setting L = P[max_K (|B_{j2^{−n}} − B_{i2^{−n}}|/h(k2^{−n})) ≥ 1 + ε], we have
    L ≤ Σ_K P[|B_{j2^{−n}} − B_{i2^{−n}}| ≥ (1 + ε)h(k2^{−n})]
      ≤ D Σ_K (log(k^{−1}2^n))^{−1/2} (k2^{−n})^{(1+ε)²},
where D is a constant which may vary from line to line. Since k^{−1} is always
larger than 2^{−nδ}, we further have
    L ≤ D 2^{−n(1−δ)(1+ε)²} Σ_K (log(k^{−1}2^n))^{−1/2}.
Moreover, there are at most 2^{n(1+δ)} points in K and for each of them
    log(k^{−1}2^n) > log 2^{n(1−δ)},
so that finally
    L ≤ D n^{−1/2} 2^{n((1+δ)−(1−δ)(1+ε)²)}.
By the choice of ε and δ, this is the general term of a convergent series; by the
Borel–Cantelli lemma, for almost every ω, there is an integer n(ω) such that for
n ≥ n(ω),
    |B_{j2^{−n}} − B_{i2^{−n}}| < (1 + ε)h(k2^{−n})
whenever (i, j) ∈ K and k = j − i. Moreover, as the reader may show, the integer
n(ω) may be chosen so that for n ≥ n(ω),
    Σ_{m>n} h(2^{−m}) ≤ η h(2^{−(n+1)(1−δ)}),
where η > 0 is any preassigned number. Let ω be a path for which these properties
hold; pick 0 ≤ t₁ < t₂ ≤ 1 such that t₂ − t₁ < 2^{−n(ω)(1−δ)}. Next let n ≥ n(ω) be
the integer such that
    2^{−(n+1)(1−δ)} ≤ t₂ − t₁ < 2^{−n(1−δ)}.
We may find integers i, j, p_r, q_s such that
    t₁ = i2^{−n} − Σ_r 2^{−p_r},    t₂ = j2^{−n} + Σ_s 2^{−q_s}
with n < p₁ < p₂ < ⋯, n < q₁ < q₂ < ⋯ and 0 < j − i ≤ (t₂ − t₁)2^n < 2^{nδ}.
Since B is continuous, we get
    |B_{t₂}(ω) − B_{t₁}(ω)| ≤ |B_{t₁}(ω) − B_{i2^{−n}}(ω)| + |B_{j2^{−n}}(ω) − B_{i2^{−n}}(ω)|
                              + |B_{t₂}(ω) − B_{j2^{−n}}(ω)|
        ≤ 2(1 + ε) Σ_{m>n} h(2^{−m}) + (1 + ε)h((j − i)2^{−n})
        ≤ 2(1 + ε)η h(2^{−(n+1)(1−δ)}) + (1 + ε)h((j − i)2^{−n}).
Since h is increasing in a neighborhood of 0, for t₂ − t₁ sufficiently small, we get
    |B_{t₂}(ω) − B_{t₁}(ω)| ≤ (2(1 + ε)η + (1 + ε)) h(t₂ − t₁).
But ε, η and δ can be chosen arbitrarily close to 0, which ends the proof.
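Lévy's modulus can be probed numerically. The sketch below (NumPy assumed; grid, window length and tolerance are arbitrary choices) compares the largest increment over windows of length δ with h(δ) = (2δ log(1/δ))^{1/2} on a simulated path; the ratio is close to 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
b = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(1 / n), n))))

lag = 2 ** 6                     # window length delta = 2^-10
delta = lag / n
sup_incr = np.abs(b[lag:] - b[:-lag]).max()
h = np.sqrt(2 * delta * np.log(1 / delta))
ratio = sup_incr / h             # should be near 1 for small delta
```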
#
(2.8) Exercise. If B is the BM and Δ_n is the subdivision of [0, t] given by the
points t_j = j2^{−n}t, j = 0, 1, ..., 2^n, prove the following sharpening of Theorem
(2.4):
    lim_n T_t^{Δ_n} = t    almost surely.
[Hint: Compute the exact variance of T_t^{Δ_n} − t and apply the Borel–Cantelli
lemma.]
This result is proved in greater generality in Proposition (2.12) of Chap. II.
*
(2.9) Exercise (Non-differentiability of Brownian paths). 1°) If g is a real-valued
function on ℝ₊ which is differentiable at t, there exists an integer l such
that if i = [nt] + 1 then
    |g(j/n) − g((j − 1)/n)| ≤ 7l/n
for i < j ≤ i + 3 and n sufficiently large.
2°) Let D_t be the set of Brownian paths which are differentiable at t > 0.
Prove that ∪_t D_t is contained in the event
    Γ = ∪_{l≥1} lim inf_{n→∞} ∪_{i=1}^{n+1} ∩_{j=i+1}^{i+3} {|B(j/n) − B((j − 1)/n)| ≤ 7l/n}
and finally that P(Γ) = 0.
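The blow-up of difference quotients behind this exercise is visible numerically. In the sketch below (NumPy assumed; lags and tolerances are arbitrary choices), the median of |B_{t+h} − B_t|/h over many t grows like h^{−1/2} as h decreases, so no derivative can exist.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
b = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(1 / n), n))))

def median_quotient(path, lag, n_grid):
    """Median over t of |B_{t+h} - B_t| / h with h = lag / n_grid."""
    h = lag / n_grid
    return float(np.median(np.abs(path[lag:] - path[:-lag]) / h))

coarse = median_quotient(b, 2 ** 10, n)   # h = 2^-6
fine = median_quotient(b, 2 ** 2, n)      # h = 2^-14
# fine / coarse should be about (2^-6 / 2^-14)^{1/2} = 16.
growth = fine / coarse
```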
#*
(2.10) Exercise. Let (X_t^a) be a family of ℝ^k-valued continuous processes where
t ranges through some interval of ℝ and a is a parameter which lies in ℝ^d. Prove
that if there exist three constants γ, c, ε > 0 such that
    E[sup_t |X_t^a − X_t^b|^γ] ≤ c|a − b|^{d+ε},
then there is a modification of (X_t^a) which is jointly continuous in a and t and is
moreover Hölder continuous in a of order α for α < ε/γ, uniformly in t.
(2.11) Exercise (p-variation of BM). 1°) Let B be the standard linear BM and,
for every n, let t_i = i/n, i = 0, ..., n. Prove that for every p > 0
    n^{(p/2)−1} Σ_{i=0}^{n−1} |B_{t_{i+1}} − B_{t_i}|^p
converges in probability to a constant v_p as n tends to +∞.
[Hint: Use the scaling invariance properties of BM and the weak law of large
numbers.]
2°) Prove moreover that
    n^{(p−1)/2} (Σ_{i=0}^{n−1} |B_{t_{i+1}} − B_{t_i}|^p − n^{1−p/2} v_p)
converges in law to a Gaussian r.v.
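The scaling hint suggests that the limit constant is v_p = E[|N|^p] for a reduced Gaussian r.v. N (this explicit identification is an inference from the hint, not stated in the text). The sketch below (NumPy assumed; parameters arbitrary) checks it by simulation for p = 1 and p = 4, where E[|N|] = √(2/π) and E[N⁴] = 3.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14
incr = rng.normal(0.0, np.sqrt(1.0 / n), size=n)   # increments of B on [0, 1]

def normalized_p_variation(increments, p):
    """n^{p/2 - 1} * sum |B_{t_{i+1}} - B_{t_i}|^p over t_i = i/n."""
    m = len(increments)
    return float(m ** (p / 2 - 1) * np.sum(np.abs(increments) ** p))

v1 = normalized_p_variation(incr, 1.0)   # should be near sqrt(2/pi) ~ 0.798
v4 = normalized_p_variation(incr, 4.0)   # should be near 3
```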
(2.12) Exercise. For p > 0 given, find an example of a right-continuous but
discontinuous process X such that
    E[|X_s − X_t|^p] ≤ C|t − s|
for some constant C > 0, and all s, t ≥ 0.
[Hint: Consider X_t = 1_{{Y≤t}} for a suitable r.v. Y.]
§3. Canonical Processes and Gaussian Processes
We now come back more systematically to the study of some of the notions
introduced in Sect. 1. There, we pointed out that the choice of a path of a process
X amounts to the choice of an element of ℱ(T, E) for appropriate T and E.
It is well known that the set ℱ(T, E) is the same as the product space E^T. If
w ∈ ℱ(T, E), it corresponds in E^T to the product of the points w(t) of E.
From now on, we will not distinguish between ℱ(T, E) and E^T. The functions
Y_t, t ∈ T, taking their values in E, defined on E^T by Y_t(w) = w(t) are called the
coordinate mappings. They are random variables, hence form a process indexed
by T, if E^T is endowed with the product σ-algebra ℰ^T. This σ-algebra is the
smallest for which all the functions Y_t are measurable and it is the union of the
σ-algebras generated by the countable sub-families of functions Y_t, t ∈ T. It is
also the smallest σ-algebra containing the measurable rectangles ∏_{t∈T} A_t where
A_t ∈ ℰ for each t and A_t = E but for a finite sub-family (t₁, ..., t_n) of T.
Let now X_t, t ∈ T, be a process defined on (Ω, ℱ, P) with state space
(E, ℰ). The mapping φ from Ω into E^T defined by
    φ(w)(t) = X_t(w)
is measurable with respect to ℱ and ℰ^T because Y_t ∘ φ is measurable for each
t. Let us call P_X the image of P by φ. Plainly, for any finite subset (t₁, ..., t_n)
of T and sets A_i ∈ ℰ,
    P_X[Y_{t₁} ∈ A₁, ..., Y_{t_n} ∈ A_n] = P[X_{t₁} ∈ A₁, ..., X_{t_n} ∈ A_n],
that is, the processes X and Y are versions of each other.
(3.1) Definition. The process Y is called the canonical version of the process X.
The probability measure P_X is called the law of X.
In particular, if X and X′ are two processes, possibly defined on different
probability spaces, they are versions of each other if and only if they have the
same law, i.e. P_X = P_{X′} on E^T, and this we will often write for short
    X (d)= X′    or    (X_t, t ≥ 0) (d)= (X′_t, t ≥ 0).
For instance, property iii) in Proposition (1.10) may be stated
    (B_t, t ≥ 0) (d)= (cB_{c^{−2}t}, t ≥ 0).
Suppose now that we want to construct a process which models some physical
phenomenon. Admittedly, nature (i.e. statistical experiments, physical
considerations, ...) gives us a set of f.d.d.'s, and the goal is to construct a process with
these given distributions. If we can do that, then by the foregoing we also have
a canonical version; actually, we are going to construct this version in a fairly
general setting.
We recall from Sect. 1 that the family ℳ_X of the finite-dimensional
distributions of a process X forms a projective family. The existence of the canonical
version of a process of given finite-dimensional distributions is ensured by the
Kolmogorov extension theorem which we recall here without proof.
(3.2) Theorem. If E is a Polish space and ℰ the σ-algebra of its Borel subsets,
for any set T of indices and any projective family of probability measures on finite
products, there exists a unique probability measure on (E^T, ℰ^T) whose projections
on finite products are the given family.
For our present purposes, this theorem might have been stated: if (E, ℰ)
is Polish, given a family ℳ of finite-dimensional distributions, there exists a
unique probability measure on (E^T, ℰ^T) such that ℳ_Y = ℳ for the coordinate
process Y.
The above result permits the construction of canonical versions of processes.
However, for the same reasons as those invoked in the first section, one usually
does not work with the canonical version. It is only an intermediate step in the
construction of some more tractable versions. These versions will usually have
some continuity properties. Many processes have a version with paths which are
right continuous and have left-hand limits at every point, in other words:
lim_{s↓t} X_s(w) = X_t(w) and lim_{s↑t} X_s(w) exists; the latter limit is then denoted by
X_{t−}(w). We denote by D(ℝ₊, E) or simply D the space of such functions, which
are called cadlag. The space D is a subset of E^T; to say that X has a version
which is a.s. cadlag is equivalent to saying that for the canonical version of X the
probability measure P_X on ℰ^T gives probability 1 to every measurable set which
contains D.
In that case, we may use D itself as probability space. Indeed, still calling
Y_t the mapping w → w(t) on D, let 𝒟 be the σ-algebra σ(Y_t, t ≥ 0); plainly,
𝒟 = ℰ^T ∩ D. The reader will easily check that, although D is not measurable
in ℰ^T, one defines unambiguously a probability measure Q on 𝒟 by setting
    Q(Γ) = P_X(Γ̃)
where Γ̃ is any set in ℰ^T such that Γ = Γ̃ ∩ D. Obviously, the process Y defined
on (D, 𝒟, Q) is another version of X. This version again is defined on a space
of functions and is made up of coordinate mappings and will also be referred to as
canonical; we will also write P_X instead of Q; this causes no confusion so long
as one knows the space we work with.
Finally, if X is a process defined on (Ω, ℱ, P) with a.s. continuous paths,
we can proceed as above with C(ℝ₊, E) instead of D and take the image of P
by the map φ defined on a set of probability 1 by
    Y_t(φ(w)) = φ(w)(t) = X_t(w).
We can do that in particular in the case of Brownian motion and we state
(3.3) Proposition. There is a unique probability measure W on C(ℝ₊, ℝ) for
which the coordinate process is a Brownian motion. It is called the Wiener measure
and the space C(ℝ₊, ℝ) is called the Wiener space.
Proof. One gets W as the image by φ of the probability measure of the version
of BM already constructed.
It is actually possible to construct the Wiener measure directly, hence a
continuous version of BM, without the knowledge of the results of Sect. 1. Let 𝒜
be the union of the σ-fields on C = C(ℝ₊, ℝ) generated by all the finite sets of
coordinate mappings; 𝒜 is an algebra which generates the σ-field on C. On 𝒜,
we can define W by the formula in Exercise (1.16) 1°) and then it remains to
prove that the set function thus defined extends to a probability measure on the
whole σ-field.
For the Brownian motion in ℝ^d, we define similarly the Wiener space
C(ℝ₊, ℝ^d) and the Wiener measure W as the only probability measure on the
Wiener space for which the coordinate process is a standard BM^d. The Wiener
space will often be denoted by W or W^d if we want to stress the dimension. It
is interesting to restate the results of Sect. 2 in terms of the Wiener measure, for
instance: the Wiener measure is carried by the set of functions which are Hölder
continuous of order α for every α < 1/2 and nowhere Hölder continuous of order
α for α ≥ 1/2.
We now introduce an important notion. On the three canonical spaces we have
defined, namely E^{ℝ₊}, D(ℝ₊, E), C(ℝ₊, E), we can define a family of
transformations θ_t, t ∈ ℝ₊, by
    Y_s(θ_t(w)) = Y_{t+s}(w),
where as usual Y is the coordinate process. Plainly, θ_t ∘ θ_s = θ_{t+s} and θ_t is
measurable with respect to σ(Y_s, s ≥ t), hence to σ(Y_s, s ≥ 0). The effect of θ_t
on a path w is to cut off the part of the path before t and to shift the remaining
part in time. The operators θ_t, t ≥ 0, are called the shift operators.
(3.4) Definition. A process X is stationary if for every t₁, ..., t_n and t and every
A_i ∈ ℰ,
    P[X_{t+t₁} ∈ A₁, ..., X_{t+t_n} ∈ A_n] = P[X_{t₁} ∈ A₁, ..., X_{t_n} ∈ A_n].
Another way of stating this is to say, using the canonical version, that for
every t, θ_t(P_X) = P_X; in other words, the law of X is invariant under the shift
operators.
We now illustrate the above notions by studying the case of Gaussian processes.
(3.5) Definition. A real-valued process X_t, t ∈ T, is a Gaussian process if for any
finite sub-family (t₁, ..., t_n) of T, the vector r.v. (X_{t₁}, ..., X_{t_n}) is Gaussian. The
process X is centered if E[X_t] = 0 for every t ∈ T.
In other words, X is a Gaussian process if the smallest closed subspace of
L²(Ω, ℱ, P) containing the r.v.'s X_t, t ∈ T, is a Gaussian subspace. As was
already observed, the standard linear BM is a Gaussian process.
(3.6) Definition. If X_t, t ∈ T, is a Gaussian process, its covariance Γ is the
function defined on T × T by
    Γ(s, t) = cov(X_s, X_t) = E[(X_s − E[X_s])(X_t − E[X_t])].
Let us recall that a semi-definite positive function Γ on T is a function from
T × T into ℝ such that for any integer d and any d-tuple (t₁, ..., t_d) of points in
T, the d × d-matrix (Γ(t_i, t_j)) is semi-definite positive (Sect. 6, Chap. 0).
(3.7) Proposition. The covariance of a Gaussian process is a semi-definite positive
function. Conversely, any symmetric semi-definite positive function is the
covariance of a centered Gaussian process.
Proof. Let (t_i) be a finite subset of T and a_i some complex numbers; then
    Σ_{i,j} Γ(t_i, t_j) a_i ā_j = E[|Σ_i a_i (X_{t_i} − E[X_{t_i}])|²] ≥ 0,
which proves the first statement.
Conversely, given a symmetric semi-definite positive function Γ, for every finite
subset t₁, ..., t_n of T, let P^{t₁,...,t_n} be the centered Gaussian probability measure
on ℝⁿ with covariance matrix (Γ(t_i, t_j)) (see Sect. 6 Chap. 0). Plainly, this defines
a projective family and, under the probability measure given by the Kolmogorov
extension theorem, the coordinate process is a Gaussian process with covariance
Γ. We stress the fact that the preceding discussion holds for a general set T and
not merely for subsets of ℝ.
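The converse half of Proposition (3.7) has a finite-dimensional computational analogue: given a symmetric semi-definite positive matrix (Γ(t_i, t_j)), a centered Gaussian vector with that covariance can be produced from i.i.d. reduced Gaussians via a Cholesky factor. The sketch below (NumPy assumed; grid and sample size are arbitrary choices) does this for the BM covariance Γ(s, t) = inf(s, t).

```python
import numpy as np

def sample_gaussian_process(times, cov, n_samples, rng):
    """Sample centered Gaussian vectors (X_{t_1}, ..., X_{t_d}) whose
    covariance matrix is (cov(t_i, t_j)), via a Cholesky factorization."""
    gamma = np.array([[cov(s, t) for t in times] for s in times])
    chol = np.linalg.cholesky(gamma)           # gamma = chol @ chol.T
    z = rng.normal(size=(n_samples, len(times)))
    return z @ chol.T                          # rows have covariance gamma

rng = np.random.default_rng(0)
times = np.linspace(0.02, 1.0, 50)
samples = sample_gaussian_process(times, min, 20_000, rng)
emp_cov = np.cov(samples, rowvar=False)
target = np.minimum.outer(times, times)        # BM covariance inf(s, t)
```

The empirical covariance of the samples reproduces inf(s, t) up to Monte Carlo error; any other symmetric semi-definite positive kernel can be plugged in for cov.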
Remark. The Gaussian measures of Definition (1.4) may be constructed by
applying Proposition (3.7) with T the collection of sets of finite μ-measure and
Γ(A, B) = μ(A ∩ B).
We have already seen that the covariance of the standard linear BM is given
by inf(s, t). Another large set of examples is obtained in the following way. If μ
is a symmetric probability measure on ℝ, its Fourier transform
    φ(t) = ∫_{−∞}^{+∞} e^{itx} μ(dx)
is real and we get a covariance on T = ℝ by setting Γ(t, t′) = φ(t − t′), as
the reader will easily show. By Proposition (3.7), the process associated with
such a covariance is a stationary process. In particular we know that for β > 0,
the function exp(−β|t|) is the characteristic function of the Cauchy law with
parameter β. Consequently, the function Γ(t, t′) = c exp(−β|t − t′|) with c > 0
is the covariance of a stationary Gaussian process called a stationary Ornstein–
Uhlenbeck process (abbreviated OU in the sequel) with parameter β and size c. If
we call this process X, it is easily seen that
    E[(X_t − X_s)²] = 2c(1 − exp(−β|t − s|)) ≤ 2cβ|t − s|;
hence, by Kolmogorov's continuity criterion, X has a continuous modification.
Henceforth, we will consider only continuous modifications of this process and
the time set will often be restricted to ℝ₊.
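The OU covariance can be checked by simulation using the deterministic time change studied in Exercise (3.8) below, X_t = e^{−λt}B_{exp(2λt)}. The sketch below (NumPy assumed; λ, the time points and the sample size are arbitrary choices) verifies E[X_s X_t] ≈ e^{−λ|t−s|}, i.e. parameter β = λ and size c = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, ts = 1.0, np.array([0.0, 0.5, 1.0])
clock = np.exp(2 * lam * ts)                   # B is read at these times

n = 20_000
# B at the increasing times clock[0] < clock[1] < clock[2],
# built from independent Gaussian increments.
steps = rng.normal(0.0, 1.0, size=(n, 3)) * np.sqrt(
    np.diff(np.concatenate(([0.0], clock))))
b = np.cumsum(steps, axis=1)
x = np.exp(-lam * ts) * b                      # X_t = e^{-lam t} B_{e^{2 lam t}}

cov_01 = float(np.mean(x[:, 0] * x[:, 2]))     # should be near e^{-lam |1-0|}
var_1 = float(np.var(x[:, 2]))                 # should be near 1 (stationarity)
```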
Another important example is the Brownian Bridge. This is the centered Gaussian
process defined on T = [0, 1] and with covariance Γ(s, t) = s(1 − t) for s ≤ t.
The easiest way to prove that Γ is a covariance is to observe that, for
the process X_t = B_t − tB₁ where B is a BM, E[X_s X_t] = s(1 − t) for s ≤ t.
This also gives us immediately a continuous version of the Brownian Bridge. We
observe that X₁ = 0 a.s., hence the paths go a.s. from 0 at time 0 to 0 at time 1;
this is the reason for the name given to this process, which is also sometimes called
the tied-down or pinned Brownian motion. More generally, one may consider the
Brownian Bridge X^y between 0 and y which may be realized by setting
    X_t^y = B_t − t(B₁ − y) = X_t^0 + ty,    0 ≤ t ≤ 1,
where X^0 = X is the Brownian Bridge going from 0 to 0. Exercise (3.16)
describes how X^y may be viewed as BM conditioned to be at y at time 1. In the
sequel, the words "Brownian Bridge" without further qualification will mean the
Bridge going from 0 to 0. Naturally, the notion of Bridge may be extended to
higher dimensions and to intervals other than [0, 1].
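The realization X_t = B_t − tB₁ makes the Bridge easy to simulate; the sketch below (NumPy assumed; grid and sample size are arbitrary choices) checks the covariance s(1 − t) and the pinning X₁ = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 200
t = np.linspace(0.0, 1.0, n_steps + 1)

incr = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
b = np.concatenate((np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)), axis=1)
x = b - t * b[:, -1:]                  # Brownian Bridge X_t = B_t - t B_1

i, j = 50, 150                         # grid indices of s = 0.25, t = 0.75
emp = float(np.mean(x[:, i] * x[:, j]))
# Covariance s(1 - t) = 0.25 * 0.25 = 0.0625 at (s, t) = (0.25, 0.75).
```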
Finally, there is a centered Gaussian process with covariance Γ(s, t) = 1_{[s=t]},
s, t ∈ ℝ₊. For such a process, any two r.v.'s X_s, X_t with s ≠ t are independent,
which can be seen as "great disorder". It is interesting to note that this process
does not have a good version; if it had a measurable version, i.e. such that the
map (t, w) → X_t(w) were measurable (see Definition (1.14) Chap. IV), then for
each t, Y_t = ∫₀ᵗ X_s ds would be a Gaussian r.v. and, using a Fubini argument, it is
easily shown that we would have E[Y_t²] = 0, hence Y_t = 0 a.s. Consequently, we
would get X_t(w) = 0 dt ⊗ P(dw)-a.s., which is in contradiction with the equality
    E[∫₀ᵗ X_s² ds] = t.
#
(3.8) Exercise. 1°) Let B be a linear BM and λ a positive number; prove that the
process
    X_t = e^{−λt} B_{exp(2λt)},    t ∈ ℝ,
is a stationary Ornstein–Uhlenbeck process; compute its parameter and its size.
Conclude that the stationary OU process has a continuous version whose paths are
nowhere differentiable.
2°) Let X be a continuous OU process with parameter 1/2 and size 1 and set
    β_t = X_t + (1/2) ∫₀ᵗ X_u du    for t ≥ 0,
    β_t = X_t − (1/2) ∫_t^0 X_u du    for t < 0.
Prove that β is a Gaussian process with continuous paths. In what sense is β a
BM?
[Hint: See Exercise (3.14).]
This is related to the solution of the Langevin equation in Chap. IX.
* (3.9) Exercise (Fractional Brownian motion). Let d be a positive integer and p
a real number such that (d/2) − 1 < p < d/2. We set α = d − 2p.
1°) For x, y ∈ ℝ^d, prove that the function
    f_y(x) = |x − y|^{−p} − |x|^{−p}
is in L²(ℝ^d, m) where m is the Lebesgue measure on ℝ^d.
2°) Let X be the Gaussian measure with intensity m and set Z_y = X(f_y).
Prove that there is a constant c depending only on d and p such that
    E[(Z_y − Z_z)²] = c|y − z|^α.
The process Z is called the fractional Brownian motion of index α. For α = 1,
Z is Lévy's Brownian motion with d parameters.
3°) Let X¹ and X² be two independent Gaussian measures with intensity m,
and set X = X¹ + iX². For y ∈ ℝ^d, set f̂_y(x) = (1 − exp(iy·x))/|x|^{d−p}, where
· indicates the inner product in ℝ^d. Show that
    ∫ f̂_y(x) X(dx)
is a constant multiple of the fractional Brownian motion of index α.
[Hint: The Fourier–Plancherel transform of the function f_y is γ f̂_y, where γ is
a constant independent of y.]
#
(3.10) Exercise (Brownian Bridge). Let X be the Brownian Bridge (BB).
1°) Prove that the process X_{1−t}, 0 ≤ t ≤ 1, is also a BB and that
    B_t = (t + 1) X_{t/(t+1)},    t ≥ 0,
is a BM. Conversely, if B is a BM, the processes (1 − t)B_{t/(1−t)} and tB_{(1/t)−1},
0 ≤ t ≤ 1, are BB's.
2°) Prove that the process
    X_{(s+t) mod 1} − X_s,    0 ≤ t ≤ 1,
where s is a fixed number of [0, 1], is a BB.
3°) (Continuation of Exercise (1.21)). Prove that
    sup_{t>0} (1/t)(|B_t| − 1) (d)= sup_{t≥0} (|B_t| − t) (d)= sup_{0≤t≤1} X_t².
(3.11) Exercise (Brownian sheet). 1°) Prove that the function Γ defined on
ℝ₊² × ℝ₊² by
    Γ((u, s), (v, t)) = inf(u, v) × inf(s, t)
is a covariance and that the corresponding Gaussian process has a continuous
version. This process is called the Brownian sheet.
2°) Prove that the Brownian sheet may be obtained from L²(ℝ₊²) as BM was
obtained from L²(ℝ₊) in Sect. 1.
Let {𝔹(Γ); Γ ∈ ℬ(ℝ₊²), ∫∫_Γ ds du < ∞} be the Gaussian measure associated
with the Brownian sheet 𝔹.
Prove that (𝔹(R_t), 0 ≤ t ≤ 1) is a Brownian bridge, where
    R_t = {(s, u); 0 ≤ s < t < u ≤ 1},    0 ≤ t ≤ 1.
3°) If 𝔹_{(s,t)} is a Brownian sheet, prove that the following processes are also
Brownian sheets:
a) 𝔹_{(a²s, b²t)}/ab where a and b are two positive numbers;
b) s𝔹_{(s^{−1}, t)} and st𝔹_{(s^{−1}, t^{−1})};
c) 𝔹_{(s₀+s, t₀+t)} − 𝔹_{(s₀+s, t₀)} − 𝔹_{(s₀, t₀+t)} + 𝔹_{(s₀, t₀)} where s₀, t₀ are fixed
positive numbers.
4°) If 𝔹_{(s,t)} is a Brownian sheet, then for a fixed s, t → 𝔹_{(s,t)} is a multiple of
linear BM. The process t → 𝔹_{(e^t, e^{−t})} is an Ornstein–Uhlenbeck process.
(3.12) Exercise (Reproducing Kernel Hilbert Space). Let Γ be a covariance on
T × T, and X_t, t ∈ T, a centered Gaussian process with covariance Γ.
1°) Prove that there exists a unique Hilbert space H of functions on T such
that
i) H is the closure of the subspace spanned by {Γ(t, ·), t ∈ T};
ii) for every f ∈ H, (f, Γ(t, ·)) = f(t).
2°) Let 𝒢 be the Gaussian space generated by the variables X_t, t ∈ T. Prove
that H and 𝒢 are isomorphic, the isomorphism being given by
    Z ∈ 𝒢 → (E[Z X_t], t ∈ T).
3°) In the case of the covariance of BM on [0, 1], prove that H is the space
H of Exercise (1.12) in Chap. I.
* (3.13) Exercise. 1°) Let φ be a locally bounded measurable function on ]0, 1]
such that
    lim_{t↓0} t^{(3/2)−δ} |φ(t)| < +∞
for some δ > 0. If B is the BM, prove that the integral
    X_t^φ = ∫₀ᵗ φ(s) B_s ds
defines a continuous Gaussian process on [0, 1]. (A sharper form is given in
Exercise (2.31) of Chap. III.) Characterize the functions φ for which B − X^φ is
again a BM.
2°) Treat the same questions for the Brownian Bridge.
#
(3.14) Exercise. For any x ∈ ℝ^d, define the translation τ_x on W = C(ℝ₊, ℝ^d)
by τ_x(w)(t) = x + w(t) and call W^x the image of W by τ_x (thus, in particular
W⁰ = W).
1°) Prove that under W^x the coordinate process X is a version of the BM^d
started at x. Observe that W^x is carried by the set {w : w(0) = x} and conclude
that W^x and W^y are mutually singular if x ≠ y. On the other hand, prove that for
any ε > 0, W^x and W^y are equivalent on the σ-algebra σ(X_s, s ≥ ε).
2°) For any Γ ∈ σ(X_s, s ≥ 0), prove that the map x → W^x(Γ) is a Borel
function. If μ is a probability measure on the Borel sets of ℝ^d, we define a
probability measure W^μ on W by
    W^μ(Γ) = ∫_{ℝ^d} W^x(Γ) μ(dx).
Under which condition is X a Gaussian process for W^μ? Compute its covariance
in that case.
* (3.15) Exercise. Let us first recall a result from Fourier series theory. Let f be
continuous on [0, 1] with f(0) = f(1) = 0 and set, for k ≥ 1,
    a_k = √2 ∫₀¹ f(t) cos 2πkt dt,    b_k = √2 ∫₀¹ f(t) sin 2πkt dt;
then if Σ₁^∞ a_k converges,
    f(t) = √2 Σ_{k=1}^∞ (a_k(cos 2πkt − 1) + b_k sin 2πkt),
where the series on the right converges in L²([0, 1]). By applying this to the paths
of the Brownian Bridge B_t − tB₁, prove that there exist two sequences (ξ_k)_{k≥0}
and (η_k)_{k≥1} of independent reduced Gaussian random variables such that almost
surely
    B_t = tξ₀ + √2 Σ_{k=1}^∞ ((ξ_k/2πk)(cos 2πkt − 1) + (η_k/2πk) sin 2πkt).
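The series representation can be tested by truncation. The sketch below (NumPy assumed; the truncation level and sample size are arbitrary choices) builds B_t from independent reduced Gaussians with K terms of the series and checks that Var(B_{1/2}) ≈ 1/2, as it must be for a BM.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, K = 20_000, 200
t = 0.5

xi0 = rng.normal(size=n_samples)
xi = rng.normal(size=(n_samples, K))
eta = rng.normal(size=(n_samples, K))
k = np.arange(1, K + 1)

# Truncated series: B_t = t xi_0
#   + sqrt(2) sum_k (xi_k (cos 2 pi k t - 1) + eta_k sin 2 pi k t) / (2 pi k)
terms = xi * (np.cos(2 * np.pi * k * t) - 1) + eta * np.sin(2 * np.pi * k * t)
b_t = t * xi0 + np.sqrt(2) * (terms / (2 * np.pi * k)).sum(axis=1)

var_half = float(np.var(b_t))          # should approach t = 1/2 as K grows
```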
#
(3.16) Exercise (The Brownian Bridge as conditioned Brownian motion). We
consider Ω = C([0, 1], ℝ) endowed with its Borel σ-field and the Wiener measure
W. As Ω is a Polish space and ℬ₁ = σ(X₁) is countably generated, there exists a
regular conditional distribution P(y, ·) for the conditional expectation with respect
to ℬ₁.
1°) Prove that we may take P(y, ·) to be the law P^y of the Brownian Bridge
between 0 and y. In other words, for any Borel subset Γ of Ω,
    W(Γ) = ∫_ℝ P^y[Γ] (1/√(2π)) exp(−y²/2) dy.
2°) The function y → P^y is continuous in the weak topology (Chap. 0 and
Chap. XIII), that is: for any bounded continuous function f on Ω the map y →
∫_Ω f(w) dP^y(w) is continuous.
(3.17) Exercise. 1°) Let X be a Gaussian process with covariance Γ on [0, 1] and
suppose that for each t, X is differentiable at t, i.e., there exists a r.v. X′_t such that
    lim_{h→0} h^{−1}(X_{t+h} − X_t) = X′_t    a.s.
Prove that ∂²Γ(s, t)/∂s∂t exists and is equal to the covariance of the process X′.
2°) Let B be the standard linear BM on [0, 1] and T(w) the derivative in the
sense of Schwartz distributions of the continuous map t → B_t(w). If f, g are
in C_c^∞(]0, 1[), prove that (T(w), f) and (T(w), g) are centered Gaussian random
variables with covariance equal to (U, fg) where U is the second mixed derivative
of inf(s, t) in the sense of Schwartz distributions, namely a suitable multiple of
the Lebesgue measure on the diagonal of the unit square.
§4. Filtrations and Stopping Times
In this section, we introduce some basic notation which will be used constantly in
the sequel.
(4.1) Definition. A filtration on the measurable space (Ω, ℱ) is an increasing
family (ℱ_t)_{t≥0} of sub-σ-algebras of ℱ. In other words, for each t we have a
sub-σ-algebra ℱ_t and ℱ_s ⊂ ℱ_t if s < t. A measurable space (Ω, ℱ) endowed
with a filtration (ℱ_t)_{t≥0} is said to be a filtered space.
Chapter I. Introduction
(4.2) Definition. A process X on (Ω, ℱ) is adapted to the filtration (ℱ_t) if X_t is
ℱ_t-measurable for each t.

Any process X is adapted to its natural filtration ℱ_t^o = σ(X_s, s ≤ t) and
(ℱ_t^o) is the minimal filtration to which X is adapted. To say that X is adapted to
(ℱ_t) is to say that ℱ_t^o ⊂ ℱ_t for each t.

It is the introduction of a filtration which allows the parameter t to be really
thought of as "time". Heuristically speaking, the σ-algebra ℱ_t is the collection of
events which may occur before or at time t or, in other words, the set of possible
"pasts" up to time t. In the case of stationary processes, where the law is invariant
under time shifts, it is the measurability with respect to ℱ_t which places the event in time.
Filtrations are a fundamental feature of the theory of stochastic processes and
the definition of the basic objects of our study, such as martingales (see the following
chapter) or Markov processes, will involve filtrations. We proceed to a study
of this notion and introduce some notation and a few definitions.

With a filtration (ℱ_t), one can associate two other filtrations by setting

    ℱ_{t−} = ⋁_{s<t} ℱ_s,    ℱ_{t+} = ⋂_{s>t} ℱ_s.

We have ⋁_{t≥0} ℱ_{t−} = ⋁_{t≥0} ℱ_t = ⋁_{t≥0} ℱ_{t+} and this σ-algebra will be
denoted ℱ_∞. The σ-algebra ℱ_{0−} is not defined and, by convention, we put ℱ_{0−} = ℱ₀.
We always have ℱ_{t−} ⊆ ℱ_t ⊆ ℱ_{t+} and these inclusions may be strict. If for
instance Ω = D(ℝ₊, ℝ), X is the coordinate process and the filtration is (ℱ_t^o),
the event {X_t = a} is in ℱ_t^o and is not in ℱ_{t−}^o; a little later, we shall see an
example of an event in ℱ_{t+}^o which is not in ℱ_t^o.

We shall encounter other examples of pairs (ℱ_t), (𝒢_t) of filtrations such that
ℱ_t ⊂ 𝒢_t for all t. We can think about this set-up as the knowledge that two different
observers can have gained at time t, the 𝒢_t-observer being more skillful than
the ℱ_t-observer. For instance, the ℱ_{t+}^o-observer above can foresee the immediate
future.
This also leads us to remark that, in a given situation, there may be plenty of
filtrations to work with and that one may choose the most convenient. That is to
say that the introduction of a filtration is no restriction on the problems we may
treat; at the extreme, one may choose the constant filtration (ℱ_t = ℱ for every t),
which amounts to having no filtration.
(4.3) Definition. If ℱ_t = ℱ_{t+} for every t, the filtration is said to be right-continuous.

For any filtration (ℱ_t), the filtration (ℱ_{t+}) is right-continuous.
We come now to an important definition.
We come now to an important definition.
(4.4) Definition. A stopping time relative to the filtration (ℱ_t) is a map T on Ω with
values in [0, ∞], such that for every t,

    {T ≤ t} ∈ ℱ_t.
In particular, T is a positive r.v. taking possibly infinite values. If (ℱ_t) is
right-continuous, it is equivalent to demand that {T < t} belongs to ℱ_t for every
t. In that case, the definition is also equivalent to: T is a stopping time if and
only if the process X_t = 1_{[0,T]}(t) is adapted (X is then a left-continuous adapted
process, a particular case of the predictable processes which will be introduced in
Sect. 5 Chap. IV and in Exercise (4.20)).

The class of sets A in ℱ_∞ such that A ∩ {T ≤ t} ∈ ℱ_t for all t is a σ-algebra
denoted by ℱ_T; the sets in ℱ_T must be thought of as events which may occur
before time T. The constants, i.e. T(w) ≡ s for every w, are stopping times and
in that case ℱ_T = ℱ_s. Stopping times thus appear as generalizations of constant
times for which one can define a "past" which is consistent with the "pasts" of
constant times.
The proofs of all the above facts are left to the reader, whom we also invite to
solve Exercises (4.16)–(4.19) to become acquainted with stopping times.
A stopping time may be thought of as the first time some physical event occurs.
Here are two basic examples.
(4.5) Proposition. If E is a metric space, A a closed subset of E and X the coordinate
process on W = C(ℝ₊, E), and if we set

    D_A(w) = inf{t ≥ 0 : X_t(w) ∈ A}

with the understanding that inf(∅) = +∞, then D_A is a stopping time with respect
to the natural filtration ℱ_t^o = σ(X_s, s ≤ t). It is called the entry time of A.

Proof. For a metric d on E, we have

    {D_A ≤ t} = {w : inf_{s∈ℚ, s≤t} d(X_s(w), A) = 0}

and the right-hand side set obviously belongs to ℱ_t^o.    □
This is one of the rare examples of interesting stopping times with respect to
(ℱ_t^o). If we remove the assumption of path-continuity of X or of closedness of
A, we have to use larger filtrations. We have for instance
(4.6) Proposition. If A is an open subset of E and Ω is the space of right-continuous
paths from ℝ₊ to E, the time

    T_A = inf{t > 0 : X_t ∈ A}    (inf(∅) = +∞)

is a stopping time with respect to (ℱ_{t+}^o). It is called the hitting time of A.

Proof. As already observed, T_A is a (ℱ_{t+}^o)-stopping time if and only if {T_A < t} ∈
ℱ_t^o for each t. If A is open and X_s(w) ∈ A, by the right-continuity of paths,
X_t(w) ∈ A for every t ∈ [s, s + ε[ for some ε > 0. As a result

    {T_A < t} = ⋃_{s∈ℚ, s<t} {X_s ∈ A} ∈ ℱ_t^o.    □
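The countable-supremum argument in the proof translates directly into computation on a sampled path. The following small sketch is our own illustration (a one-dimensional open set A = ]level, ∞[ standing in for a general open set): the scan over grid times mirrors the union over rational s < t in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
dt = 1.0 / n
t_grid = np.linspace(0.0, 1.0, n + 1)
# One Brownian path sampled on a fine grid of [0, 1].
B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

def first_time_in(path, grid, level):
    """Grid analogue of T_A = inf{t > 0 : X_t in A} for the open set
    A = ]level, oo[: scan the countable (here: finite) set of grid times,
    as the union over rationals does in the proof above."""
    hits = np.nonzero(path > level)[0]
    return grid[hits[0]] if hits.size else np.inf

T = first_time_in(B, t_grid, 0.5)
assert T == np.inf or (0.0 < T <= 1.0)
```

Of course a finite grid can only bound T_A from above; the point is that knowing the path on countably many times suffices, which is exactly what makes {T_A < t} measurable.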
It can be seen directly or by means of Exercise (4.21) that T_A is not a (ℱ_t^o)-
stopping time and that, in this setting, ℱ_{t+}^o is strictly larger than ℱ_t^o. This is a
general phenomenon; most interesting stopping times will be (ℱ_{t+}^o)-stopping times
and we will often have to work with (ℱ_{t+}^o) rather than (ℱ_t^o).
We now turn to the use of stopping times in a general setting.

Let (ℱ_t) be a filtration on (Ω, ℱ) and T a stopping time. For a process X,
we define a new mapping X_T on the set {w : T(w) < ∞} by

    X_T(w) = X_t(w)    if    T(w) = t.

This is the position of the process X at time T, but it is not clear that X_T is
a random variable on {T < ∞}. Moreover, if X is adapted, we would like X_T
to be ℱ_T-measurable just as X_t is ℱ_t-measurable. This is why we lay down the
following definitions, where ℬ([0, t]) is the σ-algebra of Borel subsets of [0, t].
(4.7) Definition. A process X is progressively measurable or simply progressive
(with respect to the filtration (ℱ_t)) if for every t the map (s, w) → X_s(w) from
[0, t] × Ω into (E, ℰ) is ℬ([0, t]) ⊗ ℱ_t-measurable. A subset Γ of ℝ₊ × Ω is
progressive if the process X = 1_Γ is progressive.

The family of progressive sets is a σ-field on ℝ₊ × Ω called the progressive
σ-field and denoted by Prog. A process X is progressive if and only if the map
(t, w) → X_t(w) is measurable with respect to Prog.
Clearly, a progressively measurable process is adapted and we have conversely
the

(4.8) Proposition. An adapted process with right- or left-continuous paths is
progressively measurable.

Proof. Left to the reader as an exercise.
We now come to the result for which the notion was introduced.
(4.9) Proposition. If X is progressively measurable and T is a stopping time (with
respect to the same filtration (ℱ_t)) then X_T is ℱ_T-measurable on the set {T < ∞}.

Proof. The set {T < ∞} is itself in ℱ_T. To say that X_T is ℱ_T-measurable on this
set is to say that X_T · 1_{{T≤t}} ∈ ℱ_t for every t. But the map

    T : ({T ≤ t}, {T ≤ t} ∩ ℱ_t) → ([0, t], ℬ([0, t]))

is measurable because T is a stopping time, hence the map w → (T(w), w) from
(Ω, ℱ_t) into ([0, t] × Ω, ℬ([0, t]) ⊗ ℱ_t) is measurable and X_T is the composition
of this map with X which is ℬ([0, t]) ⊗ ℱ_t-measurable by hypothesis.    □
With a stopping time T and a process X, we can associate the stopped process
X^T defined by X_t^T(w) = X_{t∧T}(w). By Exercise (4.16), the family of σ-fields
(ℱ_{t∧T}) is a filtration and we have the
(4.10) Proposition. If X is progressive, then X^T is progressive with respect to the
filtration (ℱ_{t∧T}).

Proof. Left to the reader as an exercise.
The following remark will be technically important.
(4.11) Proposition. Every stopping time is the decreasing limit of a sequence of
stopping times taking only finitely many values.

Proof. For a stopping time T one sets

    T_k = +∞        if  T ≥ k,
    T_k = q 2^{−k}  if  (q − 1) 2^{−k} ≤ T < q 2^{−k},  q ≤ k 2^k.

It is easily checked that T_k is a stopping time and that (T_k) decreases to T.    □
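The dyadic recipe in the proof can be sketched pointwise. The following illustration (our own) applies it to a single numerical value of T and checks that the approximations decrease to T from above; the measurability assertion, which is the substance of the proposition, is of course not visible at this level.

```python
import math

def dyadic_upper(T, k):
    """The T_k of Proposition (4.11): T_k = +inf if T >= k, and
    T_k = q 2^{-k} on {(q-1) 2^{-k} <= T < q 2^{-k}} for q <= k 2^k."""
    if T >= k:
        return math.inf
    q = math.floor(T * 2**k) + 1
    return q / 2**k

T = 0.7
approx = [dyadic_upper(T, k) for k in range(1, 12)]
assert all(a > T for a in approx)                       # approximation from above
assert all(a >= b for a, b in zip(approx, approx[1:]))  # non-increasing in k
assert approx[-1] - T < 2**-10                          # converges to T
```

Each T_k takes only finitely many values (the dyadic points q 2^{-k}, q ≤ k 2^k, together with +∞), which is what makes these approximations so convenient for reducing statements about general stopping times to the discrete case.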
The description we have given so far in this section does not involve any
probability measure. We must now see what becomes of the above notions when
(Ω, ℱ) is endowed with a probability measure or rather, as will be necessary in
the study of Markov processes, with a family of probability measures.
(4.12) Definition. If P_θ, θ ∈ Θ, is a family of probability measures on (Ω, ℱ), a
property is said to hold almost-surely if it holds P_θ-a.s. for every θ ∈ Θ.
With this notion, two processes are, for instance, indistinguishable if they are
indistinguishable for each P_θ. If we have a filtration (ℱ_t) on (Ω, ℱ) we will
want a process which is indistinguishable from an adapted process to be itself
adapted; in particular, we want a limit (almost-sure, in probability, in the
mean) of adapted processes to be an adapted process. Another way of putting this is
that a process which is indistinguishable from an adapted process can be turned
into an adapted process by altering it on a negligible set; this demands that the
negligible sets be in ℱ_t for all t, which leads to the following definition.
(4.13) Definition. If, for each θ, we call ℱ_∞^θ the completion of ℱ_∞ with respect
to P_θ, the filtration (ℱ_t) is said to be complete if ℱ₀, hence every ℱ_t, contains all
the negligible sets of ⋂_θ ℱ_∞^θ.

Of course, as follows from Definition (4.12), negligible means negligible for
every P_θ, θ ∈ Θ; in other words, Γ is negligible if for every θ there exists a set
A_θ in ℱ_∞ such that Γ ⊂ A_θ and P_θ[A_θ] = 0.
If (ℱ_t) is not complete, we can obtain a larger but complete filtration in the
following way. For each θ we call ℱ_t^θ the σ-field σ(ℱ_t ∪ 𝒩_θ) where 𝒩_θ is
the class of P_θ-negligible, ℱ_∞^θ-measurable sets; we then set 𝒢_t = ⋂_θ ℱ_t^θ. The
filtration (𝒢_{t+}) is complete and right-continuous. It is called the usual augmentation
of (ℱ_t).
Of course, if we use the usual augmentation of (ℱ_t) instead of (ℱ_t) itself,
we will have to check that a process with some sort of property relative to (ℱ_t)
retains this property relative to the usual augmentation. This is not always obvious;
the completion operation, for instance, is not an innocuous one: it can alter
significantly the structure of the filtration. Evidence of that will be given later
on; in fact, all the canonical processes with the same state spaces have the same
uncompleted natural filtrations and we will see that the properties of the completed
ones may be widely different.
We close this section with a general result which permits us to show that many
random variables are in fact stopping times. To this end, we will use a difficult
result from measure theory which we now recall.

(4.14) Theorem. If (E, ℰ) is an LCCB space endowed with its Borel σ-field and
(Ω, ℱ, P) is a complete probability space, then for every set A ∈ ℰ ⊗ ℱ, the
projection π(A) of A onto Ω belongs to ℱ.

If Γ is a subset of ℝ₊ × Ω, we define the debut D_Γ of Γ by

    D_Γ(w) = inf{t ≥ 0 : (t, w) ∈ Γ},

with the convention that inf(∅) = +∞.

(4.15) Theorem. If the filtration (ℱ_t) is right-continuous and complete, the debut
of a progressive set is a stopping time.

Proof. It is enough to reason when there is only one probability measure involved.
Let Γ be a progressive set. We apply Theorem (4.14) above to the set Γ_t =
Γ ∩ ([0, t[ × Ω) which belongs to ℬ([0, t]) ⊗ ℱ_t. As a result, {D_Γ < t} = π(Γ_t)
belongs to ℱ_t.    □
#
(4.16) Exercise. Let (ℱ_t) be a filtration and S, T be (ℱ_t)-stopping times.
1°) Prove that S ∧ T and S ∨ T are stopping times.
2°) Prove that the sets {S = T}, {S ≤ T}, {S < T} are in ℱ_S ∩ ℱ_T.
3°) If S ≤ T, prove that ℱ_S ⊂ ℱ_T.
#
(4.17) Exercise. 1°) If (T_n) is a sequence of (ℱ_t)-stopping times, then the r.v.
sup_n T_n is a stopping time.
2°) If moreover (ℱ_t) is right-continuous, then

    inf_n T_n,    lim sup_n T_n,    lim inf_n T_n

are stopping times. If T_n ↓ T, then ℱ_T = ⋂_n ℱ_{T_n}.
#
(4.18) Exercise. Let (ℱ_t) be a filtration. If T is a stopping time, we denote by
ℱ_{T−} the σ-algebra generated by the sets of ℱ₀ and the sets

    {T > t} ∩ Γ    where Γ ∈ ℱ_t.

1°) Prove that ℱ_{T−} ⊂ ℱ_T. The first jump time of a Poisson process (Exercise
(1.14) Chapter II) affords an example where the inclusion is strict.
2°) If S ≤ T, prove that ℱ_{S−} ⊂ ℱ_{T−}. If moreover S < T on {S < ∞} ∩ {T > 0},
prove that ℱ_S ⊂ ℱ_{T−}.
3°) Let (T_n) be a sequence of stopping times increasing to a stopping time T
and such that T_n < T for every n; prove that ⋁_n ℱ_{T_n} = ℱ_{T−}.
4°) If (T_n) is any increasing sequence of stopping times with limit T, prove
that ⋁_n ℱ_{T_n−} = ℱ_{T−}.
#
(4.19) Exercise. Let T be a stopping time and Γ ∈ ℱ. The random variable T_Γ
defined by T_Γ = T on Γ, T_Γ = +∞ on Γ^c, is a stopping time if and only if
Γ ∈ ℱ_T.
(4.20) Exercise. Let (ℱ_t) be a right-continuous filtration.
1°) Prove that the σ-fields generated on Ω × ℝ₊ by
i) the space of adapted continuous processes,
ii) the space of adapted processes which are left-continuous on ]0, ∞[,
are equal. (This is solved in Sect. 5 Chap. IV.) This σ-field is denoted 𝒫(ℱ_t) or
simply 𝒫 and is called the predictable σ-field (relative to (ℱ_t)). A process Z on
Ω is said to be predictable if the map (w, t) → Z_t(w) is measurable with respect
to 𝒫(ℱ_t). Prove that the predictable processes are progressively measurable.
2°) If S and T are two (ℱ_t)-stopping times and S ≤ T, set

    ]S, T] = {(w, t) : S(w) < t ≤ T(w)}.

Prove that 𝒫(ℱ_t) is generated by the family of sets ]S, T].
3°) If S is a positive r.v., we denote by ℱ_{S−} the σ-field generated by all the
variables Z_S where Z ranges through the predictable processes. Prove that, if S is
a stopping time, this σ-field coincides with the σ-field ℱ_{S−} of Exercise (4.18).
[Hint: For A ∈ ℱ_{S−}, consider the process Z_t(w) = 1_A(w) 1_{]0,S(w)]}(t).]
Prove that S is ℱ_{S−}-measurable.
4°) In the general situation, it is not true that S ≤ T entails ℱ_{S−} ⊂ ℱ_{T−}. Give
an example of a variable S ≤ 1 such that ℱ_{S−} = ℱ_{1−}.
*#
(4.21) Exercise (Galmarino's test). Let Ω = D(ℝ₊, ℝ^d) or C(ℝ₊, ℝ^d) and use
the notation at the beginning of the section.
1°) Prove that T is a (ℱ_t^o)-stopping time if and only if, for every t, the
properties T(w) ≤ t and X_s(w) = X_s(w′) for every s ≤ t imply T(w) = T(w′).
Prove that the time T_A of Proposition (4.6) is not a (ℱ_t^o)-stopping time.
2°) If T is a (ℱ_t^o)-stopping time, prove that A ∈ ℱ_T^o if and only if: w ∈ A,
T(w) = T(w′) and X_s(w) = X_s(w′) for every s ≤ T(w) imply w′ ∈ A.
3°) Let w_T be the point in Ω defined by w_T(s) = w(s ∧ T(w)). Prove that f
is ℱ_T^o-measurable if and only if f(w) = f(w_T) for every w.
4°) Using the fact that ℱ_T^o is the union of the σ-fields generated by the
countable sub-families of coordinate mappings, prove that ℱ_T^o = σ(X_{s∧T}, s ≥ 0).
5°) Deduce from 4°) that ℱ_T^o is countably generated.
48
Chapter I. Introduction
(4.22) Exercise. A positive Borel function ϕ on ℝ₊ is said to have the property
(P) if, for every stopping time T in any filtration (ℱ_t) whatsoever, ϕ(T) is a
(ℱ_t)-stopping time. Show that ϕ has the property (P) if and only if there is a
t₀ ≤ +∞ such that ϕ(t) ≥ t for t ≤ t₀ and ϕ(t) = t₀ for t > t₀.
Notes and Comments
Sect. 1 There are many rigorous constructions of Brownian motion, some of which
are found in the following sections and in the first chapter of the book of Knight
[5]. They are usually based on the use of an orthonormal basis of L²(ℝ₊) or on a
convergence in law, a typical example of which is the result of Donsker described
in Chap. XIII. The first construction, historically, was given by Wiener [1], which
is the reason why Brownian motion is also often called the Wiener process.

Föllmer [3] and Le Gall [8] give excellent pedagogical presentations of Brownian
motion. The approach we have adopted here, by means of Gaussian measures,
is a way of unifying these different constructions; we took it from the lecture course
of Neveu [2], but it goes back at least to Kakutani. Versions and modifications
have long been standard notions in Probability Theory (see Dellacherie-Meyer
[1]).
Exercise (1.17) is from Lévy, Exercise (1.18) from Durrett [1] and Exercise
(1.19) from Hardin [1]. The first study of polar functions for BM² appears in
Graversen [1]; this will be taken up more thoroughly in Chap. V. Exercises (1.21)
and (1.22) are from Barlow et al. [1], Song-Yor [1] and Yor [14].
Sect. 2 Our proof of Kolmogorov's criterion is borrowed from Meyer [8] (see
also Neveu's course [2]). Integral-type improvements are found in Ibragimov [1];
we also refer to Weber [1]. A very useful Sobolev-type refinement due to Garsia
et al. [1] is found in Stroock-Varadhan [1] (see also Dellacherie-Maisonneuve-Meyer
[1]); it has been used in manifold contexts, for instance in Barlow-Yor [2],
to prove the BDG inequalities in Chap. IV; see also Barlow ([3] and [5]) and
Donati-Martin [1].

The rest of this section is due to Lévy. The proof of Theorem (2.7) is borrowed
from Itô-McKean [1] which contains additional information, namely the
"Chung-Erdős-Sirao" test.

Exercise (2.8) is due to Lévy and Exercise (2.9) to Dvoretzky et al. [1].
Sect. 3 The material covered in this section is now the common lore of probabilists
(see for instance Dellacherie-Meyer [1] Vol. I). For Gaussian processes, we refer
to Neveu [1]. A direct proof of the existence of the Wiener measure is given in
Itô [4].

Exercise (3.9) deals with fractional Brownian motion; a number of references
about this family of Gaussian processes, together with original results, are found
in Kahane [1], Chap. 18. The origin of question 2°) in this exercise is found
in Albeverio et al. [1], p. 213-216, Stoll [1] and Yor [20]. Fractional Brownian
motions were introduced originally in Mandelbrot and Van Ness [1]; they include
Lévy's Brownian motions with several parameters, and arise naturally in limit
theorems for intersection local times (Weinryb-Yor [1], Biane [3]).

Exercise (3.11) is due to Aronszajn [1] (see Neveu [1]).
Sect. 4 Filtrations and their associated notions, such as stopping times, have, since
the fundamental work of Doob, been a basic feature of Probability Theory. Here
too, we refer to Dellacherie-Meyer [1] for the history of the subject as well as for
many properties which we have turned into Exercises.
Chapter II. Martingales
Martingales are a very important subject in their own right as well as by their
relationship with analysis. Their kinship to BM will make them one of our main
subjects of interest as well as one of our foremost tools. In this chapter, we describe
some of their basic properties which we shall use throughout the book.
§1. Definitions, Maximal Inequalities and Applications
In what follows, we always have a probability space (Ω, ℱ, P), an interval T of
ℕ or ℝ₊ and an increasing family ℱ_t, t ∈ T, of sub-σ-algebras of ℱ. We shall
call it a filtration as in the case of ℝ₊ introduced in Sect. 4 Chap. I, the results of
which apply as well to this case.

(1.1) Definition. A real-valued process X_t, t ∈ T, adapted to (ℱ_t), is a submartingale
(with respect to (ℱ_t)) if
i) E[X_t^+] < ∞ for every t ∈ T;
ii) E[X_t | ℱ_s] ≥ X_s a.s. for every pair s, t such that s < t.

A process X such that −X is a submartingale is called a supermartingale and
a process which is both a sub- and a supermartingale is a martingale.
In other words, a martingale is an adapted family of integrable random variables
such that

    ∫_A X_s dP = ∫_A X_t dP

for every pair s, t with s < t and A ∈ ℱ_s.

A sub(super)martingale such that all the variables X_t are integrable is called
an integrable sub(super)martingale.
Of course, the filtration and the probability measure P are very important in this
definition. When we want to stress this fact, we will speak of (ℱ_t)-submartingales,
(ℱ_t, P)-supermartingales, and so on. A (ℱ_t)-martingale X is a martingale with
respect to its natural filtration σ(X_s, s ≤ t). Conversely, if 𝒢_t ⊃ ℱ_t, there is no
reason why a (ℱ_t)-martingale should be a (𝒢_t)-martingale. Obviously, the set of
martingales with respect to a given filtration is a vector space.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
(1.2) Proposition. Let B be a standard linear BM; then the following processes
are martingales with respect to σ(B_s, s ≤ t):

    i) B_t itself,    ii) B_t² − t,    iii) M_t^a = exp(aB_t − a²t/2) for a ∈ ℝ.

Proof. Left to the reader as an exercise. This proposition is generalized in Exercise
(1.18).    □
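A quick Monte Carlo sanity check of these three examples (ours, not the book's): we verify the necessary condition that each process has constant expectation, E[B_t] = 0, E[B_t² − t] = 0 and E[M_t^a] = 1. Checking the full conditional-expectation property would require conditioning on the past and is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(6)
t, paths = 1.0, 1_000_000
B_t = np.sqrt(t) * rng.standard_normal(paths)   # B_t ~ N(0, t)

a = 1.5
vals = np.exp(a * B_t - a**2 * t / 2)           # M_t^a sampled at time t

assert abs(B_t.mean()) < 0.01                   # E[B_t] = 0
assert abs((B_t**2 - t).mean()) < 0.01          # E[B_t^2 - t] = 0
assert abs(vals.mean() - 1.0) < 0.02            # E[M_t^a] = 1
```

The tolerances are loose multiples of the Monte Carlo standard error; note that the variance of M_t^a grows like e^{a²t}, so the last check degrades quickly for large a.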
These properties will be considerably generalized in Chap. IV. We notice that
the martingales in this proposition have continuous paths. The Poisson process of
Exercise (1.14) affords an example of a martingale with cadlag paths. Finally, if
(ℱ_t) is a given filtration and Y is an integrable random variable, we can define
a martingale Y_t by choosing for each t one random variable in the equivalence
class of E[Y | ℱ_t]. Of course, there is no reason why the paths of this martingale
should have any good properties and one of our tasks will precisely be to prove
the existence of a good version.

Another important remark is that if X_t is a martingale then, because of Jensen's
inequality, |X_t|^p is a submartingale for p ≥ 1 provided E[|X_t|^p] < ∞ for
every t.
We now turn to the systematic study of martingales and of submartingales.
Plainly, by changing X into -X any statement about submartingales may be
changed into a statement about supermartingales.
(1.3) Proposition. Let (X_n), n = 0, 1, ..., be a (sub)martingale with respect to a
discrete filtration (ℱ_n) and H_n, n = 1, 2, ..., a positive bounded process such that
H_n ∈ ℱ_{n−1} for n ≥ 1; the process Y defined by

    Y_n = X₀ + Σ_{k=1}^n H_k (X_k − X_{k−1})

is a (sub)martingale. In particular, if T is a stopping time, the stopped process X^T
is a (sub)martingale.

Proof. The first sentence is straightforward. The process Y thus defined is the
discrete version of the stochastic integral we will define in Chap. IV. It will be
denoted H · X.

The second sentence follows from the first since H_n = 1_{{n≤T}} is ℱ_{n−1}-measurable,
being equal to 1 − 1_{{T≤n−1}}.    □
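The discrete integral H · X is easy to exercise numerically. In the following sketch (our own construction) X is a simple symmetric random walk and H a bounded predictable "0/1 bet" that looks only at the past; the sample means of Y_n stay near E[X₀] = 0, as the proposition predicts.

```python
import numpy as np

rng = np.random.default_rng(2)
paths, N = 100_000, 20

# X: a simple symmetric random walk, a martingale with X_0 = 0.
steps = rng.choice([-1.0, 1.0], size=(paths, N))
X = np.concatenate([np.zeros((paths, 1)), np.cumsum(steps, axis=1)], axis=1)

# A predictable, bounded strategy: H_n = 1 if X_{n-1} <= 0, else 0.
# It depends only on the walk up to time n-1, so H_n is F_{n-1}-measurable.
H = (X[:, :-1] <= 0).astype(float)

# (H . X)_n = X_0 + sum_{k <= n} H_k (X_k - X_{k-1})
Y = X[:, 0:1] + np.cumsum(H * np.diff(X, axis=1), axis=1)

# Y is again a martingale, so E[Y_n] = E[X_0] = 0 for every n.
assert np.all(np.abs(Y.mean(axis=0)) < 0.07)
```

Replacing H by a non-predictable strategy (e.g. H_n = 1 if the *next* step is up) visibly breaks the constancy of the mean, which is a good way to see why the measurability condition H_n ∈ ℱ_{n−1} matters.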
We use this proposition to obtain a first version of the optional stopping theorem
which we will prove in Sect. 3. The setting is the same as in Proposition (1.3).

(1.4) Proposition. If S and T are two bounded stopping times and S ≤ T, i.e.
there is a constant M such that for every w,

    S(w) ≤ T(w) ≤ M < ∞,

then
    X_S ≤ E[X_T | ℱ_S]    a.s.,

with equality in case X is a martingale. Moreover, an adapted and integrable
process X is a martingale if and only if

    E[X_S] = E[X_T]

for any such pair of stopping times.

Proof. Suppose first that X is a martingale. If H_n = 1_{{n≤T}} − 1_{{n≤S}}, then for
n > M we have

    (H · X)_n − X₀ = X_T − X_S,

but since E[(H · X)_n] = E[X₀], as is easily seen, we get E[X_S] = E[X_T].
If we apply this equality to the stopping times S_B = S 1_B + M 1_{B^c} and T_B =
T 1_B + M 1_{B^c}, where B ∈ ℱ_S (see Exercise (4.19) Chap. I), we get

    E[X_T 1_B + X_M 1_{B^c}] = E[X_S 1_B + X_M 1_{B^c}],

whence it follows that X_S = E[X_T | ℱ_S] a.s. In particular, the equality E[X_S] =
E[X_T] for every pair of bounded stopping times is sufficient to ensure that X is a
martingale.

If X is a submartingale, max(X, a) is an integrable submartingale to which we
can apply the above reasoning, getting inequalities instead of equalities. Letting a
tend to −∞, we get the desired result.    □
We derive therefrom the following maximal inequalities.
(1.5) Proposition. If (X_n) is an integrable submartingale indexed by the finite set
(0, 1, ..., N), then for every λ > 0,

    λ P[sup_n X_n ≥ λ] ≤ E[X_N 1_{{sup_n X_n ≥ λ}}] ≤ E[|X_N| 1_{{sup_n X_n ≥ λ}}].

Proof. Let T = inf{n : X_n ≥ λ} if this set is non-empty, T = N otherwise. This
is a stopping time and so, by the previous result,

    E[X_N] ≥ E[X_T] = E[X_T 1_{{sup_n X_n ≥ λ}}] + E[X_T 1_{{sup_n X_n < λ}}]
           ≥ λ P[sup_n X_n ≥ λ] + E[X_N 1_{{sup_n X_n < λ}}]

because X_T ≥ λ on {sup_n X_n ≥ λ}. By subtracting E[X_N 1_{{sup_n X_n < λ}}] from the
two extreme terms, we get the first inequality, while the second one is obvious.    □
(1.6) Corollary. If X is a martingale or a positive submartingale indexed by the
finite set (0, 1, ..., N), then for every p ≥ 1 and λ > 0,

    λ^p P[sup_n |X_n| ≥ λ] ≤ E[|X_N|^p],

and for any p > 1,

    E[|X_N|^p] ≤ E[sup_n |X_n|^p] ≤ (p/(p−1))^p E[|X_N|^p].
Proof. By Jensen's inequality, if X_N is in L^p, the process |X_n|^p is a submartingale
and the first part of the corollary follows from the preceding result.

To prove the second part, we observe that the left-hand side inequality is trivial;
to prove the right-hand side inequality we set X* = sup_n |X_n|. From the inequality
λ P[X* ≥ λ] ≤ E[|X_N| 1_{{X* ≥ λ}}], it follows that, for a fixed k > 0,

    E[(X* ∧ k)^p] = E[∫₀^{X*∧k} pλ^{p−1} dλ] = E[∫₀^k pλ^{p−1} 1_{{X* ≥ λ}} dλ]
                  = ∫₀^k pλ^{p−1} P[X* ≥ λ] dλ ≤ ∫₀^k pλ^{p−2} E[|X_N| 1_{{X* ≥ λ}}] dλ
                  = p E[|X_N| ∫₀^{X*∧k} λ^{p−2} dλ] = (p/(p−1)) E[|X_N| (X* ∧ k)^{p−1}].

Hölder's inequality then yields

    E[(X* ∧ k)^p] ≤ (p/(p−1)) E[(X* ∧ k)^p]^{(p−1)/p} E[|X_N|^p]^{1/p},

and after cancellation

    E[(X* ∧ k)^p] ≤ (p/(p−1))^p E[|X_N|^p].

The proof is completed by making k tend to infinity.    □
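Both inequalities of the corollary are easy to sanity-check by simulation. The following sketch (ours; thresholds and sample sizes are arbitrary choices) uses Gaussian partial sums as the martingale and p = 2.

```python
import numpy as np

rng = np.random.default_rng(3)
paths, N = 100_000, 50
X = np.cumsum(rng.standard_normal((paths, N)), axis=1)   # a martingale
X_star = np.abs(X).max(axis=1)                           # X* = sup_n |X_n|

p, lam = 2.0, 10.0
end_moment = (np.abs(X[:, -1])**p).mean()                # E[|X_N|^p] ~ 50

# Maximal inequality: lam^p P[X* >= lam] <= E[|X_N|^p].
assert lam**p * (X_star >= lam).mean() <= end_moment

# Doob's L^p inequality: E[(X*)^p] <= (p/(p-1))^p E[|X_N|^p].
assert (X_star**p).mean() <= (p / (p - 1))**p * end_moment
```

Empirically the two sides are far from tight here; the constant (p/(p−1))^p = 4 for p = 2 is sharp only for extremal (sub)martingales.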
These results carry over to general index sets. If X is a martingale indexed by
an interval T of ℝ, we can look at its restriction to a countable subset D of T.
We can then choose an increasing sequence D_n of finite subsets of D such that
⋃ D_n = D and apply the above results to D_n. Since E[|X_t|^p] increases with t,
we get, by passing to the limit in n, that

    λ^p P[sup_{t∈D} |X_t| ≥ λ] ≤ sup_t E[|X_t|^p]

and for p > 1

    E[sup_{t∈D} |X_t|^p] ≤ (p/(p−1))^p sup_t E[|X_t|^p].

We insist that this is true without any hypothesis on the filtration (ℱ_t), which may
be neither complete nor right-continuous.
(1.7) Theorem (Doob's L^p-inequality). If X is a right-continuous martingale or
positive submartingale indexed by an interval T of ℝ then, if X* = sup_t |X_t|, for
p ≥ 1,

    λ^p P[X* ≥ λ] ≤ sup_t E[|X_t|^p]

and for p > 1,

    ‖X*‖_p ≤ (p/(p−1)) sup_t ‖X_t‖_p.
Proof. If D is a countable dense subset of T, because of the right-continuity
X* = sup_{t∈D} |X_t| and the results follow from the above remarks.    □

If t_d is the point on the right of T, we notice that sup_t ‖X_t‖_p is equal to ‖X_{t_d}‖_p
if t_d ∈ T and to lim_{t↑t_d} ‖X_t‖_p if T is open on the right.
For p = 2, the second inequality reads

    ‖X*‖₂ ≤ 2 sup_t ‖X_t‖₂

and is known as Doob's L²-inequality. Since obviously ‖X_t‖₂ ≤ ‖X*‖₂ for every
t, we see that X* is in L² if and only if sup_t ‖X_t‖₂ < +∞, in other words if
the martingale is bounded in L². We see that in this case the martingale, i.e. the
family of variables {X_t, t ∈ T}, is uniformly integrable. These remarks are valid
for each p > 1. They are not for p = 1: a martingale can be bounded in
L¹ without being uniformly integrable and a fortiori without X* being integrable.
This has been the subject of many studies. Let us just mention here that

    E[X*] ≤ (e/(e−1)) (1 + sup_t E[X_t log⁺ X_t])

(see Exercise (1.16)).
In the following section, we will apply these inequalities to establish the convergence
theorems for martingales. We close this section with some important
applications to Brownian motion. The first is known as the exponential inequality.
It will be considerably generalized in Exercise (3.16) of Chap. IV. We recall that
S_t = sup_{s≤t} B_s.
(1.8) Proposition. For a > 0,

    P[S_t ≥ at] ≤ exp(−a²t/2).

Proof. For α > 0, we use the maximal inequality for the martingale M^α of
Proposition (1.2) restricted to [0, t]. Since exp(αS_t − α²t/2) ≤ sup_{s≤t} M_s^α, we get

    P[S_t ≥ at] ≤ P[sup_{s≤t} M_s^α ≥ exp(αat − α²t/2)]
               ≤ exp(−αat + α²t/2) E[M_t^α];

but E[M_t^α] = E[M₀^α] = 1 and inf_{α>0}(−αat + α²t/2) = −a²t/2, whence the
result follows.    □

Remark. This inequality is also a consequence of the equality in law S_t =(law) |B_t|,
valid for fixed t, which may be derived from the strong Markov property of BM
(see Exercise (3.27) in Chap. III).
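A Monte Carlo check of the exponential inequality (our own illustration; the grid maximum slightly underestimates S_t, which only reinforces the inequality):

```python
import numpy as np

rng = np.random.default_rng(4)
t, n, paths = 1.0, 200, 20_000
dt = t / n
B = np.cumsum(np.sqrt(dt) * rng.standard_normal((paths, n)), axis=1)
S_t = np.maximum(B.max(axis=1), 0.0)     # sup over the grid; S_t >= B_0 = 0

# P[S_t >= a t] <= exp(-a^2 t / 2) for several values of a.
for a in (1.0, 2.0, 3.0):
    freq = (S_t >= a * t).mean()
    assert freq <= np.exp(-a**2 * t / 2)
```

For a = 1 the empirical frequency is roughly 0.3 against a bound of about 0.61, consistent with the exact value P[S_t ≥ at] = 2P[B_t ≥ at] given by the reflection principle mentioned in the Remark.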
(1.9) Theorem (Law of the iterated logarithm). For the linear standard Brownian
motion B,

    P[lim sup_{t↓0} B_t / (2t log₂(1/t))^{1/2} = 1] = 1

where log₂ x = log(log x) for x > 1.

Proof. Let h(t) = (2t log₂(1/t))^{1/2} and pick two numbers θ and δ in ]0, 1[. We set

    α_n = (1 + δ) θ^{−n} h(θ^n),    β_n = h(θ^n)/2.

By the same reasoning as in the preceding proof,

    P[sup_{s≤1} (B_s − α_n s/2) ≥ β_n] ≤ e^{−α_n β_n} = K n^{−1−δ}

for some constant K. Thus, by the Borel-Cantelli lemma, we have

    P[lim inf_n {sup_{s≤1} (B_s − α_n s/2) < β_n}] = 1.

If we restrict s to [0, θ^{n−1}] we find that, a fortiori, for almost every w, there is an
integer n₀(w) such that for n > n₀(w) and s ∈ [0, θ^{n−1}],

    B_s(w) ≤ α_n s/2 + β_n ≤ α_n θ^{n−1}/2 + β_n = [(1 + δ)/(2θ) + 1/2] h(θ^n).

But the function h is increasing on an interval [0, a[, with a > 0, as can easily
be checked. Therefore, for n sufficiently large and s in the interval ]θ^n, θ^{n−1}], we
have, for these w's,

    B_s(w) ≤ [(1 + δ)/(2θ) + 1/2] h(s).

As a result, lim sup_{s↓0} B_s/h(s) ≤ (1 + δ)/(2θ) + 1/2 a.s. Letting θ tend to 1 and then
δ tend to zero, we get that lim sup_{s↓0} B_s/h(s) ≤ 1 a.s.
We now prove the reverse inequality. For θ ∈ ]0, 1[, the events

    A_n = {B_{θ^n} − B_{θ^{n+1}} ≥ (1 − √θ) h(θ^n)}

are independent; moreover (see the proof of Theorem (2.7) Chap. I)

    √(2π) P[A_n] = ∫_a^∞ e^{−u²/2} du > (a/(1 + a²)) e^{−a²/2}

with a = (1 − √θ)(2 log₂ θ^{−n}/(1 − θ))^{1/2}. This is of the order of
n^{−(1−2√θ+θ)/(1−θ)} = n^{−α} with α < 1. As a result, Σ₁^∞ P[A_n] = +∞ and by the
Borel-Cantelli lemma,

    B_{θ^n} ≥ (1 − √θ) h(θ^n) + B_{θ^{n+1}}    infinitely often a.s.
Since −B is also a Brownian motion, we know from the first part of the proof
that −B_{θ^{n+1}}(w) < 2h(θ^{n+1}) from some integer n₀(w) on. Putting the last two
inequalities together yields, since h(θ^{n+1}) ≤ 2√θ h(θ^n) from some n on, that

    B_{θ^n} ≥ (1 − √θ) h(θ^n) − 2h(θ^{n+1}) ≥ h(θ^n)(1 − √θ − 4√θ)    infinitely often,

and consequently

    lim sup_{t↓0} B_t/h(t) ≥ 1 − 5√θ    a.s.

It remains to let θ tend to zero to get the desired inequality.    □
Using the various invariance properties of Brownian motion proved at the end
of Sect. 1 in Chap. I, we get some useful corollaries of the law of the iterated
logarithm.

(1.10) Corollary. P[lim inf_{t↓0} B_t / √(2t log₂(1/t)) = −1] = 1.

Proof. This follows from the fact that −B is a BM.    □

Since the intersection of two sets of probability 1 is a set of probability 1, we
actually have

    P[lim sup_{t↓0} B_t/h(t) = 1  and  lim inf_{t↓0} B_t/h(t) = −1] = 1,

which may help to visualize the behavior of B when it leaves zero. We see in
particular that 0 is a.s. an accumulation point of zeros of the Brownian motion; in
other words, B takes a.s. infinitely many times the value 0 in any small interval
[0, a[.
By translation, the same behavior holds at every fixed time. The reader will
compare with the Hölder properties of Sect. 2 in Chap. I.
(1.11) Corollary. For any fixed s,

    P[lim sup_{t↓0} (B_{t+s} − B_s)/√(2t log₂(1/t)) = 1  and
      lim inf_{t↓0} (B_{t+s} − B_s)/√(2t log₂(1/t)) = −1] = 1.

Proof. (B_{t+s} − B_s, t ≥ 0) is also a BM.    □
Finally, using time inversion, we get

(1.12) Corollary.

    P[lim sup_{t↑∞} B_t/√(2t log₂ t) = 1  and  lim inf_{t↑∞} B_t/√(2t log₂ t) = −1] = 1.

Remark. This corollary entails the recurrence property of BM which was proved
in Exercise (1.13) of Chap. I, namely, for every x ∈ ℝ the set {t : B_t = x} is a.s.
unbounded.
#
(1.13) Exercise. If X is a continuous process vanishing at 0, such that, for every
real a, the process M_t^a = exp{aX_t − a²t/2} is a martingale with respect to the
filtration (ℱ_t), prove that X is a (ℱ_t)-Brownian motion (see the Definition (2.20)
in Chap. III).
[Hint: Use the following two facts:
i) a r.v. X is 𝒩(0, 1) if and only if E[e^{λX}] = e^{λ²/2} for every real λ,
ii) if X is a r.v. and ℬ is a sub-σ-algebra such that

    E[e^{λX} | ℬ] = E[e^{λX}] < +∞

for λ in a neighborhood of 0, then X and ℬ are independent.]
#
(1.14) Exercise (The Poisson process). Let $(X_n)$ be a sequence of independent exponential r.v.'s of parameter $c$. Set $S_n = \sum_1^n X_k$ and for $t \ge 0$, $N_t = \sum_1^\infty 1_{[S_n \le t]}$.
1°) Prove that the increments of $N_t$ are independent and have Poisson laws.
2°) Prove that $N_t - ct$ is a martingale with respect to $\sigma(N_s, s \le t)$.
3°) Prove that $(N_t - ct)^2 - ct$ is a martingale.
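As an illustration (a sketch, not part of the text; the parameter values are arbitrary), the construction in this exercise is easy to simulate, and one can check numerically that $E[N_t] = ct$ and $\mathrm{Var}(N_t) = ct$, consistently with 2°) and 3°):

```python
import numpy as np

# Build N_t = sum_n 1_{[S_n <= t]} from independent exponential r.v.'s of
# parameter c, then compare the empirical mean and variance of N_t with ct.
rng = np.random.default_rng(0)
c, t, n_paths = 2.0, 3.0, 20000
# 60 jump times suffice: P[S_60 <= 3] is astronomically small here
gaps = rng.exponential(scale=1.0 / c, size=(n_paths, 60))
S = np.cumsum(gaps, axis=1)            # S_n = X_1 + ... + X_n
N_t = (S <= t).sum(axis=1)             # N_t counts the S_n falling in [0, t]

print(N_t.mean(), N_t.var())           # both close to ct = 6
```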
(1.15) Exercise (Maximal inequality for positive supermartingales). If $X$ is a right-continuous positive supermartingale, prove that
$$P\left[\sup_t X_t > \lambda\right] \le \lambda^{-1} E[X_0].$$
(1.16) Exercise (The class $L \log L$). 1°) In the situation of Corollary (1.6), if $\phi$ is a function on $\mathbb{R}_+$, increasing, right-continuous and vanishing at 0, prove that
$$E[\phi(X^*)] \le E\left[|X_N| \int_0^{X^*} \lambda^{-1}\, d\phi(\lambda)\right].$$
2°) Applying 1°) with $\phi(\lambda) = (\lambda - 1)^+$, prove that there is a constant $C$ such that
$$E[X^*] \le C\left(1 + \sup_n E\left[|X_n| \log^+(|X_n|)\right]\right).$$
[Hint: For $a, b > 0$, we have $a \log b \le a \log^+ a + e^{-1} b$.]
The class of martingales for which the right-hand side is finite is called the class $L \log L$. With the notation of the following exercise, we have $L \log L \subset H^1$.
§ 1. Definitions, Maximal Inequalities and Applications
59
(1.17) Exercise (The Space $H^p$). The space of continuous martingales indexed by $\mathbb{R}_+$, such that $X^* = \sup_t |X_t|$ is in $L^p$, $p \ge 1$, is a Banach space for the norm $\|X\|_{H^p} = \|X^*\|_p$.
[Hint: See Proposition (1.22) in Chap. IV.]
Remark. In this book, we focus on continuous processes, which is the reason for limiting ourselves to continuous martingales. In fact, the same result holds for the space $\mathbb{H}^p$ of cadlag martingales such that $X^*$ is in $L^p$; the space $H^p$ above is a closed subspace of $\mathbb{H}^p$.
#
(1.18) Exercise. Retain the notation of Exercise (1.14) of Chap. I and prove that
$$\int_0^t f(s)\,dB_s, \qquad \left(\int_0^t f(s)\,dB_s\right)^2 - \int_0^t f(s)^2\,ds, \qquad \exp\left(\int_0^t f(s)\,dB_s - \frac{1}{2}\int_0^t f(s)^2\,ds\right)$$
are continuous martingales. This will be considerably generalized in Chap. IV.
(1.19) Exercise. Let $X$ and $Y$ be two positive supermartingales with respect to the same filtration $(\mathscr{F}_n)$ and $T$ a stopping time such that $X_T \ge Y_T$ on $\{T < \infty\}$. Prove that the process $Z$ defined by
$$Z_n(\omega) = X_n(\omega) \ \text{ if } n < T(\omega), \qquad Z_n(\omega) = Y_n(\omega) \ \text{ if } n \ge T(\omega),$$
is a supermartingale.
**
(1.20) Exercise. 1°) For any measure $\mu$ on $\mathbb{R}_+$, prove that
$$P\left[\overline{\lim}_{h \downarrow 0}\, |B_{s+h} - B_s|/\sqrt{2h \log_2(1/h)} = 1 \ \text{ for } \mu\text{-a.e. } s\right] = 1.$$
The following questions deal with the exceptional set.
2°) Using Lévy's modulus of continuity theorem (Theorem (2.7) of Chap. I) prove that for a.e. $\omega$ and any pair $a < b$, one can find $s_1$ and $t_1$ such that $a < s_1 < t_1 < b$ and
$$|B_{t_1} - B_{s_1}| > \tfrac{1}{2}\, d(t_1 - s_1)$$
where $d(h) = \sqrt{2h \log(1/h)}$.
3°) Having chosen $s_1, \ldots, s_n, t_1, \ldots, t_n$ such that $|B_{t_i} - B_{s_i}| > \frac{i}{i+1}\, d(t_i - s_i)$, choose $s_n' \in\ ]s_n, t_n[$, $s_n' \le s_n + 2^{-n}$ and $|B_{t_n} - B_s| > \frac{n}{n+1}\, d(t_n - s)$ for every $s \in\ ]s_n, s_n']$. Then, choose $s_{n+1}, t_{n+1}$ in $]s_n, s_n'[$ and so on and so forth. Let $\{s_0\} = \bigcap_n [s_n, t_n]$; prove that
$$\overline{\lim}_{h \downarrow 0}\, |B_{s_0+h} - B_{s_0}|/d(h) \ge 1.$$
4°) Derive from 3°) that for a.e. $\omega$ there is a set of times $t$ dense in $\mathbb{R}_+$ and such that
$$\overline{\lim}_{h \downarrow 0}\, |B_{t+h} - B_t|/\sqrt{2h \log_2(1/h)} = +\infty.$$
5°) Prove that the above set of times is even uncountable.
[Hint: Remove from $]s_n, s_n'[$ the middle-third part in Cantor set-like fashion and choose two intervals $]s_{n+1}, t_{n+1}[$ in each of the remaining parts.]
* (1.21) Exercise. If $B$ is the BM$^d$, prove that
$$P\left[\overline{\lim}_{t \downarrow 0}\, \frac{|B_t|}{\sqrt{2t \log_2(1/t)}} = 1\right] = 1.$$
[Hint: Pick a countable subset $(e_n)$ of the unit sphere in $\mathbb{R}^d$ such that $|x| = \sup_n |(x, e_n)|$.]
Using the invariance properties, state and prove other laws of the iterated logarithm for BM$^d$.
(1.22) Exercise. If $\mathbb{B}$ is the Brownian sheet, for fixed $s$ and $t$,
$$\overline{\lim}_{h \downarrow 0}\, \left(\mathbb{B}_{(s+h,t)} - \mathbb{B}_{(s,t)}\right)/\sqrt{2h \log_2(1/h)} = \sqrt{t} \quad \text{a.s.}$$
(1.23) Exercise. Prove that if $B$ is the standard BM$^d$,
$$P\left[\sup_{s \le t} |B_s| \ge \delta\right] \le 2d \exp\left(-\delta^2/2dt\right).$$
[Hint: Use Proposition (1.8) for $(e, B_t)$ where $e$ is a unit vector.]
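A quick Monte Carlo sanity check (a sketch, not from the text; dimension, level and grid are arbitrary choices) compares the empirical probability for a discretized planar BM with the bound of this exercise; discretization can only lower the empirical value:

```python
import numpy as np

# Compare P[sup_{s<=t} |B_s| >= delta] (empirical, on a grid) with the
# bound 2d * exp(-delta^2 / (2*d*t)) from the exercise above.
rng = np.random.default_rng(6)
d, t, delta, n_steps, n_paths = 2, 1.0, 2.5, 500, 4000
incr = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps, d))
B = np.cumsum(incr, axis=1)                      # paths of BM^2 on the grid
sup_norm = np.linalg.norm(B, axis=2).max(axis=1) # sup_s |B_s| along each path
empirical = (sup_norm >= delta).mean()
bound = 2 * d * np.exp(-delta**2 / (2 * d * t))
print(empirical, bound)   # the empirical probability stays below the bound
```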
§2. Convergence and Regularization Theorems
Let us first recall some facts about real-valued functions. Let $f$ be a function which maps a subset $T$ of $\mathbb{R}$ into $\overline{\mathbb{R}}$. Let $t_1 < t_2 < \cdots < t_d$ be a finite subset $F$ of $T$. For two real numbers $a, b$ with $a < b$, we define inductively
$$S_1 = \inf\{t_i : f(t_i) > b\}, \qquad S_2 = \inf\{t_i > S_1 : f(t_i) < a\},$$
$$S_{2n+1} = \inf\{t_i > S_{2n} : f(t_i) > b\}, \qquad S_{2n+2} = \inf\{t_i > S_{2n+1} : f(t_i) < a\},$$
where we put $\inf(\emptyset) = t_d$. We set
$$D(f, F, [a, b]) = \sup\{n : S_{2n} < t_d\},$$
and we define the number of downcrossings of $[a, b]$ by $f$ as the number
$$D(f, T, [a, b]) = \sup\{D(f, F, [a, b]) : F \text{ finite},\ F \subset T\}.$$
One could define similarly the number $U(f, T, [a, b])$ of upcrossings. The function $f$ has no discontinuity of the second kind, in particular $f$ has a limit at the boundaries of $T$ whenever $T$ is an open interval, if and only if $D(f, T, [a, b])$ (or $U(f, T, [a, b])$) is finite for every pair $[a, b]$ of rational numbers.
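For concreteness, the inductive definition can be transcribed directly; the sketch below (not part of the text) counts $D(f, F, [a, b])$ from the values $f(t_1), \ldots, f(t_d)$ along a finite set $F$. By the convention $\inf(\emptyset) = t_d$, a passage below $a$ occurring only at the final point does not complete a downcrossing:

```python
def downcrossings(values, a, b):
    # D(f, F, [a, b]): completed downcrossings from above b to below a.
    # A passage below a at the very last point is excluded, since the
    # definition requires S_{2n} < t_d.
    count, waiting_low = 0, False
    for i, v in enumerate(values):
        if not waiting_low and v > b:
            waiting_low = True            # an S_{2n+1} has occurred
        elif waiting_low and v < a and i < len(values) - 1:
            waiting_low = False           # an S_{2n+2} has occurred
            count += 1
    return count

# This path rises above 3 and falls below 1 twice before the last point.
path = [0, 4, 2, 0.5, 5, 3.5, 0, 2]
print(downcrossings(path, 1, 3))          # -> 2
```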
We now consider the case where $f$ is the path of a submartingale $X$; if $T$ is countable, $D(X, T, [a, b])$ is clearly a random variable and we have the
(2.1) Proposition. If $X$ is a submartingale and $T$ is countable, then for any pair $(a, b)$,
$$(b - a)\, E[D(X, T, [a, b])] \le \sup_{t \in T} E\left[(X_t - b)^+\right].$$
Proof. It is enough to prove the inequality when $T$ is finite and we then use the notation above. The $S_k$'s defined above are now stopping times with respect to the discrete filtration $(\mathscr{F}_{t_i})$. We are in the situation of Proposition (1.4) which we can apply to the stopping time $S_k$. Set $A_k = \{S_k < t_d\}$; then $A_k \in \mathscr{F}_{S_k}$ and $A_k \supset A_{k+1}$. On $A_{2n-1}$, we have $X_{S_{2n-1}} > b$, on $A_{2n}$ we have $X_{S_{2n}} < a$ and therefore
$$0 \le \int_{A_{2n-1}} \left(X_{S_{2n-1}} - b\right) dP \le \int_{A_{2n-1}} \left(X_{S_{2n}} - b\right) dP \le (a - b)\, P(A_{2n}) + \int_{A_{2n-1} \setminus A_{2n}} \left(X_{S_{2n}} - b\right) dP.$$
Consequently, since $S_{2n} = t_d$ on $A_{2n}^c$,
$$(b - a)\, P(A_{2n}) \le \int_{A_{2n-1} \setminus A_{2n}} \left(X_{t_d} - b\right)^+ dP.$$
But $P(A_{2n}) = P[D(X, T, [a, b]) \ge n]$ and the sets $A_{2n-1} \setminus A_{2n}$ are pairwise disjoint so that by summing up the above inequalities, we get
$$(b - a)\, E[D(X, T, [a, b])] \le E\left[(X_{t_d} - b)^+\right],$$
which is the desired result. $\square$
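The inequality can be observed by simulation; the sketch below (not from the text; walk length, interval and sample size are arbitrary) uses the submartingale $X_n = |S_n|$ built from a simple random walk $S_n$:

```python
import numpy as np

# Monte Carlo check of (b - a) E[D(X, T, [a, b])] <= sup_n E[(X_n - b)^+]
# for the discrete submartingale X_n = |S_n|, S_n a simple random walk.
rng = np.random.default_rng(1)
n_paths, n_steps, a, b = 5000, 40, 1.0, 3.0
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
X = np.abs(np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1))

def downcrossings(values, a, b):
    # completed downcrossings of [a, b], per the definition above
    count, waiting_low = 0, False
    for i, v in enumerate(values):
        if not waiting_low and v > b:
            waiting_low = True
        elif waiting_low and v < a and i < len(values) - 1:
            waiting_low, count = False, count + 1
    return count

lhs = (b - a) * np.mean([downcrossings(row, a, b) for row in X])
rhs = np.maximum(X - b, 0).mean(axis=0).max()   # sup_n E[(X_n - b)^+]
print(lhs, rhs)   # lhs should not exceed rhs
```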
We now apply this to the convergence theorem for discrete submartingales.
(2.2) Theorem. If $(X_n)$, $n \in \mathbb{N}$, is a submartingale such that
$$\sup_n E\left[X_n^+\right] < +\infty,$$
then $(X_n)$ converges almost-surely to a limit which is $< +\infty$ a.s.
Proof. Fatou's lemma ensures that $\underline{\lim}\, X_n < +\infty$ a.s. So if our claim were false, there would be two real numbers $a$ and $b$ such that $\underline{\lim}\, X_n < a < b < \overline{\lim}\, X_n$ with positive probability; thus, we would have $D(X, \mathbb{N}, [a, b]) = +\infty$ with positive probability, which, by the foregoing result, is impossible. $\square$
It is also useful to consider decreasing rather than increasing families of $\sigma$-algebras, or in other words to "reverse" the time in martingales. Let $(\mathscr{F}_n)_{n \le 0}$ be a sequence of sub-$\sigma$-fields such that $\mathscr{F}_n \subset \mathscr{F}_m$ if $n \le m \le 0$. A submartingale with respect to $(\mathscr{F}_n)$ is an adapted family $(X_n)$ of real-valued r.v.'s such that $E[X_n^+] < \infty$ for every $n$ and $X_n \le E[X_m \mid \mathscr{F}_n]$ for $n \le m \le 0$. We then get the following
(2.3) Theorem. If $(X_n)$, $n \in -\mathbb{N}$, is a submartingale, then $\lim_{n \to -\infty} X_n$ exists a.s. If moreover $\sup_n E[|X_n|] < \infty$, then $(X_n)$ is uniformly integrable, the convergence holds in $L^1$ and, for every $n$,
$$\lim_{k \to -\infty} X_k \le E\left[X_n \mid \mathscr{F}_{-\infty}\right].$$
Proof. It is easily seen that $\sup_n E[X_n^+] \le E[X_0^+] < +\infty$, so that the first statement is proved as Theorem (2.2). To prove the second, we first observe that the condition $\sup_n E[|X_n|] < +\infty$ is equivalent to $\lim_{n \to -\infty} E[X_n] > -\infty$. Now, for any $c > 0$ and any $n$, we have
$$\int_{\{|X_n| > c\}} |X_n|\,dP = \int_{\{X_n > c\}} X_n\,dP - E[X_n] + \int_{\{X_n \ge -c\}} X_n\,dP. \qquad (*)$$
For $\varepsilon > 0$, there is an integer $n_0$ such that $E[X_n] > E[X_{n_0}] - \varepsilon$ for $n \le n_0$; using this and the submartingale inequality yields that for $n \le n_0$,
$$\int_{\{|X_n| > c\}} |X_n|\,dP \le \varepsilon + \int_{\{X_n > c\}} X_{n_0}\,dP - E[X_{n_0}] + \int_{\{X_n \ge -c\}} X_{n_0}\,dP \le \varepsilon + \int_{\{|X_n| > c\}} |X_{n_0}|\,dP.$$
As $P[|X_n| > c] \le c^{-1} \sup_n E[|X_n|]$, the uniform integrability of the family $X_n$, $n \in -\mathbb{N}$, now follows readily from $(*)$ and implies that the convergence holds in $L^1$. Finally, if $\Gamma \in \mathscr{F}_{-\infty}$, for $m < n$,
$$\int_\Gamma X_m\,dP \le \int_\Gamma X_n\,dP,$$
and we can pass to the limit, thanks to the $L^1$-convergence, to get
$$\int_\Gamma \lim_{k \to -\infty} X_k\,dP \le \int_\Gamma X_n\,dP,$$
which ends the proof. $\square$
The following corollary to the above results is often useful.
(2.4) Corollary. Let $X_n$ be a sequence of r.v.'s converging a.s. to a r.v. $X$ and such that for every $n$, $|X_n| \le Y$ where $Y$ is integrable. If $(\mathscr{F}_n)$ is an increasing (resp. decreasing) sequence of sub-$\sigma$-algebras, then $E[X_n \mid \mathscr{F}_n]$ converges a.s. to $E[X \mid \mathscr{F}]$ where $\mathscr{F} = \sigma\left(\bigcup_n \mathscr{F}_n\right)$ (resp. $\mathscr{F} = \bigcap_n \mathscr{F}_n$).
Proof. Pick $\delta > 0$ and set
$$U = \inf_{n \ge m} X_n, \qquad V = \sup_{n \ge m} X_n,$$
where $m$ is chosen such that $E[V - U] < \delta$. Then, for $n \ge m$ we have
$$E[U \mid \mathscr{F}_n] \le E[X_n \mid \mathscr{F}_n] \le E[V \mid \mathscr{F}_n];$$
the left and right-hand sides of these inequalities are martingales which satisfy the conditions of the above theorems and therefore
$$E[U \mid \mathscr{F}] \le \underline{\lim}\, E[X_n \mid \mathscr{F}_n] \le \overline{\lim}\, E[X_n \mid \mathscr{F}_n] \le E[V \mid \mathscr{F}].$$
We similarly have
$$E[U \mid \mathscr{F}] \le E[X \mid \mathscr{F}] \le E[V \mid \mathscr{F}].$$
It follows that $E\left[\overline{\lim}\, E[X_n \mid \mathscr{F}_n] - \underline{\lim}\, E[X_n \mid \mathscr{F}_n]\right] \le \delta$, hence $E[X_n \mid \mathscr{F}_n]$ converges a.s. and the limit is $E[X \mid \mathscr{F}]$. $\square$
We now turn to the fundamental regularization theorems for continuous time (sub)martingales.
(2.5) Theorem. If $X_t$, $t \in \mathbb{R}_+$, is a submartingale, then for almost every $\omega$, for each $t \in\ ]0, \infty[$, $\lim_{r \uparrow t,\, r \in \mathbb{Q}} X_r(\omega)$ exists and for each $t \in [0, \infty[$, $\lim_{r \downarrow t,\, r \in \mathbb{Q}} X_r(\omega)$ exists.
Proof. It is enough to prove the results for $t$ belonging to some compact subinterval $I$. If $t_d$ is the right-end point of $I$, then for any $t \in I$,
$$E\left[(X_t - b)^+\right] \le E\left[(X_{t_d} - b)^+\right].$$
It follows from Proposition (2.1) that there exists a set $\Omega_0 \subset \Omega$ such that $P(\Omega_0) = 1$ and for $\omega \in \Omega_0$,
$$D\left(X(\omega), I \cap \mathbb{Q}, [a, b]\right) < \infty$$
for every pair of rational numbers $a < b$. The same reasoning as in Theorem (2.2) then proves the result. $\square$
We now define, for each $t \in [0, \infty[$,
$$X_{t+} = \overline{\lim}_{r \downarrow t,\, r \in \mathbb{Q}} X_r,$$
and for $t \in\ ]0, \infty[$,
$$X_{t-} = \overline{\lim}_{r \uparrow t,\, r \in \mathbb{Q}} X_r.$$
By the above result, these upper limits are a.s. equal to the corresponding lower limits. We study the processes thus defined.
(2.6) Proposition. Suppose that $E[|X_t|] < +\infty$ for every $t$; then $E[|X_{t+}|] < \infty$ for every $t$ and
$$X_t \le E\left[X_{t+} \mid \mathscr{F}_t\right] \quad \text{a.s.}$$
This inequality is an equality if the function $t \to E[X_t]$ is right-continuous, in particular if $X$ is a martingale. Finally, $(X_{t+})$ is a submartingale with respect to $(\mathscr{F}_{t+})$ and it is a martingale if $X$ is a martingale.
Proof. We can restrict ourselves to a compact subinterval. If $(t_n)$ is a sequence of rational numbers decreasing to $t$, then $(X_{t_n})$ is a submartingale for which we are in the situation of Theorem (2.3). Thus, it follows immediately that $X_{t+}$ is integrable and that $X_{t_n}$ converges to $X_{t+}$ in $L^1$. Therefore, we may pass to the limit in the inequality $X_t \le E[X_{t_n} \mid \mathscr{F}_t]$ to get
$$X_t \le E\left[X_{t+} \mid \mathscr{F}_t\right].$$
Also, the $L^1$-convergence implies that $E[X_{t+}] = \lim_n E[X_{t_n}]$ so that if $t \to E[X_t]$ is right-continuous, $E[X_t] = E[X_{t+}]$ hence $X_t = E[X_{t+} \mid \mathscr{F}_t]$ a.s.
Finally, let $s < t$ and pick a sequence $(s_n)$ of rational numbers smaller than $t$ decreasing to $s$. By what we have just proved,
$$X_{s_n} \le E\left[X_{t+} \mid \mathscr{F}_{s_n}\right],$$
and applying Theorem (2.3) once again, we get the desired result.
Remark. By considering $X_t \vee a$ instead of $X_t$, we can remove the assumption that $X_t$ is integrable for each $t$. The statement has to be changed accordingly.
The analogous result is true for left limits.
(2.7) Proposition. If $E[|X_t|] < \infty$ for each $t$, then $E[|X_{t-}|] < +\infty$ for each $t > 0$ and
$$X_{t-} \le E\left[X_t \mid \mathscr{F}_{t-}\right] \quad \text{a.s.}$$
This inequality is an equality if $t \to E[X_t]$ is left-continuous, in particular if $X$ is a martingale. Finally, $X_{t-}$, $t > 0$, is a submartingale with respect to $(\mathscr{F}_{t-})$ and a martingale if $X$ is a martingale.
Proof. We leave as an exercise to the reader the task of showing that for every $a \in \mathbb{R}$, $\{X_t \vee a\}$, $t \in J$, where $J$ is a compact subinterval, is uniformly integrable. The proof then follows the same pattern as for the right limits. $\square$
These results have the following important consequences.
(2.8) Theorem. If $X$ is a right-continuous submartingale, then
1) $X$ is a submartingale with respect to $(\mathscr{F}_{t+})$, and also with respect to the completion of $(\mathscr{F}_{t+})$,
2) almost every path of $X$ is cadlag.
Proof. Straightforward. $\square$
(2.9) Theorem. Let $X$ be a submartingale with respect to a right-continuous and complete filtration $(\mathscr{F}_t)$; if $t \to E[X_t]$ is right-continuous (in particular, if $X$ is a martingale) then $X$ has a cadlag modification which is an $(\mathscr{F}_t)$-submartingale.
Proof. We go back to the proof of Theorem (2.5) and define
$$\widetilde{X}_t = X_{t+} \ \text{ on } \Omega_0, \qquad \widetilde{X}_t = 0 \ \text{ on } \Omega_0^c.$$
The process $\widetilde{X}$ is a right-continuous modification of $X$ by Proposition (2.6). It is adapted to $(\mathscr{F}_t)$, since this filtration is right-continuous and complete and $\Omega_0^c$ is negligible. Thanks again to Proposition (2.6), $\widetilde{X}$ is a submartingale with respect to $(\mathscr{F}_t)$ and finally by Theorem (2.5) its paths have left limits. $\square$
These results will be put to use in the following chapter. We already observe that we can now answer a question raised in Sect. 1. If $(\mathscr{F}_t)$ is right-continuous and complete and $Y$ is an integrable random variable, we may choose $Y_t$ within the equivalence class of $E[Y \mid \mathscr{F}_t]$ in such a way that the resulting process is a cadlag martingale. The significance of these particular martingales will be seen in the next section.
From now on, unless otherwise stated, we will consider only right-continuous submartingales. For such a process, the inequality of Proposition (2.1) extends at once to
$$(b - a)\, E\left[D(X, \mathbb{R}_+, [a, b])\right] \le \sup_t E\left[(X_t - b)^+\right]$$
and the same reasoning as in Theorem (2.2) leads to the convergence theorem:
(2.10) Theorem. If $\sup_t E[X_t^+] < \infty$, then $\lim_{t \to \infty} X_t$ exists almost-surely.
A particular case which is often used is the following
(2.11) Corollary. A positive supermartingale converges a.s. as t goes to infinity.
In a fashion similar to Theorem (2.3), there is also a convergence theorem as
t goes to zero for submartingales defined on ]0,00[. We leave the details to the
reader.
The ideas and results of this section will be used in many places in the sequel.
We close this section by a first application to Brownian motion. We retain the
notation of Sect. 2 in Chap. I.
(2.12) Proposition. If $\{\Delta_n\}$ is a sequence of refining (i.e. $\Delta_n \subset \Delta_{n+1}$) subdivisions of $[0, t]$ such that $|\Delta_n| \to 0$, then
$$\lim_n \sum_{t_i \in \Delta_n} \left(B_{t_i} - B_{t_{i-1}}\right)^2 = t \quad \text{almost-surely}.$$
Proof. We use the Wiener space (see Sect. 3 Chap. I) as probability space and the Wiener measure as probability measure. If $0 = t_0 < t_1 < \cdots < t_k = t$ is a subdivision of $[0, t]$, for each sequence $\theta = (\theta_1, \ldots, \theta_k)$ where $\theta_i = \pm 1$, we define a mapping $\Theta_\theta$ on $\Omega$ by
$$\Theta_\theta w(0) = 0, \qquad \Theta_\theta w(s) = \Theta_\theta w(t_{i-1}) + \theta_i\left(w(s) - w(t_{i-1})\right) \ \text{ if } s \in [t_{i-1}, t_i],$$
$$\Theta_\theta w(s) = \Theta_\theta w(t_k) + w(s) - w(t_k) \ \text{ if } s \ge t_k.$$
Let $\mathscr{B}$ be the $\sigma$-field of events left invariant by all $\Theta_\theta$'s. It is easy to see that $W$ is left invariant by all the $\Theta_\theta$'s as well. For any integrable r.v. $Z$ on $W$, we consequently have
$$E[Z \mid \mathscr{B}] = 2^{-k} \sum_\theta Z \circ \Theta_\theta,$$
hence $E\left[\left(B_{t_i} - B_{t_{i-1}}\right)\left(B_{t_j} - B_{t_{j-1}}\right) \mid \mathscr{B}\right] = 0$ for $i \ne j$. If $\mathscr{B}_n$ is the $\sigma$-field corresponding to $\Delta_n$, the family $\mathscr{B}_n$ is decreasing and moreover
$$E\left[B_t^2 \mid \mathscr{B}_n\right] = \sum_i \left(B_{t_i} - B_{t_{i-1}}\right)^2.$$
By Theorem (2.3), $\sum_i \left(B_{t_i} - B_{t_{i-1}}\right)^2$ converges a.s. and, as we already know that it converges to $t$ in $L^2$, the proof is complete. $\square$
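The almost-sure convergence along refining subdivisions can be watched numerically; the following sketch (not from the text; the grid sizes are arbitrary) samples one Brownian path on a fine dyadic grid of $[0, 1]$ and evaluates the sums of squared increments along coarser dyadic subdivisions:

```python
import numpy as np

# Along refining dyadic subdivisions of [0, 1], the sums
# sum_i (B_{t_i} - B_{t_{i-1}})^2 approach t = 1, as Proposition (2.12) asserts.
rng = np.random.default_rng(2)
t, N = 1.0, 2**14                       # finest grid: 2^14 steps
dB = rng.normal(0.0, np.sqrt(t / N), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])
for k in (4, 8, 12, 14):                # Delta_n refines as k grows
    incr = B[::2**(14 - k)]             # B sampled along 2^k dyadic points
    qv = np.sum(np.diff(incr) ** 2)
    print(k, qv)                        # qv tends to t = 1
```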
#
(2.13) Exercise. 1°) Let $(\Omega, \mathscr{F}, P)$ be a probability space endowed with a filtration $(\mathscr{F}_n)$ such that $\sigma\left(\bigcup_n \mathscr{F}_n\right) = \mathscr{F}$. Let $Q$ be another probability measure on $\mathscr{F}$ and $X_n$ be the Radon-Nikodym derivative of the restriction of $Q$ to $\mathscr{F}_n$ with respect to the restriction of $P$ to $\mathscr{F}_n$.
Prove that $(X_n)$ is a positive $(\mathscr{F}_n, P)$-supermartingale and that its limit $X_\infty$ is the Radon-Nikodym derivative $dQ/dP$. If $Q \ll P$ on $\mathscr{F}$, then $(X_n)$ is a martingale and $X_n = E[X_\infty \mid \mathscr{F}_n]$.
More on this matter will be said in Sect. 1 Chap. VIII.
2°) Let $P$ be a transition probability (see Sect. 1 Chap. III) on a separable measurable space $(E, \mathscr{E})$ and $\lambda$ be a probability measure on $\mathscr{E}$. Prove that there is a bimeasurable function $f$ on $E \times E$ and a kernel $N$ on $(E, \mathscr{E})$ such that for each $x$, the measure $N(x, \cdot)$ is singular with respect to $\lambda$ and
$$P(x, A) = N(x, A) + \int_A f(x, y)\,\lambda(dy).$$
* (2.14) Exercise (Dubins' inequality). If $(X_n)$, $n = 0, 1, \ldots$ is a positive supermartingale, prove, with the notation of the beginning of the section, that
$$P\left[D(X, \mathbb{N}, [a, b]) \ge n\right] \le (a/b)^n\, E\left[1 \wedge (X_0/b)\right].$$
State and prove a similar result for upcrossings instead of downcrossings.
#
(2.15) Exercise. Let $(\Omega, \mathscr{F}, P)$ be a probability space and $(\mathscr{F}_n, n \ge 0)$ be a decreasing sequence of sub-$\sigma$-fields of $\mathscr{F}$, i.e. $\mathscr{F}_n \subset \mathscr{F}_m$ if $0 \le m \le n$. If $\mathscr{F}'$ is another sub-$\sigma$-field of $\mathscr{F}$ independent of $\mathscr{F}_0$, prove that
$$\bigcap_n \left(\mathscr{F}' \vee \mathscr{F}_n\right) = \mathscr{F}' \vee \left(\bigcap_n \mathscr{F}_n\right)$$
up to $P$-negligible sets.
[Hint: Show that, if $C \in \mathscr{F}'$, $D \in \mathscr{F}_m$, then $\lim_{n \to \infty} P\left(C \cap D \mid \mathscr{F}' \vee \mathscr{F}_n\right)$ belongs to $\mathscr{F}' \vee \left(\bigcap_n \mathscr{F}_n\right)$.]
(2.16) Exercise. For the standard BM, set $\mathscr{F}_t' = \sigma(B_u, u \ge t)$. Prove that for every real $\lambda$, the process $\exp\left\{(\lambda B_t/t) - (\lambda^2/2t)\right\}$, $t > 0$, is a martingale with respect to the decreasing family $(\mathscr{F}_t')$.
[Hint: Observe that $B_s - (s/t) B_t$ is independent of $\mathscr{F}_t'$ for $s < t$ or use time-inversion.]
(2.17) Exercise. Suppose that we are given two filtrations $(\mathscr{F}_t^0)$ and $(\mathscr{F}_t)$ such that $\mathscr{F}_t^0 \subset \mathscr{F}_t$ for each $t$ and these two $\sigma$-fields differ only by negligible sets of $\mathscr{F}_\infty$. Assume further that $(\mathscr{F}_t^0)$ is right-continuous.
1°) Show that every $(\mathscr{F}_t)$-adapted and right-continuous process is indistinguishable from an $(\mathscr{F}_t^0)$-adapted process.
2°) Show that a right-continuous $(\mathscr{F}_t)$-submartingale is indistinguishable from a cadlag $(\mathscr{F}_t^0)$-submartingale.
(2.18) Exercise (Krickeberg decomposition). A process $X$ is said to be $L^1$-bounded or bounded in $L^1$ if there is a finite constant $K$ such that for every $t \ge 0$, $E[|X_t|] \le K$.
1°) If $M$ is an $L^1$-bounded martingale, prove that for each $t$ the limits
$$M_t^{(\pm)} = \lim_{n \to \infty} E\left[M_n^{\pm} \mid \mathscr{F}_t\right]$$
exist a.s. and the processes $M^{(\pm)}$ thus defined are positive martingales.
2°) If the filtration is right-continuous and complete, prove that a right-continuous martingale $M$ is bounded in $L^1$ iff it can be written as the difference of two cadlag positive martingales $M^{(+)}$ and $M^{(-)}$.
3°) Prove that $M^{(+)}$ and $M^{(-)}$ may be chosen to satisfy
$$\sup_t E[|M_t|] = E\left[M_0^{(+)}\right] + E\left[M_0^{(-)}\right]$$
in which case the decomposition is unique (up to indistinguishability).
4°) The uniqueness property extends in the following way: if $Y$ and $Z$ are two positive martingales such that $M = Y - Z$, then $Y \ge M^{(+)}$ and $Z \ge M^{(-)}$ where $M^{(\pm)}$ are the martingales of 3°).
§3. Optional Stopping Theorem
We recall that all the (sub, super)martingales we consider henceforth are cadlag. In the sequel, we shall denote by $\mathscr{F}_\infty$ the $\sigma$-algebra $\bigvee_t \mathscr{F}_t$. In Theorem (2.9) of last section, the limit variable $X_\infty$ is measurable with respect to $\mathscr{F}_\infty$. We want to know whether the process indexed by $\mathbb{R}_+ \cup \{+\infty\}$ obtained by adjoining $X_\infty$ and $\mathscr{F}_\infty$ is still a (sub)martingale. The corresponding result is especially interesting for martingales and reads as follows.
(3.1) Theorem. For a martingale $X_t$, $t \in \mathbb{R}_+$, the following three conditions are equivalent:
i) $\lim_{t \to \infty} X_t$ exists in the $L^1$-sense;
ii) there exists a random variable $X_\infty$ in $L^1$, such that $X_t = E[X_\infty \mid \mathscr{F}_t]$;
iii) the family $\{X_t, t \in \mathbb{R}_+\}$ is uniformly integrable.
If these conditions hold, then $X_\infty = \lim_{t \to \infty} X_t$ a.s. Moreover, if for some $p > 1$, the martingale is bounded in $L^p$, i.e. $\sup_t E[|X_t|^p] < \infty$, then the equivalent conditions above are satisfied and the convergence holds in the $L^p$-sense.
Proof. That ii) implies iii) is a classical exercise. Indeed, if we set $\Gamma_t = \{|E[X_\infty \mid \mathscr{F}_t]| > a\}$ and $a_t = \int_{\Gamma_t} |E[X_\infty \mid \mathscr{F}_t]|\,dP$, then
$$a_t \le \int_{\Gamma_t} E\left[|X_\infty| \mid \mathscr{F}_t\right] dP = \int_{\Gamma_t} |X_\infty|\,dP.$$
On the other hand, Markov's inequality implies
$$P(\Gamma_t) \le a^{-1} E[|X_\infty|].$$
It follows that, by taking $a$ large, we can make $a_t$ arbitrarily small independently of $t$.
If iii) holds, then the condition of Theorem (2.10) is satisfied and $X_t$ converges to a r.v. $X_\infty$ a.s., but since $\{X_t, t \in \mathbb{R}_+\}$ is uniformly integrable, the convergence holds in the $L^1$-sense so that i) is satisfied.
If i) is satisfied and since the conditional expectation is an $L^1$-continuous operator, passing to the limit as $h$ goes to infinity in the equality $X_t = E[X_{t+h} \mid \mathscr{F}_t]$ yields ii).
Finally, if $\sup_t E[|X_t|^p] < \infty$, by Theorem (1.7), $\sup_t |X_t|$ is in $L^p$, and consequently the family $\{|X_t|^p, t \in \mathbb{R}_+\}$ is uniformly integrable. $\square$
It is important to notice that, for $p > 1$, a martingale which is bounded in $L^p$ is automatically uniformly integrable and its supremum is in $L^p$. For $p = 1$, the situation is altogether different. A martingale may be bounded in $L^1$ without being uniformly integrable, and may be uniformly integrable without belonging to $\mathbb{H}^1$, where $\mathbb{H}^1$ is the space of martingales with an integrable supremum (see Exercise (1.17)). An example of the former is provided by $\exp\{B_t - t/2\}$ where $B$ is the BM; indeed, as $B_t$ takes on negative values for arbitrarily large times, this martingale converges to zero a.s. as $t$ goes to infinity, and thus, by the preceding theorem cannot be uniformly integrable. An example of the latter is given in Exercise (3.15).
The analogous result is true for sub- and supermartingales with inequalities in ii); we leave as an exercise to the reader the task of stating and proving them.
We now turn to the optional stopping theorem, a first version of which was stated in Proposition (1.4). If $X$ is a uniformly integrable martingale, then $X_\infty$ exists a.s. and if $S$ is a stopping time, we define $X_S$ on $\{S = \infty\}$ by setting $X_S = X_\infty$.
(3.2) Theorem. If $X$ is a martingale and $S$, $T$ are two bounded stopping times with $S \le T$,
$$X_S = E\left[X_T \mid \mathscr{F}_S\right] \quad \text{a.s.}$$
If $X$ is uniformly integrable, the family $\{X_S\}$ where $S$ runs through the set of all stopping times is uniformly integrable and if $S \le T$,
$$X_S = E\left[X_T \mid \mathscr{F}_S\right] \quad \text{a.s.}$$
Remark. The two statements are actually the same, as a martingale defined on an interval which is closed on the right is uniformly integrable.
Proof. We prove the second statement. We recall from Proposition (1.4) that
$$X_S = E\left[X_T \mid \mathscr{F}_S\right]$$
if $S$ and $T$ take their values in a finite set and $S \le T$. It is known that the family $U$ of r.v.'s $E[X_\infty \mid \mathscr{G}]$ where $\mathscr{G}$ runs through all the sub-$\sigma$-fields of $\mathscr{F}$ is uniformly integrable. Its closure $\bar{U}$ in $L^1$ is still uniformly integrable. If $S$ is any stopping time, there is a sequence $S_k$ of stopping times decreasing to $S$ and taking only finitely many values; by the right-continuity of $X$, we see that $X_S$ also belongs to $\bar{U}$, which proves that the set $\{X_S,\ S \text{ stopping time}\}$ is uniformly integrable.
As a result, we also see that $X_{S_k}$ converges to $X_S$ in $L^1$. If $\Gamma \in \mathscr{F}_S$, it belongs a fortiori to $\mathscr{F}_{S_k}$ and we have
$$\int_\Gamma X_{S_k}\,dP = \int_\Gamma X_\infty\,dP;$$
passing to the limit yields
$$\int_\Gamma X_S\,dP = \int_\Gamma X_\infty\,dP;$$
in other words, $X_S = E[X_\infty \mid \mathscr{F}_S]$ which is the desired result. $\square$
We insist on the importance of uniform integrability in the above theorem. Let $X$ be a positive continuous martingale converging to zero and such that $X_0 = 1$, for instance $X_t = \exp(B_t - t/2)$; if for $\alpha < 1$, $T = \inf\{t : X_t \le \alpha\}$ we have $X_T = \alpha$, hence $E[X_T] = \alpha$, whereas we should have $E[X_T] = E[X_0] = 1$ if the optional stopping theorem applied. Another interesting example with the same martingale is provided by the stopping times $d_t = \inf\{s > t : B_s = 0\}$. In this situation, all we have is an inequality as is more generally the case with positive supermartingales.
(3.3) Theorem. If $X$ is a positive right-continuous supermartingale and if we set $X_\infty = 0$, for any pair $S$, $T$ of stopping times with $S \le T$,
$$X_S \ge E\left[X_T \mid \mathscr{F}_S\right].$$
Proof. Left to the reader as an exercise as well as analogous statements for submartingales. $\square$
Before we proceed, let us observe that we have a hierarchy among the processes we have studied which is expressed by the following strict inclusions:
$$\text{supermartingales} \supsetneq \text{martingales} \supsetneq \text{uniformly integrable martingales} \supsetneq \mathbb{H}^1.$$
We now turn to some applications of the optional stopping theorem.
(3.4) Proposition. If $X$ is a positive right-continuous supermartingale and
$$T(\omega) = \inf\{t : X_t(\omega) = 0\} \wedge \inf\{t > 0 : X_{t-}(\omega) = 0\}$$
then, for almost every $\omega$, $X_\cdot(\omega)$ vanishes on $[T(\omega), \infty[$.
Proof. Let $T_n = \inf\{t : X_t \le 1/n\}$; obviously, $T_{n-1} \le T_n \le T$. On $\{T_n = \infty\}$, a fortiori $T = \infty$ and there is nothing to prove. On $\{T_n < \infty\}$, we have $X_{T_n} \le 1/n$. Let $q \in \mathbb{Q}_+$; $T + q$ is a stopping time $> T_n$ and, by the previous result,
$$\int_{\{T_n < \infty\}} X_{T+q}\,dP \le \int_{\{T_n < \infty\}} X_{T_n}\,dP \le 1/n.$$
Passing to the limit yields
$$\int_{\bigcap_n \{T_n < \infty\}} X_{T+q}\,dP = 0.$$
Since $\{T < \infty\} \subset \{T_n < \infty,\ \forall n\}$, we finally get $X_{T+q} = 0$ a.s. on $\{T < \infty\}$. The proof is now easily completed. $\square$
(3.5) Proposition. A cadlag adapted process $X$ is a martingale if and only if, for every bounded stopping time $T$, the r.v. $X_T$ is in $L^1$ and
$$E[X_T] = E[X_0].$$
Proof. The "only if" part follows from the optional stopping theorem. Conversely, if $s < t$ and $A \in \mathscr{F}_s$, the r.v. $T = t 1_{A^c} + s 1_A$ is a stopping time and consequently
$$E[X_0] = E[X_T] = \int_{A^c} X_t\,dP + \int_A X_s\,dP.$$
On the other hand, $t$ itself is a stopping time, and
$$E[X_0] = E[X_t] = \int_{A^c} X_t\,dP + \int_A X_t\,dP.$$
Comparing the two equalities yields $X_s = E[X_t \mid \mathscr{F}_s]$. $\square$
(3.6) Corollary. If $M$ is a martingale and $T$ a stopping time, the stopped process $M^T$ is a martingale with respect to $(\mathscr{F}_t)$.
Proof. The process $M^T$ is obviously cadlag and adapted and if $S$ is a bounded stopping time, so is $S \wedge T$; hence
$$E\left[M_S^T\right] = E\left[M_{S \wedge T}\right] = E[M_0] = E\left[M_0^T\right]. \qquad \square$$
Remarks. 1°) By applying the optional stopping theorem directly to $M$ and to the stopping times $T \wedge s$ and $T \wedge t$, we would have found that $M^T$ is a martingale but only with respect to the filtration $(\mathscr{F}_{T \wedge t})$. But actually, a martingale with respect to $(\mathscr{F}_{T \wedge t})$ is automatically a martingale with respect to $(\mathscr{F}_t)$.
2°) A property which is equivalent to the corollary is that the conditional expectations $E[\,\cdot \mid \mathscr{F}_T]$ and $E[\,\cdot \mid \mathscr{F}_t]$ commute and that $E[\,\cdot \mid \mathscr{F}_T \mid \mathscr{F}_t] = E[\,\cdot \mid \mathscr{F}_{t \wedge T}]$. The proof of this fact, which may be obtained also without referring to martingales, is left as an exercise to the reader.
Here again, we close this section with applications to the linear BM which we denote by $B$. If $a$ is a positive real number, we set
$$T_a = \inf\{t > 0 : B_t = a\};$$
thanks to the continuity of paths, these times could also be defined as
$$T_a = \inf\{t > 0 : B_t \ge a\};$$
they are stopping times with respect to the natural filtration of $B$. Because of the recurrence of BM, they are a.s. finite.
(3.7) Proposition. The Laplace transforms of the laws of $T_a$ and of $\widehat{T}_a = \inf\{t > 0 : |B_t| = a\}$ are given by
$$E\left[\exp\left(-\lambda T_a\right)\right] = \exp\left(-a\sqrt{2\lambda}\right), \qquad E\left[\exp\left(-\lambda \widehat{T}_a\right)\right] = \left(\cosh\left(a\sqrt{2\lambda}\right)\right)^{-1}.$$
Proof. For $s \ge 0$, $M_t^s = \exp\left(s B_t - s^2 t/2\right)$ is a martingale and consequently, $M_{t \wedge T_a}^s$ is a martingale bounded by $e^{sa}$. A bounded martingale is obviously uniformly integrable, and therefore, we may apply the optional stopping theorem to the effect that
$$E\left[M_{T_a}^s\right] = E\left[M_0^s\right] = 1,$$
which yields $E\left[\exp\left(-\frac{s^2}{2} T_a\right)\right] = e^{-sa}$, whence the first result follows by taking $\lambda = s^2/2$.
For the second result, the reasoning is the same using the martingale $N_t = \left(M_t^s + M_t^{-s}\right)/2 = \cosh(s B_t) \exp\left(-\frac{s^2}{2} t\right)$, as $N_{t \wedge \widehat{T}_a}$ is bounded by $\cosh(sa)$. $\square$
Remark. By inverting its Laplace transform, we could prove that $T_a$ has a law given by the density $a(2\pi x^3)^{-1/2} \exp(-a^2/2x)$, but this will be done by another method in the following chapter. We can already observe that
$$E\left[\exp\left(-\lambda T_{a+b}\right)\right] = E\left[\exp\left(-\lambda T_a\right)\right] E\left[\exp\left(-\lambda T_b\right)\right].$$
The reason for that is the independence of $T_a$ and $(T_{a+b} - T_a)$, which follows from the strong Markov property of BM proved in the following chapter.
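The first formula of Proposition (3.7) can be checked by simulation; the sketch below (not from the text; level, step size and horizon are arbitrary choices) estimates $E[\exp(-\lambda T_a)]$ on a discretized path. Crossings between grid points are missed, so the estimate is biased slightly downwards:

```python
import numpy as np

# Estimate E[exp(-lambda * T_a)] for the first time T_a a discretized BM
# reaches level a, and compare with exp(-a * sqrt(2 * lambda)).
rng = np.random.default_rng(3)
a, lam, dt, horizon, n_paths = 1.0, 1.0, 1e-3, 20.0, 2000
n_steps = int(horizon / dt)
pos = np.zeros(n_paths)
hit_time = np.full(n_paths, np.inf)        # paths not yet at level a
for k in range(1, n_steps + 1):
    pos += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    newly_hit = (pos >= a) & np.isinf(hit_time)
    hit_time[newly_hit] = k * dt
# exp(-lam * inf) = 0: unhit paths contribute (negligibly) nothing
estimate = np.exp(-lam * hit_time).mean()
print(estimate, np.exp(-a * np.sqrt(2 * lam)))
```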
Here is another application in which we call $P_x$ the law of $x + B$.
(3.8) Proposition. We have, for $a < x < b$,
$$P_x[T_a < T_b] = \frac{b - x}{b - a}, \qquad P_x[T_b < T_a] = \frac{x - a}{b - a}.$$
Proof. By the recurrence of BM,
$$P_x[T_a < T_b] + P_x[T_b < T_a] = 1.$$
On the other hand, $B_{T_a \wedge T_b}$ is a bounded martingale to which we can apply the optional stopping theorem to get, since $B_{T_a} = a$, $B_{T_b} = b$,
$$a\, P_x[T_a < T_b] + b\, P_x[T_b < T_a] = x.$$
We now have a linear system which we solve to get the desired result. $\square$
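Because the simple random walk is itself a martingale, the same optional stopping argument applies to it verbatim: starting from an integer $x \in\ ]a, b[$, it exits at $a$ with probability $(b-x)/(b-a)$, and its mean exit time is $(x-a)(b-x)$ (compare Exercise (3.11) 3°) for the Brownian analogue). A Monte Carlo sketch (not from the text; the endpoints and sample size are arbitrary):

```python
import numpy as np

# Simple random walk started at x in ]a, b[: estimate the probability of
# exiting at a and the mean exit time, and compare with the closed forms
# (b - x)/(b - a) and (x - a)(b - x).
rng = np.random.default_rng(4)
a, x, b, n_paths = 0, 1, 4, 20000
exits_at_a, total_steps = 0, 0
for _ in range(n_paths):
    pos, steps = x, 0
    while a < pos < b:
        pos += rng.choice((-1, 1))
        steps += 1
    exits_at_a += (pos == a)
    total_steps += steps
print(exits_at_a / n_paths, (b - x) / (b - a))   # close to 3/4
print(total_steps / n_paths, (x - a) * (b - x))  # close to 3
```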
(3.9) Exercise. If $X$ is a positive supermartingale such that $E[\lim_n X_n] = E[X_0] < \infty$, then $X$ is a uniformly integrable martingale.
(3.10) Exercise. Let $c$ and $d$ be two strictly positive numbers, $B$ a standard linear BM and set $T = T_c \wedge T_{-d}$.
1°) Prove that, for every real number $s$,
$$E\left[e^{-(s^2/2)T}\, 1_{(T = T_c)}\right] = \sinh(sd)/\sinh\left(s(c + d)\right),$$
and derive therefrom another proof of Proposition (3.8). Prove that
$$E\left[\exp\left(-\frac{s^2}{2} T\right)\right] = \cosh\left(s(c - d)/2\right)/\cosh\left(s(c + d)/2\right),$$
and compare with the result in Proposition (3.7).
[Hint: Use the martingale $\exp\left(s\left(B_t - \frac{c-d}{2}\right) - \frac{s^2}{2} t\right)$.]
* 2°) Prove that for $0 \le s < \pi(c + d)^{-1}$,
$$E\left[\exp\left(\frac{s^2}{2} T\right)\right] = \cos\left(s(c - d)/2\right)/\cos\left(s(c + d)/2\right).$$
[Hint: Either use analytic continuation or use the complex martingale $\exp\left(is\left(B_t - \frac{c-d}{2}\right) + \frac{s^2}{2} t\right)$.]
(3.11) Exercise. 1°) With the notation of Proposition (3.7), if $B$ is the standard linear BM, by considering the martingale $B_t^2 - t$, prove that $\widehat{T}_a$ is integrable and compute $E[\widehat{T}_a]$.
[Hint: To prove that $\widehat{T}_a \in L^1$, use the times $\widehat{T}_a \wedge n$.]
2°) Prove that $T_a$ is not integrable.
[Hint: If it were, we would have $a = E[B_{T_a}] = 0$.]
3°) With the notation of Proposition (3.8), prove that
$$E_x\left[T_a \wedge T_b\right] = (x - a)(b - x).$$
This will be taken up in Exercise (2.8) in Chap. VI.
[Hint: This again can be proved using the martingale $B_t^2 - t$, but can also be derived from Exercise (3.10) 2°).]
#
(3.12) Exercise. Let $M$ be a positive continuous martingale converging a.s. to zero as $t$ goes to infinity. Put $M^* = \sup_t M_t$.
1°) For $x > 0$, prove that
$$P\left[M^* \ge x \mid \mathscr{F}_0\right] = 1 \wedge (M_0/x).$$
[Hint: Stop the martingale when it first becomes larger than $x$.]
2°) More generally, if $X$ is a positive $\mathscr{F}_0$-measurable r.v., prove that
$$P\left[M^* \ge X \mid \mathscr{F}_0\right] = 1 \wedge (M_0/X).$$
Conclude that $M_0$ is the largest $\mathscr{F}_0$-measurable r.v. smaller than $M^*$ and that $M^* \stackrel{(d)}{=} M_0/U$ where $U$ is independent of $M_0$ and uniformly distributed on $[0, 1]$.
3°) If $B$ is the BM started at $a > 0$ and $T_0 = \inf\{t : B_t = 0\}$, find the law of the r.v. $Y = \sup_{t < T_0} B_t$.
4°) Let $B$ be the standard linear BM; using $M_t = \exp\left(2\mu(B_t - \mu t)\right)$, $\mu > 0$, prove that the r.v. $Y = \sup_t (B_t - \mu t)$ has an exponential density with parameter $2\mu$. The process $B_t - \mu t$ is called the Brownian motion with drift $(-\mu)$ and is further studied in Exercise (3.14).
5°) Prove that the r.v. $h$ of Exercise (1.21) Chap. I is integrable and compute the constant $C_2(X)$.
N.B. The questions 3°) through 5°) are independent from one another.
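Question 4°) of Exercise (3.12) lends itself to simulation; in the sketch below (not from the text; drift, step size and horizon are arbitrary), the grid maximum slightly underestimates the true supremum, so the empirical mean falls a little below $1/2\mu$:

```python
import numpy as np

# Y = sup_t (B_t - mu*t) should be exponential of parameter 2*mu,
# hence E[Y] = 1/(2*mu); estimate E[Y] on a discretized path.
rng = np.random.default_rng(5)
mu, dt, horizon, n_paths = 1.0, 0.01, 20.0, 4000
n = int(horizon / dt)
incr = rng.normal(-mu * dt, np.sqrt(dt), size=(n_paths, n))
paths = np.cumsum(incr, axis=1)
Y = np.maximum(paths.max(axis=1), 0.0)   # sup over the grid, including t = 0
print(Y.mean())                          # slightly below 1/(2*mu) = 0.5
```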
(3.13) Exercise. Let $B$ be the standard linear BM and $f$ be a locally bounded Borel function on $\mathbb{R}$.
1°) If $f(B_t)$ is a right-continuous martingale with respect to the filtration $(\mathscr{F}_t^0) = \left(\sigma(B_s, s \le t)\right)$, prove that $f$ is an affine function (one could also make no assumption on $f$ and suppose that $f(B_t)$ is a continuous $(\mathscr{F}_t^0)$-martingale). Observe that the assumption of right-continuity is essential; if $f$ is the indicator function of the set of rational numbers, then $f(B)$ is a martingale.
2°) If we suppose that $f(B_t)$ is a continuous $(\mathscr{F}_t^0)$-submartingale, prove that $f$ has no proper local maximum.
[Hint: For $c > 0$, use the stopping times $T = T_c \wedge T_{-1}$ and
$$S = \inf\{t \ge T : B_t = -1 \ \text{ or } \ c + \varepsilon \ \text{ or } \ c - \varepsilon\}.]$$
3°) In the situation of 2°), prove that $f$ is convex.
[Hint: A continuous function is convex if and only if $f(x) + \alpha x + \beta$ has no proper local maximum for any $\alpha$ and $\beta$.]
* (3.14) Exercise. Let $B$ be the standard linear BM and, for $a > 0$, set
$$\sigma_a = \inf\{t : B_t < t - a\}.$$
1°) Prove that $\sigma_a$ is an a.s. finite stopping time and that $\lim_{a \to \infty} \sigma_a = +\infty$ a.s.
2°) Prove that $E\left[\exp\left(\frac{1}{2}\sigma_a\right)\right] = \exp(a)$.
[Hint: For $\lambda > 0$, use the martingale $\exp\left(-(\sqrt{1 + 2\lambda} - 1)(B_t - t) - \lambda t\right)$ stopped at $\sigma_a$ to prove that $E\left[e^{-\lambda \sigma_a}\right] = \exp\left(-a(\sqrt{1 + 2\lambda} - 1)\right)$. Then, use analytic continuation.]
3°) Prove that the martingale $\exp\left(B_t - \frac{1}{2}t\right)$ stopped at $\sigma_a$ is uniformly integrable.
4°) For $a > 0$ and $b > 0$, define now
$$\sigma_{a,b} = \inf\{t : B_t < bt - a\};$$
in particular, $\sigma_a = \sigma_{a,1}$. Prove that
$$E\left[\exp\left(\frac{b^2}{2}\sigma_{a,b}\right)\right] = \exp(ab).$$
[Hint: Using the scaling property of BM, prove that $\sigma_{a,b} \stackrel{(d)}{=} b^{-2} \sigma_{ab,1}$.]
5°) For $b < 1$, prove that $E\left[\exp\left(\frac{1}{2}\sigma_{1,b}\right)\right] = +\infty$.
[Hint: Use 2°).]
* (3.15) Exercise. Let $(\Omega, \mathscr{F}, P)$ be $\left([0, 1], \mathscr{B}([0, 1]), d\omega\right)$ where $d\omega$ is the Lebesgue measure. For $0 \le t < 1$, let $\mathscr{F}_t$ be the smallest sub-$\sigma$-field of $\mathscr{F}$ containing the Borel subsets of $[0, t]$ and the negligible sets of $[0, 1]$.
1°) For $f \in L^1([0, 1], d\omega)$, give the explicit value of the right-continuous version of the martingale
$$X_t(\omega) = E\left[f \mid \mathscr{F}_t\right](\omega), \quad 0 \le t < 1.$$
2°) Set $\hat{H} f(t) = \frac{1}{1-t} \int_t^1 f(u)\,du$ and, for $p > 1$, prove Hardy's $L^p$-inequality
$$\left\|\hat{H} f\right\|_p \le \frac{p}{p-1}\, \|f\|_p.$$
[Hint: Use Doob's $L^p$-inequality.]
3°) Use the above set-up to give an example of a uniformly integrable martingale which is not in $H^1$.
4°) If $\int_0^1 |f(\omega)| \log^+ |f(\omega)|\,d\omega < \infty$, check directly that $\hat{H} f$ is integrable. Observe that this would equally follow from the continuous-time version of the result in Exercise (1.16).
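A direct numerical check (a sketch, not from the text): assuming the operator intended is the tail average $\hat{H} f(t) = (1-t)^{-1} \int_t^1 f(u)\,du$ — the form suggested by the conditional expectations of 1°) — the inequality can be tested on a grid for, say, $f(u) = (1-u)^{-1/4}$ and $p = 2$, a case where in fact $\hat{H} f = \frac{4}{3} f$:

```python
import numpy as np

# Grid check of ||H f||_p <= (p/(p-1)) ||f||_p on [0, 1[ for
# H f(t) = (1-t)^{-1} * integral_t^1 f(u) du (assumed form, see lead-in).
p = 2.0
u = np.linspace(0.0, 1.0, 200001)[:-1]       # grid on [0, 1[
f = (1.0 - u) ** (-0.25)                     # an unbounded f in L^2
du = u[1] - u[0]
tail = np.cumsum((f * du)[::-1])[::-1]       # ~ integral_t^1 f(u) du
Hf = tail / (1.0 - u)
norm_f = (np.sum(f ** p) * du) ** (1 / p)    # ~ sqrt(2)
norm_Hf = (np.sum(Hf ** p) * du) ** (1 / p)  # ~ (4/3) * sqrt(2)
print(norm_Hf, p / (p - 1) * norm_f)         # the constant is 2 for p = 2
```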
** (3.16) Exercise (BMO-martingales). 1°) Let $Y$ be a continuous uniformly integrable martingale. Prove that for any $p \in [1,\infty[$, the following two properties are equivalent:
i) there is a constant $C$ such that for any stopping time $T$
$$E\left[|Y_\infty - Y_T|^p \mid \mathcal{F}_T\right] \le C^p \quad \text{a.s.};$$
ii) there is a constant $C$ such that for any stopping time $T$
$$E\left[|Y_\infty - Y_T|^p\right] \le C^p\, P[T < \infty].$$
[Hint: Use the stopping time $T_\Gamma$ of Exercise (4.19) Chap. I.]
The smallest constant for which this is true is the same in both cases and is denoted by $\|Y\|_{\mathrm{BMO}_p}$. The space $\{Y : \|Y\|_{\mathrm{BMO}_p} < \infty\}$ is called $\mathrm{BMO}_p$ and $\|\cdot\|_{\mathrm{BMO}_p}$ is a semi-norm on this space. Prove that for $p < q$, $\mathrm{BMO}_q \subseteq \mathrm{BMO}_p$. The reverse inclusion will be proved in the following questions, so we will write simply BMO for this space.
2°) The conditions i) and ii) are also equivalent to
iii) there is a constant $C$ such that for any stopping time $T$ there is an $\mathcal{F}_T$-measurable, $L^p$ r.v. $\alpha_T$ such that
$$E\left[|Y_\infty - \alpha_T|^p \mid \mathcal{F}_T\right] \le C^p \quad \text{a.s.}$$
3°) If $Y_t = E\left[Y_\infty \mid \mathcal{F}_t\right]$ for a bounded r.v. $Y_\infty$, then $Y \in \mathrm{BMO}$ and $\|Y\|_{\mathrm{BMO}_1} \le 2\|Y_\infty\|_\infty$. Examples of unbounded martingales in BMO will be given in Exercise (3.30) of Chap. III.
4°) If $Y \in \mathrm{BMO}$ and $T$ is a stopping time, then $Y^T \in \mathrm{BMO}$ and $\|Y^T\|_{\mathrm{BMO}_1} \le \|Y\|_{\mathrm{BMO}_1}$.
5°) (The John-Nirenberg inequality). Let $Y \in \mathrm{BMO}$ with $\|Y\|_{\mathrm{BMO}_1} \le 1$. Let $a > 1$, let $T$ be a stopping time, and define inductively
$$R_0 = T, \qquad R_n = \inf\{t > R_{n-1} : |Y_t - Y_{R_{n-1}}| > a\};$$
prove that $P[R_n < \infty] \ge a\, P[R_{n+1} < \infty]$. Prove that there is a constant $C$ such that for any $T$
$$P\left[\sup_{t \ge T} |Y_t - Y_T| > \lambda\right] \le C e^{-\lambda/e}\, P[T < \infty];$$
in particular, if $Y^* = \sup_t |Y_t|$,
$$P\left[Y^* \ge \lambda\right] \le C e^{-\lambda/e}.$$
As a result, $Y^*$ is in $L^p$ for every $p$.
[Hint: Apply the inequality $E\left[|Y_S - Y_T|\right] \le \|Y\|_{\mathrm{BMO}_1} P[T < \infty]$, which is valid for $S \ge T$, to the stopping times $R_n$ and $R_{n+1}$.]
6°) Deduce from 5°) that $\mathrm{BMO}_p$ is the same for all $p$ and that all the semi-norms $\|Y\|_{\mathrm{BMO}_p}$ are equivalent.
** (3.17) Exercise (Continuation of Exercise (1.17)). [The dual space of $H^1$].
1°) We call atom a continuous martingale $A$ for which there is a stopping time $T$ such that
i) $A_t = 0$ for $t \le T$; ii) $|A_t| \le P[T < \infty]^{-1}$ for every $t$. Give examples of atoms and prove that each atom is in the unit ball of $H^1$.
2°) Let $X \in H^1$ and suppose that $X_0 = 0$; for every $p \in \mathbb{Z}$, define
$$T_p = \inf\{t : |X_t| > 2^p\}$$
and $C_p = 3 \cdot 2^p P[T_p < \infty]$. Prove that $A^p = \left(X^{T_{p+1}} - X^{T_p}\right)/C_p$ is an atom for each $p$ and that $X = \sum_{p=-\infty}^{+\infty} C_p A^p$ in $H^1$. Moreover, $\sum_{p=-\infty}^{+\infty} |C_p| \le 6\|X\|_{H^1}$.
3°) Let $Y$ be a uniformly integrable continuous martingale. Prove that
$$\frac{1}{2}\|Y\|_{\mathrm{BMO}_1} \le \sup\left\{\left|E\left[A_\infty Y_\infty\right]\right| ;\ A \text{ atom}\right\} \le \|Y\|_{\mathrm{BMO}_1}$$
and deduce that the dual space $(H^1)^*$ of $H^1$ is contained in BMO.
[Hint: For the last step, use the fact that the Hilbert space $H^2$ (Sect. 1 Chap. IV) is dense in $H^1$.]
4°) If $X$ and $Y$ are in $H^2$, prove Fefferman's inequality
$$\left|E\left[X_\infty Y_\infty\right]\right| \le 6\|X\|_{H^1}\|Y\|_{\mathrm{BMO}_1}$$
and deduce that $(H^1)^* = \mathrm{BMO}$.
[Hint: Use 2°) and notice that $\left|\sum_{p \le N} C_p A^p\right| \le 2X^*$.]
The reader will notice that if $X$ is an arbitrary element in $H^1$ and $Y$ an arbitrary element in $\mathrm{BMO}_1$, we do not know the value taken on $X$ by the linear form associated with $Y$. This question will be taken up in Exercise (4.24) Chap. IV.
* (3.18) Exercise (Predictable stopping). A stopping time $T$ is said to be predictable if there exists an increasing sequence $(T_n)$ of stopping times such that
i) $\lim_n T_n = T$;
ii) $T_n < T$ for every $n$ on $\{T > 0\}$. (See Sect. 5 Chap. IV.)
If $X_t$, $t \in \mathbb{R}_+$, is a uniformly integrable martingale and if $S < T$ are two predictable stopping times, prove that
$$X_{S-} = E\left[X_{T-} \mid \mathcal{F}_{S-}\right] = E\left[X_T \mid \mathcal{F}_{S-}\right].$$
[Hint: Use Exercise (4.18) 3°) Chap. I and Corollary (2.4).]
Notes and Comments
Sect. 1. The material covered in this section as well as in the following two is
classical and goes back mainly to Doob (see Doob [1]). It has found its way in
books too numerous to be listed here. Let us merely mention that we have made
use of Dellacherie-Meyer [1] and Ikeda-Watanabe [2].
The law of the iterated logarithm is due, in varying contexts, to Khintchine
[1], Kolmogorov [1] and Hartman-Wintner [1]. We have borrowed our proof from
McKean [1], but the exponential inequality, sometimes called Bernstein's inequality, had been used previously in similar contexts. In connection with the law of
the iterated logarithm, let us mention the Kolmogorov and Dvoretzky-Erdos tests
which the reader will find in Ito-McKean [1] (see also Exercises (2.32) and (3.31)
Chap. III).
Most exercises are classical. The class L log L was studied by Doob (see Doob
[1]). For Exercise (1.20) see Walsh [6] and Orey-Taylor [1].
Sect. 2. The proof of Proposition (2.12) is taken from Neveu [2] and Exercise
(2.14) is from Dubins [1]. The result in Exercise (2.13) which is important in
some contexts, for instance in the study of Markov chains, comes from Doob [1];
it was one of the first applications of the convergence result for martingales. The
relationship between martingales and derivation has been much further studied;
the reader is referred to books centered on martingale theory.
Sect. 3. The optional stopping theorem and its applications to Brownian motion have also been well-known for a long time. Exercise (3.10) is taken from Ito-McKean [1] and Lepingle [2].
The series of exercises on $H^1$ and BMO in this and later sections are copied from Durrett [2] to which we refer for credits and for the history of the subject. The
notion of atom appears in the martingale context in Bernard-Maisonneuve [1]. The
example of Exercise (3.15) is from Dellacherie et al. [1].
Knight-Maisonneuve [1] show that the optional stopping property for every
u.i. martingale characterizes stopping times; a related result is in Williams [14]
(See Chaumont-Yor [1], Exercise 6.18).
Chapter III. Markov Processes
This chapter contains an introduction to Markov processes. Its relevance to our
discussion stems from the fact that Brownian motion, as well as many processes
which arise naturally in its study, are Markov processes; they even have the strong
Markov property which is used in many applications. This chapter is also the
occasion to introduce the Brownian filtrations which will appear frequently in the
sequel.
§1. Basic Definitions
Intuitively speaking, a process $X$ with state space $(E, \mathcal{E})$ is a Markov process if, to make a prediction at time $s$ on what is going to happen in the future, it is useless to know anything more about the whole past up to time $s$ than the present state $X_s$.
The minimal "past" of $X$ at time $s$ is the $\sigma$-algebra $\mathcal{F}_s^o = \sigma(X_u, u \le s)$. Let us think about the conditional probability
$$P\left[X_t \in A \mid \mathcal{F}_s^o\right]$$
where $A \in \mathcal{E}$, $s < t$. If $X$ is Markov in the intuitive sense described above, this should be a function of $X_s$, that is of the form $g(X_s)$ with $g$ an $\mathcal{E}$-measurable function taking its values in $[0,1]$. It would be better written $g_{s,t}$ to indicate its dependence on $s$ and $t$. On the other hand, this conditional expectation depends on $A$ and clearly, as a function of $A$, it ought to be a probability measure describing what chance there is of being in $A$ at time $t$, knowing the state of the process at time $s$. We thus come to the idea that the above conditional expectation may be written $g_{s,t}(X_s, A)$ where, for each $A$, $x \mapsto g_{s,t}(x, A)$ is measurable and for each $x$, $A \mapsto g_{s,t}(x, A)$ is a probability measure. We now give precise definitions.
(1.1) Definition. Let $(E, \mathcal{E})$ be a measurable space. A kernel $N$ on $E$ is a map from $E \times \mathcal{E}$ into $\mathbb{R}_+ \cup \{+\infty\}$ such that
i) for every $x \in E$, the map $A \mapsto N(x, A)$ is a positive measure on $\mathcal{E}$;
ii) for every $A \in \mathcal{E}$, the map $x \mapsto N(x, A)$ is $\mathcal{E}$-measurable.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
A kernel $\pi$ is called a transition probability if $\pi(x, E) = 1$ for every $x \in E$. In a Markovian context, transition probabilities are often denoted $P_t$ where $t$ ranges through a suitable index set.
If $f \in \mathcal{E}_+$ and $N$ is a kernel, we define a function $Nf$ on $E$ by
$$Nf(x) = \int_E N(x, dy) f(y).$$
It is easy to see that $Nf$ is also in $\mathcal{E}_+$. If $M$ and $N$ are two kernels, then
$$MN(x, A) = \int_E M(x, dy) N(y, A)$$
is again a kernel. We leave the proof as an exercise to the reader.
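On a finite state space these kernel operations reduce to matrix algebra, which makes them easy to experiment with. A minimal sketch (not part of the original text; the matrices below are arbitrary illustrative choices):

```python
import numpy as np

# On a finite state space E = {0, ..., d-1}, a kernel N is a nonnegative
# d x d matrix with N[x, y] = N(x, {y}); then Nf is a matrix-vector product
# and the composed kernel MN(x, A) = sum_y M(x, y) N(y, A) is the matrix
# product.  The matrices below are arbitrary illustrative choices.

rng = np.random.default_rng(0)
d = 4
M = rng.random((d, d))
N = rng.random((d, d))
f = rng.random(d)

# (MN)f = M(Nf): associativity of kernel composition (Fubini) is just
# associativity of matrix products here.
assert np.allclose((M @ N) @ f, M @ (N @ f))

# If M and N are transition probabilities (rows summing to 1), so is MN.
P = M / M.sum(axis=1, keepdims=True)
Q = N / N.sum(axis=1, keepdims=True)
assert np.allclose((P @ Q).sum(axis=1), 1.0)
```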
A transition probability $\pi$ provides the mechanism for a random motion in $E$ which may be described as follows. If, at time zero, one starts from $x$, the position $X_1$ at time 1 will be chosen at random according to the probability $\pi(x, \cdot)$, the position $X_2$ at time 2 according to $\pi(X_1, \cdot)$, and so on and so forth. The process thus obtained is called a homogeneous Markov chain and a Markov process is a continuous-time version of this scheme.
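The scheme just described can be sketched in a few lines of code (an illustration not in the original text; the state space $\{0,1,2\}$ and the stochastic matrix below are arbitrary choices):

```python
import numpy as np

# A sketch of the random motion just described: starting from x, each step
# draws the next position from pi(current, .).  Here E = {0, 1, 2} and pi is
# an arbitrary stochastic matrix chosen for illustration.

rng = np.random.default_rng(1)
pi = np.array([[0.5, 0.5, 0.0],
               [0.1, 0.6, 0.3],
               [0.0, 0.4, 0.6]])

def run_chain(x0, n_steps):
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(3, p=pi[path[-1]])))
    return path

path = run_chain(0, 10)
assert len(path) == 11 and all(0 <= x <= 2 for x in path)

# The law of X_n started at x is the x-th row of the matrix power pi^n,
# which is the discrete-time Chapman-Kolmogorov equation.
assert np.allclose(np.linalg.matrix_power(pi, 5).sum(axis=1), 1.0)
```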
Let us now suppose that we have a process $X$ for which, for any $s < t$, there is a transition probability $P_{s,t}$ such that
$$P\left[X_t \in A \mid \sigma(X_u, u \le s)\right] = P_{s,t}(X_s, A) \quad \text{a.s.}$$
Then for any $f \in \mathcal{E}_+$, we have $E\left[f(X_t) \mid \sigma(X_u, u \le s)\right] = P_{s,t}f(X_s)$ as is proved by the usual arguments of linearity and monotonicity. Let $s < t < v$ be three numbers; then
$$P\left[X_v \in A \mid \sigma(X_u, u \le s)\right] = E\left[P\left[X_v \in A \mid \sigma(X_u, u \le t)\right] \mid \sigma(X_u, u \le s)\right] = E\left[P_{t,v}(X_t, A) \mid \sigma(X_u, u \le s)\right] = \int P_{s,t}(X_s, dy) P_{t,v}(y, A).$$
But this conditional expectation should also be equal to $P_{s,v}(X_s, A)$. This leads us to the
(1.2) Definition. A transition function (abbreviated t.f.) on $(E, \mathcal{E})$ is a family $P_{s,t}$, $0 \le s < t$, of transition probabilities on $(E, \mathcal{E})$ such that for every three real numbers $s < t < v$, we have
$$\int P_{s,t}(x, dy) P_{t,v}(y, A) = P_{s,v}(x, A)$$
for every $x \in E$ and $A \in \mathcal{E}$. This relation is known as the Chapman-Kolmogorov equation. The t.f. is said to be homogeneous if $P_{s,t}$ depends on $s$ and $t$ only through the difference $t - s$. In that case, we write $P_t$ for $P_{0,t}$ and the Chapman-Kolmogorov equation reads
$$P_{t+s}(x, A) = \int P_s(x, dy) P_t(y, A)$$
for every $s, t \ge 0$; in other words, the family $\{P_t, t \ge 0\}$ forms a semi-group.
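The Chapman-Kolmogorov equation can be verified numerically for a concrete semi-group; the sketch below (an illustration not in the original text; the grid and parameter values are arbitrary choices) does so for the Brownian transition function with Gaussian density $g_t$:

```python
import numpy as np

# Numerical check of the Chapman-Kolmogorov equation for the Brownian
# transition function P_t(x, dy) = g_t(y - x) dy: integrating the composed
# densities over the intermediate point y reproduces g_{s+t}.

def g(t, z):
    return np.exp(-z ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

y = np.linspace(-30.0, 30.0, 200001)
s, t = 0.7, 1.3
x, z = 0.2, 1.1

composed = np.trapz(g(s, y - x) * g(t, z - y), y)
assert abs(composed - g(s + t, z - x)) < 1e-6
```

Analytically this is just the fact that the convolution of centered Gaussians of variances $s$ and $t$ is the centered Gaussian of variance $s + t$.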
The reader will find in the exercises several important examples of transition functions. If we refer to the heuristic description of Markov processes given above, we see that in the case of homogeneous t.f.'s, the random mechanism by which the process evolves stays unchanged as time goes by, whereas in the non-homogeneous case, the mechanism itself evolves.
We are now ready for our basic definition.
(1.3) Definition. Let $(\Omega, \mathcal{F}, (\mathcal{F}_t), Q)$ be a filtered probability space; an adapted process $X$ is a Markov process with respect to $(\mathcal{F}_t)$, with transition function $P_{s,t}$, if for any $f \in \mathcal{E}_+$ and any pair $(s, t)$ with $s < t$,
$$E_Q\left[f(X_t) \mid \mathcal{F}_s\right] = P_{s,t}f(X_s) \quad Q\text{-a.s.}$$
The probability measure $X_0(Q)$ is called the initial distribution of $X$. The process is said to be homogeneous if the t.f. is homogeneous, in which case the above equality reads
$$E_Q\left[f(X_t) \mid \mathcal{F}_s\right] = P_{t-s}f(X_s) \quad Q\text{-a.s.}$$
Let us remark that, if $X$ is Markov with respect to $(\mathcal{F}_t)$, it is Markov with respect to the natural filtration $(\mathcal{F}_t^o) = (\sigma(X_u, u \le t))$. If we say that $X$ is Markov without specifying the filtration, it will mean that we use $(\mathcal{F}_t^o)$. Let us also stress the importance of $Q$ in this definition; if we alter $Q$, there is no reason why $X$ should still be a Markov process. By Exercise (1.16) Chap. I, the Brownian motion is a Markov process, which should come as no surprise because of the independence of its increments, but this will be shown as a particular case of a result in Sect. 2.
Our next task is to establish the existence of Markov processes. We will need the following
(1.4) Proposition. A process $X$ is a Markov process with respect to $(\mathcal{F}_t^o) = (\sigma(X_u, u \le t))$ with t.f. $P_{s,t}$ and initial measure $\nu$ if and only if for any $0 \le t_0 < t_1 < \cdots < t_k$ and $f_i \in \mathcal{E}_+$,
$$E\left[\prod_{i=0}^k f_i(X_{t_i})\right] = \int_E \nu(dx_0) f_0(x_0) \int_E P_{t_0,t_1}(x_0, dx_1) f_1(x_1) \cdots \int_E P_{t_{k-1},t_k}(x_{k-1}, dx_k) f_k(x_k).$$
Proof. Let us first suppose that $X$ is Markov. We can write
$$E\left[\prod_{i=0}^k f_i(X_{t_i})\right] = E\left[\prod_{i=0}^{k-1} f_i(X_{t_i})\, E\left[f_k(X_{t_k}) \mid \mathcal{F}_{t_{k-1}}^o\right]\right] = E\left[\prod_{i=0}^{k-2} f_i(X_{t_i}) \left(f_{k-1}\, P_{t_{k-1},t_k}f_k\right)(X_{t_{k-1}})\right];$$
this expression is the same as the first one, but with one function less and $f_{k-1}$ replaced by $f_{k-1} P_{t_{k-1},t_k}f_k$; proceeding inductively, we get the formula of the statement.
Conversely, to prove that $X$ is Markov, it is enough, by the monotone class theorem, to show that for times $t_1 < t_2 < \cdots < t_k \le t < v$ and functions $f_1, \ldots, f_k, g$,
$$E\left[\prod_{i=1}^k f_i(X_{t_i})\, g(X_v)\right] = E\left[\prod_{i=1}^k f_i(X_{t_i})\, P_{t,v}g(X_t)\right];$$
but this equality follows readily by applying the equality of the statement to both sides.
Remark. The forbidding-looking formula in the statement is in fact quite intuitive. It may be written more loosely as
$$Q\left[X_{t_0} \in dx_0, X_{t_1} \in dx_1, \ldots, X_{t_k} \in dx_k\right] = \nu(dx_0)\, P_{t_0,t_1}(x_0, dx_1) \cdots P_{t_{k-1},t_k}(x_{k-1}, dx_k)$$
and means that the initial position $x_0$ of the process is chosen according to the probability measure $\nu$, then the position $x_1$ at time $t_1$ according to $P_{t_0,t_1}(x_0, \cdot)$ and so on and so forth; this is the continuous version of the scheme described after Definition (1.1).
We now construct a canonical version of a Markov process with a given t.f.
Indeed, by the above proposition, if we know the t.f. of a Markov process, we
know the family of its finite-dimensional distributions to which we can apply the
Kolmogorov extension theorem.
From now on, we suppose that $(E, \mathcal{E})$ is a Polish space endowed with the $\sigma$-field of Borel subsets. This hypothesis is in fact only used in Theorem (1.5) below and the rest of this section can be done without using it. We set $\Omega = E^{\mathbb{R}_+}$, $\mathcal{F} = \mathcal{E}^{\mathbb{R}_+}$ and $\mathcal{F}_t^o = \sigma(X_u, u \le t)$ where $X$ is the coordinate process.
(1.5) Theorem. Given a transition function $P_{s,t}$ on $(E, \mathcal{E})$, for any probability measure $\nu$ on $(E, \mathcal{E})$, there is a unique probability measure $P_\nu$ on $(\Omega, \mathcal{F})$ such that $X$ is Markov with respect to $(\mathcal{F}_t^o)$ with transition function $P_{s,t}$ and initial measure $\nu$.
Proof. We define a projective family of measures by setting
$$P_\nu^{t_0,\ldots,t_n}(A_0 \times A_1 \times \cdots \times A_n) = \int_{A_0} \nu(dx_0) \int_{A_1} P_{t_0,t_1}(x_0, dx_1) \int_{A_2} P_{t_1,t_2}(x_1, dx_2) \cdots \int_{A_n} P_{t_{n-1},t_n}(x_{n-1}, dx_n)$$
and we then apply the Kolmogorov extension theorem. By Proposition (1.4), the coordinate process $X$ is Markov for the resulting probability measure $P_\nu$. $\square$
From now on, unless otherwise stated, we will consider only homogeneous transition functions and processes. In this case, we have
$$P_\nu\left[X_0 \in A_0, X_{t_1} \in A_1, \ldots, X_{t_n} \in A_n\right] = \int_{A_0} \nu(dx_0) \int_{A_1} P_{t_1}(x_0, dx_1) \cdots \int_{A_n} P_{t_n - t_{n-1}}(x_{n-1}, dx_n). \qquad \text{(eq. (1.1))}$$
Thus, for each $x$, we have a probability measure $P_{\varepsilon_x}$ which we will denote simply by $P_x$. If $Z$ is an $\mathcal{F}$-measurable and positive r.v., its mathematical expectation with respect to $P_x$ (resp. $P_\nu$) will be denoted by $E_x[Z]$ (resp. $E_\nu[Z]$). If, in particular, $Z$ is the indicator function of a rectangle all components of which are equal to $E$ with the exception of the component over $t$, that is to say, $Z = 1_{\{X_t \in A\}}$ for some $A \in \mathcal{E}$, we get
$$P_x\left[X_t \in A\right] = P_t(x, A).$$
This reads: the probability that the process started at $x$ is in $A$ at time $t$ is given by the value $P_t(x, A)$ of the t.f. It proves in particular that $x \mapsto P_x[X_t \in A]$ is measurable. More generally, we have the
(1.6) Proposition. If $Z$ is $\mathcal{F}$-measurable and positive or bounded, the map $x \mapsto E_x[Z]$ is $\mathcal{E}$-measurable and
$$E_\nu[Z] = \int \nu(dx)\, E_x[Z].$$
Proof. The collection of sets $\Gamma$ in $\mathcal{F}$ such that the proposition is true for $Z = 1_\Gamma$ is a monotone class. On the other hand, if $\Gamma = \{X_0 \in A_0, X_{t_1} \in A_1, \ldots, X_{t_n} \in A_n\}$, then $P_x[\Gamma]$ is given by eq. (1.1) with $\nu = \varepsilon_x$ and it is not hard to prove inductively that this is an $\mathcal{E}$-measurable function of $x$; by the monotone class theorem, the proposition is true for all sets $\Gamma \in \mathcal{F}$. It is then true for simple functions and, by taking increasing limits, for any $Z \in \mathcal{F}_+$. $\square$
Remark. In the case of $\mathrm{BM}^d$, the family of probability measures $P_\nu$ was already introduced in Exercise (3.14) Chap. I.
In accordance with Definition (4.12) in Chap. I, we shall say that a property of the paths $\omega$ holds almost surely if the set where it holds has $P_\nu$-probability 1 for every $\nu$; clearly, it is actually enough that it has $P_x$-probability 1 for every $x$ in $E$.
Using the translation operators of Sect. 3 Chap. I, we now give a handy form
of the Markov property.
(1.7) Proposition (Markov property). If $Z$ is $\mathcal{F}$-measurable and positive (or bounded), for every $t > 0$ and starting measure $\nu$,
$$E_\nu\left[Z \circ \theta_t \mid \mathcal{F}_t^o\right] = E_{X_t}[Z] \quad P_\nu\text{-a.s.}$$
The right-hand side of this formula is the r.v. obtained by composing the two measurable maps $\omega \mapsto X_t(\omega)$ and $x \mapsto E_x[Z]$, and the formula says that this r.v. is within the equivalence class of the left-hand side. The reader will notice that, by the very definition of $\theta_t$, the r.v. $Z \circ \theta_t$ depends only on the future after time $t$; its conditional expectation with respect to the past is a function of the present state $X_t$ as it should be. If, in particular, we take $Z = 1_{\{X_s \in A\}}$, the above formula reads
$$P_\nu\left[X_{t+s} \in A \mid \mathcal{F}_t^o\right] = P_{X_t}\left[X_s \in A\right] = P_s(X_t, A)$$
which is the formula of Definition (1.3).
Moreover, it is important to observe that the Markov property as stated in
Proposition (1.7) is a property of the family of probability measures Px , x E E.
Proof of Proposition (1.7). We must prove that for any $\mathcal{F}_t^o$-measurable and positive $Y$,
$$E_\nu\left[Z \circ \theta_t \cdot Y\right] = E_\nu\left[E_{X_t}[Z] \cdot Y\right].$$
By the usual extension arguments, it is enough to prove this equality when $Y = \prod_{i=1}^n f_i(X_{t_i})$ with $f_i \in \mathcal{E}_+$ and $t_i \le t$, and $Z = \prod_{j=1}^m g_j(X_{s_j})$ where $g_j \in \mathcal{E}_+$; but in that case, the equality follows readily from Proposition (1.4). $\square$
We now remove a restriction on $P_t$. It was assumed so far that $P_t(x, E) = 1$, but there are interesting cases where $P_t(x, E) < 1$ for some $x$'s and $t$'s. We will say that $P_t$ is Markovian in the former case, submarkovian in the general case, i.e. when $P_t(x, E)$ may be less than one. If we think of a Markov process as describing the random motion of a particle, the submarkovian case corresponds to the possibility of the particle disappearing or dying in a finite time.
There is a simple trick which allows to turn the submarkovian case into the Markovian case studied so far. We adjoin to the state space $E$ a new point $\Delta$ called the cemetery and we set $E_\Delta = E \cup \{\Delta\}$ and $\mathcal{E}_\Delta = \sigma(\mathcal{E}, \{\Delta\})$. We now define a new t.f. $\tilde{P}$ on $(E_\Delta, \mathcal{E}_\Delta)$ by
$$\tilde{P}_t(x, A) = P_t(x, A) \quad \text{if } A \subset E, \qquad \tilde{P}_t(x, \{\Delta\}) = 1 - P_t(x, E), \qquad \tilde{P}_t(\Delta, \{\Delta\}) = 1.$$
In the sequel, we will not distinguish in our notation between $P_t$ and $\tilde{P}_t$ and in the cases of interest for us $\Delta$ will be absorbing, namely, the process started at $\Delta$ will stay in $\Delta$.
By convention, all the functions on $E$ will be extended to $E_\Delta$ by setting $f(\Delta) = 0$. Accordingly, the Markov property must then be stated
$$E_\nu\left[Z \circ \theta_t \mid \mathcal{F}_t^o\right] = E_{X_t}[Z] \quad P_\nu\text{-a.s. on the set } \{X_t \ne \Delta\},$$
because the convention implies that the right-hand side vanishes on $\{X_t = \Delta\}$ and there is no reason for the left-hand side to do so.
Finally, as in Sect. 1 of Chap. I, we must observe that we cannot go much further with the Markov processes thus constructed. Neither the paths of $X$ nor the filtration $(\mathcal{F}_t^o)$ have good enough properties. Therefore, we will devote the following section to a special class of Markov processes for which there exist good versions.
#
(1.8) Exercise. Prove that the following families of kernels are homogeneous t.f.'s:
(i) (Uniform translation to the right at speed $v$) $E = \mathbb{R}$, $\mathcal{E} = \mathcal{B}(\mathbb{R})$; $P_t(x, \cdot) = \varepsilon_{x+vt}$.
(ii) (Brownian motion) $E = \mathbb{R}$, $\mathcal{E} = \mathcal{B}(\mathbb{R})$; $P_t(x, \cdot)$ is the probability measure with density
$$g_t(y - x) = (2\pi t)^{-1/2} \exp\left(-(y - x)^2/2t\right).$$
(iii) (Poisson process) $E = \mathbb{R}$, $\mathcal{E} = \mathcal{B}(\mathbb{R})$;
$$P_t(x, dy) = \sum_{n=0}^\infty \left(e^{-t} t^n/n!\right) \varepsilon_{x+n}(dy).$$
This example can be generalized as follows: let $\pi$ be a transition probability on a space $(E, \mathcal{E})$; prove that one can define inductively a transition probability $\pi^n$ by
$$\pi^0(x, \cdot) = \varepsilon_x, \qquad \pi^{n+1}(x, A) = \int_E \pi^n(x, dy)\,\pi(y, A).$$
Then
$$P_t(x, dy) = \sum_{n=0}^\infty \left(e^{-t} t^n/n!\right) \pi^n(x, dy)$$
is a transition function. Describe the corresponding motion.
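For a finite state space the generalized Poisson series can be summed directly and the semi-group property verified numerically; a sketch (not part of the original text; the stochastic matrix $\pi$ is an arbitrary illustrative choice):

```python
import numpy as np
from math import factorial

# Sketch of the generalized Poisson t.f. on a two-point state space:
# P_t = sum_n e^{-t} t^n / n! pi^n, summed by truncating the series.
# The stochastic matrix pi is an arbitrary choice for illustration.

pi = np.array([[0.2, 0.8],
               [0.5, 0.5]])

def P(t, n_terms=80):
    out = np.zeros((2, 2))
    pi_n = np.eye(2)                       # pi^0 is the identity kernel
    for n in range(n_terms):
        out += np.exp(-t) * t ** n / factorial(n) * pi_n
        pi_n = pi_n @ pi
    return out

s, t = 0.4, 1.1
assert np.allclose(P(s) @ P(t), P(s + t))   # Chapman-Kolmogorov
assert np.allclose(P(t).sum(axis=1), 1.0)   # each P_t is Markovian
```

The corresponding motion waits an exponential time at each state and then jumps according to $\pi$, i.e. it performs a Poisson number of $\pi$-steps by time $t$.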
(1.9) Exercise. Show that the following two families of kernels are Markovian transition functions on $(\mathbb{R}_+, \mathcal{B}(\mathbb{R}_+))$:
(i) $P_t f(x) = \exp(-t/x) f(x) + \int_x^\infty t y^{-2} \exp(-t/y) f(y)\,dy$;
(ii) $Q_t f(x) = \left(x/(x+t)\right) f(x+t) + \int_x^\infty t (t+y)^{-2} f(t+y)\,dy$.
#
(1.10) Exercise (Space-time Markov processes). If $X$ is an inhomogeneous Markov process, prove that the process $(t, X_t)$ with state space $\mathbb{R}_+ \times E$ is a homogeneous Markov process called the "space-time" process associated with $X$. Write down its t.f. For example, the heat process (see Sect. 1 Chap. I) is the space-time process associated with BM.
#
(1.11) Exercise. Let $X$ be a Markov process with t.f. $(P_t)$ and $f$ a bounded Borel function. Prove that $(P_{t-s}f(X_s), s \le t)$ is a $P_x$-martingale for any $x$.
(1.12) Exercise. Let $B$ be the linear BM and set $X_t = \int_0^t B_s\,ds$. Prove that $X$ is not a Markov process but that the pair $(B, X)$ is a Markov process with state space $\mathbb{R}^2$. This exercise is taken up in greater generality in Sect. 1 of Chap. X.
#
(1.13) Exercise (Gaussian Markov processes). 1°) Prove that a centered Gaussian process $X_t$, $t \ge 0$, is a Markov process if and only if its covariance satisfies the equality
$$\Gamma(s, u)\Gamma(t, t) = \Gamma(s, t)\Gamma(t, u)$$
for every $s < t < u$.
If $\Gamma(t, t) = 0$, the processes $(X_s, s \le t)$ and $(X_s, s \ge t)$ are independent. The process $B_t - tB_1$, $t \ge 0$ (the restriction of which to $[0,1]$ is a Brownian Bridge) is an example of such a process for which $\Gamma(t, t)$ vanishes at $t = 1$. The process $Y$ of Exercise (1.14) Chap. I is another example of a centered Gaussian Markov process.
2°) If $\Gamma$ is continuous on $\mathbb{R}_+^2$ and $> 0$, prove that $\Gamma(s, t) = a(s)a(t)\rho(\inf(s, t))$ where $a$ is continuous and does not vanish and $\rho$ is continuous, strictly positive and non-decreasing. Prove that $(X_t/a(t), t \ge 0)$ is a Gaussian martingale.
3°) If $a$ and $\rho$ are as above, and $B$ is a BM defined on the interval $[\rho(0), \rho(\infty)[$, the process $Y_t = a(t)B_{\rho(t)}$ is a Gaussian process with the covariance $\Gamma$ of 2°). Prove that the Gaussian space generated by $Y$ is isomorphic to the space $L^2(\mathbb{R}_+, d\rho)$, the r.v. $Y_t$ corresponding to the function $a(t)1_{[0,t]}$.
4°) Prove that the only stationary Gaussian Markov processes are the stationary OU processes of parameter $\beta$ and size $c$ (see Sect. 3 Chap. I). Prove that their transition functions are given by the densities
$$p_t(x, y) = \left(2\pi c\left(1 - e^{-2\beta t}\right)\right)^{-1/2} \exp\left(-\left(y - e^{-\beta t}x\right)^2 / 2c\left(1 - e^{-2\beta t}\right)\right).$$
Give also the initial measure $m$ and check that it is invariant (Sect. 3 Chap. X) as it should be since the process is stationary. Observe also that $\lim_{t\to\infty} P_t(x, A) = m(A)$.
5°) The OU processes (without the qualifying "stationary") with parameter $\beta$ and size $c$ are the Markov processes with the above transition functions. Which condition must the initial measure $\nu$ satisfy in order that $X$ still be a Gaussian process under $P_\nu$? Compute its covariance in that case.
6°) If $u$ and $v$ are two continuous functions which do not vanish, then
$$\Gamma(s, t) = u(\inf(s, t))\,v(\sup(s, t))$$
is a covariance if and only if $u/v$ is strictly positive and non-decreasing. This question is independent of the last three.
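The invariance asked for in 4°) can be checked numerically: with $m = N(0, c)$, integrating $m(dx)\,p_t(x, y)$ over $x$ should return the density of $m$ at $y$. A sketch (not part of the original text; the values of $\beta$, $c$, $t$, the evaluation point and the grid are arbitrary choices):

```python
import numpy as np

# Numerical check that the centered Gaussian measure m = N(0, c) is
# invariant for the OU transition densities p_t of 4 degrees:
# int m(x) p_t(x, y0) dx = m(y0).  Parameters are arbitrary choices.

beta, c = 0.8, 1.5

def p(t, x, y):
    v = c * (1.0 - np.exp(-2.0 * beta * t))
    return np.exp(-(y - np.exp(-beta * t) * x) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def m_density(y):
    return np.exp(-y ** 2 / (2.0 * c)) / np.sqrt(2.0 * np.pi * c)

x = np.linspace(-30.0, 30.0, 200001)
t, y0 = 0.9, 0.7
integral = np.trapz(m_density(x) * p(t, x, y0), x)
assert abs(integral - m_density(y0)) < 1e-6
```

The calculation behind the assertion: if $X_0 \sim N(0, c)$, then $e^{-\beta t}X_0 + N(0, c(1 - e^{-2\beta t}))$ has variance $e^{-2\beta t}c + c(1 - e^{-2\beta t}) = c$ again.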
#
(1.14) Exercise. 1°) If $B$ is the linear BM, prove that $|B|$ is, for any probability measure $P_\nu$, a homogeneous Markov process on $[0, \infty[$ with transition function given by the density
$$\frac{1}{\sqrt{2\pi t}}\left[\exp\left(-\frac{1}{2t}(y - x)^2\right) + \exp\left(-\frac{1}{2t}(y + x)^2\right)\right].$$
This is the BM reflected at 0. See Exercise (1.17) for a more general result.
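The reflected density can be cross-checked numerically: since $|B|$ started at $x \ge 0$ has at time $t$ the law of $|x + B_t|$, integrating the density above over $[0, a]$ must equal $P[|x + \sqrt{t}Z| \le a]$ for a standard normal $Z$. A sketch (not part of the original text; the parameter values are arbitrary choices):

```python
import numpy as np
from math import erf, sqrt

# Check of the reflected-BM density q_t(x, y): its integral over [0, a]
# equals P[|x + sqrt(t) Z| <= a] for Z standard normal.

def q(t, x, y):
    return (np.exp(-(y - x) ** 2 / (2 * t))
            + np.exp(-(y + x) ** 2 / (2 * t))) / np.sqrt(2 * np.pi * t)

def Phi(u):                                # standard normal c.d.f.
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

t, x, a = 0.9, 0.4, 1.2
y = np.linspace(0.0, a, 200001)
lhs = np.trapz(q(t, x, y), y)
rhs = Phi((a - x) / sqrt(t)) - Phi((-a - x) / sqrt(t))
assert abs(lhs - rhs) < 1e-8
```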
2°) More generally, prove that, for every integer $d$, the modulus of $\mathrm{BM}^d$ is a Markov process. (This question is solved in Sect. 3 Chap. VI.)
* 3°) Define the linear BM reflected at 0 and 1; prove that it is a homogeneous Markov process and compute its transition function.
[Hint: The process may be defined as $X_t = |B_t - 2n|$ on $\{|B_t - 2n| \le 1\}$.]
The questions 2°) and 3°) are independent.
#
(1.15) Exercise (Killed Brownian motion). 1°) Prove that the densities
$$\frac{1}{\sqrt{2\pi t}}\left[\exp\left(-\frac{1}{2t}(y - x)^2\right) - \exp\left(-\frac{1}{2t}(y + x)^2\right)\right], \qquad x > 0,\ y > 0,$$
define a submarkovian transition semi-group $Q_t$ on $]0, \infty[$. This is the transition function of the BM killed when it reaches 0, as is observed in Exercise (3.29).
2°) Prove that the identity function is invariant under $Q_t$, in other words, $\int_0^\infty Q_t(x, dy)\,y = x$. As a result, the operators $H_t$ defined by
$$H_t f(x) = \frac{1}{x}\int_0^\infty Q_t(x, dy)\,y f(y)$$
also form a semi-group. It may be extended to $[0, \infty[$ by setting
$$H_t(0, dy) = \left(2/\pi t^3\right)^{1/2} y^2 \exp\left(-y^2/2t\right) dy.$$
This semi-group is that of the Bessel process of dimension 3, which will be studied in Chap. VI and will play an important role in the last parts of this book.
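The invariance of the identity function in 2°) is easy to verify numerically; a sketch (not part of the original text; the values of $t$, $x$ and the integration grid are arbitrary choices):

```python
import numpy as np

# Numerical check that the identity function is invariant for the killed
# semi-group of 1 degree: the integral over ]0, oo[ of the killed density
# times y equals x.  Parameters and the truncation of the integral to
# [0, 40] are arbitrary illustrative choices.

def q_killed(t, x, y):
    return (np.exp(-(y - x) ** 2 / (2 * t))
            - np.exp(-(y + x) ** 2 / (2 * t))) / np.sqrt(2 * np.pi * t)

t, x = 0.8, 1.3
y = np.linspace(0.0, 40.0, 400001)
assert abs(np.trapz(q_killed(t, x, y) * y, y) - x) < 1e-6
```

Analytically this is the optional stopping identity $E_x[B_{t \wedge T_0}] = x$ for BM stopped at 0, which is what makes the $h$-transform by the identity function produce the Bessel(3) semi-group.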
#
(1.16) Exercise (Transition function of the skew BM). Let $0 \le \alpha \le 1$ and $g_t$ be the transition density (i.e. the density of the t.f. with respect to the Lebesgue measure) of BM. Prove that the following function is a transition density:
$$p_t^\alpha(x, y) = 1_{(x>0)}\left[\left(g_t(y - x) + (2\alpha - 1)g_t(y + x)\right)1_{(y>0)} + 2(1 - \alpha)g_t(y - x)1_{(y<0)}\right] + 1_{(x<0)}\left[\left(g_t(y - x) + (1 - 2\alpha)g_t(y + x)\right)1_{(y<0)} + 2\alpha\, g_t(y - x)1_{(y>0)}\right].$$
What do we get in the special cases $\alpha = 0$, $\alpha = 1$ and $\alpha = 1/2$?
#
(1.17) Exercise (Images of Markov processes). 1°) Let $X$ be a Markov process with t.f. $(P_t)$ and $\varphi$ a Borel function from $(E, \mathcal{E})$ into a space $(E', \mathcal{E}')$ such that $\varphi(A) \in \mathcal{E}'$ for every $A \in \mathcal{E}$. If moreover, for every $t$ and every $A' \in \mathcal{E}'$, the map $x \mapsto P_t(x, \varphi^{-1}(A'))$ depends only on $\varphi(x)$, then the process $X'_t = \varphi(X_t)$ is under $P_x$, $x \in E$, a Markov process with state space $(E', \mathcal{E}')$. See Exercise (1.14) for the particular case of BM reflected at 0.
2°) Let $X = \mathrm{BM}^d$ and $\varphi$ be a rotation in $\mathbb{R}^d$ with center $x$. For $\omega \in \Omega$, define $\tilde{\varphi}(\omega)$ by $X_t(\tilde{\varphi}(\omega)) = \varphi(X_t(\omega))$. Prove that $\tilde{\varphi}$ is measurable and for any $\Gamma \in \mathcal{F}$,
$$P_x\left[\tilde{\varphi}^{-1}(\Gamma)\right] = P_x[\Gamma].$$
3°) Set $T_r = \inf\{t > 0 : |X_t - X_0| \ge r\}$ and prove that $T_r$ and $X_{T_r}$ are independent. Moreover, under $P_x$, the law of $X_{T_r}$ is the uniform distribution on the sphere centered at $x$ of radius $r$.
[Hint: Use the fact that the uniform distribution on the sphere is the only probability distribution on the sphere which is invariant by all rotations.]
These questions will be taken up in Sect. 3 Chap. VIII.
§2. Feller Processes
We recall that all the t.f.'s and processes we consider are time-homogeneous. Let $E$ be a LCCB space and $C_0(E)$ be the space of continuous functions on $E$ which vanish at infinity. We will write simply $C_0$ when there is no risk of mistake. We recall that a positive operator maps positive functions into positive functions.
(2.1) Definition. A Feller semi-group on $C_0(E)$ is a family $T_t$, $t \ge 0$, of positive linear operators on $C_0(E)$ such that
i) $T_0 = \mathrm{Id}$ and $\|T_t\| \le 1$ for every $t$;
ii) $T_{t+s} = T_t \circ T_s$ for any pair $s, t \ge 0$;
iii) $\lim_{t \downarrow 0} \|T_t f - f\| = 0$ for every $f \in C_0(E)$.
The relevance to our discussion of this definition is given by the
(2.2) Proposition. With each Feller semi-group on $E$, one can associate a unique homogeneous transition function $P_t$, $t \ge 0$, on $(E, \mathcal{E})$ such that
$$T_t f(x) = P_t f(x)$$
for every $f \in C_0$ and every $x$ in $E$.
Proof. For any $x \in E$, the map $f \mapsto T_t f(x)$ is a positive linear form on $C_0$; by Riesz's theorem, there exists a measure $P_t(x, \cdot)$ on $\mathcal{E}$ such that
$$T_t f(x) = \int P_t(x, dy) f(y)$$
for every $f \in C_0$. The map $x \mapsto \int P_t(x, dy) f(y)$ is in $C_0$, hence is Borel, and, by the monotone class theorem, it follows that $x \mapsto P_t(x, A)$ is Borel for any $A \in \mathcal{E}$. Thus we have defined transition probabilities $P_t$. That they form a t.f. follows from the semi-group property of $T_t$ (Property ii)) and another application of the monotone class theorem. $\square$
(2.3) Definition. A t.f. associated to a Feller semi-group is called a Feller transition function.
With the possible exception of the generalized Poisson process, all the t.f.'s of Exercise (1.8) are Feller t.f.'s. To check this, it is easier to have at one's disposal the following proposition which shows that the continuity property iii) in Definition (2.1) is actually equivalent to a seemingly weaker condition.
(2.4) Proposition. A t.f. is Feller if and only if
i) $P_t C_0 \subset C_0$ for each $t$;
ii) $\forall f \in C_0$, $\forall x \in E$, $\lim_{t \downarrow 0} P_t f(x) = f(x)$.
Proof. Of course, only the sufficiency is to be shown. If $f \in C_0$, $P_t f$ is also in $C_0$ by i) and so $\lim_{s \downarrow 0} P_{t+s} f(x) = P_t f(x)$ for every $x$ by ii). The function $(t, x) \mapsto P_t f(x)$ is thus right-continuous in $t$ and therefore measurable on $\mathbb{R}_+ \times E$. Therefore, the function
$$x \mapsto U_p f(x) = \int_0^\infty e^{-pt} P_t f(x)\,dt, \qquad p > 0,$$
is measurable and by ii),
$$\lim_{p \to \infty} p U_p f(x) = f(x).$$
Moreover, $U_p f \in C_0$, since one easily checks that whenever $x_n \to x$ (resp. the point at infinity whenever $E$ is not compact), then $U_p f(x_n) \to U_p f(x)$ (resp. 0). The map $f \mapsto U_p f$ is called the resolvent of order $p$ of the semi-group $P_t$ and satisfies the resolvent equation
$$U_p - U_q = (q - p) U_p U_q,$$
as is easily checked. As a result, the image $D = U_p(C_0)$ of $U_p$ does not depend on $p > 0$. Finally $\|p U_p f\| \le \|f\|$.
We then observe that $D$ is dense in $C_0$; indeed if $\mu$ is a bounded measure vanishing on $D$, then for any $f \in C_0$, by the dominated convergence theorem,
$$\int f\,d\mu = \lim_{p \to \infty} \int p U_p f\,d\mu = 0$$
so that $\mu = 0$. Now, an application of Fubini's theorem shows that
$$P_t U_p f(x) = e^{pt} \int_t^\infty e^{-ps} P_s f(x)\,ds,$$
hence
$$\|P_t U_p f - U_p f\| \le \left(e^{pt} - 1\right)\|U_p f\| + t\|f\|.$$
It follows that $\lim_{t \downarrow 0} \|P_t f - f\| = 0$ for $f \in D$ and the proof is completed by means of a routine density argument. $\square$
By Fubini's theorem, it is easily seen that the resolvent $U_p$ is given by a kernel which will also be denoted by $U_p$, that is, for $f \in C_0$,
$$U_p f(x) = \int U_p(x, dy) f(y).$$
For every $x \in E$, $U_p(x, E) \le 1/p$ and these kernels satisfy the resolvent equation
$$U_p(x, A) - U_q(x, A) = (q - p) \int U_p(x, dy) U_q(y, A) = (q - p) \int U_q(x, dy) U_p(y, A).$$
One can also check that for $f \in C_0$, $\lim_{p \to \infty} \|p U_p f - f\| = 0$. Indeed
$$\|p U_p f - f\| \le \sup_x \int_0^\infty p e^{-pt} \left|P_t f(x) - f(x)\right| dt \le \int_0^\infty e^{-s} \left\|P_{s/p} f - f\right\| ds,$$
which converges to 0 by the property iii) of Definition (2.1) and Lebesgue's theorem. The resolvent is actually the Laplace transform of the semi-group and therefore properties of the semi-group at 0 translate to properties of the resolvent at infinity.
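These resolvent identities are concrete enough to check numerically. On a finite state space, a conservative semi-group has the form $P_t = e^{tQ}$ for a generator matrix $Q$ with rows summing to 0, and the Laplace transform gives $U_p = (pI - Q)^{-1}$. A sketch (not part of the original text; the matrix $Q$ and the test function $f$ are arbitrary illustrative choices):

```python
import numpy as np

# For P_t = exp(tQ) on a finite state space, U_p = int_0^oo e^{-pt} P_t dt
# = (pI - Q)^{-1}.  We check the resolvent equation, the mass bound
# U_p(x, E) = 1/p (equality in the conservative case), and p U_p f -> f.

Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.2, 0.8, -1.0]])     # rows sum to 0
I = np.eye(3)

def U(p):
    return np.linalg.inv(p * I - Q)

p, q = 0.7, 2.3
assert np.allclose(U(p) - U(q), (q - p) * U(p) @ U(q))     # resolvent equation
assert np.allclose(U(p).sum(axis=1), 1.0 / p)              # U_p(x, E) = 1/p

f = np.array([1.0, -2.0, 0.5])
assert np.linalg.norm(1e6 * U(1e6) @ f - f) < 1e-4         # p U_p f -> f
```

The last assertion illustrates the remark above: behaviour of $U_p$ as $p \to \infty$ mirrors the behaviour of $P_t$ as $t \downarrow 0$, since $pU_p f - f \approx Qf/p$ for large $p$.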
Basic examples of Feller semi-groups will be given later on in this section and
in the exercises.
(2.5) Definition. A Markov process having a Feller transition function is called a
Feller process.
From now on, we work with the canonical version X of a Feller process for
which we will show the existence of a good modification.
(2.6) Proposition. For any $\alpha$ and any $f \in C_0^+$, the process $e^{-\alpha t} U_\alpha f(X_t)$ is a supermartingale for the filtration $(\mathcal{F}_t^o)$ and any probability measure $P_\nu$.
Proof. By the Markov property of Proposition (1.7), we have for $s < t$
$$E_\nu\left[e^{-\alpha t} U_\alpha f(X_t) \mid \mathcal{F}_s^o\right] = e^{-\alpha t} E_\nu\left[U_\alpha f(X_{t-s} \circ \theta_s) \mid \mathcal{F}_s^o\right] = e^{-\alpha t} P_{t-s} U_\alpha f(X_s).$$
But it is easily seen that $e^{-\alpha(t-s)} P_{t-s} U_\alpha f \le U_\alpha f$ everywhere so that
$$E_\nu\left[e^{-\alpha t} U_\alpha f(X_t) \mid \mathcal{F}_s^o\right] \le e^{-\alpha s} U_\alpha f(X_s),$$
which is our claim. $\square$
We now come to one of the main results of this section. From now on, we always assume that $E_\Delta$ is the one-point compactification of $E$ if $E$ is not compact, the point $\Delta$ being the point at infinity, and that $\Delta$ is an isolated point in $E_\Delta$ if $E$ is compact. We recall (Sect. 3 Chap. I) that an $E_\Delta$-valued cadlag function is a function on $\mathbb{R}_+$ which is right-continuous and has left limits on $]0, \infty[$ with respect to this topology on $E_\Delta$.
(2.7) Theorem. The process $X$ admits a cadlag modification.
Since we do not deal with only one probability measure as in Sect. 1 of Chap. I but with the whole family $P_\nu$, it is important to stress the fact that the above statement means that there is a cadlag process $\tilde{X}$ on $(\Omega, \mathcal{F})$ such that $\tilde{X}_t = X_t$ $P_\nu$-a.s. for each $t$ and every probability measure $P_\nu$.
To prove this result, we will need the
(2.8) Lemma. Let $X$ and $Y$ be two random variables defined on the same space $(\Omega, \mathcal{F}, P)$ taking their values in a LCCB space $E$. Then, $X = Y$ a.s. if and only if
$$E[f(X)g(Y)] = E[f(X)g(X)]$$
for every pair $(f, g)$ of bounded continuous functions on $E$.
Proof. Only the sufficiency needs to be proved. By the monotone class theorem, it is easily seen that
$$E[f(X, Y)] = E[f(X, X)]$$
for every positive Borel function $f$ on $E \times E$. But, since $E$ is metrizable, the indicator function of the set $\{(x, y) : x \ne y\}$ is such a function. As a result, $X = Y$ a.s. $\square$
Proof of Theorem (2.7). Let $(f_n)$ be a sequence in $C_0$ which separates points, namely, for any pair $(x, y)$ in $E_\Delta$, there is a function $f_n$ in the sequence such that $f_n(x) \ne f_n(y)$. Since $a U_a f_n$ converges uniformly to $f_n$ as $a \to \infty$, the countable set $\mathcal{H} = \{U_a f_n,\ a \in \mathbb{N},\ n \in \mathbb{N}\}$ also separates points.
Let $S$ be a countable dense subset of $\mathbb{R}_+$. By Proposition (2.6) and Theorem (2.5) in Chap. II, for each $h \in \mathcal{H}$, the process $h(X_t)$ has a.s. right limits along $S$. Because $\mathcal{H}$ separates points and is countable, it follows that almost surely the function $t \mapsto X_t(\omega)$ has right limits in $E_\Delta$ along $S$.
For any $\omega$ for which these limits exist, we set $\tilde{X}_t(\omega) = \lim_{s \downarrow t,\, s \in S} X_s(\omega)$ and for an $\omega$ for which the limits fail to exist, we set $\tilde{X}_\cdot(\omega) \equiv x$ where $x$ is an arbitrary point in $E$. We claim that for each $t$, $\tilde{X}_t = X_t$ a.s. Indeed, let $g$ and $h$ be two functions of $C(E_\Delta)$; we have
$$\lim_{s \downarrow t,\, s \in S} E_\nu\left[g(\tilde{X}_t)h(X_s)\right] = \lim_{s \downarrow t,\, s \in S} E_\nu\left[g(\tilde{X}_t)P_{s-t}h(X_t)\right] = E_\nu\left[g(\tilde{X}_t)h(X_t)\right]$$
since $P_{s-t}h$ converges uniformly to $h$ as $s \downarrow t$. Our claim follows from Lemma (2.8) and thus $\tilde{X}$ is a right-continuous modification of $X$.
This modification has left limits, because for $h \in \mathcal{H}$, the processes $h(\tilde{X}_t)$ are now right-continuous supermartingales which, by Theorem (2.8) of Chap. II, have a.s. left limits along $\mathbb{R}_+$. Again, because $\mathcal{H}$ separates points, the process $\tilde{X}$ has a.s. left limits in $E_\Delta$ along $\mathbb{R}_+$. $\square$
Remark. In almost the same way we proved $\tilde{X}_t = X_t$ a.s., we can prove that for each $t$, $X_t = X_{t-}$ a.s.; in other words, $X_{t-}$ is a left-continuous modification of $X$. It can also be said that $X$ has no fixed time of discontinuity, i.e. there is no fixed time $t$ such that $P[X_{t-} \ne X_t] > 0$.
From now on, we consider only cadlag versions of X for which we state

(2.9) Proposition. If ζ(ω) = inf{t ≥ 0 : X_{t−}(ω) = Δ or X_t(ω) = Δ}, we
have almost-surely X. = Δ on [ζ, ∞[.

Proof. Let φ be a strictly positive function of C_0. The function g = U_1φ is also
strictly positive. The supermartingale Z_t = e^{−t} g(X_t) is cadlag and we see that
Z_{t−} = 0 if and only if X_{t−} = Δ and Z_t = 0 if and only if X_t = Δ. As a result,
ζ(ω) = inf{t ≥ 0 : Z_{t−}(ω) = 0 or Z_t(ω) = 0};
we then conclude by Proposition (3.4) in Chap. II.
□
With a slight variation from Sect. 3 in Chap. I, we now call D the space
of functions w from ℝ_+ to E_Δ which are cadlag and such that w(t) = Δ for
t > s whenever w(s−) = Δ or w(s) = Δ. The space D is contained in the space
Ω = E_Δ^{ℝ_+} and, by the same reasoning as in Sect. 3 of Chap. I, we can use it as
probability space. We still call X_t the restrictions to D of the coordinate mappings
and the image of P_ν by the canonical mapping φ will still be denoted P_ν. For
each P_ν, X is a cadlag Markov process with transition function P_t; we call it the
canonical cadlag realization of the semi-group P_t.
For the canonical realization, we obviously have a family θ_t of shift operators
and we can apply the Markov property under the form of Proposition (1.7). We will
often work with this version but it is not the only version that we shall encounter,
as will be made clear in the following section. Most often however, a problem
can be carried over to the canonical realization where one can use freely the shift
operators. The following results, for instance, are true for all cadlag versions. It
may nonetheless happen that one has to work with another version; in that case,
one will have to make sure that shift operators may be defined and used if the
necessity arises.
So far, the filtration we have worked with, e.g. in Proposition (1.7), was the
natural filtration (𝓕_t^0). As we observed in Sect. 4 of Chap. I, this filtration is not
right-continuous, nor is it complete; therefore, we must use an augmentation
of (𝓕_t^0).
We shall denote by 𝓕^ν the completion of 𝓕_∞^0 with respect to P_ν and by
(𝓕_t^ν) the filtration obtained by adding to each 𝓕_t^0 all the P_ν-negligible sets in
𝓕^ν. Finally, we will set
𝓕_t = ⋂_ν 𝓕_t^ν,  𝓕_∞ = ⋂_ν 𝓕^ν.
§2. Feller Processes
(2.10) Proposition. The filtrations (𝓕_t^ν) and (𝓕_t) are right-continuous.

Proof. Plainly, it is enough to prove that (𝓕_t^ν) is right-continuous and, to this
end, because 𝓕_t^ν and 𝓕_{t+}^ν are P_ν-complete, it is enough to prove that for each
𝓕_∞^0-measurable and positive r.v. Z,
E_ν[Z | 𝓕_{t+}^ν] = E_ν[Z | 𝓕_t^ν].
By the monotone class theorem, it is enough to prove this equality for Z =
∏_{i=1}^n f_i(X_{t_i}) where f_i ∈ C_0 and t_1 < t_2 < ⋯ < t_n. Let us observe that
E_ν[Z | 𝓕_{t+}^ν] = lim_{h↓0} E_ν[Z | 𝓕_{t+h}^ν]  P_ν-a.s. for each t.
Let t be a real number; there is an integer k such that t_{k−1} ≤ t < t_k and, for h
sufficiently small,
E_ν[Z | 𝓕_{t+h}^ν] = ∏_{i=1}^{k−1} f_i(X_{t_i}) g_h(X_{t+h})
where
g_h(x) = ∫⋯∫ P_{t_k−t−h}(x, dx_k) f_k(x_k) ∫ P_{t_{k+1}−t_k}(x_k, dx_{k+1}) ⋯ ∫ P_{t_n−t_{n−1}}(x_{n−1}, dx_n) f_n(x_n).
If we let h tend to zero, g_h converges uniformly on E to
g(x) = ∫⋯∫ P_{t_k−t}(x, dx_k) f_k(x_k) ∫ P_{t_{k+1}−t_k}(x_k, dx_{k+1}) ⋯ ∫ P_{t_n−t_{n−1}}(x_{n−1}, dx_n) f_n(x_n).
Moreover, X_{t+h} converges to X_t as h decreases to 0, thanks to the right-continuity of paths and therefore, using Theorem (2.3) in Chap. II,
E_ν[Z | 𝓕_{t+}^ν] = lim_{h↓0} E_ν[Z | 𝓕_{t+h}^ν] = ∏_{i=1}^{k−1} f_i(X_{t_i}) g(X_t) = E_ν[Z | 𝓕_t^ν],
which completes the proof.
□
It follows from this proposition that (𝓕_t) is the usual augmentation (Sect. 4
Chap. I) of (𝓕_t^0) and so is (𝓕_t^ν) if we want to consider only the probability
measure P_ν. It is remarkable that completing the filtration was also enough to
make it right-continuous.
The filtrations (𝓕_t) and (𝓕_t^ν) are those which we shall use most often in the
sequel; therefore, it is important to decide whether the properties described so far
for (𝓕_t^0) carry over to (𝓕_t). There are obviously some measurability problems
which are solved in the following discussion.
(2.11) Proposition. If Z is 𝓕_∞-measurable and bounded, the map x → E_x[Z] is
𝓔*-measurable and
E_ν[Z] = ∫ E_x[Z] ν(dx).

Proof. For any ν, there are, by definition of the completed σ-fields, two 𝓕_∞^0-measurable r.v.'s Z_1 and Z_2 such that Z_1 ≤ Z ≤ Z_2 and E_ν[Z_2 − Z_1] = 0.
Clearly, E_x[Z_1] ≤ E_x[Z] ≤ E_x[Z_2] for each x, and since x → E_x[Z_i], i = 1, 2,
is 𝓔-measurable and ∫(E_x[Z_2] − E_x[Z_1]) dν(x) = E_ν[Z_2 − Z_1] = 0, it follows
that the map x → E_x[Z] is 𝓔^ν-measurable. As ν is arbitrary, the proof is complete.
□
(2.12) Proposition. For each t, the r.v. X_t is in 𝓕_t/𝓔*.

Proof. This is an immediate consequence of Proposition (3.2) in Chap. 0.
□
We next want to extend the Markov property of Proposition (1.7) to the
σ-algebras 𝓕_t. We first need the

(2.13) Proposition. For every t and h > 0, θ_h^{−1}(𝓕_t) ⊂ 𝓕_{t+h}.

Proof. As θ_h ∈ 𝓕_{t+h}^0/𝓕_t^0, the result will follow from Proposition (3.2) in Chap. 0
if we can show that for any starting measure ν, there is a starting measure μ
such that θ_h(P_ν) = P_μ. Define μ = X_h(P_ν); then using the Markov property of
Proposition (1.7) we have, for Γ ∈ 𝓕_∞^0,
P_ν[θ_h^{−1}(Γ)] = E_ν[P_{X_h}[Γ]] = ∫ P_x[Γ] μ(dx) = P_μ[Γ],
which completes the proof.
□
We may now state
(2.14) Proposition (Markov property). If Z is 𝓕_∞-measurable and positive (or
bounded), then, for every t > 0 and any starting measure ν,
E_ν[Z ∘ θ_t | 𝓕_t] = E_{X_t}[Z]
on the set {X_t ≠ Δ}. In particular, X is still a Markov process with respect to
(𝓕_t).

Proof. By Propositions (2.11) and (2.12), the map E_{X_t}[Z] is 𝓕_t-measurable, so
we need only prove that for any A ∈ 𝓕_t,
(∗)  E_ν[1_A 1_{(X_t ≠ Δ)} Z ∘ θ_t] = E_ν[1_A 1_{(X_t ≠ Δ)} E_{X_t}[Z]].
We may assume that Z is bounded; by definition of 𝓕_∞, there is a 𝓕_∞^0-measurable
r.v. Z′ such that {Z ≠ Z′} ⊂ Γ with Γ ∈ 𝓕_∞^0 and P_μ[Γ] = 0 where μ = X_t(P_ν)
as in the preceding proof. We have {Z ∘ θ_t ≠ Z′ ∘ θ_t} ⊂ θ_t^{−1}(Γ) and, as in the
above proof, P_ν[θ_t^{−1}(Γ)] = P_μ[Γ] = 0. Since it was shown in the last proof that
E_ν[E_{X_t}[·]] = E_μ[·], it now follows that
E_ν[E_{X_t}[|Z − Z′|]] = E_μ[|Z − Z′|] = 0
so that E_{X_t}[Z] = E_{X_t}[Z′] P_ν-a.s. Therefore, we may replace Z by Z′ on both
sides of (∗) which is then a straightforward consequence of Proposition (1.7). □
Feller processes are not the only Markov processes possessing good versions,
and actually they may be altered in several ways to give rise to Markov processes in
the sense of Sect. 1, which still have all the good probabilistic properties of Markov
processes but no longer the analytic properties of Feller transition functions. The
general theory of Markov processes is not one of the subjects of this book; rather,
the Markov theory is more something we have to keep in mind when studying
particular classes of processes. As a result, we do not want to go deeper into
the remark above, which would lead us to set up axiomatic definitions of "good"
Markov processes. In the sequel, if the necessity arises, we will refer to Markov
processes with values in (E, 𝓔) as collections X = (Ω, 𝓕, 𝓕_t, P_x, x ∈ E, θ_t);
these symbols will then have the same meaning and can be used in the same
manner as for Feller processes. For instance, the maps t → X_t are supposed to be
a.s. cadlag. This may be seen as a sad departure from a rigorous treatment of the
subject, but we shall make only a parsimonious use of this liberty, and the reader
should not feel uneasy on this count. Exercise (3.21) gives an example of a
Markov process which is not a Feller process.
We proceed to a few consequences of the existence of good versions. The
following observation is very important.
(2.15) Theorem (Blumenthal's zero-one law). For any x ∈ E and Γ ∈ 𝓕_0^x,
either P_x[Γ] = 0 or P_x[Γ] = 1.

Proof. If Γ ∈ σ(X_0), then P_x[Γ] = 0 or 1 because P_x[X_0 = x] = 1. Since one
obtains 𝓕_0^x by adding to σ(X_0) sets of P_x-measure zero, the proof is complete.
□
(2.16) Corollary. If T is a (𝓕_t^x)-stopping time, then either P_x[T = 0] = 1 or
P_x[T > 0] = 1.

This corollary has far-reaching consequences, especially in connection with the
following result (see Exercise (2.25)). If A is a set, we recall from Sect. 4 Chap. I
that the entry and hitting times of A by X are defined respectively by
D_A = inf{t ≥ 0 : X_t ∈ A},  T_A = inf{t > 0 : X_t ∈ A},
where as usual, inf(∅) = +∞. For any s,
s + D_A ∘ θ_s = s + inf{t ≥ 0 : X_{t+s} ∈ A} = inf{t ≥ s : X_t ∈ A}.
It follows that s + D_A ∘ θ_s = D_A on {D_A ≥ s} and also that
T_A = lim_{s↓0} (s + D_A ∘ θ_s).
Similarly, one proves that t + T_A ∘ θ_t = T_A on {T_A > t}.
(2.17) Theorem. If A is a Borel set, the times D_A and T_A are (𝓕_t)-stopping times.

Proof. Since X is right-continuous, it is clearly progressively measurable and,
since (𝓕_t) is right-continuous and complete, Theorem (4.15) of Chap. I shows
that D_A, which is the debut of the set Γ = {(t, ω) : X_t(ω) ∈ A}, is a (𝓕_t)-stopping
time.
The reader will now check easily (see Proposition (3.3)) that for each s, the
time s + D_A ∘ θ_s is a (𝓕_t)-stopping time. As a limit of (𝓕_t)-stopping times, T_A
is itself a (𝓕_t)-stopping time.
□
We will next illustrate the use of the Markov property with two interesting
results. For the first one, let us observe that a basic example of Feller semi-groups
is provided by convolution semi-groups, i.e. families (μ_t, t ≥ 0) of probability
measures on ℝ^d such that
i) μ_t ∗ μ_s = μ_{t+s} for any pair (s, t);
ii) μ_0 = δ_0 and lim_{t↓0} μ_t = δ_0 in the vague topology.
If we set
P_t(x, A) = ∫_{ℝ^d} 1_A(x + y) μ_t(dy)
we get a Feller t.f., as is easily checked by means of Proposition (2.4) and the
well-known properties of convolution. Most of the examples of Exercise (1.8), in
particular the t.f. of BM^d, are of this type. A Feller process with such a t.f. has
special properties.
(2.18) Proposition. If the transition function of X is given by a convolution semi-group (μ_t), then X has stationary independent increments. The law of the increment
X_t − X_s is μ_{t−s}.

The word stationary refers to the fact that the law of the increment X_t − X_s
depends only on t − s, hence is invariant by translation in time. The process X
itself is not stationary in the sense of Sect. 3 Chap. I.

Proof. For any f ∈ 𝓔_+ and any t we have, since P_x[X_0 = x] = 1,
E_x[f(X_t − X_0)] = E_x[f(X_t − x)] = μ_t(f),
which no longer depends on x. Consequently, by the Markov property, for s < t,
E_ν[f(X_t − X_s) | 𝓕_s] = E_{X_s}[f(X_{t−s} − X_0)] = μ_{t−s}(f),
which completes the proof.
□
Conversely, if a Feller process has stationary independent increments, it is
easily checked that its t.f. is given by a convolution semi-group having property ii)
stated above Proposition (2.18). These processes will be called processes with stationary
independent increments or Levy processes. Some facts about these processes are
collected in Sect. 4.
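As a quick numerical sanity check (ours, purely illustrative; function names and sample sizes are not from the text), the Gaussian family μ_t = N(0, t) on ℝ is a convolution semi-group: property i) amounts to the fact that the sum of independent centered Gaussians with variances t and s is a centered Gaussian with variance t + s.

```python
import random
import statistics

def convolved_variance(t, s, n=200_000, seed=42):
    """Empirical variance of X + Y with X ~ N(0, t), Y ~ N(0, s) independent.

    For the Gaussian convolution semi-group mu_t = N(0, t), property i)
    (mu_t * mu_s = mu_{t+s}) predicts a variance close to t + s.
    """
    rng = random.Random(seed)
    sums = [rng.gauss(0.0, t ** 0.5) + rng.gauss(0.0, s ** 0.5)
            for _ in range(n)]
    return statistics.pvariance(sums)
```

With t = 0.3 and s = 0.5, the empirical variance comes out close to 0.8, as the semi-group property predicts.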
We now turn to another result which holds for any Markov process with good
versions.
(2.19) Proposition. Let x ∈ E and σ_x = inf{t > 0 : X_t ≠ x}; there is a constant
a ∈ [0, ∞] depending on x such that
P_x[σ_x > t] = e^{−at}.

Proof. The time σ_x is the hitting time of the open set {x}^c and therefore a stopping
time (see Sect. 4 Chap. I). Furthermore, σ_x = t + σ_x ∘ θ_t on {σ_x > t}, as was observed
before Theorem (2.17); thus, we may write
P_x[σ_x > t + s] = P_x[{σ_x > t} ∩ {σ_x > t + s}] = E_x[1_{(σ_x > t)} 1_{(σ_x > s)} ∘ θ_t]
and by the Markov property, since obviously X_t ≠ Δ on {σ_x > t}, this yields
P_x[σ_x > t + s] = E_x[1_{(σ_x > t)} P_{X_t}[σ_x > s]];
but, on {σ_x > t}, we have X_t = x, so that finally
P_x[σ_x > t + s] = P_x[σ_x > t] P_x[σ_x > s],
which completes the proof.
□
Finally, this proposition leads to a classification of points. If a = +∞, σ_x is
P_x-a.s. zero; in other words, the process leaves x at once. This is the case for all
points if X is the BM since in that case P_t(x, {x}) = 0 for every t > 0. If a = 0,
the process never leaves x, which can be said to be a trap or an absorbing point.
If a ∈ ]0, ∞[, then σ_x has an exponential law with parameter a; we say that x is a
holding point or that the process stays in x for an exponential holding time. This
is the case for the Poisson process with a = 1 for every x, but, in the general
case, a is actually a function of x. Let us further observe that, as will be proved
in Proposition (3.13), X can leave a holding point only by a jump; thus, for a
process with continuous paths, only the cases a = 0 and a = ∞ are possible.
We close this section with a few remarks about Brownian motion. We now have
two ways to look at it: one as the process constructed in Chap. I which vanishes at
time zero and for which we consider only one probability measure; the other one
as a Markov process which can be started anywhere so that we have to consider
the whole family of probability measures P_ν. The probability measure of the first
viewpoint, which is the Wiener measure in the canonical setting, identifies with
the probability measure P_0 = P_{ε_0} of the second viewpoint. Any result proved for
P_0 in the Markov process setting will thus be true for the Wiener measure.
In the sequel, the words Brownian motion will refer to one viewpoint or the
other. We shall try to make it clear from the context which viewpoint is adopted
at a given time; we shall also use the adjective standard to mean that we consider
only the probability measure for which B_0 = 0 a.s., i.e. the Wiener measure.
(2.20) Definition. If (𝓖_t) is a filtration, an adapted process B is called a (𝓖_t)-Brownian motion if
i) it is a Brownian motion,
ii) for each t ≥ 0, the process B_{t+s} − B_t, s ≥ 0, is independent of 𝓖_t.

It is equivalent to say that B is a Markov process with respect to (𝓖_t) with
the t.f. of Exercise (1.8) ii).
In this definition, the notion of independence may refer to one or to a family of
probability measures. We want to stress that, with the notation of this section, B is
a (𝓕_t)-Brownian motion if we consider the whole family of probability measures
P_ν, or a (𝓕_t^μ)-Brownian motion if we consider only one probability measure P_μ.
Each of these filtrations is, in its context, the smallest right-continuous complete
filtration with respect to which B is a BM.
(2.21) Definition. Let X be a process on a space (Ω, 𝓕) endowed with a family P_θ,
θ ∈ Θ, of probability measures. We denote by (𝓕_t^X) the smallest right-continuous
and complete filtration with respect to which X is adapted. A (𝓕_t^X)-stopping
time is said to be a stopping time of X.
In the case of BM, we have 𝓕_t^B = 𝓕_t or 𝓕_t^μ according to the context. These
filtrations will be called the Brownian filtrations.
(2.22) Exercise. Prove that the transition functions exhibited in Exercises (1.8)
(with the exception of the generalized Poisson t.f.), (1.14), (1.15) are Feller t.f.'s.
Do the same job for the OU processes of Exercise (1.13).
#
(2.23) Exercise. Show that the resolvent of the semi-group of linear BM is given
by U_p(x, dy) = u_p(x, y) dy where
u_p(x, y) = (1/√(2p)) exp(−√(2p) |x − y|).
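A numerical cross-check (ours, illustrative; all names and grid parameters are arbitrary choices): this density should coincide with the Laplace transform in t of the Gaussian transition density (2πt)^{-1/2} exp(−(x − y)^2/2t). A rough midpoint Riemann sum suffices.

```python
import math

def heat_kernel(t, x, y):
    """Transition density of linear Brownian motion."""
    return math.exp(-(x - y) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def resolvent_numeric(p, x, y, dt=1e-4, t_max=60.0):
    """Midpoint Riemann sum for int_0^inf e^{-p t} p_t(x, y) dt."""
    total = 0.0
    t = dt / 2.0  # midpoint grid avoids the t = 0 endpoint
    while t < t_max:
        total += math.exp(-p * t) * heat_kernel(t, x, y) * dt
        t += dt
    return total

def resolvent_closed_form(p, x, y):
    """Density claimed in Exercise (2.23): (1/sqrt(2p)) exp(-sqrt(2p)|x-y|)."""
    return math.exp(-math.sqrt(2.0 * p) * abs(x - y)) / math.sqrt(2.0 * p)
```

For instance, with p = 1, x = 0, y = 0.7, the two values agree to several decimal places.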
(2.24) Exercise. If X is a Markov process, and e_p and e_q are two independent exponential
r.v.'s with parameters p and q, independent of X, prove that for a positive Borel
function f
and derive therefrom the resolvent equation.
*
(2.25) Exercise. 1°) A subset A of E is called nearly Borel if, for every ν, there
are two Borel sets A_1, A_2 such that A_1 ⊂ A ⊂ A_2 and P_ν[D_{A_2∖A_1} < ∞] = 0.
Prove that the family of nearly Borel sets is a sub-σ-algebra of the universally
measurable sets. Prove that, if A is nearly Borel, then D_A and T_A are (𝓕_t)-stopping
times.
2°) If A is nearly Borel and x ∈ E, prove that either P_x[T_A = 0] = 1 or
P_x[T_A = 0] = 0. In the former (latter) case, the point x is said to be regular
(irregular) for A.
3°) A set O is said to be finely open if, for every x ∈ O, there is a nearly
Borel set G such that x ∈ G ⊂ O and x is irregular for G^c. Prove that the finely
open sets are the open sets for a topology which is finer than the locally compact
topology of E. This topology is called the fine topology.
4°) If a nearly Borel set A is of potential zero, i.e. ∫_0^∞ P_t(·, A) dt = 0 (see
Exercise (2.29)), then A^c is dense for the fine topology.
5°) If f is universally measurable and t → f(X_t) is right-continuous, then f
is finely continuous.
6°) Prove the converse of the property in 5°).
[Hints: Pick ε > 0 and define T_0 = 0 and, for any ordinal α of the first kind, define
T_{α+1} = inf{t > T_α : |f(X_t) − f(X_{T_α})| > ε},
and if α is a limit ordinal, T_α = sup_{β<α} T_β.
Prove that T_α < T_{α+1} a.s. on {T_α < ∞} and that, as a result, there are only
countably many finite times T_α.]
*
(2.26) Exercise. Prove that for a Feller process X, the set {X_s(ω), 0 ≤ s ≤ t},
t < ζ(ω), is a.s. bounded.
[Hint: Use the quasi-left continuity of Exercise (2.33) applied to exit times of
suitable compact sets.]
*
(2.27) Exercise (A criterion for the continuity of paths). 1°) Let d be a metric
on E, and! a function from [0, 1] to E,d with left (right) limits on ]0, 1]([0, 1[).
Then, ! is not continuous if and only if there is an e > such that
°
Nn(f) =
max d(J(k/n), !«k + 1)/n») > e
O:::;k:::;n-\
for all n sufficiently large.
2°) Let B(x, e) = {y : d(x, y) ~ e}. For e >
°and a compact set K, define
M! = {w : Nn(X.(w» > e; Xs(w) E K
for every s E [0, I]}.
Prove that Pv (M~) ~ n SUPxEK Pl/n(X, B(x, ey).
3°) Using the result in the preceding exercise, prove that if X satisfies the
condition
°
lim sup ~Pt(x, B(x,
t-l-0 XEK t
en °
=
for every e > and compact set K, then a.s. X has continuous paths.
4°) Check that the condition in 3°) is satisfied for BM. Thus, the results of this
section together with 3°) give another construction of BM independent of Chap. I.
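For linear BM the quantity in 3°) is available in closed form: P_t(x, B(x, ε)^c) = P[|B_t| > ε] = erfc(ε/√(2t)), independently of x, so the sup over x ∈ K is this same value. The sketch below (ours, illustrative) checks numerically that t^{-1} erfc(ε/√(2t)) decreases to 0 as t ↓ 0, which is the condition in 3°).

```python
import math

def escape_rate(t, eps):
    """(1/t) P_t(x, B(x, eps)^c) for linear Brownian motion.

    P_t(x, B(x, eps)^c) = P[|B_t| > eps] = erfc(eps / sqrt(2 t)),
    which does not depend on the starting point x.
    """
    return math.erfc(eps / math.sqrt(2.0 * t)) / t
```

The rate vanishes faster than any power of t as t ↓ 0, which is what forces the paths to be continuous.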
*
(2.28) Exercise. Let B be the BM^d, 𝓖_t = σ(B_s, s ≥ t) and 𝓖_∞ = ⋂_t 𝓖_t its
asymptotic σ-field.
1°) Use the time inversion of Sect. 1 Chap. I and Blumenthal's zero-one law to
prove that 𝓖_∞ is P_0-a.s. trivial, i.e. for any A ∈ 𝓖_∞ either P_0(A) = 0 or P_0(A) = 1.
2°) If A is in 𝓖_∞, then for any fixed t, there is an event B ∈ 𝓕_∞ such that
1_A = 1_B ∘ θ_t. Prove that
P_x[A] = ∫ P_t(x, dy) P_y(B)
and conclude that either P.[A] ≡ 0 or P.[A] ≡ 1.
3°) Prove that for any initial distribution ν and Γ ∈ 𝓖_∞, P_ν[Γ] = 0 or P_ν[Γ] = 1.
[Hint: Use Theorem (2.3) of Chap. II.]
4°) If ν_1 and ν_2 are two starting measures, show that
lim_{t→∞} ‖(ν_1 − ν_2)P_t‖ = 0
where the norm is the variation norm on bounded measures.
[Hint: Use a Jordan-Hahn decomposition of (ν_1 − ν_2).]
(2.29) Exercise. Let X be a Feller process. For x ∈ E and A ∈ 𝓔, set
U(x, A) = ∫_0^∞ P_t(x, A) dt.
1°) Prove that this integral is well defined, that U is a kernel on (E, 𝓔) and
that if f ∈ 𝓔_+, U f(x) = E_x[∫_0^∞ f(X_t) dt]. The kernel U is called the potential
kernel of X.
2°) Check that U f = lim_{λ↓0} U_λ f and that for every λ > 0
U = U_λ + λU_λU = U_λ + λUU_λ.
3°) Prove that for X = BM^d, d ≤ 2, the potential kernel takes only the values
0 and +∞ on 𝓔_+. This is linked to the recurrence properties of BM in dimensions
1 and 2 (see Sect. 3 Chap. X).
4°) Prove that for BM^d, d ≥ 3, the potential kernel is the convolution kernel
associated with (1/(2π^{d/2})) Γ((d/2) − 1) |x|^{2−d}, i.e. the kernel of Newtonian potential
theory. In particular, for d = 3,
U f(x) = (1/2π) ∫ f(y)/|x − y| dy.
5°) Compute the potential kernel of linear BM killed when it reaches 0 (Exercise (3.29)) and prove that it has the density 2(x ∧ y) with respect to the Lebesgue
measure on ℝ_+.
6°) Prove that g_t is a density for the potential kernel of the heat process.
(2.30) Exercise. Let A be a Borel set.
1°) Prove that for every s, t ≥ 0,
{D_A > t + s} = {D_A > t} ∩ θ_t^{−1}({D_A > s}).
2°) Let ν be a probability measure such that ν(A) = 0. Prove that under P_ν,
the process Y defined by
Y_t = X_t if t < D_A,  Y_t = Δ if t ≥ D_A,
is a Markov process with respect to (𝓕_t). One says that Y is the process X killed
when entering A. See Exercise (3.29) for a particular case.
#
(2.31) Exercise. Let X be the standard linear BM and set ψ(t) = t^{−α}, α ≥ 0.
Prove that the following three properties are equivalent:
i) lim_{ε↓0} ∫_ε^1 B_t ψ(t) dt exists on a set of strictly positive probability;
ii) α < 3/2;
iii) ∫_0^1 ψ(t)|B_t| dt < ∞ a.s.
[Hint: Use Blumenthal's zero-one law to prove that i) is equivalent to a stronger
property. Then, to prove that i) entails ii), use the fact that for Gaussian r.v.'s
almost-sure convergence implies convergence in L^2, hence convergence of the
L^2-norms.]
The assertion that for, say, any positive continuous function ψ on ]0, 1], the
properties i) and iii) are equivalent is false. In fact, it can be shown that if ψ ∈
L^1_loc(]0, 1]), iii) is equivalent to
iv) ∫_0^1 ψ(t) t^{1/2} dt < ∞,
and there exist functions ψ satisfying i) but not iv).
This subject is taken up in Exercise (3.19) Chap. IV.
*
(2.32) Exercise. Let B be the standard linear BM and h a continuous function on
]0, 1[. Let Γ be the event
{ω : B_t(ω) < h(t) on some interval ]0, T(ω)[ ⊂ ]0, 1[}.
Prove that either P(Γ) = 0 or P(Γ) = 1; in the former (latter) case, h is said to
belong to the lower (upper) class. For every ε > 0, h(t) = (1 + ε)√(2t log_2(1/t))
belongs to the upper class and h(t) = (1 − ε)√(2t log_2(1/t)) to the lower class.
*
(2.33) Exercise. 1°) (Quasi-left continuity). If X is a Feller process and (T_n) a
sequence of (𝓕_t)-stopping times increasing to T, prove that
lim_n X_{T_n} = X_T  a.s. on {T < ∞}.
[Hint: It is enough to prove the result for bounded T. Set Y = lim_n X_{T_n} (why
does it exist?) and prove that for continuous functions f and g
E_x[f(Y)g(X_T)] = lim_{t↓0} lim_n E_x[f(X_{T_n})g(X_{T_n+t})] = E_x[f(Y)g(Y)].]
This result is of course totally obvious for processes with continuous paths.
For processes with jumps, it shows that if X_{T−} ≠ X_T on {0 < T < ∞}, then
a sequence (T_n) can increase to T only in a trivial way: for a.e. ω, there is an
integer n(ω) such that T_n(ω) = T(ω) for n ≥ n(ω). Such a time is said to be
totally inaccessible as opposed to the predictable times of Sect. 5 Chap. IV, a
typical example being the times of jumps of the Poisson process.
2°) Using only 1°) and Proposition (4.6) in Chap. I, prove that if A is a closed
set, then T_A is a (𝓕_t)-stopping time.
§3. Strong Markov Property
Stopping times are of constant use in the study of Markov processes, the reason
being that the Markov property extends to them, as we now show. We must first
introduce some notation.
We shall consider the canonical cadlag version of a Feller process. We use
the results and notation of §2. For a (𝓕_t)-stopping time T, we define X_T on the
whole space Ω by putting X_T = Δ on {T = ∞}. The r.v. X_T is 𝓕_T-measurable,
as follows from Sect. 4 in Chap. I, and is the position of the process at time T.
We further define a map θ_T from Ω into itself by
θ_T(ω) = θ_t(ω) if T(ω) = t,  θ_T(ω) = ω_Δ if T(ω) = +∞,
where ω_Δ is the path identically equal to Δ. Clearly, X_t ∘ θ_T = X_{T+t} so that
θ_T^{−1}(𝓕_∞^0) ⊂ σ(X_{T+t}, t ≥ 0).
We now prove the Strong Markov property of Feller processes.

(3.1) Theorem. If Z is a 𝓕_∞-measurable and positive (or bounded) random variable and T is a stopping time, for any initial measure ν,
E_ν[Z ∘ θ_T | 𝓕_T] = E_{X_T}[Z]
P_ν-a.s. on the set {X_T ≠ Δ}.

Proof. We first prove the formula when T takes its values in a countable set D.
We have
E_ν[Z ∘ θ_T 1_{(X_T ≠ Δ)} | 𝓕_T] = Σ_{d∈D} 1_{(T=d)} 1_{(X_d ≠ Δ)} E_{X_d}[Z],
which proves our claim.
To get the general case, let us observe that by setting
T_n = (k + 1)2^{−n} on {k2^{−n} ≤ T < (k + 1)2^{−n}}, k ∈ ℕ,  T_n = +∞ on {T = +∞},
we define a sequence of stopping times taking their values in countable sets and
decreasing to T. For functions f_i, i = 1, 2, ..., k in C_0^+ and times t_1 < t_2 < ⋯ < t_k, let
g(x) = E_x[∏_{i=1}^k f_i(X_{t_i})].
Because X is Feller, the function g is in C_0^+ and, by the special case,
E_ν[∏_{i=1}^k f_i(X_{t_i}) ∘ θ_{T_n} | 𝓕_{T_n}] = g(X_{T_n}).
Because of the right-continuity of paths, and by Corollary (2.4) Chap. II, we get
the result for the special case ∏_i f_i(X_{t_i}). By an application of the monotone class
theorem, we get the result for every positive Z in 𝓕_∞^0.
It remains to prove the theorem for Z ∈ (𝓕_∞)_+. By working with P′_ν =
P_ν(· ∩ (X_T ≠ Δ))/P_ν(X_T ≠ Δ), for which the conditional expectation given 𝓕_T
is the same as under P_ν, we may assume that X_T ≠ Δ a.s. and drop the corresponding qualification. Call μ the image of P_ν by X_T, i.e. μ(A) = P_ν[X_T ∈ A].
By definition of 𝓕_∞, there are two 𝓕_∞^0-measurable r.v.'s Z′ and Z″ such that
Z′ ≤ Z ≤ Z″ and P_μ[Z″ − Z′ > 0] = 0. By the first part of the proof,
P_ν[Z″ ∘ θ_T − Z′ ∘ θ_T > 0] = E_ν[P_{X_T}[Z″ − Z′ > 0]] = 0.
Since ν is arbitrary, it follows that Z ∘ θ_T is 𝓕_∞-measurable.
The conditional expectation E_ν[Z ∘ θ_T | 𝓕_T] is now meaningful and
E_ν[Z′ ∘ θ_T | 𝓕_T] ≤ E_ν[Z ∘ θ_T | 𝓕_T] ≤ E_ν[Z″ ∘ θ_T | 𝓕_T].
By the foregoing, the two extreme terms are P_ν-a.s. equal to E_{X_T}[Z], which ends
the proof.
□

Remark. The qualification {X_T ≠ Δ} may be forgotten when ζ = +∞ a.s. and
T < ∞ a.s., in which case we will often drop it entirely from the notation.
In the course of the above proof, we saw that θ_T is a 𝓕_∞-measurable mapping.
We actually have

(3.2) Lemma. For any t > 0, T + t is a stopping time and θ_T^{−1}(𝓕_t) ⊂ 𝓕_{T+t}.

Proof. By a monotone class argument, it is easily seen that θ_T^{−1}(𝓕_t^0) ⊂ 𝓕_{T+t}^0, and
the reasoning in the above proof yields the result.
□
We shall use this lemma to prove

(3.3) Proposition. If S and T are two (𝓕_t)-stopping times, then S + T ∘ θ_S is an
(𝓕_t)-stopping time.

Proof. Since (𝓕_t) is right-continuous, it is enough to prove that {S + T ∘ θ_S <
t} ∈ 𝓕_t for every t. But
{S + T ∘ θ_S < t} = ⋃_{q∈ℚ_+} {S < t − q} ∩ {T ∘ θ_S < q}.
By the lemma, the set {T ∘ θ_S < q} is in 𝓕_{S+q}; by definition of 𝓕_{S+q}, the set
{S < t − q} ∩ {T ∘ θ_S < q} = {S + q < t} ∩ {T ∘ θ_S < q} is in 𝓕_t, which proves
our claim.
□
If we think of a stopping time as the first time some physical event occurs,
the stopping time S + T ∘ θ_S is the first time the event linked to T occurs after
the event linked to S has occurred. For instance, using the notation of §2, if A and
B are two sets in 𝓔, the stopping time T_A + T_B ∘ θ_{T_A} is the first time the process
hits the set B after having hit the set A. This will be used in the sequel of this
section.
We now give a first few applications of the strong Markov property. With a
stopping time T, we may also associate a kernel P_T on (E, 𝓔) by setting
P_T(x, A) = P_x[X_T ∈ A],
or more generally, for f ∈ 𝓔_+,
P_T f(x) = E_x[f(X_T)].
The following result tells us how to compose these kernels (see Definition (1.1)).

(3.4) Proposition. If S and T are two stopping times, then
P_{S+T∘θ_S} = P_S P_T.

Proof. By definition,
P_S(P_T f)(x) = E_x[1_{(X_S ≠ Δ)} E_{X_S}[f(X_T)]],
so that, using the strong Markov property, we have
P_S(P_T f)(x) = E_x[1_{(X_S ≠ Δ)} E_x[f(X_T) ∘ θ_S | 𝓕_S]] = E_x[1_{(X_S ≠ Δ)} f(X_T) ∘ θ_S].
Now f(X_T) ∘ θ_S = f(X_{S+T∘θ_S}) and f(X_T) ∘ θ_S = 0 on {X_S = Δ}, so that the
result follows.
□

Remark. We have thus generalized to stopping times the fundamental semi-group
property. Indeed, if T = t a.s., then P_T = P_t and S + T ∘ θ_S = S + t.
We now prove that, if we start to observe a Markov process at a stopping time,
the resulting process is still a Markov process with the same t.f.

(3.5) Proposition. If T is a stopping time, the process Y_t = X_{T+t} is a Markov
process with respect to (𝓕_{T+t}) and with the same transition function.

Proof. Let f ∈ 𝓔_+; for every ν, and every s ≥ 0,
E_ν[f(X_{T+t+s}) | 𝓕_{T+t}] = P_s f(X_{T+t})
on the set {X_{T+t} ≠ Δ}. But, on the set {X_{T+t} = Δ}, the equality holds also, so
that X_{T+t} satisfies the conditions of Definition (1.3).
□
Remarks. 1) The process Y is another version of the Feller process X. This shows
that non-canonical versions arise naturally even if one starts with the canonical
one.
2) The above property is in fact equivalent to the strong Markov property, as
is stated in Exercise (3.16).
In the case of processes with independent increments, the above proposition
can be stated more strikingly.

(3.6) Corollary. If X has stationary independent increments, the process
(X_{T+t} − X_T, t ≥ 0) is independent of 𝓕_T and its law under P_ν is the same as
that of X under P_0.

Proof. For f_i ∈ 𝓔_+, t_i ∈ ℝ_+, i = 1, 2, ..., n,
E_ν[∏_i f_i(X_{T+t_i} − X_T) | 𝓕_T] = E_{X_T}[∏_i f_i(X_{t_i} − X_0)]
and, as in Proposition (2.18), this is a constant depending only on f_i and t_i.
□

In particular, in the case of BM, B_{T+t} − B_T is a (𝓕_{T+t})-Brownian motion
independent of 𝓕_T. Another proof of this fact is given in Exercise (3.21) Chap. IV.
We devote the rest of this section to an application of the Strong Markov property
to linear BM.
We recall that the continuous increasing process S_t = sup_{s≤t} B_s and the stopping times T_a introduced in Sect. 3 Chap. II are inverses of one another in the
sense that
T_a = inf{t : S_t ≥ a},  S_t = inf{a : T_a ≥ t}.
The map a → T_a is increasing and left-continuous (see Sect. 4 Chap. 0 and Sect. 1
in Chap. V).
In the next result, P is the probability measure for the BM started at 0, i.e.
the Wiener measure if we use the canonical version.
(3.7) Proposition (Reflection principle). For every a > 0 and t ≥ 0,
P[S_t ≥ a] = P[T_a ≤ t] = 2P[B_t ≥ a] = P[|B_t| ≥ a].

The name for this proposition comes from the following heuristic argument.
Among the paths which reach a before time t, "half" will be above a at time t;
indeed, if we consider the symmetry with respect to the line y = a (in the usual
representation with the heat process paths) for the part of the path between T_a and
t, we get a one-to-one correspondence between these paths and those which are
under a at time t. Those which are exactly at a at time t have zero probability
and therefore
P[S_t ≥ a] = P[S_t ≥ a, B_t > a] + P[S_t ≥ a, B_t < a]
= 2P[S_t ≥ a, B_t > a] = 2P[B_t > a]
Fig. 3. Reflection in b
since {B_t > a} ⊂ {S_t ≥ a}. This argument, which is called the reflection principle,
can be made rigorous, but it is easier to use the strong Markov property. See also
Exercise (3.14).
Proof. Indeed, since B_{T_a} = a,
P[S_t ≥ a, B_t < a] = P[T_a ≤ t, B_{T_a+(t−T_a)} − B_{T_a} < 0],
and since B_{T_a+s} − B_{T_a} is a BM independent of 𝓕_{T_a}, this is further equal to
P[T_a ≤ t, B_{T_a+(t−T_a)} − B_{T_a} > 0] = P[S_t ≥ a, B_t > a].
Since
P[S_t ≥ a] = P[B_t ≥ a] + P[S_t ≥ a, B_t < a],
the result follows.
Remarks. 1°) For each t, the random variables S_t and |B_t| thus have the same
law. Of course, the processes S and |B| do not have the same law (S is increasing
and |B| is not). More will be said on this subject in Chap. VI.
2°) As an exercise, the reader may also prove the above result by showing,
with the help of the strong Markov property, that the Laplace transforms in t of
P(T_a ≤ t) and 2P(B_t ≥ a) are equal.
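A crude Monte Carlo illustration (ours, not from the text): discretizing the Brownian path and comparing the frequency of {S_1 ≥ a} with the reflection-principle value 2P[B_1 ≥ a] = erfc(a/√2). The discretized maximum slightly undershoots the true supremum, so only loose agreement should be expected; all names and sample sizes are illustrative.

```python
import math
import random

def prob_max_exceeds(a, t=1.0, n_paths=10_000, n_steps=300, seed=7):
    """Monte Carlo estimate of P[S_t >= a], S_t = sup_{s <= t} B_s,
    from a Gaussian random-walk discretization of the Brownian path."""
    rng = random.Random(seed)
    step_sd = math.sqrt(t / n_steps)
    hits = 0
    for _ in range(n_paths):
        b = s_max = 0.0
        for _ in range(n_steps):
            b += rng.gauss(0.0, step_sd)
            if b > s_max:
                s_max = b
        if s_max >= a:
            hits += 1
    return hits / n_paths

def reflection_value(a, t=1.0):
    """2 P[B_t >= a] = erfc(a / sqrt(2 t)), the reflection-principle answer."""
    return math.erfc(a / math.sqrt(2.0 * t))
```

For a = 0.5 the exact value is about 0.617; the simulated frequency lands a little below it, within the expected discretization bias.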
The preceding result allows us to derive from the law of B_t the laws of the other
variables involved. As we already pointed out, S_t has the same law as |B_t|, namely
the density 2(2πt)^{−1/2} exp(−y^2/2t) on [0, ∞[. As for the law of T_a, it could have
been obtained by inverting its Laplace transform e^{−a√(2s)} found in Proposition (3.7)
Chap. II, but we can now observe that
P[T_a ≤ t] = 2 ∫_a^∞ (2πt)^{−1/2} exp(−y^2/2t) dy.
Upon differentiation with respect to t, the density f_a of T_a is found to be equal
on [0, ∞[ to
f_a(s) = (1/√(2π)) ( −s^{−3/2} ∫_a^∞ exp(−y^2/2s) dy + s^{−5/2} ∫_a^∞ y^2 exp(−y^2/2s) dy )
and integrating by parts in the integral on the right yields
f_a(s) = a(2πs^3)^{−1/2} exp(−a^2/2s).
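One can check this density numerically against the Laplace transform mentioned above: ∫_0^∞ e^{−λs} f_a(s) ds should equal e^{−a√(2λ)}. The sketch below (ours, illustrative; the grid parameters are arbitrary) uses a plain midpoint Riemann sum.

```python
import math

def hitting_density(a, s):
    """f_a(s) = a (2 pi s^3)^{-1/2} exp(-a^2 / (2 s)), the density of T_a."""
    return a * math.exp(-a * a / (2.0 * s)) / math.sqrt(2.0 * math.pi * s ** 3)

def laplace_transform_numeric(a, lam, ds=1e-4, s_max=80.0):
    """Midpoint Riemann sum for int_0^inf exp(-lam s) f_a(s) ds."""
    total = 0.0
    s = ds / 2.0  # midpoint grid avoids the s = 0 endpoint
    while s < s_max:
        total += math.exp(-lam * s) * hitting_density(a, s) * ds
        s += ds
    return total
```

With a = 1 and λ = 1 the sum agrees with e^{−√2} ≈ 0.2431 to high accuracy, in line with the Laplace transform quoted from Proposition (3.7) Chap. II.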
The reader will find in Proposition (3.10) another proof based on scaling properties.
The densities fa form a convolution semi-group, namely fa * fb = fa+b; this is an
easy consequence of the value of the Laplace transform, but is also a consequence
of the following proposition where we look at Ta as a function of a.
(3.8) Proposition. The process T_a, a ≥ 0, is a left-continuous increasing process
with stationary independent increments and is purely discontinuous, i.e. there is
a.s. no interval on which a → T_a is continuous.
Proof. It is left-continuous and increasing as already observed. To prove that
P[{ω : a → T_a(ω) is continuous on some interval}] = 0,
we only need to prove that for any pair (p, q) of rational numbers with p < q,
P[{ω : a → T_a(ω) is continuous on [p, q]}] = 0.
We remark that a → T_a(ω) is continuous on [p, q] iff S is strictly increasing on
[T_p, T_q] (see Sect. 4 Chap. 0), but, since B_{T_p+t} − B_{T_p} is a Brownian motion, this
is impossible by the law of the iterated logarithm.
To prove the independence of the increments, pick two real numbers 0 < a <
b. Since T_b > T_a a.s. we have T_b = T_a + T_b ∘ θ_{T_a} a.s., hence, for f ∈ 𝓔_+,
E[f(T_b ∘ θ_{T_a}) | 𝓕_{T_a}] = E_{B_{T_a}}[f(T_b)]  a.s.
But B_{T_a} = a a.s. and, because of the translation invariance of BM, the last displayed
term is equal to the constant E[f(T_{b−a})], which shows that T_b − T_a is independent
of 𝓕_{T_a}, thus completing the proof.
Thus, we have proved that T_a, a ≥ 0, is a process with stationary independent increments, hence a Feller process (see Sect. 2). It is of course not the canonical version of this process since it is defined on the probability space of the Brownian motion. It is in fact not even right-continuous. We get a right-continuous version by setting T_{a+} = lim_{b↓a} T_b and proving

(3.9) Proposition. For any fixed a, T_a = T_{a+} P-a.s.
Chapter III. Markov Processes
Proof. We also have T_{a+} = inf{t : S_t > a} (see Sect. 4 Chap. 0 and Sect. 1 Chap. V). The strong Markov property entails that for every t > 0, we have S_{T_a+t} > a, whence the result follows immediately. □
Remark. This could also have been proved by passing to the limit as h tends to zero in the equality

  E[exp(−(λ²/2)(T_{a+h} − T_a))] = exp(−λh),

which is a consequence of results in Chap. II.
Furthermore, since T_a, a ≥ 0, is a Feller process, Proposition (3.9) is also a consequence of the remark after Theorem (2.8).
The above results on the law of T_a, which can in particular be used to study the Dirichlet problem in a half-space (see Exercise (3.24)), may also be derived from scaling properties of the family T_a which are of intrinsic interest and will be used in Chap. XI. If a is a positive real, we denote by B_t^{(a)} the Brownian motion a^{-1}B_{a²t} and adorn with the superscript (a) anything which is defined as a function of B^{(a)}. For instance

  T_1^{(a)} = inf{t : B_t^{(a)} = 1}.

With this notation, we obtain
With this notation, we obtain
(3.10) Proposition. We have Ta = a 2T I(a) and consequently Ta !4J a 2TJ. Moreover,
Ta !4J (ajSt>2 rgj (ajBJ)2.
Proof. By definition,

  T_a = inf{t : a^{-1}B_t = 1} = inf{a²t : B_t^{(a)} = 1} = a²T_1^{(a)}.

As for the second sentence, it is enough to prove that T_1 (d)= S_1^{-2}. This follows from the scaling property of S_t, namely: S_t (d)= √t S_1. Indeed,

  P[T_1 ≥ u] = P[S_u ≤ 1] = P[√u S_1 ≤ 1] = P[S_1^{-2} ≥ u]. □
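As an illustrative aside (ours, not the book's): Proposition (3.10) makes T_a trivial to sample, since T_a has the law of (a/B_1)². The Python sketch below, with arbitrary seed and sample size, compares the empirical distribution function with P[T_a ≤ t] = 2(1 − Φ(a/√t)) = erfc(a/√(2t)).

```python
# Monte Carlo sketch: T_a has the law of (a/B_1)^2 by Proposition (3.10),
# so P[T_a <= t] = P[|B_1| >= a/sqrt(t)] = erfc(a / sqrt(2t)).
import math
import random

random.seed(0)
a, t, n = 2.0, 1.0, 200_000
hits = sum((a / abs(random.gauss(0.0, 1.0))) ** 2 <= t for _ in range(n))
empirical = hits / n
exact = math.erfc(a / math.sqrt(2.0 * t))   # = 2(1 - Phi(a/sqrt(t)))
print(empirical, exact)
assert abs(empirical - exact) < 0.005
```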
Knowing that S_1 (d)= |B_1|, it is now easy to derive anew the law of T_1. We will rather use the above results to prove a property of importance in Chap. X (see also Exercise (3.24) in this section, of which it is a particular case).

(3.11) Proposition. If β is another standard linear BM independent of B, then β_{T_a} (d)= a·C where C is a Cauchy random variable with parameter 1.
Proof. Because of the independence of β and B and the scaling properties of β,

  β_{T_a} (d)= β_{a²T_1} (d)= a β_{T_1} (d)= a β_{1/S_1²} (d)= (a/S_1) β_1 (d)= (a/|B_1|) β_1,

which ends the proof, since β_1/|B_1| is known to have the Cauchy distribution. □
Remarks. (i) The reader may look in Sect. 4 for the properties of the processes T_{a+} and β_{T_a}, a ≥ 0.
(ii) Proposition (3.11) may also be seen as giving the distribution of BM² when it first hits a straight line.
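As a numerical aside (ours, not part of the text): Proposition (3.11) is easy to test by sampling, since a·β_1/|B_1| is a ratio of independent Gaussians scaled by a, and a Cauchy variable with parameter a has its upper quartile at a. A Python sketch with arbitrary seed and sample size:

```python
# Monte Carlo sketch of Proposition (3.11): a * N / |N'| with N, N'
# independent standard normals is Cauchy with parameter a, so the
# probability of falling below a (its upper quartile) is 3/4.
import random

random.seed(1)
a, n = 2.0, 200_000
count = sum(a * random.gauss(0.0, 1.0) / abs(random.gauss(0.0, 1.0)) <= a
            for _ in range(n))
print(count / n)   # close to 0.75
assert abs(count / n - 0.75) < 0.01
```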
We now give a first property of the set of zeros of the linear BM.
(3.12) Proposition. The set Z = {t : B_t = 0} is a.s. closed, without any isolated point, and has zero Lebesgue measure.
Proof. That Z is closed follows from the continuity of paths. For any x,

  E_x[∫_0^∞ 1_Z(s) ds] = ∫_0^∞ P_s(x, {0}) ds = 0,

since P_s(x, {0}) = 0 for each s and x. It follows that Z has a.s. zero Lebesgue measure, hence also empty interior.
The time zero belongs to Z and we already know from the law of the iterated logarithm that it is not isolated in Z. We prove this again with the techniques of this section. Let T_0 = inf{t : B_t = 0}; the time t + T_0 ∘ θ_t is the first point in Z after time t; by the Markov property and the explicit value of E_x[exp(−αT_0)] we have

  E_0[exp{−α(t + T_0 ∘ θ_t)}] = exp(−αt) E_0[E_{B_t}[exp(−αT_0)]] = exp(−αt) E_0[exp{−|B_t|√(2α)}]

and this converges to 1 as t goes to zero. It follows by Fatou's lemma that P_0[lim_{t↓0}(t + T_0 ∘ θ_t) = 0] = 1, namely, 0 is a.s. the limit of points in Z. Now for any rational number q, the time d_q = q + T_0 ∘ θ_q is the first point in Z after q; by Corollary (3.6), B_{d_q+t} is a standard linear BM and therefore d_q is a.s. the limit of points in Z. The set N = ∪_{q∈ℚ} {d_q is not a limit of points in Z} is negligible. If h ∈ Z(ω) and if we choose a sequence of rational numbers q_n increasing to h, either h is equal to some d_{q_n}, or is the limit of the d_{q_n}'s. Thus, if ω ∉ N, in either case h is the limit of points in Z(ω), which establishes the proposition. □
Thus Z is a perfect set looking like the Cantor "middle thirds" set. We will see in Chap. VI that it is the support of a random measure without point masses, singular with respect to the Lebesgue measure, which somehow accounts for the time spent at 0 by the linear BM. Moreover, the complement of Z is a countable union of disjoint intervals I_n called the excursion intervals; the restriction of B to such an interval is called an excursion of B. Excursions will be studied in great detail in Chap. XII.
Finally, we complete Proposition (2.19) by proving that X can leave a holding point only by a jump, namely, in the notation of Proposition (2.19):

(3.13) Proposition. If 0 < α < ∞, then P_x[X_{σ_x} = x] = 0.

Proof. If α < ∞, then P_x[σ_x < ∞] = 1. On {X_{σ_x} = x} we have σ_x ∘ θ_{σ_x} = 0 and by the strong Markov property

  P_x[σ_x < ∞; X_{σ_x} = x, σ_x ∘ θ_{σ_x} = 0] = E_x[1_{{σ_x<∞; X_{σ_x}=x}} P_{X_{σ_x}}[σ_x = 0]]
                                               = P_x[σ_x = 0] P_x[σ_x < ∞; X_{σ_x} = x].

Thus if P_x[X_{σ_x} = x] > 0 we have P_x[σ_x = 0] = 1, which completes the proof.
#
(3.14) Exercise (More on the reflection principle). We retain the situation and notation of Proposition (3.7). Questions 3°) and 4°) do not depend on each other.
1°) Prove that the process B^a defined by

  B_t^a = B_t on {t < T_a},  B_t^a = 2a − B_t on {t ≥ T_a}

has the same law as B.
2°) For a ≤ b, b > 0, prove that

  P[S_t ≥ b, B_t ≤ a] = P[B_t ≤ a − 2b] = P_{2b}[B_t ≤ a],

and that the density of the pair (B_t, S_t) is given on {(a, b); a ≤ b, b > 0} by

  (2/πt³)^{1/2} (2b − a) exp(−(2b − a)²/2t).

This can also be proved by computing the Laplace transform

  ∫_0^∞ e^{−λt} P[S_t > b, B_t < a] dt.

3°) Prove that for each t, the r.v. S_t − B_t has the same law as |B_t| and that 2S_t − B_t has the same law as the modulus |B_t³| of a three-dimensional BM; prove further that, conditionally on 2S_t − B_t, the r.v.'s S_t and S_t − B_t are uniformly distributed on [0, 2S_t − B_t]. Much better results will be proved in Chap. VI.
4°) Let I_t = inf_{s≤t} B_s and a > 0; prove that under the probability measure P_a restricted to {I_t > 0}, the r.v. B_t has a density equal to

  (2πt)^{-1/2} [ exp(−(b − a)²/2t) − exp(−(b + a)²/2t) ],  b > 0.

Compare with Exercise (3.29).
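As a numeric cross-check (our sketch, not part of the exercise): the reflection principle in 2°) gives P[S_t ≥ b₀, B_t ≤ a₀] = Φ((a₀ − 2b₀)/√t), and the classical joint density of (B_t, S_t) it yields is f(a, b) = (2/πt³)^{1/2}(2b − a)exp(−(2b − a)²/2t). Integrating f numerically over {a ≤ a₀, b ≥ b₀} should reproduce the closed form:

```python
# Midpoint-rule check: integrating f(a,b) = sqrt(2/(pi t^3)) (2b-a) e^{-(2b-a)^2/2t}
# over {a <= a0, b >= b0} should give P[S_t >= b0, B_t <= a0] = Phi((a0-2b0)/sqrt(t)).
import math

t, a0, b0, h = 1.0, 0.0, 1.0, 0.01
total = 0.0
b = b0 + h / 2
while b < 6.0:                      # truncated b-integral; the tail is negligible
    a = a0 - h / 2
    while a > -8.0:                 # truncated a-integral
        u = 2.0 * b - a
        total += math.sqrt(2.0 / (math.pi * t ** 3)) * u * math.exp(-u * u / (2.0 * t)) * h * h
        a -= h
    b += h
target = 0.5 * math.erfc((2.0 * b0 - a0) / math.sqrt(2.0 * t))   # Phi((a0-2b0)/sqrt(t))
print(total, target)
assert abs(total - target) < 1e-3
```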
*
(3.15) Exercise. 1°) Let a < 0 < b; prove that for F ⊂ ]−∞, a] and t > 0,

  P[T_b < T_a, B_t ∈ F] = P[B_t ∈ σ_b F] − P[T_a < T_b, B_t ∈ σ_b F]

where σ_b F = {2b − y, y ∈ F}.
2°) In the notation of Exercise (3.14), prove that for every Borel subset E of [a, b],

  P[a ≤ inf_{s≤t} B_s, sup_{s≤t} B_s ≤ b, B_t ∈ E] = ∫_E k(x) dx

where

  k(x) = (2πt)^{-1/2} Σ_{k=−∞}^{+∞} { exp(−(x + 2k(b − a))²/2t) − exp(−(x − 2b + 2k(b − a))²/2t) }.

[Hint: P[T_a < T_b, T_a ≤ t, B_t ∈ E] = P[T_a < T_b, B_t ∈ σ_a E]. Apply repeatedly the formula of 1°) to the right-hand side.]
3°) Write down the laws of B_t* = sup_{s≤t} |B_s| and τ_a = inf{t : |B_t| > a}. (This can be done also without using 2°.)
(3.16) Exercise. Let X be the canonical version of a Markov process with transition semi-group P_t. If for every stopping time T, every initial distribution ν and every f ∈ ℬ_+,

  E_ν[f(X_{T+t}) | ℱ_T] = P_t f(X_T)  on {X_T ≠ Δ},

then X has the strong Markov property.
(3.17) Exercise. Prove that (B_t, S_t) is a Markov process with values in E = {(a, b); a ≤ b, b ≥ 0} and, using 2°) in Exercise (3.14), compute its transition function.
(3.18) Exercise. For the standard linear BM, prove that

  lim_{t→∞} √t P[B_s ≤ 1, ∀s ≤ t] = √(2/π).
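Since P[B_s ≤ 1, ∀s ≤ t] = P[S_t ≤ 1] = P[|B_t| ≤ 1] by the reflection principle, this limit can be checked numerically (our illustrative sketch):

```python
# Numeric check: sqrt(t) * P[S_t <= 1] = sqrt(t) * P[|B_t| <= 1]
#              = sqrt(t) * erf(1/sqrt(2t)) -> sqrt(2/pi) as t -> infinity.
import math

target = math.sqrt(2.0 / math.pi)
for t in (1e2, 1e4, 1e6):
    print(t, math.sqrt(t) * math.erf(1.0 / math.sqrt(2.0 * t)))
assert abs(math.sqrt(1e6) * math.erf(1.0 / math.sqrt(2e6)) - target) < 1e-4
```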
(3.19) Exercise. 1°) Let X be a Feller process, T a finite stopping time. Prove that any ℱ_∞-measurable and positive r.v. Z may be written φ(ω, θ_T(ω)) where φ is ℱ_T ⊗ ℱ_∞-measurable. Then

  E_ν[Z | ℱ_T](ω) = ∫ φ(ω, ω') P_{X_T(ω)}(dω')  P_ν-a.s.

2°) Let S be ≥ 0 and ℱ_T-measurable. For a positive Borel function f, prove that

  E_ν[f(X_{T+S}) | ℱ_T] = P_S f(X_T)  P_ν-a.s.

This can be proved using 1°) or directly from the strong Markov property.
3°) Write down the proof of Proposition (3.7) by using 2°) with T = T_a.
#
(3.20) Exercise (First Arcsine law). The questions 3°) through 5°) may be solved independently of 1°) and 2°).
1°) For a real number u, let d_u = u + T_0 ∘ θ_u as in the proof of Proposition (3.12). Using the BM (B_{t+u} − B_u, t ≥ 0), prove that d_u (d)= u + B_u² · T_1 where T_1 is independent of B_u. Hence d_u (d)= u(1 + C²) where C is a Cauchy variable with parameter 1.
[Hint: d_u = u + T̃_{−B_u} where T̃ refers to the BM (B_{t+u} − B_u).]
2°) Prove that the r.v. g_1 = sup{t ≤ 1 : B_t = 0} has the density

  (π√(y(1 − y)))^{-1} on [0, 1].

[Hint: {g_1 < u} = {d_u > 1}.]
3°) Use the strong Markov property and the properties of hitting times recalled before Theorem (2.17) to give another proof of 2°).
4°) Let d_1 = inf{t > 1 : B_t = 0}; by the same arguments as in 3°), prove that the pair (g_1, d_1) has the density (1/2π) y^{-1/2}(z − y)^{-3/2} on {0 ≤ y ≤ 1 ≤ z}.
5°) Compute the law of d_1 − g_1. In the language of Chap. XII, this is the law of the length of the excursion straddling 1 (see Exercise (3.7) in Chap. XII). For more about g_1 and d_1, see the following Exercise (3.23).
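The chain of identities behind 1°) and 2°) can be verified in closed form (an illustrative check of ours): since {g_1 < u} = {d_u > 1} and d_u (d)= u(1 + C²), one gets P[g_1 < u] = P[C² > (1 − u)/u] = 1 − (2/π)arctan√((1 − u)/u), which must agree with the arcsine distribution function (2/π)arcsin√u:

```python
# Check that 1 - (2/pi) arctan(sqrt((1-u)/u))  (from d_u = u(1 + C^2) in law)
# coincides with the arcsine law (2/pi) arcsin(sqrt(u)).
import math

for u in (0.1, 0.25, 0.5, 0.9):
    via_cauchy = 1.0 - (2.0 / math.pi) * math.atan(math.sqrt((1.0 - u) / u))
    arcsine = (2.0 / math.pi) * math.asin(math.sqrt(u))
    print(u, via_cauchy, arcsine)
    assert abs(via_cauchy - arcsine) < 1e-12
```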
(3.21) Exercise. Let η be a Bernoulli r.v. taking the values 1 and −1. Define a family X^x of processes on ℝ ∪ {Δ} by

  X_t^x = x + t  if x < 0 and x + t < 0,
  X_t^x = x + t  if x < 0, x + t ≥ 0 and η = 1,
  X_t^x = Δ      if x < 0, x + t ≥ 0 and η = −1,
  X_t^x = x + t  if x ≥ 0.

Let P_x be the law of X^x. Prove that under P_x, x ∈ ℝ, the canonical process is a strong Markov process which is not a Feller process.
(3.22) Exercise. Let E be the union in ℝ² of the sets {x ≤ 0, y = 0}, {x > 0, y = x} and {x > 0, y = −x}; define a transition function on E by setting:

  for x ≤ 0:  P_t((x, 0), ·) = ε_{(x+t,0)}                          if x + t ≤ 0,
              P_t((x, 0), ·) = ½ ε_{(x+t,x+t)} + ½ ε_{(x+t,−x−t)}   if x + t > 0;
  for x > 0:  P_t((x, x), ·) = ε_{(x+t,x+t)},
              P_t((x, −x), ·) = ε_{(x+t,−x−t)}.

Construct a Markov process X on E with t.f. P_t and prove that it enjoys neither the Blumenthal zero-one law nor the strong Markov property.
[Hint: For the latter, consider the time T = inf{t : X_t ∈ {x > 0, y = x}}.]
(3.23) Exercise. For the standard BM and t > 0, let

  g_t = sup{s < t : B_s = 0},  d_t = inf{s > t : B_s = 0}.

1°) By a simple application of the Markov property, prove that the density of the pair (B_t, d_t) is given by

  (2π)^{-1} |x| (t(s − t)³)^{-1/2} exp(−s x²/2t(s − t)) 1_{(s≥t)}.

2°) By using the time-inversion invariance of Proposition (1.10) in Chap. I, derive from 1°) that the density of the pair (B_t, g_t) is given by

  (2π)^{-1} |x| (s(t − s)³)^{-1/2} exp(−x²/2(t − s)) 1_{(s≤t)}.

Sharper results along these lines will be given in Sect. 3 Chap. XII.
#
(3.24) Exercise. 1°) Denote by (X_t, Y_t) the Brownian motion in ℝⁿ × ℝ started at (0, a) with a > 0. Let S_a = inf{t : Y_t = 0} and prove that the characteristic function of X_{S_a} is exp(−a|u|). In other words, the law of X_{S_a} is the Cauchy law with parameter a, and this generalizes Proposition (3.11); the corresponding density is equal to

  Γ((n + 1)/2) π^{-(n+1)/2} a (|x|² + a²)^{-(n+1)/2}.

2°) If (X_t, Y_t) is started at (x, a), write down the density p_{(x,a)}(z) of X_{S_a}. This density is the Poisson kernel. If f ∈ C_K(ℝⁿ), prove that

  g(x, a) = ∫_{ℝⁿ} p_{(x,a)}(z) f(z) dz

is a harmonic function in ℝⁿ × ]0, ∞[.
#
(3.25) Exercise. Let (X, Y) be the standard planar BM and for a > 0 let τ_a = inf{t : |X_t| = a}. Show that the r.v. Y_{τ_a} has the density

  (2a cosh(πx/2a))^{-1}.
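As a sanity check on the stated density (an illustrative numeric sketch of ours; the grid parameters are arbitrary), one can verify that (2a cosh(πx/2a))^{-1} integrates to 1 over ℝ, using ∫ sech(y) dy = π:

```python
# Numeric check that (2a cosh(pi x / 2a))^{-1} is a probability density:
# substituting y = pi x / (2a) reduces the integral to (1/pi) * int sech(y) dy = 1.
import math

a, h = 1.5, 1e-3
total, x = 0.0, h / 2
while x < 40.0 * a:                 # truncate the tail; sech decays like 2 e^{-y}
    total += 2.0 / (2.0 * a * math.cosh(math.pi * x / (2.0 * a))) * h   # even in x
    x += h
print(total)   # approximately 1
assert abs(total - 1.0) < 1e-3
```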
*
(3.26) Exercise (Local extrema of BM). Let B be the standard linear BM.
1°) Prove that the probability that a fixed real number x be a local extremum of the Brownian path is zero.
2°) For any positive real number r, prove that

  P[{ω : S_r(ω) is a local extremum of t → B_t(ω), t > r}] = 0.

3°) Prove that consequently a.e. Brownian path does not have two equal local extrema. In particular, for every r, there is a.s. at most one s < r such that B_s = S_r.
4°) Show that the set of local extrema of the Brownian path is a.s. countable.
(3.27) Exercise. Derive the exponential inequality of Proposition (1.8) in Chap. II from the reflection principle.

(3.28) Exercise. 1°) Prove that the stopping time U_{a,b} of Exercise (3.14) in Chap. II has a density equal to

  a (2πt³)^{-1/2} exp(−(a − bt)²/2t).

[Hint: Use the scaling property of Exercise (3.14) in Chap. II and the known forms of the density and Laplace transform for T_a.]
2°) Derive therefrom another proof of 5°) in the above mentioned exercise.
(3.29) Exercise. 1°) Let B be the linear BM and T = inf{t ≥ 0 : B_t = 0}. Prove that, for any probability measure ν carried by ]0, ∞[, the process X defined by

  X_t = B_t on {t < T},  X_t = Δ on {t ≥ T}

is a Markov process on ]0, ∞[ with the transition function Q_t of Exercise (1.15). This process can be called the BM killed at 0. As a result, for a > 0, Q_t(a, ]0, ∞[) = P_0(T_a > t); check this against Proposition (3.10).
[Hint: See Exercise (2.30). To find the transition function, use the joint law of (B_t, S_t) found in Exercise (3.14).]
2°) Treat the same question for the BM absorbed at 0, that is X_t = B_{t∧T}.
* (3.30) Exercise (Examples of unbounded martingales of BMO). 1°) If B is the standard linear BM, prove that B¹ (i.e. the BM stopped at time 1) is in BMO and that ‖B¹‖_{BMO₁} = (2/π)^{1/2}. Prove that E[B_1² | ℱ_t] is not in BMO.
2°) If B is the standard linear BM and S is a positive r.v. independent of B, then X_t = B_{t∧S} is a martingale of BMO for the filtration 𝒢_t = σ(S, ℱ_t) if and only if S is a.s. bounded.
**
(3.31) Exercise. If h is a real-valued, continuous and increasing function on ]0, 1[, if h(t)/√t is decreasing and ∫_{0+} t^{-3/2} h(t) exp(−h²(t)/2t) dt < ∞, prove that

  P[B_t ≥ h(t) for some t ∈ ]0, b[] ≤ ∫_{0+}^b (2πt³)^{-1/2} h(t) exp(−h²(t)/2t) dt.

Show that, as a result, the function h belongs to the upper class (Exercise (2.32)).
[Hint: For 0 < a < b and a subdivision (t_k) of ]a, b[,

  P[B_t ≥ h(t) for some t ∈ ]a, b[] ≤ P[T_{h(a)} ≤ a] + Σ_k P[t_{k−1} < T_{h(t_{k−1})} ≤ t_k].]

It is also true, but more difficult to prove, that if the integral diverges, h belongs to the lower class. The criterion thus obtained is known as Kolmogorov's test.
§4. Summary of Results on Lévy Processes
In Sect. 2, we defined the Lévy processes, which include Brownian motion and the Poisson process. In Sect. 3, we found, while studying BM, another example of a Lévy process, namely the process T_{a+}. This is just one of the many examples of Lévy processes cropping up in the study of BM. Thus, it seems worthwhile to pause a little while in order to state without proofs a few facts about Lévy processes which may be used in the sequel. Lévy processes have been widely studied in their own right; if nothing else, their properties often hint at properties of general Markov processes, as will be seen in Chap. VII about infinitesimal generators.
In what follows, we deal only with real-valued Lévy processes. We recall that a probability measure μ on ℝ, or a real-valued r.v. Y with law μ, is said to be infinitely divisible if, for any n ≥ 1, there is a probability measure μ_n such that μ = μ_n^{*n}, or equivalently if Y has the law of the sum of n independent identically distributed random variables. It is easy to see that Gaussian, Poisson or Cauchy variables are infinitely divisible.
Obviously, if X is a Lévy process, then any r.v. X_t is infinitely divisible. Conversely, it was proved by Lévy that any infinitely divisible r.v. Y may be imbedded in a unique convolution semi-group; in other words, there is a Lévy process X such that Y (d)= X_1. This can be proved as follows. By analytical methods, one can show that μ is infinitely divisible if and only if its Fourier transform μ̂ is equal to exp(ψ) with

  ψ(u) = iβu − σ²u²/2 + ∫ ( e^{iux} − 1 − iux/(1 + x²) ) ν(dx),

where β ∈ ℝ, σ ≥ 0, and ν is a Radon measure on ℝ∖{0} such that

  ∫ x²/(1 + x²) ν(dx) < ∞.

This formula is known as the Lévy-Khintchine formula and the measure ν as the Lévy measure. For every t ∈ ℝ_+, exp(tψ) is now clearly the Fourier transform of a probability measure μ_t, and plainly μ_t * μ_s = μ_{t+s} and lim_{t↓0} μ_t = ε_0, which proves Lévy's theorem.
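As a numerical illustration (ours; the Lévy measure ν(dx) = dx/(πx²) of the standard Cauchy law is a classical fact, not derived in the text): taking β = 0 and σ = 0 in the Lévy-Khintchine formula, the integral ∫(cos(ux) − 1)(πx²)^{-1} dx should return ψ(u) = −|u|, matching μ̂_t = exp(−t|u|); the odd compensator term drops out by symmetry.

```python
# Midpoint-rule evaluation of psi(u) = int (cos(ux) - 1) / (pi x^2) dx
# for the Cauchy Levy measure nu(dx) = dx / (pi x^2); expect psi(u) = -|u|.
import math

def psi(u, h=1e-3, cutoff=1000.0):
    s, x = 0.0, h / 2
    while x < cutoff:
        s += 2.0 * (math.cos(u * x) - 1.0) / (math.pi * x * x) * h  # even integrand
        x += h
    return s

print(psi(1.0), psi(2.0))
assert abs(psi(1.0) + 1.0) < 5e-3
assert abs(psi(2.0) + 2.0) < 5e-3
```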
The different terms which appear in the Lévy-Khintchine formula have a probabilistic significance which will be further emphasized in Chap. VII. If σ = 0 and ν = 0, then μ_t = ε_{βt} and the corresponding semi-group is that of translation at speed β; if β = 0 and ν = 0, the semi-group is that of a multiple of BM and the corresponding Lévy process has continuous paths; if β = 0 and σ = 0, we get a "pure jump" process, as is the case for the process T_{a+} of the preceding section. Every Lévy process is obtained as a sum of independent processes of the three types above. Thus, the Lévy measure accounts for the jumps of X, and the knowledge of ν permits a probabilistic construction of X, as is hinted at in Exercise (1.18) of Chap. XII.
Among the infinitely divisible r.v.'s, the so-called stable r.v.'s form a subclass of particular interest.
(4.1) Definition. A r.v. Y is stable if, for every k, there are independent r.v.'s Y_1, ..., Y_k with the same law as Y and constants a_k > 0, b_k such that

  Y_1 + ... + Y_k (d)= a_k Y + b_k.
It can be proved that this equality forces a_k = k^{1/α} where 0 < α ≤ 2. The number α is called the index of the stable law. Stable laws are clearly infinitely divisible. For α = 2, we get the Gaussian r.v.'s; for 0 < α < 2 we have the following characterization of the corresponding function ψ.

(4.2) Theorem. If Y is stable with index α ∈ ]0, 2[, then σ = 0 and the Lévy measure has the density (m_1 1_{(x<0)} + m_2 1_{(x>0)}) |x|^{-(1+α)} with m_1, m_2 ≥ 0.
With each stable r.v. of index α, we may, as we have already pointed out, associate a Lévy process which will also be called a stable process of index α. The process T_{a+} of the last section is thus a stable process of index 1/2. If, in the above result, we make β = 0 and m_1 = m_2, we get the symmetric stable process of order α. In that case, ψ(u) = −c|u|^α where c is a positive parameter. Among those are the linear BM and the Cauchy process, which is the symmetric stable process of index 1 such that μ̂_t = exp(−t|u|). These processes have a scaling invariance property germane to that of BM, namely, for any c > 0, c^{-1}X_{c^α t} has the same law as X_t.
Another interesting subclass of stable processes is that of stable subordinators. Those are the non-decreasing stable processes, or equivalently the stable processes X such that X_t is a.s. ≥ 0. The corresponding stable law is thus carried by [0, ∞[ and may be characterized by its Laplace transform. It turns out that for the index α, this Laplace transform is equal to exp(−cλ^α) for 0 < α < 1; indeed, for α ∈ ]1, 2] this function is not a Laplace transform and there is no stable subordinator of index α (the case α = 1 is obvious). Once again, the process T_{a+} provides us with an example of a stable subordinator of index 1/2.
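This can be illustrated by simulation (our sketch; seed and sample size are arbitrary). Sampling T_a (d)= (a/B_1)² as in Proposition (3.10), the Laplace transform should match exp(−a√(2λ)), i.e. exp(−cλ^{1/2}) with c = a√2, the signature of index 1/2:

```python
# Monte Carlo sketch: E[exp(-lam * T_a)] with T_a = (a/|B_1|)^2 in law
# should match exp(-a * sqrt(2 * lam)), the 1/2-stable Laplace transform.
import math
import random

random.seed(2)
a, lam, n = 1.0, 1.0, 200_000
acc = sum(math.exp(-lam * (a / abs(random.gauss(0.0, 1.0))) ** 2)
          for _ in range(n))
empirical = acc / n
exact = math.exp(-a * math.sqrt(2.0 * lam))
print(empirical, exact)
assert abs(empirical - exact) < 0.01
```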
If τ_a is a stable subordinator of index α ∈ ]0, 1[ vanishing at 0 and X is a Lévy process independent of the process τ, the map a → X_{τ_a} makes sense and is again a Lévy process, as the reader can prove as an exercise. If X is the linear BM, an easy computation shows that X_{τ_a} is a symmetric stable process of index 2α. If, in particular, α = 1/2, X_{τ_a} is a symmetric Cauchy process, which generalizes Proposition (3.11).
(4.3) Exercise. If X is a Lévy process, prove that exp(iuX_t − tψ(u)) is a complex martingale for every real u.

(4.4) Exercise. Derive the scaling invariance property of the process T_{a+} from the scaling invariance property of BM.
*
(4.5) Exercise. Let τ_a, a ≥ 0, be a right-continuous stable subordinator of index α vanishing at zero. Since its paths are right-continuous and increasing, for every positive Borel function f on ℝ_+ the Stieltjes integral

  ∫_0^t f(a) dτ_a

makes sense and defines a r.v. τ(f)_t.
1°) What is the necessary and sufficient condition that f must satisfy in order that τ(f)_t < ∞ a.s. for every t? What is the law of τ(f)_t in that case?
2°) Prove that there is a constant c_α such that the process S defined by

  S_t = c_α ∫_0^{t^{1/(1−α)}} a^{-1} dτ_a

has the same law as τ_a, a ≥ 0.
(4.6) Exercise (Harnesses and Lévy processes). Let X be an integrable Lévy process, i.e. such that E[|X_1|] < ∞, and for 0 ≤ s < t define ℱ_{s,t} = σ{X_u, u ≤ s; X_v, v ≥ t}.
1°) Prove that for any reals 0 ≤ c ≤ a < b ≤ d,

  E[(X_b − X_a)/(b − a) | ℱ_{c,d}] = (X_d − X_c)/(d − c).

A process satisfying this condition is called a harness.
2°) For a fixed T > 0, prove that if X is a harness, then

  X_{t∧T} − ∫_0^{t∧T} ((X_T − X_s)/(T − s)) ds,  t ≤ T,

is an (ℱ_{t,T})-martingale. Compare with Exercise (3.18) Chap. IV.
3°) Prove that the process X' defined by X'_t = tX_{1/t}, t > 0, is also a harness.
Notes and Comments

The material covered in this chapter is classical and is kept to the minimum necessary for the understanding of the sequel. We have used the books by Blumenthal-Getoor [1], Meyer [1] and Chung [2]; the latter is an excellent means of getting more acquainted with Markov processes and their potential theory and complements very nicely our own book. The reader may also use volume 4 of Dellacherie-Meyer [1]. For a more advanced and up-to-date exposition of Markov processes we recommend the book of Sharpe [3].

Most results of this chapter may be found in the above sources. Let us merely mention that Exercise (1.16) is taken from Walsh [3] and that Exercise (1.17) 1°) is from Dynkin [1], whereas the other questions were taken from Chung [2]. In connection with this exercise, let us mention that Pitman-Rogers [1] contains another useful criterion for a function of a Markov process to still be a Markov process.

Kolmogorov's test of Exercise (3.31) may be found in Itô-McKean [1]. The equivalence between iii) and iv) in Exercise (2.31) may be found in Jeulin-Yor [2] and Jeulin [4]. Exercise (3.15) is borrowed from Freedman [1].

For advanced texts on Lévy processes we refer to Bertoin [7] and Sato [1].

The notion of harness of Exercise (4.6) is due to Hammersley, and was discussed by Williams in several papers (see Chaumont-Yor [1], Exercise 6.19). The result in question 2°), in the case of Lévy processes, is found in Jacod-Protter [1]. The result in 3°) is due to Williams (unpublished).
Chapter IV. Stochastic Integration
In this chapter, we introduce some basic techniques and notions which will be used throughout the sequel. Once and for all, we consider below a filtered probability space (Ω, ℱ, ℱ_t, P) and we suppose that each ℱ_t contains all the sets of P-measure zero in ℱ. As a result, any limit (almost-sure, in the mean, etc.) of adapted processes is an adapted process; a process which is indistinguishable from an adapted process is adapted.
§1. Quadratic Variations
(1.1) Definition. A process A is increasing (resp. of finite variation) if it is adapted and the paths t → A_t(ω) are finite, right-continuous and increasing (resp. of finite variation) for almost every ω.

We will denote by 𝒱⁺ (resp. 𝒱) the space of increasing (resp. of finite variation) processes. Plainly, 𝒱⁺ ⊂ 𝒱 and conversely, it is easily seen from Sect. 4 Chap. 0 that any element A ∈ 𝒱 can be written A_t = A_t⁺ − A_t⁻ where A⁺ and A⁻ are in 𝒱⁺. Moreover, A⁺ and A⁻ can be chosen so that, for almost every ω, A_t⁺(ω) − A_t⁻(ω) is the minimal decomposition of A_t(ω). The process ∫_0^t |dA|_s = A_t⁺ + A_t⁻ is in 𝒱⁺ and for a.e. ω the measure associated with it is the total variation of that which is associated with A(ω); it is called the variation of A.
One can clearly integrate appropriate functions with respect to the measure associated to A(ω) and thus obtain a "stochastic integral". More precisely, if X is progressively measurable and, for instance, bounded on every interval [0, t] for a.e. ω, one can define for a.e. ω the Stieltjes integral

  (X · A)_t(ω) = ∫_0^t X_s(ω) dA_s(ω).

If ω is in the set where A.(ω) is not of finite variation or X.(ω) is not locally integrable with respect to dA(ω), we put (X · A) = 0. The reader will have no difficulty in checking that the process X · A thus defined is in 𝒱. The hypothesis that X be progressively measurable is precisely made to ensure that X · A is adapted. It is the "stochastic integral" of X with respect to the process A of 𝒱.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
Our goal is now to define a "stochastic integral" with respect to martingales. A clue to the difficulty, already mentioned in the case of BM, is given by the

(1.2) Proposition. A continuous martingale M cannot be in 𝒱 unless it is constant.

Proof. We may suppose that M_0 = 0 and prove that M is identically zero if it is of finite variation. Let V_t be the variation of M on [0, t] and define

  S_n = inf{s : V_s ≥ n};

then the martingale M^{S_n} is of bounded variation. Thus, it is enough to prove the result whenever the variation of M is bounded by a number K.
Let Δ = {t_0 = 0 < t_1 < ... < t_k = t} be a subdivision of [0, t]; we have

  E[M_t²] = E[Σ_i (M_{t_{i+1}}² − M_{t_i}²)] = E[Σ_i (M_{t_{i+1}} − M_{t_i})²]

since M is a martingale. As a result,

  E[M_t²] ≤ E[V_t sup_i |M_{t_{i+1}} − M_{t_i}|] ≤ K E[sup_i |M_{t_{i+1}} − M_{t_i}|];

when the modulus of Δ goes to zero, this quantity goes to zero since M is continuous, hence M = 0 a.s. □
Remark. The reader may find more suggestive the proof outlined in Exercise (1.32).

Because of this proposition, we will not be able to define integrals with respect to M by a path by path procedure. We will have to use a global method in which the notions we are about to introduce play a crucial role. We retain the notation of Sect. 2 Chap. I. If Δ = {t_0 = 0 < t_1 < ...} is a subdivision of ℝ_+ with only a finite number of points in each interval [0, t], we define, for a process X,

  T_t^Δ(X) = Σ_{i=0}^{k−1} (X_{t_{i+1}} − X_{t_i})² + (X_t − X_{t_k})²

where k is such that t_k ≤ t < t_{k+1}; we will write simply T_t^Δ if there is no risk of confusion. We recall from Sect. 2 Chap. I that X is said to be of finite quadratic variation if there exists a process ⟨X, X⟩ such that for each t, T_t^Δ converges in probability to ⟨X, X⟩_t as the modulus of Δ on [0, t] goes to zero. The main result of this section is the

(1.3) Theorem. A continuous and bounded martingale M is of finite quadratic variation and ⟨M, M⟩ is the unique continuous increasing adapted process vanishing at zero such that M² − ⟨M, M⟩ is a martingale.
Proof. Uniqueness is an easy consequence of Proposition (1.2), since if there were two such processes A and B, then A − B would be a continuous martingale of 𝒱 vanishing at zero.
To prove the existence of ⟨M, M⟩, we first observe that, since for t_i < s < t_{i+1}

  T_s^Δ(M) = Σ_{j<i} (M_{t_{j+1}} − M_{t_j})² + (M_s − M_{t_i})²,

it is easily proved that

(1.1)  E[T_t^Δ(M) − T_s^Δ(M) | ℱ_s] = E[(M_t − M_s)² | ℱ_s] = E[M_t² − M_s² | ℱ_s].

As a result, M_t² − T_t^Δ(M) is a continuous martingale. In the sequel, we write T^Δ instead of T^Δ(M).
We now fix a > 0 and we are going to prove that if {Δ_n} is a sequence of subdivisions of [0, a] such that |Δ_n| goes to zero, then {T_a^{Δ_n}} converges in L².
If Δ and Δ' are two subdivisions, we call ΔΔ' the subdivision obtained by taking all the points of Δ and Δ'. By eq. (1.1) the process X = T^Δ − T^{Δ'} is a martingale and, by eq. (1.1) again, applied to X instead of M, we have

  E[(T_a^Δ − T_a^{Δ'})²] = E[T_a^{ΔΔ'}(X)].

Because (x + y)² ≤ 2(x² + y²) for any pair (x, y) of real numbers,

  T_a^{ΔΔ'}(X) ≤ 2 { T_a^{ΔΔ'}(T^Δ) + T_a^{ΔΔ'}(T^{Δ'}) }

and to prove our claim, it is enough to show that E[T_a^{ΔΔ'}(T^Δ)] converges to 0 as |Δ| + |Δ'| goes to zero.
Let then s_k be in ΔΔ' and t_l be the rightmost point of Δ such that t_l ≤ s_k < s_{k+1} ≤ t_{l+1}; we have

  T_{s_{k+1}}^Δ − T_{s_k}^Δ = (M_{s_{k+1}} − M_{t_l})² − (M_{s_k} − M_{t_l})² = (M_{s_{k+1}} − M_{s_k})(M_{s_{k+1}} + M_{s_k} − 2M_{t_l}),

and consequently,

  T_a^{ΔΔ'}(T^Δ) ≤ ( sup_k |M_{s_{k+1}} + M_{s_k} − 2M_{t_l}|² ) T_a^{ΔΔ'}.

By Schwarz's inequality,

  E[T_a^{ΔΔ'}(T^Δ)] ≤ E[sup_k |M_{s_{k+1}} + M_{s_k} − 2M_{t_l}|⁴]^{1/2} E[(T_a^{ΔΔ'})²]^{1/2}.

Whenever |Δ| + |Δ'| tends to zero, the first factor goes to zero because M is continuous; it is therefore enough to prove that the second factor is bounded by a constant independent of Δ and Δ'. To this end, we write with a = t_n,
  (T_a^Δ)² = 2 Σ_{k=1}^n (T_a^Δ − T_{t_k}^Δ)(T_{t_k}^Δ − T_{t_{k−1}}^Δ) + Σ_{k=1}^n (M_{t_k} − M_{t_{k−1}})⁴.

Because of eq. (1.1), we have E[T_a^Δ − T_{t_k}^Δ | ℱ_{t_k}] = E[(M_a − M_{t_k})² | ℱ_{t_k}] and consequently

  E[(T_a^Δ)²] = 2 Σ_{k=1}^n E[(M_a − M_{t_k})²(T_{t_k}^Δ − T_{t_{k−1}}^Δ)] + Σ_{k=1}^n E[(M_{t_k} − M_{t_{k−1}})⁴]
              ≤ E[(2 sup_k |M_a − M_{t_k}|² + sup_k |M_{t_k} − M_{t_{k−1}}|²) T_a^Δ].

Let C be a constant such that |M| ≤ C; by eq. (1.1), it is easily seen that E[T_a^Δ] ≤ 4C² and therefore

  E[(T_a^Δ)²] ≤ 12 C² E[T_a^Δ] ≤ 48 C⁴.

We have thus proved that for any sequence {Δ_n} such that |Δ_n| → 0, the sequence {T_a^{Δ_n}} has a limit ⟨M, M⟩_a in L², hence in probability. It remains to prove that ⟨M, M⟩_a may be chosen within its equivalence class in such a way that the resulting process ⟨M, M⟩ has the required properties.
Let {Δ_n} be as above; by Doob's inequality applied to the martingale T^{Δ_n} − T^{Δ_m},

  E[sup_{t≤a} |T_t^{Δ_n} − T_t^{Δ_m}|²] ≤ 4 E[(T_a^{Δ_n} − T_a^{Δ_m})²].

Since, from a sequence converging in L², one can extract a subsequence converging a.s., there is a subsequence {Δ_{n_k}} such that T_t^{Δ_{n_k}} converges a.s. uniformly on [0, a] to a limit ⟨M, M⟩_t which perforce is a.s. continuous. Moreover, the original sequence might have been chosen such that Δ_{n+1} be a refinement of Δ_n and ∪_n Δ_n be dense in [0, a]. For any pair (s, t) in ∪_n Δ_n such that s < t, there is an n_0 such that s and t belong to Δ_n for any n ≥ n_0. We then have T_s^{Δ_n} ≤ T_t^{Δ_n} and as a result ⟨M, M⟩ is increasing on ∪_n Δ_n; as it is continuous, it is increasing everywhere (although the T^{Δ_n} are not necessarily increasing!).
Finally, that M² − ⟨M, M⟩ is a martingale follows upon passing to the limit in eq. (1.1). The proof is thus complete.
To enlarge the scope of the above result we will need the

(1.4) Proposition. For every stopping time T,

  ⟨M^T, M^T⟩ = ⟨M, M⟩^T.

Proof. By the optional stopping theorem, (M^T)² − ⟨M, M⟩^T is a martingale, so that the result is a consequence of the uniqueness in Theorem (1.3). □
Much as it is interesting, Theorem (1.3) is not sufficient for our purposes; it does not cover, for instance, the case of the Brownian motion B which is not a bounded martingale. Nonetheless, we have seen that B has a "quadratic variation", namely t, and that B_t² − t is a martingale exactly as in Theorem (1.3). We now show how to subsume the case of BM and the case of bounded martingales in a single result by using the fecund idea of localization.
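The statement that B has quadratic variation t can be illustrated numerically (our Monte Carlo sketch; seed and subdivision size are arbitrary): for a simulated path on a fine subdivision of [0, t], the sum of squared increments concentrates near t.

```python
# Sum of squared Brownian increments over a fine subdivision of [0, t]:
# it should be close to t (its variance is 2t^2/n), anticipating <B, B>_t = t.
import math
import random

random.seed(3)
t, n = 1.0, 2 ** 14
qv = sum(random.gauss(0.0, math.sqrt(t / n)) ** 2 for _ in range(n))
print(qv)   # close to t = 1
assert abs(qv - t) < 0.1
```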
(1.5) Definition. An adapted, right-continuous process X is an C-%', P)-local martingale if there exist stopping times Tn, n ~ 1, such that
i) the sequence {Tn} is increasing and limn Tn = +00 a.s.;
ii) for every n, the process XTn 1[1;,>0] is a uniformly integrable (.,¥;, P)-martingale.
We will drop (.91, P) when there is no risk of ambiguity. In condition ii) we can
drop the uniform integrability and ask only that XTn l[Tn>O] be a martingale; indeed,
one can always replace Tn by Tn 1\ n to obtain a u.i. martingale. Likewise, if X is
continuous as will nearly always be in this book, by setting Sn = inf{t : IXtl = n}
and replacing Tn by Tn 1\ Sn, we may assume the martingales in ii) to be bounded.
This will be used extensively in the sequel. In Sect. 3 we will find a host of
examples of continuous local martingales.
We further say that the stopping time T reduces X if XT l[T>o] is a u.i. martingale. This property can be decomposed in two parts if one introduces the process
Y t = X t - Xo: T reduces X if and only if
i) Xo is integrable on {T > OJ;
ii) yT is a u.i. martingale.
A common situation however is that in which Xo is constant and in that case
one does not have to bother with i). This explains why in the sequel we will often
drop the qualifying l[T>o], As an exercise, the reader will show the following
simple properties (see also Exercise (1.30»:
i) if T reduces X and S ::::; T, then S reduces X;
ii) the sum of two local martingales is a local martingale;
iii) if Z is a .9'¥}-measurable r.v. and X is a local martingale then, so is Z X; in
particular, the set of local martingales is a vector space;
iv) a stopped local martingale is a local martingale;
v) a positive local martingale is a supermartingale.
Brownian motion or, more generally, any right-continuous martingale is a local martingale as is seen by taking $T_n = n$, but we stress the fact that local martingales are much more general than martingales and warn the reader against the common mistaken belief that local martingales need only be integrable in order to be martingales. As will be shown in Exercise (2.13) of Chap. V, there exist local martingales possessing strong integrability properties which, nonetheless, are not martingales. However, let us set the
124
Chapter IV. Stochastic Integration
(1.6) Definition. A real-valued adapted process X is said to be of class (D) if the family of random variables $X_T 1_{(T<\infty)}$, where T ranges through all stopping times, is uniformly integrable. It is of class (DL) if for every $a > 0$, the family of random variables $X_T$, where T ranges through all stopping times less than a, is uniformly integrable.
A uniformly integrable martingale is of class (D). Indeed, by Sect. 3 in Chap. II, we then have $X_T 1_{[T<\infty]} = E[X_\infty \mid \mathscr{F}_T] 1_{[T<\infty]}$ and it is known that if Y is integrable on $(\Omega, \mathscr{F}, P)$, the family of conditional expectations $E[Y \mid \mathscr{G}]$, where $\mathscr{G}$ ranges through the sub-$\sigma$-fields of $\mathscr{F}$, is uniformly integrable. But other processes may well be uniformly integrable without being of class (D). For local martingales, we have the
(1.7) Proposition. A local martingale is a martingale if and only if it is of class (DL).
Proof. Left to the reader as an exercise. See also Exercise (1.46).
We now state the result for which the notion of local martingale was introduced.
(1.8) Theorem. If M is a continuous local martingale, there exists a unique increasing continuous process $(M, M)$, vanishing at zero, such that $M^2 - (M, M)$ is a continuous local martingale. Moreover, for every t and for any sequence $\{\Delta_n\}$ of subdivisions of $[0, t]$ such that $|\Delta_n| \to 0$, the r.v.'s
$$\sup_{s \le t} \big|T_s^{\Delta_n}(M) - (M, M)_s\big|$$
converge to zero in probability.
Proof. Let $\{T_n\}$ be a sequence of stopping times increasing to $+\infty$ and such that $X_n = M^{T_n} 1_{[T_n>0]}$ is a bounded martingale. By Theorem (1.3), there is, for each n, a continuous process $A^n$ in $\mathscr{A}^+$ vanishing at zero and such that $X_n^2 - A^n$ is a martingale. Now, $(X_{n+1}^2 - A^{n+1})^{T_n} 1_{[T_n>0]}$ is a martingale and is equal to $X_n^2 - (A^{n+1})^{T_n} 1_{[T_n>0]}$. By the uniqueness property in Theorem (1.3), we have $(A^{n+1})^{T_n} = A^n$ on $[T_n > 0]$ and we may therefore define unambiguously a process $(M, M)$ by setting it equal to $A^n$ on $[T_n > 0]$. Obviously, $(M^{T_n})^2 1_{[T_n>0]} - (M, M)^{T_n}$ is a martingale and therefore $(M, M)$ is the sought-after process. The uniqueness follows from the uniqueness on each interval $[0, T_n]$.
To prove the second statement, let $\delta, \varepsilon > 0$ and t be fixed. One can find a stopping time S such that $M^S 1_{[S>0]}$ is bounded and $P[S \le t] \le \delta$. Since $T^\Delta(M)$ and $(M, M)$ coincide with $T^\Delta(M^S)$ and $(M^S, M^S)$ on $[0, S]$, we have
$$P\Big[\sup_{s \le t} \big|T_s^\Delta(M) - (M, M)_s\big| > \varepsilon\Big] \le \delta + P\Big[\sup_{s \le t} \big|T_s^\Delta(M^S) - (M^S, M^S)_s\big| > \varepsilon\Big]$$
and the last term goes to zero as $|\Delta|$ tends to zero. $\square$
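Numerically, Theorem (1.8) is easy to observe for $M = B$: on a subdivision of mesh $t/n$, the sum $T_t^{\Delta_n}(B) = \sum_i (B_{t_{i+1}} - B_{t_i})^2$ has mean t and variance $2t^2/n$, so it concentrates at $(B, B)_t = t$. A minimal sketch (Python with NumPy; the seed and the grid sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 1.0
for n in (100, 10_000, 1_000_000):
    dB = rng.standard_normal(n) * np.sqrt(t / n)  # Brownian increments on a grid of mesh t/n
    qv = np.sum(dB ** 2)                          # the sum T_t^{Delta_n}(B)
    print(n, qv)                                  # approaches (B, B)_t = t as the mesh shrinks

# for n = 10^6 the standard deviation of qv is about 0.0014
assert abs(qv - t) < 0.02
```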
Theorem (1.8) may still be further extended by polarization.
§ 1. Quadratic Variations
125
(1.9) Theorem. If M and N are two continuous local martingales, there exists a unique continuous process $(M, N)$ in $\mathscr{A}$, vanishing at zero and such that $MN - (M, N)$ is a local martingale. Moreover, for any t and any sequence $\{\Delta_n\}$ of subdivisions of $[0, t]$ such that $|\Delta_n| \to 0$,
$$P\text{-}\lim_n \sup_{s \le t} \big|T_s^{\Delta_n}(M, N) - (M, N)_s\big| = 0.$$
Proof. The uniqueness follows again from Proposition (1.2) after suitable stoppings. Moreover, the process
$$(M, N) = \frac{1}{4}\big[(M + N, M + N) - (M - N, M - N)\big]$$
is easily seen to have the desired properties. $\square$
(1.10) Definition. The process (M, N) is called the bracket of M and N, the process (M, M) the increasing process associated with M or simply the increasing
process of M.
In the following sections, we will give general examples of computation of
brackets; the reader can already look at Exercises (1.36) and (1.44) in this section.
In particular, if M and N are independent, the product MN is a local martingale, hence $(M, N) = 0$ (see Exercise (1.27)).
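On a fixed subdivision, the polarization formula of Theorem (1.9) is an exact algebraic identity between the approximating sums, and for two independent Brownian motions the cross-variation sums are already close to $(M, N) = 0$. A hedged sketch (Python with NumPy; the correlation 0.6 and the grid are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 200_000, 1.0 / 200_000
dB1 = rng.standard_normal(n) * np.sqrt(dt)
dB2 = rng.standard_normal(n) * np.sqrt(dt)
dM, dN = dB1, 0.6 * dB1 + 0.8 * dB2         # (M, N)_1 = 0.6 in the limit

cross = np.sum(dM * dN)                      # direct cross-variation sum
qv = lambda d: np.sum(d ** 2)
polar = 0.25 * (qv(dM + dN) - qv(dM - dN))   # polarization formula of Theorem (1.9)

assert abs(polar - cross) < 1e-10            # the two discrete sums agree identically
assert abs(cross - 0.6) < 0.05               # and both approximate (M, N)_1 = 0.6
assert abs(np.sum(dB1 * dB2)) < 0.05         # independent BMs: bracket near 0
```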
(1.11) Proposition. If T is a stopping time,
$$(M^T, N^T) = (M, N^T) = (M, N)^T.$$
Proof. This is an obvious consequence of the last part of Theorem (1.9). As an exercise, the reader may also observe that $M^T N^T - (M, N)^T$ and $M^T(N - N^T)$ are local martingales, hence by difference, so is $M^T N - (M, N)^T$. $\square$
The properties of the bracket operation are reminiscent of those of a scalar product. The map which sends a pair of local martingales to their bracket $(M, N)$ is bilinear, symmetric and $(M, M) \ge 0$; it is also non-degenerate as is shown by the following
(1.12) Proposition. $(M, M) = 0$ if and only if M is constant, that is, $M_t = M_0$ a.s. for every t.
Proof. By Proposition (1.11), it is enough to consider the case of a bounded M and then, by Theorem (1.3), $E[(M_t - M_0)^2] = E[(M, M)_t]$; the result follows immediately. $\square$
This property may be extended in the following way.
(1.13) Proposition. The intervals of constancy are the same for M and for $(M, M)$, that is to say, for almost all $\omega$'s, $M_t(\omega) = M_a(\omega)$ for $a \le t \le b$ if and only if $(M, M)_b(\omega) = (M, M)_a(\omega)$.
Proof. We first observe that if M is constant on $[a, b]$, its quadratic variation is obviously constant on $[a, b]$. Conversely, for a rational number q, the process $N_t = M_{t+q} - M_q$ is a $(\mathscr{F}_{t+q})$-local martingale with increasing process $(N, N)_t = (M, M)_{t+q} - (M, M)_q$. The random variable
$$T_q = \inf\{s > 0 : (N, N)_s > 0\}$$
is a $(\mathscr{F}_{t+q})$-stopping time, and for the stopped local martingale $N^{T_q}$, we have
$$(N^{T_q}, N^{T_q})_\infty = (N, N)_{T_q} = (M, M)_{q+T_q} - (M, M)_q = 0.$$
By Proposition (1.12), M is a.s. constant on the interval $[q, q + T_q]$, hence is a.s. constant on all the intervals $[q, q + T_q]$ where q runs through $\mathbb{Q}_+$. Since any interval of constancy of $(M, M)$ is the closure of a countable union of intervals $[q, q + T_q]$, the proof is complete. $\square$
The following inequality will be very useful in defining stochastic integrals. It
shows in particular that $d(M, N)$ is absolutely continuous with respect to $d(M, M)$.
(1.14) Definition. A real-valued process H is said to be measurable if the map $(\omega, t) \to H_t(\omega)$ is $\mathscr{F} \otimes \mathscr{B}(\mathbb{R}_+)$-measurable.
The class of measurable processes is obviously larger than the class of progressively measurable processes.
(1.15) Proposition. For any two continuous local martingales M and N and measurable processes H and K, the inequality
$$\int_0^t |H_s||K_s|\, \big|d(M, N)_s\big| \le \Big(\int_0^t H_s^2\, d(M, M)_s\Big)^{1/2} \Big(\int_0^t K_s^2\, d(N, N)_s\Big)^{1/2}$$
holds a.s. for $t \le \infty$.
Proof. By taking increasing limits, it is enough to prove the inequality for $t < \infty$ and for bounded H and K. Moreover, it is enough to prove the inequality where the left-hand side has been replaced by
$$\Big|\int_0^t H_s K_s\, d(M, N)_s\Big|;$$
indeed, if $j_s$ is a density of $d(M, N)_s$ with respect to $|d(M, N)_s|$ with values in $\{-1, 1\}$ and we replace H by $Hj\operatorname{sgn}(HK)$ in this expression, we get the left-hand side of the statement.
By a density argument, it is enough to prove this for those K's which may be written
$$K = K_0 1_{\{0\}} + K_1 1_{]0,t_1]} + \cdots + K_n 1_{]t_{n-1},t_n]}$$
for a finite subdivision $\{t_0 = 0 < t_1 < \cdots < t_n = t\}$ of $[0, t]$ and bounded measurable r.v.'s $K_i$. By another density argument, we can also take H of the same form and with the same subdivision.
If we now define $(M, N)_s^t = (M, N)_t - (M, N)_s$, we have
$$\big|(M, N)_s^t\big| \le \big((M, M)_s^t\big)^{1/2}\big((N, N)_s^t\big)^{1/2} \quad \text{a.s.}$$
Indeed, almost surely, the quantity
$$(M, M)_s^t + 2r(M, N)_s^t + r^2(N, N)_s^t = (M + rN, M + rN)_s^t$$
is non-negative for every $r \in \mathbb{Q}$, hence by continuity for every $r \in \mathbb{R}$, and our claim follows from the usual quadratic form reasoning.
As a result,
$$\Big|\int_0^t H_s K_s\, d(M, N)_s\Big| \le \sum_i |H_i K_i| \big|(M, N)_{t_i}^{t_{i+1}}\big| \le \sum_i |H_i||K_i| \big((M, M)_{t_i}^{t_{i+1}}\big)^{1/2}\big((N, N)_{t_i}^{t_{i+1}}\big)^{1/2}$$
and using the Cauchy-Schwarz inequality for the summation over i, this is still less than
$$\Big(\int_0^t H_s^2\, d(M, M)_s\Big)^{1/2}\Big(\int_0^t K_s^2\, d(N, N)_s\Big)^{1/2}$$
which completes the proof. $\square$
(1.16) Corollary (Kunita-Watanabe inequality). For every $p > 1$ and $p^{-1} + q^{-1} = 1$,
$$E\Big[\int_0^\infty |H_s||K_s|\, \big|d(M, N)_s\big|\Big] \le \Big\|\Big(\int_0^\infty H_s^2\, d(M, M)_s\Big)^{1/2}\Big\|_p\, \Big\|\Big(\int_0^\infty K_s^2\, d(N, N)_s\Big)^{1/2}\Big\|_q.$$
Proof. Straightforward application of Hölder's inequality. $\square$
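On a fixed subdivision, the discretized inequality of Proposition (1.15) reduces, pathwise, to the Cauchy-Schwarz inequality for the sums used in the proof above. The following sketch (Python with NumPy; the integrands, the correlation and the seed are arbitrary choices) checks it on one simulated path:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 100_000, 1.0 / 100_000
s = np.arange(n) * dt
dB1 = rng.standard_normal(n) * np.sqrt(dt)
dB2 = rng.standard_normal(n) * np.sqrt(dt)
dM, dN = dB1, 0.5 * dB1 + np.sqrt(0.75) * dB2   # two correlated continuous martingales

H = np.cos(3 * s)                                # two measurable integrands
K = np.exp(-s)

# discretized int |H||K| |d(M, N)| versus the product of the two quadratic terms
lhs = np.sum(np.abs(H) * np.abs(K) * np.abs(dM * dN))
rhs = np.sqrt(np.sum(H ** 2 * dM ** 2)) * np.sqrt(np.sum(K ** 2 * dN ** 2))
assert lhs <= rhs + 1e-12                        # Cauchy-Schwarz, pathwise
```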
We now introduce a fundamental class of processes of finite quadratic variation.
(1.17) Definition. A continuous $(\mathscr{F}_t, P)$-semimartingale is a continuous process X which can be written $X = M + A$ where M is a continuous $(\mathscr{F}_t, P)$-local martingale and A a continuous adapted process of finite variation.
As usual, we will often drop $(\mathscr{F}_t, P)$ and we will use the abbreviation cont. semimart. The decomposition into a local martingale and a finite variation process is unique as follows readily from Proposition (1.2); however, if a process X is a continuous semimartingale in two different filtrations $(\mathscr{F}_t)$ and $(\mathscr{G}_t)$, the decompositions may be different even if $\mathscr{F}_t \subset \mathscr{G}_t$ for each t (see Exercise (3.18)).
More generally one can define semimartingales as the sums of local martingales and finite variation processes. It can be proved, but this is outside the scope
of this book, that a semimartingale which is a continuous process is a continuous
semimartingale in the sense of Definition (1.17), namely that there is a decomposition, necessarily unique, into the sum of a continuous local martingale and a
continuous finite variation process.
We shall see many reasons for the introduction of this class of processes, but
we may already observe that their definition recalls the decomposition of many
physical systems into a signal (the f.v. process) and a noise (the local martingale).
(1.18) Proposition. A continuous semimartingale $X = M + A$ has a finite quadratic variation and $(X, X) = (M, M)$.
Proof. If $\Delta$ is a subdivision of $[0, t]$,
$$\Big|\sum_i \big(M_{t_{i+1}} - M_{t_i}\big)\big(A_{t_{i+1}} - A_{t_i}\big)\Big| \le \sup_i \big|M_{t_{i+1}} - M_{t_i}\big| \cdot \mathrm{Var}_t(A),$$
where $\mathrm{Var}_t(A)$ is the variation of A on $[0, t]$, and this converges to zero when $|\Delta|$ tends to zero because of the continuity of M. Likewise
$$\lim_{|\Delta| \to 0} \sum_i \big(A_{t_{i+1}} - A_{t_i}\big)^2 = 0. \qquad \square$$
(1.19) Fundamental remark. Since the process $(X, X)$ is the limit in probability of the sums $T^{\Delta_n}(X)$, it does not change if we replace $(\mathscr{F}_t)$ by another filtration for which X is still a semimartingale and likewise if we change P for a probability measure Q such that $Q \ll P$ and X is still a Q-semimartingale (see Sect. 1 Chap. VIII).
(1.20) Definition. If $X = M + A$ and $Y = N + B$ are two continuous semimartingales, we define the bracket of X and Y by
$$(X, Y) = (M, N) = \frac{1}{4}\big[(X + Y, X + Y) - (X - Y, X - Y)\big].$$
Obviously, $(X, Y)_t$ is the limit in probability of $\sum_i (X_{t_{i+1}} - X_{t_i})(Y_{t_{i+1}} - Y_{t_i})$, and more generally, if H is left-continuous and adapted,
$$P\text{-}\lim_{|\Delta| \to 0} \sup_{s \le t}\Big|\sum_i H_{t_i}\big(X_{t_{i+1}}^s - X_{t_i}^s\big)\big(Y_{t_{i+1}}^s - Y_{t_i}^s\big) - \int_0^s H_u\, d(X, Y)_u\Big| = 0,$$
the proof of which is left to the reader as an exercise (see also Exercise (1.33)).
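Proposition (1.18) and the approximation above can be observed numerically: for $X = B + A$ with A smooth and of finite variation, the discrete quadratic variation of X is driven by the martingale part alone, the A-terms being of order $|\Delta|$. A minimal sketch (Python with NumPy; the choice $A_t = \sin 2\pi t$, the grid and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 500_000, 1.0
dt = t / n
s = np.linspace(0.0, t, n + 1)
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n) * np.sqrt(dt))])
A = np.sin(2 * np.pi * s)           # smooth finite-variation part
X = B + A                           # continuous semimartingale X = M + A

qv = lambda path: np.sum(np.diff(path) ** 2)
# (X, X) = (M, M): the contribution of A disappears in the limit
print(qv(X), qv(B), qv(A))
assert abs(qv(X) - qv(B)) < 0.01
assert qv(A) < 0.001
```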
Finally, between the class of local martingales and that of bounded martingales, there are several interesting classes of processes among which the following ones
will be particularly important in the next section. We will indulge in the usual
confusion between processes and classes of indistinguishable processes in order
to get norms and not merely semi-norms in the discussion below.
(1.21) Definition. We denote by $\mathbb{H}^2$ the space of $L^2$-bounded martingales, i.e. the space of $(\mathscr{F}_t, P)$-martingales M such that
$$\sup_t E[M_t^2] < +\infty.$$
We denote by $H^2$ the subset of $L^2$-bounded continuous martingales, and $H_0^2$ the subset of elements of $H^2$ vanishing at zero.
An $(\mathscr{F}_t)$-Brownian motion is not in $H^2$, but it is when suitably stopped, for instance at a constant time. Bounded martingales are in $\mathbb{H}^2$. Moreover, by Doob's inequality (Sect. 1 in Chap. II), $M_\infty^* = \sup_t |M_t|$ is in $L^2$ if $M \in \mathbb{H}^2$; hence M is u.i. and $M_t = E[M_\infty \mid \mathscr{F}_t]$ with $M_\infty \in L^2$. This sets up a one-to-one correspondence between $\mathbb{H}^2$ and $L^2(\Omega, \mathscr{F}_\infty, P)$, and we have the
(1.22) Proposition. The space $\mathbb{H}^2$ is a Hilbert space for the norm
$$\|M\|_{\mathbb{H}^2} = E[M_\infty^2]^{1/2} = \lim_{t \to \infty} E[M_t^2]^{1/2},$$
and the set $H^2$ is closed in $\mathbb{H}^2$.
Proof. The first statement is obvious; to prove the second, we consider a sequence $\{M^n\}$ in $H^2$ converging to M in $\mathbb{H}^2$. By Doob's inequality,
$$E\Big[\Big(\sup_t |M_t^n - M_t|\Big)^2\Big] \le 4 \|M^n - M\|_{\mathbb{H}^2}^2;$$
as a result, one can extract a subsequence for which $\sup_t |M_t^{n_k} - M_t|$ converges to zero a.s., which proves that $M \in H^2$. $\square$
The mapping $M \to \|M_\infty^*\|_2 = E[(\sup_t |M_t|)^2]^{1/2}$ is also a norm on $\mathbb{H}^2$; it is equivalent to $\|\cdot\|_{\mathbb{H}^2}$ since obviously $\|M\|_{\mathbb{H}^2} \le \|M_\infty^*\|_2$ and by Doob's inequality $\|M_\infty^*\|_2 \le 2\|M\|_{\mathbb{H}^2}$, but it is no longer a Hilbert space norm.
We now study the quadratic variation of the elements of $H^2$.
(1.23) Proposition. A continuous local martingale M is in $H^2$ if and only if the following two conditions hold:
i) $M_0 \in L^2$;
ii) $(M, M)$ is integrable, i.e. $E[(M, M)_\infty] < \infty$.
In that case, $M^2 - (M, M)$ is uniformly integrable and for any pair $S \le T$ of stopping times
$$E[M_T^2 - M_S^2 \mid \mathscr{F}_S] = E[(M_T - M_S)^2 \mid \mathscr{F}_S] = E[(M, M)_T - (M, M)_S \mid \mathscr{F}_S].$$
Proof. Let $\{T_n\}$ be a sequence of stopping times increasing to $+\infty$ and such that $M^{T_n} 1_{[T_n>0]}$ is bounded; we have
$$E\big[M_{T_n \wedge t}^2 1_{[T_n>0]}\big] - E\big[(M, M)_{T_n \wedge t} 1_{[T_n>0]}\big] = E\big[M_0^2 1_{[T_n>0]}\big].$$
If M is in $H^2$ then obviously i) holds and, since $M_\infty^* \in L^2$, we may also pass to the limit in the above equality to get
$$E[M_\infty^2] - E[(M, M)_\infty] = E[M_0^2]$$
which proves that ii) holds.
If, conversely, i) and ii) hold, the same equality yields
$$E\big[M_{T_n \wedge t}^2 1_{[T_n>0]}\big] \le E[(M, M)_\infty] + E[M_0^2] = K < \infty$$
and by Fatou's lemma
$$E[M_t^2] \le \liminf_n E\big[M_{T_n \wedge t}^2 1_{[T_n>0]}\big] \le K$$
which proves that the family of r.v.'s $M_t$ is bounded in $L^2$. Furthermore, the same inequality shows that the set of r.v.'s $M_{T_n \wedge t} 1_{[T_n>0]}$ is bounded in $L^2$, hence uniformly integrable, which allows us to pass to the limit in the equality
$$E\big[M_{t \wedge T_n} 1_{[T_n>0]} \mid \mathscr{F}_s\big] = M_{s \wedge T_n} 1_{[T_n>0]}$$
to get $E[M_t \mid \mathscr{F}_s] = M_s$. The process M is an $L^2$-bounded martingale.
To prove that $M^2 - (M, M)$ is u.i., we observe that
$$\sup_t \big|M_t^2 - (M, M)_t\big| \le (M_\infty^*)^2 + (M, M)_\infty$$
which is an integrable r.v. The last equalities derive immediately from the optional stopping theorem. $\square$
(1.24) Corollary. If $M \in H_0^2$,
$$\|M\|_{\mathbb{H}^2} = \big\|(M, M)_\infty^{1/2}\big\|_2 = E[(M, M)_\infty]^{1/2}.$$
Proof. If $M_0 = 0$, we have $E[M_\infty^2] = E[(M, M)_\infty]$ as is seen in the last proof. $\square$
Remark. The more general comparison between the $L^p$-norms of $M_\infty$ and $(M, M)_\infty^{1/2}$ will be taken up in Sect. 4.
We could have worked in exactly the same way on $[0, t]$ instead of $[0, \infty]$ to get the
(1.25) Corollary. If M is a continuous local martingale, the following two conditions are equivalent:
i) $M_0 \in L^2$ and $E[(M, M)_t] < \infty$;
ii) $\{M_s, s \le t\}$ is an $L^2$-bounded martingale.
Remark. It is not true (see Exercise (2.13) Chap. V) that $L^2$-bounded local martingales are always martingales. Likewise, $E[(M, M)_\infty]$ may be infinite for an $L^2$-bounded cont. loc. mart.
We notice that for $M \in H^2$, simultaneously $(M, M)_\infty$ is in $L^1$ and $\lim_{t \to \infty} M_t$ exists a.s. This is generalized in the following
(1.26) Proposition. A continuous local martingale M converges a.s., as t goes to infinity, on the set $\{(M, M)_\infty < \infty\}$.
Proof. Without loss of generality, we may assume $M_0 = 0$. Then, if $T_n = \inf\{t : (M, M)_t \ge n\}$, the local martingale $M^{T_n}$ is bounded in $L^2$ as follows from Proposition (1.23). As a result, $\lim_{t \to \infty} M_t^{T_n}$ exists a.s. But on $\{(M, M)_\infty < \infty\}$ the stopping times $T_n$ are a.s. infinite from some n on, which completes the proof. $\square$
Remark. The converse statement, that $(M, M)_\infty < \infty$ on the set where $M_t$ converges a.s., will be shown in Chap. V Sect. 1. The reader may also look at Exercise (1.42) in this section.
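Proposition (1.26) can be illustrated with $M_t = \int_0^t e^{-s}\, dB_s$, simulated here by its Riemann sums in anticipation of the stochastic integral of Sect. 2: $(M, M)_t = \int_0^t e^{-2s}\, ds$ stays below $1/2$, and the simulated path indeed settles. A hedged sketch (Python with NumPy; the horizon, grid and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 20.0, 400_000
dt = T / n
s = np.arange(n) * dt
dM = np.exp(-s) * rng.standard_normal(n) * np.sqrt(dt)   # dM_s = e^{-s} dB_s
M = np.concatenate([[0.0], np.cumsum(dM)])
bracket = np.cumsum(np.exp(-2 * s) * dt)                 # (M, M)_t = int_0^t e^{-2s} ds

assert abs(bracket[-1] - 0.5) < 0.01      # (M, M)_infty = 1/2 < infinity
# the path has essentially settled: its oscillation on [10, 20] is tiny
tail = M[n // 2:]
assert tail.max() - tail.min() < 0.01
```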
#
(1.27) Exercise. 1°) If M and N are two independent continuous local martingales (i.e. the $\sigma$-fields $\sigma(M_s, s \ge 0)$ and $\sigma(N_s, s \ge 0)$ are independent), show that $(M, N) = 0$. In particular, if $B = (B^1, \ldots, B^d)$ is a $BM^d$, prove that $(B^i, B^j)_t = \delta_{ij} t$. This can also be proved by observing that $(B^i + B^j)/\sqrt{2}$ and $(B^i - B^j)/\sqrt{2}$ are linear BM's and applying the polarization formula.
2°) If B is a linear BM and T a stopping time, by considering $B^T$ and $B - B^T$, prove that the converse to the result in 1°) is false.
[Hint: T is measurable with respect to the $\sigma$-fields generated by both $B^T$ (observe that $(B^T, B^T)_t = t \wedge T$) and $B - B^T$, which thus are not independent if T is not constant a.s.]
If $(X, Y)$ is a $BM^2$ and T a stopping time, $X^T$ and $Y^T$ provide another example.
#
(1.28) Exercise. If X is a continuous semimartingale, and T a stopping time, then $\hat{X}_t = X_{T+t}$ is a $(\mathscr{F}_{T+t})$-semimartingale. Compute $(\hat{X}, \hat{X})$ in terms of $(X, X)$.
(1.29) Exercise. If $X = (X^1, \ldots, X^d)$ is a vector continuous local martingale, there is a unique process $A \in \mathscr{A}^+$ such that $|X|^2 - A$ is a continuous local martingale.
#
(1.30) Exercise. 1°) If S and T are two stopping times which reduce M, then $S \vee T$ reduces M.
2°) If $(T_n)$ is a sequence of stopping times increasing a.s. to $+\infty$ and if $M^{T_n}$ is a continuous local martingale for every n, then M is a continuous local martingale. This result may be stated by saying that a process which is locally a cont. loc. mart. is a cont. loc. mart.
3°) If M is a $(\mathscr{F}_t)$-cont. loc. mart., the stopping times $S_k = \inf\{t : |M_t| \ge k\}$ reduce M. Use them to prove that M is also a cont. loc. mart. with respect to $(\mathscr{F}_t^M)$ and even $\sigma(M_s, s \le t)$. The same result for semimarts. (Stricker's theorem) is more difficult to prove and much more so in the discontinuous case.
4°) In the setting of Exercise (2.17) Chap. II, prove that if X is a $(\mathscr{F}_t)$-cont. semimart., then one can find $(\mathscr{F}_t^0)$-adapted processes M', A', B' such that
i) M' is a $(\mathscr{F}_t^0)$-cont. loc. mart.;
ii) A' and B' are continuous, A' is of finite variation, B' is increasing;
iii) X is indistinguishable from $X' = M' + A'$;
iv) B' is indistinguishable from $(X, X)$ and $M'^2 - B'$ is a $(\mathscr{F}_t^0)$-cont. loc. mart.
We will write $(X', X')$ for B'.
5°) Prove that $(X', X')$ is the quadratic variation of X' and that the Fundamental Remark (1.19) is still valid. Extend these remarks to the bracket of two cont. semimarts. X and Y.
(1.31) Exercise. If Z is a bounded r.v. and A is a bounded continuous increasing process vanishing at 0, prove that
$$E[Z A_\infty] = E\Big[\int_0^\infty E[Z \mid \mathscr{F}_t]\, dA_t\Big].$$
(1.32) Exercise. (Another proof of Proposition (1.2)). If M is a bounded continuous martingale of finite variation, prove that
$$M_t^2 = M_0^2 + 2\int_0^t M_s\, dM_s$$
(the integral then has a path-by-path meaning) is a martingale. Using the strict convexity of the square function, give another proof of Proposition (1.2).
* (1.33) Exercise. All the processes considered below are bounded. For a subdivision $\Delta = (t_i)$ of $[0, t]$ and $\lambda \in [0, 1]$, we set $t_i^\lambda = t_i + \lambda(t_{i+1} - t_i)$.
1°) If $X = M + A$, $Y = N + B$ are two cont. semimarts. and H is a continuous adapted process, set
$$K_\Delta^\lambda = \sum_i H_{t_i}\Big\{\big(X_{t_i^\lambda} - X_{t_i}\big)\big(Y_{t_i^\lambda} - Y_{t_i}\big) - (X, Y)_{t_i}^{t_i^\lambda}\Big\}.$$
Prove that
$$\lim_{|\Delta| \to 0} \sup_\lambda E\big[(K_\Delta^\lambda)^2\big] = 0.$$
2°) If $\lambda = 1$ or if $d(X, Y)$ is absolutely continuous with respect to the Lebesgue measure, then
$$\lim_{|\Delta| \to 0} \sum_i H_{t_i}\big(X_{t_i^\lambda} - X_{t_i}\big)\big(Y_{t_i^\lambda} - Y_{t_i}\big) = \lambda \int_0^t H_s\, d(X, Y)_s$$
in the $L^2$-sense. In the second case, the convergence is uniform in $\lambda$.
3°) If F is a $C^1$-function on $\mathbb{R}$, prove that
$$P\text{-}\lim_{n \to \infty} \sum_{\Delta_n} \big(F(X_{t_{i+1}}) - F(X_{t_i})\big)^2 = \int_0^t F'(X_s)^2\, d(X, X)_s.$$
(1.34) Exercise. If B is the standard linear BM and S and T are two integrable stopping times with $S \le T$, show that
$$E\big[(B_T - B_S)^2\big] = E\big[B_T^2 - B_S^2\big] = E[T - S].$$
#
*
(1.35) Exercise (Gaussian martingales). If M is a continuous martingale and a Gaussian process, prove that $(M, M)$ is deterministic, i.e. there is a function f on $\mathbb{R}_+$ such that $(M, M)_t = f(t)$ a.s. The converse will be proved in Sect. 1 Chap. V.
(1.36) Exercise. 1°) If M is a continuous local martingale, prove that $M^2$ is of finite quadratic variation and that $(M^2, M^2)_t = 4\int_0^t M_s^2\, d(M, M)_s$.
[Hint: Use 2°) in Exercise (1.33). In the following sections, we will see the profound reasons for which $M^2$ is of finite quadratic variation as well as a simple way of computing $(M^2, M^2)$, but this can already be done here by brute force.]
2°) Let $R^i$, $i = 1, 2, \ldots, r$, be the squares of the moduli of r independent $d_i$-dimensional BM's and $\lambda_i$ be r distinct, non-zero real numbers. We set
$$X_t^{(k)} = \sum_i \lambda_i^k R_t^i, \qquad k = 1, 2, \ldots.$$
Prove that each $X^{(k)}$ is of finite quadratic variation and that
$$\big(X^{(1)}, X^{(k)}\big)_t = 4\int_0^t X_s^{(k+1)}\, ds.$$
Prove that each $R^i$ is adapted to $(\mathscr{F}_t^{X^{(1)}})$.
(1.37) Exercise. Let $W = C(\mathbb{R}_+, \mathbb{R})$, X be the coordinate process and $\mathscr{F}_t^0 = \sigma(X_s, s \le t)$. Prove that the set of probability measures on $(W, \mathscr{F}_\infty^0)$ for which X is a $(\mathscr{F}_t^0)$-local martingale, is a convex set. Is this still true with the space D of cadlag functions in lieu of W?
* (1.38) Exercise. If $X_t(\omega) = f(t)$ for every $\omega$, then X is a continuous semimartingale if and only if f is continuous and of bounded variation on every interval.
[Hint: Write $f(t) = f(0) + M_t + A_t$ for the decomposition of the semimart. f. Pick $a > 0$. Then choose $c > 0$ sufficiently large so that the stopping time
$$T = \inf\Big\{t : |M_t| + \int_0^t |dA|_s \ge c\Big\}$$
satisfies $P[T \ge a] > 0$ and $|f|$ is bounded by c on $[0, a]$. For a finite subdivision $\Delta = (t_i)$ of $[0, a]$, by comparing $S^\Delta(f)$ with the sum
$$S = \sum_{t_i \le T} |f(t_i) - f(t_{i-1})|,$$
prove that $S^\Delta(f)$ is less than $3c / P[T \ge a]$. Another proof is hinted at in Exercise (2.21) and yet another can be based on Stricker's theorem (see the Notes and Comments).]
*#
(1.39) Exercise. Let $(\Omega, \mathscr{F}_t, P)$, $t \in [0, 1]$, be a filtered probability space. For a right-continuous adapted process X and a subdivision $\Delta$ of $[0, 1]$, we set
$$V^\Delta(X) = \sum_{i=1}^n E\Big[\big|E[X_{t_{i+1}} - X_{t_i} \mid \mathscr{F}_{t_i}]\big|\Big].$$
1°) If $\Delta'$ is a refinement of $\Delta$, prove that $V^{\Delta'}(X) \ge V^\Delta(X)$. If
$$V(X) = \sup_\Delta V^\Delta(X) < +\infty,$$
we say that X is a quasimartingale. If $M \in H^2$, prove that $M^2$ is a quasimartingale.
2°) If B is the linear BM, define $(\mathscr{G}_t)$ as $(\mathscr{F}_t \vee \sigma(B_1))_+$ where $(\mathscr{F}_t)$ is the Brownian filtration of Chap. III. Prove that for $0 \le s \le t \le 1$,
$$E[B_t - B_s \mid \mathscr{G}_s] = \frac{t - s}{1 - s}(B_1 - B_s).$$
Prove that $B_t$, $t \in [0, 1]$, is no longer a $(\mathscr{G}_t)$-martingale but that it is a quasimartingale, and compute $V(B)$.
This exercise continues in Exercise (3.18).
**
(1.40) Exercise. (Continuation of Exercise (3.16) of Chap. II on BMO). 1°) If $M \in BMO$, prove that $\|M\|_{BMO_2}$ is the smallest constant C such that
$$E[(M, M)_\infty - (M, M)_T \mid \mathscr{F}_T] \le C^2$$
for every stopping time T.
2°) Prove that, for every T and n,
$$E\big[\big((M, M)_\infty - (M, M)_T\big)^n \mid \mathscr{F}_T\big] \le n!\, \|M\|_{BMO_2}^{2n}.$$
[Hint: One can use Exercise (1.13) Chap. V.]
3°) If $\|M\|_{BMO_2} < 1$, then
$$E\big[\exp\big((M, M)_\infty - (M, M)_T\big) \mid \mathscr{F}_T\big] \le \big(1 - \|M\|_{BMO_2}^2\big)^{-1}.$$
* (1.41) Exercise (A weak definition of BMO). 1°) Let A be a continuous increasing process for which there exist constants $a, b, \alpha, \beta$ such that for any stopping time T
$$P[A_\infty - A_T > a \mid \mathscr{F}_T] \le \alpha, \qquad P[A_\infty - A_T > b \mid \mathscr{F}_T] \le \beta.$$
Prove that
$$P[A_\infty - A_T > a + b \mid \mathscr{F}_T] \le \alpha\beta.$$
[Hint: Use the stopping time $U = \inf\{t : A_t - A_T > b\}$ and prove that the event $\{A_\infty - A_T > b\}$ is in $\mathscr{F}_U$.]
2°) A continuous local martingale is in BMO if and only if one can find two constants a and $\varepsilon > 0$ such that
$$P[(M, M)_\infty - (M, M)_T > a \mid \mathscr{F}_T] \le 1 - \varepsilon$$
for every stopping time T. When this is the case, prove that for $\alpha$ sufficiently small
$$E\big[\exp\big(\alpha\big((M, M)_\infty - (M, M)_T\big)\big) \mid \mathscr{F}_T\big]$$
is bounded.
*
(1.42) Exercise. Let $(\Omega, \mathscr{F}, P)$ be a probability space and $(\mathscr{F}_t)$ a complete and right-continuous filtration. A local martingale on $]0, \infty[$ is an adapted process $M_t$, $t \in ]0, \infty[$, such that for every $\varepsilon > 0$, $M_{\varepsilon+t}$, $t \ge 0$, is a continuous local martingale with respect to $(\mathscr{F}_{\varepsilon+t})$. The set of these processes will be denoted by $\mathscr{M}$.
1°) If $A \in \mathscr{F}_0$ and $M \in \mathscr{M}$, then $1_A M \in \mathscr{M}$. If T is a $(\mathscr{F}_t)$-stopping time and $M \in \mathscr{M}$, then $M^T 1_{[T>0]} \in \mathscr{M}$.
2°) For any $M \in \mathscr{M}$, there is a unique random measure $(M)(\omega, \cdot)$ on $]0, \infty[$ such that for every $\varepsilon > 0$
$$M_{\varepsilon+t}^2 - (M)\big(\omega, ]\varepsilon, \varepsilon + t]\big), \quad t \ge 0,$$
is a $(\mathscr{F}_{\varepsilon+t})$-continuous local martingale.
3°) For any $M \in \mathscr{M}$, prove that the two sets $A = \{\omega : \lim_{t \downarrow 0} M_t(\omega) \text{ exists in } \mathbb{R}\}$ and $B = \{\omega : (M)(\omega, ]0, 1]) < \infty\}$ are a.s. equal and, furthermore, that $1_A M$ is a continuous local martingale in the usual sense.
[Hint: Using 1°), reduce to the case where M is bounded, then use the continuous time version of Theorem (2.3) of Chap. II.]
This exercise continues in Exercise (3.26).
*
(1.43) Exercise. (Continuation of Exercise (3.15) Chap. II). Assume that $f \in L^2([0, 1])$ and set
$$[X, X]_t(\omega) = \big(\Pi f(\omega) - f(\omega)\big)^2\, 1_{[t \ge \omega]}.$$
Prove that $X_t^2 - [X, X]_t$ is a uniformly integrable martingale.
[Hint: Compute $\Pi\big((\Pi f)^2 - 2f(\Pi f)\big)$.]
The reader will notice that, albeit we deal with a discontinuous martingale X, the process $[X, X]$ is the quadratic variation process of X and plays a role similar to that of $(X, X)$ in the continuous case.
(1.44) Exercise. Let X be a positive r.v. independent of a linear Brownian motion B. Let $M_t = B_{tX}$, $t \ge 0$, and $(\mathscr{F}_t^M)$ be the smallest right-continuous and complete filtration with respect to which M is adapted.
1°) Prove that M is a $(\mathscr{F}_t^M)$-local martingale and that it is a martingale if and only if $E[X^{1/2}] < \infty$.
2°) Find the process $(M, M)$.
3°) Generalize the preceding results to $M_t = B_{A_t}$ where A is an increasing continuous process, vanishing at 0 and independent of B.
(1.45) Exercise. Let M and N be two continuous local martingales vanishing at 0 such that $(M, N)^2 = (M, M)(N, N)$. If $R = \inf\{s : (M, M)_s > 0\}$, $S = \inf\{s : (N, N)_s > 0\}$, prove that a.s. either $R \vee S = \infty$ or $R = S$, and there is a $\mathscr{F}_R \cap \mathscr{F}_S$-measurable r.v. $\gamma$ vanishing on $\{R \vee S = \infty\}$ and such that $M = \gamma N$.
(1.46) Exercise. Prove that a local martingale X such that, for every integer N, the process $(X^-)^N$ is of class (D), is a supermartingale. In particular, a positive local martingale is a supermartingale.
#
(1.47) Exercise. Let M and N be two continuous local martingales and T a finite stopping time. Prove that
$$\big|(M, M)_T^{1/2} - (N, N)_T^{1/2}\big| \le (M - N, M - N)_T^{1/2} \quad \text{a.s.}$$
[Hint: The bracket $(M, N)$ has the properties of a scalar product and the result follows from the corresponding "Minkowski" inequality.]
*
(1.48) Exercise. (Continuous local martingales on a stochastic interval). Let
T be a stopping time. We define a continuous local martingale M on [0, T[ as
a process on [0, T[ for which there exists a sequence (Tn) of stopping times
increasing to T and a sequence of continuous martingales M n such that M t = M~
on {t < Tn}. In the sequel, we will always assume T > 0 a.s.
1°) Prove that there exists a unique continuous increasing process (M, M) on
[0, T[ such that M2 - (M, M) is a continuous local martingale on [0, T[. Prove
that M and (M, M) have a.s. the same intervals of constancy.
2°) If E [SUPt<T IMt I] < 00, then limttT M t = M T- exists and is finite a.s. and
if we set M t = M T- for t :::: T, the process Mr, t E lR+ is a uniformly integrable
martingale.
3°) If Mo = 0 and E[(M, M)T] < 00, prove that M may be continued in a
continuous uniformly integrable martingale.
** (1.49) Exercise. (Krickeberg decomposition for loc. marts. Continuation of Exercise (2.18) Chap. II). Let X be a loc. mart. and set
$$N_1(X) = \sup_T E[|X_T|]$$
where T ranges through the family of finite stopping times.
1°) If $(T_n)$ is a sequence of stopping times reducing X and such that $\lim T_n = \infty$ a.s., then $N_1(X) = \sup_n E[|X_{T_n}|]$. A loc. mart. X is bounded in $L^1$ iff $N_1(X) < \infty$.
2°) If $N_1(X) < \infty$, prove that there is a unique pair $(X^{(+)}, X^{(-)})$ of positive loc. marts. such that
i) $X = X^{(+)} - X^{(-)}$,
ii) $N_1(X) = E[X_0^{(+)}] + E[X_0^{(-)}]$.
[Hint: See Exercise (2.18) Chap. II and remember that a positive loc. mart. is a supermartingale.]
3°) If $N_1(X) < \infty$, then $(X_t)$ converges a.s., as t tends to infinity, to an integrable r.v.
§2. Stochastic Integrals
For several reasons, one of which is described at length at the end of Sect. 1 of Chap. VII, it is necessary to define an integral with respect to the paths of BM. The natural idea is to consider the "Riemann sums"
$$\sum_i K_{u_i}\big(B_{t_{i+1}} - B_{t_i}\big)$$
where K is the process to integrate and $u_i$ is a point in $[t_i, t_{i+1}]$. But it is known from integration theory that these sums do not converge pathwise because the paths of B are a.s. not of bounded variation (see Exercise (2.21)). We will prove that the convergence holds in probability, but in a first stage we use $L^2$-convergence and define integration with respect to the elements of $H^2$. The class of integrands is the object of the following
(2.1) Definition. If $M \in H^2$, we call $\mathscr{L}^2(M)$ the space of progressively measurable processes K such that
$$\|K\|_M^2 = E\Big[\int_0^\infty K_s^2\, d(M, M)_s\Big] < +\infty.$$
If, for any $\Gamma \in \mathscr{B}(\mathbb{R}_+) \otimes \mathscr{F}_\infty$, we set
$$P_M(\Gamma) = E\Big[\int_0^\infty 1_\Gamma(s, \omega)\, d(M, M)_s(\omega)\Big]$$
we define a bounded measure $P_M$ on $\mathscr{B}(\mathbb{R}_+) \otimes \mathscr{F}_\infty$ and the space $\mathscr{L}^2(M)$ is nothing else than the space of $P_M$-square integrable, progressively measurable, functions. As usual, $L^2(M)$ will denote the space of equivalence classes of elements of $\mathscr{L}^2(M)$; it is of course a Hilbert space for the norm $\|\cdot\|_M$.
Since those are the processes we are going to integrate, it is worth recalling
that they include all the bounded and left (or right)-continuous adapted processes
and, in particular, the bounded continuous adapted processes.
(2.2) Theorem. Let $M \in H^2$; for each $K \in L^2(M)$, there is a unique element of $H_0^2$, denoted by $K \cdot M$, such that
$$(K \cdot M, N) = K \cdot (M, N)$$
for every $N \in H^2$. The map $K \to K \cdot M$ is an isometry from $L^2(M)$ into $H_0^2$.
Proof. a) Uniqueness. If L and L' are two martingales of $H_0^2$ such that $(L, N) = (L', N)$ for every $N \in H^2$, then in particular $(L - L', L - L') = 0$, which by Proposition (1.12) implies that $L - L'$ is constant, hence $L = L'$.
b) Existence. Suppose first that M is in $H_0^2$. By the Kunita-Watanabe inequality (Corollary (1.16)) and Corollary (1.24), for every N in $H_0^2$ we have
$$\Big|E\Big[\int_0^\infty K_s\, d(M, N)_s\Big]\Big| \le \|N\|_{\mathbb{H}^2}\, \|K\|_M;$$
the map $N \to E[(K \cdot (M, N))_\infty]$ is thus a linear and continuous form on the Hilbert space $H_0^2$ and, consequently, there is an element $K \cdot M$ in $H_0^2$ such that
$$E[(K \cdot M)_\infty N_\infty] = E[(K \cdot (M, N))_\infty]$$
for every $N \in H_0^2$. Let T be a stopping time; the martingales of $H^2$ being u.i., we may write
$$E[(K \cdot M)_T N_T] = E\big[E[(K \cdot M)_\infty \mid \mathscr{F}_T] N_T\big] = E[(K \cdot M)_\infty N_T] \tag{2.1}$$
$$= E\big[(K \cdot M)_\infty N_\infty^T\big] = E\big[(K \cdot (M, N^T))_\infty\big] = E\big[(K \cdot (M, N)^T)_\infty\big] = E[(K \cdot (M, N))_T]$$
which proves, by Proposition (3.5) Chap. II, that $(K \cdot M)N - K \cdot (M, N)$ is a martingale. Furthermore, by eq. (2.1),
$$\|K \cdot M\|_{\mathbb{H}^2}^2 = E[(K \cdot M)_\infty^2] = E[(K^2 \cdot (M, M))_\infty] = \|K\|_M^2$$
which proves that the map $K \to K \cdot M$ is an isometry.
If $N \in H^2$ instead of $H_0^2$, then we still have $(K \cdot M, N) = K \cdot (M, N)$ because the bracket of a martingale with a constant martingale is zero.
Finally, if $M \in H^2$ we set $K \cdot M = K \cdot (M - M_0)$ and it is easily checked that the properties of the statement carry over to that case. $\square$
(2.3) Definition. The martingale $K \cdot M$ is called the stochastic integral of K with respect to M and is also denoted by
$$\int_0^\cdot K_s\, dM_s.$$
It is also called the Itô integral to distinguish it from other integrals defined in Exercise (2.18). The Itô integral is the only one among them for which the resulting process is a martingale.
We stress the fact that the stochastic integral $K \cdot M$ vanishes at 0. Moreover, as a function of t, the process $(K \cdot M)_t$ may also be seen as an antiderivative (cf. Exercise (2.17)).
The reasons for calling $K \cdot M$ a stochastic integral will become clearer in the sequel; here is one of them. We shall denote by $\mathscr{E}$ the space of elementary processes, that is, the processes which can be written
$$K = K_{-1} 1_{\{0\}} + \sum_i K_i 1_{]t_i, t_{i+1}]}$$
where $0 = t_0 < t_1 < t_2 < \cdots$, $\lim_i t_i = +\infty$, the r.v.'s $K_i$ are $\mathscr{F}_{t_i}$-measurable and uniformly bounded, and $K_{-1}$ is $\mathscr{F}_0$-measurable. The space $\mathscr{E}$ is contained in $L^2(M)$. For $K \in \mathscr{E}$, we define the so-called elementary stochastic integral $K \cdot M$ by
$$(K \cdot M)_t = \sum_{i=0}^{n-1} K_i\big(M_{t_{i+1}} - M_{t_i}\big) + K_n\big(M_t - M_{t_n}\big)$$
whenever $t_n \le t < t_{n+1}$. It is easily seen that $K \cdot M \in H_0^2$; moreover, considering subdivisions $\Delta$ including the $t_i$'s, it can be proved, using the definition of the brackets, that for any $N \in H^2$ we have $(K \cdot M, N) = K \cdot (M, N)$.
As a result, the elementary stochastic integral coincides with the stochastic integral constructed in Theorem (2.2). This will be important later to prove a property of convergence of Riemann sums which will lead to explicit computations of stochastic integrals.
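For instance, with $M = B$ and left endpoints $K_i = B_{t_i}$, the elementary sums converge to $\int_0^t B_s\, dB_s = (B_t^2 - t)/2$ (an identity that follows from Itô's formula, or directly from $B^2 - (B, B) = 2\int B\, dB$); choosing right endpoints instead shifts the limit by exactly the quadratic variation t, which is why the choice of evaluation point matters. A minimal numerical sketch (Python with NumPy; the seed and grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n, t = 1_000_000, 1.0
dt = t / n
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n) * np.sqrt(dt))])

# elementary stochastic integral with K_i = B_{t_i} (left endpoints)
ito_sum = np.sum(B[:-1] * np.diff(B))
closed_form = 0.5 * (B[-1] ** 2 - t)       # int_0^t B dB = (B_t^2 - t)/2
assert abs(ito_sum - closed_form) < 0.01

# right endpoints give a different limit: the gap is the quadratic variation t
gap = np.sum(B[1:] * np.diff(B)) - ito_sum
assert abs(gap - t) < 0.01
```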
We now review some properties of the stochastic integral. The first is known
as the property of associativity.
(2.4) Proposition. If K E L 2(M) and H E L 2(K. M) then H K E L 2(M) and
(HK)·M = H·(K·M).
Proof Since (K·M, K·M) = K 2·(M, M), it is clear that HK belongs to L 2(M).
For N E H2, we further have
(HK)·M, N) = HK·(M, N) = H·(K·(M, N)
because of the obvious associativity of Stieltjes integrals, and this is equal to
H·(K·M,N) = (H·(K·M),N);
the uniqueness in Theorem (2.2) ends the proof.
D
The next result shows how stochastic integration behaves with respect to optional stopping; this will be all-important in enlarging the scope of its definition to local martingales.
(2.5) Proposition. If T is a stopping time,
\[ K\cdot M^T = (K\,1_{[0,T]})\cdot M = (K\cdot M)^T. \]
Proof. Let us first observe that M^T = 1_{[0,T]}·M; indeed, for N ∈ H²,
\[ (M^T, N) = (M, N)^T = 1_{[0,T]}\cdot(M, N) = (1_{[0,T]}\cdot M,\; N). \]
Thus, by the preceding proposition, we have on the one hand
\[ K\cdot M^T = K\cdot(1_{[0,T]}\cdot M) = (K\,1_{[0,T]})\cdot M, \]
and on the other hand
\[ (K\cdot M)^T = 1_{[0,T]}\cdot(K\cdot M) = (1_{[0,T]}K)\cdot M, \]
which completes the proof. □
Since the Brownian motion stopped at a fixed time t is in H², if K is a process which satisfies
\[ E\Bigl[\int_0^t K_s^2\,ds\Bigr] < +\infty \quad\text{for all } t, \]
we can define ∫₀ᵗ K_s dB_s for each t, hence on the whole positive half-line, and the resulting process is a martingale although not an element of H². This idea can of course be used for all continuous local martingales.
(2.6) Definition. If M is a continuous local martingale, we call L²_loc(M) the space of classes of progressively measurable processes K for which there exists a sequence (T_n) of stopping times increasing to infinity and such that
\[ E\Bigl[\int_0^{T_n} K_s^2\,d(M, M)_s\Bigr] < +\infty. \]
Observe that L²_loc(M) consists of all the progressive processes K such that
\[ \int_0^t K_s^2\,d(M, M)_s < \infty \quad\text{a.s.} \]
for every t.
(2.7) Proposition. For any K ∈ L²_loc(M), there exists a unique continuous local martingale vanishing at 0, denoted K·M, such that for any continuous local martingale N,
\[ (K\cdot M, N) = K\cdot(M, N). \]
Proof. One can choose stopping times T_n increasing to infinity and such that M^{T_n} is in H² and K^{T_n} ∈ L²(M^{T_n}). Thus, for each n, we can define the stochastic integral X^{(n)} = K^{T_n}·M^{T_n}. But, by Proposition (2.5), X^{(n+1)} coincides with X^{(n)} on [0, T_n]; therefore, one can define unambiguously a process K·M by stipulating that it is equal to X^{(n)} on [0, T_n]. This process is obviously a continuous local martingale and, by localization, it is easily seen that (K·M, N) = K·(M, N) for every local martingale N.
Remark. To prove that a continuous local martingale L is equal to K· M, it is
enough to check the equality (L, N) = K· (M, N) for all bounded N's.
Again, K·M is called the stochastic integral of K with respect to M and is alternatively written
\[ \int_0^{\cdot} K_s\,dM_s. \]
Plainly, Propositions (2.4) and (2.5) carry over to the general case after the obvious changes. Also, if K ∈ ℰ, this stochastic integral again coincides with the elementary stochastic integral. Stieltjes pathwise integrals having been previously mentioned, it is now easy to extend the definition of stochastic integrals to semimartingales.
(2.8) Definition. A progressively measurable process K is locally bounded if there exists a sequence (T_n) of stopping times increasing to infinity and constants C_n such that |K^{T_n}| ≤ C_n.
All continuous adapted processes K are seen to be locally bounded by taking T_n = inf{t : |K_t| ≥ n}. Locally bounded processes are in L²_loc(M) for every continuous local martingale M.
§2. Stochastic Integrals
141
(2.9) Definition. If K is locally bounded and X = M + A is a continuous semimartingale, the stochastic integral of K with respect to X is the continuous semimartingale
K·X = K·M + K·A
where K·M is the integral of Proposition (2.7) and K·A is the pathwise Stieltjes integral with respect to dA. The semimartingale K·X is also written
\[ \int_0^{\cdot} K_s\,dX_s. \]
(2.10) Proposition. The map K ↦ K·X enjoys the following properties:
i) H·(K·X) = (HK)·X for any pair H, K of locally bounded processes;
ii) (K·X)^T = (K 1_{[0,T]})·X = K·X^T for every stopping time T;
iii) if X is a local martingale or a process of finite variation, so is K·X;
iv) if K ∈ ℰ, then, if t_n ≤ t < t_{n+1},
\[ (K\cdot X)_t = \sum_{i=0}^{n-1} K_i\,(X_{t_{i+1}} - X_{t_i}) + K_n\,(X_t - X_{t_n}). \]
Proof. Straightforward.
At this juncture, several important remarks are in order. Although we have used Doob's inequality and L²-convergence in the construction of K·M for M ∈ H², the stochastic integral depends on P only through its equivalence class. This is clear from Proposition (2.7) and the fundamental remark (1.19). It is actually true, and we will see a partial result in this direction in Chap. VIII, that if X is a P-semimartingale and Q ≪ P, then X is also a Q-semimartingale. Since a sequence converging in probability for P converges also in probability for Q, the stochastic integral for P of, say, a bounded process is Q-indistinguishable from its stochastic integral for Q.
Likewise, if we replace the filtration by another one for which X is still a semimartingale, the stochastic integrals of processes which are progressively measurable for both filtrations are the same.
Finally, although we have constructed the stochastic integral by a global procedure, its nature is still somewhat local, as is suggested by the following
(2.11) Proposition. For almost every ω, the function (K·X)·(ω) is constant on any interval [a, b] on which either K·(ω) = 0 or X·(ω) = X_a(ω).
Proof. Only the case where X is a local martingale has to be proved, and it is then an immediate consequence of Proposition (1.13) since K²·(X, X), hence K·X, are then constant on these intervals.
As a result, for K and K′ locally bounded and predictable processes and X and X′ semimartingales, we have (K·X)_b − (K·X)_a = (K′·X′)_b − (K′·X′)_a a.s. on any interval [a, b] on which K = K′ and X· − X_a = X′· − X′_a; this follows from the equality
\[ K\cdot X - K'\cdot X' = K\cdot(X - X') + (K - K')\cdot X'. \]
142
Chapter IV. Stochastic Integration
Remark. That stochastic integrals have a path-by-path significance is also seen in 3°) of Exercise (2.18).
We now turn to a very important property of stochastic integrals, namely the counterpart of the Lebesgue dominated convergence theorem.
(2.12) Theorem. Let X be a continuous semimartingale. If (K^n) is a sequence of locally bounded processes converging to zero pointwise and if there exists a locally bounded process K such that |K^n| ≤ K for every n, then (K^n·X) converges to zero in probability, uniformly on every compact interval.
Proof. The convergence property, which can be stated
\[ P\text{-}\lim_n\ \sup_{s\le t}\,\bigl|(K^n\cdot X)_s\bigr| = 0 \quad\text{for every } t, \]
is clear if X is a process of finite variation. If X is a local martingale and if T reduces X, then (K^n)^T converges to zero in L²(X^T) and, by Theorem (2.2), (K^n·X)^T converges to zero in H². The desired convergence is then easily established by the same argument as in Theorem (1.8). □
The next result on "Riemann sums" is crucial in the following section.
(2.13) Proposition. If K is left-continuous and locally bounded, and (Δ_n) is a sequence of subdivisions of [0, t] such that |Δ_n| → 0, then
\[ \int_0^t K_s\,dX_s = P\text{-}\lim_{n\to\infty}\ \sum_{t_i\in\Delta_n} K_{t_i}\,(X_{t_{i+1}} - X_{t_i}). \]
Proof. If K is bounded, the right-hand side sums are the stochastic integrals of the elementary processes Σ_i K_{t_i} 1_{(t_i, t_{i+1}]}, which converge pointwise to K and are bounded by ‖K‖_∞; therefore, the result follows from the preceding theorem. The general case is obtained by the use of localization. □
(2.14) Exercise. Let X be a continuous semimartingale and bℱ₀ the algebra of bounded ℱ₀-measurable random variables. Prove the bℱ₀-linearity of the map H → H·X, namely, for a and b in bℱ₀,
\[ \int_0^t (aH + bK)\,dX = a\int_0^t H\,dX + b\int_0^t K\,dX. \]
(2.15) Exercise. Let B¹ and B² be two independent linear BM's. For i = 1, ..., 4, define the following operations on continuous progressively measurable processes K:
\[ O_i(K)_t = \int_0^t K_s\,dB^i_s,\quad i = 1, 2, \qquad O_3(K)_t = \int_0^t K_s\,ds, \qquad O_4(K^1, K^2)_t = K^1(t)\,K^2(t). \]
§2. Stochastic Integrals
143
Let 𝒞 be the class of processes obtained from {B¹, B²} by a finite number of operations O_i; we define by induction a real-valued mapping d on 𝒞 by
\[ d(B^1) = d(B^2) = 1/2, \quad d(O_1(K)) = d(O_2(K)) = d(K) + 1/2, \quad d(O_3(K)) = d(K) + 1, \quad d(O_4(K^1, K^2)) = d(K^1) + d(K^2). \]
Prove that for every K ∈ 𝒞, there is a constant C_K, which may be zero, such that
\[ E\bigl[K_t^2\bigr] = C_K\,t^{2d(K)}. \]
(2.16) Exercise. Let f be a locally bounded Borel function on ℝ₊ and B be a BM. Prove that the process
\[ Z_t = \int_0^t f(s)\,dB_s \]
is Gaussian and compute its covariance r(s, t). Prove that exp{Z_t − ½ r(t, t)} is a martingale. This generalizes the example given in Proposition (1.2) iii) of Chap. II.
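Editorial numerical sketch of this exercise, with the arbitrary choice f(s) = cos s: the discretized Wiener integral Z₁ is Gaussian with variance r(1, 1) = ∫₀¹ f²(s) ds, and exp{Z₁ − ½ r(1, 1)} has expectation 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 40_000, 200
dt = 1.0 / n_steps
s = np.arange(n_steps) * dt                 # left endpoints of the grid
f = np.cos(s)                               # a deterministic, locally bounded f

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
Z1 = (f * dB).sum(axis=1)                   # Wiener integral int_0^1 f(s) dB_s
r11 = (f**2).sum() * dt                     # its variance r(1, 1)

print(Z1.var(), r11)                        # the two should agree
print(np.exp(Z1 - 0.5 * r11).mean())        # exponential martingale: mean ~ 1
```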
*
(2.17) Exercise. 1°) Let B be a BM and H an adapted right-continuous bounded process. Prove that for a fixed t
\[ \lim_{h\to 0}\ \frac{\int_t^{t+h} H_s\,dB_s}{B_{t+h} - B_t} = H_t \]
in probability.
The result is also true for H unbounded if it is continuous.
[Hint: One may apply Schwarz's inequality to E[|∫_t^{t+h}(H_s − H_t) dB_s|].]
2°) Let B = (B¹, ..., B^d) be a d-dimensional BM and, for each j, let H^j be a bounded right-continuous adapted process. Prove that for a fixed t,
\[ \frac{\sum_{j=1}^d \int_t^{t+h} H^j_s\,dB^j_s}{B^1_{t+h} - B^1_t} \]
converges in law, as h → 0, to H¹_t + Σ_{j=2}^d H^j_t (N^j/N¹) where (N¹, ..., N^d) is a centered Gaussian r.v. with covariance I_d, independent of (H¹_t, ..., H^d_t).
* (2.18) Exercise. Let X and Y be two continuous semimartingales. For a subdivision Δ of [0, t], a function f ∈ C¹(ℝ) and a probability measure µ on [0, 1], define
\[ S_\Delta^\mu = \sum_{t_i\in\Delta} (Y_{t_{i+1}} - Y_{t_i}) \int_0^1 f\bigl(X_{t_i} + s\,(X_{t_{i+1}} - X_{t_i})\bigr)\,d\mu(s). \]
1°) Prove that
\[ \lim_{|\Delta|\to 0} S_\Delta^\mu = \int_0^t f(X_s)\,dY_s + \bar\mu \int_0^t f'(X_s)\,d(X, Y)_s \]
in probability, with µ̄ = ∫₀¹ s dµ(s). For µ = δ_{1/2} and f(x) = x, this limit is called the Stratonovich integral of X against Y. For µ = δ₀, we get the Ito stochastic integral of f(X) against Y. Observe that this is the only integral for which the resulting process is a local martingale when Y is a local martingale.
[Hint: Use Exercise (1.33).]
For µ = δ₁, the limit is called the backward integral.
2°) If we set
\[ S'^{\,\mu}_\Delta = \sum_i (Y_{t_{i+1}} - Y_{t_i}) \int_0^1 f\bigl(X_{t_i + s(t_{i+1} - t_i)}\bigr)\,d\mu(s) \]
and if d(X, Y) is absolutely continuous with respect to the Lebesgue measure, then S'^µ_Δ has the same limit as S^µ_Δ in probability whenever |Δ| → 0.
3°) If ω = Σ_i f_i(x₁, ..., x_d) dx_i is a closed differential form of class C¹ on an open set U of ℝ^d and X = (X¹, ..., X^d) a vector semimartingale with values in U (i.e. P[∃t : X_t ∉ U] = 0), then
\[ \int_{X(0,t)} \omega = \sum_{i=1}^d \int_0^t f_i(X_s)\,dX^i_s + \frac 12 \sum_{i,j} \int_0^t \frac{\partial f_i}{\partial x_j}(X_s)\,d(X^i, X^j)_s \]
where X(0, t) is the continuous path (X_s(ω), 0 ≤ s ≤ t). We recall that the integral of a closed form ω along a continuous but not necessarily differentiable path γ : [0, t] → ℝ^d is defined as n(γ(t)) − n(γ(0)) where n is a function such that dn = ω in a string of balls which covers γ.
4°) If B is the planar BM, express as a stochastic integral the area swept by
the line segment joining 0 to the point B s , as s varies from 0 to t.
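Editorial numerical sketch of the dependence on µ in 1°), for f(x) = x and X = Y = B. With µ = δ₀ the sums converge to the Ito integral, so ∫₀¹ B dB = (B₁² − 1)/2; with µ = δ_{1/2} and f(x) = x the weight ∫ f(X_{t_i} + s ΔX) dµ(s) is the midpoint (B_{t_i} + B_{t_{i+1}})/2, and the sums telescope exactly to the Stratonovich value B₁²/2.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 400
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

ito = (B[:, :-1] * dB).sum(axis=1)                       # mu = delta_0
strat = (0.5 * (B[:, :-1] + B[:, 1:]) * dB).sum(axis=1)  # mu = delta_{1/2}, f(x) = x

print(np.mean(ito - 0.5 * (B[:, -1] ** 2 - 1.0)))   # Ito value (B_1^2 - 1)/2: ~ 0
print(np.abs(strat - 0.5 * B[:, -1] ** 2).max())    # Stratonovich value B_1^2/2: exact
```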
*
(2.19) Exercise (Fubini-Wiener identity in law). 1°) Let B and C be two independent standard BM's. If φ ∈ L²([0, 1]², ds dt), by using suitable filtrations, give a meaning to the integrals
\[ \int_0^1 dC_u \Bigl(\int_0^1 \varphi(u, s)\,dB_s\Bigr) \quad\text{and}\quad \int_0^1 dB_s \Bigl(\int_0^1 \varphi(u, s)\,dC_u\Bigr) \]
and prove that they are almost-surely equal.
2°) Conclude from 1°) that
\[ \int_0^1 du \Bigl(\int_0^1 \varphi(u, s)\,dB_s\Bigr)^2 \overset{\text{(law)}}{=} \int_0^1 du \Bigl(\int_0^1 \varphi(s, u)\,dB_s\Bigr)^2. \]
3°) If f is a C¹-function on [0, 1] and f(1) = 1, prove that
\[ \int_0^1 ds \Bigl(B_s - \int_0^1 f'(t)\,B_t\,dt\Bigr)^2 \overset{\text{(law)}}{=} \int_0^1 ds\,\bigl(B_s - f(s)B_1\bigr)^2. \]
If, in particular, B is a Brownian bridge,
\[ \int_0^1 ds \Bigl(B_s - \int_0^1 B_t\,dt\Bigr)^2 \overset{\text{(law)}}{=} \int_0^1 ds\,B_s^2. \]
[Hint: Take φ(s, u) = 1_{(u ≤ s)} + f(u) − 1.]
(2.20) Exercise. Prove that in Theorem (2.12) the pointwise convergence of (K^n) may be replaced by suitable convergences in probability.
#
(2.21) Exercise. Let X be a real-valued function on [0, 1]. For a finite subdivision Δ of [0, 1] and h ∈ W = C([0, 1], ℝ), we set
\[ S_\Delta(h) = \sum_{t_i\in\Delta} h(t_i)\,(X_{t_{i+1}} - X_{t_i}). \]
1°) Prove that the map h → S_Δ(h) is a continuous linear form on W with norm
\[ \|S_\Delta\| = \sum_{t_i\in\Delta} |X_{t_{i+1}} - X_{t_i}|. \]
2°) If {S_{Δ_n}(h)} converges to a finite limit for every h ∈ W and any sequence {Δ_n} such that |Δ_n| tends to 0, prove that X is of bounded variation. This shows why the stochastic integral with respect to a cont. loc. mart. cannot be defined in the ordinary way.
[Hint: Apply the Banach-Steinhaus theorem.]
3°) Use the same ideas to solve Exercise (1.38).
#
(2.22) Exercise (Orthogonal martingales). 1°) Two martingales M and N of H², vanishing at 0, are said to be weakly orthogonal if E[M_s N_t] = 0 for every s and t ≥ 0. Prove that the following four properties are equivalent:
i) M and N are weakly orthogonal,
ii) E[M_s N_s] = 0 for every s ≥ 0,
iii) E[(M, N)_s] = 0 for every s ≥ 0,
iv) E[M_T N_s] = 0 for every s ≥ 0 and every stopping time T ≥ s.
2°) The two martingales are said to be orthogonal (see also Exercise (5.11)) if MN is a martingale. Prove that M and N are orthogonal iff E[M_T N_s] = 0 for every s ≥ 0 and every stopping time T ≤ s.
Prove also that M and N are orthogonal iff H·M and N are weakly orthogonal for every bounded predictable process H.
3°) Give examples of weakly orthogonal martingales which are not orthogonal.
[Hint: One can use stochastic integrals with respect to BM.]
4°) If the two-dimensional process (M, N) is Gaussian, and if M and N are weakly orthogonal, prove that they are orthogonal.
Further results relating orthogonality and independence for martingales may be found in Exercise (4.25) Chap. V.
146
Chapter IV. Stochastic Integration
§3. Ito's Formula and First Applications
This section is fundamental. It is devoted to a "change of variables" formula for
stochastic integrals which makes them easy to handle and thus leads to explicit
computations.
Another way of viewing this formula is to say that we are looking for functions which operate on the class of continuous semimartingales, that is, functions F such that F(X_t) is a continuous semimartingale whatever the continuous semimartingale X is. We begin with the special case F(x) = x².
(3.1) Proposition (Integration by parts formula). If X and Y are two continuous semimartingales, then
\[ X_t Y_t = X_0 Y_0 + \int_0^t X_s\,dY_s + \int_0^t Y_s\,dX_s + (X, Y)_t. \]
In particular,
\[ X_t^2 = X_0^2 + 2\int_0^t X_s\,dX_s + (X, X)_t. \]
Proof. It is enough to prove the particular case, which implies the general one by polarization. If Δ is a subdivision of [0, t], we have
\[ X_t^2 - X_0^2 = \sum_i (X_{t_{i+1}} - X_{t_i})^2 + 2\sum_i X_{t_i}\,(X_{t_{i+1}} - X_{t_i}); \]
letting |Δ| tend to zero and using, on one hand, the definition of (X, X), on the other hand Proposition (2.13), we get the desired result.
D
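Editorial sketch: on a discrete grid the integration by parts formula is an exact algebraic identity, X_{t_{i+1}}Y_{t_{i+1}} − X_{t_i}Y_{t_i} = X_{t_i}ΔY + Y_{t_i}ΔX + ΔX ΔY, and the sum of the products ΔX ΔY approximates the bracket. Here X and Y are correlated BMs with (X, Y)_t = ρt; the value ρ = 0.6 and the grid are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, rho = 10_000, 500, 0.6
dt = 1.0 / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(2, n_paths, n_steps))
dX = dW[0]
dY = rho * dW[0] + np.sqrt(1 - rho**2) * dW[1]   # so that (X, Y)_t = rho * t
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dX, axis=1)], axis=1)
Y = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dY, axis=1)], axis=1)

lhs = X[:, -1] * Y[:, -1]
bracket = (dX * dY).sum(axis=1)                  # approximates (X, Y)_1 = rho
rhs = (X[:, :-1] * dY).sum(axis=1) + (Y[:, :-1] * dX).sum(axis=1) + bracket

print(np.abs(lhs - rhs).max())   # exact discrete identity, ~ 0 up to round-off
print(bracket.mean())            # ~ rho
```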
If X and Y are of finite variation, this formula boils down to the ordinary
integration by parts formula for Stieltjes integrals. The same will be true for the
following change of variables formula. Let us also observe that if M is a local martingale, we have, as a result of the above formula,
\[ M_t^2 - (M, M)_t = M_0^2 + 2\int_0^t M_s\,dM_s; \]
we already knew that M² − (M, M) is a local martingale, but the above formula gives us an explicit expression of this local martingale. In the case of BM, we have
\[ B_t^2 - t = 2\int_0^t B_s\,dB_s, \]
which can also be seen as giving us an explicit value for the stochastic integral in the right-hand side. The reader will observe the difference with ordinary integrals in the appearance of the term t. This is due to the quadratic variation.
All this is generalized in the following theorem. We first lay down the
(3.2) Definition and notation. A d-dimensional vector local martingale (resp. vector continuous semimartingale) is an ℝ^d-valued process X = (X¹, ..., X^d) such that each X^i is a local martingale (resp. cont. semimart.). A complex local martingale (resp. complex cont. semimart.) is a ℂ-valued process whose real and imaginary parts are local martingales (resp. cont. semimarts.).
(3.3) Theorem (Ito's formula). Let X = (X¹, ..., X^d) be a continuous vector semimartingale and F ∈ C²(ℝ^d, ℝ); then, F(X) is a continuous semimartingale and
\[ F(X_t) = F(X_0) + \sum_i \int_0^t \frac{\partial F}{\partial x_i}(X_s)\,dX^i_s + \frac 12 \sum_{i,j} \int_0^t \frac{\partial^2 F}{\partial x_i\,\partial x_j}(X_s)\,d(X^i, X^j)_s. \]
Proof. If F is a function for which the result is true, then for any i, the result is true for G(x₁, ..., x_d) = x_i F(x₁, ..., x_d); this is a straightforward consequence of the integration by parts formula. The result is thus true for polynomial functions. By stopping, it is enough to prove the result when X takes its values in a compact set K of ℝ^d. But on K, any F in C²(ℝ^d, ℝ) is the limit in C²(K, ℝ) of polynomial functions. By the ordinary and stochastic dominated convergence theorems (Theorem (2.12)), the theorem is established. □
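Editorial sketch: for F(x) = x³ and X = B, Ito's formula reads B_t³ = 3∫₀ᵗ B_s² dB_s + 3∫₀ᵗ B_s ds, and both sides can be compared path by path on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 10_000, 1000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

lhs = B[:, -1] ** 3                              # F(B_1) - F(B_0) with F(x) = x^3
stoch = (3 * B[:, :-1] ** 2 * dB).sum(axis=1)    # int F'(B) dB (left-endpoint sums)
drift = (3 * B[:, :-1]).sum(axis=1) * dt         # (1/2) int F''(B) ds = 3 int B ds
err = lhs - stoch - drift

print(np.abs(err).mean())   # small discretization error, -> 0 as dt -> 0
```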
Remarks. 1°) The differentiability properties of F may be somewhat relaxed. For instance, if some of the X^i's are of finite variation, F needs only be of class C¹ in the corresponding coordinates; the proof goes through just the same. In particular, if X is a continuous semimartingale, A is an adapted continuous process of finite variation, and ∂²F/∂x² and ∂F/∂y exist and are continuous, then
\[ F(X_t, A_t) = F(X_0, A_0) + \int_0^t \frac{\partial F}{\partial x}(X_s, A_s)\,dX_s + \int_0^t \frac{\partial F}{\partial y}(X_s, A_s)\,dA_s + \frac 12 \int_0^t \frac{\partial^2 F}{\partial x^2}(X_s, A_s)\,d(X, X)_s. \]
2°) One gets another obvious extension when F is defined only on an open set but X takes a.s. its values in this set. We leave the details to the reader as an exercise.
3°) Ito's formula may be written in "differential" form
\[ dF(X_t) = \sum_i \frac{\partial F}{\partial x_i}(X_t)\,dX^i_t + \frac 12 \sum_{i,j} \frac{\partial^2 F}{\partial x_i\,\partial x_j}(X_t)\,d(X^i, X^j)_t. \]
More generally, if X is a vector semimartingale, dY_t = Σ_i H^i_t dX^i_t will mean
\[ Y_t = Y_0 + \sum_i \int_0^t H^i_s\,dX^i_s. \]
In this setting, Ito's formula may be read as "the chain rule for stochastic differentials".
4°) Ito's formula shows precisely that the class of semimartingales is invariant under composition with C²-functions, which gives another reason for the introduction of semimartingales. If M is a local martingale, or even a martingale, F(M) is usually not a local martingale but only a semimartingale.
5°) Let φ be a C¹-function with compact support in ]0, 1[. It is of finite variation, hence may be looked upon as a semimartingale, and the integration by parts formula yields
\[ X_1\varphi(1) = X_0\varphi(0) + \int_0^1 \varphi(s)\,dX_s + \int_0^1 X_s\,\varphi'(s)\,ds + (X, \varphi)_1, \]
which reduces to
\[ \int_0^1 \varphi(s)\,dX_s = -\int_0^1 X_s\,\varphi'(s)\,ds. \]
Stochastic integration thus appears as a random Schwartz distribution, namely the derivative of the continuous function t → X_t(ω) in the sense of distributions. The above formula, a special case of the integration by parts formula, was taken as the definition of stochastic integrals in the earliest stage of the theory of stochastic integration and is useful in some cases (cf. Chap. VIII, Exercise (2.14)). Of course, the modern theory is more powerful in that it deals with much more general integrands.
To some extent, the whole sequel of this book is but an unending series of applications of Ito's formula. That is to say that Ito's formula has revolutionized the study of BM and other important classes of processes. We begin here with a few remarks and a fundamental result.
In the following proposition, we introduce the class of exponential local martingales which turns out to be very important; they are used in many proofs and play a fundamental role in Chap. VIII. For the time being, they provide us with many new examples of local martingales.
(3.4) Proposition. If f is a complex-valued function, defined on ℝ × ℝ₊, and such that ∂²f/∂x² and ∂f/∂y exist, are continuous and satisfy ∂f/∂y + ½ ∂²f/∂x² = 0, then for any cont. local mart. M, the process f(M_t, (M, M)_t) is a local martingale. In particular, for any λ ∈ ℂ, the process
\[ \mathscr E^\lambda(M)_t = \exp\Bigl\{\lambda M_t - \frac{\lambda^2}{2}(M, M)_t\Bigr\} \]
is a local martingale.
For λ = 1, we write simply ℰ(M) and speak of the exponential of M.
Proof. This follows at once by making A = (M, M) in Remark 1 below Theorem (3.3). □
Remarks. 1°) A converse to Proposition (3.4) will be found in Exercise (3.14).
2°) For BM, we already knew that exp{λB_t − λ²t/2} is a martingale. Let us further observe that, for f ∈ L²_loc(ℝ₊), the exponential
\[ \mathscr E^f_t = \exp\Bigl(\int_0^t f(u)\,dB_u - \frac 12 \int_0^t f^2(u)\,du\Bigr) \]
is a martingale; this follows easily from the fact that ∫_s^t f(u) dB_u is a centered Gaussian r.v. with variance ∫_s^t f²(u) du and is independent of ℱ_s. Likewise for BM^d and a d-uple f = (f₁, ..., f_d) of functions in L²_loc(ℝ₊), and for the same reason,
\[ \mathscr E^f_t = \exp\Bigl(\sum_{k=1}^d \int_0^t f_k(s)\,dB^k_s - \frac 12 \sum_{k=1}^d \int_0^t f_k^2(s)\,ds\Bigr) \]
is a martingale. These martingales will be used in the following chapter.
3°) Following the same train of thought as in the preceding remark, one can ask more generally for the circumstances under which exponentials are true martingales (not merely local martingales). This will be studied in great detail in connection with Girsanov's theorem in Chap. VIII. We can already observe the following facts:
a) As already mentioned in Sect. 1, a local martingale which is ≥ 0 is a supermartingale; this can be seen by applying Fatou's lemma in passing to the limit in the equalities
\[ E\bigl[M_{t\wedge T_n} \mid \mathscr F_s\bigr] = M_{s\wedge T_n}. \]
To obtain a martingale, one would have to be able to use Lebesgue's theorem. This will be the case if M is bounded; hence the exponential of a bounded martingale is a martingale.
b) If M₀ = 0 then ℰ^λ(M) is a martingale if and only if E[ℰ^λ(M)_t] ≡ 1. The necessity is clear and the sufficiency comes from the fact that a supermartingale with constant expectation is a martingale.
4°) For a cont. semimart. X, we can equally define ℰ^λ(X)_t = exp{λX_t − (λ²/2)(X, X)_t}, and we still have
\[ \mathscr E^\lambda(X)_t = \mathscr E^\lambda(X)_0 + \int_0^t \lambda\,\mathscr E^\lambda(X)_s\,dX_s. \]
This can be stated as: ℰ^λ(X) is a solution to the stochastic differential equation dY_t = λY_t dX_t. When X₀ = 0, it is in fact the unique solution such that Y₀ = 1 (see Exercise (3.10)).
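Editorial sketch of Remark 4°): the Euler scheme Y_{i+1} = Y_i(1 + ΔB_i) for dY = Y dB, Y₀ = 1, tracks ℰ(B)_t = exp(B_t − t/2) and, like the exponential martingale itself, has expectation 1 (the scheme preserves the mean exactly).

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps = 5_000, 2000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B1 = dB.sum(axis=1)

# Euler scheme for dY = Y dB with Y_0 = 1
Y = np.ones(n_paths)
for i in range(n_steps):
    Y = Y * (1.0 + dB[:, i])

exact = np.exp(B1 - 0.5)            # E(B)_1 = exp(B_1 - 1/2)
print(Y.mean())                     # ~ 1, the constant expectation
print(np.corrcoef(Y, exact)[0, 1])  # close to 1: the scheme tracks E(B)
```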
In the same spirit as in the above result we may also state the
(3.5) Proposition. If B is a d-dimensional BM and f ∈ C²(ℝ₊ × ℝ^d), then
\[ M^f_t = f(t, B_t) - \int_0^t \Bigl(\frac{\partial}{\partial t} + \frac 12 \Delta\Bigr) f(s, B_s)\,ds \]
is a local martingale. In particular, if f is harmonic in ℝ^d then f(B) is a local martingale.
Proof. Because of their independence, the components B^i of B satisfy (B^i, B^j)_t = δ_{ij} t (see Exercise (1.27)). Thus, our claim is a straightforward consequence of Ito's formula. □
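Editorial sketch of the last assertion for d = 2: f(x, y) = eˣ cos y is harmonic, so f(B) is a local martingale (for this f, in fact a true martingale), and E[f(B_t)] stays at f(0, 0) = 1.

```python
import numpy as np

rng = np.random.default_rng(9)
n_paths, t = 200_000, 1.0
B1 = rng.normal(0.0, np.sqrt(t), n_paths)   # the two components of a BM^2 at time t
B2 = rng.normal(0.0, np.sqrt(t), n_paths)

harm = np.exp(B1) * np.cos(B2)   # f(x, y) = e^x cos y has zero Laplacian
print(harm.mean())               # ~ f(0, 0) = 1
```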
The foregoing proposition will be generalized to a large class of Markov processes and will be the starting point of the fundamental method of martingale problems (see Chap. VII). Roughly speaking, the idea is that Markov processes may be characterized by a set of local martingales of the above type. Actually much less is needed in the case of BM, where it is enough to consider M^f for f(x) = x_i and f(x) = x_i x_j. This is the content of the fundamental
(3.6) Theorem (P. Levy's characterization theorem). For an (ℱ_t)-adapted continuous d-dimensional process X vanishing at 0, the following three conditions are equivalent:
i) X is an ℱ_t-Brownian motion;
ii) X is a continuous local martingale and (X^i, X^j)_t = δ_{ij} t for every 1 ≤ i, j ≤ d;
iii) X is a continuous local martingale and, for every d-uple f = (f₁, ..., f_d) of functions in L²(ℝ₊), the process
\[ \exp\Bigl(i \sum_k \int_0^t f_k(s)\,dX^k_s + \frac 12 \sum_k \int_0^t f_k^2(s)\,ds\Bigr) \]
is a complex martingale.
Proof. That i) implies ii) has already been seen (Exercise (1.27)).
Furthermore, if ii) holds, Proposition (3.4) applied with λ = i and M_t = Σ_k ∫₀ᵗ f_k(s) dX^k_s implies that the process in iii) is a local martingale; since it is bounded, it is a complex martingale.
Let us finally assume that iii) holds. Then, if f = ξ 1_{[0,T]} for an arbitrary ξ in ℝ^d and T > 0, the process
\[ \exp\Bigl(i\,\langle \xi, X_t\rangle + \frac{|\xi|^2}{2}\,t\Bigr), \qquad t \le T, \]
is a martingale. For A ∈ ℱ_s, s < t < T, we get
\[ E\bigl[1_A \exp\bigl(i\,\langle \xi, X_t - X_s\rangle\bigr)\bigr] = P(A)\,\exp\bigl(-|\xi|^2 (t - s)/2\bigr). \]
(Here, and below, we use the notation ⟨x, y⟩ for the euclidean scalar product of x and y in ℝ^d, and |x|² = ⟨x, x⟩.)
Since this is true for any ξ ∈ ℝ^d, the increment X_t − X_s is independent of ℱ_s and has a Gaussian distribution with variance (t − s); hence i) holds. □
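Editorial sketch of the theorem: with H_s = (cos B²_s, sin B²_s) one has |H_s| = 1, so X_t = ∫₀ᵗ cos(B²_s) dB¹_s + ∫₀ᵗ sin(B²_s) dB²_s is a continuous local martingale with (X, X)_t = t, hence a BM by the theorem; its simulated marginal at t = 1 is indeed standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_steps = 40_000, 400
dt = 1.0 / n_steps
dB1 = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
dB2 = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B2 = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB2, axis=1)], axis=1)

theta = B2[:, :-1]                 # adapted angle; (cos, sin) has norm 1
X1 = (np.cos(theta) * dB1 + np.sin(theta) * dB2).sum(axis=1)

print(X1.mean(), X1.var())   # ~ 0, ~ 1
print((X1**4).mean())        # ~ 3, the Gaussian fourth moment
```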
(3.7) Corollary. The linear BM is the only continuous local martingale with t as increasing process.
Proof. Stated for d = 1, the above result says that X is a linear BM if and only if X_t and X_t² − t are continuous local martingales. □
Remark. The word continuous is essential; for example, if N is the Poisson process with parameter c = 1, then N_t − t and (N_t − t)² − t are also martingales. (See Exercise (1.14) Chap. II.)
The same principle as in Proposition (3.4) allows in fact to associate many other martingales with a given local martingale M. We begin with a few prerequisites.
The Hermite polynomials h_n are defined by the identity
\[ \exp\Bigl(ux - \frac{u^2}{2}\Bigr) = \sum_{n=0}^\infty \frac{u^n}{n!}\,h_n(x), \qquad u, x \in \mathbb R, \]
whence it is deduced that
\[ h_n(x) = (-1)^n\,e^{x^2/2}\,\frac{d^n}{dx^n}\,e^{-x^2/2}. \]
For a > 0, we also have
\[ \exp\Bigl(ux - \frac{a u^2}{2}\Bigr) = \sum_{n=0}^\infty \frac{u^n}{n!}\,H_n(x, a) \]
if we set H_n(x, a) = a^{n/2} h_n(x/√a); we also set H_n(x, 0) = xⁿ.
(3.8) Proposition. If M is a local martingale and M₀ = 0, the process
\[ L^{(n)}_t = H_n\bigl(M_t, (M, M)_t\bigr) \]
is, for every n, a local martingale and moreover
\[ L^{(n)}_t = n!\int_0^t \int_0^{s_n} \cdots \int_0^{s_2} dM_{s_1} \cdots dM_{s_{n-1}}\,dM_{s_n}. \]
Proof. It is easily checked that (½ ∂²/∂x² + ∂/∂a) H_n(x, a) = 0 and that ∂H_n/∂x = n H_{n−1}; thus Ito's formula implies that
\[ L^{(n)}_t = n \int_0^t L^{(n-1)}_s\,dM_s, \]
which entails that L^{(n)} is a loc. mart. and its representation as a multiple stochastic integral is obtained by induction. □
Remark. The reader is invited to compute explicitly L^{(n)} for small n. For n = 0, 1, 2, one finds the constant 1, M and M² − (M, M), but from n = 3 on, new examples of local martingales are found.
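Editorial sketch for n = 3: H₃(x, a) = x³ − 3ax, so L^{(3)}_t = B_t³ − 3tB_t when M = B. The martingale property makes the increment over [1/2, 1] orthogonal to bounded ℱ_{1/2}-measurable variables; sign(B_{1/2}) is an arbitrary test choice.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths = 200_000
B_half = rng.normal(0.0, np.sqrt(0.5), n_paths)            # B_{1/2}
B_one = B_half + rng.normal(0.0, np.sqrt(0.5), n_paths)    # B_1 = B_{1/2} + indep. increment

H3 = lambda x, a: x**3 - 3.0 * a * x                       # H_3(x, a)
inc = H3(B_one, 1.0) - H3(B_half, 0.5)                     # martingale increment L^(3)_1 - L^(3)_{1/2}

print(inc.mean())                       # ~ 0
print((inc * np.sign(B_half)).mean())   # ~ 0: orthogonal to F_{1/2}-variables
```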
(3.9) Exercise. Prove the following extension of the integration by parts formula. If f(t) is a right-continuous function of bounded variation on any compact interval and X is a continuous semimartingale,
\[ f(t)X_t = f(0)X_0 + \int_0^t f(s)\,dX_s + \int_0^t X_s\,df(s). \]

#
(3.10) Exercise. 1°) If X is a semimartingale and X₀ = 0, prove that ℰ^λ(X)_t is the unique solution of dZ_t = λZ_t dX_t such that Z₀ = 1.
[Hint: If Y is another solution, compute Y ℰ^λ(X)⁻¹ using the integration by parts formula and Remark 2 below Ito's formula.]
2°) Let X and Y be two continuous semimartingales. Compute ℰ(X + Y) and compare it to ℰ(X)ℰ(Y). When does the equality occur? This exercise is generalized in Exercise (2.9) of Chap. IX.
(3.11) Exercise. 1°) If X = M + A and Y = N + B are two cont. semimarts., prove that XY − (X, Y) is a cont. loc. mart. iff X·B + Y·A = 0. In particular, X² − (X, X) is a cont. loc. mart. iff P-a.s. X_s = 0 dA_s-a.e.
2°) If the last condition in 1°) is satisfied, prove that for every C²-function f,
\[ f(X_t) - f(X_0) - f'(0)A_t - \frac 12 \int_0^t f''(X_s)\,d(X, X)_s \]
is a cont. loc. mart.
The class E of semimartingales X which satisfy the last condition in 1°) is considered again in Definition 4.4, Chap. VI.
#
(3.12) Exercise. (Another proof and an extension of Ito's formula). Let X be
a continuous semimart. and g : ~ x (D x ~+) -+ ~ a function such that
(x, u) -+ g(x, w, u) is continuous for every w;
ii) x -+ g(x, w, u) is C 2 for every (w, u);
iii) (w, u) -+ g(x, W, u) is adapted for every x.
i)
1°) Prove that, in the notation of Proposition (2.13),
!at ag/ax(Xu, u)dXu + (1/2) !at a g/ax2(Xu, u)d(X, X)u.
2
2°) Prove that if in addition g(O, w, u) == 0, then
P-lim
'"' g(Xt+l -X(,ti) =
n---+oo ~
I
I
ti Ed. n
3°) Resume the situation of 1°) and assume moreover that g satisfies
iv) for every (x, ω), the map u → g(x, ω, u) is of class C¹ and the derivative is continuous in the variable x;
then prove the following extension of Ito's formula:
\[ g(X_t, t) = g(X_0, 0) + \int_0^t \frac{\partial g}{\partial x}(X_u, u)\,dX_u + \int_0^t \frac{\partial g}{\partial u}(X_u, u)\,du + \frac 12 \int_0^t \frac{\partial^2 g}{\partial x^2}(X_u, u)\,d(X, X)_u. \]
4°) Extend these results to a vector-valued cont. semimart. X.
(3.13) Exercise (Yet another proof of Ito's formula). 1°) Let x be a continuous function which is of finite quadratic variation on [0, t] in the following sense: there exists a sequence (Δ_n) of subdivisions of [0, t] such that |Δ_n| → 0 and the measures
\[ \sum_{t_i\in\Delta_n} (x_{t_{i+1}} - x_{t_i})^2\,\delta_{t_i} \]
converge vaguely to a bounded measure whose distribution function, denoted by (x, x), is continuous. Prove that for a C²-function F,
\[ F(x_t) = F(x_0) + \int_0^t F'(x_s)\,dx_s + \frac 12 \int_0^t F''(x_s)\,d(x, x)_s \]
where ∫₀ᵗ F′(x_s) dx_s = lim_{n→∞} Σ_{t_i∈Δ_n} F′(x_{t_i})(x_{t_{i+1}} − x_{t_i}).
[Hint: Write Taylor's formula up to order two with a remainder r such that r(a, b) ≤ φ(|a − b|)(b − a)², with φ increasing and lim_{c→0} φ(c) = 0.]
2°) Apply the result in 1°) to prove Ito's formula for continuous semimartingales.
#
(3.14) Exercise. If M is an adapted continuous process, A is an adapted continuous process of finite variation and if, for every λ, the process exp{λM_t − (λ²/2)A_t} is a local martingale, then M is a local martingale and (M, M) = A.
[Hint: Take derivatives with respect to λ at λ = 0.]
(3.15) Exercise. If X and Y are two continuous semimartingales, denote by
\[ \int_0^t X_s \circ dY_s \]
the Stratonovich integral defined in Exercise (2.18). Prove that if F ∈ C³(ℝ^d, ℝ) and X = (X¹, ..., X^d) is a vector semimartingale, then
\[ F(X_t) = F(X_0) + \sum_i \int_0^t \frac{\partial F}{\partial x_i}(X_s) \circ dX^i_s. \]

#
(3.16) Exercise (Exponential inequality, also called Bernstein's inequality). If M is a continuous local martingale vanishing at 0, prove that
\[ P\Bigl[\sup_s M_s \ge x,\ (M, M)_\infty \le y\Bigr] \le \exp(-x^2/2y). \]
Derive therefrom that, if there is a constant c such that (M, M)_t ≤ ct for all t, then
\[ P\Bigl[\sup_{s\le t} M_s \ge at\Bigr] \le \exp(-a^2 t/2c). \]
[Hint: Use the maximal inequality for positive supermartingales to carry through the same proof as for Proposition (1.8) in Chap. II.]
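Editorial sketch: for M = B one has (M, M)_t = t, so c = 1 and the second bound reads P[sup_{s≤t} B_s ≥ at] ≤ exp(−a²t/2). By the reflection principle the exact probability is 2P[B_t ≥ at], comfortably below the bound; the values t = 1, a = 1.5 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
n_paths, n_steps, t, a = 50_000, 400, 1.0, 1.5
dt = t / n_steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

p_emp = (B.max(axis=1) >= a * t).mean()   # P[sup_{s<=t} B_s >= a t], discretized
bound = np.exp(-a**2 * t / 2.0)           # exp(-a^2 t / 2c) with c = 1

print(p_emp, bound)   # the empirical probability sits below the bound
```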
* (3.17) Exercise. Let µ be a positive measure on ℝ₊ such that the function
\[ f_\mu(x, t) = \int_0^\infty \exp\Bigl(yx - \frac{y^2}{2}\,t\Bigr)\,d\mu(y) \]
is not everywhere infinite. For ε > 0, define
\[ A(t, \varepsilon) = \inf\{x : f_\mu(x, t) \ge \varepsilon\} \]
and suppose that this is a continuous function of t.
1°) Prove that for any stopping time T of the linear BM,
\[ P\Bigl[\sup_{t\ge T}\,\bigl(B_t - A(t, \varepsilon)\bigr) \ge 0 \Bigm| \mathscr F_T\Bigr] = 1 \wedge \frac{f_\mu(B_T, T)}{\varepsilon}. \]
[Hint: Use the result in Exercise (3.12) Chap. II. To prove the necessary convergence to zero, look at times when B vanishes.]
2°) By suitably altering µ, prove that for h ≥ 0 and b ∈ ℝ,
\[ P\Bigl[\sup_{t\ge T}\,\bigl(B_t - A(t + h, \varepsilon)\bigr) \ge -b \Bigm| \mathscr F_T\Bigr] = 1 \wedge \frac{f_\mu(b + B_T,\, h + T)}{\varepsilon}. \]
3°) If µ is a probability measure, prove that
\[ \sup_{t\ge 0}\,\bigl(B_t - A(t, 1)\bigr) \overset{\text{(law)}}{=} e/Y \]
where e is an exponential r.v. with parameter 1, Y is a r.v. with law µ, and e and Y are independent.
4°) If µ({0}) = 0 and if there is an N > 0 such that ∫ exp(−Ny) dµ(y) < ∞, then, for every n, the following assertions are equivalent:
i) E[(sup_{t≥0}(B_t − A(t, ε)))⁺ⁿ] < ∞;
ii) ∫_{0+} y⁻ⁿ dµ(y) < ∞.
*#
(3.18) Exercise (Brownian Bridges). 1°) Retain the situation and notation of Exercise (1.39) 2°), and prove that
\[ \beta_t = B_t - \int_0^{t\wedge 1} \frac{B_1 - B_s}{1 - s}\,ds \]
is a 𝒢-Brownian motion, independent of B₁. In particular, B is a 𝒢-semimartingale.
2°) If X^x_t = xt + B_t − tB₁ is the Brownian Bridge of Sect. 3 Chap. I, then
\[ X^x_t = \beta_t + \int_0^t \frac{x - X^x_s}{1 - s}\,ds. \]
The same equality obtains directly from 1°) by defining X^x as the BM conditioned to be equal to x at time 1.
The following questions, which are independent of 2°), are designed to give another (see Exercise (3.15) Chap. II) probabilistic proof of Hardy's L²-inequality, namely, if for f ∈ L²([0, 1]) one sets
\[ Hf(x) = \frac 1x \int_0^x f(y)\,dy, \]
then Hf ∈ L²([0, 1]) and ‖Hf‖₂ ≤ 2‖f‖₂.
3°) Prove that if f is in L²([0, 1]), there exists a Borel function F on [0, 1[ such that for any t < 1,
\[ \int_0^t f(u)\,\frac{B_1 - B_u}{1 - u}\,du = \int_0^1 F(v \wedge t)\,dB_v. \]
Then, observe that
\[ \int_0^1 F(v)^2\,dv \le 4 \int_0^1 f^2(u)\,du; \]
then, prove by elementary transformations on the integrals, that this inequality is equivalent to Hardy's L²-inequality.
* (3.19) Exercise. 1°) Let ψ be a Borel function on ]0, 1] such that for every ε > 0
\[ \int_\varepsilon^1 |\psi(u)|\,du < \infty, \]
and define φ(u) = ∫_u^1 ψ(s) ds. If B is the standard linear BM, prove that the limit
\[ \lim_{\varepsilon\to 0} \int_\varepsilon^1 \psi(s)\,B_s\,ds \]
exists in probability if and only if
\[ \int_0^1 \varphi^2(u)\,du < \infty \quad\text{and}\quad \lim_{\varepsilon\to 0} \sqrt{\varepsilon}\,\varphi(\varepsilon) = 0. \]
Compare with Exercise (2.31) of Chap. III.
[Hint: For Gaussian r.v.'s, convergence in probability implies convergence in L².]
2°) Deduce from Exercise (3.18) 3°) that for f ∈ L²([0, 1])
\[ \lim_{t\to 0} \int_t^1 f(u)\,u^{-1}\,B_u\,du \]
exists a.s. If H* is the adjoint of the Hardy operator H (see Exercise (3.18)), prove that for every f ∈ L²([0, 1]),
\[ \lim_{\varepsilon\to 0} \sqrt{\varepsilon}\,H^*f(\varepsilon) = 0. \]
3°) Admit the equivalence between iii) and iv) stated in Exercise (2.31) Chap. III. Show that there exists a positive function f in L²([0, 1]) such that lim_{t→0} ∫_t^1 f(u)u⁻¹B_u du exists a.s. and
\[ \int_0^1 f(u)\,u^{-1}\,|B_u|\,du = \infty \quad\text{a.s.} \]
[Hint: Use f(u) = 1_{[u ≤ 1/2]}\bigl/\bigl(u^{1/2}(-\log u)^{\alpha}\bigr), 1/2 < α ≤ 1.]
4°) Let X be a stationary OU process. Prove that
\[ \lim_{t\to\infty} \int_0^t g(s)\,X_s\,ds \]
exists a.s. and in L² for every g ∈ L²([0, ∞[).
[Hint: Use the representation of X given in Exercise (3.8) of Chap. I and the fact that for β > 0, the map g → (2βu)^{-1/2} g((2β)^{-1} log(1/u)) is an isomorphism from L²([0, ∞[) onto L²([0, 1]).]
Moreover, using the same equivalence as in 3°), prove that if g is a positive function of L¹_loc([0, ∞[) then
\[ \int_0^\infty g(s)\,|X_s|\,ds < \infty \ \text{a.s.} \quad\text{iff}\quad \int_0^\infty g(s)\,ds < \infty. \]
5°) For µ ∈ ℝ, µ ≠ 0, and g locally integrable, prove that the process (e^{iµB_s}) has the same covariance as a suitable stationary OU process X. Conclude that lim_{t→∞} ∫₀ᵗ g(s) e^{iµB_s} ds exists in L² whenever g is in L²([0, ∞[). Show that the a.s. convergence also holds.
* (3.20) Exercise. Let A be a d×d-matrix and B a BM^d(0). Prove that the processes ⟨AB_t, B_t⟩ (where ⟨ , ⟩ is the scalar product in ℝ^d) and ∫₀ᵗ ⟨(A + A*)B_s, dB_s⟩ have the same filtration.
#
(3.21) Exercise. Prove the strong Markov property of BM by means of P. Lévy's characterization theorem. More precisely, if B is an (ℱ_t)-BM, prove that for any starting measure ν and any (ℱ_t)-stopping time T, the process (B_{T+t} − B_T) 1_{(T<∞)} is a standard BM for the conditional probability P_ν(· | T < ∞) and is independent of ℱ_T.
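The statement can be illustrated by simulation. The sketch below (a rough Euler discretisation, not part of the exercise; all parameter choices are arbitrary) takes T to be the first passage over level 1 on a grid; in the discretised model, the increments after T are exactly fresh i.i.d. Gaussian steps, so B_{T+t} − B_T on {T + t ≤ horizon} should look like 𝒩(0, t).

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 5000, 2000, 2e-3      # horizon 4.0, illustrative sizes
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

# T = first passage over level 1 (a stopping time in the discretised model)
hit = B >= 1.0
has_hit = hit.any(axis=1)
T_idx = hit.argmax(axis=1)

# increment B_{T+t} - B_T with t = 0.5, restricted to {T + t <= horizon};
# it is a sum of steps generated after T, so it should be ~ N(0, 0.5)
w = 250                                       # 0.5 / dt steps
ok = has_hit & (T_idx + w < n_steps)
rows = np.nonzero(ok)[0]
incr = B[rows, T_idx[rows] + w] - B[rows, T_idx[rows]]
mean, var = incr.mean(), incr.var()
```

The empirical mean and variance of the post-T increment match the 𝒩(0, t) prediction within Monte Carlo error.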
§3. Ito's Formula and First Applications
157
(3.22) Exercise. Let B be a BM^d and O(t) = (O_j^i(t)) be a progressively measurable process taking its values in the set of d × d orthogonal matrices. Remark that ∫_0^t ||O(s)||^2 ds < ∞ for every t. Prove that the process X defined by
X_t^i = Σ_{j=1}^d ∫_0^t O_j^i(s) dB_j(s)
is a BM^d.
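For a first sanity check one can take O(s) constant, the simplest progressively measurable orthogonal integrand, in which case X_t = O B_t and the claim reduces to the rotation invariance of BM^d. A hedged numerical sketch (d = 2, rotation angle and sample sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_paths, n_steps, dt = 2, 50000, 50, 1.0 / 50

theta = 0.7  # fixed rotation: the simplest orthogonal integrand O(s) = O
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps, d))
# X_t^i = sum_j int_0^t O^i_j(s) dB^j(s); with constant O this is O @ B_t
X1 = dB.sum(axis=1) @ O.T

cov = np.cov(X1.T)  # should be close to the identity: X is again a BM^2 at t = 1
```

The empirical covariance of X_1 is close to the identity matrix, as it must be for a BM^2.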
(3.23) Exercise. Let (X, Y) be a standard BM^2 and for 0 < ρ < 1 put
Z_t = ρX_t + √(1 − ρ^2) Y_t.
1°) Prove that Z is a standard linear BM and compute ⟨X, Z⟩ and ⟨Y, Z⟩.
2°) Prove that X_1 and σ(Z_s, s ≥ 0) are conditionally independent with respect to Z_1.
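A quick numerical illustration of 1°) and of the intuition behind 2°) at time 1: since cov(X_1, Z_s) = ρ(1 ∧ s), the residual X_1 − ρZ_1 is uncorrelated with (hence, by Gaussianity, independent of) the whole path of Z, which is what drives the conditional independence given Z_1. The sketch below (illustrative sizes, time-1 marginals only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 200000, 0.6
X1 = rng.normal(size=n)                      # X_1
Y1 = rng.normal(size=n)                      # Y_1, independent of X
Z1 = rho * X1 + np.sqrt(1.0 - rho**2) * Y1   # Z_1

var_Z = Z1.var()                     # ~ 1: Z_1 has the BM marginal at t = 1
cov_XZ = np.cov(X1, Z1)[0, 1]        # ~ rho, matching <X, Z>_1 = rho
resid = X1 - rho * Z1                # X_1 minus its projection on Z_1
cov_resid = np.cov(resid, Z1)[0, 1]  # ~ 0: the residual carries no information on Z_1
```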
(3.24) Exercise. 1°) If M is a continuous process and A an increasing process, then M is a local martingale with increasing process A if and only if, for every f ∈ C_b^2,
f(M_t) − f(M_0) − (1/2) ∫_0^t f″(M_s) dA_s
is a local martingale.
[Hint: For the sufficiency, use the functions x and x^2 suitably truncated.]
2°) If M and N are two continuous local martingales, the process ⟨M, N⟩ is equal to the process of bounded variation C if and only if, for every f ∈ C_b^2(ℝ^2, ℝ), the process
f(M_t, N_t) − f(M_0, N_0) − (1/2) ∫ f″_{xx}(M_s, N_s) d⟨M, M⟩_s − (1/2) ∫ f″_{yy}(M_s, N_s) d⟨N, N⟩_s − ∫ f″_{xy}(M_s, N_s) dC_s
is a local martingale.
(3.25) Exercise. If M is a continuous local martingale such that M_0 = 0, prove that
{ℰ(M)_∞ = 0} = {⟨M, M⟩_∞ = ∞} a.s.
[Hint: ℰ(M) = ℰ(½M)^2 exp(−¼⟨M, M⟩).]
**
(3.26) Exercise (Continuation of Exercise (1.42)). 1°) Let X = X_0 + M + A be a positive semimartingale such that A is increasing and X_t ≤ c a.s. for every t. Prove that E[A_∞] ≤ c.
2°) Let M ∈ 𝓜 and suppose that M_t ≤ k for every t > 0. Prove that
V_t = (k + 1 − M_t)^{-1} − ∫_0^t (k + 1 − M_s)^{-3} d⟨M⟩_s
is in 𝓜 and that ⟨V⟩_∞ is a.s. finite.
3°) Let M ∈ 𝓜 and suppose that lim sup_{t↓0} M_t < ∞ a.s. Prove that lim_{t↓0} M_t exists a.s.
4°) Prove that for M ∈ 𝓜 and for a.e. ω one of the following three properties holds:
i) lim_{t↓0} M_t(ω) exists in ℝ;
ii) lim_{t↓0} |M_t(ω)| = +∞;
iii) lim inf_{t↓0} M_t(ω) = −∞ and lim sup_{t↓0} M_t(ω) = +∞.
(3.27) Exercise. Let {Δ_n} be a sequence of refining (i.e. Δ_n ⊂ Δ_{n+1}) finite subdivisions of [0, T] such that |Δ_n| → 0; write Δ_n = (0 = t_0 < t_1 < … < t_{p_n} = T).
1°) If F is a continuous function on [0, T], prove that the sequence of measures
Σ_{p=0}^{p_n − 1} (t_{p+1} − t_p)^{-1} (F(t_{p+1}) − F(t_p)) 1_{[t_p, t_{p+1}[}(u) du
converges to F′ in the sense of Schwartz distributions.
2°) Let B be the standard linear BM. Prove that, for f ∈ L^2([0, T]), the sequence of random variables
Σ_{p=0}^{p_n − 1} (t_{p+1} − t_p)^{-1} (∫_{t_p}^{t_{p+1}} f(u) du) (B_{t_{p+1}} − B_{t_p})
converges a.s. and in L^2 to a limit which the reader will identify.
3°) Prove that nonetheless the sequence of measures defined by taking F = B_·(ω) in 1°) does not converge vaguely to a measure on [0, T].
4°) Prove that if f is a function of bounded variation and θ_p is a point of the interval [t_p, t_{p+1}], then
Σ_{p=0}^{p_n − 1} f(θ_p) (B_{t_{p+1}} − B_{t_p})
converges a.s.
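The limit in 2°) is the Wiener integral ∫_0^T f(u) dB_u. A hedged simulation with f(u) = u on [0, 1], chosen so that (t_{p+1} − t_p)^{-1} ∫ f du is simply the midpoint of each interval; the mean-square distance between the coarse-grid sums and the finest-grid sum should decrease as the dyadic subdivision is refined (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, N = 2000, 2**10          # finest grid: N intervals of [0, 1]
t = np.linspace(0.0, 1.0, N + 1)
B = np.zeros((n_paths, N + 1))
B[:, 1:] = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / N), (n_paths, N)), axis=1)

def averaged_sum(m):
    """Sum_p (t_{p+1}-t_p)^{-1} (int f du) * (B_{t_{p+1}} - B_{t_p}) on the
    dyadic grid with 2**m intervals, for f(u) = u."""
    idx = np.arange(0, N + 1, N // 2**m)
    mid = 0.5 * (t[idx[:-1]] + t[idx[1:]])     # averaged f = interval midpoint
    return (np.diff(B[:, idx], axis=1) * mid).sum(axis=1)

S_fine = averaged_sum(10)                       # proxy for the limit ∫_0^1 u dB_u
mse = [np.mean((averaged_sum(m) - S_fine) ** 2) for m in (2, 5, 8)]
```

The mean-square errors shrink roughly like |Δ_n|^2, consistent with L^2 convergence to the Wiener integral.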
* (3.28) Exercise (Continuation of Exercise (1.48)). 1°) If M is a continuous local martingale on [0, T[, vanishing at 0 and with increasing process t ∧ T, and if B is a BM independent of M, then M_t + B_t − B_{t∧T} is a BM. We will say that M is a BM on [0, T[.
2°) State and prove an Itô formula for continuous local martingales on [0, T[.
(The two questions are independent.)
**
(3.29) Exercise (Extension of P. Lévy's characterization theorem to signed measures). Let (Ω, ℱ_t, P) be a filtered probability space and Q a bounded signed measure on ℱ_∞ such that Q ≪ P. We suppose that (ℱ_t) is right-continuous and P-complete and call M the cadlag martingale such that for each t, M_t = (dQ/dP)|_{ℱ_t} a.s. A continuous process X is called a (Q, P)-local martingale if
i) X is a P-semimartingale;
ii) XM is a P-local martingale (see Sect. 1 Chap. VIII for the significance of this condition).
1°) If H is a locally bounded (ℱ_t)-predictable process and H·X is the stochastic integral computed under P, then if X is a (Q, P)-local martingale, so is H·X.
2°) We assume henceforth that X_t and X_t^2 − t are (Q, P)-local martingales, and X_0 = 0, P-a.s. Prove that for any real u the process
Y_t = exp(iuX_t) − 1 + (u^2/2) ∫_0^t exp(iuX_s) ds
is a (Q, P)-local martingale and conclude that ∫ Y_t dQ ≡ 0.
3°) Prove that
∫ exp(iuX_t) dQ = Q(1) exp(−tu^2/2)
and, more generally, that for 0 < t_1 < … < t_n and real numbers u_k,
∫ exp i(u_1 X_{t_1} + u_2 (X_{t_2} − X_{t_1}) + … + u_n (X_{t_n} − X_{t_{n−1}})) dQ
= Q(1) exp(−½ (t_1 u_1^2 + (t_2 − t_1) u_2^2 + … + (t_n − t_{n−1}) u_n^2)).
Conclude that
− if Q(1) = 0, then Q = 0 on σ{X_s, s ≥ 0},
− if Q(1) ≠ 0, then Q/Q(1) is a probability measure on σ{X_s, s ≥ 0} under which X is a standard BM.
(3.30) Exercise (The Kailath-Segal identity). Let M be a continuous local martingale such that M_0 = 0, and define the iterated stochastic integrals of M by
I_0 = 1,   I_n = ∫_0^· I_{n−1}(s) dM_s.
Prove that for n ≥ 2,
n I_n = I_{n−1} M − I_{n−2} ⟨M, M⟩.
Relate this identity to a recurrence formula for Hermite polynomials.
[Hint: See Proposition (3.8).]
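For M = B one has the classical closed form n! I_n(t) = t^{n/2} He_n(B_t/√t), where He_n are the probabilists' Hermite polynomials, and the Kailath-Segal identity becomes exactly the recurrence He_n(y) = y He_{n−1}(y) − (n − 1) He_{n−2}(y). This can be checked numerically; the closed form is standard, but the sketch below is only an illustration:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def iterated_integral(n, t, x):
    """Closed form I_n(t) = t^{n/2}/n! * He_n(x/sqrt(t)) for M = B with B_t = x."""
    c = np.zeros(n + 1)
    c[n] = 1.0                                   # coefficients selecting He_n
    return t**(n / 2) / factorial(n) * hermeval(x / np.sqrt(t), c)

t = 2.3
xs = np.linspace(-3.0, 3.0, 7)
max_err = 0.0
for n in range(2, 8):
    # Kailath-Segal: n I_n = I_{n-1} * B_t - I_{n-2} * <B, B>_t, with <B, B>_t = t
    lhs = n * iterated_integral(n, t, xs)
    rhs = iterated_integral(n - 1, t, xs) * xs - iterated_integral(n - 2, t, xs) * t
    max_err = max(max_err, float(np.max(np.abs(lhs - rhs))))
```

Dividing both sides by t^{n/2}/(n−1)! recovers the Hermite recurrence at y = x/√t, which is why the identity holds exactly.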
*
(3.31) Exercise (A complement to Exercise (2.17)). Let B be a BM^1(0) and H and K two (ℱ_t^B)-progressively measurable bounded processes. We set X_t = ∫_0^t H_s dB_s + ∫_0^t K_s ds and assume in addition that H is right-continuous at 0.
1°) If φ is a C^1-function, show that
h^{−1/2} ∫_0^h φ(X_s) dX_s
converges in law, as h tends to 0, to φ(0) H_0 B_1.
2°) Assume now that φ(0) = 0 and that φ is C^2, and prove that
h^{−1} ∫_0^h φ(X_s) dX_s
converges in law to φ′(0) H_0^2 (B_1^2 − 1)/2.
3°) State and prove an extension of these results when φ is C^{p+1} with φ(0) = φ′(0) = … = φ^{(p−1)}(0) = 0.
[Hint: Use Proposition (3.8).]
* (3.32) Exercise. Let B and C be two independent BM(0)'s. Prove that
∫_0^1 (B_t + C_{1−t})^2 dt  and  ∫_0^1 (B_t^2 + (B_1 − B_t)^2) dt
have the same law.
[Hint: The Laplace transform in (λ^2/2) of the right-hand side is the characteristic function in λ of the sum of two stochastic integrals; see Exercise (2.19).]
(3.33) Exercise. 1°) Let B be a BM^1 and H an (ℱ_t^B)-adapted process such that ∫_0^t H_s^2 ds < ∞ for every t, and ∫_0^∞ H_s^2 ds = ∞. Set
T = inf{t ≥ 0 : ∫_0^t H_s^2 ds = a^2}
where a^2 is a strictly positive constant. Prove that ∫_0^T H_s dB_s ~ 𝒩(0, a^2).
2°) (Central-limit theorem for stochastic integrals) Let (B^n) be a sequence of linear BM's defined on the same probability space and (K^n) a sequence of (ℱ_t^{B^n})-adapted processes such that
∫_0^{T_n} (K_s^n)^2 ds → a^2 in probability
for some constants (T_n). Prove that the sequence (∫_0^{T_n} K_s^n dB_s^n) converges in law to 𝒩(0, a^2).
[Hint: Apply 1°) to the processes H^n = K^n 1_{[0,T_n]} + a 1_{]T_n, T_n+1]}.]
§4. Burkholder-Davis-Gundy Inequalities
In Sect. 1, we saw that, for an L^2-bounded continuous martingale M vanishing at zero, the norms ||M_∞^*||_2 and ||⟨M, M⟩_∞^{1/2}||_2 are equivalent. We now use the Itô formula to generalize this to other L^p-norms. We recall that if M is a continuous local martingale, we write M_t^* = sup_{s≤t} |M_s|.
The whole section will be devoted to proofs of the Burkholder-Davis-Gundy inequalities which are the content of
(4.1) Theorem. For every p ∈ ]0, ∞[, there exist two constants c_p and C_p such that, for all continuous local martingales M vanishing at zero,
c_p E[⟨M, M⟩_∞^{p/2}] ≤ E[(M_∞^*)^p] ≤ C_p E[⟨M, M⟩_∞^{p/2}].
It is customary to say that the constants c_p and C_p are "universal" because they can be taken the same for all local martingales on any probability space whatsoever. If we call H^p the space of continuous local martingales such that M_∞^* is in L^p, Theorem (4.1) gives us two equivalent norms on this space. For p ≥ 1, the elements of H^p are true martingales and, for p > 1, the spaces H^p are the spaces of continuous martingales bounded in L^p; this is not true for p = 1: the space H^1 is smaller than the space of continuous L^1-bounded martingales, and even of uniformly integrable martingales, as was observed in Exercise (3.15) of Chap. II.
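For Brownian motion on [0, 1] one has ⟨B, B⟩_1 = 1, so E[⟨B, B⟩_1^{p/2}] = 1 and the BDG ratio of Theorem (4.1) is simply E[(B_1^*)^p], which can be estimated by Monte Carlo. The sketch below (illustrative sizes, not a proof) also checks that the p = 2 estimate sits between the two Doob bounds E[B_1^2] = 1 and 4E[B_1^2] = 4:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps = 20000, 500
dB = rng.normal(0.0, np.sqrt(1.0 / n_steps), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)
Bstar = np.abs(B).max(axis=1)          # M_1^* = sup_{s<=1} |B_s| (on the grid)

# <B, B>_1 = 1, so E[<B,B>_1^{p/2}] = 1 and the BDG ratio is E[(B_1^*)^p]
ratios = {p: float(np.mean(Bstar**p)) for p in (1, 2, 4)}
```

Each ratio is a finite constant, in accordance with the universality of c_p and C_p; for p = 2 it lies between 1 and 4, as Doob's L^2-inequality predicts.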
Let us also observe that, by stopping, the theorem has the obvious, but nonetheless important
(4.2) Corollary. For any stopping time T,
c_p E[⟨M, M⟩_T^{p/2}] ≤ E[(M_T^*)^p] ≤ C_p E[⟨M, M⟩_T^{p/2}].
More generally, for any bounded predictable process H,
c_p E[(∫_0^T H_s^2 d⟨M, M⟩_s)^{p/2}] ≤ E[sup_{t≤T} |∫_0^t H_s dM_s|^p] ≤ C_p E[(∫_0^T H_s^2 d⟨M, M⟩_s)^{p/2}].
The proof of the theorem is broken up into several steps.
(4.3) Proposition. For p ≥ 2, there exists a constant C_p such that for any continuous local martingale M such that M_0 = 0,
E[(M_∞^*)^p] ≤ C_p E[⟨M, M⟩_∞^{p/2}].
Proof. By stopping, it is enough to prove the result for bounded M. The function x → |x|^p being twice differentiable, we may apply Itô's formula to the effect that
|M_t|^p = p ∫_0^t |M_s|^{p−1} sgn(M_s) dM_s + (p(p−1)/2) ∫_0^t |M_s|^{p−2} d⟨M, M⟩_s.
Consequently,
E[|M_∞|^p] = (p(p−1)/2) E[∫_0^∞ |M_s|^{p−2} d⟨M, M⟩_s]
≤ (p(p−1)/2) E[(M_∞^*)^{p−2} ⟨M, M⟩_∞]
≤ (p(p−1)/2) ||(M_∞^*)^{p−2}||_{p/(p−2)} ||⟨M, M⟩_∞||_{p/2}.
On the other hand, by Doob's inequality, we have ||M_∞^*||_p ≤ (p/(p−1)) ||M_∞||_p, and the result follows from straightforward calculations. □
(4.4) Proposition. For p ≥ 4, there exists a constant c_p such that
c_p E[⟨M, M⟩_∞^{p/2}] ≤ E[(M_∞^*)^p].
Proof. By stopping, it is enough to prove the result in the case where ⟨M, M⟩ is bounded. In what follows, a_p will always designate a universal constant, but this constant may vary from line to line. For instance, for two reals x and y,
|x + y|^p ≤ a_p (|x|^p + |y|^p).
From the equality M_t^2 = 2 ∫_0^t M_s dM_s + ⟨M, M⟩_t, it follows that
E[⟨M, M⟩_∞^{p/2}] ≤ a_p (E[(M_∞^*)^p] + E[|∫_0^∞ M_s dM_s|^{p/2}])
and applying the inequality of Proposition (4.3) to the local martingale ∫_0^· M_s dM_s, we get
E[⟨M, M⟩_∞^{p/2}] ≤ a_p (E[(M_∞^*)^p] + E[(∫_0^∞ M_s^2 d⟨M, M⟩_s)^{p/4}])
≤ a_p (E[(M_∞^*)^p] + (E[(M_∞^*)^p] E[⟨M, M⟩_∞^{p/2}])^{1/2}).
If we set x = E[⟨M, M⟩_∞^{p/2}]^{1/2} and y = E[(M_∞^*)^p]^{1/2}, the above inequality reads x^2 − a_p xy − a_p y^2 ≤ 0, which entails that x is less than or equal to the positive root of the equation x^2 − a_p xy − a_p y^2 = 0, which is of the form a_p y. This establishes the proposition. □
Theorem (4.1) is a consequence of the two foregoing propositions and of a reduction procedure which we now describe.
(4.5) Definition (Domination relation). A positive, adapted, right-continuous process X is dominated by an increasing process A if
E[X_T] ≤ E[A_T]
for any bounded stopping time T.
(4.6) Lemma. If X is dominated by A and A is continuous, then for x, y > 0,
P[X_∞^* > x; A_∞ ≤ y] ≤ (1/x) E[A_∞ ∧ y],
where X_∞^* = sup_s X_s.
Proof. It suffices to prove the inequality in the case where P(A_0 ≤ y) > 0 and, in fact, even P(A_0 ≤ y) = 1, which may be achieved by replacing P by P′ = P(· | A_0 ≤ y), under which the domination relation is still satisfied.
Moreover, by Fatou's lemma, it is enough to prove that
P[X_n^* > x; A_n ≤ y] ≤ (1/x) E[A_∞ ∧ y];
but reasoning on [0, n] amounts to reasoning on [0, ∞] and assuming that the r.v. X_∞ exists and the domination relation is true for all stopping times whether
bounded or not. We define R = inf{t : A_t > y}, S = inf{t : X_t > x}, where in both cases the infimum of the empty set is taken equal to +∞. Because A is continuous, we have {A_∞ ≤ y} = {R = ∞} and consequently
P[X_∞^* > x; A_∞ ≤ y] = P[X_∞^* > x; R = ∞] ≤ P[X_S ≥ x; (S < ∞) ∩ (R = ∞)]
≤ P[X_{S∧R} ≥ x] ≤ (1/x) E[X_{S∧R}]
≤ (1/x) E[A_{S∧R}] ≤ (1/x) E[A_∞ ∧ y],
the last inequality being satisfied since, thanks to the continuity of A and A_0 ≤ y a.s., we have A_{S∧R} ≤ A_∞ ∧ y. □
(4.7) Proposition. Under the hypothesis of Lemma (4.6), for any k ∈ ]0, 1[,
E[(X_∞^*)^k] ≤ ((2 − k)/(1 − k)) E[A_∞^k].
Proof. Let F be a continuous increasing function from ℝ_+ into ℝ_+ with F(0) = 0. By Fubini's theorem and the above lemma,
E[F(X_∞^*)] = E[∫_0^∞ 1_{(X_∞^* > x)} dF(x)]
≤ ∫_0^∞ (P[X_∞^* > x; A_∞ ≤ x] + P[A_∞ > x]) dF(x)
≤ ∫_0^∞ ((1/x) E[A_∞ ∧ x] + P[A_∞ > x]) dF(x)
≤ ∫_0^∞ (2P[A_∞ > x] + (1/x) E[A_∞ 1_{(A_∞ ≤ x)}]) dF(x)
= 2E[F(A_∞)] + E[A_∞ ∫_{A_∞}^∞ x^{-1} dF(x)] = E[F̂(A_∞)]
if we set F̂(x) = 2F(x) + x ∫_x^∞ u^{-1} dF(u). Taking F(x) = x^k, we obtain the desired result. □
Remark. For k ≥ 1 and F(x) = x^k, F̂ is identically +∞ and the above reasoning no longer has any interest. Exercise (4.16) shows that it is not possible, under the hypothesis of the proposition, to find a universal constant c such that E[X_∞^*] ≤ c E[A_∞]. This actually follows also from the case where X is a positive martingale which is not in H^1, as one can then take A_t = X_0 for every t.
To finish the proof of Theorem (4.1), it is now enough to use the above result with X = (M^*)^2 and A = C_2 ⟨M, M⟩ for the right-hand side inequality, and X = ⟨M, M⟩^2 and A = C_4 (M^*)^4 for the left-hand side inequality. The necessary domination relations follow from Propositions (4.3) and (4.4), by stopping as in Corollary (4.2).
Other proofs of the BDG inequalities in more or less special cases will be found in the exercises. Furthermore, in Sect. 1 of the following chapter, we will see that a method of time-change makes it possible to derive the BDG inequalities from the special case of BM. We will close this section by describing another approach to this special case.
(4.8) Definition. Let φ be a positive real function defined on ]0, a], such that lim_{x→0} φ(x) = 0, and β a real number > 1. An ordered pair (X, Y) of positive random variables is said to satisfy the "good λ inequality" I(φ, β) if
P[X ≥ βλ; Y < δλ] ≤ φ(δ) P[X ≥ λ]
for every λ > 0 and δ ∈ ]0, a]. We will write (X, Y) ∈ I(φ, β).
In what follows, F will be a moderate function, that is, an increasing, continuous function vanishing at 0 and such that
sup_{x>0} F(αx)/F(x) = γ < ∞
for some α > 1. The property then actually holds for every α > 1 with γ depending on α. The function F(x) = x^p, 0 < p < ∞, is such a function.
The key to many inequalities is the following
(4.9) Lemma. There is a constant c depending only on φ, β and γ such that if (X, Y) ∈ I(φ, β), then
E[F(X)] ≤ c E[F(Y)].
Proof. It is enough to prove the result for bounded F's because the same γ works for F and F ∧ n. We have
E[F(X/β)] = ∫_0^∞ P[X ≥ βλ] dF(λ)
≤ ∫_0^∞ φ(δ) P[X ≥ λ] dF(λ) + ∫_0^∞ P[Y ≥ δλ] dF(λ)
= φ(δ) E[F(X)] + E[F(Y/δ)].
By hypothesis, there is a γ such that F(x) ≤ γ F(x/β) for every x. Pick δ ∈ ]0, a ∧ 1[ such that γφ(δ) < 1; then, we can choose γ′ such that F(x/δ) ≤ γ′ F(x) for every x, and it follows that
E[F(X)] ≤ γγ′ E[F(Y)] / (1 − γφ(δ)). □
The foregoing lemma may be put to use to prove Theorem (4.1) (see Exercise (4.25)). We will presently use it for a result on BM. We consider the canonical BM with the probability measures P_x, x ∈ ℝ, and translation operators θ_t, t ≥ 0. We denote by (ℱ_t) the Brownian filtration of Chap. III. Then, we have the following
(4.10) Theorem. Let A_t, t ≥ 0, be an (ℱ_t)-adapted, continuous, increasing process such that
(i) lim_{b→∞} sup_{x,λ} P_x[A_{λ^2} > bλ] = 0,
(ii) there is a constant K such that for every s and t,
A_{t+s} − A_s ≤ K A_t ∘ θ_s.
Then, there exists a constant c_F such that for any stopping time T,
E_0[F(A_T)] ≤ c_F E_0[F(T^{1/2})].
Proof. It is enough to prove the result for finite T, and then it is enough to prove that there exist φ and β such that (A_T, T^{1/2}) ∈ I(φ, β) for every finite T. Pick any β > 1 and set S = inf{t : A_t > λ}. Using the strong Markov property of BM at time S, we have
P_0[A_T ≥ βλ, T^{1/2} < δλ] = P_0[A_T − A_S ≥ (β − 1)λ, T < δ^2λ^2, S < T]
≤ P_0[A_{S+δ^2λ^2} − A_S ≥ (β − 1)λ, S < T]
≤ P_0[A_{δ^2λ^2} ∘ θ_S ≥ (β − 1)λ K^{-1}, S < T]
≤ E_0[P_{B_S}[A_{δ^2λ^2} ≥ (β − 1)λ K^{-1}], S < T]
≤ sup_x P_x[A_{δ^2λ^2} ≥ (β − 1)λ K^{-1}] · P_0[S < T],
which ends the proof. □
We may likewise obtain the reverse inequality.
(4.11) Theorem. If A_t, t ≥ 0, is an (ℱ_t)-adapted, continuous, increasing process such that
(i) lim_{b→0} sup_{x,λ} P_x[A_{λ^2} < bλ] = 0,
(ii) there is a constant K such that for every s < t,
A_{t−s} ∘ θ_s ≤ K A_t,
then there is a constant C_F such that for any stopping time T,
E_0[F(T^{1/2})] ≤ C_F E_0[F(A_T)].
Proof. It follows the same pattern as above. Pick β > 1, δ < 1; we have
P_0[T^{1/2} ≥ βλ, A_T < δλ] ≤ P_0[T ≥ β^2λ^2, A_{T−λ^2} ∘ θ_{λ^2} < Kδλ]
≤ P_0[T ≥ λ^2, A_{β^2λ^2−λ^2} ∘ θ_{λ^2} < Kδλ]
= E_0[P_{B_{λ^2}}[A_{β^2λ^2−λ^2} < Kδλ], T ≥ λ^2]
≤ sup_{x,λ} P_x[A_{(β^2−1)λ^2} < Kδλ] · P_0[T^{1/2} ≥ λ],
which ends the proof. □
The reader will check that these results apply to A_t = sup_{s≤t} |B_s − B_0|, thus yielding the BDG inequalities for B, from which, by time-change (see Sect. 1 Chap. V), one gets the general BDG inequalities. This method is actually extremely powerful and, to our knowledge, can be used to prove all the BDG-type inequalities for continuous processes.
#
(4.12) Exercise. Let B and B′ be two independent standard linear BM's. Prove that for every p, there exist two constants c_p and C_p such that for any locally bounded (ℱ_t^B)-progressively measurable process H,
#
(4.13) Exercise. For a continuous semimartingale X = M + V vanishing at 0, we set
||X||_{𝒮^p} = || ⟨M, M⟩_∞^{1/2} + ∫_0^∞ |dV_s| ||_{L^p}.
1°) Check that the set of X's such that ||X||_{𝒮^p} < ∞ is a vector space, denoted by 𝒮^p, and that X → ||X||_{𝒮^p} is a semi-norm on 𝒮^p.
2°) Prove that if X^* = sup_t |X_t|, then ||X^*||_p ≤ c_p ||X||_{𝒮^p} for some universal constant c_p. Is there a constant c′_p such that ||X||_{𝒮^p} ≤ c′_p ||X^*||_p?
3°) For p > 1, the quotient of 𝒮^p by the subspace of processes indistinguishable from the zero process is a Banach space and contains the space H^p.
#
(4.14) Exercise. 1°) If M is a continuous local martingale, deduce from the BDG inequalities that {M_∞^* < ∞} = {⟨M, M⟩_∞ < ∞} a.s. (A stronger result is proved in Proposition (1.8) Chap. V.)
2°) If M^n is a sequence of continuous local martingales, prove that (M^n)_∞^* converges in probability to zero if and only if ⟨M^n, M^n⟩_∞ does likewise.
[Hint: Observe that it is enough to prove the results when the M^n's are uniformly bounded, then apply Lemma (4.6).]
* (4.15) Exercise. (A Fourier transform proof of the existence of occupation densities). 1°) Let M be a continuous local martingale such that E[⟨M, M⟩_t^2] < ∞; let μ_t be the (random) occupation measure on ℝ defined by
μ_t(f) = ∫_0^t f(M_s) d⟨M, M⟩_s
and μ̂_t its Fourier transform. Prove that
E[∫_ℝ |μ̂_t(u)|^2 du] < ∞
and conclude that μ_t(dx) ≪ dx a.s.
2°) Prove that for fixed t, there is a family (L_t^a) of random variables, ℬ(ℝ)⊗ℱ-measurable, such that for any positive Borel function f,
∫_0^t f(M_s) d⟨M, M⟩_s = ∫_{−∞}^{+∞} L_t^a f(a) da.
This will be taken up much more thoroughly in Chap. VI.
(4.16) Exercise. 1°) If B is a BM, prove that |B| is dominated by 2S where S_t = sup_{s≤t} B_s.
[Hint: If x and y are two real numbers and y ≥ x^+, one has |x| ≤ (y − x) + y.]
2°) By looking at X_t = |B_{t∧T_1}| where T_1 = inf{t : B_t = 1}, justify the remark following Proposition (4.7).
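Both the pointwise hint and the resulting domination can be checked numerically. The sketch below (grid and sample sizes arbitrary) verifies |B_t| ≤ (S_t − B_t) + S_t path by path, and the inequality E[|B_t|] ≤ 2E[S_t] at a few fixed times; by the reflection principle E[S_t] = E[|B_t|], so the margin should be comfortably positive:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, dt = 20000, 500, 1.0 / 500
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
S = np.maximum(np.maximum.accumulate(B, axis=1), 0.0)  # S_t = sup_{s<=t} B_s (S_0 = 0)

# pointwise form of the hint with y = S_t >= B_t^+: |B_t| <= (S_t - B_t) + S_t
pointwise_ok = bool(np.all(np.abs(B) <= 2.0 * S - B + 1e-12))

# domination in expectation at t = 0.25, 0.5, 1: margin = 2 E[S_t] - E[|B_t|] > 0
margins = [2.0 * S[:, k].mean() - np.abs(B[:, k]).mean() for k in (124, 249, 499)]
```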
(4.17) Exercise. Let M be a continuous local martingale with M_0 = 0 and define
S_t = sup_{s≤t} M_s,   s_t = inf_{s≤t} M_s.
Let A be an increasing adapted continuous process with A_0 ≥ a > 0.
1°) Remark that
∫_0^t (S_s − M_s) dS_s ≡ 0.
2°) Suppose M bounded and prove that
E[A_∞^{-1} (S_∞ − M_∞)^2] ≤ E[∫_0^∞ A_s^{-1} d⟨M, M⟩_s].
3°) Prove that (M_t^*)^2 ≤ 2((S_t − M_t)^2 + (M_t − s_t)^2) and that
E[(M_∞^*)^2 ⟨M, M⟩_∞^{-1/2}] ≤ 4E[∫_0^∞ ⟨M, M⟩_s^{-1/2} d⟨M, M⟩_s] = 8E[⟨M, M⟩_∞^{1/2}],
and extend this result to non-bounded M's.
[Hint: To prove the last equality, use the time-change method of Sect. 1 Chap. V.]
4°) Derive therefrom that
E[M_∞^*] ≤ 2√2 E[⟨M, M⟩_∞^{1/2}].
5°) Using 2°), prove also that
E[(S_∞ − s_∞)^2] ≤ 4E[M_∞^2].
For another proof of this inequality, see Exercise (4.11), Chap. VI.
*
(4.18) Exercise. 1°) Let M be a continuous local martingale with M_0 = 0 and A, B, C three continuous increasing processes such that B_0 = C_0 = 0 and A_0 ≥ 0. If X = M + B − C is ≥ 0, prove that A^{-1}X is dominated by Y where
Y_t = ∫_0^t A_s^{-1} dB_s.
(It is understood that 0/0 is taken equal to 0.)
[Hint: Replace A_0 by A_0 + ε, then let ε decrease to 0.]
2°) Prove that for p > q > 0, there exists a constant C_{pq} such that
E[(M_∞^*)^p ⟨M, M⟩_∞^{-q/2}] ≤ C_{pq} E[(M_∞^*)^{p−q}].
#
(4.19) Exercise. 1°) Let A be an increasing continuous process and X a r.v. in L^1_+ such that for any stopping time S,
E[A_∞ − A_S 1_{(S>0)}] ≤ E[X 1_{(S<∞)}].
Prove that for every λ > 0,
E[(A_∞ − λ) 1_{(A_∞ ≥ λ)}] ≤ E[X 1_{(A_∞ ≥ λ)}].
[Hint: Consider the stopping time S = inf{t : A_t > λ}.]
2°) Let F be a convex, increasing function vanishing at zero and call f its right derivative. Prove that, under the hypothesis of 1°),
E[F(A_∞)] ≤ E[X f(A_∞)].
[Hint: Integrate the inequality of 1°) with respect to df(λ).]
3°) If M is a continuous local martingale, show that
E[⟨M, M⟩_∞] ≤ c E[⟨M, M⟩_∞^{1/2} M_∞^*]
for a universal constant c.
4°) For an L^2-bounded martingale M, define
S(M)_t = sup(M_t, ⟨M, M⟩_t^{1/2}),   I(M)_t = inf(M_t, ⟨M, M⟩_t^{1/2}).
Prove that E[S(M)_∞^2] ≤ d E[I(M)_∞^2] for a universal constant d.
(4.20) Exercise. 1°) For 0 < p < 1 set, in the usual notation,
N_t = ∫_0^t ⟨M, M⟩_s^{(p−1)/2} dM_s
(to prove that this integral is meaningful, use the time-change method of Sect. 1 Chap. V) and prove that, if E[⟨M, M⟩_∞^p] < ∞, then E[⟨M, M⟩_∞^p] = p E[N_∞^2].
2°) By applying the integration by parts formula to N_t ⟨M, M⟩_t^{(1−p)/2}, prove that |M_t| ≤ 2N_t^* ⟨M, M⟩_t^{(1−p)/2}, and conclude that
E[(M_∞^*)^{2p}] ≤ (16/p)^p E[⟨M, M⟩_∞^p].
(4.21) Exercise. Let M = (M^1, …, M^d) be a vector local martingale and set A = Σ_{i=1}^d ⟨M^i, M^i⟩. For ε, η > 0 and two finite stopping times S ≤ T, prove that
P[sup_{S≤t≤T} |M_t − M_S|^2 ≥ ε] ≤ η/ε + P[A_T − A_S ≥ η].
*
(4.22) Exercise. For a continuous local martingale M, let (P) be a property of ⟨M, M⟩_∞ such that i) if ⟨N, N⟩_∞ ≤ ⟨M, M⟩_∞ and M satisfies (P) then N satisfies (P), ii) if M satisfies (P) then M is a uniformly integrable martingale.
1°) If M satisfies (P), prove that
sup{E[|∫_0^∞ H_s dM_s|] ; H progressively measurable and |H| ≤ 1} < ∞.
[Hint: Use the theorem of Banach-Steinhaus.]
2°) By considering H = Σ λ_i 1_{]t_i, t_{i+1}]} for a subdivision Δ = (t_i), prove that (P) entails that E[⟨M, M⟩_∞^{1/2}] < +∞. As a result, the property E[⟨M, M⟩_∞^{1/2}] < ∞ is the weakest property for which i) and ii) are satisfied.
[Hint: Prove that sup_Δ E[(Σ (M_{t_{i+1}} − M_{t_i})^2)^{1/2}] < ∞.]
*#
(4.23) Exercise. Let R_t be the modulus of the BM^d, d ≥ 3, started at x ≠ 0.
1°) After having developed log R_t by Itô's formula, prove that
sup_{t≥2} E[(∫_0^t R_s^{-2} ds / log t)^p] < +∞
for every p > 0.
[Hint: One may use the argument which ends the proof of Proposition (4.4).]
2°) Prove that (log R_t / log t) converges in probability as t goes to infinity to a constant c, and conclude that ∫_0^t R_s^{-2} ds / log t converges in probability to 1/(d − 2). The limit actually holds a.s., as is proved in Exercise (3.20) of Chap. X.
3°) Let now x be 0 and study the asymptotic behavior of ∫_ε^1 R_s^{-2} ds as ε tends to zero.
[Hint: Use time-inversion.]
**
(4.24) Exercise (The duality between H^1 and BMO revisited). 1°) Prove that |||X|||_{H^1} = E[⟨X, X⟩_∞^{1/2}] is a norm on the space H^1 of Exercise (1.17) of Chap. II which is equivalent to the norm ||X||_{H^1}.
2°) (Fefferman's inequality). Let X ∈ H^1 and Y ∈ BMO; using the result in Exercise (1.40), prove that
E[∫_0^∞ |d⟨X, Y⟩|_s] ≤ √2 |||X|||_{H^1} ||Y||_{BMO_2}.
[Hint: Write ∫_0^∞ |d⟨X, Y⟩|_s = ∫_0^∞ ⟨X, X⟩_s^{-1/4} ⟨X, X⟩_s^{1/4} |d⟨X, Y⟩|_s and apply the Kunita-Watanabe inequality.]
3°) Prove that the dual space of H^1 is BMO and that the canonical bilinear form on H^1 × BMO is given by
(X, Y) → E[⟨X, Y⟩_∞].
* (4.25) Exercise. 1°) Let A and B be two continuous adapted increasing processes such that A_0 = B_0 = 0 and
E[(A_T − A_S)^p] ≤ D ||B_T||_∞^p P[S < T]
for some positive real numbers p and D and all stopping times S, T with S ≤ T. Prove that (A_∞, B_∞) ∈ I(φ, β) for every β > 1 and φ(x) = D(β − 1)^{-p} x^p.
[Hint: Set T = inf{t : B_t = δλ}, S_n = inf{t : A_t > λ(1 − 1/n)} and prove that the left-hand side in I(φ, β) is less than P[A_T − A_{T∧S_n} ≥ (β − 1 + 1/n)λ].]
2°) If M is a continuous local martingale vanishing at 0, prove, using only the results of Sect. 1, that for A = ⟨M, M⟩^{1/2} and B = M^* or vice-versa, the conditions of 1°) are satisfied with p = 2. Conclude that for a moderate function F there are constants c and C such that
c E[F(⟨M, M⟩_∞^{1/2})] ≤ E[F(M_∞^*)] ≤ C E[F(⟨M, M⟩_∞^{1/2})].
3°) Derive another solution to 2°) in Exercise (4.14) from the above inequalities.
[Hint: The function x/(1 + x) is increasing and moderate.]
* (4.26) Exercise. If Z is a positive random variable, define
a_Z = sup_{λ>0} λ P[Z ≥ λ],   l_Z = lim sup_{λ→∞} λ P[Z ≥ λ].
If (X, Y) ∈ I(φ, β), prove that there is a constant c depending only on φ and β such that
a_X ≤ c a_Y,   l_X ≤ c l_Y.
(4.27) Exercise. Apply Theorems (4.10) and (4.11) to
A_t = sup_{0≤r<s≤t} (|B_s − B_r| / |s − r|^{(1/2)−ε})^{1/(2ε)}
where 0 < ε < 1/2.
* (4.28) Exercise. Let A (resp. B) satisfy the assumptions of Theorem (4.10) (resp. (4.11)). For α > 0, prove that there is a constant c_F such that for any stopping time T,
E[F(A_T^{α+1} / B_T^α)] ≤ c_F E[F(B_T)].
* (4.29) Exercise (Garsia-Neveu lemma). Retain the situation and notation of Exercise (4.19) and assume further that sup_{x>0} x f(x)/F(x) = p < +∞.
1°) Prove that if U and V are two positive r.v.'s such that
E[U f(U)] ≤ E[V f(U)],   E[U f(U)] < +∞,
then
E[F(U)] ≤ E[F(V)].
[Hint: If g is the inverse of f (Sect. 4 Chap. 0), then for u, v ≥ 0,
u f(u) = F(u) + ∫_0^{f(u)} g(s) ds,   v f(u) ≤ F(v) + ∫_0^{f(u)} g(s) ds.]
2°) Conclude that
E[F(A_∞)] ≤ E[F(pX)] ≤ p^p E[F(X)].
(4.30) Exercise (Improved constants in domination). Let C_k be the smallest constant such that E[(X_∞^*)^k] ≤ C_k E[A_∞^k] for every X and A satisfying the condition of Definition (4.5).
1°) Prove that C_k ≤ k^{-k}(1 − k)^{-1} ≤ (2 − k)/(1 − k), for k ∈ ]0, 1[. Reverse inequalities are stated in Exercise (4.22) Chap. VI.
[Hint: Follow the proof of Proposition (4.7) using λx instead of x and y/λ instead of y in the inequality of Lemma (4.6).]
2°) Prove that, for k, k′ ∈ ]0, 1[, C_{kk′} ≤ C_{k′} (C_k)^{k′}.
(4.31) Exercise. 1°) Retain the notation of Exercise (3.14) Chap. II and prove that E[H_n(B_{σ_a}, σ_a)] = 0 where H_n is the n-th Hermite polynomial.
2°) Prove an induction formula for the moments of σ_a. Compare with the Laplace transform found in Chap. II, Exercise (3.14).
§5. Predictable Processes
Apart from the definition and elementary properties of predictable processes, the
notions and results of this section are needed in very few places in the sequel.
They may therefore be skipped until their necessity arises.
In what follows, we deal with a filtration (ℱ_t) supposed to be right-continuous and complete. We shall work with the product space Ω × ℝ_+ and think of processes as functions defined on this space. Recall that a σ-field is generated by a set of functions if it is the coarsest σ-field for which these functions are measurable.
(5.1) Proposition. The σ-fields generated on Ω × ℝ_+ by
i) the space ℰ of elementary processes,
ii) the space of adapted processes which are left-continuous on ]0, ∞[,
iii) the space of adapted continuous processes
are equal.
Proof. Let us call 𝒯_i, i = 1, 2, 3, the three σ-fields of the statement. Obviously 𝒯_3 ⊂ 𝒯_2; moreover 𝒯_2 ⊂ 𝒯_1 since a left-continuous process X is the pointwise limit of the processes
X_t^n(ω) = X_0(ω) 1_{{0}}(t) + Σ_{k=0}^∞ X_{k/n}(ω) 1_{]k/n, (k+1)/n]}(t).
On the other hand, the function 1_{]u,v]} is the limit of continuous functions f_n with compact support contained in ]u, v + 1/n]. If H ∈ ℱ_u, the process H f_n is continuous and adapted, which implies that 𝒯_1 ⊂ 𝒯_3. □
(5.2) Definition. The unique σ-field discussed in the preceding proposition is called the predictable σ-field and is denoted by 𝒫 or 𝒫(ℱ_t) (when one wants to stress the relevant filtration). A process X with values in (U, 𝒰) is predictable if the map (ω, t) → X_t(ω) from Ω × ℝ_+ to (U, 𝒰) is measurable with respect to 𝒫.
Observe that if X is predictable and if X_0 is replaced by another ℱ_0-measurable r.v., the altered process is still predictable; predictable processes may be thought of as defined on ]0, ∞[. It is easily seen that predictable processes are adapted; they are actually (ℱ_{t−})-adapted.
The importance of predictable processes comes from the fact that all stochastic integrals are indistinguishable from the stochastic integrals of predictable processes. Indeed, if we call L^2_𝒫(M) the set of equivalence classes of predictable processes of ℒ^2(M), it can be proved that the Hilbert spaces L^2(M) and L^2_𝒫(M) are isomorphic, or in other words, that every process of ℒ^2(M) is equivalent to a predictable process. We may also observe that, since ℰ is an algebra and a lattice, the monotone class theorem yields that ℰ is dense in L^2_𝒫(M). Consequently, had we constructed the stochastic integral by continuity starting with elementary stochastic integrals, then L^2_𝒫(M) would have been the class of integrable processes.
We now introduce another important σ-field.
(5.3) Definition. The σ-field generated on Ω × ℝ_+ by the adapted cadlag processes is called the optional σ-field and is denoted by 𝒪 or 𝒪(ℱ_t). A process which is measurable with respect to 𝒪 is called optional.
It was already noticed in Sect. 4 Chap. I that, if T is a stopping time, the process 1_{]0,T]}, namely (ω, t) → 1_{[0 < t ≤ T(ω)]}, is predictable. We can now observe that T is a stopping time if and only if 1_{[0,T[} is optional.
Since the continuous processes are cadlag, it is obvious that 𝒫 ⊂ 𝒪 and this inclusion is usually strict (however, see Corollary (5.7) below). If we denote by Prog the progressive σ-field, we see that
ℱ ⊗ ℬ(ℝ_+) ⊃ Prog ⊃ 𝒪 ⊃ 𝒫.
The inclusion Prog ⊃ 𝒪 may also be strict (see Exercise (5.12)).
We proceed to a few properties of predictable and optional processes.
(5.4) Definition. A stopping time T is said to be predictable if there is an increasing sequence (T_n) of stopping times such that almost surely
i) lim_n T_n = T,
ii) T_n < T for every n on {T > 0}.
We will now state without proof a result called the section theorem. Let us recall that the graph [T] of a stopping time is the set {(ω, t) ∈ Ω × ℝ_+ : T(ω) = t}. If T is predictable, this set is easily seen to be predictable. Let us further call π the canonical projection of Ω × ℝ_+ onto Ω.
(5.5) Theorem. Let A be an optional (resp. predictable) set. For every ε > 0, there is a stopping time (resp. predictable stopping time) T such that
i) [T] ⊂ A,
ii) P[T < ∞] ≥ P(π(A)) − ε.
This will be used to prove the following projection theorem. The σ-field ℱ_{T−} is defined in Exercise (4.18) of Chap. I. By convention, ℱ_{0−} = ℱ_0.
(5.6) Theorem. Let X be a measurable process, either positive or bounded. There exists a unique (up to indistinguishability) optional process Y (resp. predictable process Z) such that
E[X_T 1_{(T<∞)} | ℱ_T] = Y_T 1_{(T<∞)} a.s. for every stopping time T
(resp. E[X_T 1_{(T<∞)} | ℱ_{T−}] = Z_T 1_{(T<∞)} a.s. for every predictable stopping time T).
The process Y (resp. Z) is called the optional (resp. predictable) projection of X.
Proof. The uniqueness follows at once from the section theorem. The space of bounded processes X which admit an optional (predictable) projection is a vector space. Moreover, let X^n be a uniformly bounded increasing sequence of processes with limit X and suppose that they admit projections Y^n and Z^n. The section theorem again shows that the sequences (Y^n) and (Z^n) are a.s. increasing; it is easily checked that lim Y^n and lim Z^n are projections for X.
By the monotone class theorem, it is now enough to prove the statement for a class of processes closed under pointwise multiplication and generating the σ-field ℱ ⊗ ℬ(ℝ_+). Such a class is provided by the processes
X_t(ω) = 1_{[0,u[}(t) H(ω),   0 ≤ u ≤ ∞,   H ∈ L^∞(ℱ).
Let H_t be a cadlag version of E[H | ℱ_t] (with the convention that H_{0−} = H_0). The optional stopping theorem (resp. the predictable stopping theorem of Exercise (3.18) Chap. II) proves that
Y_t = 1_{[0,u[}(t) H_t   (resp. Z_t = 1_{[0,u[}(t) H_{t−})
satisfies the condition of the statement. The proof is complete in the bounded case. For the general case, we use the processes X ∧ n and pass to the limit. □
Remark. The conditions in the theorem might as well have been stated
E[X_T 1_{(T<∞)}] = E[Y_T 1_{(T<∞)}] for any stopping time T
(resp. E[X_T 1_{(T<∞)}] = E[Z_T 1_{(T<∞)}] for any predictable stopping time T).
Chapter IV. Stochastic Integration
(5.7) Corollary. Let 𝒯 be the σ-field generated by the processes M_t − M_{t−} where
M ranges through the bounded (ℱ_t)-martingales; then
𝒪 = 𝒫 ∨ 𝒯.
In particular, if all (ℱ_t)-martingales are continuous, then 𝒪 = 𝒫.
Proof. Since every optional process is its own optional projection, it is enough to
prove that every optional projection is measurable with respect to 𝒫 ∨ 𝒯, but
this is obvious for the processes 1_{[0,u[} H considered in the above proof and an
application of the monotone class theorem completes the proof.
(5.8) Exercise. Prove that 𝒫 is generated by the sets
[S, T] = {(t, ω) : S(ω) ≤ t ≤ T(ω)}
where S is a predictable stopping time and T an arbitrary stopping time larger
than S.
#
(5.9) Exercise. Prove that if M is a uniformly integrable càdlàg martingale, its
predictable projection is equal to M_{t−}.
(5.10) Exercise. For any optional process X, there is a predictable process Y such
that the set {(ω, t) : X_t(ω) ≠ Y_t(ω)} is contained in the union of the graphs of
countably many stopping times.
[Hint: It is enough to prove it for a bounded càdlàg X and then Y = X_− will
do. For ε > 0, define T_1(ε) = inf{t > 0 : |X_t − X_{t−}| > ε},
T_n(ε) = inf{t > T_{n−1}(ε) : |X_t − X_{t−}| > ε},
and use ∪_{p,n} [T_n(ε_p)] with ε_p ↓ 0.]
#
(5.11) Exercise. A closed subspace 𝒮 of H² is said to be stable if
i) for any M ∈ 𝒮 and stopping time T, M^T is in 𝒮;
ii) for any Γ ∈ ℱ_0 and M ∈ 𝒮, 1_Γ M is in 𝒮.
1°) If 𝒮 is stable and Γ ∈ ℱ_T, prove that 1_Γ(M − M^T) ∈ 𝒮 for any M ∈ 𝒮.
2°) The space 𝒮 is stable if and only if, for any M ∈ 𝒮 and any predictable
H ∈ L²(M), the martingale H · M is in 𝒮.
3°) Let 𝒮(M) be the smallest stable subspace of H² containing the martingale
M. Prove that
𝒮(M) = {H · M : H ∈ L²(M)}.
4°) Let 𝒮 be stable and 𝒮^⊥ be its orthogonal subspace in H² (for the Hilbert
norm). For any M ∈ 𝒮 and N ∈ 𝒮^⊥, prove that ⟨M, N⟩ = 0. Show that 𝒮^⊥ is
stable.
5°) If M and N are in H², prove that the orthogonal projection of N on
𝒮(M) is equal to H · M where H is a version of the Radon-Nikodym derivative
of d⟨M, N⟩ with respect to d⟨M, M⟩.
*
(5.12) Exercise. Retain the situation of Proposition (3.12) Chap. III and for t ≥ 0
set d_t = t + T_0 ∘ θ_t.
1°) Prove that the map t → d_t is a.s. right-continuous.
2°) Let H = {(ω, t) : B_t(ω) = 0}. For each ω, the set (H(ω, ·))^c is a countable
union of open intervals and we denote by F the subset of Ω × ℝ₊ such that F(ω, ·)
is the set of the left ends of these intervals. Prove that F is progressive with respect
to the Brownian filtration.
[Hint: Prove that H∖F = {(ω, t) : d_t(ω) = t}.]
3°) Prove that for any stopping time T, one has P[[T] ⊂ F] = 0.
[Hint: Use the strong Markov property of BM.]
4°) Prove that F is not optional, thus providing an example of a progressive
set which is not optional.
[Hint: Compute the optional projection of 1_F.]
(5.13) Exercise. Let (ℱ_t) ⊂ (𝒢_t) be two filtrations. If M is a (𝒢_t)-martingale,
prove that its optional projection w.r.t. (ℱ_t) is an (ℱ_t)-martingale.
(5.14) Exercise. Let X be a measurable process such that, for every t,
E[∫_0^t |X_s| ds] < ∞,
and set Y_t = ∫_0^t X_s ds. If we denote by H̃ the optional projection of H w.r.t. a
filtration (ℱ_t), prove that Ỹ_t − ∫_0^t X̃_s ds is an (ℱ_t)-martingale.
(5.15) Exercise (Filtering). Let (ℱ_t) be a filtration, B an (ℱ_t)-BM and h a
bounded optional process. Let Y_t = ∫_0^t h_s ds + B_t; if h̃ is the optional projection of h w.r.t. (ℱ_t^Y), prove that the process
N_t = Y_t − ∫_0^t h̃_s ds
is an (ℱ_t^Y)-BM. In filtering theory, the process N is called the innovation process.
[Hint: Prove first that E[N_t²] < ∞ for every t, then compute E[N_T²] for
bounded stopping times T.]
(5.16) Exercise. Let X and X' be two continuous semimartingales on two filtered
probability spaces; let H and H' be two predictable locally bounded processes
such that the pairs (H, X) and (H', X') have the same law. Prove that the triples
(H · X, H, X) and (H' · X', H', X') have the same law.
[Hint: Start with the case of elementary H and H'.]
*#
(5.17) Exercise (Fubini's theorem for stochastic integrals). 1°) Let (A, 𝒜) be
a measurable space, (Ω, ℱ, P) a probability space and {X_n(a, ·)} a sequence of
𝒜 ⊗ ℱ-measurable r.v.'s which converges in probability for every a. Prove that
there exists an 𝒜 ⊗ ℱ-measurable r.v., say X, such that for every a, X(a, ·) is
the limit in probability of {X_n(a, ·)}.
[Hint: Define inductively a sequence n_k(a) by n_0(a) = 1 and
n_k(a) = inf{m > n_{k−1}(a) : sup_{p,q≥m} P[|X_p(a, ·) − X_q(a, ·)| > 2^{−k}] ≤ 2^{−k}};
prove that lim_k X_{n_k(a)}(a, ·) exists a.s. and answers the question.]
2°) In the setting of Sect. 2, if H(a, s, ω) is a uniformly bounded 𝒜 ⊗ 𝒫-measurable process and X a continuous semimartingale, there exists an 𝒜 ⊗ 𝒫-measurable process K(a, ·, ·) such that for each a, K(a, ·, ·) is indistinguishable
from ∫_0^· H(a, s, ·) dX_s. Moreover, if ν is a bounded measure on 𝒜, then a.s.,
∫_A K(a, t, ·) ν(da) = ∫_0^t (∫_A H(a, s, ·) ν(da)) dX_s.
[Hint: These properties are easily checked when H(a, s, ω) = h(a)g(s, ω)
with suitable h and g. Apply the monotone class theorem.]
(5.18) Exercise. 1°) Let M ∈ H² and K be a measurable, but not necessarily
adapted, process such that
E[∫_0^∞ K_s² d⟨M, M⟩_s] < ∞.
Prove that there exists a unique Z ∈ H² such that for every N ∈ H²,
E[∫_0^∞ K_s d⟨M, N⟩_s] = E[Z_∞ N_∞].
2°) Let K̃ be the projection (in the Hilbert space sense) of K on L²(M).
Prove that Z = K̃ · M.
#
(5.19) Exercise. 1°) If K is a continuous adapted process and X a cont. semimart.,
prove that
P-lim_{ε↓0} (1/ε) ∫_0^t K_s (X_{s+ε} − X_s) ds = ∫_0^t K_s dX_s.
[Hint: Use Fubini's theorem for stochastic integrals.]
2°) Under the same hypothesis, prove that
P-lim_{ε↓0} (1/ε) ∫_0^t K_s (X_{s+ε} − X_s)² ds = ∫_0^t K_s d⟨X, X⟩_s.
Notes and Comments
Sect. 1. The notion of local martingale appeared in Itô-Watanabe [1] and that of
semimartingale in Doléans-Dade and Meyer [1]. For a detailed study, we direct the
reader to the book of Dellacherie and Meyer ([1] vol. 2), from which we borrowed
some of our proofs, and to Métivier [1].
The proof of Theorem (1.3) is taken from Kunita [4]. The existence of quadratic
variation processes is usually shown by using Meyer's decomposition theorem,
which we wanted to avoid entirely in the present book. Proposition (1.13) is due
to Getoor-Sharpe [1]. Let us mention that we do not prove the important result of
Stricker [1] asserting that an (ℱ_t)-semimartingale X is still a semimartingale in its
own filtration (ℱ_t^X).
Exercise (1.33) is from Yor [2], Exercise (1.41) from Emery [1] and Exercise
(1.42) from Sharpe [1]. Exercise (1.48) is due to Maisonneuve [3] and Exercise
(1.49) is taken from Dellacherie-Meyer [1], Vol. II. The method hinted at in Exercise (1.38) was given to us by F. Delbaen (private communication).
Sect. 2. Stochastic integration has a long history which we will not attempt to
sketch. It goes back at least to Paley-Wiener-Zygmund [1] in the case of the
integral of deterministic functions with respect to Brownian motion. Our method
of defining stochastic integrals is that of Kunita and Watanabe [1] and is originally
due to Itô [1] in the BM case. Here again, we refer to Dellacherie-Meyer ([1] vol.
2) and Métivier [1] for the general theory of stochastic integration with respect to
(non-continuous) semimartingales.
Exercise (2.17) is due to Isaacson [1] and Yoeurp [2] and Exercise (2.18) is
from Yor [2]. Exercise (2.19) is taken from Donati-Martin and Yor [2]; a number
of such identities in law have been recently discussed in Chan et al. [1] and
Dean-Jansons [1]. The important Exercise (2.21) is taken from a lecture course of
Meyer.
Sect. 3. Itô's formula in a general context appears in Kunita-Watanabe [1] and
our proof is borrowed from Dellacherie-Meyer [1], Vol. II. Exercise (3.12) is but
one of many extensions of Itô's formula; we refer to Kunita ([5], [6]). The proof
of Exercise (3.13) is due to Föllmer [2].
Exponentials of semimartingales were studied by Doléans-Dade [2] and are
important in several contexts, in particular in connection with Girsanov's theorem
of Chap. VIII. We refer to the papers of Kazamaki and Sekiguchi from which
some of our exercises are taken, and also to Kazamaki's Lecture Notes [4].
The proof given here of P. Lévy's characterization theorem is that of Kunita-Watanabe [1]; the extension in Exercise (3.29) is due to Ruiz de Chavez [1].
This result plays a central role in Chap. V and its simplicity explains to some
extent why the martingale approach to BM is so successful. Moreover, it contains
en germe the idea that the law of a process may be characterized by a set of
martingale properties, as is explained in Chap. VII. This eventually led to the
powerful martingale problem method of Stroock and Varadhan (see Chap. VII).
Exercise (3.17) is from Robbins-Siegmund [1]; Exercise (3.19) is from Donati-Yor [1] and Exercise (3.26) from Calais and Genin [1]. Exercise (3.30) comes from
Carlen-Kree [1] who obtain BDG type inequalities for multiple stochastic integrals.
Sect. 4. Our proof of the BDG inequalities combines the method of Getoor-Sharpe
[1] with the reduction procedure of Lenglart [2] (see also Lenglart et al. [1]). The
use of "good λ inequalities" is the original method of Burkholder [2]. The method
used in Theorems (4.10) and (4.11), due to Bass [2] (see also Davis [5]), is the
most efficient to date and works for all inequalities of BDG type known so far.
For further details about the proof of Fefferman's inequality presented in Exercise (4.24), see e.g. Durrett [2], section 7.2, or Dellacherie-Meyer ([1], Chap. VII).
The scope of Fefferman's inequality may be extended to martingales Y not
necessarily in BMO; see, e.g., Yor [26] and Chou [1].
Exercise (4.30) gives an improvement on the constant (2 − k)/(1 − k) obtained
in Proposition (4.7), but the following natural question is still open.
Question 1: For k ∈ ]0, 1[, find the best constant C_k for the inequality
which follows from the domination relation.
Sect. 5. This section contains only the information on predictable processes which
we need in the sequel. For a full account, we direct the reader to Dellacherie-Meyer [1].
The reader will also find some important complements in Chung-Williams [1]
(Sect. 3.3) to our discussion, following Definition (5.2), of the various classes of
stochastic integrands.
Chapter V. Representation of Martingales
In this chapter, we take up the study of Brownian motion and, more generally, of
continuous martingales. We will use the stochastic integration of Chap. IV together
with the technique of time changes to be introduced presently.
§ 1. Continuous Martingales as Time-changed Brownian Motions
It is a natural idea to change the speed at which a process runs through its path;
this is the technique of time changes which was described in Sect. 4 Chap. 0 and
which we now transpose to a stochastic context.
Let (ℱ_t) be a right-continuous filtration. Throughout the first part of this
section, we consider an increasing, right-continuous, adapted process A with which
we associate
C_s = inf{t : A_t > s}
where, as usual, inf(∅) = +∞. Since C_s increases with s, the limit C_{s−} = lim_{u↑s} C_u exists and
C_{s−} = inf{t : A_t ≥ s}.
By convention, C_{0−} = 0.
(1.1) Proposition. The family (C_s) is an increasing right-continuous family of
stopping times. Moreover, for every t, the r.v. A_t is an (ℱ_{C_s})-stopping time.
Proof. Each C_s is a stopping time, by the reasoning of Proposition (4.6) Chap. I.
The right-continuity of the map s → C_s was shown in Lemma (4.7) Chap. 0, and it
follows easily (Exercise (4.17) Chap. I) that the filtration (ℱ_{C_s}) is right-continuous.
Finally, it was also shown in Lemma (4.7) Chap. 0 that A_t = inf{s : C_s > t}, which
proves that A_t is an (ℱ_{C_s})-stopping time. □
It was proved in Chap. 0 that A_{C_t} ≥ t, with equality if t is a point of increase
of C, that is, if C_{t+ε} − C_t > 0 for every ε > 0. If A is strictly increasing, then C is
continuous; if A is continuous and strictly increasing, then C is also continuous
and strictly increasing and we then have A_{C_t} = C_{A_t} = t. While reasoning on this
situation, the reader will find it useful to look at Figure 1 in Sect. 4 Chap. 0. He
will observe that the jumps of A correspond to level stretches of C and vice versa.
Actually, A and C play symmetric roles, as we will see presently.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
(1.2) Definition. A time-change C is a family C_s, s ≥ 0, of stopping times such
that the maps s → C_s are a.s. increasing and right-continuous.
Thus, the family C defined in Proposition (1.1) is a time-change. Conversely,
given a time-change C, we get an increasing right-continuous process by setting
A_t = inf{s : C_s > t}.
It may happen that A is infinite from some finite time on; this is the case if
C_∞ = lim_{s→∞} C_s < ∞, but otherwise we see that time-changes are no more
general than the inverses of right-continuous increasing adapted processes.
In the sequel, we consider a time-change C and refer to A only when necessary.
We set ℱ̂_t = ℱ_{C_t}. If X is an (ℱ_t)-progressive process, then X̂_t = X_{C_t} is an (ℱ̂_t)-adapted process; the process X̂ will be called the time-changed process of X.
Let us insist once more that, if A is continuous, strictly increasing and A_∞ = ∞,
then C is continuous, strictly increasing, finite and C_∞ = ∞. The processes A
and C then play totally symmetric roles and for any (ℱ_t)-progressive process X,
we have X̂_{A_t} = X_t. If A_∞ < ∞, the same holds but X̂ is only defined for t < A_∞.
Finally, let us observe that by taking C_t = t ∧ T, where T is a stopping time, one
gets the stopped processes as a special case of the time-changed processes.
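The right-continuous inverse C_s = inf{t : A_t > s} is easy to compute on a sampled path. The sketch below (grids and the test processes A_t = t² and a step process are arbitrary choices, purely for illustration) computes the inverse and exhibits the correspondence between level stretches of A and jumps of C:

```python
import numpy as np

def right_cont_inverse(grid, A, s):
    """C_s = inf{t in grid : A_t > s}, with inf(empty set) = +infinity.

    A must be the (non-decreasing) values of the process on the grid."""
    idx = np.searchsorted(A, s, side="right")   # first index with A[idx] > s
    return grid[idx] if idx < len(grid) else np.inf

# Continuous strictly increasing case: A_t = t^2 on [0, 2], so C_s ~ sqrt(s)
grid = np.linspace(0.0, 2.0, 200001)
print(right_cont_inverse(grid, grid ** 2, 1.0))          # ~ 1.0
print(right_cont_inverse(grid, grid ** 2, 9.0))          # inf: level 9 never reached

# A level stretch of A produces a jump of C: here A is flat at 0 on [0, 2]
A_flat = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 3.0])
print(right_cont_inverse(np.arange(6.0), A_flat, 0.0))   # 3.0
```

The `side="right"` convention in `searchsorted` is exactly what makes the strict inequality A_t > s, and hence the right-continuity of C, come out correctly.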
An important property of the class of semimartingales is its invariance under
time changes. Since, in this book, we deal only with continuous semimartingales,
we content ourselves with some partial results. We will need the following
(1.3) Definition. If C is a time-change, a process X is said to be C-continuous if
X is constant on each interval [C_{t−}, C_t].
If X is increasing and right-continuous, so is X̂; thus, if X is right-continuous
and of finite variation, so is X̂. The Stieltjes integrals with respect to X and X̂ are
related by the useful
(1.4) Proposition. If H is (ℱ_t)-progressive, then Ĥ is (ℱ̂_t)-progressive and if X
is a C-continuous process of finite variation, then Ĥ · X̂ is the time-changed process
of H · X; in other words,
∫_{C_0}^{C_t} H_s dX_s = ∫_0^t 1_{(C_u<∞)} H_{C_u} dX_{C_u} = ∫_0^{t∧A_∞} H_{C_u} dX_{C_u}.
Proof. The first statement is easy to prove. The second is a slight variation on
Proposition (4.10) of Chap. 0. □
In many cases, C_0 = 0 and C_t is finite for every t, and the above equality reads
∫_0^{C_t} H_s dX_s = ∫_0^t H_{C_u} dX_{C_u}.
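For a concrete check of this change-of-variables formula, one may take A_t = t² (so that C_s = √s), H_s = s and X_s = s²; then both sides equal (2/3)t^{3/2}. The sketch below (trapezoidal quadrature on invented grids, purely for illustration) verifies the identity numerically:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule for the integral of y dx on the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = 1.7
# Left side: integral of H_s dX_s over [0, C_t] with C_t = sqrt(t), dX_s = 2s ds
s = np.linspace(0.0, np.sqrt(t), 100001)
lhs = trapezoid(s * 2.0 * s, s)
# Right side: integral of H_{C_u} dX_{C_u}; here C_u = sqrt(u) and X_{C_u} = u,
# so the integrand is sqrt(u) du
u = np.linspace(0.0, t, 100001)
rhs = trapezoid(np.sqrt(u), u)
exact = (2.0 / 3.0) * t ** 1.5
print(lhs, rhs, exact)   # the three values agree closely
```

Any other continuous strictly increasing A would do; the point is only that the substitution s = C_u carries one Stieltjes integral into the other.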
We now turn to local martingales. A first problem is that, under time changes,
they remain semimartingales but not always local martingales; if, for instance, X
is a BM and A_t = S_t, then X̂_t = t. Another problem for us is that C may have
jumps, so that X̂ may be discontinuous even when X is continuous. This is why,
again, we will have to assume in the next proposition that X is C-continuous.
(1.5) Proposition. Let C be a.s. finite and X be a continuous (ℱ_t)-local martingale.
i) If X is C-continuous, then X̂ is a continuous (ℱ̂_t)-local martingale and
⟨X̂, X̂⟩ is the time-changed process of ⟨X, X⟩;
ii) If moreover H is (ℱ_t)-progressive and ∫_0^t H_s² d⟨X, X⟩_s < ∞ a.s. for every
t, then ∫_0^t Ĥ_s² d⟨X̂, X̂⟩_s < ∞ a.s. for every t and Ĥ · X̂ is the time-changed
process of H · X.
Proof. i) Since X is C-continuous, the process X̂ is continuous. Let T be an (ℱ_t)-stopping time such that X^T is bounded; the time T̂ = inf{t : C_t ≥ T} is an (ℱ̂_t)-stopping time because 1_{[T̂,∞[}, the time-changed process of 1_{[T,∞[}, is (ℱ̂_t)-adapted. It is clear that X̂^{T̂} is bounded.
Moreover,
X̂^{T̂}_t = X̂_{T̂∧t} = X_{C_{T̂∧t}},
and because X is C-continuous, X is constant on [T, C_{T̂}] and we have X_{C_{T̂∧t}} =
X^T_{C_t}; it follows from the optional stopping theorem that X̂^{T̂} is an (ℱ̂_t)-martingale.
Finally, if (T_n) increases to +∞, so does the corresponding sequence (T̂_n); thus,
we have proved that X̂ is a cont. loc. mart.
By Proposition (1.13) of Chap. IV, the process ⟨X, X⟩ is also C-continuous;
thus, by the result just proved, X̂² minus the time-changed process of ⟨X, X⟩ is a
cont. loc. mart., which proves that ⟨X̂, X̂⟩ is the time-changed process of ⟨X, X⟩.
ii) The first part follows from Proposition (1.4). To prove the second part, we
need only prove that the increasing process of the local martingale (H · X)^ − Ĥ · X̂
(where (H · X)^ denotes the time-changed process of H · X) vanishes identically,
and this is a simple consequence of i) and Proposition (1.4).
□
Thus, we have proved that suitably time-changed Brownian motions are local
martingales. The following converse is the main result of this section.
(1.6) Theorem (Dambis, Dubins-Schwarz). If M is an (ℱ_t, P)-cont. loc. mart.
vanishing at 0 and such that ⟨M, M⟩_∞ = ∞, and if we set
T_t = inf{s : ⟨M, M⟩_s > t},
then B_t = M_{T_t} is an (ℱ_{T_t})-Brownian motion and M_t = B_{⟨M,M⟩_t}.
The Brownian motion B will be referred to as the DDS Brownian motion of M.
Proof. The family T = (T_t) is a time-change which is a.s. finite because
⟨M, M⟩_∞ = ∞ and, by Proposition (1.13) of Chap. IV, the local martingale M is
obviously T-continuous. Thus, by the above result, B is a continuous (ℱ_{T_t})-local
martingale and ⟨B, B⟩_t = ⟨M, M⟩_{T_t} = t. By P. Lévy's characterization theorem,
B is an (ℱ_{T_t})-Brownian motion.
To prove that B_{⟨M,M⟩} = M, observe that B_{⟨M,M⟩_t} = M_{T_{⟨M,M⟩_t}} and although
T_{⟨M,M⟩_t} may be > t, it is always true that M_{T_{⟨M,M⟩_t}} = M_t because of the constancy
of M on the level stretches of ⟨M, M⟩.
□
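The theorem lends itself to a quick Monte Carlo illustration. The sketch below is only that: it picks an arbitrary deterministic, nowhere-vanishing integrand σ, builds M = ∫ σ dB on a grid, time-changes it through the inverse of ⟨M, M⟩, and checks that the value at time 1 has approximately the N(0, 1) law required of a Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, paths = 1500, 0.002, 2000              # time grid on [0, 3]; arbitrary sizes
s = np.arange(n) * dt
sigma = 1.0 + 0.5 * np.sin(s)                 # deterministic, nowhere zero
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
M = np.cumsum(sigma * dB, axis=1)             # M_t = int_0^t sigma dB
Q = np.cumsum(sigma ** 2) * dt                # <M, M>_t, deterministic here

# T_u = inf{t : <M, M>_t > u}; evaluate the time-changed process at u = 1
idx = np.searchsorted(Q, 1.0, side="right")
B1 = M[:, idx]                                # samples of B_1 = M_{T_1}
print(B1.mean(), B1.var())                    # approximately 0 and 1
```

With a deterministic σ the bracket, and hence the time-change, is non-random, which keeps the sketch elementary; the theorem itself of course covers random brackets as well.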
In the above theorem, the hypothesis ⟨M, M⟩_∞ = ∞ ensures in particular that
the underlying probability space is rich enough to support a BM. This may also be
achieved by enlargement.
We call enlargement of the filtered probability space (Ω, ℱ_t, P) another filtered
probability space (Ω̃, ℱ̃_t, P̃) together with a map π from Ω̃ onto Ω, such that
π^{−1}(ℱ_t) ⊂ ℱ̃_t for each t and π(P̃) = P. A process X defined on Ω may be
viewed as defined on Ω̃ by setting X(ω̃) = X(ω) if π(ω̃) = ω.
If ⟨M, M⟩_∞ < ∞, recall that M_∞ = lim_{t→∞} M_t exists (Proposition (1.26)
Chap. IV). Thus we can define a process W by
W_t = M_{T_t} for t < ⟨M, M⟩_∞,  W_t = M_∞ if t ≥ ⟨M, M⟩_∞.
By Proposition (1.26) Chap. IV, this process is continuous and we have the
(1.7) Theorem. There exist an enlargement (Ω̃, ℱ̃_t, P̃) of (Ω, ℱ_t, P) and a BM
β̃ on Ω̃ independent of M such that the process
B_t = M_{T_t}                        if t < ⟨M, M⟩_∞,
B_t = M_∞ + β̃_{t−⟨M,M⟩_∞}     if t ≥ ⟨M, M⟩_∞,
is a standard linear Brownian motion. The process W is an (ℱ̃_t)-BM stopped at
⟨M, M⟩_∞.
Proof. Let (Ω', ℱ'_t, P') be a probability space supporting a BM β and set
Ω̃ = Ω × Ω',  ℱ̃_t = ℱ_{T_t} ⊗ ℱ'_t,  P̃ = P ⊗ P',  β̃_t(ω, ω') = β_t(ω').
The process β̃ is independent of M and B may as well be written as
B_t = M_{T_t} + ∫_0^t 1_{(s>⟨M,M⟩_∞)} dβ̃_s.
In the general setting of Proposition (1.5), we had to assume the finiteness of C
because M_{C_t} would have been meaningless otherwise. Here, where we can define
M_{T_t} even for infinite T_t, the reasoning of Proposition (1.5) applies and shows that
W is a local martingale (see also Exercise (1.26)).
Being the sum of two (ℱ̃_t)-local martingales, B is a local martingale and its
increasing process is equal to
⟨M_{T·}, M_{T·}⟩_t + ∫_0^t 1_{(s>⟨M,M⟩_∞)} ds + 2 ∫_0^t 1_{(s>⟨M,M⟩_∞)} d⟨M_{T·}, β̃⟩_s.
Because of the independence of M_{T·} and β̃, the last term vanishes and it is easily
deduced from the last proof that ⟨M_{T·}, M_{T·}⟩_t = t ∧ ⟨M, M⟩_∞; it follows that
⟨B, B⟩_t = t which, by P. Lévy's characterization theorem, completes the proof.
Remark. An interesting example is given in Lemma (3.12) Chap. VI.
We may now complete Proposition (1.26) in Chap. IV.
(1.8) Proposition. For a continuous local martingale M, the sets {⟨M, M⟩_∞ < ∞}
and {lim_{t→∞} M_t exists} are almost surely equal. Furthermore, lim sup_{t→∞} M_t = +∞
and lim inf_{t→∞} M_t = −∞ a.s. on the set {⟨M, M⟩_∞ = ∞}.
Proof. We can apply the preceding result to M − M_0. Since B is a BM, we have
lim sup_{t→∞} B_t = +∞ a.s. On the set {⟨M, M⟩_∞ = ∞}, we thus have lim sup_{t→∞} M_{T_t} = +∞
a.s. But on this set, T_t converges to infinity as t tends to infinity; as a result,
lim sup_{t→∞} M_t is larger than lim sup_{t→∞} M_{T_t}, hence is infinite. The same proof works for
the inferior limit.
Remarks. 1°) We write lim sup M_t ≥ lim sup M_{T_t} because the paths of M_{T·} could a priori
be only a portion of those of M. We leave as an exercise to the reader the task of
showing that they are actually the same.
2°) Another event which is also a.s. equal to those in the statement is given
in Exercise (1.27) of Chap. VI.
3°) This proposition shows that for a cont. loc. mart., the three following
properties are equivalent:
i) sup_t M_t = ∞ a.s.;  ii) inf_t M_t = −∞ a.s.;  iii) ⟨M, M⟩_∞ = ∞ a.s.
We now turn to a multi-dimensional analogue of the Dambis, Dubins-Schwarz
theorem which says that if ⟨M, N⟩ = 0, then the BM's associated with M and N
are independent.
(1.9) Theorem (Knight). Let M = (M¹, ..., M^d) be a continuous vector-valued
local martingale such that M_0 = 0, ⟨M^k, M^k⟩_∞ = ∞ for every k and ⟨M^k, M^l⟩ =
0 for k ≠ l. If we set
T_t^k = inf{s : ⟨M^k, M^k⟩_s > t}
and B_t^k = M^k_{T_t^k}, the process B = (B¹, ..., B^d) is a d-dimensional BM.
As in the one-dimensional case, the assumption on ⟨M^k, M^k⟩_∞ may be removed at the cost of enlarging the probability space. It is this general version that
we will prove below.
(1.10) Theorem. If ⟨M^k, M^k⟩_∞ is finite for some k's, there is a BM^d β independent
of M on an enlargement of the probability space such that the process B defined
by
B_t^k = M^k_{T_t^k}                          for t < ⟨M^k, M^k⟩_∞,
B_t^k = M^k_∞ + β^k_{t−⟨M^k,M^k⟩_∞}   for t ≥ ⟨M^k, M^k⟩_∞,
is a d-dimensional BM.
Proof. By the previous results, we know that each B^k separately is a linear BM, so
all we have to prove is that they are independent. To this end, we will prove that,
with the notation of Theorem (3.6) of Chap. IV, for functions f_k with compact
support, E[ℰ_∞^f] = 1. Indeed, taking f_k = Σ_{j=1}^p λ_j^k 1_{]t_{j−1},t_j]}, we will then obtain
that the random vectors (B^k_{t_j} − B^k_{t_{j−1}}), 1 ≤ j ≤ p, 1 ≤ k ≤ d, have the right
characteristic functions, hence the right laws.
In the course of this proof, we write A^k for ⟨M^k, M^k⟩. From the equality
M^k_{T^k_t} − M^k_{T^k_s} = ∫ 1_{]s,t]}(A^k_u) dM^k_u,
which follows from Proposition (1.5), we can derive, by the usual monotone class
argument, that
∫_0^{A^k_∞} f_k(s) dB^k_s = ∫_0^∞ f_k(A^k_s) dM^k_s.
Consequently,
∫_0^∞ f_k(s) dB^k_s = ∫_0^∞ f_k(A^k_s) dM^k_s + ∫_0^∞ f_k(s + A^k_∞) dβ^k_s.
Note that the stochastic integral on the right makes sense since A^k_∞ is independent
of β^k. Passing to the quadratic variations, we get
∫_0^∞ f_k²(s) ds = ∫_0^∞ f_k²(A^k_s) dA^k_s + ∫_0^∞ f_k²(s + A^k_∞) ds.
The process X = Σ_k ∫_0^· f_k(A^k_s) dM^k_s is a local martingale and, using the hypothesis
⟨M^k, M^l⟩ = 0 for k ≠ l, we get
⟨X, X⟩ = Σ_k ∫_0^· f_k²(A^k_s) dA^k_s.
By Itô's formula, I_t = exp{iX_t + ½⟨X, X⟩_t} is a local martingale. Since it is
bounded (by exp{Σ_k ‖f_k‖²_∞/2}), it is in fact a martingale. Likewise,
J_t = exp{i Σ_k ∫_0^t f_k(s + A^k_∞) dβ^k_s + ½ Σ_k ∫_0^t f_k²(s + A^k_∞) ds}
is a bounded martingale and E[ℰ_∞^f] = E[I_∞ J_∞].
Now, conditionally on M, J_∞ has the law of exp{iZ + ½Var(Z)} where Z is
Gaussian and centered. Consequently, E[ℰ_∞^f] = E[I_∞] and, since I is a bounded
martingale, E[ℰ_∞^f] = E[I_0] = 1, which ends the proof.
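Knight's theorem can likewise be probed numerically. In the sketch below (every parameter is an invented illustration), two orthogonal martingales are built from the same planar BM through a deterministic rotation and scale, each is time-changed by the inverse of its bracket, and the empirical correlation of the resulting values is checked to be near zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, paths = 1500, 0.002, 1000              # time grid on [0, 3]
s = np.arange(n) * dt
a = 1.0 + 0.5 * np.cos(s)                     # common deterministic scale
th = 2.0 * s                                  # deterministic rotation angle
dB1 = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
dB2 = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
# dM1 = a (cos th dB1 + sin th dB2), dM2 = a (-sin th dB1 + cos th dB2):
# then <M1, M2> = 0 while both brackets equal int_0^t a^2 ds
M1 = np.cumsum(a * (np.cos(th) * dB1 + np.sin(th) * dB2), axis=1)
M2 = np.cumsum(a * (-np.sin(th) * dB1 + np.cos(th) * dB2), axis=1)
Q = np.cumsum(a ** 2) * dt                    # shared bracket, deterministic here
idx = np.searchsorted(Q, 1.0, side="right")   # T_1, the same for both martingales
X1, X2 = M1[:, idx], M2[:, idx]
print(np.corrcoef(X1, X2)[0, 1])              # near 0, as Knight's theorem predicts
```

With deterministic a and th the two time-changed values are jointly Gaussian, so vanishing correlation already implies independence; the theorem is of course much stronger, covering random integrands and brackets.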
Remarks. 1°) Let us point out that the DDS theorem is both simpler and somewhat
more precise than Knight's theorem. The several time-changes of the latter make
matters more involved; in particular, there is no counterpart of the filtration (ℱ_{T_t})
of the former theorem with respect to which the time-changed process is a BM.
2°) Another proof of Knight's theorem is given in Exercise (3.18). It relies on
a representation of Brownian martingales which is given in Sect. 3.
An important consequence of the DDS and Knight theorems is that, to some
extent, a property of continuous local martingales which is invariant by time-changes is little more than a property of Brownian motion, and actually many
proofs of results on cont. loc. mart. may be obtained by using the associated BM.
For instance, the BDG inequalities of Sect. 4 Chap. IV can be proved in that way.
Indeed, M_t = B_{⟨M,M⟩_t} and, since ⟨M, M⟩_t is a stopping time for the filtration with
respect to which B is a Brownian motion, it is enough to prove that, if (ℱ_t) is a
filtration, the moments of B*_T and of T^{1/2} are comparable for an (ℱ_t)-BM B and
an (ℱ_t)-stopping time T.
The proof of this is outlined in Exercise (1.23).
Finally, let us observe that, in Theorem (1.6), the Brownian motion B is
measurable with respect to ℱ_∞^M where, we recall from Sect. 2 Chap. III, (ℱ_t^X)
is the coarsest right-continuous and complete filtration with respect to which X is
adapted; the converse, namely that M is measurable with respect to ℱ_∞^B, is not
always true, as will be seen in Exercise (4.16). We now give an important case
where it is so (see also Exercise (1.19)).
(1.11) Proposition. If M is a cont. loc. mart. such that ⟨M, M⟩_∞ = ∞ and
M_t = x + ∫_0^t σ(M_s) dβ_s
for a BM β and a nowhere vanishing function σ, then M is measurable with respect
to ℱ_∞^B, where B is the DDS Brownian motion of M.
Proof. Since σ does not vanish, ⟨M, M⟩ is strictly increasing and
⟨M, M⟩_{T_t} = ∫_0^{T_t} σ²(M_s) ds = t.
Using Proposition (1.4) with the time-change T_t, we get ∫_0^t σ²(B_s) dT_s = t, hence
T_t = ∫_0^t σ^{−2}(B_s) ds.
It follows that T_t is ℱ_t^B-measurable, ⟨M, M⟩_t, which is the inverse of (T_t), is
ℱ_∞^B-measurable, and M = B_{⟨M,M⟩} is consequently also ℱ_∞^B-measurable, which
completes the proof.
Remark. We stress that, in this proposition, we have not proved that M is (ℱ_t^B)-adapted; in fact, it is (ℱ^B_{⟨M,M⟩_t})-adapted (see Exercise (4.16)).
(1.12) Exercise. Let C be a time-change and D be a time-change relative to
(ℱ̂_t) = (ℱ_{C_t}). Prove that s → C_{D_s} is a time-change and ℱ̂_{D_s} = ℱ_{C_{D_s}}.
(1.13) Exercise. 1°) Let A be a right-continuous adapted increasing process. If X
and Y are two positive measurable processes such that
E[X_T 1_{(T<∞)}] = E[Y_T 1_{(T<∞)}]
for every (ℱ_t)-stopping time T, in particular, if Y is the (ℱ_t)-optional projection
of X (Theorem (5.6) Chap. IV), then for every t ≤ +∞,
E[∫_0^t X_s dA_s] = E[∫_0^t Y_s dA_s].
[Hint: Use the same device as in Proposition (1.4) for the time-change associated with A.]
2°) If A is continuous, prove that the same conclusion is valid if the assumption holds only for predictable stopping times, hence, in particular, if Y is the
predictable projection of X.
[Hint: Use C_{t−} instead of C_t and prove that C_{t−} is a predictable stopping
time.]
The result is actually true if A is merely predictable.
3°) If M is a bounded right-continuous positive martingale and A is a right-continuous increasing adapted process such that A_0 = 0 and E[A_t] < ∞ for every
t, then
E[M_t A_t] = E[∫_0^t M_s dA_s]
for every t > 0. The question 3°) is independent of 2°).
#
(1.14) Exercise (Gaussian martingales. Converse to Exercise (1.35) Chap. IV).
1°) If M is a cont. loc. mart. vanishing at zero and if ⟨M, M⟩ is deterministic,
then M is a Gaussian martingale and has independent increments.
[Hint: This can be proved either by applying Theorem (1.6) or by rewriting in
that case the proof of P. Lévy's characterization theorem.]
2°) If B is a standard BM¹ and β = H · B where H is an (ℱ_t^B)-predictable
process such that |H| = 1, prove that the two-dimensional process (B, β) is
Gaussian iff H is deterministic.
(1.15) Exercise. Let M be a continuous local martingale. Prove that on {⟨M, M⟩_∞
= ∞}, one has
lim sup_{t→∞} M_t / (2⟨M, M⟩_t log₂⟨M, M⟩_t)^{1/2} = 1 a.s.
(1.16) Exercise (Law of large numbers for local martingales). Let A ∈ 𝒜⁺
be such that A_0 > 0 a.s. and M be a continuous local martingale vanishing at 0.
We set
Z_t = ∫_0^t A_s^{−1} dM_s.
1°) Prove that
M_t = ∫_0^t (Z_t − Z_s) dA_s + Z_t A_0.
2°) It is assumed that lim_{t→∞} Z_t exists a.s. Prove that lim_{t→∞}(M_t/A_t) = 0
on the set {A_∞ = ∞}.
3°) If f is an increasing function from [0, ∞[ into ]0, ∞[ such that
∫^∞ f(t)^{−2} dt < ∞, then
lim_{t→∞} M_t/f(⟨M, M⟩_t) = 0 a.s. on {⟨M, M⟩_∞ = ∞}.
In particular, lim_{t→∞} M_t/⟨M, M⟩_t = 0 a.s. on {⟨M, M⟩_∞ = ∞}.
4°) Prove the result in 3°) directly from Theorem (1.6).
#
(1.17) Exercise. 1°) If M is a continuous local martingale, we denote by β(M)
the DDS BM of M. If C is a finite time-change such that C_∞ = ∞ and if M is
C-continuous, prove that β(M̂) = β(M).
2°) If h > 0, prove that β(h^{−1}M) = β(M)^{(h)} where X^{(h)}_t = h^{−1}X_{h²t}. Conclude
that β(M^{(h)}) = β(M)^{(h)}.
*
(1.18) Exercise. Extend Theorems (1.6) and (1.9) to continuous local martingales
defined on a stochastic interval [0, T[ (see Exercises (1.48) and (3.28) in Chap.
IV).
#
(1.19) Exercise. In the situation of Theorem (1.6), prove that ℱ_∞^M is equal to the
completion of σ((B_s, T_s), s ≥ 0). Loosely speaking, if you know B and T, you
can recover M.
*
(1.20) Exercise (Holder condition for semimartingales). If X is a cont. semimartingale and A the Lebesgue measure on lR+ prove that
A
({t::: 0: ~m£-aIXt+E - Xtl > oD = 0 a.s.,
for every a < 1/2.
[Hint: Use the DDS and Lebesgue derivation theorems.]
(1.21) Exercise. Let (X, Y) be a BM² and H a locally bounded (ℱ_t^X)-predictable
process such that ∫_0^∞ H_s² ds = ∞ a.s. Set M_t = ∫_0^t H_s dY_s and call T_t the inverse
of ⟨M, M⟩_t. Prove that the processes M_{T_t} and X_{T_t} are independent.
#
(1.22) Exercise. In the notation of the DDS theorem, if there is a strictly positive
function f such that
⟨M, M⟩_t = ∫_0^t f(M_s) ds and ⟨M, M⟩_∞ = ∞,
then
T_t = ∫_0^t (f(B_s))^{−1} ds.
(1.23) Exercise. Let B be an (ℱ_t)-Brownian motion and T be a bounded (ℱ_t)-stopping time.
1°) Using Exercise (4.25) in Chap. IV and the fact that E[B_T²] = E[T], prove
that, for p > 2, there is a universal constant C_p such that
E[(B*_T)^{2p}] ≤ C_p E[T^p].
By the same device as in Sect. 4 Chap. IV, extend the result to all p's.
2°) By the same argument, prove the reverse inequality.
3°) Write down a complete proof of the BDG inequalities via the DDS theorem.
#
(1.24) Exercise. This exercise aims at answering in the negative the following
question: if M is a cont. loc. mart. and H a predictable process such that
∫_0^t H_s² d⟨M, M⟩_s < ∞ a.s. for every t, is it true then that
(*) E[(∫_0^t H_s dM_s)²] = E[∫_0^t H_s² d⟨M, M⟩_s],
whether these quantities are finite or not? The reader will observe that Fatou's
lemma entails that an inequality is always true.
1°) Let B be a standard BM and H be an (ℱ_t^B)-predictable process such that
∫_0^t H_s² ds < ∞ for every t < 1, but ∫_0^1 H_s² ds = ∞
(the reader will provide simple examples of such H's). Prove that the loc. mart.
M_t = ∫_0^t H_s dB_s is such that
lim inf_{t→1} M_t = −∞,  lim sup_{t→1} M_t = +∞ a.s.
2°) For a ∈ ℝ, a ≠ 0, give an example of a cont. loc. mart. N vanishing at 0
and such that
i) for every t_0 < 1, (N_t, t ≤ t_0) is an L²-bounded martingale;
ii) N_t = a for t ≥ 1.
Prove furthermore that these conditions force
E[⟨N, N⟩_1^{1/2}] = ∞,
and conclude on the question raised at the beginning of the exercise. This provides
another example of a local martingale bounded in L² which is nevertheless not a
martingale (see Exercise (2.13)).
3°) This raises the question whether the fact that (*) is true for every bounded
H characterizes the L²-bounded martingales. Again the answer is in the negative.
For a > 0 define
By stopping ∫_0^· (1/2)(1 − s)^{−1} dB_s at time S_X for a suitable r.v. X,
prove that there exists a cont. loc. mart. M for which (*) obtains for every
bounded H and E[M_1²] = E[⟨M, M⟩_1] = ∞. Other examples may be obtained
by considering filtrations (ℱ_t) with non-trivial initial σ-field and local martingales
such that ⟨M, M⟩_t is ℱ_0-measurable for every t.
(1.25) Exercise. By using the stopping times T_{a+} of Proposition (3.9) Chap. III,
prove that in Proposition (1.5) the C-continuity cannot be omitted.
(1.26) Exercise. A cont. loc. mart. with increasing process t ∧ T, where T is a
stopping time, is a BM stopped at T.
(1.27) Exercise. A time-changed uniformly integrable martingale is a uniformly
integrable martingale, even if the time-change takes on infinite values.
§2. Conformal Martingales and Planar Brownian Motion
This section is devoted to the study of a class of two-dimensional local martingales
which includes the planar BM. We will use the complex representation of ℝ²; in
particular, the planar BM will be written B = B¹ + iB², where (B¹, B²) is a pair of
independent linear BM's, and we speak of the "complex Brownian motion". More
generally, we recall from Sect. 3 in Chap. IV that a complex local martingale is
a process Z = X + iY where X and Y are real local martingales.
(2.1) Proposition. If Z is a continuous complex local martingale, there exists a unique continuous complex process of finite variation vanishing at zero, denoted by $\langle Z, Z\rangle$, such that $Z^2 - \langle Z, Z\rangle$ is a complex local martingale. Furthermore, the following three properties are equivalent:
i) $Z^2$ is a local martingale;
ii) $\langle Z, Z\rangle = 0$;
iii) $\langle X, X\rangle = \langle Y, Y\rangle$ and $\langle X, Y\rangle = 0$.
Proof. It is enough to define $\langle Z, Z\rangle$ by $\mathbb{C} \times \mathbb{C}$-linearity, that is

$$\langle Z, Z\rangle = \langle X + iY, X + iY\rangle = \langle X, X\rangle - \langle Y, Y\rangle + 2i\langle X, Y\rangle.$$

Plainly, the process thus defined enjoys all the properties of the statement and the uniqueness follows from the usual argument (Proposition (1.12) in Chap. IV) applied to the real and imaginary parts.
(2.2) Definition. A local martingale satisfying the equivalent properties of the above statement is called a conformal local martingale (abbreviated to conf. loc. mart.).
190
Chapter V. Representation of Martingales
Obviously, the planar BM is a conf. loc. mart., and if H is a complex-valued locally bounded predictable process and Z a conf. loc. mart., then $U_t = \int_0^t H_s\,dZ_s$ is a conf. loc. mart.
For a conf. loc. mart. Z, one sees that $\langle \mathrm{Re}\,Z, \mathrm{Re}\,Z\rangle = \frac{1}{2}\langle Z, \bar Z\rangle$; in particular, $\langle U, \bar U\rangle_t = \int_0^t |H_s|^2\,d\langle Z, \bar Z\rangle_s$. Moreover, Itô's formula takes on a simpler form. Let us recall that

$$\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\frac{\partial}{\partial y}\right) \qquad \text{and} \qquad \frac{\partial}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right)$$

and that a function $F : \mathbb{C} \to \mathbb{C}$ which is differentiable as a function of both variables x and y is holomorphic if and only if $\partial F/\partial \bar z = 0$, in which case we set $F' = \partial F/\partial z$.
(2.3) Proposition. If Z is a conf. loc. mart. and F a complex function on ℂ which is twice continuously differentiable (as a function of two real variables), then

$$F(Z_t) = F(Z_0) + \int_0^t \frac{\partial F}{\partial z}(Z_s)\,dZ_s + \int_0^t \frac{\partial F}{\partial \bar z}(Z_s)\,d\bar Z_s + \frac{1}{4}\int_0^t \Delta F(Z_s)\,d\langle Z, \bar Z\rangle_s.$$

In particular, if F is harmonic, F(Z) is a local martingale and, if F is holomorphic,

$$F(Z_t) = F(Z_0) + \int_0^t F'(Z_s)\,dZ_s.$$

Proof. Straightforward computations using Itô's formula. □
Remark. If Z is conformal and F is holomorphic, F(Z) is conformal. We will give shortly a more precise result in the case of BM².
(2.4) Theorem. If Z is a conformal local martingale and $Z_0 = 0$, there exists (possibly on an enlargement of the probability space) a complex Brownian motion B such that $Z_t = B_{\langle X, X\rangle_t}$.
Proof. Since $\langle X, X\rangle = \langle Y, Y\rangle$ and $\langle X, Y\rangle = 0$, Theorem (1.9), applied to the 2-dimensional local martingale (X, Y), implies the existence of a complex Brownian motion B such that, for $t < \langle X, X\rangle_\infty$, $B_t = Z_{T_t}$ where $T_t = \inf\{u : \langle X, X\rangle_u > t\}$. The result follows as in the proof of Theorem (1.6). □
The foregoing theorem has a very important corollary which is known as the
conformal invariance of complex Brownian motion.
(2.5) Theorem. If F is an entire and non-constant function, $F(B_t)$ is a time-changed BM. More precisely, there exists on the probability space of B a complex Brownian motion $\tilde B$ such that

$$F(B_t) = F(B_0) + \tilde B_{\langle X, X\rangle_t},$$

where $\langle X, X\rangle_t = \int_0^t |F'(B_s)|^2\,ds$ is strictly increasing and $\langle X, X\rangle_\infty = \infty$.
Proof. If F is an entire function, $F^2$ is also an entire function and by Proposition (2.3), $F^2(B)$ is a loc. mart. As a result, F(B) is a conformal local martingale to which we may apply Theorem (2.4). By the remarks before Proposition (2.3) and the Proposition itself, for $X = \mathrm{Re}\,F(B_t)$, we have

$$\langle X, X\rangle_t = \int_0^t |F'(B_s)|^2\,ds.$$

As F' is entire and not identically zero, the set Γ of its zeros is countable; therefore $P\left(\int_0^\infty 1_\Gamma(B_s)\,ds = 0\right) = 1$ and $\langle X, X\rangle$ is strictly increasing.
It remains to prove that $\langle X, X\rangle_\infty = \infty$; the proof of this fact will require some additional information of independent interest. What we have proved so far is that F(B) has the same paths as a complex BM but possibly run at a different speed. The significance of the fact that $\langle X, X\rangle$ is strictly increasing is that these paths are run without gaps, and this up to time $\langle X, X\rangle_\infty$. When we prove that $\langle X, X\rangle_\infty = \infty$, we will know that the paths of F(B) are exactly those of a BM. The end of the proof of Theorem (2.5) is postponed until we have proved the recurrence of planar BM in Theorem (2.8). □
We begin with a first result which is important in its own right. We recall from Sect. 2 Chap. III that hitting times $T_A$ are stopping times so that the events $\{T_A < \infty\}$ are measurable.
(2.6) Definition. For a Markov process with state space E, a Borel set A is said to be polar if

$$P_z[T_A < \infty] = 0 \quad \text{for every } z \in E.$$
(2.7) Proposition. For the BM in $\mathbb{R}^d$ with $d \ge 2$, the one-point sets are polar sets.
Proof. Plainly, it is enough to prove the result for d = 2, and because of the geometrical invariance properties of BM, it suffices to show that the planar BM started at 0 does not hit the point set {(−1, 0)}. By what we already know, the process $M_t = \exp(B_t) - 1$ may be written $\tilde B_{A_t}$, where $\tilde B$ is a planar BM and $A_t = \int_0^t \exp(2X_s)\,ds$ where X is the real component of B. The process A is clearly strictly increasing. We also claim that $A_\infty = \infty$ a.s. Otherwise, M would converge in ℂ as t tends to infinity; since $|\exp(B_t)| = \exp(X_t)$ where X, as a linear BM, is recurrent, this is impossible. As a result, the paths of M are exactly the paths of a BM (run at a different speed) and, since $\exp(B_t)$ never vanishes, the result is established.
Remarks. 1°) For BM¹, no non-empty set is polar, whereas for BM^d, d ≥ 2, all one-point sets, hence all countable sets, are polar; for instance, if we call ℚ² the set of points in ℝ² with rational coordinates, the Brownian path $\{B_t, t \ge 0\}$ is a.s. contained in $\mathbb{R}^2 \setminus \mathbb{Q}^2$. But there are also uncountable polar sets, even in the case d = 2.
2°) Another more elementary proof of Proposition (2.7) was given in Exercise (1.20) of Chap. I and yet another is given in Exercise (2.14).
We may now state the recurrence property of the planar BM, another proof of
which is given in Exercise (2.14).
(2.8) Theorem. Almost surely, the set $\{t : B_t \in U\}$ is unbounded for all open subsets U of ℝ².
Proof. By using a countable basis of open balls, it is enough to prove the result whenever U is the ball B(z, r) where $z = x + iy$ and $r > 0$.
Since one-point sets are polar, we may consider the process $M_t = \log|B_t - z|$ which by the same reasoning as above is equal to $\beta_{A_t}$, where β is a linear BM started at $\log|z|$ and $A_t = \int_0^t |B_u - z|^{-2}\,du$. But since $\sup_{s \le t} M_s$ is larger than $\sup_{s \le t} \log|X_s - x|$ which goes to infinity as t tends to +∞, it follows from Remark 3°) after Proposition (1.8) that $A_t$ converges to infinity. As a result, $\liminf_t M_t = -\infty$ a.s., hence M takes on values less than log r at arbitrarily large times, which ends the proof.
Remarks. 1°) One can actually prove that for any Borel set A with strictly positive Lebesgue measure, the set $\{t : B_t \in A\}$ is a.s. unbounded and in fact of infinite Lebesgue measure (see Sect. 3 Chap. X).
2°) The above result shows that a.s. the Brownian path, which is of Lebesgue measure zero, is dense in the plane.
We can now turn to the
(2.9) End of the proof of Theorem (2.5). If we had $\langle X, X\rangle_\infty < \infty$, then $F(B_t)$ would have a limit as t tends to infinity. But since F is non-constant, one can find two disjoint open sets $U_1$ and $U_2$ such that $F(\bar U_1) \cap F(\bar U_2) = \emptyset$ and, as $\{t : B_t \in U_1\}$ and $\{t : B_t \in U_2\}$ are unbounded, $F(B_t)$ cannot have a limit as t tends to infinity. □
We now state one more result about recurrence. We have just seen that the BM in $\mathbb{R}^d$ is recurrent for d = 1 and d = 2. This is no longer true for d ≥ 3, in which case the BM is said to be transient. More precisely we have the
(2.10) Theorem. If $d \ge 3$, $\lim_{t \to \infty} |B_t| = +\infty$ almost surely.
Proof. It is clearly enough to prove this when d = 3 and when B is started at $x_0 \neq 0$. Since {0} is a polar set, by Itô's formula, $1/|B_t|$ is a positive loc. mart., hence a positive supermartingale which converges a.s. to a r.v. H. By Fatou's lemma, $E_{x_0}[H] \le \liminf_t E_{x_0}[1/|B_t|]$. But the scaling property shows that $E_{x_0}[1/|B_t|] = O(1/\sqrt{t})$. As a result, $H = 0$ $P_{x_0}$-a.s. and the proof is easily completed. □
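The scaling estimate at the heart of this proof, $E_{x_0}[1/|B_t|] = O(1/\sqrt{t})$, is easy to check by Monte Carlo, using that under $P_{x_0}$ the r.v. $B_t$ has the law of $x_0 + \sqrt{t}\,G$ with G a standard Gaussian vector. A sketch with illustrative parameters:

```python
import numpy as np

# Monte Carlo check of the scaling step in the proof of Theorem (2.10):
# for BM in R^3 started at x0 != 0, E_{x0}[1/|B_t|] decays like 1/sqrt(t).
# Starting point and sample size are arbitrary.
rng = np.random.default_rng(1)
x0 = np.array([1.0, 0.0, 0.0])
N = 200_000
G = rng.standard_normal((N, 3))

def mean_inv_norm(t):
    """Monte Carlo estimate of E_{x0}[ 1/|B_t| ] at a fixed time t."""
    return float(np.mean(1.0 / np.linalg.norm(x0 + np.sqrt(t) * G, axis=1)))

est = {t: mean_inv_norm(t) for t in (1.0, 100.0, 10_000.0)}
```

Multiplying t by 100 should divide the estimate by roughly 10, consistent with the $O(1/\sqrt{t})$ bound.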
We close this section with a representation result for the complex BM B which we study under the law $P_a$ for $a \neq 0$. Since $B_t \neq 0$ for all t, $P_a$-a.s., we may choose a continuous determination $\theta_t(\omega)$ of the argument of $B_t(\omega)$ such that $\theta_0(\omega)$ is a constant and $e^{i\theta_0} = a/|a|$. We then have $B_t = \rho_t \exp(i\theta_t)$ and θ is adapted to the filtration of B. The processes $\rho_t$ and $\theta_t$ may be analysed in the following way.
(2.11) Theorem. There is a planar BM (β, γ) such that

$$\rho_t = |a| \exp(\beta_{C_t}), \qquad \theta_t = \theta_0 + \gamma_{C_t},$$

where $C_t = \int_0^t \rho_s^{-2}\,ds$. Moreover, $\mathscr{F}_\infty^\beta = \mathscr{F}_\infty^\rho$, hence γ is independent of ρ.
Proof. Because B almost-surely never vanishes, we may define the conformal local martingale H by

$$H_t = \int_0^t B_s^{-1}\,dB_s$$

and we have $\langle \mathrm{Re}\,H, \mathrm{Re}\,H\rangle_t = C_t$. Applying the integration by parts formula to the product $B_t \exp(-H_t)$, it is easily seen, since $\langle B, B\rangle = 0$, that $B_t = a \exp(H_t)$. By Theorem (2.4), there is a planar BM which we denote by β + iγ such that $H_t = \beta_{C_t} + i\gamma_{C_t}$, which proves the first half of the statement.
The process β is the DDS Brownian motion of the local martingale $\mathrm{Re}\,H_t = \log(\rho_t/|a|)$. But with X = Re B, Y = Im B,

$$\mathrm{Re}\,H_t = \int_0^t \frac{X_s\,dX_s + Y_s\,dY_s}{\rho_s^2} = \int_0^t \frac{d\xi_s}{\rho_s}$$

where ξ is a real BM. We may rewrite this as

$$\log \rho_t = \log|a| + \int_0^t \sigma(\log \rho_s)\,d\xi_s$$

with $\sigma(x) = e^{-x}$; it follows from Proposition (1.11) that $\mathscr{F}^\beta = \mathscr{F}^{\log\rho} = \mathscr{F}^\rho$, which is the second half of the statement. □
This result shows that, as one might expect, the smaller the modulus of B, the more rapidly the argument of B varies. Moreover, as $\theta_t$ is a time-changed BM, it is easy to see that

$$\liminf_{t \to \infty} \theta_t = -\infty, \qquad \limsup_{t \to \infty} \theta_t = +\infty \quad \text{a.s.}$$

Thus, the planar BM winds itself an arbitrarily large number of times around 0, then unwinds itself, and does this infinitely often. Stronger asymptotic results on $\theta_t$ or, more generally, on the behavior of planar BM will be given in Chaps. VII and XIII.
At the cost of losing some information on the modulus, the preceding result may be stated

$$B_t = \rho_t e^{i\gamma_{C_t}},$$

where γ is a linear BM independent of ρ and $C_t = \int_0^t \rho_s^{-2}\,ds$. This is known as the "skew-product" representation of two-dimensional Brownian motion.
It is interesting to stress the fact that we have worked under $P_a$ with $a \neq 0$. For a = 0 and for t > 0, we may still, by the polarity of {0}, write a.s. unambiguously $B_t = \rho_t \cdot \Theta_t$ with $|\Theta_t| = 1$. But we have no means of choosing a continuous determination of the argument of $\Theta_t$ adapted to $(\mathscr{F}_t)$. This example hints at the desirability of defining and studying semimartingales on open subsets of ℝ₊.
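The skew-product representation can be illustrated numerically: along a simulated path started at a = 1, the quadratic variation of the continuous winding angle θ must agree with the clock $C_t = \int_0^t \rho_s^{-2}\,ds$. A sketch (step size and horizon are arbitrary choices):

```python
import numpy as np

# Skew-product check: for planar BM from a = 1, the winding angle theta
# is a BM run with the clock C_t = \int_0^t rho_s^{-2} ds; here we verify
# that the empirical quadratic variation of theta matches that clock.
rng = np.random.default_rng(2)
n, T = 250_000, 0.25
dt = T / n
dB = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(dt)
B = 1.0 + np.concatenate(([0.0 + 0.0j], np.cumsum(dB)))

rho = np.abs(B)
# continuous determination of the argument: accumulate angle increments
dtheta = np.angle(B[1:] / B[:-1])        # small for small steps
qv_theta = np.sum(dtheta ** 2)           # empirical <theta, theta>_T
clock = np.sum(rho[:-1] ** -2 * dt)      # C_T = \int_0^T rho_s^{-2} ds
rel_err = abs(qv_theta - clock) / clock
```

The agreement of the two quantities reflects the fact that the angle moves faster exactly when the modulus is small.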
(2.12) Exercise. Let $A = \{z : |\mathrm{Im}\,z| < \pi/2\}$; compute the law of $B_{T_{A^c}}$ for the complex Brownian motion $B^1 + iB^2$ started at 0.
[Hint: The exponential function maps A onto the half-plane H. Use the exit distribution for H.]
#
(2.13) Exercise (An important counterexample). 1°) Let B be the BM in $\mathbb{R}^d$ with d ≥ 3 started at $x \neq 0$. Prove that $|B_t|^{2-d}$ is a local martingale.
[Hint: $|x|^{2-d}$ is harmonic in $\mathbb{R}^d \setminus \{0\}$.]
2°) Use 1°) in the case d = 3 to give an example of a local martingale which is bounded in $L^2$ but is not a true martingale. This gives also an example of a uniformly integrable local martingale which is not a true martingale, and an example of a uniformly integrable supermartingale X such that the set {X_T; T stopping time} is not uniformly integrable. One can also find for every p ≥ 1 a local martingale bounded in $L^p$ which is not a martingale, as is seen in Exercise (1.16) of Chap. XI.
3°) If B is the complex BM, let M = log|B| and prove that for ε > 0 and a < 2

$$\sup_{\varepsilon \le t \le 1} E_x\left[\exp(a|M_t|)\right] < \infty.$$

This provides an example of a local martingale with exponential moments which is not a martingale. Furthermore, $E_x[\langle M, M\rangle_t] = \infty$, which bears out a remark made after Corollary (1.25) in Chap. IV.
#
(2.14) Exercise. 1°) Retain the situation of 1°) in the preceding exercise and let a and b be two numbers such that 0 < a < |x| < b. Set $R_a = T_{B(0,a)}$ and $S_b = T_{B(0,b)^c}$, and prove that

$$P_x[R_a < S_b] = \left(|x|^{2-d} - b^{2-d}\right) / \left(a^{2-d} - b^{2-d}\right).$$

[Hint: Stop the local martingale $|B_t|^{2-d}$ at $R_a \wedge S_b$ and proceed as in Chap. II.]
Prove that $P_x[R_a < \infty] = (a/|x|)^{d-2}$.
2°) Using the function log|x| instead of $|x|^{2-d}$, treat the same questions for d = 2. Base on this another proof that one-point sets are polar.
3°) Deduce from 2°) that for d = 2, any open set U and any z, $P_z[T_U < \infty] = 1$.
4°) Let $D_1$ and $D_2$ be two disjoint disks and define inductively a double sequence of stopping times by

$$T_1 = T_{D_1}, \qquad U_1 = \inf\{t > T_1, B_t \in D_2\},$$
$$T_n = \inf\{t > U_{n-1}, B_t \in D_1\}, \qquad U_n = \inf\{t > T_n, B_t \in D_2\}.$$

Prove that, for any z and every n, $P_z[U_n < \infty] = 1$, and deduce therefrom another proof of Theorem (2.8).
(2.15) Exercise. In the situation of Theorem (2.11) prove that the filtration generated by $\theta_t$ is the same as that of $B_t$.
[Hint: ρ is $(\mathscr{F}_t^\theta)$-adapted.]
(2.16) Exercise. Let $B_t$ be the complex BM and suppose that $B_0 = 1$ a.s. For r > 0, let

$$T_r = \inf\{t : |B_t| = r\}.$$

Prove that, if $\theta_t$ is the continuous determination of the argument of $B_t$ which vanishes for t = 0, then $(\theta_{T_r})$ is, either for r ≤ 1 or for r ≥ 1, a process with independent increments, and that the law of $\theta_{T_r}$ is the Cauchy law with parameter $|\log r|$.
[Hint: Use the results in Proposition (3.11) of Chap. III.]
(2.17) Exercise. 1°) (Liouville's theorem). Deduce from the recurrence of BM² and the martingale convergence theorem that bounded harmonic functions in the whole plane are constant.
[Hint: See the reasoning in Proposition (3.10) of Chap. X.]
2°) (d'Alembert's theorem). Let P be a non-constant polynomial with complex coefficients. Use the properties of BM² to prove that, for any ε > 0, the compact set $\{z : |P(z)| \le \varepsilon\}$ is non-empty, and conclude that the equation P(z) = 0 has a solution.
(2.18) Exercise. 1°) Let Z = X + iY be the complex BM started at −1 and

$$T = \inf\{t : Y_t = 0,\ X_t \ge 0\}.$$

Prove that the law of $\log X_T$ has a density equal to $(2\pi \cosh(x/2))^{-1}$.
[Hint: See Exercise (3.25) in Chap. III.]
2°) As a result, the law of $X_T$ is that of $C^2$ where C is a Cauchy r.v.
[Hint: See Sect. 6 Chap. 0.]
*#
(2.19) Exercise. We retain the notation of Exercise (2.18) in Chap. IV.
1°) Let F be a holomorphic function in an open subset U of ℂ and ω the differential form F(z)dz. If Z is a conf. loc. mart. such that $P[\exists t \ge 0 : Z_t \notin U] = 0$, then

$$\int_{Z([0,t])} \omega = \int_0^t F(Z_s)\,dZ_s = \int_0^t F(Z_s) \circ dZ_s \quad \text{a.s.},$$

where ∘ stands for the Stratonovich integral.
2°) In the situation of Theorem (2.11), we have

$$\theta_t - \theta_0 = \int_{Z([0,t])} \omega$$

with $\omega = (x^2 + y^2)^{-1}(x\,dy - y\,dx) = \mathrm{Im}(dz/z)$.
3°) In the same situation, let $S_t$ be the area swept by the segment $[0, Z_u]$, $u \le t$ (see Exercise (2.18) in Chap. IV). Prove that there is a linear BM δ independent of ρ such that $S_t = \delta_{A_t}$ where $A_t = \frac{1}{4}\int_0^t \rho_s^2\,ds$. Another proof is given in Exercise (3.10) Chap. IX which removes the condition $a \neq 0$.
(2.20) Exercise. 1°) Let f be a meromorphic function in ℂ and A be the set of its poles. For $z_0 \notin A$, prove that f(B) is a time-changed BM² started at $f(z_0)$.
2°) Let z be a non-zero complex number. If $T = \inf\{t : |B_t| = 1\}$, prove that $B_T$ has the same law under $P_z$ and $P_{1/\bar z}$.
*
(2.21) Exercise (Exit time from a cone). Retain the situation and notation of Theorem (2.11) with a = 1 and set $\theta_0 = 0$. For n, m > 0, define

$$T = \inf\{u : \theta_u \notin [-n, m]\}, \qquad \tau = \inf\{v : \gamma_v \notin [-n, m]\}.$$

1°) For δ > 0, prove that $E[\rho_T^{2\delta}] = E[\exp(2\delta^2\tau)]$. Prove that consequently $E[T^\delta] < \infty$ implies $2\delta(n+m) < \pi$.
[Hint: Use the result in Exercise (3.10) 2°) of Chap. II.]
2°) From the equality $T = \int_0^\tau \exp(2\beta_u)\,du$, deduce that

$$E[T^\delta] \le 2E\left[\tau^\delta \exp(2\delta^2\tau)\right].$$

Conclude that $E[T^\delta] < \infty$ if and only if $2\delta(n+m) < \pi$.
(2.22) Exercise. Let $C(\omega) = \{B_t(\omega), t \ge 0\}$ where B is the BM².
1°) Prove that C(ω) is a.s. a Borel subset of ℝ². Call Λ(ω) its Lebesgue measure.
2°) Prove that

$$\Lambda(\omega) = \int_{\mathbb{R}^2} \left(\sup_{t \ge 0} 1_{\{z\}}(B_t)\right) dx\,dy, \qquad z = x + iy,$$

and conclude that Λ(ω) = 0 a.s. Compare with the proof given in Exercise (1.17) Chap. I.
(2.23) Exercise. If Z is the complex BM, a complex $C^2$-function f on ℝ² is holomorphic if and only if

$$f(Z_t) = f(Z_0) + \int_0^t H_s\,dZ_s$$

for some predictable process H.
*
(2.24) Exercise. We say that $Z \in \mathscr{C}$ if for every ε > 0, the process $Z_{t+\varepsilon}$, t ≥ 0, is a $(\mathscr{F}_{t+\varepsilon})$-conformal martingale (see Exercises (1.42) and (3.26) in Chap. IV).
1°) Prove that, if $Z \in \mathscr{C}$, then a.s.,

$$\{\omega : \liminf_{t \downarrow 0} |Z_t(\omega)| < \infty\} = \{\omega : \lim_{t \downarrow 0} Z_t(\omega) \text{ exists in } \mathbb{C}\}.$$

2°) Derive therefrom that for a.e. ω, one of the following three events occurs:
i) $\lim_{t \downarrow 0} Z_t(\omega)$ exists in ℂ;
ii) $\lim_{t \downarrow 0} |Z_t(\omega)| = +\infty$;
iii) for every δ > 0, $\{Z_t(\omega), 0 < t < \delta\}$ is dense in ℂ.
[Hint: For $z \in \mathbb{C}$ and r > 0, define $T = \inf\{t : |Z_t - z| < r\}$ and look at $V_t = (Z_t - z)^{-1} 1_{(T>0)}$ which is an element of $\mathscr{C}$.]
Describe examples where each of the above possibilities does occur.
#
(2.25) Exercise. 1°) Let $\rho_t$ be the modulus of BM^d, d ≥ 2, say X, and ν a probability measure such that ν({0}) = 0. Prove that, under $P_\nu$,

$$B_t = \rho_t - \rho_0 - \frac{1}{2}(d-1)\int_0^t \rho_s^{-1}\,ds$$

is a BM. In the language of Chap. IX, $\rho_t$ is a solution to the stochastic differential equation

$$\rho_t = \rho_0 + B_t + \frac{1}{2}(d-1)\int_0^t \rho_s^{-1}\,ds.$$

The processes $\rho_t$ are Bessel processes of dimension d and will be studied in Chap. XI.
2°) We now remove the condition on ν. Prove that under $P_0$ the r.v. $\int_0^t \rho_s^{-1}\,ds$ is in $L^p$ for every p, and extend 1°) to $\nu = \varepsilon_0$.
[Hint: $\rho_t - \rho_s - \frac{1}{2}(d-1)\int_s^t \rho_u^{-1}\,du = \int_s^t \langle \mathrm{grad}\,r(X_u), dX_u\rangle$ where r is the distance to zero. Prove that the right-hand side is bounded in $L^2$ as s tends to 0.]
*
J:
(2.26) Exercise (On polar functions). Let $f = (f^1, f^2)$ be a continuous ℝ²-valued deterministic function of bounded variation such that $f(0) \neq 0$ and B = (X, Y) a planar BM(0). Set $Z_t = B_t + f(t)$ and $T = \inf\{t : Z_t = 0\}$.
1°) Prove that

$$\beta_t = \int_0^t |Z_s|^{-1}\left((X_s + f^1(s))\,dX_s + (Y_s + f^2(s))\,dY_s\right)$$

is a linear BM on [0, T[ (see Exercise (3.28) Chap. IV).
2°) By first proving that $E\left[\int_0^{1/n} |Z_s|^{-1}\,|df(s)|\right] < \infty$, show that a.s.

$$\int_0^t |Z_s|^{-1}\,|df(s)| < \infty, \quad \text{for every } t.$$

[Hint: Use the scaling property of B and the fact that, if G is a two-dimensional reduced Gaussian r.v., then $\sup_{m \in \mathbb{C}} E\left[|m + G|^{-1}\right] < \infty$.]
3°) By considering log|Z|, prove that T is infinite a.s. In other words, f is a polar function in the sense of Exercise (1.20) of Chap. I.
[Hint: Use Exercise (1.18).]
If f is absolutely continuous, a much simpler proof will be given in Exercise (2.15) Chap. VIII.
(2.27) Exercise (Polarity for the Brownian sheet). Let $X_{(s,t)}$ be a complex Brownian sheet, namely $X_{(s,t)} = W^1_{(s,t)} + iW^2_{(s,t)}$ where $W^1$ and $W^2$ are two independent Brownian sheets. If $\gamma(s) = (x(s), y(s))$, s ∈ [0, 1], is a continuous path in ]0, ∞[ × ]0, ∞[, write $X_\gamma$ for the process $s \to X_{(x(s), y(s))}$.
1°) If x and y are both increasing (or decreasing), prove that the one-point sets of ℂ are polar for $X_\gamma$.
2°) Treat the same question when x is increasing and y decreasing, or the reverse.
[Hint: Use Exercise (1.13) in Chap. III and the above exercise.]
As a result, if γ is a closed path which is piecewise of one of the four kinds described above, the index of any $a \in \mathbb{C}$ with respect to the path of $X_\gamma$ is a.s. defined.
§3. Brownian Martingales
In this section, we consider the filtration $(\mathscr{F}_t^B)$ but we will write simply $(\mathscr{F}_t)$. It is also the filtration of Sect. 2 Chap. III, where it was called the Brownian filtration.
We call $\mathcal{E}$ the set of step functions with compact support in ℝ₊, that is, of functions f which can be written

$$f = \sum_{j=1}^n \lambda_j 1_{]t_{j-1}, t_j]}.$$

As in Sect. 3 of Chap. IV, we write $\mathscr{E}^f$ for the exponential of $\int_0^\cdot f(s)\,dB_s$.
(3.1) Lemma. The set $\{\mathscr{E}^f_\infty,\ f \in \mathcal{E}\}$ is total in $L^2(\mathscr{F}_\infty, P)$.
Proof. We show that if $Y \in L^2(\mathscr{F}_\infty, P)$ and Y is orthogonal to every $\mathscr{E}^f_\infty$, then the measure Y · P is the zero measure. To this end, it is enough to prove that it is the zero measure on the σ-field $\sigma(B_{t_1}, \ldots, B_{t_n})$ for any finite sequence $(t_1, \ldots, t_n)$.
The function $\varphi(z_1, \ldots, z_n) = E\left[\exp\left(\sum_{i=1}^n z_i (B_{t_i} - B_{t_{i-1}})\right) \cdot Y\right]$ is easily seen to be analytic on $\mathbb{C}^n$. Moreover, by the choice of Y, for any $\lambda_i \in \mathbb{R}$, we have

$$\varphi(\lambda_1, \ldots, \lambda_n) = E\left[\exp\left(\sum_i \lambda_i (B_{t_i} - B_{t_{i-1}})\right) \cdot Y\right] = 0.$$

Consequently, φ vanishes identically and in particular

$$\varphi(i\lambda_1, \ldots, i\lambda_n) = 0.$$

The image of Y · P by the map $\omega \to (B_{t_1}(\omega), \ldots, B_{t_j}(\omega) - B_{t_{j-1}}(\omega), \ldots)$ is the zero measure since its Fourier transform is zero. The measure vanishes on $\sigma(B_{t_1}, \ldots, B_{t_{i+1}} - B_{t_i}, \ldots) = \sigma(B_{t_1}, B_{t_2}, \ldots, B_{t_n})$, which ends the proof.
(3.2) Proposition. For any $F \in L^2(\mathscr{F}_\infty, P)$, there exists a unique predictable process H in $L^2(B)$ such that

$$F = E[F] + \int_0^\infty H_s\,dB_s.$$

Proof. We call $\mathscr{H}$ the subspace of elements F in $L^2(\mathscr{F}_\infty, P)$ which can be written as stated. For $F \in \mathscr{H}$,

$$(*) \qquad E[F^2] = E[F]^2 + E\left[\int_0^\infty H_s^2\,ds\right],$$

which implies in particular the uniqueness in the statement. Thus, if $\{F_n\}$ is a Cauchy sequence of elements of $\mathscr{H}$, the corresponding sequence $\{H^n\}$ is a Cauchy sequence in $L^2(B)$, hence converges to a predictable $H \in L^2(B)$; it is clear that $\{F_n\}$ converges in $L^2(\mathscr{F}_\infty, P)$ to $\lim_n E[F_n] + \int_0^\infty H_s\,dB_s$, which proves that $\mathscr{H}$ is closed.
On the other hand, $\mathscr{H}$ contains all the random variables $\mathscr{E}^f_\infty$ of Lemma (3.1), since, by Itô's formula, we have

$$\mathscr{E}^f_t = 1 + \int_0^t \mathscr{E}^f_s f(s)\,dB_s, \quad \text{for every } t \le \infty.$$

This proves the existence of H. The uniqueness in $L^2(B)$ follows from the identity (*). □
Remark. If the condition $H \in L^2(B)$ is removed, there are infinitely many predictable processes H satisfying the conditions of Proposition (3.2); this is proved in Exercise (2.31) Chap. VI, but it may already be observed that by taking $H = 1_{[0,d_T]}$ with $d_T = \inf\{u > T : B_u = 0\}$, one gets F = 0.
We now turn to the main result of this section, namely the extension of Proposition (3.2) to local martingales. The reader will observe in particular the following remarkable feature of the filtration $(\mathscr{F}_t)$: there is no discontinuous $(\mathscr{F}_t)$-martingale. Using Corollary (5.7) of Chap. IV, this entails the
(3.3) Corollary. For the Brownian filtration, every optional process is predictable.
The reader who is acquainted with the classification of stopping times will also notice that all the stopping times of the Brownian filtration are predictable.
(3.4) Theorem. Every $(\mathscr{F}_t)$-local martingale M has a version which may be written

$$M_t = c + \int_0^t H_s\,dB_s$$

where c is a constant and H a predictable process which is locally in $L^2(B)$. In particular, any $(\mathscr{F}_t)$-local martingale has a continuous version.
Proof. If M is an $L^2$-bounded $(\mathscr{F}_t)$-martingale, by the preceding result, there is a process $H \in L^2(B)$ such that

$$M_t = E[M_\infty \mid \mathscr{F}_t] = E[M_\infty] + E\left[\int_0^\infty H_s\,dB_s \,\Big|\, \mathscr{F}_t\right] = E[M_\infty] + \int_0^t H_s\,dB_s,$$

hence the result is true in that case.
Let now M be uniformly integrable. Since $L^2(\mathscr{F}_\infty)$ is dense in $L^1(\mathscr{F}_\infty)$ there is a sequence of $L^2$-bounded martingales $M^n$ such that $\lim_n E\left[|M_\infty - M^n_\infty|\right] = 0$. By the maximal inequality, for every λ > 0,

$$\lambda P\left[\sup_t |M_t - M^n_t| > \lambda\right] \le E\left[|M_\infty - M^n_\infty|\right].$$

Thanks to the Borel–Cantelli lemma, one can extract a subsequence $\{M^{n_k}\}$ converging a.s. uniformly to M. As a result, M has a continuous version.
If now M is an $(\mathscr{F}_t)$-local martingale, it obviously has a continuous version and thus admits a sequence of stopping times $T_n$ such that $M^{T_n}$ is bounded. By the first part of the proof, the theorem is established. □
It is easy to see that the above reasonings are still valid in a multidimensional context and we have the
(3.5) Theorem. Every $(\mathscr{F}_t^B)$-local martingale, say M, where B is the d-dimensional BM $(B^1, \ldots, B^d)$, has a continuous version and there exist predictable processes $H^i$, locally in $L^2(B^i)$, such that

$$M_t = c + \sum_{i=1}^d \int_0^t H^i_s\,dB^i_s.$$

Remarks. 1°) The processes $H^i$ are equal to the Radon–Nikodym derivatives $\frac{d}{dt}\langle M, B^i\rangle_t$ of $\langle M, B^i\rangle$ with respect to the Lebesgue measure. But, in most concrete examples, they can be computed explicitly. A fairly general result to this effect will be given in Sect. 2 Chap. VIII. Exercises (3.13), (3.16) of this section already give some particular cases. When f is harmonic, the representation of the martingale $f(B_t)$ is given by Itô's formula.
2°) It is an interesting, and for a large part unsolved, problem to study the filtration of the general local martingale obtained in the above results. The reader will find some very partial answers in Exercise (3.12).
The above results are, in particular, representation theorems for $L^2(\mathscr{F}_\infty^B)$. We now turn to another representation of this space; for simplicity, we treat the one-dimensional case. We set

$$\Delta_n = \{(s_1, \ldots, s_n) \in \mathbb{R}_+^n : s_1 > s_2 > \cdots > s_n \ge 0\}$$

and denote by $L^2(\Delta_n)$ the $L^2$-space of Lebesgue measure on $\Delta_n$. The subset $E_n$ of $L^2(\Delta_n)$ of functions f which can be written

$$f(s_1, \ldots, s_n) = \prod_{i=1}^n f_i(s_i)$$

with $f_i \in L^2(\mathbb{R}_+)$ is total in $L^2(\Delta_n)$. For $f \in E_n$, we set

$$I_n(f) = \int_0^\infty f_1(s_1)\,dB_{s_1} \int_0^{s_1} f_2(s_2)\,dB_{s_2} \cdots \int_0^{s_{n-1}} f_n(s_n)\,dB_{s_n}.$$

This kind of iterated stochastic integral has already been encountered in Proposition (3.8) in Chap. IV, and it is easily seen that

$$E\left[I_n(f)^2\right] = \int_{\Delta_n} f^2\,ds_1 \cdots ds_n.$$

(3.6) Definition. For n ≥ 1, the smallest closed linear subspace of $L^2(\mathscr{F}_\infty^B)$ containing $I_n(E_n)$ is called the n-th Wiener chaos and is denoted by $K_n$.
The map $I_n$ is extended to $L^2(\Delta_n)$ by linearity and passage to the limit. If f is in the linear space generated by $E_n$, it may have several representations as linear combinations of elements of $E_n$, but it is easy to see that $I_n(f)$ is nonetheless defined unambiguously. Moreover $I_n$ is an isometry between $L^2(\Delta_n)$ and $K_n$. Actually, using Fubini's theorem for stochastic integrals (Exercise (5.17) Chap. IV), the reader may see that $I_n(f)$ could be defined by straightforward multiple stochastic integration.
Obviously, there is a one-to-one correspondence between $K_n$ and $L^2(\Delta_n)$. Moreover, the spaces $K_n$ and $K_m$ are orthogonal if $n \neq m$, the proof of which we leave to the reader as an exercise. We may now state
(3.7) Theorem. $L^2(\mathscr{F}_\infty^B) = \bigoplus_0^\infty K_n$ where $K_0$ is the space of constants. In other words, for each $Y \in L^2(\mathscr{F}_\infty^B)$ there exists a sequence $(f^n)$ where $f^n \in L^2(\Delta_n)$ for each n, such that

$$Y = E[Y] + \sum_{n=1}^\infty I_n(f^n)$$

in the $L^2$-sense.
Proof. By Proposition (3.8) in Chap. IV, the random variables $\mathscr{E}^f_\infty$ of Lemma (3.1) may be written $1 + \sum_1^\infty I_n(f^n)$ pointwise with $f^n(s_1, \ldots, s_n) = f(s_1)f(s_2)\cdots f(s_n)$. As f is bounded and has compact support, it is easy to see that this convergence holds in $L^2(\mathscr{F}_\infty)$. Thus, the statement is true for $\mathscr{E}^f_\infty$. It is also true for any linear combination of variables $\mathscr{E}^f_\infty$. Since, by Lemma (3.1), every r.v. $Y \in L^2(\mathscr{F}_\infty)$ is the limit of such combinations, the proof is easily completed.
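For $f = 1_{[0,1]}$ the expansion used in this proof is explicit: the n-fold iterated integral equals $\mathrm{He}_n(B_1)/n!$, where $\mathrm{He}_n$ are the probabilists' Hermite polynomials, and the expansion of $\mathscr{E}^f_\infty = \exp(B_1 - \tfrac{1}{2})$ reduces to the Hermite generating function $\exp(x - \tfrac{1}{2}) = \sum_n \mathrm{He}_n(x)/n!$ (a standard identity, stated here without proof). The latter can be checked deterministically at any point:

```python
import math

# Generating-function identity behind Theorem (3.7) for f = 1_[0,1]:
#   exp(x - 1/2) = sum_{n >= 0} He_n(x) / n!
# where He_n are the probabilists' Hermite polynomials
# (He_0 = 1, He_1 = x, He_{n+1} = x He_n - n He_{n-1}).
def hermite_He(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

x = 0.7   # arbitrary evaluation point
partial_sum = sum(hermite_He(n, x) / math.factorial(n) for n in range(30))
target = math.exp(x - 0.5)
err = abs(partial_sum - target)
```

Substituting $x = B_1$ turns this identity into the chaos decomposition of $\mathscr{E}^f_\infty$ itself.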
Remark. The first chaos contains only Gaussian r.v.'s and is in fact the closed Gaussian space generated by the r.v.'s $B_t$, t ≥ 0 (see Exercise (3.11)).
We now come to another question. Theorem (3.4) raises the following problem:
which martingales can be written as (H . B)t for a suitable Brownian motion B?
We give below a partial answer which will be used in Chap. IX.
(3.8) Proposition. If M is a continuous local martingale such that the measure $d\langle M, M\rangle_t$ is a.s. equivalent to the Lebesgue measure, there exist an $(\mathscr{F}_t^M)$-predictable process $f_t$ which is strictly positive $dt \otimes dP$-a.s. and an $(\mathscr{F}_t^M)$-Brownian motion B such that

$$d\langle M, M\rangle_t = f_t\,dt \quad \text{and} \quad M_t = M_0 + \int_0^t f_s^{1/2}\,dB_s.$$

Proof. By Lebesgue's derivation theorem, the process

$$f_t = \lim_{n \to \infty} n\left(\langle M, M\rangle_t - \langle M, M\rangle_{t-1/n}\right)$$

satisfies the requirements in the statement. Moreover, $(f_t)^{-1/2}$ is clearly in $L^2_{loc}(M)$ and the process

$$B_t = \int_0^t f_s^{-1/2}\,dM_s$$

is a continuous local martingale with increasing process t, hence a BM, and the proof is easily completed. □
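The construction in this proof can be exercised numerically: manufacture M with a prescribed strictly positive density f, form $B = \int f^{-1/2}\,dM$, and check that the quadratic variation of B over [0, T] is T, as Lévy's characterization demands (in this synthetic setup B simply recovers the driving noise, which is the point of the construction). All concrete choices below are illustrative:

```python
import numpy as np

# Proposition (3.8), numerically: build M_t = \int_0^t f_s^{1/2} dW_s with
# a strictly positive deterministic density f, then set B = \int f^{-1/2} dM
# and check <M, M>_T ≈ \int_0^T f_s ds and <B, B>_T ≈ T.
rng = np.random.default_rng(4)
n, T = 200_000, 1.0
t = np.linspace(0.0, T, n + 1)[:-1]
f = 1.0 + np.sin(2 * np.pi * t) ** 2        # d<M, M>_t = f_t dt, f > 0

dW = rng.standard_normal(n) * np.sqrt(T / n)
dM = np.sqrt(f) * dW                         # increments of M
dB = dM / np.sqrt(f)                         # increments of the recovered BM

qv_M = np.sum(dM ** 2)
clock_M = np.sum(f * (T / n))                # \int_0^T f_s ds
qv_B = np.sum(dB ** 2)                       # should be close to T
```

The first comparison checks that f really is the density of the bracket of M; the second checks that dividing out $f^{1/2}$ restores the unit clock of a Brownian motion.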
If $d\langle M, M\rangle_t$ is merely absolutely continuous with respect to dt, the above reasoning fails; moreover, the filtration $(\mathscr{F}_t^M)$ is not necessarily rich enough to admit an $(\mathscr{F}_t^M)$-Brownian motion. However, if B' is a BM independent of $\mathscr{F}_\infty^M$ and if we set

$$B_t = \int_0^t 1_{(f_s > 0)} f_s^{-1/2}\,dM_s + \int_0^t 1_{(f_s = 0)}\,dB'_s,$$

then, by Lévy's characterization theorem, B is again a BM and $M_t = M_0 + \int_0^t f_s^{1/2}\,dB_s$. In other words, the foregoing result is still true provided we enlarge the probability space so as to avail ourselves of an independent BM. Using a little linear algebra and an enlargement $(\tilde\Omega, \tilde{\mathscr{F}}_t, \tilde P)$ of $(\Omega, \mathscr{F}_t, P)$, this can be carried over to the multi-dimensional case. We only sketch the proof, leaving to the reader the task of keeping track of the predictability and integrability properties of the processes involved when altered by algebraic transformations.
(3.9) Theorem. Let $M = (M^1, \ldots, M^d)$ be a cont. vect. loc. mart. such that $d\langle M^i, M^i\rangle_t \ll dt$ for every i. Then there exist, possibly on an enlargement of the probability space, a d-dimensional BM B and a $d \times d$ matrix-valued predictable process α in $L^2_{loc}(B)$ such that

$$M_t = M_0 + \int_0^t \alpha_s\,dB_s.$$

Proof. We may suppose that $M_0 = 0$.
By the inequality of Proposition (1.15) in Chap. IV, we have $d\langle M^i, M^j\rangle_t \ll dt$ for every pair (i, j). The same argument as in the previous proof yields a predictable process γ of symmetric $d \times d$ matrices such that

$$\langle M^i, M^j\rangle_t = \int_0^t \gamma_s^{ij}\,ds$$

and the matrix γ is $dP \otimes dt$-a.e. positive semi-definite. As a result one can find a predictable process β with values in the set of $d \times d$ orthogonal matrices such that $\rho = \beta^t \gamma \beta$ is diagonal. Setting $\alpha^{ij} = \beta^{ij}(\rho^{jj})^{1/2}$ we get a predictable process such that $\gamma = \alpha\alpha^t$. Of course some of the $\rho^{jj}$'s may vanish and the rank of α, which is equal to the rank of γ, may be less than d. We call $\varsigma_s$ the predictable process which is equal to the rank of $\gamma_s$.
Define a matrix $P^\varsigma$ by setting $P^{ij} = 1$ if $i = j \le \varsigma$ and $P^{ij} = 0$ otherwise. There exist a predictable process φ such that $\varphi_s$ is a $d \times d$ orthogonal matrix with $\alpha\varphi = \alpha\varphi P^\varsigma$, and a matrix-valued predictable process λ such that $\lambda\alpha\varphi = P^\varsigma$. Set $N = \lambda \cdot M$; then N is a cont. vect. loc. mart. and $\langle N^i, N^j\rangle_t = \delta^i_j \int_0^t 1_{[i \le \varsigma_s]}\,ds$ as follows from the equalities

$$\lambda\gamma\lambda^t = \lambda\alpha\alpha^t\lambda^t = \lambda\alpha\varphi\varphi^t\alpha^t\lambda^t = P^\varsigma.$$

If we set $X = (\alpha\varphi) \cdot N$, it is easily seen that $\langle X - M, X - M\rangle = 0$, hence X = M.
If we now carry everything over to the enlargement $(\tilde\Omega, \tilde{\mathscr{F}}_t, \tilde P)$, we have at our disposal a BM^d $W = (W^1, W^2, \ldots, W^d)$ independent of N, and if we define

$$\tilde W^i_t = N^i_t + \int_0^t 1_{[i > \varsigma_s]}\,dW^i_s,$$

then, by Lévy's theorem, $\tilde W$ is a BM^d. As $\beta_t$ is an orthogonal matrix, $B = \beta \cdot \tilde W$ is again a BM^d (see Exercise (3.22) Chap. IV) and $M = (\alpha\varphi\beta^t) \cdot B$. □
Remark. If the matrix γ is $dP \otimes dt$-a.e. of full rank then, as in Proposition (3.8), there is no need to resort to an enlargement of the probability space and the Brownian motion B may be constructed on the space initially given. Actually, one can find a predictable process ψ of invertible matrices such that $d\langle M, M\rangle_s = (\psi_s\psi_s^t)\,ds$ and $B = \psi^{-1} \cdot M$.
(3.10) Exercise. Prove that Proposition (3.2) and Theorem (3.4) are still true if B
is replaced by a continuous Gaussian martingale.
[Hint: See Exercise (1.14).]
(3.11) Exercise. 1°) Prove that the first Wiener chaos $K_1$ is equal to the Gaussian space generated by B, i.e. the smallest Gaussian space containing the variables $B_t$, t ≥ 0.
2°) Prove that an $\mathscr{F}_\infty$-measurable r.v. Z is in $K_1$ iff the system $(Z, B_t, t \ge 0)$ is Gaussian. As a result there are plenty of $\mathscr{F}_\infty$-measurable Gaussian r.v.'s which are not in $K_1$.
*
(3.12) Exercise. Let B be a BM¹(0) and H an $(\mathscr{F}_t^B)$-progressive process such that:
i) $\int_0^t H_s^2\,ds < \infty$ a.s. for every t;
ii) $P\left[\lambda(\{s : H_s = 0\}) = 0\right] = 1$ where λ is the Lebesgue measure on ℝ₊.
If $\mathrm{sgn}\,x = 1$ for x > 0 and $\mathrm{sgn}\,x = -1$ for $x \le 0$, prove that

$$\mathscr{F}_t^{(\mathrm{sgn}\,H)\cdot B} \subset \mathscr{F}_t^{H\cdot B} \subset \mathscr{F}_t^B$$

for every t. Observe that $(\mathrm{sgn}\,H) \cdot B$ is itself a Brownian motion.
[Hint: $\mathrm{sgn}\,H_s = H_s/|H_s|$; replace |H| by a suitable $\mathscr{F}^{H\cdot B}$-adapted process.]
#
(3.13) Exercise. 1°) Let t > 0 and let B be the standard linear BM; if f
L2(lR, gt(x)dx) prove that
E
f(Bt) = Pt/(O) + 1t (Pt-sf)' (Bs)dBs.
[Hint: Recall that
1ft + ~ ~ = 0 and look at Exercise (1.11) in Chap. III
and Exercise (1.20) in Chap. VI!.]
2°) Let B' be an independent copy of B; for |ρ| < 1 the process C_t = ρB_t + √(1−ρ²) B'_t is a standard BM¹. Prove that, if f ∈ L²(ℝ, g₁(x)dx), the process has a measurable version Z and that, if ∫ f(x) g₁(x) dx = 0,
E[f(B₁) | C₁] = ρ ∫₀¹ Z_s^{(ρ)} dC_s,
where Z^{(ρ)} is the (𝓕^C_t)-predictable projection of Z (see Exercise (1.13)). Instead of Z^{(ρ)}, one can also use a suitable projection in L²(ds dP).
3°) (Gebelein's Inequality) If (X, Y) is a centered two-dimensional Gaussian r.v. such that E[X²] = E[Y²] = 1 and E[XY] = ρ, then for any f as in 2°),
E[(E[f(X) | Y])²] ≤ ρ² E[f²(X)].
The reader will look at Exercise (3.19) for related results.
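Gebelein's inequality can be checked numerically by exact Gauss-Hermite quadrature, writing E[f(X) | Y = y] = ∫ f(ρy + √(1−ρ²)x) μ(dx) for the Gaussian pair. A sketch with the centered choice f(x) = x³ (both the function and the value of ρ are our own choices):

```python
import numpy as np

# nodes/weights so that  int h dmu ~ sum_k w_k h(x_k)  for mu = N(0,1)
t, w = np.polynomial.hermite.hermgauss(80)
x, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)

def gauss_mean(h):
    return float(np.sum(w * h(x)))

rho = 0.6
f = lambda u: u ** 3                      # satisfies int f dmu = 0

# conditional expectation E[f(X) | Y = y] for correlation rho
cond = lambda y: gauss_mean(lambda u: f(rho * y + np.sqrt(1.0 - rho ** 2) * u))

lhs = gauss_mean(lambda ys: np.array([cond(y) for y in ys]) ** 2)
rhs = rho ** 2 * gauss_mean(lambda u: f(u) ** 2)   # rho^2 * E[f(X)^2]
```

Both sides are polynomials in the nodes, so an 80-point rule evaluates them exactly up to rounding: lhs ≈ 3.52 against rhs = ρ² · 15 = 5.4, consistent with the inequality.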
**
(3.14) Exercise. Let B and B' be two independent standard BM¹ and set, for s, t ≥ 0,
𝓕_{s,t} = σ(B_u, u ≤ s; B'_v, v ≤ t).
1°) Prove that, as f and g range through L²(ℝ₊), the r.v.'s
ℰ(∫₀^· f(s) dB_s)_∞ ℰ(∫₀^· g(s) dB'_s)_∞
are total in L²(𝓕_{∞,∞}).
2°) Define a stochastic integral ∫₀^∞ ∫₀^∞ H(s, t) dB_s dB'_t of suitably measurable doubly indexed processes H, such that any r.v. X in L²(𝓕_{∞,∞}) may be uniquely written
X = E[X] + ∫₀^∞ h(s) dB_s + ∫₀^∞ h'(s) dB'_s + ∫₀^∞ ∫₀^∞ H(s, t) dB_s dB'_t,
where h, resp. h', is predictable w.r.t. the filtration of B, resp. B'.
3°) Let X_{s,t} be a doubly-indexed process, adapted to 𝓕_{s,t} and such that
i) sup_{s,t} E[X²_{s,t}] < +∞;
ii) E[X_{s',t'} | 𝓕_{s,t}] = X_{s,t} a.s. whenever s ≤ s' and t ≤ t'.
Prove that there is a r.v. X in L²(𝓕_{∞,∞}) such that X_{s,t} = E[X | 𝓕_{s,t}] a.s. for every pair (s, t), and extend the representation Theorem (3.4) to the present situation.
#
(3.15) Exercise. 1°) Prove that the family of random variables
Z = ∏_{i=1}^n ∫₀^∞ e^{−λ_i s} f_i(B_s) ds,
where λ_i ∈ ℝ₊ and the functions f_i are bounded and continuous on ℝ, is total in L²(𝓕^B_∞).
[Hint: The measures δ_t are the limits in the narrow topology of probability measures whose densities with respect to the Lebesgue measure are linear combinations of exponentials.]
2°) Prove that Z has a representation as in Proposition (3.2) and derive therefrom another proof of this result.
(3.16) Exercise. Let t be fixed and φ be a bounded measurable function on ℝ. Find the explicit representation (3.2) for the r.v. F = exp(∫₀ᵗ φ(B_s) ds).
[Hint: Consider the martingale E[F | 𝓕_u], u ≤ t.]
(3.17) Exercise (Gaussian chaoses). Let G be a centered Gaussian subspace of L²(E, 𝓔, P) and 𝓖 the sub-σ-algebra generated by G. Call K_n the closure in L² of the vector space generated by the set
{h_n(X) : X ∈ G, ‖X‖₂ = 1}
where h_n is the n-th Hermite polynomial defined in Sect. 3 Chap. IV.
1°) The map X → exp(X) is a continuous map from G into L²(E, 𝓖, P) and its image is total in L²(E, 𝓖, P).
2°) If X and Y are in G and ‖X‖₂ = ‖Y‖₂ = 1, prove that
E[h_m(X) h_n(Y)] = n! E[XY]^n if m = n, and 0 otherwise.
3°) Prove that
L²(E, 𝓖, P) = ⊕_{n=0}^∞ K_n.
4°) If H is the Gaussian space of Brownian motion (see Exercise (3.11)), prove that the decomposition of 3°) is one and the same as the decomposition of Theorem (3.7).
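The orthogonality relation in 2°) can be checked by two-dimensional Gauss-Hermite quadrature, writing Y = ρX + √(1−ρ²)Z with X, Z independent standard Gaussians. The sketch below uses the probabilists' Hermite polynomials He_n, whose normalization gives E[He_m(X)He_n(Y)] = n! ρ^n δ_{mn}; the book's h_n may differ from He_n by a constant factor, which changes the constant but not the orthogonality:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

t, w = np.polynomial.hermite.hermgauss(60)
x, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)   # nodes/weights for N(0,1)

def He(n, u):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(u, c)                     # probabilists' Hermite He_n

rho = 0.5
sig = np.sqrt(1.0 - rho ** 2)

def moment(m, n):
    # E[He_m(X) He_n(Y)] with X = x, Y = rho*x + sig*z, (x, z) iid N(0,1)
    X, Z = np.meshgrid(x, x, indexing="ij")
    W = np.outer(w, w)
    return float(np.sum(W * He(m, X) * He(n, rho * X + sig * Z)))
```

Here moment(3, 3) returns 3! ρ³ = 0.75, while moment(2, 3) vanishes, in accordance with 2°).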
#
(3.18) Exercise (Another proof of Knight's theorem). 1°) Let M be a continuous (𝓕_t)-local martingale such that ⟨M, M⟩_∞ = ∞ and B its DDS Brownian motion. Prove that, for any r.v. H ∈ L²(𝓕^B_∞), there is an (𝓕_t)-predictable process K such that E[∫₀^∞ K_s² d⟨M, M⟩_s] < ∞ and H = E[H] + ∫₀^∞ K_s dM_s. As 𝓕^M_∞ may be strictly larger than 𝓕^B_∞, this does not entail that M has the PRP of the following section.
[Hint: Apply (3.2) to B and use time changes.]
2°) In the situation of Theorem (1.9), prove that for H ∈ L²(σ(Bⁱ, i = 1, ..., n)) there exist (𝓕_t)-predictable processes Kⁱ such that
H = E[H] + Σᵢ ∫₀^∞ Kⁱ_s dMⁱ_s.
This can be proved as in 1°) by assuming Theorem (3.4) or by induction from 1°).
3°) Derive Theorem (1.9), i.e. the independence of the Bⁱ's, from the above representation property.
*#
(3.19) Exercise (Hypercontractivity). Let μ be the reduced Gaussian measure of density (2π)^{−1/2} exp(−x²/2) on ℝ. Let ρ be a real number such that |ρ| ≤ 1 and set
U f(y) = ∫ f(ρy + √(1−ρ²) x) μ(dx).
By making ρ = e^{−t/2}, U is the operator given by the transition probability of the Ornstein-Uhlenbeck process.
1°) Prove that for any p ∈ [1, ∞[ the positive operator U maps L^p(μ) into itself and that its norm is equal to 1. In other words, U is a positive contraction on L^p(μ).
This exercise aims at proving a stronger result, namely, if 1 < p ≤ q ≤ q₀ = (p − 1)ρ^{−2} + 1, then U is a positive contraction from L^p(μ) into L^q(μ). We call q' the conjugate exponent of q, i.e. 1/q + 1/q' = 1.
2°) Let (Y_t, Y'_t) be a planar standard BM and set X_t = ρY_t + √(1−ρ²) Y'_t. Prove that X is a standard linear BM and that, for f ≥ 0, U f(Y₁) = E[f(X₁) | Y₁]. Observe that (X₁, Y₁) is a pair of reduced Gaussian random variables with correlation coefficient ρ and that U could have been defined using only that.
3°) Let f and g be two bounded Borel functions such that f ≥ ε and g ≥ ε for some ε > 0. Set M = f^p(X₁), N = g^{q'}(Y₁), a = 1/p, b = 1/q'. Using the representation result (3.2) on [0, 1] instead of [0, ∞[, prove that
E[M^a N^b] ≤ ‖f‖_p ‖g‖_{q'}.
Since also E[M^a N^b] = ∫ g U f dμ, derive therefrom the result stated above.
4°) By considering the functions f(x) = exp(zx) where z ∈ ℝ, prove that for q > q₀, the operator U is unbounded from L^p(μ) into L^q(μ).
5°) (Integrability of Wiener chaoses) Retaining the notation of Exercise (3.17), prove that U h_n = ρⁿ h_n and derive from the hypercontractivity property of U that if Z ∈ K_n, then for every q > 2,
‖Z‖_q ≤ (q − 1)^{n/2} ‖Z‖₂.
Conclude that there is a constant a* > 0 such that
E[exp(a Z^{2/n})] < ∞ if a < a*,  and  E[exp(a Z^{2/n})] = ∞ if a > a*.
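The identity U h_n = ρⁿ h_n of 5°) and the contraction of 1°) can both be checked numerically by Gauss-Hermite quadrature. The sketch below uses the probabilists' Hermite polynomials He_n, which span the same chaoses as the book's h_n; all concrete choices (ρ, the test function, grid) are ours:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

t, w = np.polynomial.hermite.hermgauss(60)
x, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)   # quadrature for mu = N(0,1)

rho = 0.4
sig = np.sqrt(1.0 - rho ** 2)

def U(f, y):                                  # Mehler / OU transition operator
    return float(np.sum(w * f(rho * y + sig * x)))

def He(n, u):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(u, c)

# eigenfunction check: U He_3 = rho^3 He_3, pointwise on a few points
err = max(abs(U(lambda u: He(3, u), y) - rho ** 3 * He(3, y))
          for y in np.linspace(-2.0, 2.0, 9))

# contraction check on an arbitrary polynomial f: ||U f||_2 <= ||f||_2
f = lambda u: u ** 4 + u
norm_Uf = np.sqrt(np.sum(w * np.array([U(f, y) for y in x]) ** 2))
norm_f = np.sqrt(np.sum(w * f(x) ** 2))
```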
(3.20) Exercise. Let (Ω, 𝓖_t, P) be a filtered probability space, B a (𝓖_t)-BM vanishing at zero and (𝓕_t) = (𝓕^B_t).
1°) If M is a (𝓖_t)-martingale bounded in L², prove that X_t = E[M_t | 𝓕_t] is an (𝓕_t)-martingale bounded in L² which possesses a continuous version. It is this version that we consider henceforth.
2°) Prove that ⟨M, B⟩_t = ∫₀ᵗ a_s ds where a is (𝓖_t)-adapted. Prove that the process t → E[a_t | 𝓕_t] has an (𝓕_t)-progressively measurable version H and that
X_t = E[M₀] + ∫₀ᵗ H_s dB_s.
[Hint: For the last result, compute E[M_t Y_t] in two different ways where Y ranges through the square-integrable (𝓕_t)-martingales.]
3°) If V is (𝓖_t)-progressively measurable and bounded, prove that
E[∫₀ᵗ V_s dB_s | 𝓕_t] = ∫₀ᵗ E[V_s | 𝓕_s] dB_s.
If B' is another (𝓖_t)-BM independent of B, then
E[∫₀ᵗ V_s dB'_s | 𝓕_t] = 0.
(3.21) Exercise. 1°) If B is a BM¹(0) and we set 𝓕_t = 𝓕^B_t, prove that if 𝓖 is a non-trivial sub-σ-field of 𝓕_∞, there does not exist any (𝓕_t)-BM which is also an (𝓕_t ∨ 𝓖)-BM.
2°) In contrast, prove that there exist a non-trivial sub-σ-field 𝓖 of 𝓕_∞ and an (𝓕_t)-martingale which is also an (𝓕_t ∨ 𝓖)-martingale.
[Hint: Let 𝓖 = 𝓕^β_∞ where β is the DDS Brownian motion of 1_{(B≤0)} · B and take M = 1_{(B≥0)} · B.]
*
(3.22) Exercise (The Goswami-Rao Brownian filtration). Let B be the standard BM and define 𝓑_t as the σ-field generated by the variables φ(B_s, s ≤ t) where φ is a measurable function on C([0, t], ℝ) such that φ(B_s, s ≤ t) = φ(−B_s, s ≤ t).
1°) Prove that the inclusions 𝓕^{|B|}_t ⊂ 𝓑_t ⊂ 𝓕^B_t are strict.
[Hint: For s < t, the r.v. sgn(B_s B_t) is 𝓑_t-measurable but not 𝓕^{|B|}_t-measurable.]
2°) Let Y ∈ L²(𝓕^B_t) with t ≤ ∞. Prove that, in the notation of Theorem (3.7),
E[Y | 𝓑_t] = E[Y] + Σ_{p=1}^∞ I_{2p}(f_{2p})(t)
where the notation I_n(f_n)(t) indicates that the first integral in I_n(f_n) is taken only up to time t.
Deduce therefrom that every (𝓑_t)-martingale is an (𝓕^B_t)-martingale.
3°) Prove that consequently every (𝓑_t)-martingale M may be written
M_t = c + ∫₀ᵗ m(s) dβ_s
where β_t = ∫₀ᵗ sgn(B_s) dB_s is a (𝓑_t)-BM and m is (𝓑_t)-predictable.
The filtration 𝓕^{|B|} is in fact generated by β (Corollary (2.2) Chap. VI); that (𝓑_t) is the natural filtration of a BM is the content of the following question.
4°) Let (t_n)_{n∈ℤ} be an increasing sequence of positive reals such that t_n → 0 as n → −∞ and t_n → +∞ as n → +∞. Prove that (𝓑_t) is the natural filtration of the BM
Y_t = ∫₀ᵗ μ_s dB_s,  where μ_s = sgn(B_{t_n} − B_{t_{n−1}}) for s ∈ ]t_n, t_{n+1}].
§4. Integral Representations
(4.1) Definition. The cont. loc. mart. X has the predictable representation property (abbr. PRP) if, for any (𝓕^X_t)-local martingale M, there is an (𝓕^X_t)-predictable process H such that
M_t = M₀ + ∫₀ᵗ H_s dX_s.
In the last section, we proved that Brownian motion has the PRP and, in this section, we investigate the class of cont. loc. martingales which have the PRP.
We need the following lemma which is closely related to Exercise (5.11) in Chap. IV.
(4.2) Lemma. If X is any cont. loc. mart., every (𝓕^X_t)-continuous local martingale M vanishing at 0 may be uniquely written
M = H · X + L
where H is predictable and ⟨X, L⟩ = 0.
Proof. The uniqueness follows from the usual argument.
To prove the existence of the decomposition, let us observe that there is a sequence of stopping times increasing to infinity and reducing both M and X. Let T be one of them. In the Hilbert space H², the subspace G = {H · X^T; H ∈ L²(X^T)} is easily seen to be closed; thus, we can write uniquely
M^T = H̃ · X^T + L̃
where L̃ ∈ G^⊥. For any bounded stopping time S, we have
E[X^T_S L̃_S] = E[X^T_S E[L̃_∞ | 𝓕_S]] = E[X^T_S L̃_∞] = 0
since X^{T∧S} ∈ G. It follows from Proposition (3.5) Chap. II that X^T L̃ is a martingale, hence ⟨X^T, L̃⟩ = ⟨X, L̃⟩^T = 0.
Because of the uniqueness, the processes H̃ and L̃ extend to processes H and L which fulfill the requirements of the statement. □
From here on, we will work with the canonical space W = C(ℝ₊, ℝ). The coordinate process will be designated by X and we put 𝓕^o_t = σ(X_s, s ≤ t). Let 𝓜_loc be the set of probability measures on W such that X is a local martingale (evidently continuous). If P ∈ 𝓜_loc, (𝓕^P_t) is the smallest right-continuous filtration complete for P and such that 𝓕^o_t ⊂ 𝓕^P_t. The PRP now appears as a property of P: any (𝓕^P_t)-local martingale M may be written M = H · X where H is (𝓕^P_t)-predictable and the stochastic integration is taken with respect to P.
We will further designate by 𝓜 the subset of 𝓜_loc of those probability measures for which X is a martingale. The sets 𝓜 and 𝓜_loc are convex (see Exercise (1.37) in Chap. IV). We recall the
(4.3) Definition. A probability measure P of 𝓜 (resp. 𝓜_loc) is called extremal if, whenever P = αP₁ + (1 − α)P₂ with 0 < α < 1 and P₁, P₂ ∈ 𝓜 (resp. 𝓜_loc), then P = P₁ = P₂.
We will now study the extremal probability measures in order to relate extremality to the PRP. We will need the following measure-theoretical result.
(4.4) Theorem (Douglas). Let (Ω, 𝓕) be a measurable space, 𝓗 a set of real-valued 𝓕-measurable functions and 𝓗* the vector space generated by 1 and 𝓗. If 𝓜_𝓗 is the set of probability measures μ on (Ω, 𝓕) such that 𝓗 ⊂ L¹(μ) and ∫ f dμ = 0 for every f ∈ 𝓗, then 𝓜_𝓗 is convex and μ is an extremal point of 𝓜_𝓗 if and only if 𝓗* is dense in L¹(μ).
Proof. That 𝓜_𝓗 is convex is clear.
Suppose that 𝓗* is dense in L¹(μ) and that μ = αν₁ + (1 − α)ν₂ with 0 < α < 1 and ν₁, ν₂ in 𝓜_𝓗. Since ν_i = h_i μ for a bounded function h_i, the space 𝓗* is dense in L¹(ν_i) for i = 1, 2. Since clearly ν₁ and ν₂ agree on 𝓗*, it follows that ν₁ = ν₂.
Conversely, suppose that 𝓗* is not dense in L¹(μ). Then, by the Hahn-Banach theorem, there is a non-zero bounded function h such that ∫ hf dμ = 0 for every f ∈ 𝓗* and we may assume that ‖h‖_∞ ≤ 1/2. The measures ν^± = (1 ± h)μ are obviously in 𝓜_𝓗 and μ, being equal to (ν⁺ + ν⁻)/2, is not extremal. □
If we now consider the space (W, 𝓕^o_∞) and choose as set 𝓗 the set of r.v.'s 1_A(X_t − X_s) where 0 ≤ s < t and A ranges through 𝓕^o_s, the set 𝓜_𝓗 of Theorem (4.4) coincides with the set 𝓜 of probability measures for which X is a martingale. We use Theorem (4.4) to prove the
(4.5) Proposition. If P is extremal in 𝓜, then any (𝓕^P_t)-local martingale has a continuous version.
Proof. Plainly, it is enough to prove that for any r.v. Y ∈ L¹(P), the càdlàg martingale E_P[Y | 𝓕^P_t] has in fact a continuous version. Now, this is easily seen to be true whenever Y is in 𝓗*. If Y ∈ L¹(P), there is, thanks to the preceding result, a sequence (Y_n) in 𝓗* converging to Y in L¹(P). By Theorem (1.7) in Chap. II, for every ε > 0, every t and every n,
P[sup_{s≤t} |E_P[Y − Y_n | 𝓕^P_s]| > ε] ≤ ε^{−1} E[|Y − Y_n|].
By the same reasoning as in Proposition (1.22) Chap. IV, the result follows. □
(4.6) Theorem. The probability measure P is extremal in 𝓜 if and only if P has the PRP and 𝓕^P_0 is P-a.s. trivial.
Proof. If P is extremal, then 𝓕^P_0 is clearly trivial. Furthermore, if the PRP did not hold, there would, by Lemma (4.2) and the preceding result, exist a non-zero continuous local martingale L such that ⟨X, L⟩ = 0. By stopping, since ⟨X, L^T⟩ = ⟨X, L⟩^T, we may assume that L is bounded by a constant k; the probability measures P₁ = (1 + (L_∞/2k))P and P₂ = (1 − (L_∞/2k))P are both in 𝓜, as may be seen without difficulty, and P = (P₁ + P₂)/2, which contradicts the extremality of P.
Conversely, assume that P has the PRP, that 𝓕^P_0 is P-a.s. trivial, and that P = αP₁ + (1 − α)P₂ with 0 < α < 1 and P_i ∈ 𝓜. The P-martingale L_t = (dP₁/dP)|_{𝓕_t} has a continuous version L since P has the PRP, and XL is also a continuous P-martingale, hence ⟨X, L⟩ = 0. But since P has the PRP, L_t = L₀ + ∫₀ᵗ H_s dX_s, hence ⟨X, L⟩_t = ∫₀ᵗ H_s d⟨X, X⟩_s; it follows that, P-a.s., H_s = 0, d⟨X, X⟩_s-a.e., hence ∫₀ᵗ H_s dX_s = 0 a.s. and L is constant and equal to L₀. Since 𝓕^P_0 is P-a.s. trivial, L is equal to 1, hence P = P₁ and P is extremal. □
Remark. By the results in Sect. 3, the Wiener measure W is obviously extremal, but this can also be proved directly. Indeed, let Q ∈ 𝓜 be such that Q ≪ W. By the definition of ⟨X, X⟩ and the fact that convergence in probability for W implies convergence in probability for Q, we have ⟨X, X⟩_t = t under Q. By P. Lévy's characterization theorem, X is a BM under Q, hence Q = W, which proves that W is extremal. Together with the results of this section, this argument gives another proof of Theorem (3.3).
We will now extend Theorem (4.6) from 𝓜 to 𝓜_loc. The proof is merely technical and will only be outlined, the details being left to the reader as exercises on measure theory.
(4.7) Theorem. The probability measure P is extremal in 𝓜_loc if and only if it has the PRP and 𝓕^P_0 is P-a.s. trivial.
Proof. The second part of the proof of Theorem (4.6) works just as well for 𝓜_loc as for 𝓜; thus, we need only prove that if P is extremal in 𝓜_loc, then 𝓕^P_0 is a.s. trivial, which is clear, and that P has the PRP.
Set T_n = inf{t : |X_t| > n}; the idea is to prove that if P is extremal in 𝓜_loc, then the law of X^{T_n} under P is extremal in 𝓜 for each n, and hence the PRP holds up to time T_n, hence for every t.
Let n be fixed; the stopping time T_n is an (𝓕^o_t)-stopping time and the σ-algebra 𝓕^o_{T_n} is countably generated. As a result, there is a regular conditional distribution Q(w, ·) of P with respect to 𝓕^o_{T_n}. One can choose a version of Q such that, for every w, the process X_t − X_{T_n ∧ t} is a Q(w, ·)-local martingale.
Suppose now that the law of X^{T_n} under P, say P^{T_n}, is not extremal. There would exist two different probabilities π₁ and π₂ such that P^{T_n} = απ₁ + (1 − α)π₂ for some α ∈ ]0, 1[, and these measures may be viewed as probability measures on 𝓕^o_{T_n}. Because 𝓕^o_{T_n} ∨ θ_{T_n}^{−1}(𝓕^o_∞) = 𝓕^o_∞, it can be proved that there are two probability measures P_i on 𝓕^o_∞ which are uniquely defined by setting, for two r.v.'s H and K respectively 𝓕^o_{T_n}- and θ_{T_n}^{−1}(𝓕^o_∞)-measurable,
∫ H K dP_i = ∫ π_i(dw) H(w) ∫ Q(w, dw') K(w').
Then, under P_i, the canonical process X is a continuous local martingale and P = αP₁ + (1 − α)P₂, which is a contradiction. This completes the proof. □
The equivalent properties of Theorem (4.7) are important in several contexts but, as a rule (in contrast with the discrete time case, see the Notes and Comments), it is not an easy task to decide whether a particular local martingale is extremal or not, and there is no known characterization, other than the one presented in Theorem (4.7), of the set of extremal local martingales. The rest of this section will be devoted to a few remarks designed to cope with this problem. As may be surmised from Exercise (3.18), the PRP for X is related to properties of the DDS Brownian motion of X.
(4.8) Definition. A cont. loc. martingale X adapted to a filtration (𝓕_t) is said to have the (𝓕_t)-PRP if any (𝓕_t)-local martingale vanishing at zero is equal to H · X for a suitable (𝓕_t)-predictable process H.
Although 𝓕^X_t ⊂ 𝓕_t, the (𝓕_t)-PRP does not entail the PRP, as the reader will easily realize (see Exercise (4.22)). Moreover, if we define (𝓕_t)-extremality as extremality in the set of probability measures Q such that (X_t) is an (𝓕_t, Q)-local martingale, the reader will have no difficulty in extending Theorem (4.7) to this situation.
In what follows, X is a P-continuous local martingale; we suppose that ⟨X, X⟩_∞ = ∞ and call B the DDS Brownian motion of X.
(4.9) Theorem. The following two properties are equivalent:
i) X has the PRP;
ii) B has the (𝓕^X_{T_t})-PRP.
Proof. This is left to the reader as an exercise on time-changes.
We turn to another useful way of relating the PRP of X to its DDS Brownian motion.
(4.10) Definition. A continuous local martingale X such that ⟨X, X⟩_∞ = ∞ is said to be pure if, calling B its DDS Brownian motion, we have
𝓕^X_∞ = 𝓕^B_∞.
By Exercise (1.19), X is pure if and only if one of the following equivalent conditions is satisfied:
i) the stopping time T_t is 𝓕^B_∞-measurable for every t;
ii) ⟨X, X⟩_t is 𝓕^B_∞-measurable for every t.
One will actually find in Exercise (4.16) still more precise conditions which are equivalent to purity. Proposition (1.11) gives a sufficient condition for X to be pure and it was shown in Theorem (2.11) that if ρ_t is the modulus of BM² started at a ≠ 0, then log ρ_t is a pure local martingale.
Finally, the reader will show as an exercise, that the pure martingales are
those for which the map which sends paths of the martingale into paths of the
corresponding DDS Brownian motion is one-to-one.
The introduction of this notion is warranted by the following result, which can also be derived from Exercise (3.18). A local martingale is said to be extremal if its law is extremal in 𝓜_loc.
(4.11) Proposition. A pure local martingale is extremal.
Proof. Let P be a probability measure under which the canonical process X is a pure loc. mart. If Q ∈ 𝓜_loc and Q ≪ P, then ⟨X, X⟩_t computed for P is a version of ⟨X, X⟩_t computed for Q, and consequently the DDS Brownian motion of X for P, say β, is a version of the DDS Brownian motion of X for Q. As a result, P and Q agree on σ(β_s, s ≥ 0), hence on the completion of this σ-algebra with respect to P. But, since X is pure under P, this contains σ(X_s, s ≥ 0) and we get P = Q. □
The converse is not true (see Exercise (4.16)) but, as purity is sometimes easier
to prove than extremality, this result leads to examples of extremal martingales
(see Exercise (3.11) in Chap. IX).
(4.12) Exercise. With the notation of this section, a probability measure P ∈ 𝓜_loc is said to be standard if there is no other probability measure Q in 𝓜_loc equivalent to P. Prove that P is standard if and only if P has the PRP and 𝓕^P_0 is P-a.s. trivial.
#
(4.13) Exercise. Let (B¹, B²) be a two-dimensional BM. Prove that X_t = ∫₀ᵗ B¹_s dB²_s does not have the PRP.
[Hint: (B¹_t)² − t is an (𝓕^X_t)-martingale which cannot be written as a stochastic integral with respect to X.]
*
(4.14) Exercise. Let X be a cont. loc. mart. with respect to a filtration (𝓕_t).
1°) Let Λ be the space of real-valued, bounded functions with compact support in ℝ₊. Prove that if, for every t, the set of r.v.'s
ℰ^f_t = ℰ(∫₀^· f(s) dX_s)_t,  f ∈ Λ,
is total in L²(𝓕_t, P), then 𝓕_0 is P-a.s. trivial and X has the (𝓕_t)-PRP.
2°) Prove the same result with the set of r.v.'s
H_n(∫₀ᵗ f²(s) d⟨X, X⟩_s, ∫₀ᵗ f(s) dX_s),  f ∈ Λ, n ∈ ℕ,
where H_n is the Hermite polynomial defined in Sect. 3 Chap. IV.
#
(4.15) Exercise. 1°) If B is a standard linear BM, (𝓕_t) is the Brownian filtration and H is an (𝓕_t)-predictable process a.s. strictly positive, with the possible exception of a set of zero Lebesgue measure (depending on ω), then the martingale
M_t = ∫₀ᵗ H_s dB_s
has the PRP.
[Hint: Use ideas of Proposition (3.8) to show that 𝓕^M_t = 𝓕_t.]
2°) In particular, the martingales Mⁿ_t = ∫₀ᵗ Bⁿ_s dB_s, n ∈ ℕ, are extremal. For n odd, these martingales are actually pure, as will be proved in Exercise (3.11) Chap. IX.
3°) Let T be a non-constant square-integrable (𝓕_t)-stopping time; prove that Y_t = B_t − B_{t∧T} is not extremal.
[Hint: T is an (𝓕^Y_t)-stopping time which cannot be expressed as a constant plus a stochastic integral with respect to Y.]
*#
(4.16) Exercise (An example of an extremal local martingale which is not pure). 1°) Let (𝓕_t) and (𝓖_t) be two filtrations such that 𝓕_t ⊂ 𝓖_t for every t. Prove that the following two conditions are equivalent:
i) every (𝓕_t)-martingale is a (𝓖_t)-martingale;
ii) for every t, the σ-algebras 𝓕_∞ and 𝓖_t are conditionally independent with respect to 𝓕_t.
If these conditions are in force, prove that 𝓕_t = 𝓖_t ∩ 𝓕_∞.
2°) Let M be a continuous (𝓕_t)-loc. mart. having the (𝓕_t)-PRP. If (𝓖_t) is a filtration such that 𝓕^M_t ⊂ 𝓖_t ⊂ 𝓕_t for every t and M is a (𝓖_t)-local martingale, then (𝓖_t) = (𝓕_t).
3°) Let M be a cont. loc. mart. with ⟨M, M⟩_∞ = ∞ and B its DDS Brownian motion. Prove that M is pure if and only if ⟨M, M⟩_t is, for each t, an (𝓕^B_s)-stopping time.
[Hint: Prove first that, if M is pure, then 𝓕^B_t = 𝓕^M_{T_t}.]
Prove further that if M is pure, ⟨M, M⟩_T is an (𝓕^B_s)-stopping time for every (𝓕^M_t)-stopping time T (this result has no bearing on the sequel).
4°) Prove that M is pure if and only if 𝓕^M_t = 𝓕^B_{⟨M,M⟩_t} for every t.
5°) From now on, β is a standard BM and we set β̃_t = ∫₀ᵗ sgn(β_s) dβ_s. In Sect. 2 of Chap. VI it is proved that β̃ is a BM and that 𝓕^{β̃}_t = 𝓕^{|β|}_t for every t. Prove that β̃ has the (𝓕^β_t)-PRP.
6°) Set T_t = ∫₀ᵗ (2 + (β_s/(1 + |β_s|))) ds and observe that 𝓕^T_t = 𝓕^β_t for every t. Let A be the inverse of T and define the (𝓕^β_{A_t})-loc. mart. M_t = β̃_{A_t}; prove that M has the PRP, but that M is not pure.
[Hint: Prove that 𝓕^M_{T_t} = 𝓕^β_t and use Theorem (4.9).]
(4.17) Exercise. Prove that the Gaussian martingales, in particular αB for any real α, are pure martingales.
(4.18) Exercise. If M is a pure cont. loc. mart. and C an (𝓕^M_t)-time-change such that M is C-continuous, then M̂ = M_C is pure.
(4.19) Exercise. Let M be a cont. loc. mart. with the (𝓕_t)-PRP.
1°) Prove that S = inf{t : M_{t+u} = M_t for every u > 0} is an (𝓕_t)-stopping time.
2°) If H(ω) is the largest open subset of ℝ₊ such that M_t(ω) is constant on each of its connected components, prove that there are two sequences (S_n), (T_n) of (𝓕_t)-stopping times such that H = ∪_n ]S_n, T_n[ and the sets ]S_n, T_n[ are exactly the connected components of H.
[Hint: For c > 0, let ]S^c_n, T^c_n[ be the n-th interval of H with length > c. Observe that S^c_n + c is a stopping time and prove that S^c_n is an (𝓕_t)-stopping time.]
**
(4.20) Exercise. 1°) If M is an extremal cont. loc. mart. and if (𝓕^M_t) is the filtration of a Brownian motion, then d⟨M, M⟩_s is a.s. equivalent to the Lebesgue measure.
2°) Let F be the Cantor middle-fourths set. Set
M_t = ∫₀ᵗ 1_{F^c}(B_s) dB_s
where B is a BM¹(0). Prove that (𝓕^M_t) = (𝓕^B_t) and derive that M does not have the PRP.
(4.21) Exercise. Let β be the standard BM¹ and set M_t = ∫₀ᵗ 1_{(β_s>0)} dβ_s. Let
a = 1_{(β₁<0)} + ∞ · 1_{(β₁≥0)} and T = a + inf{ε : ∫_a^{a+ε} 1_{(β_s>0)} ds > 0}.
By considering the r.v. exp(−T), prove that M does not have the PRP.
*
(4.22) Exercise. 1°) Let (𝓕_t) and (𝓖_t) be two filtrations such that 𝓖_t ⊂ 𝓕_t for every t. If M is a continuous (𝓕_t)-local martingale adapted to (𝓖_t) which has the (𝓕_t)-PRP, prove that the following three conditions are equivalent:
i) M has the (𝓖_t)-PRP;
ii) every (𝓖_t)-martingale is an (𝓕_t)-martingale;
iii) every (𝓖_t)-martingale is a continuous (𝓕_t)-semimartingale.
2°) Let B be the standard linear BM and (𝓕_t) be the Brownian filtration. Let t₀ be a strictly positive real and set
N_t = ∫₀ᵗ [1_{(s<t₀)} + (2 + sgn(B_{t₀})) 1_{(s≥t₀)}] sgn(B_s) dB_s.
Prove that N has the (𝓕_t)-PRP but does not have the PRP.
[Hint: The process H_t = (sgn(B_{t₀})) 1_{(t≥t₀)} is a discontinuous (𝓕^N_t)-martingale.]
*
(4.23) Exercise. Let β be an (𝓕_t)-Brownian motion and suppose that there exists a continuous, strictly increasing process A_t, with inverse τ_t, such that A_t ≥ t. Set X_t = β_{τ_t} and assume that 𝓕^X_{A_t} = 𝓕_t.
1°) Prove that X is pure if and only if 𝓕^β_∞ = 𝓕_∞.
2°) Prove that X is extremal if and only if β has the (𝓕_t)-PRP.
**
(4.24) Exercise. 1°) Retain the notation of Exercise (3.29) Chap. IV and prove that the following two conditions are equivalent:
i) X has the (𝓕_t)-PRP;
ii) for any Q such that X is a (Q, P)-local martingale, there is a constant c such that Q = cP.
2°) Use Exercise (3.29) Chap. IV to give another proof of the extremality of the Wiener measure, hence also of Theorem (3.3).
#
(4.25) Exercise (PRP and independence). 1°) Prove that if two (𝓕_t)-continuous loc. mart. M and N are independent, then they are orthogonal, that is, the product MN is an (𝓕_t)-loc. mart. (see Exercise (2.22) in Chap. IV). Give examples, in adequate filtrations, of pairs of orthogonal loc. martingales which, nonetheless, are not independent.
2°) Prove that if M and N both have the PRP, then they are orthogonal iff they are independent.
3°) Let B be the standard BM and set
M_t = ∫₀ᵗ 1_{(B_s>0)} dB_s,  N_t = ∫₀ᵗ 1_{(B_s<0)} dB_s.
Prove that these martingales are not independent and conclude that neither has the PRP. Actually, there exist discontinuous (𝓕^M_t)-martingales.
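The pair in 3°) can be examined numerically: the covariation of M and N vanishes identically, since the integrands have disjoint supports, while ⟨M, M⟩₁ + ⟨N, N⟩₁ equals the total time, so the two brackets are perfectly negatively correlated, an immediate witness of non-independence. A sketch (all discretization choices are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
paths, n, dt = 1000, 1000, 1e-3                 # many paths on [0, 1]
dB = rng.normal(0.0, np.sqrt(dt), (paths, n))
B = np.cumsum(dB, axis=1)
Bprev = np.hstack([np.zeros((paths, 1)), B[:, :-1]])   # left endpoints

dM = (Bprev > 0) * dB                            # increments of M
dN = (Bprev < 0) * dB                            # increments of N

# orthogonality: the discrete covariation is identically zero on every path
covar = np.max(np.abs(np.sum(dM * dN, axis=1)))

# non-independence: the occupation times <M,M>_1 and <N,N>_1 add up to
# (almost exactly) 1 on every path, hence are perfectly anti-correlated
qM = np.sum(Bprev > 0, axis=1) * dt
qN = np.sum(Bprev < 0, axis=1) * dt
corr = np.corrcoef(qM, qN)[0, 1]
```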
Notes and Comments
Sect. 1. The technique of time-changes is due to Lebesgue and its application
in a stochastic context has a long history which goes back at least to Hunt [1],
Volkonski [1] and Itô-McKean [1]. Proposition (1.5) was proved by Kazamaki [1]
where the notion of C-continuity is introduced (with a terminology which differs
from ours).
Theorem (1.6) appears in Dubins-Schwarz [1] for martingales with no intervals of constancy and in Dambis [1]. The formulation and proof given here borrow
from Neveu [2]. Although a nice and powerful result, it says nothing about the
distribution of a given continuous martingale M: this hinges on the stochastic
dependence between the DDS Brownian motion associated with M and the increasing process of M. Let us mention further that Monroe [1] proves that every
semimartingale can be embedded by time change in a Brownian motion, allowing
possibly for some extra randomisation. Proposition (1.8) is from Lenglart [1] (see
also Doss-Lenglart [1]).
The proof of Knight's theorem (Knight [3]) given in the text is from Cocozza
and Yor [1] and the proof in Exercise (3.18) is from Meyer [3]. Knight's theorem
has many applications as for instance in Sect. 2 and in Chap. VI where it is used
to give a proof of the Arcsine law. Perhaps even more important is its asymptotic
version which is discussed and used in Chap. XIII Sect. 2. We refer the reader to
Kurtz [1] for an interesting partial converse.
Exercise (1.16) is taken from Lépingle [1], Exercise (1.21) from Barlow [1]
and Exercise (1.12) from Bismut [2].
Sect. 2. Conformal martingales were masterfully introduced by Getoor and Sharpe
[1] in order to prove that the dual of H¹ is BMO in the martingale setting. The
proof which they obtained for continuous martingales uses in particular the fact
that if Z is conformal, and α > 0, then |Z|^α is a local submartingale. The extension
to non-continuous martingales of the duality result was given shortly afterwards
by P.A. Meyer. The first results of this section are taken from the paper of Getoor
and Sharpe.
The conformal invariance of Brownian motion is a fundamental result of P.
Levy which has many applications to the study of the 2-dimensional Brownian
path. The applications we give here are taken from B. Davis [1], McKean [2] and
Lyons-McKean [1]. For the interplay between planar BM and complex function
theory we refer to the papers of B. Davis ([1] and [3]) with their remarkable
proof of Picard's theorems and to the papers by Carne [2] and Atsuji ([1], [2]) on
Nevanlinna theory.
For the origin of the skew-product representation we refer to Galmarino [1]
and McKean [2]. Extensions may be found in Graversen [2] and Pauwels-Rogers
[1]. For some examples involving Bessel processes, see Warren-Yor [1].
The example of Exercise (2.13) was first exhibited by Johnson and Helms [1]
and the proof of d'Alembert's theorem in Exercise (2.17) was given by Kono [1].
Exercise (2.19) is from Yor [2]; more general results are found in Ikeda-Manabe
[1]. The results and methods of Exercise (2.14) are found in Itô-McKean [1].
Exercise (2.16) is from Williams [4]; they lead to his "pinching" method (see
Messulam-Yor [1]).
Exercise (2.24) is from Calais-Genin [1] following previous work by Walsh
[1]. Exercise (2.25) originates in McKean [1] and Exercise (2.21) is attributed
to H. Sato in Itô-McKean [1]. Exercise (2.21) is from Burkholder [3], but the
necessary and sufficient condition of 2°) is already in Spitzer [1].
The subject of polar functions for the planar BM, partially dealt with in Exercise (2.26) was initiated in Graversen [1] from which Exercise (1.20) Chap. I was
taken. Graversen has some partial results which have been improved in Le Gall
[6]. Despite these results, the following questions remain open
Question 1. What are the polar functions of BM2?
The result of Exercise (2.26) may be seen as a partial answer to the following
Question 2. Which are the two-dimensional continuous semimartingales for which
the one-point sets are polar?
Some partial answers may be found in Bismut [2] and Idrissi-Khamlichi [1].
The answer is not known even for the semimartingales the martingale part of
which is a BM2. The result of Exercise (2.26) is a special case and another is
treated in Sznitman-Varadhan [1].
Exercise (2.27) is taken from Yor [3] and Idrissi-Khamlichi [1]. The paper of
Yor has several open questions. Here is one of them. With the notation of Exercise
(2.27), if a is polar for Xy the index of Xy with respect to a is well-defined and
it is proved in Yor [3] that its law is supported by the whole set of integers.
Question 3. What is the law of the index of Xy with respect to a?
This is to be compared to Exercise (2.15) Chap. VIII.
Sect. 3. The first results of this section appeared in Doob [1]. They were one
of the first great successes of stochastic integration. They may also be viewed
as a consequence of decompositions in chaoses discovered by Wiener [2] in the case of Brownian motion and generalized by Itô to processes with independent increments. The reader may find a more general and abstract version in Neveu [1] (see also Exercise (3.17)).
Theorem (3.5) and the decomposition in chaoses play an important role in
Malliavin Calculus as well as in Filtering theory. Those are two major omissions of
this book. For the first we refer to Ikeda-Watanabe [2], Nualart [2], and Stroock [3],
for the second one to Kallianpur [1]; there is also a short and excellent discussion
in Rogers-Williams [1]. A few exercises on Filtering theory are scattered in our
book such as Exercise (5.15) Chap. IV and Exercise (3.20) in this section which
is taken from Liptser-Shiryaev [1].
Our exposition of Theorem (3.9) follows Jacod [2] and Exercise (3.12) is taken
from Lane [1]. Exercise (3.13) is inspired from Chen [1]. Exercise (3.14) is due to
Rosen and Yor [1] and Exercise (3.19) to Neveu [3], the last question being taken
from Ledoux and Talagrand [1]. The source of Exercise (3.22) is to be found in
Goswami and Rao [1]; question 4°) is taken from Attal et al. [1].
Sect. 4. The ideas developed in this section first appeared in Dellacherie [1] in
the case of BM and Poisson Process and were expanded in many articles such as
Jacod [1], Jacod-Yor [1] and Yor [6] to mention but a few. The method used here
to prove Theorem (4.6) is that of Stroock-Yor [1]. Ruiz de Chavez [1] introduces
signed measures in order to give another proof (see Exercise (4.24)). The notion
of pure martingales was introduced by Dubins and Schwarz [2].
Most of the exercises of this section come from Stroock-Yor ([1] and [2]) and
Yor [9] with the exception of Exercise (4.20) taken from Knight [7] and Exercise
(4.19) which comes from Stricker [2]. Exercise (4.12) is from Van and Yoeurp [1].
The results in Exercise (4.25) are completed by Azéma-Rainer [1] who describe
all $(\mathscr{F}_t^M)$-martingales.
The unsatisfactory aspect of the results of this section is that they are only
of "theoretical" interest as there is no explicit description of extremal martingales
(for what can be said in the discrete time case, however, see Dubins-Schwarz [2]).
It is even usually difficult to decide whether a particular martingale is extremal or
pure or neither. The exercises contain some examples and others may be found in
Exercise (4.19) Chap. VI and the exercises of Chap. IX as well as in the papers
already quoted. For instance, Knight [7] characterizes the harmonic functions $f$
in $\mathbb{R}^d$, $d > 1$, such that $f(B_t)$ is pure. However, the subject still offers plenty of
open questions, some of which are already found in the two previous editions of
this book.
Here, we discuss the state of the matter as it is understood presently (i.e.
in 1998), thanks mainly to the progress initiated by Tsirel'son ([1], [2]) and co-workers.
First, we introduce the following important

Definition. A filtration $(\mathscr{F}_t)$ on the probability space $(\Omega, \mathscr{F}, P)$ such that $\mathscr{F}_0$ is
$P$-a.s. trivial is said to be weakly, resp. strongly, Brownian if there exists a $(\mathscr{F}_t)$-BM$^1$ $\beta$ such that $\beta$ has the $(\mathscr{F}_t)$-PRP, resp. $\mathscr{F}_t = \mathscr{F}_t^\beta$.

We will abbreviate these definitions to W.B. and S.B.
Many weakly Brownian filtrations may be obtained through "mild" perturbations of the Brownian filtration. Here are two such examples:
(P1) (Local absolute continuity). If $(\mathscr{F}_t)$ is W.B., and, in the notation of Chapter
VIII, $Q \lhd P$, then $(\mathscr{F}_t)$ is also W.B. under $Q$.
(P2) (Time change). If $(\mathscr{F}_t)$ is W.B., and if $(\alpha_t)$ is the time-change associated
with $A_t = \int_0^t H_s\,ds$, where $H_s > 0$ $dP\,ds$-a.s. and $A_\infty = \infty$ a.s., then $(\mathscr{F}_{\alpha_t})$ is
also W.B. under $P$.
A usually difficult, albeit important, question is:
(Q) Given a W.B. filtration $(\mathscr{F}_t)$, is it S.B. and, if so, can one describe explicitly
at least one generating BM$^1$?
Tsirel'son [1] gives a beautiful, and explicit, example of a W.B. filtration for
which a certain Brownian motion is not generating. See Prop. (3.6), Chap. IX, for
a proof. Feldman and Smorodinsky [1] give easier examples of the same situation.
Emery and Schachermayer [2] prove that the filtration in Tsirel'son's example is
S.B. Likewise, Attal et al. [1] prove that the Goswami-Rao filtration is S.B. (see
Exercise (3.22)).
A deep and difficult study is made by Dubins et al. [1] to show that the
filtration on the canonical space $C(\mathbb{R}_+, \mathbb{R})$ is not S.B. under (many) probabilities
$Q$ equivalent to the Wiener measure, although under such probabilities $(\mathscr{F}_t)$ is
well known to be W.B. (see (P1) above). The arguments in Dubins et al. [1] have
been greatly simplified by Schachermayer [1] and Emery [4].
Tsirel'son [2] has shown that the filtration of Walsh's Brownian motion with
at least three rays, which is well known to be W.B., is not S.B. The arguments
in Tsirel'son [2] have been simplified by Barlow, Emery et al. [1]. Based on
Tsirel'son's technique, it is shown in this paper that if $\mathscr{F}_t = \mathscr{F}_t^B$ for a BM$^d$ $B$
($d \ge 1$), and if $L$ is the end of a $(\mathscr{F}_t)$-predictable set $\Gamma$, then $\mathscr{F}_{L+}$ differs from $\mathscr{F}_{L-}$
by the adjunction of one set $A$ at most, i.e. $\mathscr{F}_{L+} = \sigma\{\mathscr{F}_{L-}, A\}$, a property of the
Brownian filtration which had been conjectured by M. Barlow. Watanabe [6] shows
the existence of a 2-dimensional diffusion for which this property does not hold.
Warren [2] shows that the filtration generated by sticky Brownian motion and its
driving Brownian motion (for the definition of this pair of processes, see Warren
[1]) is not S.B. Another simplification of Tsirel'son's work is presented by De
Meyer [1]. Emery and Schachermayer [1] show the existence of a pure martingale
$(M_t)$ with bracket $\langle M, M\rangle_t$ such that the measure $d\langle M, M\rangle_t$ is equivalent to $dt$,
and nonetheless the filtration of $(M_t)$ is not S.B., although it is W.B. (see (P2)
above).
Let us also recall the question studied by Lane [1].
Question 4. If $B$ is a BM$^1$ and $H$ a $(\mathscr{F}_t^B)$-predictable process, under which
condition on $H$ is the filtration of $M_t = \int_0^t H_s\,dB_s$ that of a Brownian motion?
Under which conditions are all the $(\mathscr{F}_t^M)$-martingales continuous?
In the case $H_s = f(B_s)$, Lane [1] has partial results which are hard to prove.
There are also partial answers in Knight [7] when $f$ is the indicator function of a
set (see Exercise (4.20)).
We also list the
Question 5. Which of the martingales of the previous question are extremal or
pure?
For $H_s = B_s^n$, Stroock and Yor [2] give a positive answer for $n$ odd (see
Exercise (3.11) Chap. IX) and so does Beghdadi-Sakrani [2] for $n$ even. When
$H > 0$ a.s., then $\int_0^\cdot H_s\,dB_s$ is extremal (see Exercise (4.15)) but we have the
following question, which is a particular case of the previous one:
Question 6. Does there exist a strictly positive predictable process H such that
the above stochastic integral is not pure?
Brownian filtrations have been further studied in Tsirel'son [4], who starts a
classification of noises, that is, roughly, of families $(\mathscr{F}_{s,t})_{s \le t}$ of $\sigma$-fields such that,
for $s < v < t$,
$$\mathscr{F}_{s,v} \vee \mathscr{F}_{v,t} = \mathscr{F}_{s,t}$$
and $\mathscr{F}_{s,v}$ and $\mathscr{F}_{v,t}$ are independent. It is shown that there are many more noises
than those generated by the increments of Lévy processes.
Chapter VI. Local Times
§1. Definition and First Properties
With Itô's formula, we saw how $C^2$-functions operate on continuous semimartingales. We now extend this to convex functions, thus introducing the important
notion of local time.
In what follows, $f$ is a convex function. We use the notation and results of
Sect. 3 in the Appendix. The following result will lead to a generalization of Itô's
formula.
(1.1) Theorem. If $X$ is a continuous semimartingale, there exists a continuous
increasing process $A^f$ such that
$$f(X_t) = f(X_0) + \int_0^t f'_-(X_s)\,dX_s + \frac{1}{2}A_t^f$$
where $f'_-$ is the left-hand derivative of $f$.
Proof. If $f$ is $C^2$, then this is Itô's formula and $A_t^f = \int_0^t f''(X_s)\,d\langle X, X\rangle_s$.
Let now $j$ be a positive $C^\infty$-function with compact support in $]-\infty, 0]$ such
that $\int_{-\infty}^0 j(y)\,dy = 1$, and set $f_n(x) = n\int_{-\infty}^0 f(x + y)\,j(ny)\,dy$. The function $f$
being convex, hence locally bounded, $f_n$ is well defined for every $n$ and, as $n$
tends to infinity, $f_n$ converges to $f$ pointwise and $f'_n$ increases to $f'_-$. For each $n$,
$$f_n(X_t) = f_n(X_0) + \int_0^t f'_n(X_s)\,dX_s + \frac{1}{2}A_t^{f_n},$$
and $f_n(X_t)$ (resp. $f_n(X_0)$) converges to $f(X_t)$ (resp. $f(X_0)$). Moreover, by stopping, we can suppose that $X$ is bounded and then $f'_n(X_s)$ also is bounded. By the
dominated convergence theorem (Theorem (2.12) Chap. IV) for stochastic integrals, $\int_0^t f'_n(X_s)\,dX_s$ converges to $\int_0^t f'_-(X_s)\,dX_s$ in probability, uniformly on every
bounded interval. As a result, $A^{f_n}$ converges also to a process $A^f$ which, as a
limit of increasing processes, is itself an increasing process, and
$$f(X_t) = f(X_0) + \int_0^t f'_-(X_s)\,dX_s + \frac{1}{2}A_t^f.$$
The process $A_t^f$ can now obviously be chosen to be a.s. continuous, which ends
the proof. □
The problem is now to compute $A^f$ in an explicit and useful way, making
clear how it depends on $f$. We begin with the special cases of $|x|$, $x^+ = x \vee 0$
and $x^- = -(x \wedge 0)$. We define the function sgn by $\mathrm{sgn}(x) = 1$ if $x > 0$ and
$\mathrm{sgn}(x) = -1$ if $x \le 0$. If $f(x) = |x|$, then $f'_-(x) = \mathrm{sgn}(x)$.
(1.2) Theorem (Tanaka's formula). For any real number $a$, there exists an increasing continuous process $L^a$ called the local time of $X$ in $a$ such that
$$|X_t - a| = |X_0 - a| + \int_0^t \mathrm{sgn}(X_s - a)\,dX_s + L_t^a,$$
$$(X_t - a)^+ = (X_0 - a)^+ + \int_0^t 1_{(X_s > a)}\,dX_s + \frac{1}{2}L_t^a,$$
$$(X_t - a)^- = (X_0 - a)^- - \int_0^t 1_{(X_s \le a)}\,dX_s + \frac{1}{2}L_t^a.$$
In particular, $|X - a|$, $(X - a)^+$ and $(X - a)^-$ are semimartingales.
Proof. The left derivative of $f(x) = (x - a)^+$ is equal to $1_{]a,\infty[}$; by Theorem
(1.1), there is a process $A^+$ such that
$$(X_t - a)^+ = (X_0 - a)^+ + \int_0^t 1_{(X_s > a)}\,dX_s + \frac{1}{2}A_t^+.$$
In the same way,
$$(X_t - a)^- = (X_0 - a)^- - \int_0^t 1_{(X_s \le a)}\,dX_s + \frac{1}{2}A_t^-.$$
By subtracting the last identity from the previous one, we get
$$X_t = X_0 + \int_0^t dX_s + \frac{1}{2}\left(A_t^+ - A_t^-\right).$$
It follows that $A_t^+ = A_t^-$ a.s. and we set $L_t^a = A_t^+$.
By adding the same two identities we then get the first formula in the statement. □
We will also write $L_t^a(X)$ for the local time in $a$ of the semimartingale $X$ when
there is a risk of ambiguity.
Remark. The lack of symmetry in the last two identities in the statement is due to
the fact that we have chosen to work with left derivatives. This is also the reason
for the choice of the function sgn. See however Exercise (1.25).
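As a numerical aside (our addition, not part of the original text), Tanaka's formula can be checked on a simulated Brownian path: the discretized version of $|B_t| - \int_0^t \mathrm{sgn}(B_s)\,dB_s$ should approximate the same quantity $L_t^0$ as the occupation-density estimate $\frac{1}{2\varepsilon}\int_0^t 1_{]-\varepsilon,\varepsilon[}(B_s)\,ds$ that appears later in this section. The step count, seed, bandwidth and tolerance below are arbitrary choices for this sketch.

```python
import numpy as np

# Sketch: estimate the Brownian local time L_1^0 in two independent ways.
rng = np.random.default_rng(1)
n, t = 200_000, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Tanaka's formula: L_t = |B_t| - |B_0| - int_0^t sgn(B_s) dB_s,
# with sgn(x) = -1 for x <= 0, as in the text.
sgn = np.where(B[:-1] > 0, 1.0, -1.0)
tanaka_L = abs(B[-1]) - np.sum(sgn * dB)

# Occupation-density estimate: (1/(2 eps)) * Leb{s <= t : |B_s| < eps}.
eps = 0.02
occ_L = (dt / (2 * eps)) * np.count_nonzero(np.abs(B[:-1]) < eps)

print(tanaka_L, occ_L)  # the two estimates should roughly agree
```

Both estimators converge to $L_1^0$ as the step size and the bandwidth shrink; on a single path they agree only up to Monte Carlo error.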
With the increasing process $L_t^a$, we can as usual associate a random measure
$dL_t^a$ on $\mathbb{R}_+$. To some extent, it measures the "time" spent at $a$ by the semimartingale $X$, as is shown in the following result.
(1.3) Proposition. The measure $dL_t^a$ is a.s. carried by the set $\{t : X_t = a\}$.
Proof. By applying Itô's formula to the semimartingale $|X - a|$, we get
$$(X_t - a)^2 = (X_0 - a)^2 + 2\int_0^t |X_s - a|\,d(|X - a|)_s + \langle |X - a|, |X - a|\rangle_t,$$
and using the first formula in Theorem (1.2) this is equal to
$$(X_0 - a)^2 + 2\int_0^t |X_s - a|\,\mathrm{sgn}(X_s - a)\,dX_s + 2\int_0^t |X_s - a|\,dL_s^a + \langle X, X\rangle_t.$$
If we compare this with the equality, also given by Itô's formula,
$$(X_t - a)^2 = (X_0 - a)^2 + 2\int_0^t (X_s - a)\,dX_s + \langle X, X\rangle_t,$$
we see that $\int_0^t |X_s - a|\,dL_s^a = 0$ a.s., which is the result we had to prove. □
Remarks. 1°) This proposition raises the natural question whether $\{t : X_t = a\}$
is exactly the support of $dL_t^a$. This is true in the case of BM as will be seen in
the following section. The general situation is more involved and is described in
Exercise (1.26) in the case of martingales.
2°) If $\sigma_a = \sup\{t : X_t = a\}$, then $L_\infty^a = L_{\sigma_a}^a$, a fact which comes in handy in
some proofs.
We are now going to study the regularity in the space variable $a$ of the process
$L_t^a$. We need the following
(1.4) Lemma. There exists a $\mathscr{B}(\mathbb{R}) \otimes \mathscr{P}$-measurable process $L$ such that, for
each $a$, $L(a, \cdot, \cdot)$ is indistinguishable from $L^a$.
Proof. Apply Fubini's theorem for stochastic integrals (Exercise (5.17) of Chap.
IV) to the process $H(a, s, \cdot) = 1_{(X_s > a)}$. □
Consequently, we henceforth suppose that $L^a$ is $\mathscr{B}(\mathbb{R}) \otimes \mathscr{P}$-measurable and
we will use the measurability in $a$ to prove the existence of yet another, better,
version. We first prove two important results. We recall that if $f$ is convex, its
second derivative $f''$ in the sense of distributions is a positive measure.
(1.5) Theorem (Itô–Tanaka formula). If $f$ is the difference of two convex functions and if $X$ is a continuous semimartingale,
$$f(X_t) = f(X_0) + \int_0^t f'_-(X_s)\,dX_s + \frac{1}{2}\int_{\mathbb{R}} L_t^a\,f''(da).$$
In particular, $f(X)$ is a semimartingale.
Proof. It is enough to prove the formula for a convex $f$. On every compact subset
of $\mathbb{R}$, $f$ is equal to a convex function $g$ such that $g''$ has compact support. Thus by
stopping $X$ when it first leaves a compact set, it suffices to prove the result when
$f''$ has compact support, in which case there are two constants $\alpha, \beta$ such that
$$f(x) = \alpha x + \beta + \frac{1}{2}\int |x - a|\,f''(da).$$
Thanks to the previous results we may write
$$f(X_t) = \alpha X_t + \beta + \frac{1}{2}\int |X_t - a|\,f''(da) = \alpha(X_t - X_0) + f(X_0) + \frac{1}{2}\int \left(\int_0^t \mathrm{sgn}(X_s - a)\,dX_s + L_t^a\right) f''(da).$$
From Sect. 3 in the Appendix and Lemma (1.4), we see that
$$\frac{1}{2}\int_{\mathbb{R}}\int_0^t \mathrm{sgn}(X_s - a)\,dX_s\,f''(da) = \int_0^t f'_-(X_s)\,dX_s - \alpha(X_t - X_0),$$
which completes the proof. □
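As a quick consistency check (our addition, not in the original), specializing the Itô–Tanaka formula to $f(x) = |x - a|$ recovers Tanaka's formula of Theorem (1.2):

```latex
% f(x) = |x - a| is convex with f'_-(x) = sgn(x - a) and f''(dy) = 2\,\delta_a(dy), so
f(X_t) = f(X_0) + \int_0^t \operatorname{sgn}(X_s - a)\,dX_s
       + \frac{1}{2}\int_{\mathbb{R}} L_t^y \, 2\,\delta_a(dy)
       = |X_0 - a| + \int_0^t \operatorname{sgn}(X_s - a)\,dX_s + L_t^a .
```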
(1.6) Corollary (Occupation times formula). There is a $P$-negligible set outside
of which
$$\int_0^t \Phi(X_s)\,d\langle X, X\rangle_s = \int_{-\infty}^{+\infty} \Phi(a)\,L_t^a\,da$$
for every $t$ and every positive Borel function $\Phi$.
Proof. If $\Phi = f''$ with $f$ in $C^2$, the formula holds for every $t$, as follows from
comparing the Itô–Tanaka and Itô formulas, outside a $P$-negligible set $\Gamma_\Phi$. By
considering a countable set $(\Phi_n)$ of such functions dense in $C_0(\mathbb{R})$ for the topology
of uniform convergence, it is easily seen that outside the $P$-negligible set $\Gamma = \bigcup_n \Gamma_{\Phi_n}$, the formula holds simultaneously for every $t$ and every $\Phi$ in $C_0(\mathbb{R})$. An
application of the monotone class theorem ends the proof. □
Remarks. 1°) The time $t$ may be replaced by any random time $S$.
2°) These "occupation times" are defined with respect to $d\langle X, X\rangle_s$, which may
be seen as the "natural" time-scale for $X$. However the name for this formula is
particularly apt in the case of Brownian motion where, if $\Phi = 1_A$ for a Borel set
$A$, the left-hand side is exactly the amount of time spent in $A$ by the BM.
3°) A consequence of these formulas is that for a function $f$ which is twice
differentiable but not necessarily $C^2$, the Itô formula is still valid in exactly the
same form provided $f''$ is locally integrable; this could also have been proved
directly by a monotone class argument.
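The occupation times formula lends itself to a direct numerical check for BM, where $d\langle X, X\rangle_s = ds$ (our addition, not part of the original text; the grid, step size and tolerance below are arbitrary choices, and each $L_t^a$ is estimated through the discretized Tanaka formula):

```python
import numpy as np

# Check int_0^t phi(B_s) ds = int phi(a) L_t^a da on one simulated path.
rng = np.random.default_rng(2)
n, t = 100_000, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

phi = lambda x: x ** 2
lhs = np.sum(phi(B[:-1])) * dt  # left Riemann sum of int_0^t phi(B_s) ds

# Tanaka: L_t^a = 2 [ (B_t - a)^+ - (B_0 - a)^+ - int_0^t 1_{(B_s > a)} dB_s ]
grid = np.linspace(B.min() - 0.1, B.max() + 0.1, 161)
L = np.array([2.0 * (max(B[-1] - a, 0.0) - max(-a, 0.0) - dB[B[:-1] > a].sum())
              for a in grid])

y = phi(grid) * L
rhs = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))  # trapezoidal rule in a

print(lhs, rhs)  # both sides should agree up to discretization error
```

The grid extends slightly beyond the range of the path, where the Tanaka estimate correctly returns $L_t^a = 0$.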
We now turn to the construction of a regular version of local times with which
we will work in the sequel.
(1.7) Theorem. For any continuous semimartingale $X$, there exists a modification
of the process $\{L_t^a;\ a \in \mathbb{R},\ t \in \mathbb{R}_+\}$ such that the map $(a, t) \to L_t^a$ is a.s. continuous
in $t$ and cadlag in $a$. Moreover, if $X = M + V$, then
$$L_t^a - L_t^{a-} = 2\int_0^t 1_{(X_s = a)}\,dV_s = 2\int_0^t 1_{(X_s = a)}\,dX_s.$$
Thus, in particular, if $X$ is a local martingale, there is a bicontinuous modification
of the family $L^a$ of local times.
Proof. By Tanaka's formula,
$$L_t^a = 2\left[(X_t - a)^+ - (X_0 - a)^+ - \int_0^t 1_{(X_s > a)}\,dM_s - \int_0^t 1_{(X_s > a)}\,dV_s\right].$$
Using Kolmogorov's criterion (Theorem (2.1) Chap. I) with the Banach space
$C([0, t], \mathbb{R})$, we first prove that the stochastic integral
$$M_t^a = \int_0^t 1_{(X_s > a)}\,dM_s$$
possesses a bicontinuous modification. Thanks to the BDG inequalities of Chap.
IV and Corollary (1.6), we have, for any $k \ge 1$,
$$E\left[\sup_t |M_t^a - M_t^b|^{2k}\right] \le C_k E\left[\left(\int_0^\infty 1_{(a < X_s \le b)}\,d\langle M, M\rangle_s\right)^k\right] = C_k E\left[\left(\int_a^b L_\infty^x\,dx\right)^k\right] = C_k (b - a)^k E\left[\left(\frac{1}{b - a}\int_a^b L_\infty^x\,dx\right)^k\right] \le C_k (b - a)^k E\left[\frac{1}{b - a}\int_a^b (L_\infty^x)^k\,dx\right].$$
By Fubini's theorem, this is less than
$$C_k (b - a)^k \sup_x E\left[(L_\infty^x)^k\right].$$
Now, $L_t^x = 2\left\{(X_t - x)^+ - (X_0 - x)^+ - \int_0^t 1_{(X_s > x)}\,dX_s\right\}$ and since $|(X_t - x)^+ - (X_0 - x)^+| \le |X_t - X_0|$, there is a universal constant $d_k$ such that
$$E\left[(L_\infty^x)^k\right] \le d_k\, E\left[\sup_s |X_s - X_0|^k + \left(\int_0^\infty |dV|_s\right)^k + \langle M, M\rangle_\infty^{k/2}\right].$$
Remark that the right-hand side no longer depends on $x$. If it is finite for some
$k > 1$, we have proved our claim. If not, we may stop $X$ at the times
$$T_n = \inf\left\{t : \sup_{s \le t} |X_s - X_0| + \int_0^t |dV|_s + \langle M, M\rangle_t^{1/2} \ge n\right\}.$$
The martingales $(M^a)^{T_n}$ have bicontinuous versions, hence so does $M^a$.
To complete the proof, we must prove that
$$V_t^a = \int_0^t 1_{(X_s > a)}\,dV_s$$
is jointly cadlag in $a$ and continuous in $t$. But, by Lebesgue's theorem,
$$V_t^{a-} = \lim_{b \uparrow a} \int_0^t 1_{(X_s > b)}\,dV_s = \int_0^t 1_{(X_s \ge a)}\,dV_s.$$
It follows that $L_t^a - L_t^{a-} = 2\left(V_t^{a-} - V_t^a\right) = 2\int_0^t 1_{(X_s = a)}\,dV_s$. In the same way,
$V_t^{a+} = V_t^a$, so that $L_t^a = L_t^{a+}$.
Finally, the occupation times formula implies that
$$\int_0^t 1_{(X_s = a)}\,d\langle M, M\rangle_s = \int_0^t 1_{(X_s = a)}\,d\langle X, X\rangle_s = 0,$$
so that $\int_0^t 1_{(X_s = a)}\,dM_s = 0$, which ends the proof. □
Remark. If $X$ is of finite variation, then $L^a(X) \equiv 0$. Indeed, by the occupation
times formula, $L_t^a = 0$ for Lebesgue-almost every $a$ and, by the right-continuity in
$a$, $L^a(X) = 0$ for every $a$. However, a semimartingale may have a discontinuous
family of local times; in other words, the above theorem cannot be improved
to get continuity in both variables, as is shown in Exercise (1.34).
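To make the Remark concrete (our addition, not in the original), consider the deterministic finite-variation semimartingale $X_t = t$: Tanaka's formula alone already forces its local times to vanish, since for every $a$

```latex
\int_0^t \operatorname{sgn}(s - a)\,ds \;=\; |t - a| - |a|,
\qquad\text{so}\qquad
|t - a| \;=\; |0 - a| + \int_0^t \operatorname{sgn}(s - a)\,ds + L_t^a
\;\Longrightarrow\; L_t^a = 0 .
```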
As a by-product of the use, in the preceding proof, of Kolmogorov's criterion,
we see that for local martingales we may get Hölder properties in $a$ for the family
$L^a$. This is in particular the case for Brownian motion where
$$E\left[\sup_{s \le t} |L_s^a - L_s^b|^{2k}\right] \le C_k |a - b|^k t^{k/2}$$
for any fixed time $t$ and for every $k$. Thus we may now state the
(1.8) Corollary (Continuity of martingale local times). The family $L^a$ may be
chosen such that, almost surely, the map $a \to L_t^a$ is Hölder continuous of order $\alpha$
for every $\alpha < 1/2$, uniformly in $t$ on every compact interval.
Proof. In the case of BM, only the uniformity in $t$ has to be proved and this
follows from Exercise (2.10) Chap. I. The result for local martingales is then a
consequence of the DDS Theorem (see Exercise (1.27)). The details are left to the
reader. □
From now on, we will of course consider only the version of the local time
$L_t^a(X)$ which was exhibited in Theorem (1.7). For this version, we have the following corollary, which gives another reason for the name "local time".
(1.9) Corollary. If $X$ is a continuous semimartingale, then, almost surely,
$$L_t^a(X) = \lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon} \int_0^t 1_{[a, a+\varepsilon[}(X_s)\,d\langle X, X\rangle_s$$
for every $a$ and $t$, and if $M$ is a continuous local martingale,
$$L_t^a(M) = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^t 1_{]a-\varepsilon, a+\varepsilon[}(M_s)\,d\langle M, M\rangle_s.$$
The same result holds with any random time $S$ in place of $t$.
Proof. This is a straightforward consequence of the occupation times formula and
the right-continuity in $a$ of $L_t^a(X)$. □
For BM we have in particular
$$L_t^0 = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^t 1_{]-\varepsilon, \varepsilon[}(B_s)\,ds,$$
which proves that $L_t^0$ is adapted to the completion of $\sigma(|B_s|,\ s \le t)$. This will be
taken up in the following section.
The above corollary is an "approximation" result for the local time. In the
case of BM, there are many results of this kind which will be stated in Chap.
XII. We begin here with a result which is valid for all the semimartingales of $\mathscr{H}^p$
(Exercise (4.13) Chap. IV). Let $X$ be a continuous semimartingale; for $\varepsilon > 0$,
define a double sequence of stopping times by
$$\sigma_0^\varepsilon = 0, \qquad \tau_0^\varepsilon = \inf\{t : X_t = \varepsilon\},$$
$$\sigma_n^\varepsilon = \inf\{t > \tau_{n-1}^\varepsilon : X_t = 0\}, \qquad \tau_n^\varepsilon = \inf\{t > \sigma_n^\varepsilon : X_t = \varepsilon\}.$$
We set $d_\varepsilon(t) = \max\{n : \sigma_n^\varepsilon < t\}$; this is the number of "downcrossings" of $X$
(see Sect. 2 Chap. II) from level $\varepsilon$ to level 0 before time $t$. On Figure 4, we have
$d_\varepsilon(t) = 2$.
For simplicity, we will write $\sigma_n$ and $\tau_n$ instead of $\sigma_n^\varepsilon$ and $\tau_n^\varepsilon$, and $L_t$ for $L_t^0$.
(1.10) Theorem. If $X = M + V$ is in $\mathscr{H}^p$, $p \ge 1$, i.e.
$$E\left[\langle M, M\rangle_\infty^{p/2} + \left(\int_0^\infty |dV|_t\right)^p\right] < \infty,$$
then
$$\lim_{\varepsilon \to 0} E\left[\sup_t \left|\varepsilon\, d_\varepsilon(t) - \frac{1}{2} L_t\right|^p\right] = 0.$$
Fig. 4.
Proof. By Tanaka's formula,
$$X_{\tau_n \wedge t}^+ - X_{\sigma_n \wedge t}^+ = \int_{]\sigma_n \wedge t,\, \tau_n \wedge t]} 1_{(X_s > 0)}\,dX_s + \frac{1}{2}\left(L_{\tau_n \wedge t} - L_{\sigma_n \wedge t}\right).$$
Because $X$ does not vanish on $[\tau_n, \sigma_{n+1}[$, we have $L_{\tau_n \wedge t} - L_{\sigma_n \wedge t} = L_{\sigma_{n+1} \wedge t} - L_{\sigma_n \wedge t}$.
As a result,
$$\sum_n \left(X_{\tau_n \wedge t}^+ - X_{\sigma_n \wedge t}^+\right) = \int_0^t e_s^\varepsilon\,dX_s + \frac{1}{2} L_t,$$
where $e^\varepsilon$ is the predictable process $\sum_n 1_{]\sigma_n, \tau_n]}(s)\, 1_{]0, \varepsilon]}(X_s)$. But $X_{\tau_n \wedge t}^+ - X_{\sigma_n \wedge t}^+ = \varepsilon$
on $\{\tau_n \le t\}$; if $n(t) = \inf\{n : \tau_n > t\}$, the left-hand side of the above equality is
equal to $\varepsilon\, d_\varepsilon(t) + U(\varepsilon)$ where $0 \le U(\varepsilon) = X_t^+ - X_{\sigma_{n(t)} \wedge t}^+ \le \varepsilon$. Thus the proof will
be complete if
$$\lim_{\varepsilon \to 0} E\left[\sup_t \left|\int_0^t e_s^\varepsilon\,dX_s\right|^p\right] = 0.$$
But, by the BDG inequalities, this expectation is less than
$$c_p\, E\left[\left(\int_0^\infty e_s^\varepsilon\,d\langle M, M\rangle_s\right)^{p/2} + \left(\int_0^\infty e_s^\varepsilon\,|dV|_s\right)^p\right]$$
and since $e^\varepsilon$ converges boundedly to zero, it remains to apply Lebesgue's dominated convergence theorem. □
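The downcrossing approximation can be illustrated numerically for BM (our addition, not in the original text; the level $\varepsilon$, the bandwidth $h$ of the independent occupation-density estimate of $L_t$, the seed and the tolerance are arbitrary choices):

```python
import numpy as np

# Sketch of Theorem (1.10) for BM: eps * d_eps(t) should approximate L_t / 2.
rng = np.random.default_rng(3)
n, t = 400_000, 1.0
dt = t / n
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

def downcrossings(path, eps):
    """Count completed downcrossings from level eps to level 0."""
    count, above = 0, False
    for x in path:
        if not above and x >= eps:
            above = True          # reached eps: a downcrossing may start
        elif above and x <= 0.0:
            above = False         # back at 0: one downcrossing completed
            count += 1
    return count

eps = 0.02
approx = eps * downcrossings(B, eps)

# Independent estimate of L_t via the occupation density around 0.
h = 0.02
L = (dt / (2 * h)) * np.count_nonzero(np.abs(B) < h)

print(approx, L / 2)  # should be close for small eps
```

On a single path, the agreement is only up to fluctuations of order $\sqrt{\varepsilon}$, consistent with the $L^p$ convergence stated in the theorem.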
With the same hypothesis and notation, we also have the following proposition
which is the key to Exercise (2.13) Chap. XIII.
(1.11) Proposition.
$$\lim_{\varepsilon \downarrow 0} E\left[\sup_t \left|\varepsilon^{-1}\int_0^t e_s^\varepsilon\,d\langle X, X\rangle_s - \frac{1}{2} L_t\right|^p\right] = 0.$$
Proof. Using Itô's and Tanaka's formulas, and taking into account the fact that
$dL_s$ does not charge the set $\{s : X_s \ne 0\}$, one can show that
$$\left(X_{\tau_n \wedge t}^+\right)^2 - \left(X_{\sigma_n \wedge t}^+\right)^2 = 2\int_{]\sigma_n \wedge t,\, \tau_n \wedge t]} X_s 1_{(X_s > 0)}\,dX_s + \int_{]\sigma_n \wedge t,\, \tau_n \wedge t]} 1_{(X_s > 0)}\,d\langle X, X\rangle_s,$$
which entails that
$$\sum_n \left\{\left(X_{\tau_n \wedge t}^+\right)^2 - \left(X_{\sigma_n \wedge t}^+\right)^2\right\} = 2\int_0^t X_s e_s^\varepsilon\,dX_s + \int_0^t e_s^\varepsilon\,d\langle X, X\rangle_s.$$
Therefore, since $\varepsilon^{-1}\sum_n\{(X_{\tau_n \wedge t}^+)^2 - (X_{\sigma_n \wedge t}^+)^2\}$ differs from $\varepsilon\, d_\varepsilon(t)$ by at most $\varepsilon$,
it is enough, in view of Theorem (1.10), to prove that $\varepsilon^{-1}\int_0^t X_s e_s^\varepsilon\,dX_s$ converges to zero uniformly in $L^p$.
But using the BDG inequalities again, this amounts to showing that
$$E\left[\left(\varepsilon^{-2}\int_0^\infty X_s^2\, e_s^\varepsilon\,d\langle M, M\rangle_s\right)^{p/2} + \left(\varepsilon^{-1}\int_0^\infty |X_s|\, e_s^\varepsilon\,|dV|_s\right)^p\right]$$
converges to zero as $\varepsilon$ converges to zero, and since $|X_s| \le \varepsilon$ on $\{e_s^\varepsilon > 0\}$, this
follows again from Lebesgue's theorem. □
Remark. By stopping, any semimartingale can be turned into an element of $\mathscr{H}^p$.
Therefore, the last two results have obvious corollaries for general semimartingales, provided one uses convergence in probability on compact sets instead of
convergence in $L^p$.
We close this section with a more thorough study of the dependence of $L^a$ in
the space variable $a$, and prove that $L^a$, as a function of $a$, has a finite quadratic
variation.
For each $t$, the random function $a \to L_t^a$ is a cadlag function, hence admits
only countably many discontinuities. We denote by $\Delta L_t^a$ the process $L_t^a - L_t^{a-}$. If
$X = M + V$,
$$\Delta L_t^a = 2\int_0^t 1_{(X_s = a)}\,dV_s.$$
Consequently, for $a < b$,
$$\sum_{a < x \le b} |\Delta L_t^x| \le 2\int_0^t |dV|_s < \infty$$
and, a fortiori,
$$\sum_{a < x \le b} |\Delta L_t^x|^2 < \infty.$$
For our later needs, we compute this last sum. We have
$$(\Delta L_t^x)^2 = 4\left(\int_0^t 1_{(X_s = x)}\,dV_s\right)^2$$
and since, by the continuity of $V$, the measure $dV \otimes dV$ does not charge the
diagonal in $[0, t]^2$,
$$(\Delta L_t^x)^2 = 8\int_0^t 1_{(X_s = x)}\,dV_s \int_0^s 1_{(X_u = x)}\,dV_u = 4\int_0^t \Delta L_s^x\, 1_{(X_s = x)}\,dV_s.$$
Since there are at most countably many $x \in\, ]a, b[$ such that $\Delta L_s^x > 0$ for some
$s \in [0, t]$, we finally get
$$\sum_{a < x \le b} (\Delta L_t^x)^2 = 4\int_0^t \Delta L_s^{X_s}\, 1_{(a < X_s \le b)}\,dV_s.$$
We may now state
(1.12) Theorem. Let $(\Delta_n)$ be a sequence of subdivisions of $[a, b]$ such that $|\Delta_n| \to 0$. For any non-negative and finite random variable $S$,
$$\lim_n \sum_{\Delta_n} \left(L_S^{a_{i+1}} - L_S^{a_i}\right)^2 = 4\int_a^b L_S^x\,dx + \sum_{a < x \le b} (\Delta L_S^x)^2$$
in probability.
Proof. The case of a general $S$ will be covered if we prove that $\sum_{\Delta_n} (L_t^{a_{i+1}} - L_t^{a_i})^2$
converges in probability, uniformly for $t$ in a compact subinterval of $\mathbb{R}_+$, to
$4\int_a^b L_t^x\,dx + \sum_{a < x \le b} (\Delta L_t^x)^2$. To this end, we develop $(L_t^{a_{i+1}} - L_t^{a_i})^2$ with the
help of Tanaka's formula, namely
$$\frac{1}{2} L_t^a = (X_t - a)^+ - (X_0 - a)^+ - M_t^a - V_t^a$$
where we define $Z_t^a = \int_0^t 1_{(X_s > a)}\,dZ_s$ for $Z = M, V, X$.
The function $\phi_t(a) = (X_t - a)^+ - (X_0 - a)^+$ is Lipschitz in $a$ with constant
1, which implies that
$$\sum_{\Delta_n} \left(\phi_t(a_{i+1}) - \phi_t(a_i)\right)^2 \le |\Delta_n|\,(b - a) \longrightarrow 0.$$
Likewise, using the continuity of $M^a$ as was done in Sect. 1, Chap. IV,
$$\lim_n E\left[\sum_{\Delta_n} \left(\phi_t(a_{i+1}) - \phi_t(a_i)\right)\left(M_t^{a_{i+1}} - M_t^{a_i}\right)\right] = 0.$$
Finally,
$$\left|\sum_{\Delta_n} \left(\phi_t(a_{i+1}) - \phi_t(a_i)\right)\left(V_t^{a_{i+1}} - V_t^{a_i}\right)\right| \le |\Delta_n| \int_0^t |dV|_s,$$
which goes to zero as $n$ tends to infinity.
As a result, the limit we are looking for is equal (if it exists) to the limit in
probability of
$$\Lambda_n = \sum_{\Delta_n} \left(X_t^{a_{i+1}} - X_t^{a_i}\right)^2.$$
Itô's formula yields
$$\Lambda_n = 2\sum_{\Delta_n} \int_0^t \left(X_s^{a_i} - X_s^{a_{i+1}}\right) 1_{(a_i < X_s \le a_{i+1})}\,dX_s + \sum_{\Delta_n} \int_0^t 1_{(a_i < X_s \le a_{i+1})}\,d\langle X, X\rangle_s.$$
The occupation times formula shows that the second term in the right-hand side
is equal to $\int_a^b L_t^x\,dx$. We shall prove that the first term converges to the desired
quantity.
By localization, we may assume that $X$ is bounded; we can then use the
dominated convergence theorem for stochastic integrals, which proves that this
term converges in probability, uniformly on every compact interval, to
$$J = 2\int_0^t \left(X_s^{X_s-} - X_s^{X_s}\right) 1_{(a < X_s \le b)}\,dX_s = \int_0^t \Delta L_s^{X_s}\, 1_{(a < X_s \le b)}\,dX_s.$$
By the computation preceding the statement, it remains to prove that
$$\int_0^t \Delta L_s^{X_s}\, 1_{(a < X_s \le b)}\,dM_s = 0.$$
But as already observed, there are only countably many $x \in\, ]a, b[$ such that
$\Delta L_s^x > 0$ for some $s \in [0, t]$ and moreover, $P$-a.s., $\int_0^t 1_{\{X_s = x\}}\,d\langle M, M\rangle_s = 0$ for
every $x$. Thus the result follows. □
The following corollary will be used in Chap. XI.
(1.13) Corollary. If the process $x \to L_t^x$, $x \ge 0$, is a continuous semimartingale
(in some appropriate filtration), its bracket is equal to $4\int_0^x L_t^y\,dy$.
# (1.14) Exercise. Let $M$ be a continuous local martingale vanishing at 0 and $L$ its
local time at 0.
1°) Prove that $\inf\{t : L_t > 0\} = \inf\{t : \langle M, M\rangle_t > 0\}$ a.s. In particular, $M \equiv 0$
if and only if $L \equiv 0$.
2°) Prove that for $0 < \alpha < 1$ and $M \not\equiv 0$, $|M|^\alpha$ is not a semimartingale.
(1.15) Exercise (Extension of the occupation times formula). If $X$ is a continuous semimartingale, then almost surely, for every positive Borel function $h$ on $\mathbb{R}_+ \times \mathbb{R}$,
$$\int_0^t h(s, X_s)\,d\langle X, X\rangle_s = \int_{-\infty}^{+\infty} da \int_0^t h(s, a)\,dL_s^a(X).$$
Extend this formula to measurable functions $h$ on $\mathbb{R}_+ \times \Omega \times \mathbb{R}$.
# (1.16) Exercise. 1°) Let $X$ and $Y$ be two continuous semimartingales. Prove that
$$\int_0^t 1_{(X_s = Y_s)}\,d\langle X, Y\rangle_s = \int_0^t 1_{(X_s = Y_s)}\,d\langle X, X\rangle_s = \int_0^t 1_{(X_s = Y_s)}\,d\langle Y, Y\rangle_s.$$
[Hint: Write $\langle X, Y\rangle = \langle X, X\rangle + \langle X, Y - X\rangle$.]
2°) If $X = M + V$ and $A$ is a continuous process of finite variation,
$$\int_0^t 1_{(X_s = A_s)}\,dX_s = \int_0^t 1_{(X_s = A_s)}\,dV_s.$$
3°) If $X = M + V$ is $\ge 0$ and $M_0 = 0$, its local time at 0 is equal to
$2\int_0^t 1_{(X_s = 0)}\,dV_s = 2\int_0^t 1_{(X_s = 0)}\,dX_s$. As a result, if $dV_s$ is carried by the set $\{s :
X_s = 0\}$, then $V$ is increasing; moreover, if $V_t = \sup_{s \le t}(-M_s)$, the local time of
$X$ at 0 is equal to $2V$.
(1.17) Exercise. 1°) If $X$ is a continuous semimartingale, prove that
$$L_t^a(|X|) = L_t^a(X) + L_t^{(-a)-}(X)\ \text{if}\ a \ge 0, \qquad L_t^a(|X|) = 0\ \text{if}\ a < 0,$$
and that $L_t^a(X^+) = L_t^a(X)$.
2°) If $X$ is a continuous semimartingale, prove that

3°) Prove a result similar to Theorem (1.10) but with upcrossings instead of
downcrossings.
(1.18) Exercise. 1°) If $X$ and $Y$ are two continuous semimartingales, prove that $X \vee Y$ and
$X \wedge Y$ are continuous semimartingales and that

[Hint: By the preceding exercise, it is enough to prove the result for positive
$X$ and $Y$. Use Exercise (1.16), 3°).]
2°) Prove further that
$$L^0(XY) = X^+ \cdot L^0(Y) + Y^+ \cdot L^0(X) + X^- \cdot L^{0-}(Y) + Y^- \cdot L^{0-}(X).$$
**
(1.19) Exercise. 1°) If $X$ is the BM, prove that, with the notation of Theorem
(1.10),
$$\lim_{\varepsilon \downarrow 0} \varepsilon\, d_\varepsilon(t) = \frac{1}{2} L_t \quad \text{a.s.}$$
[Hint: Prove the almost-sure convergence for the sequence $\varepsilon_n = n^{-2}$, then use
the fact that $d_\varepsilon(t)$ increases when $\varepsilon$ decreases. Another proof is given in Sect. 1
Chap. XII.]
2°) More generally, if for $a < 0 < b$ we denote by $d_{a,b}$ the number of
downcrossings from $b$ to $a$, then
$$\lim_{n\to\infty} (b_n - a_n)\, d_{a_n, b_n}(t) = \frac{1}{2} L_t \quad \text{a.s.}$$
if $\sum_n (b_n - a_n) < \infty$.
* (1.20) Exercise (A generalization of P. Lévy's characterization theorem). Let
$f(x, t)$ be a fixed solution of the heat equation, i.e. such that
$$f'_t + \tfrac{1}{2} f''_{xx} = 0.$$
We recall that such a function is analytic in $x$ for $t > 0$.
1°) If $B$ is a $(\mathscr{F}_t)$-BM and $A$ a $(\mathscr{F}_t)$-adapted continuous increasing process,
prove that a.s.

up to $m$-negligible sets, where $m$ is the Lebesgue measure.
[Hint: Apply to the semimartingale $Y_t = f'_x(B_t, A_t)$ the fact that $d\langle Y, X\rangle_t$
does not charge the sets $\{t : Y_t = a\}$.] Arguing inductively, prove that if $f'_x$ is not
identically zero then $m\{t : f'_x(B_t, A_t) = 0\} = 0$.
[Hint: If not, there would be a point $x$ where all the spatial derivatives of $f$
vanish.]
2°) Let $X$ be a continuous local martingale such that $f(X_t, t)$ is a local martingale and $m\{t : f'_x(X_t, t) = 0\} = 0$; prove that $X$ is a BM.
[Hint: Observe that $\int_0^t f'^2_x(X_s, s)\,d\langle X, X\rangle_s = \int_0^t f'^2_x(X_s, s)\,ds$.]
(1.21) Exercise. 1°) Let $X^1$ and $X^2$ be two continuous semimartingales vanishing
at 0. Prove that
$$L_t^0(X^1 \vee X^2) = \int_0^t 1_{(X_s^2 \le 0)}\,dL_s^0(X^1) + L_t^0(X^{2+} - X^{1+}).$$
[Hint: Use Exercise (1.16) 3°) and the equality $(X^1 \vee X^2)^+ = (X^{2+} - X^{1+})^+ + X^{1+}$.]
2°) Suppose henceforth that $L^0(X^2 - X^1) \equiv 0$. Pick a real number $\alpha$ in $]0, 1/2[$
and set $Z^i = X^i - 2\alpha X^{i+}$. After observing that, for $X^2 \ge X^1$,
$$(1 - 2\alpha)(X^2 - X^1) \le Z^2 - Z^1 \le X^2 - X^1,$$
prove that $L^0(Z^2 - Z^1) \equiv 0$.
[Hint: Use Theorem (1.10).]
3°) From the equality $2\alpha(X^{2+} - X^{1+})^+ = (X^2 - X^1)^+ - (Z^2 - Z^1)^+$, derive
that
$$L_t^0(X^1 \vee X^2) = \int_0^t 1_{(X_s^2 \le 0)}\,dL_s^0(X^1) + \int_0^t 1_{(X_s^1 < 0)}\,dL_s^0(X^2).$$
(1.22) Exercise. Prove that for the standard BM,
$$\sup_{s \le t} L_s^{B_s} = \sup_a L_t^a.$$
[Hint: One may use the Laplace method.] This process is further studied
in Exercise (2.12) Chap. XI.
(1.23) Exercise. Let $f$ be a strictly increasing function on $\mathbb{R}$, which moreover is
the difference of two convex functions. Let $X$ be a continuous semimartingale; prove that for
every $t$, the equality

holds a.s. for every $a$.
** (1.24) Exercise. Let $X = M + V$ be a continuous semimartingale and $A$ a continuous process of bounded variation such that for every $t$,
i) $\int_0^t 1_{(X_s = A_s)}\,dX_s = 0$,  ii) $\int_0^t 1_{(X_s = A_s)}\,dA_s = 0$
(see Exercise (1.16) above). If $(L_t^a)$ is the family of local times of $X$ and $\Lambda_t$ is
the local time of $X - A$ at zero, prove that for any sequence $(\Delta_n)$ of subdivisions
of $[0, t]$ such that $|\Delta_n| \to 0$, and for any continuous adapted process $H$,
$$\int_0^t H_s\,d\Lambda_s = \lim_{n\to\infty} \sum_{\Delta_n} H_{t_i}\left(L_{t_{i+1}}^{A_{t_i}} - L_{t_i}^{A_{t_i}}\right)$$
in probability. This applies in particular when $A = V$, in which case we see that
the local time of the martingale part of $X$ is the "integral" of the local times of $X$
along its bounded variation part.
(1.25) Exercise (Symmetric local times). Let $X$ be a continuous semimartingale.
1°) If we take $\mathrm{sgn}(x)$ to be 1 for $x > 0$, $-1$ for $x < 0$ and 0 for $x = 0$, prove
that there is a unique increasing process $\bar{L}^a$ such that
$$|X_t - a| = |X_0 - a| + \int_0^t \mathrm{sgn}(X_s - a)\,dX_s + \bar{L}_t^a.$$
Give an example in which $\bar{L}^a$ differs from $L^a$. Prove that $\bar{L}^a = (L^a + L^{a-})/2$ and
$$\bar{L}_t^a = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^t 1_{]a-\varepsilon, a+\varepsilon[}(X_s)\,d\langle X, X\rangle_s.$$
2°) Prove the measurability of $\bar{L}^a$ with respect to $a$ and show that if $f$ is
convex,
$$f(X_t) = f(X_0) + \int_0^t \frac{1}{2}\left(f'_+ + f'_-\right)(X_s)\,dX_s + \frac{1}{2}\int \bar{L}_t^a\,f''(da).$$
Write down the occupation times formula with $\bar{L}^a$ instead of $L^a$.
**
(1.26) Exercise (On the support of local times). Let M be a continuous (J1)martingale such that Mo = 0 and set Z(w) = {t : Mt(w) = OJ. The set Z(w) is
closed and Z(w)C is the union of countably many open intervals. We call R(w)
(resp. G(w)) the set of right (resp. left) ends of these open intervals.
1°) If T is a (.Y;)-stopping time and
DT(w) = inf{t > T(w) : Mt(w) = OJ,
prove that if M is uniformly integrable, the process
is a (.% +t )-martingale.
2°) Prove that there exists a sequence of stopping times Tk such that R(w) C
UdTk(W)}, that for any stopping time T, P[w : T(w) E G(w)] = 0, and finally
that Z(w) is a.s. a perfect set (i.e. closed and without isolated points).
We now recall that any closed subset Z of ~ is the union of a perfect set and
a countable set. This perfect set may be characterized as the largest perfect set
contained in Z and is called the perfect core of Z.
o
3°) Prove that the support of the measure dL t is the perfect core of Z\ Z
where Z° is the interior of Z. Thus in the case of BM, the support is exactly Z, as
is proved in the following section.
[Hint: Use Exercise (1.16).]
4°) Prove the result in 3°) directly from the special case of BM and the DDS
theorem of Sect. I Chap. V.
# (1.27) Exercise. Let $\tau$ be a process of time changes and $X$ a $\tau$-continuous semimartingale; prove that
$$L_t^a(X_\tau) = L_{\tau_t}^a(X).$$
Consequently, if $X$ is a continuous local martingale and $B$ its DDS BM, prove
that $L_t^a(X) = L_{\langle X, X\rangle_t}^a(B)$.
Prove further that if $X$ is a local martingale then, for any real $a$, the sets
$\{\lim_{t\to\infty} X_t \text{ exists}\}$ and $\{L_\infty^a < \infty\}$ are a.s. equal.
[This exercise is partially solved in Sect. 2 Chap. XI.]
* (1.28) Exercise. Let $X$ be a continuous semimartingale and $L^a$ the family of its
local times.
1°) Let $S$ be a positive r.v. Prove that there exists a unique vector measure on
the Borel sets of $\mathbb{R}$, with values in the space of random variables endowed with
the topology of convergence in probability, which is equal to
$$\sum_i f_i\left(L_S^{a_{i+1}} - L_S^{a_i}\right)$$
on the step function $\sum_i f_i 1_{]a_i, a_{i+1}]}$. We write $\int_{-\infty}^{\infty} f(a)\,d_a L_S^a$ for the integral with
respect to this vector measure.
2°) If $f$ is a bounded Borel function and $F(x) = \int_0^x f(u)\,du$, prove that
$$F(X_S) = F(X_0) + \int_0^S f(X_u)\,dX_u - \frac{1}{2}\int_{-\infty}^{\infty} f(a)\,d_a L_S^a.$$
Extend this formula to locally bounded Borel functions $f$.
3°) Prove that if $a \to L_S^a$ is a semimartingale (see Sect. 2 Chap. XI), then
$\int_{-\infty}^{\infty} f(a)\,d_a L_S^a$ is equal to the stochastic integral of $f$ with respect to this semimartingale.
* (1.29) Exercise (Principal values of Brownian local times). 1°) For the linear
BM, prove that for $\alpha > -1$ the process
$$X_t^{(\alpha)} = \int_0^t |B_s|^\alpha\,ds$$
is finite valued, hence
$$\tilde{X}_t^{(\alpha)} = \int_0^t |B_s|^\alpha (\mathrm{sgn}\,B_s)\,ds$$
is also well-defined. Prove that these processes have continuous versions.
2°) Prove that
$$H_t = \lim_{\varepsilon \downarrow 0} \int_0^t B_s^{-1} 1_{(|B_s| \ge \varepsilon)}\,ds,$$
or more generally that for $\alpha > -3/2$,
$$H_t^{(\alpha)} = \lim_{\varepsilon \downarrow 0} \int_0^t |B_s|^\alpha (\mathrm{sgn}\,B_s) 1_{(|B_s| \ge \varepsilon)}\,ds$$
exists a.s. Prove that these processes have continuous versions.
[Hint: Use the occupation times formula and Corollary (1.8).]
3°) Setting $\hat{B}_t = B_t - H_t$, prove that $\hat{B}_t B_t$ is a local martingale. More generally,
if $h : \mathbb{R} \times \mathbb{R}_+ \to \mathbb{R}$ is a solution of the heat equation $\frac{1}{2}\frac{\partial^2 h}{\partial x^2} + \frac{\partial h}{\partial t} = 0$, then
$h(B_t, t)\hat{B}_t$ is a local martingale.
4°) If $j = \sum_{k=1}^n \lambda_k 1_{]t_k, t_{k+1}]}$ and $\mathscr{E}_t^j = \exp\left\{i\int_0^t j(s)\,dB_s + \frac{1}{2}\int_0^t j^2(s)\,ds\right\}$,
the process $\hat{B}_t \mathscr{E}_t^j$ is a local martingale. Derive therefrom that

This discussion is closely related to that of Exercise (3.29), Chap. IV.
*
(1.30) Exercise. 1°) Using the same notation as in the previous exercise, prove
that for any $C^1$-function $\phi$ we have

2°) For every $a \in \mathbb{R}$, define a process $A^a$ by the equality

Show that there is a jointly measurable version, which is again denoted by $A$. Then
prove that for any positive Borel function $f$,
$$\int_0^t f(H_s)\,ds = \int_{-\infty}^{\infty} f(a) A_t^a\,da.$$
3°) Show that there is a version of $A^a$ which is increasing in $t$ and such that
$A_t^a - B_t 1_{(H_t > a)}$ is jointly continuous.
(1.31) Exercise. If $X$ is a continuous semimartingale the martingale part of which
is a BM, prove that for any locally integrable function $f$ on $\mathbb{R}_+$,
$$\int_0^t ds\, f\!\left(\int_0^t 1_{(X_u \le X_s)}\,du\right) = \int_0^t f(v)\,dv \quad \text{a.s.}$$
More generally, prove that, for every locally integrable function $h$ on $\mathbb{R}_+$,

where $\alpha_t(z) = \inf\{y : \int_0^t du\, 1_{(X_u \le y)} > z\}$.
*# (1.32) Exercise (Hölder properties for local times of semimartingales). Let
$X = M + V$ be a continuous semimartingale and assume that there is a predictable
process $v$ such that
i) $V_t = \int_0^t v_s\,d\langle M, M\rangle_s$;
ii) there exists a $p > 1$ for which
$$\int_0^t |v_s|^p\,d\langle M, M\rangle_s < \infty \quad \text{a.s.}$$
for every $t > 0$.
Call $L_t^a$, $a \in \mathbb{R}$, $t \ge 0$, the family of the local times of $X$.
1°) Prove that after a suitable localization of $X$, there exists, for every $N > 0$,
$k \in \mathbb{N}$ and $T > 0$, a constant $C = C(N, k, T)$ such that
$$E\left[\sup_{t \le T} |L_t^x - L_t^y|^k\right] \le C\left(|x - y|^{k/2} + |x - y|^{k(p-1)/p}\right)$$
for every pair $(x, y)$ for which $|x| \le N$ and $|y| \le N$. As a result, there is a
bicontinuous modification of the family $L_t^a$, $a \in \mathbb{R}$, $t \ge 0$.
2°) Prove that, consequently, there exists a r.v. $D_T$ such that
$$\sup_{t \le T} |L_t^x - L_t^y| \le D_T |x - y|^\alpha$$
for every $\alpha < 1/2$ if $p \ge 2$ and every $\alpha < (p-1)/p$ if $p < 2$.
Apply these results to $X_t = B_t + ct^m$ where $B$ is a BM(0), $c \in \mathbb{R}$ and $m < 1$.
The following exercise shows that the conditions stated above are sufficient
but not necessary to obtain Hölder properties for local times of semimartingales.
#
(1.33) Exercise. Let B be a BM(0) and l its local time at 0. Let further X = B + cl where c ∈ ℝ and call L^a the family of local times of X. Prove that for every T > 0 and k > 0, there is a constant C_{T,k} such that
and that consequently the Hölder property of the preceding exercise holds for every α < 1/2. Prove the same result for X = |B| + cl.
(1.34) Exercise. Let B be the standard BM¹ and a and b two different and strictly positive numbers. Prove that the local time of the semimartingale X = aB⁺ − bB⁻ is discontinuous at 0 and compute its jump.
*
(1.35) Exercise. For the standard BM B prove the following extensions of Theorem (1.12):

P-lim_{n→∞} ∑_i (∫_0^s H_u dL_u^{a_{i+1}} − ∫_0^s H_u dL_u^{a_i})² = 4 ∫_0^s H_u² 1_{(a<B_u<b)} du

whenever either one of the following conditions applies:
i) H is a (ℱ_t^B)-cont. semimart.;
ii) H = f(B) where f is of bounded variation on every compact interval and S is such that x → L_S^x is a semimartingale (see Sect. 2, Chap. XI).
Examples of processes H for which the above result fails to be true are found in Exercise (2.13) Chapter XI.
§2. The Local Time of Brownian Motion
In this section, we try to go deeper into the properties of local times for standard linear BM. By the preceding section, it has a bicontinuous family L_t^a of local times. We write L_t instead of L_t^0 as we focus on the local time at zero.
We first introduce some notation, which is valid for a general semimartingale X and will be used also in Sect. 4 and in Chap. XII. We denote by Z(ω) the random set {s : X_s(ω) = 0}. The set Z^c is open and therefore is a countable union of open intervals.
For t ≥ 0, we define

d_t = inf{s > t : X_s = 0}

with the convention inf(∅) = +∞. These are stopping times which are easily seen to be predictable and for t ≥ 0, we have L_t = L_{d_t} because, as was proved in the last section, dL_s is carried by Z. In the case of BM we will prove shortly a more precise result (see also Exercise (1.26)).
Our first goal is to find the law of the process L. We shall use Itô-Tanaka's formula

|B_t| = ∫_0^t sgn(B_s) dB_s + L_t.

We saw in Exercise (1.14) of Chap. III that |B_t| is a Markov process (see also Exercise (2.18) in this section). It is also clearly a semimartingale and its local time at 0 is equal to 2L (see Exercises (1.17) and (2.14)). A process having the law of |B| is called a reflecting Brownian motion (we shall add "at zero" if there is a risk of ambiguity).
To analyse L, we need the following lemma which is useful in other contexts as well (see Exercise (2.14) Chap. IX).
(2.1) Lemma (Skorokhod). Let y be a real-valued continuous function on [0, ∞[ such that y(0) ≥ 0. There exists a unique pair (z, a) of functions on [0, ∞[ such that
i) z = y + a,
ii) z is positive,
iii) a is increasing, continuous, vanishing at zero and the corresponding measure da_s is carried by {s : z(s) = 0}.
The function a is moreover given by

a(t) = sup_{s≤t} (−y(s) ∨ 0).
240
Chapter VI. Local Times
Proof. We first remark that the pair (a, z) defined by

a(t) = sup_{s≤t} (−y(s) ∨ 0),  z = y + a

satisfies properties i) through iii).
To prove the uniqueness of the pair (a, z), we remark that if (ā, z̄) is another pair which satisfies i) through iii), then z − z̄ = a − ā is a function of bounded variation, and we can use the integration by parts formula to obtain

0 ≤ (z − z̄)²(t) = 2 ∫_0^t (z(s) − z̄(s)) d(a(s) − ā(s)).

Thanks to iii) this is further equal to

−2 ∫_0^t z̄(s) da(s) − 2 ∫_0^t z(s) dā(s)

which by ii) and iii) is ≤ 0. □
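Skorokhod's construction is explicit enough to be checked mechanically on a discretized path. The following Python sketch (our illustration, not part of the book) applies the map y ↦ (z, a) to a sampled random-walk path and verifies properties i)–iii) numerically; path length and step size are arbitrary choices.

```python
import random

def skorokhod_map(y):
    """Skorokhod's construction: given a sampled path y with y[0] >= 0,
    return (z, a) with z = y + a, z >= 0, a increasing, and a increasing
    only at times where z = 0 (the discrete analogue of iii))."""
    z, a, running = [], [], 0.0
    for ys in y:
        running = max(running, -ys, 0.0)  # a(t) = sup_{s<=t} (-y(s) v 0)
        a.append(running)
        z.append(ys + running)
    return z, a

random.seed(0)
# random-walk approximation of a Brownian path started at 0
y = [0.0]
for _ in range(10_000):
    y.append(y[-1] + 0.01 * random.gauss(0.0, 1.0))

z, a = skorokhod_map(y)
assert min(z) >= 0.0                                  # ii) z is positive
assert all(u <= v for u, v in zip(a, a[1:]))          # iii) a is increasing
# da is carried by {z = 0}: whenever a increases, z vanishes there
assert all(z[i] == 0.0 for i in range(1, len(a)) if a[i] > a[i - 1])
```

The uniqueness half of the proof has no discrete counterpart here; the check only exercises the explicit formula for a.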
(2.2) Corollary. The process β_t = ∫_0^t sgn(B_s) dB_s is a standard BM and ℱ_t^β = ℱ_t^{|B|}. Moreover,

L_t = sup_{s≤t} (−β_s).

Proof. That β is a BM is a straightforward consequence of P. Lévy's characterization theorem. The second assertion follows at once from the previous lemma and Tanaka's formula

|B_t| = β_t + L_t.

It is now obvious that ℱ_t^{|B|} ⊂ ℱ_t^β and, since it follows from Corollary (1.9) that ℱ_t^L ⊂ ℱ_t^{|B|}, we have ℱ_t^β ⊂ ℱ_t^{|B|}; the proof is complete.
Remark. Another proof of the equality ℱ_t^{|B|} = ℱ_t^β will be given in Exercise (3.16) Chap. IX.
This corollary entails that the processes L_t and S_t have the same law; in particular, by Proposition (3.8) Chap. III, L_t is the inverse of a stable subordinator of index 1/2. Another proof of this is given in Exercise (1.11) Chap. X. The equality of the laws of L_t and S_t can be still further improved.
(2.3) Theorem (Lévy). The two-dimensional processes (S_t − B_t, S_t) and (|B_t|, L_t) have the same law.
Proof. On one hand, we have by Tanaka's formula |B_t| = β_t + L_t; on the other hand, we may trivially write S_t − B_t = −B_t + S_t. Thus, Lemma (2.1) shows that one gets S and S − B (resp. L and |B|) from −B (resp. β) by the same deterministic procedure. Since −B and β have the same law, the proof is finished.
Remark. The filtration of S − B is actually that of B (see Exercise (2.12)); hence, the filtration of (S − B, S) is also (ℱ_t^B), whereas the filtration of (|B|, L), which is (ℱ_t^{|B|}), is strictly coarser.
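Lévy's identity lends itself to a quick Monte Carlo sanity check. The sketch below (our illustration, not from the book) simulates Gaussian random-walk approximations of B and compares the sample mean of S_t − B_t with that of |B_t|; both should be close to E|B_t| = √(2t/π). Step counts and tolerances are arbitrary choices, and the discrete maximum slightly underestimates the continuous supremum.

```python
import math
import random

random.seed(1)
n_steps, n_paths = 400, 2000
sup_minus_b, abs_b = [], []
for _ in range(n_paths):
    w, running_max = 0.0, 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, 1.0)      # approximates B_t with t = n_steps
        running_max = max(running_max, w)
    sup_minus_b.append(running_max - w)  # sample of S_t - B_t
    abs_b.append(abs(w))                 # sample of |B_t|

t = n_steps
expected = math.sqrt(2 * t / math.pi)    # E|B_t| = sqrt(2t/pi)
m1 = sum(sup_minus_b) / n_paths
m2 = sum(abs_b) / n_paths
assert abs(m1 - expected) < 0.15 * expected
assert abs(m2 - expected) < 0.15 * expected
```

Matching one moment is of course far weaker than the theorem, which asserts equality of the laws of the whole two-dimensional processes.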
The following corollary will be important in later sections.
(2.4) Corollary. P[L_∞^a = ∞] = 1 for every a.
Proof. By the recurrence properties of BM, we have obviously P[S_∞ = ∞] = 1; thus P[L_∞^0 = ∞] = 1 follows from Theorem (2.3). For every a ≠ 0, we then get P[L_∞^a = ∞] = 1 from the fact that B_{T_a+t} − a is a standard BM. □
We now turn to the result on the support of the measure dL_s which was announced earlier. We call (τ_t) the time-change associated with L, i.e.

τ_t = inf{s > 0 : L_s > t}.

By the above corollary, the stopping times τ_t are a.s. finite. We set

𝒪(ω) = ⋃_{s≥0} ]τ_{s−}(ω), τ_s(ω)[.

The sets ]τ_{s−}, τ_s[ are empty unless the local time L has a constant stretch at level s and this stretch is then precisely equal to [τ_{s−}, τ_s]. The sets ]τ_{s−}, τ_s[ are therefore pairwise disjoint and 𝒪(ω) is in fact a countable union. We will prove that this set is the complement of Z(ω). We recall from Sect. 3 in Chap. III that Z has almost surely an empty interior and no isolated points; the sets ]τ_{s−}, τ_s[ are precisely the excursion intervals defined in Chap. III.
(2.5) Proposition. The following three sets
i) Z(ω), ii) 𝒪(ω)^c, iii) the support Σ(ω) of the measure dL_t(ω),
are equal for almost every ω.
Proof. An open set has zero dL_t-measure if and only if L is constant on each of its connected components. Thus, the set 𝒪(ω) is the largest open set of zero measure and Σ(ω) = 𝒪(ω)^c.
We already know from Proposition (1.3) that Σ(ω) ⊂ Z(ω) a.s. To prove the reverse inclusion, we first observe that L_t > 0 a.s. for any t > 0, or in other words that τ_0 = 0 a.s. Furthermore, since d_t is a stopping time and B_{d_t} = 0, Tanaka's formula, for instance, implies that L_{d_t+s} − L_{d_t}, s ≥ 0, is the local time at zero of the BM B_{d_t+s}, s ≥ 0, and therefore L_{d_t+s} − L_{d_t} > 0 a.s. for every s > 0. We conclude that for any fixed t the point d_t(ω) is in Σ(ω) for a.e. ω and, consequently, for a.e. ω the point d_r(ω) is in Σ(ω) for every r ∈ ℚ_+.
Pick now s in Z(ω) and an interval I ∋ s. Since Z(ω) is a.s. closed and has empty interior, one may find r such that r < s, r ∈ ℚ_+ ∩ I and r ∉ Z(ω); plainly d_r ≤ s, thus s is the limit of points of the closed set Σ(ω), hence belongs to Σ(ω) and we are done. □
Remarks. 1°) The fact that τ_0 = 0 a.s. is worth recording and will be generalized in Chap. X.
2°) The equality between Z(ω) and Σ(ω) is also a consequence of Exercise (1.26) and Proposition (3.12) in Chap. III.
(2.6) Corollary. P(∀s ≥ 0, B_{τ_s} = B_{τ_{s−}} = 0) = 1. Conversely, for any u ∈ Z, either u = τ_s or u = τ_{s−} for some s.
Proof. The first statement is obvious. To prove the second, let u > 0 be a point in Z = Σ; then, either L_{u+ε} − L_u > 0 for every ε > 0, hence u = inf{t : L_t > L_u} and u = τ_s for s = L_u, or L is constant on some interval [u, u + ε], hence L_u − L_{u−η} > 0 for every η > 0 and u is equal to τ_{s−} for s = L_u.
Remark. We have just proved that the points of Z which are not left ends of intervals of 𝒪 are points of right-increase of L.
We close this section with P. Lévy's Arcsine law which we prove by using the above ideas in a slightly more intricate context. The following set-up will be used again in an essential way in Sect. 3 Chap. XIII.
We set

A_t^+ = ∫_0^t 1_{(B_s>0)} ds,  A_t^− = ∫_0^t 1_{(B_s<0)} ds,

and call α_t^+ and α_t^− the associated time-changes. Our aim is to find the law of the r.v. A_1^+ and since {A_1^+ > t} = {α_t^+ < 1}, this amounts to finding the law of α_t^+. But since u = A_u^+ + A_u^− entails α_t^+ = t + A^−(α_t^+), we will look for the law of A^−(α_t^+). The following considerations serve this particular goal.
The processes A^± are the increasing processes associated with the martingales

M_t^+ = ∫_0^t 1_{(B_s>0)} dB_s,  M_t^− = ∫_0^t 1_{(B_s<0)} dB_s.

Obviously ⟨M^+, M^−⟩ = 0, and so, by Knight's theorem (Theorem (1.9) Chap. V), since A_∞^+ = A_∞^− = ∞, there exist two independent Brownian motions, say δ^+ and δ^−, such that M^±(α^±) = δ^± and, by the definition of the local time L of B,

B^+(α_t^+) = δ_t^+ + (1/2) L(α_t^+),  B^−(α_t^−) = −δ_t^− + (1/2) L(α_t^−),

where B^+ and B^− are simply the positive and negative parts of B (and not other processes following the ± notational pattern).
Using Lemma (2.1) once more, we see that (B^±(α^±), (1/2)L(α^±)) has the same law as (|B|, L) and moreover that

(1/2) L(α_t^+) = sup_{s≤t} (−δ_s^+),  (1/2) L(α_t^−) = sup_{s≤t} δ_s^−.
We now prove the
(2.7) Theorem. The law of A_1^+ is the Arcsine law on [0, 1].
Proof. For a > 0 let us put

T^{δ^−}(a) ≡ T_a^− = inf{t : δ_t^− > a}.

We claim that for every t, we have A^−(τ_t) = T^{δ^−}(t/2). Indeed, since M_s^− = δ^−(A_s^−) for every s, Tanaka's formula gives B_s^− = −δ^−(A_s^−) + (1/2)L_s, whence B^−(τ_t) = 0 = −δ^−(A^−(τ_t)) + t/2. Moreover, τ_t is a.s. a point of right increase of L (see Corollary (2.6)), hence there is a sequence (s_n) decreasing to τ_t such that B_{s_n}^− = 0 and L_{s_n} > L_{τ_t} = t, and consequently δ^−(A^−(s_n)) > t/2. It follows that A^−(τ_t) ≥ T^{δ^−}(t/2). Now, if u < A^−(τ_t), then u = A_v^− for some v < τ_t. If v < τ_{t−} then

δ^−(u) = δ^−(A_v^−) = (1/2)L_v − B_v^− ≤ t/2.

If τ_{t−} ≤ v < τ_t, then A^− increases between τ_{t−} and τ_t, which implies that B is negative on this interval, hence B^− is > 0 and again δ^−(A_v^−) is less than t/2, which proves the reverse inequality A^−(τ_t) ≤ T^{δ^−}(t/2).
Moreover A^−(α_t^+) = A^−(τ_{L(α_t^+)}); indeed τ_{L_v} = v if v is a point of increase of L, and if v belongs to a level stretch of L and v = α_t^+ for some t, then B is positive on this stretch and A^−(τ_{L_v}) = A_v^−.
Combining these two remarks we may write that for each fixed t,

A^−(α_t^+) = T^{δ^−}((1/2) L(α_t^+)).

Now, (1/2)L(α_t^+) is independent of δ^− since it is equal to sup_{s≤t}(−δ_s^+). Thus we have proved that A^−(α_t^+) = T^{δ^−}(S_t) where S_t is the supremum process of a BM β independent of δ^−. By the scaling properties of the family (T_a^−) and the reflection principle seen in Sect. 3 of Chap. III we get (equalities in law)

A^−(α_t^+) (d)= S_t² · T_1^{δ^−} (d)= t β_1² (δ_1^−)^{−2} (d)= t C²

where C is a Cauchy variable with parameter 1.
By the discussion before the statement, α_t^+ (d)= t(1 + C²) and {A_1^+ > t} = {α_t^+ < 1}. Hence, the r.v. A_1^+ follows the law of (1 + C²)^{−1} which is the Arcsine law as can be checked by elementary computations (see Sect. 6 Chap. 0).
The reader ought to ponder what the Arcsine law intuitively means. Although
the BM is recurrent and comes back infinitely often to zero, the chances are that
at a given time, it will have spent much more time on one side of zero than on
the other.
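The theorem can be illustrated numerically: by a classical result of Sparre Andersen, the number of positive partial sums of a symmetric random walk with continuous step distribution already follows a discrete arcsine law, so the empirical distribution function of the rescaled time spent positive should be close to (2/π) arcsin √x. The following Python sketch (our illustration, not from the book; all parameters are arbitrary) makes this comparison at a few points.

```python
import math
import random

random.seed(2)
n_steps, n_paths = 200, 4000
fracs = []
for _ in range(n_paths):
    w, pos = 0.0, 0
    for _ in range(n_steps):
        w += random.gauss(0.0, 1.0)
        if w > 0:
            pos += 1
    fracs.append(pos / n_steps)          # time spent positive, rescaled to [0, 1]

def ecdf(x):
    """Empirical approximation of P[A_1^+ <= x]."""
    return sum(f <= x for f in fracs) / n_paths

def arcsine_cdf(x):
    return (2 / math.pi) * math.asin(math.sqrt(x))

for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert abs(ecdf(x) - arcsine_cdf(x)) < 0.06
```

The U-shape of the arcsine density is visible here: most of the mass of `fracs` sits near 0 and 1, which is exactly the point made in the paragraph above.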
#
(2.8) Exercise. 1°) For the linear BM and a ≤ x ≤ y ≤ b prove that

E_x[L^y_{T_a ∧ T_b}] = 2 u(x, y)

where u(x, y) = (x − a)(b − y)/(b − a).
2°) For a ≤ y ≤ x ≤ b set u(x, y) = u(y, x) and prove that for any positive Borel function f

E_x[∫_0^{T_a ∧ T_b} f(B_s) ds] = 2 ∫_a^b u(x, y) f(y) dy.

This gives the potential of BM killed when it first hits either a or b.
(2.9) Exercise. Let (P_t) be the semi-group of BM and put f(x) = |x|. Prove that

L_t = lim_{h→0} (1/h) ∫_0^t (P_h f(B_s) − f(B_s)) ds a.s.

[Hint: Write the occupation times formula for the function P_h f − f.]
#
(2.10) Exercise. 1°) Prove that for α > 0, the processes

exp(−α S_t)(1 + α(S_t − B_t)) and exp(−α L_t)(1 + α|B_t|)

are local martingales.
2°) Let U_x = inf{t : S_t − B_t > x} and T_x = inf{t : |B_t| > x}. Prove that both S_{U_x} and L_{T_x} follow the exponential law of parameter x^{−1}. This can also be proved by the methods of Sect. 4.
#
(2.11) Exercise (Invariance under scaling). Let 0 < c < ∞.
1°) Prove that the doubly-indexed processes (B_t, L_t^a) and (B_{ct}, L_{ct}^{a√c})/√c, a ∈ ℝ, t ≥ 0, have the same law.
2°) Prove that the processes (τ_t) and (c^{−1} τ_{√c t}) have the same law.
3°) If as usual T_a = inf{t : B_t = a}, prove that the doubly-indexed processes (L^x_{T_a}) and (c^{−1} L^{cx}_{T_{ca}}), x ∈ ℝ, a ≥ 0, have the same law.
#
(2.12) Exercise. Prove that ℱ_t^B = ℱ_t^{S−B}. In other words, if you know S − B up to time t you can recover B up to time t.
#
(2.13) Exercise. If B = (B¹, B²) is a standard planar BM and τ_t is the inverse of the local time of B¹ at zero, prove that X_t = B²_{τ_t} is a symmetric Cauchy process. Compare with Exercise (3.25) of Chap. III.
#
(2.14) Exercise. 1°) Prove that the two-dimensional process (|B_t|, (1/2)L(|B|)_t) has the same law as the processes of Theorem (2.3).
2°) Conclude that the local time of |B_t| (resp. S_t − B_t) is equal to 2L_t (resp. 2S_t). See also Exercise (1.17).
(2.15) Exercise. 1°) Fix t > 0. Prove that for the standard linear BM, there is a.s. exactly one s < t such that B_s = S_t; in other words

P[∃(r, s) : r < s ≤ t and B_r = B_s = S_t] = 0.

[Hint: 2S is the local time at 0 of the reflected BM S − B. This result can actually be proved by more elementary means as is hinted at in Exercise (3.26) of Chap. III.]
2°) Prove that G_1 = sup{s < 1 : B_s = S_1} also has the Arcsine law; thus G_1 and A_1^+ have the same law.
[Hint: Use Exercise (3.20) Chapter III.]
(2.16) Exercise. Let X be the standard BM reflected at 0 and 1 (see Exercise (1.14) of Chap. III).
1°) Prove that X_t = β_t + l_t^0 − l_t^1 where β is a standard linear BM and l^a is the symmetric local time (Exercise (1.25)) of X at a.
2°) By extending Lemma (2.1) to this situation prove that
(2.17) Exercise. Prove that the filtration (ℱ_t^X) of the martingale X_t = ∫_0^t B_s¹ dB_s² introduced in Exercise (4.13) of Chap. V is the filtration of a BM².
[Hint: Compute ⟨X, X⟩.]
(2.18) Exercise. 1°) Prove that the joint law of (|B_t|, L_t) has a density given by

(2/πt³)^{1/2} (a + b) exp(−(a + b)²/2t),  a, b ≥ 0.

Give also the law of (B_t, L_t).
2°) Prove that the 2-dimensional process (|B_t|, L_t) is a Markov process with respect to (ℱ_t) and find its transition function.
The reader will find a more general result in Exercise (1.13) of Chap. X and may also compare with Exercise (3.17) in Chap. III.
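As a sanity check on the density in 1°), one can verify numerically that it integrates to 1 and that it reproduces E[L_1] = E|B_1| = √(2/π), which is consistent with Corollary (2.2). The following sketch (our illustration, not part of the exercise) uses a crude midpoint rule.

```python
import math

# candidate joint density of (|B_1|, L_1): f(a, b) = sqrt(2/pi) (a+b) exp(-(a+b)^2/2)
def f(a, b):
    return math.sqrt(2 / math.pi) * (a + b) * math.exp(-((a + b) ** 2) / 2)

h, n = 0.02, 400                  # midpoint rule on [0, 8]^2; the tail beyond is negligible
total = mean_L = 0.0
for i in range(n):
    a = (i + 0.5) * h
    for j in range(n):
        b = (j + 0.5) * h
        w = f(a, b) * h * h
        total += w
        mean_L += b * w           # accumulates E[L_1]

assert abs(total - 1.0) < 1e-3                        # it is a probability density
assert abs(mean_L - math.sqrt(2 / math.pi)) < 1e-3    # E[L_1] = E|B_1| = sqrt(2/pi)
```

By the symmetry of the density in (a, b), the same computation with a in place of b gives E|B_1|, as it must by Lévy's theorem.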
(2.19) Exercise. 1°) Prove that almost-surely the random measure ν on ℝ defined by

ν(f) = ∫_0^1 ∫_0^1 f(B_t − B_s) ds dt

has a continuous density with respect to the Lebesgue measure. Prove that this density is Hölder continuous of order β for every β < 1.
[Hint: This last result can be proved by using the same kind of devices as in the proof of Corollary (1.8); it is also a consequence of the fact that the convolution of two Hölder continuous functions of order β is Hölder continuous of order 2β.]
2°) More generally, for every Borel subset Γ of [0, 1]², the measure

ν_Γ(f) = ∫∫_Γ f(B_t − B_s) ds dt

has a continuous density α(x, Γ) with respect to the Lebesgue measure. The map α is then a kernel on ℝ × [0, 1]².
[Hint: Use Exercise (1.15).]
3°) Prove that, if f ∈ L¹(ℝ), a.s.

lim_{n→∞} n ∫_0^1 ∫_0^1 f(n(B_t − B_s)) dt ds = (∫ f(a) da)(∫ (L_1^b)² db).
(2.20) Exercise. With the notation used in the proof of the Arcsine law, prove that B is a deterministic function of δ^+ and δ^−, namely, there is a function Φ on C(ℝ_+, ℝ)² such that B = Φ(δ^+, δ^−).
[This exercise is solved in Chap. XIII, Proposition (3.5).]
(2.21) Exercise. Prove the result of Exercise (1.26) on the support of dL_t by means of Proposition (2.5) and the DDS theorem of Sect. 1 in Chap. V.
(2.22) Exercise. Let f be a locally bounded odd function on ℝ with a constant sign on each side of 0 and such that the set {x : f(x) = 0} is of zero Lebesgue measure. Prove that the filtration generated by M_t = ∫_0^t f(B_s) dB_s is that of a Brownian motion.
[Hint: Use Exercise (3.12) Chap. V.]
(2.23) Exercise. Let B be the standard linear BM. Prove that f(B_t) is a (ℱ_t)-local submartingale if and only if f is a convex function.
[Hint: A function f is convex if and only if f + l admits no proper local maximum for any affine function l whatsoever.]
* (2.24) Exercise. Let X be a continuous semimartingale, if it exists, such that

(∗)  X_t = x + B_t + ∫_0^t α(s) dL_s

where B is a BM, α is a deterministic Borel function on ℝ_+ and L = L^0(X).
1°) Prove that if α < 1 the law of the process L is uniquely determined by α.
[Hint: Write the expression of |X| and use Lemma (2.1).]
2°) Let g_t(λ) = E[exp(iλX_t)] and prove that

g_t(λ) = exp(iλx) − (λ²/2) ∫_0^t g_s(λ) ds + iλ E[∫_0^t α(s) dL_s].

As a result the law of the r.v. X_t is also determined by α. Using the same device for conditional laws, prove that all continuous semimartingales satisfying equation (∗) have the same law. In the language of Chap. IX, there is uniqueness in law for the solution to (∗). The skew BM of Exercise (2.24) Chap. X is obtained in the special case where α is constant.
3°) Prove that L_t^{0−} = ∫_0^t (1 − 2α(s)) dL_s and that as a result there is no solution X to (∗) if α is a constant > 1/2.
(2.25) Exercise. Let X be a continuous process and L^a the family of local times of BM. Prove that for each t the process

Y_a = ∫_0^t X_u dL_u^a

is continuous.
*
(2.26) Exercise (Hausdorff dimension of the set of zeros of BM). 1°) For every n ≥ 1, define inductively two sequences of stopping times by

U_1^n = 0,  V_k^n = inf{t ≥ U_k^n : |B_t| = 2^{−n}},  U_{k+1}^n = inf{t ≥ V_k^n : |B_t| = 0}.

For an integer K, prove that if α > 1/2, then almost-surely

sup{(V_k^n − U_k^n)^α, 1 ≤ k ≤ K2^n} ≤ 2^{−n}

for n sufficiently large.
[Hint: The r.v.'s (V_k^n − U_k^n) are i.i.d. with the law of 2^{−2n} T_1.]
2°) Using the approximation result of Exercise (1.19), prove that, a.s. on the set {L_1 < K}, for n sufficiently large

∑_1^{N_n} (V_k^n − U_k^n)^α ≤ K

where N_n = sup{k ≥ 1 : U_k^n ≤ 1}. Conclude that the Hausdorff dimension of the set Z = {t ≤ 1 : B_t = 0} is a.s. ≤ 1/2.
The reverse inequality is the subject of the following exercise.
*
(2.27) Exercise. 1°) Let ν be a measure on ℝ, C and α two constants > 0, such that in the notation of Appendix 4, ν(I) ≤ C|I|^α for every interval I. If A is a Borel set of strictly positive ν-measure, prove that the Hausdorff dimension of A is ≥ α.
2°) By applying 1°) to the measure dL_t, prove that the Hausdorff dimension of Z is ≥ 1/2 a.s. Together with the preceding exercise, this shows that the Hausdorff dimension of Z is almost-surely equal to 1/2.
**
(2.28) Exercise (Local times of BM as stochastic integrals).
1°) For f ∈ C_K(ℝ), prove that for the standard linear BM,

∫_0^t f(B_s) ds = ∫_0^t E[f(B_s)] ds + lim_{ε↓0} ∫_0^t ds ∫_0^{s−ε} (P_{s−v} f)′(B_v) dB_v.

[Hint: Use the representation of Exercise (3.13) Chap. V.]
2°) If we put q(x) = √(2/π) ∫_x^∞ exp(−u²/2) du, prove that

∫_0^t f(B_s) ds = ∫_0^t E[f(B_s)] ds + lim_{ε↓0} (1/√(2π)) ∫_{−∞}^{+∞} f(y) dy ∫_0^{t−ε} sgn(B_v − y) [q(|B_v − y|/√(t − v)) − q(|B_v − y|/√ε)] dB_v.

[Hint: Use Fubini's theorem for stochastic integrals extended to suitably integrable processes.]
3°) Conclude that ∫_0^t f(B_s) ds = ∫_{−∞}^{+∞} f(y) L_t^y dy where

L_t^y = ∫_0^t g_s(y) ds − (1/√(2π)) ∫_0^t sgn(B_v − y) q(|B_v − y|/√(t − v)) dB_v.
**
(2.29) Exercise (Pseudo-Brownian Bridge). 1°) Let h be a bounded Borel function on ℝ_+ and set

M_t = ℰ(∫_0^· h(s) dB_s)_t.

If φ is a continuous function with compact support, prove that the process

Y_a = ∫_0^∞ φ(t) M_t dL_t^a,  a ∈ ℝ,

is continuous (see Exercise (2.25)).
2°) Let Q_x^u be the law of the Brownian Bridge from 0 to x over the interval [0, u] and write simply Q^u for Q_0^u (see Exercise (3.16) Chap. I for the properties of the family (Q_x^u)). If Z is a positive predictable process prove that
where Q^u[Z_u] = ∫ Z_u dQ^u.
[Hint: Compute in two different ways E[∫_0^∞ f_n(B_t) φ(t) M_t dt] where (f_n) is an approximation of the Dirac mass δ_0, then let n tend to +∞.]
3°) If F is a positive Borel function on C([0, 1], ℝ) and g a positive Borel function on ℝ_+, prove that
where β^u is the Brownian Bridge from 0 to 0 over the time interval [0, u].
4°) Let X be the process defined on [0, 1] by

X_u = (1/√τ_1) B_{u τ_1},  0 ≤ u ≤ 1,

which may be called the Pseudo-Brownian Bridge. Prove that
where Λ is the local time of β¹ at level 0 and at time 1.
[Hint: Use the scaling invariance properties to transform the equality of 3°). Then observe that 1/√τ_1 is the local time of X at level 0 and time 1 and that

1/√τ_1 = lim_{ε→0} (1/2ε) ∫_0^1 1_{[−ε,ε]}(X_s) ds.]

5°) Prove that the processes (B_t; t ≤ τ_1) and (B_{τ_1−t}; t ≤ τ_1) have the same law.
6°) Prove that Λ has the law of √(2e) where e is exponential with parameter 1. This is taken up in Exercise (3.8) of Chap. XII. This question can be solved independently of 5°).
(2.30) Exercise. In the notation of Exercise (3.23) Chap. III, prove that the process {g_1^{−1/2} B_{u g_1}, 0 ≤ u ≤ 1} is a Brownian Bridge which is independent of σ(g_1, B_{g_1+u}, u ≥ 0).
[Hint: Use time-inversion and Exercise (3.10) Chap. I.]
#
(2.31) Exercise. In the situation of Proposition (3.2) Chap. V, prove that there are infinitely many predictable processes H such that ∫_0^∞ H_s² ds < ∞ and

F = E[F] + ∫_0^∞ H_s dB_s.

[Hint: In the notation of this section, think about 1_{[0, τ_1]}.]
(2.32) Exercise. Let X be a cont. loc. mart. vanishing at 0 and set

S_t = sup_{s≤t} X_s and X̃_t = ∫_0^t sgn(X_s) dX_s.

1°) Prove that the following two properties are equivalent:
i) the processes S − X and |X| have the same law,
ii) the processes X and −X̃ have the same law.
2°) Let X and Y be two cont. loc. mart. vanishing at zero and call β and γ their DDS Brownian motions. Prove that X and Y have the same law iff the 2-dimensional processes (β, ⟨X, X⟩) and (γ, ⟨Y, Y⟩) have the same law. In particular, if Y = −X̃, then X and Y have the same law iff conditionally w.r.t. ℱ^{⟨X,X⟩}, the processes β and γ have the same law.
3°) If ℱ^{⟨X,X⟩} = ℱ^{|β|}, then the laws of X and X̃ are not equal. Let φ be a continuous, one-to-one and onto function from ℝ_+ to [a, b[ with 0 < a < b < ∞, β a BM¹ and A the time-change associated with ∫_0^t φ(|β_s|) ds; then if X_t = β_{A_t}, the laws of X and X̃ are not equal. Notice in addition that there is a BM denoted by B such that X_t = ∫_0^t φ(|X_s|)^{−1/2} dB_s.
4°) Likewise prove that if ℱ^{⟨X,X⟩} = ℱ^β, then the laws of X and X̃ are not equal. Change suitably the second part of 3°) to obtain an example of this situation.
**
(2.33) Exercise (Complements to the Arcsine law). 1°) In the notation of Theorem (2.7), prove that for every positive Borel function F on C([0, 1], ℝ),

E[F(B_u, u ≤ 1) 1_{(B_1>0)}] = E[(1/α_1^+) F(B_{u α_1^+}/√(α_1^+), u ≤ 1)].

[Hint: It may be helpful to consider the quantity

E[∫_0^∞ F(t^{−1/2} B_{st}, s ≤ 1) φ(t) dA_t^+]

where φ is a positive Borel function on ℝ_+.]
2°) Prove that for every positive Borel function f on [0, 1] × ℝ_+, one has

E[f(A_1^+, L_1) 1_{(B_1>0)}] = E[(A_{τ_1}^+/τ_1) f(A_{τ_1}^+/τ_1, 1/√τ_1)].

3°) Prove that the law of the triple T^{−1}(A_T^+, A_T^−, L_T²) is the same for all the following random times:
i) T = t (a constant time), ii) T = α_t^+, iii) T = τ_u.
(2.34) Exercise. 1°) Let B be the standard BM¹ and L its local time at 0; for h > 0, prove that there is a loc. mart. M such that

log(1 + h|B_t|) = M_t − (1/2)⟨M, M⟩_t + h L_t.

2°) Let γ be the DDS BM of −M; set β_t = γ_t + (1/2)t and σ_t = sup_{s≤t} β_s. Using Skorokhod's lemma (2.1), prove that for every t ≥ 0,

log(1 + h|B_t|) = σ_{V_t} − β_{V_t} a.s.

where V_t = ∫_0^t (h^{−1} + |B_s|)^{−2} ds.
3°) Define as usual T_a = inf{t : |B_t| = a} and τ_s = inf{u : L_u > s} and prove that

V_{T_a} = inf{u : σ_u − β_u = log(1 + ha)} and V_{τ_s} = inf{u : β_u = hs}.

The laws of V_{T_a} and V_{τ_s} are obtained at the end of Exercise (3.18) Chapter VIII.
(2.35) Exercise (Local times of the Brownian bridge). 1°) Let x > 0. In the notation of this section, by considering the BM B_{t+T_x} − x, and using Theorem (2.3), prove that

P[L_1^x ≥ y, B_1 > b] = (1/2) P[S_1 > x + y, S_1 − B_1 > |b − x|].

Consequently, prove that, conditionally on B_1 = b, the law of L_1^x, for 0 ≤ x ≤ b, does not depend on x.
2°) Using the result in Exercise (2.18) and conditioning with respect to {B_1 = 0}, prove that if (λ^x) is the family of local times of the standard BB, then

λ^x (d)= (R − 2x)⁺,

where R has the density 1_{(r>0)} r exp(−r²/2).
§3. The Three-Dimensional Bessel Process
In Chap. XI we will make a systematic study of the one-parameter family of the so-called Bessel processes using some notions which have yet to be introduced. In the present section, we will make a first study of the 3-dimensional Bessel process, which crops up quite often in the description of linear BM, using only the tools we have introduced so far.
We first take up the study of the euclidean norm of BM^δ which was begun in Chap. V for δ = 2 and in the preceding section for δ = 1. Let us suppose that δ is an integer ≥ 1 and let ρ_t be the modulus of BM^δ. As usual, we denote by P_x the probability measure of the BM^δ started at x and (ℱ_t) is the complete Brownian filtration introduced in Sect. 2 Chap. III. For Bessel functions see Appendix 7.
(3.1) Proposition. For every δ ≥ 1, the process ρ_t, t ≥ 0, is a homogeneous (ℱ_t)-Markov process with respect to each P_x, x ∈ ℝ^δ. For δ ≥ 2, its semi-group P_t^δ is given on [0, ∞[ by the densities

p_t^δ(a, b) = (a/t)(b/a)^{δ/2} I_{δ/2−1}(ab/t) exp(−(a² + b²)/2t) for a, b > 0,

where I_ν is the modified Bessel function of index ν, and

p_t^δ(0, b) = (2^{δ/2−1} Γ(δ/2))^{−1} t^{−δ/2} b^{δ−1} exp(−b²/2t).

Proof. Let f be a positive Borel function on [0, ∞[. For s < t,

E_x[f(ρ_t) | ℱ_s] = P_{t−s} f̃(B_s)

where f̃(x) = f(|x|) and P_t is the semi-group of BM^δ.
For δ ≥ 2, we have

P_t f̃(x) = (2πt)^{−δ/2} ∫ exp(−|x − y|²/2t) f(|y|) dy

and using polar coordinates

P_t f̃(x) = (2πt)^{−δ/2} ∫ exp(−(|x|² + ρ²)/2t) exp(−|x|ρ cos θ/t) f(ρ) ρ^{δ−1} dρ σ(dη)

where η is the generic element of the unit sphere and θ the angle between x and η. It turns out that P_t f̃(x) depends only on |x|, which proves the first part of the result (the case δ = 1 was studied in Exercise (1.14) of Chap. III). Moreover, setting P_t^δ f(a) = P_t f̃(x) where x is any point such that |x| = a, we see that P_t^δ has a density given by

(2πt)^{−δ/2} b^{δ−1} exp(−(a² + b²)/2t) ∫_{S^{δ−1}} exp(−ab cos θ/t) σ(dη)

which entails the desired result. □
(3.2) Definition. A Markov process with semi-group P_t^δ is called a δ-dimensional Bessel process.
Bessel processes are obviously Feller processes. We will write for short BES^δ and BES^δ(x) will designate a δ-dimensional Bessel process started at x ≥ 0. The above result says that the modulus of a BM^δ is a realization of BES^δ. From the results obtained for BM^δ, we thus deduce that a BES^δ never reaches 0 after time 0 if δ ≥ 2. Moreover for δ ≥ 3, it is a transient process, that is, it converges a.s. to infinity.
From now on, we will focus on the 3-dimensional process BES³, which we will designate by ρ_t. The semi-group P_t³ has a particularly simple form which can be seen from the expression of I_{1/2}. We call Q_t the semi-group of the linear BM on ]0, ∞[ killed when it hits zero. It was seen in Exercise (1.15) of Chap. III that Q_t is given by the density

q_t(x, y) = (2πt)^{−1/2} (exp(−(x − y)²/2t) − exp(−(x + y)²/2t)).

If we set h(x) = x on ]0, ∞[, it is readily checked that Q_t h = h. The semi-group P_t³ is what will be termed in Chap. VIII the h-transform of Q_t, namely

P_t³ f(x) = h(x)^{−1} Q_t(fh)(x),  x > 0;

in other words, P_t³ is given by the density x^{−1} q_t(x, y) y. For x = 0, we have

P_t³ f(0) = ∫_0^∞ (2/πt³)^{1/2} exp(−y²/2t) y² f(y) dy.
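The density appearing in this formula for x = 0 is the Maxwell density, i.e. the law of the norm of a centered 3-dimensional Gaussian vector with covariance tI, as it must be since BES³(0) is the modulus of BM³. A quick numerical check (our illustration, not from the book) confirms that it integrates to 1 and has mean 2√(2t/π) for t = 1.

```python
import math

t = 1.0
# transition density of BES^3 from 0: p(y) = sqrt(2/(pi t^3)) y^2 exp(-y^2/2t)
def p(y):
    return math.sqrt(2 / (math.pi * t ** 3)) * y * y * math.exp(-y * y / (2 * t))

h = 0.001                                    # midpoint rule on [0, 10]
ys = [(k + 0.5) * h for k in range(10_000)]
total = sum(p(y) * h for y in ys)
mean = sum(y * p(y) * h for y in ys)

assert abs(total - 1.0) < 1e-4                            # probability density on [0, oo[
assert abs(mean - 2 * math.sqrt(2 * t / math.pi)) < 1e-4  # Maxwell mean 2*sqrt(2t/pi)
```

The same mean reappears below when Pitman's theorem is tested against the law of |BM³_t|.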
We will also need the following
(3.3) Proposition. If (ρ_t) is a BES³(x) with x ≥ 0, there is a Brownian motion β such that

ρ_t = x + β_t + ∫_0^t ρ_s^{−1} ds.

Moreover, ρ_t^{−1} is a local martingale (in the case x = 0, the time-set is restricted to ]0, ∞[).
Proof. We know that ρ_t may be realized as the modulus of BM³; using the fact that β_t = ∑_{i=1}^3 ∫_0^t ρ_s^{−1} B_s^i dB_s^i is a BM¹ (P. Lévy's characterization theorem), the result follows easily from Itô's formula and the fact that ρ_t never visits 0 after time 0.
Remark. The first result says that ρ is a solution to the stochastic differential equation dρ_s = dβ_s + ρ_s^{−1} ds (see Chap. IX).
Another proof of the fact that ρ_t^{−1} is a local martingale is hinted at in Exercise (2.13) of Chap. V where it is used to give an important counter-example. It will now be put to use to prove the
(3.4) Corollary. Let P_x³ be the probability measure governing BES³(x) with x > 0 and T_a be the hitting time of a > 0. For 0 < a < x < b,

P_x³[T_a < T_b] = (x^{−1} − b^{−1})/(a^{−1} − b^{−1})

and P_x³[T_a < ∞] = a/x. Moreover, J_0 = inf_{s≥0} ρ_s is uniformly distributed on [0, x].
Proof. The local martingale ρ_t^{−1} stopped at T_a is bounded, hence is a martingale to which we may apply the optional stopping theorem. The proof then follows exactly the same pattern as in Proposition (3.8) of Chap. II. We then let b go to infinity to get P_x³[T_a < ∞]. Finally

P_x³[J_0 ≤ a] = P_x³[T_a < ∞] = a/x

which ends the proof.
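The uniform law of J_0 can be checked by simulation, realizing BES³(x) as the modulus of a three-dimensional walk with small Gaussian steps; by transience, the running minimum over a long horizon approximates J_0. The sketch below (our illustration, not from the book; the horizon and tolerances are rough choices, and the finite horizon biases the minimum slightly upward) tests the mean x/2 and the median x/2 of the uniform law on [0, x].

```python
import math
import random

random.seed(3)
x, dt, n_steps, n_paths = 3.0, 0.01, 10_000, 150
sd = math.sqrt(dt)
mins = []
for _ in range(n_paths):
    px, py, pz = x, 0.0, 0.0        # BES^3(x) realized as |BM^3| started at (x, 0, 0)
    m2 = x * x                      # running minimum of the squared radius
    for _ in range(n_steps):
        px += random.gauss(0.0, sd)
        py += random.gauss(0.0, sd)
        pz += random.gauss(0.0, sd)
        r2 = px * px + py * py + pz * pz
        if r2 < m2:
            m2 = r2
    mins.append(math.sqrt(m2))      # approximates J_0 (the walk ends far from 0)

mean_min = sum(mins) / n_paths
frac_below = sum(m < x / 2 for m in mins) / n_paths
assert abs(mean_min - x / 2) < 0.45     # uniform on [0, x]: mean x/2
assert abs(frac_below - 0.5) < 0.2      # uniform on [0, x]: P[J_0 <= x/2] = 1/2
```

The residual bias could be made as small as desired by lengthening the horizon, since P_z³[T_a < ∞] = a/z tends to 0 as the walk escapes to infinity.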
We now turn to our first important result which complements Theorem (2.3), the notation of which we keep below, namely B is a BM¹(0) and S_t = sup_{s≤t} B_s.
(3.5) Theorem (Pitman). The process ρ_t = 2S_t − B_t is a BES³(0). More precisely, if ρ_t is a BES³(0) and J_t = inf_{s≥t} ρ_s, then the processes (2S_t − B_t, S_t) and (ρ_t, J_t) have the same law.
Proof. Let ρ be a BES³(0). If we put X_t = 2J_t − ρ_t, we shall prove that for each t, J_t = sup_{s≤t} X_s. Indeed, if J_t = ρ_t, then X_t = J_t and for s ≤ t, since J_s ≤ ρ_s, we get X_s = 2J_s − ρ_s ≤ J_s ≤ J_t = X_t which proves our claim in this case; if ρ_t ≠ J_t, then ρ_t > J_t and X_t < J_t = J_{g_t}, where g_t = sup{s < t : J_s = ρ_s}. Since by the first part J_{g_t} = sup_{s≤g_t} X_s, we get the result in all cases.
We have thus proved that (ρ_t, J_t) = (2J_t − X_t, J_t) with J_t = sup_{s≤t} X_s and consequently, it remains to prove that X is a BM. To this end it is enough by P. Lévy's characterization theorem (Sect. 3 Chap. IV) to prove that X is a martingale, since plainly ⟨X, X⟩_t = ⟨ρ, ρ⟩_t = t.
We first notice that J_s = J_t ∧ inf_{s≤u≤t} ρ_u for s < t and therefore the knowledge of J_t and of ρ_s, s ≤ t, entails the knowledge of J_s, s ≤ t. As a result ℱ_t^X ⊂ ℱ_t^ρ ∨ σ(J_t). On the other hand σ(J_t) ⊂ ℱ_t^X since J_t = sup_{s≤t} X_s, and ℱ_t^ρ ⊂ ℱ_t^X since ρ_t = 2J_t − X_t. Consequently ℱ_t^X = ℱ_t^ρ ∨ σ(J_t). Since X_t ≤ ρ_t and −X_t ≤ ρ_t, each r.v. X_t is integrable; thus, to prove that X is a martingale, it suffices to show that for any a > 0 and s ≤ t,

(∗)  E₀[X_t 1_{(J_s>a)} | ℱ_s^ρ] = E₀[X_s 1_{(J_s>a)} | ℱ_s^ρ].
Call K the right-hand side of (∗). Corollary (3.4) implies that

P_z³[J_0 > a] = P_z³[T_a = ∞] = 1 − az^{−1} if a < z, and 0 if a ≥ z,

and

E_z³[(2J_0 − z) 1_{(J_0>a)}] = a − a²z^{−1} if a < z, and 0 if a ≥ z.

Using this and the Markov property for ρ, it is not difficult to prove that K = (a − a²ρ_s^{−1}) 1_{(ρ_s>a)}.
Now, call k the left-hand side of (∗); we have

k = E₀[X_t 1_{(J_t>a)} 1_{(inf_{s≤u≤t} ρ_u>a)} | ℱ_s^ρ]
  = E₀[E₀[X_t 1_{(J_t>a)} | ℱ_t^ρ] 1_{(inf_{s≤u≤t} ρ_u>a)} | ℱ_s^ρ],

and using the above computation of K with t instead of s, we obtain

k = E₀[(a − a²ρ_t^{−1}) 1_{(inf_{s≤u≤t} ρ_u>a)} | ℱ_s^ρ]
  = E₀[(a − a²ρ_t^{−1}) 1_{(ρ_s>a)} 1_{(s+T_a∘θ_s>t)} | ℱ_s^ρ]
  = E₀[(a − a²ρ^{−1}_{(t−s)∧T_a} ∘ θ_s) 1_{(ρ_s>a)} | ℱ_s^ρ]
  = E³_{ρ_s}[a − a²ρ^{−1}_{(t−s)∧T_a}] 1_{(ρ_s>a)} = (a − a²ρ_s^{−1}) 1_{(ρ_s>a)},

since (ρ^{−1}_{t∧T_a}) is a bounded martingale. It follows that k = K which ends the proof. □
It is interesting to observe that although 2S − B is a Markov process and is (ℱ_t^B)-adapted, it is not a Markov process with respect to (ℱ_t^B); indeed, by the early part of the preceding proof, ℱ_t^B contains some information on the future of 2S − B after time t. This is at variance with the case of S − B studied in the last section where ℱ_t^{S−B} = ℱ_t^B (see Exercise (2.12)). Here ℱ_t^{2S−B} is strictly contained in ℱ_t^B as is made plain by the following
(3.6) Corollary. The conditional distribution of St, hence also of St - B t , with
respect to .y:;2S-B is the uniform distribution on [0, 2St - B t ].
Proof. By the preceding result, it is also the conditional distribution of J_t with respect to σ(ρ_s, s ≤ t); but because of the Markov property of ρ, this is the distribution of J_0 = inf_{s≥0} ρ_s where ρ is a BES³ started at ρ_t. Our claim follows from Corollary (3.4). □
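Corollary (3.6) has an easily testable marginal consequence: the ratio U_t = S_t/(2S_t − B_t) should be uniform on [0, 1] and uncorrelated with 2S_t − B_t. A minimal Monte Carlo sketch (NumPy assumed; thresholds heuristic):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 4000, 4096
dt = 1.0 / n_steps

b = np.zeros(n_paths)
s = np.zeros(n_paths)
for _ in range(n_steps):
    b += rng.normal(0.0, np.sqrt(dt), n_paths)
    s = np.maximum(s, b)

r = 2 * s - b          # the BES^3-distributed variable 2S_1 - B_1
u = s / r              # should be uniform on [0, 1], independent of r

# one-sample Kolmogorov-Smirnov distance to the uniform law, and correlation
u_sorted = np.sort(u)
ks = np.abs(u_sorted - np.arange(1, n_paths + 1) / n_paths).max()
corr = np.corrcoef(u, r)[0, 1]
```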
It was also shown in the proof of Theorem (3.5) that

(3.7) Corollary. If ρ is a BES³(x), then B_t = 2J_t − ρ_t is a Brownian motion started at 2J_0 − x, and F_t^B = F_t^ρ ∨ σ(J_t).

Finally, Theorems (2.3) and (3.5) brought together yield the

(3.8) Corollary. If B is a BM(0) and L is its local time at 0, then |B| + L is a BES³(0).

We will now strive to say more about J_0, which is the absolute minimum of ρ.
§3. The Three-Dimensional Bessel Process
(3.9) Proposition. Let ρ be a BES³. If T is a stopping time of the bivariate process (ρ, J) such that ρ_T = J_T, then ρ_{T+t} − ρ_T, t ≥ 0, is a BES³(0) independent of {ρ_t, t < T}.

Proof. Let us first suppose that ρ starts at 0. Using the notation and result of Corollary (3.7), we see that T is also a stopping time of B. Consequently, by the strong Markov property or by Exercise (3.21) of Chap. IV, B̃_t = B_{T+t} − B_T is a BM(0) independent of F_T^B. By the hypothesis made on T,

    B_T = 2J_T − ρ_T = ρ_T.

As a result, the equality J_t = sup_{s≤t} B_s proved at the beginning of the proof of Theorem (3.5) shows that, with obvious notation,

    J_{T+t} − J_T = sup_{s≤T+t} B_s − B_T = S̃_t.

The definition of B implies that

    ρ_{T+t} − ρ_T = 2J_{T+t} − B_{T+t} − ρ_T = 2(J_{T+t} − J_T) − (B_{T+t} − B_T) = 2S̃_t − B̃_t.

Pitman's theorem then implies that ρ_{T+t} − ρ_T is a BES³(0). We know moreover that it is independent of F_T^B, hence of {ρ_t, t < T}.
If ρ starts at x > 0, by the strong Markov property, it has the same law as ρ̂_{T_x+t}, t ≥ 0, where ρ̂ is a BES³(0). It suffices to apply the above result to ρ̂; the details are left to the reader. □
The first time at which ρ attains its absolute minimum,

    T = inf{t : ρ_t = J_0},

obviously satisfies the conditions of the above result. Therefore, since a BES³ never reaches 0, it follows that T is the only time at which ρ is equal to J_0. We recall moreover that ρ_T = J_0 is uniformly distributed on [0, x], and we state

(3.10) Proposition. Let ρ be a BES³(x) with x > 0; the process (ρ_t, t < T) is equivalent to (B_t, t < T_γ), where B is a BM(x) and T_γ is the hitting time by B of an independent random point γ uniformly distributed on [0, x].

Proof. By Corollary (3.7), B_t = 2J_t − ρ_t is a BM started at 2J_0 − x. For t < T, we have J_t = J_0, hence B_t = 2J_0 − ρ_t; as a result, for t < T, we have ρ_t = 2J_0 − B_t = β_t, where β is a BM(x). Moreover, β_t = x − (B_t − B_0) is independent of B_0, hence of J_0 = ρ_T, and T = inf{t : β_t = J_0}. Since J_0 is uniformly distributed on [0, x], the proof is complete. □
The above results lead to an important decomposition of the path of BES³(0).

(3.11) Theorem (Williams). Pick c > 0 and the following four independent elements:
i) a r.v. a uniformly distributed on [0, c];
ii) a BM(c) called B;
iii) two BES³(0) called ρ and ρ̂.
Put R_c = inf{t : ρ_t = c} and T_a = inf{t : B_t = a}. Then, the process X defined by

    X_t = ρ_t                  if t < R_c,
    X_t = B_{t−R_c}            if R_c ≤ t < R_c + T_a,
    X_t = a + ρ̂_{t−R_c−T_a}   if t ≥ R_c + T_a,

is a BES³(0).
Proof. If we look at a BES³(0), say ρ, the strong Markov property (Proposition (3.5) in Chap. III) entails easily that the processes {ρ_t, t < R_c} and {ρ_{t+R_c}, t ≥ 0} are independent and the second one is a BES³(c). Thus, the theorem follows from the preceding results. □

This theorem, as any path decomposition theorem, is awkward to state but is easily described by a picture such as Figure 5, in which a is uniformly distributed on [0, c].
[Figure 5: a path starting as a BES³(0) up to time R_c where it first hits the level c, continuing as a BM(c) until it hits the level a at time R_c + T_a, and then as a + BES³(0); after the last passage time L_c at the level c it is c + BES³(0).]

Fig. 5.
According to Proposition (3.9), the last part can be further split up in two independent parts at time L_c or, for that matter, at any time L_d with d > c. Indeed, since the BES³ converges a.s. to infinity, for every c > 0, the time

    L_c = sup{t ≥ 0 : ρ_t = c},

where we agree that sup(∅) = 0, is a.s. finite. Since we have also

    L_c = inf{t : ρ_t = J_t = c},

this is also a stopping time as considered in Proposition (3.9).
In Sect. 4 of Chap. VII, the foregoing decomposition will be turned into a decomposition of the Brownian path. We close this section with another application to BM. We begin with a lemma which complements Proposition (3.3).

(3.12) Lemma. If ρ is a BES³(x), x > 0, then ρ⁻¹ is a time-changed BM(x⁻¹) restricted to [0, T₀[.

Proof. By Proposition (3.3) and the DDS theorem of Sect. 1 Chap. V, we have ρ_t⁻¹ = β(A_t), where A_t = ∫₀ᵗ ρ_s⁻⁴ ds and β is a BM(x⁻¹). Since ρ > 0 and lim_{t→∞} ρ_t⁻¹ = 0, we have A_∞ = inf{t : β_t = 0} and the result follows. □
We may now state
(3.13) Proposition. Let B be a BM(a), a > 0, and M = max{B_t, t < T₀}; then the following properties hold:
(i) the r.v. M has the density a x⁻² on [a, ∞[;
(ii) there is a.s. a unique time v < T₀ for which B_v = M.
Furthermore, conditionally on M = m,
(iii) the processes X¹ = (B_t, t < v) and X² = (B_{v+t}, 0 ≤ t < T₀ − v) are independent;
(iv) the process X¹ is a BES³(a) run until it hits m;
(v) the process m − X² is a BES³(0) run until it hits m.
Proof. Using the notation of the preceding proof, we have

    (B_t, t < T₀) (d)= (ρ_{C_t}⁻¹, t < A_∞),

where C is the inverse of A. Thus, properties i) and ii) are straightforward consequences of Propositions (3.9) and (3.10). Property iii) follows equally from Proposition (3.9) applied at the time T when the BES³(a⁻¹) process ρ reaches its absolute minimum.
To prove iv), let us observe that

    X¹ = (ρ_{C_t}⁻¹, t < A_T | ρ_T = 1/m).

By Proposition (3.10) there is a BM(a⁻¹), say γ, such that

    (ρ_t, t < T | ρ_T = 1/m) (d)= (γ_t, t < T_{1/m}).

As a result,

    X¹ (d)= (γ_{C̄_t}⁻¹, t < Ā_{T_{1/m}}),

where Ā_t = ∫₀ᵗ γ_s⁻⁴ ds and C̄ is the inverse of Ā. But by Itô's formula,

    γ_t⁻¹ = a − ∫₀ᵗ γ_s⁻² dγ_s + ∫₀ᵗ γ_s⁻³ ds,

which entails

    γ_{C̄_t}⁻¹ = a + β̃_t + ∫₀ᵗ γ_{C̄_s} ds

with β̃ another BM. As a result, the process γ_{C̄_t}⁻¹ satisfies the same stochastic differential equation as BES³ and, by the uniqueness result for solutions of SDE's which we will see in Sect. 3 of Chap. IX, this process is a BES³(a). Property (iv) now follows easily. Property (v) has a similar proof which we leave as an exercise to the reader. □
Remark. The law of M was already derived in Exercise (3.12) of Chap. II.
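Property (i), which says that P[M ≥ m] = a/m for m ≥ a, has an exact discrete counterpart: for a simple symmetric random walk started at an integer a, optional stopping gives the same ruin probability. A small simulation sketch (NumPy assumed, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_hits_m_before_0(a, m, n_walks=20000):
    """Estimate P(simple walk from a reaches m before 0); exact value is a/m."""
    pos = np.full(n_walks, a, dtype=np.int64)
    active = np.ones(n_walks, dtype=bool)
    for _ in range(200000):            # safety cap; absorption is fast here
        if not active.any():
            break
        pos[active] += rng.integers(0, 2, active.sum()) * 2 - 1
        active &= (pos > 0) & (pos < m)
    return (pos >= m).mean()

p4 = prob_hits_m_before_0(2, 4)        # exact value 2/4 = 0.5
p8 = prob_hits_m_before_0(2, 8)        # exact value 2/8 = 0.25
```

In the continuous limit this is P[M ≥ m] = a/m, i.e. the density a m⁻² of (i).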
(3.14) Exercise. Extend Corollary (3.6) to the stopping times of the filtration (F_t^{2S−B}).
(3.15) Exercise. Let X be a BES³(0) and put L = sup{t : X_t = 1}. Prove that the law of L is the same as the law of T₁ = inf{t : B_t = 1}, where B is a BM(0).
[Hint: Use the fact that 2S_t − B_t is a BES³(0).]
This will also follow directly from a time-reversal result in Sect. 4 Chap. VII.
**
(3.16) Exercise (A Markov proof of Theorem (3.11)). Pick b > 0 and the following three independent elements:
(i) a r.v. γ uniformly distributed on [0, b];
(ii) a BES³(0)-process ρ;
(iii) a BM(b)-process B;
and define T_γ = inf{t : B_t = γ},

    X_t = B_t on {t < T_γ},    X_t = ρ(t − T_γ) + γ on {t ≥ T_γ}.

1°) Prove that for t > 0, the conditional probability distribution P[X_t ∈ · ; T_γ < t | γ] has a density equal to

    1_{(y > γ)} (y − γ) (∂/∂γ) q_t(b − γ, y − γ)

with respect to the Lebesgue measure dy.
2°) Prove that consequently, if t₀ = 0 < t₁ < t₂ < … < t_n, the conditional probability distribution of the restriction of (X_{t₁}, …, X_{t_n}) to {t_{i−1} < T_γ < t_i} with respect to γ has a density equal to

    1_{(γ < inf(x_i))} ( −(∂/∂γ) q_{t_i−t_{i−1}}(x_{i−1} − γ, x_i − γ) ) ( ∏_{k≠i} q_{t_k−t_{k−1}}(x_{k−1} − γ, x_k − γ) ) (x_n − γ)

with respect to the Lebesgue measure dx₁ … dx_n.
3°) For 0 ≤ c < b, prove that X conditioned by the event {γ > c} is equivalent to ρ̂ + c, where ρ̂ is a BES³(b − c). In particular, X is a BES³(b).
4°) Use 3°) to give another proof of Theorem (3.11).
*
(3.17) Exercise. 1°) Let B be the standard linear BM and set X_t = B_t + t. Prove that the process exp(−2X_t) is a time-changed BM(1) killed when it first hits 0.
2°) Let γ = inf_t X_t; prove that (−γ) is exponentially distributed (hence a.s. finite) with parameter 2 and that there exists a unique time ρ such that X_ρ = γ.
The law of (−γ) was already found in Exercise (3.12) of Chap. II; the point here is to derive it from the results in the present section.
(3.18) Exercise. 1°) With the usual notation, put X = |B| + L. Write the canonical decompositions of the semimartingale X in the filtrations (F_t^X) and (F_t^{|B|}) and deduce therefrom that the inclusion F_t^X ⊂ F_t^{|B|} is strict. This, by the way, gives an example of a semimartingale X = M + A such that (F_t^X) ⊂ (F_t^M) strictly.
2°) Derive also the conclusion of 1°) from the equality

    L_t = inf_{s≥t} (|B_s| + L_s).

3°) Let now c ≠ 1 and put X = |B| + cL. The following is designed to prove that conversely F_t^X = F_t^{|B|} in that case.
Let D be the set of (w, t)'s such that

    lim_n n⁻¹ Σ_{k≤n} 1_{{X(t−2^{−k}, w) − X(t−2^{−k−1}, w) ≥ 0}} = 1/2.

Prove that
a) D is F^X-predictable;
b) for fixed t > 0, P[{w : (w, t) ∈ D}] = 1;
c) for fixed t > 0, P[{w : (w, τ_t(w)) ∈ D}] = 0, where as usual τ_t is the inverse of L.
Working in F^B, prove that

    ∫₀ᵗ 1_D(s) dX_s = ∫₀ᵗ sgn(B_s) dB_s

and derive the sought-after conclusion.
(3.19) Exercise. If ρ is the BES³(0), prove that for any t,

    P[ lim sup_{h↓0} |ρ_{t+h} − ρ_t| / √(2h log₂(1/h)) = 1 ] = 1.

[Hint: Use the result in Exercise (1.21) Chap. II and the Markov property.]
§4. First Order Calculus
If M is a martingale and K a predictable process such that ∫₀ᵗ K_s² d⟨M, M⟩_s < ∞, we saw how stochastic integration allows to construct a new local martingale, namely K·M, with increasing process K²·⟨M, M⟩. We want to study the analogous problem for the local time at zero. More precisely, if L is the local time of M at 0 and if K is a predictable process such that |K|·L is finite, can we find a local martingale with local time |K|·L at 0? The answer to this question will lead to a first-order calculus (see Proposition (4.5)) as opposed to the second-order calculus of Itô's formula.
Throughout this section, we consider a fixed continuous semimartingale X with local time L at 0. We use a slight variation on the notation of Sect. 2, namely Z = {t : X_t = 0} and, for each t,

    g_t = sup{s < t : X_s = 0},    d_t = inf{s > t : X_s = 0}.
(4.1) Lemma. If K is a locally bounded predictable process, the process K_g is locally bounded and predictable.

Proof. Let T be a stopping time; since g_T ≤ T, the process K_g is bounded on [0, T] if K is, and therefore K_g is locally bounded if K is. It is enough to prove the second property for a bounded K and, by Exercise (4.20) Chap. I and the monotone class theorem, for K = 1_{[0,T]}. But in that case K_{g_t} = 1_{[0,d_T]}(t), and one easily checks that d_T is a stopping time, which completes the proof. □
The following result supplies an answer to the question raised above.
(4.2) Theorem. i) If Y is another continuous semimartingale such that Y_{d_t} = 0 for every t ≥ 0, then K_{g_t} Y_t is a continuous semimartingale and more precisely

    K_{g_t} Y_t = K₀Y₀ + ∫₀ᵗ K_{g_s} dY_s.

In particular, K_g X is a continuous semimartingale.
ii) If Y is a local martingale with local time Λ at zero, K_g Y is also a local martingale and its local time at zero is equal to ∫₀ᵗ |K_{g_s}| dΛ_s. In particular, if X is a local martingale, then K_g X is a local martingale with local time at 0 equal to

    ∫₀ᵗ |K_{g_s}| dL_s = ∫₀ᵗ |K_s| dL_s.
Proof. By the dominated convergence theorem for stochastic integrals, the class of predictable processes K for which the result is true is closed under pointwise bounded convergence. Thus, by the monotone class theorem, to prove i), it is once again enough to consider the case of K = 1_{[0,T]}. Then, because Y_{d_T} = 0 and K_{g_t} = 1_{(g_t ≤ T)} = 1_{(t ≤ d_T)},

    K_{g_t} Y_t = K₀Y₀ + ∫₀^{t∧d_T} dY_s = K₀Y₀ + ∫₀ᵗ 1_{[0,d_T]}(s) dY_s = K₀Y₀ + ∫₀ᵗ K_{g_s} dY_s,

which proves our claim.
To prove ii), we apply i) to the semimartingale |Y|, which clearly satisfies the hypothesis; this yields

    |K_{g_t}| |Y_t| = |K₀Y₀| + ∫₀ᵗ |K_{g_s}| d|Y|_s
                    = |K₀Y₀| + ∫₀ᵗ |K_{g_s}| sgn(Y_s) dY_s + ∫₀ᵗ |K_{g_s}| dΛ_s
                    = |K₀Y₀| + ∫₀ᵗ sgn(K_{g_s} Y_s) d(K_g Y)_s + ∫₀ᵗ |K_{g_s}| dΛ_s,

which is the desired result.
We may obviously apply this result to X as X_{d_t} ≡ 0. The local time of K_g X is thus equal to

    ∫₀ᵗ |K_{g_s}| dL_s.

But the measure dL_s is carried by Z, and g_s = s for any s which is the limit from the left of points of Z; the only points in Z for which g_s ≠ s are therefore the right-end points of the intervals of Z^c, the set of which is countable. Since L is continuous, the measure dL_s has no point masses, so that |K_{g_s}| = |K_s| for dL-almost all s, which completes the proof. □
If f is the difference of two convex functions, we know that f(X) is a semimartingale, and if f(0) = 0, then f(X)_{d_t} ≡ 0 and consequently

    f(X_t) K_{g_t} = f(X₀) K₀ + ∫₀ᵗ K_{g_s} df(X)_s.

In this setting, we moreover have the

(4.3) Proposition. If φ : ℝ₊ → ℝ₊ is locally bounded,

    f(X_t) φ(L_t) = f(X₀) φ(L₀) + ∫₀ᵗ φ(L_s) df(X)_s.

Proof. We apply the above formula with K_t = φ(L_t) and take into account that L_{g_t} = L_t, as was already observed. □
We now apply the above results to a special class of semimartingales.

(4.4) Definition. We call Σ the class of semimartingales X = N + V such that the measure dV_t is a.s. carried by Z = {t : X_t = 0}.

If, for instance, M is a local martingale, then the semimartingales |M| and M⁺ are in Σ with respectively V = L and V = (1/2)L. This will be used in the
(4.5) Proposition. If X ∈ Σ, the process

    X_t K_{g_t} − ∫₀ᵗ K_s dV_s = X₀K₀ + ∫₀ᵗ K_{g_s} dN_s

is a local martingale. If M is a local martingale, the processes

    |M_t| K_{g_t} − ∫₀ᵗ K_s dL_s    and    M_t⁺ K_{g_t} − (1/2) ∫₀ᵗ K_s dL_s

are local martingales. Finally, if φ is a locally bounded Borel function and Φ(x) = ∫₀ˣ φ(u) du, then

    |M_t| φ(L_t) − Φ(L_t)    and    M_t⁺ φ(L_t) − (1/2) Φ(L_t)

are local martingales.

Proof. The first two statements are straightforward consequences of Theorem (4.2) and its proof. The third one follows from the second by making K_t = φ(L_t) and using the fact that ∫₀ᵗ φ(L_s) dL_s = Φ(L_t), which is a simple consequence of time-change formulas. □
In Theorem (2.3), we saw that the processes (S − B, S) and (|B|, L) have the same law; if for a local martingale M we put S_t = sup_{s≤t} M_s, it should not be surprising that one can replace L_t by S_t and |M_t| by S_t − M_t in the above formulas, although, in this generality, the equality in law of Theorem (2.3) will not necessarily obtain (see Exercise (2.32)). In fact, the semimartingale X = S − M is in Σ, with V = S, since clearly S increases only on the set {S = M} = {X = 0}. As a result, the processes

    (S_t − M_t) K_{γ_t} − ∫₀ᵗ K_s dS_s    and    (S_t − M_t) φ(S_t) − Φ(S_t),

where γ_t = sup{s < t : S_s = M_s}, are local martingales.
Observe that Proposition (4.3) can also be deduced from the integration by parts formula if Φ is C¹, and likewise the last formula of Proposition (4.5) if Φ is C². For instance, if Φ ∈ C², then Φ(S_t) is a continuous semimartingale and

    Φ(S_t) − (S_t − M_t)φ(S_t) = Φ(S_t) − ∫₀ᵗ φ(S_s) dS_s + ∫₀ᵗ φ(S_s) dM_s − ∫₀ᵗ (S_s − M_s) φ'(S_s) dS_s.

But the last integral is zero, because dS is carried by {S − M = 0}. Moreover, ∫₀ᵗ φ(S_s) dS_s = Φ(S_t), so that

    Φ(S_t) − (S_t − M_t)φ(S_t) = ∫₀ᵗ φ(S_s) dM_s,

which is a local martingale. We can thus measure what we have gained by the methods of this section, which can also be used to give yet another proof of Itô's formula (see Notes and Comments). Finally, an elementary proof of the above result is to be found in Exercise (4.16).
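For the Brownian motion one can check numerically that Φ(S_t) − (S_t − B_t)φ(S_t) has constant (zero) expectation; with the illustrative choice φ(x) = 2x, Φ(x) = x², the claim reduces to E[2S_tB_t − S_t²] = 0. A Monte Carlo sketch (NumPy assumed; the tolerance allows for discretization of the supremum):

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, t = 8000, 4096, 1.0
dt = t / n_steps

b = np.zeros(n_paths)
s = np.zeros(n_paths)
for _ in range(n_steps):
    b += rng.normal(0.0, np.sqrt(dt), n_paths)
    s = np.maximum(s, b)

# phi(x) = 2x, Phi(x) = x^2: the local martingale Phi(S) - (S - B)phi(S) = 2SB - S^2
val = 2 * s * b - s ** 2
mean_val = val.mean()          # should be close to 0 at any fixed time
```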
Itô's and Tanaka's formulas may also be used to extend the above results to functions F(M_t, S_t, ⟨M, M⟩_t) or F(M_t, L_t, ⟨M, M⟩_t), as is seen in Exercise (4.9). These results then allow to compute explicitly a great variety of laws of random variables related to Brownian motion; here again we refer to the exercises after giving a sample of these computations.
(4.6) Proposition. If B is a standard BM and T_b = inf{t : B_t = b}, then for 0 ≤ a ≤ b and λ ≥ 0,

    E[exp(−λ L_{T_b}^a)] = (1 + 2λ(b − a))⁻¹,

that is to say, L_{T_b}^a has an exponential law with parameter (2(b − a))⁻¹.

Proof. By Proposition (4.5), ((1/2) + λ(B_t − a)⁺) exp(−λL_t^a) is a local martingale for any λ ≥ 0. Stopped at T_b, it becomes a bounded martingale and therefore

    E[((1/2) + λ(b − a)) exp(−λL_{T_b}^a)] = 1/2. □
We now state a partial converse to Proposition (4.5).
(4.7) Proposition. Let (l, x) → F(l, x) be a real-valued C^{1,2}-function on ℝ₊². If M is a cont. loc. mart. and ⟨M, M⟩_∞ = ∞ a.s., and if F(L_t, |M_t|) is a local martingale, then there exists a C²-function f such that

    F(l, x) = f(l) − x f'(l).

Proof. If F(L_t, |M_t|) is a local martingale, Itô's and Tanaka's formulae imply that

    (F'_l(L_s, |M_s|) + F'_x(L_s, |M_s|)) dL_s + (1/2) F''_{x²}(L_s, |M_s|) d⟨M, M⟩_s = 0 a.s.

The occupation times formula entails that the measures dL_s and d⟨M, M⟩_s are a.s. mutually singular; therefore

    (F'_l(L_s, 0) + F'_x(L_s, 0)) dL_s = 0,    F''_{x²}(L_s, |M_s|) d⟨M, M⟩_s = 0 a.s.

Now since ⟨M, M⟩_∞ = ∞, the local time L is the local time of a time-changed BM and consequently L_∞ = ∞ a.s. By the change of variables formula for Stieltjes integrals, the first equality then implies that

    F'_l(l, 0) + F'_x(l, 0) = 0

for Lebesgue-almost every l, hence for every l by the assumption of continuity of the derivatives. By using the time-change associated with ⟨M, M⟩, the second equality yields P[F''_{x²}(l_t, |B_t|) = 0 dt-a.e.] = 1, where B is a BM and l its local time at 0, and because of the continuity,

    P[F''_{x²}(l_t, |B_t|) = 0 for every t] = 1.

For every b > 0, it follows that F''_{x²}(l_{T_b}, b) = 0 a.s. But by Proposition (4.6) the law of l_{T_b} is absolutely continuous with respect to the Lebesgue measure; using the continuity of F''_{x²}, it follows that F''_{x²}(·, b) = 0 for every b. As a result,

    F(l, x) = g(l) x + f(l),

and since F is continuously differentiable in l for every x, it follows that f and g are C¹-functions; furthermore, the equality F'_l(l, 0) + F'_x(l, 0) = 0 yields g(l) = −f'(l), which entails that f is in C² and completes the proof. □
#
(4.8) Exercise. Let M be a uniformly integrable continuous martingale and L its local time at 0. Set G = sup{s : M_s = 0}.
1°) Prove that for any bounded predictable process K,

    E[∫₀^∞ K_s dL_s] = E[1_{(G>0)} K_G |M_∞|].

2°) Assume now first that P(M_∞ = 0) = P(G = 0) = 0 and deduce from 1°) that

    E[K_{T_u} ; T_u < ∞] du = E[K_G |M_∞| | L_∞ = u] P(L_∞ ∈ du),

where T_u = inf{t : L_t > u}. In particular,

    P(L_∞ ∈ du) = (E(|M_∞| | L_∞ = u))⁻¹ P(L_∞ > u) du;

check this formula whenever M is a stopped BM, say B^T, for some particular stopping times such as T = t, T = T_a, ...
Finally, prove that in the general setting, dP du-a.s.,
#
(4.9) Exercise. Let M be a local martingale, L its local time at 0, S its supremum.
1°) Prove that the measures d⟨M, M⟩ and dS are mutually singular.
2°) If F : (x, y, z) → F(x, y, z) is defined on ℝ × ℝ₊², and is sufficiently smooth, and if

    (1/2) F''_{x²} + F'_z ≡ 0,    F'_y(y, y, z) = 0    for every x, y and z,

then F(M_t, S_t, ⟨M, M⟩_t) is a local martingale. Find the corresponding sufficient condition for F(M_t, L_t, ⟨M, M⟩_t) to be a local martingale.
3°) Prove that (S_t − M_t)² − ⟨M, M⟩_t is a local martingale and that for any reals α, β, the process

    Z_t^{α,β} = [β cosh β(S_t − M_t) − α sinh β(S_t − M_t)] exp{αS_t − (β²/2)⟨M, M⟩_t}

is a local martingale. Prove the same results when S − M is replaced by |M| and S by L.
4°) If B is a BM and T̄_a = inf{t : |B_t| = a}, then for a > 0, β ≠ 0,

    E[exp{−α L_{T̄_a} − (β²/2) T̄_a}] = β [β cosh aβ + α sinh aβ]⁻¹.

5°) Prove an analogous formula for S and R_a = inf{t : S_t − B_t = a}.
6°) Again for the BM and with the notation of Sect. 2, prove that
* (4.10) Exercise. 1°) Let M be a martingale and L its local time at 0. For any p ≥ 1, prove that ‖L_t‖_p ≤ p‖M_t‖_p. For p = 1 and M₀ = 0, prove that ‖M_t‖₁ = ‖L_t‖₁.
[Hint: Localize so as to deal with bounded M and L; then apply Proposition (4.5).] For p > 1, prove that ‖S_t‖_p ≤ (p/(p − 1))‖M_t‖_p.
2°) Show that there is no converse inequality, that is, for p > 1, there is no universal constant C_p such that ‖M_t‖_p ≤ C_p‖L_t‖_p for every M locally bounded in L^p.
(4.11) Exercise. Let M be a square-integrable martingale vanishing at zero and set S̲_t = inf_{s≤t} M_s. Prove the following reinforcement of Doob's inequality
[Hint: Use 3°) in Exercise (4.9).] Prove that this inequality cannot be an equality unless M vanishes identically on [0, t].
(4.12) Exercise. For the BM and b > 0, set T̄_b = inf{t : |B_t| = b}. Prove that L_{T̄_b} has an exponential law with parameter 1/b. (In another guise, this is already in Exercise (2.10).)
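The exponential law in Exercise (4.12) has a transparent random-walk analogue: for a simple symmetric walk started at 0, the number V of visits to 0 before |walk| first reaches b is geometric with success probability 1/b, so E[V] = b and P(V > k) = (1 − 1/b)^k; suitably rescaled, V approximates the exponential local time L_{T̄_b}. A simulation sketch (NumPy assumed, not from the text):

```python
import numpy as np

rng = np.random.default_rng(5)
b_level, n_walks = 5, 20000

pos = np.zeros(n_walks, dtype=np.int64)
visits = np.ones(n_walks, dtype=np.int64)   # the walk starts at 0: one visit
active = np.ones(n_walks, dtype=bool)
for _ in range(100000):                     # safety cap; absorption is fast
    if not active.any():
        break
    pos[active] += rng.integers(0, 2, active.sum()) * 2 - 1
    visits[active & (pos == 0)] += 1        # count returns to the origin
    active &= np.abs(pos) < b_level         # absorb when |walk| = b

mean_visits = visits.mean()                 # should be close to b_level
tail = (visits > b_level).mean()            # geometric tail (1 - 1/b)^b
```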
(4.13) Exercise. For the BM, call μ the law of the r.v. S̲_{T₁}, where S̲_t = inf_{s≤t} B_s.
1°) Using the analogue for S̲ of the local martingale Φ(S_t) − (S_t − B_t)φ(S_t), prove that μ = (−(1 − x)μ)', where the derivative is taken in the sense of distributions.
Prove that consequently, μ has the density (1 − x)⁻² on ]−∞, 0[.
2°) Using this result, prove that the result of Exercise (4.11) cannot be extended to p ≠ 2, in the form
with C_p = (p/(p − 1))^p.
[Hint: Take M_t = B_{t∧T̄₁}, where T̄₁ = inf{t : |B_t| = 1}.]
#
(4.14) Exercise. 1°) Let M be a continuous local martingale vanishing at 0, and F a C²-function. For a > 0, prove that

    (1/2) F(L_t^a) + [(M_t − a) ∧ 0] F'(L_t^a) − (1/2) ∫₀ᵗ F'(L_s^a) dL_s^a

is a local martingale.
2°) For the Brownian motion, compute the Laplace transform of the law of L_{τ_t}^a, where τ_t = inf{s : L_s > t}.
(4.15) Exercise. Let M be a positive continuous martingale such that M_∞ = 0. Using the local martingale Φ(S_t) − (S_t − M_t)φ(S_t) for a suitably chosen φ, find a new derivation of the law of the r.v. S_∞ conditioned on F₀, which was found in Exercise (3.12) of Chap. II.
#
(4.16) Exercise. Following the hint below, give an elementary proof of the fact that, in the notation of Proposition (4.5), the process Φ(S_t) − (S_t − M_t)φ(S_t) is a local martingale.
[Hint: Assume first that M is a bounded martingale, then derive from the equality

    (S_T − a)⁺ − (S_T − M_T) 1_{(S_T ≥ a)} = (M_T − a) 1_{(T_a ≤ T)}

that

    E[(S_T − a)⁺ − (S_T − M_T) 1_{(S_T ≥ a)}]

does not depend on the stopping time T. This is the result for φ(x) = 1_{(x>a)}; extend to all functions φ by monotone class arguments.]
* (4.17) Exercise. 1°) Let X be a cont. loc. mart. and (L^a) the family of its local times. For any C^∞ function f on ℝ₊ⁿ and a₁ < a₂ < … < a_n, prove that

is a local martingale.
2°) For (γ₁, …, γ_n) ∈ ℝⁿ and a_n ≤ 1, prove that for the BM

    E[exp(−Σᵢ γᵢ L_{T₁}^{a_i})] = φ(0, γ)/φ(1, γ)

where

This generalizes Proposition (4.6) and gives the law of the process a → L_{T₁}^a, which will be further identified in the Ray–Knight theorem of Sect. 2 Chap. XI. (See Exercise (2.11) in that chapter.)
*
(4.18) Exercise. Let φ be an increasing C¹-function on ℝ₊ such that φ(0) = 0 and 0 ≤ φ(x) ≤ x for every x. Let f be a C¹-function such that

    f(x)φ'(x) = f'(x)(x − φ(x)),

and set

    F(x) = ∫₀ˣ f(y) dy ≡ f(x)(x − φ(x)).

1°) In the notation of this chapter, prove that

    ∫₀^{T_a} 1_{(B_s > φ(S_s))} ds (d)= ∫₀^{T_{F(a)}} (1/f² ∘ F⁻¹(S_s)) 1_{(B_s > 0)} ds.

[Hint: Using Theorem (2.3), prove first that the left-hand side has the same law as ∫₀^{τ_a} 1_{(f(L_s)|B_s| < F(L_s))} ds, then apply Proposition (4.5) to f(L_s)B_s.] The same equality holds with < in place of >.
2°) If l is the local time at 0 of the semimartingale B − φ(S), prove that
[Hint: Write l as the limit of ε⁻¹ ∫₀ 1_{(0 < B_s − φ(S_s) < ε)} ds, then follow the same pattern as in 1°).]
3°) Carry out the computations for φ(x) = αx, α < 1.
*
(4.19) Exercise. Let B be the standard linear BM and L its local time at 0.
1°) If K is a strictly positive, locally bounded (F_t^B)-predictable process, prove that M_t = K_{g_t} B_t, t ≥ 0, has the same filtration as B.
2°) Prove that N_t = ∫₀ᵗ K_{g_s} dL_s − K_{g_t} |B_t| has the same filtration as |B|.
3°) For p > 0, prove that the local martingale M = L^{p−1} B is pure.
[Hint: If τ_t is the time-change associated with ⟨M, M⟩, express τ_t as a function of (L^p/p), which is the local time of the DDS Brownian motion of M.]
4°) Prove that, for p > 0, the local martingale L_t^p − p|B_t|L_t^{p−1}, t ≥ 0, is also pure.
[Hint: Use Lemma (2.1).]
*
(4.20) Exercise (More on the class L log L of Exercise (1.16) Chap. II). We consider a cont. loc. mart. X such that X₀ = 0 and write X_t* = sup_{s≤t} |X_s| and L_t = L_t⁰(X).
1°) Prove that

    |X_t| log⁺ X_t* − (X_t* − 1)⁺ − ∫₀ᵗ log⁺ X_s* dL_s

is a local martingale.
2°) Prove that

    E[X_∞*] ≤ (e/(e − 1)) (1 + sup_T E[|X_T| log⁺ |X_T|]),

where T ranges through the family of all finite stopping times, and that

    sup_T E[|X_T| log⁺ X_T*] ≤ ((e + 1)/e) E[X_∞*] + E[L_∞ log⁺ L_∞].

3°) Prove likewise that L_t log⁺(L_t) − (L_t − 1)⁺ − |X_t| log⁺(L_t) is a local martingale and derive that

    E[L_∞ log⁺(L_∞)] ≤ sup_T {((e + 1)/e) E[|X_T|] + E[|X_T| log⁺ |X_T|]}.

4°) Conclude that the following two conditions are equivalent:
i) sup_T E[|X_T| log⁺ |X_T|] < ∞;
ii) E[X_∞*] < ∞ and E[L_∞ log⁺ L_∞] < ∞.
#
(4.21) Exercise (Bachelier's equation). 1°) Let M be a cont. loc. mart. vanishing at 0. Prove that there exists a unique strictly positive cont. loc. mart. M̃ such that M̃₀ = 1 and, in the notation of this section,

    S_t − M_t = (M̃_t/S̃_t) − 1,    where S̃_t = inf_{u≤t} M̃_u.

[Hint: Try S̃_t = exp(−S_t).]
2°) Let h be a function on ℝ₊ which is strictly decreasing, of class C¹ with non-vanishing derivative and such that h(0) = 1, h(∞) = 0. Given a loc. mart. M satisfying the conditions of 1°), prove that there exists a unique cont. loc. mart. M̃ such that

    −h'(h⁻¹(S̃_t)) (S_t − M_t) = M̃_t − S̃_t.

3°) Prove that the local martingale M̃ of 2°) satisfies the Bachelier equation

    dM_t = dM̃_t / h'(S_t).

Give a sufficient condition on h in order that this equation have a unique solution and then express it as a function of M.
(4.22) Exercise (Improved constants in domination). 1°) For the BM B prove, in the usual notation (S̲_t = inf_{s≤t} B_s as in Exercise (4.13)), that for k < 1,

    E[(−S̲_{T₁})^k] ≤ E[sup_{s≤T₁} (S_s − B_s)^k] ≤ C_k,

where C_k is the constant defined in Exercise (4.30) Chap. IV. Conclude that

    (πk/sin πk) ≤ Γ(1 − k) ≤ C_k.

[Hint: Use the results in Exercises (4.12) and (4.13).]
2°) Combining 1°) with Exercise (4.30) Chap. IV, prove that

    lim_{k→1} (1 − k) C_k = 1.
(4.23) Exercise. 1°) In the usual notation, set T = inf{t : |L_t B_t| = 1} and prove that the law of L_T is that of √(2e), where e is an exponential r.v. with parameter 1.
[Hint: Express the local time at 0 of the loc. mart. LB as a function of L and follow the pattern of the proof of Proposition (4.6).]
2°) Prove that consequently, L_{t∧T} B_{t∧T} is a bounded martingale such that the process H of Proposition (3.2) Chap. V is unbounded.
(4.24) Exercise (An extension of Pitman's theorem). Retain the notation of Proposition (4.5) and the remarks thereafter and suppose that φ is positive and that M₀ = 0.
1°) Prove that

    Φ(S_t) = sup_{s≤t} ∫₀ˢ φ(S_u) dM_u.

[Hint: Apply Lemma (2.1).]
2°) Prove that (S_t − M_t)φ(S_t) is a time-changed reflected BM and identify its local time at 0.
3°) Prove that Φ(S_t) + (S_t − M_t)φ(S_t) is a time-changed BES³(0).
§5. The Skorokhod Stopping Problem
Let μ be a probability measure on ℝ; we wish to find a stopping time T of the standard linear BM such that the law of B_T is μ. In this generality the problem has a trivial solution which is given in Exercise (5.7); unfortunately, this solution is uninteresting in applications as T is too large, namely E[T] = ∞. We will therefore amend the problem by demanding that E[T] be finite. This however imposes restrictions on μ. If E[T] < ∞, the martingale B^T is in 𝐇², which implies E[B_T] = 0, and furthermore (B_{t∧T})² − t ∧ T is uniformly integrable, so that E[B_T²] = E[T]. The conditions

    ∫ x² dμ(x) < ∞,    ∫ x dμ(x) = 0,
are therefore necessary. They are also sufficient and, indeed, the problem thus amended has several known solutions, one of which we now describe. We actually treat the case of cont. loc. martingales, which by the DDS theorem is equivalent to the case of BM.
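The necessity of these conditions can be seen concretely on the simplest embedding: for the exit time T of [−a, b], B_T takes the two values −a, b with P(B_T = b) = a/(a + b), and E[T] = ab = E[B_T²]. A random-walk sketch (NumPy assumed; for the walk these identities are exact by optional stopping):

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, n_walks = 3, 5, 20000

pos = np.zeros(n_walks, dtype=np.int64)
steps_taken = np.zeros(n_walks, dtype=np.int64)
active = np.ones(n_walks, dtype=bool)
for _ in range(200000):                  # safety cap; exit is fast here
    if not active.any():
        break
    pos[active] += rng.integers(0, 2, active.sum()) * 2 - 1
    steps_taken[active] += 1
    active &= (pos > -a) & (pos < b)     # absorb on exiting (-a, b)

bt = pos.astype(float)                   # exit positions, in {-a, b}
mean_bt = bt.mean()                      # ≈ 0 (the centering condition)
second_moment = (bt ** 2).mean()         # ≈ E[T] = a * b
mean_t = steps_taken.mean()
```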
In what follows, all the probability measures μ we consider will be centered, i.e., will satisfy

    ∫ |x| dμ(x) < ∞,    ∫ x dμ(x) = 0.

For such a μ, we define μ̄(x) = μ([x, ∞[) and

    ψ_μ(x) = μ̄(x)⁻¹ ∫_{[x,∞[} t dμ(t)    if x < b = inf{x : μ̄(x) = 0},
    ψ_μ(x) = x    if x ≥ b.

The functions μ̄ and ψ_μ are left-continuous; ψ_μ is increasing and converges to b as x → b. Moreover, ψ_μ(x) > x on ]−∞, b[ and b = inf{x : ψ_μ(x) = x}.
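For a purely atomic μ, the function ψ_μ is elementary to compute: ψ_μ(x) is the barycenter of the restriction of μ to [x, ∞[. A small sketch (NumPy assumed), with the illustrative choice μ = (δ_{−1} + δ₁)/2:

```python
import numpy as np

atoms = np.array([-1.0, 1.0])
weights = np.array([0.5, 0.5])      # mu = (delta_{-1} + delta_1)/2, centered

def psi(x):
    """psi_mu(x): barycenter of mu restricted to [x, infinity)."""
    mask = atoms >= x
    w = weights[mask].sum()
    if w == 0.0:
        return x                    # psi_mu(x) = x for x >= b
    return float((atoms[mask] * weights[mask]).sum() / w)
```

Here ψ_μ(x) = 0 for x ≤ −1, ψ_μ(x) = 1 for −1 < x ≤ 1 and ψ_μ(x) = x beyond, in agreement with the properties just listed (increasing, ψ_μ(x) > x below b = 1).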
(5.1) Lemma. For every x ∈ ]−∞, b[ and every a ≤ 0,

    μ̄(x)/μ̄(a) = ((ψ_μ(a) − a)/(ψ_μ(x) − x)) exp( −∫_a^x ds/(ψ_μ(s) − s) ).

Proof. By definition,

(*)    μ̄(x) ψ_μ(x) = ∫_{[x,∞[} t dμ(t),

and taking regularizations to the right,

    μ̄(x+) ψ_μ(x+) = ∫_{]x,∞[} t dμ(t).

Let us call ν the measure associated (Sect. 4 Chap. 0) with the right-continuous function of finite variation ψ_μ(x+); using the integration by parts formula of Sect. 4 Chap. 0, the above equality reads

(+)    μ̄(x) dν(x) − (ψ_μ(x+) − x) dμ(x) = 0,

this equality being valid in ]−∞, b[ where ψ_μ(x) > x.
By the reasoning in Proposition (4.7) Chap. 0, there exists only one locally bounded solution μ̄ to (+) with a given value at a given point. But for a < 0, the function

    φ(x) = (ψ_μ(x) − x)⁻¹ exp( −∫_a^x ds/(ψ_μ(s) − s) )

is a solution to (+). This is seen by writing

    φ(x+)(ψ_μ(x+) − x) = exp( −∫_a^x ds/(ψ_μ(s) − s) )

and applying again the integration by parts formula, which yields

    (ψ_μ(x+) − x) dφ(x) + φ(x) dν(x) = 0.

The proof is then easily completed. □
The preceding lemma shows that the map μ → ψ_μ is one-to-one; the next lemma is a step in the direction of showing that it is onto, namely, that every function possessing the properties listed above Lemma (5.1) is equal to ψ_μ for some μ.
(5.2) Lemma. Let ψ be a left-continuous increasing function and a < 0 < b two numbers such that ψ(x) = 0 for x ≤ a, ψ(x) > x for a < x < b, and ψ(x) = x for x ≥ b. Then, there is a unique centered probability measure μ with support in [a, b] and such that ψ_μ = ψ.

Proof. We set μ̄(x) = 1 for x ≤ a,

    μ̄(x) = (−a/(ψ(x) − x)) exp( −∫_a^x (ψ(s) − s)⁻¹ ds )    for a < x ≤ b.

This function is left-continuous on ]−∞, b[; furthermore, it is decreasing. Indeed, this is easy to see if ψ is C¹; if not, we use a C^∞ function j > 0 with support in ]0, 1] and such that ∫ j(y) dy = 1, and we set ψ_n(x) = n ∫ ψ(x + y) j(ny) dy as in the proof of Theorem (1.1). Then ψ_n ≥ ψ and lim_n ψ_n(x) = ψ(x+), whence the property of μ̄ follows. We complete the definition of μ̄ by setting μ̄(b) = lim_{x→b, x<b} μ̄(x), and μ̄(x) = 0 for x > b. There is then a unique probability measure μ such that μ̄(x) = μ([x, ∞[). By differentiating the equality

    μ̄(x+)(ψ(x+) − x) = −a exp( −∫_a^x (ψ(s) − s)⁻¹ ds ),

we get

    (ψ(x+) − x) dμ̄(x) = −μ̄(x) dψ(x),

which may be written d(ψμ̄) = x dμ̄. As a result,

    ψ(x)μ̄(x) = ∫_{[x,∞[} t dμ(t).

Taking x < a, this equality shows that μ is centered and the proof is complete. □
We will also need the following

(5.3) Lemma. If ∫ x² dμ(x) < +∞, then

    ∫ x² dμ(x) = ∫ ψ_μ(x) μ̄(x) dx.

Proof. By (*), we have

    ∫ x² dμ(x) = −∫ x d(μ̄(x+)ψ_μ(x+)),

and integrating by parts,

    ∫ x² dμ(x) = ∫ μ̄(x)ψ_μ(x) dx − [x μ̄(x+)ψ_μ(x+)]_{−∞}^{+∞}.

But

    lim_{x→+∞} x μ̄(x+)ψ_μ(x+) = lim_{x→+∞} x ∫_{]x,∞[} t dμ(t) ≤ lim_{x→+∞} ∫_{]x,∞[} t² dμ(t) = 0,

whereas, because μ is centered,

    lim_{x→−∞} (−x) μ̄(x+)ψ_μ(x+) = lim_{x→−∞} x ∫_{]−∞,x]} t dμ(t) ≤ lim_{x→−∞} ∫_{]−∞,x]} t² dμ(t) = 0. □
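Lemma (5.3) is easy to verify numerically for an atomic target; with the illustrative centered measure μ = (2/3)δ_{−1} + (1/3)δ₂, both sides equal 2. A sketch (NumPy assumed):

```python
import numpy as np

atoms = np.array([-1.0, 2.0])
weights = np.array([2 / 3, 1 / 3])      # centered: sum(atoms * weights) = 0

def psi_times_mubar(x):
    """psi_mu(x) * mubar(x) = integral of t over [x, infinity)."""
    mask = atoms >= x
    return float((atoms[mask] * weights[mask]).sum())

second_moment = float((weights * atoms ** 2).sum())    # ∫ x² dμ = 2

# midpoint rule on [-1, 2]; the integrand vanishes outside this interval
n = 3000
xs = -1.0 + (np.arange(n) + 0.5) * 3.0 / n
integral = sum(psi_times_mubar(x) for x in xs) * 3.0 / n
```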
From now on, we consider a cont. loc. mart. M such that M₀ = 0 and ⟨M, M⟩_∞ = ∞. We set T_x = inf{t : M_t = x} and S_t = sup_{s≤t} M_s. The main result of this section is the following

(5.4) Theorem. If μ is a centered probability measure, the stopping time

    T_μ = inf{t ≥ 0 : S_t ≥ ψ_μ(M_t)}

is a.s. finite, the law of M_{T_μ} is equal to μ and, moreover,
i) M^{T_μ} is a uniformly integrable martingale;
ii) E[⟨M, M⟩_{T_μ}] = ∫ x² dμ(x).
We illustrate the theorem with the next figure.

[Figure 6: the curve S = ψ_μ(M) in the (M, S)-plane; the stopping time T_μ occurs when the path of (M_t, S_t) first meets it.]

Fig. 6.
Proof of the Theorem. The time T_μ is a.s. finite because for any x we have

    T_μ ≤ inf{t > T_{ψ_μ(x)} : M_t = x},

and this is finite since ⟨M, M⟩_∞ = ∞ (Proposition (1.8) Chap. V). To complete the proof of the theorem, we use an approximation procedure which is broken into several steps.
§5. The Skorokhod Stopping Problem
273
(5.5) Proposition. Let (μ_n) be a sequence of centered probability measures such
that (ψ_{μ_n}) increases pointwise to ψ_μ; then

i) (μ_n) converges weakly to μ;
ii) lim_n ∫ |x| dμ_n(x) = ∫ |x| dμ(x);
iii) (M_{T_{μ_n}}) converges a.s. to M_{T_μ}.
Proof. The sequence (μ_n) is weakly relatively compact because for any K > 0,

μ_n([K, ∞[) ≤ K^{−1} ∫_{[0,∞[} t dμ_n(t) ≤ K^{−1} ψ_μ(0)

and, recalling that μ_n is centered,

μ_n(]−∞, −K]) ≤ K^{−1} ∫_{[0,∞[} t dμ_n(t) ≤ K^{−1} ψ_μ(0)

(note that ∫_{[0,∞[} t dμ_n(t) = ψ_{μ_n}(0) μ̄_n(0) ≤ ψ_μ(0)).
Moreover, since ψ_{μ_n} increases to ψ_μ it is easily seen that b_n = inf{t :
ψ_{μ_n}(t) = t} increases to b. From Lemma (5.1) it follows that μ̄_n(x)/μ̄_n(0) converges to μ̄(x)/μ̄(0) on ]−∞, b[ and μ̄_n = μ̄ = 0 on ]b, ∞[. Let (μ_{n_k}) be
a subsequence converging weakly to a probability measure ν. If x is a point of
continuity of ν, then

lim_k μ̄_{n_k}(x) = ν̄(x).

If we choose x < b, it follows that μ̄_{n_k}(0) has a limit l, otherwise μ̄_{n_k}(x)/μ̄_{n_k}(0)
could not have a limit. Moreover

ν̄(x)/l = μ̄(x)/μ̄(0).

By taking a sequence x_n of continuity points of ν increasing to zero, it follows
that l = μ̄(0) and ν = μ, which proves i).
Using the lower semi-continuity of μ → ∫_{[0,∞[} t dμ(t), we have

∫_{[0,∞[} t dμ(t) ≤ lim inf_n ∫_{[0,∞[} t dμ_n(t) ≤ lim sup_n ∫_{[0,∞[} t dμ_n(t)
= lim_n μ̄_n(0) ψ_{μ_n}(0) = μ̄(0) ψ_μ(0) = ∫_{[0,∞[} t dμ(t);

using the fact that μ_n and μ are centered we also have

lim_n ∫_{]−∞,0]} t dμ_n(t) = ∫_{]−∞,0]} t dμ(t)

which establishes ii).
Finally, the sequence T_n = T_{μ_n} increases to a stopping time R ≤ T_μ. But for
p ≤ n, we have

ψ_{μ_p}(M_{T_n}) ≤ ψ_{μ_n}(M_{T_n}) ≤ S_{T_n} ≤ S_R;

passing to the limit, we get

ψ_{μ_p}(M_R) ≤ lim_n ψ_{μ_n}(M_{T_n}) ≤ S_R,

hence ψ_μ(M_R) ≤ S_R and R = T_μ. By the continuity of M the proof is complete.
(5.6) Lemma. There exists a sequence of centered probability measures μ_n such
that

i) μ_n has a compact support contained in an interval [a_n, b_n];
ii) ψ_{μ_n} is continuous on ℝ and strictly increasing on [a_n, ∞[;
iii) the sequence (ψ_{μ_n}) increases to ψ_μ everywhere on ℝ.

Proof. Let a = sup{x : ψ_μ(x) = 0}, and (a_n) (resp. (b_n)) a sequence of strictly
negative (resp. positive) numbers decreasing (resp. increasing) to a (resp. b). The
number

δ_n = inf{ψ_μ(x) − x; −∞ < x ≤ b_n}

is finite and > 0. Let j be the function in the proof of Theorem (1.1), and for
each n, pick an integer k_n such that

k_n ∫ y j(k_n y) dy < δ_n.

For −∞ < x ≤ b_n, set

ψ_n(x) = ((x − a_{n+1}) 1_{]a_{n+1}, a_n]}(x)/(a_n − a_{n+1}) + 1_{[a_n, ∞[}(x)) ∫ ψ_μ(x + y) k_n j(k_n y) dy.

The function ψ_n enjoys the following properties:

i) ψ_n ≤ ψ_μ and ψ_n is continuous on ]−∞, b_n],
ii) ψ_n > 0 on ]a_{n+1}, b_n],
iii) ψ_n(x) > x on ]−∞, b_n]

(indeed, for x ∈ [0, b_n],

ψ_n(x) − x = k_n ∫ (ψ_μ(x + y) − (x + y)) j(k_n y) dy + k_n ∫ y j(k_n y) dy

which is > 0 by the choice of k_n).

We now define ψ_n on ]b_n, ∞[ in the following manner. Since b_n < ψ_n(b_n) ≤
ψ_μ(b_n), we let ψ_n be affine on [b_n, ψ_μ(b_n)] and set ψ_n(x) = x for x ≥ ψ_μ(b_n) in
such a way that ψ_n is continuous. Finally, we set ψ̃_n = ψ_1 ∨ ψ_2 ∨ ⋯ ∨ ψ_n; by
Lemma (5.2), the sequence (ψ̃_n) enjoys the properties of the statement. □
End of the Proof of Theorem (5.4). By Proposition (5.5) and Lemma (5.6) it is
enough to prove the first sentence when the support of μ is contained in the
compact interval [a, b] and ψ_μ is continuous and strictly increasing on [a, ∞[.
Since ψ_μ(x) = 0 for x < a we have T_μ ≤ T_a and since ψ_μ(x) = x for x > b we
also have T_μ ≤ T_b, hence M^{T_μ} is bounded.

Let γ be the inverse of the restriction of ψ_μ to [a, ∞[; for φ ∈ C_K we set
g = φ∘γ and G(x) = ∫_0^x g(u) du. By the remarks following Proposition (4.5) the
process X_t = G(S_t) − (S_t − M_t) g(S_t) is a local martingale. The functions φ and
G being bounded, X^{T_μ} is a bounded martingale and consequently

E[G(S_{T_μ}) − (S_{T_μ} − M_{T_μ}) g(S_{T_μ})] = 0.

By the definitions of g and T_μ this may be written

E[G(ψ_μ(M_{T_μ})) − (ψ_μ(M_{T_μ}) − M_{T_μ}) φ(M_{T_μ})] = 0.

If ν is the law of M_{T_μ} we have

∫ ν(dx) ∫_a^x φ(v) dψ_μ(v) + ∫ (x − ψ_μ(x)) φ(x) ν(dx) = 0,

and after integrating by parts

∫ φ(v) ν̄(v) dψ_μ(v) = ∫ (ψ_μ(x) − x) φ(x) ν(dx).

Since φ is arbitrary in C_K it follows from Lemma (5.1) and its proof that ν = μ.

To prove i) choose a sequence (ψ_n) according to Lemma (5.6). For each n,
the process M^{T_{μ_n}} is a bounded martingale. Moreover |M_{T_{μ_n}}| converges a.s. to
|M_{T_μ}| and, by Proposition (5.5) ii), E[|M_{T_{μ_n}}|] to E[|M_{T_μ}|]; it follows that |M_{T_{μ_n}}|
converges to |M_{T_μ}| in L¹. The proof of i) is then easily completed.

It remains to prove ii). When μ has compact support, M^{T_μ} is bounded and by
Proposition (1.23) Chap. IV, E[⟨M, M⟩_{T_μ}] = ∫ x² dμ(x). To get the general case
we use again an approximating sequence. Set, with λ(x) = x,

ψ_n = ψ_μ 1_{[−n,n]} + ψ_μ(n) 1_{]n, ψ_μ(n)]} + λ 1_{]ψ_μ(n), ∞[}.

By Lemma (5.2), ψ_n corresponds to a measure μ_n and by Lemma (5.3), if
∫ x² dμ_n(x) < ∞,

∫ x² dμ_n(x) = ∫_{−n}^{ψ_μ(n)} ψ_{μ_n}(x) μ̄_n(x) dx.

By Lemma (5.1), for −n < x < n, we have μ̄_n(x) = C_n μ̄(x) where lim_n C_n = 1.
Therefore

∫ x² dμ_n(x) = C_n ∫_{−n}^{n} ψ_μ(x) μ̄(x) dx + ∫_n^{ψ_μ(n)} ψ_{μ_n}(x) μ̄_n(x) dx.
We will prove that the last integral on the right, say I_n, goes to 0 as n tends to
infinity. Indeed, μ̄_n is also constant on [n, ψ_μ(n)] and is equal to C_n μ̄(n). Hence

I_n = C_n μ̄(n) ψ_μ(n) (ψ_μ(n) − n).

If X is a r.v. with law μ, then

I_n/C_n = E[X 1_{(X≥n)}] E[(X − n) 1_{(X≥n)}]/P(X ≥ n) ≤ E[X²]^{1/2} E[((X − n)^+)²]^{1/2},

which proves our claim.

As a result, lim_n ∫ x² dμ_n(x) < ∞. By the proof of Proposition (5.5) the
sequence (T_{μ_n}) increases to T_μ so that

E[⟨M, M⟩_{T_μ}] = lim_n E[⟨M, M⟩_{T_{μ_n}}] = lim_n ∫ x² dμ_n(x) = ∫ x² dμ(x).    □
#

(5.7) Exercise. Let B be the standard linear BM.
1°) For any probability measure μ on ℝ prove that there is an ℱ₁-measurable
r.v., say Z, such that Z(P) = μ.
2°) Define a (ℱ_t^B)-stopping time T by

T = inf{t ≥ 1 : B_t = Z}.

Prove that the law of B_T is μ and that E[T] = ∞.
*

(5.8) Exercise (A uniqueness result). 1°) Let g be a continuous strictly increasing function such that lim_{x→−∞} g(x) = 0, g(x) ≥ x, and g(x) = x for all
x ≥ inf{u : g(u) = u}. If for T = inf{t : S_t ≥ g(M_t)} the process M^T is a
uniformly integrable martingale and if M_T has law μ, prove that

g(x) = −(1/μ̄(x)) ∫_{]−∞,x[} t dμ(t).

In particular if μ is centered, then g = ψ_μ.
[Hint: Use the first part of the proof of Theorem (5.4).]
2°) Extend the result to the case where g is merely left-continuous and increasing.
(5.9) Exercise. In the situation of Theorem (5.4) prove that the law of S_{T_μ} is given
by

P[S_{T_μ} ≥ x] = exp(−∫_0^x (s − γ(s))^{−1} ds),

where γ(s) = inf{t : ψ_μ(t) ≥ s}.
(5.10) Exercise. Let B be the standard linear BM and μ a centered probability
measure. Prove that there is an increasing sequence of finite stopping times T_n
such that the random variables B_{T_{n+1}} − B_{T_n} are independent, identically distributed
with law μ, and E[T_{n+1} − T_n] = ∫ x² dμ(x).
*

(5.11) Exercise. Prove that the time T_μ of Theorem (5.4) has the following minimality property: if R is a stopping time such that R ≤ T_μ and M_R has the same law as M_{T_μ}, then
R = T_μ.
(5.12) Exercise. 1°) For p, q ∈ ℝ₊, call ν(p, q) the probability measure (q ε_p +
p ε_{−q})/(p + q). If μ is a probability measure satisfying the conditions of Theorem
(5.4), prove that there is a probability measure α on ℝ₊² such that

μ = ∫ ν(p, q) α(dp, dq).

[Hint: Start with μ of finite support and use weak convergence.]
2°) Let (P, Q) be a r.v. independent of B with law α. The r.v.

T = inf{t ≥ 0 : B_t ∉ ]−Q, P[}

is such that the law of B_T is μ. Observe that it is a stopping time, but for a filtration
strictly larger than the Brownian filtration.
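For a single fixed pair (p, q), the random variable T of 2°) is just the exit time of ]−q, p[, and the gambler's-ruin probabilities q/(p+q), p/(p+q) recover ν(p, q). This can be checked by simulation, with a symmetric random walk standing in for B (our discretisation; names are ours):

```python
import random

# Sketch for 2°) of Exercise (5.12) with a fixed pair (p, q): the exit time of
# ]-q, p[ embeds nu(p, q) = (q eps_p + p eps_{-q})/(p + q).
rng = random.Random(1)
p, q, h, n = 1.0, 2.0, 0.05, 3000
hit_p = 0
for _ in range(n):
    b = 0.0
    while -q + 1e-9 < b < p - 1e-9:     # not yet exited ]-q, p[
        b += h if rng.random() < 0.5 else -h
    hit_p += b > 0                      # exited at p rather than at -q
# gambler's ruin: P(B_T = p) = q/(p + q) = 2/3 here
assert abs(hit_p / n - q / (p + q)) < 0.04
```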
Notes and Comments
Sect. 1. The concept and construction of local time in the case of Brownian
motion are due to P. Levy [2]. The theory expanded in at least three directions.
The first to appear was the theory of local times for Markov processes which is
described in Blumenthal-Getoor [1] (see also Sharpe [1]) and will be taken up in
the Notes and Comments of Chap. X. A second approach is that of occupation
densities (Geman and Horowitz [1]). The point there is to show that the measure
A → ∫_0^t 1_A(X_s) ds is absolutely continuous with respect to a given deterministic
measure which usually is the Lebesgue measure on ℝ. This is often done by Fourier
transform methods and generalizes to the theory of intersection local times which
has seen much progress in recent years and for which the reader may consult
Geman et al. [1], Rosen [1], Le Gall [4] and Yor [18]; the Markov view-point on
this question being thoroughly developed in Dynkin [2]. These two approaches to
local times are fleetingly alluded to in some exercises e.g. Exercise (2.19).
The third and possibly most useful line of attack stems from the desire to
enlarge the scope of Ito's formula; this is the semimartingale point of view which
first appeared in Meyer [5] after earlier results of Tanaka [1] for Brownian motion
and Millar [2] for processes with independent increments. The case of continuous semimartingales is the subject of this section. The reader can find another exposition based on the general theory of processes in Azema-Yor [1] and the extension
to local times of regenerative sets in Dellacherie-Meyer [1] vol 4.
Theorem (1.7) which extends or parallels early results of Trotter [1], Boylan
[1] and Ray [1] is taken from Yor [4] and Theorem (1.12) from Bouleau-Yor [1]
as well as Exercise (1.29). The approximation results of Corollary (1.9), Theorem
(1.10) and Exercise (1.20) as well as some others to be found in Chap. XII, were, in
the case of Brownian motion, originally due to Levy (see Ito-McKean [1] and for
semimartingales see El Karoui [1]). Ouknine-Rutkowski [1] give many interesting
"algebraic" formulae for the computation of local times, some of which we have
turned into exercises.
Exercise (1.21) is from Weinryb [1], Exercise (1.20) from McGill et al. [1] and
Exercise (1.26) from Pratelli [1]. Exercise (1.17) is due to Yoeurp [1], Exercises
(1.14) and (1.22) are respectively from Yor [5] and [12]. Exercise (1.29) is from
Biane-Yor [1]; it extends a result which is in Ito-McKean [1] (Problem 1, p. 72).
Principal values of Brownian local times have been studied in depth by Yamada
([2], [3], [4], [6]) and also by Bertoin [2] to whom Exercise (1.30) is due; they
have been investigated for physical purposes by Ezawa et al. ([1], [2], [3]).
Tanaka's formula expresses the Doob-Meyer decomposition of the absolute
value of a martingale. Conversely, Gilat [1] has shown that the law of a positive
submartingale is the law of the absolute value of a martingale. A pathwise construction of this martingale (involving a possible extra randomization) has been given,
among others, by Protter-Sharpe [1], Barlow [7], Barlow-Yor [3] and Maisonneuve
[8].
Sect. 2. The results of the first half of this section are due to Levy but the proofs
are totally different. Actually Levy's study of Brownian local time was based on
the equivalence theorem (2.3), whereas we go the other way round, thanks to
Lemma (2.1) which is due to Skorokhod [2] (see also El Karoui and Chaleyat-Maurel [1]). Among other things, Theorem (2.3) shows that the Brownian local
time is not after all such an exotic object since it is nothing else than the supremum
process of another BM.
Corollary (2.8) gives a precise labeling of the excursions of BM away from
zero which will be essential in Chap. XII.
The first proof of the Arcsine law appears in Levy ([4], [5]). The proof presented here is found in Pitman-Yor [5] and Karatzas-Shreve [1] but the original
ideas are due to Williams [1] and McKean [3]. There are other proofs of the Arcsine law, especially by the time-honoured Feynman-Kac approach which may
be found in Ito-McKean [1]. Another proof relying on excursion theory is found
in Barlow-Pitman-Yor [1] (see Exercise (2.17) Chap. XII).
Exercise (2.13) is due to Spitzer [1] and Exercise (2.15) is in Ito-McKean [1].
The equality in law of Exercise (2.15), 2°) is a particular case of a general result
on Levy processes (see Bertoin [7]) which is an extension to continuous time of
a combinatorial result of Sparre Andersen (see Feller [4]).
Exercise (2.22) is taken from Lane [1] and Exercise (2.24) from Weinryb [1].
Exercise (2.27) is taken from Le Gall [3]; another method as well as Exercise (2.26)
may be found in Ito-McKean [1]; the results are originally due to Besicovitch and
Taylor [1] and Taylor [1]. Exercise (2.28) is from Yor [18] and Exercise (2.29)
from Biane et al. [1].
Pushing the ideas of Exercise (2.32) a little further we are led to the following
open

Question 1. Is the map β → ∫_0^· sgn(β_s) dβ_s ergodic?

Dubins and Smorodinsky [1] have given a positive answer to a discrete analogue of Question 1 for the standard random walk on ℤ. Furthermore Dubins et
al. [1] (1993) give another interesting question equivalent to Question 1 along the
lines of Exercise (2.32). Let us note that Question 1 may be extended to functions
of modulus 1 other than sgn and also to a d-dimensional setting using the result
in Exercise (3.22) Chapter IV. Finally, Malric [2] obtains density results for the
sets of zeros of the iterates of this map, which may help to solve Question 1.
Exercise (2.33) presents one of many absolute continuity relationships (see,
e.g., Yor [25]) which follow from scaling invariance properties; for a different
example, see Exercise (2.29) relating the Brownian and Pseudo-Brownian bridges.
The method used in Exercise (2.33) may be extended to yield Petit's generalization
of Levy's Arcsine law (see Petit [1] and Carmona et al. [1]).
Sect. 3. Most of the results of this section are from Pitman [1], but we have
borrowed our proof of Theorem (3.5) from Ikeda-Watanabe [2]. The original proof
of Pitman uses a limiting procedure from the discrete time case to the continuous
time case. This can be used successfully in many contexts as for instance in Le
Gall [5] or to prove some of the results in Sect. 3 Chap. VII as in Breiman [1].
For other proofs of Theorem (3.5) see Pitman-Rogers [1], Rogers [2] and Jeulin
[1], as well as Exercise (4.15) in Chapter VII. A simple proof has been given by
Imhof [3].
Theorem (3.11) is due to Williams [3] as well as several exercises. Exercise
(3.18) is taken from Emery-Perkins [1].
Sect. 4. The better part of this section comes from Azema-Yor [1] and Yor [8].
Earlier work may be found in Azema [2] and extensions to random closed sets in
Azema [3]. As mentioned below Proposition (4.5) another proof of Ito's formula
may be based on the results of this section (see Azema-Yor [1]) and thus it is
possible to give a different exposition of many of the results in Chaps. IV and VI.
Exercise (4.9) is due in part to Kennedy [1]. Exercise (4.11) is borrowed from
Dubins-Schwarz [3] and Pitman [2] and is continued in Exercise (4.13). Further
computations of best constants for Doob-like inequalities are found in Jacka [1].
Exercise (4.17) is in Ito-McKean [1] (see also Azema-Yor [2]) and Exercise
(4.20) in Brossard and Chevalier [1]. Exercise (4.21) is due to L. Carraro (private
communication). Exercise (4.25) comes from Pitman [7].
Let us mention the
Question 2. Is it possible to relax the hypothesis on F in Proposition (4.7)?
Sect. 5. The problem dealt with in this section goes back to Skorokhod [2] and
has received a great many solutions such as in Dubins [3], Root [1], Chacon-Walsh
[1] to mention a few. In discrete time the subject has been investigated by Rost
(see Revuz [3]) and has close connections with Ergodic theory.
The solution presented in this section is taken from Azema-Yor [2] with a
proof which was simplified by Pierre [1] (see also Meilijson ([1] and [2]) and
Zaremba [1]). It is only one of the many solutions given over the years, each of
which has drawbacks and advantages. A complete discussion is given by Obloj [1].
Further Skorokhod type problems are solved in Obloj-Yor [1] using either
stochastic calculus or excursion theory.
Chapter VII. Generators and Time Reversal
In this chapter, we take up the study of Markov processes. We assume that the
reader has read Sect. 1 and 2 in Chap. III.
§1. Infinitesimal Generators
The importance of the theory of Markov processes is due to several facts. On the
one hand, Markov processes provide models for many a natural phenomenon; that
the present contains all the information needed on the past to make a prediction
on the future is a natural, if somewhat overly simplifying idea and it can at least
often be taken as a first approximation. On the other hand, Markov processes arise
naturally in connection with mathematical and physical theories.
However, the usefulness of the theory will be limited by the number of processes that can be constructed and studied. We have seen how to construct a
Markov process starting from a t.f., but the snag is that there aren't many t.f.'s
which are explicitly known; moreover, in most phenomena which can be modeled
by a Markov process, what is grasped by intuition is not the t.f. but the way in
which the process moves from point to point. For these reasons, the following
notions are very important.
(1.1) Definition. Let X be a Feller process; a function f in C_0 is said to belong
to the domain 𝒟_A of the infinitesimal generator of X if the limit

A f = lim_{t↓0} (1/t)(P_t f − f)

exists in C_0. The operator A : 𝒟_A → C_0 thus defined is called the infinitesimal
generator of the process X or of the semi-group (P_t).
By the very definition of a Markov process with semi-group (P_t), if f is a
bounded Borel function

P_t f(x) = E_x[f(X_t)].

As a result, if f ∈ 𝒟_A, we may write

E_x[f(X_t)] = f(x) + t A f(x) + o(t).
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
Thus A appears as a means of describing how the process moves from point to
point in an infinitesimally small time interval.
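This picture can be made concrete in the simplest case. For the linear BM, P_t f(x) = E[f(x + √t Z)] with Z standard normal, and, as shown later in this section (Proposition (1.10)), A f = (1/2) f''. The following numerical sketch is ours (the quadrature routine and all names are our own):

```python
import math

# Difference-quotient check of the generator for linear BM:
# (P_t f - f)/t should be close to (1/2) f'' for small t.
def heat_semigroup(f, t, x, n=4000, cutoff=8.0):
    """P_t f(x) = E[f(x + sqrt(t) Z)], Z ~ N(0,1), by a midpoint rule."""
    h = 2.0 * cutoff / n
    total = 0.0
    for i in range(n):
        z = -cutoff + (i + 0.5) * h
        total += f(x + math.sqrt(t) * z) * math.exp(-z * z / 2.0) * h
    return total / math.sqrt(2.0 * math.pi)

f, x, t = math.cos, 0.3, 1e-3
difference_quotient = (heat_semigroup(f, t, x) - f(x)) / t
half_f_second = -0.5 * math.cos(x)        # (1/2) f''(x) for f = cos
assert abs(difference_quotient - half_f_second) < 1e-3
```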
We now give a few properties of A.
(1.2) Proposition. If f ∈ 𝒟_A, then

i) P_t f ∈ 𝒟_A for every t;
ii) the function t → P_t f is strongly differentiable in C_0 and

(d/dt) P_t f = A P_t f = P_t A f;

iii) P_t f − f = ∫_0^t P_s A f ds = ∫_0^t A P_s f ds.
Proof. For fixed t, we have, using the semi-group property,

lim_{s→0} (1/s)[P_s(P_t f) − P_t f] = lim_{s→0} P_t[(1/s)(P_s f − f)] = P_t A f

which proves i) and A P_t f = P_t A f. Also, t → P_t f has a right-hand derivative
which is equal to P_t A f.

Consider now the function t → ∫_0^t P_s A f ds. This function is differentiable
and its derivative is equal to P_t A f. Since two continuous functions which have
the same right derivatives differ by a constant, we have P_t f = ∫_0^t P_s A f ds + g
for some g, which completes the proof of ii); by making t = 0, it follows that
g = f which proves iii).    □
Remark. The equation (d/dt) P_t f = P_t A f may be written in a formal way

(d/dt) P_t(x, ·) = A* P_t(x, ·)

where A* is the formal adjoint of A for the duality between functions and measures. It is then called the forward or Fokker-Planck equation. The reason for
the word forward is that the equation is obtained by perturbing the final position, namely, P_t A f is the limit of ε^{−1} P_t(P_ε f − f) as ε → 0. Likewise, the
equation (d/dt) P_t f = A P_t f, which is obtained by perturbing the initial position, i.e.
A P_t f = lim_{ε→0} ε^{−1}(P_ε − I) P_t f, is called the backward equation. These names are
especially apt in the non-homogeneous case where the forward (resp. backward)
equation is obtained by differentiating P_{s,t} with respect to t (resp. s).
(1.3) Proposition. The space 𝒟_A is dense in C_0 and A is a closed operator.

Proof. Set A_h f = h^{−1}(P_h f − f) and B_s f = s^{−1} ∫_0^s P_t f dt. The operators A_h and
B_s are bounded on C_0 and moreover, for every s > 0 and f ∈ C_0,

lim_{h→0} A_h B_s f = s^{−1}(P_s f − f);

therefore B_s f ∈ 𝒟_A and since lim_{s→0} B_s f = f, 𝒟_A is dense in C_0.

Let now (f_n) be a sequence in 𝒟_A, converging to f and suppose that (A f_n)
converges to g. Then

B_s g = lim_n B_s A f_n = lim_n B_s(lim_h A_h f_n) = lim_n lim_h A_s(B_h f_n) = lim_n A_s f_n = A_s f.

It follows that f ∈ 𝒟_A and A f = lim_{s→0} A_s f = g which proves that A is a
closed operator.    □
The resolvent U_p, which was defined in Sect. 2 Chap. III, is the resolvent of
the operator A as is shown in the next

(1.4) Proposition. For every p > 0, the map f → pf − Af from 𝒟_A to C_0 is
one-to-one and onto and its inverse is U_p.

Proof. If f ∈ 𝒟_A, then

U_p(pf − Af) = ∫_0^∞ e^{−pt} P_t(pf − Af) dt = p ∫_0^∞ e^{−pt} P_t f dt − ∫_0^∞ e^{−pt} ((d/dt) P_t f) dt;

integrating by parts in the last integral, one gets U_p(pf − Af) = f. Conversely,
if f ∈ C_0, then, with the notation of the last proposition,

lim_{h→0} A_h U_p f = lim_{h→0} U_p A_h f = lim_{h→0} ∫_0^∞ e^{−pt} P_t(h^{−1}(P_h f − f)) dt

which is easily seen to be equal to p U_p f − f. As a result, (pI − A) U_p f = f
and the proof is complete.    □
The last three propositions are actually valid for any strongly continuous semigroup of contractions on a Banach space. Our next result is more specific.

(1.5) Proposition. The generator A of a Feller semi-group satisfies the following
positive maximum principle: if f ∈ 𝒟_A and if x_0 is such that 0 ≤ f(x_0) =
sup{f(x), x ∈ E}, then

A f(x_0) ≤ 0.

Proof. We have A f(x_0) = lim_{t↓0} t^{−1}(P_t f(x_0) − f(x_0)) and

P_t f(x_0) − f(x_0) ≤ f(x_0)(P_t(x_0, E) − 1) ≤ 0.    □
The probabilistic significance of generators which was explained below Definition (1.1) is also embodied in the following proposition where X is a Feller
process with transition function (P_t).

(1.6) Proposition. If f ∈ 𝒟_A, then the process

M_t^f = f(X_t) − f(X_0) − ∫_0^t A f(X_s) ds

is a (ℱ_t^o, P_ν)-martingale for every ν. If, in particular, A f = 0, then f(X_t) is a
martingale.

Proof. Since f and A f are bounded, M_t^f is integrable for each t. Moreover, by the
Markov property, the conditional expectation E_ν[M_t^f − M_s^f | ℱ_s^o] is equal to

E_{X_s}[f(X_{t−s}) − f(X_0) − ∫_0^{t−s} A f(X_u) du].

But for any y ∈ E,

E_y[f(X_{t−s}) − f(X_0) − ∫_0^{t−s} A f(X_u) du] = P_{t−s} f(y) − f(y) − ∫_0^{t−s} P_u A f(y) du

which we know to be zero by Proposition (1.2). This completes the proof. We
observe that in lieu of (ℱ_t^o), we could use any filtration (𝒢_t) with respect to
which X is a Markov process.    □
Remark. This proposition may be seen as a special case of Exercise (1.8) in
Chap. X. We may also observe that, if f ∈ 𝒟_A, then f(X_t) is a semimartingale;
in the case of BM, a converse will be found in Exercise (2.23) of Chap. X.
Conversely, we have the

(1.7) Proposition. If f ∈ C_0 and if there exists a function g ∈ C_0 such that

f(X_t) − f(X_0) − ∫_0^t g(X_s) ds

is a (ℱ_t, P_x)-martingale for every x, then f ∈ 𝒟_A and A f = g.

Proof. For every x we have, upon integrating,

P_t f(x) − f(x) − ∫_0^t P_s g(x) ds = 0,

hence

‖t^{−1}(P_t f − f) − g‖ = ‖t^{−1} ∫_0^t (P_s g − g) ds‖ ≤ sup_{s≤t} ‖P_s g − g‖

which goes to zero as t goes to zero.    □
The two foregoing results lead to the following

(1.8) Definition. If X is a Markov process, a Borel function f is said to belong
to the domain 𝔻_A of the extended infinitesimal generator if there exists a Borel
function g such that, a.s., ∫_0^t |g(X_s)| ds < +∞ for every t, and

f(X_t) − f(X_0) − ∫_0^t g(X_s) ds

is a (ℱ_t, P_x)-right-continuous martingale for every x.
Of course 𝔻_A ⊃ 𝒟_A; moreover we still write g = A f and call the "operator"
A thus defined the extended infinitesimal generator. This definition also makes
perfect sense for Markov processes which are not Feller processes. Actually, most
of the above theory can be extended to this more general case (see Exercise (1.16))
and the probabilistic significance is the same. Let us observe however that g may
be altered on a set of potential zero (Exercise (2.25) Chap. III) without altering
the martingale property, so that the map f → g is actually multi-valued and only
"almost" linear.
The remainder of this section is devoted to a few fundamental examples. Some
of the points we will cover are not technically needed in the sequel but are useful
for a better understanding of some of the topics we will treat.
There are actually few cases where 𝒟_A and A are completely known and
one has generally to be content with subspaces of 𝒟_A. We start with the case
of independent increment processes for which we use the notation of Sect. 4 in
Chap. III. Let 𝒮 be the Schwartz space of infinitely differentiable functions f
on the line such that lim_{|x|→∞} f^{(k)}(x) P(x) = 0 for any polynomial P and any
integer k. The Fourier transform is a one-to-one map from 𝒮 onto itself.

(1.9) Proposition. Let X be a real-valued process with stationary independent increments; the space 𝒮 is contained in 𝒟_A and for f ∈ 𝒮

A f(x) = β f'(x) + (σ²/2) f''(x) + ∫ (f(x + y) − f(x) − (y/(1 + y²)) f'(x)) ν(dy).
Proof. We first observe that |ψ(u)| increases at most like |u|² at infinity. Indeed

|∫_{[−1,1]^c} (e^{iux} − 1 − iux/(1 + x²)) ν(dx)| ≤ 2ν([−1,1]^c) + |u| ∫_{[−1,1]^c} (|x|/(1 + x²)) ν(dx)

and

|∫_{−1}^{1} (e^{iux} − 1 − iux/(1 + x²)) ν(dx)| ≤ |u| ∫_{−1}^{1} |x/(1 + x²) − x| ν(dx) + ∫_{−1}^{1} |e^{iux} − 1 − iux| ν(dx);

it remains to observe that the last integrand is majorized by c|x|²|u|² for a constant c.

Let then f be in 𝒮; there exists a unique g ∈ 𝒮 such that f(y) =
∫ e^{iyv} g(v) dv. If we set g_x(v) = e^{ixv} g(v), we have P_t f(x) = ⟨μ̂_t, g_x⟩ as is proved
by the following string of equalities:

⟨μ̂_t, g_x⟩ = ∫ e^{ixv} g(v) μ̂_t(v) dv = ∫ e^{ixv} g(v) (∫ e^{iyv} μ_t(dy)) dv
= ∫ μ_t(dy) ∫ e^{i(x+y)v} g(v) dv = ∫ f(x + y) μ_t(dy) = P_t f(x).

As a result

t^{−1}(P_t f(x) − f(x)) = ⟨t^{−1}(e^{tψ} − 1), g_x⟩ = ⟨ψ, g_x⟩ + (t/2) H(t, x)

where, because e^{sψ} is the Fourier transform of a probability measure,

|H(t, x)| ≤ sup_{0≤s≤t} |⟨ψ² e^{sψ}, g_x⟩| ≤ ⟨|ψ|², |g_x|⟩;

by the above remark, ⟨|ψ|, |g_x|⟩ and ⟨|ψ|², |g_x|⟩ are finite so that t^{−1}(P_t f(x) −
f(x)) converges uniformly to ⟨ψ, g_x⟩. As f'(y) = i ∫ v g_y(v) dv and f''(y) =
i² ∫ v² g_y(v) dv, we get for A the announced formula.    □
The three following particular cases are fundamental. To some extent, they
provide the "building blocks" of large classes of Markov processes.

If X is the linear BM, then obviously A f(x) = (1/2) f''(x) for every f ∈ 𝒮, and
if X = σB where B is the linear BM, then A f(x) = (σ²/2) f''(x), f ∈ 𝒮. In this
case we can actually characterize the space 𝒟_A. We call C_0² the space of twice
continuously differentiable functions f on ℝ^d (d ≥ 1) such that f and its first and
second order derivatives are in C_0.

(1.10) Proposition. For the linear BM, the space 𝒟_A is exactly equal to the space
C_0² and A f = (1/2) f'' on this space.
Proof. From Proposition (1.4), we know that 𝒟_A = U_p(C_0) for any p > 0, and
that A U_p f = p U_p f − f. We leave as an exercise to the reader the task of
showing, by means of the explicit expression of U_p computed in Exercise (2.23)
of Chap. III, that if f ∈ C_0 then U_p f ∈ C_0² and p U_p f − f = (1/2)(U_p f)''.

If, conversely, g is in C_0² and we define a function f by

f = pg − (1/2) g'',

the function g − U_p f satisfies the differential equation y'' − 2py = 0 whose only
bounded solution is the zero function. It follows that g = U_p f, hence g ∈ 𝒟_A
and A g = (1/2) g''.    □
The other particular cases are: i) the translation at speed β for which 𝒟_A is the
space of absolutely continuous functions in C_0 such that the derivative is in C_0
and A f(x) = β f'(x); ii) the Poisson process with parameter λ for which 𝒟_A = C_0
(see Exercise (1.14)) and

A f(x) = λ(f(x + 1) − f(x)).

In all these cases, we can describe the whole space 𝒟_A, but this is a rather unusual
situation, and, as a rule, one can only describe subspaces of 𝒟_A.
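The Poisson case can be checked directly, since P_t f(x) = Σ_k e^{−λt}(λt)^k f(x + k)/k!: the difference quotient at a small t is already close to λ(f(x+1) − f(x)). A numerical sketch (the test function and all names are ours):

```python
import math

# Check of A f(x) = lambda (f(x+1) - f(x)) for the Poisson process, using
# P_t f(x) = sum_k e^{-lambda t} (lambda t)^k / k!  f(x + k).
def poisson_semigroup(f, lam, t, x, kmax=50):
    return sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) * f(x + k)
               for k in range(kmax))

lam, x, t = 2.0, 0.5, 1e-4
f = lambda u: math.exp(-u * u / 4.0)
quotient = (poisson_semigroup(f, lam, t, x) - f(x)) / t
generator = lam * (f(x + 1.0) - f(x))
assert abs(quotient - generator) < 1e-3
```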
We turn to the case of BM^d.

(1.11) Proposition. For d ≥ 2, the infinitesimal generator of BM^d is equal to (1/2)Δ
on the space C_0².

Proof. For f ∈ C_0, we may write

P_t f(x) = (2π)^{−d/2} ∫_{ℝ^d} e^{−|z|²/2} f(x + z√t) dz.

If f ∈ C_0², using Taylor's formula, we get

P_t f(x) = f(x) + (t/2) Δf(x) + (2π)^{−d/2} (t/2) J(t, x)

where

J(t, x) = ∫ e^{−|z|²/2} (Σ_{i,j=1}^d [(∂²f/∂x_i∂x_j)(x̄) − (∂²f/∂x_i∂x_j)(x)] z_i z_j) dz

with x̄ some point on the segment [x, x + z√t]. Splitting the integral over {|z| ≤ R}
and {|z| > R}, for any R > 0 we have

|J(t, x)| ≤ ∫_{|z|≤R} e^{−|z|²/2} Σ_{i,j} |(∂²f/∂x_i∂x_j)(x̄) − (∂²f/∂x_i∂x_j)(x)| |z_i||z_j| dz
+ 2 max_{i,j} ‖∂²f/∂x_i∂x_j‖ ∫_{|z|>R} e^{−|z|²/2} (Σ_{i,j} |z_i||z_j|) dz.
As t goes to zero, the uniform continuity of the second partial derivatives entails that
the first half of the sum above goes to zero uniformly in x; consequently

lim sup_{t↓0} sup_{x∈ℝ^d} |J(t, x)| ≤ 2 max_{i,j} ‖∂²f/∂x_i∂x_j‖ ∫_{|z|>R} e^{−|z|²/2} (Σ_{i,j} |z_i||z_j|) dz.

By taking R large, we may make this last expression arbitrarily small, which
implies that

lim_{t↓0} ‖t^{−1}(P_t f − f) − (1/2)Δf‖ = 0

for every f ∈ C_0².    □
Remarks. 1) At variance with the case d = 1, for d > 1, the space C_0² is not equal
to 𝒟_A. Using the closedness of the operator A, one can show without too much
difficulty that 𝒟_A is the subspace (of C_0) of functions f such that Δf taken in
the sense of distributions is in C_0, and A f is then equal to (1/2)Δf.

2) In the case of BM, it follows from Proposition (1.2) that for f ∈ C_0²

(d/dt) P_t f = (1/2) Δ P_t f = (1/2) P_t Δf.

Actually, this can be seen directly since elementary computations prove that

∂g_t/∂t = (1/2) ∂²g_t/∂x²,

and the similar formula in dimension d. It turns out that the equality (d/dt) P_t f =
(1/2) Δ P_t f is valid for any bounded Borel function f and t > 0. In the language
of PDE's, g_t and its multidimensional analogues are fundamental solutions of the
heat equation ∂f/∂t = (1/2) Δf.
If B is a BM^d(0) and σ a d×d-matrix, one defines an ℝ^d-valued Markov
process X by stipulating that if X_0 = x a.s., then X_t = x + σB_t. We then have
the

(1.12) Corollary. The infinitesimal generator of X is given on C_0² by

A f(x) = (1/2) Σ_{i,j} γ_{ij} (∂²f/∂x_i∂x_j)(x),

where γ = σσ^t, with σ^t being the transpose of σ.

Proof. We have to find the limit, for f ∈ C_0², of

t^{−1} E[f(x + σB_t) − f(x)]

where E is the expectation associated with B; it is plainly equal to Δg(0)/2, where
g(y) = f(x + σy), whence the result follows by straightforward computations. □

Remark. The matrix γ has a straightforward interpretation, namely, tγ is the
covariance matrix of X_t.
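The remark can be verified by Monte Carlo: sampling X_t = σB_t and averaging the products X_t^i X_t^j recovers tγ. A sketch in dimension 2 (the matrix σ and all names are our arbitrary choices):

```python
import random

# Monte-Carlo check that X_t = sigma B_t has covariance matrix t * gamma,
# where gamma = sigma sigma^T.
rng = random.Random(2)
sigma = [[1.0, 0.0], [0.5, 1.5]]
t, n = 0.7, 100000
acc = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(n):
    b = [rng.gauss(0.0, t ** 0.5), rng.gauss(0.0, t ** 0.5)]   # B_t ~ N(0, t I)
    x = [sigma[i][0] * b[0] + sigma[i][1] * b[1] for i in range(2)]
    for i in range(2):
        for j in range(2):
            acc[i][j] += x[i] * x[j]
cov = [[acc[i][j] / n for j in range(2)] for i in range(2)]
gamma = [[sum(sigma[i][k] * sigma[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(cov[i][j] - t * gamma[i][j]) < 0.05
```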
Going back to Proposition (1.9), we now see that, heuristically speaking, it
tells us that a process with stationary independent increments is a mixture of a
translation term, a diffusion term corresponding to (σ²/2) f'' and a jump term, the
jumps being described by the Levy measure ν. The same description is valid for
a general Markov process in ℝ^d as long as 𝒟_A ⊃ C_K^∞; but since these processes
are no longer, as was the case with independent increments, translation-invariant
in space, the translation, diffusion and jump terms will vary with the position of
the process in space.

(1.13) Theorem. If P_t is a Feller semi-group on ℝ^d and C_K^∞ ⊂ 𝒟_A, then

i) C_K² ⊂ 𝒟_A;
ii) for every relatively compact open set U, there exist functions a_ij, b_i, c on U
and a kernel N such that for f ∈ C_K² and x ∈ U, A f(x) is the sum of the term
c(x) f(x), the first-order term Σ_i b_i(x)(∂f/∂x_i)(x), the second-order term
Σ_{i,j} a_ij(x)(∂²f/∂x_i∂x_j)(x), and a jump term given by an integral of compensated
increments of f against N(x, ·), where N(x, ·) is a Radon measure on ℝ^d\{x}, the matrix a(x) = (a_ij(x)) is
symmetric and non-negative, and c is ≤ 0. Moreover, a and c do not depend on U.
Fuller information can be given about the different terms involved in the description of A, but we shall not go into this and neither shall we prove this result
(see however Exercise (1.19)) which lies outside of our main concerns. We only
want to retain the idea that a process with the above infinitesimal generator will
move "infinitesimally" from a position x by adding a translation of vector b(x),
a gaussian process with covariance a(x) and jumps given by N(x, ·); the term
c(x) f(x) corresponds to the possibility for the process of being "killed" (see Exercise (1.26)). If the process has continuous paths, then its infinitesimal generator
is given on C_K² by

A f(x) = c(x) f(x) + Σ_i b_i(x) (∂f/∂x_i)(x) + Σ_{i,j} a_ij(x) (∂²f/∂x_i∂x_j)(x)

where the matrix a(x) is symmetric and non-negative. Such an operator is said to
be a semi-elliptic second order differential operator.
As was noted at the beginning of this section, a major problem is to go the other way round, that is, given an operator satisfying the positive maximum principle, to construct a Feller process whose generator is an extension of the given operator.
Let us consider a semi-elliptic second order differential operator Σ_{i,j} a_ij(x) ∂²/∂x_i∂x_j without terms of order 0 or 1. If the generator A of a Feller process X is equal to this operator on C_K^∞, we may say, referring to Corollary (1.12), that between times t and t + h, the process X moves like σ(x)B, where σ(x) is a square root of a(x), i.e., a(x) = σ(x)σ^t(x), and B a BM^d; in symbols,

   X_{t+h} = X_t + σ(X_t)(B_{t+h} − B_t) + o(h).

The idea to construct such a process is then to see it as an integral with respect to B,

   X_t = ∫_0^t σ(X_s) dB_s,

or, to use a terminology soon to be introduced, as a solution to the stochastic differential equation dX = σ(X)dB. As the paths of B are not of bounded variation, the integral above is meaningless in the Stieltjes-Lebesgue sense, and this was one of the main motivations for the introduction of stochastic integrals. These ideas will be developed in the following section and in Chap. IX.
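As a purely heuristic illustration (ours, not part of the text's development), the relation X_{t+h} ≈ X_t + σ(X_t)(B_{t+h} − B_t) suggests the Euler scheme sketched below for simulating a path of dX = σ(X)dB; the coefficient σ is an arbitrary choice. Since the resulting X is a local martingale, the sample mean of X_1 over many paths should stay near X_0 = 0.

```python
import math
import random

def euler_maruyama(sigma, x0, t, n, rng):
    """One path of dX = sigma(X) dB, approximated by n Euler steps of size t/n."""
    h = t / n
    x = x0
    for _ in range(n):
        x += sigma(x) * rng.gauss(0.0, math.sqrt(h))
    return x

# Illustrative coefficient (our choice, not from the text): sigma(x) = sqrt(1 + x^2).
rng = random.Random(0)
sigma = lambda x: math.sqrt(1.0 + x * x)
samples = [euler_maruyama(sigma, 0.0, 1.0, 200, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)  # near 0: X is a (local) martingale
```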
# (1.14) Exercise (Bounded generators). 1°) Let Π be a transition probability on E such that Π(C_0(E)) ⊂ C_0(E) and I be the identity on C_0(E). Prove that P_t = exp(t(Π − I)) is a Feller semi-group such that 𝒟_A = C_0(E) and A = Π − I. Describe heuristically the behavior of the corresponding process, of which the Poisson process is a particular case.
[Hint: See the last example in Exercise (1.8), Chap. III.]
2°) More generally, if A is a bounded operator on a Banach space, then T_t = exp(tA) is a uniformly continuous semi-group (i.e. lim_{t↓0} ||T_{t+s} − T_s|| = 0) of bounded operators (not necessarily contractions) with infinitesimal generator A.
3°) Prove that actually the three following conditions are equivalent:
i) (T_t) is uniformly continuous;
ii) 𝒟_A is the whole space;
iii) A is a bounded operator.
If these conditions are in force, then T_t = exp(tA).
[Hint: Use the closed graph and Banach-Steinhaus theorems.]
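In finite dimension, 1°) can be checked concretely: for a transition matrix Π, the semi-group P_t = exp(t(Π − I)) can be computed from the power series of the exponential. The sketch below is our illustration; the 2 × 2 matrix Π is a hypothetical example, not taken from the text.

```python
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def mat_exp(A, terms=40):
    """exp(A) via its everywhere-convergent power series (fine for small matrices)."""
    n = len(A)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    T = [row[:] for row in S]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

# A hypothetical two-state transition probability Pi (not from the text).
Pi = [[0.9, 0.1], [0.2, 0.8]]

def P(t):
    """P_t = exp(t(Pi - I))."""
    A = [[t * (Pi[i][j] - (1.0 if i == j else 0.0)) for j in range(2)] for i in range(2)]
    return mat_exp(A)
```

One can then verify numerically that each P(t) is a transition probability, that P(s)P(t) = P(s + t), and that t^{-1}(P(t) − I) → Π − I as t ↓ 0.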
(1.15) Exercise. A strongly continuous resolvent on C_0 is a family (V_λ), λ > 0, of kernels such that
i) ||λV_λ|| ≤ 1 for every λ > 0;
ii) V_λ − V_μ = (μ − λ)V_λV_μ = (μ − λ)V_μV_λ for every pair (λ, μ);
iii) for every f ∈ C_0, lim_{λ→∞} ||λV_λf − f|| = 0.
It was shown in Sect. 2 Chap. III that the resolvent of a Feller semi-group is a strongly continuous resolvent.
§ 1. Infinitesimal Generators
1°) If (V_λ), λ > 0, is a strongly continuous resolvent, prove that each operator V_λ is one-to-one and that if the operator A is defined by λI − A = V_λ^{-1}, then A does not depend on λ. If (V_λ) is the resolvent of a Feller semi-group, A is the corresponding generator.
2°) Prove that f ∈ 𝒟_A if and only if lim_{λ→∞} λ(λV_λf − f) exists, and the limit is then equal to Af.
(1.16) Exercise. For a homogeneous transition function P_t, define B_0 as the set of bounded Borel functions f such that lim_{t→0} ||P_tf − f|| = 0, where ||f|| = sup_x |f(x)|. Define 𝒟_A as the set of those functions f for which there exists a function Af such that

   lim_{t↓0} || t^{-1}(P_tf − f) − Af || = 0.

Prove that 𝒟_A ⊂ B_0 and extend the results of the present section to this general situation by letting B_0 play the role held by C_0 in the text.
(1.17) Exercise. If (P_t) is a Feller semi-group and f a function in C_0 such that t^{-1}(P_tf − f) is uniformly bounded and converges pointwise to a function g of C_0, then f ∈ 𝒟_A and Af = g.
[Hint: Prove that f ∈ U_λ(C_0).]
# (1.18) Exercise. Let P_t and Q_t be two Feller semi-groups on the same space with infinitesimal generators A and B. If 𝒟_A ⊂ 𝒟_B and B = A on 𝒟_A, prove that P_t = Q_t. Consequently, the map P_t → A is one-to-one and no strict continuation of an infinitesimal generator can be an infinitesimal generator.
[Hint: For f ∈ 𝒟_A, differentiate the function s → Q_sP_{t−s}f.]
* (1.19) Exercise. Let A be a linear map from C^∞(ℝ^d) into C(ℝ^d), satisfying the positive maximum principle and such that A1 = 0. We assume moreover that A is a local operator, namely, if f ≡ 0 on some neighborhood of x, then Af(x) = 0.
1°) Prove that A satisfies the local maximum principle: if f has a local maximum at x, then Af(x) ≤ 0.
2°) If, for some x, the function f is such that |f(y) − f(x)| = O(|y − x|²) as y → x, prove that Af(x) = 0.
[Hint: Apply A to the function f(y) + a|y − x|² for suitably chosen a.]
3°) Call X_i the coordinate mappings and set b_i(x) = AX_i(x), a_ij(x) = A(X_iX_j)(x) − b_i(x)X_j(x) − b_j(x)X_i(x). Prove that for every x, the matrix (a_ij(x)) is non-negative.
[Hint: Use the functions f(y) = |Σ_{i=1}^d θ_i(X_i(y) − X_i(x))|² where θ ∈ ℝ^d.]
4°) Prove that

   Af(x) = Σ_i b_i(x) (∂f/∂x_i)(x) + (1/2) Σ_{i,j} a_ij(x) (∂²f/∂x_i∂x_j)(x).

[Hint: Use Taylor's formula.]
#
(1.20) Exercise. Let γ be the Gaussian measure on ℝ^d with mean 0 and covariance matrix I_d. Let f be a C^∞-function on ℝ^d, bounded as well as its derivatives, and B the BM^d(0).
1°) (Chernoff's inequality). Prove that

   f(B_1) − E[f(B_1)] = ∫_0^1 ∇(P_{1−t}f)(B_t) · dB_t

and derive that

   ∫ f² dγ − (∫ f dγ)² ≤ ∫ |∇f|² dγ.

2°) Suppose further that ||∇f|| ≤ 1 and prove that there exists a BM¹, say X, and a r.v. τ ≤ 1 such that f(B_1) − E[f(B_1)] = X_τ.
[Hint: ||∇(P_{1−t}f)|| ≤ 1.]
3°) Prove that for every u > 0,

   γ({x ∈ ℝ^d : f(x) > ∫ f dγ + u}) ≤ √(2/π) ∫_u^∞ exp(−x²/2) dx.

4°) Extend the result of 1°) to functions f in L²(γ) such that ∇f is in L²(γ) and prove that the equality obtains if and only if f is an affine function.
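The inequality of 1°) can be sanity-checked numerically in dimension one. The sketch below is our illustration, with the arbitrary choice f(x) = sin x: it estimates both sides of ∫f²dγ − (∫f dγ)² ≤ ∫|f′|²dγ by Monte Carlo; for this f the exact values are (1 − e^{−2})/2 ≈ 0.432 and (1 + e^{−2})/2 ≈ 0.568.

```python
import math
import random

rng = random.Random(42)
xs = [rng.gauss(0.0, 1.0) for _ in range(20000)]

# f(x) = sin x, an illustrative choice with |f'| <= 1
f_vals = [math.sin(x) for x in xs]
grad_sq = [math.cos(x) ** 2 for x in xs]

mean_f = sum(f_vals) / len(f_vals)
var_f = sum((v - mean_f) ** 2 for v in f_vals) / len(f_vals)  # estimates Var_gamma(f)
energy = sum(grad_sq) / len(grad_sq)                          # estimates gamma(|f'|^2)
```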
* (1.21) Exercise. In the case of BM¹, let F be a r.v. of L²(ℱ_t). Assume that there is a function Φ on Ω × ℝ_+ × Ω such that for each t, one has F(w) = Φ(w, t, θ_t(w)) (see Exercise (3.19) Chap. III) and (x, t) → φ(w, x, t) = E_x[Φ(w, t, ·)] is for a.e. w a function of class C^{1,2}. Prove then that the representation of F given in Sect. 3 Chap. V is equal to

   F = E[F] + ∫_0^t φ′_x(w, B_s(w), s) dB_s(w).

Give examples of variables F for which the above conditions are satisfied.
[Hint: Use Exercise (3.12) Chap. IV.]
(1.22) Exercise. Prove that the infinitesimal generator of the BM killed at 0 is equal to the operator ½ d²/dx² on C_K^2(]0, ∞[).

# (1.23) Exercise (Skew Brownian motion). Prove that the infinitesimal generator of the semi-group defined in Exercise (1.16) of Chap. III is equal to ½ d²/dx² on the space {f ∈ C_0 : f″ exists on ℝ\{0}, f″(0−) = f″(0+) and (1 − α)f′(0−) = αf′(0+)}.
# (1.24) Exercise. If X is a homogeneous Markov process with generator A, prove that the generator of the space-time process associated with X (Exercise (1.10) of Chap. III) is equal to ∂/∂t + A on a suitable space of functions on ℝ_+ × E.
(1.25) Exercise. 1°) Let X be a Feller process and U the potential kernel of Exercise (2.29) Chap. III. If f ∈ C_0 is such that Uf ∈ C_0, then Uf ∈ 𝒟_A and −AUf = f. Thus, the potential kernel appears as providing an inverse for A.
[Hint: Use Exercise (1.17).]
2°) Check that for BM³ the conditions of 1°) are satisfied for every f ∈ C_K. In the language of PDE's, 1/(2π|x|) is a fundamental solution for A, that is: −A(1/(2π|x|)) = δ_0 in the sense of distributions.
(1.26) Exercise. Let X be a Feller process and c a positive Borel function.
1°) Prove that one defines a homogeneous transition function Q_t by setting

   Q_tf(x) = E_x [ f(X_t) exp( −∫_0^t c(X_s) ds ) ].

Although we are not going into this, Q_t corresponds to the curtailment or "killing" of the trajectories of X performed at the "rate" c(X).
2°) If f is in the domain of the generator of X and if c is continuous, prove that

   lim_{t→0} t^{-1}(Q_tf − f) = Af − cf

pointwise. The reader is invited to look at Proposition (3.10) in the following chapter.
(1.27) Exercise. 1°) Let Z be a strictly positive r.v. and define a family of kernels on ℝ_+ by

   P_tf(x) = E[f((tZ) ∨ x)].

Prove that (P_t) is a semi-group iff Z^{-1} is an exponential r.v. of parameter λ, λ > 0.
2°) If this is the case, write down the analytical form of (P_t), then prove that it is a Feller semi-group, that 𝒟_A = C_0(ℝ_+) and that

   Af(x) = λ ∫_x^∞ (f(y) − f(x)) y^{-2} dy.
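The analytical form asked for in 2°) can be checked numerically. Assuming Z^{-1} exponential of parameter λ, one finds (this intermediate formula is our derivation, not the text's) P_tf(x) = f(x)e^{−λt/x} + ∫_x^∞ f(y) λt y^{−2} e^{−λt/y} dy, and the difference quotient t^{-1}(P_tf(x) − f(x)) for small t should approach the displayed Af(x). A sketch with the arbitrary test function f(y) = 1/(1 + y):

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * g(a + i * h)
    return s * h / 3.0

lam, x = 1.0, 1.0
f = lambda y: 1.0 / (1.0 + y)   # arbitrary test function vanishing at infinity
fx = f(x)

def Ptf(t):
    # P_t f(x) written with the substitution u = 1/y as a finite integral on [0, 1/x]
    g = lambda u: (f(1.0 / u) if u > 0 else 0.0) * lam * t * math.exp(-lam * t * u)
    return fx * math.exp(-lam * t / x) + simpson(g, 0.0, 1.0 / x)

# Af(x) = lam \int_x^infty (f(y) - f(x)) y^{-2} dy, same substitution u = 1/y
Af = lam * simpson(lambda u: (f(1.0 / u) if u > 0 else 0.0) - fx, 0.0, 1.0 / x)

t = 1e-3
quotient = (Ptf(t) - fx) / t    # should be close to Af
```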
(1.28) Exercise. In the notation of this section, call A^n the n-th iterate of A and 𝒟_{A^n} its domain.
1°) If φ ∈ C_K^∞(]0, ∞[), prove that for f ∈ C_0, the function P_φf (see above Lemma (4.3)) is in 𝒟_{A^n} for every n. Prove then that ⋂_n 𝒟_{A^n} is dense in C_0.
2°) If the paths of X are continuous and if f ∈ 𝒟_{A^n}, prove that

   Σ_{k=0}^{n−1} ((−t)^k/k!) A^kf(X_t) + (−1)^n ∫_0^t (u^{n−1}/(n − 1)!) A^nf(X_u) du

is a (ℱ_t^o, P_ν)-martingale for every ν. This question does not depend on the first one; furthermore, the continuity of paths is needed only because of the limitations of this book.
§2. Diffusions and Ito Processes
In the foregoing section, we have seen, in a heuristic way, that some Markov processes ought to be solutions to "stochastic differential equations". We now take this up and put it in a rigorous and systematic form, thus preparing for the discussion in Chap. IX and establishing a bridge between the theory of the infinitesimal generator and stochastic calculus.
In the sequel, a and b will denote a matrix field and a vector field on ℝ^d subject to the conditions
i) the maps x → a(x) and x → b(x) are Borel measurable and locally bounded,
ii) for each x, the matrix a(x) is symmetric and non-negative, i.e. for any λ ∈ ℝ^d, Σ_{i,j} a_ij(x)λ_iλ_j ≥ 0.
With such a pair (a, b), we associate the second order differential operator

   L = (1/2) Σ_{i,j=1}^d a_ij(·) ∂²/∂x_i∂x_j + Σ_{i=1}^d b_i(·) ∂/∂x_i.

In Sect. 1, we have mentioned that some Markov processes have infinitesimal generators which are extensions of such operators. It is an important problem to know if conversely, given such an operator L, we can find a Markov process whose generator coincides with L on C_K^∞.
(2.1) Definition. A Markov process X = (Ω, ℱ, ℱ_t, X_t, P_x) with state space ℝ^d is said to be a diffusion process with generator L if
i) it has continuous paths,
ii) for any x ∈ ℝ^d and any f ∈ C_K^∞,

   E_x[f(X_t)] = f(x) + E_x [ ∫_0^t Lf(X_s) ds ].

We further say that X has covariance or diffusion coefficient a and drift b. This is justified by the considerations in Sect. 1.
Let us stress that the hypothesis of continuity of paths includes that ζ = ∞ a.s. As a result, if {K_n} is an increasing sequence of compact sets such that K_n ⊂ K̊_{n+1} and ⋃_n K_n = ℝ^d, then, setting σ_n = T_{K_n^c}, we have lim_n σ_n = +∞. Furthermore, the necessity of the non-negativity of a follows from Theorem (1.13) and Exercise (1.19), but is also easily explained by Exercise (2.8).
Observe also that one could let a and b depend on the time s, and get for each s a second-order differential operator L_s equal to

   L_s = (1/2) Σ_{i,j} a_ij(s, ·) ∂²/∂x_i∂x_j + Σ_i b_i(s, ·) ∂/∂x_i.

The notion of diffusion would have to be extended to that of non-homogeneous diffusion. In that case, one would have probability measures P_{s,x} corresponding to the process started at x at time s, and demand that for any f ∈ C_K^∞ and s < t,

   E_{s,x}[f(X_t)] = f(x) + E_{s,x} [ ∫_s^t L_uf(X_u) du ].

In the sequel, we will deal mainly with homogeneous diffusions and write, for f ∈ C²,

   M_t^f = f(X_t) − f(X_0) − ∫_0^t Lf(X_s) ds.

The process M^f is continuous; it is moreover locally bounded since, by the hypothesis made on a and b, it is clearly bounded on [0, σ_n ∧ n]. Likewise, if f ∈ C_K^∞, M^f is bounded on every interval [0, t] and the integrals in ii) are finite.
(2.2) Proposition. The property ii) above is equivalent to each of the following:
iii) for any f ∈ C_K^∞, M^f is a martingale for any P_x;
iv) for any f ∈ C², M^f is a local martingale for any P_x.
Proof. If iii) holds, then, since M_0^f = 0,

   P_tf(x) − f(x) − E_x [ ∫_0^t Lf(X_s) ds ] = E_x[M_t^f] = 0,

and ii) holds.
Conversely, if ii) holds, then E_x[M_t^f] = 0 for every x and t. By the Markov property, we consequently have

   E_x[M_t^f | ℱ_s] = M_s^f + E_x [ f(X_t) − f(X_s) − ∫_s^t Lf(X_u) du | ℱ_s ]
                    = M_s^f + E_{X_s}[M_{t−s}^f] = M_s^f,

which shows that ii) implies iii).
If M^f is a local martingale and is bounded on [0, t] for each t, then it is a martingale; thus, iv) implies iii).
To prove that iii) implies iv), let us begin with f in C_K^2. There is a compact set H and a sequence {f_p} of functions in C_K^∞ vanishing on H^c and such that {f_p} converges uniformly to f on H, as well as the first and second order derivatives. For every t, the process M^{f_p} − M^f is bounded on [0, t] by a constant c_p which goes to zero as p → ∞. By passing to the limit in the right-hand side of the inequality

   | E_x[M_t^f | ℱ_s] − M_s^f | ≤ | E_x[M_t^f − M_t^{f_p} | ℱ_s] | + | M_s^{f_p} − M_s^f | ≤ 2c_p,

we see that M^f is a martingale.
Let now f be in C²; we may find a sequence {g_n} of functions in C_K^2 such that g_n = f on K_n. The processes M^f and M^{g_n} coincide up to time σ_n. Since M^{g_n} is a martingale by what we have just seen, the proof is complete.
Remarks. 1°) The local martingales M^f are local martingales with respect to the uncompleted σ-fields ℱ_t^o = σ(X_s, s ≤ t) as well as with respect to the usual augmentation of (ℱ_t^o).
2°) Proposition (2.2) says that any function in C² is in the domain of the extended infinitesimal generator of X. If X is Feller, by arguing as in Proposition (1.7), we see that C_K^2 ⊂ 𝒟_A and A = L on C_K^2. In the same vein, if Lf = 0, then f(X_t) is a local martingale, which generalizes what is known for BM (Proposition (3.4) Chap. IV). By making f(x) = x, we also see that X is a local martingale if and only if L has no first order terms.
If we think of the canonical version of a diffusion, where the probability space is W = C(ℝ_+, ℝ^d) and X is the coordinate process, the above result leads to the
(2.3) Definition. A probability measure P on W is a solution to the martingale problem π(x, a, b) if
i) P[X_0 = x] = 1;
ii) for any f ∈ C_K^∞, the process

   M_t^f = f(X_t) − f(X_0) − ∫_0^t Lf(X_s) ds

is a P-martingale with respect to the filtration (σ(X_s, s ≤ t)) = (ℱ_t^o).
The idea is that if (Ω, X_t, P_x) is a diffusion with generator L, then X(P_x) is a solution to the martingale problem π(x, a, b). Therefore, if one wants to construct a diffusion with generator L, one can try in a first step to solve the corresponding martingale problem; then in a second step, if we have a solution for each x, to see if these solutions relate in such a way that the canonical process is a diffusion with L as its generator. This will be discussed in Chap. IX.
For the time being, we prove that the conditions in Proposition (2.2) are equivalent to another set of conditions. We do it in a slightly more general setting which covers the case of non-homogeneous diffusions. Let a and b be two progressively measurable, locally bounded processes taking values in the spaces of non-negative symmetric d × d-matrices and d-vectors. For f ∈ C²(ℝ^d), we set

   L_s(w)f(x) = (1/2) Σ_{i,j} a_ij(s, w) (∂²f/∂x_i∂x_j)(x) + Σ_i b_i(s, w) (∂f/∂x_i)(x).
(2.4) Proposition. Let X be a continuous, adapted, ℝ^d-valued process; the three following statements are equivalent:
i) for any f ∈ C², the process M^f = f(X_t) − f(X_0) − ∫_0^t L_sf(X_s) ds is a local martingale;
ii) for any θ ∈ ℝ^d, the process M_t^θ = ⟨θ, X_t − X_0 − ∫_0^t b(s) ds⟩ is a local martingale and

   ⟨M^θ, M^θ⟩_t = ∫_0^t ⟨θ, a(s)θ⟩ ds;

iii) for any θ ∈ ℝ^d,

   Z_t^θ = exp( ⟨θ, X_t − X_0 − ∫_0^t b(s) ds⟩ − (1/2) ∫_0^t ⟨θ, a(s)θ⟩ ds )

is a local martingale.
Proof. i) ⇒ ii). For f(x) = ⟨θ, x⟩, we get M^f = M^θ, which is thus a local martingale. Making f(x) = ⟨θ, x⟩² in i), we get that

   H_t^θ ≡ ⟨θ, X_t⟩² − ⟨θ, X_0⟩² − 2 ∫_0^t ⟨θ, X_s⟩⟨θ, b(s)⟩ ds − ∫_0^t ⟨θ, a(s)θ⟩ ds

is a local martingale. Writing X ≐ Y if X − Y is a local martingale, we have, since 2⟨θ, X_0⟩M_t^θ is a local martingale,

   (M_t^θ)² − ∫_0^t ⟨θ, a(s)θ⟩ ds ≐ (M_t^θ + ⟨θ, X_0⟩)² − ∫_0^t ⟨θ, a(s)θ⟩ ds.

Setting A_t = ⟨θ, ∫_0^t b(s) ds⟩, we further have

   (M_t^θ + ⟨θ, X_0⟩)² − ⟨θ, X_0⟩² − ∫_0^t ⟨θ, a(s)θ⟩ ds − H_t^θ
      = (⟨θ, X_t⟩ − A_t)² − ⟨θ, X_t⟩² + 2 ∫_0^t ⟨θ, X_s⟩ dA_s
      = −2⟨θ, X_t⟩A_t + A_t² + 2 ∫_0^t ⟨θ, X_s⟩ dA_s.

As ⟨θ, X_t⟩ = M_t^θ + ⟨θ, X_0⟩ + A_t is a semimartingale, we may apply the integration by parts formula to the effect that

   (M_t^θ)² − ∫_0^t ⟨θ, a(s)θ⟩ ds ≐ A_t² − 2 ∫_0^t A_s dA_s = 0,

which completes the proof.
ii) ⇒ iii). By Proposition (3.4) in Chap. IV, there is nothing to prove, as Z_t^θ = ℰ(M^θ)_t.
iii) ⇒ i). We assume that iii) holds and first prove i) for f(y) = exp(⟨θ, y⟩). The process

   V_t = exp [ ⟨θ, ∫_0^t b(s) ds⟩ + (1/2) ∫_0^t ⟨θ, a(s)θ⟩ ds ]

is of bounded variation; integrating by parts, we obtain that the process

   Z_t^θ V_t − ∫_0^t Z_s^θ dV_s = exp(⟨θ, X_t − X_0⟩) − ∫_0^t exp(⟨θ, X_s − X_0⟩) ⟨θ, b(s) + (1/2)a(s)θ⟩ ds
is a local martingale. Since L_sf(x) = exp(⟨θ, x⟩)⟨θ, b(s) + (1/2)a(s)θ⟩, we have proved that

   f(X_0)^{-1} ( f(X_t) − ∫_0^t L_sf(X_s) ds )

is a local martingale. The class of local martingales being invariant under multiplication by, or addition of, ℱ_0-measurable variables, our claim is proved in that case. To get the general case, it is enough to observe that exponentials are dense in C² for the topology of uniform convergence on compact sets of the functions and their first two derivatives.
□
Remarks. 1°) Taking Proposition (2.2) into account, the implication ii) ⇒ i) above is a generalization of P. Lévy's characterization theorem (Theorem (3.6) of Chapter IV), as is seen by making a = I_d and b = 0.
2°) Another equivalent condition is given in Exercise (2.11).
(2.5) Definition. A process X which satisfies the conditions of Proposition (2.4) is called an Ito process with covariance or diffusion coefficient a and drift b.
Obviously Ito processes, hence diffusions, are continuous semimartingales. We now show that they coincide with the solutions of some "stochastic differential equations" which we shall introduce and study in Chap. IX. We will use the following notation. If X = (X¹, ..., X^d) is a vector semimartingale and K = (K_ij) a process taking its values in the space of r × d-matrices, such that each K_ij is progressive and locally bounded, we will write K · X or ∫_0^t K_s dX_s for the r-dimensional process whose i-th component is equal to Σ_j ∫_0^t K_ij(s) dX_s^j.
(2.6) Proposition. Let β be an (ℱ_t)-BM^r defined on a probability space (Ω, ℱ, ℱ_t, P) and σ (resp. b) a locally bounded predictable process with values in the d × r-matrices (resp. ℝ^d); if the adapted continuous process X satisfies the equation

   (*)   X_t = X_0 + ∫_0^t σ(s) dβ_s + ∫_0^t b(s) ds,

it is an Ito process with covariance σσ^t and drift b. If, in particular, σ(s) = σ(X_s) and b(s) = b(X_s) for two fields σ and b defined on ℝ^d, and if X_0 = x a.s., then X(P) is a solution to the martingale problem π(x, σσ^t, b).
Proof. A straightforward application of Ito's formula shows that condition i) of Proposition (2.4) holds.
□
We now want to prove a converse to this proposition, namely that, given an Ito process X, and in particular a diffusion, there exists a Brownian motion β such that X satisfies (*) for suitable σ and b. The snag is that the space on which X is defined may be too poor to carry a BM; this is for instance the case if X is the translation on the real line. We will therefore have to enlarge the probability space, unless we make an assumption of non-degeneracy on the covariance a.
(2.7) Theorem. If X is an Ito process with covariance a and drift b, there exist a predictable process σ and a Brownian motion B on an enlargement of the probability space such that

   X_t = X_0 + ∫_0^t σ(s) dB_s + ∫_0^t b(s) ds.

Proof. By ii) of Proposition (2.4), the continuous vector local martingale M_t = X_t − X_0 − ∫_0^t b(s) ds satisfies ⟨M^i, M^j⟩_t = ∫_0^t a_ij(s) ds. The result follows immediately from Proposition (3.8) in Chap. V.
□
Remark. By the remark after Proposition (3.8) in Chap. V, we see that if a is dP ⊗ dt-a.e. strictly positive, then σ and B may be chosen such that a = σσ^t. If in particular a(s) = c(X_s), where c is a measurable field of symmetric strictly positive matrices on ℝ^d, we can pick a measurable field γ of matrices such that γγ^t = c and take σ(s) = γ(X_s).
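The recipe of the last sentence — pick γ with γγ^t = c — can be carried out, for a strictly positive definite matrix, by a Cholesky factorization. A minimal sketch of ours (the matrix c below is an arbitrary example, not from the text):

```python
import math

def cholesky(c):
    """Lower triangular g with g g^t = c, for a symmetric positive definite c."""
    n = len(c)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(g[i][k] * g[j][k] for k in range(j))
            if i == j:
                g[i][j] = math.sqrt(c[i][i] - s)   # diagonal entry
            else:
                g[i][j] = (c[i][j] - s) / g[j][j]  # below-diagonal entry
    return g

# An arbitrary symmetric positive definite example.
c = [[4.0, 2.0, 0.5], [2.0, 3.0, 1.0], [0.5, 1.0, 2.0]]
g = cholesky(c)
```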
# (2.8) Exercise. 1°) In the situation of Proposition (2.2) and for f, g ∈ C², prove that

   ⟨M^f, M^g⟩_t = ∫_0^t ( Σ_{i,j} a_ij (∂f/∂x_i)(∂g/∂x_j) )(X_s) ds.

[This exercise is solved in Sect. 3 Chap. VIII.]
2°) Deduce from 1°) the necessity for the matrices a(x) to be non-negative.
# (2.9) Exercise. In the situation of Proposition (2.2), prove that if f is a strictly positive C²-function, then

   f(X_t) exp( −∫_0^t (Lf/f)(X_s) ds )

is a P_x-local martingale for every x.
(2.10) Exercise. If X is a d-dimensional Ito process with covariance a and drift 0, vanishing at 0, prove that for 2 ≤ p < ∞, there is a constant C depending only on p and d such that

   E [ sup_{s≤t} |X_s|^p ] ≤ C E [ ( ∫_0^t Trace a(s) ds )^{p/2} ].
(2.11) Exercise. Prove that the conditions in Proposition (2.4) are also equivalent to
iv) for any f on [0, ∞[×ℝ^d which is once (twice) continuously differentiable in the first (second) variable, the process

   f(t, X_t) − f(0, X_0) − ∫_0^t ( ∂f/∂s + L_sf )(s, X_s) ds

is a local martingale. Compare with Exercise (1.24).
* (2.12) Exercise. In the situation of Proposition (2.4), suppose that a and b do not depend on w. If u is a function on [0, ∞[×ℝ^d which is sufficiently differentiable and such that ∂u/∂t = L_tu + g in ]0, ∞[×ℝ^d, prove that

   u(t − s, X_s) + ∫_0^s g(t − r, X_r) dr

is a local martingale on [0, t[.
§3. Linear Continuous Markov Processes
Beside the linear BM itself, many Markov processes with continuous paths, defined on subsets of ℝ, such as the BES³ or the reflected BM, have cropped up in our study. The particular case of Bessel processes will be studied in Chap. XI. This is the reason why, in this section, we make a systematic study of this situation and compute the corresponding generators.
We will therefore deal with a Markov process X whose state space E is an interval (l, r) of ℝ which may be closed, open or semi-open, bounded or unbounded. The death-time is as usual denoted by ζ. We assume, throughout the section, that the following assumptions are in force:
i) the paths of X are continuous on [0, ζ[;
ii) X enjoys the strong Markov property;
iii) if ζ < ∞ with strictly positive probability, then at least one of the points l and r does not belong to E and lim_{t↑ζ} X_t ∉ E a.s. on {ζ < ∞}; in other words, X can be "killed" only at the end-points of E which do not belong to E.
Property i) entails that the process started at x cannot hit a point y without hitting all the points located between x and y. The hitting time of the one-point set {x} is denoted by T_x; we have

   T_x = inf{t > 0 : X_t = x},

where as usual inf(∅) = +∞. Naturally, X_{T_x} = x on {T_x < ∞}.
Finally, we will make one more assumption, namely, that X is regular: for any x ∈ E̊ = ]l, r[ and y ∈ E, P_x[T_y < ∞] > 0. This last hypothesis means that E cannot be decomposed into smaller sets from which X could not exit (see Exercise (3.22)).
From now on, we work with the foregoing set of hypotheses.
For any interval I = ]a, b[ such that [a, b] ⊂ E, we denote by σ_I the exit time of I. For x ∈ I, we have σ_I = T_a ∧ T_b P_x-a.s. and for x ∉ I, σ_I = 0 P_x-a.s. We also put m_I(x) = E_x[σ_I].
(3.1) Proposition. If I is bounded, the function m_I is bounded on I. In particular, σ_I is almost-surely finite.
Proof. Let y be a fixed point in I. Because of the regularity of X, we may pick α < 1 and t > 0 such that

   P_y[T_a > t] ≤ α   and   P_y[T_b > t] ≤ α.

If now y < x < b, then

   P_x[σ_I > t] ≤ P_x[T_b > t] ≤ P_y[T_b > t] ≤ α;

the same reasoning applies to a < x < y, and consequently

   sup_{x∈I} P_x[σ_I > t] ≤ α < 1.

Now, since σ_I = u + σ_I ∘ θ_u on {σ_I > u}, we have

   P_x[σ_I > nt] = P_x[(σ_I > (n − 1)t) ∩ ((n − 1)t + σ_I ∘ θ_{(n−1)t} > nt)],

and using the Markov property,

   P_x[σ_I > nt] = E_x [ 1_{{σ_I > (n−1)t}} P_{X_{(n−1)t}}[σ_I > t] ].

On {σ_I > (n − 1)t}, we have X_{(n−1)t} ∈ I P_x-a.s. and therefore

   P_x[σ_I > nt] ≤ α P_x[σ_I > (n − 1)t].

It follows inductively that P_x[σ_I > nt] ≤ α^n for every x ∈ I, and therefore

   sup_{x∈I} E_x[σ_I] ≤ sup_{x∈I} Σ_{n=0}^∞ t P_x[σ_I > nt] ≤ t(1 − α)^{-1},

which is the desired result.
□
For a and b in E and l ≤ a < x < b ≤ r, the probability P_x[T_b < T_a] is the probability that the process started at x exits ]a, b[ by its right end. Because of the preceding proposition, we have
(3.2) Proposition. There exists a continuous, strictly increasing function s on E such that for any a, b, x in E with l ≤ a < x < b ≤ r,

   P_x[T_b < T_a] = (s(x) − s(a)) / (s(b) − s(a)).

If s̃ is another function with the same properties, then s̃ = αs + β with α > 0 and β ∈ ℝ.
Proof. Suppose first that E is the closed bounded interval [l, r]. The event {T_r < T_l} is equal to the disjoint union

   {T_r < T_l; T_a < T_b} ∪ {T_r < T_l; T_b < T_a}.
Now T_l = T_a + T_l ∘ θ_{T_a} and T_r = T_a + T_r ∘ θ_{T_a} on the set {T_a < T_b}. Thus

   P_x[T_r < T_l; T_a < T_b] = E_x [ 1_{{T_a<T_b}} 1_{{T_r<T_l}} ∘ θ_{T_a} ]

and since {T_a < T_b} ∈ ℱ_{T_a}, the strong Markov property yields

   P_x[T_r < T_l; T_a < T_b] = E_x [ 1_{{T_a<T_b}} E_{X_{T_a}}[1_{{T_r<T_l}}] ]

and because X_{T_a} = a a.s.,

   P_x[T_r < T_l; T_a < T_b] = P_x[T_a < T_b] P_a[T_r < T_l].

We finally get

   P_x[T_r < T_l] = P_x[T_a < T_b] P_a[T_r < T_l] + P_x[T_b < T_a] P_b[T_r < T_l].

Setting s(x) = P_x[T_r < T_l] and solving for P_x[T_b < T_a], we get the formula in the statement.
To prove that s is strictly increasing, suppose there exist x < y such that s(x) = s(y). Then, for any b > y, the formula just proved yields P_y[T_b < T_x] = 0, which contradicts the regularity of the process.
Suppose now that E is an arbitrary interval. If l_2 < l_1 < r_1 < r_2 are four points in E, we may apply the foregoing discussion to [l_1, r_1] and [l_2, r_2]; the functions s_1 and s_2 thus defined obviously coincide on ]l_1, r_1[ up to an affine transformation. As a result, a function s may be defined which satisfies the equality in the statement for any three points in E. It remains to prove that it is continuous.
If a < x and {a_n} is a sequence of real numbers smaller than x and decreasing to a, then T_{a_n} ↑ T_a P_x-a.s.; indeed, because of the continuity of paths, X_{lim_n T_{a_n}} = a P_x-a.s., so that lim_n T_{a_n} ≥ T_a, and the reverse inequality is obvious. Consequently, {T_a < T_b} = lim_n {T_{a_n} < T_b} P_x-a.s. since obviously P_x[T_a = T_b] = 0. It follows that s is right-continuous in a, and the left-continuity is shown in exactly the same way. The proof is complete.
□
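A discrete illustration of Proposition (3.2), ours and not from the text: for a nearest-neighbour random walk on {a, ..., b} which steps right with probability p, the function s(k) = Σ_{j=0}^{k−a−1} ρ^j, with ρ = (1 − p)/p, plays the role of a scale function. The sketch below computes the hitting probabilities h(k) = P_k[T_b < T_a] by solving the harmonic linear system directly, so that they can be compared with (s(k) − s(a))/(s(b) − s(a)).

```python
def hitting_prob(a, b, p):
    """h(k) = P_k[T_b < T_a] for the walk stepping right w.p. p, by solving
    h(k) = p h(k+1) + (1-p) h(k-1) on a < k < b, with h(a) = 0, h(b) = 1."""
    n = b - a - 1                        # interior points a+1, ..., b-1
    sub, diag, sup = [-(1.0 - p)] * n, [1.0] * n, [-p] * n
    rhs = [0.0] * n
    rhs[-1] = p                          # boundary value h(b) = 1
    for i in range(1, n):                # forward elimination (Thomas algorithm)
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    h = [0.0] * n
    h[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        h[i] = (rhs[i] - sup[i] * h[i + 1]) / diag[i]
    return {a + 1 + i: h[i] for i in range(n)}

def scale(k, a, p):
    """Discrete scale function s(k) - s(a) = sum_{j < k-a} rho^j, rho = (1-p)/p."""
    rho = (1.0 - p) / p
    return sum(rho ** j for j in range(k - a))

a, b, p = 0, 10, 0.6
h = hitting_prob(a, b, p)
```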
(3.3) Definition. The function s of the preceding result is called the scale function of X.
We speak of the scale function although it is defined only up to an affine transformation. If s(x) may be taken equal to x, the process is said to be on its natural scale. A process on its natural scale has as much tendency to move to the right as to the left, as is shown in Exercise (3.15). The linear BM, the reflected and absorbed linear BM's are on their natural scale, as was proved in Proposition (3.8) of Chap. II. Finally, a simple transformation of the state space turns X into a process on its natural scale.
(3.4) Proposition. The process X̃_t = s(X_t) satisfies the hypotheses of this section and is on its natural scale.
Proof. Straightforward.
□
The scale function was also computed for BES³ in Chap. VI. In that case, since 0 is not reached from the other points, to stay in the setting of this section, we must look upon BES³ as defined only on ]0, ∞[. The point 0 will be an entrance boundary as defined in Definition (3.9). In this setting, it was proved in Chap. VI that s(x) = −1/x (see also Exercise (3.20) and the generalizations to other Bessel processes in Chap. XI). The proof used the fact that the process X̃ of Proposition (3.4) is a local martingale; we now extend this to the general case.
We put R = ζ ∧ T_l ∧ T_r. Let us observe that if ζ is finite with positive probability and lim_{t↑ζ} X_t = l (say), then lim_{x→l} s(x) is finite and we will by extension write s(l) for this limit. Accordingly, we will say that s(X_ζ) = s(l) in that case.
(3.5) Proposition. A locally bounded Borel function f is a scale function if and only if the stopped process f(X)^R is a local martingale. In particular, X is on its natural scale if and only if X^R is a local martingale.
Proof. If f(X)^R is a local martingale, then for a < x < b, the process f(X)^{T_a∧T_b} is a bounded (ℱ_t, P_x)-martingale and the optional stopping theorem yields

   f(x) = f(a) P_x[T_a < T_b] + f(b) P_x[T_b < T_a].

On the other hand, as already observed,

   P_x[T_a < T_b] + P_x[T_b < T_a] = 1.

Solving this system of linear equations in P_x[T_a < T_b] shows that f is a scale function.
Conversely, by the reasoning in the proof of Proposition (3.2), if f is a scale function, then it is continuous. As a result, for [a, b] ⊂ E̊ and ε > 0, we may find a finite increasing sequence Δ = (a_k)_{k=0,...,K} of numbers such that a_0 = a, a_K = b and |f(a_{k+1}) − f(a_k)| < ε for each k ≤ K − 1. We will write S for T_a ∧ T_b. We define a sequence of stopping times T_n by T_0 = 0, T_1 = T_Δ and

   T_n = T_{n−1} + T_Δ ∘ θ_{T_{n−1}}.

Clearly, {T_n} increases to S and if f is a scale function, the strong Markov property implies, for x ∈ ]a, b[,

   E_x [ f(X_{T_1}^S) ∘ θ_{T_{n−1}} | ℱ_{T_{n−1}} ] = E_{X_{T_{n−1}}} [ f(X_{T_1}^S) ] = f(X_{T_{n−1}}^S)   P_x-a.s.;

in other words, {f(X_{T_n}^S), ℱ_{T_n}} is a bounded P_x-martingale.
For t > 0, let N = inf{n ≥ 1 : T_n ≥ t}, where as usual inf(∅) = ∞. The r.v. N is a stopping time with respect to (ℱ_{T_n}), so that, by the optional stopping theorem,

   E_x [ f(X_{T_N}^S) ] = f(x).

But, on the set {t < S}, N is finite and T_{N−1} < t ≤ T_N ≤ S, which by the choice of Δ implies that

   E_x [ | f(X_t^S) − f(X_{T_N}^S) | ] < ε.

Since ε is arbitrary, it follows that f(x) = E_x[f(X_t^S)], and another application of the Markov property shows that f(X_t^S) is a martingale. The proof is now easily completed.
Remarks. We have thus proved that, up to an affine transformation, there is at most one locally bounded Borel function f such that f(X_t) is a local martingale. This was stated for BM in Exercise (3.13) of Chap. II. If R = ∞, we see that s belongs to the domain of the extended infinitesimal generator A and that As = 0, a fact which agrees well with Theorem (3.12) below.
We now introduce another notion, linked to the speed at which X runs through its paths. We will see in Chap. X how to time-change a BM so as to preserve the Markov property. Such a time-changed BM is on its natural scale, and the converse may also be shown. Thus, the transformation of Proposition (3.4) by means of the scale function turns the process into a time-changed BM. The time-change which will further turn it into a BM may be found through the speed measure which we are about to define now. These questions will be taken up in Sect. 2 of Chap. X.
Let now J = ]c, d[ be an open subinterval of I. By the strong Markov property and the definition of s, one easily sees that for a < c < x < d < b,

   m_I(x) = E_x [ σ_J + σ_I ∘ θ_{σ_J} ]
          = m_J(x) + ((s(d) − s(x))/(s(d) − s(c))) m_I(c) + ((s(x) − s(c))/(s(d) − s(c))) m_I(d).

Since m_J(x) > 0, it follows that m_I is an s-concave function. Taking our cue from Sect. 3 in the Appendix and Exercise (2.8) in Chap. VI, we define a function G_I on E × E by

   G_I(x, y) = (s(x) − s(a))(s(b) − s(y)) / (s(b) − s(a))   if a ≤ x ≤ y ≤ b,
   G_I(x, y) = (s(y) − s(a))(s(b) − s(x)) / (s(b) − s(a))   if a ≤ y ≤ x ≤ b,
   G_I(x, y) = 0   otherwise.
(3.6) Theorem. There is a unique Radon measure m on the interior E̊ of E such that for any open subinterval I = ]a, b[ with [a, b] ⊂ E,

   (*)   m_I(x) = ∫ G_I(x, y) m(dy)   for any x in I.

Proof. By Sect. 3 in the Appendix, for any I, there is a measure ν_I on I for which (*) holds. Thus, we need only prove that if J ⊂ I as in (*), ν_J coincides with the restriction of ν_I to J. But this is a simple consequence of the definition of ν_I; indeed, if we take the s-derivatives in (*), we see that the derivatives of m_I and m_J differ by a constant and, consequently, the associated measures are equal.
□
(3.7) Definition. The measure m is called the speed measure of the process X.
For example, using the result in Exercise (3.11) of Chap. II or Exercise (2.8) of Chap. VI, one easily checks that, if we take s(x) = x, the speed measure of Brownian motion is twice the Lebesgue measure. The reader will also observe that, under m, every open subinterval of E has a strictly positive measure.
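As a numerical illustration (our sketch, not from the text) of Theorem (3.6) in the Brownian case s(x) = x, m(dy) = 2 dy: the integral ∫ G_I(x, y) 2 dy should reproduce the classical value m_I(x) = E_x[T_a ∧ T_b] = (x − a)(b − x).

```python
def G(x, y, a, b):
    """Green function of I = ]a,b[ for the natural scale s(x) = x."""
    if not (a <= min(x, y) and max(x, y) <= b):
        return 0.0
    return (min(x, y) - a) * (b - max(x, y)) / (b - a)

def m_I(x, a, b, n=50000):
    """Integral of G(x, .) against m = 2 dy (speed measure of BM), midpoint rule."""
    h = (b - a) / n
    return sum(G(x, a + (i + 0.5) * h, a, b) * 2.0 * h for i in range(n))

a, b = -1.0, 2.0
vals = {x: m_I(x, a, b) for x in (-0.5, 0.0, 0.5, 1.0)}
```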
We will see, in Theorem (3.12), that the knowledge of the scale function and of the speed measure entails the knowledge of the infinitesimal generator. It is almost equivalent to say that they determine the potential operator of X killed when it exits an interval, which is the content of the
(3.8) Corollary. For any I = ]a, b[, x ∈ I and f ∈ ℬ(E̊)_+,

   E_x [ ∫_0^{T_a∧T_b} f(X_s) ds ] = ∫ G_I(x, y) f(y) m(dy).
Proof. Pick c such that a < c < b. The function v(x) = E_x[∫_0^{T_a∧T_b} 1_{]c,b[}(X_u) du] is s-concave on ]a, b[ and v(a) = v(b) = 0; therefore

   v(x) = − ∫ G_I(x, y) v″(dy),

where v″ is the measure associated with the second s-derivative of v (see Appendix 3). On the other hand, by the Markov property,

   v(x) = E_x[T_c ∧ T_b] + ((s(b) − s(x))/(s(b) − s(c))) v(c)   on ]c, b[,
   v(x) = ((s(x) − s(a))/(s(c) − s(a))) v(c)   on ]a, c].

Since the function E_·[T_c ∧ T_b] is equal to ∫ G_{]c,b[}(·, y) m(dy), the measure associated with its second s-derivative is equal to −1_{]c,b[} m; the measures associated with the other terms are obviously 0, since their s-derivatives are constant. Therefore

   v(x) = ∫ G_I(x, y) 1_{]c,b[}(y) m(dy),

which is the result stated in the case where f = 1_{]c,b[}. The proof is completed by means of the monotone class theorem and the usual extension arguments.
□
From now on, we will specialize to the case of importance for us, namely
when E is either ]0, oo[ or [0, oo[ and we will investigate the behavior of the
process at the boundary {OJ. The reader may easily carry over the notions and
results to the other cases, in particular to that of a compact subinterval. If 0 f/:. E
and if l; = 00 a.s., we introduce the following classification.
306
Chapter VII. Generators and Time Reversal
(3.9) Definition. If E = ]0, ∞[, the point 0 is said to be a natural boundary if for all t > 0 and y > 0,

    lim_{x↓0} P_x[T_y < t] = 0.

It is called an entrance boundary if there are t > 0 and y > 0 such that

    lim_{x↓0} P_x[T_y < t] > 0.
An example where 0 is an entrance boundary is given by the BES³ process described in Sect. 3 of Chap. VI (see more generally Bessel processes of dimension ≥ 2 in Chap. XI). Indeed, the limit in Definition (3.9) is monotone, hence is equal, in the case of BES³, to the probability that BES³(0) has reached y before time t, which is strictly positive. In this case, we see that the term "entrance boundary" is very apt as BES³ is a process on [0, ∞[ (see Sect. 3 Chap. VI) which does not come back to zero after it has left it.
At this juncture, we will further illustrate the previous results by computing the speed measure of BES³. Since the scale function of BES³ is equal to (−1/x), by passing to the limit in Corollary (3.8), we find that the potential operator of BES³ is given, for x > 0, by

    V f(x) = E_x[∫_0^∞ f(X_s) ds] = ∫_0^∞ u(x, y) f(y) m(dy)

where m is the speed measure and u(x, y) = inf(1/x, 1/y). To compute the potential kernel V(0, ·) of BES³(0), we may pass to the limit; indeed for ε > 0, if f vanishes on [0, ε], the strong Markov property shows that

    V f(0) = V f(ε) = ∫_ε^∞ y⁻¹ f(y) m(dy).

Passing to the limit yields that V(0, ·) is the measure with density y⁻¹ 1_{(y>0)} with respect to m. On the other hand, since the modulus of BM³(0) is a BES³(0), V(0, ·) may be computed from the potential kernel of BM³ (Exercise (2.29) Chap. III); it follows, using polar coordinates, that

    V f(0) = (1/2π) ∫_{ℝ³} (f(|x|)/|x|) dx
           = (1/2π) ∫_0^{2π} ∫_{−π/2}^{π/2} ∫_0^∞ ρ⁻¹ f(ρ) ρ² cos φ dρ dφ dθ
           = 2 ∫_0^∞ f(ρ) ρ dρ.

Comparing the two formulas for V f(0) shows that the speed measure for BES³ is the measure with density 2y² 1_{(y>0)} with respect to the Lebesgue measure.
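The polar-coordinate computation can be delegated to a computer algebra system; the following sympy sketch checks the chain of equalities for the particular choice f = 1_{[1,2]}, for which both expressions for V f(0) equal 3:

```python
import sympy as sp

rho, phi, theta, y = sp.symbols('rho phi theta y', positive=True)
# (1/2*pi) * triple integral in polar coordinates, for f = 1_[1,2]:
Vf0 = sp.integrate(rho**-1 * rho**2 * sp.cos(phi),
                   (rho, 1, 2), (phi, -sp.pi/2, sp.pi/2), (theta, 0, 2*sp.pi)) / (2*sp.pi)
assert Vf0 == 2*sp.integrate(rho, (rho, 1, 2))   # = 2 * int f(rho) rho drho = 3
# the first formula for V f(0), with speed density 2*y**2:
assert sp.integrate(y**-1 * 2*y**2, (y, 1, 2)) == Vf0
```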
We now turn to the case E = [0, ∞[. In that situation, s(0) is a finite number and we then always make s(0) = 0. The hypothesis of regularity implies that 0 is visited with positive probability; on the other hand, since regularity again bars the possibility that 0 be absorbing, by the remarks below Proposition (2.19) Chap. III, the process started at 0 leaves {0} immediately.

The speed measure is defined so far only on ]0, ∞[ and the formula in Theorem (3.6) gives, for b > 0, the mean value of T_0 ∧ T_b for the process started at x ∈ ]0, b[. We will show that the definition of G may be extended and that m({0}) may be defined so as to give the mean value of T_b for the process started at x ∈ [0, b[.

We first define a function s̄ on ]−∞, ∞[ by setting s̄(x) = s(x) for x ≥ 0 and s̄(x) = −s(−x) for x < 0. For J = [0, b[, we define Ḡ_J as the function G_Ī defined for Ī = ]−b, b[ by means of s̄ in lieu of s. For x, y ≥ 0, we next define a function G_J by

    G_J(x, y) = Ḡ_J(x, y) + Ḡ_J(x, −y).
(3.10) Proposition. One can choose m({0}) in order that for any x ∈ J and any positive Borel function f,

    E_x[∫_0^{T_b} f(X_s) ds] = ∫_J G_J(x, y) f(y) m(dy).
Proof. Thinking of the case of reflected BM, we define a process X̄ on ]−∞, ∞[ from which X is obtained by reflection at 0. This may be put on a firm basis by using the excursion theory of Chap. XII and especially the ideas of Proposition (2.5) in that chapter. We will content ourselves here with observing that we can define the semi-group of X̄ by setting, for x ≥ 0 and A ⊂ ℝ₊,

    P̄_t(x, A) = E_x[1_A(X_t) 1_{(t≤T_0)}] + ½ E_x[1_A(X_t) 1_{(t>T_0)}]

and, for A ⊂ ℝ₋,

    P̄_t(x, A) = ½ E_x[1_{Ã}(X_t) 1_{(t>T_0)}]

where Ã = −A. For x < 0, we set P̄_t(x, A) = P̄_t(−x, −A). Using the fact that T_0 = t + T_0 ∘ θ_t on {T_0 > t}, it is an exercise on the Markov property to show that this defines a transition semi-group. Moreover, excursion theory would insure that X̄ has all the properties we demanded of X at the start of the section. It is easy to see that s̄ is a scale function for X̄ and the corresponding speed measure m̄ then coincides with m on ]−∞, 0[ and ]0, ∞[.

Let now f̄ be the function equal to f on [0, ∞[ and defined by f̄(x) = f(−x) on ]−∞, 0[. By applying Corollary (3.8) to X̄, we have

    E_x[∫_0^{T_b ∧ T_{−b}} f̄(X̄_s) ds] = ∫_{−b}^{b} Ḡ_J(x, y) f̄(y) m̄(dy)
        = ∫_0^b G_J(x, y) f(y) m(dy) + ½ G_J(x, 0) f(0) m̄({0}).

It remains to set m({0}) = m̄({0})/2 to get the result. □

We observe that m({0}) < ∞.
(3.11) Definition. The point 0 is said to be slowly reflecting if m({0}) > 0 and instantaneously reflecting if m({0}) = 0.

If absorbing points were not excluded by regularity, it would be consistent to set m({0}) = ∞ for 0 absorbing.

For the reflected BM, the point 0 is instantaneously reflecting; the Lebesgue measure of the set {t : X_t = 0} is zero, which is typical of instantaneously reflecting points. For slowly reflecting points, the same set has positive Lebesgue measure, as is seen by taking f = 1_{{0}} in the above result. An example of a slowly reflecting point will be given in Exercise (2.29) Chap. X.
We next turn to the description of the extended infinitesimal generator A of X; its domain (see Sect. 1) is denoted by 𝔻_A. Here E is any subinterval of ℝ. We recall that the s-derivative of a function f at a point x is the limit, if it exists, of the ratios (f(y) − f(x))/(s(y) − s(x)) as y tends to x. The notions of right and left s-derivatives extend similarly.

(3.12) Theorem. For a bounded function f of 𝔻_A and x ∈ E,

    Af(x) = (d/dm)(d/ds) f(x)

in the sense that:
i) the s-derivative df/ds exists except possibly on the set {x : m({x}) > 0};
ii) if x₁ and x₂ are two points for which this s-derivative exists,

    (df/ds)(x₂) − (df/ds)(x₁) = ∫_{x₁}^{x₂} Af(y) m(dy).
Proof. If f ∈ 𝔻_A, by definition

    M_t^f = f(X_t) − f(X_0) − ∫_0^t Af(X_s) ds

is a martingale. Moreover, |M_t^f| ≤ 2‖f‖ + t‖Af‖ so that, if T is a stopping time such that E_x[T] < ∞, then M^f_{t∧T} is uniformly integrable under P_x and therefore

    E_x[f(X_T)] − f(x) = E_x[∫_0^T Af(X_s) ds].

For I = ]a, b[ ⊂ E and a < x < b, we may apply this to T = T_a ∧ T_b and, by Corollary (3.8), it follows that

(#)    f(a)(s(b) − s(x)) + f(b)(s(x) − s(a)) − f(x)(s(b) − s(a))
          = (s(b) − s(a)) ∫ G_I(x, y) Af(y) m(dy).
By straightforward computations, this may be rewritten

    (f(b) − f(x))/(s(b) − s(x)) − (f(x) − f(a))/(s(x) − s(a)) = ∫ H_I(x, y) Af(y) m(dy)

where

    H_I(x, y) = (s(y) − s(a))/(s(x) − s(a))   if a < y ≤ x,
    H_I(x, y) = (s(b) − s(y))/(s(b) − s(x))   if x ≤ y < b,
    H_I(x, y) = 0                              otherwise.
If we let b decrease to x, the integrand ((s(b) − s(y))/(s(b) − s(x))) 1_{(x≤y<b)} tends to 1_{{x}}; by an application of Lebesgue's theorem, we see that the right s-derivative of f exists. Similarly, the left s-derivative exists. If we let simultaneously a and b tend to x, we find, applying again Lebesgue's theorem, that

    (df⁺/ds)(x) − (df⁻/ds)(x) = m({x}) Af(x),

which yields part (i) of the statement.
To prove the second part, we pick h such that a < x + h < b. Applying (#) to x and x + h and subtracting, we get

    f(b) − f(a) − (s(b) − s(a)) (f(x+h) − f(x))/(s(x+h) − s(x))
        = (s(b) − s(a)) ∫ ((G_I(x+h, y) − G_I(x, y))/(s(x+h) − s(x))) Af(y) m(dy).

If the s-derivative of f at x exists, letting h go to zero and applying once more Lebesgue's theorem yields

    f(b) − f(a) − (s(b) − s(a)) (df/ds)(x)
        = −∫_a^x (s(y) − s(a)) Af(y) m(dy) + ∫_x^b (s(b) − s(y)) Af(y) m(dy)
        = −∫_a^b s(y) Af(y) m(dy) + s(a) ∫_a^x Af(y) m(dy) + s(b) ∫_x^b Af(y) m(dy).
Let x₁ < x₂ be two such points; by subtraction, we obtain

    (df/ds)(x₂) − (df/ds)(x₁) = ∫_{x₁}^{x₂} Af(y) m(dy),

which is the desired result. □
Remarks. 1°) The reader will observe that, if s is multiplied by a constant, since by
its very definition m is divided by the same constant, the generator is unchanged,
as it should be.
2°) For the linear BM, we get A = ½ d²/dx², as we ought to. The reader can further check, using the values for s and m found above, that for the BES³ we have A = ½ d²/dx² + (1/x) d/dx; this jibes with the SDE satisfied by BES³ which was given in Sect. 3 of Chap. VI.

3°) The fact that s(X) is a local martingale agrees with the form of the infinitesimal generator and Proposition (1.6).
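The claim about BES³ in Remark 2°) is quickly verified symbolically: with s(x) = −1/x and speed density 2x² as computed in this section, d/dm d/ds indeed gives ½ f″ + (1/x) f′. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.Function('f')
s = -1/x                  # scale function of BES3
m_density = 2*x**2        # speed measure density found above
df_ds = sp.diff(f(x), x) / sp.diff(s, x)     # df/ds
Af = sp.diff(df_ds, x) / m_density           # (d/dm)(d/ds) f
assert sp.simplify(Af - (sp.diff(f(x), x, 2)/2 + sp.diff(f(x), x)/x)) == 0
```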
We now investigate what happens at the boundary point 0 when E = [0, ∞[. The positive maximum principle shows that the functions of 𝔻_A must satisfy some condition on their first derivative. More precisely, we have the
(3.13) Proposition. If E = [0, ∞[, then for every bounded f ∈ 𝔻_A,

    (df⁺/ds)(0) = m({0}) Af(0).

Proof. Using Proposition (3.10) instead of Corollary (3.8), the proof follows the same pattern as above and is left to the reader as an exercise. □
For the reflected BM, we see that, by continuity, Af(0) = ½ f″(0) and that f′(0) = 0, which is consistent with the positive maximum principle; indeed, a function such that f′(0) < 0 could have a maximum at 0 with f″(0) > 0. It is also interesting to observe that the infinitesimal generators of the reflected BM and the absorbed BM coincide on C_K²(]0, ∞[) (see Exercise (3.16)).
As may be surmised, the map X → (s, m) is one-to-one or, in other words, the pair (s, m) is characteristic of the process X. We prove this in a special case which will be useful in Chap. XI.
(3.14) Proposition. If X and X̃ are two Feller processes on [0, ∞[ such that s = s̃ and m = m̃, then they are equivalent.

Proof. By Propositions (1.6) and (1.7) on the one hand, and Theorem (3.12) and Proposition (3.13) on the other hand, the spaces 𝔻_A and 𝔻_Ã are equal and so are the generators A and Ã. It follows from Exercise (1.18) that the semi-groups of X and X̃ are equal, whence the result follows. □
(3.15) Exercise. Prove that X is on its natural scale if and only if for any a < b and x₀ = (a + b)/2,

    P_{x₀}[T_a < T_b] = 1/2.
(3.16) Exercise. Prove that the domain of the infinitesimal generator of the reflected BM is exactly {f ∈ C₀²([0, ∞[) : f′(0) = 0}.
* (3.17) Exercise. 1°) (Dynkin's operator). For x ∈ E̊ and h sufficiently small, call I(h) the interval ]x − h, x + h[. For f ∈ 𝔻_A, prove that

    Af(x) = lim_{h↓0} (E_x[f(X_{T_{I(h)}})] − f(x)) / m_{I(h)}(x),

where, for I = ]a, b[, T_I denotes the exit time T_a ∧ T_b and m_I(x) = E_x[T_I]. The limit on the right may exist even for functions f which are not in 𝔻_A. We will then still call the limit Af, thus defining a further extension of the operator A.
2°) For I = ]a, b[ ⊂ E̊, define p_I(x) = P_x[T_b < T_a]. Prove that Ap_I = 0 on I and Am_I = −1 on I.
(3.18) Exercise. If φ is a homeomorphism of an interval E onto an interval Ẽ and if X is a process on E satisfying the hypothesis of this section, then X̃ = φ(X) is a process on Ẽ satisfying the same hypothesis. Prove that s̃ = s ∘ φ⁻¹ and that m̃ is the image of m by φ.
(3.19) Exercise. Suppose that X is on its natural scale and that E = [0, ∞[; prove that for any ε > 0, ∫_{]0,ε[} y m(dy) < ∞.
#
(3.20) Exercise. Let X be a diffusion on ℝ with infinitesimal generator

    L = ½ a²(x) d²/dx² + b(x) d/dx,

where a and b are locally bounded Borel functions and a does not vanish. We assume that X satisfies the hypothesis of this section (see Exercise (2.10) Chap. IX).
1°) Prove that the scale function is given by

    s(x) = ∫_c^x exp(−∫_c^y 2b(z) a⁻²(z) dz) dy

where c is an arbitrary point in ℝ. In particular, if b = 0, the process is on its natural scale.
2°) Prove that the speed measure is the measure with density 2/(s′a²) with respect to the Lebesgue measure, where s′ is the derivative of s.
[Hint: Use Exercise (3.18).]
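These formulas can be checked symbolically on the diffusion of Exercise (3.25) below, for which a²(x) = 2(1 + x²) and b(x) = 2x; one finds s′(x) = 1/(1 + x²), hence s(x) = arctan x (taking c = 0), and speed density 1. A sympy sketch:

```python
import sympy as sp

x, z = sp.symbols('x z')
f = sp.Function('f')
a2 = 2*(1 + x**2)         # a^2(x) for the diffusion of Exercise (3.25)
b = 2*x                   # b(x)
s_prime = sp.exp(-sp.integrate((2*b/a2).subs(x, z), (z, 0, x)))
assert sp.simplify(s_prime - 1/(1 + x**2)) == 0      # so s(x) = arctan x
m_density = 2/(s_prime*a2)                           # speed density 2/(s' a^2)
assert sp.simplify(m_density - 1) == 0
# d/dm d/ds recovers the generator L:
Lf = sp.diff(sp.diff(f(x), x)/s_prime, x)/m_density
assert sp.simplify(Lf - (a2*sp.diff(f(x), x, 2)/2 + b*sp.diff(f(x), x))) == 0
```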
(3.21) Exercise. 1°) If E = ]l, r[ and there exist x, b with l < x < b < r such that P_x[T_b < ∞] = 1 (for instance in the case of BES^d, d > 2), then s(l+) = lim_{a↓l} s(a) = −∞. Conversely, if s(l+) = −∞, prove that P_x[T_b < ∞] = 1 for every l < x < b < r.
2°) If, moreover, s(r−) = lim_{b↑r} s(b) = ∞, prove that X is recurrent.
3°) If, instead of the condition in 2°), we have s(r−) < ∞, prove that

    P_x[lim_{t→∞} X_t = r] = P_x[inf_t X_t > l] = 1

and find the law of γ = inf_t X_t under P_x. As a result, X is recurrent if and only if s(l+) = −∞ and s(r−) = ∞.
4°) Under the conditions of 3°), i.e. s(l+) = −∞, s(r−) < ∞, prove that there is a unique time ρ such that X_ρ = γ.
[Hint: use the same method as in Exercise (3.17) of Chap. VI, the hypotheses of which, the reader may observe, are but a particular case of those in 3°).]
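For BES³, where s(x) = −1/x on ]0, ∞[ (so s(0+) = −∞ and s(r−) = 0 < ∞), the law asked for in 3°) turns out to be uniform: P_x[γ ≤ a] = a/x, the classical fact that the overall infimum of a BES³ started at x is uniform on [0, x]. A Monte Carlo sketch for x = 1, realizing BES³ as |BM³| (horizon and step size are ad-hoc truncations):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 2000, 0.0025, 12000            # horizon 30
pos = np.zeros((n, 3)); pos[:, 0] = 1.0       # BES3 started at 1, as |BM^3| from (1,0,0)
running_min = np.ones(n)
for _ in range(steps):
    pos += np.sqrt(dt) * rng.standard_normal((n, 3))
    np.minimum(running_min, np.linalg.norm(pos, axis=1), out=running_min)
# gamma = inf_t X_t should be (approximately) uniform on [0, 1]
assert abs(running_min.mean() - 0.5) < 0.1
assert abs((running_min < 0.5).mean() - 0.5) < 0.1
```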
(3.22) Exercise. Let X be a strong Markov process on ℝ with continuous paths. For a ∈ ℝ, set

    D_{a+} = D_{]a,∞[},   D_{a−} = D_{]−∞,a[},

where D_A denotes the first entrance time of A.
1°) Prove that either P_a[D_{a+} = ∞] = 1 or P_a[D_{a+} = 0] = 1, and similarly with D_{a−}. The point a is said to be regular (resp.: a left shunt, a right shunt, a trap) if P_a[D_{a+} = 0, D_{a−} = 0] = 1 (resp.: P_a[D_{a+} = ∞, D_{a−} = 0] = 1, P_a[D_{a+} = 0, D_{a−} = ∞] = 1, P_a[D_{a+} = ∞, D_{a−} = ∞] = 1). Find examples of the four kinds of points. Prove that a regular point is regular for itself in the sense of Exercise (2.25) in Chap. III, in other words P_a[T_a = 0] = 1. For a process satisfying the hypothesis of this section, prove that all the points in the interior of E are regular.
2°) Prove that the set of regular points is an open set, hence a union of intervals. Assume that ζ = ∞ a.s. and show that the process can be restricted to any of these intervals so as to obtain a process satisfying the hypothesis of this section.
(3.23) Exercise. If E = ]0, ∞[, ζ = ∞ a.s. and X is on its natural scale, then 0 is a natural boundary.
(3.24) Exercise. Let E = ℝ and X be recurrent (Exercise (3.21), 3°)). Choose s such that s(0) = 0. If μ is a probability measure on ℝ such that

    ∫ |s(x)| dμ(x) < ∞,   ∫ s(x) dμ(x) = 0,

prove that there exists a stopping time T such that the law of X_T under P_0 is μ.
* (3.25) Exercise (Asymptotic study of a particular diffusion). In the notation of Exercise (3.20) let a²(x) = 2(1 + x²) and b(x) = 2x.
1°) Prove that X_t = tan(Y_{H_t}) where Y is a BM¹ and

    H_t = 2 ∫_0^t (1 + X_s²)⁻¹ ds = inf{u : ∫_0^u (1 + tan²(Y_s)) ds > 2t}.

[Hint: Use Exercise (3.20) and Proposition (3.4).]
2°) Show that, as t tends to ∞,

    lim_{t→∞} H_t = inf{u : |Y_u| = π/2}   a.s.,

and lim_{t→∞} |X_t| = ∞ a.s.
3°) Show that for t₀ sufficiently large (for instance t₀ = sup{t : |X_t| = 1}), the following formula holds for every t > t₀:

    log|X_t| = log|X_{t₀}| + ∫_{t₀}^t X_s⁻¹ (2(1 + X_s²))^{1/2} dβ_s + t − t₀ − ∫_{t₀}^t X_s⁻² ds

where β is a BM.
4°) Show that t^{−1/2}(log|X_t| − t) converges in law, as t tends to ∞, to √2 G where G is a standard Gaussian r.v.
§4. Time Reversal and Applications
In this section, we consider a Markov process with general state space and continuous paths on [0, ζ[ and assume that X_{ζ−} exists a.s. on {ζ < ∞}, as is the case for Feller processes (Theorem (2.7) of Chap. III). Our goal is to show that, under suitable analytic conditions, one can get another Markov process by running the paths of X in the reverse direction starting from a special class of random times which we now define.
(4.1) Definition. A positive r.v. L on Ω is a cooptional time if
i) {L < ∞} ⊂ {L ≤ ζ};
ii) for every t ≥ 0, L ∘ θ_t = (L − t)⁺.

The reader will check that ζ is a cooptional time and so is the last exit time of a Borel set A defined by

    L_A(ω) = sup{t : X_t(ω) ∈ A}

where sup(∅) = 0. We also have the
(4.2) Proposition. If L is cooptional, then for any s ≥ 0, the r.v. (L − s)⁺ is also cooptional.

Proof. Condition i) of (4.1) is obviously satisfied by (L − s)⁺ and moreover

    (L − s)⁺ ∘ θ_t = (L ∘ θ_t − s)⁺ = ((L − t)⁺ − s)⁺ = (L − t − s)⁺ = ((L − s)⁺ − t)⁺,

which is condition ii). □
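In discrete time the cooptional identity for last exit times can be checked mechanically: dropping the first t steps of the path plays the role of θ_t, and the hypothetical helper last_exit below implements L_A with the convention sup(∅) = 0. A numpy sketch:

```python
import numpy as np

def last_exit(path, lo=-0.25, hi=0.25):
    """Discrete L_A = sup{n : path[n] in A} for A = [lo, hi], with sup(empty) = 0."""
    idx = np.flatnonzero((path >= lo) & (path <= hi))
    return int(idx[-1]) if idx.size else 0

rng = np.random.default_rng(0)
path = 0.1 * np.cumsum(rng.standard_normal(500))   # a discretized Brownian path
L = last_exit(path)
for t in range(0, 400, 7):
    # cooptional property ii): L(theta_t omega) = (L - t)^+
    assert last_exit(path[t:]) == max(L - t, 0)
```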
In what follows, L is a fixed, a.s. finite and strictly positive cooptional time and we define a new process X̂ taking its values in E_Δ by setting for t > 0

    X̂_t(ω) = X_{L(ω)−t}(ω)   if 0 < t < L(ω),
    X̂_t(ω) = Δ               if L(ω) ≤ t or L(ω) = ∞,

and X̂_0 = X_{L−} if 0 < L < ∞ and X̂_0 = Δ otherwise.

We will set ℱ̂_t = σ(X̂_s, s ≤ t). On {L > t + u}, we have, using Property ii) of Definition (4.1), L(θ_u) = L − u > t, hence

    X̂_t(θ_u) = X_{L(θ_u)−t}(θ_u) = X_{L−u−t+u} = X̂_t.

It follows from the monotone class theorem that, if Γ is in ℱ̂_t, then for every u ≥ 0,

    θ_u⁻¹(Γ) ∩ {t + u < L} = Γ ∩ {t + u < L}.
We now introduce the set-up in which we will show that X̂ is a Markov process. We assume that:
i) there is a probability measure μ such that the potential ν = μU, where U is the potential kernel of X (Exercise (2.29) Chap. III), is a Radon measure;
ii) there is a second semi-group on E, denoted by (P̂_t), such that
a) if f ∈ C_K(E), then P̂_t f is right-continuous in t;
b) the resolvents (U_p) and (Û_p) are in duality with respect to ν, namely

    ∫ U_p f · g dν = ∫ f · Û_p g dν

for every p > 0 and all positive Borel functions f and g.

Examples will be given later in this section. The last equality will also be written

    ⟨U_p f, g⟩_ν = ⟨f, Û_p g⟩_ν.

If X̂ is another Markov process with (P̂_t) as transition semi-group, we say that X and X̂ are in duality with respect to ν. Using the Stone-Weierstrass and monotone class theorems, it is not difficult to see that this relationship entails that for any positive Borel function φ on ℝ₊

    ⟨P_φ f, g⟩_ν = ⟨f, P̂_φ g⟩_ν,

where P_φ f(x) = ∫_0^∞ φ(t) P_t f(x) dt.
Our goal is to prove that X̂ is a Markov process with transition semi-group (P̂_t). We will use the following lemmas.

(4.3) Lemma. Given r > 0, φ a positive Borel function on ℝ₊ and H a positive ℱ̂_r-measurable r.v., then for any positive Borel function f on E,

    ∫_0^∞ φ(t) E_μ[f(X̂_{t+r}) H] dt = ∫ f h_φ dν

where h_φ(x) = E_x[H φ(L − r) 1_{(r<L)}]. Moreover, for s > r,

    ∫_0^∞ φ(t) E_μ[f(X̂_{t+s}) H] dt = ∫ f P_{s−r} h_φ dν.

Proof. By considering (L − r)⁺ instead of L, we may take r = 0 in the equality to be proven. The left-hand side is then equal to

    ∫_0^∞ E_μ[f(X_{L−t}) 1_{(L>t)} H] φ(t) dt = ∫_0^∞ E_μ[H f(X_u) φ(L − u) 1_{(L>u)}] du
        = ∫_0^∞ E_μ[((H φ(L) 1_{(L>0)}) ∘ θ_u) f(X_u)] du

since, as a consequence of the definition of ℱ̂_0, we have H = H ∘ θ_u on {L > u}. Furthermore, by the Markov property of X, the last expression is equal to

    ∫_0^∞ E_μ[f(X_u) h_φ(X_u)] du = ∫ f h_φ dν,

which proves the first part of the lemma.
To prove the second part, observe that since ℱ̂_r ⊂ ℱ̂_s, the r.v. H is also in ℱ̂_s so that, by the first part, we may write

    ∫_0^∞ φ(t) E_μ[f(X̂_{t+s}) H] dt = ∫ f(x) E_x[φ(L − s) H 1_{(s<L)}] ν(dx).

But since 1_{(r<L)} ∘ θ_{s−r} = 1_{(s<L)} and H ∘ θ_{s−r} = H on {L > s}, we have φ(L − s) H 1_{(s<L)} = (φ(L − r) H 1_{(r<L)}) ∘ θ_{s−r} and the second part follows from the Markov property. □
(4.4) Lemma. Keeping the notation of Lemma (4.3), if ψ is another positive Borel function on ℝ₊, then

    P_ψ h_φ = h_{ψ∗φ} = P_φ h_ψ.

Proof. From the above proof, it follows that

    P_{s−r} h_φ = E_·[φ(L − s) H 1_{(s<L)}].

As a result,

    P_ψ h_φ = ∫_0^∞ ψ(s − r) P_{s−r} h_φ ds = E_·[∫_0^∞ ψ(s − r) φ(L − s) H 1_{(s<L)} ds]
            = E_·[1_{(r<L)} H ∫_r^L ψ(s − r) φ(L − s) ds] = h_{ψ∗φ},

which is the first equality. The second one follows by symmetry. □
We may now turn to the main result of this section.

(4.5) Theorem. Under P_μ, the process X̂ is a Markov process with respect to (ℱ̂_t) with transition function (P̂_t).

Proof. We want to prove that E_μ[f(X̂_{t+s}) | ℱ̂_s] = P̂_t f(X̂_s) P_μ-a.s. But, using the notation and result in Lemma (4.3), we have

    ∫_0^∞ φ(t) E_μ[f(X̂_{t+s}) H] dt = ⟨f, P_{s−r} h_φ⟩_ν

and, on the other hand, Fubini's theorem yields

    ∫_0^∞ φ(t) E_μ[P̂_t f(X̂_s) H] dt = E_μ[P̂_φ f(X̂_s) H].

Let us compare the right members of these identities. Using the above lemmas and the duality property, we get

    ∫_0^∞ ψ(s − r) ⟨f, P_{s−r} h_φ⟩_ν ds = ⟨f, P_ψ h_φ⟩_ν = ⟨f, P_φ h_ψ⟩_ν
        = ⟨P̂_φ f, h_ψ⟩_ν = ∫_0^∞ ψ(s − r) E_μ[P̂_φ f(X̂_s) H] ds,

the last equality by Lemma (4.3) applied to P̂_φ f. It follows that there is a Lebesgue-negligible set N(f, r, H, φ) ⊂ ℝ₊ such that for s > r, s ∉ N(f, r, H, φ),

    E_μ[P̂_φ f(X̂_s) H] = ⟨f, P_{s−r} h_φ⟩_ν.

Let N be the union of the sets N(f, r, H, φ) where f runs through a dense sequence in C_K(E), φ runs through a sequence which is dense in C_K(ℝ₊), r runs through the rational numbers > 0 and, for each r, H runs through a countable algebra of bounded functions generating σ(X̂_u, u ≤ r); then, for s ∉ N, the last displayed equality holds simultaneously for every f ∈ C_K(E), φ ∈ C_K(ℝ₊), r ∈ ℚ₊, s > r and every H which is σ(X̂_u, u ≤ r)-measurable and bounded. As a result, under the same conditions,

    E_μ[f(X̂_{t+s}) H] = E_μ[P̂_t f(X̂_s) H]

for almost every t. But, by the property ii) a) of P̂_t, both sides are right-continuous in t and the equality holds for every t.

Next, because of the continuity of paths, the filtration (ℱ̂_t) is right and left continuous (i.e. ℱ̂_{t−} = ℱ̂_t = ℱ̂_{t+} for each t) up to sets of P_μ-measure zero; it follows first that the last displayed equality is valid with H in ℱ̂_s and, finally, since each s is a limit of points in N^c, that this equality holds without restriction. As a result, for f ∈ C_K(E),

    E_μ[f(X̂_{s+t}) | ℱ̂_s] = P̂_t f(X̂_s)

for every s and t, which is the desired result. □
Remarks. 1°) This result does not show that X̂ has good properties such as the Feller or strong Markov property. However, in many applications the semi-group (P̂_t) is a Feller semi-group, which insures that X̂ has good properties (see also Exercise (4.13)).
2°) If (P̂_t) is already known to be the semi-group of a process X̃, then X̂ under P_μ has the same law as X̃ under P̃_μ̂, where μ̂ = X_{L−}(P_μ) is the law of X_{L−} under P_μ.
We will now give two applications of Theorem (4.5) to the Bessel process of dimension 3 which was introduced and studied in Sect. 3 Chap. VI. This process and the BM killed at 0 (which we will denote by BM⁰) are in duality with respect to the measure ν(dx) = 2x dx. Indeed, in the notation of Chap. VI, using the fact that Q_t is in duality with itself with respect to Lebesgue's measure, we have, for f, g ≥ 0 and h(x) = x,

    ∫_0^∞ f(x) P_t g(x) x dx = ∫_0^∞ f(x) x⁻¹ Q_t(gh)(x) x dx = ∫_0^∞ Q_t f(x) g(x) x dx,

which proves our claim. This is a particular instance of a more general result; the process BES³ is the h-process of BM⁰ as defined in Sect. 3 Chap. VIII and as such is in duality with BM⁰ with respect to the measure h(x) dx (see Exercise (3.17) in Chap. VIII).

Furthermore, it was shown in the last section that ν is precisely equal to the potential measure U(0, ·) of BES³(0). Thus, we are exactly in the setting of Theorem (4.5) with μ = ε₀, and we may state the
(4.6) Corollary (Williams). Let X be a BES³(0) and B a BM(b) with b > 0; then, if L_b = sup{t : X_t = b}, the processes {X_{L_b−t}, 0 ≤ t ≤ L_b} and {B_t, 0 ≤ t ≤ T_0} have the same law.
Remarks. 1°) Another proof of this result relying on excursion theory will be given in Chap. XII.
2°) This corollary implies that the law of L_b for BES³(0) is the same as the law of T_0 for BM(b), which was computed in Chap. II, Proposition (3.7) and Chap. III, Proposition (3.7).
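Remark 2°) is directly testable by simulation: P[T_0 ≤ t] for BM(b) is 2(1 − Φ(b/√t)), and the last passage time L_b of a BES³(0) (realized as |BM³|, truncated to a long but finite horizon) should have the same distribution function. A rough sketch with b = 1, t = 1; horizon, step size and tolerance are ad-hoc choices:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps, b = 2000, 0.01, 15000, 1.0        # horizon 150
pos = np.zeros((n, 3))
last_visit = np.full(n, np.inf)                 # inf: level b never reached
r_prev = np.zeros(n)
for k in range(1, steps + 1):
    pos += np.sqrt(dt) * rng.standard_normal((n, 3))
    r = np.linalg.norm(pos, axis=1)
    crossed = (r_prev - b) * (r - b) <= 0       # radius crossed the level b
    last_visit[crossed] = k * dt
    r_prev = r
p_hat = (last_visit <= 1.0).mean()              # estimate of P[L_b <= 1]
p_exact = 2 * (1 - 0.5 * (1 + math.erf(b / math.sqrt(2))))   # P[T_0 <= 1] for BM(b)
assert abs(p_hat - p_exact) < 0.07
```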
Our second application deals with the process BES³ killed when it first hits a point b > 0. More precisely, if X is a BES³, we consider the process X^b defined by

    X^b_t = X_t if t < T_b and X_0 ∈ [0, b[,   X^b_t = Δ otherwise,

where as usual T_b = inf{t > 0 : X_t = b}. It was shown in Exercise (2.30) of Chap. III that this is a Markov process on [0, b[ and clearly T_b is the deathtime, hence a cooptional time, for X^b.
(4.7) Lemma. The processes X^b and b − X^b are in duality with respect to the measure ξ(dx) = x(b − x) dx on [0, b].

Proof. We have already used the fact that the potential U of X has the density u(x, y) = inf(1/x, 1/y) with respect to the measure 2y² dy. By a simple application of the strong Markov property, we see that the potential V of X^b is given, for x < b, by

    V f(x) = E_x[∫_0^{T_b} f(X_t) dt] = U f(x) − P_{T_b} U f(x) = 2 ∫_0^b (u(x, y) − u(b, y)) f(y) y² dy;

in other words, V has the density v(x, y) = inf(1/x, 1/y) − 1/b with respect to the measure 2y² 1_{(0≤y≤b)} dy. Clearly, the potential Ṽ of the process b − X^b has the density v(b − x, b − y) with respect to the measure 2(b − y)² 1_{(0≤y≤b)} dy. It is then a tedious but elementary computation to check that for f, g ≥ 0,

    ∫ V f · g dξ = ∫ f · Ṽ g dξ.

Now the mapping f → V f (resp. f → Ṽ f) is bounded on the space of bounded functions on [0, b] so that the result follows from Exercise (4.17). □
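The "tedious but elementary computation" can be delegated to sympy: writing both potentials as kernels with respect to ξ, the duality amounts to a symmetry of the corresponding densities, and on {x ≤ y} (the case x ≥ y being symmetric) both densities reduce to the constant 2/b. A sketch:

```python
import sympy as sp

x, y, b = sp.symbols('x y b', positive=True)
# On {x <= y}: v(x, y) = 1/y - 1/b, and v(b - y, b - x) = 1/(b - x) - 1/b.
# V-kernel density with respect to xi(dy) = y(b - y) dy:
k = (1/y - 1/b) * 2*y**2 / (y*(b - y))
# kernel of the potential of b - X^b with respect to xi, arguments swapped:
kt = (1/(b - x) - 1/b) * 2*(b - x)**2 / (x*(b - x))
assert sp.simplify(k - 2/b) == 0
assert sp.simplify(kt - 2/b) == 0      # symmetry, hence duality wrt xi
```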
(4.8) Proposition. If X is a BES³(0) and b is strictly positive, the processes (X_{T_b−t}, 0 ≤ t ≤ T_b) and (b − X_t, 0 ≤ t ≤ T_b) are equivalent.

Proof. The potential measure V(0, dy) is equal, by what we have just seen, to 2(1/y − 1/b) y² dy = (2/b) ξ(dy). Thus the result follows at once from Theorem (4.5) and the above lemma. □
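A testable consequence of the proposition: since time reversal preserves occupation times, the expected occupation measure of BES³(0) before T_b is symmetric about b/2; for b = 1 each half carries expected occupation ∫ v(0, y) 2y² dy = 1/6 (with v(0, y) = 1/y − 1 from the lemma). A Monte Carlo sketch (step size and cap are ad-hoc):

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt, max_steps, b = 2000, 5e-4, 40000, 1.0
pos = np.zeros((n, 3))
occ_lo = np.zeros(n); occ_hi = np.zeros(n)
alive = np.arange(n)
for _ in range(max_steps):
    if alive.size == 0:
        break
    pos[alive] += np.sqrt(dt) * rng.standard_normal((alive.size, 3))
    r = np.linalg.norm(pos[alive], axis=1)
    occ_lo[alive] += dt * (r <= b/2)            # time in [0, b/2]
    occ_hi[alive] += dt * ((r > b/2) & (r < b)) # time in ]b/2, b[
    alive = alive[r < b]                        # kill at T_b
assert abs(occ_lo.mean() - 1/6) < 0.03
assert abs(occ_hi.mean() - 1/6) < 0.03
```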
Bringing together Corollary (4.6), Proposition (4.8) and Theorem (3.11) of
Chap. VI we obtain
[Fig. 7: a path rising from 0 to b, with the level a marked; the successive pieces are labelled BM(0) and a − BES³(0).]
(4.9) Theorem (Williams' Brownian path decomposition). For b > 0, let there be given the four following independent elements:
i) a r.v. a uniformly distributed on [0, b];
ii) a standard BM B;
iii) two BES³(0) processes ρ and ρ′;
and define

    T_a = inf{t : B_t = a},
    g_{T_b} = T_a + sup{t : a − ρ(t) = 0},
    T_b = g_{T_b} + inf{t : ρ′(t) = b};

then, the process X defined for 0 ≤ t ≤ T_b by

    X_t = B_t                for 0 ≤ t ≤ T_a,
    X_t = a − ρ(t − T_a)     for T_a ≤ t ≤ g_{T_b},
    X_t = ρ′(t − g_{T_b})    for g_{T_b} ≤ t ≤ T_b,

is a BM(0) killed when it first hits b.
Proof. By Corollary (4.6), a BM killed at time T_b is a time-reversed BES³(0), to which we apply the decomposition Theorem (3.11) of Chap. VI. The time-reversed parts are easily identified by means of Corollary (4.6) and Proposition (4.8). Here again, the result is best described by Figure 7; it is merely Figure 5 of Chap. VI put "upside down". □

Remark. There are actually other proofs of the fact that BM taken between g_{T_b} and T_b (if g_{T_b} is the last zero before T_b) is a BES³. If this result were known, then the above decomposition theorem might be deduced from (4.6) and Theorem (3.11) of Chap. VI without having to resort to Proposition (4.8) above.
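One distributional consequence of the theorem can be simulated: in the decomposition, the maximum of the path before g_{T_b} equals a (the first piece ends at its running maximum a, and the middle piece a − ρ stays below a), so for BM(0) the maximum reached before the last zero preceding T_b should be, up to truncation effects, uniform on [0, b]. A rough sketch with b = 1 (horizon and tolerances are ad-hoc):

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt, max_steps, b = 2000, 1e-3, 100_000, 1.0   # horizon 100; a few paths are dropped
pos = np.zeros(n); runmax = np.zeros(n)
max_at_last_zero = np.zeros(n)
a_sample = np.full(n, np.nan)
alive = np.arange(n)
sdt = np.sqrt(dt)
for _ in range(max_steps):
    if alive.size == 0:
        break
    cur = pos[alive]
    new = cur + sdt * rng.standard_normal(alive.size)
    crossed = cur * new <= 0                     # path touched 0 during this step
    mz = max_at_last_zero[alive]
    mz[crossed] = runmax[alive][crossed]         # running max up to that zero
    max_at_last_zero[alive] = mz
    runmax[alive] = np.maximum(runmax[alive], new)
    pos[alive] = new
    hit = new >= b
    a_sample[alive[hit]] = max_at_last_zero[alive[hit]]
    alive = alive[~hit]
a_sample = a_sample[~np.isnan(a_sample)]         # drop paths not absorbed by the horizon
assert abs(a_sample.mean() - b/2) < 0.08         # uniform on [0, b] has mean b/2
assert abs((a_sample < b/2).mean() - 0.5) < 0.08
```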
(4.10) Exercise. If L and L′ are two cooptional times, then L ∨ L′ and L ∧ L′ are cooptional times.
* (4.11) Exercise. Let L be a cooptional time and 𝒢_L be the family of sets Γ ∈ ℱ such that for every u ≥ 0,

    Γ ∩ {L > u} = θ_u⁻¹(Γ) ∩ {L > u}.

1°) Prove that 𝒢_L is a σ-algebra (see also Exercise (4.13) below) and that L and X_L are 𝒢_L-measurable.
2°) If Λ ∈ 𝒢_L, prove that the r.v. L_Λ defined by

    L_Λ = L on Λ,   L_Λ = 0 on Λ^c,

is cooptional.
* (4.12) Exercise. 1°) Let ρ_t be the modulus of BM² and suppose that ρ_0 = r with 0 < r < 1. Prove that there exists a BM¹ Y started at (−log r) such that −log ρ_t = Y_{C_t} where C_t = inf{u : ∫_0^u exp(−2Y_s) ds > t}.
[Hint: Use the ideas of Sect. 2 Chap. V.]
2°) Let X be a BES²(0) and T₁ = inf{t : X_t = 1}. Prove that there exists a BES³(0), say Y, such that

    (−log X_t, 0 < t ≤ T₁) = (Y_{Λ_t}, 0 < t ≤ ∫_0^∞ exp(−2Y_s) ds),

where Λ_t = sup{u : ∫_u^∞ exp(−2Y_s) ds > t}.
[Hint: Apply Corollary (4.6) to the BM Y of 1°), then let r converge to 0.]
3°) Extend the results of 1°) and 2°) to ρ = |BM^d| with d ≥ 3. More precisely, prove that if X is a BES^d(0),

    (X_t^{2−d}, t > 0) = (Y_{Λ_t}, t > 0)

where Λ_t = sup{u : (d − 2)⁻² ∫_u^∞ Y_s^{−α} ds > t}, and α = 2(d − 1)/(d − 2).
* (4.13) Exercise. With the notation of this section, let 𝒢_t be the σ-algebra of sets Γ in ℱ such that for every u ≥ 0,

    θ_u⁻¹(Γ) ∩ {t + u < L} = Γ ∩ {t + u < L}.

1°) Prove that (𝒢_t) is a right-continuous filtration which is larger than (ℱ̂_t). Check that Lemmas (4.3) and (4.4) are still valid with (𝒢_t) instead of (ℱ̂_t).
2°) Prove that if T is a (𝒢_t)-stopping time, then (L − T)⁺ is a cooptional time.
3°) Prove that in Theorem (4.5), one can replace (ℱ̂_t) by (𝒢_t); then, using 2°), prove that X̂ has the strong Markov property.
# (4.14) Exercise. Let L be a cooptional time and set φ(x) = P_x[L > 0].
1°) Prove that φ is an excessive function (see Definition (3.1) of Chap. X).
2°) If f is excessive and finite, prove that one defines a new transition semi-group P^f by setting

    P_t^f(x, dy) = f⁻¹(x) P_t(x, dy) f(y)   if f(x) ≠ 0,
    P_t^f(x, dy) = 0                         otherwise.

(See also Proposition (3.9) in Chap. VIII.)
3°) Let Y_t(ω) = X_t(ω) if t < L(ω) and Y_t(ω) = Δ if t ≥ L(ω), and prove that for any probability measure μ, the process Y is a Markov process with transition semi-group P^φ.
* (4.15) Exercise (Another proof of Pitman's theorem). Let B be the standard linear BM, L its local time at 0 and, as usual, τ₁ = inf{t : L_t > 1}. We call (T) the following property, which is proved in Exercise (2.29) of Chap. VI and in Exercise (4.17) of Chap. XII: the processes (|B_t|, t ≤ τ₁) and (|B_{τ₁−t}|, t ≤ τ₁) are equivalent. Call (P) the property proved in Pitman's theorem (Sect. 3 Chap. VI), namely

    (2S_t − B_t, S_t; t ≥ 0) (d)= (Z_t, J_t; t ≥ 0)

where Z is a BES³(0) and J_t = inf_{s≥t} Z_s. Call further (R) the time-reversal property of Corollary (4.6). The aim of this exercise is to show that, together with the Lévy equivalence (S_t − B_t, S_t; t ≥ 0) (d)= (|B_t|, L_t; t ≥ 0) proved in Sect. 2 of Chap. VI, and which we shall call (L), any two of the properties (T), (P), (R) imply the third one.
1°) Let as usual T₁ = inf{t : B_t = 1}; deduce from (L) that

    (|B_{T₁−u}|, u ≤ T₁) (d)= (−1 + S_{T₁−u} + (1 − B_{T₁−u}), u ≤ T₁)

and conclude that (R) and (P) imply (T).
2°) Using (L) (or Tanaka's formula) prove that

    (B_u, u ≤ τ₁) (d)= (L_u − |B_u|, u ≤ τ₁)

and conclude that (T) and (P) imply (R).
[Hint: If (L) is known, (P) is equivalent to (P′), namely

    (|B_u| + L_u, u ≥ 0) (d)= (Z_u, u ≥ 0).]
3°) Use (T), then (L), to prove that

    (|B_u| + L_u, u ≤ τ₁) (d)= (Z_u, u ≤ L₁).

Use the scaling invariance properties to deduce that for any a > 0,

    (|B_u| + L_u, u ≤ τ_a) (d)= (Z_u, u ≤ L_a)

and conclude that (T) and (R) imply (P′), hence (P).
* (4.16) Exercise (On last passage times). Let L be a cooptional time.
1°) In the notation of Exercise (4.14), prove that the supermartingale Z_t = φ(X_t) (see Proposition (3.2) Chap. X) is equal to P_x[L > t | ℱ_t] P_x-a.s.
2°) Suppose that X is a Feller process on ]0, ∞[ and that the scale function s is such that s(0+) = −∞ and s(∞) = 0 (see Exercise (3.21)). For a > 0, let L = L_a = sup{t : X_t = a} and let (Λ^{s(x)})_x be the family of local times of the local martingale s(X). Prove that

    Z_t − (1/(2s(a))) Λ_t^{s(a)}

is a local martingale (a particular instance of Meyer's decomposition theorem).
3°) Prove that for every positive predictable process H,

    E_x[H_L 1_{(L>0)}] = −(1/(2s(a))) E_x[∫_0^∞ H_t dΛ_t^{s(a)}].

This may be stated: −(1/(2s(a))) Λ^{s(a)} is the dual predictable projection of 1_{(0<L≤·)}.
4°) We now assume in addition that there exists a measure μ such that
i) there exists a continuous function p on ]0, ∞[³ such that P_t(x, dy) = p_t(x, y) μ(dy);
ii) for every positive Borel function f on ℝ₊,

    ∫_0^t f(X_s) ds = ∫ f(y) Λ_t^{s(y)} μ(dy)

(see Exercise (2.32) Chap. X and Sect. 1 Chap. XI). Prove that

    (∂/∂t) E_x[Λ_t^{s(a)}] = p_t(x, a).

5°) Show that

    P_x(L_a ∈ dt) = −(1/(2s(a))) p_t(x, a) dt.

An important complement to this exercise is Exercise (1.16) in Chap. X where it is shown how conditioning with respect to L is related to the distribution of the bridges of X.
#
(4.17) Exercise. 1°) Let (V_p) and (Ṽ_p) be two resolvents on (E, 𝓔) such that the kernels V = V_0 and Ṽ = Ṽ_0 are bounded on the space b𝓔 of bounded Borel functions. If ξ is a measure such that

    ∫ V f · g dξ = ∫ f · Ṽ g dξ

for every pair of positive Borel functions, prove that the two resolvents are in duality with respect to ξ.
[Hint: For p sufficiently small, V_p = Σ_{n≥0} (−p)ⁿ Vⁿ⁺¹.]
2°) If the two resolvents are the resolvents of two right- or left-continuous processes with semi-groups P_t and P̃_t, prove that for every t,

    ∫ P_t f · g dξ = ∫ f · P̃_t g dξ.
(4.18) Exercise. Let X be a Feller process on [0, ∞[ such that 0 is not reached from the other points and such that the restriction of X to ]0, ∞[ satisfies the hypothesis of Sect. 3. We call s and m the corresponding scale function and speed measure and assume that s(0+) = −∞, s(∞) = 0 (see Exercise (3.21)).
1°) Compute the potential kernel of X and prove that X is in duality with itself with respect to m.
2°) Prove that for every b > 0, the process Y_t = {X_{L_b−t}, t < L_b} is under P_0 a Markov process on ]0, b[ with initial measure ε_b and semi-group Q_t given by

    Q_t f(x) = P_t(f s)(x)/s(x).

As a result the law of L_b under P_0 is the same as the law of T_0 for the process Y.
3°) Prove that (−1/s) is a scale function for the process with semi-group Q_t and that the corresponding speed measure is s²m.
Notes and Comments
Sect. 1. This section is devoted to the minimum of semi-group theory which is
necessary (for intuition more than for technical needs) in the sequel. For a detailed
account, we recommend the book of Pazy [1]; another exposition designed for
probabilists is that of Dellacherie-Meyer [1] vol. IV.
We are uncertain about the origin of Theorem (1.13) but we can mention that
it is a special case of a much more general result of Kunita [1] and Roth [1];
the same is true of Exercise (1.19). Exercise (1.20) comes from Chen [1] and Ledoux [1].

In relation to Exercise (1.21), the reader will find more extensions of Itô's formula in Kunita [5].
Most exercises of this section just record classical properties of semi-groups
and may be found in the textbooks on the subject.
Sect. 2. The bulk of this section is taken from Stroock-Varadhan [1] and Priouret
[1]. The systematic use of martingale problems in the construction of diffusions
is due to Stroock and Varadhan. Their ideas were carried over to other contexts
and still play a great role in present-day research although it is only of marginal
interest in our own exposition which favors the SDE aspect.
The exercises of this section have the same origin as the text. Exercise (2.8)
is taken from a lecture course by Meyer.
Sect. 3. The material covered in this section appeared in a series of papers of
Feller. There are many extensive - much more so than ours - expositions of the
subject, e.g. in Dynkin [1], Ito-McKean [1], Freedman [1] and Mandl [1]. Our
exposition is borrowed from Breiman [1] with other proofs, however, where he
uses approximation by discrete time processes. The exercises are very classical.
Sect. 4. The main result of the section, namely Theorem (4.5), is due to Nagasawa
[1]. In our exposition, and in some of the exercises, we borrowed from Meyer [2]
and Meyer et al. [1]. Corollary (4.6) is stated in Williams [3]. The remarkable
Theorem (4.9), which was the first of this kind, is from Williams [2] and [3].
Exercise (4.12) is from Williams [3] and from Yor [16]. Exercise (4.16) comes
from Pitman-Yor [1]; see Getoor [2] for the particular case of transient Bessel
processes.
In connection with Exercise (4.15) let us mention that Pitman's theorem has
now been extended to other processes (see Tanaka ([2], [3]) for random walks,
Bertoin [5] for Lévy processes, and Saisho-Tanemura ([1]) for certain diffusions;
see in particular Exercise (1.29) Chap. XI).
Chapter VIII. Girsanov's Theorem
and First Applications
In this chapter we study the effect on the space of continuous semimartingales of
an absolutely continuous change of probability measure. The results we describe
have far-reaching consequences from the theoretical point of view as is hinted at
in Sect. 2; they also permit many explicit computations as is seen in Sect. 3.
§1. Girsanov's Theorem
The class of semimartingales is invariant under many operations such as composition with $C^2$-functions or, more generally, with differences of convex functions. We
have also mentioned the invariance under time changes. It is also invariant under
an absolutely continuous change of probability measures. This is the content of
Girsanov's theorem: If $Q$ is a probability measure on $(\Omega, \mathscr{F})$ which is absolutely
continuous with respect to P, then every semimartingale with respect to P is a
semimartingale with respect to Q.
The above theorem is far from intuitive; clearly, a process of finite variation
under P is also a process of finite variation under Q but local martingales may
lose the martingale property. They however remain semimartingales and one of
our goals is precisely to describe their decomposition into the sum of a local
martingale and a process with finite variation.
We will work in the following setting. Let $(\mathscr{F}_t^0)$, $t \ge 0$, be a right-continuous filtration with terminal $\sigma$-field $\mathscr{F}_\infty^0$ and $P$ and $Q$ two probability measures on $\mathscr{F}_\infty^0$. We assume that for each $t \ge 0$, the restriction of $Q$ to $\mathscr{F}_t^0$ is absolutely continuous with respect to the restriction of $P$ to $\mathscr{F}_t^0$, which will be denoted by $Q \lhd P$. We stress the fact that we may have $Q \lhd P$ without having $Q \ll P$. Furthermore, we call $D_t$ the Radon-Nikodym derivative of $Q$ with respect to $P$ on $\mathscr{F}_t^0$. These (classes of) random variables form a $(\mathscr{F}_t^0, P)$-martingale and since $(\mathscr{F}_t^0)$ is right-continuous, we may choose $D_t$ within its $P$-equivalence class so that the resulting process $D$ is a $(\mathscr{F}_t^0)$-adapted martingale almost every path of which is càdlàg (Theorem (2.5) and Proposition (2.6) Chap. II). In the sequel we always consider such a version.
(1.1) Proposition. The following two properties are equivalent:
i) the martingale D is uniformly integrable;
ii) $Q \ll P$ on $\mathscr{F}_\infty^0$.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
Proof. See Exercise (2.13) Chap. II. □
As was observed in Sect. 4 of Chap. I and at the beginning of Chap. IV, when dealing with stochastic processes defined on $(\Omega, \mathscr{F}, P)$, one has most often to consider the usual augmentation of $(\mathscr{F}_t^0)$ with respect to $P$, or in other words to consider the $\sigma$-fields $\mathscr{F}_t$ obtained by adding to $\mathscr{F}_t^0$, $0 \le t \le \infty$, the $P$-negligible sets of the completion $\mathscr{F}_\infty$ of $\mathscr{F}_\infty^0$ with respect to $P$. If $Q \lhd P$ but $Q$ is not absolutely continuous with respect to $P$ on $\mathscr{F}_\infty^0$, then $Q$ cannot be extended to $\mathscr{F}_\infty$. Indeed, if $P(A) = 0$ and $Q(A) > 0$, then all the subsets of $A$ belong to each $\mathscr{F}_t$ and there is no reason why $Q$ could be consistently defined on $\mathscr{P}(A)$. In contrast, if $Q \ll P$ on $\mathscr{F}_\infty^0$ we have the following complement to Proposition (1.1).
(1.1') Proposition. The conditions of Proposition (1.1) are equivalent to:
iii) $Q$ may be extended to a probability measure $\bar Q$ on $\mathscr{F}_\infty$ such that $\bar Q \ll P$ in restriction to each of the completed $\sigma$-fields $\mathscr{F}_t$, $0 \le t \le \infty$.
If these conditions hold, $D_t = E\left[dQ/dP \mid \mathscr{F}_t^0\right]$ $P$-a.s.
Proof. It is easy to see that ii) implies iii). If iii) holds and if $A \in \mathscr{F}_\infty^0$ and $P(A) = 0$, then $A \in \mathscr{F}_t$, hence $Q(A) = 0$ since $\bar Q \ll P$ on $\mathscr{F}_t$ and therefore $Q \ll P$ on $\mathscr{F}_\infty^0$. □
In the sequel, whenever $Q \ll P$ on $\mathscr{F}_\infty^0$, we will not distinguish between $Q$ and $\bar Q$ in the notation and we will work with $(\mathscr{F}_t)$-adapted processes without always recalling the distinction. But if we have merely $Q \lhd P$, it will be understood that we work only with $(\mathscr{F}_t^0)$-adapted processes, which in view of Exercise (1.30) Chap. IV is not too severe a restriction. The following two results deal with the general case where $Q \lhd P$ only.
The martingale D is positive, but still more important is the
(1.2) Proposition. The martingale $D$ is strictly positive $Q$-a.s.
Proof. Let $T = \inf\{t : D_t = 0 \text{ or } D_{t-} = 0\}$; by Proposition (3.4) in Chap. II, $D$ vanishes $P$-a.s. on $[T, \infty[$, hence $D_t = 0$ on $\{T < t\}$ for every $t$. But since $Q = D_t \cdot P$ on $\mathscr{F}_t^0$, it follows that $Q(\{T < t\}) = 0$ which entails the desired conclusion. □
We will further need the following remark.
(1.3) Proposition. If $Q \lhd P$, then for every $(\mathscr{F}_t^0)$-stopping time $T$,
$$Q = D_T \cdot P \quad \text{on } \mathscr{F}_T^0 \cap \{T < \infty\}.$$
If $Q \ll P$, then $Q = D_T \cdot P$ on $\mathscr{F}_T^0$.
Proof. The particular case $Q \ll P$ follows from the optional stopping theorem. For the general case, we use the fact that $D$ is uniformly integrable on each $[0, t]$. Let $A \in \mathscr{F}_T^0$; for every $t$, the event $A \cap \{T \le t\}$ is in $\mathscr{F}_{T \wedge t}^0$ and therefore
$$Q\left(A \cap \{T \le t\}\right) = \int_{A \cap \{T \le t\}} D_T\, dP.$$
Letting $t$ tend to infinity, we get
$$Q\left(A \cap \{T < \infty\}\right) = \int_{A \cap \{T < \infty\}} D_T\, dP$$
which completes the proof. □
The martingale D plays a prominent role in the discussion to follow. But,
since the parti-pris of this book is to deal only with continuous semimartingales, we have not developed the techniques needed to deal with the discontinuous case; as a result, we are compelled to prove Girsanov's theorem only in the case of continuous densities. Before we turn to the proof, we must point out that if $X$ and $Y$ are two continuous semimartingales for both $P$ and $Q$ and if $Q \lhd P$, any version of the process $\langle X, Y\rangle$ computed for $P$ is a version of the process $\langle X, Y\rangle$ computed for $Q$ and this version may be chosen $(\mathscr{F}_t^0)$-adapted (the reader is referred to Remark (1.19) and Exercise (1.30) in Chap. IV). In the sequel, it will always be understood that $\langle X, Y\rangle$ is such a version even if we are dealing with $Q$.
The following theorem, which is the main result of this section, is Girsanov's theorem in our restricted framework; it specifies the decomposition of $P$-martingales as continuous semimartingales under $Q$.
(1.4) Theorem (Girsanov's theorem). If $Q \lhd P$ and if $D$ is continuous, every continuous $(\mathscr{F}_t^0, P)$-semimartingale is a continuous $(\mathscr{F}_t^0, Q)$-semimartingale. More precisely, if $M$ is a continuous $(\mathscr{F}_t^0, P)$-local martingale, then
$$\widetilde M = M - D^{-1} \cdot \langle M, D\rangle$$
is a continuous $(\mathscr{F}_t^0, Q)$-local martingale. Moreover, if $N$ is another continuous $P$-local martingale,
$$\langle \widetilde M, \widetilde N\rangle = \langle \widetilde M, N\rangle = \langle M, N\rangle.$$
Proof. If $X$ is a càdlàg process and if $XD$ is a $(\mathscr{F}_t^0, P)$-loc. mart., then $X$ is a $(\mathscr{F}_t^0, Q)$-loc. mart. Indeed, by Proposition (1.3), $D_{T \wedge t}$ is a density of $Q$ with respect to $P$ on $\mathscr{F}_{T \wedge t}^0$ and if $(XD)^T$ is a $P$-martingale, it is easily seen that $X^T$ is a $Q$-martingale (see the first remark after Corollary (3.6) Chap. II). Moreover a sequence of $(\mathscr{F}_t^0)$-stopping times increasing to $+\infty$ $P$-a.s. increases also to $+\infty$ $Q$-a.s., as the reader will easily check.
Let $T_n = \inf\{t : D_t \le 1/n\}$; it is easy to see that the process $\left(D^{-1} \cdot \langle M, D\rangle\right)^{T_n}$ is $P$-a.s. finite and consequently, $(\widetilde M D)^{T_n}$ is the product of two semimarts. By the integration by parts formula, it follows that
$$\begin{aligned}
\widetilde M_{T_n \wedge t} D_{T_n \wedge t} &= M_0 D_0 + \int_0^{T_n \wedge t} \widetilde M_s\, dD_s + \int_0^{T_n \wedge t} D_s\, d\widetilde M_s + \langle \widetilde M, D\rangle_{T_n \wedge t}\\
&= M_0 D_0 + \int_0^{T_n \wedge t} \widetilde M_s\, dD_s + \int_0^{T_n \wedge t} D_s\, dM_s - \langle M, D\rangle_{T_n \wedge t} + \langle M, D\rangle_{T_n \wedge t}\\
&= M_0 D_0 + \int_0^{T_n \wedge t} \widetilde M_s\, dD_s + \int_0^{T_n \wedge t} D_s\, dM_s,
\end{aligned}$$
which proves that $(\widetilde M D)^{T_n}$ is a $P$-loc. mart. By the first paragraph of the proof $\widetilde M^{T_n}$ is thus a $Q$-loc. mart.; but $(T_n)$ increases $Q$-a.s. to $+\infty$ by Proposition (1.2) and a process which is locally a loc. mart. is a loc. mart. (Exercise (1.30) Chap. IV) which completes the proof of the second statement. The last statement follows readily from the fact that the bracket of a process of finite variation with any semimart. vanishes identically. □
Furthermore it is important to note that stochastic integration commutes with the transformation $M \to \widetilde M$. The hypothesis is the same as for Theorem (1.4) and here again the processes involved are $(\mathscr{F}_t^0)$-adapted.
(1.5) Proposition. If $H$ is a predictable process in $L^2_{loc}(M)$, then it is also in $L^2_{loc}(\widetilde M)$ and
$$\widetilde{H \cdot M} = H \cdot \widetilde M.$$
Proof. The first assertion follows at once from the equality $\langle \widetilde M, \widetilde M\rangle = \langle M, M\rangle$. Moreover, if $H$ is locally bounded,
$$H \cdot \widetilde M = H \cdot M - H D^{-1} \cdot \langle M, D\rangle = H \cdot M - D^{-1} \cdot \langle H \cdot M, D\rangle = \widetilde{H \cdot M}.$$
We leave to the reader the task of checking that the expressions above are still meaningful for $H \in L^2_{loc}(M)$, hence that $\widetilde{H \cdot M} = H \cdot \widetilde M$ in that case too. □
With a few exceptions (see for instance Exercise (1.22) in Chap. XI), $Q$ is usually actually equivalent to $P$ on each $\mathscr{F}_t^0$, in which case the above results take on an even more pleasant and useful form. In this situation, $D$ is also strictly positive $P$-a.s. and may be represented in an exponential form as is shown in
(1.6) Proposition. If $D$ is a strictly positive continuous local martingale, there exists a unique continuous local martingale $L$ such that
$$D_t = \exp\left\{L_t - \tfrac12 \langle L, L\rangle_t\right\} = \mathscr{E}(L)_t.$$
$L$ is given by the formula
$$L_t = \log D_0 + \int_0^t D_s^{-1}\, dD_s.$$
Proof. Once again, the uniqueness is a consequence of Proposition (1.2) in Chap. IV.
As to the existence of $L$, we can, since $D$ is $> 0$ a.s., apply Ito's formula to the process $\log D$, with the result that
$$\log D_t = \log D_0 + \int_0^t D_s^{-1}\, dD_s - \frac12 \int_0^t D_s^{-2}\, d\langle D, D\rangle_s. \qquad \square$$
If $P$ and $Q$ are equivalent on each $\mathscr{F}_t^0$, we then have $Q = \mathscr{E}(L)_t \cdot P$ on $\mathscr{F}_t^0$ for every $t$, which we write simply as $Q = \mathscr{E}(L) \cdot P$. Let us restate Girsanov's theorem in this situation.
(1.7) Theorem. If $Q = \mathscr{E}(L) \cdot P$ and $M$ is a continuous $P$-local martingale, then
$$\widetilde M = M - D^{-1} \cdot \langle M, D\rangle = M - \langle M, L\rangle$$
is a continuous $Q$-local martingale. Moreover, $P = \mathscr{E}(-\widetilde L) \cdot Q$.
Proof. To prove the first statement, we need only show the identity $D^{-1} \cdot \langle M, D\rangle = \langle M, L\rangle$ which follows from the equality $L_t = \log D_0 + \int_0^t D_s^{-1}\, dD_s$.
For the second statement, observe that, because of the first, $-\widetilde L = -L + \langle L, L\rangle$ is a $Q$-local martingale with $\langle \widetilde L, \widetilde L\rangle = \langle L, L\rangle$. As a result
$$\mathscr{E}(-\widetilde L)_t = \exp\left\{-L_t + \langle L, L\rangle_t - \tfrac12 \langle L, L\rangle_t\right\} = \mathscr{E}(L)_t^{-1};$$
consequently, $P = \mathscr{E}(-\widetilde L) \cdot Q$. □
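The exponential algebra in the last display is easy to verify symbolically. The following is a minimal sketch (Python with sympy; the symbols l and q stand for $L_t$ and $\langle L, L\rangle_t$, and only the definition $\mathscr{E}(L)_t = \exp\{L_t - \frac12\langle L, L\rangle_t\}$ is assumed):

```python
import sympy as sp

# l stands for L_t, q for the quadratic variation <L,L>_t
l, q = sp.symbols('l q', real=True)

E_L = sp.exp(l - q/2)                # E(L)_t
E_neg_Ltilde = sp.exp(-l + q - q/2)  # exp{-L_t + <L,L>_t - (1/2)<L,L>_t}

# E(-L~)_t should be the pointwise inverse of E(L)_t
assert sp.simplify(E_L * E_neg_Ltilde - 1) == 0
print("E(-L~)_t = E(L)_t^(-1) checked")
```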
We now particularize the situation still further so that P and Q can play totally
symmetric roles.
(1.8) Definition. We call Girsanov's pair a pair $(P, Q)$ of probability measures such that $Q \sim P$ on $\mathscr{F}_\infty^0$ and the $P$-martingale $D$ is continuous.
If $(P, Q)$ is a Girsanov pair, then the completion of $(\mathscr{F}_t^0)$ for $Q$ is the same as the completion for $P$ and we may deal with the filtration $(\mathscr{F}_t)$ and forget about $(\mathscr{F}_t^0)$. Moreover, by the above results, we see that $(Q, P)$ is also a Girsanov pair and that the class of continuous semimartingales is the same for $P$ and $Q$.
(1.9) Definition. If $(P, Q)$ is a Girsanov pair, we denote the map $M \to \widetilde M$ from the space of continuous $P$-local martingales into the space of continuous $Q$-local martingales by $G_P^Q$ and we call it the Girsanov transformation from $P$ to $Q$.
This map is actually one-to-one and onto as a consequence of the following proposition; it moreover shows that the relation $P \sim Q$ if $(P, Q)$ is a Girsanov pair is an equivalence relation on the set of probability measures on $\mathscr{F}_\infty$.
(1.10) Proposition. If $(P, Q)$ and $(Q, R)$ are two Girsanov pairs, then $(P, R)$ is a Girsanov pair and $G_Q^R \circ G_P^Q = G_P^R$. In particular,
$$G_Q^P \circ G_P^Q = G_P^Q \circ G_Q^P = I.$$
Proof. Since $\frac{dR}{dP}\big|_{\mathscr{F}_t} = \frac{dR}{dQ}\big|_{\mathscr{F}_t} \times \frac{dQ}{dP}\big|_{\mathscr{F}_t}$, the pair $(P, R)$ is obviously a Girsanov pair.
By definition, $G_P^R(M)$ is the local martingale part of $M$ under $R$, but as $M$ and $G_P^Q(M)$ differ only by a finite variation process, $G_P^R(M)$ is also the local martingale part of $G_P^Q(M)$ under $R$ which proves the result. □
Remark. This can also be proved by using the explicit expressions of Theorem
(1.7).
In this context, Proposition (1.5) may be restated as
(1.11) Proposition. The Girsanov transformation $G_P^Q$ commutes with stochastic integration.
Before we turn to other considerations, let us sketch an alternative proof of Theorem (1.7) in the case of a Girsanov pair, based on Exercises (3.11) and (3.14) in Chap. IV. The point of this proof is that the process $A$ which has to be subtracted from $M$ to get a $Q$-local martingale appears naturally.
By Exercise (3.14), Chap. IV, it is enough to prove that for any $\lambda$, the process $\exp\left\{\lambda(M - A) - \frac{\lambda^2}{2}\langle M, M\rangle\right\}$ is a $Q$-local martingale, which amounts to showing that
$$\exp\left\{\lambda(M - A) - \frac{\lambda^2}{2}\langle M, M\rangle\right\} \exp\left\{L - \frac12 \langle L, L\rangle\right\}$$
is a $P$-martingale. But this product is equal to
$$\exp\left\{\lambda M + L - \frac{\lambda^2}{2}\langle M, M\rangle - \frac12 \langle L, L\rangle - \lambda A\right\}$$
which, by Exercise (3.11), Chap. IV, is equal to $\mathscr{E}(\lambda M + L)$ provided that $A = \langle M, L\rangle$.
Remark. We observe that if $Q$ is equivalent to $P$ on each $\mathscr{F}_t$ and $D$ is continuous, $(P, Q)$ becomes a Girsanov pair if we restrict the time interval to $[0, t]$ and the above results apply. However, one should pay attention to the fact that the filtrations to consider may not be complete with respect to $P$ (or $Q$); when working on $[0, t]$, we complete with the negligible sets in $\mathscr{F}_t$ in order to recover the above set-up.
To illustrate the above results, we treat the case of BM which will be taken
up more thoroughly in the following section.
(1.12) Theorem. If $Q \lhd P$ and if $B$ is a $(\mathscr{F}_t^0, P)$-BM, then $\widetilde B = B - \langle B, L\rangle$ is a $(\mathscr{F}_t^0, Q)$-BM.
Proof. The increasing process of $\widetilde B$ is equal to that of $B$, namely $t$, so that P. Lévy's characterization theorem (Sect. 3, Chap. IV) applies. □
Remarks. 1°) The same proof is valid for BM$^d$.
2°) If $(\mathscr{F}_t^0)$ is the Brownian filtration, then for any $Q$ equivalent to $P$ on $\mathscr{F}_\infty^0$, the pair $(P, Q)$ is a Girsanov pair as results from Sect. 3 in Chap. V. Even in that case the filtration of $\widetilde B$ may be strictly smaller than the filtration of $B$ as will be seen in Exercise (3.15) in Chap. IX.
3°) The same result is true for any Gaussian martingale as follows from Exercise (1.35) in Chap. IV.
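Theorem (1.12) lends itself to a quick Monte Carlo illustration. The sketch below (Python with numpy; the drift $\theta$ and horizon $t$ are arbitrary choices, not taken from the text) takes $L = \theta B$, so that $D_t = \mathscr{E}(\theta B)_t = \exp(\theta B_t - \theta^2 t/2)$, and checks by reweighting that $\widetilde B_t = B_t - \theta t$ has the $N(0, t)$ law under $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, t, n = 0.7, 1.0, 400_000   # arbitrary drift, horizon, sample size

# Sample B_t under P (Wiener measure at time t)
B = rng.normal(0.0, np.sqrt(t), size=n)

# Density of Q with respect to P on F_t: E(theta B)_t
D = np.exp(theta * B - 0.5 * theta**2 * t)

B_tilde = B - theta * t
mass = np.mean(D)              # ~ 1: E(theta B) is a true martingale
m1 = np.mean(D * B_tilde)      # ~ 0: Q-mean of B~_t
m2 = np.mean(D * B_tilde**2)   # ~ t: Q-second moment of B~_t
print(round(mass, 2), round(m1, 2), round(m2, 2))
```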
In this and the following chapters, we will give many applications of Girsanov's theorem, some of them under the heading "Cameron-Martin formula" (see the notes and comments at the end of this chapter). In the usual setting $D$, or rather $L$, is given and one constructs the corresponding $Q$ by setting $Q = D_t \cdot P$ on $\mathscr{F}_t$. This demands that, $L$ being given, the exponential $\mathscr{E}(L)$ be a "true" $P$-martingale (which is equivalent to $E[\mathscr{E}(L)_t] \equiv 1$) and not merely a local martingale (otherwise, see Exercise (1.38)). Sufficient criteria ensuring this property will be given below.
When $E[\mathscr{E}(L)_t] \equiv 1$ obtains, the formula $Q = \mathscr{E}(L) \cdot P$ defines a set function $Q$ on the algebra $\bigcup_t \mathscr{F}_t^0$ which has to be extended to a probability measure on $\mathscr{F}_\infty^0$. To this end, we need $Q$ to be $\sigma$-additive on $\bigcup_t \mathscr{F}_t^0$.
(1.13) Proposition. Let $X$ be the canonical process on $\Omega = W = C(\mathbb{R}_+, \mathbb{R}^d)$ and $\mathscr{F}_t^0 = \sigma(X_s, s \le t)^+$; if $E[\mathscr{E}(L)_t] \equiv 1$, there is a unique probability measure $Q$ on $(\Omega, \mathscr{F}_\infty^0)$ such that $Q = \mathscr{E}(L) \cdot P$.
This is a consequence of Theorem (6.1) in the Appendix.
The above discussion stresses the desirability of a criterion ensuring that $\mathscr{E}(L)$ is a martingale. Of course, if $L$ is bounded or if $E[\exp(L_\infty^*)] < \infty$ this property obtains easily, but we will close this section with two criteria which can be more widely applied. Once again, we recall that $\mathscr{E}(L)$ is a supermartingale and is a martingale if $E[\mathscr{E}(L)_t] \equiv 1$ as $L$ is always assumed to vanish at 0. The first of these two criteria is known as Kazamaki's criterion. We return to the setting of a general filtered space $(\Omega, \mathscr{F}, \mathscr{F}_t, P)$.
(1.14) Proposition. If $L$ is a local martingale such that $\exp\left(\frac12 L\right)$ is a uniformly integrable submartingale, then $\mathscr{E}(L)$ is a uniformly integrable martingale.
Proof. Pick $a$ in $]0, 1[$. Straightforward computations yield
$$\mathscr{E}(aL)_t = \left(\mathscr{E}(L)_t\right)^{a^2} \left(Z_t^{(a)}\right)^{1 - a^2}$$
with $Z_t^{(a)} = \exp\left(aL_t/(1 + a)\right)$. By the optional stopping theorem for uniformly integrable submartingales (Sect. 3 Chap. II), it follows from the hypothesis that the family $\left\{Z_T^{(a)},\ T \text{ stopping time}\right\}$ is uniformly integrable. If $\Gamma$ is a set in $\mathscr{F}$ and $T$ a stopping time, Hölder's inequality yields
$$E\left[1_\Gamma \mathscr{E}(aL)_T\right] \le E\left[\mathscr{E}(L)_T\right]^{a^2} E\left[1_\Gamma Z_T^{(a)}\right]^{1 - a^2} \le E\left[1_\Gamma Z_T^{(a)}\right]^{1 - a^2}$$
since $E\left[\mathscr{E}(L)_T\right] \le 1$. It follows that $\left\{\mathscr{E}(aL)_T,\ T \text{ stopping time}\right\}$ is uniformly integrable, hence $\mathscr{E}(aL)$ is a uniformly integrable martingale. As a result
$$E\left[\mathscr{E}(aL)_\infty\right] = 1.$$
The hypothesis implies also that $L_\infty$ exists a.s. and that $\exp\left(\frac12 L_\infty\right)$ is integrable. Since $Z_\infty^{(a)} \le 1_{(L_\infty \le 0)} + \exp\left(\frac12 L_\infty\right) 1_{(L_\infty > 0)}$, Lebesgue's dominated convergence theorem shows that $\lim_{a \uparrow 1} E\left[Z_\infty^{(a)}\right]^{1 - a^2} = 1$. By letting $a$ tend to 1 in the last displayed inequality, we get $E\left[\mathscr{E}(L)_\infty\right] \ge 1$, hence $E\left[\mathscr{E}(L)_\infty\right] = 1$ and the proof is complete. □
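The factorization at the start of the proof can be checked by comparing exponents. A minimal symbolic sketch (Python with sympy; l and q stand for $L_t$ and $\langle L, L\rangle_t$):

```python
import sympy as sp

a, l, q = sp.symbols('a l q', positive=True)  # a plays the role of a in ]0,1[

# log E(aL)_t
lhs = a*l - a**2*q/2
# log of (E(L)_t)^(a^2) * (Z_t^(a))^(1-a^2), with Z_t^(a) = exp(a l / (1+a))
rhs = a**2*(l - q/2) + (1 - a**2)*(a*l/(1 + a))

assert sp.simplify(lhs - rhs) == 0  # the exponents agree identically
print("factorization E(aL) = E(L)^(a^2) Z^(1-a^2) checked")
```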
Remarks. If $L$ is a u.i. martingale and $E\left[\exp\left(\frac12 L_\infty\right)\right] < \infty$, then $\exp\left(\frac12 L_t\right)$ is a u.i. submartingale and Kazamaki's criterion applies; this will be used in the next proof. Let us also observe that if $M$ is a local martingale, the condition $E\left[\exp M_t\right] < \infty$ for every $t$ does not entail that $\exp(M)$ is a submartingale; indeed, if $Z$ is the planar BM and $0 < \alpha < 2$ then $\exp\left(-\alpha \log |Z_t|\right) = |Z_t|^{-\alpha}$ has an expectation decreasing in $t$ and in fact is a supermartingale (see also Exercise (1.24) in Chap. XI).
From the above proposition, we may derive Novikov's criterion, often easier to apply, but of narrower scope as is shown in Exercises (1.30) and (1.34).
(1.15) Proposition. If $L$ is a continuous local martingale such that
$$E\left[\exp\left(\tfrac12 \langle L, L\rangle_\infty\right)\right] < \infty,$$
then $\mathscr{E}(L)$ is a u.i. martingale. Furthermore $E\left[\exp\left(\frac12 L_\infty^*\right)\right] < \infty$, and as a result, $L$ is in $H^p$ for every $p \in [1, \infty[$.
Proof. The hypothesis entails that $\langle L, L\rangle_\infty$ has moments of all orders, therefore, by the BDG-inequalities, so has $L_\infty^*$; in particular $L$ is a u.i. martingale. Moreover
$$\exp\left(\tfrac12 L_\infty\right) = \mathscr{E}(L)_\infty^{1/2} \exp\left(\tfrac14 \langle L, L\rangle_\infty\right),$$
so that, by the Cauchy-Schwarz inequality,
$$E\left[\exp\left(\tfrac12 L_\infty\right)\right] \le E\left[\mathscr{E}(L)_\infty\right]^{1/2} E\left[\exp\left(\tfrac12 \langle L, L\rangle_\infty\right)\right]^{1/2}$$
and since $E\left[\mathscr{E}(L)_\infty\right] \le 1$, it follows that $E\left[\exp\left(\frac12 L_\infty\right)\right] < \infty$. By the above remarks, $\mathscr{E}(L)$ is a u.i. martingale.
To prove that $E\left[\exp\left(\frac12 L_\infty^*\right)\right] < \infty$, let us observe that we now know that for $c < 1/2$,
$$\exp(cL_t) = \mathscr{E}(cL)_t \exp\left(\tfrac{c^2}{2} \langle L, L\rangle_t\right)$$
is a positive submartingale. Applying Doob's inequality with $p = 1/2c$ and then Hölder's inequality, we get
$$E\left[\sup_t \exp\left(\tfrac12 L_t\right)\right] \le C_p E\left[\mathscr{E}(cL)_\infty^{(2-c)/4c}\right]^{2/(2-c)} E\left[\exp\left(\tfrac12 \langle L, L\rangle_\infty\right)\right]^{c/2},$$
and since $(2 - c)/4c < 1$ for $c > 2/5$, the left-hand side is finite. The same reasoning applies to $-L$, thus the proof is complete. □
Remark. It is shown in Exercise (1.31) that $1/2$ cannot be replaced by $(1/2) - \varepsilon$, with $\varepsilon > 0$, in the above hypothesis.
The above results may be stated on the intervals $[0, t]$; by using an increasing homeomorphism from $[0, \infty]$ to $[0, t]$, we get
(1.16) Corollary. If $L$ is a local martingale such that either $\exp\left(\frac12 L\right)$ is a submartingale or $E\left[\exp\left(\frac12 \langle L, L\rangle_t\right)\right] < \infty$ for every $t$, then $\mathscr{E}(L)$ is a martingale.
Let us close this section with a few comments about the above two criteria. Their main difference is that in Kazamaki's criterion, one has to assume that $\exp\left(\frac12 L\right)$ is a submartingale, whereas it is part of the conclusion in Novikov's criterion. Another difference is that Novikov's criterion is "two-sided"; it works for $L$ if and only if it works for $-L$, whereas Kazamaki's criterion may work for $L$ without working for $-L$. Finally we stress the fact that Kazamaki's criterion is not a necessary condition for $\mathscr{E}(L)$ to be a martingale as will be seen from examples given in Exercise (2.10) of Chap. IX.
#
(1.17) Exercise. Suppose that $Q = D_t \cdot P$ on $\mathscr{F}_t$ for a positive continuous martingale $D$. Prove the following improvement of Proposition (1.2): the r.v. $Y = \inf_t D_t$ is $> 0$ $Q$-a.s. If moreover $P$ and $Q$ are mutually singular on $\mathscr{F}_\infty$ then, under $Q$, $Y$ is uniformly distributed on $[0, 1]$.
(1.18) Exercise. Let $P_i$, $i = 1, 2, 3$ be three probability measures such that any two of them form a Girsanov pair and call $D_j^i = \mathscr{E}(L_j^i)$ the martingale such that $P_i = D_j^i \cdot P_j$. There is a $P_1$-martingale $M$ such that $D_2^3 = \mathscr{E}\left(G_{P_1}^{P_2}(M)\right)$. Prove that $L_1^3 = M + L_1^2$.
(1.19) Exercise. Call $\mathscr{M}_0^c(P)$ the space of cont. loc. mart. with respect to $P$. Let $(P, Q)$ be a Girsanov pair and $\Gamma$ a map from $\mathscr{M}_0^c(P)$ into $\mathscr{M}_0^c(Q)$.
1°) If $\langle \Gamma(M), N\rangle = \langle M, N\rangle$ for every $M \in \mathscr{M}_0^c(P)$ and $N \in \mathscr{M}_0^c(Q)$ prove that $\Gamma = G_P^Q$.
2°) If $\langle \Gamma(M), \Gamma(N)\rangle = \langle M, N\rangle$ for every $M, N \in \mathscr{M}_0^c(P)$ there exists a map $J$ from $\mathscr{M}_0^c(P)$ into itself such that $\langle J(M), J(N)\rangle = \langle M, N\rangle$ and $\Gamma = G_P^Q \circ J$.
(1.20) Exercise. If $(P, Q)$ is a Girsanov pair with density $D$ and if $M$ is a $P$-martingale, then $MD^{-1}$ is a $Q$-martingale. Express it as a Girsanov transform. In relation with the above exercise, observe that the map $M \to MD^{-1}$ does not leave brackets invariant and does not commute with stochastic integration.
#
(1.21) Exercise. 1°) Let $B$ be the standard linear BM and for $a > 0$ and $b > 0$ set $\sigma_{a,b} = \inf\{t : B_t + bt = a\}$. Use Girsanov's theorem to prove that the density of $\sigma_{a,b}$ is equal to $a(2\pi t^3)^{-1/2} \exp\left(-(a - bt)^2/2t\right)$. This was already found by other means in Exercise (3.28) of Chap. III; compare the two proofs.
2°) Prove Novikov's criterion directly from the DDS theorem and the above result.
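For 1°), the claimed density can be probed numerically: when $b > 0$ the level $a$ is reached a.s., so the density should integrate to 1 over $]0, \infty[$. A quadrature sketch (Python with numpy; the values $a = 1$, $b = 0.5$ and the truncation at $t = 200$ are arbitrary choices):

```python
import numpy as np

a, b = 1.0, 0.5  # arbitrary positive level and drift

def density(t):
    # candidate density of sigma_{a,b} = inf{t : B_t + bt = a}
    return a * (2 * np.pi * t**3)**-0.5 * np.exp(-(a - b*t)**2 / (2*t))

# trapezoidal rule on ]0, 200]; the tail mass beyond 200 is negligible for b = 0.5
t = np.linspace(1e-6, 200.0, 2_000_000)
f = density(t)
total = np.sum((f[1:] + f[:-1]) / 2 * np.diff(t))
print(round(total, 3))
```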
(1.22) Exercise. If for some $\varepsilon > 0$, $E\left[\exp\left(\left(\frac12 + \varepsilon\right)\langle M, M\rangle_t\right)\right] < \infty$ for every $t$, prove, using only Hölder's inequality and elementary computations, that $\mathscr{E}(M)$ is a martingale.
(1.23) Exercise. Let $B$ be the standard linear BM. For any stopping time $T$ such that $E\left[\exp\left(\frac12 T\right)\right] < \infty$, prove that
$$E\left[\exp\left(B_T - \tfrac12 T\right)\right] = 1.$$
(1.24) Exercise. 1°) Let $B$ be the standard linear BM and prove that
$$T = \inf\{t : B_t^2 = 1 - t\}$$
is a stopping time such that $P[0 < T < 1] = 1$.
2°) Set $H_s = -2B_s \cdot 1_{(s \le T)}/(1 - s)^2$ and prove that for every $t$,
$$\int_0^t H_s^2\, ds < \infty \quad \text{a.s.}$$
3°) If $M_t = \int_0^t H_s\, dB_s$, compute $M_t - \frac12 \langle M, M\rangle_t + (1 - t \wedge T)^{-2} B_{T \wedge t}^2$.
4°) Prove that $E\left[\mathscr{E}(M)_1\right] < 1$ and hence that $\mathscr{E}(M)_t$, $t \in [0, 1]$, is not a martingale.
#
(1.25) Exercise. Let $(\mathscr{F}_t)$ be a filtration such that every $(\mathscr{F}_t)$-martingale is continuous (see Sect. 3 Chap. V). If $H^n$ is a sequence of predictable processes converging a.s. to a process $H$ and such that $|H^n| \le K$ where $K$ is a locally bounded predictable process, prove that, for every $t$ and for every continuous $(\mathscr{F}_t)$-semimartingale $X$,
$$P\text{-}\lim_{n \to \infty} \int_0^t H^n\, dX = \int_0^t H\, dX.$$
[Hint: Use the probability measure $Q = P(\cdot \cap \Gamma)/P(\Gamma)$ where $\Gamma$ is a suitable set on which the processes $H^n$ are uniformly bounded.]
(1.26) Exercise. (Continuation of Exercise (5.15) of Chap. IV). Prove that $N$ has the $(\mathscr{F}_t^Y)$-PRP. As a result every $(\mathscr{F}_t^Y)$-local martingale is continuous.
[Hint: Start with a bounded $h$ and change the law in order that $Y$ become a BM.]
*
(1.27) Exercise. 1°) Let $(P, Q)$ be a Girsanov pair relative to a filtration $(\mathscr{F}_t)$. Prove that if $M$ is a $P$-cont. loc. mart. which has the $(\mathscr{F}_t)$-PRP, then $G_P^Q(M)$ has also the $(\mathscr{F}_t)$-PRP. It is shown in Exercise (3.12) Chap. IX that this does not extend to the purity property.
2°) Let $B = (B^1, B^2)$ be a BM$^2$ and set
$$X_t = B_t^1 + \int_0^t B_s^2\, ds.$$
Prove that $B^1$ is not adapted to $(\mathscr{F}_t^X)$.
[Hint: Use $Q = \mathscr{E}\left(-\int_0^\cdot B_s^2\, dB_s^1\right) \cdot P$ to prove that there is a BM$^1$ which has the $(\mathscr{F}_t^X)$-PRP.]
(1.28) Exercise. In the notation of this section, assume that $Q \ll P$ on $\mathscr{F}_\infty$ and that $D$ is continuous. If $dQ/dP$ is in $L^2(P)$ prove that (in the notation of Exercise (4.13) of Chap. IV) any semimartingale of $\mathscr{H}^2(P)$ belongs to $\mathscr{H}^1(Q)$.
(1.29) Exercise. Prove that if $\langle M, M\rangle_\infty = \infty$, then $\mathscr{E}(M)_t$ converges a.s. to 0 as $t$ tends to $+\infty$, hence cannot be uniformly integrable.
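Exercise (1.29) is easy to visualize for $M = B$, where $\mathscr{E}(B)_t = \exp(B_t - t/2)$: since $B_t/t \to 0$ a.s., the exponent drifts to $-\infty$, even though $E[\mathscr{E}(B)_t] = 1$ for every $t$. A Monte Carlo sketch (Python with numpy; the times and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# At time t = 1 the empirical mean of E(B)_1 = exp(B_1 - 1/2) is visibly 1 ...
B1 = rng.normal(0.0, 1.0, size=n)
mean_t1 = np.mean(np.exp(B1 - 0.5))

# ... but at t = 50 almost every sample of E(B)_t is already negligible
t = 50.0
Bt = rng.normal(0.0, np.sqrt(t), size=n)
frac_tiny = np.mean(np.exp(Bt - t/2) < 1e-3)
print(round(mean_t1, 2), round(frac_tiny, 2))
```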
(1.30) Exercise. Let $B$ be the standard linear BM, $T_1 = \inf\{t : B_t = 1\}$, and set
$$\tau_t = \frac{t}{1 - t} \wedge T_1 \quad \text{if } t < 1, \qquad \tau_t = T_1 \quad \text{if } t \ge 1.$$
Prove that $M_t = B_{\tau_t}$ is a continuous martingale for which Kazamaki's criterion applies and Novikov's does not.
[Hint: Prove that $\mathscr{E}(-M)$ is not a martingale and observe that Novikov's criterion applies to $M$ if and only if it applies to $-M$.]
*
(1.31) Exercise. Retain the situation and notation of Exercises (3.14) in Chap. II and (3.28) in Chap. III (see also Exercise (1.21) above).
1°) Prove that
$$E\left[\exp\left(B_{\sigma_{1,b}} - \tfrac12 \sigma_{1,b}\right)\right] < 1.$$
2°) Derive therefrom that, for any $\varepsilon > 0$, there exists a continuous martingale $M$ such that $E\left[\exp\left(\left(\frac12 - \varepsilon\right)\langle M, M\rangle_\infty\right)\right] < +\infty$ and $\mathscr{E}(M)$ is not a uniformly integrable martingale.
*
(1.32) Exercise. Let $M \in$ BMO and $M_0 = 0$; using Exercise (1.40) in Chap. IV prove that for any stopping time $T$,
$$E\left[\mathscr{E}(M)_\infty \mathscr{E}(M)_T^{-1} \mid \mathscr{F}_T\right] \ge \exp\left(-\tfrac12 \|M\|_{BMO_2}^2\right).$$
Prove that consequently $\mathscr{E}(M)$ is a uniformly integrable martingale.
*
(1.33) Exercise. For a continuous local martingale $M$ vanishing at 0 and a real number $a$, we set
$$e_t^a = \exp\left\{aM_t + \left(\tfrac12 - a\right)\langle M, M\rangle_t\right\}, \qquad g(a) = \sup\left\{E\left[e_T^a\right];\ T \text{ stopping time}\right\}.$$
1°) For $a \le \beta < 1$, prove that $g(\beta) \le g(a)^{(1-\beta)/(1-a)}$, and that for $1 < a \le \beta$, $g(a) \le g(\beta)^{(a-1)/(\beta-1)}$.
2°) If $a \ne 0$ and $\tau_t = \inf\{s : \langle M, M\rangle_s > t\}$, then $\mathscr{E}(aM)$ is a uniformly integrable martingale if and only if
$$\lim_{t \to \infty} E\left[\mathscr{E}(aM)_{\tau_t} 1_{(\tau_t < \infty)}\right] = 0.$$
3°) If $a \ne 1$ and $g(a) < \infty$, then $\mathscr{E}(aM)$ and $\mathscr{E}(M)$ are uniformly integrable martingales.
4°) If $M \in$ BMO, then $g(a) < \infty$ for some $a \ne 1$.
#
(1.34) Exercise. Let $\rho_t$ be the modulus of the planar BM started at $a \ne 0$. Prove that $L_t = \log(\rho_t/|a|)$ is a local martingale for which Kazamaki's criterion applies (on the interval $[0, t]$) and Novikov's criterion does not.
*
(1.35) Exercise. Assume that the filtration $(\mathscr{F}_t)$ is such that all $(\mathscr{F}_t)$-martingales are continuous. Let $X$ and $Y$ be two continuous semimartingales. Prove that their martingale parts are equal on any set $\Gamma \in \mathscr{F}_\infty$ on which $X - Y$ is of finite variation. (This is actually true in full generality but cannot be proved with the methods of this book.)
[Hint: Use $Q = P(\cdot \cap \Gamma)/P(\Gamma)$.]
*
(1.36) Exercise. Let $P$ be the Wiener measure on $\Omega = C([0, 1], \mathbb{R})$, $\mathscr{F}_t = \sigma(\omega(s), s \le t)$ and $b$ be a bounded predictable process. We set
$$Q = \exp\left\{\int_0^1 b(s, \omega)\, d\omega(s) - \frac12 \int_0^1 b^2(s, \omega)\, ds\right\} \cdot P$$
and $e(\omega)_t = \omega(t) - \int_0^t b(s, \omega)\, ds$.
Prove that if $(M_t, t \le 1)$ is a $(\mathscr{F}_t, P)$-martingale then $(M_t \circ e, t \le 1)$ is a $(\mathscr{F}_t, Q)$-martingale. For instance if $h$ is a function of class $C^{2,1}$ such that $\frac12 \frac{\partial^2 h}{\partial x^2} + \frac{\partial h}{\partial t} = 0$ then $h(e(\omega)_t, t)$ is a $(\mathscr{F}_t, Q)$-martingale.
[Hint: If $\widetilde{\mathscr{F}}_t = \sigma(e(\omega)_s, s \le t)$, one can use the representation theorem to prove that every $(\widetilde{\mathscr{F}}_t, Q)$-martingale is a $(\mathscr{F}_t, Q)$-martingale.]
*
(1.37) Exercise. Let $(\Omega, \mathscr{F}_t, P)$ be a filtered space such that every $(\mathscr{F}_t, P)$-martingale is continuous. Let $X$ be a $(\mathscr{F}_t)$-adapted continuous process and $\Pi$ the set of probability measures $Q$ such that
i) $Q|_{\mathscr{F}_t} \sim P|_{\mathscr{F}_t}$ for every $t$,
ii) $X$ is a $(\mathscr{F}_t, Q)$-local martingale.
1°) Show that if $\Pi$ is non-empty, then $X$ is a $P$-semimartingale with canonical decomposition $X = M + A$ such that $dA \ll d\langle M, M\rangle$ a.s.
2°) Conversely if under $P$ the condition of 1°) is satisfied we call $h$ a good version of $dA/d\langle M, M\rangle$ (see Sect. 3 Chap. V). Assume that one can define a probability measure $Q$ on $\mathscr{F}_\infty$ by setting
$$Q = \mathscr{E}(-h \cdot M)_t \cdot P \quad \text{on each } \mathscr{F}_t;$$
prove then that $Q$ is in $\Pi$.
3°) If the conditions in 2°) are satisfied, describe the set $\Pi$. Prove that $Q$ is the only element of $\Pi$ if and only if $M$ has the PRP under $P$.
* (1.38) Exercise. Let $P$ and $Q$ be two probability measures on a space with a filtration $(\mathscr{F}_t)$ which is right-continuous and complete for both $P$ and $Q$. Let $D_t$ be the martingale density of $Q$ with respect to $2^{-1}(P + Q)$. Remark that $0 \le D_t \le 2$, $P + Q$-a.s. and set $T = \inf\{t : D_t = 2\}$.
1°) Prove that $P[T = \infty] = 1$ and that the Lebesgue decomposition of $Q$ with respect to $P$ on $\mathscr{F}_t$ may be written
$$Q(B) = \int_B Z_t\, dP + Q\left(B \cap (T \le t)\right)$$
where $Z_t = D_t/(2 - D_t)$ on $\{t < T\}$, $Z_t = 0$ on $\{t \ge T\}$. Prove that $Z$ is a positive $P$-supermartingale and if $(T', Z')$ is another pair with the same properties, then $T' = T$ $Q$-a.s. and $Z' = Z$ up to $P$-equivalence.
2°) Assume that $D$ is continuous and prove Girsanov's theorem for $P$ and $Q$ and the filtration $(\mathscr{F}_{t \wedge T})$.
(1.39) Exercise. Let $H$ be a predictable process with respect to the Brownian filtration such that $0 < c \le H \le C < \infty$ for two constants $c$ and $C$. For any $f$ in $L^2_{loc}(\mathbb{R}_+, ds)$, prove that
$$\exp\left(\left(c^2/2\right) \int_0^t f(s)^2\, ds\right) \le E\left[\exp\left(\int_0^t f(s) H_s\, dB_s\right)\right] \le \exp\left(\left(C^2/2\right) \int_0^t f(s)^2\, ds\right).$$
[Hint: Apply Novikov's criterion.]
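When $H$ is the constant process $H \equiv h$ with $c \le h \le C$ and $f$ is deterministic, the middle expectation is Gaussian and equals $\exp\left((h^2/2)\int_0^t f(s)^2\, ds\right)$, which indeed sits between the two bounds. A Monte Carlo sketch (Python with numpy; $f(s) = \cos s$ and the constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
c, h, C, t, n = 0.5, 1.0, 2.0, 1.0, 400_000  # c <= H = h <= C, horizon t

# int_0^t f(s)^2 ds for f(s) = cos(s), by the trapezoidal rule
s = np.linspace(0.0, t, 20_001)
f2 = np.cos(s)**2
int_f2 = np.sum((f2[1:] + f2[:-1]) / 2) * t / 20_000

# For deterministic f and H = h, the Wiener integral int f H dB is N(0, h^2 int_f2)
I = rng.normal(0.0, h * np.sqrt(int_f2), size=n)
estimate = np.mean(np.exp(I))

lower = np.exp(c**2 / 2 * int_f2)
upper = np.exp(C**2 / 2 * int_f2)
print(lower < estimate < upper)  # -> True
```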
* (1.40) Exercise (Another Novikov-type criterion). Let $B$ be a $(\mathscr{F}_t)$-BM and $H$ an adapted process such that
$$E\left[\exp\left(aH_s^2\right)\right] \le c,$$
for every $s \le t$ and two constants $a$ and $c > 0$.
Prove that for $r \le s \le t$ and $s - r$ sufficiently small,
$$E\left[\left(\mathscr{E}(H \cdot B)_s / \mathscr{E}(H \cdot B)_r\right) \mid \mathscr{F}_r\right] = 1,$$
and conclude that $E\left[\mathscr{E}(H \cdot B)_t\right] = 1$. This applies in particular if $H$ is a Gaussian process.
[Hint: Use a truncation of $H$ and pass to the limit using Novikov's criterion and Jensen's inequality.]
(1.41) Exercise (A converse to Theorem (1.12)). Let $P$ be a probability measure on $W$ such that, in the notation of this section, the law of $\widetilde X$ under $Q$ is the same as the law of $X$ under $P$, for any $Q$ equivalent to $P$. Prove that $P$ is the law of a Gaussian martingale.
[Hint: For any positive functional $F$,
$$E_P\left[F\left(\langle X, X\rangle\right)(dQ/dP)\right] = E_P\left[F\left(\langle X, X\rangle\right)\right].]$$
§2. Application of Girsanov's Theorem
to the Study of Wiener Space
This section is a collection of results on Wiener space which may seem to be
loosely related to one another but are actually linked by the use of Girsanov's
transformation and the ubiquitous part played by the Cameron-Martin space (reproducing kernel Hilbert space of Exercise (3.12) in Chap. I) and the so-called
action functional of BM.
We restrict the time interval to $[0, T]$ for a positive real $T$ and we will consider the Wiener space $W$ of $\mathbb{R}^d$-valued continuous functions on $[0, T]$ vanishing at 0. We endow $W$ with the topology of uniform convergence; the corresponding norm is denoted by $\|\ \|_\infty$ or simply $\|\ \|$. The Wiener measure will be denoted by $W$ and the coordinate mappings by $\beta_t$, $0 \le t \le T$. As $W$ is a vector space, we can perform translations in $W$; for $h \in W$, we call $\tau_h$ the map defined by
$$\tau_h(w) = w + h.$$
Let $W_h$ be the image of $W$ under $\tau_h$. By definition, for any finite set of reals $t_i \le T$ and Borel sets $A_i \subset \mathbb{R}^d$, we have
$$W_h\left[\beta_{t_i} \in A_i,\ 1 \le i \le n\right] = W\left[\beta_{t_i} + h(t_i) \in A_i,\ 1 \le i \le n\right];$$
thus a probability measure $Q$ is equal to $W_h$ if and only if, under $Q$, $\beta = B + h$ where $B$ is a standard BM$^d$.
We are going to investigate the conditions under which Wh is equivalent to
W. For this purpose, we need the following
(2.1) Definition. The space $H$ of functions $h$ defined on $[0, T]$ with values in $\mathbb{R}^d$, such that each component $h^i$ is absolutely continuous, $h^i(0) = 0$, and
$$\sum_i \int_0^T \left(h^{i\prime}(s)\right)^2 ds < \infty,$$
is called the Cameron-Martin space.
For $d = 1$, this space was introduced in Exercise (1.12) of Chap. I (see also Exercise (3.12) in the same chapter). The space $H$ is a Hilbert space for the scalar product
$$\langle h, k\rangle = \int_0^T h'(s) \cdot k'(s)\, ds$$
and we will denote by $\|h\|_H$ the Hilbert norm $\langle h, h\rangle^{1/2}$. It is easy to prove, by taking linear approximations, that $H$ is dense in $W$ for the topology of uniform convergence.
If $h$ is in $H$, the martingale
$$(h' \cdot \beta)_t = \sum_i \int_0^t h^{i\prime}(s)\, d\beta_s^i,$$
where the stochastic integrals are taken under $W$, satisfies Novikov's criterion (1.15) and therefore $\mathscr{E}(h' \cdot \beta)$ is a martingale. This can also be derived directly from the fact that $h' \cdot \beta$ is a Gaussian martingale, and has actually been widely used in Chap. V.
(2.2) Theorem. The probability measure $W_h$ is equivalent to $W$ if and only if $h \in H$, and then $W_h = \mathscr{E}(h' \cdot \beta)_T \cdot W$.
This may also be stated: $W$ is quasi-invariant with respect to $H$ and the Radon-Nikodym derivative is equal to $\mathscr{E}(h' \cdot \beta)_T$.
Proof. If $W_h$ is equivalent to $W$, then by the results in Sect. 3 Chap. V and the last section, there is a continuous martingale $M$ such that $W_h = \mathscr{E}(M)_T \cdot W$; by Girsanov's theorem, we have $\beta^i = \tilde{\beta}^i + \langle \beta^i, M\rangle$ where $\tilde{\beta}$ is a $W_h$-Brownian motion. But by definition of $W_h$, we also have $\beta = B + h$ where $B$ is a $W_h$-BM. Under $W_h$, the function $h$ is therefore a deterministic semimartingale; by Exercise (1.38) (see also Exercise (2.19)) Chap. IV, its variation is bounded. Thus, under $W_h$, we have two decompositions of the semimartingale $\beta$ in its own filtration; by the uniqueness of such decompositions, it follows that $h^i(t) = \langle \beta^i, M\rangle_t$ a.s.
Chapter VIII. Girsanov's Theorem and First Applications
Now by the results in Sect. 3 of Chap. V, we know that there is a predictable process $\phi$ such that $\int_0^T |\phi_s|^2\, ds < \infty$ $W$-a.s. and $M_t = \sum_i \int_0^t \phi_s^i\, d\beta_s^i$. Consequently, we get $h^i(t) = \int_0^t \phi_s^i\, ds$. This proves on the one hand that $h \in H$, on the other hand that $\phi^i$ can be taken equal to the deterministic function $h^{i\prime}$, whence the last equality in the statement follows.
Conversely, if $h \in H$, the measure $Q = \mathscr{E}(h' \cdot \beta)_T \cdot W$ is equivalent to $W$ and under $Q$ we have
$$\beta_t^i = B_t^i + \int_0^t h^{i\prime}(s)\, ds = B_t^i + h^i(t)$$
where $B$ is a BM, which shows that $Q = W_h$. $\square$
Remark. Whatever $h$ may be, the process $\beta$ is, under $W_h$, a Gaussian process and therefore, by a general result, $W_h$ and $W$ are either equivalent or mutually singular. Thus if $h \notin H$, then $W$ and $W_h$ are mutually singular.
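Theorem (2.2) lends itself to a quick numerical sanity check: weighting Brownian paths by the density $\mathscr{E}(h'\cdot\beta)_T$ must reproduce expectations under the translated measure $W_h$. The following Monte Carlo sketch (all parameters — the direction $h(t) = t$, the step and sample counts — are arbitrary choices, not from the text) checks $E_W[\mathscr{E}(h'\cdot\beta)_T\,\beta_{T/2}] = h(T/2)$.

```python
import math, random

random.seed(7)

T, n = 1.0, 50
dt = T / n
h_prime = 1.0   # h(t) = t, an element of the Cameron-Martin space H with I_T(h) = 1/2

def sample():
    """One discretized path of beta under W; returns beta_{T/2} and the Girsanov density."""
    b = log_d = half = 0.0
    for i in range(n):
        db = random.gauss(0.0, math.sqrt(dt))
        b += db
        log_d += h_prime * db - 0.5 * h_prime ** 2 * dt   # log E(h'.beta)_T, built stepwise
        if (i + 1) * dt <= T / 2:
            half = b
    return half, math.exp(log_d)

N = 20000
est = sum(h * d for h, d in (sample() for _ in range(N))) / N
# Under W_h = E(h'.beta)_T . W, beta is a BM plus the drift h, so the estimate should be h(T/2) = 0.5
print(est)
```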
Let us now recall (see Sect. 1 Chap. XIII) that the $\sigma$-algebra generated by $\beta$ is the Borel $\sigma$-algebra of $W$ and that the support of a measure is the smallest closed subset which carries this measure.
(2.3) Corollary. The support of the measure $W$ is the whole space $W$.
Proof. The support contains at least one point; by the previous results, it contains all the translates of that point by elements of $H$. One concludes from the density of $H$ in $W$. $\square$
We next supplement the representation Theorem (3.4) of Chap. V, which we just used in the proof of Theorem (2.2). Let $(\mathscr{F}_t)$ be the $W$-complete filtration generated by $\beta$; if $X$ is in $L^2(\mathscr{F}_T, W)$, then
$$X = E[X] + \int_0^T \phi_s\, d\beta_s$$
for a predictable process $\phi$ with $E\left[\int_0^T |\phi_s|^2\, ds\right] < \infty$. On the other hand, $X$ is a.s. equal to a function, say $F$, of the path $w \in W$ such that $E\left[F(w)^2\right] < \infty$. Our aim is to compute $\phi$ as a function of $F$.
If $\psi \in W$, $w + \psi$ is also in $W$ and $F(w + \psi)$ makes perfect sense. We will also sometimes write $F(\beta)$ instead of $F(w)$. We will henceforth assume that $F$ enjoys the following properties:
i) there exists a constant $K$ such that
$$|F(\beta + \psi) - F(\beta)| \le K\|\psi\|$$
for every $\psi \in W$;
ii) there exists a kernel $F'$ from $\Omega$ to $[0, T]$ such that for every $\psi \in H$
$$\lim_{\varepsilon \to 0} \varepsilon^{-1}\left(F(\beta + \varepsilon\psi) - F(\beta)\right) = \int_0^T F'(\beta, dt)\, \psi(t), \quad \text{for a.e. } \beta.$$
It is worth recording that condition i) implies the integrability condition $E[F(\beta)^2] < \infty$. Moreover, if $F$ is a differentiable function with bounded derivative on the Banach space $W$, then, by the mean value theorem, condition i) is in force and since a continuous linear form on $W$ is a bounded measure on $[0, T]$, ii) is also satisfied.
Under conditions i) and ii), we have Clark's formula:
(2.4) Theorem. The process $\phi$ is the predictable projection of $F'(\beta,\, ]t, T])$.
Proof. Let $u$ be a bounded $(\mathscr{F}_t)$-predictable process and set
$$\psi_t = \int_0^t u_s\, ds, \qquad \tilde{u}_t = \int_0^t u_s\, d\beta_s;$$
for each $w$, the map $t \to \psi_t(w)$ belongs to $W$.
For $\varepsilon > 0$, $\mathscr{E}(\varepsilon\tilde{u})_t$, $t \le T$, is, by Novikov's criterion, a uniformly integrable martingale and if $Q = \mathscr{E}(\varepsilon\tilde{u})_T \cdot W$, then, under $Q$, the process $\beta - \varepsilon\psi$ is a BM. As a result $E[F(\beta)] = \int_W F(\beta - \varepsilon\psi)\, dQ$, where, we recall, $E$ is the expectation with respect to $W$, and consequently
$$E[F(\beta)] = E\left[F(\beta - \varepsilon\psi)\, \mathscr{E}(\varepsilon\tilde{u})_T\right].$$
This may be rewritten
$$E\left[\left(F(\beta - \varepsilon\psi) - F(\beta)\right)\left(\mathscr{E}(\varepsilon\tilde{u})_T - 1\right)\right] + E\left[F(\beta - \varepsilon\psi) - F(\beta)\right] + E\left[F(\beta)\left(\mathscr{E}(\varepsilon\tilde{u})_T - 1\right)\right] = 0.$$
Let us divide by $\varepsilon$ and let $\varepsilon$ tend to zero. By condition i), the modulus of the first term is majorized by $KT\|u\|\, E\left[\left|\mathscr{E}(\varepsilon\tilde{u})_T - 1\right|\right]$ and this converges to zero as $\varepsilon$ goes to zero; indeed, $\mathscr{E}(\varepsilon\tilde{u})_T$ converges to $1$ pointwise and $\mathscr{E}(\varepsilon\tilde{u})_T \le \exp\left(\varepsilon|\tilde{u}_T|\right)$. Moreover we will show at the end of the proof that, for $\alpha$ sufficiently small, $E\left[\exp\left(\alpha\tilde{u}_T^*\right)\right] < \infty$, where $\tilde{u}_T^* = \sup_{s \le T}|\tilde{u}_s|$. Thus, the necessary domination condition obtains and one may pass to the limit in the expectation. In the second term, conditions i) and ii) allow us to use the dominated convergence theorem. Finally
$$\varepsilon^{-1}\left(\mathscr{E}(\varepsilon\tilde{u})_T - 1\right) = \int_0^T \mathscr{E}(\varepsilon\tilde{u})_s\, u_s\, d\beta_s$$
and $\mathscr{E}(\varepsilon\tilde{u})u$ converges to $u$ in $L^2(dW \otimes ds)$. Indeed, by Doob's inequality,
$$E\left[\sup_{s \le T}\left(\mathscr{E}(\varepsilon\tilde{u})_s - 1\right)^2\right] \le 4\, E\left[\left(\mathscr{E}(\varepsilon\tilde{u})_T - 1\right)^2\right]$$
and this last term converges to zero as $\varepsilon$ tends to zero because the integrand is dominated by $\left(\exp\left(\alpha\tilde{u}_T^*\right) + 1\right)^2$ for $\varepsilon \le \alpha$. By the $L^2$-isometry property of stochastic integrals, it follows that $\varepsilon^{-1}\left(\mathscr{E}(\varepsilon\tilde{u})_T - 1\right)$ converges in $L^2(W)$ to $\int_0^T u_s\, d\beta_s$.
The upshot of this convergence result is that
$$E\left[F(\beta)\int_0^T u_s\, d\beta_s\right] = E\left[\int_0^T F'(\beta, dt)\, \psi_t\right],$$
hence, using integration by parts in the right-hand side,
$$E\left[F(\beta)\int_0^T u_s\, d\beta_s\right] = E\left[\int_0^T u_t\, F'(\beta,\, ]t, T])\, dt\right].$$
The result now follows from the definition of a predictable projection.
It remains to prove the integrability property we have used twice above. But, by the DDS Theorem, there is a BM $\gamma$ such that $\tilde{u} = \gamma(\langle \tilde{u}, \tilde{u}\rangle)$ and since $\langle \tilde{u}, \tilde{u}\rangle_T \le c$ for some constant $c$, we get
$$E\left[\exp\left(\alpha\tilde{u}_T^*\right)\right] \le E\left[\exp\left(\alpha\sup_{t \le c}|\gamma_t|\right)\right] < \infty$$
for $\alpha$ sufficiently small. $\square$
Remark. If one wants to avoid the use of predictable projections, one can state the above result by saying that $\phi$ is equal to the projection in the Hilbert space $L^2(dW \otimes dt)$ of the process $F'(\beta,\, ]t, T])$ on the subspace of equivalence classes of predictable processes (see Exercise (5.18) Chap. IV).
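Clark's formula can be made concrete on the functional $F(w) = \int_0^T w(s)\,ds$: here $F'(\beta, dt) = dt$, so $\phi_t = T - t$ and $\int_0^T \beta_s\,ds = \int_0^T (T-s)\,d\beta_s$, whose variance is $\int_0^T (T-t)^2\,dt = T^3/3$. A small simulation checking this variance (discretization and sample size are arbitrary choices):

```python
import math, random

random.seed(1)
T, n, N = 1.0, 100, 20000
dt = T / n

def integral_of_path():
    b, acc = 0.0, 0.0
    for _ in range(n):
        acc += b * dt                      # left-point rule for \int_0^T beta_s ds
        b += random.gauss(0.0, math.sqrt(dt))
    return acc

samples = [integral_of_path() for _ in range(N)]
var = sum(x * x for x in samples) / N      # the mean is 0
print(var)  # Clark's formula predicts Var = \int_0^T (T-t)^2 dt = T^3/3
```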
The remainder of this section will be devoted to asymptotic results on Brownian
motion.
(2.5) Definition. The functional $I_T$ on $W$ defined by
$$I_T(\phi) = \begin{cases} \dfrac{1}{2}\displaystyle\int_0^T \phi'(s)^2\, ds = \dfrac{1}{2}\|\phi\|_H^2 & \text{if } \phi \in H, \\[2mm] +\infty & \text{otherwise}, \end{cases}$$
is called the action functional of Brownian motion.
We will often drop $T$ from the notation. We begin with a few remarks about $I_T$.
(2.6) Lemma. A real-valued function $\phi$ on $[0, T]$ is absolutely continuous with derivative $\phi'$ in $L^2([0, T])$ if and only if
$$M(\phi) = \sup \sum_i \left(\phi(t_{i+1}) - \phi(t_i)\right)^2 \big/ (t_{i+1} - t_i),$$
where the supremum is taken on all finite subdivisions of $[0, T]$, is finite, and in that case $M(\phi) = \int_0^T \phi'(s)^2\, ds$.
Proof. If $\phi' \in L^2$, a simple application of the Cauchy-Schwarz inequality proves that
$$M(\phi) \le \int_0^T \phi'(s)^2\, ds.$$
Suppose conversely that $M(\phi) < \infty$; for disjoint intervals $(\alpha_i, \beta_i)$ of $[0, T]$, the Cauchy-Schwarz inequality again gives us
$$\sum_i |\phi(\beta_i) - \phi(\alpha_i)| \le M(\phi)^{1/2}\left(\sum_i (\beta_i - \alpha_i)\right)^{1/2},$$
which proves that $\phi$ is absolutely continuous. If $(a_i^n)$ is the $n$-th dyadic partition of $[0, T]$, we know, by an application of martingale theory (Exercise (2.13) Chap. II), that the derivative $\phi'$ of $\phi$ is the limit a.e. of the functions
$$\psi_n(s) = \sum_i (2^n/T)\left(\phi(a_{i+1}^n) - \phi(a_i^n)\right) 1_{[a_i^n, a_{i+1}^n[}(s).$$
By Fatou's lemma, it follows that
$$\int_0^T \phi'(s)^2\, ds \le \varliminf_n \int_0^T \left(\psi_n(s)\right)^2 ds \le M(\phi). \qquad \square$$
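Lemma (2.6) can be watched at work on a smooth function: each subdivision sum is bounded by $\int_0^T \phi'(s)^2\,ds$ (Cauchy-Schwarz), and the dyadic sums converge to that integral. A short sketch with $\phi(t) = t^2$ on $[0, 1]$, for which the limit is $4/3$:

```python
def dyadic_sum(phi, level, T=1.0):
    """Sum of (phi(t_{i+1}) - phi(t_i))^2 / (t_{i+1} - t_i) over the dyadic partition."""
    n = 2 ** level
    dt = T / n
    return sum((phi((i + 1) * dt) - phi(i * dt)) ** 2 / dt for i in range(n))

phi = lambda t: t * t          # phi'(t) = 2t, so the limit is \int_0^1 4 t^2 dt = 4/3
vals = [dyadic_sum(phi, k) for k in range(1, 12)]
print(vals[-1])
```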
(2.7) Proposition. i) The functional $\phi \to I_T(\phi)$ is lower semi-continuous on $W$.
ii) For any $\lambda \ge 0$, the set $K_\lambda = \{\phi : I_T(\phi) \le \lambda\}$ is compact in $W$.
Proof. The result can be proved componentwise so it is enough to prove it for $d = 1$. Moreover, i) is an easy consequence of ii). To prove ii), we pick a sequence $\{\phi_n\}$ converging to $\phi$ in $W$ and such that $I_T(\phi_n) \le \lambda$ for every $n$. It is easy to see that $M(\phi) \le \sup_n M(\phi_n) \le 2\lambda$, which, by the lemma, proves our claim. $\square$
Our first asymptotic result is known as the theorem of large deviations for Brownian motion. We will need the following definition.
(2.8) Definition. The Cramer transform of a set $A \subset W$ is the number $\Lambda_T(A)$ $(\le \infty)$ defined by
$$\Lambda_T(A) = \inf\{I_T(\phi) : \phi \in A\}.$$
In what follows, $T$ will be fixed and so will be dropped from the notation. We will be interested in the probability $W[\varepsilon\beta \in A]$ as $\varepsilon$ goes to zero. Clearly, if $A$ is closed and if the function $0$ does not belong to $A$, it will go to zero and we shall determine the speed of this convergence. We will need the following lemmas known as the Ventcell-Freidlin estimates, where $B(\phi, \delta)$ denotes the open ball centered at $\phi$ and with radius $\delta$ in $W$.
(2.9) Lemma. If $\phi \in H$, for any $\delta > 0$,
$$\varliminf_{\varepsilon \to 0}\ \varepsilon^2 \log W[\varepsilon\beta \in B(\phi, \delta)] \ge -I(\phi).$$
Proof. By Theorem (2.2),
$$W[\varepsilon\beta \in B(\phi, \delta)] = W\left[\beta - \varepsilon^{-1}\phi \in B(0, \delta\varepsilon^{-1})\right] = \int_{B(0, \delta\varepsilon^{-1})} \exp\left\{-\frac{1}{\varepsilon}\int_0^T \phi'(s)\, d\beta_s - \frac{1}{2\varepsilon^2}\int_0^T \phi'(s)^2\, ds\right\} dW$$
$$= \exp\left(-\varepsilon^{-2} I(\phi)\right)\int_{B(0, \delta\varepsilon^{-1})} \exp\left(-\frac{1}{\varepsilon}\int_0^T \phi'(s)\, d\beta_s\right) dW.$$
Now on the one hand $W\left[B(0, \delta\varepsilon^{-1})\right] \ge 3/4$ for $\varepsilon$ sufficiently small; on the other hand, Tchebicheff's inequality implies that
$$W\left[\int_0^T \phi'(s)\, d\beta_s > 2\left(2 I(\phi)\right)^{1/2}\right] \le 1/4,$$
which implies
$$W\left[B(0, \delta\varepsilon^{-1}) \cap \left\{\int_0^T \phi'(s)\, d\beta_s \le 2\left(2 I(\phi)\right)^{1/2}\right\}\right] \ge 1/2.$$
As a result,
$$W[\varepsilon\beta \in B(\phi, \delta)] \ge \frac{1}{2}\exp\left(-\varepsilon^{-2} I(\phi) - 2\varepsilon^{-1}\left(2 I(\phi)\right)^{1/2}\right)$$
for $\varepsilon$ sufficiently small. The lemma follows immediately. $\square$
(2.10) Lemma. For any $\delta > 0$,
$$\varlimsup_{\varepsilon \to 0}\ \varepsilon^2 \log W\left[\rho(\varepsilon\beta, K_\lambda) \ge \delta\right] \le -\lambda$$
where $\rho$ is the distance in $W$.
Proof. Set $a = T/n$ and let $0 = t_0 < t_1 < \cdots < t_n = T$ be the subdivision of $[0, T]$ such that $t_{k+1} - t_k = a$. Let $U$ be the function which componentwise is affine between $t_k$ and $t_{k+1}$ and equal to $\varepsilon\beta$ at these times. We have
$$W\left[\rho(\varepsilon\beta, U) \ge \delta\right] \le \sum_{k=1}^n W\left[\max_{t_{k-1} \le t \le t_k}\left|\varepsilon\beta_t - U_t\right| \ge \delta\right] = n\, W\left[\max_{0 \le t \le a}\left|\varepsilon\beta_t - \frac{t}{a}\varepsilon\beta_a\right| \ge \delta\right]$$
$$= n\, W\left[\max_{0 \le t \le a}\left|\beta_t - \frac{t}{a}\beta_a\right| \ge \delta\varepsilon^{-1}\right] \le n\, W\left[\max_{0 \le t \le a}|\beta_t| \ge (2\varepsilon)^{-1}\delta\right]$$
because $\left\{\max_{0 \le t \le a}|\beta_t| \ge (2\varepsilon)^{-1}\delta\right\} \supset \left\{\max_{0 \le t \le a}\left|\beta_t - \frac{t}{a}\beta_a\right| \ge \delta\varepsilon^{-1}\right\}$, and by the exponential inequality (Proposition (1.8) Chap. II), this is still less than
$$2nd \exp\left(-n\delta^2/8\varepsilon^2 d T\right).$$
If $n$ is sufficiently large, we thus get
$$\varlimsup_{\varepsilon \to 0}\ \varepsilon^2 \log W\left[\rho(\varepsilon\beta, U) \ge \delta\right] \le -\lambda.$$
Let us fix such an $n$; then because
$$\left\{\rho(\varepsilon\beta, K_\lambda) \ge \delta\right\} \subset \left\{I(U) > \lambda\right\} \cup \left\{\rho(\varepsilon\beta, U) \ge \delta\right\},$$
it is enough, to complete the proof, to prove that
$$\varlimsup_{\varepsilon \to 0}\ \varepsilon^2 \log W\left[I(U) > \lambda\right] \le -\lambda.$$
But
$$I(U) = \frac{\varepsilon^2}{2}\sum_{i=1}^{nd} \eta_i^2,$$
where the $\eta_i$ are independent standard Gaussian variables. For every $b \in\, ]0, 1[$, $E\left[\exp\left(\frac{1-b}{2}\eta_1^2\right)\right] = C_b < \infty$, and therefore, by the Markov inequality,
$$W\left[I(U) > \lambda\right] = W\left[\exp\left(\frac{1-b}{2}\sum_{i=1}^{nd}\eta_i^2\right) > \exp\left(\frac{(1-b)\lambda}{\varepsilon^2}\right)\right] \le C_b^{nd}\exp\left(-\frac{(1-b)\lambda}{\varepsilon^2}\right),$$
whence $\varlimsup_{\varepsilon \to 0} \varepsilon^2 \log W[I(U) > \lambda] \le -(1-b)\lambda$ and, as $b$ is arbitrary, the proof of the lemma is complete. $\square$
We can now state
(2.11) Theorem (Large deviations). For a Borel set $A \subset W$,
$$-\Lambda(\mathring{A}) \le \varliminf_{\varepsilon \to 0}\ \varepsilon^2 \log W[\varepsilon\beta \in A] \le \varlimsup_{\varepsilon \to 0}\ \varepsilon^2 \log W[\varepsilon\beta \in A] \le -\Lambda(\bar{A}).$$
Proof. If $A$ is open, for any $\phi \in H \cap A$ and $\delta$ sufficiently small,
$$W[\varepsilon\beta \in A] \ge W\left[\rho(\varepsilon\beta, \phi) < \delta\right],$$
and so, by Lemma (2.9),
$$\varliminf_{\varepsilon \to 0}\ \varepsilon^2 \log W[\varepsilon\beta \in A] \ge -\inf\{I(\phi), \phi \in A\} = -\Lambda(A).$$
Let now $A$ be closed; we may suppose $\Lambda(A) > 0$ as the result is otherwise obvious. Assume first that $\Lambda(A) < \infty$. For $\Lambda(A) > \gamma > 0$, the sets $A$ and $K = K_{\Lambda(A)-\gamma}$ are disjoint. Since $K$ is compact, there is a number $\delta > 0$ such that $\rho(\psi, K) \ge \delta$ for every $\psi \in A$; by Lemma (2.10), we consequently get
$$\varlimsup_{\varepsilon \to 0}\ \varepsilon^2 \log W[\varepsilon\beta \in A] \le -\Lambda(A) + \gamma$$
and, since $\gamma$ is arbitrary, the proof is complete. If $\Lambda(A) = +\infty$, the same reasoning applies with $K = K_M$ for arbitrarily large $M$. $\square$
We will now apply the preceding theorem to the proof of a beautiful result known as Strassen's functional law of the iterated logarithm. In what follows, we set $g(n) = (2n\log_2 n)^{-1/2}$, $n \ge 2$ (where $\log_2$ denotes the iterated logarithm $\log\log$), and $X_n(t) = g(n)\beta_{nt}$, $0 \le t \le T$. For every $w$, we have thus defined a sequence of points in $W$, the asymptotic behavior of which is settled by the following theorem. The unit ball of the Hilbert space $H$, which is equal to the set $K_{1/2}$, will be denoted by $U$.
(2.12) Theorem. For $W$-almost every $w$, the sequence $\{X_n(\cdot, w)\}$ is relatively compact in $W$ and the set of its limit points is the set $U$.
Proof. We first prove the relative compactness. For $\delta > 0$, let $K^\delta$ be the closed set of points $w$ in $W$ such that $\rho(w, U) \le \delta$. Using the semi-continuity of $I_T$, it may be seen that $\Lambda\left((K^\delta)^c\right) > 1/2$, and thus, for fixed $\delta$, we may choose $\gamma$ such that $1 < \gamma < 2\Lambda\left((K^\delta)^c\right)$. Pick $\lambda > 1$ and set $n(m) = [\lambda^m]$; by the scaling property of BM,
$$W\left[X_{n(m)} \notin K^\delta\right] = W\left[\sqrt{n(m)}\, g(n(m))\, \beta \notin K^\delta\right],$$
thus by Theorem (2.11), for $m$ sufficiently large,
$$W\left[X_{n(m)} \notin K^\delta\right] \le \exp\left(-\gamma\log_2 n(m)\right) \le \left((m-1)\log\lambda\right)^{-\gamma}.$$
It follows, by the Borel-Cantelli lemma, that $W$-a.s., $X_{n(m)}$ belongs to $K^\delta$ for $m$ sufficiently large. As this is true for every $\delta$, it follows that the sequence $\{X_{n(m)}\}$ is a.s. relatively compact and that its limit points are in $U$. Clearly, there is a set $B$ of full $W$-measure such that for $w \in B$, all the sequences $X_{n(m)}(w)$, where $\lambda$ ranges through a sequence $S = \{\lambda_l\}$ decreasing to $1$, are relatively compact and have their limit points in $U$. We will prove that the same is true for the whole sequence $\{X_n(w)\}$. This will involve no probability theory and we will drop the $w$ which is fixed throughout the proof.
Let $M = \sup_{h \in U}\|h\|_\infty$ and set $b(t) = t^{1/2}$; observe that
$$\sup_{h \in U}|h(t) - h(s)| \le b(|t - s|)$$
thanks to the Cauchy-Schwarz inequality.
Fix $\lambda \in S$; for any integer $n$, there is an $m$ such that $n(m) \le n < n(m+1)$ and we will write for short $N = n(m+1)$. We want to show that $\rho(X_n, U)$ tends to $0$ as $n$ tends to infinity, which we will do by comparing $X_n$ and $X_N$.
Pick $\delta > 0$ and then choose $\lambda \in S$ sufficiently close to $1$ so that $b\left((1 - 1/\lambda)T\right) < \delta$. These numbers being fixed, for $n$ sufficiently large, we have $\rho(X_N, U) \le \delta$. This entails that $\|X_N\|_\infty \le M + \delta$ and that there is a function $k \in U$ such that $\|X_N - k\|_\infty < \delta$. We may now write
$$\rho(X_n, U) \le \rho(X_N, U) + \rho(X_n, X_N) \le \delta + \sup_t\left|\frac{g(n)}{g(N)} X_N\left(\frac{n}{N}t\right) - X_N(t)\right|$$
$$\le \delta + \left(\frac{g(n)}{g(N)} - 1\right)\|X_N\|_\infty + \sup_t\left|X_N\left(\frac{n}{N}t\right) - X_N(t)\right|$$
$$\le \delta + \left(\frac{g(n)}{g(N)} - 1\right)(M + \delta) + 2\delta + b\left(\left(1 - \frac{n}{N}\right)T\right),$$
where the last step uses $\|X_N - k\|_\infty < \delta$ and the modulus bound $|k(nt/N) - k(t)| \le b\left((1 - n/N)T\right)$ for $k \in U$. But $\left|\frac{g(n)}{g(N)} - 1\right| \le \lambda - 1$ as the reader may check, so that
$$\rho(X_n, U) \le (\lambda - 1)(M + \delta) + 4\delta.$$
Since $M$ is fixed and we could take $\lambda$ and $\delta$ arbitrarily close to $1$ and $0$ respectively, we have proved the relative compactness of the sequence $\{X_n\}$ and that the limit points are in $U$. It remains to prove that all the points of $U$ are limit points of the sequence $\{X_n\}$.
We first observe that there is a countable subset contained in $\{h \in H;\ \|h\|_H < 1\}$ which is dense in $U$ for the distance $\rho$. Therefore, it is enough to prove that if $h \in H$ and $\|h\|_H < 1$,
$$W\left[\varliminf_{n \to \infty} \rho(X_n, h) = 0\right] = 1.$$
To this end, we introduce, for every integer $k$, an operator $L_k$ on $W$ by
$$L_k\phi(t) = 0 \quad \text{if } 0 \le t \le T/k, \qquad L_k\phi(t) = \phi(t) - \phi(T/k) \quad \text{if } T/k \le t \le T.$$
For $k \ge 2$ and $m \ge 1$, we may write
$$\rho\left(X_{k^m}, h\right) \le \sup_{t \le T/k}\left|X_{k^m}(t)\right| + \sup_{t \le T/k}|h(t)| + \left|X_{k^m}(T/k)\right| + |h(T/k)| + \left\|L_k\left(X_{k^m} - h\right)\right\|_\infty.$$
Since $\{X_n\}$ is a.s. relatively compact, by Ascoli's theorem, we may for a.s. every $w$ choose a $k(w)$ such that for $k \ge k(w)$ the first four terms of this inequality are less than any preassigned $\delta > 0$ for every $m$. It remains to prove that for a fixed $k$,
$$W\left[\varliminf_{m \to \infty}\left\|L_k\left(X_{k^m} - h\right)\right\|_\infty = 0\right] = 1.$$
But, on the one hand, the processes $L_k(X_{k^m})$ are independent as $m$ varies; on the other hand, by the invariance properties of BM,
$$W\left[\left\|L_k\left(X_{k^m} - h\right)\right\|_\infty < \delta\right] = W\left[\sup_{t \le T(1-1/k)}\left|g(k^m)\beta_{k^m t} - \bar{h}(t)\right| < \delta\right]$$
where $\bar{h}(t) = h(t + T/k) - h(T/k)$, $0 \le t \le T(1 - 1/k)$. Since
$$\int_0^{T(1-1/k)} \bar{h}'(s)^2\, ds \le \int_0^T h'(s)^2\, ds < 1,$$
the large deviation theorem applied for open sets and on $[0, T(1 - 1/k)]$ instead of $[0, T]$ shows that for $m$ sufficiently large
$$W\left[\left\|L_k\left(X_{k^m} - h\right)\right\|_\infty < \delta\right] \ge \exp\left(-\gamma\log_2 k^m\right) = (m\log k)^{-\gamma}$$
for some $\gamma < 1$. An application of the Borel-Cantelli lemma ends the proof. $\square$
Remark. We have not used the classical law of the iterated logarithm of Chap. II which in fact may be derived from the above result (see Exercise (2.16)).
(2.13) Exercise. 1°) In the setting of Theorem (2.2), prove that for t < T,
It is noteworthy that this derivative may be written without stochastic integrals.
2°) Conversely, prove that $W$ is the only measure on $W$ which is quasi-invariant with respect to $H$ and which admits the above derivatives.
[Hint: Consider functions h with compact support in ]0, t[.]
(2.14) Exercise. 1°) Prove that for the $\mathrm{BM}^d$, $d \ge 2$, absolutely continuous functions are polar in the sense of Exercise (1.20) of Chap. I (see also Exercise (2.26) in Chap. V).
2°) Prove that if $B$ is a Brownian Bridge of dimension $d \ge 2$ between any two points of $\mathbb{R}^d$, then for any $x \in \mathbb{R}^d$,
$$P\left[\exists t \in\, ]0, 1[\,:\ B_t = x\right] = 0.$$
[Hint: Use the transformations of Exercise (3.10) Chap. I.]
3°) If in 2°) we make $d = 2$ and $B_0 = B_1 = 0$, the index $I(x, B)$ of $x \ne 0$ with respect to the curve $B_t$, $0 \le t \le 1$, is well defined. Prove that $P[I(x, B) = n] > 0$ for every $n \in \mathbb{Z}$.
[Hint: Extend the result of Corollary (2.3) to the Brownian Bridge and use it together with the fact that $x$ has the same index with respect to two curves which are homotopic in $\mathbb{R}^2\setminus\{x\}$.]
(2.15) Exercise. 1°) Recall from Exercise (3.26) Chap. III that there is a.s. a unique time $\sigma$ such that $S_1 = B_\sigma$. Prove that
$$S_1 = E[S_1] + \int_0^1 \phi_s\, dB_s,$$
where $\phi_s$ is the predictable projection of $1_{(\sigma > s)}$ (or its $L^2$-projection on the predictable $\sigma$-field).
[Hint: With the notation of Theorem (2.4), prove that $F'(\beta, \cdot) = \varepsilon_\sigma$.]
2°) Prove that there is a right-continuous version of $P\left[\sigma > t \mid \mathscr{F}_t\right]$ which is indistinguishable from $\phi_t$ and conclude that for $\Phi(x) = \int_x^\infty g_1(y)\, dy$,
$$\phi_t = 2\Phi\left((S_t - B_t)/\sqrt{1 - t}\right).$$
[Hint: $P\left[\sigma > t \mid \mathscr{F}_t\right] = P\left[T_a < 1 - t\right]_{a = S_t - B_t}$.]
3°) (Alternative method). Compute $E\left[f(S_1) \mid \mathscr{F}_t\right]$ where $f$ is a positive Borel function and deduce directly the formula of 1°).
(2.16) Exercise. 1°) In the setting of Theorem (2.12), let $\Phi$ be a real-valued continuous function on $W$, and prove that
$$W\left[\varlimsup_n \Phi\left(X_n(\cdot)\right) = \sup_{h \in U}\Phi(h)\right] = 1.$$
2°) Derive therefrom the classical law of the iterated logarithm.
(2.17) Exercise. Let $(\xi_n)$ be a sequence of independent identically distributed real random variables with mean $0$ and variance $1$ and set $S_n = \sum_1^n \xi_k$. Define a process $S_t$ by
$$S_t = (1 - t + [t])S_{[t]} + (t - [t])S_{[t]+1}.$$
Prove that the sequence $X_n(t) = g(n)S_{nt}$, $0 \le t \le T$, has the same property as that of Theorem (2.12).
[Hint: Use the result in Exercise (5.10) of Chap. VI.]
§3. Functionals and Transformations of Diffusion Processes
In the study of diffusions and stochastic differential equations, Girsanov's theorem is used in particular to change the drift coefficient. One reduces SDE's to simpler ones by playing on the drift or, from another view-point, constructs new Markov processes by the addition of a drift. This will be used in Sect. 1 Chap. IX.
In this section, we will make a first use of this idea towards another goal,
namely the computation of the laws of functionals of BM or other processes. We
will give a general principle and then proceed to examples.
The situation we study is that of Sect. 2 Chap. VII. A field $\sigma$ (resp. $b$) of $d \times d$ symmetric matrices (resp. vectors in $\mathbb{R}^d$) being given, we assume that for each $x \in \mathbb{R}^d$ there is a probability measure $P_x$ on $\Omega = C(\mathbb{R}_+, \mathbb{R}^d)$ such that $(\Omega, \mathscr{F}_t^o, X_t, P_x)$ is a diffusion process in the sense of Definition (2.1) of Chap. VII with $a = \sigma\sigma^t$. By Theorem (2.7) Chap. VII, for each $P_x$, there is a Brownian motion $B$ such that
$$X_t = x + \int_0^t \sigma(X_s)\, dB_s + \int_0^t b(X_s)\, ds.$$
We moreover assume that $P_x$ is, for each $x$, the unique solution to the martingale problem $\pi(x, \sigma\sigma^t, b)$.
Suppose now given a pair $(f, F)$ of functions such that $D_t = \exp\left(f(X_t) - f(X_0) - \int_0^t F(X_s)\, ds\right)$ is a $(\mathscr{F}_t^o, P_x)$-continuous martingale for every $x$. By Proposition (1.13) we can define a new probability $P_x^f$ on $\mathscr{F}_\infty^o$ by $P_x^f = D_t \cdot P_x$ on $\mathscr{F}_t^o$. If $Z$ is an $\mathscr{F}_t^o$-measurable function on $\Omega$, we will denote by $E_{P_x}[Z \mid X_t = \cdot]$ a Borel function $\phi$ on $\mathbb{R}^d$ such that $E_x[Z \mid X_t] = \phi(X_t)$.
(3.1) Proposition. The term $(\Omega, \mathscr{F}_t^o, X_t, P_x^f)$ is a Markov process. For each $x$ and $t > 0$, the probability measures $P_t^f(x, dy)$ and $P_t(x, dy)$ are equivalent and, for each $x$, the Radon-Nikodym derivative is given by
$$\frac{P_t^f(x, dy)}{P_t(x, dy)} = \exp\left(f(y) - f(x)\right) E_{P_x}\left[\exp\left(-\int_0^t F(X_s)\, ds\right) \Big|\ X_t = y\right].$$
Proof. The measurability of the map $x \to P_x^f$ is obvious. Let $g$ be a positive Borel function and $Y$ a $\mathscr{F}_t^o$-measurable r.v. Because $D_{t+s} = D_t \cdot D_s \circ \theta_t$, we have, with obvious notation,
$$E_x\left[Y g(X_{t+s}) D_{t+s}\right] = E_x\left[Y D_t E_{X_t}\left[g(X_s) D_s\right]\right] = E_x^f\left[Y E_{X_t}^f\left[g(X_s)\right]\right]$$
which proves the first claim. The second follows from the identities
$$P_t^f g(x) = E_x\left[D_t g(X_t)\right] = E_x\left[g(X_t)\exp\left(f(X_t) - f(X_0) - \int_0^t F(X_s)\, ds\right)\right]$$
$$= E_x\left[g(X_t)\exp\left(f(X_t) - f(x)\right) E_x\left[\exp\left(-\int_0^t F(X_s)\, ds\right) \Big|\ X_t\right]\right]. \qquad \square$$
In the above Radon-Nikodym derivative, three terms intervene: the two semi-groups and the conditional expectation of the functional $\exp\left(-\int_0^t F(X_s)\, ds\right)$. This can be put to use in several ways, in particular to compute the conditional expectation when the two semi-groups are known. This is where Girsanov's theorem comes into play. Since $D_t = \mathscr{E}(M)_t$ for some local martingale $M$ (for each $P_x$), Girsanov's theorem makes it possible to compute the infinitesimal generator of the $P^f$-process, hence, at least theoretically, the semi-group $P_t^f$ itself. Conversely, the above formula gives $P_t^f$ when the conditional expectation is known.
We now give a general method to find such pairs $(f, F)$ and will afterwards take advantage of it to compute the laws of some Brownian functionals.
The extended generator $L$ of $X$ is equal on $C^2$-functions to
$$L = \frac{1}{2}\sum_{i,j=1}^d a^{ij}\frac{\partial^2}{\partial x_i\partial x_j} + \sum_{i=1}^d b_i\frac{\partial}{\partial x_i}$$
where $a = \sigma\sigma^t$. We recall that if $f \in C^2$ then
$$M_t^f = f(X_t) - f(X_0) - \int_0^t Lf(X_s)\, ds$$
is a continuous local martingale and we now show how to associate with $f$ a function $F$ satisfying the above hypothesis.
(3.2) Definition. The opérateur carré du champ $\Gamma$ is defined on $C^2 \times C^2$ by
$$\Gamma(f, g) = L(fg) - f\, Lg - g\, Lf.$$
(3.3) Proposition. If $f, g \in C^2$, then, under each $P_x$,
$$\langle M^f, M^g\rangle_t = \int_0^t \Gamma(f, g)(X_s)\, ds.$$
Proof. Let us write $A_t \simeq B_t$ if $A - B$ is a local martingale. Using the integration by parts formula, straightforward computations yield
$$\langle M^f, M^f\rangle_t \simeq \int_0^t L(f^2)(X_s)\, ds + \left(\int_0^t Lf(X_s)\, ds\right)^2 - 2 f(X_t)\int_0^t Lf(X_s)\, ds$$
$$\simeq \int_0^t L(f^2)(X_s)\, ds - 2\int_0^t (f\, Lf)(X_s)\, ds = \int_0^t \Gamma(f, f)(X_s)\, ds.$$
The proof is completed by polarization. $\square$
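For a one-dimensional diffusion with unit diffusion coefficient, the definition of $\Gamma$ reduces to $\Gamma(f, g) = f'g'$, whatever the drift $b$. A finite-difference sketch of this identity (the drift, test functions and evaluation point are arbitrary choices):

```python
import math

def num_d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def num_d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def L(f, b):
    """Generator L f = f''/2 + b f' of a one-dimensional diffusion with unit diffusivity."""
    return lambda x: 0.5 * num_d2(f, x) + b(x) * num_d1(f, x)

def gamma(f, g, b, x):
    """Operateur carre du champ: Gamma(f, g) = L(fg) - f Lg - g Lf."""
    fg = lambda y: f(y) * g(y)
    return L(fg, b)(x) - f(x) * L(g, b)(x) - g(x) * L(f, b)(x)

b = lambda x: math.cos(x)                # an arbitrary drift; Gamma does not depend on it
f, g = math.sin, math.exp
x = 0.7
lhs = gamma(f, g, b, x)
rhs = num_d1(f, x) * num_d1(g, x)        # for this L, Gamma(f, g) = f' g'
print(lhs, rhs)
```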
As a consequence of this proposition, if $f \in C^2$, then the local martingale $\mathscr{E}(M^f)_t$ equals
$$\exp\left\{f(X_t) - f(X_0) - \int_0^t Lf(X_s)\, ds - \frac{1}{2}\int_0^t \Gamma(f, f)(X_s)\, ds\right\} = \left(h(X_t)/h(X_0)\right)\exp\left(-\int_0^t \left(Lh(X_s)/h(X_s)\right)ds\right),$$
if $h = \exp(f)$. If this local martingale turns out to be a true martingale, then we may define the probability measures $P_x^f$ as described at the beginning of the section, with $F = Lf + \frac{1}{2}\Gamma(f, f)$. In this setting, we get
(3.4) Proposition. If $L$ is the extended generator of the $P$-process, the extended generator of the $P^f$-process is equal on $C^2$ to $L + \Gamma(f, \cdot) \equiv L + h^{-1}\Gamma(h, \cdot)$.
Proof. If $\phi \in C^2$, then $\phi(X_t) - \phi(X_0) - \int_0^t L\phi(X_s)\, ds$ is a $P_x$-local martingale and Girsanov's theorem implies that
$$\phi(X_t) - \phi(X_0) - \int_0^t L\phi(X_s)\, ds - \langle M^f, M^\phi\rangle_t$$
is a $P^f$-local martingale. The proof is completed by means of Proposition (3.3). $\square$
We proceed by applying the above discussion to particular cases. Let us suppose that $f$ is a solution to $Lf = 0$, which, thinking of the special case of BM, may be expressed by saying that $f$ is "harmonic". Then $\Gamma(f, f) = L(f^2)$ and $F = \frac{1}{2}L(f^2)$. The extended generator of the $P^f$-process is equal on $\phi \in C^2$ to
$$L\phi + \langle \nabla f, \sigma\sigma^t\nabla\phi\rangle.$$
We see that the effect of the transformation is to change the drift of the process. If the $P$-process is a $\mathrm{BM}^d$ and $f$ is harmonic in the usual sense, then $F = \frac{1}{2}|\nabla f|^2$ and the generator is given by $\frac{1}{2}\Delta\phi + \langle \nabla f, \nabla\phi\rangle$. We will carry through some computations for particular cases of harmonic functions. Let for instance $\delta$ be a vector in $\mathbb{R}^d$; then $f(x) = (\delta, x)$ is a harmonic function, and plainly $\mathscr{E}(M^f)$ is a martingale. We get a Markov process with generator
$$A^\delta\phi = \frac{1}{2}\Delta\phi + (\delta, \nabla\phi),$$
which is the Brownian motion with constant drift $\delta$, namely $B_t + t\delta$. Let us call $P_x^\delta$ instead of $P_x^f$ the corresponding probability measures. By the above discussion
$$P_x^\delta = \exp\left((\delta, X_t - x) - \frac{|\delta|^2 t}{2}\right)\cdot P_x \quad \text{on } \mathscr{F}_t^o,$$
and the semi-group $P_t^\delta$ is given by
$$P_t^\delta(x, dy) = \exp\left\{(\delta, y - x) - \frac{|\delta|^2 t}{2}\right\}\cdot P_t(x, dy),$$
where $P_t$ is the Brownian semi-group. Of course in this simple case, the semi-group $P_t^\delta$ may be computed directly from $P_t$. Before proceeding to other examples, we shall study the probability measure $P_0^\delta$.
We suppose that $d \ge 2$; since $P_x^\delta$ is absolutely continuous with respect to $P_x$ on $\mathscr{F}_t^o$ and since the hitting time of a closed set is a $(\mathscr{F}_t^o)$-stopping time, it follows from the polarity of points for $\mathrm{BM}^d$, $d \ge 2$, that the hitting times of points are also a.s. infinite under $P_x^\delta$. Thus, we may write a.s. $X_t = \rho_t\theta_t$ for all $t > 0$, where $\rho_t = |X_t|$ and the process $\theta$ takes its values in the unit sphere. We set $\mathscr{R}_t = \sigma(\rho_s, s \le t)$ and $\mathscr{R}_\infty = \bigvee_t \mathscr{R}_t$.
(3.5) Lemma. For each $t > 0$, the r.v. $\theta_t$ is, under $P_0$, independent of $\mathscr{R}_\infty$ and uniformly distributed on the unit sphere $S_{d-1}$.
Proof. Suppose $d = 2$ and let $Z$ be $\mathscr{R}_\infty$-measurable and $\ge 0$ and $G$ be a positive Borel function on $S_{d-1}$. Because of the invariance of $P_0$, i.e. the Wiener measure, by rotations, for every $\alpha \in [0, 2\pi]$,
$$E_0\left[Z\, G(\theta_t)\right] = E_0\left[Z\, G\left(e^{i\alpha}\theta_t\right)\right],$$
and integrating with respect to $\alpha$,
$$E_0\left[Z\, G(\theta_t)\right] = E_0\left[Z\, \frac{1}{2\pi}\int_0^{2\pi} G\left(e^{i\alpha}\theta_t\right)d\alpha\right];$$
since the Lebesgue measure on $S^1$ is invariant by multiplication by a given point of $S^1$, we get
$$E_0\left[Z\, G(\theta_t)\right] = E_0[Z]\, \frac{1}{2\pi}\int_0^{2\pi} G\left(e^{i\alpha}\right)d\alpha.$$
For $d > 2$ there is a slight difficulty which comes from the fact that $S_{d-1}$ is only a homogeneous space of the rotation group. The details are left to the reader (see Exercise (1.17) Chap. III). $\square$
We henceforth call $\mu_d$ the uniform distribution on $S_{d-1}$ and if $\phi$ is a positive Borel function on $\mathbb{R}^d$, we set
$$M\phi(r) = \int_{S_{d-1}} \phi(r\theta)\, \mu_d(d\theta).$$
(3.6) Corollary. On the $\sigma$-algebra $\mathscr{R}_t$,
$$P_0^\delta = M\phi(\rho_t)\exp\left(-\frac{|\delta|^2 t}{2}\right)\cdot P_0,$$
with $\phi(x) = \exp\{(\delta, x)\}$.
Proof. Since $\mathscr{R}_t \subset \mathscr{F}_t^o$, we obviously have
$$P_0^\delta = E_0\left[\exp\left((\delta, X_t) - \frac{|\delta|^2 t}{2}\right) \Big|\ \mathscr{R}_t\right]\cdot P_0 \quad \text{on } \mathscr{R}_t,$$
and it is easily checked, taking the lemma into account, that the conditional expectation is equal to $\exp\left(-\frac{|\delta|^2 t}{2}\right)M\phi(\rho_t)$. $\square$
We may now state
(3.7) Theorem. Under $P_0^\delta$, the process $\rho_t$ is a Markov process with respect to $(\mathscr{R}_t)$. More precisely, there is a semi-group $Q_t$ such that for any positive Borel function $f$ on $\mathbb{R}_+$,
$$E_0^\delta\left[f(\rho_{t+s}) \mid \mathscr{R}_t\right] = Q_s f(\rho_t).$$
Proof. Pick $\Lambda$ in $\mathscr{R}_t$; we may write, using the notation of Corollary (3.6),
$$\int_\Lambda f(\rho_{t+s})\, dP_0^\delta = \exp\left(-\frac{|\delta|^2(t+s)}{2}\right)\int_\Lambda f(\rho_{t+s})\, M\phi(\rho_{t+s})\, dP_0,$$
and by the Markov property under $P_0$, this is equal to
$$\exp\left(-\frac{|\delta|^2(t+s)}{2}\right)\int_\Lambda E_{X_t}\left[f(\rho_s)\, M\phi(\rho_s)\right] dP_0 = \exp\left(-\frac{|\delta|^2(t+s)}{2}\right)\int_\Lambda E_0\left[E_{X_t}\left[f(\rho_s)\, M\phi(\rho_s)\right] \mid \mathscr{R}_t\right] dP_0.$$
By the same reasoning as in Corollary (3.6), this is further equal to
$$\exp\left(-\frac{|\delta|^2(t+s)}{2}\right)\int_\Lambda M\psi(\rho_t)\, dP_0$$
with $\psi(x) = E_x\left[f(\rho_s)\, M\phi(\rho_s)\right]$. Thus we finally have
$$\int_\Lambda f(\rho_{t+s})\, dP_0^\delta = \int_\Lambda \exp\left(-\frac{|\delta|^2 s}{2}\right)\left(M\psi(\rho_t)/M\phi(\rho_t)\right) dP_0^\delta.$$
This shows the first part of the statement and we now compute the semi-group $Q_t$.
Plainly, because of the geometrical invariance properties of BM, the function $\psi$ depends only on $|x|$ and consequently $M\psi = \psi$. Thus, we may write
$$E_0^\delta\left[f(\rho_{t+s}) \mid \mathscr{R}_t\right] = \exp\left(-\frac{|\delta|^2 s}{2}\right)\psi(\rho_t)/M\phi(\rho_t) = E_{\rho_t}\left[f(\rho_s)\, M\phi(\rho_s)\exp\left(-\frac{|\delta|^2 s}{2}\right)\right]\Big/ M\phi(\rho_t)$$
where $P_a$ is the law of the modulus of BM started at $x$ with $|x| = a$. We will see in Chap. XI that this process is a Markov process whose transition semi-group has a density $p_s^d(a, \rho)$. Thus
$$E_0^\delta\left[f(\rho_{t+s}) \mid \mathscr{R}_t\right] = Q_s f(\rho_t)$$
where
$$Q_s(a, d\rho) = \frac{M\phi(\rho)}{M\phi(a)}\exp\left(-\frac{|\delta|^2 s}{2}\right) p_s^d(a, \rho)\, d\rho.$$
Since $p_s^d$ is the density of a semi-group, it is readily checked that $Q_t$ is a semi-group, which ends the proof. $\square$
Remark. The process $\rho_t$ is no longer a Markov process under the probability measure $P_x^\delta$ for $x \ne 0$.
We now turn to another example, still about Brownian motion, which follows the same pattern (see also Exercise (1.34)). Suppose that $d = 2$ and in complex notation take $f(z) = \alpha\log|z|$ with $\alpha \ge 0$. The function $f$ is harmonic outside the polar set $\{0\}$. Moreover, for every $t$,
$$\sup_{s \le t}\mathscr{E}(M^f)_s \le |Z_0|^{-\alpha}\sup_{s \le t}|Z_s|^\alpha;$$
since the last r.v. is integrable, it follows that the local martingale $\mathscr{E}(M^f)$ is actually a martingale, so that our general scheme applies for $P_a$ if $a \ne 0$. With the notation used above, which is that of Sect. 2 in Chap. V, from Itô's formula it easily follows that under $P_a$
$$\rho_t = \rho_0 + \beta_t + \frac{1}{2}\int_0^t \rho_s^{-1}\, ds$$
where $\beta_t = \int_0^t \rho_s^{-1}\left(X_s\, dX_s + Y_s\, dY_s\right)$ is a linear BM. But we also know that
$$\log\rho_t = \log\rho_0 + \int_0^t \rho_s^{-1}\, d\beta_s,$$
hence $\langle \beta, \log\rho\rangle_t = \int_0^t \rho_s^{-1}\, ds$. Thus, Girsanov's theorem implies that under $P_a^f$, the process $\tilde{\beta}_t = \beta_t - \alpha\int_0^t \rho_s^{-1}\, ds$ is a BM, and consequently
$$\rho_t = \rho_0 + \tilde{\beta}_t + \left(\alpha + \frac{1}{2}\right)\int_0^t \rho_s^{-1}\, ds.$$
The equations satisfied by $\rho$ under $P_a$ and $P_a^f$ are of the same type. We will see in Sect. 1 Chap. XI how to compute explicitly the density $p_t^\delta$ of the semi-group of the solution to
$$\rho_t = \rho_0 + \beta_t + \frac{\delta - 1}{2}\int_0^t \rho_s^{-1}\, ds.$$
All this can be used to compute the law of $\theta_t$, the "winding number" of $Z_t$ around the origin. As $f(z)$ depends only on $|z|$, the discussion leading to Proposition (3.1) may as well be applied to $\rho$ as to $Z$ with the same function $f$. As a result, we may now compute the conditional Laplace transform of $C_t = \int_0^t \rho_s^{-2}\, ds$. We recall that $I_\nu$ is the modified Bessel function of index $\nu$.
(3.8) Proposition. For every $a$ and $\alpha \ne 0$,
$$E_a\left[\exp\left(i\alpha(\theta_t - \theta_0)\right) \mid \rho_t = \rho\right] = E_a\left[\exp\left(-\frac{\alpha^2}{2} C_t\right) \Big|\ \rho_t = \rho\right] = I_{|\alpha|}\left(\frac{a\rho}{t}\right)\Big/ I_0\left(\frac{a\rho}{t}\right).$$
Proof. The first equality follows from Theorem (2.12) in Chapter V and the second from Proposition (3.1) and the explicit formulas of Chap. XI, Sect. 1. $\square$
Remark. From this result, one may derive the asymptotic properties of $\theta_t$ proved in Theorem (4.1) Chap. X (see Exercise (4.9) in that chapter).
Our next example falls equally in the general set-up of Proposition (3.1). Suppose given a function $F$ for which we can find a $C^2$-function $f$ such that
$$F = Lf + \frac{1}{2}\Gamma(f, f);$$
then our general scheme may be applied to the computation of the conditional expectation of $\exp\left(-\int_0^t F(X_s)\, ds\right)$ given $X_t$.
Let us apply this to the linear BM with drift $b$, namely the process with generator
$$L\varphi = \frac{1}{2}\varphi'' + b\varphi'.$$
Again, it is easily seen that $\Gamma(f, f) = f'^2$, so that if we use the semi-group $(P_t^f)$ associated with $f$, we can compute the conditional expectation of $\exp\left(-\int_0^t F(X_s)\, ds\right)$ with
$$F(x) = \frac{1}{2}f''(x) + b(x)f'(x) + \frac{1}{2}f'(x)^2.$$
By playing on $b$ and $f$, one can thus get many explicit formulas. The best-known example, given in Exercise (3.14), is obtained for $b(x) = \lambda x$ and leads to the Cameron-Martin formula which is proved independently in Chap. XI.
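Since $\exp\left(f(X_t) - f(X_0) - \int_0^t F(X_s)\,ds\right)$ is a martingale, its expectation remains equal to $1$. Taking $b = 0$ and, for instance, $f = \sin$, the formula above gives $F(x) = -\tfrac{1}{2}\sin x + \tfrac{1}{2}\cos^2 x$, and an Euler-scheme Monte Carlo (all parameters are arbitrary choices) should return a value close to $1$:

```python
import math, random

random.seed(3)
f = math.sin
F = lambda x: -0.5 * math.sin(x) + 0.5 * math.cos(x) ** 2   # F = f''/2 + (f')^2/2 for b = 0

t, n, N = 1.0, 200, 20000
dt = t / n

def weight():
    b_path, acc = 0.0, 0.0
    for _ in range(n):
        acc += F(b_path) * dt                     # left-point rule for \int_0^t F(B_s) ds
        b_path += random.gauss(0.0, math.sqrt(dt))
    return math.exp(f(b_path) - f(0.0) - acc)

est = sum(weight() for _ in range(N)) / N
print(est)   # the exponential martingale has expectation 1
```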
Still with Brownian motion, we proceed with some other examples in which we consider not only $f$ but its product $\nu f$ by a constant $\nu$ and call $P_x^\nu$ instead of $P_x^{\nu f}$ the corresponding probability measures. Moreover, we take $b(x) = a f'(x)$ for a constant $a$ and assume that $f$ satisfies the differential equation
$$(f')^2 = -\frac{f''}{2a} + \gamma \qquad \text{Eq. (3.1)}$$
for some constant $\gamma$. The function $F$ is then given by
$$F(x) = \frac{(\nu^2 + 2a\nu)\gamma}{2} - \frac{\nu^2}{4a} f''(x),$$
the general formula of Proposition (3.1) reads
$$E_x\left[\exp\left(\frac{\nu^2}{4a}\int_0^t f''(X_s)\, ds\right) \Big|\ X_t = y\right] = \frac{P_t^\nu(x, dy)}{P_t(x, dy)}\exp\left\{-\nu\int_x^y f'(u)\, du + \frac{(\nu^2 + 2a\nu)\gamma}{2}\, t\right\},$$
and the infinitesimal generator of the $P^\nu$-process is given by
$$L^\nu g = \frac{1}{2}g'' + (a + \nu)f'g',$$
as follows from Proposition (3.4).
By solving Eq. (3.1) for $f'$, we find for which drifts and functions the above discussion applies. This is done by setting $f' = h'/2ah$ and solving for $h$, which reduces Eq. (3.1) to the linear equation $h'' = 4a^2\gamma h$. Three cases occur.
Case 1. $\gamma = 0$. In that case, $f'(x) = \dfrac{1}{2a}\dfrac{A}{Ax + B}$ where $A$ and $B$ are constants. If, in particular, we take $a = 1/2$, $A = 1$, $B = 0$, the generator $L^\nu$ is then given by
$$L^\nu g(x) = \frac{1}{2}g''(x) + \left(\frac{1}{2} + \nu\right)\frac{g'(x)}{x}.$$
The $P^\nu$-process is thus a Bessel process which will be studied in Chap. XI.
Case 2. $\gamma < 0$. Then
$$f'(x) = \sqrt{-\gamma}\ \frac{A\cos mx - B\sin mx}{A\sin mx + B\cos mx} \quad \text{where } m = 2a\sqrt{-\gamma}.$$
In the special case $\gamma = -1$, $A = 1$, $B = 0$, we get as generator
$$L^\nu g(x) = \frac{1}{2}g''(x) + (a + \nu)\cot(2ax)\, g'(x)$$
which is the generator of the so-called Legendre process.
Case 3. $\gamma > 0$. Then
$$f'(x) = \sqrt{\gamma}\ \frac{A\cosh mx + B\sinh mx}{A\sinh mx + B\cosh mx} \quad \text{where } m = 2a\sqrt{\gamma}.$$
For $\gamma = 1$, $A = 1$, $B = 0$, the generator we get is
$$L^\nu g(x) = \frac{1}{2}g''(x) + (a + \nu)\coth(2ax)\, g'(x).$$
The corresponding processes are the so-called hyperbolic Bessel processes.
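The three closed forms for $f'$ can be verified against Eq. (3.1), i.e. the residual $(f')^2 + f''/2a - \gamma$ should vanish. A numerical-differentiation sketch (the constants $a$, $A$, $B$ and the evaluation point are arbitrary choices):

```python
import math

a = 0.75

def d1(fp, x, h=1e-5):            # numerical derivative of f', i.e. f''
    return (fp(x + h) - fp(x - h)) / (2 * h)

def riccati_residual(fp, gamma_, x):
    return fp(x) ** 2 + d1(fp, x) / (2 * a) - gamma_

# Case 1 (gamma = 0): f'(x) = (1/2a) A/(Ax + B), here with A = 1, B = 2
fp1 = lambda x: 1.0 / (2 * a * (x + 2.0))
# Case 2 (gamma = -1, A = 1, B = 0): f'(x) = cot(2ax)
fp2 = lambda x: math.cos(2 * a * x) / math.sin(2 * a * x)
# Case 3 (gamma = +1, A = 1, B = 0): f'(x) = coth(2ax)
fp3 = lambda x: math.cosh(2 * a * x) / math.sinh(2 * a * x)

x = 0.6
res = [riccati_residual(fp1, 0.0, x), riccati_residual(fp2, -1.0, x), riccati_residual(fp3, 1.0, x)]
print(res)   # all three residuals should be numerically zero
```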
We proceed to other important transformations of diffusions or, more generally,
Markov processes.
Let h be "harmonic", that is, as already said, h is in the domain of the extended
infinitesimal generator and Lh = 0. Suppose further that h is strictly positive. If
we set f = log h, our general scheme applies with F = 0 provided that h is
P_t(x, ·)-integrable for every x and t. In that case one observes that h is invariant
by the semi-group P_t, namely P_t h = h for every t.

The semi-group obtained from P_t by using this particular function f, namely
log h, will be denoted by P_t^h and the corresponding probability measures by P_x^h.
Plainly, P_t^h φ = h^{−1} P_t(hφ). The process X under the probability measures P_x^h is
called the h-process of the P_t-process and is very important in some questions
which lie beyond the scope of this book. We will here content ourselves with the
following remark for which we suppose that the P_t-process is a diffusion with
generator L.
(3.9) Proposition. The extended infinitesimal generator of the h-process is equal
on the C²-functions φ to

    L^h φ = h^{−1} L(hφ).
358
Chapter VIII. Girsanov's Theorem and First Applications
Proof. If φ ∈ C_K², then

    P_t^h φ(x) − φ(x) − ∫_0^t P_s^h (h^{−1}L(hφ))(x) ds
        = h^{−1}(x) [ P_t(hφ)(x) − h(x)φ(x) − ∫_0^t P_s(L(hφ))(x) ds ] = 0,

and one concludes by the methods of Sect. 2, Chap. VII. □
In the case of Brownian motion, the above formula becomes

    L^h φ = (1/2)Δφ + h^{−1} ⟨∇h, ∇φ⟩

which is again the generator of Brownian motion to which is added another kind
of drift. Actually, we see that the h-process is pulled in the direction where h^{−1}∇h
is large. This is illustrated by the example of BES³; we have already observed
and used the fact that it is the h-process of the BM killed at 0 for h(x) = x
(see Exercise (1.15) in Chap. III, Sect. 3 in Chap. VI and Exercise (3.17) in this
section).
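For this example the identity of Proposition (3.9) can be checked symbolically: with h(x) = x and L the generator of linear BM, h^{−1}L(hφ) is the BES³ generator (1/2)φ″ + φ′/x. A sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
phi = sp.Function('phi')

h = x                                  # h(x) = x, harmonic for BM killed at 0
L = lambda u: sp.diff(u, x, 2) / 2     # generator of linear BM

lh = sp.expand(L(h * phi(x)) / h)      # h^{-1} L(h phi)
bes3 = sp.diff(phi(x), x, 2) / 2 + sp.diff(phi(x), x) / x
print(sp.simplify(lh - bes3))          # 0
```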
Finally, we observe that in Proposition (3.1) we used only the multiplicative
property of D_t. Therefore, given a positive Borel function g we may replace D_t
by N_t = exp(−∫_0^t g(X_s) ds) which has the same multiplicative property. Thus,
we define a new semi-group P_t^{(g)} and probability measures P_x^{(g)} by

    P_t^{(g)} f(x) = E_x[N_t f(X_t)],    P_x^{(g)} = N_t · P_x  on ℱ_t^0.

Again, X is a Markov process for the probability measures P_x^{(g)} and

    P_t^{(g)}(x, dy) = E_x[ exp(−∫_0^t g(X_s) ds) | X_t = y ] P_t(x, dy).
This transformation may be interpreted as curtailment of the life-time of X or
killing of X and g appears as a killing rate. Evidence about this statement is also
given by the form of the extended infinitesimal generator of the new process which
is denoted by L^{(g)}.

(3.10) Proposition (Feynman-Kac formula). If φ ∈ C²,

    L^{(g)} φ = Lφ − gφ.
Proof. Let φ ∈ C_K²; then M^φ is a P_x-martingale and the integration by parts
formula gives

    N_t φ(X_t) = φ(X_0) − ∫_0^t φ(X_s) N_s g(X_s) ds + ∫_0^t N_s Lφ(X_s) ds + ∫_0^t N_s dM_s^φ.

The last term is clearly a P_x-martingale and integrating with respect to P_x yields

    E_x^{(g)}[φ(X_t)] = E_x^{(g)}[φ(X_0)] + E_x^{(g)}[ ∫_0^t (Lφ − gφ)(X_s) ds ].

Using Proposition (2.2) in Chap. VII, the proof is easily completed. □
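The Feynman-Kac representation can be spot-checked by Monte Carlo in the classical case of linear BM with the quadratic killing rate g(x) = λ²x²/2, for which E_0[exp(−(λ²/2)∫_0^t B_s² ds)] = (cosh λt)^{−1/2}; a minimal sketch assuming NumPy, with arbitrary grid and path counts:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t = 1.0, 1.0
n_paths, n_steps = 40_000, 400
dt = t / n_steps

x = np.zeros(n_paths)            # B_s, all paths started at 0
integral = np.zeros(n_paths)     # int_0^t g(B_s) ds with g(x) = lam^2 x^2 / 2
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    integral += 0.5 * lam**2 * x**2 * dt

estimate = np.exp(-integral).mean()
exact = 1.0 / np.sqrt(np.cosh(lam * t))
print(estimate, exact)           # the two values should be close
```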
Let us finally observe that these transformations are related to one another.
Indeed, if g = Lf + (1/2)Γ(f, f), we have, with the notation of Proposition (3.1),

    P_t^f(x, dy) = exp(−f(x)) P_t^{(g)}(x, dy) exp(f(y)).

Thus, the semi-group P_t^f appears as the h-transform of the semi-group P_t^{(g)} with
h = exp(f).
(3.11) Exercise. In the situation of Theorem (3.7) but with x ≠ 0, prove that the
two-dimensional process (ρ_t, ∫_0^t ρ_s^{−2} ds) is a Markov process with respect to (ℱ_t)
under P_x^δ.
* (3.12) Exercise (Time-inversion). If X is a process indexed by t > 0, we define
X̂ by X̂_t = t X_{1/t}, t > 0.
1°) With the notation of this section, prove that if P_x^δ is the law of X, then
P_δ^x is the law of X̂. In other words, for the BM with constant drift, time-inversion
interchanges the drift and the starting point.
2°) Suppose that d = 2, x = 0 and δ ≠ 0, namely X is the complex BM with
drift δ started at 0. Prove that X_t = ρ_t exp(iγ_{A_t}) where ρ_t = |X_t|, γ is a linear
BM independent of σ(ρ_s, s > 0) and A_t = ∫_t^∞ ρ_s^{−2} ds.
3°) If for r > 0 we set T = inf{t : ρ_t = r}, then X_T is independent of σ(ρ_s, s ≤ T).
As a result, X_T and T are independent. Observe also that this holds equally for
δ = 0 (see also Exercise (1.17) Chap. III).
4°) For r = 1, prove that X_T follows the so-called von Mises distribution of
density c_0 exp(⟨δ, θ⟩) with respect to the uniform distribution on S¹, where c_0 is
a normalizing constant and ⟨δ, θ⟩ is the scalar product of δ and θ as vectors of ℝ².
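The von Mises claim in 4°) lends itself to a crude simulation: sample the exit point of a planar BM with drift δ from the unit circle and compare the mean of cos θ with the von Mises value I₁(|δ|)/I₀(|δ|). A hedged sketch assuming NumPy (step size and path counts are arbitrary choices; the overshoot at exit is simply projected back):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
delta = np.array([1.5, 0.0])            # drift; paths start at the origin
n, dt = 4000, 1e-3
pos = np.zeros((n, 2))
alive = np.ones(n, dtype=bool)
exit_pts = np.zeros((n, 2))

for _ in range(100_000):
    if not alive.any():
        break
    k = int(alive.sum())
    pos[alive] += delta * dt + rng.normal(0.0, math.sqrt(dt), size=(k, 2))
    r = np.linalg.norm(pos, axis=1)
    hit = alive & (r >= 1.0)
    exit_pts[hit] = pos[hit] / r[hit, None]   # project the slight overshoot back
    alive &= r < 1.0

mean_cos = exit_pts[:, 0].mean()

def bessel_i(nu, z, terms=40):
    # modified Bessel function I_nu by its power series
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

kappa = float(np.linalg.norm(delta))    # here r = 1, so kappa = |delta|
theory = bessel_i(1, kappa) / bessel_i(0, kappa)
print(mean_cos, theory)
```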
(3.13) Exercise. Instead of Eq. (3.1), suppose that f satisfies the equation (f′)²(x) =
β f″(x) + γ where β and γ are constants independent of a, and carry the computations as far as possible.
* (3.14) Exercise (O.U. processes and Lévy's formula). 1°) Let X be the standard d-dimensional BM and ρ = |X|. Prove that Proposition (3.1) extends to

    D_t = exp{ (λ/2)(ρ_t² − ρ_0² − dt) − (λ²/2) ∫_0^t ρ_s² ds }

and prove that the infinitesimal generator of the transformed process is given on
φ ∈ C² by

    (1/2)Δφ(x) + λ⟨x, ∇φ(x)⟩.

We call P_x the corresponding probability measures.
2°) Prove that under P_x, the process X satisfies the SDE

    X_t = x + B_t + λ ∫_0^t X_s ds
where B is a d-dimensional BM. Deduce therefrom that it can be written
e^{λt}(x + β((1 − e^{−2λt})/2λ)), where β is a standard BM^d, and find its semi-group.
[This kind of question will be solved in a general setting in the next chapter, but
this particular case may be solved by using the method of Sect. 3 Chap. IV].
The process X may be called the d-dimensional OU process.
3°) For d = 1 and λ < 0 the process X is an OU process as defined in Exercise
(1.13) Chap. III. Check that it can be made stationary by a suitable choice of the
initial measure.
4°) Prove that

    E_x[ exp(−(λ²/2) ∫_0^t ρ_s² ds) | ρ_t = ρ ]
        = (λt/sinh λt) exp( ((|x|² + ρ²)/2t)(1 − λt coth λt) ) I_ν(|x|ρλ/sinh λt) / I_ν(|x|ρ/t)

where ν = (d/2) − 1. The reader will observe that for d = 2, this gives the law
of the stochastic area S_t studied in Exercise (2.19) Chap. V. For x = 0 or ρ = 0,
and k² = |x|² + ρ², the right-hand side becomes

    (λt/sinh λt)^{ν+1} exp( (k²/2t)(1 − λt coth λt) ).
5°) For d = 2, prove that

    E_0[ exp(iλS_t) | B_t = z ] = (λt/sinh λt) exp( −(|z|²/2t)(λt coth λt − 1) ).
(3.15) Exercise. Prove that the extended generator of the semi-group of BES³
(Exercise (1.15) Chap. III and Sect. 3 Chap. VI) is equal on C²(]0, ∞[) to

    φ → (1/2)φ″(x) + (1/x)φ′(x).

[Hint: Use the form of the generator of BM killed at 0 found in Exercise (1.22)
Chap. VII.]
(3.16) Exercise. If (P_t) and (P̂_t) are in duality with respect to a measure ξ (see
Sect. 4 Chap. VII), then the semi-group of the h-process of X and (P̂_t) are
in duality with respect to the measure hξ. This was implicitly used in the time
reversal results on BES³ proved in Sect. 4 of Chap. VII.
(3.17) Exercise (Inverting Brownian motion in space). In ℝ^d\{0}, d ≥ 3, put
φ(x) = x/|x|².
1°) If B is a BM^d(a) with a ≠ 0, prove that φ(B) is a Markov process with
transition function
2°) Call (τ_t) the time-change associated with A_t = ∫_0^t |B_s|^{−4} ds (see Lemma
(3.12) Chap. VI) and prove that Y_t = φ(B_{τ_t}), t < A_∞, is a solution to the SDE

    Y_t = Y_0 + β_t − (d − 2) ∫_0^t (Y_s/|Y_s|²) ds,

where β is a BM stopped at time A_∞ = inf{t : Y_t = 0}.
3°) Prove that the infinitesimal generator of Y is equal on C²(ℝ^d\{0}) to

    (1/2)Δ − (d − 2)⟨y/|y|², ∇·⟩,

and that Y is the h-process of BM^d associated with h(x) = |x|^{2−d}.
4°) Prove that under P_a, the law of A_∞ is given by the density

    (Γ(ν)(2|a|²)^ν t^{ν+1})^{−1} exp(−(2t|a|²)^{−1})

with ν = (d/2) − 1.
5°) Conditionally on A_∞ = u, the process (Y_t, t ≤ u) is a Brownian Bridge
between a/|a|² and 0 over the interval [0, u].
This exercise may be loosely interpreted by saying that the BM^d looks like a
Brownian Bridge on the Riemann sphere.
(3.18) Exercise. Let Y be a d-dimensional r.v. of law λ, and set B_t^λ = B_t + tY
where B is a standard BM^d independent of Y. Call P^λ (resp. P^0) the law of B^λ
(resp. B).
1°) Prove that P^λ ≪ P^0 on each ℱ_t with density h_λ(B_t, t) where

    h_λ(x, t) = ∫ exp(⟨x, y⟩ − t|y|²/2) λ(dy).

2°) Prove that in the filtration (ℱ_t^{B^λ}) the semimartingale decomposition of
B^λ is given by

    B_t^λ = β_t + ∫_0^t ∇_x(log h_λ)(B_s^λ, s) ds

for some BM β. Write down the explicit value of this decomposition in the following two cases:
i) d = 1 and λ = (δ_{−m} + δ_m)/2;
ii) d = 3 and λ is the rotation-invariant probability measure on the sphere of
radius r > 0, in which case if R_t is the radial part of B_t^λ,

    R_t = γ_t + r ∫_0^t coth(rR_s) ds

for a BM γ.
(3.19) Exercise (B.M. with drift and Lévy's equivalence). Let μ ≥ 0 and set
B_t^μ = B_t + μt where B is a BM¹(0). Call L^μ the local time of B^μ at 0. In
particular B⁰ = B and L⁰ = L. Set X^μ = |B^μ| + L^μ and X⁰ = X.
1°) Using the symmetry of the law of B, prove that for any bounded functional F

    E[F(X_s^μ, s ≤ t)] = E[F(X_s, s ≤ t) cosh(μ|B_t|) exp(−μ²t/2)].

2°) Using Lévy's identity and Corollary (3.6) of Chapter VI, prove that these
expressions are yet equal to

    E[F(X_s, s ≤ t)(sinh(μX_t)/μX_t) exp(−μ²t/2)].

3°) Put S_t^μ = sup(B_s^μ, s ≤ t) and prove that the processes X^μ and 2S^μ − B^μ
have the same law, namely, the law of the diffusion with infinitesimal generator

    (1/2) d²/dx² + μ coth(μx) d/dx.

4°) Prove that for μ ≠ 0 the processes |B^μ| and S^μ − B^μ do not have the
same law. [Hint: Their behaviors for large times are different.]
Prove more precisely that for every bounded functional F,

    E[F(|B_s^μ|, s ≤ t)] = E[F(S_s^μ − B_s^μ, s ≤ t) Δ_t]

where Δ_t = exp(−μS_t^μ)(1 + exp(2μ(S_t^μ − B_t^μ)))/2.
5°) Prove however that

    (|B_s^μ|, s ≤ τ_t | τ_t < ∞)  (law)=  (S_s^μ − B_s^μ, s ≤ T_t),

where τ_t = inf{s ≥ 0 : L_s^μ > t} and T_t = inf{s ≥ 0 : B_s^μ = t}. Give closed form
formulae for the densities of τ_t and T_t.
6°) Use the preceding absolute continuity relationship together with question
4°) in Exercise (4.9) Chapter VI to obtain a closed form for the Laplace transform
of the law of

    inf{s : S_s^μ − B_s^μ = a}.
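The density in 4°) admits a quick Monte Carlo consistency check: taking F ≡ 1 there forces E[Δ_t] = 1. A sketch assuming NumPy (the discrete maximum slightly undershoots S_t^μ, hence the loose tolerance):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, t = 0.5, 1.0
n_paths, n_steps = 40_000, 500
dt = t / n_steps

b = np.zeros(n_paths)                  # B^mu
s = np.zeros(n_paths)                  # running supremum S^mu
for _ in range(n_steps):
    b += mu * dt + rng.normal(0.0, np.sqrt(dt), size=n_paths)
    np.maximum(s, b, out=s)

delta_t = np.exp(-mu * s) * (1 + np.exp(2 * mu * (s - b))) / 2
print(delta_t.mean())                  # should be close to 1
```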
Notes and Comments
Sect. 1. What we call Girsanov's theorem - in agreement with most authors - has
a long history beginning with Cameron-Martin ([1] and [2]), Maruyama ([1] and
[2]), Girsanov [1], Van Schuppen-Wong [1]. Roughly speaking, the evolution has
been from Gaussian processes to Markov processes, then to martingales. Cameron
and Martin were interested in the transformation of the Brownian trajectory for
which the old and new laws were equivalent; they first considered deterministic
translations and afterwards random translations so as to deal with BM with
non-constant drift. The theory was extended to diffusions or more generally Markov
processes by Maruyama and by Girsanov. Proposition (1.12) is typical of this
stage. Finally, with the advent of martingale problems developed by Stroock and
Varadhan, it became necessary to enlarge the scope of the results to martingales
which was done by Van Schuppen and Wong; Theorem (1.4) is typically in the line
of the latter. This already intricate picture must be completed by the relationship
of Girsanov's theorem with the celebrated Feynman-Kac formula and Doob's
h-processes which are described in Sect. 3.
For the general results about changes of law for semimartingales let us mention
Jacod-Memin ([1], [2]) and Lenglart [3] and direct the reader to the book of
Dellacherie and Meyer [1] vol. II. Other important papers in this respect are
Yoeurp ([3], [4]) which stress the close connection between Girsanov's theorem
and the theory of enlargements of filtrations. In fact the parenthood between the
decomposition formulas of martingales in this theory and in the Girsanov set-up
is obvious, but more precisely, Yoeurp [4] has shown that, by using Follmer's
measure, the enlargement decomposition formula could be interpreted as a special
case of a wide-sense Girsanov's formula; further important developments continue
to appear in this area.
The theory of enlargements of filtrations is one of the major omissions of this
book. By turning a positive random variable into a stopping time (of the enlarged
filtration), it provides alternative proofs for many results such as time reversal
and path-decomposition theorems. The interested reader may look at Jeulin [2]
and to Grossissements de filtrations: exemples et applications, Lecture Notes in
Mathematics, vol. 1118, Springer (1985). A thorough exposition is also found
in Dellacherie, Maisonneuve and Meyer [1]. Complementing Yoeurp's theoretical
work mentioned above, enlargement of filtrations techniques have been used together with Girsanov's theorem; see for example Azema-Yor [4], Follmer-Imkeller
[1] and Mortimer-Williams [1].
The theory of enlargements also allows one to integrate anticipating processes
with respect to a semimartingale; a different definition of the integral of such
processes was proposed by Skorokhod [3] and continues to be the subject of
many investigations (Buckdahn, Nualart, Pardoux, ... ).
The Girsanov pair terminology is not standard and is introduced here for the
first time. The proof of Kazamaki's criterion (Kazamaki [2]) presented here is due
to Yan [1] (see also Lepingle-Memin [1]).
Exercise (1.24) is taken from Liptser-Shiryaev [1] as well as Exercise (1.40)
which appears also in the book of Friedman [1].
Sect. 2. The fundamental Theorem (2.2) is due to Cameron-Martin. We refer
to Koval'chik [1] for an account of the subject and an extensive bibliography.
Theorem (2.4) is from Clark [1], our presentation being borrowed from Rogers-Williams [1]. The result has been generalized to a large class of diffusions by
Haussman [1], Ocone [2], and Bismut [2] for whom it is the starting point of his
version of Malliavin's calculus.
Part of our exposition of the large deviations result for Brownian motion is
borrowed from Friedman [1]. Theorem (2.11) is due to Schilder [1]. The theory of
large deviations for Markov processes has been fully developed by Donsker and
Varadhan in a series of papers [1]. We refer the reader to Azencott [1], Stroock [4]
and Deuschel and Stroock [1]. The proof of Strassen's law of the iterated logarithm
(Strassen [1]) is borrowed from Stroock [4]. In connection with the result let us
mention Chover [1] and Mueller [1].
Exercise (2.14) is from Yor [3]. The method displayed in Exercise (2.16) is
standard in the theory of large deviations. The use of Skorokhod stopping times
in Exercise (2.17) is the original idea of Strassen.
Sect. 3. The general principle embodied in Proposition (3.1) and which is the
expression of Girsanov's theorem in the context of diffusions is found in more
or less explicit ways in numerous papers such as Kunita [1], Yor [10], Priouret-Yor [1], Nagasawa [2], Elworthy [1], Elworthy-Truman [1], Ezawa et al. [2],
Fukushima-Takeda [1], Oshima-Takeda [1], Ndumu [1], Truman [1], Gruet [3],
among others.
The opérateur carré du champ was introduced by Kunita [1] and by Roth [1].
The reader is warned that some authors use 2r or r /2 instead of our r. Clearly, it
can be defined only on sub-algebras of the domain of the (extended) infinitesimal
generator and this raises the question of its existence when one studies a general
situation (see Mokobodzki [1] who, in particular, corrects errors made earlier on
this topic).
Proposition (3.4) may be found in Kunita [1] and Theorem (3.7) in Pitman-Yor
[1]. For Exercise (3.12), let us mention Kent [1], Wendel [1] and [2], Pitman-Yor
[1] and Watanabe ([1], [4]). Exercise (3.14) comes from Pitman-Yor [1] and Yor
[10]; the formula of 4°) was the starting point for the decomposition of Bessel
Bridges (see Pitman-Yor [2] and [3]). In connection with Exercise (3.14) let us
also mention Gaveau [1] whose computation led him to an expression of the
heat semigroup for the Heisenberg group. It is interesting to note that P. Lévy's
formula for the stochastic area plays a central role in the probabilistic proof given
by Bismut ([4] and [5]) of the Atiyah-Singer theorems.
Exercise (3.17) is taken from Yor [16], following previous work by L. Schwartz
[1]. It is also related to Getoor [2], and the results have been further developed
by Carne [3].
Renormalizations of the laws of Markov processes with a multiplicative functional and the corresponding limits in law have been considered in Roynette et al.
[1].
Chapter IX. Stochastic Differential Equations
In previous chapters stochastic differential equations have been mentioned several
times in an informal manner. For instance, if M is a continuous local martingale,
its exponential ℰ(M) satisfies the equality

    ℰ(M)_t = 1 + ∫_0^t ℰ(M)_s dM_s;

this can be stated: ℰ(M) is a solution to the stochastic differential equation

    X_t = 1 + ∫_0^t X_s dM_s,

which may be written in differential form

    dX_t = X_t dM_t,    X_0 = 1.
We have even seen (Exercise (3.10) Chap. IV) that ℰ(M) is the only solution to
this equation. Likewise we saw in Sect. 2 Chap. VII, that some Markov processes
are solutions of what may be termed stochastic differential equations.
This chapter will be devoted to the formal definition and study of this notion.
§1. Formal Definitions and Uniqueness
Stochastic differential equations can be defined in several contexts of varying
generality. For the purposes of this book, the following setting will be convenient.

As usual, the space C(ℝ_+, ℝ^d) is denoted by W. If w(s), s ≥ 0, denote the
coordinate mappings, we set ℬ_t = σ(w(s), s ≤ t). A function f on ℝ_+ × W
taking values in ℝ^r is predictable if it is predictable as a process defined on W with
respect to the filtration (ℬ_t). If X is a continuous process defined on a filtered
space (Ω, ℱ_t, P), the map s → X_s(ω) belongs to W and if f is predictable,
we will write f(s, X.) or f(s, X.(ω)) for the value taken by f at time s on the
path t → X_t(ω). We insist that we write X.(ω) here and not X_s(ω), because
f(s, X.(ω)) may depend on the entire path X.(ω) up to time s. The case where
f(s, w) = a(s, w(s)) for a function a defined on ℝ_+ × ℝ^d is a particular, if
important, case and we then have f(s, X.) = a(s, X_s). In any case we have
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
(1.1) Proposition. If X is (ℱ_t)-adapted, the process f(t, X.(ω)) is (ℱ_t)-predictable.

Proof. Straightforward. □
(1.2) Definition. Given two predictable functions f and g with values in d × r
matrices and d-vectors, a solution of the stochastic differential equation e(f, g) is a
pair (X, B) of adapted processes defined on a filtered probability space (Ω, ℱ_t, P)
and such that
i) B is a standard (ℱ_t)-Brownian motion in ℝ^r;
ii) for i = 1, 2, ..., d,

    X_t^i = X_0^i + Σ_j ∫_0^t f_{ij}(s, X.) dB_s^j + ∫_0^t g_i(s, X.) ds.

Furthermore, we use the notation e_x(f, g) if we impose the condition X_0 = x a.s.
on the solutions.
We will rather write ii) in vector form

    X_t = X_0 + ∫_0^t f(s, X.) dB_s + ∫_0^t g(s, X.) ds

and will abbreviate "stochastic differential equation" to SDE. Of course, it is
understood that all the integrals written are meaningful, i.e., almost-surely,

    Σ_{i,j} ∫_0^t f_{ij}²(s, X.) ds < ∞,    ∫_0^t |g(s, X.)| ds < ∞.

Consequently, a solution X is clearly a continuous semimartingale.
As we saw in Sect. 2 of Chap. VII, diffusions are solutions of SDE's of the
simple kind where

    f(s, X.) = σ(s, X_s),    g(s, X.) = b(s, X_s)

for σ and b defined on ℝ_+ × ℝ^d, or even in the homogeneous case

    f(s, X.) = σ(X_s),    g(s, X.) = b(X_s).

Such SDE's will be denoted e(σ, b) rather than e(f, g).
We also saw that to express a diffusion as a solution to an SDE we had to
construct the necessary BM. That is why the pair (X, B) rather than X alone is
considered to be the solution of the SDE.
We have already seen many examples of solutions of SDE's, which justifies
the study of uniqueness, even though we have not yet proved any general existence
result. There are at least two natural definitions of uniqueness and this section will
be devoted mainly to the study of their relationship.
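A solution of e(σ, b) can be approximated on a grid by the classical Euler-Maruyama scheme, replacing dB by Gaussian increments. A minimal one-dimensional sketch (the linear coefficients below, giving geometric Brownian motion with E[X_t] = x_0 e^{bt}, are an arbitrary test case, not one from the text):

```python
import numpy as np

def euler_maruyama(sigma, b, x0, t, n_steps, n_paths, rng):
    """Crude grid approximation of solutions of e(sigma, b) with d = r = 1."""
    dt = t / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        db = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # increments of B
        x += sigma(x) * db + b(x) * dt
    return x

rng = np.random.default_rng(4)
xt = euler_maruyama(lambda x: 0.3 * x, lambda x: 0.1 * x,
                    x0=1.0, t=1.0, n_steps=400, n_paths=40_000, rng=rng)
print(xt.mean(), np.exp(0.1))          # E[X_1] = e^{0.1} for this choice
```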
(1.3) Definitions. 1°) There is pathwise uniqueness for e(f, g) if whenever (X, B)
and (X′, B′) are two solutions defined on the same filtered space with B = B′ and
X_0 = X′_0 a.s., then X and X′ are indistinguishable.
2°) There is uniqueness in law for e(f, g) if whenever (X, B) and (X′, B′) are
two solutions with possibly different Brownian motions B and B′ (in particular if
(X, B) and (X′, B′) are defined on two different probability spaces (Ω, ℱ, ℱ_t, P)
and (Ω′, ℱ′, ℱ_t′, P′)) and X_0 (law)= X′_0, then the laws of X and X′ are equal. In
other words, X and X′ are two versions of the same process.
Uniqueness in law is actually equivalent to a seemingly weaker condition.

(1.4) Proposition. There is uniqueness in law if, for every x ∈ ℝ^d, whenever (X, B)
and (X′, B′) are two solutions such that X_0 = x and X′_0 = x a.s., then the laws of
X and X′ are equal.

Proof. Let P be the law of (X, B) on the canonical space C(ℝ_+, ℝ^{d+r}). Since
this is a Polish space, there is a regular conditional distribution P(w, ·) for P with
respect to ℬ_0. For almost every w the last r coordinate mappings β^i still form a
BM^r under P(w, ·) and the integral

    ∫_0^t f(s, ξ.) dβ_s + ∫_0^t g(s, ξ.) ds,

where ξ stands for the vector of the first d coordinate mappings, makes sense. It
is clear (see Exercise (5.16) Chap. IV) that, for almost every w, the pair (ξ, β)
is under P(w, ·) a solution to e(f, g) with ξ_0 = ξ_0(w) P(w, ·)-a.s. If (X′, B′) is
another solution we may likewise define P′(w, ·) and the hypothesis implies that
P(w, ·) = P′(w, ·) for w in a set of probability 1 for P and P′. If X_0 (law)= X′_0 we
get P = P′ and the proof is complete. □
Remark. The reader will find in Exercise (1.16) an example where uniqueness in
law does not hold.

The relationship between the two kinds of uniqueness is not obvious. We will
show that the first implies the second, but we start with another important

(1.5) Definition. A solution (X, B) of e(f, g) on (Ω, ℱ, ℱ_t, P) is said to be a
strong solution if X is adapted to the filtration (ℱ_t^B), i.e. the filtration of B completed with respect to P.

By contrast, a solution which is not strong will be termed a weak solution.
For many problems, it is important to know whether the solutions to an SDE
are strong. Strong solutions are "non-anticipative" functionals of the Brownian
motion, that is, they are known up to time t as soon as B is known up to time t.
This is important in as much as B is often the given data of the problem under
consideration.
We now prepare for the main result of this section. Let W₁ = C(ℝ_+, ℝ^d) and
W₂ = C(ℝ_+, ℝ^r). On W₁ × W₂ we define, with obvious notation, the σ-algebras

    ℬ_t^i = σ(w_s^i, s ≤ t),    ℬ^i = ⋁_t ℬ_t^i,    i = 1, 2;

we observe that ℬ^i = ℬ_t^i ∨ σ(w_s^i − w_t^i, s ≥ t) for each t.
Let (X, B) be a solution to e(f, g) and Q the image of P under the map
φ : ω → (X.(ω), B.(ω)) from Ω into W₁ × W₂. The projection of Q on W₂ is the
Wiener measure and, as all the spaces involved are Polish spaces, we can consider
a regular conditional distribution Q(w₂, ·) with respect to this projection, that is,
Q(w₂, ·) is a probability measure on W₁ × W₂ such that Q(w₂, W₁ × {w₂}) = 1
Q-a.s. and for every measurable set A ⊂ W₁ × W₂, Q(w₂, A) = E_Q[1_A | ℬ²]
Q-a.s.

(1.6) Lemma. If A ∈ ℬ_t¹, the map w₂ → Q(w₂, A) is ℬ_t²-measurable up to a
negligible set.
Proof. If 𝒢₁, 𝒢₂, 𝒢₃ are three σ-algebras such that 𝒢₁ ∨ 𝒢₂ is independent
of 𝒢₃ under a probability measure m, then for A ∈ 𝒢₁,

    m(A | 𝒢₂) = m(A | 𝒢₂ ∨ 𝒢₃)    m-a.s.

Applying this to ℬ_t¹, ℬ_t², σ(w_s² − w_t², s ≥ t) and Q we get

    Q(w₂, A) = E_Q[1_A | ℬ_t²]    Q-a.s.,

which proves our claim. □
(1.7) Theorem. If pathwise uniqueness holds for e(f, g), then
i) uniqueness in law holds for e(f, g);
ii) every solution to e_x(f, g) is strong.

Proof. By Proposition (1.4), it is enough, to prove i), to show that if (X, B) and
(X′, B′) are two solutions defined respectively on (Ω, P) and (Ω′, P′) such that
X_0 = x and X′_0 = x a.s. for some x ∈ ℝ^d, then the laws of X and X′ are equal.

Let W₁ and W₁′ be two copies of C(ℝ_+, ℝ^d). With obvious notation derived
from that in the previous lemma, we define a probability measure π on the product
W₁ × W₁′ × W₂ by

    π(dw₁, dw₁′, dw₂) = Q(w₂, dw₁) Q′(w₂, dw₁′) W(dw₂)

where W is the Wiener measure on W₂. If ℱ_t = σ(w₁(s), w₁′(s), w₂(s), s ≤ t)
then under π, the process w₂(t) is an (ℱ_t)-BM^r. Indeed we need only prove that
for any pair (s, t), with s < t, w₂(t) − w₂(s) is independent of ℱ_s. Let A ∈ ℬ_s¹,
A′ ∈ ℬ_s^{1′}, B ∈ ℬ_s²; by Lemma (1.6), for ξ ∈ ℝ^r,
    E_π[ exp(i⟨ξ, w₂(t) − w₂(s)⟩) 1_A 1_{A′} 1_B ]
        = ∫_B exp(i⟨ξ, w₂(t) − w₂(s)⟩) Q(w₂, A) Q′(w₂, A′) W(dw₂)
        = exp(−|ξ|²(t − s)/2) ∫_B Q(w₂, A) Q′(w₂, A′) W(dw₂)
        = exp(−|ξ|²(t − s)/2) π(A × A′ × B)

which is the desired result.
We now claim that (w₁, w₂) and (w₁′, w₂) are two solutions to e_x(f, g) on the
same filtered space (W₁ × W₁′ × W₂, ℱ_t, π). Indeed, if for instance

    X_t = x + ∫_0^t f(s, X.) dB_s + ∫_0^t g(s, X.) ds

under P, then

    w₁(t) = x + ∫_0^t f(s, w₁) dw₂(s) + ∫_0^t g(s, w₁) ds

under π because the joint law of (f(s, X.), g(s, X.), B) under P is that of
(f(s, w₁), g(s, w₁), w₂) under π (see Exercise (5.16) in Chap. IV). Since moreover w₁′(0) = x π-a.s., the property of pathwise uniqueness implies that w₁ and w₁′
are π-indistinguishable, hence w₁(π) = w₁′(π), that is: X(P) = X′(P′) which
proves i).
Furthermore, to say that w₁ and w₁′ are π-indistinguishable is to say that π is
carried by the set {(w₁, w₁′, w₂) : w₁ = w₁′}. Therefore, for W-almost every w₂,
under the probability measure Q(w₂, dw₁) ⊗ Q′(w₂, dw₁′) the variables w₁ and
w₁′ are simultaneously equal and independent; this is possible only if there is a
measurable map F from W₂ into W₁ such that for W-almost every w₂,

    Q(w₂, ·) = Q′(w₂, ·) = ε_{F(w₂)}.

But then the image of P by the map φ defined above Lemma (1.6) is carried by
the set of pairs (F(w₂), w₂), hence X = F(B) a.s. By Lemma (1.6), X is adapted
to the completion of the filtration of B. □
Remarks. 1) In the preceding proof, it is actually shown that the law of the pair
(X, B) does not depend on the solution. Thus stated, the above result has a converse which is found in Exercise (1.20).
On the other hand, property i) alone does not entail pathwise uniqueness which
is thus strictly stronger than uniqueness in law (see Exercise (1.19)). Likewise the
existence of a strong solution is not enough to imply uniqueness, even uniqueness
in law (see Exercise (1.16)).
2) We also saw in the preceding proof that for each x, if there is a solution
to e_x(f, g), then there is a function F(x, ·) such that F(x, B) is such a solution.
It can be proved that this function may be chosen to be measurable in x as well,
in which case for any random variable X_0, F(X_0, B) is a solution to e(f, g) with
X_0 as initial value.
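The gap between the two notions can be illustrated on Tanaka's classical equation dX_t = sgn(X_t) dB_t: any BM X solves it against the BM B_t = ∫_0^t sgn(X_s) dX_s, and −X then solves it against the same B, so pathwise uniqueness fails although uniqueness in law holds. A discrete-time sketch assuming NumPy (the sgn(0) step at time 0 is the only source of error):

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 100_000, 1e-4
dx = rng.normal(0.0, np.sqrt(dt), size=n)
x = np.concatenate(([0.0], np.cumsum(dx)))     # X: a BM, candidate solution

sgn = np.sign(x[:-1])                          # left endpoints; sign(0) = 0 only at time 0
db = sgn * dx                                  # increments of the driving BM B
res_plus = x[-1] - np.sum(sgn * db)                 # X_t  - int sgn(X)  dB
res_minus = -x[-1] - np.sum(np.sign(-x[:-1]) * db)  # -X_t - int sgn(-X) dB
print(res_plus, res_minus)   # both negligible: X and -X solve against the same B
```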
We now turn to some important consequences of uniqueness for the equations
e(σ, b) of the homogeneous type. In Proposition (2.6) of Chap. VII we saw that if
(X, B) is a solution of e_x(σ, b), then X(P) is a solution to the martingale problem
π(x, a, b) with a = σσ*. By Proposition (1.4) it is now clear that the uniqueness
of the solution to the martingale problem π(x, a, b) for every x ∈ ℝ^d implies
uniqueness in law for e(σ, b).
In what follows, we will therefore take as our basic data the locally bounded
fields a and b and see what can be deduced from the uniqueness of the solution
to π(x, a, b) for every x ∈ ℝ^d. We will therefore be working on W = C(ℝ_+, ℝ^d)
and with the filtration (ℬ_t) = (σ(X_s, s ≤ t)) where the X_t's are the coordinate
mappings. If T is a (ℬ_t)-stopping time, the σ-algebra ℬ_T is countably generated
(see Exercise (4.21), Chap. I) and for any probability measure P on W, there is a
regular conditional distribution Q(w, ·) with respect to ℬ_T.
(1.8) Proposition. If P is a solution to π(x, a, b) and T is a bounded stopping time,
there is a P-null set N such that for w ∉ N, the probability measure θ_T(Q(w, ·))
is a solution to π(X_T(w), a, b).

Proof. For a fixed w, let A = {w′ : X_0(w′) = X_T(w)}. From the definition of a
regular conditional distribution, it follows that θ_T(Q(w, ·))(A) = 1.
Thus, by Definition (2.3) in Chap. VII, we have to find a negligible set N such
that for any f ∈ C_K^∞ and any t > s,

(+)    E_{Q(w,·)}[ M_t^f ∘ θ_T | θ_T^{−1}(ℬ_s) ] = M_s^f ∘ θ_T    Q(w, ·)-a.s.

for w ∉ N. Equivalently, for w ∉ N, we must have

(*)    ∫_A M_t^f ∘ θ_T(w′) Q(w, dw′) = ∫_A M_s^f ∘ θ_T(w′) Q(w, dw′)

for any A ∈ θ_T^{−1}(ℬ_s) and t > s.
Recall that, by hypothesis, each M^f is a martingale. Let f be fixed and pick
B in ℬ_T; by definition of Q, we have

    E[ 1_B(w) ∫_A M_t^f ∘ θ_T(w′) Q(w, dw′) ] = E[ 1_B · 1_A · (M_{T+t}^f − M_T^f) ].

Since 1_B · 1_A is ℬ_{T+s}-measurable as well as M_T^f and since, by the optional stopping
theorem, (M_{T+t}^f) is a (ℬ_{T+t})-martingale, this is further equal to

    E[ 1_B · 1_A (M_{T+s}^f − M_T^f) ] = E[ 1_B(w) ∫_A M_s^f ∘ θ_T(w′) Q(w, dw′) ].

As a result, there is a P-null set N(A, f, s, t) such that (*) holds for w ∉
N(A, f, s, t).
Now the equality (+) holds for every f in C_K^∞ if it holds for f in a countable
dense subset 𝒮 of C_K^∞; because of the continuity of X it holds for every s and
t if it holds for s and t in ℚ. Let 𝒜 be a countable system of generators for
θ_T^{−1}(ℬ_s); the set

    N = ⋃_{s,t∈ℚ} ⋃_{f∈𝒮} ⋃_{A∈𝒜} N(A, f, s, t)

is P-negligible and is the set we were looking for. □
(1.9) Theorem. If for every x ∈ ℝ^d, there is one and only one solution P_x to the
martingale problem π(x, a, b) and, if for every A ∈ ℬ(ℝ^d) and t ≥ 0 the map
x → P_x[X_t ∈ A] is measurable, then (X_t, P_x, x ∈ ℝ^d) is a Markov process with
transition function P_t(x, A) = P_x[X_t ∈ A].

Proof. For every event Γ ∈ ℬ_∞, every bounded (ℬ_t)-stopping time T and every
x ∈ ℝ^d we have, with obvious notation, that the uniqueness in the statement
together with the preceding result entails

    P_x[θ_T^{−1}(Γ) | ℬ_T] = P_{X_T}[Γ].

Making T = t and integrating, we get the semi-group property. □
Remark. With continuity assumptions on a and b, it may be shown that the semi-group just constructed is actually a Feller semi-group. This will be done in a
special case in the following section.

Having thus described some of the consequences of uniqueness in law, we
want to exhibit a class of SDE's for which the property holds. This will provide
an opportunity of describing two important methods of reducing the study of SDE's
to that of simpler ones, namely, the method of transformation of drift based on
Girsanov's theorem and already alluded to in Sect. 3 of the preceding chapter
and the method of time-change. We begin with the former which we treat both in
the setting of martingale problems and of SDE's. For the first case, we keep on
working with the notation of Proposition (1.8).
(1.10) Theorem. Let a be a field of symmetric and non-negative matrices, b and c
fields of vectors such that a, b and ⟨c, ac⟩ are bounded. There is a one-to-one and
onto correspondence between the solutions to the martingale problems π(x, a, b)
and π(x, a, b + ac). If P and Q are the corresponding solutions, then

    dQ/dP |_{ℬ_t} = exp{ ∫_0^t ⟨c(X_s), dX̃_s⟩ − (1/2) ∫_0^t ⟨c, ac⟩(X_s) ds }

where X̃_t = X_t − ∫_0^t b(X_s) ds.
The displayed formula is the Cameron-Martin formula.
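In the simplest case a = Id, b = 0 and c constant, the Cameron-Martin density reduces to exp(⟨c, X_t⟩ − |c|²t/2) and Q turns X into a BM with drift c, so E_Q[X_t] = ct; this can be checked by importance sampling under P. A one-dimensional sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
c, t, n = 0.7, 1.0, 200_000
x_t = rng.normal(0.0, np.sqrt(t), size=n)      # X_t under P (BM, a = 1, b = 0)
density = np.exp(c * x_t - c**2 * t / 2)       # Cameron-Martin density on sigma(X_t)
e_q = (x_t * density).mean()                   # estimate of E_Q[X_t]
print(e_q, c * t)
```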
372
Chapter IX. Stochastic Differential Equations
Proof. Let P be a solution to π(x, a, b). By Proposition (2.4) in Chap. VII, we
know that under P the process X̃ is a vector local martingale with increasing
process ∫_0^t a(X_s) ds. If we set Y_t = ∫_0^t ⟨c(X_s), dX̃_s⟩, we have

    ⟨Y, Y⟩_t = ∫_0^t ⟨c, ac⟩(X_s) ds

and, since ⟨c, ac⟩ is bounded, Novikov's criterion of Sect. 1
Chap. VIII asserts that ℰ(Y) is a martingale. Thus one can define a probability
measure Q by Q = ℰ(Y)_t · P on ℬ_t, which is the formula in the statement. We
now prove that Q is a solution to π(x, a, b + ac) by means of Proposition (2.4)
in Chap. VII.
For θ ∈ ℝ^d, the process M_t^θ = ⟨θ, X̃_t − x⟩ is a P-local martingale with increasing process A_t = ∫_0^t ⟨θ, a(X_s)θ⟩ ds. Thus by Theorem (1.4) in Chap. VIII,
M^θ − ⟨M^θ, Y⟩ is a Q-local martingale with the same increasing process A_t. It is
furthermore easily computed that

    ⟨M^θ, Y⟩_t = ∫_0^t ⟨θ, ac(X_s)⟩ ds.

As a result, ⟨θ, X_t − x − ∫_0^t b(X_s) ds − ∫_0^t ac(X_s) ds⟩ is a Q-local martingale with
increasing process A_t which proves our claim.
The fact that the correspondence is one-to-one and onto follows from Proposition (1.10) in Chap. VIII applied on each subinterval [0, t]. □
The above result has an SDE version which we now state.
(1.11) Theorem. Let f (resp. g, h) be predictable functions on W with values in
the symmetric non-negative d × d matrices (resp. d-vectors) and assume that h is
bounded. Then, there exist solutions to e_x(f, g) if and only if there exist solutions
to e_x(f, g + fh). There is uniqueness in law for e(f, g) if and only if there is
uniqueness in law for e(f, g + fh).
Proof. If (X, B) is a solution to e(f, g) on a space (Ω, ℱ_t, P), we define a probability measure Q by setting Q = ℰ(M)_t · P on ℱ_t, where M_t = ∫_0^t h(s, X·) dB_s. The
process B̃_t = B_t − ∫_0^t h(s, X·) ds is a BM and (X, B̃) is a solution of e(f, g + fh)
under Q. The details are left to the reader as an exercise.   □
The reader will observe that the density dQ/dP is simpler than in the Cameron–Martin formula. This is due to the fact that we have changed the accompanying
BM. One can also notice that the assumption on h may be replaced by

    E[ exp{ (1/2) ∫_0^t |h(s, X·)|² ds } ] < ∞   for every t > 0.
(1.12) Corollary. Assume that, for every s and x, the matrix σ(s, x) is invertible
and that the map (s, x) → σ(s, x)^{-1} is bounded; if e(σ, 0) has a solution, then for
any bounded measurable b, the equation e(σ, b) has a solution. If uniqueness in
law holds for e(σ, 0), it holds for e(σ, b).
Proof. We apply the previous result with f(s, X·) = σ(s, X_s), g(s, X·) = b(s, X_s)
and h(s, X·) = −σ(s, X_s)^{-1} b(s, X_s).   □

Remark. Even if the solutions of e(f, 0) are strong, the solutions obtained for
e(f, g) by the above method of transformation of drift are not always strong, as
will be shown in Sect. 3.
We now turn to the method of time-change.
(1.13) Proposition. Let γ be a real-valued function on ℝ^d such that 0 < k ≤ γ ≤
K < ∞; there is a one-to-one and onto correspondence between the solutions
to the martingale problem π(x, a, b) and the solutions to the martingale problem
π(x, γa, γb).
Proof. With the notation of Proposition (1.8) define

    A_t = ∫_0^t γ(X_s)^{-1} ds,

and let (τ_t) be the associated time-change (Sect. 1 Chap. V). We define a measurable transformation φ on W by setting X(φ(w))_t = X_{τ_t}(w). Let P be a solution
to the martingale problem π(x, a, b); for any pair (s, t), s < t, and A ∈ ℱ_s, we
have, for f ∈ C_K^∞,

    ∫_A ( f(X_t) − f(X_s) − ∫_s^t γ(X_u) Lf(X_u) du ) dφ(P)
      = ∫_{φ^{-1}(A)} ( f(X_{τ_t}) − f(X_{τ_s}) − ∫_s^t γ(X_{τ_u}) Lf(X_{τ_u}) du ) dP
      = ∫_{φ^{-1}(A)} ( f(X_{τ_t}) − f(X_{τ_s}) − ∫_{τ_s}^{τ_t} Lf(X_u) du ) dP

thanks to the time-change formula of Sect. 1 Chap. V. Now since φ^{-1}(A) ∈ ℱ_{τ_s},
the last integral vanishes, which proves that φ(P) is a solution to π(x, γa, γb).
Using γ^{-1} instead of γ, we would define a map ψ such that ψ(φ(P)) = P, which
completes the proof.   □
Together with the result on transformation of drift, the foregoing result yields
the following important example of existence and uniqueness.
(1.14) Corollary. If σ is a bounded function on the line such that |σ| ≥ ε > 0
and b a bounded function on ℝ_+ × ℝ, there is existence and uniqueness in law for
the SDE e(σ, b). Moreover, if P_x is the law of the solution such that X_0 = x, for
any A ∈ 𝓑(ℝ), the map x → P_x[X_t ∈ A] is measurable.

Proof. By Corollary (1.12) it is enough to consider the equation e(σ, 0) and since
the BM started at x is obviously the only solution to e_x(1, 0), the result follows
from the previous Proposition. The measurability of P_x[X_t ∈ A] follows from the
fact that the P_x's are the images of the Wiener measures W_x under the same map.   □
Remark. By Theorem (1.9) the solutions of e(σ, b) form a homogeneous Markov
process when b does not depend on s. Otherwise, the Markov process we would get
would be non-homogeneous. Finally, it is worth recording that the above argument
does not carry over to d > 1, where the corresponding result, namely for
uniformly elliptic matrices, is much more difficult to prove.
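For σ bounded above and below as in the Corollary, the solution of e(σ, 0) can be realized concretely as a time-changed BM: X_u = B_{τ_u} with A_t = ∫_0^t σ(B_s)^{-2} ds and τ the inverse of A. The Python sketch below (the particular σ and the grid sizes are illustrative assumptions) builds one such path and checks the Itô-isometry consequence E[X_1²] = E[∫_0^1 σ²(X_u) du] by Monte Carlo.

```python
import math, random

def sigma(x):
    """Illustrative diffusion coefficient, bounded: 0.5 <= sigma <= 1.5."""
    return 1.0 + 0.5 * math.sin(x)

def time_changed_path(T_B=6.0, n=3000, m=100, rng=None):
    """One path of X_u = B_{tau_u}, u in [0, 1], where A_t = int_0^t sigma(B_s)^{-2} ds
    and tau is the inverse of A; X should solve dX = sigma(X) dB'."""
    rng = rng or random.Random(1)
    dt = T_B / n
    bs = [0.0]
    for _ in range(n):
        bs.append(bs[-1] + rng.gauss(0.0, math.sqrt(dt)))
    xs = [0.0]          # X on the grid {0, 1/m, ..., 1}
    a, k = 0.0, 1
    for i in range(n):
        a += dt / sigma(bs[i]) ** 2      # increment of A
        while k <= m and a >= k / m:     # level k/m crossed: tau_{k/m} ~ (i+1) dt
            xs.append(bs[i + 1])
            k += 1
    assert k > m    # cannot fail here: A_6 >= 6 / 1.5^2 > 1
    return xs
```

The comparison is statistical, so the test below drives everything from fixed seeds and uses generous tolerances.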
(1.15) Exercise. Let (Y, B) be a solution to e = e_y(f, g) and suppose that f
never vanishes. Set

    A_t = ∫_0^t (2 + Y_s/(1 + |Y_s|)) ds

and call τ_t the inverse of A_t. Prove that X_t = B_{τ_t} is a pure local martingale if
and only if Y is a strong solution to e.
#
(1.16) Exercise. 1°) Let σ(x) = 1 ∧ |x|^α with 0 < α < 1/2 and B be the standard
linear BM. Prove that the process ∫_0^t σ^{-2}(B_s) ds is well-defined for any t > 0; let
τ_t be the time-change associated with it.
2°) Prove that the processes X_t = B_{τ_t} and X_t = 0 are two solutions of e_0(σ, 0),
for which, consequently, uniqueness in law does not hold. Observe that the second
of these solutions is strong.
#
(1.17) Exercise. A family X^x of ℝ^d-valued processes with X_0^x = x a.s. is said
to have the Brownian scaling property if for any c > 0, the processes c^{-1} X^x_{c²t}
and X^{c^{-1}x}_t have the same law. If uniqueness in law holds for e(σ, b) and if X^x is
a solution to e_x(σ, b), prove that if σ(cx) = σ(x) and cb(cx) = b(x) for every
c > 0 and x ∈ ℝ^d, then X^x has the Brownian scaling property. In particular, if
φ is a function on the unit sphere and b(x) = ‖x‖^{-1} φ(x/‖x‖), the solutions to
e_x(aI_d, b) have the Brownian scaling property.
*
(1.18) Exercise. In the situation of Corollary (1.14) let (X, B) be a solution to
e_x(σ, b) and set Y_t = X_t − x − ∫_0^t b(s, X_s) ds.
1°) Let W^x be the space of continuous functions w on ℝ_+ such that w(0) = x.
For w ∈ W^x, set

    ψ_n(t, w) = σ^{-1}(x)                                    for 0 ≤ t < 2^{-n},
    ψ_n(t, w) = 2^n ∫_{(k−1)2^{-n}}^{k2^{-n}} σ^{-1}(w_s) ds   for k2^{-n} ≤ t < (k+1)2^{-n}.

Prove that for every t,

    E[ ( ∫_0^t (ψ_n(s, X) − σ^{-1}(X_s)) dY_s )² ] → 0   as n → ∞.

2°) Prove that there is an adapted function Φ from W^x to W^0 which depends
only on the law of X and is such that B = Φ(X).
3°) Derive from 2°) that if there exists a strong solution, then there is pathwise
uniqueness.
[Hint: Prove that if (X, B) and (X′, B) are two solutions, then the pairs (X, B)
and (X′, B) have the same law.]
#
(1.19) Exercise. 1°) If β is a BM(0) and B_t = ∫_0^t sgn(β_s) dβ_s, prove that (β, B)
and (−β, B) are two solutions to e_0(sgn, 0).
More generally, prove that, if (ε_u, u ≥ 0) is predictable with respect to the
natural filtration of β, and takes only values +1 and −1, then (ε_{g_t} β_t, B_t; t ≥ 0) is
a solution to e_0(sgn, 0), where g_t = sup{s < t : β_s = 0}.
2°) Prove that e_0(sgn, 0) cannot have a strong solution.
[Hint: If X is a solution, write Tanaka's formula for |X|.]
(1.20) Exercise. 1°) Retain the notation of Lemma (1.6) and prove that (X, B) is
a strong solution to e_x(f, g) if and only if there is an adapted map F from W²
into W¹ such that

    Q(w_2, ·) = ε_{F(w_2)}   Q-a.s.

2°) Let (X, B) and (X′, B) be two solutions to e_x(f, g) with respect to the
same BM. Prove that if
i) (X, B) and (X′, B) have the same law,
ii) one of the two solutions is strong,
then X = X′.
§2. Existence and Uniqueness
in the Case of Lipschitz Coefficients
In this section we assume that the functions f and g of Definition (1.2) satisfy
the following Lipschitz condition: there exists a constant K such that for every t
and w,

    |f(t, w) − f(t, w′)| + |g(t, w) − g(t, w′)| ≤ K sup_{s≤t} |w(s) − w′(s)|

where | · | stands for a norm in the suitable space. Under this condition, given a
Brownian motion B in ℝ^r, we will prove that for every x ∈ ℝ^d, there is a unique
process X such that (X, B) is a solution to e_x(f, g); moreover this solution is
strong. As the pair (B_t, t) may be viewed as an (r + 1)-dimensional semimartingale,
we need only prove the more general
(2.1) Theorem. Let (Ω, ℱ_t, P) be a filtered space such that (ℱ_t) is right-continuous and complete and Z a continuous r-dimensional semimartingale. If f satisfies
the above Lipschitz condition and if, for every y, f(·, y) is locally bounded where
y(t) ≡ y, then for every x ∈ ℝ^d, there is a unique (up to indistinguishability)
process X such that

    X_t = x + ∫_0^t f(s, X·) dZ_s.

Moreover, X is (ℱ^Z_t)-adapted.
Proof. We deal only with the case d = 1, the added difficulties of the general
case being merely notational. If M + A is the canonical decomposition of Z, we
first suppose that the measures d⟨M, M⟩_t and |dA|_t on the line are dominated by
the Lebesgue measure dt.
Let x be a fixed real number. For any process U with the necessary measurability conditions, we set

    S(U)_t = x + ∫_0^t f(s, U·) dZ_s.

If V is another such process, we set

    φ_t(U, V) = E[ sup_{s≤t} |U_s − V_s|² ].
Because any two real numbers h and k satisfy (h + k)² ≤ 2(h² + k²), we have

    φ_t(SU, SV) ≤ 2E[ sup_{s≤t} ( ∫_0^s (f(r, U·) − f(r, V·)) dM_r )²
                      + sup_{s≤t} ( ∫_0^s |f(r, U·) − f(r, V·)| |dA|_r )² ]

and by the Doob and Cauchy–Schwarz inequalities, it follows that

    φ_t(SU, SV) ≤ 8E[ ( ∫_0^t (f(r, U·) − f(r, V·)) dM_r )² ]
                  + 2E[ ( ∫_0^t |dA|_s ) ( ∫_0^t |f(r, U·) − f(r, V·)|² |dA|_r ) ]
                ≤ 8E[ ∫_0^t (f(r, U·) − f(r, V·))² d⟨M, M⟩_r ]
                  + 2tE[ ∫_0^t |f(r, U·) − f(r, V·)|² |dA|_r ]
                ≤ 2K²(4 + t) E[ ∫_0^t sup_{s≤r} |U_s − V_s|² dr ]
                = 2K²(4 + t) ∫_0^t φ_r(U, V) dr.
Let us now define inductively a sequence (X^n) of processes by setting X^0 ≡ x
and X^n = S(X^{n−1}); let us further pick a time T and set C = 2K²(4 + T). Using
the properties of f, it is easy to check that D = φ_T(X^0, X^1) is finite. It then
follows from the above computation that for every t ≤ T and every n,

    φ_t(X^{n−1}, X^n) ≤ D (Ct)^{n−1}/(n − 1)! .

Consequently
the series Σ_{n≥1} sup_{s≤t} |X^n_s − X^{n−1}_s| converges a.s. and, as a result, X^n converges a.s., uniformly on every bounded interval, to a continuous process X. By
Theorem (2.12) Chap. IV, X = SX; in other words, X is a solution to the given
equation.
To prove the uniqueness, we consider two solutions X and Y and put T_k =
inf{t : |X_t| or |Y_t| > k}. Let S̃ be defined as S but with Z^{T_k} in lieu of Z. Then it
is easily seen that X^{T_k} = S̃(X^{T_k}) and likewise for Y, so that for t ≤ T

    φ_t(X^{T_k}, Y^{T_k}) = φ_t(S̃X^{T_k}, S̃Y^{T_k}) ≤ C ∫_0^t φ_s(X^{T_k}, Y^{T_k}) ds.

Since, by the properties of f, the function φ_t(X^{T_k}, Y^{T_k}) is locally bounded, Gronwall's lemma implies that φ_t(X^{T_k}, Y^{T_k}) is identically zero, whence X = Y on
[0, T_k ∧ T] follows. Letting k and T go to infinity completes the proof in the
particular case.
The general case can be reduced to the particular case just studied by a suitable
time-change. The process A′_t = t + ⟨M, M⟩_t + ∫_0^t |dA|_s is continuous and strictly
increasing. If we use the time-change C_t associated with A′_t, then M̃_t = M_{C_t} and
Ã_t = A_{C_t} satisfy the hypothesis of the particular case dealt with above. Since
C_t ≤ t, one has

    |f(C_t, U·) − f(C_t, V·)| ≤ K sup_{s≤t} |U_s − V_s|,

and this condition is sufficient for the validity of the reasoning in the first part of
the proof, so that the equation X̃_t = x + ∫_0^t f(C_s, X̃·) dZ̃_s has a unique solution.
By the results of Sect. 1 Chap. V, the process X_t = X̃_{A′_t} is the unique solution to
the given equation.   □
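The successive approximations X^0 ≡ x, X^n = S(X^{n−1}) of the proof can be watched converging on a discretized example; the sketch below (the particular Lipschitz coefficients, grid and iteration count are illustrative choices, and S is replaced by its Euler analogue on a fixed grid) records sup_s |X^n_s − X^{n−1}_s|, which should decay roughly as in the factorial bound.

```python
import math, random

def picard_iterates(x0=1.0, t=1.0, n=400, n_iter=25, seed=3):
    """Successive approximations X^0 = x0, X^n = S(X^{n-1}) on one fixed
    Brownian path, for the (illustrative) Lipschitz coefficients
    sigma(u) = 1 + 0.25 sin(u), b(u) = -u/2, with the map S discretized by
    Euler: S(X)_{i+1} = S(X)_i + sigma(X_i) dB_i + b(X_i) dt."""
    rng = random.Random(seed)
    dt = t / n
    db = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    sigma = lambda u: 1.0 + 0.25 * math.sin(u)
    b = lambda u: -0.5 * u
    X = [x0] * (n + 1)                 # X^0 identically x0
    sups = []
    for _ in range(n_iter):
        Y = [x0]
        for i in range(n):
            Y.append(Y[i] + sigma(X[i]) * db[i] + b(X[i]) * dt)
        sups.append(max(abs(Y[i] - X[i]) for i in range(n + 1)))
        X = Y
    return sups   # sup-distances between consecutive iterates
```

The fixed point of the discretized map S is the Euler scheme for dX = σ(X)dB + b(X)dt, so the iterates converge to it path by path.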
As in the case of ordinary differential equations, the above result does not
provide any practical means of obtaining closed forms in concrete cases. It is
however possible to do so for the class of equations defined below. The reader may
also see Exercise (2.8) for a link between SDE's and ODE's (ordinary differential
equations).
(2.2) Definition. A stochastic equation is called linear if it can be written

    Y_t = H_t + ∫_0^t Y_s dX_s

where H and X are two given continuous semimartingales.
It can also be written in differential form dY_t = dH_t + Y_t dX_t with Y_0 = H_0.
An important example is the Langevin equation dV_t = dB_t − βV_t dt, where B is
a linear BM and β a real constant, which was already studied in Exercise (3.14)
of Chap. VIII. Another example is the equation Y_t = 1 + ∫_0^t Y_s dX_s, for which we
know (Sect. 3 Chap. IV) that the unique solution is Y = ℰ(X). Together with the
formula for ordinary linear differential equations, this leads to the closed form for
solutions of linear equations, the existence and uniqueness of which are ensured
by Theorem (2.1).
(2.3) Proposition. The solution to the linear equation of Definition (2.2) is

    Y_t = ℰ(X)_t ( H_0 + ∫_0^t ℰ(X)_s^{-1} (dH_s − d⟨H, X⟩_s) );

in particular, if ⟨H, X⟩ = 0, then

    Y_t = ℰ(X)_t ( H_0 + ∫_0^t ℰ(X)_s^{-1} dH_s ).
Proof. Let us compute ∫_0^t Y_s dX_s for Y given in the statement. Because of the
equality ℰ(X)_s dX_s = dℰ(X)_s and the integration by parts formula, we get

    ∫_0^t Y_s dX_s
      = H_0 ∫_0^t ℰ(X)_s dX_s + ∫_0^t ( ∫_0^s ℰ(X)_u^{-1} (dH_u − d⟨H, X⟩_u) ) dℰ(X)_s
      = −H_0 + H_0 ℰ(X)_t + ℰ(X)_t ∫_0^t ℰ(X)_s^{-1} (dH_s − d⟨H, X⟩_s)
        − ∫_0^t ℰ(X)_s ℰ(X)_s^{-1} (dH_s − d⟨H, X⟩_s)
        − ⟨ ℰ(X), ∫_0^· ℰ(X)_s^{-1} (dH_s − d⟨H, X⟩_s) ⟩_t
      = Y_t − H_t + ⟨H, X⟩_t − ⟨ ∫_0^· ℰ(X)_s dX_s, ∫_0^· ℰ(X)_s^{-1} dH_s ⟩_t
      = Y_t − H_t,

which is the desired result.   □
Remark. This proof could also be written without prior knowledge of the form of
the solution (see Exercise (2.6) 2°)).
One may also prove that if H is a progressive process and not necessarily a
semimartingale, the process

    Y = H − ℰ(X) ∫ H d(ℰ(X)^{-1})

is a solution of the linear equation of Definition (2.2). The reader will check that this
agrees with the formula of Proposition (2.3) if H is a semimartingale.
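In the special case H_t = t and X = B (so that ⟨H, X⟩ = 0 and H_0 = 0), the Proposition gives Y_t = ℰ(B)_t ∫_0^t ℰ(B)_s^{-1} ds with ℰ(B)_s = exp(B_s − s/2). The sketch below (step count and seed are arbitrary choices) compares this closed form with an Euler scheme for dY = dt + Y dB driven by the same path.

```python
import math, random

def linear_sde_check(t=1.0, n=20000, seed=4):
    """One-path comparison for Proposition (2.3) with H_t = t, X = B
    (so <H, X> = 0 and H_0 = 0): the closed form
        Y_t = E(B)_t * int_0^t E(B)_s^{-1} ds,  E(B)_s = exp(B_s - s/2),
    against an Euler scheme for dY = dt + Y dB on the same path."""
    rng = random.Random(seed)
    dt = t / n
    b, y_euler, integral = 0.0, 0.0, 0.0
    for i in range(n):
        s = i * dt
        integral += dt / math.exp(b - 0.5 * s)   # int E(B)^{-1} ds, left rule
        db = rng.gauss(0.0, math.sqrt(dt))
        y_euler += dt + y_euler * db             # Euler step for dY = dt + Y dB
        b += db
    return math.exp(b - 0.5 * t) * integral, y_euler
```

The two values differ only by discretization error, which shrinks like the square root of the step size for this scheme.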
The solution to the Langevin equation starting at v is thus given by

    V_t = e^{−βt} ( v + ∫_0^t e^{βs} dB_s ).

For β > 0 this is the OU process with parameter β of Exercise (1.13) of Chap.
III (see Exercise (2.16)). The integral ∫_0^t V_s ds is sometimes used by physicists
as another mathematical model of physical Brownian motion. Because of this
interpretation, the process V is also called the OU velocity process of parameter
β. In physical interpretations, β is a strictly positive number. For β = 0, we
get V_t = v + B_t. In all cases, it also follows from the above discussion that the
infinitesimal generator of the OU process of parameter β is the differential operator

    (1/2) d²/dx² − βx d/dx.
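Over a step of length h the solution above has an exact Gaussian transition, V_{t+h} = e^{−βh} V_t + ξ with ξ centered of variance (1 − e^{−2βh})/(2β), which allows simulation that is error-free in law; the stationary variance 1/(2β) of Exercise (2.16) can then be checked empirically. Parameters in the sketch below are illustrative choices.

```python
import math, random

def ou_exact_samples(beta=1.5, v0=3.0, t=6.0, n_steps=60, n_paths=4000, seed=5):
    """Exact (in law) simulation of V_t = e^{-beta t}(v0 + int_0^t e^{beta s} dB_s):
    over a step h the transition is Gaussian with mean e^{-beta h} V and
    variance (1 - e^{-2 beta h}) / (2 beta). Returns terminal samples V_t."""
    rng = random.Random(seed)
    h = t / n_steps
    decay = math.exp(-beta * h)
    sd = math.sqrt((1.0 - decay * decay) / (2.0 * beta))
    out = []
    for _ in range(n_paths):
        v = v0
        for _ in range(n_steps):
            v = decay * v + sd * rng.gauss(0.0, 1.0)
        out.append(v)
    return out
```

At t = 6 with β = 1.5 the initial condition is essentially forgotten, so the sample mean should be near 0 and the sample variance near 1/(2β) = 1/3.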
We now go back to the general situation of the equation e(f, g) with f and g
satisfying the conditions stated at the beginning of the section. We now know that
for a given Brownian motion B on a space (Ω, ℱ_t, P) and every x ∈ ℝ^d, there
is a unique solution to e_x(f, g), which we denote by X^x. We will prove that X^x
may be chosen within its indistinguishability class so as to get continuity in x.
(2.4) Theorem. If f and g are bounded, there exists a process X^x_t, x ∈ ℝ^d, t ∈ ℝ_+,
with paths continuous with respect to both variables t and x such that, for every x,

    X^x_t = x + ∫_0^t f(s, X^x) dB_s + ∫_0^t g(s, X^x) ds   P-a.s.
Proof. Pick p ≥ 2 and t > 0. Retain the notation of the proof of Theorem (2.1),
writing S_x rather than S to stress the starting point x.
Because |a + b + c|^p ≤ 3^{p−1}(|a|^p + |b|^p + |c|^p), we have

    sup_{s≤t} |S_x(U)_s − S_y(V)_s|^p ≤ 3^{p−1} { |x − y|^p
        + sup_{s≤t} | ∫_0^s (f(r, U·) − f(r, V·)) dB_r |^p
        + sup_{s≤t} | ∫_0^s (g(r, U·) − g(r, V·)) dr |^p }.

Thanks to the BDG and Hölder inequalities,

    E[ sup_{s≤t} | ∫_0^s (f(r, U·) − f(r, V·)) dB_r |^p ]
      ≤ C_p E[ ( ∫_0^t (f(r, U·) − f(r, V·))² dr )^{p/2} ]
      ≤ C_p t^{(p−2)/2} E[ ∫_0^t |f(r, U·) − f(r, V·)|^p dr ]
      ≤ K^p C_p t^{(p−2)/2} E[ ∫_0^t sup_{s≤r} |U_s − V_s|^p dr ].
Likewise,

    sup_{s≤t} | ∫_0^s (g(r, U·) − g(r, V·)) dr |^p ≤ K^p t^{p−1} ∫_0^t sup_{s≤r} |U_s − V_s|^p dr.
Applying this to U = X^x and V = X^y, where X^x (resp. X^y) is a solution to
e_x(f, g) (resp. e_y(f, g)), and setting h(t) = E[ sup_{u≤t} |X^x_u − X^y_u|^p ], it follows that

    h(t) ≤ b |x − y|^p + c ∫_0^t h(s) ds

for two constants b and c depending only on p and t. Gronwall's lemma then implies
that there is a constant a depending only on p and t such that

    E[ sup_{s≤t} |X^x_s − X^y_s|^p ] ≤ a |x − y|^p.

By Kolmogorov's criterion (Theorem (2.1) of Chap. I), we can get a bicontinuous
modification of X and it is easily seen that for each x, the process X^x is still a
solution to e_x(f, g).   □
Remark. The hypothesis that f and g are bounded was used to ensure that the
function h of the proof is finite. In special cases, h may be finite with unbounded
f and g and the result will still obtain.
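The Lipschitz estimate E[sup_s |X^x_s − X^y_s|^p] ≤ a|x − y|^p is easy to visualize for the Langevin equation: driving two solutions with the same Brownian increments makes the noise cancel in the difference, which then decays deterministically like (x − y)e^{−βt}. The sketch below (Euler scheme, illustrative parameters) checks both the sup bound and the terminal value.

```python
import math, random

def langevin_pair(x=0.0, y=0.5, beta=2.0, t=1.0, n=2000, seed=6):
    """Two Euler solutions of dV = dB - beta V dt driven by the SAME
    Brownian increments, started at x and y; the noise cancels in the
    difference, which decays like (x - y) e^{-beta t}."""
    rng = random.Random(seed)
    dt = t / n
    vx, vy = x, y
    sup_diff = abs(x - y)
    for _ in range(n):
        db = rng.gauss(0.0, math.sqrt(dt))
        vx += db - beta * vx * dt
        vy += db - beta * vy * dt
        sup_diff = max(sup_diff, abs(vx - vy))
    return sup_diff, abs(vx - vy)
```

Here the difference of the two Euler iterates is exactly (x − y)(1 − β dt)^n, independent of the noise, so the checks below are deterministic.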
From now on, we turn to the case of e(σ, b) where σ and b are bounded
Lipschitz functions of x alone. If we set

    P_t(x, A) = P[X^x_t ∈ A],

we know from Theorem (1.9) that each X^x is a Markov process with transition
function P_t. We actually have the
(2.5) Theorem. The transition function P_t is a Feller transition function.

Proof. Pick f in C_0. From the equality P_t f(x) = E[f(X^x_t)], it is plain that the
function P_t f is continuous. It is moreover in C_0; indeed,

    |P_t f(x)| ≤ sup_{y:|y−x|≤r} |f(y)| + ‖f‖ P[|X^x_t − x| > r]

and

    P[|X^x_t − x| > r] ≤ r^{−2} E[|X^x_t − x|²]
      ≤ 2r^{−2} E[ | ∫_0^t σ(X^x_s) dB_s |² + | ∫_0^t b(X^x_s) ds |² ]
      ≤ 2k² r^{−2} (t + t²)

where k is a uniform bound for σ and b. By letting x, then r, go to infinity, we
get lim_{|x|→∞} P_t f(x) = 0.
On the other hand, for each x, t → P_t f(x) is also clearly continuous, which
completes the proof.   □
With respect to the program outlined at the end of Sect. 1 in Chap. VII, we
see that the methods of stochastic integration have allowed us to construct Feller
processes with generators equal on C_K² to

    (1/2) Σ_{i,j} a_{ij}(x) ∂²/∂x_i ∂x_j + Σ_i b_i(x) ∂/∂x_i,

whenever σ and b are bounded and Lipschitz continuous and a = σσ^t.
#
(2.6) Exercise (More on Proposition (2.3)). 1°) Denoting by * the backward integral (Exercise (2.18) of Chap. IV), show that in Proposition (2.3) we can write

    Y_t = ℰ(X)_t ( Y_0 + ∫_0^t ℰ(X)_s^{-1} * dH_s ),

which looks even more strikingly like the formulas for ordinary linear differential
equations.
2°) Write down the proof of Proposition (2.3) in the following way: set Y =
ℰ(X)Z and find the conditions which Z must satisfy in order that Y be a solution.
(2.6) Bis Exercise (Vector linear equations). If X is a d × d matrix of continuous
semimartingales and H an r × d matrix of locally bounded predictable processes, we
define the right stochastic integral (H·dX)_t = ∫_0^t H_s dX_s as the r × d matrix whose
general term is given by

    Σ_k ∫_0^t H^{ik}_s dX^{kj}_s.

Likewise, there is a left integral dX·H provided that H and X have matching
dimensions.
1°) If Y is an r × d matrix of continuous semimartingales, we define ⟨Y, X⟩ as
the r × d matrix whose entries are the processes Σ_k ⟨Y^{ik}, X^{kj}⟩.
Prove that

    d(YX) = dY·X + Y·dX + d⟨Y, X⟩,

and that, provided the dimensions match,

    ⟨H·dY, X⟩ = H·d⟨Y, X⟩,   ⟨Y, dX·H⟩ = d⟨Y, X⟩·H.

2°) Given X, let us call ℰ(X) the d × d matrix-valued process which is the
unique solution of the SDE

    U_t = I_d + ∫_0^t U_s dX_s,

and set ℰ̃(X) = ℰ(X^t)^t. Prove that ℰ̃(X) is the solution to a linear equation
involving left integrals. Prove that ℰ̃(−X + ⟨X, X⟩) is the inverse of the matrix
ℰ(X) (which thus is invertible, a fact that can also be proved by showing that its
determinant is the solution of a linear equation in dimension 1).
[Hint: Compute d(ℰ(X) ℰ̃(−X + ⟨X, X⟩)).]
3°) If H is an r × d matrix of continuous semimartingales, prove that the solution
to the equation

    Y_t = H_t + ∫_0^t Y_s dX_s

is equal to

    ( H_0 + ∫_0^t (dH_s − d⟨H, X⟩_s) ℰ(X)_s^{-1} ) ℰ(X)_t.

State and prove the analogous result for the equation

    Y_t = H_t + ∫_0^t dX_s Y_s.
(2.7) Exercise. Let F be a real-valued continuous function on ℝ_+ and f the solution
to the ODE

    f′(s) = F(s) f(s),   f(0) = 1.

1°) Let C be another continuous function and X a continuous semimartingale.
Prove that

    Z_t = f(t) ( z + ∫_0^t f(u)^{-1} C(u) dX_u )

is the unique solution to the SDE

    dZ_t = F(t) Z_t dt + C(t) dX_t,   Z_0 = z.

Moreover,

    Z_t = f(t) z + ∫_0^t (f(t)/f(u)) C(u) dX_u.

2°) If X is the BM, prove that Z is a Gaussian Markov process.
3°) Write down the multidimensional version of questions 1°) and 2°). In
particular solve the d-dimensional Langevin equation

    dV_t = σ dB_t − βV_t dt,

where σ and β are d × d matrices.
[Hint: use Exercise (2.6) bis.]
*
(2.8) Exercise (Doss–Sussmann method). Let σ be a C²-function on the real line,
with bounded derivatives σ′ and σ″, and let b be Lipschitz continuous. Call h(x, s)
the solution to the ODE

    ∂h/∂s (x, s) = σ(h(x, s)),   h(x, 0) = x.

Let X be a continuous semimartingale such that X_0 = 0 and call D_t the solution
of the ODE

    dD_t/dt = b( h(D_t, X_t(w)) ) exp{ −∫_0^{X_t(w)} σ′(h(D_t, s)) ds },   D_0 = y.

Prove that h(D_t, X_t) is the unique solution to the equation

    Y_t = y + ∫_0^t σ(Y_s) ∘ dX_s + ∫_0^t b(Y_s) ds,

where ∘ stands for the Stratonovich integral. Moreover D_t = h(Y_t, −X_t).
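The method can be checked on a case where everything is explicit: for σ(y) = y and b ≡ 1 (admissible, since σ′ ≡ 1 and σ″ ≡ 0 are bounded and b is Lipschitz) one finds h(x, s) = x e^s, and the ODE for D reduces to D′_t = e^{−X_t}. Taking X = B, the claim is that Y_t = D_t e^{B_t} solves dY = Y ∘ dB + dt, whose Itô form is dY = Y dB + (Y/2 + 1) dt; the sketch below (illustrative grid and seed) compares the two constructions on one path.

```python
import math, random

def doss_sussmann_check(y0=1.0, t=1.0, n=20000, seed=9):
    """Doss-Sussmann with sigma(y) = y, b = 1: h(x, s) = x e^s, the ODE is
    D' = exp(-B_t), and Y_t = D_t e^{B_t} should solve the Stratonovich
    equation dY = Y o dB + dt, i.e. in Ito form dY = Y dB + (Y/2 + 1) dt.
    Compares the two constructions on one discretized path."""
    rng = random.Random(seed)
    dt = t / n
    b_path, d, y_euler = 0.0, y0, y0
    for _ in range(n):
        d += math.exp(-b_path) * dt                           # D' = exp(-X_t), X = B
        db = rng.gauss(0.0, math.sqrt(dt))
        y_euler += y_euler * db + (0.5 * y_euler + 1.0) * dt  # Ito-Euler step
        b_path += db
    return d * math.exp(b_path), y_euler
```

Only discretization error separates the two values; the tolerance below is accordingly generous.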
(2.9) Exercise. For semimartingales H and X, call ℰ_X(H) the solution to the
linear equation in Definition (2.2).
1°) Prove that if K is another semimartingale,

    ℰ_X(H + K·X) = ℰ_X(H + K) − K.

2°) Let X and Y be two semimartingales and set Z = X + Y + ⟨X, Y⟩.
For any two semimartingales H and K, there is a semimartingale L such that
ℰ_X(H) · ℰ_Y(K) = ℰ_Z(L). For H = K = 1, one finds the particular case treated
in Exercise (3.11) Chap. IV.
3°) If X, Y, H are three semimartingales, the equation Z = H + ⟨Z, Y⟩·X
has a unique solution given by

    Z = H + ℰ_{⟨X,Y⟩}(⟨H, Y⟩)·X.
* (2.10) Exercise (Explosions). The functions σ and b on ℝ^d are said to be locally
Lipschitz if, for any n ∈ ℕ, there exists a constant C_n such that

    ‖σ(x) − σ(y)‖ + ‖b(x) − b(y)‖ ≤ C_n ‖x − y‖   for x and y in the ball B(0, n).

1°) If σ and b are locally Lipschitz, prove that for any x ∈ ℝ^d and any BM B
on a space (Ω, ℱ, P), there exists a unique (ℱ^B)-adapted process X such that,
if e = inf{t : |X_t| = +∞}, then X is continuous on [0, e[ and

    X_t = x + ∫_0^t σ(X_s) dB_s + ∫_0^t b(X_s) ds

on {t < e}. The time e is called the explosion time of X.
[Hint: Use globally Lipschitz functions σ_n and b_n which agree with σ and b
on B(0, n).]
2°) If there is a constant K such that

    ‖σ(x)‖ + ‖b(x)‖ ≤ K(1 + ‖x‖)

for every x ∈ ℝ^d, prove that E[|X_t|²] < ∞ and conclude that P[e = ∞] = 1.
[Hint: If T_n = inf{t : |X_t| > n}, develop |X_{t∧T_n}|² by means of Itô's formula.]
3°) Let W̄ be the space of functions w from ℝ_+ to ℝ^d ∪ {Δ} such that, if
ζ(w) = inf{t : w(t) = Δ}, then w(s) = Δ for any s ≥ ζ and w is continuous on
[0, ζ(w)[. The space W̄ contains the space C(ℝ_+, ℝ^d), on which ζ is identically
+∞. Call Y the coordinate process and, with obvious notation, let Q_x be the law
on W̄ of the process X of 1°) and P_x be the law of BM(x). For d = 1 and σ = 1,
prove that

    Q_x |_{ℱ_t ∩ {t<ζ}} = ℰ( ∫_0^· b(Y_s) dY_s )_t · P_x |_{ℱ_t}.

As a result, ℰ( ∫_0^t b(Y_s) dY_s ) is a true martingale if and only if Q_x(ζ < ∞) = 0.
4°) Prove that for any a ∈ ℝ, ℰ( a ∫_0^t B_s dB_s ) is a true martingale although
Kazamaki's criterion does not apply for t ≥ a^{-1}.
#
(2.11) Exercise (Zvonkin's method). 1°) Suppose that σ is a locally Lipschitz
(Exercise (2.10)) function on ℝ, bounded away from zero, and that b is Borel and
locally bounded; prove that the non-constant solutions h to the differential equation

    (1/2) σ² h″ + b h′ = 0

are either strictly increasing or strictly decreasing and that g = (σh′) ∘ h^{-1} is
locally Lipschitz.
2°) In the situation of 1°), prove that pathwise uniqueness holds for the equation
e(σ, b).
[Hint: Set up a correspondence between the solutions to e(σ, b) and those to
e(g, 0) and use the results in the preceding exercise.]
This method amounts to putting the solution on its natural scale (see Exercise
(3.20) in Chap. VII).
3°) Prove that the solution to e(σ, b), where σ is bounded and ≥ ε > 0 and b
is Lebesgue integrable, is recurrent.
#
(2.12) Exercise (Brownian Bridges). The reader is invited to look first at questions 1°) and 2°) in Exercise (3.18) of Chap. IV.
1°) Prove that the solution to the SDE

    X^x_t = β_t + ∫_0^t (x − X^x_s)/(1 − s) ds,   t ∈ [0, 1[, x ∈ ℝ,

is given by

    X^x_t = xt + β_t − (1 − t) ∫_0^t β_s ds/(1 − s)² = xt + (1 − t) ∫_0^t dβ_s/(1 − s).

2°) Prove that lim_{t↑1} X^x_t = x a.s. and that, if we set X^x_1 = x, then X^x_t,
t ∈ [0, 1], is a Brownian Bridge.
[Hint: If g is a positive continuous decreasing function on [0, 1] such
that g(1) = 0, then for any positive f such that ∫_0^1 f(u)g(u) du < +∞,
lim_{t↑1} g(t) ∫_0^t f(u) du = 0; apply this to g(t) = 1 − t and f(t) =
|β_1 − β_t|(1 − t)^{-2}.]
3°) If P^x is the law of X^x on C([0, 1], ℝ), the above questions together with
1°) in Exercise (3.18) of Chap. IV give another proof of Exercise (3.16) in Chap. I,
namely that P^x is a regular disintegration of the Wiener measure on C([0, 1], ℝ)
with respect to σ(B_1), where B is the canonical process.
4°) We retain henceforth the situation and notation of Exercise (3.18) in
Chap. IV. Prove that X^x and β have the same filtration.
5°) Prove that the (ℱ_t)-predictable processes are indistinguishable from the
processes K(B_1(w), s, w) where K is a map on ℝ × (ℝ_+ × Ω) which is 𝓑(ℝ) ⊗
𝓟(β)-measurable, 𝓟(β) being the σ-algebra of predictable sets with respect to
the filtration of β. Prove that any square-integrable (ℱ_t)-martingale can be written

    M_t = E[f(B_1)] + ∫_0^t K(B_1, s, ·) dβ_s

where f and K satisfy some integrability conditions.
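A quick numerical look at question 2°): integrating the bridge SDE by an Euler scheme up to one step before t = 1 (the drift is singular at 1, so the scheme stops early; all parameters are illustrative) shows the path being pulled to the prescribed endpoint x.

```python
import math, random

def bridge_path(x=2.0, n=4000, seed=7):
    """Euler scheme for dX_t = d(beta_t) + (x - X_t)/(1 - t) dt on [0, 1),
    stopped one step before the singularity at t = 1 (Exercise (2.12) 1)."""
    rng = random.Random(seed)
    dt = 1.0 / n
    path = [0.0]
    for i in range(n - 1):
        t = i * dt
        db = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] + db + (x - path[-1]) / (1.0 - t) * dt)
    return path   # path[k] approximates X_{k/n}, k = 0, ..., n - 1
```

Near t = 1 the bridge variance t(1 − t) is small, so the last computed value should already be close to x.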
**
(2.13) Exercise. 1°) Let B be the linear BM, b a bounded Borel function and set

    Y_t = B_t − ∫_0^t b(B_s) ds.

Prove that ℱ^Y_t = ℱ^B_t for every t.
[Hint: Use Girsanov's Theorem and Exercise (2.11) 2°) above.]
2°) If b is not bounded, this is no longer true. For example, if Y_t = B_t −
∫_0^t B_s^{-1} ds, where the integral was defined in Exercise (1.19) of Chap. VI, then

    E[B_1 | σ(Y_s, s ≤ 1)] = 0.

[Hint: For f ∈ L²([0, 1]), prove that E[B_1 g(∫_0^1 f(s) dY_s)] = 0 and use the argument
of Lemma (3.1) Chap. V. One may also look at 4°) in Exercise (1.29) Chap. VI.]
* (2.14) Exercise (Stochastic differential equation with reflection). Let σ(s, x)
and b(s, x) be two functions on ℝ_+ × ℝ_+ and B a BM. For x_0 ≥ 0, we call
solution to the SDE with reflection e_{x_0}(σ, b) a pair (X, K) of processes such that
i) the process X is continuous, positive, ℱ^B-adapted and

    X_t = x_0 + ∫_0^t σ(s, X_s) dB_s + ∫_0^t b(s, X_s) ds + K_t;

ii) the process K is continuous, increasing, vanishing at 0, ℱ^B-adapted and

    ∫_0^∞ X_s dK_s = 0.

If σ and b are bounded and satisfy the global Lipschitz condition

    |σ(s, x) − σ(s, y)| + |b(s, x) − b(s, y)| ≤ C|x − y|

for every s, x, y and for a constant C, prove that there is existence and uniqueness
for the solutions to e_{x_0}(σ, b) with reflection.
[Hint: Use a successive approximations method with Lemma (2.1) of Chap.
VI as the means to go from one step to the following one.]
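In the special case σ ≡ 1, b ≡ 0, Lemma (2.1) of Chap. VI gives the solution explicitly: X_t = x_0 + B_t + K_t with K_t = sup_{s≤t}(x_0 + B_s)^−, and this is the building block of the successive approximations suggested in the hint. The sketch below (illustrative parameters) builds one such pair and checks properties i) and ii).

```python
import math, random

def reflected_bm(x0=0.5, t=1.0, n=5000, seed=8):
    """Skorokhod construction for sigma = 1, b = 0: with W_t = x0 + B_t,
    K_t = sup_{s<=t} (W_s)^- and X_t = W_t + K_t, the pair (X, K) solves
    the reflected equation: X >= 0 and K grows only where X = 0."""
    rng = random.Random(seed)
    dt = t / n
    w, k = x0, 0.0
    xs, ks = [x0], [0.0]
    for _ in range(n):
        w += rng.gauss(0.0, math.sqrt(dt))
        k = max(k, -w)          # running sup of W^-
        xs.append(w + k)
        ks.append(k)
    return xs, ks
```

By construction X ≥ 0, K is non-decreasing, and whenever K increases the new value of X is exactly 0, which is the discrete form of ∫ X dK = 0.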
*
(2.15) Exercise (Criterion for explosions). We retain the situation of Exercise
(2.11) and set

    s(x) = ∫_0^x exp( −∫_0^y 2b(z)σ^{-2}(z) dz ) dy,
    m(x) = 2 ∫_0^x exp( ∫_0^y 2b(z)σ^{-2}(z) dz ) σ^{-2}(y) dy,
    k(x) = ∫_0^x m(y) s(dy).

The reader is invited to look at Exercise (3.20) in Chap. VII for the interpretations
of s and m.
1°) If U is the unique solution of the differential equation (1/2)σ²U″ + bU′ = U
such that U(0) = 1 and U′(0) = 0, check that

    U(x) = 1 + ∫_0^x ds(y) ∫_0^y U(z) dm(z)

and prove that

    1 + k ≤ U ≤ exp(k).

2°) If e is the explosion time, prove that exp(−t ∧ e) U(X_{t∧e}), where X is the
solution to e_x(σ, b), is a positive supermartingale for every x. Conclude that if
k(−∞) = k(+∞) = ∞, then P_x[e = ∞] = 1 for every x.
3°) If as usual T_0 = inf{t : X_t = 0}, prove that exp(−t ∧ T_0) U(X_{t∧T_0}) is a
bounded P_x-martingale for every x and conclude that if either k(−∞) < ∞ or
k(∞) < ∞, then P_x[e < ∞] > 0 for every x.
4°) Prove that if P_x[e < ∞] = 1 for every x in ℝ, then one of the following
three cases occurs:
i) k(−∞) < ∞ and k(+∞) < ∞,
ii) k(−∞) < ∞ and s(+∞) = ∞,
iii) k(+∞) < ∞ and s(−∞) = ∞.
5°) Assume to be in case i) above and set

    G(x, y) = (s(x) − s(−∞))(s(+∞) − s(y))/(s(+∞) − s(−∞))   if x ≤ y,
    G(x, y) = G(y, x)                                          if y ≤ x,

and set U_1(x) = ∫ G(x, y) m(dy) (again see Sect. 3 Chap. VII for the rationale).
Prove that U_1(X_{t∧e}) + t ∧ e is a local martingale and conclude that E_x[e] < ∞,
hence P_x[e < ∞] = 1.
6°) Prove that in the cases ii) and iii), P_x[e < ∞] = 1.
**
(2.16) Exercise. 1°) Prove that for β > 0, the solution V_t to the Langevin equation
is the OU process with parameter β and size 1/2β.
2°) Prove that if X_0 is independent of B, the solution V such that V_0 = X_0 is
stationary if X_0 has the law N(0, 1/2β). In that case

    V_t = (2β)^{-1/2} e^{−βt} B(e^{2βt}) = (2β)^{-1/2} e^{βt} γ(e^{−2βt})

where B and γ are two standard linear BM's such that B_u = u γ_{1/u}.
(2.17) Exercise. Exceptionally in this exercise we do not ask for condition i) of
Definition (1.2) to be satisfied.
1°) Prove that the stochastic equation

    X_t = B_t + ∫_0^t (X_s/s) ds

has a solution for t ∈ [0, 1].
[Hint: Use a time reversal in the solution to the equation of Exercise (2.12).]
2°) Prove that for 0 < ε < t ≤ 1,

    X_t = (t/ε) X_ε + t ∫_ε^t dB_u/u.

3°) Prove that X is not ℱ^B-adapted.
[Hint: If the solution were strong, the two terms on the right-hand side of the
equality in 2°) would be independent. By letting ε tend to zero this would entail
that for λ ≠ 0, E[exp(iλX_t)] is identically zero.]
4°) By the same device, prove that if φ is a locally bounded Borel function
on ]0, ∞[, the equation

    X_t = B_t + ∫_0^t φ(s) X_s ds

does not have an ℱ^B-adapted solution as soon as

    ∫_0^1 exp( 2 ∫_u^1 φ(s) ds ) du = ∞.
(2.18) Exercise (Laws of exponentials of BM). Let β and γ be two independent
standard linear BM's and for μ, ν in ℝ set β_t^{(μ)} = β_t + μt and γ_t^{(ν)} = γ_t + νt.
1°) Prove that the process

    X_t = exp(β_t^{(μ)}) ( x_0 + ∫_0^t exp(−β_s^{(μ)}) dγ_s^{(ν)} )

is, for every real x_0, a diffusion with infinitesimal generator equal to

    (1/2)(1 + x²) d²/dx² + ((μ + 1/2)x + ν) d/dx.

2°) Prove that X has the same law as sinh(Y_t), t ≥ 0, where Y is a diffusion
with generator

    (1/2) d²/dy² + (μ tanh(y) + ν cosh(y)^{-1}) d/dy.

3°) Derive from 2°) that for fixed t,

    ∫_0^t exp(β_s^{(μ)}) dγ_s^{(ν)}  (law)=  sinh(Y_t).

In particular we get (Bougerol's identity): for fixed t,

    ∫_0^t exp(β_s) dγ_s  (law)=  sinh(β_t).

[Hint: For a Lévy process X vanishing at 0, the processes X_s, s ≤ t, and
X_t − X_{t−s}, s ≤ t, are equivalent.]
4°) Prove that on ℱ_t, when ν = 0 and Y_0 = 0, the law P^μ of Y has the density

    cosh(X_t)^μ exp( −((μ − μ²)/2) ∫_0^t cosh(X_s)^{-2} ds ) exp( −(μ²/2) t )

with respect to the Wiener measure. Prove as a consequence that for fixed t,

    ∫_0^t exp(β_s + s) dγ_s  (law)=  sinh(β_t + εt),

where ε is a Bernoulli r.v. independent of β.
[Hint: See case i) in 2°) Exercise (3.18) Chapter VIII.]
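Bougerol's identity is an equality of one-dimensional laws only, so it can be probed by comparing distribution functionals of the two sides across many simulated paths; the sketch below (Euler discretization of the stochastic integral, illustrative sample sizes and threshold) compares P(∫_0^t e^{β_s} dγ_s > 1) with P(sinh(β_t) > 1) at t = 1.

```python
import math, random

def bougerol_samples(t=1.0, n_steps=200, n_paths=3000, seed=10):
    """Samples of A = int_0^t exp(beta_s) d(gamma_s) (Euler discretization,
    beta and gamma independent BMs) and of sinh(beta_t); by Bougerol's
    identity the two should have the same law for each fixed t."""
    rng = random.Random(seed)
    dt = t / n_steps
    sq = math.sqrt(dt)
    lhs, rhs = [], []
    for _ in range(n_paths):
        beta, a = 0.0, 0.0
        for _ in range(n_steps):
            a += math.exp(beta) * rng.gauss(0.0, sq)   # exp(beta) d(gamma)
            beta += rng.gauss(0.0, sq)
        lhs.append(a)
        rhs.append(math.sinh(beta))
    return lhs, rhs
```

Comparing tail probabilities rather than raw moments keeps the Monte Carlo noise under control, since sinh(β_1) has heavy-ish tails.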
(2.19) Exercise (An extension of Lévy's equivalence). 1°) Let λ ∈ ℝ. Prove
that pathwise uniqueness holds for the one-dimensional equation e(1, b_λ), where
b_λ(x) = −λ sgn(x).
[Hint: Use Exercise (2.11).]
2°) We denote by X^λ a solution to e(1, b_λ) starting from 0, and by B^λ a
Brownian motion with constant drift λ, starting from 0, and S^λ_t = sup_{s≤t} B^λ_s.
Prove that:

    (S^λ_t − B^λ_t, t ≥ 0)  (law)=  (|X^λ_t|, t ≥ 0).

[Hint: Use Skorokhod's lemma (2.1) in Chap. VI.]
§3. The Case of Hölder Coefficients in Dimension One
In this section, we prove a partial converse to Theorem (1.7) by describing a
situation where uniqueness in law entails pathwise uniqueness, thus extending the
scope of the latter property. We rely heavily on local times and, consequently,
have to keep to dimension 1. We study the equation e(σ, b) where σ is locally
bounded on ℝ_+ × ℝ and consider pairs of solutions to this equation defined on the
same space and with respect to the same BM. We recall that any solution X is a
continuous semimartingale and we denote by L^a(X) the right-continuous version
of its local times.
What may seem surprising in the results we are about to prove is that we get
uniqueness in cases where there is no uniqueness for ODE's (ordinary differential
equations). This is due to the regularizing effect of the quadratic variation of BM,
as will appear in the proofs. One can also observe that in the case of ODE's, the
supremum of two solutions is a solution; this is usually not so for SDE's because
of the appearance of a local time. In fact we have the following
(3.1) Proposition. If X^1 and X^2 are two solutions of e(σ, b) such that X^1_0 = X^2_0
a.s., then X^1 ∨ X^2 is a solution if and only if L^0(X^1 − X^2) vanishes identically.

Proof. By Tanaka's formula,

    X^1_t ∨ X^2_t = X^1_t + (X^2_t − X^1_t)^+
      = X^1_t + ∫_0^t 1_{(X^2_s > X^1_s)} d(X^2 − X^1)_s + (1/2) L^0_t(X^2 − X^1),

and replacing X^i, i = 1, 2, by X^i_0 + ∫_0^· σ(s, X^i_s) dB_s + ∫_0^· b(s, X^i_s) ds, it is easy to
check that

    X^1_t ∨ X^2_t = (X^1_0 ∨ X^2_0) + ∫_0^t σ(s, X^1_s ∨ X^2_s) dB_s + ∫_0^t b(s, X^1_s ∨ X^2_s) ds
      + (1/2) L^0_t(X^2 − X^1),

which establishes our claim.   □
The following result is the key to this section.

(3.2) Proposition. If uniqueness in law holds for e(σ, b) and if L^0(X^1 − X^2) = 0
for any pair (X^1, X^2) of solutions such that X^1_0 = X^2_0 a.s., then pathwise uniqueness
holds for e(σ, b).

Proof. By the preceding proposition, if X^1 and X^2 are two solutions, X^1 ∨ X^2 is
also a solution; but X^1 and X^1 ∨ X^2 cannot have the same law unless they are
equal, which completes the proof.   □
The next lemma is crucial to check the above condition on the local time. In the sequel, ρ will always stand for a Borel function from ]0, ∞[ into itself such that ∫_{0+} da/ρ(a) = +∞.

(3.3) Lemma. If X is a continuous semimartingale such that, for some ε > 0 and every t,

A_t = ∫_0^t 1_{(0 < X_s ≤ ε)} ρ(X_s)^{-1} d⟨X, X⟩_s < ∞   a.s.,

then L^0(X) = 0.

Proof. Fix t > 0; by the occupation times formula (Corollary (1.6) Chap. VI),

A_t = ∫_0^ε ρ(a)^{-1} L^a_t(X) da.

If L^0_t(X) did not vanish a.s., then, as L^a_t(X) converges to L^0_t(X) when a decreases to zero, we would get A_t = ∞ with positive probability, which is a contradiction. □
Chapter IX. Stochastic Differential Equations
(3.4) Corollary. Let b^i, i = 1, 2, be two Borel functions; if

|σ(s, x) − σ(s, y)|^2 ≤ ρ(|x − y|)

for every s, x, y and if X^i, i = 1, 2, are solutions to e(σ, b^i) with respect to the same BM, then L^0(X^1 − X^2) = 0.

Proof. We have

X^1_t − X^2_t = X^1_0 − X^2_0 + ∫_0^t (σ(s, X^1_s) − σ(s, X^2_s)) dB_s + ∫_0^t (b^1(s, X^1_s) − b^2(s, X^2_s)) ds,

and therefore

∫_0^t ρ(X^1_s − X^2_s)^{-1} 1_{(X^1_s > X^2_s)} d⟨X^1 − X^2, X^1 − X^2⟩_s = ∫_0^t ρ(X^1_s − X^2_s)^{-1} (σ(s, X^1_s) − σ(s, X^2_s))^2 1_{(X^1_s > X^2_s)} ds ≤ t.

The conclusion follows from Lemma (3.3) applied to X = X^1 − X^2. □
We may now state

(3.5) Theorem. Pathwise uniqueness holds for e(σ, b) in each of the following cases:
i) |σ(x) − σ(y)|^2 ≤ ρ(|x − y|), |σ| ≥ ε > 0 and b and σ are bounded;
ii) |σ(s, x) − σ(s, y)|^2 ≤ ρ(|x − y|) and b is Lipschitz continuous, i.e., for each compact H and each t there is a constant K_t such that for every x, y in H and s ≤ t,

|b(s, x) − b(s, y)| ≤ K_t |x − y|;

iii) |σ(x) − σ(y)|^2 ≤ |f(x) − f(y)| where f is increasing and bounded, σ ≥ ε > 0 and b is bounded.

Remark. We insist that in cases i) and iii) σ does not depend on s, whereas in ii) time-inhomogeneity is allowed.
Proof. i) By Corollary (1.14), since |σ| ≥ ε, uniqueness in law holds for e(σ, b). The result thus follows from Proposition (3.2) through Corollary (3.4).
ii) Let X^1 and X^2 be two solutions with respect to the same BM and such that X^1_0 = X^2_0 a.s. By the preceding corollary and Tanaka's formula,

|X^1_t − X^2_t| = ∫_0^t sgn(X^1_s − X^2_s) d(X^1 − X^2)_s.

The hypothesis made on σ and b entails that we can find a sequence (T_n) of stopping times converging to ∞, such that, if we set Y^i = (X^i)^{T_n}, i = 1, 2, for fixed n, then σ(s, Y^i_s) is bounded and

|b(s, Y^1_s) − b(s, Y^2_s)| ≤ C_t |Y^1_s − Y^2_s|

for s ≤ t and some constant C_t. As a result,

∫_0^t sgn(Y^1_s − Y^2_s) (σ(s, Y^1_s) − σ(s, Y^2_s)) dB_s

is a martingale vanishing at 0 and we have

E[|Y^1_t − Y^2_t|] ≤ C_t ∫_0^t E[|Y^1_s − Y^2_s|] ds.

Using Gronwall's lemma, the proof is now easily completed.
iii) By Corollary (1.14) again, the condition σ ≥ ε implies uniqueness in law for e(σ, b); we will prove the statement by applying Corollary (3.4) with ρ(x) = x and Proposition (3.2). To this end, we pick a δ > 0 and consider

E[∫_0^t (X^1_s − X^2_s)^{-1} 1_{(X^1_s − X^2_s > δ)} d⟨X^1 − X^2, X^1 − X^2⟩_s]
≤ E[∫_0^t (f(X^1_s) − f(X^2_s)) (X^1_s − X^2_s)^{-1} 1_{(X^1_s − X^2_s > δ)} ds] =: K(f)_t.

We now choose a sequence (f_n) of uniformly bounded, increasing C^1 functions such that lim_n f_n(x) = f(x) for any x which is not a discontinuity point of f. The set D of discontinuity points of f is countable; by the occupation times formula, the set of times s such that X^1_s or X^2_s belongs to D has a.s. zero Lebesgue measure, and consequently f_n(X^i_s) → f(X^i_s) for almost all s ≤ t. It follows that K(f)_t = lim_n K(f_n)_t.
For u ∈ [0, 1], set Z^u = X^2 + u(X^1 − X^2); we have

K(f_n)_t = E[∫_0^t (∫_0^1 f_n'(Z^u_s) du) 1_{(X^1_s − X^2_s > δ)} ds]
≤ ∫_0^1 E[∫_0^t f_n'(Z^u_s) ds] du
≤ ε^{-2} ∫_0^1 E[∫_ℝ f_n'(a) L^a_t(Z^u) da] du,

because Z^u_t(ω) = Z^u_0(ω) + ∫_0^t σ^u(s, ω) dB_s(ω) + ∫_0^t b^u(s, ω) ds where σ^u ≥ ε. Moreover, |σ^u| + |b^u| ≤ M for a constant M; by a simple application of Tanaka's formula,

sup_{a,u} E[L^a_t(Z^u)] = C < ∞,

and it follows that

K(f_n)_t ≤ ε^{-2} C ∫_ℝ f_n'(a) da,

which is bounded uniformly in n since the f_n are uniformly bounded. Hence K(f)_t is bounded by a constant independent of δ; letting δ go to zero, we see that the hypothesis of Lemma (3.3) is satisfied for X^1 − X^2 and ρ(x) = x, which completes the proof.
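The "simple application of Tanaka's formula" giving sup_{a,u} E[L^a_t(Z^u)] < ∞ can be spelled out; the following is a sketch, under the bound |σ^u| + |b^u| ≤ M used in the proof:

```latex
% Tanaka's formula at level a for the continuous semimartingale Z^u:
%   L^a_t(Z^u) = |Z^u_t - a| - |Z^u_0 - a|
%                - \int_0^t \operatorname{sgn}(Z^u_s - a)\, dZ^u_s .
% Bounding | |Z^u_t - a| - |Z^u_0 - a| | by |Z^u_t - Z^u_0| and taking
% expectations:
\begin{aligned}
\mathbf{E}\big[L^a_t(Z^u)\big]
  &\le \mathbf{E}\big|Z^u_t - Z^u_0\big|
     + \mathbf{E}\Big|\int_0^t \operatorname{sgn}(Z^u_s - a)\,
          \sigma^u(s)\,dB_s\Big|
     + \mathbf{E}\!\int_0^t |b^u(s)|\,ds \\
  &\le \big(M\sqrt{t} + Mt\big) + M\sqrt{t} + Mt
   \;=\; 2M\big(\sqrt{t} + t\big) \;=:\; C ,
\end{aligned}
% since each stochastic integral is bounded in L^2 by M\sqrt{t};
% the resulting bound C depends neither on a nor on u.
```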
Remarks. 1°) In iii) the hypothesis σ ≥ ε cannot be replaced by |σ| ≥ ε; indeed, we know that there is no pathwise uniqueness for e(σ, 0) when σ(x) = sgn(x).
2°) The significance of iii) is that pathwise uniqueness holds when σ is of bounded quadratic variation. This is the best that one can obtain, as is seen in Exercise (1.16).
3°) The hypotheses of Theorem (3.5) may be slightly weakened, as is shown in Exercises (3.13) and (3.14).

At the beginning of the section, we alluded to the difference between SDE's and ODE's. We now see that if, for instance, σ(x) = √|x|, then the ODE dX_t = σ(X_t) dt has several solutions whereas the SDE e(σ, b) has only one. The point is that for SDE's the estimates are performed by using the increasing processes of the martingale parts of the solutions, and thus it is (σ(x) − σ(y))^2 and not |σ(x) − σ(y)| which comes into play. We now turn to other questions.
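The ODE non-uniqueness mentioned here is easy to check directly. The sketch below (illustrative only, not from the text) verifies numerically that both x ≡ 0 and x(t) = t²/4 satisfy the integral form x(t) = ∫_0^t √(x(s)) ds of the ODE with σ(x) = √|x|:

```python
import numpy as np

# For sigma(x) = sqrt(|x|), the ODE dx/dt = sigma(x), x(0) = 0, has
# (at least) the two solutions x(t) = 0 and x(t) = t^2 / 4.  We check
# both against the integral equation x(t) = \int_0^t sqrt(x(s)) ds.
t = np.linspace(0.0, 1.0, 100_001)

for x in (np.zeros_like(t), t**2 / 4.0):
    integrand = np.sqrt(x)
    # cumulative trapezoidal integral of sqrt(x(s)) up to each time t
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0) * (t[1] - t[0]))
    )
    residual = np.max(np.abs(x - integral))
    assert residual < 1e-4, residual

print("both candidates solve the ODE up to discretization error")
```

The trapezoidal rule is exact here for x(t) = t²/4 (the integrand t/2 is linear), so both residuals are essentially at floating-point level.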
A consequence of the above results, together with Corollary (1.14), is that for bounded b the equation e(1, b) always has a solution and that this solution is strong (another proof is given in Exercise (2.11)). We now give an example, known as Tsirel'son's example, which shows that this does not carry over to the case where b is replaced by a function depending on the entire past of X.
We define a bounded function τ on ℝ_+ × W in the following way. For a strictly increasing sequence (t_k, k ∈ −ℕ) of numbers such that 0 < t_k < 1 for k < 0, t_0 = 1, lim_{k→−∞} t_k = 0, we set

τ(t, w) = [ (w(t_k) − w(t_{k−1})) / (t_k − t_{k−1}) ]   for t ∈ ]t_k, t_{k+1}],   τ(t, w) = 0 otherwise,

where [x] is the fractional part of the real number x. This is clearly a predictable function on W. If (X, B) is a solution to e(1, τ), then on ]t_k, t_{k+1}],

X_t − X_{t_k} = B_t − B_{t_k} + (t − t_k) τ(t, X);

if we set, for t_k < t ≤ t_{k+1},

η_t = (X_t − X_{t_k}) / (t − t_k),   θ_t = (B_t − B_{t_k}) / (t − t_k),

we have

(*)   η_t = θ_t + [η_{t_k}].
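A quick numerical illustration of this recursion (not from the text; the dyadic grid t_k = 2^k, k ≤ 0, the starting value 0 and the sample size are all arbitrary choices): iterating (*) level by level, the fractional part [η] becomes uniform on [0, 1[, which is what Proposition (3.6) asserts. A uniform law is detected by the vanishing of the Fourier coefficient E[exp(2iπ[η])]:

```python
import numpy as np

rng = np.random.default_rng(0)

# One concrete (hypothetical) choice of the grid: t_k = 2**k, k = -25..0.
ks = np.arange(-25, 1)
t = 2.0 ** ks

n_paths = 20_000
eta = np.zeros(n_paths)              # arbitrary value at the lowest level
for k in range(1, len(t)):
    dt = t[k] - t[k - 1]
    # theta_{t_k} = (B_{t_k} - B_{t_{k-1}}) / (t_k - t_{k-1}) ~ N(0, 1/dt)
    theta = rng.standard_normal(n_paths) / np.sqrt(dt)
    eta = theta + np.mod(eta, 1.0)   # the recursion (*)

frac = np.mod(eta, 1.0)
# For a uniform law on [0,1[, E[exp(2*i*pi*[eta])] = 0.
d1 = np.abs(np.mean(np.exp(2j * np.pi * frac)))
print(f"|E exp(2i*pi*[eta])| ~ {d1:.4f}")
assert d1 < 0.05
```

The observed value is at the Monte Carlo noise level ~ 1/√n_paths, matching the exponential decay of |d_k| in the proof below.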
(3.6) Proposition. The equation e(1, τ) has no strong solution. More precisely:
i) for every t in ]0, 1], the r.v. [η_t] is independent of F^B_t and uniformly distributed on [0, 1[;
ii) for any 0 < s ≤ t, F^X_t = σ([η_s]) ∨ F^B_t.

Proof. The first statement is an obvious consequence of properties i) and ii), and ii) follows easily from the definitions. We turn to proving i).
Let p ∈ ℤ − {0} and set d_k = E[exp{2iπp η_{t_k}}]; by (*), we have

d_k = E[exp{2iπp (θ_{t_k} + [η_{t_{k−1}}])}] = E[exp{2iπp (θ_{t_k} + η_{t_{k−1}})}]
    = d_{k−1} E[exp(2iπp θ_{t_k})] = d_{k−1} exp{−2π^2 p^2 (t_k − t_{k−1})^{-1}},

because θ_{t_k} is independent of F_{t_{k−1}}, where (F_t) is the filtration with respect to which (X, B) is defined. It follows that

|d_k| ≤ |d_{k−1}| exp{−2π^2 p^2} ≤ ... ≤ |d_{k−n}| exp{−2nπ^2 p^2},

and consequently d_k = 0 for every k. This proves that [η_{t_k}] is uniformly distributed on [0, 1[.
Define further B^n_k = σ(B_u − B_v, t_{k−n} ≤ v ≤ u ≤ t_k); then

E[exp{2iπp η_{t_k}} | B^n_k] = E[exp{2iπp (θ_{t_k} + θ_{t_{k−1}} + ... + θ_{t_{k−n+1}} + η_{t_{k−n}})} | B^n_k]
                             = exp{2iπp (θ_{t_k} + ... + θ_{t_{k−n+1}})} d_{k−n}

since η_{t_{k−n}} is independent of B^n_k. The above conditional expectation is thus zero; it follows easily that E[exp{2iπp η_{t_k}} | F^B_{t_k}] = 0 and, since F^B_{t_k} is independent of {B_t − B_{t_k}, t ≥ t_k}, that E[exp{2iπp η_{t_k}} | F^B_t] = 0 for t ≥ t_k.
Finally, for t_k < t ≤ t_{k+1}, we have

E[exp{2iπp η_t} | F^B_t] = exp{2iπp θ_t} E[exp{2iπp η_{t_k}} | F^B_t] = 0,

and this being true for every p ≠ 0 proves our claim. □
Remark. As a result, there does exist a Brownian motion B on a space (Ω, F_t, P) and two processes X^1, X^2 such that (X^i, B), i = 1, 2, is a solution of e(1, τ). The reader will find in Exercise (3.17) some information on the relationship between X^1 and X^2. Moreover, other examples of non-existence of strong solutions may be deduced from this one, as is shown in Exercise (3.18).

We will now use the techniques of this section to prove comparison theorems for solutions of SDE's. Using the same notation as above, we assume that either (σ(s, x) − σ(s, y))^2 ≤ ρ(|x − y|) or that σ satisfies hypothesis iii) of Theorem (3.5).
(3.7) Theorem. Let b^i, i = 1, 2, be two bounded Borel functions such that b^1 ≥ b^2 everywhere and at least one of them satisfies a Lipschitz condition. If X^i, i = 1, 2, are solutions to e(σ, b^i) defined on the same space with respect to the same BM and if X^1_0 ≥ X^2_0 a.s., then

P[X^1_t ≥ X^2_t for all t ≥ 0] = 1.

Proof. It was shown in Corollary (3.4) and in the proof of Theorem (3.5) that, in each case, L^0(X^1 − X^2) = 0, and therefore, by Tanaka's formula,

φ(t) := E[(X^2_t − X^1_t)^+] = E[∫_0^t 1_{(X^2_s > X^1_s)} (b^2(s, X^2_s) − b^1(s, X^1_s)) ds]
     ≤ E[∫_0^t 1_{(X^2_s > X^1_s)} (b^1(s, X^2_s) − b^1(s, X^1_s)) ds].

Thus, if b^1 is Lipschitz with constants K_t,

φ(t) ≤ K_t ∫_0^t φ(s) ds,

and we conclude by using Gronwall's lemma and the usual continuity arguments.
If b^2 is Lipschitz, using the same identity again, we have

φ(t) ≤ E[∫_0^t 1_{(X^2_s > X^1_s)} |b^2(s, X^2_s) − b^2(s, X^1_s)| ds]
     + E[∫_0^t 1_{(X^2_s > X^1_s)} (b^2(s, X^1_s) − b^1(s, X^1_s)) ds]
     ≤ E[∫_0^t 1_{(X^2_s > X^1_s)} |b^2(s, X^2_s) − b^2(s, X^1_s)| ds]

since b^2 ≤ b^1, and we complete the proof as in the first case. □
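The ordering in the comparison theorem can be observed on a discretized example (illustrative only, not from the text; the choices σ ≡ 1 and b^1(x) = cos x + 1 ≥ b^2(x) = cos x are hypothetical, both drifts being bounded and Lipschitz):

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler scheme for dX = dB + b^i(X) dt, i = 1, 2, driven by the SAME
# Brownian increments, with b^1 >= b^2 everywhere and X^1_0 = X^2_0.
b1 = lambda x: np.cos(x) + 1.0
b2 = lambda x: np.cos(x)

n_steps, dt = 10_000, 1e-3
x1 = x2 = 0.0
ordered = True
for _ in range(n_steps):
    db = rng.standard_normal() * np.sqrt(dt)
    x1 = x1 + db + b1(x1) * dt
    x2 = x2 + db + b2(x2) * dt
    ordered = ordered and (x1 >= x2)

print("X^1_t >= X^2_t along the whole path:", ordered)
assert ordered
```

With this particular pair of drifts the discrete ordering even holds deterministically: if D_n = x1 − x2 ≥ 0, then D_{n+1} ≥ D_n + (1 − D_n) dt > 0 for dt < 1, mirroring the Gronwall argument in the proof.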
With more stringent conditions, we can even get strict inequalities.

(3.8) Theorem. Retain the hypotheses of Theorem (3.7). If the functions b^i do not depend on s and are continuous, σ is Lipschitz continuous, and one of the following conditions is in force:
i) b^1 > b^2 everywhere,
ii) |σ| ≥ ε > 0, either b^1 or b^2 is Lipschitz and there exists a neighborhood V(x) of a point x such that

∫_{V(x)} 1_{(b^1(a) = b^2(a))} da = 0,

then, if (X^i, B) is a solution of e_x(σ, b^i),

P[X^1_t > X^2_t for all t > 0] = 1.
Proof. In case i) one can suppose that either b^1 or b^2 is Lipschitz continuous, because it is possible to find a Lipschitz function b^3 such that b^1 > b^3 > b^2. We now suppose that b^1 is Lipschitz, the other case being treated in similar fashion. We may write

X^1_t − X^2_t = ∫_0^t (X^1_s − X^2_s) dM_s + H_t,

where

M_t = ∫_0^t ((σ(X^1_s) − σ(X^2_s)) / (X^1_s − X^2_s)) 1_{(X^1_s ≠ X^2_s)} dB_s + ∫_0^t ((b^1(X^1_s) − b^1(X^2_s)) / (X^1_s − X^2_s)) 1_{(X^1_s ≠ X^2_s)} ds

and

H_t = ∫_0^t (b^1(X^2_s) − b^2(X^2_s)) ds.

The hypothesis made on σ and b^i entails that H and M are continuous semimartingales; by Proposition (2.3) we consequently have

X^1_t − X^2_t = ℰ(M)_t ∫_0^t ℰ(M)_s^{-1} dH_s,

and it is enough to prove that for every t > 0, H_t > 0 a.s. This property is obviously true in case i); under ii), by the occupation times formula,

H_t = ∫_ℝ (b^1(a) − b^2(a)) σ(a)^{-2} L^a_t(X^2) da.

If L^x_t(X^2) > 0 for all t > 0, the result will follow from the right-continuity in a of L^a_t. Thus we will be finished once we have proved the following

(3.9) Lemma. If X is a solution of e_x(σ, b) and, moreover, |σ| ≥ ε > 0, then, almost surely, L^x_t(X) > 0 for every t > 0.

Proof. We may assume that X is defined on the canonical space. By Girsanov's theorem, there is a probability measure Q for which X is a solution to e(σ, 0). The stochastic integrals being the same under P and Q, the formula

|X_t − x| = ∫_0^t sgn(X_s − x) dX_s + L^x_t

shows that the local time is the same under P and Q. But under Q we have

X_t = x + ∫_0^t σ(s, X_s) dB_s,

hence X = β_A, where β is a BM(x) and A_t a strictly increasing process of time-changes. The result follows immediately. □
#
(3.10) Exercise (Stochastic area). 1°) Give another proof of 3°) in Exercise (2.19) Chap. V in the following way: using the results of this chapter, prove that (F^{S,P}_t) ⊂ (F^γ_t) where γ is defined within the proof of Theorem (2.11) Chap. V, and compute ⟨S, S⟩ and ⟨S, γ⟩.
2°) Prove that
2°) Prove that
where β is a linear BM. The exact value of this function of λ and t is computed in Sect. 1, Chap. XI.
* (3.11) Exercise. (Continuation of Exercise (4.15) of Chap. V). Let τ^n be the time-change associated with ⟨M^n, M^n⟩.
1°) Let Z^n_t = B_{τ^n_t}, and prove that

(Z^n_t)^2 = c_n ∫_0^t Z^n_s dβ^n_s + d_n t,

where c_n and d_n are two constants and β^n is the DDS Brownian motion of M^n.
2°) If n is odd, prove that M^n is pure.
[Hint: Use Theorem (3.5) to show that τ^n_t is F^{β^n}-measurable.]
* (3.12) Exercise. 1°) Retain the notation of Exercise (1.27) Chap. VIII and suppose that (F_t) = (F^B_t) where B is a BM under P. Let τ be the Tsirel'son drift and define

Q = ℰ(∫_0^1 τ(s, B) dB_s) · P   on F_1.

Set

A_t = ∫_0^t (2 + B_s/(1 + |B_s|)) ds,

and call T_t the inverse of A. Using Exercise (1.15), prove that (B_{T_t}) is not pure under Q, hence that a Girsanov transform of a pure martingale may fail to be pure.
2°) In the situation of Proposition (3.6), let V_t be the inverse of the process ∫_0^t (1 + τ(s, X)) ds and set M_t = B_{V_t}. Prove that M is not pure although F^M = σ(F^M_ε, F^β) for every ε > 0. This question is independent of the first. The following question shows that the situation is different if we replace purity by extremality.
3°) Let M be an (F_t)-local martingale with the following representation property: for every ε > 0 and every X ∈ L^2(F_∞) there is a suitable predictable process φ_ε such that

X = E[X | F_ε] + ∫_ε^∞ φ_ε(s) dM_s.

Prove that M has the (F_t)-PRP.
§3. The Case of Holder Coefficients in Dimension One
397
(3.13) Exercise. Prove that Theorem (3.5) is still true if in iii) we drop the hypothesis that f is bounded, or if we replace the hypothesis σ ≥ ε by: for every r > 0, there is a number ε_r > 0 such that σ ≥ ε_r on [−r, r].
(3.14) Exercise. Prove that parts i) and ii) of Theorem (3.5) are still true if the hypothesis on σ reads: there are locally integrable functions c and g and a number δ > 0 such that for every x and every y ∈ [x − δ, x + δ],

(σ(s, x) − σ(s, y))^2 ≤ (c(s) + g(x) σ^2(s, x)) ρ(|x − y|).
(3.15) Exercise. Let γ be a predictable function on W and τ the Tsirel'son drift; define

τ'(s, w) = τ(s, w) + γ(s, w − ∫_0^· τ(u, w) du).

Let (X, B) be a solution to e_0(1, τ') on the space (Ω, F_t, P).
1°) If β = X − ∫_0^· τ(u, X) du, prove that (F^B_t) ⊂ (F^β_t).
2°) Find a probability measure Q on (Ω, F_∞), equivalent to P, for which β is a BM and (X, β) a solution to e_0(1, τ). Derive therefrom that (X, B) is not a strong solution to e_0(1, τ').
#
(3.16) Exercise. 1°) If B is a standard linear BM, prove that the process Z_t = B_t^2 satisfies the SDE

Z_t = 2 ∫_0^t √(Z_s) dβ_s + t

and derive therefrom another proof of the equality F^β_t = F^{|B|}_t of Corollary (2.2) in Chap. VI.
2°) More generally, if B is a BM^d(0), show that |B| and the linear BM

β_t = Σ_{i=1}^d ∫_0^t (B^i_s / |B_s|) dB^i_s

have the same filtration (see Sect. 3 Chap. VI and Sect. 1 Chap. XI).
3°) If A is a symmetric d × d matrix and B a BM^d(0), prove that the local martingale ∫_0^· ⟨AB_s, dB_s⟩ has the same filtration as a BM^r where r is the number of distinct, non-zero eigenvalues of A. In particular, prove that a planar BM (B^1, B^2) has the same filtration as (|B^1 + B^2|, |B^1 − B^2|).
[Hint: Use Exercises (1.36) and (3.20) in Chap. IV.]
*
(3.17) Exercise. 1°) Suppose given a Gaussian vector local martingale B in ℝ^d on a space (Ω, F_t, P) such that ⟨B^i, B^j⟩_t = ρ_{ij} t with ρ_{ii} = 1. For each i, we suppose that there is an (F_t)-adapted process X^i such that (X^i, B^i) is a solution of e(1, τ) with X_0 = B_0 = 0. With obvious notation derived from those in Proposition (3.6), prove that the law of the random vector [η_t] = ([η^i_t], i = 1, ..., d) is independent of t and is invariant under the translations x_i → x_i + u_i (mod. 1) whenever Σ_i p_i u_i = 0 for any (p_i) ∈ ℤ^d such that Σ_{i,j} ρ_{ij} p_i p_j = 0. Prove further that this random variable is independent of F^B_t.
2°) Suppose from now on that all the components B^i of B are equal to the same linear BM β, and let α be a vector random variable independent of F^β_∞, whose law is carried by ([0, 1[)^d and is invariant under the translations x_i → x_i + u (mod. 1) for every u ∈ ℝ (not ℝ^d!). Using α, prove that one can define recursively a unique process η satisfying (*) and such that, for any t, the vector random variable [η_t] is independent of F^β_t.
3°) Prove that the family of σ-algebras G_t = F^β_t ∨ σ([η_t]) is a filtration and that β is a (G_t)-Brownian motion.
4°) If, for t ∈ ]t_l, t_{l+1}], we define

X_t = β_t + Σ_{k ≤ l} (t_k − t_{k−1}) [η_{t_{k−1}}] + (t − t_l) [η_{t_l}],

the process X is (G_t)-adapted and (X^i, β) is for each i a solution to e(1, τ).
5°) Prove that for any Z ∈ L^2(F^X_∞, P), there is an (F^X_t)-predictable process φ such that

E[∫_0^1 φ_s^2 ds] < ∞   and   Z = Z_0 + ∫_0^1 φ_s dβ_s,

where Z_0 is F^X_0-measurable.
*
(3.18) Exercise (Tsirel'son type equation without a drift). With the same sequence (t_k) as for the Tsirel'son drift, we define a predictable function f on W by

f(s, w) = sgn(w(t_k) − w(t_{k−1}))   for s ∈ ]t_k, t_{k+1}].

1°) Let B be an (F_t)-BM on (Ω, F_t, P) and (X^i, B), i = 1, 2, ..., n, be solutions to e_0(f, 0); if we define η by

η_t = (sgn(X^1_{t_k} − X^1_{t_{k−1}}), ..., sgn(X^n_{t_k} − X^n_{t_{k−1}}))   for t ∈ ]t_k, t_{k+1}],

then, for any t ∈ ]0, 1], the law of η_t is invariant by the symmetry (x_1, ..., x_n) → (−x_1, ..., −x_n), η_t is independent of F^B_t and, almost surely,

σ(X_s, s ≤ t) = F^B_t ∨ σ(η_s)   for any s ≤ t.

2°) Conversely, if α is a r.v. taking its values in {−1, 1}^n, which is F_∞-measurable, independent of F^B_∞, and such that its law is invariant by the above symmetry, then there exists a filtration (G_t) on Ω and a (G_t)-adapted process X such that:
i) B is a (G_t)-BM;
ii) for each i, (X^i, B) is a solution to e_0(f, 0) and α^i = sgn(X^i_{t_0} − X^i_{t_{−1}}).
Notes and Comments
(3.19) Exercise. (Continuation of Exercise (2.24) of Chap. VI). With the obvious definition, prove that there is pathwise uniqueness for the equation (*).
[Hint: One can use Exercise (1.21) Chapter VI.]
(3.20) Exercise. Retain the situation and notation of Theorem (3.8).
1°) For ρ as in Lemma (3.3), prove that, for every t > 0,

∫_0^t ρ(X^1_s − X^2_s)^{-1} ds = ∞   a.s.

[Hint: Use the expression of X^1 − X^2 as a function of M and H given in the proof of Theorem (3.8).]
2°) If in addition b^1 − b^2 ≥ a > 0 and if now ρ is such that ∫_{0+} ρ(u)^{-1} du < ∞, then

∫_0^t ρ(X^1_s − X^2_s)^{-1} ds < ∞   a.s.

**
(3.21) Exercise. Retain the situation and notation of Theorem (3.7) and suppose that σ and b^1 satisfy some Lipschitz conditions. Prove that, as a tends to zero, for every t > 0 and ε ∈ ]0, 1/2[,

L^a_t(X^1 − X^2) = O(a^{1/2 − ε})   a.s.

[Hint: Use Corollary (1.9) in Chap. VI and the exponential formulas of the proof of Theorem (3.8).]
Notes and Comments
Sect. 1. The notion of stochastic differential equation originates with Ito (see Ito
[2]). To write this section, we made use of Ikeda-Watanabe [2], Stroock-Varadhan
[1] and Priouret [1]. For a more general exposition, see Jacod [2]. The important
Theorem (1.7) is due to Yamada and Watanabe [1]. The result stated in Remark 2)
after Theorem (1.7) was proved in Kallenberg [2].
Exercise (1.15) is taken from Stroock-Yor [1] and Exercise (1.16) from Girsanov [2]. The result in Exercise (1.18) is due to Perkins (see Knight [7]). Pathwise uniqueness is a property which concerns all probability spaces. Kallsen [1] has found a Tsirel'son-like example of an SDE which enjoys the existence and uniqueness properties on a particular space but not on all spaces.
Sect. 2. As for many results on SDE's, the results of this section originate with Ito.
Theorem (2.1) was proved by Doleans-Dade [1] for general (i.e. non continuous)
semimartingales. Proposition (2.3) comes from an unpublished paper of Yoeurp
and Yor.
Theorem (2.4) and its corollaries are taken from Neveu [2] and Priouret [1],
but of course, most ideas go back to Ito. Theorem (2.4) is the starting point for
the theory of flows of SDE's in which, for instance, one proves, under appropriate hypotheses, the differentiability in x of the solutions. It also leads to some aspects of stochastic differential geometry. An introduction to these topics is provided by the lecture course of Kunita [4].
Exercise (2.6)bis is taken from Jacod [3] and Karandikar [1].
Exercise (2.8) is due to Doss [1] and Sussman ([1] and [2]). Exercise (2.9)
is from Yoeurp [3] and Exercise (2.10) is taken in part from Ikeda-Watanabe [2].
Exercise (2.12), inspired by Jeulin-Yor [2] and Yor [10], originates with Ito [6] and
provides a basic example for the theory of enlargements of filtrations. Exercise
(2.14) is taken from El Karoui and Chaleyat-Maurel [1]. Exercise (2.15) describes
results which are due to Feller; generally speaking, the contribution of Feller to
the theory of diffusions is not sufficiently stressed in these Notes and Comments.
Exercise (2.17) is very close to Chitashvili-Toronjadze [1] and is further developed in Jeulin-Yor [4]. It would be interesting to connect the results in this
exercise with those of Carlen [1] and Carlen-Elworthy [1]. For Exercise (2.18)
see Bougerol [1] and Alili [1], Alili-Dufresne-Yor [1]; related results, for Lévy processes instead of BM with drift, are found in Carmona et al. [1].
Some explicit solutions to SDE's are exhibited in Kloeden-Platen [1] using
Ito's formula.
The result of Exercise (2.19) may be found in Fitzsimmons [1] who studies in
fact a converse to Lévy's equivalence.
Sect. 3. Our exposition of Theorem (3.5) is based on Le Gall [1] who improved
earlier results of Nakao [1] and Perkins [5]. Problems of stability for solutions of
such one-dimensional SDE's are studied in Kawabata-Yamada [1] and Le Gall [1].
The proof given here for the Tsirel'son example is taken from Stroock-Yor [1] and is inspired by a proof due to Krylov which is found in Liptser-Shiryayev [1]. Benes [1] gives another proof, as well as some extensions. The exercises linked to this example are also mainly from Stroock-Yor [1], with the exception of Exercise (3.17) which is from Le Gall-Yor [1]. Further general results about Tsirel'son's equation in discrete time are developed in Yor [19].
Notice that in the Tsirel'son example, despite the fact that (F^B_t) is strictly coarser than (F^X_t), the latter filtration is still a strong Brownian filtration, as was discussed in the Notes and Comments of Chap. V.
For the comparison theorems see Yamada [1], Ikeda-Watanabe ([1] and [2])
and Le Gall [1], but there are actually many other papers, too numerous to be
listed here, devoted to this question.
Exercise (3.10) is from Williams [4] and Yor [11] and Exercise (3.11) is a result of Stroock-Yor [2]. Exercise (3.16) is taken from Yor [7]; with the notation of this exercise let us mention the following open
Question 1. If in 3°) the matrix A is no longer supposed to be symmetric, is the filtration of the martingale still that of a BM^r and, in the affirmative, what is r in terms of A?
A partial answer is found in Auerhan-Lepingle [1]; further progress on this
question has been made by Malric [1].
Chapter X. Additive Functionals of Brownian Motion
§ 1. General Definitions
Although we want as usual to focus on the case of linear BM, we shall for a while
consider a general Markov process for which we use the notation and results of
Chap. III.
(1.1) Definition. An additive functional of X is an ℝ̄_+-valued, (F_t)-adapted process A = (A_t, t ≥ 0) defined on Ω and such that:
i) it is a.s. non-decreasing, right-continuous, vanishing at zero and such that A_t = A_{ζ−} on {ζ ≤ t};
ii) for each pair (s, t), A_{s+t} = A_t + A_s ∘ θ_t a.s.
A continuous additive functional (abbreviated CAF) is an additive functional such that the map t → A_t is continuous.
Remark. In ii) the negligible set depends on s and t, but by using the right-continuity it can be made to depend only on t.

The condition A_t = A_{ζ−} on {ζ ≤ t} means that the additive functional does not increase once the process has left the space. Since by convention f(Δ) = 0 for any Borel function on E, if Γ is a Borel subset of E, this condition is satisfied by the occupation time of Γ, namely A_t = ∫_0^t 1_Γ(X_s) ds, which is a simple but very important example of a CAF. In particular, A_t = t ∧ ζ, which corresponds to the special case Γ = E, is a CAF.

Let X be a Markov process with jumps and for ε > 0 put

T_ε = inf{t > 0 : d(X_t, X_{t−}) > ε}.

Then T_ε is an a.s. strictly positive stopping time and, if we define inductively a sequence (T_n) by

T_1 = T_ε,   T_{n+1} = T_n + T_ε ∘ θ_{T_n},

the reader will prove that A_t = Σ_n 1_{(T_n ≤ t)} is a purely discontinuous additive functional which counts the jumps of magnitude larger than ε occurring up to time t.
We shall now give the fundamental example of the local time of Brownian motion which was already defined in Chap. VI from the stochastic calculus
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
point of view. Actually, all we are going to say is valid more generally for linear Markov processes X which are also continuous semimartingales such that ⟨X, X⟩_t = ∫_0^t φ(X_s) ds for some function φ, and may even be extended further by time-changes (Exercise (1.25) Chap. XI). This is in particular the case for the OU process or for the Bessel processes of dimension d ≥ 1 or the squares of Bessel processes which we shall study in Chap. XI. The reader may keep track of the fact that the following discussion extends trivially to these cases.
We now consider the BM as a Markov process, that is to say, we shall work with the canonical space W = C(ℝ_+, ℝ) and with the entire family of probability measures P_a, a ∈ ℝ.

With each P_a, we may, by the discussion in Sect. 1, Chap. VI, associate a process L which is the local time of the martingale B at zero, namely, such that

|B_t| = |a| + ∫_0^t sgn(B_s) dB_s + L_t.

Actually, L may be defined simultaneously for every P_a since, thanks to Corollary (1.9) in Chap. VI,

L_t = lim_k (1/2ε_k) ∫_0^t 1_{]−ε_k, ε_k[}(B_s) ds   a.s.,

where (ε_k) is any sequence of real numbers decreasing to zero. The same discussion applies to the local time at a and yields a process L^a. By the results in Chap. VI, the map (a, t) → L^a_t is a.s. continuous.
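The normalized occupation-time limit lends itself to a quick simulation (an illustrative sketch, not from the text; the sample sizes and the window ε are arbitrary choices). Under P_0, Tanaka's formula gives E[L_1] = E|B_1| = √(2/π), which we use as the reference value:

```python
import numpy as np

rng = np.random.default_rng(2)

# Approximate L_1 ~ (1/2*eps) * Leb{s <= 1 : |B_s| < eps} over many
# Brownian paths started at 0, and compare E[L_1] with E|B_1| = sqrt(2/pi).
n_paths, n_steps, eps = 4_000, 1_000, 0.05
dt = 1.0 / n_steps

increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
paths = np.cumsum(increments, axis=1)

occupation = np.sum(np.abs(paths) < eps, axis=1) * dt   # time spent in ]-eps, eps[
local_time = occupation / (2.0 * eps)

estimate = local_time.mean()
target = np.sqrt(2.0 / np.pi)
print(f"estimate {estimate:.3f}, target {target:.3f}")
assert abs(estimate - target) < 0.08
```

The small remaining gap is the expected downward bias of the discretized occupation-time estimator for a fixed ε > 0.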
Each of the processes L^a is an additive functional, and even a strong additive functional, which is the content of

(1.2) Proposition. If T is a stopping time then, for every a,

L^a_{T+S} = L^a_T + L^a_S(θ_T)   P_b-a.s.

for every b and every positive random variable S.

Proof. Set I(ε) = ]a − ε, a + ε[; it follows from Corollary (1.9) in Chap. VI that, if T is a stopping time,

L^a_{T+S} = L^a_T + lim_{ε↓0} (1/2ε) ∫_0^S 1_{I(ε)}(B_u(θ_T)) du   P_b-a.s.

for every b. By the strong Markov property of BM,

P_b[L^a_S(θ_T) = lim_{ε↓0} (1/2ε) ∫_0^S 1_{I(ε)}(B_u(θ_T)) du] = E_b[P_{B_T}[L^a_S = lim_{ε↓0} (1/2ε) ∫_0^S 1_{I(ε)}(B_u) du]] = 1.

Consequently,

L^a_{T+S} = L^a_T + L^a_S(θ_T)   P_b-a.s. □
A case of particular interest is that of two stopping times T and S. We saw (Proposition (3.3) Chap. III) that T + S ∘ θ_T is again a stopping time and the preceding result reads

L^a_{T + S∘θ_T} = L^a_T + L^a_S ∘ θ_T.

We draw the attention of the reader to the fact that L^a_S ∘ θ_T is the map w → L^a_{S(θ_T(w))}(θ_T(w)), whereas the L^a_S(θ_T) in the statement of the proposition is the map w → L^a_{S(w)}(θ_T(w)).
The upcoming consequence of the preceding result is very important in Chap. XII on Excursion theory, as well as in other places. As in Chap. VI, we write τ_t for the time-change associated with L_t = L^0_t.

(1.3) Proposition. For every t, there is a negligible set Γ_t such that for w ∉ Γ_t and for every s > 0,

τ_{t+s}(w) = τ_t(w) + τ_s(θ_{τ_t}(w)),
τ_{(t+s)−}(w) = τ_t(w) + τ_{s−}(θ_{τ_t}(w)).

Proof. Plainly, τ_t is a.s. finite and τ_{t+s} ≥ τ_t. Therefore

τ_{t+s} = inf{u > 0 : L_u > t + s} = τ_t + inf{u > 0 : L_{τ_t + u} > t + s}.

Using the strong additivity of L and the fact that L_{τ_t} = t, we have that, almost surely, for every s,

τ_{t+s} = τ_t + inf{u > 0 : L_{τ_t} + L_u(θ_{τ_t}) > t + s} = τ_t + inf{u > 0 : L_u(θ_{τ_t}) > s} = τ_t + τ_s ∘ θ_{τ_t}.

The second claim follows at once from the first. □
Remarks. 1°) The same result is clearly true for L^a in place of L. It is in fact true for any finite CAF since it is true, although outside the scope of this book, that every additive functional of a Markov process has the strong additivity property.
2°) Since the processes (L_t) and (S_t) have the same law under P_0 (Sect. 2 Chap. VI), it follows from Sect. 3 Chap. III that the process (τ_t) is a stable subordinator of index 1/2. This may also be proved using the above result and the strong Markov property, as is outlined in Exercise (1.11).
3°) Together with Proposition (3.3) in Chap. III, the above result entails that θ_{τ_{t+s}} = θ_{τ_s} ∘ θ_{τ_t} a.s.

Our goal is to extend these properties to all continuous additive functionals of linear BM. This will be done by showing that those are actually integrals of local times, thus generalizing what is known for occupation times. To this end, we will need a few results which we now prove but which, otherwise, will be used sparingly in the sequel.
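Remark 2°) can be probed numerically: since (L_t) and (S_t) have the same law under P_0, τ_t has the same law as the first hitting time of level t by B, whose Laplace transform is E_0[exp(−α τ_t)] = exp(−t √(2α)). A crude illustrative sketch (not from the text; the grid, horizon and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate E[exp(-T_1)] where T_1 = inf{u : B_u = 1}, and compare with
# exp(-sqrt(2)), i.e. the case t = 1, alpha = 1 of exp(-t*sqrt(2*alpha)).
n_paths, n_steps, dt = 2_000, 4_000, 2e-3          # horizon 8.0

paths = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
hit = np.argmax(paths >= 1.0, axis=1)              # first index with B >= 1
never = ~(paths >= 1.0).any(axis=1)                # argmax returns 0 on such rows
T = (hit + 1) * dt
T[never] = n_steps * dt                            # truncate; exp(-8) is negligible

estimate = np.exp(-T).mean()
target = np.exp(-np.sqrt(2.0))
print(f"estimate {estimate:.3f}, target {target:.3f}")
assert abs(estimate - target) < 0.05
```

The estimate sits slightly below the target because the discrete path can cross the level between grid points, so the simulated hitting time is biased upward.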
Again, we consider a general Markov process. By property i) of the definition of additive functionals, for every w, we can look upon A_t(w) as the distribution function of a measure on ℝ_+, just as we did for the increasing processes of Sect. 1 Chap. IV. If F_t is a process, we shall denote by ∫_0^t F_s dA_s its integral on [0, t] with respect to this measure, provided it is meaningful. Property ii) of the definition is precisely an invariance property of the measure dA_s.
(1.4) Proposition. If f is a bounded positive Borel function and A_t is finite for every t, the process f · A defined by

(f · A)_t = ∫_0^t f(X_s) dA_s

is an additive functional. If A is a CAF, then f · A is a CAF.

Proof. The process X is progressively measurable with respect to (F_t), thus so is f(X) and, as a result, (f · A)_t is F_t-measurable for each t. Checking the other conditions of Definition (1.1) is easy and left to the reader. □

Remarks. 1°) The hypothesis that f is bounded serves only to ensure that t → (f · A)_t is right-continuous. If f is merely positive and (f · A) is right-continuous, then it is an additive functional. The example of the uniform motion to the right and of f(x) = (1/x) 1_{(x>0)} shows that this is not always the case.
2°) If A_t = t ∧ ζ and f = 1_Γ, then (f · A)_t is the occupation time of Γ by X.
(1.5) Definition. For α ≥ 0, the function

U^α_A(x) = E_x[∫_0^∞ e^{−αt} dA_t]

is called the α-potential of A. We also write U^α_A f(x) for the α-potential of f · A, in other words

U^α_A f(x) = E_x[∫_0^∞ e^{−αt} f(X_t) dA_t].

For A_t = t ∧ ζ, we have U^α_A f = U^α f and, more generally, it is easily checked that the map (x, Γ) → U^α_A 1_Γ(x) is a kernel on (E, 𝓔) which will be denoted U^α_A(x, ·). Moreover, these kernels satisfy a resolvent-type equation.
(1.6) Proposition. For α, β ≥ 0, if U^α_A f(x) and U^β_A f(x) are finite,

U^α_A f(x) − U^β_A f(x) = (β − α) U^α U^β_A f(x) = (β − α) U^β U^α_A f(x).

Proof. Using the definitions and the Markov property,

U^α U^β_A f(x) = E_x[∫_0^∞ e^{−αt} E_{X_t}[∫_0^∞ e^{−βs} f(X_s) dA_s] dt]
             = E_x[∫_0^∞ e^{−αt} E_x[∫_0^∞ e^{−βs} f(X_s(θ_t)) dA_s(θ_t) | F_t] dt]
             = E_x[∫_0^∞ e^{−αt} dt ∫_0^∞ e^{−βs} f(X_{s+t}) dA_s(θ_t)].

By property ii) in Definition (1.1), we consequently have

U^α U^β_A f(x) = E_x[∫_0^∞ e^{−αt} dt ∫_t^∞ e^{−β(u−t)} f(X_u) dA_u]
             = E_x[∫_0^∞ e^{−(α−β)t} dt ∫_t^∞ e^{−βu} f(X_u) dA_u],

and Fubini's theorem yields the desired result. □
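For completeness, the concluding Fubini step can be written out under the same notation (for α ≠ β; the case α = β is trivial):

```latex
% Exchanging the order of integration in the last expectation:
\begin{aligned}
\mathbf{E}_x\!\left[\int_0^\infty e^{-(\alpha-\beta)t}\,dt
      \int_t^\infty e^{-\beta u} f(X_u)\,dA_u\right]
 &= \mathbf{E}_x\!\left[\int_0^\infty e^{-\beta u} f(X_u)
      \Big(\int_0^u e^{-(\alpha-\beta)t}\,dt\Big) dA_u\right] \\
 &= \frac{1}{\alpha-\beta}\,
    \mathbf{E}_x\!\left[\int_0^\infty
      \big(e^{-\beta u}-e^{-\alpha u}\big) f(X_u)\,dA_u\right]
  = \frac{U^\beta_A f(x)-U^\alpha_A f(x)}{\alpha-\beta},
\end{aligned}
% which is the first equality of the proposition; the second follows by
% exchanging the roles of alpha and beta.
```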
We will prove that the map which with A associates the family of kernels U^α_A(x, ·) is one-to-one. Let us first mention that, as usual, A = B will mean that A and B are indistinguishable, i.e., in this context, P_x[∃t : A_t ≠ B_t] = 0 for every x. Because of the right-continuity, this is equivalent to P_x[A_t = B_t] = 1 for every x and t. The following proposition will be used in §2.

(1.7) Proposition. If A and B are two additive functionals such that, for some α ≥ 0, U^α_A = U^α_B < ∞ and U^α_A f = U^α_B f for every f ∈ C_K, then A = B.
Proof. The hypothesis extends easily to U^α_A f = U^α_B f for every f ∈ b𝓔_+. In particular, U^α_A U^α f = U^α_B U^α f. But computations similar to those made in the preceding proof show that

U^α_A U^α f(x) = E_x[∫_0^∞ e^{−αt} f(X_t) A_t dt],

and likewise for B. As a result,

∫_0^∞ e^{−αt} E_x[f(X_t) A_t] dt = ∫_0^∞ e^{−αt} E_x[f(X_t) B_t] dt.

By the resolvent equation of Proposition (1.6), the same result is true for each β ≥ α.
If f is continuous and bounded, the map t → E_x[f(X_t) A_t] is finite since

∞ > U^α_A(x) ≥ E_x[∫_0^t e^{−αs} dA_s] ≥ e^{−αt} E_x[A_t],

hence right-continuous, and the same is true with B in place of A. The injectivity of the Laplace transform implies that

E_x[f(X_t) A_t] = E_x[f(X_t) B_t]

for every t and for every continuous and bounded f. This extends at once to f ∈ b𝓔_+. Finally, an induction argument using the Markov property and the additivity property of A and B shows that for 0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_n ≤ t and f_k ∈ b𝓔_+,

E_x[∏_k f_k(X_{t_k}) · A_t] = E_x[∏_k f_k(X_{t_k}) · B_t].

Since A_t and B_t are F_t-measurable and F_t is generated by X_s, s ≤ t, it follows that A = B. □
(1.8) Exercise. If A and B are two additive functionals such that for every t > 0 and every x ∈ E,

E_x[A_t] = E_x[B_t] < ∞,

then M_t = A_t − B_t is an (F_t, P_x)-martingale for every x ∈ E. (Continuation in Exercise (2.22).)
#
(1.9) Exercise (Extremal process). 1°) For the standard linear BM set, with the usual notation, X_a = L_{T_a} with a ≥ 0. Prove that the process a → X_a has independent (non-stationary) increments. Prove further that for x_1 < x_2 < ... < x_p and a_1 < a_2 < ... < a_p,

P_0[X_{a_1} > x_1, X_{a_2} > x_2, ..., X_{a_p} > x_p] = exp(−x_1/2a_1 − (x_2 − x_1)/2a_2 − ... − (x_p − x_{p−1})/2a_p).

2°) If, as usual, τ_t is the time-change associated with L, prove that the process Y_t = S_{τ_t} is the inverse of the process X_a. Deduce therefrom that for t_1 < t_2 < ... < t_p and y_1 < y_2 < ... < y_p,

P_0[Y_{t_1} < y_1, Y_{t_2} < y_2, ..., Y_{t_p} < y_p] = exp(−t_1/2y_1 − (t_2 − t_1)/2y_2 − ... − (t_p − t_{p−1})/2y_p).

Some further information on these processes may be found in Exercise (4.11) in Chap. XII.
(1.10) Exercise. For $0 < a < 1$, define
\[ Z_a(t) = \int_0^\infty L^x_{\tau_t}\, x^{(1-2a)/a}\, dx \]
where $\tau_t$ is the time-change associated with $L = L^0$.
1°) Prove that $Z_a$ is a stable subordinator of index $a$.
[Hint: Use the scaling properties of Exercise (2.11) in Chap. VI.]
2°) Prove that
\[ \lim_{a \to 0}\, \left(Z_a(t)\right)^a = S_{\tau_t} \quad \text{a.s.} \]
(see the previous exercise for the study of the limit process).
*#
(1.11) Exercise. Let $L$ be the local time of BM$^1$ at zero and $\tau$ its inverse.
1°) Deduce from Proposition (1.3) and the strong Markov property that $\tau$ is, under $P_0$, an increasing Lévy process.
2°) From the definition of $L$ in Chap. VI and the integration by parts formula, deduce that the $\alpha$-potential ($\alpha > 0$) of $L$ at $0$, namely $U^\alpha_L(0)$, is equal to $(2\alpha)^{-1/2}$. This will also follow from a general result in the next section.
3°) For $x \in \mathbb{R}$, derive from the equality
\[ U^\alpha_L(x) = \int_0^\infty E_x\left[\exp\left(-\alpha \tau_t\right)\right] dt \]
that
\[ E_x\left[\exp\left(-\alpha \tau_t\right)\right] = E_x\left[\exp\left(-\alpha T_0\right)\right] \exp\left(-t\sqrt{2\alpha}\right), \]
and conclude that under $P_0$, the process $(\tau_t)$ is a stable subordinator of index $1/2$. We thus get another proof of this result independent of the equivalence in law of $L_t$ and $S_t$. See also 6°) in Exercise (4.9) of Chap. VI.
4°) If $\beta$ is an independent BM$^d$, prove that $\beta_{\tau_t}$ is a $d$-dimensional Cauchy process, i.e. the Lévy process in $\mathbb{R}^d$ such that the increments have Cauchy laws in $\mathbb{R}^d$ (see Exercise (3.24) in Chap. III).
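The Laplace transform in 3°) lends itself to a quick numerical check: under $P_0$, $(\tau_t)$ has the same one-dimensional Laplace transform as the hitting time $T_t$, and the classical identity $T_a \stackrel{d}{=} a^2/N^2$ ($N$ a standard normal r.v., a standard fact taken here as an outside input) allows exact sampling. A minimal Monte Carlo sketch:

```python
import math
import random

random.seed(7)

def laplace_tau_mc(t, alpha, n=200_000):
    """Monte Carlo estimate of E_0[exp(-alpha * tau_t)], sampling
    tau_t via the identity tau_t =(law) t**2 / N**2, N ~ N(0, 1)."""
    acc = 0.0
    for _ in range(n):
        z = random.gauss(0.0, 1.0)
        acc += math.exp(-alpha * t * t / (z * z))
    return acc / n

t, alpha = 1.0, 2.0
estimate = laplace_tau_mc(t, alpha)
# Laplace transform of a stable(1/2) subordinator: exp(-t * sqrt(2 alpha))
exact = math.exp(-t * math.sqrt(2.0 * alpha))
print(estimate, exact)
```

With $t = 1$ and $\alpha = 2$ both values should be close to $e^{-2}$.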
(1.12) Exercise. Let $X$ be a Markov process on $\mathbb{R}$ with continuous paths such that its t.f. has a density $p_t(x, y)$ with respect to the Lebesgue measure which is continuous in each of the three variables. Assume moreover that for each $P_x$, $X$ is a semimartingale such that $\langle X, X\rangle_t = t$. If $(L^y)$ is the family of its local times, prove that
\[ \frac{d}{dt}\, E_x\left[L^y_t\right] = p_t(x, y) \]
for every $(t, x, y)$ in $]0, \infty[\, \times \mathbb{R} \times \mathbb{R}$.
The reader is invited to compare this exercise with Exercise (4.16) Chap. VII.
#
(1.13) Exercise. Let $X = (X_t, \mathscr{F}_t, P_x)$ be a Markov process with state space $E$ and $A$ an additive functional of $X$. Prove that, for any $P_x$, the pair $(X, A)$ is a $(\mathscr{F}_t)$-Markov process on $E \times \mathbb{R}_+$ with t.f. given by
\[ N_t g(x, a) = E_x\left[g\left(X_t, a + A_t\right)\right]. \]
[Hint: The reader may find it useful to use the Markov property of Chap. III Exercise (3.19).] A special case was studied in Exercise (2.18) of Chap. VI.
(1.14) Exercise. 1°) Let $(L^a)$ be the family of local times of BM and set $L^*_t = \sup_a L^a_t$. Prove that for every $p \in\, ]0, \infty[$ there are two constants $c_p$ and $C_p$ such that
\[ c_p\, E\left[T^{p/2}\right] \leq E\left[\left(L^*_T\right)^p\right] \leq C_p\, E\left[T^{p/2}\right] \]
for every stopping time $T$.
[Hint: Use Theorems (4.10) and (4.11) in Chap. IV.]
2°) Extend adequately the results to all continuous local martingales. This
result is proved in Sect. 2 Chap. XI.
#
(1.15) Exercise. 1°) Let $A$ be an additive functional of the Markov process $X$ and $e$ an independent exponential r.v. with parameter $\gamma$. Let
\[ \zeta = \inf\{t : A_t > e\}. \]
Prove that the process $\widetilde{X}$ defined by
\[ \widetilde{X}_t = X_t \quad \text{on } \{t < \zeta\}, \qquad \widetilde{X}_t = \delta \quad \text{on } \{t \geq \zeta\}, \]
is a Markov process for the probability measure $P_x$ of $X$. Prove that the semigroup $Q_t$ of $\widetilde{X}$ is given by
\[ Q_t f(x) = E_x\left[f(X_t)\, e^{-\gamma A_t}\right]. \]
If $X$ is the linear BM and $A$ its local time at $0$, the process $\widetilde{X}$ is called the elastic Brownian motion.
2°) Call $V^\alpha$ the resolvent of the elastic BM. If $f$ is bounded and continuous prove that $u = V^\alpha f$ satisfies the equations
\[ \alpha u - u''/2 = f, \qquad u'_d(0) - u'_g(0) = 2\gamma u(0); \]
loosely speaking, this means that the infinitesimal generator $\mathscr{A}$ of $\widetilde{X}$ is given by $\mathscr{A}u = \frac{1}{2}u''$ on the space $\left\{u \in C^2(\mathbb{R}) : u'_d(0) - u'_g(0) = 2\gamma u(0)\right\}$.
**
(1.16) Exercise (Conditioning with respect to certain random times). Let $X$ be a continuous Markov process such that there exists a family $P^t_{x,y}$ of probability measures on $(\Omega, \mathscr{F}_\infty)$ with the following properties:
i) for each $s > 0$, there is a family $\mathscr{C}_s$ of $\mathscr{F}_s$-measurable r.v.'s, closed under pointwise multiplication, generating $\mathscr{F}_s$ and such that the map $(t, y) \to E^t_{x,y}[C]$ is continuous on $]s, \infty[\, \times E$ for every $x \in E$ and $C \in \mathscr{C}_s$;
ii) for every $\Gamma \in \mathscr{F}_s$, the r.v. $P^t_{x,X_t}(\Gamma)$ is a version of $E_x\left[1_\Gamma \mid X_t\right]$.
The reader will find in Chap. XI a whole family of processes satisfying these hypotheses.
1°) Prove that for every additive functional $A$ of $X$ such that $E_x[A_t] < \infty$ for every $t$, and every positive process $H$,
\[ E_x\left[\int_0^t H_s\, dA_s\right] = E_x\left[\int_0^t E^s_{x,X_s}[H_s]\, dA_s\right]. \]
2°) Let $L$ be a positive a.s. finite $\mathscr{F}_\infty$-measurable r.v., $\Delta$ a predictable process and $A$ an additive functional such that
\[ E_x\left[H_L 1_{\{L < \infty\}}\right] = E_x\left[\int_0^\infty H_s \Delta_s\, dA_s\right] \]
for every positive predictable process $H$ (see Exercise (4.16) Chap. VII). If $\Gamma_x = \left\{(s, y) : E^s_{x,y}[\Delta_s] = 0\right\}$, prove that $P_x\left[(L, X_L) \in \Gamma_x\right] = 0$ and that
\[ E_x\left[H_L \mid L = s, X_L = y\right] = E^s_{x,y}\left[H_s \Delta_s\right] \big/ E^s_{x,y}\left[\Delta_s\right]. \]
3°) Assume that in addition, the process $X$ satisfies the hypothesis of Exercise (1.12) and that there is a measure $\nu$ such that $A_t = \int L^x_t\, \nu(dx)$. Prove that the law of the pair $(L, X_L)$ under $P_x$ has density $E^t_{x,y}[\Delta_t]\, p_t(x, y)$ with respect to $dt\, \nu(dy)$.
Give another proof of the final result of Exercise (4.16) Chap. VII.
§2. Representation Theorem for Additive Functionals of Linear Brownian Motion
With each additive functional, we will associate a measure which in the case of
BM, will serve to express the functional as an integral of local times. To this end,
we need the following definitions.
If $N$ is a kernel on $(E, \mathscr{E})$ and $m$ a positive measure, one sets, for $A \in \mathscr{E}$,
\[ mN(A) = \int m(dx)\, N(x, A). \]
We leave to the reader the easy task of showing that the map $A \to mN(A)$ is a measure on $\mathscr{E}$ which we denote by $mN$.
(2.1) Definition. A positive $\sigma$-finite measure $m$ on $(E, \mathscr{E})$ is said to be invariant (resp. excessive) for $X$, or for its semi-group, if $mP_t = m$ (resp. $mP_t \leq m$) for every $t \geq 0$.
For a positive measure $m$, we may define a measure $P_m$ on $(\Omega, \mathscr{F})$ by setting as usual
\[ P_m[\Gamma] = \int_E P_x[\Gamma]\, m(dx). \]
This extends what was done in Chap. III with a starting probability measure $\nu$, but here $P_m$ is not a probability measure if $m(E) \neq 1$. Saying that $m$ is invariant is equivalent to saying that $P_m$ is invariant by the maps $\theta_t$ for every $t$. Moreover, if $m$ is invariant and bounded and if we normalize $m$ so that $m(E) = 1$ then, under $P_m$, the process is stationary (Sect. 3 Chap. I). For any process with independent increments on $\mathbb{R}^d$, in particular BM$^d$, the Lebesgue measure is invariant, as the reader will easily check. Examples of excessive measures are provided by potential measures $\nu U = \int_0^\infty \nu P_t\, dt$ when they are $\sigma$-finite, as is the case for transient processes such as BM$^d$ for $d \geq 3$.
The definition of invariance could have been given using the resolvent instead of the semi-group. Plainly, if $m$ is invariant (resp. excessive) then $mU^1 = m$ (resp. $mU^1 \leq m$) and the converse may also be proved (see Exercise (2.21)).
From now on, we assume that there is an excessive measure m which will
be fixed throughout the discussion. For all practical purposes, this assumption is
always in force as, for transient processes, one can take for m a suitable potential
measure whereas, in the recurrent case, it can be shown under mild conditions
that there is an invariant measure. In the sequel, a function on E is said to be
integrable if it is integrable with respect to m.
Let $A$ be an additive functional; under the above assumption, writing $(f \cdot A)_t = \int_0^t f(X_s)\, dA_s$, we set, for $f \in b\mathscr{E}_+$,
\[ \nu_A(f) = \sup_{t > 0} \frac{1}{t}\, E_m\left[(f \cdot A)_t\right]. \]
(2.2) Proposition. For every $f$ and $A$,
\[ \nu_A(f) = \lim_{t \downarrow 0} \frac{1}{t}\, E_m\left[(f \cdot A)_t\right] = \lim_{\alpha \to \infty} \alpha \int m(dx)\, U^\alpha_A f(x). \]
Moreover, the second limit is an increasing limit.
Proof. Since $m$ is excessive, it follows easily from the Markov property that $t \to E_m[(f \cdot A)_t]$ is sub-additive; thus, the first equality follows from the well-known properties of such functions. The second equality is then a consequence of the abelian theorem for Laplace transforms and finally the limit is increasing as a consequence of the resolvent equation. □
For every $\alpha \geq 0$, the map $f \to \alpha \int m(dx)\, U^\alpha_A f(x)$ is a positive measure, and since one can interchange the order of increasing limits, the map $f \to \nu_A(f)$ is a positive measure which is denoted by $\nu_A$.
(2.3) Definition. The measure $\nu_A$ is called the measure associated with the additive functional $A$. If $\nu_A$ is a bounded ($\sigma$-finite) measure, $A$ is said to be integrable ($\sigma$-integrable).
It is clear that the measures $\nu_A$ do not charge polar sets. One must also observe that the correspondence between $A$ and $\nu_A$ depends on $m$, but there is usually a canonical choice for $m$, for instance Lebesgue measure in the case of BM. Let us further remark that the measure associated with $A_t = t \wedge \zeta$ is $m$ itself and that $\nu_{f \cdot A} = f \cdot \nu_A$. In particular, if $A_t = \int_0^t f(X_s)\, ds$ is an additive functional, then $\nu_A = f \cdot m$ and $A$ is integrable if and only if $f$ is integrable.
Let us finally stress that if $m$ is invariant then $\nu_A$ is defined by the simpler formula
\[ \nu_A(f) = E_m\left[\int_0^1 f(X_s)\, dA_s\right]. \]
As a fundamental example, we now compute the measure associated with the local time $L^a$ of linear BM. As we just observed, here $m$ is the Lebesgue measure which is invariant and is in fact the only excessive measure of BM (see Exercise (3.14)).
(2.4) Proposition. The local time $L^a$ is an integrable additive functional and the associated measure is the Dirac measure at $a$.
Proof. Plainly, we can take $a = 0$ and we write $L$ for $L^0$. Proposition (1.3) in Chap. VI shows that the measure $\nu_L$ has $\{0\}$ for support, and the total mass of $\nu_L$ is equal to
\[ \int_{-\infty}^{+\infty} dx\, E_x\left[L_1\right] = \int_{-\infty}^{+\infty} dx\, E_0\left[L^x_1\right] = E_0\left[\int_{-\infty}^{+\infty} L^x_1\, dx\right] = 1 \]
by the occupation times formula. The proof is complete. □
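The total-mass computation above can be illustrated numerically: writing $E_x[L_1] = \int_0^1 p_s(x, 0)\, ds$ (in line with Exercise (1.12)) and integrating over $x$ by quadrature recovers mass $1$. A minimal sketch, with arbitrarily chosen grid sizes and truncation:

```python
import math

def p(s, x):
    # Brownian transition density p_s(x, 0)
    return math.exp(-x * x / (2.0 * s)) / math.sqrt(2.0 * math.pi * s)

# E_x[L_1] = int_0^1 p_s(x, 0) ds, and the total mass is int_R E_x[L_1] dx;
# midpoint rule in s on (0, 1], x-axis truncated at |x| = 8.
ds, dx = 0.004, 0.01
s_grid = [(k + 0.5) * ds for k in range(250)]
x_grid = [-8.0 + (j + 0.5) * dx for j in range(1600)]
total_mass = sum(p(s, x) for s in s_grid for x in x_grid) * ds * dx
print(total_mass)  # close to 1, as in the proof
```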
The measure $\nu_A$ will be put to use in a moment, thus it is important to know how large the class of $\sigma$-integrable additive functionals is. We shall need the following
(2.5) Lemma. If $C$ is an integrable AF and $h \in b\mathscr{E}_+$ and if there is an $\alpha > 0$ such that $U^\alpha_A h \leq U^\alpha_C$, then $\nu_A(h) < \infty$.
Proof. For $\beta \geq \alpha$, the resolvent equation yields
\[ \beta \int \left(U^\beta_C - U^\beta_A h\right) dm = \beta \int \left(U^\alpha_C - U^\alpha_A h\right) dm - \beta(\beta - \alpha) \int U^\beta\left(U^\alpha_C - U^\alpha_A h\right) dm, \]
and because $m$ is excessive
\[ \beta \int \left(U^\beta_C - U^\beta_A h\right) dm \geq \beta \int \left(U^\alpha_C - U^\alpha_A h\right) dm - (\beta - \alpha) \int \left(U^\alpha_C - U^\alpha_A h\right) dm = \alpha \int \left(U^\alpha_C - U^\alpha_A h\right) dm \geq 0. \]
Thus $\beta \int U^\beta_A h\, dm \leq \beta \int U^\beta_C\, dm$ for every $\beta \geq \alpha$, which entails the desired result. □
We may now state
(2.6) Theorem. Every CAF is $\sigma$-integrable. Moreover, $E$ is the union of a sequence of universally measurable sets $E_n$ such that the potentials $U^1_A(\cdot, E_n)$ are bounded and integrable.
Proof. Let $f$ be a bounded, integrable, strictly positive Borel function and set
\[ \phi(x) = E_x\left[\int_0^\infty e^{-t} f(X_t)\, e^{-A_t}\, dt\right]. \]
Plainly, $0 < \phi \leq \|f\|$; let us compute $U^1_A \phi$. We have
\[ U^1_A \phi(x) = E_x\left[\int_0^\infty e^{-t} E_{X_t}\left[\int_0^\infty e^{-s} f(X_s)\, e^{-A_s}\, ds\right] dA_t\right] = E_x\left[\int_0^\infty e^{-t} E_x\left[\int_0^\infty e^{-s} f(X_{s+t})\, e^{-A_s \circ \theta_t}\, ds \,\Big|\, \mathscr{F}_t\right] dA_t\right]. \]
Using the result in 1°) of Exercise (1.13) of Chap. V and Proposition (4.7) Chap. 0, this is further equal to
\[ E_x\left[\int_0^\infty e^{-t}\, dA_t \int_0^\infty e^{-s} f(X_{s+t})\, e^{-A_{s+t}} e^{A_t}\, ds\right] = E_x\left[\int_0^\infty e^{-s} f(X_s)\, e^{-A_s} \int_0^s e^{A_t}\, dA_t\, ds\right] \]
\[ = E_x\left[\int_0^\infty e^{-s} f(X_s)\left(1 - e^{-A_s}\right) ds\right] = U^1 f(x) - \phi(x). \]
It remains to set $E_n = \{\phi \geq 1/n\}$ or $\{\phi > 1/n\}$ and to apply the preceding lemma with $C_t = \int_0^t f(X_s)\, ds$ and $h = n^{-1} 1_{E_n}$. □
Now and for the remainder of this section, we specialize to the case of linear BM which we denote by $B$. We recall that the resolvent $U^\alpha$ is a convolution kernel which is given by a continuous and symmetric density. More precisely
\[ U^\alpha f(x) = \int u^\alpha(x, y) f(y)\, dy \quad \text{where} \quad u^\alpha(x, y) = \left(\sqrt{2\alpha}\right)^{-1} \exp\left(-\sqrt{2\alpha}\, |y - x|\right). \]
As a result, if $m$ is the Lebesgue measure, then for two positive functions $f$ and $g$,
\[ \int g\left(U^\alpha f\right) dm = \int \left(U^\alpha g\right) f\, dm. \qquad (*) \]
Moreover, these kernels have the strong Feller property: if $f$ is a bounded Borel function, then $U^\alpha f$ is continuous. This is a well-known property of convolution kernels.
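The closed form of $u^\alpha$ can be checked against the definition $u^\alpha(x, y) = \int_0^\infty e^{-\alpha t} p_t(x, y)\, dt$ by direct quadrature; a small sketch with arbitrarily chosen values of $\alpha$ and $r = |y - x|$:

```python
import math

alpha, r = 1.3, 0.7   # arbitrary alpha > 0 and r = |y - x|

def integrand(t):
    # e^{-alpha t} p_t(x, y) with p_t the Brownian transition density
    return (math.exp(-alpha * t - r * r / (2.0 * t))
            / math.sqrt(2.0 * math.pi * t))

# midpoint rule on (0, 80]; the integrand vanishes rapidly at both ends
dt = 0.001
numeric = sum(integrand((k + 0.5) * dt) for k in range(80_000)) * dt
closed = math.exp(-math.sqrt(2.0 * alpha) * r) / math.sqrt(2.0 * alpha)
print(numeric, closed)
```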
Finally, we will use the following observation: if $f$ is a Borel function on $\mathbb{R}$ and $f(B_t)$ is a.s. right-continuous at $t = 0$, then $f$ is a continuous function; this follows at once from the law of the iterated logarithm. This observation applies in particular to the $\alpha$-potentials of a CAF when they are finite. Indeed, in that case,
\[ U^\alpha_A(B_t) = E_x\left[\int_0^\infty e^{-\alpha s}\, dA_s \circ \theta_t \,\Big|\, \mathscr{F}_t\right] = e^{\alpha t}\, E_x\left[\int_t^\infty e^{-\alpha s}\, dA_s \,\Big|\, \mathscr{F}_t\right] \quad P_x\text{-a.s.,} \]
and this converges to $U^\alpha_A(x)$ when $t \downarrow 0$ thanks to Theorem (2.3) in Chap. II.
We may now state
(2.7) Proposition. In the case of linear BM, for every continuous additive functional $A$, the measure $\nu_A$ is a Radon measure.
Proof. Thanks to Lemma (2.5) it is enough to prove that the sets $E_n = \{\phi > 1/n\}$ in the proof of Theorem (2.6) are open, hence that $\phi$ is continuous. Again, this follows from the right-continuity of $\phi(B_t)$ at $t = 0$ which is proved as above. We have $P_x$-a.s.,
\[ \phi(B_t) = E_{B_t}\left[\int_0^\infty e^{-u} f(B_u)\, e^{-A_u}\, du\right] = E_x\left[\int_0^\infty e^{-u} f(B_{u+t})\, e^{-(A_{u+t} - A_t)}\, du \,\Big|\, \mathscr{F}_t\right] = e^{A_t} e^{t}\, E_x\left[\int_t^\infty e^{-u} f(B_u)\, e^{-A_u}\, du \,\Big|\, \mathscr{F}_t\right] \]
and this converges $P_x$-a.s. to $\phi(x)$ by Corollary (2.4) of Chap. II. □
We shall now work in the converse direction and show that, with each Radon measure $\nu$, we can associate a CAF $A$ such that $\nu = \nu_A$. Although it could be done in a more general setting, we keep to linear BM.
(2.8) Theorem. If $A$ is a CAF, then for every $\alpha > 0$ and $f \in \mathscr{E}_+$,
\[ U^\alpha_A f(x) = \int u^\alpha(x, y) f(y)\, \nu_A(dy). \]
In particular, $U^\alpha_A = \int u^\alpha(\cdot, y)\, \nu_A(dy)$.
Proof. Since $\nu_{f \cdot A} = f \cdot \nu_A$, it is enough to prove the particular case.
Suppose first that $U^\alpha_A$ is bounded and integrable and let $\phi$ be in $C_K^+$. By Proposition (2.2),
\[ \int U^\alpha \phi(y)\, \nu_A(dy) = \lim_{\beta \to \infty} \beta\, E_m\left[\int_0^\infty e^{-\beta s}\, U^\alpha \phi(B_s)\, dA_s\right]. \]
The function $s \to e^{-\beta s} U^\alpha \phi(B_s)$ is continuous, hence is the limit of the sums
\[ S_n = \sum_{k \geq 0} \exp\left(-\beta(k+1)/n\right) U^\alpha \phi\left(B_{k/n}\right) 1_{]k/n, (k+1)/n]}; \]
as a function on $\Omega \times \mathbb{R}_+$, the sum $S_n$ is smaller than $\alpha^{-1}\|\phi\| e^{-\beta s}$, which, thanks to the hypothesis that $\nu_A$ is bounded, is integrable with respect to the measure $P_m \otimes dA_s$ defined on $\Omega \times \mathbb{R}_+$ by
\[ P_m \otimes dA_s\,(H) = E_m\left[\int_0^\infty H_s\, dA_s\right]. \]
Consequently, Lebesgue's dominated convergence theorem tells us that
\[ E_m\left[\int_0^\infty e^{-\beta s}\, U^\alpha \phi(B_s)\, dA_s\right] = \lim_{n \to \infty} \sum_{k \geq 0} E_m\left[\exp\left(-\beta(k+1)/n\right) U^\alpha \phi\left(B_{k/n}\right) \left(A_{(k+1)/n} - A_{k/n}\right)\right]. \]
Using the Markov property and the fact that $m$ is an invariant measure, this is further equal to
\[ \lim_{n \to \infty} \sum_{k \geq 0} E_m\left[\exp\left(-\beta(k+1)/n\right) U^\alpha \phi\left(B_{k/n}\right) E_{B_{k/n}}\left[A_{1/n}\right]\right] = \lim_{n \to \infty} \Big(\sum_{k \geq 0} \exp\left(-\beta(k+1)/n\right)\Big) \int U^\alpha \phi(y)\, E_y\left[A_{1/n}\right] m(dy) \]
\[ = \lim_{n \to \infty} \exp(-\beta/n)\left(1 - \exp(-\beta/n)\right)^{-1} \int \phi(y)\, U^\alpha\left(E_\cdot\left[A_{1/n}\right]\right)(y)\, m(dy), \]
where we lastly used the duality relation $(*)$ above. As for every $\beta$ we have $\lim_{n \to \infty} \beta \exp(-\beta/n)\big/ n\left(1 - \exp(-\beta/n)\right) = 1$, it follows that
\[ \int U^\alpha \phi(y)\, \nu_A(dy) = \lim_{n \to \infty} n \int \phi(y)\, U^\alpha\left(E_\cdot\left[A_{1/n}\right]\right)(y)\, m(dy) = \lim_{n \to \infty} \int_0^\infty n e^{-\alpha t}\, E_{\phi \cdot m}\left[E_{X_t}\left[A_{1/n}\right]\right] dt \]
\[ = \lim_{n \to \infty} \int_0^\infty n e^{-\alpha t}\, E_{\phi \cdot m}\left[A_{t+1/n} - A_t\right] dt = \lim_{n \to \infty} \left\{ n\left(e^{\alpha/n} - 1\right) \int_{1/n}^\infty e^{-\alpha s}\, E_{\phi \cdot m}[A_s]\, ds - n \int_0^{1/n} e^{-\alpha s}\, E_{\phi \cdot m}[A_s]\, ds \right\} \]
\[ = \alpha \int_0^\infty e^{-\alpha s}\, E_{\phi \cdot m}[A_s]\, ds = E_{\phi \cdot m}\left[\int_0^\infty e^{-\alpha s}\, dA_s\right], \]
where, to obtain the last equality, we used the integration by parts formula. Thus, we have obtained the equality
\[ \int U^\alpha \phi(y)\, \nu_A(dy) = \int U^\alpha_A(x) \phi(x)\, m(dx), \]
namely
\[ \int\!\!\int u^\alpha(x, y) \phi(x)\, m(dx)\, \nu_A(dy) = \int U^\alpha_A(x) \phi(x)\, m(dx), \]
which entails
\[ \int u^\alpha(x, y)\, \nu_A(dy) = U^\alpha_A(x) \quad m\text{-a.e.} \]
Both sides are continuous, the right one by the observation made before (2.7) and the left one because of the continuity of $u^\alpha$, so this equality holds actually everywhere and the proof is complete in that case. The general case where $A$ is $\sigma$-integrable follows upon taking increasing limits. □
We now state our main result which generalizes the occupation times formula
(see below Definition (2.3)).
(2.9) Theorem. If $A$ is a CAF of linear BM, then
\[ A_t = \int_{-\infty}^{+\infty} L^a_t\, \nu_A(da). \]
As a result, any CAF of linear BM is a strong additive functional.
Proof. Since $\nu_A$ is a Radon measure and $a \to L^a_t$ is for each $t$ a continuous function with compact support, the integral
\[ \widetilde{A}_t = \int_{-\infty}^{+\infty} L^a_t\, \nu_A(da) \]
is finite and obviously defines a CAF $\widetilde{A}$. Let us compute its associated measure. For $f \in \mathscr{E}_+$, and since $m$ is invariant,
\[ E_m\left[\int_0^1 f(B_s)\, d\widetilde{A}_s\right] = E_m\left[\int_{-\infty}^{+\infty}\left(\int_0^1 f(B_s)\, dL^a_s\right) \nu_A(da)\right] = \int_{-\infty}^{+\infty} \nu_A(da)\, E_m\left[\int_0^1 f(B_s)\, dL^a_s\right] = \nu_A(f). \]
Thus $\nu_{\widetilde{A}} = \nu_A$ and by Proposition (1.7) and the last result, it is easily seen that $A = \widetilde{A}$ (up to equivalence) which completes the proof. □
Remark. It is true that every additive functional of a Feller process is a strong
additive functional, but the proof of this result lies outside the scope of this book.
We now list a few consequences of the above theorem.
(2.10) Corollary. Every CAF of linear BM is finite and the map $A \to \nu_A$ is one-to-one.
Proof. Left to the reader. □
Let us mention that for the right-translation on the line there exist non-finite CAF's, i.e. such that $A_t = \infty$ for finite $t$, for instance $\int_0^t f(X_s)\, ds$ with $f(x) = |x|^{-1} 1_{(x < 0)}$.
(2.11) Corollary. For every CAF, there is a modification such that $A$ is $(\mathscr{F}^o_t)$-adapted.
Proof. We recall that $(\mathscr{F}^o_t)$ is the uncompleted filtration of BM. The property is clear for local times by what was said in the last section, hence carries over to all CAF's. □
We also have the
(2.12) Corollary. For any CAF $A$ of linear BM, there exists a convex function $f$ such that
\[ A_t = f(B_t) - f(B_0) - \int_0^t f'_-(B_s)\, dB_s. \]
Proof. Let $f$ be a convex function whose second derivative is precisely $2\nu_A$ (Proposition (3.2) in the Appendix). If we apply Tanaka's formula to this function, we get exactly the formula announced in the statement. □
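For $\nu_A = \delta_0$, so that $A = L^0$ and one may take $f(x) = |x|$, the corollary reduces to Tanaka's formula $L_t = |B_t| - |B_0| - \int_0^t \operatorname{sgn}(B_s)\, dB_s$. A hedged Monte Carlo sketch comparing the mean of the discretized right-hand side at $t = 1$ with $E_0[L_1] = E[|B_1|] = \sqrt{2/\pi}$ (step count and sample size are arbitrary choices):

```python
import math
import random

random.seed(1)

def tanaka_sample(n_steps=200):
    """One discretized sample of |B_1| - int_0^1 sgn(B_s) dB_s with
    B_0 = 0, which by Tanaka's formula approximates the local time L_1."""
    dt = 1.0 / n_steps
    b, stoch_int = 0.0, 0.0
    for _ in range(n_steps):
        db = random.gauss(0.0, math.sqrt(dt))
        sgn = 1.0 if b > 0 else (-1.0 if b < 0 else 0.0)
        stoch_int += sgn * db      # left-point (Ito) evaluation of sgn(B)
        b += db
    return abs(b) - stoch_int

n_paths = 10_000
mc_mean = sum(tanaka_sample() for _ in range(n_paths)) / n_paths
exact = math.sqrt(2.0 / math.pi)   # E_0[L_1] = E[|B_1|]
print(mc_mean, exact)
```

Note that the discretized stochastic integral has mean zero exactly, so the comparison is limited only by Monte Carlo noise.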
(2.13) Corollary. If $A$ is a CAF and $f$ a positive Borel function, then
\[ \int_0^t f(B_s)\, dA_s = \int L^a_t f(a)\, \nu_A(da). \]
Proof. Obvious. □
(2.14) Corollary. For any non-zero CAF $A$ of linear BM, $A_\infty = \infty$ a.s.
Proof. Because of Corollary (2.4) Chapter VI and the translation invariance of BM, $L^a_\infty = \infty$ for every $a$ simultaneously almost-surely; this obviously entails the Corollary. □
Remark. A more general result will be proved in Proposition (3.14).
Let us further mention that, although we are not going to prove it in this book,
there are no additive functionals of BM other than the continuous ones; this is
closely related to Theorem (3.4) in Chap. V.
We will now turn to the investigation of the time-change associated with a CAF $A$, namely
\[ \tau_t = \inf\{s : A_s > t\}, \qquad t \geq 0. \]
Because $A$ is an additive functional and not merely an increasing process, we have the
(2.15) Proposition. For every $t$, there is a negligible set outside which
\[ \tau_{t+s} = \tau_t + \tau_s \circ \theta_{\tau_t} \quad \text{for every } s. \]
Proof. Same as for the local time in Sect. 1. □
In the sequel, $A$ is fixed and we denote by $S(A)$ the support of the measure $\nu_A$. We write $T_A$ for $T_{S(A)}$, the hitting time of $S(A)$. We will prove that $A_t$ increases only when the Brownian motion is in $S(A)$. As a result, $A$ is strictly increasing if and only if $\nu_A$ charges every open set.
(2.16) Lemma. The times $\tau_0$ and $T_A$ are a.s. equal.
For notational convenience, we shall write $T$ for $T_A$ throughout the proof.
Proof. Pick $a$ in $S(A)$; it was seen in the proof of Proposition (2.5) in Chap. VI that for $t > 0$, $L^a_t$ is $> 0$ $P_a$-a.s. By the continuity of $L^x_t$ in $x$, it follows that $L^x_t$ is $> 0$ on a neighborhood of $a$ $P_a$-a.s., which entails
\[ A_t = \int_{-\infty}^{+\infty} L^x_t\, \nu_A(dx) > 0 \quad P_a\text{-a.s.} \]
Let now $x$ be an arbitrary point in $\mathbb{R}$; using the strong additivity of $A$ and the strong Markov property, we get, for $t > 0$,
\[ P_x\left[A_{T+t} > 0\right] \geq E_x\left[P_{B_T}\left[A_t > 0\right]\right] = 1 \]
since, $S(A)$ being a closed set, $B_T \in S(A)$ a.s. This shows that $\tau_0 \leq T$ a.s.
On the other hand, by Corollary (2.13),
\[ \int_0^t 1_{S(A)^c}(B_s)\, dA_s = \int_{S(A)^c} L^a_t\, \nu_A(da) = 0 \quad \text{a.s.,} \]
hence
\[ A_t = \int_0^t 1_{S(A)}(B_s)\, dA_s \quad \text{a.s.,} \]
which entails that $A_T = 0$ and $\tau_0 \geq T$ a.s. □
(2.17) Proposition. The following three sets are a.s. equal:
i) the support of the measure $dA_t$;
ii) the complement $\Sigma_A$ of $\bigcup_s\, ]\tau_{s-}, \tau_s[$;
iii) the set $\Gamma = \{t : B_t \in S(A)\}$.
Proof. The equality of the sets featured in i) and ii) is proved as in the case of the local time, hence $\Sigma_A$ is the support of $dA_t$.
Again, because $\int_0^t 1_{S(A)^c}(B_s)\, dA_s = 0$, we have $\Gamma^c \subset \bigcup_s\, ]\tau_{s-}, \tau_s[$ since these intervals are the intervals of constancy of $A$; consequently, $\Sigma_A \subset \Gamma$. To prove the reverse inclusion, we observe that
\[ \Sigma_A = \{t : A(t + \varepsilon) - A(t - \varepsilon) > 0 \text{ for all } \varepsilon > 0\}. \]
Consequently,
\[ \Gamma \cap \Sigma_A^c \subset \bigcup_{(r, s)} \theta_r^{-1}\left(\{T_A < s - r\} \cap \{A_{s-r} = 0\}\right), \]
where the union is over all pairs $(r, s)$ of rational numbers such that $0 \leq r < s$. But for each $x$, by the Markov property, the $P_x$-probability of each of these events is
\[ E_x\left[P_{B_r}\left[T_A < s - r,\ A_{s-r} = 0\right]\right], \]
and this vanishes thanks to the preceding lemma, which completes the proof. □
We now study what becomes of the BM under the time-change associated with an additive functional $A$, thus substantiating in part the comment made in Sect. 3 Chap. VII about linear Markov processes on natural scale being time-changed BM's. We set $\widetilde{B}_t = B_{\tau_t}$; the process $\widetilde{B}$ is adapted to the filtration $(\widetilde{\mathscr{F}}_t) = (\mathscr{F}_{\tau_t})$. Since $t \to \tau_t$ is right-continuous, the filtration $(\widetilde{\mathscr{F}}_t)$ is right-continuous; it is also complete for the probability measures $(P_x)$ of BM. If $T$ is a $(\widetilde{\mathscr{F}}_t)$-stopping time, then $\tau_T$, i.e. the map $\omega \to \tau_{T(\omega)}(\omega)$, is a $(\mathscr{F}_t)$-stopping time; indeed for any $s$
\[ \{\tau_T < s\} = \bigcup_{q \in \mathbb{Q}_+} \{T \leq q\} \cap \{\tau_q < s\} \]
and since $\{T \leq q\} \in \mathscr{F}_{\tau_q}$, each of these sets is in $\mathscr{F}_s$. Moreover,
\[ \tau_{T+t} = \tau_T + \tau_t \circ \theta_{\tau_T} \quad \text{a.s. for every } t, \]
as can be deduced from Proposition (2.15) using an approximation of $T$ by decreasing, countably valued stopping times. All these facts will be put to use to prove the
(2.18) Theorem. If the support of $\nu_A$ is an interval $I$, the process $(\widetilde{B}_t, \widetilde{\mathscr{F}}_t, P_x, x \in I)$ is a continuous regular strong Markov process on $I$ on natural scale and with speed measure $2\nu_A$. Its semi-group $Q_t$ is given by $Q_t f(x) = E_x\left[f(B_{\tau_t})\right]$.
Proof. We first observe that $B_{\tau_t}$ has continuous paths. Indeed, by the foregoing proposition, if $\tau_{t-} \neq \tau_t$, then $B_{\tau_{t-}}$ must be a finite end-point of $I$ and $B$ must be leaving $I$ at time $\tau_{t-}$. But then $\tau_t$ is the following time at which $B$ reenters $I$ and it can obviously reenter it only at the same end-point. As a result $B_{\tau_t} = B_{\tau_{t-}}$ which proves our claim.
We now prove that $\widetilde{B}$ has the strong Markov property. Let $f$ be a positive Borel function; by the remarks preceding the statement, if $T$ is a $(\widetilde{\mathscr{F}}_t)$-stopping time,
\[ E_x\left[f\left(\widetilde{B}_{T+t}\right) \,\big|\, \widetilde{\mathscr{F}}_T\right] = Q_t f\left(\widetilde{B}_T\right) \]
$P_x$-a.s. on $\{T < \infty\}$. The result follows from Exercise (3.16) Chap. III.
That $\widetilde{B}$ is regular and on its natural scale is clear. It remains to compute its speed measure. Let $J = ]a, b[$ be an open sub-interval of $I$. With the notation of Sect. 3 Chap. VII we recall that for $x \in J$, and a positive Borel function $f$,
\[ E_x\left[\int_0^{T_a \wedge T_b} f(B_s)\, ds\right] = 2 \int_J G_J(x, y) f(y)\, dy; \]
it follows that $E_x\left[L^y_{T_a \wedge T_b}\right] = 2 G_J(x, y)$ a.e. and, by continuity, everywhere. We now compute the potential of $\widetilde{B}$. Define $\widetilde{T}_a = \inf\{t : \widetilde{B}_t = a\} = \inf\{t : \tau_t \geq T_a\}$ and $\widetilde{T}_b$ similarly. By the time-change formula in Sect. 1 Chap. V,
\[ E_x\left[\int_0^{\widetilde{T}_a \wedge \widetilde{T}_b} f\left(\widetilde{B}_t\right) dt\right] = E_x\left[\int_0^{T_a \wedge T_b} f(B_s)\, dA_s\right] = \int_J E_x\left[L^y_{T_a \wedge T_b}\right] f(y)\, \nu_A(dy) = 2 \int_J G_J(x, y) f(y)\, \nu_A(dy), \]
which completes the proof. Actually, we ought to consider the particular case of end-points but the same reasoning applies. □
Using Corollary (2.11), it is easy to check that $Q_t f$ is a Borel function if $f$ is a Borel function. One can also check, using the change of variable formula in Stieltjes integrals, that the resolvent of $\widetilde{B}$ is given by
\[ \widetilde{U}^\alpha f(x) = E_x\left[\int_0^\infty e^{-\alpha A_s} f(B_s)\, dA_s\right]. \]
An unsatisfactory aspect of the above result is that if $I$ has a finite end-point, this point must be a reflecting or slowly reflecting boundary and in particular belongs to $E$ in the notation of Sect. 3 Chap. VII. We haven't shown how to obtain a process on $]0, \infty[$ as a time-changed BM. We recall from Exercise (3.23) Chap. VII that if the process is on its natural scale, then $0$ must be a natural boundary. We will briefly sketch how such a process may be obtained from BM.
Let $\nu$ be a Radon measure on $]0, \infty[$ such that $\nu(]0, \varepsilon]) = \infty$ for every $\varepsilon > 0$; then the integral
\[ A_t = \int_0^\infty L^a_t\, \nu(da) \]
no longer defines an additive functional because it is infinite $P_0$-a.s. for any $t > 0$. But it may be viewed as a CAF of the BM killed at time $T_0$. The associated time-changed process will then be a regular process on $]0, \infty[$ on natural scale and with $0$ as natural boundary. We leave as an exercise for the reader the task of writing down the proof of these claims; he may also look at the exercises for particular cases and related results.
It is also important to note that Theorem (2.18) has a converse. Namely, starting with $X$, one can find a BM $B$ and a CAF of $B$ such that $X$ is the associated time-changed process (see Exercise (2.34)).
(2.19) Exercise. If $A$ is a CAF of BM and $(\tau_t)$ the inverse of the local time at $0$, prove that the process $A_{\tau_t}$ has stationary independent increments. See also Proposition (2.7) Chap. XII.
(2.20) Exercise. Let $B$ be the BM$^d$, $d > 1$, and $v$ a unit vector. The local time at $y$ of the linear BM $X = (v, B)$ is an additive functional of $B$. Compute its associated measure.
(2.21) Exercise. If $m$ is a $\sigma$-finite measure and $mU^1 = m$, then $m$ is invariant.
[Hint: If $m(A) < \infty$, prove that $\alpha m U^\alpha(A) = m(A)$ for each $\alpha$ and use the properties of Laplace transforms.]
(2.22) Exercise (Signed additive functionals). If, in the definition of a CAF, we replace the requirement that it be increasing by the requirement that it merely be of finite variation on each bounded interval, we get the notion of signed CAF.
1°) Prove that if $A$ is a signed CAF there exist two CAF's $A^+$ and $A^-$ such that $A = A^+ - A^-$, this decomposition being minimal.
2°) In the case of BM, extend to signed CAF's the results of this section. In particular for a signed CAF $A$, there exists a function $f$ which is locally the difference of two convex functions, such that $f''(dx) = 2\nu_A(dx)$ and
\[ A_t = f(B_t) - f(B_0) - \int_0^t f'_-(B_s)\, dB_s. \]
(2.23) Exercise (Semimartingale functions of BM). 1°) If $f(B_t)$ is for each $P_x$ a continuous semimartingale, prove that $f$ is locally the difference of two convex functions.
[Hint: Use the preceding exercise and Exercise (3.13) in Chap. II.]
As it is known that all additive functionals of BM are continuous, one can actually remove the hypothesis that $f(B_t)$ is continuous and prove that it is so.
2°) If $f(B_t)$ is a continuous semimartingale under $P_\nu$ for one starting measure $\nu$, prove that the result in 1°) still holds.
[Hint: Prove that the hypothesis of 1°) holds.]
#
(2.24) Exercise (Skew Brownian motion). Let $0 < \alpha < 1$ and define
\[ g_\alpha(x) = (1 - \alpha)^2\, 1_{(x \geq 0)} + \alpha^2\, 1_{(x < 0)}. \]
If $B$ is the linear BM, we call $Y^\alpha$ the process obtained from $B$ by the time-change associated with the additive functional
\[ \int_0^t g_\alpha(B_s)\, ds. \]
Finally we set $r_\alpha(x) = x/(1 - \alpha)$ if $x \geq 0$, $r_\alpha(x) = x/\alpha$ if $x < 0$ and $X^\alpha_t = r_\alpha(Y^\alpha_t)$. The process $X^\alpha$ is called the Skew Brownian Motion with parameter $\alpha$. For completeness, the reader may investigate the limit cases $\alpha = 0$ and $\alpha = 1$.
1°) Compute the scale function, speed measure and infinitesimal generator of $X^\alpha$. As a result $X^\alpha$ has the transition density of Exercise (1.16), Chap. III. Prove that $P_0[X_t > 0] = \alpha$ for every $t$.
2°) A skew BM is a semimartingale. Prove that for $\alpha \neq 1/2$, its local time is discontinuous in the space variable at level $0$ and compute its jump.
3°) Prove that $X^\alpha_t = \beta_t + (2\alpha - 1)L_t$ where $\beta$ is a BM and $L$ the symmetric local time of $X^\alpha$ at zero given by
\[ L_t = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^t 1_{[-\varepsilon, \varepsilon]}(X^\alpha_s)\, ds. \]
*
4°) Let $\gamma$ be a constant and let $X$ be a solution to the equation $X_t = \beta_t + \gamma L_t$ where $L$ is the symmetric local time of $X$ as defined in 3°). Compute $L^0(X)_t$ and $L^{0-}(X)_t$ as functions of $L_t$. Derive therefrom that a solution $X$ to the above equation exists if and only if $|\gamma| \leq 1$.
[Hint: Use Exercise (2.24) Chapter VI. See also Exercise (3.19) in Chapter IX.]
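A numerical sanity check of $P_0[X^\alpha_t > 0] = \alpha$ in 1°): the skew BM arises as the scaling limit of the Harrison–Shepp random walk (an outside fact, not part of this exercise), which steps $\pm 1$ symmetrically away from $0$ and, from $0$, steps $+1$ with probability $\alpha$. Since the sign of the current excursion is $+$ with probability $\alpha$ independently of $|S|$, one has $P[S_n > 0] = \alpha$ exactly for odd $n$; a minimal sketch:

```python
import random

random.seed(3)

def skew_walk_endpoint(alpha, n):
    """Harrison-Shepp walk: symmetric +-1 steps away from 0; from 0,
    the next step is +1 with probability alpha."""
    s = 0
    for _ in range(n):
        if s == 0:
            s += 1 if random.random() < alpha else -1
        else:
            s += 1 if random.random() < 0.5 else -1
    return s

alpha, n, trials = 0.8, 301, 10_000   # n odd, so S_n != 0
frac_positive = sum(
    skew_walk_endpoint(alpha, n) > 0 for _ in range(trials)) / trials
print(frac_positive)  # close to alpha = 0.8
```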
* (2.25) Exercise (Additive local martingales). 1°) Let $A$ be a continuous additive functional of linear BM. Prove that a.s. the measures $dA_t$ are absolutely continuous with respect to $dt$ if and only if there is a positive Borel function $f$ such that
\[ A_t = \int_0^t f(B_s)\, ds. \]
2°) Prove that $M$ is a continuous process vanishing at $0$ and such that
i) it is an $(\mathscr{F}_t)$-local martingale for every $P_\nu$,
ii) for every pair $(s, t)$,
\[ M_{t+s} - M_t = M_s \circ \theta_t \quad \text{a.s.,} \]
if and only if there is a Borel function $f$ such that
\[ M_t = \int_0^t f(B_s)\, dB_s \quad \text{a.s.} \]
[Hint: Use the representation result of Sect. 3 Chap. V and the fact that $\langle M, B\rangle$ is an additive functional.]
#
(2.26) Exercise (Continuation of Exercise (1.13)). 1°) Suppose that $X$ is the linear BM and prove that the pair $(X, A)$ has the strong Markov property for every $(\mathscr{F}_t)$-stopping time.
2°) If $A_t = \int_0^t f(X_s)\, ds$, then $(X, A)$ is a diffusion in $\mathbb{R}^2$ with generator
\[ \frac{1}{2} \frac{\partial^2}{\partial x_1^2} + f(x_1) \frac{\partial}{\partial x_2}. \]
In particular, the process of Exercise (1.12) of Chap. III has the generator $\frac{1}{2} \frac{\partial^2}{\partial x_1^2} + x_1 \frac{\partial}{\partial x_2}$.
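The generator formula can be probed by simulation: for smooth $f$, the quantity $t^{-1}\big(E_{(x,y)}\big[f\big(B_t, y + \int_0^t B_s\, ds\big)\big] - f(x, y)\big)$ should be close to $\frac{1}{2} f_{x_1 x_1} + x_1 f_{x_2}$ for small $t$. A sketch with the (arbitrarily chosen) test function $f(x_1, x_2) = x_1^2 + x_1 x_2$ and starting point $(1, 0)$; the Gaussian pair $(W_t, \int_0^t W_s\, ds)$ is sampled exactly from its covariance:

```python
import math
import random

random.seed(5)

def f(x1, x2):
    return x1 * x1 + x1 * x2

x, y, t, n = 1.0, 0.0, 0.05, 200_000
acc = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    w_t = math.sqrt(t) * z1                                # W_t
    int_w = 0.5 * t * w_t + math.sqrt(t ** 3 / 12.0) * z2  # int_0^t W_s ds
    b_t = x + w_t                                          # B_t with B_0 = x
    i_t = y + x * t + int_w                                # y + int_0^t B_s ds
    acc += f(b_t, i_t)

gen_estimate = (acc / n - f(x, y)) / t
gen_exact = 0.5 * 2.0 + x * x   # (1/2) f_{x1 x1} + x1 f_{x2} at (1, 0)
print(gen_estimate, gen_exact)
```

The exact covariance used for $\int_0^t W_s\, ds$ is $\operatorname{Var} = t^3/3$, $\operatorname{Cov}(W_t, \int_0^t W_s\, ds) = t^2/2$, which the conditional decomposition above reproduces.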
*
(2.27) Exercise. 1°) In the setting of Theorem (2.18), check that $\nu_A$ is excessive for $\widetilde{B}$.
2°) If more generally $X$ is a linear Markov process on a closed interval and is not necessarily on natural scale, prove that the speed measure is excessive.
3°) Extend this result to the general case.
[Hint: Use Exercise (3.18) in Chap. VII.]
(2.28) Exercise. In the setting of Theorem (2.18) suppose that $\nu_A(dx) = v(x)\, dx$ with $v > 0$ on $I^\circ = \operatorname{Int}(I)$. Prove that the extended infinitesimal generator of $\widetilde{B}$ is given on $C^2$ by $\mathscr{A} f = \frac{1}{2} v^{-1} f''$. Check the answer against the result in Theorem (3.12) Chap. VII and compare with Proposition (1.13) Chap. IX.
[Hint: Use the characterization of the extended generator in terms of martingales.]
* (2.29) Exercise. 1°) In the setting of Theorem (2.18), if $A_t = \int_0^t 1_{(B_s > 0)}\, ds + \lambda L^0_t$ with $0 < \lambda < \infty$, prove that $0$ is a slowly reflecting boundary for $\widetilde{B}$.
2°) Prove that if $\widetilde{B}$ is a Feller process, the domain $\mathscr{D}_A$ of the infinitesimal generator of $\widetilde{B}$ is
\[ \left\{ f \in C^2(]0, \infty[) : f''(0) = \lim_{x \to 0} f''(x) \text{ exists and } f'_d(0) = \frac{\lambda}{2} f''(0) \right\}, \]
and that $\mathscr{A}f(x) = \frac{1}{2} f''(x)$ for $x > 0$, $\mathscr{A}f(0) = \lambda^{-1} f'_d(0)$.
#
(2.30) Exercise. Take up the skew BM $X^\alpha$ of Exercise (2.24) with $0 < \alpha < 1$ and let $L$ be its local time at $0$.
1°) Prove that there is a BM $\beta$ such that
\[ X^\alpha_t = \beta_t + \left((2\alpha - 1)/2\alpha\right) L_t. \]
2°) Let $a$ and $b$ be two positive numbers such that $\alpha = b/(a + b)$ and put $\sigma(x) = a 1_{(x > 0)} - b 1_{(x \leq 0)}$. Set $Y_t = a\left(X^\alpha_t\right)^+ - b\left(X^\alpha_t\right)^-$ and $B_t = \left|X^\alpha_t\right| - L_t\left(\left|X^\alpha\right|\right)$ and prove that $(Y, B)$ is a solution to $e_0(\sigma, 0)$.
3°) By considering another skew BM, say $Z^\alpha$, such that $\left|X^\alpha\right| = \left|Z^\alpha\right|$ (see Exercise (2.16) Chap. XII), prove that pathwise uniqueness does not hold for $e_0(\sigma, 0)$.
#
(2.31) Exercise. Let $f$ be a positive Borel function on $\mathbb{R}$ and $B$ be the linear BM.
1°) If $f$ is not locally integrable prove that there exists a point $a$ in $\mathbb{R}$ such that for every $t > 0$,
\[ \int_0^t f(B_s)\, ds = \infty \quad P_a\text{-a.s.} \]
2°) Prove that the following three conditions are equivalent:
(i) $P_0\left[\int_0^t f(B_s)\, ds < \infty\ \forall t \in [0, \infty[\right] > 0$;
(ii) $P_x\left[\int_0^t f(B_s)\, ds < \infty\ \forall t \in [0, \infty[\right] = 1$ for every $x \in \mathbb{R}$;
(iii) $f$ is locally integrable.
(2.32) Exercise. 1°) In the situation of Theorem (2.18) prove that there is a bicontinuous family of r.v.'s $\widetilde{L}^a_t$ such that for every positive Borel function $f$,
\[ \int_0^t f\left(\widetilde{B}_s\right) ds = \int f(a)\, \widetilde{L}^a_t\, \nu_A(da) \quad \text{a.s.} \]
2°) If in addition $I = \mathbb{R}$, then $\widetilde{B}$ is a loc. mart.; prove that $\langle \widetilde{B}, \widetilde{B}\rangle_t = \tau_t$.
(2.33) Exercise. Construct an example of a continuous regular strong Markov process $X$ on $\mathbb{R}$ which "spends all its time on $\mathbb{Q}$", i.e. the set $\{t : X_t \in \mathbb{R} \setminus \mathbb{Q}\}$ has a.s. Lebesgue measure $0$.
(2.34) Exercise. 1°) With the notation of Proposition (3.5) Chap. VII, prove that $s\left(X_{t \wedge T_a \wedge T_b}\right)$ is a uniformly integrable $P_x$-martingale for every $x$ in $]a, b[$.
2°) Assume that $X$ is on natural scale and that $E = \mathbb{R}$. Call $m$ its speed measure and prove that if $B$ is the DDS Brownian motion of $X$ and $\tau_t$ is the inverse of $\langle X, X\rangle$, then
\[ \tau_t = \int L^a_t(B)\, m(da) \]
(see also Exercise (2.32)). In particular $X$ is a pure loc. mart.
[Hint: The measure $m$ is twice the opposite of the second derivative of the concave function $m_I$; use Tanaka's formula.]
§3. Ergodic Theorems for Additive Functionals
In Sect. 1 Chap. II and Sect. 2 Chap. V, we proved some recurrence properties of BM in dimensions 1 and 2. We are now taking this up to prove an ergodic result for occupation times or more generally additive functionals. Since at no extra cost we can cover other cases, we will consider in this section a Markov process $X$ for which we use the notation and results of Chap. III. We assume in addition that the resolvent $U^\alpha$ has the strong Feller property, namely $U^\alpha f$ is continuous for every $\alpha$ and every bounded Borel function $f$, and also that $P_t 1 = 1$ for every $t \geq 0$, which is equivalent to $P_x[\zeta = \infty] = 1$ for every $x$.
Our first definition makes sense for any Markov process and is fundamental in the description of probabilistic potential theory.
(3.1) Definition. A positive universally measurable function $f$ is excessive for the process $X$ (or for its semi-group) if
i) $P_t f \leq f$ for every $t > 0$;
ii) $\lim_{t \downarrow 0} P_t f = f$.
A finite universally measurable function $h$ is said to be invariant if $P_t h = h$ for every $t$.
(3.2) Proposition. If $f$ is excessive, $f(X_t)$ is a $(\mathscr{F}_t)$-supermartingale for every $P_\nu$. If $h$ is invariant, $h(X_t)$ is a martingale.
Proof. By the Markov property, and property i) above,
\[ E\left[f(X_{t+s}) \mid \mathscr{F}_t\right] = P_s f(X_t) \leq f(X_t). \]
In the case of invariant functions, the inequality is an equality. □
This proposition, which used only property i) in the definition above, does not say anything about the possibility of getting a good version for the supermartingale f(X_t); property ii) is precisely what is needed to ensure that f(X_t) is a.s. right-continuous, but we are not going to prove it in this book. We merely observe that, if f is excessive, then αU^α f ≤ f for every α and lim_{α→∞} αU^α f = f, as the reader will easily show; moreover, the limit is increasing and it follows easily from the strong Feller property of U^α that an excessive function is lower semi-continuous. If h is invariant and bounded, then αU^α h = h, hence h is continuous; the martingale h(X_t) is then a.s. right-continuous, a fact which we will use below. Conversely, if h is bounded and αU^α h = h for every α, the continuity of h, hence the right-continuity of P_t h in t, entails, by the uniqueness property of the Laplace transform, that h is invariant.
(3.3) Definition. An event Γ of ℱ_∞ is said to be invariant if θ_t^{-1}(Γ) = Γ for every t. The σ-field 𝒥 of invariant events is called the invariant σ-field and an 𝒥-measurable r.v. is also called invariant. Two invariant r.v.'s Z and Z′ are said to be equivalent if P_x[Z = Z′] = 1 for every x.
Invariant r.v.'s and invariant functions on the state space are related by the
following
(3.4) Proposition. The formula h(x) = E_x[Z] sets up a one-to-one and onto correspondence between the bounded invariant functions and the equivalence classes of bounded invariant r.v.'s. Moreover

    Z = lim_{t→∞} h(X_t)    a.s.
Chapter X. Additive Functionals of Brownian Motion
Proof. If Z is invariant, a simple application of the Markov property shows that h(·) = E·[Z] is invariant (notice that if we did not have ζ = ∞ a.s., we would only get P_t h ≤ h).
Conversely, since h(X_t) is a right-continuous bounded martingale, it converges a.s. to a bounded r.v. Z which may be chosen invariant. Moreover, by Lebesgue's dominated convergence theorem, h(x) = E_x[Z] for every x in E. The correspondence thus obtained is clearly one-to-one. □
Let A be a Borel set; the set R(A) of paths ω which hit A infinitely often as t → ∞ is in ℱ_∞ since it is equal to

    ∩_n {n + T_A ∘ θ_n < ∞}.

It is then clear that it is an invariant event. The corresponding invariant function h_A = P·[R(A)] is the probability that A is hit at arbitrarily large times, and lim_{t→∞} h_A(X_t) = 1_{R(A)} a.s. by the above result.
(3.5) Definition. A set A is said to be transient if h_A ≡ 0 and recurrent if h_A ≡ 1.
In general, a set may be neither recurrent nor transient but we have the
(3.6) Proposition. The following three statements are equivalent:
i) the bounded invariant functions are constant;
ii) the σ-algebra 𝒥 is a.s. trivial;
iii) every set is either recurrent or transient.

Proof. The equivalence of i) and ii) follows immediately from Proposition (3.4), and it is clear that ii) implies iii).
We prove that iii) implies ii). Let Γ ∈ 𝒥 and put A = {x : P_x[Γ] > a} for 0 < a < 1. We know that 1_Γ = lim_{t→∞} P_{X_t}[Γ] a.s.; if A is recurrent, then Γ = Ω a.s. and if A is transient then Γ = ∅ a.s. □
0
Although we are not going to develop the corresponding theory, Markov processes have roughly two basic behaviors. Either they converge to infinity, in which case they are called transient, or they come back at arbitrarily large times to relatively small sets, for instance open balls of arbitrarily small radius, in which case they are called recurrent. After proving a result pertaining to the transient case, we will essentially study the recurrent case. Let us first observe that, because of the right-continuity of paths, if A is an open set, R(A) = {lim sup_{q→∞} 1_A(X_q) = 1} where q runs through the rational numbers.
The following result applies in particular to BM^d, d > 2, in which case however it was already proved in Sect. 2 Chap. V.
(3.7) Proposition. If for every relatively compact set A, the potential U(·, A) is finite, then the process converges to infinity.
Proof. We have, by the Markov property,

    P_t(U(·, A))(x) = E_x[U(X_t, A)] = ∫_t^∞ P_s(x, A) ds;

it follows on the one hand that U(·, A) is excessive, hence lower semi-continuous, on the other hand that lim_{t→∞} P_t(U(·, A)) = 0. From the first property we deduce that U(X_q, A) is a positive supermartingale indexed by ℚ_+, and by the second property and Fatou's lemma, its limit as q → ∞ is zero a.s.
Let now Γ and Γ′ be two relatively compact open sets such that Γ̄ ⊂ Γ′. The function U(·, Γ′) is strictly positive on Γ′ because of the right-continuity of paths; by the lower semi-continuity of U(·, Γ′), there is a constant a > 0 such that U(·, Γ′) ≥ a on Γ. Thus, on the paths which hit Γ at infinitely large times, we have lim sup_{q→∞} U(X_q, Γ′) ≥ a. By the first paragraph, the set of these paths is a.s. empty. Therefore, Γ is a.s. not visited from some finite time on, and the proof is now easily completed. □
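As a numerical illustration of this transience (a sketch, not part of the text; it uses the classical Newtonian fact, assumed here, that for BM³ the function h(x) = min(1, 1/|x|) is the probability of ever hitting the closed unit ball from x), one can watch E[h(B_t)] decrease to 0: h(B_t) is a positive supermartingale whose a.s. limit is zero, so the unit ball is eventually never revisited.

```python
import numpy as np

rng = np.random.default_rng(4)

# For BM^3 started at 0, B_t has the law of sqrt(t) * N(0, I_3).
# h(x) = min(1, 1/|x|) is the probability of ever hitting the closed
# unit ball from x; E[h(B_t)] should decrease towards 0 as t grows.
n_paths = 20_000
probs = []
for t in (1.0, 100.0, 10_000.0):
    norms = np.sqrt(t) * np.linalg.norm(rng.standard_normal((n_paths, 3)), axis=1)
    probs.append(np.minimum(1.0, 1.0 / norms).mean())
print(probs)  # decreasing towards 0
```

The sample sizes and time points are chosen arbitrarily; only the monotone decay to 0 matters.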
We now study the opposite situation.
(3.8) Definition. The process X is said to be Harris-recurrent, or merely Harris, if there is an invariant measure m such that m(A) > 0 implies that A is recurrent.
In the sequel, when we deal with Harris processes, we will always assume that the support of m is the whole space. Indeed, the support of an invariant measure is an absorbing set, a fact which is proved in the following way. Let Γ be the complement of the support; since Γ is open, the right-continuity of paths entails that the set of points from which the process can reach Γ is precisely Γ′ = {x : U^α(x, Γ) > 0} for some α > 0. Clearly Γ′ ⊃ Γ and, since m is invariant, α mU^α(Γ) = m(Γ) = 0, which proves that m(Γ′) = 0; as a result Γ′ = Γ and Γ^c is absorbing. Thus, one loses little by assuming that Γ is empty and in fact this is naturally satisfied in most cases. This condition implies that every open set is recurrent.
Conversely, we have the following result which shows that BM^d, d = 1, 2, the OU process and many linear Markov processes such as the Bessel processes of low dimensions are Harris-recurrent.
(3.9) Proposition. If X has an invariant measure and if every open set is recurrent,
then X is Harris.
Proof. If m(A) > 0, since P_m[X_t ∈ A] = m(A) for every t, there is a constant a > 0 such that the set Γ = {x : P_x[T_A < ∞] > a} is not empty.
Now the function f ≡ P·[T_A < ∞] is excessive because P_t f(x) = P_x[t + T_A ∘ θ_t < ∞] ≤ f(x), and one checks that lim_{t↓0}(t + T_A ∘ θ_t) = T_A, which implies that lim_{t↓0} P_t f(x) = f(x). As a result the set Γ is open; furthermore, by Corollary (2.4) in Chap. II,

    lim_{q→∞} P_{X_q}[T_A < ∞] = lim_{q→∞} P·[q + T_A ∘ θ_q < ∞ | ℱ_q] = 1_{R(A)}    a.s.
and since Γ is recurrent, we find that 1_{R(A)} ≥ a a.s., hence R(A) = Ω a.s., which completes the proof. □
For a Harris process, the equivalent conditions of Proposition (3.6) are in force.
(3.10) Proposition. If X is a Harris process, the excessive functions and the
bounded invariant functions are constant.
Proof. If the excessive function f were not constant, we could find two constants a < b such that the sets J = {f > b} and J′ = {f ≤ a} are not empty. The set J is open, hence recurrent, and by Fatou's lemma, for each x ∈ E,

    f(x) ≥ lim_{q→∞} P_q f(x) ≥ E_x[lim_{q→∞} f(X_q)] ≥ b,

and we get a contradiction.
For a bounded invariant function h, we apply the result just proved to h + ‖h‖. □
By the occupation times formula together with Corollary (2.4) Chapter IV (or Corollary (2.14)), we know that in the case of BM¹, if m is the Lebesgue measure, m(A) > 0 implies

    ∫_0^∞ 1_A(X_s) ds = ∞    a.s.,

which is apparently stronger than the Harris condition. We will prove that actually this property is shared by every Harris process, in particular by BM². We consider a strong additive functional A which, as we already observed, is in fact no restriction.
(3.11) Proposition. If ν_A does not vanish, then A_∞ = ∞ a.s.

Of course, here ν_A is computed with respect to the invariant measure m which is the only invariant measure for X (see Exercise (3.14)).
Proof. For ε > 0, we set T_ε = inf{t : A_t > ε}. If ν_A does not vanish, we may find ε > 0 and a > 0 such that

    m({x : P_x[T_ε < ∞] > a}) > 0.

Therefore, lim sup_{t→∞} P_{X_t}[T_ε < ∞] ≥ a a.s. But on the other hand, for x ∈ E,

    P_{X_t}[T_ε < ∞] = P_x[t + T_ε ∘ θ_t < ∞ | ℱ_t]

and by Corollary (2.4) in Chap. II, this converges P_x-a.s. to 1_{∩_t {t + T_ε ∘ θ_t < ∞}}. It follows that ∩_t {t + T_ε ∘ θ_t < ∞} = Ω a.s. and a fortiori P·[T_ε < ∞] ≡ 1.
If we now define inductively the stopping times T_n by T_1 = T_ε and T_n = T_{n-1} + T_ε ∘ θ_{T_{n-1}}, a simple application of the strong Markov property shows that P·[T_n < ∞] = 1. By the strong additivity of A, we have A_{T_n} ≥ nε for every n, which completes the proof. □
Remarks. 1°) The function P·[T_ε < ∞] can be shown to be excessive, hence constant, and the proof could be based on these facts.
2°) This result shows that for m(A) > 0 we have U(·, A) ≡ ∞, which is to be compared with Proposition (3.7).
We now turn to the limit-quotient theorem which is the main result of this
section.
(3.12) Theorem. If X is Harris, if A and C are two integrable additive functionals and if ‖ν_C‖ > 0, then

    lim_{t→∞} A_t/C_t = ‖ν_A‖/‖ν_C‖    a.s.

By the preceding result, the condition ‖ν_C‖ > 0 ensures that the quotient on the left is meaningful at least for t sufficiently large.
Proof. By taking quotients afterwards, it is clearly enough to prove the result when C_t = ∫_0^t f(X_s) ds where f is a bounded, integrable and strictly positive Borel function.
We will use the Chacon-Ornstein theorem (see Appendix) for the operator θ_a, a > 0. Since m is invariant for the process, the measure P_m on (Ω, ℱ_∞) is invariant by θ_a, so that Z → Z ∘ θ_a is a positive contraction of L¹(P_m). Moreover, by the preceding result, we have

    ∑_{n=0}^∞ C_a ∘ θ_{na} = C_∞ = ∞    a.s.

which proves that the set D in the Hopf decomposition of Ω with respect to θ_a is empty; in other words, Z → Z ∘ θ_a is conservative. We may therefore apply the Chacon-Ornstein theorem; by hypothesis, A_a and C_a are in L¹(P_m) so that the limit

    lim_{n→∞} A_{na}/C_{na}

exists P_m-a.s. As lim_{t→∞} C_t = ∞, it is easily seen that lim_{n→∞} C_{(n+1)a}/C_{na} = 1; therefore the inequalities

    (A_{[t/a]a}/C_{[t/a]a})(C_{[t/a]a}/C_{([t/a]+1)a}) ≤ A_t/C_t ≤ (A_{([t/a]+1)a}/C_{([t/a]+1)a})(C_{([t/a]+1)a}/C_{[t/a]a})

imply that lim_{t→∞} A_t/C_t exists P_m-a.s. As a result, there is a r.v. Λ such that

    lim_{t→∞} A_t/C_t = Λ    P_m-a.s.

and Λ = Λ ∘ θ_a P_m-a.s. for every a > 0. It follows from Propositions (3.6) and (3.10) that Λ is P_m-a.s. equal to a constant. From the Chacon-Ornstein theorem,
it follows that this constant must be E_m[A_a]/E_m[C_a] for an arbitrary a, that is ‖ν_A‖/‖ν_C‖.
Set

    F = {ω : lim_{t→∞} A_t(ω)/C_t(ω) = ‖ν_A‖/‖ν_C‖}.

We have just proved that P_m(F^c) = 0; moreover, if ω ∈ F then θ_s(ω) ∈ F for every s or, in other words, 1_F ≤ 1_F ∘ θ_s for every s. But since lim_{t→∞} C_t = +∞ a.s., if θ_s(ω) ∈ F, then ω ∈ F. Thus 1_F = 1_F ∘ θ_s a.s. for every s, which implies that P·[F^c] is a bounded invariant function, hence a constant function, which has to be identically zero. The proof is complete. □
Remarks. 1°) In the case of BM, the end of the proof could also be based on the triviality of the asymptotic σ-field (see Exercise (2.28) Chap. III). The details are left to the reader as an exercise.
2°) In the case of BM¹, one can give a proof of the above result using only the law of large numbers (see Exercise (3.16)).
3°) If the invariant measure m is bounded, as is for instance the case for the OU process (see Exercise (1.13) in Chap. III), then constant functions are integrable and, taking C_t = t in the above result, we get

    lim_{t→∞} A_t/t = ‖ν_A‖/m(E)    a.s.

Thus in that case the additive functionals increase like t. When m(E) = ∞, it would be interesting to describe the speed at which additive functionals increase to infinity. This is tackled for BM² in the following section and treated in the case of BM¹ in Sect. 2 Chap. XIII. (See also the Notes and Comments.)
4°) Some caution must be exercised when applying the above result to occupation times because there are integrable functions f such that ∫_0^t f(X_s) ds is not an additive functional; one may have ∫_0^t f(X_s) ds = ∞ for every t > 0, P_x-a.s. for x in a polar set (see Exercise (3.17)). The reader will find in Exercise (2.6) of Chap. XI how to construct examples of such functions in the case of BM² (see also Exercise (2.31) above in the case of linear BM). The limit-quotient theorem will then be true P_x-a.s. for x outside a polar set. If f is bounded, the result is true for ∫_0^t f(X_s) ds without qualification.
(3.13) Exercise. Under the assumptions of this section, if X is Harris then for every A ∈ ℰ either U(·, A) ≡ ∞ or U(·, A) ≡ 0. Prove moreover that all cooptional times are equal to 0 a.s.
[Hint: See Exercise (4.14) in Chap. VII.]
(3.14) Exercise. Suppose X is Harris with invariant measure m and that the hypotheses of this section are in force.
1°) Prove that m is equivalent to U^α(x, ·) for every α > 0 and x ∈ E.
2°) Prove that m is the unique (up to multiplication by a constant) excessive measure for X.
[Hint: Prove that an invariant measure is equivalent to m, then use the limit-quotient theorem.]
# (3.15) Exercise. Let X be a Harris process, A an integrable additive functional of X and C a σ-integrable but not integrable additive functional; prove that

    lim_{t→∞} A_t/C_t = 0    a.s.

[Hint: Use the ergodic theorem for A and 1_F·C where ν_C(F) < ∞.]
(3.16) Exercise. For the linear BM and for a < b define

    T_1 = T_b + T_a ∘ θ_{T_b}, ..., T_n = T_{n-1} + T_1 ∘ θ_{T_{n-1}}, ...

1°) Let A be a continuous additive functional. Prove that the r.v.'s Z_n = A_{T_n} − A_{T_{n-1}}, n > 1, are independent and identically distributed under every P_x, x ∈ ℝ. If ‖ν_A‖ < ∞, prove that the Z_n's are P_x-integrable.
[Hint: For this last fact, one can consider the case of local times and use the results in Sect. 4 Chap. VI.]
2°) Applying the law of large numbers to the variables Z_n, prove Theorem (3.12).
[Hint: Prove that A_t/inf{n : T_n ≥ t} converges as t goes to infinity, then use quotients.]
3°) Extend the above pattern of proof to recurrent linear Markov processes.
(3.17) Exercise. Let X be Harris and f be positive and m-integrable. Prove that ∫_0^t f(X_s) ds < ∞ P_x-a.s. for every t > 0 and for every x outside a polar set. That this result cannot be improved is shown in Exercise (2.6) Chap. XI.
* (3.18) Exercise. In the setting of Theorem (3.12), prove that

    lim_{t→∞} A_t/C_t = ‖ν_A‖/‖ν_C‖    P_x-a.s.

for m-almost every x.
[Hint: Prove that for each a > 0, P_a is a conservative contraction of L¹(m) and apply the Chacon-Ornstein theorem.]
(3.19) Exercise. We retain the situation of Exercise (2.22) 2°) and we put ν_A = ν_{A⁺} − ν_{A⁻}.
1°) If ν_A is bounded, ν_A(1) = 0 and ∫ |x| |ν_A|(dx) < ∞, prove that f is bounded and f′ is in L¹ ∩ L² of the Lebesgue measure.
[Hint: This question is solved in Sect. 2 Chap. XIII.]
2°) Under the hypothesis of 1°), prove that there is a constant C such that

    |E_x[A_T]| ≤ C

for every point x and stopping time T such that E_x[T] < ∞.
3°) If A^i, i = 1, 2, are positive integrable additive functionals of BM¹ such that ‖ν_{A^i}‖ > 0 and ∫ |x| ν_{A^i}(dx) < ∞, then for any probability measure μ on ℝ,

    lim_{t→∞} E_μ[A_t^1]/E_μ[A_t^2] = ‖ν_{A^1}‖/‖ν_{A^2}‖.

The results in 2°) and 3°) are strengthenings of the result in the preceding exercise.
*# (3.20) Exercise. 1°) Let c be a positive real number. On the Wiener space W^d, the transformation w → w(c·)/√c is measurable and leaves the Wiener measure W invariant. By applying Birkhoff's theorem to this transformation, prove that for d ≥ 3,

    lim_{t→∞} (1/log t) ∫_1^t |B_s|^{-2} ds = 1/(d − 2)    W-a.s.

[Hint: To prove that the limit provided by Birkhoff's theorem is constant, use the 0-1 law for processes with independent increments. The value of the constant may be computed by elementary means or derived from Exercise (4.23) in Chap. IV.]
2°) Prove the companion central-limit theorem to the above a.s. result, namely that, in distribution,

    lim_{t→∞} √(log t) ((log t)^{-1} ∫_1^t |B_s|^{-2} ds − (d − 2)^{-1}) = N,

where N is a Gaussian r.v.
[Hint: Use the methods of Exercise (4.23) in Chap. IV.]
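The value of the constant in 1°) can at least be checked in the mean (a sketch, not part of the exercise): by scaling, |B_s|² has the law of s|B_1|² with |B_1|² a χ²_d variable, and E[1/χ²_d] = 1/(d − 2) for d ≥ 3, so that (1/log t)∫_1^t E|B_s|^{-2} ds = (d − 2)^{-1} for every t. A Monte Carlo verification for d = 5 (sample size arbitrary; d = 5 avoids the infinite variance of 1/χ²_3):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# |B_s|^2 =(d) s * |B_1|^2 with |B_1|^2 ~ chi^2_d, so
# E|B_s|^-2 = 1/((d-2) s) and (1/log t) int_1^t E|B_s|^-2 ds = 1/(d-2).
z = rng.standard_normal((200_000, d))
inv_sq_norm = 1.0 / (z ** 2).sum(axis=1)   # samples of 1/chi^2_d
est = inv_sq_norm.mean()
print(est, 1 / (d - 2))                    # both close to 1/3
```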
§4. Asymptotic Results for the Planar Brownian Motion
This section is devoted to some asymptotic results for functionals of BM². In particular, it gives a partial answer to the question raised in Remark 3°) at the end of the previous section. We use the skew-product representation of BM² described in Theorem (2.11) of Chap. V and the notation thereof, and work with the probability measure P_z for z ≠ 0.
(4.1) Theorem (Spitzer). As t converges to infinity, 2θ_t/log t converges in distribution to a Cauchy variable with parameter 1.
Proof. Because of the geometric and scaling invariance properties of BM, we may assume that z = 1. For r > 1, define σ_r = inf{u : |Z_u| = r} and, for a > 0, T_a = inf{t > 0 : β_t = a}. From the representation theorem recalled above, it follows that C_{σ_r} = T_{log r}. As a result,

    θ_{σ_r} = γ_{C_{σ_r}} = γ_{T_{log r}} ᵈ= (log r) γ_{T_1},

the last equality being a consequence of the independence of β and γ and of the scaling properties of T_a and γ (Proposition (3.10), Chap. III). Therefore, for every r > 1,

    (1/log r) θ_{σ_r} ᵈ= C,

where C is a Cauchy variable with parameter 1; this can alternatively be written as
    (2/log t) θ_{σ_√t} ᵈ= C.

We will be finished if we prove that (θ_t − θ_{σ_√t})/log t converges to zero in probability.
We have

    θ_t − θ_{σ_√t} = Im ∫_{σ_√t}^t dZ_s/Z_s = Im ∫_{σ_√t}^t dẐ_s/(1 + Ẑ_s),

where Ẑ_s = Z_s − 1 is a BM²(0). Setting Z̃_s = Ẑ_{ts}/√t, we get

    θ_t − θ_{σ_√t} = Im ∫_{t^{-1}σ_√t}^1 dZ̃_s/((1/√t) + Z̃_s).

Let Z′ be a BM²(0) fixed once and for all. Since

    σ_√t = inf{u : |1 + Ẑ_u| = √t} ᵈ= t·inf{u : |(1/√t) + Z′_u| = 1},

we have

    θ_t − θ_{σ_√t} ᵈ= Im ∫_{V_t}^1 dZ′_s/((1/√t) + Z′_s)

where V_t = inf{u : |(1/√t) + Z′_u| = 1}.
We leave as an exercise to the reader the task of showing that V_t converges in probability to σ′_1 = inf{u : |Z′_u| = 1} as t goes to infinity. It then follows from Exercise (1.25) in Chap. VIII that the last displayed integral converges in probability to the a.s. finite r.v. Im ∫_{σ′_1}^1 dZ′_s/Z′_s, which completes the proof. □
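The distributional ingredient used above, namely that γ_{T_1} is a standard Cauchy variable, can be checked by simulation without discretizing any path (a sketch, assuming only standard facts): T_1 ᵈ= 1/N² for a standard normal N (scaling of hitting times), and, γ being independent of β, conditionally on T_1 one has γ_{T_1} ᵈ= √T_1·N′; hence γ_{T_1} ᵈ= N′/|N|, a ratio of independent normals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# T_1 =(d) 1/N^2 and gamma_{T_1} =(d) sqrt(T_1) * N' for independent
# standard normals N, N'; so gamma_{T_1} =(d) N'/|N|.
n1 = rng.standard_normal(n)
n2 = rng.standard_normal(n)
sample = n2 / np.abs(n1)

# A standard Cauchy variable C satisfies P(|C| <= 1) = 1/2 and has median 0.
p_in = np.mean(np.abs(sample) <= 1.0)
med = np.median(sample)
print(p_in, med)
```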
We will now go further by studying the windings of Z according as Z is close to or far away from 0. More precisely, pick a real number r > 0 and set

    θ_t^0 = ∫_0^t 1_{(|Z_s| ≤ r)} dθ_s,    θ_t^∞ = ∫_0^t 1_{(|Z_s| > r)} dθ_s.

We want to study the asymptotic behavior of the pair (θ_t^0, θ_t^∞) and we will actually treat a more general problem. Let φ be a bounded and positive function on ℝ; we assume that m_1(φ) = ∫_ℝ φ(x) dx < ∞. We set

    A_t = ∫_0^t |Z_s|^{-2} φ(log |Z_s|) ds.

This may fail to be an additive functional because A_t may be infinite P_0-a.s. for every t > 0, but the equality A_{t+s} = A_t + A_s ∘ θ_t holds P_z-a.s. for z ≠ 0 (see Remark 4°) below Theorem (3.12)). It is moreover integrable in the sense that E_m[A_1] = 2π m_1(φ), as is easily checked using the skew-product representation.
(4.2) Theorem. Under P_z, z ≠ 0, the 3-dimensional family of r.v.'s

    (2/log t) (θ_t^0, θ_t^∞, A_t)

converges in distribution, as t converges to ∞, to

    (∫_0^{T_1} 1_{(β_s ≤ 0)} dγ_s, ∫_0^{T_1} 1_{(β_s ≥ 0)} dγ_s, m_1(φ) L^0_{T_1})

where (β, γ) is a standard planar BM starting at 0, L^0 is the local time of β at 0 and T_1 = inf{t : β_t = 1}.
In the following proof, as well as in similar questions treated in Chap. XIII, we will make extensive use of the scaling transformations. If B is a BM and a > 0, we will denote by B^{(a)} the BM a^{-1}B_{a²t}, and anything related to B^{(a)} will sport the superscript (a); when a = 1, we have B^{(1)} = B and we drop the (1). For instance

    T_1^{(a)} = inf{t : B_t^{(a)} = 1} = a^{-2} T_a.
Proof. Again we may assume that z = 1 and, as in the above proof, we look at the given process at time σ_t. By the time-changes and properties already used in the previous proof, we get

    θ^0_{σ_t} = ∫_0^{σ_t} 1_{(log|Z_s| ≤ log r)} dγ_{C_s} = ∫_0^{C_{σ_t}} 1_{(β_s ≤ log r)} dγ_s = ∫_0^{T_{log t}} 1_{(β_s ≤ log r)} dγ_s;

setting a = log t, we get

    (log t)^{-1} θ^0_{σ_t} = ∫_0^{a^{-2}T_a} 1_{(β_{a²s} ≤ log r)} dγ_s^{(a)} = ∫_0^{T_1^{(a)}} 1_{(β_s^{(a)} ≤ a^{-1} log r)} dγ_s^{(a)}.

The same computation will yield

    (log t)^{-1} θ^∞_{σ_t} = ∫_0^{T_1^{(a)}} 1_{(β_s^{(a)} ≥ a^{-1} log r)} dγ_s^{(a)}.

We turn to the third term for which we have

    (log t)^{-1} A_{σ_t} = a^{-1} ∫_0^{T_a} φ(β_s) ds = a ∫_0^{T_1^{(a)}} φ(a β_s^{(a)}) ds.

Consequently, (log t)^{-1}(θ^0_{σ_t}, θ^∞_{σ_t}, A_{σ_t}) has the same law as

    (∫_0^{T_1} 1_{(β_s ≤ a^{-1} log r)} dγ_s, ∫_0^{T_1} 1_{(β_s ≥ a^{-1} log r)} dγ_s, a ∫_0^{T_1} φ(aβ_s) ds)
where (β, γ) is a planar standard BM and T_1 = inf{t : β_t = 1}. The first two terms converge in probability thanks to Theorem (2.12) in Chap. IV; as to the third, introducing L^x_{T_1}, the local time of β at x up to time T_1, and using the occupation times formula, it is equal to

    a ∫ φ(ax) L^x_{T_1} dx = ∫ φ(y) L^{y/a}_{T_1} dy,

which converges a.s. to m_1(φ) L^0_{T_1} by dominated convergence. Thus we have proved that (log t)^{-1}(θ^0_{σ_t}, θ^∞_{σ_t}, A_{σ_t}) converges in distribution to the limit appearing in the statement.
Furthermore, as in the preceding proof, we have

    P-lim_{t→∞} (2/log t)(θ_t^0 − θ^0_{σ_√t}) = P-lim_{t→∞} (2/log t)(θ_t^∞ − θ^∞_{σ_√t}) = 0.
Also,

    (2/log t)(A_t − A_{σ_√t}) = (2/log t) ∫_{σ_√t}^t |Z_s|^{-2} φ(log|Z_s|) ds
        ≤ (2‖φ‖_∞/log t) ∫_{σ_√t}^t |Z_s|^{-2} ds = (2‖φ‖_∞/log t) ∫_{t^{-1}σ_√t}^1 |Z̃_s|^{-2} ds,

where Z̃_s = t^{-1/2} Z_{ts}, and this converges to zero in probability; indeed, as in the end of the proof of Theorem (4.1), the last integral converges in law to ∫_{σ′_1}^1 |Z′_s|^{-2} ds. □
Remark. It is noteworthy that the limiting expression does not depend on r. If, in particular, we make r = 1 and if we put together the expressions for θ^0_{σ_√t} and θ^∞_{σ_√t} given at the beginning of the proof and the fact that

    (2/log t)((θ_t^0, θ_t^∞) − (θ^0_{σ_√t}, θ^∞_{σ_√t}))

converges to zero in probability, we have proved that

    (2/log t){(θ_t^0, θ_t^∞) − (∫_0^{T_{(1/2) log t}} 1_{(β_s ≤ 0)} dγ_s, ∫_0^{T_{(1/2) log t}} 1_{(β_s ≥ 0)} dγ_s)}

converges to zero in probability, a fact which will be used in Sect. 3 Chap. XIII.
We now further analyse the foregoing result by computing the law of the limit, which we will denote by (W^−, W^+, Λ). This triplet takes its values in ℝ² × ℝ_+.

(4.3) Proposition. If m_1(φ) = 1, for a > 0, (b, c) ∈ ℝ²,

    E[exp(−aΛ + ibW^− + icW^+)] = f(2a + |b|, c)

where f(u, c) = (cosh c + (u/c) sinh c)^{-1} for c ≠ 0, f(u, 0) = (1 + u)^{-1}.
Proof. By conditioning with respect to β, we get

    E[exp(−aΛ + ibW^− + icW^+)] = E[exp(−H_{T_1})],

where

    H_t = aL_t^0 + (b²/2) ∫_0^t 1_{(β_s ≤ 0)} ds + (c²/2) ∫_0^t 1_{(β_s ≥ 0)} ds.

The integrand exp(−H_{T_1}) now involves only β and the idea is to find a function F such that F(β_t) exp(−H_t) is a suitable local martingale.
With the help of Tanaka's formula, the problem is reduced to solving the equation

    F″ = (2aδ_0 + b² 1_{(x ≤ 0)} + c² 1_{(x ≥ 0)}) F

in the sense of distributions. We take

    F(x) = exp(|b|x) 1_{(x < 0)} + (cosh(cx) + ((2a + |b|)/c) sinh(cx)) 1_{(x ≥ 0)}.

Stopped at T_1, the local martingale F(β_t) exp(−H_t) is bounded. Thus, we can apply the optional stopping theorem, which yields the result. □
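The choice of F can be checked by finite differences (a sketch with arbitrarily chosen parameter values, not part of the proof): away from 0 it must satisfy F″ = b²F on (−∞, 0) and F″ = c²F on (0, ∞), while the 2aδ_0 term forces the derivative jump F′(0+) − F′(0−) = 2aF(0).

```python
import numpy as np

a, b, c = 0.7, 1.3, 0.9   # arbitrary sample parameters

def F(x):
    # F(x) = e^{|b|x} for x < 0, cosh(cx) + ((2a+|b|)/c) sinh(cx) for x >= 0
    x = np.asarray(x, dtype=float)
    neg = np.exp(abs(b) * x)
    pos = np.cosh(c * x) + ((2 * a + abs(b)) / c) * np.sinh(c * x)
    return np.where(x < 0, neg, pos)

h = 1e-5
checks = []
for x0, k2 in [(-2.0, b * b), (1.5, c * c)]:
    # second difference approximates F''; F''/F should equal b^2 resp. c^2
    second = (F(x0 + h) - 2 * F(x0) + F(x0 - h)) / h**2
    checks.append((float(second / F(x0)), k2))
    print(second / F(x0), k2)

# one-sided difference quotients approximate F'(0+) and F'(0-);
# their difference should equal 2a * F(0) = 2a
jump = (F(h) - F(0)) / h - (F(0) - F(-h)) / h
print(jump, 2 * a * F(0))
```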
(4.4) Corollary. i) The r.v. Λ = L^0_{T_1} is exponentially distributed with parameter 1/2.
ii) Conditionally on Λ, the r.v.'s W^− and W^+ are independent, W^− is a Cauchy variable with parameter Λ/2 and the characteristic function of W^+ is equal to (c/sinh c) exp(−(Λ/2)(c coth c − 1)).
iii) The density of W^+ is equal to (2 cosh(πx/2))^{-1}.
Proof. The proof of i) is straightforward. It is also proved independently in Proposition (4.6) of Chap. VI.
To prove ii), set f_{b,c}(Λ) = (c/sinh c) exp(−(Λ/2)(c coth c − 1 + |b|)) and compute E[exp(−aΛ) f_{b,c}(Λ)]. Using the law of Λ found in i), this is easily seen to be equal to

    (c/sinh c)(1 + 2a + c coth c − 1 + |b|)^{-1} = f(2a + |b|, c)

where f is the same as in Proposition (4.3). As a is an arbitrary positive number, it follows that

    E[exp(ibW^− + icW^+) | Λ] = f_{b,c}(Λ),

which proves ii).
Finally, the proof of iii) is a classical Fourier transform computation (see Sect. 6 Chap. 0).
Remark. The independence in ii) has the following intuitive meaning. The r.v. Λ accounts for the time spent on the boundary of the disk or for the number of times the process crosses this boundary. Once this is known, what occurs inside the disk is independent of what occurs outside. Moreover, the larger the number of these crossings, the larger in absolute value the winding number tends to be.
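The Fourier computation behind iii) can be verified numerically (a sketch, with truncation and step chosen ad hoc): integrating out Λ in ii) gives E[exp(icW^+)] = 1/cosh c, whose inversion integral can be evaluated by a plain Riemann sum and compared with the stated density.

```python
import numpy as np

# E[exp(icW+)] = 1/cosh(c); Fourier inversion should give the density
# (2 cosh(pi x / 2))^{-1}.  sech decays exponentially, so truncating the
# integral at |c| = 60 and using a Riemann sum is accurate.
c = np.linspace(-60.0, 60.0, 600_001)
dc = c[1] - c[0]

vals = []
for x in (0.0, 0.5, 1.7):
    inv = (np.cos(c * x) / np.cosh(c)).sum() * dc / (2 * np.pi)
    want = 1 / (2 * np.cosh(np.pi * x / 2))
    vals.append((inv, want))
    print(inv, want)
```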
We finally observe that, since we work with z ≠ 0, by Remark 4°) at the end of the preceding section, if G is an integrable additive functional and if m_1(φ) > 0, we have lim(G_t/A_t) = ‖ν_G‖/2π m_1(φ) P_z-a.s. As a result we may use G instead of A in the above results and get

(4.5) Corollary. If G is any integrable additive functional,

    2(log t)^{-1} (θ_t^0, θ_t^∞, G_t)

converges in law under P_z to (W^−, W^+, (2π)^{-1}‖ν_G‖Λ) as t goes to infinity.
In Theorem (4.2) we were interested in the imaginary parts of

    (2/log t) (∫_0^t 1_{(|Z_s| ≤ r)} dZ_s/Z_s, ∫_0^t 1_{(|Z_s| > r)} dZ_s/Z_s).

For later needs, it is also worth recording the asymptotic behavior of the real parts. We set

    N_t^0 = Re ∫_0^t 1_{(|Z_s| ≤ r)} dZ_s/Z_s,    N_t^∞ = Re ∫_0^t 1_{(|Z_s| > r)} dZ_s/Z_s.

With the same notation as above, we have the
(4.6) Proposition. As t converges to infinity, 2(log t)^{-1}(N_t^0, N_t^∞, A_t) converges in distribution to (Λ/2, 1 − Λ/2, m_1(φ)Λ).

Proof. The same pattern of proof as in Theorem (4.2) leads to the convergence of the law of 2(log t)^{-1}(N_t^0, N_t^∞, A_t) towards that of

    (∫_0^{T_1} 1_{(β_s ≤ 0)} dβ_s, ∫_0^{T_1} 1_{(β_s ≥ 0)} dβ_s, m_1(φ) L^0_{T_1}).

Thus the result follows immediately from Tanaka's formula. □
(4.7) Exercise. Deduce Theorem (4.1) from Theorem (4.2).

(4.8) Exercise. With the notation of Theorem (4.1), prove that X_u = θ(σ_{exp(u)}) and Y_u = θ(σ_{exp(−u)}) are two Cauchy processes.
* (4.9) Exercise. Prove Theorem (4.1) as a corollary to Proposition (3.8) in Chap. VIII.
[Hint: Use the explicit expressions for the density of P_t and make the change of variable ρ = u√t.]
* (4.10) Exercise. Let B be a BM²(0) and call θ̃_t, t > 0, a continuous determination of arg(B_t), t > 0. Prove that, as t converges to 0, the r.v.'s 2θ̃_t/log t converge in distribution to the Cauchy r.v. with parameter 1.
[Hint: By scaling or time-inversion, (θ̃_1 − θ̃_t) ᵈ= (θ̃_{1/t} − θ̃_1).]
* (4.11) Exercise (Another proof of Theorem (4.1)). 1°) With the notation of Theorems (4.1) and (4.2), prove that
    a^{-2} C_t = inf{u : a² ∫_0^u exp(2a β_s^{(a)}) ds > t}.
2°) Using the Laplace method, prove that for a fixed BM, say B,

    lim_{a→∞} ((2a)^{-1} log (∫_0^u exp(2aB_s) ds) − sup_{0≤s≤u} B_s) = 0

holds a.s. for every u (see Exercise (1.18) Chap. I).
3°) Prove that for a = log t/2,

    P-lim_{t→∞} (a^{-2} C_t − T_1^{(a)}) = 0.

[Hint: The processes β^{(a)} all have the law of B, which shows that the convergence holds in law.]
4°) Give another proof of Theorem (4.1) based on the result in 3°).
* (4.12) Exercise. Let Z be a BM²(1) and θ be the continuous determination of arg Z such that θ_0 = 0. Set T_n = inf{t : |Z_t| ≥ n}.
1°) If τ = inf{t : θ_t > 1}, prove that

    lim_{n→∞} (log n) P[τ > T_n]

exists.
[Hint: P[τ > T_n] = P[C_τ > C_{T_n}].]
2°) If T = inf{t : |θ_t| > 1}, prove that P[T > T_n] = O(1/n).
Notes and Comments
Sect. 1. The basic reference for additive functionals is the book of Blumenthal and Getoor [1] from which most of our proofs are borrowed. There the reader will find, for instance, the proof of the strong additivity property of additive functionals and an account of the history of the subject. Our own exposition is kept to the minimum which is necessary for the asymptotic results of this chapter and of Chap. XIII. It gives no inkling of the present-day state of the art, for which we recommend the book of Sharpe [3].
The extremal process of Exercise (1.9) is studied in Dwass [1] and Resnick [1]. It appears as the limit process in some asymptotic results, as for instance in Watanabe [2], where one also finds the matter of Exercise (1.10).
Exercise (1.11) is actually valid in a much more general context as described in Chap. V of Blumenthal and Getoor [1]. If X is a general strong Markov process, and if a point x is regular for itself (i.e. x is regular for {x} as defined in Exercise (2.24) of Chap. III), it can be proved that there exists an additive functional A
such that the measure dA_t is a.s. carried by the set {t : X_t = x}. This additive functional, which is unique up to multiplication by a constant, is called the local time of X at x. Thus, for a Markov process which is also a semimartingale, as is the case for BM, we have two possible definitions of local times. A profound study of the relationships between Markov processes and semi-martingales was undertaken by Çinlar et al. [1].
Exercise (1.14) is from Barlow-Yor ([1] and [2]), the method hinted at being from Bass [2] and B. Davis [5].
Exercise (1.16) is closely linked to Exercise (4.16) of Chap. VII. The interested reader will find several applications of both exercises in Jeulin-Yor [3] and Yor [16]. For an update on the subject the reader is referred to Fitzsimmons et al. [1], who in particular work with less stringent hypotheses.
Sect. 2. This section is based on Revuz [1]. The representation theorem (2.9) is valid for every process having a local time at each point, for instance the linear Markov processes of Sect. 3 Chap. VII. It was originally proved in the case of BM in Tanaka [1]. Some of its corollaries are due to Wang [2]. The proof that all the additive functionals of BM are continuous may be found in Blumenthal-Getoor [1].
For BM^d, d > 1, there is no result as simple as (2.9), precisely because for d > 1 the one-point sets are polar and there are no local times. For what can nonetheless be said, the reader may consult Brosamler [1] (see also Meyer [7]) and Bass [1].
Exercise (2.23) is taken from Çinlar et al. [1]. The skew Brownian motion of Exercises (2.24) and (2.30) is studied in Harrison-Shepp [1], Walsh [3] and Barlow [4]. Walsh's multivariate generalization of the skew Brownian motion is studied by Barlow et al. [2]. Exercise (2.31) is due to Engelbert-Schmidt [1].
Sect. 3. Our exposition is based on Azéma et al. [1] (1967) and Revuz [2], but the limit-quotient theorem had been known for a long time in the case of BM¹ (see Itô-McKean [1]) and BM², for which it was proved by Maruyama and Tanaka [1]. For the results of ergodic theory used in this section see for instance Krengel [1], Neveu [4] or Revuz [3].
Exercise (3.18) is from Azéma et al. [1] (1967) and Exercise (3.19) from Revuz [4]. Incidentally, let us mention the

Question 1. Can the result in Exercise (3.19) be extended to all Harris processes?

Exercise (3.20) is taken from Yor [17].
Sect. 4. Theorem (4.1) was proved by Spitzer [1] as a consequence of his explicit computation of the distribution of θ_t. The proof presented here, as well as the proof of Theorem (4.2), is taken from Messulam and Yor [1], who followed an idea of Williams [4] with an improvement of Pitman-Yor [5]. A variant of this proof based on Laplace's method is given in Exercise (4.11); this variant was used by Durrett [1] and Le Gall-Yor [2]. The almost-sure asymptotic behavior of winding numbers has been investigated by Bertoin-Werner ([1], [2]) and Shi [1].
The asymptotic property of additive functionals which is part of Theorem (4.2) was first proved by Kallianpur and Robbins [1]. This kind of result is proved for BM¹ in Sect. 2 Chap. XIII; for more general recurrent Markov processes we refer to Darling-Kac [1], Bingham [1] and the series of papers by Kasahara ([1], [2] and [3]).
The formula given in Proposition (4.3) may be found in the literature in various disguises; it is clearly linked to P. Lévy's formula for the stochastic area, and the reader is referred to Williams [5], Azéma-Yor [2] and Jeulin-Yor [3]. For more variations on Lévy's formula see Biane-Yor [2] and Duplantier [1], which contains many references.
Chapter XI. Bessel Processes
and Ray-Knight Theorems
§ 1. Bessel Processes
In this section, we take up the study of Bessel processes which was begun in Sect. 3 of Chap. VI and we use the notation thereof. We first make the following remarks.
If B is a BM^δ and we set ρ = |B|, Itô's formula implies that

    ρ_t² = ρ_0² + 2 ∑_{i=1}^δ ∫_0^t B_s^i dB_s^i + δt.

For δ > 1, ρ_t is a.s. > 0 for t > 0 and for δ = 1 the set {s : ρ_s = 0} has a.s. zero Lebesgue measure, so that in all cases we may consider the process

    β_t = ∑_{i=1}^δ ∫_0^t (B_s^i/ρ_s) dB_s^i

which, since ⟨β, β⟩_t = t, is a linear BM; therefore ρ² satisfies the SDE

    ρ_t² = ρ_0² + 2 ∫_0^t ρ_s dβ_s + δt.
For any real $\delta \ge 0$ and $x \ge 0$, let us consider the SDE

$$Z_t = x + 2\int_0^t \sqrt{|Z_s|}\,d\beta_s + \delta t.$$

Since $\left|\sqrt{z} - \sqrt{z'}\right| \le \sqrt{|z - z'|}$ for $z, z' \ge 0$, the results of Sect. 3 in Chap. IX
apply. As a result, for every $\delta$ and $x$, this equation has a unique strong solution.
Furthermore, as for $\delta = x = 0$ this solution is $Z_t \equiv 0$, the comparison theorems
ensure that in all cases $Z_t \ge 0$ a.s. Thus the absolute value in the above SDE may
be discarded.
(1.1) Definition. For every $\delta \ge 0$ and $x \ge 0$, the unique strong solution of the equation

$$Z_t = x + 2\int_0^t \sqrt{Z_s}\,d\beta_s + \delta t$$

is called the square of $\delta$-dimensional Bessel process started at $x$ and is denoted by
$\mathrm{BESQ}^\delta(x)$. The number $\delta$ is the dimension of $\mathrm{BESQ}^\delta$.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
The law of $\mathrm{BESQ}^\delta(x)$ on $C(\mathbb{R}_+, \mathbb{R})$ is denoted by $Q^\delta_x$. We will also use the
number $\nu = (\delta/2) - 1$, which is called the index of the corresponding process, and
write $\mathrm{BESQ}^{(\nu)}$ instead of $\mathrm{BESQ}^\delta$ if we want to use $\nu$ instead of $\delta$, and likewise
$Q^{(\nu)}_x$. We will use $\nu$ and $\delta$ in the same statements, it being understood that they
are related by the above equation.
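In computational terms, the defining SDE lends itself to a naive Euler scheme. The sketch below checks the first moment $Q^\delta_x[X_t] = x + \delta t$; the step count, path count and the clipping of $Z$ at 0 before taking the square root are illustrative numerical choices of ours, not part of the theory.

```python
import math
import random

def simulate_besq(x, delta, t, n_steps, rng):
    """Euler scheme for dZ = 2*sqrt(Z) dB + delta*dt, Z_0 = x (Definition (1.1))."""
    dt = t / n_steps
    z = x
    for _ in range(n_steps):
        db = rng.gauss(0.0, math.sqrt(dt))
        z += 2.0 * math.sqrt(max(z, 0.0)) * db + delta * dt
        z = max(z, 0.0)          # the exact solution is nonnegative
    return z

rng = random.Random(42)
x, delta, t = 1.0, 2.0, 1.0
n_paths = 4000
mean = sum(simulate_besq(x, delta, t, 200, rng) for _ in range(n_paths)) / n_paths
# mean should be close to x + delta * t = 3
```

The clipping at 0 mirrors the comparison-theorem argument above: the exact solution stays nonnegative, but the discretized one need not.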
We have thus defined a one-parameter family of processes which, for integer
dimensions, coincides with the squared modulus of $\mathrm{BM}^\delta$. For every $t$ and every
$a \ge 0$, the map $x \to Q^\delta_x[X_t \ge a]$, where $X$ is the coordinate process, is increasing,
thanks to the comparison theorems, hence Borel measurable. By the monotone
class theorem, it follows that $x \to Q^\delta_x[X_t \in A]$ is Borel measurable for every
Borel set $A$. By Theorem (1.9) in Chap. IX, these processes are therefore Markov
processes. They are actually Feller processes, which will be a corollary of the
following additivity property of the family $\mathrm{BESQ}^\delta$. If $P$ and $Q$ are two probability
measures on $C(\mathbb{R}_+, \mathbb{R})$, we shall denote by $P * Q$ the convolution of $P$ and $Q$,
that is, the image of $P \otimes Q$ on $C(\mathbb{R}_+, \mathbb{R})^2$ by the map $(w, w') \to w + w'$. With this
notation, we have the following result, which is obvious for integer dimensions.
(1.2) Theorem. For every $\delta, \delta' \ge 0$ and $x, x' \ge 0$,

$$Q^\delta_x * Q^{\delta'}_{x'} = Q^{\delta+\delta'}_{x+x'}.$$
Proof. For two independent linear BM's $\beta$ and $\beta'$, call $Z$ and $Z'$ the corresponding
two solutions for $(x, \delta)$ and $(x', \delta')$, and set $X = Z + Z'$. Then

$$X_t = x + x' + 2\int_0^t \left(\sqrt{Z_s}\,d\beta_s + \sqrt{Z'_s}\,d\beta'_s\right) + (\delta + \delta')t.$$

Let $\sigma$ be a third BM independent of $\beta$ and $\beta'$. The process $\gamma$ defined by

$$\gamma_t = \int_0^t 1_{(X_s>0)}\,\frac{\sqrt{Z_s}\,d\beta_s + \sqrt{Z'_s}\,d\beta'_s}{\sqrt{X_s}} + \int_0^t 1_{(X_s=0)}\,d\sigma_s$$

is a linear BM since $\langle\gamma,\gamma\rangle_t = t$, and we have

$$X_t = (x + x') + 2\int_0^t \sqrt{X_s}\,d\gamma_s + (\delta + \delta')t,$$

which completes the proof.
Remark. The family $Q^\delta_x$ is not the only family with this property, as is shown in
Exercise (1.13).
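Anticipating the Laplace transform of the one-dimensional marginals computed below, $Q^\delta_x[\exp(-\lambda X_t)] = (1+2\lambda t)^{-\delta/2}\exp(-\lambda x/(1+2\lambda t))$, the additivity of Theorem (1.2) can be verified numerically at the level of marginals; a minimal sketch:

```python
import math

def besq_laplace(x, delta, t, lam):
    """Marginal Laplace transform Q_x^delta[exp(-lam * X_t)]."""
    return (1.0 + 2.0 * lam * t) ** (-delta / 2.0) \
        * math.exp(-lam * x / (1.0 + 2.0 * lam * t))

# Theorem (1.2) at the level of marginals: the transforms multiply.
for lam in (0.1, 1.0, 5.0):
    lhs = besq_laplace(1.0, 2.0, 0.7, lam) * besq_laplace(0.5, 3.0, 0.7, lam)
    rhs = besq_laplace(1.5, 5.0, 0.7, lam)
    assert abs(lhs - rhs) < 1e-12
```

The factorization is exact: both dimension and starting point simply add in the exponents.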
(1.3) Corollary. If $\mu$ is a measure on $\mathbb{R}_+$ such that $\int_0^\infty(1 + t)\,d\mu(t) < \infty$, there
exist two numbers $A_\mu$ and $B_\mu > 0$ such that

$$Q^\delta_x\left[\exp\left(-\tfrac{1}{2}\int_0^\infty X_t\,d\mu(t)\right)\right] = (A_\mu)^\delta\exp(-B_\mu x),$$

where $X$ is the coordinate process.
Proof. Let us call $\phi(x, \delta)$ the left-hand side. The hypothesis on $\mu$ entails that

$$\phi(x, \delta) \ge \exp\left(-\tfrac{1}{2}\,Q^\delta_x\left(\int_0^\infty X_t\,d\mu(t)\right)\right) = \exp\left(-\tfrac{1}{2}\int_0^\infty (x + \delta t)\,d\mu(t)\right) > 0.$$

Furthermore, from the theorem it follows easily that

$$\phi(x + x', \delta + \delta') = \phi(x, \delta)\,\phi(x', \delta'),$$

so that

$$\phi(x, \delta) = \phi(x, 0)\,\phi(0, \delta).$$

Each of the functions $\phi(\cdot, 0)$ and $\phi(0, \cdot)$ is multiplicative and equal to 1 at 0.
Moreover, they are monotone, hence measurable. The result follows immediately.
D
By making $\mu = 2\lambda\varepsilon_t$, we get the Laplace transform of the transition function
of $\mathrm{BESQ}^\delta$. We need the corresponding values of $A_\mu$ and $B_\mu$, which we compute
by taking $\delta = 1$. We then have, for $\lambda > 0$,

$$Q^1_x\left[\exp(-\lambda X_t)\right] = Q^1_x\left[\exp\left(-\lambda\int_0^\infty X_s\,\varepsilon_t(ds)\right)\right] = E_{\sqrt{x}}\left[\exp\left(-\lambda B_t^2\right)\right]$$

where $B$ is $\mathrm{BM}^1$. This is easily computed and found equal to

$$(1 + 2\lambda t)^{-1/2}\exp\left(-\lambda x/(1 + 2\lambda t)\right).$$

As a result,

$$Q^\delta_x\left[\exp(-\lambda X_t)\right] = (1 + 2\lambda t)^{-\delta/2}\exp\left(-\lambda x/(1 + 2\lambda t)\right).$$
By inverting this Laplace transform, we get the
(1.4) Corollary. For $\delta > 0$, the semi-group of $\mathrm{BESQ}^\delta$ has a density in $y$ equal to

$$q^\delta_t(x, y) = \frac{1}{2t}\left(\frac{y}{x}\right)^{\nu/2}\exp\left(-(x + y)/2t\right) I_\nu\left(\sqrt{xy}/t\right), \qquad t > 0,\ x > 0,$$

where $\nu$ is the index corresponding to $\delta$ and $I_\nu$ is the Bessel function of index $\nu$.
For $x = 0$, this density becomes

$$q^\delta_t(0, y) = (2t)^{-\delta/2}\Gamma(\delta/2)^{-1}\,y^{\delta/2-1}\exp(-y/2t).$$

The semi-group of $\mathrm{BESQ}^0$ is given, for $x > 0$, by

$$Q^0_t(x, \cdot) = \exp(-x/2t)\,\varepsilon_0 + S_t(x, \cdot)$$

where $S_t(x, \cdot)$ has the density

$$q^0_t(x, y) = (2t)^{-1}(y/x)^{-1/2}\exp\left(-(x + y)/2t\right) I_1\left(\sqrt{xy}/t\right)$$

(recall that $I_1 = I_{-1}$).
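The density of Corollary (1.4) can be cross-checked against the normalization and the first moment $x + \delta t$. In the sketch below, $I_\nu$ is implemented by its power series truncated at a fixed order; the grid, truncation bound and parameter values are illustrative choices.

```python
import math

def bessel_i(nu, z, terms=40):
    """Modified Bessel function I_nu(z) via its power series; fine for moderate z."""
    return sum((z / 2.0) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1.0))
               for k in range(terms))

def besq_density(delta, t, x, y):
    """Transition density q_t^delta(x, y), x > 0, of Corollary (1.4); nu = delta/2 - 1."""
    nu = delta / 2.0 - 1.0
    return (1.0 / (2.0 * t)) * (y / x) ** (nu / 2.0) \
        * math.exp(-(x + y) / (2.0 * t)) * bessel_i(nu, math.sqrt(x * y) / t)

delta, x, t = 3.0, 2.0, 1.0
n, top = 30000, 45.0
h = top / n
ys = [1e-9 + i * h for i in range(n + 1)]
vals = [besq_density(delta, t, x, y) for y in ys]
mass = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))                 # trapezoid rule
mean = h * (sum(v * y for v, y in zip(vals, ys)) - 0.5 * vals[-1] * ys[-1])
# mass should be close to 1, mean close to x + delta * t = 5
```

The truncation of the integration range is harmless here because the density has an $\exp(-y/2t)$ tail.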
A consequence of these results is, as announced, that $\mathrm{BESQ}^\delta$ is a Feller process.
This may be seen either by using the value of the density or by observing
that for $f \in C_0([0, \infty[)$, $Q^\delta_x[f(X_t)]$ is continuous in both $x$ and $t$; this follows
from the special case $f(x) = \exp(-\lambda x)$ and the Stone-Weierstrass theorem. Thus
we may apply to these processes all the results in Chap. III. We proceed to a few
observations on their behavior.
The comparison theorems and the known facts about BM in the lower dimensions entail that:

(i) for $\delta \ge 3$, the process $\mathrm{BESQ}^\delta$ is transient and, for $\delta \le 2$, it is recurrent;

(ii) for $\delta \ge 2$, the set $\{0\}$ is polar and, for $\delta \le 1$, it is reached a.s. Furthermore,
for $\delta = 0$, $\{0\}$ is an absorbing point, since the process $X \equiv 0$ is then clearly
a solution of the SDE of Definition (1.1).
These remarks leave some gaps about the behavior of $\mathrm{BESQ}^\delta$ for small $\delta$. But
if we put

$$s_\nu(x) = -x^{-\nu} \quad\text{for } \nu > 0, \qquad s_0(x) = \log x, \qquad s_\nu(x) = x^{-\nu} \quad\text{for } \nu < 0,$$

and if $T$ is the hitting time of $\{0\}$, then by Itô's formula, $s_\nu(X_{t\wedge T})$ is a local
martingale under $Q^\delta_x$. In the language of Sect. 3 Chap. VII, the function $s_\nu$ is a
scale function for $\mathrm{BESQ}^\delta$, and by the reasonings of Exercise (3.21) therein, it
follows that for $0 \le \delta < 2$ the point $0$ is reached a.s.; likewise, the process is
transient for $\delta > 2$. It is also clear that the hypotheses of Sect. 3 Chap. VII are in
force for $\mathrm{BESQ}^\delta$ with $E = [0, \infty[$ if $\delta < 2$ and $E = \,]0, \infty[$ if $\delta \ge 2$. In the latter
case, $0$ is an entrance boundary; in the former, we have the
(1.5) Proposition. For $\delta = 0$, the point $0$ is absorbing. For $0 < \delta < 2$, the point $0$
is instantaneously reflecting.
Proof. The case $\delta = 0$ is obvious. For $0 < \delta < 2$, if $X$ is a $\mathrm{BESQ}^\delta$, it is a semimartingale
and by Theorem (1.7) Chap. VI we have, since obviously $L^{0-}(X) = 0$,

$$L^0_t(X) = 2\delta\int_0^t 1_{(X_s=0)}\,ds.$$

On the other hand, since $d\langle X, X\rangle_t = 4X_t\,dt$, the occupation times formula tells us
that

$$t \ge \int_0^t 1_{(0<X_s)}\,ds = \int_0^t 1_{(0<X_s)}(4X_s)^{-1}\,d\langle X, X\rangle_s = \int_0^\infty (4a)^{-1} L^a_t(X)\,da.$$

If $L^0(X)$ were not $\equiv 0$, we would have a contradiction. As a result, the time spent
by $X$ in $0$ has zero Lebesgue measure and by Corollary (3.13) Chap. VII, this
proves our claim.
Remarks. i) A posteriori, we see that the term $\int_0^t 1_{(X_s=0)}\,d\sigma_s$ in the proof of Theorem (1.2) is actually zero, with the exception of the case $\delta = \delta' = 0$.

ii) Of course, a $\mathrm{BESQ}^\delta$ is a semimartingale and, for $\delta \ge 2$, it is obvious that
$L^0(X) = 0$ since $0$ is polar.
This result also tells us that if we call $m_\nu$ the speed measure of $\mathrm{BESQ}^\delta$, then
for $\delta > 0$ we have $m_\nu(\{0\}) = 0$. To find $m_\nu$ on $]0, \infty[$, let us observe that by
Proposition (1.7) of Chap. VII, the infinitesimal generator of $\mathrm{BESQ}^\delta$ is equal on
$C^2_K(]0, \infty[)$ to the operator

$$2x\frac{d^2}{dx^2} + \delta\frac{d}{dx}.$$

By Theorem (3.14) in Chap. VII, it follows that, for the above choice of the
scale function, $m_\nu$ must be the measure with density with respect to the Lebesgue
measure equal to

$$x^\nu/2\nu \quad\text{for } \nu > 0, \qquad 1/2 \quad\text{for } \nu = 0, \qquad -x^\nu/2\nu \quad\text{for } \nu < 0.$$
The reader can check these formulas by straightforward differentiations or by using
Exercise (3.20) in Chap. VII.
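One such straightforward check can be mechanized: with $s_\nu(x) = -x^{-\nu}$ and the speed density $x^\nu/2\nu$ (case $\nu > 0$), the operator $\frac{d}{dm}\frac{d}{ds}$ should reproduce $2x\,d^2/dx^2 + \delta\,d/dx$. The sketch below uses nested central finite differences on an arbitrary smooth test function; all numerical choices are illustrative.

```python
import math

nu = 0.75                      # index nu > 0; dimension delta = 2*(nu + 1)
delta = 2.0 * (nu + 1.0)
f = math.sin                   # arbitrary smooth test function

def d(g, x, h=1e-5):
    """Central finite difference."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

def s_prime(x):
    """Derivative of the scale function s(x) = -x**(-nu)."""
    return nu * x ** (-nu - 1.0)

def m_density(x):
    """Speed-measure density x**nu / (2*nu) for nu > 0."""
    return x ** nu / (2.0 * nu)

for x in (0.5, 1.0, 2.0, 3.0):
    lhs = d(lambda u: d(f, u) / s_prime(u), x) / m_density(x)   # (d/dm)(d/ds) f
    rhs = 2.0 * x * d(lambda u: d(f, u), x) + delta * d(f, x)   # 2x f'' + delta f'
    assert abs(lhs - rhs) < 1e-3
```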
Let us now mention the scaling properties of $\mathrm{BESQ}^\delta$. Recall that if $B$ is a
standard $\mathrm{BM}^\delta$ and $B^x_t = x + B_t$, then for any real $c > 0$, the processes $B^x_{c^2t}$
and $cB^{x/c}_t$ have the same law. This property will be called the Brownian scaling
property. The processes BESQ have a property of the same ilk.
(1.6) Proposition. If $X$ is a $\mathrm{BESQ}^\delta(x)$, then for any $c > 0$, the process $c^{-1}X_{ct}$ is
a $\mathrm{BESQ}^\delta(x/c)$.
Proof. By a straightforward change of variable in the stochastic integral, one sees
that

$$c^{-1}X_{ct} = c^{-1}x + 2\int_0^t \left(c^{-1}X_{cs}\right)^{1/2} c^{-1/2}\,dB_{cs} + \delta t$$

and since $c^{-1/2}B_{ct}$ is a BM, the result follows from the uniqueness of the solution
to this SDE. □
We now go back to Corollary (1.3) to show how to compute the constants $A_\mu$
and $B_\mu$; this will lead to the computation of the exact laws of some Brownian
functionals. Let us recall (see Appendix 8) that if $\mu$ is a Radon measure on $[0, \infty[$,
the differential equation (in the distribution sense) $\phi'' = \phi\mu$ has a unique solution
$\phi_\mu$ which is positive, non-increasing on $[0, \infty[$ and such that $\phi_\mu(0) = 1$. The
function $\phi_\mu$ is convex, so its right-hand side derivative $\phi'_\mu$ exists and is $\le 0$.
Moreover, since $\phi_\mu$ is non-increasing, the limit $\phi_\mu(\infty) = \lim_{x\to\infty}\phi_\mu(x)$ exists
and belongs to $[0, 1]$. In fact, $\phi_\mu(\infty) < 1$ if we exclude the trivial case $\mu = 0$;
indeed, if $\phi_\mu(\infty) = 1$, then $\phi_\mu$ is identically 1 and $\mu = 0$.

We suppose henceforth that $\int(1 + x)\,d\mu(x) < \infty$; we will see in the proof
below that this entails that $\phi_\mu(\infty)$ is $> 0$. We set
$$X_\mu = \int_0^\infty X_t\,d\mu(t).$$

In this setting we get the exact values of the constants $A_\mu$ and $B_\mu$ of Corollary
(1.3).
(1.7) Theorem. Under the preceding assumptions,

$$Q^\delta_x\left[\exp\left(-\tfrac{1}{2}X_\mu\right)\right] = \left(\phi_\mu(\infty)\right)^{\delta/2}\exp\left(\tfrac{x}{2}\,\phi'_\mu(0)\right).$$
Proof. The function $\phi'_\mu$ is right-continuous and increasing, hence $F_\mu(t) =
\phi'_\mu(t)/\phi_\mu(t)$ is right-continuous and of finite variation. Thus we may apply the
integration by parts formula (see Exercise (3.9) Chap. IV) to get

$$F_\mu(t)X_t = F_\mu(0)x + \int_0^t F_\mu(s)\,dX_s + \int_0^t X_s\,dF_\mu(s);$$

but

$$dF_\mu(s) = \mu(ds) - F_\mu(s)^2\,ds.$$

As a result, since $M_t = X_t - \delta t$ is a $Q^\delta_x$-continuous local martingale, the process

$$Z_t = \exp\left(\tfrac{1}{2}\left(F_\mu(t)X_t - F_\mu(0)x - \delta\int_0^t F_\mu(s)\,ds\right) - \tfrac{1}{2}\int_0^t X_s\,d\mu(s)\right)$$

is a continuous local martingale and is equal to

$$\mathscr{E}\left(\tfrac{1}{2}\int_0^\cdot F_\mu(s)\,dM_s\right)_t.$$

Since $F_\mu$ is negative and $X$ positive, this local martingale is bounded on $[0, a]$
and we may write

$$(\dagger)\qquad E[Z_a] = E[Z_0] = 1.$$

The theorem will follow upon letting $a$ tend to $+\infty$.
Firstly, using Proposition (1.6), one easily sees that $(X_a/a)$ converges in law
as $a$ tends to $+\infty$. Secondly,

$$0 \ge aF_\mu(a) \ge -\int_{]a,\infty[} x\,d\mu(x)$$

(see Appendix 3), which goes to 0 as $a$ tends to $+\infty$. As a result, $F_\mu(a)X_a$ converges in probability
to 0 as $a$ tends to $+\infty$; moreover, since $F_\mu = (\log\phi_\mu)'$, the integral $\int_0^a F_\mu(s)\,ds$ converges to $\log\phi_\mu(\infty)$. Passing to the limit in $(\dagger)$ therefore yields

$$Q^\delta_x\left[\exp\left(-\tfrac{1}{2}X_\mu\right)\right] = \left(\phi_\mu(\infty)\right)^{\delta/2}\exp\left(\tfrac{x}{2}\,\phi'_\mu(0)\right).$$

Since $X_\mu < \infty$ a.s., this shows that $\phi_\mu(\infty) > 0$ and completes the proof.
Remarks. 1°) It can be seen directly that $Z_t$ is continuous by computing the jumps
of the various processes involved and observing that they cancel. We leave the
details as an exercise to the reader.

2°) Since the convergence of $aF_\mu(a)$ to 0 was all-important in the proof above, let us
observe that, in fact, it is equivalent to $-a\mu([a, \infty[)$ as $a$ goes to infinity.
This result allows us to give a proof of the Cameron-Martin formula, namely

$$E\left[\exp\left(-\lambda\int_0^1 B_s^2\,ds\right)\right] = \left(\cosh\sqrt{2\lambda}\right)^{-1/2},$$

where $B$ is a standard linear BM. This was first proved by analytical methods but
is obtained by making $x = 0$ and $\delta = 1$ in the following

(1.8) Corollary.

$$Q^\delta_x\left[\exp\left(-\frac{b^2}{2}\int_0^1 X_s\,ds\right)\right] = (\cosh b)^{-\delta/2}\exp\left(-\frac{xb}{2}\tanh b\right).$$
Proof. We must compute $\phi_\mu$ when $\mu(ds) = b^2\,ds$ on $[0, 1]$. It is easily seen that
on $[0, 1]$ we must have $\phi_\mu(t) = \alpha\cosh bt + \beta\sinh bt$, and the condition $\phi_\mu(0) = 1$
forces $\alpha = 1$. Next, since $\phi_\mu$ is constant on $[1, \infty[$ and $\phi'_\mu$ is continuous, we must
have $\phi'_\mu(1) = 0$, namely

$$b\sinh b + \beta b\cosh b = 0,$$

which yields $\beta = -\tanh b$. Thus $\phi_\mu(t) = \cosh bt - (\tanh b)\sinh bt$ on $[0, 1]$, which
permits to compute $\phi_\mu(\infty) = \phi_\mu(1) = (\cosh b)^{-1}$ and $\phi'_\mu(0) = -b\tanh b$.
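Making $x = 0$, $\delta = 1$ and $b = \sqrt{2\lambda}$ recovers the Cameron-Martin formula, which can be tested by a crude Monte Carlo sketch; the sample sizes, seed and left-endpoint Riemann sum are illustrative choices of ours.

```python
import math
import random

def mc_exp_quadratic(lam, n_paths=4000, n_steps=400, seed=7):
    """Monte Carlo estimate of E[exp(-lam * int_0^1 B_s^2 ds)] for a standard BM."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    total = 0.0
    for _ in range(n_paths):
        b, integral = 0.0, 0.0
        for _ in range(n_steps):
            integral += b * b * dt          # left-endpoint Riemann sum
            b += rng.gauss(0.0, math.sqrt(dt))
        total += math.exp(-lam * integral)
    return total / n_paths

lam = 1.0
estimate = mc_exp_quadratic(lam)
exact = math.cosh(math.sqrt(2.0 * lam)) ** -0.5   # Cameron-Martin formula
```

The discretization bias of the Riemann sum is of order $1/n$, well inside the Monte Carlo noise at these sample sizes.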
Remark. This corollary may be applied to the stochastic area which was introduced
in Exercises (2.19) Chap. V and (3.10) Chap. IX.
We have dealt so far only with the squares of Bessel processes; we now turn
to Bessel processes themselves. The function $x \to \sqrt{x}$ is a homeomorphism of
$\mathbb{R}_+$. Therefore, if $X$ is a Markov process on $\mathbb{R}_+$, then $\sqrt{X}$ is also a Markov process.
By applying this to the family $\mathrm{BESQ}^\delta$, we get the family of Bessel processes.

(1.9) Definition. The square root of $\mathrm{BESQ}^\delta(a^2)$, $\delta \ge 0$, $a \ge 0$, is called the Bessel
process of dimension $\delta$ started at $a$ and is denoted by $\mathrm{BES}^\delta(a)$. Its law will be
denoted by $P^\delta_a$.
For integer dimensions, these processes were already introduced in Sect. 3 of
Chap. VI: they can be realized as the modulus of the corresponding $\mathrm{BM}^\delta$.
Some properties of $\mathrm{BESQ}^\delta$ translate to similar properties for $\mathrm{BES}^\delta$. The Bessel
processes are Feller processes with continuous paths and satisfy the hypotheses of
Sect. 3 Chap. VII. Using Exercise (3.20) or Exercise (3.18) Chap. VII, it is easily
seen that the scale function of $\mathrm{BES}^\delta$ may be chosen equal to

$$-x^{-2\nu} \quad\text{for } \nu > 0, \qquad 2\log x \quad\text{for } \nu = 0, \qquad x^{-2\nu} \quad\text{for } \nu < 0,$$

and with this choice of the scale function, the speed measure is given by the
densities

$$x^{2\nu+1}/\nu \quad\text{for } \nu > 0, \qquad x \quad\text{for } \nu = 0, \qquad -x^{2\nu+1}/\nu \quad\text{for } \nu < 0.$$

Moreover, for $0 < \delta < 2$, the point $0$ is instantaneously reflecting, and for $\delta \ge 2$ it
is polar. Using Theorem (3.12) Chap. VII, one can easily compute the infinitesimal
generator of $\mathrm{BES}^\delta$.

The density of the semi-group is also obtained from that of $\mathrm{BESQ}^\delta$ by a
straightforward change of variable and is found equal, for $\delta > 0$, to

$$p^\delta_t(x, y) = t^{-1}(y/x)^\nu\, y\exp\left(-(x^2 + y^2)/2t\right) I_\nu(xy/t) \qquad\text{for } x > 0,\ t > 0,$$

and

$$p^\delta_t(0, y) = 2^{-\nu}\,t^{-(\nu+1)}\,\Gamma(\nu + 1)^{-1}\,y^{2\nu+1}\exp\left(-y^2/2t\right).$$
We may also observe that, since for $\delta \ge 2$ the point $0$ is polar for $\mathrm{BESQ}^\delta(x)$,
$x > 0$, we may apply Itô's formula to this process, which we denote by $X$, and to
the function $\sqrt{x}$. We get

$$X_t^{1/2} = \sqrt{x} + \beta_t + \frac{\delta - 1}{2}\int_0^t X_s^{-1/2}\,ds$$

where $\beta$ is a BM. In other words, $\mathrm{BES}^\delta(a)$, $a > 0$, is a solution to the SDE

$$\rho_t = a + \beta_t + \frac{\delta - 1}{2}\int_0^t \rho_s^{-1}\,ds.$$

By Exercise (2.10) Chap. IX, it is the only solution to this equation. For $\delta < 2$ the
situation is much less simple; for instance, because of the appearance of the local
time, $\mathrm{BES}^1$ is not the solution to an SDE in the sense of Chap. IX (see Exercise
(2.14) Chap. IX).
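For $\delta \ge 2$ the SDE can be simulated directly. The sketch below runs an Euler scheme for $\delta = 3$ and checks that $\rho_t^2$ has the $\mathrm{BESQ}^3(a^2)$ mean $a^2 + \delta t$; the small floor placed under $\rho$ in the drift term, and the reflection step, are numerical safeguards of ours, not part of the equation.

```python
import math
import random

def simulate_bes(a, delta, t, n_steps, rng, floor=0.05):
    """Euler scheme for d(rho) = dB + ((delta-1)/2) * rho^{-1} dt, rho_0 = a, delta >= 2.
    The floor in the drift denominator is only a numerical safeguard."""
    dt = t / n_steps
    rho = a
    for _ in range(n_steps):
        rho += rng.gauss(0.0, math.sqrt(dt)) + 0.5 * (delta - 1.0) * dt / max(rho, floor)
        rho = abs(rho)           # reflect the rare excursions below 0
    return rho

rng = random.Random(3)
a, delta, t = 1.0, 3.0, 0.25
n_paths = 3000
second_moment = sum(simulate_bes(a, delta, t, 250, rng) ** 2
                    for _ in range(n_paths)) / n_paths
# rho^2 is BESQ^3(1), so E[rho_t^2] = a**2 + delta * t = 1.75 here
```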
Finally, Proposition (1.6) translates to the following result, which, for $\delta \ge 2$,
may also be derived from Exercise (1.17) Chap. IX.

(1.10) Proposition. $\mathrm{BES}^\delta$ has the Brownian scaling property.
We will now study another invariance property of this family of processes. Let
$X^\delta$ be a family of diffusions, solutions to the SDE's

$$X^\delta_t = x + B_t + \int_0^t b_\delta(X^\delta_s)\,ds$$

where $b_\delta$ is a family of Borel functions. Let $f$ be a positive strictly increasing
$C^2$-function with inverse $f^{-1}$. We want to investigate the conditions under which
the process $f(X^\delta_t)$ belongs to the same family up to a suitable time change.
Itô's formula yields

$$f(X^\delta_t) = f(x) + \int_0^t f'(X^\delta_s)\,dB_s + \int_0^t \left(f'b_\delta + \tfrac{1}{2}f''\right)(X^\delta_s)\,ds.$$

Setting $T_t = \inf\left\{u : \int_0^u f'^2(X^\delta_s)\,ds > t\right\}$ and $Y^\delta_t = f(X^\delta_{T_t})$, the above equation may
be rewritten

$$Y^\delta_t = f(x) + \beta_t + \int_0^t \frac{f'b_\delta + \tfrac{1}{2}f''}{f'^2}\left(f^{-1}(Y^\delta_s)\right)ds$$

where $\beta$ is a BM. Thus, if we can find $\delta'$ such that $\left(f'b_\delta + \tfrac{1}{2}f''\right)/f'^2 = b_{\delta'}\circ f$,
then the process $Y^\delta$ will satisfy an SDE of the given family.
This leads us to the following result, where $\rho_\nu$ is a $\mathrm{BES}(\nu)$.
(1.11) Proposition. Let $p$ and $q$ be two conjugate numbers ($p^{-1} + q^{-1} = 1$). If
$\nu > -1/q$, there is a $\mathrm{BES}(\nu q)$, defined on the same probability space as $\rho_\nu$, such
that

$$q\,\rho_\nu^{1/q}(t) = \rho_{\nu q}\left(\int_0^t \rho_\nu^{-2/p}(s)\,ds\right).$$
Proof. For $\nu \ge 0$, the process $\rho_\nu$ lives on $]0, \infty[$ and $x^{1/q}$ is twice differentiable
on $]0, \infty[$. Thus we may apply the above method with $f(x) = qx^{1/q}$; since
then, with $b_\nu(x) = \left(\nu + \tfrac{1}{2}\right)x^{-1}$, we have

$$\frac{f'b_\nu + \tfrac{1}{2}f''}{f'^2}(x) = \left(\nu q + \tfrac{1}{2}\right)\left(qx^{1/q}\right)^{-1} = b_{\nu q}\left(f(x)\right),$$

the result follows from the uniqueness of the solutions satisfied by the Bessel
processes for $\nu \ge 0$.

For $\nu < 0$, one can show that $q\rho_\nu^{1/q}(T_t)$, where $T_t$ is the time-change associated
with $\int_0^\cdot \rho_\nu^{-2/p}(s)\,ds$, has on $]0, \infty[$ the same generator as $\rho_{\nu q}$; this is done as in
Exercise (2.29) Chap. X. Since the time spent in $0$ has zero Lebesgue measure,
the boundary $0$ is moreover instantaneously reflecting; as a result, the generator of
$q\rho_\nu^{1/q}(T_t)$ is that of $\rho_{\nu q}(t)$ and we use Proposition (3.14) Chap. VII to conclude.
The details are left to the reader.
D
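The drift computation at the heart of the proof, namely that $f(x) = qx^{1/q}$ transforms $b_\nu(x) = \left(\nu + \frac{1}{2}\right)x^{-1}$ into $b_{\nu q}$, reduces to an algebraic identity which can be checked pointwise; a sketch (the test points and parameter pairs are arbitrary):

```python
def check(nu, q, xs=(0.5, 1.0, 2.0, 5.0)):
    """Verify (f'*b_nu + f''/2) / f'**2 = b_{nu*q} o f for f(x) = q * x**(1/q)."""
    b = lambda idx, x: (idx + 0.5) / x                  # BES(idx) drift
    for x in xs:
        fp = x ** (1.0 / q - 1.0)                       # f'(x)
        fpp = (1.0 / q - 1.0) * x ** (1.0 / q - 2.0)    # f''(x)
        lhs = (fp * b(nu, x) + 0.5 * fpp) / fp ** 2
        rhs = b(nu * q, q * x ** (1.0 / q))
        assert abs(lhs - rhs) < 1e-12

for nu, q in ((0.5, 2.0), (1.0, 3.0), (0.25, 1.5)):
    check(nu, q)
```

Both sides equal $\left(\nu + \frac{1}{2q}\right)x^{-1/q}$, which is why the identity holds for every conjugate pair $(p, q)$.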
This invariance principle can be put to use to give explicit expressions for the
laws of some functionals of BM.

(1.12) Corollary. In the setting of Proposition (1.11), if $\rho_\nu(0) = 0$ a.s., then

$$\int_0^1 \rho_\nu^{-2/p}(s)\,ds \overset{(d)}{=} \left(\int_0^1 \left(q^{-1}\rho_{\nu q}(s)\right)^{2q/p}\,ds\right)^{-1/q}.$$
Proof. Let $C_t = \int_0^t \rho_\nu^{-2/p}(s)\,ds$ and $T_t$ the associated time-change. Within the
proof of the proposition we saw that $q\rho_\nu^{1/q}(T_s) = \rho_{\nu q}(s)$; since $dT_t = \rho_\nu^{2/p}(T_t)\,dt$,
it follows that

$$T_t = \int_0^t \left(q^{-1}\rho_{\nu q}(s)\right)^{2q/p}\,ds.$$

It remains to prove that $C_1$ has the same law as $T_1^{-1/q}$. To this end, we first remark
that $\{C_1 > t\} = \{T_t < 1\}$; we then use the scaling property of $\rho_{\nu q}$ to the effect that

$$T_t = \int_0^1 \left(q^{-1}\rho_{\nu q}(tu)\right)^{2q/p} t\,du \overset{(d)}{=} t^q\,T_1,$$

which yields

$$P[C_1 > t] = P\left[T_1 < t^{-q}\right] = P\left[T_1^{-1/q} > t\right]. \qquad\square$$
The point of this corollary is that the left-hand side is, for suitable $\nu$'s, a
Brownian functional, whereas the Laplace transform of the right-hand side may be
computed in some cases. For instance, making $p = q = 2$, we get

$$\int_0^1 \rho_\nu^{-1}(s)\,ds \overset{(d)}{=} 2\left(\int_0^1 \rho_{2\nu}^2(s)\,ds\right)^{-1/2},$$

and the law of the latter r.v. is known from Corollary (1.8).
(1.12bis) Exercise (Continuation of Corollary (1.12)).
1°) In the setting of Proposition (1.11), prove that

$$E\left[\left(\int_0^1 \left(q^{-1}\rho_{\nu q}(s)\right)^{2q/p}\,ds\right)^{-1/q}\right] = q\,2^{-1/p}\,\Gamma\left(\nu + (1/q)\right)\big/\,\Gamma(1 + \nu).$$

2°) Deduce that
Explain this result in view of Exercise (1.18) 1°).
(1.13) Exercise. 1°) Let $X^\delta$, $\delta \ge 2$, be a family of diffusions which are solutions
to the SDE's

$$X^\delta_t = x + B_t + \int_0^t b_\delta(X^\delta_s)\,ds.$$

Prove that the laws of the processes $(X^\delta)^2$ satisfy the additivity property of Theorem (1.2) provided that

$$2x\,b_\delta(x) + 1 = \delta + bx^2$$

for a constant $b$. This applies in particular to the family of Euclidean norms of
$\mathbb{R}^\delta$-valued OU processes, namely $X_t = |Y_t|$, where

$$Y_t = y + B_t + \frac{b}{2}\int_0^t Y_s\,ds.$$

(See Exercise (3.14) Chap. VIII.)
2°) Prove that the diffusion with infinitesimal generator given on $C^2_K(]0, \infty[)$
by

$$2x\,d^2/dx^2 + (2\beta x + \delta)\,d/dx,$$

with $\beta \ne 0$ and $\delta \ge 0$, may be written

$$e^{2\beta t}X\left(\left(1 - e^{-2\beta t}\right)/2\beta\right) \qquad\text{or}\qquad e^{2\beta t}Y\left(\left|e^{-2\beta t} - 1\right|/2|\beta|\right)$$

where $X$ and $Y$ are suitable $\mathrm{BESQ}^\delta$ processes.

[Hint: Start with $\delta = 1$, in which case the process is the square of an OU process,
then use 1°).]
(1.14) Exercise (Infinitely divisible laws on $W$). 1°) For every $\delta \ge 0$ and $x \ge 0$,
prove that the law $Q^\delta_x$ is infinitely divisible, i.e. for every $n$ there is a law $P^{(n)}$
on $W = C(\mathbb{R}_+, \mathbb{R})$ such that

$$Q^\delta_x = P^{(n)} * \cdots * P^{(n)} \qquad (n \text{ terms}).$$

2°) Let $B$ and $C$ be two independent Brownian motions starting from 0, and
$R$ be a $\mathrm{BES}^\delta(x)$ process independent from $B$. Prove that the laws of

a) $B_tC_t$;  b) $\int_0^\cdot (B_s\,dC_s - C_s\,dB_s)$;  c) $\int_0^\cdot R_s\,dB_s$;  d) $\int_0^\cdot \varphi(s)\,dR_s$, where $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ is bounded, Borel,

are infinitely divisible.
(1.15) Exercise. Compute $A_\mu$ and $B_\mu$ in Corollary (1.3) when $\mu$ is a linear combination of Dirac masses, i.e. $\mu = \sum\lambda_i\varepsilon_{t_i}$.

[Hint: Use the Markov property.]
#
(1.16) Exercise (More on the counterexample of Exercise (2.13) in Chap. V).
Let $X$ be a $\mathrm{BES}^\delta(x)$, and recall that if $\delta > 2$, $x > 0$, the process $X_t^{2-\delta}$ is a local
martingale.

1°) Using the fact that $I_\nu(z)$ is equivalent to $\Gamma(\nu + 1)^{-1}(z/2)^\nu$ as $z$ tends to
zero, prove that for $a < \delta/(\delta - 2)$ and $\varepsilon > 0$,

$$\sup_{\varepsilon \le t \le 1} E\left[X_t^{a(2-\delta)}\right] < +\infty.$$

[Hint: Use the scaling properties of $\mathrm{BES}^\delta$ and the comparison theorems for
SDE's.]

2°) For every $p \ge 1$, give an example of a positive continuous local martingale
bounded in $L^p$ which is not a martingale.

[Hint: Show that $E\left[X_t^{2-\delta}\right]$ is not constant in $t$.]
*
(1.17) Exercise. Let $M$ be a continuous local martingale as in Exercise (3.26) in Chap. IV, such that
$\lim_{t\to\infty} M_t = +\infty$. Prove that the continuous increasing process

$$A_t = \int_0^t \exp(-2M_s)\,d\langle M, M\rangle_s$$

is finite-valued and that there exists a $\mathrm{BES}^2$, say $X$, such that $\exp(-M_t) = X_{A_t}$.
*#
(1.18) Exercise. 1°) If $X$ is a $\mathrm{BES}^\delta(0)$, prove that $T_1 = \inf\{t : X_t = 1\}$ and
$\left(\sup_{t\le 1} X_t\right)^{-2}$ have the same law.

[Hint: Write $\{T_1 > t\} = \{\sup_{s\le t} X_s < 1\}$ and use the scaling invariance properties.]

2°) If moreover $\delta > 2$, prove that $L = \sup\{t : X_t = 1\}$ and $\left(\inf_{t\ge 1} X_t\right)^{-2}$ have
the same law.

3°) Let $Y$ be a $\mathrm{BES}^\delta(1)$ independent of $X$. Show, using the scaling invariance
of $X$, that $\inf_{t\ge 1} X_t$ and $X_1\cdot\inf_{t\ge 0} Y_t$ have the same law.

4°) Using the property that $Y^{2-\delta}$ is a local martingale, show that $\inf_{t\ge 0} Y_t$ has
the same law as $U^{1/(\delta-2)}$, where $U$ is uniformly distributed on $[0, 1]$.

[Hint: Use Exercise (3.12) Chap. II.]

5°) Show finally that the law of the r.v. $L$ defined in 2°) above is that of
$Z_1^{-2}$ where $Z$ is a $\mathrm{BES}^{\delta-2}(0)$. Compare this result with the result obtained for the
distribution of the last passage time of a transient diffusion in Exercise (4.16) of
Chap. VII.
(1.19) Exercise. For $p > 0$, let $X$ be a $\mathrm{BES}^{2p+3}(0)$. Prove that there is a $\mathrm{BES}^3(0)$,
say $Y$, such that

$$X_t^{2p+1} = Y\left(\int_0^t (2p + 1)^2 X_s^{4p}\,ds\right).$$

[Hint: Use Itô's formula and the time-change associated with
$\int_0^\cdot (2p + 1)^2 X_s^{4p}\,ds$.]
(1.20) Exercise. For $\delta \ge 1$, prove that $\mathrm{BES}^\delta$ satisfies the various laws of the
iterated logarithm.

[Hint: Use Exercise (1.21) in Chap. II and the comparison theorems of Sect.
3, Chap. IX.]

(1.21) Exercise. For $0 < \delta < 1$, prove that the set $\{t : X_t = 0\}$, where $X$ is a
$\mathrm{BES}^\delta$, is a perfect set with empty interior.
*#
(1.22) Exercise. 1°) Let $(\mathscr{F}^0_t)$ be the filtration generated by the coordinate process
$X$. For the indexes $\mu, \nu \ge 0$, $T$ a bounded $(\mathscr{F}^0_t)$-stopping time, $Y$ any $\mathscr{F}^0_T$-measurable
positive r.v. and any $a > 0$, prove that

$$P^{(\mu)}_a\left[Y\exp\left(-\frac{\mu^2}{2}\int_0^T X_s^{-2}\,ds\right)(X_T/a)^{-\mu}\right] = P^{(\nu)}_a\left[Y\exp\left(-\frac{\nu^2}{2}\int_0^T X_s^{-2}\,ds\right)(X_T/a)^{-\nu}\right].$$

[Hint: Begin with $\mu = 0$, $\nu > 0$ and use the $P^{(0)}_a$-martingale $\mathscr{E}(\nu M)$ where
$M = \log(X/a)$ as described in Exercise (1.34) of Chap. VIII.]

2°) Let $W_a$ be the law of the linear BM started at $a$ and define a probability
measure $R_a$ by

$$R_a = (X_{t\wedge T_0}/a)\cdot W_a \qquad\text{on } \mathscr{F}^0_t,$$
where $T_0 = \inf\{t : X_t = 0\}$. Show that $R_a$ is the law of $\mathrm{BES}^3(a)$, i.e. $P^{(1/2)}_a$ in the
notation of this exercise.

[Hint: Use Girsanov's theorem to prove that, under $R_a$, the process $X$ satisfies
the right SDE.]

3°) Conclude that for every $\nu \ge 0$,

$$P^{(\nu)}_a = (X_{t\wedge T_0}/a)^{\nu+1/2}\exp\left(-\frac{\nu^2 - 1/4}{2}\int_0^{t\wedge T_0} X_s^{-2}\,ds\right)\cdot W_a \qquad\text{on } \mathscr{F}^0_t.$$

#
(1.23) Exercise. Prove that for $\nu > 0$ and $b > 0$, the law of $\{X_{L_b-t},\ t < L_b\}$
under $P^{(\nu)}_0$ is the same as the law of $\{X_t,\ t < T_0\}$ under $P^{(-\nu)}_b$.

[Hint: Use Theorem (4.5) Chap. VII.]

In particular, $L_b$ under $P^{(\nu)}_0$ has the same law as $T_0$ under $P^{(-\nu)}_b$. [This generalizes Corollary (4.6) of Chap. VII.] Prove that this common law is that of
$b^2/2\gamma_\nu$, where $\gamma_\nu$ denotes a gamma variable with parameter $\nu$.

[Hint: Use Exercise (1.18).]
(1.24) Exercise. Let $Z$ be the planar BM. For $\alpha < 2$, prove that $\mathscr{E}(-\alpha\log|Z|)$
is not a martingale.

[Hint: Assume that it is a martingale, then follow the scheme described above
Proposition (3.8) Chap. VIII and derive a contradiction.]
(1.25) Exercise. 1°) Prove that even though $\mathrm{BES}(\nu)$ is not a semimartingale for
$\nu < -1/2$, it has nonetheless a bicontinuous family of occupation densities, namely
a family $(l^x_t)$ of processes such that

$$\int_0^t f(\rho_\nu(s))\,ds = \int_0^\infty f(x)\,l^x_t\,m_\nu(dx) \qquad\text{a.s.}$$

Obviously, this formula is also true for $\nu \ge -1/2$.

[Hint: Use Proposition (1.11) with $q = -2\nu$.]

2°) Prove that for $\nu \in\, ]-1, 0[$ the inverse $\tau$ of the local time at 0 is a stable
subordinator with index $(-\nu)$ (see Sect. 4 Chap. III), i.e.

$$E\left[\exp(-\lambda\tau_t)\right] = \exp\left(-ct\lambda^{-\nu}\right).$$
(1.26) Exercise (Bessel processes with dimension $\delta$ in $]0, 1[$). Let $\delta > 0$ and
$\rho$ be a $\mathrm{BES}^\delta$ process.

1°) Prove that for $\delta \ge 1$, $\rho$ is a semimartingale which can be decomposed as

$$(1)\qquad \rho_t = \rho_0 + \beta_t + \frac{\delta - 1}{2}\int_0^t \rho_s^{-1}\,ds \qquad\text{if } \delta > 1,$$

and

$$(2)\qquad \rho_t = \rho_0 + \beta_t + \tfrac{1}{2}\,l^0_t \qquad\text{if } \delta = 1,$$

where $l^0$ is the local time of $\rho$ at 0. This will serve as a key step in the study of
$\rho$ for $\delta < 1$.
2°) For $0 < \alpha < 1/2$ and the standard $\mathrm{BM}^1$ $B$, prove that

$$|B_t|^{1-\alpha} = |B_0|^{1-\alpha} + (1 - \alpha)\int_0^t |B_s|^{-\alpha}\operatorname{sgn}(B_s)\,dB_s - \frac{\alpha(1-\alpha)}{2}\,\mathrm{P.V.}\int_0^t |B_s|^{-1-\alpha}\,ds,$$

where P.V. is defined by

$$\mathrm{P.V.}\int_0^t |B_s|^{-1-\alpha}\,ds = \int_{-\infty}^{+\infty} |b|^{-1-\alpha}\left(l^b_t - l^0_t\right)db.$$

[Hint: For the existence of the principal value, see Exercise (1.29) Chap. VI.]

3°) Let now $\rho$ be a $\mathrm{BES}^\delta$ with $\delta \in\, ]0, 1[$; denote by $l^a$ the family of its local
times defined by

$$\int_0^t \phi(\rho_s)\,ds = \int_0^\infty \phi(x)\,l^x_t\,x^{\delta-1}\,dx,$$

in agreement with Exercise (1.25) above. Prove that

$$\rho_t = \rho_0 + \beta_t + \frac{\delta - 1}{2}\,k_t$$

where $k_t = \mathrm{P.V.}\int_0^t \rho_s^{-1}\,ds$ which, by definition, is equal to $\int_0^\infty a^{\delta-2}\left(l^a_t - l^0_t\right)da$.

[Hint: Use 2°) as well as Proposition (1.11) with $\nu = -1/2$.]
(1.27) Exercise. 1°) Prove that

$$Q^4_x\left(xX_t^{-1}\right) = 1 - \exp(-x/2t).$$

[Hint: $\xi^{-1} = \int_0^\infty \exp(-\lambda\xi)\,d\lambda$.]

2°) Comparing with the formulae given in Corollary (1.4) for $Q^4_x$ and $Q^0_x$,
check, using the above result, that $Q^0_t(x, \mathbb{R}_+) = 1$.

*
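The check asked for in 2°) can be sketched numerically: since $q^0_t(x, y) = (x/y)\,q^4_t(x, y)$, the integral of 1°) is exactly the mass of the absolutely continuous part of $Q^0_t(x, \cdot)$, and adding the atom $\exp(-x/2t)$ at 0 should give total mass 1. The grid and series truncation below are illustrative choices.

```python
import math

def bessel_i1(z, terms=40):
    """I_1(z) by its power series."""
    return sum((z / 2.0) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(terms))

def q4(t, x, y):
    """BESQ^4 transition density (index nu = 1) from Corollary (1.4)."""
    return (1.0 / (2.0 * t)) * (y / x) ** 0.5 \
        * math.exp(-(x + y) / (2.0 * t)) * bessel_i1(math.sqrt(x * y) / t)

x, t = 2.0, 1.0
n, top = 30000, 45.0
h = top / n
ys = [1e-9 + i * h for i in range(n + 1)]
g = [(x / y) * q4(t, x, y) for y in ys]
val = h * (sum(g) - 0.5 * (g[0] + g[-1]))    # x * Q_x^4[X_t^{-1}], numerically
total = val + math.exp(-x / (2.0 * t))       # add the atom of Q_t^0(x, .) at 0
# val should be close to 1 - exp(-x/2t) and total close to 1
```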
(1.28) Exercise (Lamperti's relation). 1°) Let $B$ be a $\mathrm{BM}^1(0)$. For $\nu \ge 0$, prove
that there exists a $\mathrm{BES}(\nu)$, say $R^{(\nu)}$, such that

$$\exp(B_t + \nu t) = R^{(\nu)}\left(\int_0^t \exp\left(2(B_s + \nu s)\right)ds\right).$$

2°) Give an adequate extension of the previous result for any $\nu \in \mathbb{R}$.

3°) Prove that, for $a \in \mathbb{R}$, $a \ne 0$, and $b > 0$, the law of $\int_0^\infty \exp(aB_s - bs)\,ds$
is equal to the law of $2/\left(a^2\gamma_{(2b/a^2)}\right)$, where $\gamma_\alpha$ denotes a Gamma $(\alpha)$ variable.

[Hint: Use Exercise (1.23).]
(1.29) Exercise (An extension of Pitman's theorem to transient Bessel processes). Let $(\rho_t,\ t \ge 0)$ be a $\mathrm{BES}^\delta(0)$ with $\delta > 2$, and set $I_t = \inf_{s\ge t}\rho_s$. Prove
that there exists a Brownian motion $(\gamma_t,\ t \ge 0)$ such that

$$\rho_t = \gamma_t + 2I_t - \frac{\delta - 3}{2}\int_0^t \frac{ds}{\rho_s}.$$

[Hint: Use Proposition (1.11).]
(1.30) Exercise (A property equivalent to Pitman's theorem). Let $B$ be a $\mathrm{BM}^1$
and $S$ its supremum. Assume that for every $t$, conditionally on $\mathscr{F}^{2S-B}_t$, the r.v.
$S_t$ is uniformly distributed on $[0, 2S_t - B_t]$; then prove that the process $2S - B$ is
a $\mathrm{BES}^3(0)$. Compare with the results in Section VI.3, notably Corollary (3.6).

[Hint: Prove that $(2S_t - B_t)^2 - 3t$ is a local martingale.]
(1.31) Exercise (Seshadri's identities). In the notation of Sections 2 and 3 in
Chapter VI, write

$$(2S_t - B_t)^2 = (S_t - B_t)^2 + r_t^2 = S_t^2 + \rho_t^2.$$

1°) For fixed $t > 0$, prove that the r.v.'s $r_t^2$ and $\rho_t^2$ are exponentially distributed
and that $(S_t - B_t)^2$ and $r_t^2$ are independent; likewise $S_t^2$ and $\rho_t^2$ are independent.

[Hint: Use the results on Gamma and Beta variables from Sect. 6 Chap. 0.]

2°) Prove further that the processes $r_t^2$ and $\rho_t^2$ are not $\mathrm{BESQ}^2(0)$, however
tempting it might be to think so.

[Hint: Write down their semimartingale decompositions.]
(1.32) Exercise (Asymptotic distributions for functionals $X_\mu$ whenever
$\int_0^\infty(1 + t)\,d\mu(t) = \infty$). 1°) Let $X$ be a $\mathrm{BESQ}^\delta(x)$ with $\delta > 0$; for $\alpha < 2$,
prove that, as $t$ tends to infinity, $t^{\alpha-2}\int_1^t u^{-\alpha}X_u\,du$ converges in distribution to
$\int_0^1 u^{-\alpha}Y_u\,du$, where $Y$ is a $\mathrm{BESQ}^\delta(0)$.

2°) In the same notation, prove that $(\log t)^{-1}\int_1^t u^{-2}X_u\,du$ converges a.s. to
$\delta = E[Y_1]$. Prove further that $(\log t)^{-1/2}\left(\delta\log t - \int_1^t u^{-2}X_u\,du\right)$ converges in
distribution to $2\gamma_\delta$, where $\gamma$ is a $\mathrm{BM}^1(0)$.

[Hint: Develop $(X_t/t)$, $t \ge 1$, as a semimartingale.]
(1.33) Exercise ("Square" Bessel processes with negative dimensions). 1°) Let
$x, \delta > 0$, and $\beta$ be a $\mathrm{BM}^1$. Prove that there exists a unique strong solution to the
equation

$$Z_t = x + 2\int_0^t \sqrt{|Z_s|}\,d\beta_s - \delta t.$$

Let $Q^{-\delta}_x$ denote the law of this process on $W = C(\mathbb{R}_+, \mathbb{R})$.

2°) Show that $T_0 = \inf\{t : Z_t = 0\} < \infty$ a.s., and identify the process
$\{-Z_{T_0+t},\ t \ge 0\}$.

3°) Prove that the family $\{Q^y_x,\ y \in \mathbb{R},\ x \ge 0\}$ does not satisfy the additivity
property, as presented in Theorem (1.2).
(1.34) Exercise (Complements to Theorem (1.7)). Let $\mu$ be a positive, diffuse,
Radon measure on $\mathbb{R}_+$. Together with $\phi_\mu$, introduce the function $\psi_\mu(t) =
\phi_\mu(t)\int_0^t \frac{ds}{\phi_\mu(s)^2}$.

1°) Prove that $\psi_\mu$ is a solution of the Sturm-Liouville equation $\psi'' = \mu\psi$ and
that, moreover, $\psi_\mu(0) = 0$, $\psi'_\mu(0) = 1$.
Note the Wronskian relation

$$\phi'_\mu(t)\psi_\mu(t) - \phi_\mu(t)\psi'_\mu(t) = -1.$$
2°) Prove that, for every $t \ge 0$, one has
3°) Check, by letting $t \to \infty$, that the previous formula agrees with the result
given in Theorem (1.7).

[Hint: If $\int_0^\infty t\,d\mu(t) < \infty$, then $\psi'_\mu(t) \to 1/\phi_\mu(\infty)\ (> 0)$.]
4°) Let $B$ be a $\mathrm{BM}^1(0)$. Prove that, if $\nu$ is a second Radon measure on $\mathbb{R}_+$,
then

$$E\left[\exp\left\{\int_0^t B_s\,\nu(ds) - \frac{1}{2}\int_0^t B_s^2\,\mu(ds)\right\}\right] = \frac{1}{\left(\psi'_\mu(t)\right)^{1/2}}\exp\left(\frac{1}{2}\int_0^t du\left(\int_u^t \frac{\phi_\mu(s)}{\phi_\mu(u)}\,\nu(ds)\right)^2 - c(t)\left(\int_0^t \psi_\mu(s)\,\nu(ds)\right)^2\right)$$

where $c(t) = \phi'_\mu(t)/2\psi'_\mu(t)$.

[Hint: Use the change of probability measure considered in the proof of Theorem (3.2) below.]
§2. Ray-Knight Theorems
Let $B$ be the standard linear BM and $(L^a_t)$ the family of its local times. The
Ray-Knight theorems stem from the desire to understand more thoroughly the
dependence of $(L^a_t)$ on the space variable $a$. To this end, we will first study the
process

$$Z^a = L^{1-a}_{T_1}, \qquad 0 \le a \le 1,$$

where $T_1 = \inf\{t : B_t = 1\}$. We will prove that this is a Markov process and in fact
a $\mathrm{BESQ}^2$ restricted to the time-interval $[0, 1]$. We call $(\mathscr{Z}_a)_{a\in[0,1]}$ the complete
and right-continuous filtration generated by $Z$. For this filtration, we have a result
which is analogous to that of Sect. 3, Chap. V, for the Brownian filtration.
(2.1) Proposition. Any r.v. $H$ of $L^2(\mathscr{Z}_a)$, $0 \le a \le 1$, may be written

$$H = E[H] + \int_0^{T_1} h_s\,1_{(B_s>1-a)}\,dB_s$$

where $h$ is predictable with respect to the filtration of $B$ and

$$E\left[\int_0^{T_1} h_s^2\,1_{(B_s>1-a)}\,ds\right] < \infty.$$
Proof. The subspace $\mathscr{H}$ of r.v.'s $H$ having such a representation is closed in
$L^2(\mathscr{Z}_a)$, because

$$E[H^2] = E[H]^2 + E\left[\int_0^{T_1} h_s^2\,1_{(B_s>1-a)}\,ds\right],$$

and one can argue as in Sect. 3 of Chap. V.

We now consider the set of r.v.'s $K$ which may be written

$$K = \exp\left\{-\int_0^a g(b)Z^b\,db\right\}$$

with $g$ a positive $C^1$-function with compact support contained in $]0, a[$. The vector
space generated by these variables is an algebra of bounded functions which,
thanks to the continuity of $Z$, generates the $\sigma$-field $\mathscr{Z}_a$. It follows from the
monotone class theorem that this vector space is dense in $L^2(\mathscr{Z}_a)$. As a result, it
is enough to prove the representation property for $K$.

Set $U_t = \exp\left\{-\int_0^t g(1 - B_s)\,ds\right\}$; thanks to the occupation times formula,
since $g(1 - x)$ vanishes on $]0, 1 - a[$,

$$K = \exp\left\{-\int_{1-a}^1 g(1 - x)L^x_{T_1}\,dx\right\} = U_{T_1}.$$

If $F \in C^2$, the semimartingale $M_t = F(B_t)U_t$ may be written

$$M_t = F(0) - \int_0^t F(B_s)U_s\,g(1 - B_s)\,ds + \int_0^t U_sF'(B_s)\,dB_s + \frac{1}{2}\int_0^t U_sF''(B_s)\,ds.$$

We may pick $F$ so as to have $F' \equiv 0$ on $]-\infty, 1 - a]$, $F(1) \ne 0$ and $F''(x) =
2g(1 - x)F(x)$. We then have, since $F' = F'\,1_{]1-a,\infty[}$,

$$M_{T_1} = F(0) + \int_0^{T_1} U_sF'(B_s)\,1_{(B_s>1-a)}\,dB_s$$

and, as $K = F(1)^{-1}M_{T_1}$, the proof is complete.
We may now state what we will call the first Ray-Knight theorem.

(2.2) Theorem. The process $Z^a$, $0 \le a \le 1$, is a $\mathrm{BESQ}^2(0)$ restricted to the time
interval $[0, 1]$.

Proof. From Tanaka's formula

$$\left(B_{T_1} - (1 - a)\right)^+ = \int_0^{T_1} 1_{(B_s>1-a)}\,dB_s + \tfrac{1}{2}L^{1-a}_{T_1},$$

it follows that

$$Z^a - 2a = -2\int_0^{T_1} 1_{(B_s>1-a)}\,dB_s.$$

It also follows that $Z^a$ is integrable; indeed, for every $t$,

$$E\left[L^{1-a}_{t\wedge T_1}\right] = 2E\left[\left(B_{t\wedge T_1} - (1 - a)\right)^+\right]$$

and passing to the limit yields $E[Z^a] = 2a$.
Now, pick $b < a$ and $H$ a bounded $\mathscr{Z}_b$-measurable r.v. Using the representation of the preceding proposition, we may write

$$E\left[(Z^a - 2a)H\right] = E\left[-2\int_0^{T_1} h_s\,1_{(B_s>1-b)}1_{(B_s>1-a)}\,ds\right] = E\left[-2\int_0^{T_1} h_s\,1_{(B_s>1-b)}\,ds\right] = E\left[(Z^b - 2b)H\right].$$

Therefore, $Z^a - 2a$ is a continuous martingale and, by Corollary (1.13) of Chap. VI,
its increasing process is equal to $4\int_0^a Z^u\,du$. Proposition (3.8) of Chap. V then
asserts that there exists a BM $\beta$ such that

$$Z^a = 2\int_0^a (Z^u)^{1/2}\,d\beta_u + 2a;$$

in other words, $Z$ is a $\mathrm{BESQ}^2(0)$ on $[0, 1]$. □
Remarks. 1°) This result may be extended to the local times of some diffusions, for example by
using the method of time-substitution as described in Sect. 3 Chap. X
(see Exercise (2.5)).

2°) The process $Z^a$ is a positive submartingale, which bears out the intuitive
feeling that $L^a_{T_1}$ has a tendency to decrease with $a$.

3°) In the course of the proof, we had to show that $Z^a$ is integrable. As it turns
out, $Z^a$ has moments of all orders, and is actually an exponential r.v., which was
proved in Sect. 4 Chap. VI.

4°) Using the scaling properties of BM and $\mathrm{BESQ}^2$, the result may be extended
to any interval $[0, c]$.
We now turn to the second Ray-Knight theorem. For $x > 0$, we set

$$\tau_x = \inf\left\{t : L^0_t > x\right\}.$$

(2.3) Theorem. The process $L^a_{\tau_x}$, $a \ge 0$, is a $\mathrm{BESQ}^0(x)$.
Proof. Let $g$ be a positive $C^1$-function with compact support contained in $]0, \infty[$
and $F_g$ the unique positive decreasing solution to the equation $F'' = gF$ such
that $F_g(0) = 1$ (see the discussion before Theorem (1.7) and Appendix 8). If
$f(\lambda, x) = \exp\left(-(\lambda/2)F'_g(0)\right)F_g(x)$, Itô's formula implies that, writing $L$ for $L^0$,

$$f\left(L_t, B_t^+\right) = 1 + \int_0^t f'_x\left(L_s, B_s^+\right)1_{(B_s>0)}\,dB_s + \frac{1}{2}\int_0^t f'_x\left(L_s, B_s^+\right)dL_s + \frac{1}{2}\int_0^t f''_{xx}\left(L_s, B_s^+\right)1_{(B_s>0)}\,ds + \int_0^t f'_\lambda\left(L_s, B_s^+\right)dL_s.$$

In the integrals with respect to $dL_s$, one can replace $B_s^+$ by 0, and since
$\tfrac{1}{2}f'_x(\lambda, 0) + f'_\lambda(\lambda, 0) = 0$, the corresponding terms cancel. Thus, using the integration by parts formula, and by the choice made of $F_g$, it is easily seen that

$$f\left(L_t, B_t^+\right)\exp\left(-\frac{1}{2}\int_0^t g(B_s^+)\,ds\right)$$

is a local martingale. This local martingale is moreover bounded on $[0, \tau_x]$, hence by optional stopping,

$$E\left[f\left(L_{\tau_x}, B_{\tau_x}^+\right)\exp\left(-\frac{1}{2}\int_0^{\tau_x} g(B_s^+)\,ds\right)\right] = f(0, 0) = 1.$$

But $B_{\tau_x} = 0$ a.s. since $\tau_x$ is an increase time for $L$ (see Sect. 2 Chap. VI) and of
course $L_{\tau_x} = x$, so the above formula reads

$$E\left[\exp\left(-\frac{1}{2}\int_0^{\tau_x} g(B_s^+)\,ds\right)\right] = \exp\left(\frac{x}{2}F'_g(0)\right).$$

By the occupation times formula, this may also be written

$$E\left[\exp\left(-\frac{1}{2}\int_0^\infty g(a)L^a_{\tau_x}\,da\right)\right] = \exp\left(\frac{x}{2}F'_g(0)\right).$$

If we now compare with Theorem (1.7), since $g$ is arbitrary, the proof is finished.
Remarks. 1°) The second Ray-Knight theorem could also have been proved by using the same pattern of proof as for the first (see Exercise (2.8)) and vice-versa (see Exercise (2.7)).
2°) The law of the r.v. $L^a_{T_x}$ has also been computed in Exercise (4.14) Chap. VI. The present result is much stronger since it gives the law of the process.
We will now use the first Ray-Knight theorem to give a useful BDG-type inequality for local times, a proof of which has already been hinted at in Exercise (1.14) of Chap. X.
(2.4) Theorem. For every $p\in\ ]0,\infty[$, there exist two constants $0 < c_p < C_p < \infty$ such that for every continuous local martingale $M$ vanishing at $0$,
$$c_p\,E\left[(M^*_\infty)^p\right] \le E\left[(L^*)^p\right] \le C_p\,E\left[(M^*_\infty)^p\right],$$
where $L^a$ is the family of local times of $M$ and $L^* = \sup_{a\in\mathbb{R}}L^a_\infty$.
Proof. One can of course, thanks to the BDG inequalities, use $\langle M,M\rangle^{1/2}_\infty$ instead of $M^*_\infty$ in the statement or in its proof. The occupation times formula yields
$$\langle M,M\rangle_\infty = \int_{-\infty}^{+\infty}L^a_\infty\,da = \int_{-M^*}^{M^*}L^a_\infty\,da \le 2M^*L^*.$$
Therefore, there exists a constant $d_p$ such that
$$E\left[(M^*_\infty)^{2p}\right] \le d_p\,E\left[(M^*_\infty)^{2p}\right]^{1/2}E\left[(L^*)^{2p}\right]^{1/2};$$
if $E\left[(M^*_\infty)^{2p}\right]$ is finite, which can always be achieved by stopping, we may divide by $E\left[(M^*_\infty)^{2p}\right]^{1/2}$ and get the left-hand side inequality.
458
Chapter XI. Bessel Processes and Ray-Knight Theorems
We now turn to the right-hand side inequality. If $\tau_t$ is the time-change associated with $\langle M,M\rangle$, we know from Sect. 1 Chap. V that $B_t = M_{\tau_t}$ is a BM, possibly stopped at $S = \langle M,M\rangle_\infty$. By a simple application of the occupation times formula, $(L^a_{\tau_t})$ is the family of local times, say $(\ell^a_t)$, of $B$, and therefore $L^a_\infty = \ell^a_S$. Consequently, it is enough to prove the right-hand side inequality whenever $M$ is a stopped BM and we now address ourselves to this situation.
We set $\sigma_n = \inf\{t : |B_t| = 2^n\}$. For $n = 0$, we have, since $\sigma_0 = T_1\wedge T_{-1}$,
$$E\left[L^*_{\sigma_0}\right] \le E\left[L^*_{T_1}\right] + E\left[L^*_{T_{-1}}\right].$$
By the first Ray-Knight theorem, $L^{1-a}_{T_1} - 2a$ is a martingale with moments of every order; thus, by Theorem (1.7) in Chap. II the above quantity is finite; in other words, there is a constant $K$ such that $E\left[L^*_{\sigma_0}\right] = K$ and, using the invariance under scaling of $(B_t, L_t)$ (Exercise (2.11) Chap. VI), $E\left[L^*_{\sigma_n}\right] = 2^nK$.
We now prove that the right-hand side inequality of the statement holds for $p = 1$. We will use the stopping time
$$T = \inf\{\sigma_n : \sigma_n \ge S\}$$
for which $B^*_S \le B^*_T \le 2B^*_S + 1$ (the 1 on the right is necessary when $T = \sigma_0$). Plainly, $E\left[L^*_S\right] \le E\left[L^*_T\right]$ and we compute this last quantity by means of the strong Markov property. Let $m$ be a fixed integer; we have
$$J = E\left[L^*_{T\wedge\sigma_m}\right] = E\left[\sum_n\left(L^*_{T\wedge\sigma_{n+1}} - L^*_{T\wedge\sigma_n}\right)\right].$$
Obviously, $\sigma_{n+1} = \sigma_n + \sigma_{n+1}\circ\theta_{\sigma_n}$ which, by the strong additivity of local times (Proposition (1.2) Chap. X), is easily seen to imply that
$$L^*_{\sigma_{n+1}} \le L^*_{\sigma_n} + L^*_{\sigma_{n+1}}\circ\theta_{\sigma_n}.$$
Therefore,
$$J \le E\left[\sum_n 1_{(T>\sigma_n)}L^*_{\sigma_{n+1}}\circ\theta_{\sigma_n}\right] = E\left[\sum_n 1_{(T>\sigma_n)}E_{B_{\sigma_n}}\left[L^*_{\sigma_{n+1}}\right]\right].$$
Furthermore, if $|a| = 2^n$, then, under the law $P_a$, $\sigma_{n+1}(B) = \sigma_{n+2}(B-a)$; since on the other hand $L^*_t(B-a) = L^*_t(B)$, we get
$$J \le E\left[\sum_n 1_{(T>\sigma_n)}E_0\left[L^*_{\sigma_{n+2}}\right]\right] = E\left[\sum_n 2^{n+2}K\,1_{(T>\sigma_n)}\right] \le 4KE\left[B^*_{T\wedge\sigma_m}\right].$$
By letting $m$ tend to infinity, we finally get $E\left[L^*_T\right] \le 4KE\left[B^*_T\right]$ and as a result,
$$E\left[L^*_S\right] \le E\left[L^*_T\right] \le 8KE\left[B^*_S + 1\right].$$
By applying this inequality to the Brownian motion $c^{-1}B_{c^2t}$, and to the time $c^{-2}S$, one can check that
$$E\left[L^*_S\right] \le 8K\left(E\left[B^*_S\right] + c\right).$$
Letting $c$ tend to zero we get our claim in the case $p = 1$, namely, going back to $M$,
$$E\left[L^*_\infty(M)\right] \le 8KE\left[M^*_\infty\right].\tag{$*$}$$
To complete the proof, observe that by considering, for a stopping time $S$, the loc. mart. $M_{S+t}$, $t\ge 0$, we get from ($*$):
$$E\left[L^*_\infty(M) - L^*_S(M)\right] \le 8KE\left[M^*_\infty\right].$$
By applying the Garsia-Neveu lemma of Exercise (4.29) Chap. IV, with $A_t = L^*_t$, $X = 8KM^*_\infty$ and $F(\lambda) = \lambda^p$, we get the result for all $p$'s. $\square$
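The inequality $\langle M,M\rangle_\infty \le 2M^*L^*$ that starts the proof can be illustrated for a discretized Brownian path (a sketch, not from the book; the grid resolution and bin width are arbitrary, and the local time is only estimated as an occupation density).

```python
import numpy as np

# Sketch (not from the book): illustrate <M,M>_infty <= 2 M* L* for a Brownian
# path stopped at time 1, where <B,B>_1 = 1.  The occupation times formula
# 1 = int L^a_1 da is checked with L^a_1 estimated as an occupation density.
rng = np.random.default_rng(1)
n = 200_000
dt = 1.0 / n
b = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

m_star = np.abs(b).max()                      # M* = sup_{t<=1} |B_t|
width = 0.01                                  # arbitrary bin width
edges = np.arange(-m_star - width, m_star + 2 * width, width)
occupation, _ = np.histogram(b, bins=edges)
local_time = occupation * dt / width          # occupation density ~ L^a_1
l_star = local_time.max()                     # L* = sup_a L^a_1 (estimate)

print(local_time.sum() * width)               # occupation formula: ~ 1 = <B,B>_1
print(1.0 <= 2.0 * m_star * l_star)           # the inequality used in the proof
```

The first printed quantity is the discretized occupation times formula; the second is the bound $\langle B,B\rangle_1 \le 2M^*L^*$, up to discretization error.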
* (2.5) Exercise (Local times of BES³). 1°) Let $L^a$, $a\ge 0$, be the family of local times of $BES^3(0)$. Prove that the process $L^a_\infty$, $a\ge 0$, is a $BESQ^2(0)$.
[Hint: Use the time-reversal result of Corollary (4.6) of Chap. VII.]
2°) For $p > 0$ let $\Lambda^a$ be the family of local times of $BES^{2p+3}(0)$. Prove that $\Lambda^a_\infty$, $a\ge 0$, has the same law as the process $|V_p(a)|^2$, $a\ge 0$, where
$$V_p(a) = a^{-p}\int_0^a s^p\,d\beta_s$$
with $\beta$ a $BM^2$.
[Hint: Use Exercise (1.19) in this chapter and Exercise (1.23) Chap. VI.]
3°) The result in 2°) may also be expressed: if $\ell^a$ is the family of local times of $BES^d(0)$ with $d\ge 3$, then
$$\left(\ell^a_\infty,\ a\ge 0\right) \stackrel{\text{(law)}}{=} \left((d-2)^{-1}a^{d-1}U\left(a^{2-d}\right),\ a\ge 0\right)$$
where $U$ is a $BESQ^2(0)$.
[Hint: Use property iv) in Proposition (1.10) Chap. XI.]
4°) Let $f$ be a positive Borel function on $]0,\infty[$ which vanishes on $[b,\infty[$ for some $b > 0$ and is bounded on $[c,\infty[$ for every $c > 0$. If $X$ is the $BES^d(0)$ with $d\ge 3$, prove that
$$\int_0^\infty f(X_s)\,ds < \infty\quad\text{a.s.}\qquad\text{iff}\qquad\int_0^b rf(r)\,dr < \infty.$$
[Hint: Apply the following lemma: let $\mu$ be a positive Radon measure on $]0,1]$; let $(V_r, r\in\ ]0,1])$ be a measurable, strictly positive process such that there exists a bounded Borel function $\phi$ from $]0,1]$ to $]0,\infty[$ for which the law of $\phi(r)^{-1}V_r$ does not depend on $r$ and admits a moment of order 1. Then
$$\int_0^1 V_r\,d\mu(r) < \infty\quad\text{a.s.}\qquad\text{iff}\qquad\int_0^1\phi(r)\,d\mu(r) < \infty.]$$
N.B. The case of dimension 2 is treated in the following exercise.
* (2.6) Exercise. 1°) Let $X$ be a $BES^2(0)$, $\lambda^a$ the family of its local times and $T_1 = \inf\{t : X_t = 1\}$. Prove that the process $\lambda^a_{T_1}$, $0 < a < 1$, has the same law as $aU_{-\log a}$, $0 < a < 1$, where $U$ is a $BESQ^2(0)$.
[Hint: Use 1°) of the preceding exercise and the result in Exercise (4.12) of Chap. VII.]
2°) With the same hypothesis and by the same device as in 4°) of the preceding exercise, prove that
$$\int_0^1 f(X_s)\,ds < \infty\quad\text{a.s.}\qquad\text{iff}\qquad\int_0^1 r|\log r|f(r)\,dr < \infty.$$
Conclude that for the planar BM, there exist functions $f$ such that $\int_0^t f(B_s)\,ds = +\infty$ $P_0$-a.s. for every $t > 0$ although $f$ is integrable for the two-dimensional Lebesgue measure. The import of this fact was described in Remark 4 after Theorem (3.12) Chap. X.
* (2.7) Exercise. (Another proof of the first Ray-Knight theorem). 1°) Let $Z$ be the unique positive solution to the SDE
$$Z_t = 2\int_0^t\sqrt{Z_s}\,d\beta_s + 2\int_0^t 1_{(0\le s\le 1)}\,ds,\qquad Z_0 = 0.$$
Prove that the stopping time $\sigma = \inf\{t > 0 : Z_t = 0\}$ is a.s. finite and $> 1$.
2°) Let $g$ be a positive continuous function on $\mathbb{R}$ with compact support and $f$ the strictly positive, increasing solution to the equation $f'' = 2fg$ such that $f'(-\infty) = 0$, $f(0) = 1$. With the notation of Theorem (2.2) prove that
$$E\left[\exp\left(-\int g(a)L^a_{T_1}\,da\right)\right] = f(1)^{-1}.$$
3°) Set $v(x) = f(1-x)$ for $x\ge 0$; check that
$$v(a\wedge 1)^{-1}\exp\left(\frac{v'(a)}{2v(a)}Z_a - \int_0^a g(1-b)Z_b\,db\right)$$
is a local martingale and conclude that
$$E\left[\exp\left(-\int_0^\infty g(1-b)Z_b\,db\right)\right] = f(1)^{-1}.$$
4°) Prove that $L^{1-a}_{T_1}$, $a\ge 0$, has the same law as $Z_a$, $a\ge 0$, which entails in particular Theorem (2.2).
* (2.8) Exercise. (Another proof of the second Ray-Knight theorem). 1°) In the situation of Theorem (2.3), call $(\mathscr{E}_a)$ the right-continuous and complete filtration of $L^a_{T_x}$, $a\ge 0$. Prove that any variable $H$ in $L^2(\mathscr{E}_\infty)$ may be written
for a suitable $h$.
2°) Prove that $L^a_{T_x} - x$ is an $(\mathscr{E}_a)$-martingale and derive therefrom another proof of Theorem (2.3).
* (2.9) Exercise. (Proof by means of the filtration of excursions). Let $B$ be the standard linear BM. For $x\in\mathbb{R}$, call $\tau^x_t$ the time-change inverse of $\int_0^t 1_{(B_s\le x)}\,ds$ and set $\mathscr{E}_x = \sigma\left(B_{\tau^x_t},\ t\ge 0\right)$.
1°) Prove that $\mathscr{E}_x\subset\mathscr{E}_y$ for $x\le y$.
2°) Prove that, for $H\in L^2(\mathscr{E}_x)$, there exists an $(\mathscr{F}_t)$-predictable process $h$ such that $E\left[\int_0^\infty h^2_s 1_{(B_s\le x)}\,ds\right] < \infty$ and
$$H = E[H] + \int_0^\infty h_s 1_{(B_s\le x)}\,dB_s.$$
3°) For $x\in\mathbb{R}$, define $\mathscr{G}^x_t = \bigcap_{\varepsilon>0}\sigma\left(\mathscr{F}_{t+\varepsilon},\mathscr{E}_x\right)$. Prove that if $Y$ is an $(\mathscr{F}_t)$-local martingale, the process $\int_0^t 1_{(B_s>x)}\,dY_s$ is a local martingale with respect to the filtration $(\mathscr{G}^x_t)$.
4°) Let $a\in\mathbb{R}$, $T$ be a $(\mathscr{G}^a_t)$-stopping time such that $B_T\le a$ a.s. and $L^a_T$ is $\mathscr{E}_a$-measurable. For $a\le x < y$, set
$$V_t = \frac{1}{2}\left(L^y_t - L^x_t\right) + y^- - x^- - (B_t - y)^+ + (B_t - x)^+$$
and show that
$$E\left[\langle V,V\rangle_T\mid\mathscr{E}_x\right] < +\infty.$$
[Hint: For $p > 0$, set $F(z) = \cosh\left(\sqrt{2p}\,(y - \sup(z,x))^+\right)$ and $2c = \sqrt{2p}\tanh\left(\sqrt{2p}\,(y-x)\right)$. The process $F(B_{t\wedge T})\exp\left(-c\left(L^x_t - L^x_{t\wedge T}\right)\right)\exp\left(-p\int_0^{t\wedge T}1_{(x<B_s\le y)}\,ds\right)$ is a bounded $(\mathscr{G}^x_t)$-martingale.]
5°) Prove that $L^y_T + 2y^-$, $y\ge a$, is an $(\mathscr{E}_y)_{y\ge a}$-continuous martingale with increasing process $4\int_a^y L^z_T\,dz$. Derive therefrom another proof of the Ray-Knight theorems.
* (2.10) Exercise (Points of increase of Brownian motion). Let $B$ be the standard linear BM.
1°) Let $r$ be a positive number and $\Gamma$ the set of $\omega$'s for which there exists a time $U(\omega) < r$ such that
$$B_t(\omega) < B_{U(\omega)}(\omega)\quad\text{for }0\le t < U(\omega),\qquad B_t(\omega) > B_{U(\omega)}(\omega)\quad\text{for }U(\omega) < t\le r.$$
Let $x$ be a rational number and set $S_t = \sup_{s\le t}B_s$; prove that a.s. on $\Gamma\ \cap$
$\{B_U < x < S_r\}$, we have $L^x_{T_x} = L^x_r > 0$ and that consequently $\Gamma$ is negligible.
2°) If $f$ is a continuous function on $\mathbb{R}_+$, a point $t_0$ is called a point of increase of $f$ if there is an $\varepsilon > 0$ such that
$$f(t)\le f(t_0)\quad\text{for }t_0-\varepsilon\le t < t_0,\qquad f(t)\ge f(t_0)\quad\text{for }t_0 < t\le t_0+\varepsilon.$$
Prove that almost all Brownian paths have no points of increase. This gives another proof of the non-differentiability of Brownian paths.
[Hint: Use Exercise (2.15) in Chap. VI to replace, in the case of the Brownian path, the above inequalities by strict inequalities.]
3°) Prove that for any $T > 0$ and any real number $x$ the set
$$A_T = \{t\le T : B_t = x\}$$
is a.s. of one of the following four kinds: $\emptyset$, a singleton, a perfect set, the union of a perfect set and an isolated singleton.
* (2.11) Exercise. 1°) Derive the first Ray-Knight theorem from the result in Exercise (4.17) Chap. VI by checking the equality of the relevant Laplace transforms.
2°) Similarly, prove that the process $L^a_{T_1}$, $a\le 0$, has the law of $(1-a)^2X\left(\left((1-a)^{-1} - (1-m)^{-1}\right)^+\right)$ where $X$ is a $BESQ^4$ and $m$ a r.v. on $]-\infty,0[$ independent of $X$ with density $(1-x)^{-2}$.
[Hint: For the law of $m$, see Proposition (3.13) i) in Chap. VI.]
* (2.12) Exercise. Let $B$ be the standard linear BM, $L^a$ the family of its local times and set $Y_t = L^{B_t}_t$.
1°) If $\mathscr{F}^{(t)}_s = \mathscr{F}^B_s\vee\sigma(B_t)$, prove that for each $t$
$$\frac{1}{2}Y_t = B_t\wedge 0 - \int_0^t 1_{(B_s>B_t)}\,dB_s$$
where the stochastic integral is taken in the filtration $(\mathscr{F}^{(t)}_s)$ (see Exercise (1.39) Chap. IV).
2°) Prove that for every $t$,
$$\sup_\Delta E\left[\sum_i\left(Y_{s_{i+1}} - Y_{s_i}\right)^4\right] < \infty$$
where $\Delta = (s_i)$ ranges through the finite subdivisions of $[0,t]$.
[Hint: Use the decomposition of $B$ in the filtration $(\mathscr{F}^{(t)})$ and the BDG inequalities for local martingales and for local times.]
(2.13) Exercise. Retain the notation of the preceding exercise and let $S$ be a $(\mathscr{F}^B_t)$-stopping time such that the map $x\to L^x_S$ is a semimart. Prove that if $\phi$ is continuous, then, in the notation of Exercise (1.35) Chapter VI,
$$P\text{-}\lim_{n\to\infty}\sum_i\left(\int_0^S\phi(Y_u)\,dL^{x_{i+1}}_u - \int_0^S\phi(Y_u)\,dL^{x_i}_u\right)^2 = 4\int_a^b L^x_S\,\phi\left(L^x_S\right)^2dx.$$
By comparison with the result of Exercise (1.35) Chap. VI, prove that $Y$ is not a $(\mathscr{F}^B_t)$-semimart.
[Hint: Use the result in 3°) Exercise (1.33) Chapter IV.]
(2.14) Exercise. (Time asymptotics via space asymptotics. Continuation of Exercise (1.32)). Prove the result of Exercise (3.20) Chapter X in the case $d = 3$, by considering the expression
$$\left(\log\sqrt{u}\right)^{-1}\int_0^\infty|B_s|^{-2}1_{\{1\le|B_s|\le\sqrt{u}\}}\,ds.$$
[Hint: Use the Ray-Knight theorems for the local times of $BES^3(0)$ as described in Exercise (2.5).]
§3. Bessel Bridges
In this section, which will not be needed in the sequel save for some definitions, we shall extend some of the results of Sect. 1 to the so-called Bessel Bridges. We take $\delta > 0$ throughout.
For any $a > 0$, the space $W_a = C([0,a],\mathbb{R})$ endowed with the topology of uniform convergence is a Polish space and the $\sigma$-algebra generated by the coordinate process $X$ is the Borel $\sigma$-algebra (see Sect. 1 in Chap. XIII). As a result, there is a regular conditional distribution for $P^\delta_x[\,\cdot\mid X_a]$, namely a family $P^{\delta,a}_{x,y}$ of probability measures on $W_a$ such that for any Borel set $\Gamma$,
$$P^\delta_x\left[\Gamma\cap\{X_a\in A\}\right] = \int_A P^{\delta,a}_{x,y}[\Gamma]\,\mu_a(dy),$$
where $\mu_a$ is the law of $X_a$ under $P^\delta_x$. Loosely speaking,
$$P^{\delta,a}_{x,y}[\Gamma] = P^\delta_x\left[\Gamma\mid X_a = y\right].$$
For fixed $x$, $\delta$ and $a$, these transition probabilities are determined up to sets of measure 0 in $y$; but we can choose a version by using the explicit form found in Sect. 1 for the density $p^\delta_t$. For $y > 0$, we may define $P^{\delta,a}_{x,y}$ by saying that for $0 < t_1 < \cdots < t_n < a$, the law of $(X_{t_1},\ldots,X_{t_n})$ under $P^{\delta,a}_{x,y}$ is given by the density
$$p^\delta_{t_1}(x,x_1)\,p^\delta_{t_2-t_1}(x_1,x_2)\cdots p^\delta_{a-t_n}(x_n,y)\big/p^\delta_a(x,y)$$
with respect to $dx_1\,dx_2\cdots dx_n$. This density is a continuous function of $y$ on $\mathbb{R}_+\setminus\{0\}$. Moreover, since $I_\nu(z)$ is equivalent for small $z$ to $c_\nu z^\nu$ where $c_\nu$ is a constant, it is not hard to see that these densities have limits as $y\to 0$ and that the limits themselves form a projective family of densities for a probability measure which we call $P^{\delta,a}_{x,0}$. From now on, $P^{\delta,a}_{x,y}$ will always stand for this canonical system of probability distributions. Notice that the map $(x,y)\to P^{\delta,a}_{x,y}$ is continuous in the weak topology on probability measures which is introduced in Chap. XIII.
The same analysis may be carried through with $Q^\delta_x$ instead of $P^\delta_x$ and leads to a family $Q^{\delta,a}_{x,y}$ of probability measures; thus, we lay down the

(3.1) Definition. A continuous process, the law of which is equal to $P^{\delta,a}_{x,y}$ (resp. $Q^{\delta,a}_{x,y}$) is called the Bessel Bridge (resp. Squared Bessel Bridge) from $x$ to $y$ over $[0,a]$ and is denoted by $BES^\delta_a(x,y)$ (resp. $BESQ^\delta_a(x,y)$).

All these processes are inhomogeneous Markov processes; one may also observe that the square of $BES^\delta_a(x,y)$ is $BESQ^\delta_a(x^2,y^2)$.
Of particular interest in the following chapter is the case of $BES^3_a(0,0)$. In this case, since we have explicit expressions for the densities of $BES^3$ which are given in Sect. 3 of Chap. VI, we may compute the densities of $BES^3_a(0,0)$ without having to refer to the properties of Bessel functions. Let us put $l_t(y) = (2\pi t^3)^{-1/2}\,y\exp\left(-y^2/2t\right)1_{(y>0)}$ and call $q_t$ the density of the semigroup of BM killed at $0$. If $0 < t_1 < t_2 < \cdots < t_n < a$, by the results in Sect. 3 Chap. VI, the density of $(X_{t_1},\ldots,X_{t_n})$ under the law $P^{3,a}_{0,z}$ is equal to
$$l_{t_1}(y_1)\,q_{t_2-t_1}(y_1,y_2)\cdots q_{a-t_n}(y_n,z)\big/l_a(z).$$
Letting $z$ converge to zero, we get the corresponding density for $P^{3,a}_{0,0}$, namely
$$2\left(2\pi a^3\right)^{1/2}l_{t_1}(y_1)\,q_{t_2-t_1}(y_1,y_2)\cdots q_{t_n-t_{n-1}}(y_{n-1},y_n)\,l_{a-t_n}(y_n).$$
We aim at extending Theorem (1.7) to $BESQ^\delta_a(x,y)$.

(3.2) Theorem. Let $\mu$ be a measure with support in $[0,1]$. There exist three constants $A$, $\bar A$, $B$ depending only on $\mu$, such that
$$Q^{\delta,1}_{x,y}\left[\exp\left(-\frac{1}{2}X_\mu\right)\right] = A^x\,\bar A^y\,B^2\,I_\nu\!\left(\sqrt{xy}\,B^2\right)\big/I_\nu\!\left(\sqrt{xy}\right).$$
Proof. We retain the notation used in the proof of Theorem (1.7) and define a probability measure $R^{\delta,\mu}_x$ on $\mathscr{F}_1 = \sigma(X_s, s\le 1)$ by $R^{\delta,\mu}_x = Z^\mu_1\cdot Q^\delta_x$. The law of $X_1$ under $R^{\delta,\mu}_x$ has a density $r^{\delta,\mu}_1(x,\cdot)$ which we propose to compute. By an application of Girsanov's theorem, under $R^{1,\mu}_x$, the coordinate process $X$ is a solution to the SDE
$$X_t = x + 2\int_0^t\sqrt{X_s}\,d\beta_s + 2\int_0^t F_\mu(s)X_s\,ds + t.\tag{3.1}$$
If $H$ is a solution to
$$H_t = u + B_t + \int_0^t F_\mu(s)H_s\,ds,\tag{3.2}$$
then $H^2$ is a solution to Eq. (3.1) with $x = u^2$ and $\beta_t = \int_0^t(\operatorname{sgn}H_s)\,dB_s$. But Eq. (3.2) is a linear equation the solution of which is given by Proposition (2.3) Chap. IX. Thus, $H_t$ is a Gaussian r.v. with mean $um(t)$ and variance $\sigma^2(t)$ where
$$m(t) = \exp\left(\int_0^t F_\mu(s)\,ds\right),\qquad\sigma^2(t) = m^2(t)\int_0^t m^{-2}(s)\,ds.$$
If we recall that $q^1_t(x,\cdot)$ is the density of the square of a Gaussian r.v. centered at $\sqrt{x}$ and with variance $t$, we see that we may write
$$r^{1,\mu}_t(x,\cdot) = q^1_{\sigma^2(t)}\left(xm^2(t),\cdot\right).$$
Furthermore, it follows from Theorem (1.2) that
$$R^{\delta,\mu}_x * R^{\delta',\mu}_{x'} = R^{\delta+\delta',\mu}_{x+x'};$$
as a result
$$r^{\delta,\mu}_t(x,\cdot) = q^\delta_{\sigma^2(t)}\left(xm^2(t),\cdot\right).$$
We turn to the proof of the theorem. For any Borel function $f\ge 0$,
$$\int Q^{\delta,1}_{x,y}\left[\exp\left(-\frac{1}{2}X_\mu\right)\right]f(y)\,q^\delta_1(x,y)\,dy = R^{\delta,\mu}_x\left[\left(Z^\mu_1\right)^{-1}\exp\left(-\frac{1}{2}X_\mu\right)f(X_1)\right] = \exp\left\{\frac{1}{2}\left(F_\mu(0)x + \delta\log\phi_\mu(1)\right)\right\}R^{\delta,\mu}_x\left[f(X_1)\right]$$
since $\phi'_\mu(1) = 0$. Consequently, for Lebesgue almost every $y$,
$$Q^{\delta,1}_{x,y}\left[\exp\left(-\frac{1}{2}X_\mu\right)\right] = \exp\left\{\frac{1}{2}\left(F_\mu(0)x + \delta\log\phi_\mu(1)\right)\right\}r^{\delta,\mu}_1(x,y)\big/q^\delta_1(x,y) = \exp\left\{\frac{1}{2}\left(F_\mu(0)x + \delta\log\phi_\mu(1)\right)\right\}q^\delta_{\sigma^2(1)}\left(xm^2(1),y\right)\big/q^\delta_1(x,y).$$
Using the explicit expressions for $\sigma^2(1)$ and $m^2(1)$ and the value of $q^\delta_1$ found in Sect. 1, we get the desired result for a.e. $y$ and by continuity for every $y$. $\square$
In some cases, one can compute the above constants. We thus get

(3.3) Corollary. For every $b\ge 0$,
$$Q^{\delta,1}_{x,y}\left[\exp\left(-\frac{b^2}{2}\int_0^1X_s\,ds\right)\right] = \left(\frac{b}{\sinh b}\right)\exp\left\{\frac{x+y}{2}\left(1-b\coth b\right)\right\}I_\nu\!\left(\frac{b\sqrt{xy}}{\sinh b}\right)\Big/I_\nu\!\left(\sqrt{xy}\right).$$
In particular,
$$Q^{\delta,1}_{x,0}\left[\exp\left(-\frac{b^2}{2}\int_0^1X_s\,ds\right)\right] = \left(\frac{b}{\sinh b}\right)^{\delta/2}\exp\left\{\frac{x}{2}\left(1-b\coth b\right)\right\}.$$
Proof. The proof is patterned after Corollary (1.8). The details are left to the reader. $\square$
We are now going to extend Corollary (1.12) to Bessel Bridges. We will need the following

(3.4) Lemma. Let $X$ be a real-valued Markov process, $g$ a positive Borel function such that $\int_0^\infty g(X_s)^{-1}ds = \infty$. If we set
$$C_t = \int_0^t g(X_s)^{-1}ds,\qquad \tilde X_t = X_{c_t},$$
where $c_t$ is the time-change associated with $C$, then $dc_t = g(\tilde X_t)\,dt$. Moreover, if we assume the existence of the following densities
$$p_t(x,y) = P_x[X_t\in dy]/dy,\qquad \tilde p_t(x,y) = P_x[\tilde X_t\in dy]/dy,$$
$$h_{x,y}(t,u) = P_x[C_t\in du\mid X_t = y]/du,\qquad \tilde h_{x,y}(t,u) = P_x[c_t\in du\mid \tilde X_t = y]/du,$$
then $dt\,du\,dy$-a.e.
$$\tilde p_u(x,y)\,g(y)\,\tilde h_{x,y}(u,t) = p_t(x,y)\,h_{x,y}(t,u).$$
Proof. The first sentence has already been proved several times. To prove the second, pick arbitrary positive Borel functions $\phi$, $\psi$ and $f$; we have
$$E_x\left[\int_0^\infty dt\,\psi(t)f(X_t)\phi(C_t)\right] = \int_0^\infty dt\,\psi(t)\int p_t(x,y)f(y)\,dy\int\phi(u)h_{x,y}(t,u)\,du,$$
but on the other hand, using the time-change formulas, the same expression is equal to
$$E_x\left[\int_0^\infty dt\,\psi(c_t)f(\tilde X_t)g(\tilde X_t)\phi(t)\right] = \int_0^\infty dt\,\phi(t)\int\tilde p_t(x,y)g(y)f(y)\,dy\int\psi(u)\tilde h_{x,y}(t,u)\,du.$$
Interchanging the roles played by $t$ and $u$ in the last expression and comparing with that given above yields the result. $\square$
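The time-change relation $dc_t = g(\tilde X_t)\,dt$ of the lemma can be sanity-checked on a deterministic example (a sketch, not from the book; the choices $X_t = 1+t$ and $g(x) = x$ are ours): then $C_t = \log(1+t)$, $c_t = e^t - 1$ and $\tilde X_t = e^t$.

```python
import math

# Deterministic sanity check (not from the book) of dc_t = g(X~_t) dt in
# Lemma (3.4): take X_t = 1 + t and g(x) = x, so that
#   C_t = int_0^t ds / (1 + s) = log(1 + t),   c_t = C^{-1}(t) = e^t - 1,
# and the time-changed process is X~_t = X_{c_t} = e^t.
def C(t):
    return math.log1p(t)          # C_t

def c(t):
    return math.expm1(t)          # inverse time-change c_t

def X(t):
    return 1.0 + t

def g(x):
    return x

t = 0.7
h = 1e-6
dc_dt = (c(t + h) - c(t - h)) / (2.0 * h)   # numerical derivative of c at t
x_tilde = X(c(t))                            # X~_t = e^t

print(abs(C(c(t)) - t) < 1e-12)              # c is indeed the inverse of C
print(abs(dc_dt - g(x_tilde)) < 1e-4)        # dc_t/dt = g(X~_t)
```

Both checks print `True`: $c$ inverts $C$, and the derivative of the inverse time-change is $g$ evaluated along the time-changed path.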
We may now state

(3.5) Theorem. For $\nu > 0$, $p$ and $q$ two conjugate numbers $> 1$ and $\rho_\nu(t)$, $0\le t\le 1$, a $BES^{(\nu)}_1(0,0)$, we put
$$\gamma_{\nu,p} = q^{2/p}\left(\int_0^1\rho_\nu^{2q/p}(s)\,ds\right)^{-1/q}.$$
Then, with $A_\nu = \left(2^\nu\Gamma(\nu+1)\right)^{-1}$, for every positive Borel function $f$,
$$A_{\nu q}\,E\left[f\left(\gamma_{\nu,p}\right)\right] = A_\nu\,E\left[f\left(x_{\nu,p}\right)\left(x_{\nu,p}/q^2\right)^{\nu q}\right].$$
Proof. We use Lemma (3.4) with $g(x) = x^{2/p}$ and $X$ a $BES^{(\nu)}(0)$. Using Proposition (1.11), we see that
$$\tilde X_t = X_{c_t} = \left(q^{-1}R_{\nu q}(t)\right)^q$$
where $R_{\nu q}$ is a $BES^{(\nu q)}$. It follows from the first sentence of Lemma (3.4) that
$$c_t = \int_0^t g(\tilde X_s)\,ds = q^{-2q/p}\int_0^t R_{\nu q}(s)^{2q/p}\,ds.$$
By a straightforward change of variable, the density $\tilde p_t(0,y)$ of $\tilde X_t$ may be deduced from $p^{(\nu q)}_t(0,y)$ and is found to be equal to
$$A_{\nu q}\,t^{-(\nu q+1)}\,q^{2\nu q+1}\,y^{2\nu+(2/q)-1}\exp\left(-\left(qy^{1/q}\right)^2\!\big/2t\right).$$
If we write the equality of Lemma (3.4) for $u = 1$ and $x = 0$, we obtain
$$A_{\nu q}\,q^{2\nu q+1}\exp\left(-\left(qy^{1/q}\right)^2\!\big/2\right)\tilde h_{0,y}(1,t) = A_\nu\,t^{-(\nu+1)}\exp\left(-y^2/2t\right)h_{0,y}(t,1)$$
because of the cancellation of the powers of $y$. If we further make $y = 0$, we get
$$A_{\nu q}\,q^{2\nu q+1}\,\tilde h_{0,0}(1,t) = A_\nu\,t^{-(\nu+1)}\,h_{0,0}(t,1).$$
Next, by the definition of $h_{x,y}$ and the scaling properties of $X$ seen in Proposition (1.10), one can prove that
$$h_{0,0}(t,1) = t^{-1/q}\,h_{0,0}\left(1,t^{-1/q}\right);$$
this is left to the reader as an exercise. As a result
$$A_{\nu q}\,q^{2\nu q+1}\,\tilde h_{0,0}(1,t) = A_\nu\,t^{-(\nu+1+(1/q))}\,h_{0,0}\left(1,t^{-1/q}\right).$$
Now, $\tilde h_{0,0}(1,t)$ is the density of $\gamma^{-q}_{\nu,p}$ and $h_{0,0}(1,t)$ is that of $x_{\nu,p}$. Therefore if $f$ is a positive Borel function
$$A_{\nu q}\int f\left(t^{-1/q}\right)\tilde h_{0,0}(1,t)\,dt = A_\nu\,q^{-(2\nu q+1)}\int f\left(t^{-1/q}\right)t^{-(\nu+1+(1/q))}\,h_{0,0}\left(1,t^{-1/q}\right)dt.$$
Making the change of variable $t = u^{-q}$, this is further equal to
$$A_\nu\int f(u)\left(u/q^2\right)^{\nu q}h_{0,0}(1,u)\,du = A_\nu\,E\left[f\left(x_{\nu,p}\right)\left(x_{\nu,p}/q^2\right)^{\nu q}\right],$$
as was to be proved. $\square$

(3.6) Exercise. 1°) Prove that under $Q^\delta_x$, the process $(1-u)^2X_{u/(1-u)}$, $0\le u < 1$, has the law $Q^{\delta,1}_{x,0}$.
2°) Prove that
$$Q^{\delta,1}_{x,0} * Q^{\delta',1}_{x',0} = Q^{\delta+\delta',1}_{x+x',0}.$$
(3.7) Exercise. Let $\hat P$ be the law of $X_{1-t}$, $0\le t\le 1$, when $X$ has law $P$. Prove that
$$\hat Q^{\delta,1}_{x,y} = Q^{\delta,1}_{y,x}.$$
* (3.8) Exercise. State and prove a relationship between Bessel Bridges of integer dimensions and the modulus of Brownian Bridges. As a result, the corresponding Bessel Bridges are semimartingales.

(3.9) Exercise. If $\rho$ is the $BES^\delta_1(0,0)$, prove that
(3.10) Exercise. For a fixed $a > 0$ and any $x,y,\delta,t$ such that $t > a$, prove that $P^{\delta,t}_{x,y}$ has a density $Z^{(t)}_a$ on the $\sigma$-algebra $\mathscr{F}_a = \sigma(X_s, s\le a)$ with respect to $P^\delta_x$. Show that as $t\to\infty$, $(Z^{(t)}_a)$ converges to 1 pointwise and in $L^1(P^\delta_x)$, a fact which has an obvious intuitive interpretation. The same result could be proved for the Brownian Bridge.
[Hint: Use the explicit form of the densities.]
* (3.11) Exercise. (Stochastic differential equation satisfied by the Bessel Bridges).
1°) Prove that for $t < 1$,
$$P^{\delta,1}_{0,0} = h(t,X_t)\cdot P^\delta_0\quad\text{on }\mathscr{F}_t,$$
where $h(t,x) = \lim_{y\to 0}p^\delta_{1-t}(x,y)/p^\delta_1(0,y) = (1-t)^{-\delta/2}\exp\left(-x^2/2(1-t)\right)$.
[Hint: Compute $E_{P^\delta_0}\left[F(X_u, u\le t)\phi(X_1)\right]$ by conditioning with respect to $\sigma(X_1)$ and with respect to $\mathscr{F}_t$.]
2°) Prove that for $\delta\ge 2$, the Bessel Bridge between 0 and 0 over $[0,1]$ is the unique solution to the SDE
$$X_t = B_t + \int_0^t\left(\frac{\delta-1}{2X_s} - \frac{X_s}{1-s}\right)ds,\qquad X_0 = 0.$$
[Hint: Use Girsanov's theorem.]
3°) Prove that the squared Bessel Bridge is the unique solution to the SDE
$$X_t = 2\int_0^t\sqrt{X_s}\,dB_s + \int_0^t\left(\delta - \frac{2X_s}{1-s}\right)ds.$$
(3.12) Exercise. 1°) Prove that the Bessel processes satisfy the hypothesis of Exercise (1.12) Chap. X.
2°) Prove that for $\delta > 2$,
$$P^{\delta,a}_{x,y} = \left(P^\delta_x\right)^{L_y}\left[\,\cdot\mid L_y = a\right],$$
where $L_y = \sup\{t : X_t = y\}$ and the notation $\left(P^\delta_x\right)^{L_y}$ is defined in Sect. 4 of Chap. XII.
[Hint: Use Exercise (4.16) Chap. VII.]
Notes and Comments
Sect. 1. The systematic study of Bessel processes was initiated in McKean [1]; besides their interest in the study of Brownian motion (see Chaps. VI and VII)
they afford basic examples of diffusions and come in handy for testing general
conjectures. Theorem (1.2) is due to Shiga-Watanabe [1]; actually, these authors
characterize the family of diffusion processes which possess the additivity property
of Theorem (1.2) (in this connection, see Exercise (1.13)). Corollaries (1.3) and (1.4) are from Molchanov [1]. An explicit Lévy-Khintchine formula for $Q^\delta_x$ is given in Exercise (4.21), Chap. XII. Exercise (1.14) presents further examples of infinitely divisible laws on $W = C(\mathbb{R}_+,\mathbb{R})$. The following question arises naturally:
Question 1. Under which condition on the covariance $\Gamma(s,t)$ of a continuous Gaussian process $(X_t, t\ge 0)$ is the distribution of $(X^2_t, t\ge 0)$ infinitely divisible?
Some answers are given by Eisenbaum [4].
Theorem (1.7) is taken from Pitman-Yor [3] and Corollary (1.8) is in Levy [3].
Proposition (1.11) is in Biane-Yor [1] where these results are also discussed at the
Ito's excursions level and lead to the computation of some of the laws associated
with the Hilbert transform of Brownian local times, the definition of which was
given in Exercise (1.29), Chap. VI.
Exercise (1.17) is due to Calais-Genin [1]. Exercise (1.22) is from Yor [10]
and Pitman-Yor [1]; Gruet [3] develops similar results for the family of hyperbolic
Bessel processes, mentioned in Chap. VIII, Sect. 3, Case 3.
The relationship described in this exercise together with the related Hartman-Watson distribution (see Yor [10]) have proven useful in several questions: the
shape of random triangles (Kendall [1]), flows of Bessel processes (Hirsch-Song
[1]), the study of SLE processes (Werner [1]).
For Exercise (1.25) see Molchanov-Ostrovski [1] who identify the distribution
of the local time at 0 for a $BES^\delta$, $\delta < 2$, as the Mittag-Leffler distribution; their result may be recovered from the result in question 2°).
Exercise (1.26) may serve as a starting point for the deep study of $BES^\delta$, $0 < \delta < 1$, by Bertoin [6] who shows that $(0,0)$ is regular for the 2-dimensional
process (p, k) and develops the corresponding excursion theory.
Exercise (1.28) is a particular example of the relationship between exponentials of Lévy processes and semi-stable processes studied by Lamperti ([1], [2]) who
showed that (powers of) Bessel processes are the only semi-stable one-dimensional
diffusions. Several applications of Lamperti's result to exponential functionals of
Brownian motion are found in Yor ([27], [28]); different applications to Cauchy
principal values are found in Bertoin [9].
The result presented in Exercise (1.29) is a particularly striking case of the extensions of Pitman's theorem to transient diffusions obtained by Saisho-Tanemura
[1]; see also Rauscher [1], Takaoka [1] and Yor [23].
Exercise (1.33) is taken from A. Göing's thesis [1]. Exercise (1.34) is taken partly from Pitman-Yor [3], partly from Föllmer-Wu-Yor [1].
Bessel processes and their squares have undergone a series of generalizations,
especially with matrix-valued diffusion processes, leading for example to Wishart
processes (Bru [1]-[3], Donati-Martin et al. [1]). Pitman's theorem has been generalized within this framework (Bougerol-Jeulin [1], O'Connell-Yor [1]) and shown to be closely connected with some important results of Littelmann in Representation theory (Biane-Bougerol-O'Connell [1]).
Sect. 2. The Ray-Knight theorems were proved independently in Ray [1] and
Knight [2]. The proof given here of the first Ray-Knight theorem comes from
Jeulin-Yor [1] as well as the second given in Exercise (2.8). The proof in Exercise
(2.7) is due to McGill [1] and provides the pattern for the proof of the second
Ray-Knight theorem given in the text. Another proof is given in Exercise (2.9)
which is based on Walsh [2] (see also Jeulin [3]). Exercise (2.11) is from Ray [1].
The proof of Theorem (2.4) is the original proof of Barlow-Yor [1], its difficulty
stemming from the fact that, with this method, the integrability of L has to be
taken care of. Actually, Bass [2] and Davis [5] proved that much less was needed
and gave a shorter proof which is found in Exercise (1.14) of Chap. X. See Gundy
[1] for some related developments in analysis.
Exercises (2.5) and (2.6) are from Williams [3] and Le Gall [3], the lemma in
the Hint of (2.5) 4°) being from Jeulin ([2] and [4]) and the final result of (2.6)
from Pitman-Yor [5]. The result of Exercise (2.10) is due to Dvoretsky et al [1]
and Exercise (2.12) to Yor [12].
Although it is not discussed in this book, there is another approach, the so-called Dynkin's isomorphism theorem (Dynkin [3]), to Ray-Knight theorems (see
Sheppard [1], Eisenbaum [2], [3], [5]). A number of consequences for local times
of Levy processes have been derived by Marcus and Rosen ([1], [2]).
Exercise (2.13) comes from discussions with B. Toth and W. Werner. Another proof, related to the discussion in Exercise (2.12), of the fact that $L^{B_t}_t$ is not a semimartingale has been given by Barlow [1]. The interest in this question stems from the desire to extend the Ray-Knight theorems to more general classes of processes (for instance diffusions) and possibly get processes in the space variables other than squares of Bessel or OU processes.
Sect. 3. For the results of this section, see Pitman-Yor ([1] and [2]) and Biane-Yor [1].
Chapter XII. Excursions
§1. Prerequisites on Poisson Point Processes
Throughout this section, we consider a measurable space $(U,\mathscr{U})$ to which is added a point $\delta$ and we set $U_\delta = U\cup\{\delta\}$, $\mathscr{U}_\delta = \sigma(\mathscr{U},\{\delta\})$.
(1.1) Definition. A process $e = (e_t, t > 0)$ defined on a probability space $(\Omega,\mathscr{F},P)$ with values in $(U_\delta,\mathscr{U}_\delta)$ is said to be a point process if
i) the map $(t,\omega)\to e_t(\omega)$ is $\mathscr{B}(]0,\infty[)\otimes\mathscr{F}$-measurable;
ii) the set $D_\omega = \{t : e_t(\omega)\ne\delta\}$ is a.s. countable.
The statement ii) means that the set $\{\omega : D_\omega\text{ is not countable}\}$ is contained in a $P$-negligible $\mathscr{F}$-measurable set.
Given a point process, with each set $\Gamma\in\mathscr{U}_\delta$, we may associate a new point process $e^\Gamma$ by setting $e^\Gamma_t(\omega) = e_t(\omega)$ if $e_t(\omega)\in\Gamma$, $e^\Gamma_t(\omega) = \delta$ otherwise. The process $e^\Gamma$ is the trace of $e$ on $\Gamma$. For a measurable subset $A$ of $]0,\infty[\times U$, we also set
$$N_A(\omega) = \sum_{t>0}1_A\left(t, e_t(\omega)\right).$$
In particular, if $A = \ ]0,t]\times\Gamma$, we will write $N^\Gamma_t$ for $N_A$; likewise $N^\Gamma_{]s,t]} = \sum_{s<u\le t}1_\Gamma(e_u)$.
(1.2) Definition. A point process is said to be discrete if $N^U_t < \infty$ a.s. for every $t$. The process $e$ is $\sigma$-discrete if there is a sequence $(U_n)$ of sets, the union of which is $U$ and such that each $e^{U_n}$ is discrete.

If the process is $\sigma$-discrete, one can prove, and we will assume, that all the $N_A$'s are random variables.
Let us now observe that the Poisson process defined in Exercise (1.14) of Chap. II is the process $N^U$ associated with the point process obtained by making $U = \mathbb{R}_+$ and $e_t(\omega) = t$ if there is an $n$ such that $S_n(\omega) = t$, $e_t(\omega) = \delta$ otherwise. More generally, we now state the
(1.3) Definition. Let $(\Omega,\mathscr{F},\mathscr{F}_t,P)$ be a filtered probability space. An $(\mathscr{F}_t)$-Poisson process $N$ is a right-continuous adapted process, such that $N_0 = 0$, and for every $s < t$ and $k\in\mathbb{N}$,
$$P\left[N_t - N_s = k\mid\mathscr{F}_s\right] = e^{-c(t-s)}\frac{\left(c(t-s)\right)^k}{k!}$$
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
for some constant $c > 0$ called the parameter of $N$. We set $\Delta N_t = N_t - N_{t-}$.
We see that in particular, the Poisson process of Exercise (1.14) Chap. II is an $(\mathscr{F}_t)$-Poisson process for its natural filtration $(\mathscr{F}_t) = \left(\sigma(N_s, s\le t)\right)$. We have moreover the
(1.4) Proposition. A right-continuous adapted process is an $(\mathscr{F}_t)$-Poisson process if and only if it is a Lévy process which increases only by jumps a.s. equal to 1.

Proof. Let $N$ be an $(\mathscr{F}_t)$-Poisson process. It is clear from the definition that it is an integer-valued Lévy process and that the paths are increasing. As a result, for any fixed $T$, the set of jumps on $[0,T]$ is a.s. finite and consequently,
$$\sup_{0\le t\le T}\left(N_t - N_{t-}\right) = \lim_n\max_{1\le k\le n}\left(N_{kT/n} - N_{(k-1)T/n}\right)\quad\text{a.s.;}$$
but
$$P\left[\max_{1\le k\le n}\left(N_{kT/n} - N_{(k-1)T/n}\right)\le 1\right] = P\left[N_{T/n}\le 1\right]^n = \left(e^{-cT/n}\left(1 + cT/n\right)\right)^n$$
which goes to 1 as $n$ tends to infinity. Hence the jumps of $N$ are a.s. of magnitude equal to 1.
Conversely, if $N$ is a Lévy process which increases only by jumps of magnitude 1, the right-continuity of paths entails that each point is a holding point; moreover, because of the space homogeneity, the times between jumps are independent exponential r.v.'s with the same parameter, hence the increments $N_t - N_s$ are Poisson r.v.'s. $\square$
Here are a few more properties of the Poisson process, the proofs of which are left to the reader.
Firstly, recall from Exercise (1.14) Chapter II that $M_t = N_t - ct$ is an $(\mathscr{F}_t)$-martingale and $M^2_t - ct$ is another one. More generally, the paths of $N$ being increasing and right-continuous, the path by path Stieltjes integral with respect to $dN_s(\omega)$ is meaningful. Thus, if $Z$ is a predictable process such that
$$E\left[\int_0^t|Z_s|\,ds\right] < \infty\quad\text{for every }t,$$
then
$$\int_0^t Z_s\,dN_s - c\int_0^t Z_s\,ds = \int_0^t Z_s\,dM_s$$
is an $(\mathscr{F}_t)$-martingale. This may be seen by starting with elementary processes and passing to the limit. The property does not extend to optional processes (see
Exercise (1.14) below). Further, as for BM, we also get exponential martingales, namely, if $f\in L^1_{loc}(\mathbb{R}_+)$, then
$$L^f_t = \exp\left\{i\int_0^t f(s)\,dN_s - c\int_0^t\left(e^{if(s)} - 1\right)ds\right\}$$
is an $(\mathscr{F}_t)$-martingale. Again, this may be proved by starting with step functions and passing to the limit. It may also be proved as an application of the "chain rule" formula for Stieltjes integrals seen in Sect. 4 of Chap. 0.
Finally, because the laws of the jump times are diffuse, it is easily seen that for any Poisson process and every fixed time $t$, $\Delta N_t = N_t - N_{t-}$ is a.s. zero (this follows also from the general remark after Theorem (2.7) Chapter III).
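The martingale identities $E[N_t] = ct$ and $E[(N_t - ct)^2] = ct$ implicit above are easy to check by simulation; the sketch below (not from the book; rate, horizon and sample size are arbitrary choices) builds the Poisson process from i.i.d. exponential inter-arrival times, as in Exercise (1.14) Chap. II.

```python
import numpy as np

# Simulation sketch (not from the book): a Poisson process of parameter c is
# built from i.i.d. exponential(c) inter-arrival times; we check E[N_t] = c t
# and E[(N_t - ct)^2] = c t, consistent with the martingales M_t = N_t - ct
# and M_t^2 - ct.
rng = np.random.default_rng(2)
c, t, n_paths = 3.0, 2.0, 50_000
max_jumps = 40                                # >> c * t = 6, truncation harmless
gaps = rng.exponential(1.0 / c, size=(n_paths, max_jumps))
jump_times = gaps.cumsum(axis=1)
N_t = (jump_times <= t).sum(axis=1)           # N_t, path by path

print(N_t.mean())                             # close to c * t = 6
print(((N_t - c * t) ** 2).mean())            # also close to c * t = 6
```

Both printed values estimate $ct = 6$, the common mean and variance of a Poisson r.v. of parameter $ct$.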
(1.5) Proposition. If $N^1$ and $N^2$ are two independent Poisson processes, then
$$\sum_{s>0}\left(\Delta N^1_s\right)\left(\Delta N^2_s\right) = 0\quad\text{a.s.;}$$
in other words, the two processes almost surely do not jump simultaneously.

Proof. Let $T_n$, $n\ge 1$, be the successive jump times of $N^1$. Then
$$\sum_{s>0}\left(\Delta N^1_s\right)\left(\Delta N^2_s\right) = \sum_n\Delta N^2_{T_n}.$$
Since $\Delta N^2_t = 0$ a.s. for each $t$, by the independence of $N^2$ and the $T_n$'s, we get $\Delta N^2_{T_n} = 0$ a.s. for every $n$, which completes the proof. $\square$
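Proposition (1.5) can be illustrated numerically (a sketch, not from the book; rates and sample sizes are arbitrary): the jump times of the two processes are sums of continuous random variables, so no jump time is shared.

```python
import numpy as np

# Sketch (not from the book) illustrating Proposition (1.5): jump times of two
# independent Poisson processes are sums of continuous (exponential) r.v.'s,
# so almost surely no jump time is shared.
rng = np.random.default_rng(3)
c1, c2, n_jumps = 1.0, 2.0, 10_000
t1 = rng.exponential(1.0 / c1, n_jumps).cumsum()  # jump times of N^1
t2 = rng.exponential(1.0 / c2, n_jumps).cumsum()  # jump times of N^2

simultaneous = np.intersect1d(t1, t2).size
print(simultaneous)  # 0
```

Even with $10{,}000$ jumps of each process, no coincidence occurs, in line with the proposition.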
We now generalize the notion of Poisson process to higher dimensions.

(1.6) Definition. A process $(N^1,\ldots,N^d)$ is a $d$-dimensional $(\mathscr{F}_t)$-Poisson process if each $N^i$ is a right-continuous adapted process such that $N^i_0 = 0$ and if there exist constants $c_i$ such that for every $t\ge s\ge 0$,
$$P\left[\bigcap_{i=1}^d\left\{N^i_t - N^i_s = k_i\right\}\,\Big|\,\mathscr{F}_s\right] = \prod_{i=1}^d e^{-c_i(t-s)}\frac{\left(c_i(t-s)\right)^{k_i}}{k_i!}.$$
By Proposition (1.5), no two components $N^i$ and $N^j$ jump simultaneously. We now work in the converse direction.
(1.7) Proposition. An adapted process N = (N 1 , ••• , N d ) is ad-dimensional
(.9if)-Poisson process if and only if
i) each N i is an (§f)-Poisson process,
ii) no two N i 's jump simultaneously.
Proof We need only prove the sufficiency, i.e. that the r.v. 's N: -N:, i = 1, ... ,d,
are independent. For clarity's sake, we suppose that d = 2. For any pair (fl, h)
of simple functions on lR+, the process
474
Chapter XII. Excursions
$$X_t = \exp\left\{i\left(\int_0^t f_1(s)\,dN_s^1 + \int_0^t f_2(s)\,dN_s^2\right)\right\}$$
changes only by jumps so that we may write
$$X_t = 1 + \sum_{0<s\leq t}\left(X_s - X_{s-}\right) = 1 + \sum_{0<s\leq t} X_{s-}\left\{\exp\left(i\left(f_1(s)\Delta N_s^1 + f_2(s)\Delta N_s^2\right)\right) - 1\right\}.$$
Condition ii) implies that, if $\Delta N_s^1 = 1$ then $\Delta N_s^2 = 0$ and vice-versa; as a result
$$X_t = 1 + \sum_{0<s\leq t} X_{s-}\left\{\left(e^{if_1(s)} - 1\right)\Delta N_s^1 + \left(e^{if_2(s)} - 1\right)\Delta N_s^2\right\}.$$
The process $X_{s-}$ is predictable; using integration with respect to the martingales $M_t^i = N_t^i - c_i t$, we get
$$E[X_t] = 1 + E\left[\int_0^t X_{s-}\left\{\left(e^{if_1(s)} - 1\right)c_1 + \left(e^{if_2(s)} - 1\right)c_2\right\} ds\right].$$
But $\{s : X_s \neq X_{s-}\}$ is a.s. countable hence Lebesgue-negligible, and consequently
$$E[X_t] = 1 + E\left[\int_0^t X_s\left\{\left(e^{if_1(s)} - 1\right)c_1 + \left(e^{if_2(s)} - 1\right)c_2\right\} ds\right] = 1 + \int_0^t E[X_s]\left\{\left(e^{if_1(s)} - 1\right)c_1 + \left(e^{if_2(s)} - 1\right)c_2\right\} ds.$$
As a result,
$$E[X_t] = \exp\left(\int_0^t c_1\left(e^{if_1(s)} - 1\right)ds\right)\exp\left(\int_0^t c_2\left(e^{if_2(s)} - 1\right)ds\right),$$
which completes the proof. □
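The factorization obtained at the end of the proof can be sanity-checked numerically in the simplest case where $f_1$ and $f_2$ are constants, so that $X_t = \exp\{i(u N_t^1 + v N_t^2)\}$; the rates and arguments below are arbitrary illustrative choices.

```python
import cmath
import random

random.seed(1)

c1, c2, t, u, v = 1.5, 0.8, 1.0, 0.9, -1.3  # illustrative parameters
n_paths = 200_000

def poisson_count(mean):
    # number of points of a unit-rate Poisson process before `mean`
    n, s = 0, random.expovariate(1.0)
    while s <= mean:
        n += 1
        s += random.expovariate(1.0)
    return n

acc = 0j
for _ in range(n_paths):
    n1 = poisson_count(c1 * t)   # N_t^1, independent of
    n2 = poisson_count(c2 * t)   # N_t^2
    acc += cmath.exp(1j * (u * n1 + v * n2))
lhs = acc / n_paths
rhs = cmath.exp(c1 * t * (cmath.exp(1j * u) - 1)) * \
      cmath.exp(c2 * t * (cmath.exp(1j * v) - 1))
print(abs(lhs - rhs))
```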
We now turn to the most important definition of this section.

(1.8) Definition. An $(\mathscr{F}_t)$-Poisson point process (in short: $(\mathscr{F}_t)$-PPP) is a $\sigma$-discrete point process $(e_t)$ such that
i) the process $e$ is $(\mathscr{F}_t)$-adapted, that is, for any $\Gamma \in \mathscr{U}$, the process $N^\Gamma$ is $(\mathscr{F}_t)$-adapted;
ii) for any $s$ and $t > 0$ and any $\Gamma \in \mathscr{U}$, the law of $N_{]s,s+t]}^\Gamma$ conditioned on $\mathscr{F}_s$ is the same as the law of $N_t^\Gamma$.

The property ii) may be stated by saying that the process is homogeneous in time and that the increments are independent of the past. By Propositions (1.4) and (1.7), each process $N^\Gamma$ for which $N_t^\Gamma < \infty$ a.s. for every $t$ is a Poisson process and, if the sets $\Gamma_i$ are pairwise disjoint and such that $N_t^{\Gamma_i} < \infty$ a.s., the process $(N^{\Gamma_i}, i = 1, 2, \ldots, d)$ is a $d$-dimensional Poisson process. Moreover, when $N_t^\Gamma < \infty$ a.s., then $E[N_t^\Gamma] < \infty$ and the map $t \to E[N_t^\Gamma]$ is additive, thus $t^{-1}E[N_t^\Gamma]$ does not depend on $t > 0$.
§ 1. Prerequisites on Poisson Point Processes
(1.9) Definition. The $\sigma$-finite measure $n$ on $\mathscr{U}$ defined by
$$n(\Gamma) = \frac{1}{t}\,E\left[N_t^\Gamma\right], \quad t > 0,$$
is called the characteristic measure of $e$. The measure $n$ is extended to $\mathscr{U}_\delta$ by setting $n(\{\delta\}) = 0$.

Thus, if $n(\Gamma) < \infty$, $n(\Gamma)$ is the parameter of the Poisson process $N^\Gamma$.
If $\Lambda \in \mathscr{B}(\mathbb{R}_+) \otimes \mathscr{U}_\delta$, the monotone class theorem implies easily that
$$E\left[N^\Lambda\right] = \int_0^\infty dt \int 1_\Lambda(t, u)\,n(du).$$
Let us also observe that, if $n(\Gamma) < \infty$, then $N_t^\Gamma - t\,n(\Gamma)$ is an $(\mathscr{F}_t)$-martingale and, more generally, we have the
(1.10) Proposition (Master Formula). Let $R$ be a positive process defined on $(\mathbb{R}_+ \times \Omega \times U_\delta)$, measurable with respect to $\mathscr{P}(\mathscr{F}_t) \otimes \mathscr{U}_\delta$ and vanishing at $\delta$; then
$$E\left[\sum_{s>0} R\left(s, w, e_s(w)\right)\right] = E\left[\int_0^\infty ds \int R(s, w, u)\,n(du)\right].$$

Proof. The set of processes $R$ which satisfy this equality is a cone which is stable under increasing limits; thus, by the monotone class theorem, it is enough to check that the equality holds whenever $R(s, w, u) = K(s, w)1_\Gamma(u)$ where $K$ is a bounded positive $(\mathscr{F}_t)$-predictable process and $\Gamma \in \mathscr{U}$ with $n(\Gamma) < \infty$. In that case, since $N_t^\Gamma - t\,n(\Gamma)$ is a martingale, the left-hand side is equal to
$$E\left[\sum_{s>0} K(s, w)1_\Gamma\left(e_s(w)\right)\right] = E\left[\int_0^\infty K(s, w)\,dN_s^\Gamma\right] = n(\Gamma)\,E\left[\int_0^\infty K(s, w)\,ds\right]$$
which completes the proof. □
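The master formula can be illustrated on a toy point process: take $U = [0,1]$ with $n = \nu \cdot \mathrm{Unif}[0,1]$ (so $n(U) = \nu < \infty$) and the deterministic choice $R(s, u) = s\,u\,1_{\{s \leq 1\}}$, for which the right-hand side equals $\nu/4$. The values of $\nu$ and the form of $R$ are illustrative assumptions, not from the text.

```python
import random

random.seed(2)

nu = 3.0          # n(U); marks are uniform on [0, 1], i.e. n = nu * Unif[0, 1]
n_paths = 100_000

def lhs_one_path():
    # sum over the jumps of the PPP of R(s, e_s) = s * e_s * 1_{s <= 1}
    s, total = 0.0, 0.0
    while True:
        s += random.expovariate(nu)   # jump times of N^U, rate n(U)
        if s > 1.0:
            return total
        total += s * random.random()  # mark e_s ~ n / n(U)

lhs = sum(lhs_one_path() for _ in range(n_paths)) / n_paths
rhs = nu * 0.5 * 0.5   # int_0^1 s ds * int_0^1 u * nu du = nu / 4
print(lhs, rhs)
```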
(1.11) Corollary. If moreover
$$E\left[\int_0^t ds \int R(s, w, u)\,n(du)\right] < \infty$$
for every $t$, the process
$$\sum_{0<s\leq t} R\left(s, w, e_s(w)\right) - \int_0^t ds \int R(s, w, u)\,n(du)$$
is a martingale.

Proof. Straightforward. □
The following result shows that $n$ characterizes $e$ in the sense that two PPP's with the same characteristic measure have the same law.

(1.12) Proposition (Exponential formulas). If $f$ is a $\mathscr{B}(\mathbb{R}_+) \otimes \mathscr{U}$-measurable function such that $\int_0^t ds \int |f(s, u)|\,n(du) < \infty$, then, for every $t \leq \infty$,
$$E\left[\exp\left\{i\sum_{0<s\leq t} f(s, e_s)\right\}\right] = \exp\left\{\int_0^t ds \int \left(e^{if(s,u)} - 1\right) n(du)\right\}.$$
For $f \geq 0$, the following equality also holds:
$$E\left[\exp\left\{-\sum_{0<s\leq t} f(s, e_s)\right\}\right] = \exp\left\{-\int_0^t ds \int \left(1 - e^{-f(s,u)}\right) n(du)\right\}.$$

Proof. The process of bounded variation $X_t = \sum_{0<s\leq t} f(s, e_s)$ is purely discontinuous, so that, for $g \in \mathscr{B}(\mathbb{R}_+)$,
$$g(X_t) - g(X_0) = \sum_{s\leq t}\left(g(X_s) - g(X_{s-})\right).$$
If we put $\phi(t) = E\left[\exp(iX_t)\right]$, we consequently have
$$\phi(t) = 1 + E\left[\sum_{0<s\leq t} \exp\left\{i\sum_{0<s'<s} f(s', e_{s'})\right\}\left(\exp\left\{if(s, e_s)\right\} - 1\right)\right]$$
which, using the master formula, is equal to
$$1 + E\left[\int_0^t ds\,\exp\left\{i\sum_{0<s'<s} f(s', e_{s'})\right\}\int\left(e^{if(s,u)} - 1\right) n(du)\right].$$
The function $\psi(s) = \int\left(e^{if(s,u)} - 1\right) n(du)$ is in $L^1(\mathbb{R}_+)$ thanks to the hypothesis made on $f$, and we have
$$\phi(t) = 1 + \int_0^t \psi(s)\phi(s-)\,ds.$$
It follows that $\phi$ is continuous, so that
$$\phi(t) = \exp\left\{\int_0^t \psi(s)\,ds\right\}$$
which is the first formula of the statement. The second formula is proved in much the same way. □
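As a numerical sketch of the second exponential formula, take again the toy choice $U = [0,1]$, $n = \nu \cdot \mathrm{Unif}[0,1]$ and $f(s, u) = u$, so that the right-hand side is $\exp(-t\nu e^{-1})$; all parameters are illustrative assumptions.

```python
import math
import random

random.seed(3)

nu, t, n_paths = 2.0, 1.0, 200_000   # n(U) and horizon, illustrative

def sum_f_one_path():
    s, tot = 0.0, 0.0
    while True:
        s += random.expovariate(nu)  # jump times of the PPP
        if s > t:
            return tot
        tot += random.random()       # f(s, e_s) = e_s with uniform marks

lhs = sum(math.exp(-sum_f_one_path()) for _ in range(n_paths)) / n_paths
# int_0^t ds int_0^1 (1 - e^{-u}) nu du = t * nu * e^{-1}
rhs = math.exp(-t * nu * math.exp(-1))
print(lhs, rhs)
```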
0
Remark. In the above proposition one cannot replace the function 1 by a non
deterministic process H as in Proposition (1.10) (see Exercise (1.22». Moreover
Proposition (1.12) has a martingale version along the same line as in Corollary
(1.11), namely
§ 1. Prerequisites on Poisson Point Processes
exp let
477
L I(s, es ) + i t ds f (I - e"f(s.u)) n(du)l
O<s~t
0
where et is equal to i or -I, is a martingale.
We close this section with a lemma which will be useful later on. We set
$$S = \inf\left\{t : N_t^U > 0\right\}.$$

(1.13) Lemma. If $n(U) < \infty$, then $S$ and $e_S$ are independent and for any $\Gamma \in \mathscr{U}$,
$$P\left[e_S \in \Gamma\right] = n(\Gamma)/n(U).$$

Proof. Since $\Gamma \cap \Gamma^c = \emptyset$, the Poisson processes $N^\Gamma$ and $N^{\Gamma^c}$ are independent. Let $T$ and $T^c$ be their respective first jump times. For every $t$,
$$\{S > t;\ e_S \in \Gamma\} = \{t < T < T^c\}$$
and consequently, since $T$ and $T^c$ are independent exponential r.v.'s with parameters $n(\Gamma)$ and $n(\Gamma^c)$,
$$P\left[S > t;\ e_S \in \Gamma\right] = \int_t^\infty n(\Gamma)e^{-n(\Gamma)s}\,ds \int_s^\infty n(\Gamma^c)e^{-n(\Gamma^c)s'}\,ds' = \left(n(\Gamma)/n(U)\right)e^{-n(U)t};$$
this shows in one stroke the independence of $S$ and $e_S$ and the formula in the statement, and confirms the fact that $S$ is exponentially distributed with parameter $n(U)$. □
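The proof can be mimicked numerically: realize $N^\Gamma$ and $N^{\Gamma^c}$ as independent Poisson processes and read off $S$ and the event $\{e_S \in \Gamma\}$ from their first jump times $T$ and $T^c$. The mass $n(U)$, the proportion $n(\Gamma)/n(U)$ and the threshold used to test independence are arbitrary illustrative choices.

```python
import math
import random

random.seed(4)

nu, gamma, t0 = 2.5, 0.3, 0.4   # n(U), n(Gamma)/n(U), test threshold for S
n_paths = 200_000

hits = joint = 0
for _ in range(n_paths):
    T = random.expovariate(nu * gamma)          # first jump of N^Gamma
    Tc = random.expovariate(nu * (1 - gamma))   # first jump of N^{Gamma^c}
    S = min(T, Tc)                              # first jump of N^U
    in_gamma = T < Tc                           # event {e_S in Gamma}
    hits += in_gamma
    joint += in_gamma and (S > t0)

p_gamma = hits / n_paths      # should approach n(Gamma)/n(U) = 0.3
p_joint = joint / n_paths     # independence: should approach 0.3 * e^{-n(U) t0}
print(p_gamma, p_joint, gamma * math.exp(-nu * t0))
```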
#
(1.14) Exercise. 1°) Let $N$ be the Poisson process on $\mathbb{R}_+$ and $M_t = N_t - ct$. Prove that $\int_0^t N_{s-}\,dM_s$ is a martingale and that $\int_0^t N_s\,dM_s$ is not. Conclude that the process $N$ is not predictable with respect to its natural filtration.
2°) If $Z$ is predictable, then under a suitable integrability hypothesis,
$$\left(\int_0^t Z_s\,dM_s\right)^2 - c\int_0^t Z_s^2\,ds$$
is a martingale. This generalizes the result in 3°) of Exercise (1.14) Chapter II.
#
(1.15) Exercise. 1°) Let $(\mathscr{F}_t)$ be the natural filtration of a Poisson process $N_t$ and set $M_t = N_t - ct$. Prove that every $(\mathscr{F}_t)$-local martingale $X_t$ may be written
$$X_0 + \int_0^t Z_s\,dM_s$$
for an $(\mathscr{F}_t)$-predictable process $Z$.
[Hint: Use the same pattern of proof as for Theorem (3.4) in Chap. V.]
2°) Let $B_t$ be a BM independent of $N_t$ and $(\mathscr{G}_t)$ the smallest right-continuous complete filtration for which $B$ and $N$ are adapted. Give a representation of $(\mathscr{G}_t)$-local martingales as stochastic integrals.
3°) Let $N'$ be another $(\mathscr{F}_t)$-Poisson process. Deduce from 1°) that $N' = N$ a.s. The reader will observe the drastic difference with the Brownian filtration for which there exist many $(\mathscr{F}_t)$-BM's.
[Hint: Use 2°) of the previous exercise.]
4°) Describe all the $(\mathscr{F}_t)$-Poisson processes, when $(\mathscr{F}_t)$ is the natural filtration of a $d$-dimensional Poisson process.
[Hint: The discussion depends on whether the parameters of the components are equal or different.]
(1.16) Exercise. 1°) Let $N$ be a Poisson process and $f \in L^1(\mathbb{R}_+)$. What is the necessary and sufficient condition under which the r.v. $N(f) = \int f(u)\,dN_u$ is a Poisson r.v.?
2°) If $g$ is another function of $L^1(\mathbb{R}_+)$, find the necessary and sufficient condition under which $N(f)$ and $N(g)$ (not necessarily Poisson) are independent.
(1.17) Exercise. Let $N$ be an $(\mathscr{F}_t)$-Poisson process and $T$ an $(\mathscr{F}_t)$-stopping time. Prove that $N_t' = N_{T+t} - N_T$ is a Poisson process. [For the natural filtration of $N$, this is a consequence of the strong Markov property of Sect. 3 Chap. III; the exercise is to prove this with the methods of this section.] Generalize this property to $(\mathscr{F}_t)$-Poisson point processes with values in $(U, \mathscr{U})$.
*
(1.18) Exercise. Let $(E, \mathscr{E})$ be a measurable space and $\mathscr{M}$ the space of nonnegative, possibly infinite, integer-valued measures on $(E, \mathscr{E})$. We endow $\mathscr{M}$ with the coarsest $\sigma$-field for which the maps $M \to M(A)$, $A \in \mathscr{E}$, are measurable. An $\mathscr{M}$-valued r.v. $N$ is called a Poisson random measure if
i) for each $A \in \mathscr{E}$, the r.v. $N(A)$ is a Poisson r.v. (the map which is identically $+\infty$ being considered as a Poisson r.v.);
ii) if $(A_i)$ is a finite sequence of pairwise disjoint sets of $\mathscr{E}$, the r.v.'s $N(A_i)$ are independent.
1°) Prove that if $N$ is a Poisson random measure and if we set $\Lambda(A) = E[N(A)]$, then $\Lambda$ is a measure on $\mathscr{E}$. Conversely, given a $\sigma$-finite measure $\Lambda$ on $(E, \mathscr{E})$, prove that there exist a probability space and a Poisson random measure $N$ on this space such that $E[N(A)] = \Lambda(A)$ for every $A \in \mathscr{E}$. The measure $\Lambda$ is called the intensity of $N$.
2°) Prove that an $\mathscr{M}$-valued r.v. $N$ is a Poisson random measure with intensity $\Lambda$ if and only if, for every $f \in \mathscr{E}_+$,
$$E\left[\exp\left(-\int f(x)\,N(dx)\right)\right] = \exp\left(-\int\left(1 - \exp(-f(x))\right)\Lambda(dx)\right).$$
3°) Let $n$ be a $\sigma$-finite measure on a measurable space $(U, \mathscr{U})$. Prove that there exists a PPP on $U$ with characteristic measure $n$.
[Hint: Use 1°) with $d\Lambda = dt\,dn$ on $\mathbb{R}_+ \times U$.]
#
(1.19) Exercise. Let $N$ be a Poisson random measure on $(E, \mathscr{E})$ with intensity $\Lambda$ (see the preceding exercise) and $f$ a uniformly bounded positive function on $]0, \infty[\times E$ such that for every $s > 0$, the measure $f_s\Lambda$ is a probability measure. Assume that there exist an exponential r.v. $S$ with parameter 1 and an $E$-valued r.v. $X$ such that, conditionally on $S = s$,
i) $N$ is a Poisson random measure with intensity $g_s\Lambda$ where $g_s(x) = \int_s^\infty f_t(x)\,dt$;
ii) $X$ has distribution $f_s\Lambda$;
iii) $X$ and $N$ are independent.
Prove that $N^* = N + \varepsilon_X$ is a Poisson random measure with intensity $g_0\Lambda$.
[Hint: Use 2°) in the preceding exercise.]
#
(1.20) Exercise (Chaotic representation for the Poisson process). 1°) Retain the notation of Exercise (1.15) and let $f$ be a positive locally bounded function on $\mathbb{R}_+$. Prove that
$$\mathscr{E}_t^f = \exp\left(-\int_0^t f(s)\,dN_s + c\int_0^t\left(1 - \exp(-f(s))\right)ds\right)$$
is an $(\mathscr{F}_t)$-martingale for which the process $Z$ of Exercise (1.15) is equal to $\mathscr{E}_{s-}^f\left(\exp(-f(s)) - 1\right)$.
2°) Prove that every $Y \in L^2(\mathscr{F}_\infty)$ can be written
$$Y = E[Y] + \sum_{n=1}^\infty \int_0^\infty dM_{s_1}\int_{[0,s_1[} dM_{s_2}\cdots\int_{[0,s_{n-1}[} dM_{s_n}\,f_n(s_1, \ldots, s_n)$$
where $(f_n)$ is a sequence of functions such that
$$\sum_{n=1}^\infty \int ds_1\cdots\int ds_n\,f_n^2(s_1, \ldots, s_n) < \infty.$$
(1.21) Exercise. Let $e$ be a PPP and, in the notation of this section, let $f$ be $\mathscr{B}(\mathbb{R}_+) \otimes \mathscr{U}_\delta$-measurable and positive. Assume that for each $t$, the r.v.
$$N_t^f = \sum_{0<s\leq t} f(s, e_s)$$
is a.s. finite.
1°) Prove that the following two conditions are equivalent:
i) for each $t$, the r.v. $N_t^f$ is a Poisson r.v.;
ii) there exists a set $\Gamma \in \mathscr{B}(\mathbb{R}_+) \otimes \mathscr{U}_\delta$ such that $f = 1_\Gamma$, $ds \otimes dn$-a.e.
[Hint: See Exercise (1.16).]
2°) Prove that the process $(N_t^\Gamma;\ t \geq 0)$ satisfies the equality
$$N_t^\Gamma = \widetilde{N}_{a(t)}$$
where $a(t) = \int_0^t n(\Gamma(s, \cdot))\,ds$ and $\widetilde{N}$ is a Poisson process. Consequently, $N^\Gamma$ is a Poisson process iff $n(\Gamma(s, \cdot))$ is Lebesgue-a.e. constant in $s$.
(1.22) Exercise. 1°) Let $N$ be a Poisson process with parameter $c$ and call $T_1$ the time of its first jump. Prove that the equality
$$E\left[\exp\left(-\int f(s, w)\,dN_s(w)\right)\right] = E\left[\exp\left(-c\int\left(1 - \exp(-f(s, w))\right)ds\right)\right]$$
fails to be true if $f(s, w) = \lambda 1_{\{0\leq s\leq T(w)\}}$ with $T = T_1$ and $\lambda > 0$.
2°) Prove that if $g$ is not $ds \otimes dP$-negligible, the above equality cannot be true for all functions $f(s, w) = g(s, w)1_{\{0\leq s\leq T(w)\}}$ where $T$ ranges through all the bounded $(\mathscr{F}_t)$-stopping times.
§2. The Excursion Process of Brownian Motion
In what follows, we work with the canonical version of BM. We denote by $W$ the Wiener space, by $P$ the Wiener measure, and by $\mathscr{F}$ the Borel $\sigma$-field of $W$ completed with respect to $P$. We will use the notation of Sect. 2, Chap. VI.
Henceforth, we apply the results of the preceding section to a space $(U_\delta, \mathscr{U}_\delta)$ which we now define. For a $w \in W$, we set
$$R(w) = \inf\{t > 0 : w(t) = 0\}.$$
The space $U$ is the set of those functions $w$ such that $0 < R(w) < \infty$ and $w(t) = 0$ for every $t \geq R(w)$. We observe that the graph of these functions lies entirely above or below the $t$-axis, and we shall call $U_+$ and $U_-$ the corresponding subsets of $U$. The point $\delta$ is the function which is identically zero. Finally, $\mathscr{U}$ is the $\sigma$-algebra generated by the coordinate mappings. Notice that $U_\delta$ is a subset of the Wiener space $W$ and that $\mathscr{U}_\delta$ is the trace of the Borel $\sigma$-field of $W$. As a result, any Borel function on $W$, as for instance the function $R$ defined above, may be viewed as a function on $U$. This will often be used below without further comment. However, we stress that $U_\delta$ is negligible for $P$, and that we will define and study a measure $n$ carried by $U_\delta$, hence singular with respect to $P$.
(2.1) Definition. The excursion process is the process $e = (e_s, s > 0)$, defined on $(W, \mathscr{F}, P)$ with values in $(U_\delta, \mathscr{U}_\delta)$, by
i) if $\tau_s(w) - \tau_{s-}(w) > 0$, then $e_s(w)$ is the map
$$r \longmapsto e_s(r, w) = 1_{\{0\leq r\leq \tau_s(w)-\tau_{s-}(w)\}}\,B_{\tau_{s-}+r}(w);$$
ii) if $\tau_s(w) - \tau_{s-}(w) = 0$, then $e_s(w) = \delta$.
We will sometimes write $e_s(r, w)$ or $e_s(r)$ for the function $e_s(w)$ taken at time $r$.

The process $e$ is a point process. Indeed, to check condition i) in Definition (1.1), it is enough to consider the map $(t, w) \to X_r(e_t(w))$ where $X_r$ is a fixed coordinate mapping on $U$, and it is easily seen that this map is measurable. Moreover, $e_s$ is not equal to $\delta$ if, and only if, the local time $L$ has a constant stretch at level $s$, and $e_s$ is then that part of the Brownian path which lies between the times $\tau_{s-}$ and $\tau_s$ at which $B$ vanishes. It was already observed in Sect. 2 Chap. VI that there are almost-surely only countably many such times $s$, which ensures that condition ii) of Definition (1.1) holds.
(2.2) Proposition. The process $e$ is $\sigma$-discrete.

Proof. The sets
$$U_n = \{u \in U : R(u) > 1/n\}$$
are in $\mathscr{U}$, and their union is equal to $U$. The functions
$$N_t^{U_n}(w) = \sum_{s\leq t} 1_{(e_s(w)\in U_n)}$$
are measurable. Indeed, the process $t \to \tau_t$ is increasing and right-continuous; if for $n \in \mathbb{N}$, we set
$$T_1 = \inf\{t > 0 : \tau_t - \tau_{t-} > 1/n\},$$
then $P[T_1 > 0] = 1$. If we define inductively
$$T_k = \inf\{t > T_{k-1} : \tau_t - \tau_{t-} > 1/n\},$$
then the $T_k$'s are random variables and
$$N_t^{U_n} = \sum_k 1_{(T_k\leq t)}$$
is a random variable. Moreover, $N_t^{U_n} < n\tau_t$, as is easily seen, which proves our claim. □
We will also need the following

(2.3) Lemma. For every $r > 0$, almost-surely, the equality
$$e_{s+r}(w) = e_s\left(\theta_{\tau_r}(w)\right)$$
holds for all $s$.

Proof. This is a straightforward consequence of Proposition (1.3) in Chap. X. □

We may now state the following important result.

(2.4) Theorem (Itô). The excursion process $(e_t)$ is an $(\mathscr{F}_{\tau_t})$-Poisson point process.

Proof. The variables $N_t^\Gamma$ are plainly $\mathscr{F}_{\tau_t}$-measurable. Moreover, by the lemma and the strong Markov property of Sect. 3 Chap. III, we have, using the notation of Definition (1.8),
$$P\left[N_{]t,t+r]}^\Gamma \in A \mid \mathscr{F}_{\tau_t}\right] = P\left[N_r^\Gamma \circ \theta_{\tau_t} \in A \mid \mathscr{F}_{\tau_t}\right] = P_{B_{\tau_t}}\left[N_r^\Gamma \in A\right] = P\left[N_r^\Gamma \in A\right] \quad \text{a.s.},$$
since $B_{\tau_t} = 0$ $P$-a.s. The proof is complete. □
Starting from $B$, we have defined the excursion process. Conversely, if the excursion process is known, we may recover $B$. More precisely:

(2.5) Proposition. We have
$$\tau_t(w) = \sum_{s\leq t} R\left(e_s(w)\right), \qquad \tau_{t-}(w) = \sum_{s<t} R\left(e_s(w)\right),$$
and
$$B_t(w) = \sum_{s\leq L_t} e_s\left(t - \tau_{s-}(w), w\right)$$
where $L_t$ can be recovered as the inverse of $\tau_t$.

Proof. The first two formulas are consequences of the fact that $\tau_t = \sum_{s\leq t}(\tau_s - \tau_{s-})$. For the third, we observe that if $\tau_{s-} < t < \tau_s$ for some $s$, then $L_t = s$, and for any $u < L_t$, $e_u(t - \tau_{u-}) = 0$; otherwise, $t$ is an increase point for $L$, and then $e_{L_t}(t - \tau_{L_t-}) = 0$ so that $B_t(w) = 0$ as it should be. □

Remark. The formula for $B$ can also be stated as $B_t = e_s(t - \tau_{s-})$ if $\tau_{s-} \leq t \leq \tau_s$.
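The reconstruction of the path from its excursions has an elementary discrete analogue: for a simple random walk (used here as a stand-in for $B$), concatenating the pieces of the path between consecutive zeros recovers the path up to its last zero.

```python
import random

random.seed(5)

# a simple random walk as a discrete stand-in for the Brownian path
n_steps = 10_000
path = [0]
for _ in range(n_steps):
    path.append(path[-1] + random.choice((-1, 1)))

# excursions = pieces of the path between consecutive zeros
zeros = [i for i, x in enumerate(path) if x == 0]
excursions = [path[a:b] for a, b in zip(zeros, zeros[1:])]

# concatenating the excursions recovers the path up to its last zero
rebuilt = [x for exc in excursions for x in exc] + [0]
assert rebuilt == path[: zeros[-1] + 1]
print(len(excursions), "excursions, last zero at", zeros[-1])
```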
The characteristic measure of the excursion process is called the Itô measure. It will be denoted by $n$ and its restrictions to $U_+$ and $U_-$ by $n_+$ and $n_-$. It is carried by the set of $u$'s such that $u(0) = 0$. Our next task is to describe the measure $n$ and see what consequences may be derived from the formulas of the foregoing section. We first introduce some notation.
If $w \in W$, we denote by $i_0(w)$ the element $u$ of $U$ such that
$$u(t) = w(t) \ \text{ if } t < R(w), \qquad u(t) = 0 \ \text{ if } t \geq R(w).$$
If $s \in \mathbb{R}_+$, we may apply this to the path $\theta_s(w)$. We will put $i_s(w) = i_0(\theta_s(w))$. We observe that $R(\theta_s(w)) = R(i_s(w))$.
We will also call $G_w$ the set of strictly positive left ends of the intervals contiguous to the set $Z(w)$ of zeros of $B$, in other words, the set of the starting times of the excursions, or yet the set
$$\left\{\tau_{s-}(w) : \tau_{s-}(w) \neq \tau_s(w)\right\} = \left\{\tau_{s-}(w) : R\left(\theta_{\tau_{s-}}(w)\right) > 0\right\}.$$
Let $H$ be a $\mathscr{P}(\mathscr{F}_t) \otimes \mathscr{U}_\delta$-measurable positive process vanishing at $\delta$. The process $(s, w, u) \to H(\tau_{s-}(w), w; u)$ is then $\mathscr{P}(\mathscr{F}_{\tau_s}) \otimes \mathscr{U}_\delta$-measurable, as is easily checked. The master formula of Sect. 1 applied to this process yields
$$E\left[\sum_s H\left(\tau_{s-}(w), w; e_s(w)\right)\right] = E\left[\int_0^\infty ds \int H\left(\tau_{s-}(w), w; u\right) n(du)\right].$$
The left-hand side may also be written $E\left[\sum_{\gamma\in G_w} H(\gamma, w; i_\gamma(w))\right]$ and in the right-hand side, owing to the fact that $\{s : \tau_{s-} \neq \tau_s\}$ is countable and that we integrate with respect to the Lebesgue measure, we may replace $\tau_{s-}$ by $\tau_s$. We have finally the

(2.6) Proposition. If $H$ is as above,
$$E\left[\sum_{\gamma\in G_w} H\left(\gamma, w; i_\gamma(w)\right)\right] = E\left[\int_0^\infty ds \int H\left(\tau_s(w), w; u\right) n(du)\right] = E\left[\int_0^\infty dL_t(w) \int H(t, w; u)\,n(du)\right].$$

Proof. Only the second equality remains to be proved, but it is a direct consequence of the change of variables formula in Stieltjes integrals. □
The above result has a martingale version. Namely, provided the necessary integrability properties obtain,
$$\sum_{s\leq t} H\left(\tau_{s-}(w), w; i_{\tau_{s-}}(w)\right) - \int_0^t ds \int H\left(\tau_s(w), w; u\right) n(du)$$
is an $(\mathscr{F}_{\tau_t})$-martingale and
$$\sum_{\gamma\in G_w\cap[0,t]} H\left(\gamma, w; i_\gamma(w)\right) - \int_0^t dL_s(w) \int H(s, w; u)\,n(du)$$
is an $(\mathscr{F}_t)$-martingale, the proof of which is left to the reader as an exercise.
The exponential formula also yields some interesting results. In what follows, we consider an additive functional $A_t$ such that $\nu_A(\{0\}) = 0$, or equivalently $\int 1_Z\,dA = 0$. In this setting, we have

(2.7) Proposition. The random variable $A_{\tau_t}$ is infinitely divisible and the Laplace transform of its law is equal to
$$\phi_t(\lambda) = \exp\left\{t\int\left(e^{-\lambda x} - 1\right) m_A(dx)\right\}$$
where $m_A$ is the image of $n$ under $A_R$, the random variable $A_R$ being defined on $U$ by restriction from $W$ to $U$.

Proof. Since $dA_t$ does not charge $Z$, we have
$$A_t = \int_0^t 1_{Z^c}(s)\,dA_s,$$
hence, thanks to the strong additivity of $A$,
$$A_{\tau_t} = \sum_{s\leq t}\left(A_{\tau_s} - A_{\tau_{s-}}\right) = \sum_{s\leq t} A_R\circ\theta_{\tau_{s-}}.$$
The exponential formula of Proposition (1.12) in Sect. 1 then implies that
$$\phi_t(\lambda) = E\left[\exp\left(-\lambda A_{\tau_t}\right)\right] = E\left[\exp\left(-\lambda\sum_{s\leq t} A_R\circ\theta_{\tau_{s-}}\right)\right] = \exp\left\{t\int n(du)\left(e^{-\lambda A_R(u)} - 1\right)\right\}$$
which is the announced result. □
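For $A_t = t$ the proposition gives the Laplace transform of $\tau_t$, which equals $\exp(-t\sqrt{2\lambda})$ (this is the computation used in the proof of Proposition (2.8) below); recalling that the law of $\tau_1$ is that of $T_1$, hence that of $1/Z^2$ with $Z$ standard normal, the transform can be checked by simulation. The value of $\lambda$ below is arbitrary.

```python
import math
import random

random.seed(6)

lam, n_samples = 1.3, 400_000   # illustrative lambda
acc = 0.0
for _ in range(n_samples):
    z = random.gauss(0.0, 1.0)
    # tau_1 has the law of T_1, i.e. of 1/Z^2 (guard against z == 0)
    acc += math.exp(-lam / max(z * z, 1e-300))
lhs = acc / n_samples
rhs = math.exp(-math.sqrt(2 * lam))
print(lhs, rhs)
```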
Remark. The process $A_{\tau_t}$ is a Lévy process (Exercise (2.19) Chap. X) and $m_A$ is its Lévy measure; the proof could be based on this observation. Indeed, it is known in the theory of Lévy processes that the Lévy measure is the characteristic measure of the jump process, which is a PPP. In the present case, because of the hypothesis made on $\nu_A$, the jump process is the process itself, in other words $A_{\tau_t} = \sum_{s\leq t} A_R(e_s)$; as a result
$$m_A(C) = E\left[\sum_{s\leq 1} 1_C\left(A_R(e_s)\right)\right] = n\left\{A_R \in C\right\}.$$
The following result yields the "law" of $R$ under $n$.

(2.8) Proposition. For every $x > 0$,
$$n(R > x) = (2/\pi x)^{1/2}.$$

Proof. The additive functional $A_t = t$ plainly satisfies the hypothesis of the previous result, which thus yields
$$E\left[\exp(-\lambda\tau_t)\right] = \exp\left\{t\int m_A(dx)\left(e^{-\lambda x} - 1\right)\right\}.$$
By Sect. 2 in Chap. VI, the law of $\tau_a$ is that of $T_a$ which was found in Sect. 3 of Chap. II. It follows that
$$\int_0^\infty m_A(dx)\left(1 - e^{-\lambda x}\right) = \sqrt{2\lambda}.$$
By the integration by parts formula for Stieltjes integrals, we further have
$$\lambda\int_0^\infty m_A\left(]x, \infty[\right)e^{-\lambda x}\,dx = \sqrt{2\lambda}.$$
Since it is easily checked that
$$\sqrt{2\lambda} = \lambda\int_0^\infty e^{-\lambda x}(2/\pi x)^{1/2}\,dx,$$
we get $m_A(]x, \infty[) = (2/\pi x)^{1/2}$; by the definition of $m_A$, the proof is complete. □
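The Laplace-transform identity at the heart of the proof, $\lambda\int_0^\infty e^{-\lambda x}(2/\pi x)^{1/2}\,dx = \sqrt{2\lambda}$, can be verified by quadrature; substituting $x = u^2$ removes the integrable singularity at 0. The step size and cutoff below are arbitrary.

```python
import math

lam = 0.7                      # any positive lambda
n, h = 200_000, 1e-4           # Riemann sum resolution; implicit cutoff n*h = 20

# after x = u^2:  lam * int_0^oo e^{-lam x} sqrt(2/(pi x)) dx
#              =  2 * lam * sqrt(2/pi) * int_0^oo e^{-lam u^2} du
integral = sum(math.exp(-lam * (k * h) ** 2) for k in range(n)) * h
lhs = 2 * lam * math.sqrt(2 / math.pi) * integral
rhs = math.sqrt(2 * lam)
print(lhs, rhs)
```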
Remarks. 1°) Another proof is hinted at in Exercise (4.13).
2°) Having thus obtained the law of $R$ under $n$, the description of $n$ will be complete if we identify the law of $u(t)$, $t < R$, conditionally on the value taken by $R$. This will be done in Sect. 4.

The foregoing proposition says that $R(e_s(w))$ is a PPP on $\mathbb{R}_+$ with characteristic measure $\bar{n}$ given by $\bar{n}(]x, \infty[) = (2/\pi x)^{1/2}$. We will use this to prove another approximation result for the local time which supplements those in Chap. VI. For $\varepsilon > 0$, let us call $\eta_t(\varepsilon)$ the number of excursions with length $\geq \varepsilon$ which end at a time $s \leq t$. If $\bar{N}$ is the counting measure associated with the PPP $R(e_s)$, one moment's reflection shows that $\eta_{\tau_t}(\varepsilon) = \bar{N}_t^\varepsilon$ where $\bar{N}^\varepsilon = \bar{N}^{]\varepsilon,\infty[}$, and we have the
(2.9) Proposition. $P\left[\lim_{\varepsilon\downarrow 0}\sqrt{\pi\varepsilon/2}\,\eta_t(\varepsilon) = L_t \text{ for every } t\right] = 1$.

Proof. Let $\varepsilon_k = 2/\pi k^2$; then $\bar{n}([\varepsilon_k, \infty[) = k$ and the sequence $\{\bar{N}_t^{\varepsilon_{k+1}} - \bar{N}_t^{\varepsilon_k}\}$ is a sequence of independent Poisson r.v.'s with parameter $t$. Thus, for fixed $t$, the law of large numbers implies that a.s.
$$\lim_n \frac{1}{n}\bar{N}_t^{\varepsilon_n} = \lim_n \sqrt{\frac{\pi\varepsilon_n}{2}}\,\bar{N}_t^{\varepsilon_n} = t.$$
As $\bar{N}_t^\varepsilon$ increases when $\varepsilon$ decreases, for $\varepsilon_{n+1} \leq \varepsilon < \varepsilon_n$,
$$\sqrt{\frac{\pi\varepsilon_{n+1}}{2}}\,\bar{N}_t^{\varepsilon_n} \leq \sqrt{\frac{\pi\varepsilon}{2}}\,\bar{N}_t^\varepsilon \leq \sqrt{\frac{\pi\varepsilon_n}{2}}\,\bar{N}_t^{\varepsilon_{n+1}},$$
and plainly
$$P\left[\lim_{\varepsilon\downarrow 0}\sqrt{\frac{\pi\varepsilon}{2}}\,\bar{N}_t^\varepsilon = t\right] = 1.$$
We may find a set $E$ of probability 1 such that for $w \in E$,
$$\lim_{\varepsilon\downarrow 0}\sqrt{\frac{\pi\varepsilon}{2}}\,\bar{N}_t^\varepsilon(w) = t$$
for every rational $t$. Since $\bar{N}_t^\varepsilon$ increases with $t$, the convergence actually holds for all $t$'s. For each $w \in E$, we may replace $t$ by $L_t(w)$, which ends the proof. □
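The law-of-large-numbers mechanism of the proof is easy to reproduce: with $\varepsilon_k = 2/\pi k^2$ the increments $\bar N_t^{\varepsilon_{k+1}} - \bar N_t^{\varepsilon_k}$ are i.i.d. Poisson r.v.'s with parameter $t$, and $\sqrt{\pi\varepsilon_K/2}\,\bar N_t^{\varepsilon_K} = \bar N_t^{\varepsilon_K}/K$ should be close to $t$ for large $K$. The values of $t$ and $K$ are illustrative.

```python
import math
import random

random.seed(7)

t, K = 2.0, 20_000   # local-time level and number of nested epsilons

def poisson_count(mean):
    # Poisson(mean) via the gaps of a unit-rate Poisson process
    n, s = 0, random.expovariate(1.0)
    while s <= mean:
        n += 1
        s += random.expovariate(1.0)
    return n

# N_t^{eps_K} is the sum of K i.i.d. Poisson(t) increments
count = sum(poisson_count(t) for _ in range(K))
eps_K = 2 / (math.pi * K ** 2)
approx = math.sqrt(math.pi * eps_K / 2) * count   # equals count / K
print(approx, t)
```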
Remarks. 1°) A remarkable feature of the above result is that $\eta_t(\varepsilon)$ depends only on the set of zeros of $B$ up to $t$. Thus we have an approximation procedure for $L_t$ depending only on $Z$. This generalizes to the local time of regenerative sets (see Notes and Comments).
2°) The same kind of proof gives the approximation by downcrossings seen in Chap. VI (see Exercise (2.10)).
(2.10) Exercise. 1°) Prove that
$$n\left(\sup_{t<R}|u(t)| \geq x\right) = 1/x.$$
[Hint: If $A_x = \{u : \sup_{t<R} u(t) \geq x\}$, observe that $L_{T_x}$ is the first jump time of the Poisson process $N^{A_x}$ and use the law of $L_{T_x}$ found in Sect. 4 Chap. VI.]
2°) Using 1°) and the method of Proposition (2.9), prove, in the case of BM, the a.s. convergence in the approximation result of Theorem (1.10) Chap. VI.
3°) Let $a > 0$ and set $M_a(w) = \sup_{t\leq g_{T_a}} w(t)$ where, as usual, $g_{T_a}$ is the last zero of the Brownian path before the time $T_a$ when it first reaches $a$. Prove that $M_a$ is uniformly distributed on $[0, a]$. This is part of Williams' decomposition theorem (see Sect. 4 Chap. VII).
[Hint: If $\Gamma_a$ (resp. $\Gamma_y$) is the first jump of the Poisson process $N^{A_a}$ (resp. $N^{A_y}$), then $P[M_a < y] = P[\Gamma_y = \Gamma_a]$; use Lemma (1.13).]
(2.11) Exercise. Prove that the process $X$ defined, in the notation of Proposition (2.5), by $X_t(w) = |e_s(t - \tau_{s-}(w), w)| - s$, if $\tau_{s-}(w) \leq t \leq \tau_s(w)$, is the BM
$$\beta_t = \int_0^t \mathrm{sgn}(B_s)\,dB_s,$$
and that $Y_t(w) = s + |e_s(t - \tau_{s-}(w), w)|$ if $\tau_{s-}(w) \leq t \leq \tau_s(w)$ is a BES$^3$(0).
#
(2.12) Exercise. 1°) Let $A \in \mathscr{U}_\delta$ be such that $n(A) < \infty$. Observe that the number $C_t^A$ of excursions belonging to $A$ in the interval $[0, d_t]$ (i.e. whose two end points lie between 0 and $d_t$) is defined unambiguously and prove that $E[C_t^A] = n(A)E[L_t]$.
2°) Prove that on $\{d_s < t\}$,
$$E\left[C_t^A - C_s^A \mid \mathscr{F}_{d_s}\right] = n(A)\,E\left[L_t - L_s \mid \mathscr{F}_{d_s}\right]$$
and consequently that $C_t^A - n(A)L_t$ is an $(\mathscr{F}_{d_t})$-martingale.
[Hint: Use the strong Markov property of BM.]
#
(2.13) Exercise (Scaling properties). 1°) For any $c > 0$, define a map $s_c$ on $W$ or $U_\delta$ by
$$s_c(w)(t) = w(ct)/\sqrt{c}.$$
Prove that $e_t(s_c(w)) = s_c\left(e_{t\sqrt{c}}(w)\right)$ and that for $A \in \mathscr{U}_\delta$,
$$n\left(s_c^{-1}(A)\right) = n(A)/\sqrt{c}.$$
[Hint: See Exercise (2.11) in Chap. VI.]
2°) (Normalized excursions) We say that $u \in U$ is normalized if $R(u) = 1$. Let $U^1$ be the subset of normalized excursions. We define a map $v$ from $U$ to $U^1$ by
$$v(u) = s_{R(u)}(u).$$
Prove that for $\Gamma \subset U^1$, the quantity
$$\gamma(\Gamma) = n_+\left(v^{-1}(\Gamma) \cap (R \geq c)\right)/n_+(R \geq c)$$
is independent of $c > 0$. The probability measure $\gamma$ may be called the law of the normalized Brownian excursion.
3°) Show that for any Borel subset $S$ of $\mathbb{R}_+$,
$$n_+\left(v^{-1}(\Gamma) \cap (R \in S)\right) = \gamma(\Gamma)\,n_+(R \in S),$$
which may be seen as displaying the independence between the length of an excursion and its form.
4°) Let $e^c$ be the first positive excursion $e$ such that $R(e) \geq c$. Prove that
$$\gamma(\Gamma) = P\left[v(e^c) \in \Gamma\right].$$
[Hint: Use Lemma (1.13).]
#
(2.14) Exercise. Let $\Lambda_t(\varepsilon)$ be the total length of the excursions with length $< \varepsilon$, strictly contained in $[0, t[$. Prove that
$$P\left[\lim_{\varepsilon\downarrow 0}\sqrt{\frac{\pi}{2\varepsilon}}\,\Lambda_t(\varepsilon) = L_t \text{ for every } t\right] = 1.$$
[Hint: $\Lambda_t(\varepsilon) = \int_0^\varepsilon x\,\eta_t(dx)$ where $\eta_t$ is defined in Proposition (2.9).]
(2.15) Exercise. Let $S_t = \sup_{s\leq t} B_s$ and $n_t(\varepsilon)$ the number of flat stretches of $S$ of length $\geq \varepsilon$ contained in $[0, t]$. Prove that
$$P\left[\lim_{\varepsilon\downarrow 0}\sqrt{\frac{\pi\varepsilon}{2}}\,n_t(\varepsilon) = S_t \text{ for every } t\right] = 1.$$
(2.16) Exercise (Skew Brownian motion). Let $(Y_n)$ be a sequence of independent r.v.'s taking the values 1 and $-1$ with probabilities $\alpha$ and $1 - \alpha$ ($0 \leq \alpha \leq 1$) and independent of $B$. For each $w$ in the set on which $B$ is defined, the set of excursions $e_t(w)$ is countable and may be given the ordering of $\mathbb{N}$. In a manner similar to Proposition (2.5), define a process $X^\alpha$ by putting
$$X_t^\alpha = Y_n\left|e_s(t - \tau_{s-}(w), w)\right|$$
if $\tau_{s-}(w) \leq t < \tau_s(w)$ and $e_s$ is the $n$-th excursion in the above ordering. Prove that the process thus obtained is a Markov process and that it is a skew BM by showing that its transition function is that of Exercise (1.16), Chap. I. Thus, we see that the skew BM may be obtained from the reflecting BM by changing the sign of each excursion with probability $1 - \alpha$. As a result, a Markov process $X$ is a skew BM if and only if $|X|$ is a reflecting BM.
*
(2.17) Exercise. Let $A_t^+ = \int_0^t 1_{(B_s>0)}\,ds$, $A_t^- = \int_0^t 1_{(B_s<0)}\,ds$.
1°) Prove that the law of the pair $L_t^{-2}(A_t^+, A_t^-)$ is independent of $t$ and that $A_{\tau_1}^+$ and $A_{\tau_1}^-$ are independent stable r.v.'s of index 1/2.
[Hint: $A_{\tau_t}^+ + A_{\tau_t}^- = \tau_t$ which is a stable r.v. of index 1/2.]
2°) Let $a^+$ and $a^-$ be two positive real numbers, $S$ an independent exponential r.v. with parameter 1. Prove that
$$E\left[\exp\left(-L_S^{-2}\left(a^+ A_S^+ + a^- A_S^-\right)\right)\right] = \int_0^\infty \exp(-\phi(s))\,\phi'(s)\,ds$$
where $\phi(s) = \frac{1}{\sqrt{2}}\left[(s^2 + a^+)^{1/2} + (s^2 + a^-)^{1/2}\right]$. Prove that, consequently, the pair $L_t^{-2}(A_t^+, A_t^-)$ has the same law as $\frac{1}{4}(T^+, T^-)$ where $T^+$ and $T^-$ are two independent r.v.'s having the law of $T_1$, and derive therefrom the arcsine law of Sect. 2 Chap. VI (see also Exercise (4.20) below). The reader may wish to compare this method with that of Exercise (2.33) Chap. VI.

(2.18) Exercise. Prove that the set of Brownian excursions can almost-surely be labeled by $\mathbb{Q}_+$ in such a way that $q < q'$ entails that $e_q$ occurs before $e_{q'}$.
[Hint: Call $e_1$ the excursion straddling 1, then $e_{1/2}$ the excursion straddling $g_1/2$, ...]
§3. Excursions Straddling a Given Time
From Sect. 4 in Chap. VI, we recall the notation
$$g_t = \sup\{s < t : B_s = 0\}, \qquad d_t = \inf\{s > t : B_s = 0\},$$
and set
$$A_t = t - g_t, \qquad \Delta_t = d_t - g_t.$$
We say that an excursion straddles $t$ if $g_t < t < d_t$. In that case, $A_t$ is the age of the excursion at time $t$ and $\Delta_t$ is its length. We have $\Delta_t = R(i_{g_t})$.

(3.1) Lemma. The map $t \to g_t$ is right-continuous.

Proof. Let $t_n \downarrow t$; if there exists an $n$ such that $g_{t_n} < t$, then $g_{t_m} = g_t$ for $m \geq n$; if $g_{t_n} \geq t$ for every $n$, then $t \leq g_{t_n} \leq t_n$ for every $n$, hence $t$ is itself a zero of $B$ and $g_t = t = \lim_n g_{t_n}$. □

Fig. 8.
For a positive r.v. $S$ we denote by $\mathscr{F}_S$ the $\sigma$-algebra generated by the variables $H_S$ where $H$ ranges through the optional processes. If $S$ is a stopping time, this coincides with the usual $\sigma$-field $\mathscr{F}_S$. Let us observe that in general $S \leq S'$ does not entail $\mathscr{F}_S \subset \mathscr{F}_{S'}$ when $S$ and $S'$ are not stopping times; one can for instance find a r.v. $S \leq 1$ such that $\mathscr{F}_S = \mathscr{F}_\infty$.
Before we proceed, let us recall that by Corollary (3.3), Chap. V, since we are working in the Brownian filtration $(\mathscr{F}_t)$, there is no difference between optional and predictable processes.

(3.2) Lemma. The family $(\mathscr{G}_t) = (\mathscr{F}_{g_t})$ is a subfiltration of $(\mathscr{F}_t)$ and if $T$ is an $(\mathscr{F}_t)$-stopping time, then $\mathscr{F}_{g_T} \subset \mathscr{G}_T \subset \mathscr{F}_T$.
Proof. As $g_t$ is $\mathscr{F}_t$-measurable, the same reasoning as in Proposition (4.9) Chap. I shows that, for a predictable process $Z$, the r.v. $Z_{g_t}$ is $\mathscr{F}_t$-measurable, whence $\mathscr{G}_t \subset \mathscr{F}_t$ follows.
Now choose $u$ in $\mathbb{R}_+$ and set $Z_t' = Z_{g_{t\wedge u}}$; thanks to Lemma (3.1) and to what we have just proved, $Z'$ is $(\mathscr{F}_t)$-optional hence predictable. Pick $v > u$; if $g_v \leq u$, then $g_u = g_v$ and since $g_{g_t} = g_t$ for every $t$,
$$Z'_{g_v} = Z_{g_v} = Z_{g_u},$$
and if $g_v > u$, then $Z'_{g_v} = Z_{g_u}$. As a result, each of the r.v.'s which generate $\mathscr{G}_u$ is among those which generate $\mathscr{G}_v$. It follows that $\mathscr{G}_u \subset \mathscr{G}_v$, that is, $(\mathscr{G}_t)$ is a filtration.
Let now $T$ be an $(\mathscr{F}_t)$-stopping time. By definition, the $\sigma$-algebra $\mathscr{F}_{g_T}$ is generated by the variables $Z_{g_T}$ with $Z$ optional; but $Z_{g_T} = (Z_g)_T$ and $Z_g$ is $(\mathscr{F}_t)$-optional because of Lemma (3.1), which entails that $Z_{g_T}$ is $\mathscr{G}_T$-measurable, hence that $\mathscr{F}_{g_T} \subset \mathscr{G}_T$. On the other hand, since $\mathscr{G}_t \subset \mathscr{F}_t$, the time $T$ is also a $(\mathscr{G}_t)$-stopping time from which the inclusion $\mathscr{G}_T \subset \mathscr{F}_T$ is easily proved. □
We now come to one of the main results of this section which allows us to compute the laws of some particular excursions when $n$ is known. If $F$ is a positive $\mathscr{U}$-measurable function on $U$, for $s > 0$, we set
$$q(s, F) = n(R > s)^{-1}\int_{\{R>s\}} F\,dn \equiv n(F \mid R > s).$$
We recall that $0 < n(R > s) < \infty$ for every $s > 0$.

(3.3) Proposition. For every fixed $t > 0$,
$$E\left[F\left(i_{g_t}\right) \mid \mathscr{G}_t\right] = q(A_t, F) \quad \text{a.s.},$$
and for a $(\mathscr{G}_t)$-stopping time $T$,
$$E\left[F\left(i_{g_T}\right) \mid \mathscr{G}_T\right] = q(A_T, F) \quad \text{a.s. on the set } \{0 < g_T < T\}.$$

Proof. We know that, a.s., $t$ is not a zero of $B$, hence $0 < g_t < t$ and $q(A_t, F)$ is defined; also, if $s \in G_w$ and $s < t$, we have $s = g_t$ if and only if $s + R\circ\theta_s > t$. As a result, $g_t$ is the only $s \in G_w$ such that $s < t$ and $s + R\circ\theta_s > t$. If $Z$ is a positive $(\mathscr{F}_t)$-predictable process, we consequently have
$$E\left[Z_{g_t}F\left(i_{g_t}\right)\right] = E\left[\sum_{s\in G_w} Z_s F(i_s)1_{\{R\circ\theta_s > t-s > 0\}}\right].$$
We may replace $R\circ\theta_s$ by $R(i_s)$ and then apply the master formula to the right-hand side, which yields
$$E\left[Z_{g_t}F\left(i_{g_t}\right)\right] = E\left[\int_0^\infty ds \int Z_{\tau_s(w)}F(u)1_{\{R(u) > t-\tau_s(w) > 0\}}\,n(du)\right].$$
Since by Proposition (2.8), for every $x > 0$, we have $n(R > x) > 0$, the right-hand side of the last displayed equality may be written
$$E\left[\int_0^\infty ds\, Z_{\tau_s(w)}\,n\left(R > t - \tau_s(w)\right)q\left(t - \tau_s(w), F\right)\right].$$
And, using the master formula in the reverse direction, this is equal to
$$E\left[\sum_{s\in G_w} Z_s\,q(t - s, F)\,1_{\{R\circ\theta_s > t-s > 0\}}\right] = E\left[Z_{g_t}\,q(t - g_t, F)\right]$$
which yields the first formula in the statement.
To get the second one, we consider a sequence of countably valued $(\mathscr{F}_t)$-stopping times $T_n$ decreasing to $T$. The formula is true for $T_n$ since it is true for constant times. Moreover, on $\{0 < g_T < T\}$, one has $\{g_{T_n} = g_T\}$ from some $n_0$ onwards and $\lim_n 1_{\{g_{T_n}<T_n\}} = 1$; therefore, for bounded $F$,
$$E\left[F\left(i_{g_T}\right) \mid \mathscr{G}_T\right] = \lim_n E\left[E\left[F\left(i_{g_{T_n}}\right)1_{\{g_{T_n}<T_n\}} \mid \mathscr{G}_{T_n}\right] \mid \mathscr{G}_T\right] = \lim_n E\left[q\left(A_{T_n}, F\right)1_{\{g_{T_n}<T_n\}} \mid \mathscr{G}_T\right] = q(A_T, F)$$
because $\lim_n A_{T_n} = A_T$, the function $q(\cdot, F)$ is continuous and $A_T$ is $\mathscr{G}_T$-measurable. The extension to an unbounded $F$ is easy. □
The foregoing result gives the conditional expectation of a function of the excursion straddling $t$ with respect to the past of BM at the time when the excursion begins. This may be made still more precise by conditioning with respect to the length of this excursion as well. In the sequel, we write $E[\cdot \mid \mathscr{G}_t, \Delta_t]$ for $E[\cdot \mid \mathscr{G}_t \vee \sigma(\Delta_t)]$. Furthermore, we denote by $v(\cdot\,; F)$ a function such that $v(R; F)$ is a version of the conditional expectation $n(F \mid R)$. This is well defined since $n$ is $\sigma$-finite on the $\sigma$-field generated by $R$ and, by Proposition (2.8), the function $r \to v(r; F)$ is unique up to Lebesgue equivalence. We may now state

(3.4) Proposition. With the same hypothesis and notation as in the last proposition,
$$E\left[F\left(i_{g_t}\right) \mid \mathscr{G}_t, \Delta_t\right] = v(\Delta_t; F)$$
and
$$E\left[F\left(i_{g_T}\right) \mid \mathscr{G}_T, \Delta_T\right] = v(\Delta_T; F) \quad \text{on } \{0 < g_T < T\}.$$

Proof. Let $\phi$ be a positive Borel function on $\mathbb{R}_+$; making use of the preceding result, we may write
$$E\left[Z_{g_t}\phi\left(R\left(i_{g_t}\right)\right)F\left(i_{g_t}\right)\right] = E\left[Z_{g_t}\phi(\Delta_t)F\left(i_{g_t}\right)\right] = E\left[Z_{g_t}q\left(A_t, \phi(R)F\right)\right].$$
But, looking back at the definition of $q$, we have
$$q\left(\cdot, \phi(R)F\right) = q\left(\cdot, \phi(R)v(R; F)\right),$$
so that, using again the last proposition but in the reverse direction, we get
$$E\left[Z_{g_t}\phi(\Delta_t)F\left(i_{g_t}\right)\right] = E\left[Z_{g_t}\phi(\Delta_t)v(\Delta_t; F)\right],$$
which is the desired result. The generalization to stopping times is performed as in the preceding proof. □
We now prove an independence property between some particular excursions and the past of the BM up to the times when these excursions begin. An $(\mathscr{F}_t^o)$-stopping time $T$ is said to be terminal if $T = t + T\circ\theta_t$ a.s. on the set $\{T > t\}$; hitting times, for instance, are terminal times. For such a time, $T = g_T + T\circ\theta_{g_T}$ a.s. on $\{g_T < T\}$. A time $T$ may be viewed as defined on $U$ by setting, for $u = i_0(w)$,
$$T(u) = T(w) \ \text{ if } R(w) \geq T(w), \qquad T(u) = +\infty \ \text{ otherwise}.$$
By Galmarino's test of Exercise (4.21) in Chap. I, this definition is unambiguous. If $T(u) < \infty$, the length $\Delta_T$ of the excursion straddling $T$ may then also be viewed as defined on $U$. Thanks to these conventions, the expressions in the next proposition make sense.

(3.5) Proposition. If $T$ is a terminal $(\mathscr{F}_t^o)$-stopping time, then on $\{0 < g_T < T\}$,
$$E\left[F\left(i_{g_T}\right) \mid \mathscr{G}_T\right] = n\left(F1_{(R>T)}\right)/n(R > T)$$
and
$$E\left[F\left(i_{g_T}\right) \mid \mathscr{G}_T, \Delta_T\right] = n\left(F1_{(R>T)} \mid R\right)(\Delta_T)\big/n\left(1_{(R>T)} \mid R\right)(\Delta_T).$$

Proof. For a positive predictable process $Z$, the same arguments as in Proposition (3.3) show that
$$E\left[Z_{g_T}F\left(i_{g_T}\right)1_{(0<g_T<T)}\right] = E\left[\sum_{s\in G_w} 1_{(s<T)}Z_s F(i_s)1_{(R(i_s)>T(i_s))}\right]$$
$$= E\left[\int_0^\infty ds\, Z_{\tau_s}1_{(\tau_s<T)}\int F(u)1_{(R(u)>T(u))}\,n(du)\right]$$
$$= E\left[\int_0^\infty ds\, Z_{\tau_s}1_{(\tau_s<T)}\,n(R > T)\right] n\left(F1_{(R>T)}\right)/n(R > T)$$
$$= E\left[Z_{g_T}1_{(0<g_T<T)}\right] n\left(F1_{(R>T)}\right)/n(R > T)$$
which proves the first half of the statement. To prove the second half we use the first one and use the same pattern as in Proposition (3.4). □
492
Chapter XII. Excursions
Remark. As the right-hand side of the first equality in the statement is a constant, it follows that any excursion which straddles a terminal time $T$ is independent of the past of the BM up to time $g_T$. This is the independence property we had announced. We may observe that, by Proposition (3.3), this property does not hold with a fixed time $t$ in lieu of the terminal time $T$ (see however Exercise (3.11)).
We close this section with an interesting application of the above results which will be used in the following section. As usual, we set
$$T_\varepsilon(w) = \inf\{t > 0 : w(t) > \varepsilon\}.$$
On $U$, we have $\{T_\varepsilon < \infty\} = \{T_\varepsilon < R\}$, and moreover

(3.6) Proposition. $n\left(\sup_{s\le R(u)} u(s) > \varepsilon\right) = n(T_\varepsilon < \infty) = 1/2\varepsilon$.
Proof. Let $0 < x < y$. The time $T_x$ is a terminal time to which we may apply the preceding proposition with $F = 1_{(T_y<R)}$; it follows that
$$P\left[T_y\big(i_{g_{T_x}}\big) < \infty\right] = n(T_y < \infty)\big/n(T_x < \infty).$$
The left-hand side of this equality is also equal to $P_x[T_y < T_0] = x/y$; as a result, $n(T_\varepsilon < \infty) = c/\varepsilon$ for a constant $c$ which we now determine.

Proposition (2.6) applied to $H(s, \cdot\,; u) = 1_{(T_\varepsilon<\infty)}(u)\, 1_{(s\le t)}$ yields
$$n(T_\varepsilon < \infty)\, E[L_t] = E\Big[\sum_{s\in G_w\cap[0,t]} 1_{(T_\varepsilon<\infty)}(i_s)\Big],$$
which, in the notation of Sect. 1 Chap. VI, implies that
$$c\, E[L_t] = E\left[\varepsilon\big(d_\varepsilon(t) \pm 1\big)\right];$$
letting $\varepsilon$ tend to 0, by Theorem (1.10) of Chap. VI, we get $c = 1/2$. □
Remark. This was also proved in Exercise (2.10).
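The key step $P_x[T_y < T_0] = x/y$ in the proof is the classical gambler's-ruin probability, for which the simple random walk gives an exact discrete analogue. A simulation sketch (the levels and trial count are arbitrary choices, not from the text):

```python
import random

def hits_top_first(start, top, rng):
    """Simple random walk from `start`, absorbed at 0 or `top`;
    returns True when `top` is reached before 0."""
    pos = start
    while 0 < pos < top:
        pos += rng.choice((-1, 1))
    return pos == top

rng = random.Random(1)
x, y, trials = 2, 5, 20000
est = sum(hits_top_first(x, y, rng) for _ in range(trials)) / trials
print(est)  # should be close to x/y = 0.4
```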
(3.7) Exercise. Use the results in this and the preceding section to give the conditional law of $d_1$ given $g_1$. Deduce therefrom another proof of 4°) and 5°) in Exercise (3.20) of Chap. III.
*#
(3.8) Exercise. 1°) Prove that conditionally on $g_1 = u$, the process $(B_t,\ t\le u)$ is a Brownian Bridge over $[0, u]$. Derive therefrom that $\big(B_{t g_1}/\sqrt{g_1},\ 0\le t\le 1\big)$ is a Brownian Bridge over $[0,1]$ which is independent of $g_1$ and of $\{B_{g_1+u},\ u\ge 0\}$, and that the law of $g_1$ is the arcsine law. See also Exercise (2.30) Chap. VI.
2°) The Brownian Bridge over $[0,1]$ is a semimartingale. Let $\ell^a$ be the family of its local times up to time 1. Prove that $L^a_{g_1}$ has the same law as $\sqrt{g_1}\,\ell^{a/\sqrt{g_1}}$ where $g_1$ is independent of the Bridge. In particular, $L^0_{g_1} \stackrel{(d)}{=} \sqrt{g_1}\,\ell^0$; derive therefrom that $\ell^0$ has the same law as $\sqrt{2e}$, where $e$ is an exponential r.v. with parameter 1.
[Hint: See Sect. 6 Chap. 0.]
3°) Prove that the process $M_u = \big|B_{g_1 + u(1-g_1)}\big|\big/\sqrt{1-g_1}$, $0\le u\le 1$, is independent of the σ-algebra $\mathscr{F}_{g_1}$; $M$ is called the Brownian Meander of length 1. Prove that $M_1$ has the law of $\sqrt{2e}$ just as $\ell^0$ above.
[Hint: Use the scaling properties of $n$ described in Exercise (2.13).]
4°) Prove that the joint law of $(g_t, L_t, B_t)$ is
$$1_{(l>0)}\, 1_{(s<t)}\; \frac{l}{\sqrt{2\pi s^3}}\, \exp\Big(-\frac{l^2}{2s}\Big)\; \frac{|x|}{\sqrt{2\pi(t-s)^3}}\, \exp\Big(-\frac{x^2}{2(t-s)}\Big)\; ds\, dl\, dx.$$
[Hint: $(g_1, L_1, B_1) \stackrel{(d)}{=} \big(g_1,\ \sqrt{g_1}\,\ell^0,\ \sqrt{1-g_1}\, M_1\big)$ where $g_1, \ell^0, M_1$ are independent.]
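The arcsine law for $g_1$ in 1°) is easy to probe by simulation: $P(g_1\le x) = (2/\pi)\arcsin\sqrt{x}$, which equals $1/3$ at $x = 1/4$. A minimal sketch with a Gaussian random walk (step counts and sample sizes are arbitrary choices):

```python
import random

def last_zero_fraction(steps, rng):
    """Approximate g_1: fraction of time at the last sign change of the walk."""
    pos, last = 0.0, 0.0
    for i in range(1, steps + 1):
        new = pos + rng.gauss(0.0, 1.0)
        if pos * new <= 0.0:        # the walk crossed zero during this step
            last = i / steps
        pos = new
    return last

rng = random.Random(2)
steps, paths = 200, 4000
est = sum(last_zero_fraction(steps, rng) <= 0.25 for _ in range(paths)) / paths
print(est)  # should be close to (2/pi)*arcsin(1/2) = 1/3
```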
*
(3.9) Exercise. We retain the notation of the preceding exercise and put moreover $A_t = \int_0^t 1_{(B_s>0)}\, ds$ and $U = \int_0^1 1_{(\beta_s>0)}\, ds$ where $\beta$ is a Brownian Bridge. We recall from Sect. 2 Chap. VI that the law of $A_1$ is the Arcsine law; we aim at proving that $U$ is uniformly distributed on $[0,1]$.
1°) Let $T$ be an exponential r.v. with parameter 1 independent of $B$. Prove that $A_{g_T}$ and $(A_T - A_{g_T})$ are independent. As a result,
$$A_T \stackrel{(d)}{=} T A_1 \stackrel{(d)}{=} T g_1 U + T\varepsilon(1 - g_1)$$
where $\varepsilon$ is a Bernoulli r.v. and $T, g_1, U, \varepsilon$ are independent.
2°) Using Laplace transforms, deduce from the above result that
$$\tfrac{1}{2}\, N^2 U \stackrel{(d)}{=} \tfrac{1}{2}\, N^2 V$$
where $N$ is a centered Gaussian r.v. with variance 1 which is assumed to be independent of $U$ on one hand, and of $V$ on the other hand, and where $V$ is uniformly distributed on $[0,1]$. Prove that this entails the desired result.
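The conclusion of this exercise, that the time the Brownian Bridge spends positive is uniform on $[0,1]$, can be checked by simulation, building the bridge as $W_t - tW_1$ from a Gaussian walk (a sketch; the discretization is an arbitrary choice):

```python
import math
import random

def bridge_positive_fraction(steps, rng):
    """Fraction of time a discretized Brownian bridge W_t - t*W_1 is positive."""
    incs = [rng.gauss(0.0, 1.0) for _ in range(steps)]
    total = sum(incs)
    partial, pos_count = 0.0, 0
    for i, dx in enumerate(incs, start=1):
        partial += dx
        if partial - (i / steps) * total > 0.0:
            pos_count += 1
    return pos_count / steps

rng = random.Random(3)
samples = [bridge_positive_fraction(200, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(mean, sd)  # uniform law: mean 1/2, standard deviation 1/sqrt(12) ~ 0.289
```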
(3.10) Exercise. Prove that the natural filtration $(\mathscr{G}_t)$ of the process $g$ is strictly coarser than the filtration $(\mathscr{F}_{g_t})$ and is equal to the natural filtration of the local time $L$.
(3.11) Exercise. For $a > 0$ let $T = \inf\{t : t - g_t = a\}$. Prove the independence between the excursion which straddles $T$ and the past of the BM up to time $g_T$.
§4. Descriptions of Ito's Measure and Applications
In Sects. 2 and 3, we have defined the Ito measure n and shown how it can be
used in the statements or proofs of many results. In this section, we shall give
several precise descriptions of n which will lead to other applications.
Let us first observe that when a σ-finite measure is given on a function space, as is the case for $n$ on $U$, a property of the measure is a property of the "law" of the coordinate process when governed by this measure. Moreover, the measure is the unique extension of its restriction to the semi-algebra of measurable rectangles; in other words, the measure is known as soon as the finite-dimensional distributions of the coordinate process are known.
Furthermore, the notion of homogeneous Markov process makes perfect sense if the time set is $]0,\infty[$ instead of $[0,\infty[$ as in Chap. III. The only difference is that we cannot speak of "initial measures" any longer and if, given the transition semi-group $P_t$, we want to write down finite-dimensional distributions $P\left[X_{t_1}\in A_1, \ldots, X_{t_k}\in A_k\right]$ for $k$-tuples $0 < t_1 < \cdots < t_k$, we have to know the measures $\mu_t = X_t(P)$. The above distribution is then equal to
$$\int_{A_1}\mu_{t_1}(dx_1)\int_{A_2} P_{t_2-t_1}(x_1, dx_2)\cdots\int_{A_k} P_{t_k-t_{k-1}}(x_{k-1}, dx_k).$$
The family of measures $\mu_t$ is known as the entrance law. To be an entrance law, $(\mu_t)$ has to satisfy the equality $\mu_t P_s = \mu_{t+s}$ for every $s$ and $t > 0$. Conversely, given $(\mu_t)$ and a transition function $(P_t)$ satisfying this equation, one can construct a measure on the canonical space such that the coordinate process has the above marginals and therefore is a Markov process. Notice that the $\mu_t$'s may be σ-finite measures and that everything still makes sense; if $\mu$ is an invariant measure for $P_t$, the family $\mu_t = \mu$ for every $t$ is an entrance law. In the situation of Chap. III, if the process is governed by $P_\nu$, the entrance law is $(\nu P_t)$. Finally, we may observe that in this situation, the semi-group needs to be defined only for $t > 0$.
We now recall some notation from Chap. III. We denote by $Q_t$ the semi-group of the BM killed when it reaches 0 (see Exercises (1.15) and (3.29) in Chap. III). We recall that it is given by the density
$$q_t(x, y) = (2\pi t)^{-1/2}\left(\exp\Big(-\frac{1}{2t}(y - x)^2\Big) - \exp\Big(-\frac{1}{2t}(y + x)^2\Big)\right) 1_{(xy>0)}.$$
We will denote by $\lambda_t(dy)$ the measure on $\mathbb{R}\setminus\{0\}$ which has the density
$$m_t(y) = \big(2\pi t^3\big)^{-1/2}\, |y|\, \exp\big(-y^2/2t\big)$$
with respect to the Lebesgue measure $dy$. For fixed $y$, this is the density in $t$ of the hitting time $T_y$ as was shown in Sect. 3 Chap. III.
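Integrating the density $m_t$ over $[y,\infty[$ returns the centered Gaussian density $(2\pi t)^{-1/2}e^{-y^2/2t}$, an identity used at the end of the proof of the theorem below. A quick numerical check (a sketch; the values of $t$ and $y$ are arbitrary):

```python
import math

def m(t, y):
    """m_t(y) = (2*pi*t^3)^(-1/2) * y * exp(-y^2/(2t)), y > 0."""
    return y * math.exp(-y * y / (2.0 * t)) / math.sqrt(2.0 * math.pi * t ** 3)

def g(t, y):
    """Centered Gaussian density (2*pi*t)^(-1/2) * exp(-y^2/(2t))."""
    return math.exp(-y * y / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def lambda_tail(t, y, n=100000):
    """Trapezoid rule for the entrance-law mass lambda_t([y, oo[)."""
    zmax = y + 20.0 * math.sqrt(t)      # the Gaussian tail beyond is negligible
    h = (zmax - y) / n
    s = 0.5 * (m(t, y) + m(t, zmax))
    s += sum(m(t, y + i * h) for i in range(1, n))
    return s * h

t, y = 2.0, 1.5
print(abs(lambda_tail(t, y) - g(t, y)))  # ~ 0 : lambda_t([y,oo[) = g_t(y)
```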
Let us observe that on $]0,\infty[$
$$m_t = -\frac{\partial g_t}{\partial y} = \lim_{x\to 0}\frac{1}{2x}\, q_t(x, \cdot),$$
where $g_t$ denotes the centered Gaussian density with variance $t$.
Our first result deals with the coordinate process $w$ restricted to the interval $]0, R[$; or, to use the devices of Chap. III, we will consider, in this first result only, that $w(t)$ is equal to the fictitious point $\delta$ on $[R,\infty[$. With this convention we may now state
(4.1) Theorem. Under $n$, the coordinate process $w(t)$, $t > 0$, is a homogeneous strong Markov process with $Q_t$ as transition semi-group and $\lambda_t$, $t > 0$, as entrance law.
Proof. Everything being symmetric with respect to 0, it is enough to prove the result for $n_+$ or $n_-$. We will prove it for $n_+$, but will keep $n$ in the notation for the sake of simplicity.

The space $(U_\delta, \mathscr{U}_\delta)$ may serve as the canonical space for the homogeneous Markov process (in the sense of Chap. III) associated with $Q_t$, in other words Brownian motion killed when it first hits $\{0\}$; we call $(Q_x)$ the corresponding probability measures on $(U_\delta, \mathscr{U}_\delta)$. As usual, we call $\theta_t$ the shift operators on $U_\delta$.

Our first task will be to prove the equality
$$n\big(\{u(r)\in A\}\cap\theta_r^{-1}(\Gamma)\big) = \int_A Q_x(\Gamma)\, n\big(u(r)\in dx\big) \tag{4.1}$$
for $\Gamma\in\mathscr{U}_\delta$, $A\in\mathscr{B}(\mathbb{R}_+ - \{0\})$ and $r > 0$. Suppose that $n(u(r)\in A) > 0$, failing which the equality is plainly true. For $r > 0$, we have $\{u(r)\in A\}\subset\{r < R\}$, hence $n(u(r)\in A) < \infty$, and the expressions we are about to write will make sense. Using Lemma (1.13) for the process $e^{\{u(r)\in A\}}$, we get
$$n\big(\{u(r)\in A\}\cap\theta_r^{-1}(\Gamma)\big)\big/ n\big(u(r)\in A\big) = P\left[e^{\{u(r)\in A\}}\in\theta_r^{-1}(\Gamma)\right]$$
where $P$ is the Wiener measure.

The time $S$ which is the first jump time of the Poisson process $N^{\{u(r)\in A\}}$ is an $(\mathscr{F}_{\tau_s})$-stopping time; the times $\tau_{S-}$ and $\tau_S$ are therefore $(\mathscr{F}_t)$-stopping times. We set $T = \tau_{S-} + r$. The last displayed expression may be rewritten
$$P\left[\{B_T\in A\}\cap\{\tilde B\circ\theta_T\in\Gamma\}\right]$$
where $\tilde B$ stands for the BM killed when it hits $\{0\}$. By the strong Markov property for the $(\mathscr{F}_t)$-stopping time $T$, this is equal to
$$E\left[1_{\{B_T\in A\}}\, Q_{B_T}(\Gamma)\right].$$
As a result
$$n\big(\{u(r)\in A\}\cap\theta_r^{-1}(\Gamma)\big) = n\big(u(r)\in A\big)\int\gamma(dx)\, Q_x[\Gamma]$$
where $\gamma$ is the law of $B_T$ under the restriction of $P$ to $\{B_T\in A\}$. For a Borel subset $C$ of $\mathbb{R}$, take $\Gamma = \{u(0)\in C\}$ in the above formula, which, since then $Q_x[\Gamma] = 1_C(x)$, becomes
$$n\big(u(r)\in A\cap C\big) = n\big(u(r)\in A\big)\,\gamma(C);$$
it follows that $\gamma(\cdot) = n\big(u(r)\in A\cap\cdot\,\big)\big/n\big(u(r)\in A\big)$, which proves eq. (4.1).
Let now $0 < t_1 < t_2 < \cdots < t_k < t$ be real numbers and let $f_1, \ldots, f_k, f$ be positive Borel functions on $\mathbb{R}$. Since $\tilde B$, the BM killed at 0, is a Markov process, we have, for every $x$,
$$Q_x\Big[\prod_{i=2}^k f_i\big(u(t_i)\big)\, f\big(u(t)\big)\Big] = Q_x\Big[\prod_{i=2}^k f_i\big(u(t_i)\big)\, Q_{t-t_k}f\big(u(t_k)\big)\Big]. \tag{4.2}$$
Set $F = \prod_{i=2}^k f_i\big(u(t_i - t_1)\big)$. By repeated applications of equations (4.1) and (4.2), we may now write
$$
\begin{aligned}
n\Big[\prod_{i=1}^k f_i\big(u(t_i)\big)\, f\big(u(t)\big)\Big]
&= n\Big[f_1\big(u(t_1)\big)\,\big(F\cdot f(u(t - t_1))\big)\circ\theta_{t_1}\Big]\\
&= n\Big[f_1\big(u(t_1)\big)\, Q_{u(t_1)}\big[F\cdot f\big(u(t - t_1)\big)\big]\Big]\\
&= n\Big[f_1\big(u(t_1)\big)\, Q_{u(t_1)}\big[F\, Q_{t-t_k}f\big(u(t_k - t_1)\big)\big]\Big]\\
&= n\Big[f_1\big(u(t_1)\big)\,\big(F\circ\theta_{t_1}\big)\, Q_{t-t_k}f\big(u(t_k)\big)\Big]\\
&= n\Big[\prod_{i=1}^k f_i\big(u(t_i)\big)\, Q_{t-t_k}f\big(u(t_k)\big)\Big]
\end{aligned}
$$
which shows that the coordinate process $u$ is, under $n$, a homogeneous Markov process with transition semi-group $Q_t$. By what we have seen in Chap. III, $\tilde B$ has the strong Markov property; using this in eq. (4.2) instead of the ordinary Markov property, we get the analogous property for $n$; we leave the details as an exercise for the reader.
The entrance law is given by $\lambda_t(A) = n(u(t)\in A)$ and it remains to prove that those measures have the density announced in the statement. It is enough to compute $\lambda_t([y,\infty[)$ for $y > 0$. For $0 < \varepsilon < y$,
$$\lambda_t\big([y,\infty[\big) = n\big(u(t)\ge y\big) = n\big(u(t)\ge y;\ T_\varepsilon < t\big)$$
where $T_\varepsilon = \inf\{t > 0 : u(t) > \varepsilon\}$. Using the strong Markov property for $n$ just alluded to, we get
$$\lambda_t\big([y,\infty[\big) = n\Big(T_\varepsilon < t;\ Q_{u(T_\varepsilon)}\big(u(t - T_\varepsilon)\ge y\big)\Big) = n\Big(T_\varepsilon < t;\ Q_{t-T_\varepsilon}\big(\varepsilon, [y,\infty[\,\big)\Big).$$
Applying Proposition (3.5) with $F(u) = 1_{(T_\varepsilon(u)<t)}\, Q_{t-T_\varepsilon(u)}\big(\varepsilon, [y,\infty[\,\big)$ yields
$$\lambda_t\big([y,\infty[\big) = E\left[1_{(\tilde T_\varepsilon<t)}\, Q_{t-\tilde T_\varepsilon}\big(\varepsilon, [y,\infty[\,\big)\right] n(T_\varepsilon < R)$$
where $\tilde T_\varepsilon = T_\varepsilon\big(i_{g_{T_\varepsilon}}\big)$. Using Proposition (3.6) and the known value of $Q_t$, this is further equal to
$$E\left[1_{(\tilde T_\varepsilon<t)}\,\big(\varphi_{t-\tilde T_\varepsilon}(y + \varepsilon) - \varphi_{t-\tilde T_\varepsilon}(y - \varepsilon)\big)\big/2\varepsilon\right]$$
with $\varphi_t(y) = \int_{-\infty}^y g_t(z)\, dz$. If we let $\varepsilon$ tend to zero, then $\tilde T_\varepsilon$ converges to zero $P$-a.s. and we get $\lambda_t\big([y,\infty[\big) = g_t(y)$, which completes the proof.
Remarks. 1°) That $(\lambda_t)$ is an entrance law for $Q_t$ can be checked by elementary computations but is of course a consequence of the above proof.
2°) Another derivation of the value of $(\lambda_t)$ is given in Exercise (4.8).
The above result permits us to give Itô's description of $n$ which was hinted at in the remark after Proposition (2.8). Let us recall that according to this proposition the density of $R$ under $n_+$ is $\big(2\sqrt{2\pi r^3}\big)^{-1}$. In the following result, we deal with the law of the Bessel Bridge of dimension 3 over $[0, r]$ (from 0 to 0), which we will abbreviate to $\pi_r$. The following result shows in particular that the law of the normalized excursion (Exercise (2.13)) is the probability measure $\pi_1$.
(4.2) Theorem. Under $n_+$, and conditionally on $R = r$, the coordinate process $w$ has the law $\pi_r$. In other words, if $\Gamma\in\mathscr{U}$,
$$n_+(\Gamma) = \int_0^\infty \pi_r\big(\Gamma\cap\{R = r\}\big)\,\frac{dr}{2\sqrt{2\pi r^3}}.$$
Proof. The result of Theorem (4.1) may be stated by saying that for $0 < t_1 < t_2 < \cdots < t_n$ and Borel sets $A_i\subset\ ]0,\infty[$, if we set
$$\Gamma = \bigcap_{i=1}^n\{u(t_i)\in A_i\},$$
then
$$n_+(\Gamma) = \int_{A_1} m_{t_1}(x_1)\, dx_1\int_{A_2} q_{t_2-t_1}(x_1, x_2)\, dx_2\cdots\int_{A_n} q_{t_n-t_{n-1}}(x_{n-1}, x_n)\, dx_n.$$
On the other hand, using the explicit value for $\pi_r$ given in Sect. 3 of Chap. XI, and taking into account the fact that $\Gamma\cap\{R < t_n\} = \emptyset$, the formula in the statement reads
$$\int_{t_n}^\infty \frac{dr}{2\sqrt{2\pi r^3}}\int_{A_1} 2\sqrt{2\pi r^3}\, m_{t_1}(x_1)\, dx_1\int_{A_2} q_{t_2-t_1}(x_1, x_2)\, dx_2\cdots\int_{A_n} q_{t_n-t_{n-1}}(x_{n-1}, x_n)\, m_{r-t_n}(x_n)\, dx_n.$$
But
$$\int_{t_n}^\infty \frac{2\sqrt{2\pi r^3}}{2\sqrt{2\pi r^3}}\, m_{r-t_n}(x_n)\, dr = \int_{t_n}^\infty m_{r-t_n}(x_n)\, dr = 1$$
as was seen already several times. Thus the two expressions for $n_+(\Gamma)$ are equal and the proof is complete. □
This result has the following important
(4.3) Corollary. The measure $n$ is invariant under time-reversal; in other words, it is invariant under the map $u\to\hat u$ where
$$\hat u(t) = u\big(R(u) - t\big)\, 1_{(t\le R(u))}.$$
Proof By Exercise (3.7) of Chap. XI, this follows at once from the previous
result.
This can be used to give another proof of the time-reversal result of Corollary
(4.6) in Chap. VII.
(4.4) Corollary. If $B$ is a BM(0) and, for $a > 0$, $T_a = \inf\{t : B_t = a\}$, and if $Z$ is a BES³(0) and $U_a = \sup\{t : Z_t = a\}$, then the processes $Y_t = a - B_{T_a - t}$, $t\le T_a$, and $Z_t$, $t\le U_a$, are equivalent.

Proof. We retain the notation of Proposition (2.5) and set $\beta_t = L_t - |B_t| = s - \big|e_s(t - \tau_{s-})\big|$ if $\tau_{s-}\le t\le\tau_s$. We know from Sect. 2 Chap. VI that $\beta$ is a standard BM.

If we set $Z_t = L_t + |B_t| = s + \big|e_s(t - \tau_{s-})\big|$ if $\tau_{s-}\le t\le\tau_s$, Pitman's theorem (see Corollary (3.8) of Chap. VI) asserts that $Z$ is a BES³(0).

For $a > 0$ it is easily seen that
$$\tau_a = \inf\{t : L_t = a\} = \inf\{t : \beta_t = a\};$$
moreover
$$\tau_a = \sup\{t : Z_t = a\}$$
since $Z_{\tau_a} = L_{\tau_a} + |B_{\tau_a}| = a$ and, for $t > \tau_a$, one has $L_t > a$.

We now define another Poisson point process with values in $(U, \mathscr{U})$ by setting
$$\tilde e_s = e_s \quad\text{if } s > a, \qquad \tilde e_s(t) = e_{a-s}\big(R(e_{a-s}) - t\big), \quad 0\le t\le R(e_{a-s}), \quad\text{if } s\le a.$$
In other words, for $s\le a$, $\tilde e_s = \hat e_{a-s}$ in the notation of Corollary (4.3). Thus, for a positive $\mathscr{B}(\mathbb{R}_+)\otimes\mathscr{U}_\delta$-measurable function $f$,
$$\sum_{s\le a} f(s, \tilde e_s) = \sum_{s\le a} f(a - s, \hat e_s),$$
and the master formula yields
$$E\Big[\sum_{0<s\le a} f(s, \tilde e_s)\Big] = E\left[\int_0^a ds\int f(a - s, \hat u)\, n(du)\right];$$
by Corollary (4.3), this is further equal to
$$E\left[\int_0^a ds\int f(s, u)\, n(du)\right].$$
This shows that the PPP $\tilde e$ has the same characteristic measure, hence the same law, as $e$. Consequently the process $\tilde Z$ defined from $\tilde e$ as $Z$ was defined from $e$ has the same law as $Z$. Moreover, one moment's reflection shows that
$$\tilde Z_t = a - \beta_{\tau_a - t} \qquad\text{for } 0\le t\le\tau_a,$$
which ends the proof. □
Let us recall that in Sect. 4 of Chap. X we have derived Williams' path decomposition theorem from the above corollary (and the reversal result in Proposition (4.8) of Chap. VII). We will now use this decomposition theorem to give another description of $n$ and several applications to BM. We denote by $M$ the maximum of positive excursions; in other words, $M$ is a r.v. defined on $U_{\delta+}$ by
$$M(u) = \sup_{s\le R(u)} u(s).$$
The law of $M$ under $n_+$ has been found in Exercise (2.10) and Proposition (3.6) and is given by $n_+(M\ge x) = 1/2x$.
We now give Williams' description of $n$. Pick two independent BES³(0) processes $\rho$ and $\hat\rho$ and call $T_c$ and $\hat T_c$ the corresponding hitting times of $c > 0$. We define a process $Z^c$ by setting
$$Z^c_t = \begin{cases}\rho_t, & 0\le t\le T_c,\\ c - \hat\rho_{t - T_c}, & T_c\le t\le T_c + \hat T_c,\\ 0, & t\ge T_c + \hat T_c.\end{cases}$$
For $\Gamma\in\mathscr{U}_{\delta+}$, we put $N(c, \Gamma) = P\left[Z^c\in\Gamma\right]$. The map $N$ is a kernel; indeed $(Z^c_t)_{t\ge 0} \stackrel{(d)}{=} \big(c Z^1_{t/c^2}\big)_{t\ge 0}$, thanks to the scaling properties of BES³(0), so that $N$ maps continuous functions into continuous functions on $\mathbb{R}_+$ and the result follows by a monotone class argument. By Proposition (4.8) in Chap. VII, the second part of $Z^c$ might as well have been taken equal to $\hat\rho\big(T_c + \hat T_c - t\big)$, $T_c\le t\le T_c + \hat T_c$.
(4.5) Theorem. For any $\Gamma\in\mathscr{U}_{\delta+}$,
$$n_+(\Gamma) = \frac{1}{2}\int_0^\infty N(x, \Gamma)\, x^{-2}\, dx.$$
In other words, conditionally on its height being equal to $c$, the Brownian excursion has the law of $Z^c$.
Proof. Let $U_c = \{u : M(u)\ge c\}$; by Lemma (1.13), for $\Gamma\in\mathscr{U}_{\delta+}$,
$$n_+(\Gamma\cap U_c) = n_+(U_c)\, P\left[e^c\in\Gamma\right] = \frac{1}{2c}\, P\left[e^c\in\Gamma\right],$$
where $e^c$ is the first excursion whose height is $\ge c$. The law of this excursion is the law of the excursion which straddles $T_c$, that is, of the process $Y_t = B_{g_{T_c}+t}$, $0\le t\le d_{T_c} - g_{T_c}$. By applying the strong Markov property to $B$ at time $T_c$, we see that the process $Y$ may be split up into two independent parts $Y^1$ and $Y^2$, the parts of $Y$ before and after time $T_c - g_{T_c}$. By the strong Markov property again, the part $Y^2$ has the law of $\bar B_t$, $0\le t\le\bar T_0$, where $\bar B$ is a BM$(c)$. Thus, by Proposition (3.13) in Chap. VI, $Y^2$ may be described as follows: conditionally on the value $M$ of the maximum, it may be further split up into two independent parts $V^1$ and $V^2$, the parts of $Y^2$ before and after the time when $M$ is first reached. Moreover $V^1$ is a BES³$(c)$ run until it first hits $M$ and $V^2$ has the law of $M - \rho_t$ where $\rho$ is a BES³(0) run until it hits $M$.

Furthermore, by Williams' decomposition theorem (Theorem (4.9) Chap. VII), the process $Y^1$ is a BES³(0) run until it hits $c$. By the strong Markov property for BES³, if we piece together $Y^1$ and $V^1$, the process we obtain, namely $Y^1$ followed by $V^1$, is a BES³(0) run until it hits $M$.

As a result, we see that the law of $e^c$ conditional on the value $M$ of the maximum is that of $Z^M$. Since the law of this maximum has the density $c/M^2$ on $[c,\infty[$, as was seen in Proposition (3.13) of Chap. VI, we get
$$P\left[e^c\in\Gamma\right] = c\int_c^\infty x^{-2}\, N(x, \Gamma)\, dx$$
which, by the first sentence in the proof, is the desired result. □
To state and prove our next result, we will introduce some new notation. We will call $\mathscr{W}$ the space of real-valued continuous functions $w$ defined on an interval $[0, \zeta(w)]\subset[0,\infty[$. We endow it with the usual σ-fields $\mathscr{G}^o$ and $\mathscr{G}$ generated by the coordinates. The Itô measure $n$ and the law of BM restricted to a compact interval may be seen as measures on $(\mathscr{W}, \mathscr{G})$.

If $\mu$ and $\mu'$ are two such measures, we define $\mu\circ\mu'$ as the image of $\mu\otimes\mu'$ under the map $(w, w')\to w\circ w'$ where
$$\zeta(w\circ w') = \zeta(w) + \zeta(w'),$$
$$w\circ w'(s) = w(s) \quad\text{if } 0\le s\le\zeta(w),$$
$$w\circ w'(s) = w\big(\zeta(w)\big) + w'\big(s - \zeta(w)\big) - w'(0) \quad\text{if } \zeta(w)\le s\le\zeta(w) + \zeta(w').$$
We denote by $\check\mu$ the image of $\mu$ under the time-reversal map $w\to\check w$ where
$$\zeta(\check w) = \zeta(w), \qquad \check w(s) = w\big(\zeta(w) - s\big), \quad 0\le s\le\zeta(w).$$
Finally, if $T$ is a measurable map from $\mathscr{W}$ to $[0,\infty]$, we denote by $\mu^T$ the image of $\mu$ by the map $w\to k_T(w)$ where
$$\zeta\big(k_T(w)\big) = \zeta(w)\wedge T(w), \qquad k_T(w)(s) = w(s) \quad\text{if } 0\le s\le\zeta(w)\wedge T(w).$$
We also define, as usual:
$$T_a(w) = \inf\{t : w(t) = a\}, \qquad L_a(w) = \sup\{t : w(t) = a\}.$$
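On discretized paths the two operations just defined are straightforward to implement; the sketch below represents a path by its samples on a unit-step grid, so that $\zeta$ is the index of the last sample (an encoding chosen here purely for illustration):

```python
def concat(w, wp):
    """w o w': follow w, then append w' translated so that the path is continuous."""
    return w + [w[-1] + x - wp[0] for x in wp[1:]]

def rev(w):
    """Time reversal: w_check(s) = w(zeta - s)."""
    return w[::-1]

w, wp = [0, 1, 2, 1], [5, 4, 4, 3]
glued = concat(w, wp)
assert len(glued) - 1 == (len(w) - 1) + (len(wp) - 1)  # zeta(w o w') = zeta(w) + zeta(w')
assert rev(rev(w)) == w                                # reversal is an involution
print(glued)  # [0, 1, 2, 1, 0, 0, -1]
```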
Although the law $P_a$ of BM$(a)$ cannot be considered as a measure on $\mathscr{W}$, we will use the notation $P_a^{T_0}$ for the law of BM$(a)$ killed at 0, which we may consider as a measure on $\mathscr{W}$ carried by the set of paths $w$ such that $\zeta(w) = T_0(w)$. If $S_3$ is the law of BES³(0), we may likewise consider $S_3^{L_a}$, and the time-reversal result of Corollary (4.6) in Chap. VII then reads
$$\check{\big(P_a^{T_0}\big)} = S_3^{L_a}.$$
In the same way, the last result may be stated
$$n_+ = \frac{1}{2}\int_0^\infty a^{-2}\,\Big(S_3^{T_a}\circ\check{\big(S_3^{T_a}\big)}\Big)\, da.$$
The space $U$ of excursions is contained in $\mathscr{W}$ and carries $n$; on this subspace, we will write $R$ instead of $\zeta$ in keeping with the notation used so far, and also use $w$ or $u$ indifferently. We may now state
(4.6) Proposition.
$$\int_0^\infty n\big(\cdot\cap\{r < R\}\big)\, dr = \int_{-\infty}^{+\infty}\check{\big(P_a^{T_0}\big)}\, da.$$
Proof. Let $(\theta_t)$ be the usual translation operators and as above put
$$g_t(w) = \sup\{s < t : w(s) = 0\}, \qquad d_t(w) = \inf\{s > t : w(s) = 0\}.$$
We denote by $E_0$ the expectation with respect to the Wiener measure.

The equality in the statement will be obtained by comparing two expressions of
$$J = E_m\left[\int_0^\infty 1_{[T_0<t]}\, e^{-\lambda g_t}\,\big(Y\circ k_{t-g_t}\circ\theta_{g_t}\big)\, dt\right]$$
where $P_m = \int_{-\infty}^{+\infty} P_a\, da$, $Y$ is a positive $\mathscr{G}$-measurable function and $\lambda > 0$.

From Proposition (2.6) it follows that
$$
\begin{aligned}
E_0\left[\int_0^\infty e^{-\lambda g_t}\,\big(Y\circ k_{t-g_t}\circ\theta_{g_t}\big)\, dt\right]
&= E_0\Big[\sum_{s\in G_w} e^{-\lambda s}\int_s^{d_s(w)}\big(Y\circ k_{t-s}\circ\theta_s(w)\big)\, dt\Big]\\
&= E_0\left[\int_0^\infty e^{-\lambda s}\, dL_s\right] n\left(\int_0^R Y\circ k_r\, dr\right).
\end{aligned}
$$
Using the strong Markov property for $P_a$ we get
$$J = \int_{-\infty}^{+\infty} da\, E_a\big[e^{-\lambda T_0}\big]\; E_0\left[\int_0^\infty e^{-\lambda s}\, dL_s\right] n\left(\int_0^R Y\circ k_r\, dr\right).$$
But $E_a\big[e^{-\lambda T_0}\big] = e^{-|a|\sqrt{2\lambda}}$ so that $\int_{-\infty}^{+\infty} da\, E_a\big[e^{-\lambda T_0}\big] = \sqrt{2/\lambda}$, and, by the results in Sect. 2 of Chap. X, $E_0\big[\int_0^\infty e^{-\lambda s}\, dL_s\big] = 1/\sqrt{2\lambda}$. As a result
$$J = \lambda^{-1}\int_0^\infty n\big(Y\circ k_r\, 1_{(r<R)}\big)\, dr.$$
On the other hand, $t - g_t(w)$ is the hitting time of zero for the path reversed at time $t$, and the restriction of $P_m$ to what occurs before $t$ is invariant under this reversal; consequently
$$J = E_m\left[Z\int_0^\infty 1_{[T_0<t]}\, e^{-\lambda(t - T_0)}\, dt\right] = \lambda^{-1} E_m[Z] = \lambda^{-1}\int_{-\infty}^{+\infty} E_a[Z]\, da$$
where $Z(w) = \big(Y\circ k_{T_0}\big)(\check w)$. Comparing the two values found for $J$ ends the proof.
Remark. Of course this result has a one-sided version, namely
$$\int_0^\infty n_+\big(\cdot\cap\{r < R\}\big)\, dr = \int_0^\infty\check{\big(P_a^{T_0}\big)}\, da.$$
We may now turn to Bismut's description of $n$.
(4.7) Theorem. Let $\bar n_+$ be the measure defined on $\mathbb{R}_+\times U_\delta$ by
$$\bar n_+(dt, du) = 1_{(0\le t\le R(u))}\, dt\; n_+(du).$$
Then, under $\bar n_+$, the law of the r.v. $(t, u)\to u(t)$ is the Lebesgue measure $da$ and, conditionally on $u(t) = a$, the processes $\{u(s),\ 0\le s\le t\}$ and $\{u(R(u) - s),\ 0\le s\le R(u) - t\}$ are two independent BES³ processes run until they last hit $a$.
The above result may be seen, by looking upon $U_\delta$ as a subset of $\mathscr{W}$, as an equality between two measures on $\mathbb{R}_+\times\mathscr{W}$. By the monotone class theorem, it is enough to prove the equality for the functions of the form $f(t)H(w)$ where $H$ belongs to a class stable under pointwise multiplication and generating the σ-field on $\mathscr{W}$. Such a class is provided by the r.v.'s which may be written
$$H = \prod_{i=1}^n\int_0^\infty e^{-\lambda_i s}\, f_i\big(w(s)\big)\, ds$$
where $\lambda_i$ is a positive real number and $f_i$ a bounded Borel function on $\mathbb{R}$. It is clear that for each $t$, these r.v.'s may be written $Z_t\cdot Y_t\circ\theta_t$ where $Z_t$ is $\mathscr{G}^o_t$-measurable. We will use this in the following
Proof. Using the Markov description of $n$ proved in Theorem (4.1), we have
$$
\begin{aligned}
\int_{\mathbb{R}_+\times U_\delta} f(t)\, Z_t(u)\, Y_t\big(\theta_t(u)\big)\,\bar n_+(dt, du)
&= \int_0^\infty dt\, f(t)\int_{U_\delta} 1_{[t<R(u)]}\, Z_t(u)\, Y_t\big(\theta_t(u)\big)\, n_+(du)\\
&= \int_0^\infty dt\, f(t)\int_{U_\delta} 1_{[t<R(u)]}\, Z_t(u)\, E^0_{u(t)}[Y_t]\, n_+(du)\\
&= \int_0^\infty dt\, f(t)\int Z_t(u)\, E^0_{u(t)}[Y_t]\, dn_+\big(\cdot\cap(t < R)\big)
\end{aligned}
$$
where $E^0_x$ is the expectation taken with respect to the law of BM$(x)$ killed at 0. Using the one-sided version of Proposition (4.6) (see the remark after it), this is further equal to
$$\int_0^\infty da\int\check{\big(P_a^{T_0}\big)}(dw)\; f\big(\zeta(w)\big)\, Z_{\zeta(w)}(w)\, E^0_a\big[Y_{\zeta(w)}\big].$$
But for $\check{\big(P_a^{T_0}\big)}$ we have $\zeta = L_a$ a.s., hence in particular $w(\zeta) = a$ a.s., and for $P_a^{T_0}$ we have $\zeta = T_0$ a.s., so that we finally get
$$\int_0^\infty da\;\;\check E{}_a^{T_0}\big[f(L_a)\, Z_{L_a}\big]\;\, E_a^{T_0}\big[Y_{T_0}\big].$$
Using the time-reversal result of Corollary (4.4), this is precisely what was to be proved.
#
(4.8) Exercise. (Another proof of the explicit value of $\lambda_t$ in Theorem (4.1)). Let $f$ be a positive Borel function on $\mathbb{R}_+$ and $U_p$ the resolvent of BM. Using the formula of Proposition (2.6), prove that
$$U_p f(0) = E\left[\int_0^\infty e^{-p\tau_s}\, ds\right]\int_0^\infty e^{-pu}\,\lambda_u(f)\, du.$$
Compute $\lambda_u(f)$ from the explicit form of $U_p$ and the law of $\tau_s$.
*
(4.9) Exercise. Conditionally on $(R = r)$, prove that the Brownian excursion is a semimartingale over $[0, r]$ and, as such, has a family $L^a$, $a > 0$, of local times up to time $r$ for which the occupation times formula obtains.
(4.10) Exercise. For $x\in\mathbb{R}_+$, let $S_x$ be the first time $s$ such that $R(e_s) > x$. Let $L$ be the length of the longest excursion $e_u$, $u < S_x$. Prove that
$$P[L < y] = (y/x)^{1/2} \qquad\text{for } y\le x.$$
(4.11) Exercise. (Watanabe's process and Knight's identity). 1°) Retaining the usual notation, prove that the process $Y_t = S_{\tau_t}$, already studied in Exercise (1.9) of Chap. X, is a homogeneous Markov process on $[0,\infty[$ with semi-group $T_t$ given by
$$T_0 = I, \qquad T_t f(x) = e^{-t/2x}\, f(x) + \int_x^\infty e^{-t/2y}\,\frac{t}{2y^2}\, f(y)\, dy.$$
[Hint: Use the description of BM by means of the excursion process given in Proposition (2.5).] In particular,
$$P\left[S_{\tau_t}\le a\right] = \exp(-t/2a).$$
Check the answers given in Exercise (1.27) Chap. VII.
* 2°) More generally, prove that
$$E\left[\exp\big(-\lambda^2\tau_t^+/2\big)\, 1_{(S_{\tau_t}\le a)}\right] = \exp\big(-t\lambda\coth(a\lambda)/2\big)$$
where $\tau_t^+ = \int_0^{\tau_t} 1_{(B_s>0)}\, ds$.
3°) Deduce therefrom Knight's identity, that is, prove that consequently
$$\tau_t^+\big/ S_{\tau_t}^2 \stackrel{(d)}{=} \inf\{s : U_s = 2\},$$
where $U$ is a BES³(0).
[Hint: Prove and use the formula
$$\int\big(1 - \exp(-R/2)\, 1_{(M\le x)}\big)\, dn_+ = (\coth x)/2,$$
where $M = \sup_{t<R} w(t)$.]
4°) Give another proof using time reversal.
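The semi-group in 1°) can be checked for internal consistency: applying $T_t$ to the indicator of $[0, a]$ must return $\exp(-t/2a)$, in agreement with the displayed distribution function of $S_{\tau_t}$. A numerical sketch (the values of $t$, $x$, $a$ are arbitrary):

```python
import math

def kernel(t, y):
    """Density part of the semi-group: exp(-t/(2y)) * t / (2*y^2)."""
    return math.exp(-t / (2.0 * y)) * t / (2.0 * y * y)

def T_indicator(t, x, a, n=200000):
    """T_t 1_{[0,a]}(x) for a >= x: atom at x plus trapezoid integral over [x, a]."""
    h = (a - x) / n
    s = 0.5 * (kernel(t, x) + kernel(t, a))
    s += sum(kernel(t, x + i * h) for i in range(1, n))
    return math.exp(-t / (2.0 * x)) + h * s

t, x, a = 1.7, 0.6, 2.5
print(abs(T_indicator(t, x, a) - math.exp(-t / (2.0 * a))))  # ~ 0
```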
(4.12) Exercise. (Continuation of Exercise (2.13) on normalized excursions). Let $p$ be the density of the law of $M$ (see Exercise (4.13)) under the law of the normalized excursion. Prove that
$$\int_0^\infty x\, p(x)\, dx = \sqrt{\pi/2},$$
that is: the mean height of the normalized excursion is $\sqrt{\pi/2}$.
[Hint: Use 3°) in Exercise (2.13) to write down the joint law of $R$ and $M$ under $n$ as a function of $p$, then compare the marginal distribution of $R$ with the distribution given in Proposition (2.8).]
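The value $\sqrt{\pi/2}$ can be probed by simulation through Vervaat's relation (a classical result not proved in this chapter): rotating a Brownian bridge at its minimum produces the normalized excursion, so the height of the excursion is the range $\max b - \min b$ of the bridge, whose mean is $\sqrt{\pi/2}$. A sketch (discretization parameters are arbitrary):

```python
import math
import random

def bridge_range(steps, rng):
    """max - min of a discretized Brownian bridge built as W_t - t * W_1."""
    incs = [rng.gauss(0.0, 1.0 / math.sqrt(steps)) for _ in range(steps)]
    total = sum(incs)
    partial, lo, hi = 0.0, 0.0, 0.0
    for i, dx in enumerate(incs, start=1):
        partial += dx
        b = partial - (i / steps) * total
        lo, hi = min(lo, b), max(hi, b)
    return hi - lo

rng = random.Random(4)
est = sum(bridge_range(400, rng) for _ in range(3000)) / 3000
print(est)  # should be close to sqrt(pi/2) ~ 1.2533
```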
(4.13) Exercise. 1°) Set $M(w) = \sup_{t<R} w(t)$; using the description of $n$ given in Theorem (4.1), prove that
$$n_+(M\ge x) = \lim_{\varepsilon\to 0}\left(\int_0^x\lambda_\varepsilon(dy)\, Q_y\left[T_x < T_0\right] + \int_x^\infty\lambda_\varepsilon(dy)\right)$$
and derive anew the law of $M$ under $n_+$, which was already found in Exercise (2.10) and Proposition (3.6).
2°) Prove that $M_x = \sup\{B_t,\ t < g_{T_x}\}$ is uniformly distributed on $[0, x]$ (a part of Williams' decomposition theorem).
[Hint: If $M_x$ is less than $y$, for $y\le x$, the first excursion which goes over $x$ is also the first to go over $y$.]
3°) By the same method as in 1°), give another proof of Proposition (2.8).
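Part 2°) has an exact discrete analogue (not from the text): for the simple random walk reflected at 0, an excursion reaches level $k$ with probability $1/k$, so the maximum over the excursions completed before the walk first hits $x$ satisfies $P[M_x < y] = y/x$ at integer levels. A quick simulation sketch:

```python
import random

def max_before_last_zero(x, rng):
    """Reflected simple random walk run until it first hits `x`;
    returns the largest height among the excursions completed before then."""
    pos, cur, best = 0, 0, 0
    while pos < x:
        if pos == 0:
            best = max(best, cur)   # the previous excursion is complete
            cur = 0
            pos = 1                 # reflection: from 0 the walk moves to 1
        else:
            pos += rng.choice((-1, 1))
        cur = max(cur, pos)
    return best

rng = random.Random(5)
x, trials = 8, 4000
est = sum(max_before_last_zero(x, rng) < 4 for _ in range(trials)) / trials
print(est)  # P[M_x < y] = y/x : here 4/8 = 0.5
```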
#
(4.14) Exercise. (An excursion approach to Skorokhod's problem). We use the notation of Sect. 5 Chap. VI; we suppose that $\psi_\mu$ is continuous and strictly increasing and call $\varphi$ its inverse.
1°) The stopping times
$$T = \inf\{t : S_t\ge\psi_\mu(B_t)\} \qquad\text{and}\qquad T' = \inf\{t : |B_t|\ge L_t - \varphi(L_t)\}$$
have the same law.
2°) Prove that, in the notation of this chapter, the process $\{(s, e_s)\}$ is a PPP with values in $\mathbb{R}_+\times U_\delta$ and characteristic measure $ds\, dn(u)$.
3°) Let $\Gamma_x = \{(s, u)\in\mathbb{R}_+\times U : 0\le s\le x \text{ and } M(u)\ge s - \varphi(s)\}$ and $N_x = \sum_s 1_{\Gamma_x}(s, e_s)$. Prove that $P\left[L_{T'}\ge x\right] = P\left[N_x = 0\right]$ and derive therefrom that $\varphi(S_T) = B_T$ has the law $\mu$.
4°) Extend the method to the case where $\psi_\mu$ is merely right-continuous.
(4.15) Exercise. If $A = L^z$, $z > 0$, prove that the Lévy measure $m_A$ of $A_\tau$, defined in Proposition (2.7), is given by
$$m_A\big(\,]x,\infty[\,\big) = (2z)^{-1}\exp(-x/2z), \qquad x > 0.$$

*
(4.16) Exercise (Azéma martingale). 1°) Using Proposition (3.3) prove, in the notation thereof, that if $f$ is a function such that $f(|B_t|)$ is integrable,
$$E\left[f(|B_t|)\mid\mathscr{F}_{g_t}\right] = A_t^{-1}\int_0^\infty\exp\big(-y^2/2A_t\big)\, y\, f(y)\, dy.$$
[Hint: Write $B_t = B_{g_t + (t - g_t)} = B_{A_t}\big(i_{g_t}\big)$.]
2°) By applying 1°) to the functions $f(y) = y^2$ and $|y|$, prove that $t - 2g_t$ and $\sqrt{\frac{\pi}{2}(t - g_t)} - L_t$ are $(\mathscr{F}_{g_t})$-martingales.
3°) If $f$ is a bounded function with a bounded first derivative $f'$, then
$$f(L_t) - \sqrt{\frac{\pi}{2}(t - g_t)}\; f'(L_t)$$
is a $(\mathscr{F}_{g_t})$-martingale.
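The conditional density in 1°) can be sanity-checked by computing its moments numerically: with $f(y) = y^2$ the right-hand side should equal $2A_t$ (consistent with $B_t^2 - t$ being a martingale, as used in 2°)), and with $f(y) = |y|$ it should equal $\sqrt{\pi A_t/2}$. A sketch (the value of $A$ is an arbitrary choice):

```python
import math

def projected(f, A, n=200000, ymax=50.0):
    """Trapezoid rule for A^{-1} * integral_0^inf exp(-y^2/(2A)) * y * f(y) dy."""
    h = ymax / n
    vals = [math.exp(-y * y / (2.0 * A)) * y * f(y)
            for y in (i * h for i in range(n + 1))]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) / A

A = 0.7
print(projected(lambda y: y * y, A))   # should be 2*A = 1.4
print(projected(abs, A))               # should be sqrt(pi*A/2) ~ 1.0486
```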
(4.17) Exercise. Let $\tau_a = \inf\{t : L_t = a\}$. Prove that the processes $\{B_t,\ t\le\tau_a\}$ and $\{B_{\tau_a - t},\ t\le\tau_a\}$ are equivalent.
[Hint: Use Proposition (2.5) and Corollary (4.3).]
*#
(4.18) Exercise. In the notation of Proposition (4.6) and Theorem (4.7), for $P_0$-almost every path, we may define the local time at 0 and its inverse $\tau_t$; thus $P_0^{\tau_t}$ makes sense and is a probability measure on $\mathscr{W}$.
1°) Prove that
$$\int_0^\infty P_0^t\, dt = \left(\int_0^\infty P_0^{\tau_s}\, ds\right)\circ\left(\int_0^\infty n\big(\cdot\cap(r < R)\big)\, dr\right).$$
This formula in another guise may also be derived without using excursion theory, as may be seen in Exercise (4.26). We recall that it was proved in Exercise (2.29) of Chap. VI that
$$\int_0^\infty P_0^{\tau_s}\, ds = \int_0^\infty Q_t\,\frac{dt}{\sqrt{2\pi t}}$$
where $Q_t$ is the law of the Brownian Bridge over the interval $[0, t]$.
2°) Call $M^t$ the law of the Brownian meander of length $t$ defined by scaling from the meander of length 1 (Exercise (3.8)) and prove that
$$n_+\big(\cdot\cap(t < R)\big) = M^t\big/\sqrt{2\pi t}.$$
As a result
$$\int_0^\infty S_3^{L_a}\, da = \int_0^\infty\frac{dt}{\sqrt{2\pi t}}\, M^t. \tag{*}$$
3°) Derive from (*) Imhof's relation, i.e., for every $t$,
$$M^t = \sqrt{\pi t/2}\;\frac{1}{X_t}\cdot S_3^t \tag{+}$$
where $X_t(w) = w(t)$ is the coordinate process and $S_3^t$ denotes, as above, the law of BES³(0) killed at time $t$.
[Hint: In the left-hand side of (*) use the conditioning given $L_a = t$, then use the result of Exercise (3.12) Chap. XI.]
By writing down the law of $(\zeta, X_\zeta)$ under the two sides of (*), one also finds the law of $X_t$ under $M^t$, which was already given in Exercise (3.8).
4°) Prove that (+) is equivalent to the following property: for any bounded continuous functional $F$ on $C([0,1],\mathbb{R})$,
$$M^1(F) = \lim_{r\downarrow 0}\,(\pi/2)^{1/2}\, E_r\left[F\, 1_{[T_0>1]}\right]\big/ r$$
where $P_r$ is the probability measure of BM$(r)$ and $T_0$ is the first hitting time of 0. This question is not needed for the sequel.
[Hint: Use Exercise (1.22) Chap. XI.]
5°) On $C([0,1],\mathbb{R})$, set $\mathscr{F}_t = \sigma(X_s,\ s\le t)$. Prove that for $0\le t\le 1$,
$$S_3\left[(\pi/2)^{1/2}\, X_1^{-1}\mid\mathscr{F}_t\right] = X_t^{-1}\,\phi\big((1 - t)^{-1/2} X_t\big)$$
where $\phi(a) = \int_0^a\exp(-y^2/2)\, dy$. Observe that this shows, in the fundamental counterexample of Exercise (2.13) Chap. V, how much $1/X_t$ differs from a martingale.
[Hint: Use the Markov property of BES³.]
6°) Prove that, under $M^1$, there is a Brownian motion $\beta$ such that
$$X_t = \beta_t + \int_0^t\frac{\phi'}{\phi}\left(\frac{X_s}{\sqrt{1 - s}}\right)\frac{ds}{\sqrt{1 - s}}, \qquad 0\le t\le 1,$$
which shows that the meander is a semimartingale and gives its decomposition in its natural filtration.
[Hint: Apply Girsanov's theorem with the martingale of 5°).]
**
(4.19) Exercise (Longest excursions). If $B_s = 0$, call $D(s)$ the length of the longest excursion which occurred before time $s$. The aim of this exercise is to find the law of $D(g_t)$ for a fixed $t$. For $\beta > 0$, we set
$$c_\beta = n\big(1 - e^{-\beta R}\big), \qquad d_\beta(x) = n\big(e^{-\beta R}\, 1_{(R>x)}\big)$$
and
$$\phi_s(x, \beta) = E\left[1_{(D(\tau_s)>x)}\exp(-\beta\tau_s)\right].$$
1°) If $L_\beta(x) = E\left[\int_0^\infty\exp(-\beta t)\, 1_{(D(g_t)>x)}\, dt\right]$, prove that
$$\beta L_\beta(x) = c_\beta\int_0^\infty\phi_s(x, \beta)\, ds.$$
2°) By writing
$$\phi_t(x, \beta) = E\Big[\sum_{s\le t}\big\{1_{(D(\tau_s)>x)}\exp(-\beta\tau_s) - 1_{(D(\tau_{s-})>x)}\exp(-\beta\tau_{s-})\big\}\Big],$$
prove that $\phi$ satisfies the equation
$$\phi_t(x, \beta) = -\big(c_\beta + d_\beta(x)\big)\int_0^t\phi_s(x, \beta)\, ds + d_\beta(x)\int_0^t e^{-c_\beta s}\, ds.$$
3°) Prove that
$$\beta L_\beta(x) = d_\beta(x)\big/\big(c_\beta + d_\beta(x)\big).$$
[Hint: $\{D(\tau_s) > x\} = \{D(\tau_{s-}) > x\}\cup\{\tau_s - \tau_{s-} > x\}$.]
4°) Solve the same problem with $D(d_t)$ in lieu of $D(g_t)$.
5°) Use the scaling property of BM to compute the Laplace transforms of $(D(g_1))^{-1}$ and $(D(d_1))^{-1}$.
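In this exercise the constant $c_\beta$ plays the role of $n(1 - e^{-\beta R})$; its explicit value is $\sqrt{2\beta}$, consistent with $E\left[\exp(-\beta\tau_s)\right] = \exp(-s\sqrt{2\beta})$. A numerical check of this value from the density of $R$ under $n$ (a sketch; the substitution $r = u^2$ and the truncation point are arbitrary choices):

```python
import math

def c_beta(beta, n=200000, umax=600.0):
    """n(1 - exp(-beta*R)) = int_0^inf (1 - e^{-beta r}) / sqrt(2 pi r^3) dr,
    computed via the substitution r = u^2 plus the analytic tail beyond umax."""
    h = umax / n
    def f(u):
        return math.sqrt(2.0 / math.pi) * (1.0 - math.exp(-beta * u * u)) / (u * u)
    f0 = math.sqrt(2.0 / math.pi) * beta        # limit of the integrand at u = 0
    s = 0.5 * f0 + sum(f(i * h) for i in range(1, n)) + 0.5 * f(umax)
    tail = math.sqrt(2.0 / math.pi) / umax      # integrand ~ sqrt(2/pi)/u^2 out there
    return h * s + tail

print(abs(c_beta(1.0) - math.sqrt(2.0)))  # ~ 0 : c_beta = sqrt(2*beta)
```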
**
(4.20) Exercise. Let $A$ be an additive functional of BM with associated measure $\mu$ and $S_\theta$ an independent exponential r.v. with parameter $\theta^2/2$.
1°) Use Exercise (4.18) 1°) to prove that for $\lambda > 0$,
$$E_0\left[\exp(-\lambda A_{S_\theta})\right] = \frac{\theta^2}{2}\int_0^\infty E_0\left[\exp\Big(-\lambda A_{\tau_s} - \frac{\theta^2}{2}\tau_s\Big)\right] ds\int_{-\infty}^{+\infty} E_a\left[\exp\Big(-\lambda A_{T_0} - \frac{\theta^2}{2} T_0\Big)\right] da.$$
2°) If $\phi$ and $\psi$ are suitable solutions of the Sturm-Liouville equation $\phi'' = 2\big(\lambda\mu + \frac{\theta^2}{2}\big)\phi$, express the two factors in 1°) by means of $\phi$ and $\psi$.
3°) With the notation of Theorem (2.7) Chap. VI, find the explicit values of the expressions in 1°) for $A_t = A_t^+$ and derive therefrom another proof of the arcsine law. This question is independent of 2°).
[Hint: Use the independence of $A^+_{\tau_s}$ and $A^-_{\tau_s}$, the fact that $\tau_s = A^+_{\tau_s} + A^-_{\tau_s}$, and the results in Propositions (2.7) and (2.8).]
**
(4.21) Exercise (Lévy-Khintchine formula for BESQ$^\delta$). If $L^a$ is a family of local times of the Brownian excursion (see Exercise (4.9)), call $M$ the image of $n_+$ under the map $u\to\big(a\to L^a_{R(u)}(u)\big)$. The measure $M$ is a measure on $W_+ = C(\mathbb{R}_+, \mathbb{R}_+)$. If $f\in W_+$ and $X$ is a process, we set
$$X_f = \int_0^\infty f(t)\, X_t\, dt.$$
1°) With the notation of Sect. 1 Chap. XI, prove that for $x\ge 0$
$$Q_x^0\left[\exp(-X_f)\right] = \exp\left\{-x\int\big(1 - \exp(-(f,\phi))\big)\, M(d\phi)\right\}$$
where $(f,\phi) = \int_0^\infty f(t)\,\phi(t)\, dt$.
[Hint: Use the second Ray-Knight theorem and Proposition (1.12).]
2°) For $\phi\in W_+$ call $\phi_s$ the function defined by $\phi_s(t) = \phi\big((t - s)^+\big)$ and put $N = \int_0^\infty M_s\, ds$ where $M_s$ is the image of $M$ by the map $\phi\to\phi_s$. Prove that
$$Q_0^2\left[\exp(-X_f)\right] = \exp\left\{-2\int\big(1 - \exp(-(f,\phi))\big)\, N(d\phi)\right\}.$$
[Hint: Use 1°) in Exercise (2.7) of Chap. XI and the fact that for a BM the process $|B| + L$ is a BES³(0).]
The reader is warned that $M_s$ has nothing to do with the law $M^s$ of the meander in Exercise (4.18).
3°) Conclude that
$$Q_x^\delta\left[\exp(-X_f)\right] = \exp\left\{-\int\big(1 - \exp(-(f,\phi))\big)\,\big(xM + \delta N\big)(d\phi)\right\}.$$
4°) Likewise prove a similar Lévy-Khintchine representation for the laws $Q^\delta_{x\to 0}$ of the squares of Bessel bridges ending at 0; denote by $M_0$ and $N_0$ the corresponding measures, which are now defined on $C([0,1]; \mathbb{R}_+)$.
5°) For a subinterval $I$ of $\mathbb{R}_+$, and $x, y\in I$ with $x < y$, let $P_{x,y}$ be the probability distribution on $C(I, \mathbb{R}_+)$ of a process $X^{x,y}$ which vanishes off the interval $(x, y)$ and, on $(x, y)$, is a BESQ⁴$(0, 0)$ bridge of duration $y - x$, that is
$$X^{x,y}(v) = (y - x)\, Z\Big(\frac{v - x}{y - x}\Big)\, 1_{(x\le v\le y)} \qquad (v\in I)$$
where $Z$ has distribution $Q^4_{0\to 0}$.
Prove that the Lévy measures encountered above may be represented by the following integrals:
$$M = \frac{1}{2}\int_0^\infty y^{-2}\, P_{0,y}\, dy; \qquad N = \frac{1}{2}\int_0^\infty dx\int_x^\infty (y - x)^{-2}\, P_{x,y}\, dy;$$
$$M_0 = \frac{1}{2}\int_0^1 y^{-2}\, P_{0,y}\, dy; \qquad N_0 = \frac{1}{2}\int_0^1 dx\int_x^1 (y - x)^{-2}\, P_{x,y}\, dy.$$

#
(4.22) Exercise. Let $\phi$ and $f$ be positive Borel functions on the appropriate spaces. Prove that
$$\int n_+(de)\int_0^{R(e)}\phi(s)\, f(e_s)\, ds = 2\int n_+(de)\,\phi\big(R(e)\big)\int_0^{R(e)} f(2e_s)\, ds.$$
[Hint: Compute the left member with the help of Exercise (4.18) 2°) and the right one by using Theorem (4.1).]
* (4.23) Exercise. Prove that Theorem (4.7) is equivalent to the following result. Let $\Theta$ be the measure on $\mathbb{R}_+\times W\times W$ given by
$$\Theta(dt, dw, dw') = 1_{(t>0)}\, dt\; S_3(dw)\, S_3(dw')$$
and set $L(t, w) = \sup\{s : w(s) = t\}$. If we define a $U_\delta$-valued variable $e$ by
$$e_s(t, w, w') = \begin{cases} w(s) & \text{if } 0\le s\le L(t, w),\\ w'\big(L(t, w) + L(t, w') - s\big) & \text{if } L(t, w)\le s\le L(t, w) + L(t, w'),\\ 0 & \text{if } s\ge L(t, w) + L(t, w'),\end{cases}$$
then the law of $(L, e)$ under $\Theta$ is equal to $\bar n_+$.
** (4.24) Exercise (Chung–Jacobi–Riemann identity). Let $B$ be the standard BM and $T$ an exponential r.v. with parameter 1/2, independent of $B$.
1°) Prove that for every positive measurable functional $F$,
$$E\big[F(B_u;\ u \le g_T)\mid L_T = s\big] = e^s\,E\big[F(B_u;\ u \le \tau_s)\exp(-\tau_s/2)\big],$$
and consequently that
$$E\big[F(B_u;\ u \le g_T)\big] = \int_0^\infty E\big[F(B_u;\ u \le \tau_s)\exp(-\tau_s/2)\big]\,ds.$$
2°) Let $S^0$, $I^0$ and $l^0$ denote respectively the supremum, the opposite of the infimum and the local time at 0 of the standard Brownian bridge $(b(t);\ t \le 1)$. Given an $\mathscr{N}(0,1)$ Gaussian r.v. $N$ independent of $b$, prove the three-variate formula
$$P\big[|N| S^0 \le x;\ |N| I^0 \le y;\ |N| l^0 \in dl\big] = \exp\big(-l(\coth x + \coth y)/2\big)\,dl.$$
3°) Prove as a result that
$$P\big[|N| S^0 \le x;\ |N| I^0 \le y\big] = 2/(\coth x + \coth y)$$
and that, if $M^0 = \sup\{|b(s)|;\ s \le 1\}$,
$$P\big[|N| M^0 \le x\big] = \tanh x.$$
Prove Csáki's formula:
$$P\big\{S^0/(S^0 + I^0) \le v\big\} = (1-v)\big(1 - \pi v\cot(\pi v)\big) \qquad (0 < v < 1).$$
[Hint: Use the identity $2v^2\int_0^\infty \big(\sinh(v\lambda)/(v\sinh\lambda)\big)^2\,d\lambda = 1 - \pi v\cot(\pi v)$.]
4°) Prove the Chung–Jacobi–Riemann identity:
$$(S^0 + I^0)^2 \stackrel{(d)}{=} (M^0)^2 + (\tilde{M}^0)^2$$
where $\tilde{M}^0$ is an independent copy of $M^0$.
5°) Characterize the pairs $(S, I)$ of positive r.v.'s such that
i) $P\big[|N| S \le x;\ |N| I \le y\big] = 2/(h(x) + h(y))$ for a certain function $h$,
ii) $(S + I)^2 \stackrel{(d)}{=} M^2 + \tilde{M}^2$,
where $M$ and $\tilde{M}$ are two independent copies of $S \vee I$.
(4.25) Exercise (Brownian meander and Brownian bridges). Let $a \in \mathbb{R}$, and let $\Pi^a$ be the law of the Brownian bridge $(B_t,\ t \le 1)$, with $B_0 = 0$ and $B_1 = a$. Prove that, under $\Pi^a$, both processes $(2S_t - B_t,\ t \le 1)$ and $(|B_t| + L_t,\ t \le 1)$ have the same distribution as the Brownian meander $(m_t,\ t \le 1)$ conditioned on $(m_1 \ge |a|)$.
[Hint: Use the relation (+) in Exercise (4.18) together with Exercise (3.20) in Chap. VI.]
In particular, the preceding description for $a = 0$ shows that, if $(b_t,\ t \le 1)$ is a standard Brownian bridge, with $a_t = \sup_{s\le t} b_s$, and $(l_t,\ t \le 1)$ its local time at 0, then
$$(m_t,\ t \le 1) \stackrel{(d)}{=} (2a_t - b_t,\ t \le 1) \stackrel{(d)}{=} (|b_t| + l_t,\ t \le 1).$$
Prove that under the probability measure $(a_1/c)\cdot\Pi^0$ (resp. $(l_1/c)\cdot\Pi^0$), $c$ being the normalization constant, the process $(2a_t - b_t,\ t \le 1)$ (resp. $(|b_t| + l_t,\ t \le 1)$) is a $\mathrm{BES}^3$.
(4.26) Exercise. 1°) With the notation of this section, set $J = \int_0^\infty \rho_t^2\,dt$ and prove that

where $\tau_s^a = \inf\{t : L_t^a > s\}$.
[Hint: Use the generalized occupation times formula of Exercise (1.13) Chap. VI.]
2°) Define a map $w \to \bar{w}$ on the path space by
$$\zeta(\bar{w}) = \zeta(w) \quad\text{and}\quad \bar{w}(t) = w(0) + w(\zeta(w)) - w(t),$$
and call $\bar{\mu}$ the image by this map of the measure $\mu$. Prove that $\bar{J} = J$ and that
$$\overline{(\mu \circ \mu')} = \bar{\mu}' \circ \bar{\mu}$$
for any pair $(\mu, \mu')$ of measures on this space.
3°) Prove that $\bar{\rho}^2 = (\rho^2, 0)$ and conclude that

[Hint: See Exercise (2.29) Chap. VI.]
Notes and Comments
Sect. 1. This section is taken mainly from Itô [5] and Meyer [4].
Exercise (1.19) comes from Pitman-Yor [8].
Sect. 2. The first breakthrough in the description of Brownian motion in terms of excursions and Poisson point processes was the paper of Itô [5]. Although some ideas were already, at an intuitive level, in the work of Lévy, it was Itô who put the subject on a firm mathematical basis, thus supplying another cornerstone to
Probability Theory. Admittedly, once the characteristic measure is known all sorts
of computations can be carried through as, we hope, is clear from the exercises of
the following sections. For the results of this section we also refer to Maisonneuve
[6] and Pitman [4].
The approximation results such as Proposition (2.9), Exercise (2.14) and those already given in Chap. VI were proved or conjectured by Lévy. The proofs were given and gradually simplified in Itô–McKean [1], Williams [6], Chung–Durrett [1] and Maisonneuve [4].
Exercise (2.17) may be extended to the computation of the distribution of the
multidimensional time spent in the different rays by a Walsh Brownian motion
(see Barlow et al. [1] (1989)).
Sect. 3. In this section, it is shown how the global excursion theory, presented
in Section 2, can be applied to describe the laws of individual excursions, i.e.
excursions straddling a given random time $T$. We have presented the discussion only for stopping times $T$ w.r.t. the filtration $(\mathscr{G}_t) = (\mathscr{F}_{g_t})$, and terminal $(\mathscr{F}_t)$-stopping times. See Maisonneuve [7] for a general discussion. The framework for this section is Getoor–Sharpe [5], which is actually written in a much more general setting. We also refer to Chung [1]. The filtration $(\mathscr{G}_t)$ was introduced and studied in Maisonneuve [6].
The Brownian meander of Exercise (3.8) has recently been much studied (see Imhof ([1] and [2]), Durrett et al. [1], Denisov [1] and Biane–Yor [3]). It has found many applications in the study of Azéma's martingale (see Exercise (4.16) taken from Azéma–Yor [3]).
Sect. 4. Theorems (4.1) and (4.2) are fundamental results of Itô [5]. The proof of Corollary (4.4) is taken from Ikeda–Watanabe [2].
Williams' description of the Itô measure is found in Williams [7] and Rogers–Williams [1] (see also Rogers [1]) and Bismut's description appeared in Bismut [3]. The formalism used in the proof of the latter as well as in Exercise (4.18) was first used in Biane–Yor [1]. The paper of Bismut contains further information which was used by Biane [1] to investigate the relationship between the Brownian bridge and the Brownian excursion and complement the result of Vervaat [1].
Exercise (4.8) is due to Rogers [3]. Knight's identity (Knight [8]) derived in Exercise (4.11) has been explained in Biane [2] and Vallois [3] using a pathwise decomposition of the pseudo-Brownian bridge (cf. Exercise (2.29) Chap. VI); generalizations to Bessel processes (resp. perturbed Brownian motions) have been given by Pitman–Yor [9] (resp. [23]). The Watanabe process appears in Watanabe [2]. Exercise (4.14) is from Rogers [1]. Exercise (4.16) originates with Azéma [2] and Exercise (4.17) with Biane et al. [1]. Exercise (4.18) is taken partly from Azéma–Yor [3] and partly from Biane–Yor ([1] and [3]) and Exercise (4.19) from Knight [6]. Exercise (4.20) is in Biane–Yor [4] and Exercise (4.21) in Pitman–Yor [2]; further results connecting the Brownian bridge, excursion and meander are presented in Bertoin–Pitman [1] and Bertoin et al. [1].
With the help of the explicit Lévy–Khintchine representation of $Q_x^\delta$ obtained in Exercise (4.21), Le Gall–Yor [5] extend the Ray–Knight theorems on Brownian local times by showing that, for any $\delta > 0$, $Q_0^\delta$ is the law of certain local times processes in the space variable. In the same Exercise (4.21), the integral representations of $M$, $N$, $M_0$ and $N_0$ in terms of squares of $\mathrm{BES}^4$ bridges are taken from Pitman [5]. Exercise (4.22) is in Azéma–Yor [3], and Exercise (4.23) originates from Bismut [3].
The joint law of the supremum, infimum and local time of the Brownian bridge is characterized in Exercise (4.24), taken from work in progress by Pitman and Yor. The presentation, which involves an independent Gaussian random variable, differs from classical formulae, in terms of theta functions, found in the literature (see e.g. Borodin and Salminen [1]). Csáki's formula in question 3°) comes from Csáki [1] and is further discussed in Pitman–Yor [13]. Chung's identity of question 4°) remains rather mysterious, although Biane–Yor [1] and Williams [9] explain partly its relation to the functional equation of the Riemann zeta function. See also
Smith and Diaconis [1] for a random walk approach to the functional equation, and Biane–Pitman–Yor [1] for further developments.
Exercise (4.25) is a development and an improvement of the corresponding result found in Biane–Yor [3] for $a = 0$, and of the remark following Theorem 4.3 in Bertoin–Pitman [1]. The simple proof of Exercise (4.26) is taken from Leuridan [1].
Chapter XIII. Limit Theorems in Distribution
§1. Convergence in Distribution
In this section, we will specialize the notions of Sect. 5 Chap. 0 to the Wiener space $W^d$. This space is a Polish space when endowed with the topology of uniform convergence on compact subsets of $\mathbb{R}_+$. This topology is associated with the metric
$$d(w, w') = \sum_{n=1}^\infty 2^{-n}\,\frac{\sup_{t\le n} |w(t) - w'(t)|}{1 + \sup_{t\le n} |w(t) - w'(t)|}.$$
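Numerically, this metric is straightforward to evaluate on sampled paths. The following sketch (the time grid, truncation level N, and the function name are our own choices, not the book's) checks that for the constant paths $w \equiv 0$ and $w' \equiv c$ every summand contributes $2^{-n}\,c/(1+c)$:

```python
import numpy as np

# Sketch of the metric on Wiener space W^1 from the text:
#   d(w, w') = sum_{n>=1} 2^{-n} * a_n / (1 + a_n),  a_n = sup_{t<=n} |w - w'|,
# truncated after N terms (grid and truncation are our own choices).
def wiener_distance(w, wprime, t, N=30):
    d = 0.0
    for n in range(1, N + 1):
        a = np.max(np.abs(w[t <= n] - wprime[t <= n]))  # sup over [0, n]
        d += 2.0 ** (-n) * a / (1.0 + a)
    return d

t = np.linspace(0.0, 40.0, 4001)
c = 3.0
# For w = 0 and w' = c, every a_n equals c, so d = (c/(1+c)) * (1 - 2^{-N}).
d = wiener_distance(np.zeros_like(t), np.full_like(t, c), t)
```

Since each summand is at most $2^{-n}$, truncating the series only changes the distance by $2^{-N}$, which is why a modest N suffices in practice.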
The relatively compact subsets in this topology are given by Ascoli's theorem.
Let
$$V^N(w, \delta) = \sup\big\{|w(t) - w(t')|;\ |t - t'| \le \delta \text{ and } t, t' \le N\big\}.$$
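The modulus of continuity just defined can be computed directly on a discretized path; in this sketch (the dyadic grid and helper name are ours) the 1-Lipschitz path $w(t) = t$ has modulus exactly $\delta$:

```python
import numpy as np

# Grid-based sketch of the modulus of continuity V^N(w, delta):
# sup of |w(t) - w(t')| over grid pairs with |t - t'| <= delta and t, t' <= N.
def modulus(w, t, delta, N):
    idx = np.where(t <= N)[0]
    best = 0.0
    for i in idx:
        close = idx[np.abs(t[idx] - t[i]) <= delta]  # pairs within delta
        best = max(best, float(np.max(np.abs(w[close] - w[i]))))
    return best

t = np.arange(0, 17) / 8.0            # grid 0, 0.125, ..., 2 (exact dyadics)
m = modulus(t, t, delta=0.25, N=2)    # for w(t) = t this is exactly delta
```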
With this notation, we have
(1.1) Proposition. A subset $\Gamma$ of $W^d$ is relatively compact if and only if
(i) the set $\{w(0),\ w \in \Gamma\}$ is bounded in $\mathbb{R}^d$;
(ii) for every $N$,
$$\lim_{\delta\downarrow 0}\ \sup_{w\in\Gamma} V^N(w, \delta) = 0.$$
In Sect. 5 Chap. 0, we have defined a notion of weak convergence for probability measures on the Borel σ-algebra of $W^d$; the latter is described in the following
(1.2) Proposition. The Borel σ-algebra on $W^d$ is equal to the σ-algebra $\mathscr{F}$ generated by the coordinate mappings.
Proof. The coordinate mappings are clearly continuous, hence $\mathscr{F}$ is contained in the Borel σ-algebra. To prove the reverse inclusion, we observe that by the definition of $d$, the map $w \to d(w, w')$, where $w'$ is fixed, is $\mathscr{F}$-measurable. As a result, every ball, hence every Borel set, is in $\mathscr{F}$.
Before we proceed, let us observe that the same notions take on a simpler
form when the time range is reduced to a compact interval, but we will generally
work with the whole half-line.
D. Revuz et al., Continuous Martingales and Brownian Motion
© Springer-Verlag Berlin Heidelberg 1999
(1.3) Definition. A sequence $(X^n)$ of $\mathbb{R}^d$-valued continuous processes defined on probability spaces $(\Omega^n, \mathscr{F}^n, P^n)$ is said to converge in distribution to a process $X$ if the sequence $(X^n(P^n))$ of their laws converges weakly on $W^d$ to the law of $X$. We will write $X^n \xrightarrow{(d)} X$.
In this definition, we have considered processes globally as $W^d$-valued random
variables. If we consider processes taken at some fixed times, we get a weaker
notion of convergence.
(1.4) Definition. A sequence $(X^n)$ of (not necessarily continuous) $\mathbb{R}^d$-valued processes is said to converge to the process $X$ in the sense of finite distributions if for any finite collection $(t_1, \ldots, t_k)$ of times, the $\mathbb{R}^{dk}$-valued r.v.'s $(X^n_{t_1}, \ldots, X^n_{t_k})$ converge in law to $(X_{t_1}, \ldots, X_{t_k})$. We will write $X^n \xrightarrow{(f.d.)} X$.
Since the map $w \to (w(t_1), \ldots, w(t_k))$ is continuous on $W^d$, it is easy to see that, if $X^n \xrightarrow{(d)} X$, then $X^n \xrightarrow{(f.d.)} X$. The converse is not true, and in fact continuous processes may converge in the sense of finite distributions to discontinuous
processes as was seen in Sect. 4 of Chap. X and will be seen again in Sect. 3.
The above notions make sense for multi-indexed processes, or in other words for $C\big((\mathbb{R}_+)^k, \mathbb{R}^d\big)$ in lieu of the Wiener space. We leave to the reader the task of writing down the extensions to this case (see Exercise (1.12)).
Convergence in distribution of a sequence of probability measures on $W^d$ is
fairly often obtained in two steps:
i) the sequence is proved to be weakly relatively compact;
ii) all the limit points are shown to have the same set of finite-dimensional distributions.
In many cases, one gets ii) by showing directly that the finite dimensional
distributions converge, or in other words that there is convergence in the sense
of finite distributions. To prove the first step above, it is usually necessary to use
Prokhorov's criterion, which we will now translate in the present context. Let us first observe that the function $V^N(\cdot, \delta)$ is a random variable on $W^d$.
(1.5) Proposition. A sequence $(P_n)$ of probability measures on $W^d$ is weakly relatively compact if and only if the following two conditions hold:
i) for every $\varepsilon > 0$, there exist a number $A$ and an integer $n_0$ such that
$$P_n\big[|w(0)| > A\big] \le \varepsilon \quad \text{for every } n \ge n_0;$$
ii) for every $\eta, \varepsilon > 0$ and $N \in \mathbb{N}$, there exist a number $\delta$ and an integer $n_0$ such that
$$P_n\big[V^N(\cdot, \delta) \ge \eta\big] \le \varepsilon \quad \text{for every } n \ge n_0.$$
Remark. We will see in the course of the proof that we can actually take $n_0 = 0$.
Proof. The necessity with $n_0 = 0$ follows readily from Proposition (1.1) and Prokhorov's criterion of Sect. 5 Chap. 0.
Let us turn to the sufficiency. We assume that conditions i) and ii) hold. For every $n_0$, the finite family $(P_n)_{n\le n_0}$ is tight, hence satisfies i) and ii) for numbers $A'$ and $\delta'$. Therefore, by replacing $A$ by $A \vee A'$ and $\delta$ by $\delta \wedge \delta'$, we may as well assume that conditions i) and ii) hold with $n_0 = 0$. This being so, for $\varepsilon > 0$ and $N \in \mathbb{N}$, let us pick $A_{N,\varepsilon}$ and $\delta_{N,k,\varepsilon}$ such that
$$\sup_n P_n\big[|w(0)| > A_{N,\varepsilon}\big] < 2^{-N-1}\varepsilon, \qquad \sup_n P_n\big[V^N(\cdot, \delta_{N,k,\varepsilon}) > 1/k\big] < 2^{-N-k-1}\varepsilon,$$
and set $K_{N,\varepsilon} = \{w : |w(0)| \le A_{N,\varepsilon},\ V^N(w, \delta_{N,k,\varepsilon}) \le 1/k \text{ for every } k \ge 1\}$. By Proposition (1.1), the set $K_\varepsilon = \bigcap_N K_{N,\varepsilon}$ is relatively compact in $W^d$ and we have $P_n(K_\varepsilon^c) \le \sum_N P_n(K_{N,\varepsilon}^c) \le \varepsilon$, which completes the proof. □
We will use the following
(1.6) Corollary. If $X^n = (X^n_1, \ldots, X^n_d)$ is a sequence of $d$-dimensional continuous processes, the set $(X^n(P^n))$ of their laws is weakly relatively compact if and only if, for each $j$, the set of laws $X^n_j(P^n)$ is weakly relatively compact.
Hereafter, we will need a condition which is slightly stronger than condition ii) in Proposition (1.5).
(1.7) Lemma. Condition ii) in Proposition (1.5) is implied by the following condition: for any $N$ and $\varepsilon, \eta > 0$, there exist a number $\delta$, $0 < \delta < 1$, and an integer $n_0$, such that
$$\delta^{-1}\,P_n\Big[\Big\{w : \sup_{t\le s\le t+\delta} |w(s) - w(t)| \ge \eta\Big\}\Big] \le \varepsilon \quad \text{for } n \ge n_0 \text{ and for all } t \le N.$$
Proof. Let $N$ be fixed, pick $\varepsilon, \eta > 0$ and let $n_0$ and $\delta$ be such that the condition in the statement holds. For every integer $i$ such that $0 \le i < N\delta^{-1}$, define
$$A_i = \Big\{\sup_{i\delta \le s \le (i+1)\delta \wedge N} |w(i\delta) - w(s)| \ge \eta\Big\}.$$
As is easily seen, $\{V^N(\cdot, \delta) < 3\eta\} \supset \bigcap_i A_i^c$, and consequently for every $n \ge n_0$,
$$P_n\big[V^N(\cdot, \delta) \ge 3\eta\big] \le P_n\Big[\bigcup_i A_i\Big] \le \big(1 + [N\delta^{-1}]\big)\delta\varepsilon < (N+1)\varepsilon,$$
which proves our claim. □
The following result is very useful.
(1.8) Theorem (Kolmogorov's criterion for weak compactness). Let $(X^n)$ be a sequence of $\mathbb{R}^d$-valued continuous processes such that
i) the family $\{X^n_0(P^n)\}$ of initial laws is tight in $\mathbb{R}^d$,
ii) there exist three strictly positive constants $\alpha, \beta, \gamma$ such that for every $s, t \in \mathbb{R}_+$ and every $n$,
$$E^n\big[|X^n_t - X^n_s|^\alpha\big] \le \beta\,|t - s|^{\gamma+1};$$
then, the set $\{X^n(P^n)\}$ of the laws of the $X^n$'s is weakly relatively compact.
Proof. Condition i) implies condition i) of Proposition (1.5), while condition ii) of Proposition (1.5) follows at once from the Markov inequality and the result of Theorem (2.1) (or its extension in Exercise (2.10)) of Chap. I. □
We now turn to a first application to Brownian motion. We will see that the Wiener measure is the weak limit of the laws of suitably interpolated random walks. Let us mention that the existence of the Wiener measure itself can be proved by a simple application of the above ideas.
In what follows, we consider a sequence of independent and identically distributed, centered random variables $\xi_k$ such that $E[\xi_k^2] = \sigma^2 < \infty$. We set $S_0 = 0$, $S_n = \sum_{k=1}^n \xi_k$. If $[x]$ denotes the integer part of the real number $x$, we define the continuous process $X^n$ by
$$X^n_t = \frac{1}{\sigma\sqrt{n}}\big(S_{[nt]} + (nt - [nt])\,\xi_{[nt]+1}\big).$$
(1.9) Theorem (Donsker). The processes $X^n$ converge in distribution to the standard linear Brownian motion.
Proof. We first prove the convergence of finite-dimensional distributions. Let $t_1 < t_2 < \cdots < t_k$; by the classical central limit theorem and the fact that $[nt]/n$ converges to $t$ as $n$ goes to $+\infty$, it is easily seen that $(X^n_{t_1}, X^n_{t_2} - X^n_{t_1}, \ldots, X^n_{t_k} - X^n_{t_{k-1}})$ converges in law to $(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_k} - B_{t_{k-1}})$ where $B$ is a standard linear BM. The convergence of finite-dimensional distributions follows readily.
Therefore, it is sufficient to prove that the set of the laws of the $X^n$'s is weakly relatively compact. Condition i) of Proposition (1.5) being obviously in force, it is enough to show that the condition of Lemma (1.7) is satisfied.
Assume first that the $\xi_k$'s are bounded. The sequence $|S_k|^4$ is a submartingale and therefore, for fixed $n$,
$$P\Big[\max_{i\le n} |S_i| > \lambda\sigma\sqrt{n}\Big] \le \big(\lambda\sigma\sqrt{n}\big)^{-4}\,E\big[S_n^4\big].$$
One computes easily that $E[S_n^4] = nE[\xi_1^4] + 3n(n-1)\sigma^4$. As a result, there is a constant $K$ independent of the law of $\xi_k$ such that
$$\lim_{n\to\infty} P\Big[\max_{i\le n} |S_i| > \lambda\sigma\sqrt{n}\Big] \le K\lambda^{-4}.$$
By truncating and passing to the limit, it may be proved that this is still true if we remove the assumption that $\xi_k$ is bounded. For every $k \ge 1$, the sequence $\{S_{n+k} - S_k\}$ has the same law as the sequence $\{S_n\}$ so that finally, there exists an integer $n_1$ such that
$$P\Big[\max_{i\le n} |S_{i+k} - S_k| > \lambda\sigma\sqrt{n}\Big] \le K\lambda^{-4}$$
for every $k \ge 1$ and $n \ge n_1$. Pick $\varepsilon$ and $\eta$ such that $0 < \varepsilon, \eta < 1$ and then choose $\lambda$ such that $K\lambda^{-2} < \eta\varepsilon^2$; set further $\delta = \varepsilon^2\lambda^{-2}$ and choose $n_0 > n_1\delta^{-1}$. If $n \ge n_0$, then $[n\delta] \ge n_1$, and the last displayed inequality may be rewritten as
$$P\Big[\max_{i\le[n\delta]} |S_{i+k} - S_k| > \lambda\sigma\sqrt{[n\delta]}\Big] \le \eta\varepsilon^2\lambda^{-2}.$$
Since $\lambda\sqrt{[n\delta]} \le \varepsilon\sqrt{n}$, we get
$$\delta^{-1}\,P\Big[\max_{i\le[n\delta]} |S_{i+k} - S_k| > \varepsilon\sigma\sqrt{n}\Big] \le \eta$$
for every $k \ge 1$ and $n \ge n_0$. Because the $X^n$'s are linear interpolations of the random walk $(S_n)$, it is now easy to see that the condition in Lemma (1.7) is satisfied for every $N$ and we are done. □
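The interpolated process $X^n$ is easy to build from a realization of the walk. The sketch below (helper names and the simple-walk example are our own; the formula is the standard Donsker interpolation $X^n_t = (S_{[nt]} + (nt-[nt])\xi_{[nt]+1})/(\sigma\sqrt n)$) checks that the construction interpolates linearly between the values $S_k/(\sigma\sqrt n)$ at the grid points $t = k/n$:

```python
import numpy as np

# Sketch of the interpolated random walk of Donsker's theorem:
#   X^n_t = (S_[nt] + (nt - [nt]) * xi_{[nt]+1}) / (sigma * sqrt(n)).
def donsker_path(steps, sigma, t):
    n = len(steps)
    S = np.concatenate(([0.0], np.cumsum(steps)))   # S_0, ..., S_n
    k = np.minimum(np.floor(n * t).astype(int), n - 1)  # clamp handles t = 1
    frac = n * t - k
    return (S[k] + frac * steps[k]) / (sigma * np.sqrt(n))

rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=4)             # simple walk, sigma = 1
t = np.array([0.0, 0.25, 0.5, 0.625, 1.0])
x = donsker_path(steps, 1.0, t)
```

Refining the grid and increasing n produces the familiar simulated Brownian paths; the clamp at n - 1 simply makes the endpoint t = 1 fall in the last interpolation interval.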
To illustrate the use of weak convergence as a tool to prove existence results,
we will close this section with a result on solutions to martingale problems. At
no extra cost, we will do it in the setting of Itô processes (Definition (2.5), Chap.
VII).
We consider functions $a$ and $b$ defined on $\mathbb{R}_+ \times W^d$ with values respectively in the sets of symmetric non-negative $d\times d$-matrices and $\mathbb{R}^d$-vectors. We assume these functions to be progressively measurable with respect to the filtration $(\mathscr{F}^0_t)$ generated by the coordinate mappings $w(t)$. The reader is referred to the beginning of Sect. 1 Chap. IX. With the notation of Sect. 2 Chap. VII, we may state
(1.10) Theorem. If $a$ and $b$ are continuous on $\mathbb{R}_+ \times W^d$, then for any probability measure $\mu$ on $\mathbb{R}^d$, there exists a probability measure $P$ on $W^d$ such that
i) $P[w(0) \in A] = \mu(A)$;
ii) for any $f \in C^2_K$, the process $f(w(t)) - f(w(0)) - \int_0^t L_s f(w(s))\,ds$ is a $(\mathscr{F}^0_t, P)$-martingale, where
$$L_s f(w) = \frac{1}{2}\sum_{i,j} a_{ij}(s, w)\,D_{ij} f(w(s)) + \sum_i b_i(s, w)\,D_i f(w(s)).$$
Proof. For each integer $n$, we define functions $a_n$ and $b_n$ by
$$a_n(t, w) = a([nt]/n, w), \qquad b_n(t, w) = b([nt]/n, w).$$
These functions are obviously progressively measurable and we call $L^n_s$ the corresponding differential operators.
Pick a probability space $(\Omega, \mathscr{F}, P)$ on which a r.v. $X_0$ of law $\mu$ and a $BM^d(0)$ independent of $X_0$, say $B$, are defined. Let $\sigma_n$ be a square root of $a_n$. We define inductively a process $X^n$ in the following way: we set $X^n_0 = X_0$ and, if $X^n$ is defined up to time $k/n$, we set for $k/n < t \le (k+1)/n$,
$$X^n_t = X^n_{k/n} + \sigma_n(k/n, X^n)\big(B_t - B_{k/n}\big) + b_n(k/n, X^n)\big(t - k/n\big).$$
Plainly, $X^n$ satisfies the SDE
$$X^n_t = X_0 + \int_0^t \sigma_n(s, X^n)\,dB_s + \int_0^t b_n(s, X^n)\,ds,$$
and if we call $P^n$ the law of $X^n$ on $W^d$, then $P^n[w(0) \in A] = \mu(A)$ and $f(w(t)) - f(w(0)) - \int_0^t L^n_s f(w(s))\,ds$ is a $P^n$-martingale for every $f \in C^2_K$.
The set $(P^n)$ is weakly relatively compact because condition i) in Theorem (1.8) is obviously satisfied and condition ii) follows from the boundedness of $a$ and $b$ and the Burkholder-Davis-Gundy inequalities applied on the space $\Omega$.
Let $P$ be a limit point of $(P^n)$ and $(P^{n'})$ a subsequence converging to $P$. We leave as an exercise to the reader the task of showing that, since for fixed $t$ the functions $\int_0^t L^n_s f(w(s))\,ds$ are equi-continuous on $W^d$ and converge to $\int_0^t L_s f(w(s))\,ds$, then
$$E_P\Big[\Big(f(w(t)) - \int_0^t L_s f(w(s))\,ds\Big)\phi\Big] = \lim_{n'\to\infty} E_{P^{n'}}\Big[\Big(f(w(t)) - \int_0^t L^{n'}_s f(w(s))\,ds\Big)\phi\Big]$$
for every continuous bounded function $\phi$. If $t_1 < t_2$ and $\phi$ is $\mathscr{F}^0_{t_1}$-measurable, it follows that
$$E_P\Big[\Big(f(w(t_2)) - f(w(t_1)) - \int_{t_1}^{t_2} L_s f(w(s))\,ds\Big)\phi\Big] = 0$$
since the corresponding equality holds for $P^{n'}$ and $L^{n'}$. By the monotone class theorem, this equality still holds if $\phi$ is merely bounded and $\mathscr{F}^0_{t_1}$-measurable; as a result, $f(w(t)) - f(w(0)) - \int_0^t L_s f(w(s))\,ds$ is a $P$-martingale and the proof is complete. □
Remarks. With respect to the results in Sect. 2 Chap. IX, we see that we have
dropped the Lipschitz conditions. In fact, the hypothesis may be further weakened
by assuming only the continuity in w of a and b for each fixed t. On the other
hand, the existence result we just proved is not of much use without a uniqueness
result which is a much deeper theorem.
#
(1.11) Exercise. 1°) If $(X^n)$ converges in distribution to $X$, prove that $(X^n)^*$ converges in distribution to $X^*$ where, as usual, $X^*_t = \sup_{s\le t} |X_s|$.
2°) Prove the reflection principle for BM (Sect. 3 Chap. III) by means of the analogous reflection principle for random walks. The latter is easily proved in the case of the simple random walk, namely with the notation of Theorem (1.9), $P[\xi_k = 1] = P[\xi_k = -1] = 1/2$.
* (1.12) Exercise. Prove that a family $(P_\lambda)$ of probability measures on $C\big((\mathbb{R}_+)^k, \mathbb{R}\big)$ is weakly relatively compact if there exist constants $\alpha, \beta, \gamma, p > 0$ such that $\sup_\lambda E_\lambda\big[|X_0|^p\big] < \infty$, and for every pair $(s, t)$ of points in $(\mathbb{R}_+)^k$,
$$\sup_\lambda E_\lambda\big[|X_s - X_t|^\alpha\big] \le \beta\,|s - t|^{k+\gamma},$$
where $X$ is the coordinate process.
* (1.13) Exercise. Let $\beta^i_s$, $s \in [0,1]$, and $\gamma^i_t$, $t \in [0,1]$, be two independent sequences of independent standard BM's. Prove that the sequence of doubly indexed processes
$$X^n(s, t) = n^{-1/2}\sum_{i=1}^n \beta^i_s\,\gamma^i_t$$
converges in distribution to the Brownian sheet. This is obviously an infinite-dimensional central-limit theorem.
(1.14) Exercise. In the setting of Donsker's theorem, prove that the processes
$$X^n_t - t\,X^n_1, \qquad 0 \le t \le 1,$$
converge in distribution to the Brownian Bridge.
(1.15) Exercise. Let $(M^n)$ be a sequence of (super)martingales defined on the same filtered space and such that
i) the sequence $(M^n)$ converges in distribution to a process $M$;
ii) for each $t$, the sequence $(M^n_t)$ is uniformly integrable.
Prove that $M$ is a (super)martingale for its natural filtration.
* (1.16) Exercise. Let $(M^n)$ be a sequence of continuous local martingales vanishing at 0 and such that $\langle M^n, M^n\rangle$ converges in distribution to a deterministic function $a$. Let $P_n$ be the law of $M^n$.
1°) Prove that the set $(P_n)$ is weakly relatively compact.
[Hint: One can use Lemma (4.6) Chap. IV.]
2°) If, in addition, the $M^n$'s are defined on the same filtered space and if, for each $t$, there is a constant $a(t)$ such that $\langle M^n, M^n\rangle_t \le a(t)$ for each $n$, show that $(P_n)$ converges weakly to the law $W_a$ of the Gaussian martingale with increasing process $a(t)$ (see Exercise (1.14) Chap. V).
[Hint: Use the preceding exercise and the ideas of Proposition (1.23) Chap. IV.]
3°) Let $(M^n) = (M^{n,i},\ i = 1, \ldots, k)$ be a sequence of multidimensional local martingales such that $(M^{n,i})$ satisfies for each $i$ all the above hypotheses and, in addition, for $i \ne j$, the processes $\langle M^{n,i}, M^{n,j}\rangle$ converge to zero in distribution. Prove that the laws of $M^n$ converge weakly to $W_{a_1} \otimes \cdots \otimes W_{a_k}$.
[Hint: One may consider the linear combinations $\sum u_i M^{n,i}$.]
The two following exercises may be solved by using only elementary properties of BM.
* (1.17) Exercise (Scaling and asymptotic independence). 1°) Using the notation of the following section, prove that if $\beta$ is a BM, the processes $\beta$ and $\beta^{(c)}$ are asymptotically independent as $c$ goes to 0.
[Hint: For every $A > 0$, $(\beta_{c^2 t},\ t \le A)$ and $(\beta_{c^2 A + u} - \beta_{c^2 A},\ u \ge 0)$ are independent.]
2°) Deduce from 1°) that the same property holds as $c$ goes to infinity. (See also Exercise (2.9).)
Prove that for $c \ne 1$, the transformation $x \to x^{(c)}$, which preserves the Wiener measure, is ergodic. This ergodic property is the key point in the proof of Exercise (3.20), 1°), Chap. X.
3°) Prove that if $(Y_t,\ t \le 1)$ is a process whose law $P^Y$ on $C([0,1], \mathbb{R})$ satisfies
$$P^Y\big|_{\mathscr{F}_t} \ll W\big|_{\mathscr{F}_t} \quad \text{for every } t < 1,$$
then the two-dimensional process $V^{(c)}_t = \big((Y^{(c)}_t, Y_t),\ t \le 1\big)$ converges in law as $c$ goes to 0 towards $\big((\beta_t, Y_t),\ t \le 1\big)$, where $\beta$ is a BM which is independent of $Y$.
[Hint: Use Lemma (5.7) Chap. 0.]
4°) Prove that the law of $Y^{(c)}$ converges in total variation to the law of $\beta$, i.e. the Wiener measure. Can the convergence in 3°) be strengthened into a convergence in total variation?
5°) Prove that $V^{(c)}$ converges in law as $c$ goes to 0 whenever $Y$ is a BB, a Bessel bridge or the Brownian meander, and identify the limit in each case.
* (1.18) Exercise (A Bessel process looks eventually like a BM). Let $R$ be a $\mathrm{BES}^\delta(r)$ with $\delta > 1$ and $r \ge 0$. Prove that as $t$ goes to infinity, the process $(R_{t+s} - R_t,\ s \ge 0)$ converges in law to a $BM^1$.
[Hint: Use the canonical decomposition of $R$ as a semimartingale. It may be necessary to write separate proofs for different dimensions.]
§2. Asymptotic Behavior of Additive Functionals
of Brownian Motion
This section is devoted to the proof of a limit theorem for stochastic integrals with
respect to BM. As a corollary, we will get (roughly speaking) the growth rate of
occupation times of BM.
In what follows, $B$ is a standard linear BM and $(L^a)$ the family of its local times. As usual, we write $L$ for $L^0$. The Lebesgue measure is denoted by $m$.
(2.1) Proposition. If $f$ is integrable,
$$\lim_{n\to\infty} n\int_0^t f(nB_s)\,ds = m(f)\,L_t \quad \text{a.s.},$$
and, for each $t$, the convergence of $n\int_0^t f(nB_s)\,ds$ to $m(f)L_t$ holds in $L^p$ for every $p \ge 1$. Both convergences are uniform in $t$ on compact intervals.
Proof. By the occupation times formula,
$$n\int_0^t f(nB_s)\,ds = \int_{-\infty}^{+\infty} f(a)\,L^{a/n}_t\,da.$$
For fixed $t$, the map $a \to L^a_t$ is a.s. continuous and has compact support; thus, the r.v. $\sup_a L^a_t$ is a.s. finite and by the continuity of $L^a_t$ and the dominated convergence theorem,
$$\lim_n n\int_0^t f(nB_s)\,ds = m(f)\,L_t \quad \text{a.s.}$$
Hence, this is true simultaneously for every rational $t$; moreover, it is enough to prove the result for $f \ge 0$, in which case all the processes involved are increasing and the proof of the first assertion is easily completed.
For the second assertion, we observe that
$$n\int_0^t |f|(nB_s)\,ds \le m(|f|)\,\sup_a L^a_t$$
and, since by Theorem (2.4) in Chap. XI, $\sup_a L^a_t$ is in $L^p$ for every $p$, the result follows from the dominated convergence theorem.
The uniformity follows easily from the continuity of $L^a_t$ in both variables. □
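The occupation times formula behind this proof can be checked exactly on a discretized path: for a step function $f$, the Riemann sum of $f(B_s)$ equals the sum over levels of $f$ times the discrete occupation time of each level. In the sketch below the path, bins and seed are our own choices:

```python
import numpy as np

# Discrete check of the occupation times formula
#   int_0^t f(B_s) ds = int f(a) L_t^a da
# with a step function f: both sides reduce to the same sum, exactly.
rng = np.random.default_rng(42)
dt = 1e-3
incr = np.sqrt(dt) * rng.standard_normal(1000)
B = np.concatenate(([0.0], np.cumsum(incr)))    # discretized BM on [0, 1]

edges = np.linspace(-3.0, 3.0, 61)              # 60 bins of width 0.1
which = np.digitize(B, edges) - 1               # bin index of each sample
values = rng.standard_normal(60)                # an arbitrary step function f
inside = (which >= 0) & (which < 60)

lhs = np.sum(np.where(inside, values[np.clip(which, 0, 59)], 0.0)) * dt
occupation = np.array([np.sum(which == j) * dt for j in range(60)])
rhs = np.sum(values * occupation)               # sum_a f(a) * (time at level a)
```

Here `occupation` plays the role of the local-time density integrated over each bin; the two sides agree up to floating-point summation order because they are literally the same sum re-binned.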
The following is a statement about the asymptotic behavior of additive functionals, in particular occupation times. The convergence in distribution involved
is that of processes (see Sect. 1), not merely of individual r.v.'s.
(2.2) Proposition. If $A$ is an integrable CAF,
$$\lim_{n\to\infty} \frac{1}{\sqrt{n}}\,A_{n\cdot} = \nu_A(1)\,L_{\cdot}$$
in distribution.
Proof. Since (see Exercise (2.11) Chap. VI) $L^a_{n\cdot} \stackrel{(d)}{=} \sqrt{n}\,L^{a/\sqrt{n}}_{\cdot}$, it follows that
$$\frac{1}{\sqrt{n}}\,A_{n\cdot} = \frac{1}{\sqrt{n}}\int L^a_{n\cdot}\,\nu_A(da) \stackrel{(d)}{=} \int L^{a/\sqrt{n}}_{\cdot}\,\nu_A(da)$$
and the latter expression converges a.s. to $\nu_A(1)\,L_{\cdot}$ by the same reasoning as in the previous proposition. □
The above result is satisfactory for $\nu_A(1) \ne 0$; it says that a positive integrable additive functional increases roughly like $\nu_A(1)\sqrt{t}$. On the contrary, the case $\nu_A(1) = 0$ must be further investigated and will lead to a central-limit type theorem with interesting consequences. Moreover, measures with zero integral are important when one wants to associate a potential theory with linear BM.
If we refer to Corollary (2.12) in Chap. X, we see that we might as well work with stochastic integrals and that is what we are going to do. To this end, we
need a result which will be equally very useful in the following section. It is an asymptotic version of Knight's theorem (see Sect. 1 Chap. V).
In what follows, $(M^n_j,\ 1 \le j \le k)$ will be a sequence of $k$-tuples of continuous local martingales vanishing at 0 and such that $\langle M^n_j, M^n_j\rangle_\infty = \infty$ for every $n$ and $j$. We call $\tau^n_j(t)$ the time-change associated with $\langle M^n_j, M^n_j\rangle$ and $\beta^n_j$ the DDS Brownian motion of $M^n_j$.
(2.3) Theorem. If, for every $t$, and every pair $(i, j)$ with $i \ne j$,
$$\lim_{n\to\infty} \langle M^n_i, M^n_j\rangle_{\tau^n_i(t)} = \lim_{n\to\infty} \langle M^n_i, M^n_j\rangle_{\tau^n_j(t)} = 0$$
in probability, then the $k$-dimensional process $\beta^n = (\beta^n_j,\ 1 \le j \le k)$ converges in distribution to a $BM^k$.
Proof. The laws of the processes $\beta^n_j$ are all equal to the one-dimensional Wiener measure. Therefore, the sequence $\{\beta^n\}$ is weakly relatively compact and we must prove that, for any limit process, the components, which are obviously linear BM's, are independent.
It is no more difficult to prove the result in the general case than in the case $k = 2$, for which we introduce the following handier notation. We consider two sequences of continuous local martingales $(M^n)$ and $(N^n)$. We call $\mu_n(t)$ and $\eta_n(t)$ the time-changes associated with $\langle M^n, M^n\rangle$ and $\langle N^n, N^n\rangle$ respectively and $\beta^n$ and $\gamma^n$ the corresponding DDS Brownian motions.
If $0 = t_0 < t_1 < \cdots < t_p = t$ and if we are given scalars $f_0, \ldots, f_{p-1}$ and $g_0, \ldots, g_{p-1}$, we set
$$f = \sum_j f_j\,1_{]t_j, t_{j+1}]}, \qquad \beta^n(f) = \sum_j f_j\big(\beta^n_{t_{j+1}} - \beta^n_{t_j}\big),$$
$$g = \sum_j g_j\,1_{]t_j, t_{j+1}]}, \qquad \gamma^n(g) = \sum_j g_j\big(\gamma^n_{t_{j+1}} - \gamma^n_{t_j}\big).$$
Let us first observe that if we set
$$U^n_s = \int_0^s f\big(\langle M^n, M^n\rangle_u\big)\,dM^n_u, \qquad V^n_s = \int_0^s g\big(\langle N^n, N^n\rangle_u\big)\,dN^n_u,$$
then $\beta^n(f) = U^n_\infty$ and $\gamma^n(g) = V^n_\infty$. Therefore, writing $E\big[\mathscr{E}\big(i(U^n + V^n)\big)_\infty\big] = 1$ yields
$$E\Big[\exp\big(i(\beta^n(f) + \gamma^n(g))\big)\,H^n\Big] = \exp\Big(-\tfrac{1}{2}\big(\|f\|_2^2 + \|g\|_2^2\big)\Big),$$
where
$$H^n = \exp\Big(\int_0^\infty f\big(\langle M^n, M^n\rangle_s\big)\,g\big(\langle N^n, N^n\rangle_s\big)\,d\langle M^n, N^n\rangle_s\Big) = \exp\Big(\sum_{i,j} f_i g_j \int d\langle M^n, N^n\rangle_s\,1_{[\mu_n(t_i),\mu_n(t_{i+1})]}(s)\,1_{[\eta_n(t_j),\eta_n(t_{j+1})]}(s)\Big).$$
The hypothesis entails plainly that $H^n$ converges to 1 in probability; thus the proof will be finished if we can apply the dominated convergence theorem. But the Kunita-Watanabe inequality (Proposition (1.15) Chap. IV) and the time-change formulas yield
$$\Big|\int d\langle M^n, N^n\rangle_s\,1_{[\mu_n(t_i),\mu_n(t_{i+1})]}(s)\,1_{[\eta_n(t_j),\eta_n(t_{j+1})]}(s)\Big| \le (t_{i+1} - t_i)^{1/2}(t_{j+1} - t_j)^{1/2},$$
so that the $H^n$'s are uniformly bounded, and we are done. □
We will make great use of a corollary to the foregoing result which we now describe. For any process $X$ and for a fixed real number $h > 0$, we define the scaled process $X^{(h)}$ by
$$X^{(h)}_t = h^{-1}\,X_{h^2 t}.$$
The importance of the scaling operation has already been seen in the case of BM. If $M$ is a continuous local martingale and $\beta$ its DDS Brownian motion, then $\beta^{(h)}$ is the DDS Brownian motion of $h^{-1}M$, as is stated in Exercise (1.17) of Chap. V.
We now consider a family $M^i$, $i = 1, 2, \ldots, k$ of continuous local martingales such that $\langle M^i, M^i\rangle_\infty = \infty$ for every $i$ and call $\beta^i$ their DDS Brownian motions. We set $M^i_n = M^i/\sqrt{n}$ and call $\beta^i_n$ the DDS Brownian motion of $M^i_n$. As observed above, $\beta^i_n(t) = \beta^i(nt)/\sqrt{n}$.
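The scaling map is a purely deterministic operation on paths. As a sanity check, assuming the scaling $X^{(h)}_t = h^{-1}X_{h^2 t}$ (consistent with $\beta^i_n(t) = \beta^i(nt)/\sqrt n$ for $h = \sqrt n$; the helper is ours), the path $X(t) = \sqrt t$ is a fixed point of every $X \mapsto X^{(h)}$, mirroring the distributional scaling invariance of BM:

```python
import numpy as np

# Sketch of the scaling operation X^{(h)}_t = h^{-1} X_{h^2 t}.
def scaled(path_fn, h, t):
    return path_fn(h ** 2 * t) / h

t = np.linspace(0.0, 4.0, 101)
x2 = scaled(np.sqrt, 2.0, t)   # h = 2
x5 = scaled(np.sqrt, 5.0, t)   # h = 5
# sqrt(h^2 t) / h = sqrt(t): the square-root path is invariant under scaling.
```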
(2.4) Corollary. The $k$-dimensional process $\beta_n = (\beta^i_n,\ i = 1, \ldots, k)$ converges in distribution to a $BM^k$ as soon as
$$\lim_{t\to\infty} \langle M^i, M^j\rangle_t \big/ \langle M^i, M^i\rangle_t = 0$$
almost surely for every $i, j \le k$ with $i \ne j$.
Proof. If $\tau_i(t)$ (resp. $\tau^n_i(t)$) is the time-change associated with $\langle M^i, M^i\rangle$ (resp. $\langle M^i_n, M^i_n\rangle$), then $\tau^n_i(t) = \tau_i(nt)$ and consequently $\langle M^i_n, M^j_n\rangle_{\tau^n_i(t)} = n^{-1}\langle M^i, M^j\rangle_{\tau_i(nt)}$. The hypothesis entails that $t^{-1}\langle M^i, M^j\rangle_{\tau_i(t)}$ converges a.s. to 0 as $t$ goes to $+\infty$, so that the result follows from Theorem (2.3). □
The foregoing corollary has a variant which often comes in handy.
(2.5) Corollary. If there is a positive continuous strictly increasing function $\phi$ on $\mathbb{R}_+$ such that
i) $\phi(t)^{-1}\langle M^i, M^i\rangle_t \xrightarrow[t\to\infty]{\text{a.s.}} V_i$, $i = 1, 2, \ldots, k$, where $V_i$ is a strictly positive r.v.,
ii) $\phi(t)^{-1}\sup_{s\le t} |\langle M^i, M^j\rangle_s| \xrightarrow[t\to\infty]{} 0$ in probability for every $i, j \le k$ with $i \ne j$,
then the conclusion of Corollary (2.4) holds.
Proof. Again it is enough to prove that, for $i \ne j$, $t^{-1}\langle M^i, M^j\rangle_{\tau_i(t)}$ converges to 0 in probability, and in fact we shall show that $t^{-1}X_{\tau_i(t)}$ converges to 0 in probability, where $X_t = \sup\{|\langle M^i, M^j\rangle_s|;\ s \le t\}$.
Hypothesis i) implies that $t^{-1}\phi(\tau_i(t))$ converges to $V_i^{-1}$ in distribution. For $\lambda > 0$ and $x > 0$, we have, using the fact that $X_\cdot$ is increasing,
$$P\big[X_{\tau_i(t)} > \lambda t\big] \le P\big[\phi(\tau_i(t)) \ge tx\big] + P\big[X_{\tau_i(t)} > \lambda t;\ \tau_i(t) < \phi^{-1}(tx)\big] \le P\big[\phi(\tau_i(t)) \ge tx\big] + P\big[X_{\phi^{-1}(tx)} > \lambda t\big].$$
Pick $\varepsilon > 0$; since $V_i$ is strictly positive we may choose $x$ sufficiently large and $T > 0$, such that, for every $t \ge T$,
$$P\big[\phi(\tau_i(t)) \ge tx\big] \le \varepsilon.$$
Hypothesis ii) implies that there exists $T' \ge T$ such that for every $t \ge T'$,
$$P\big[X_{\phi^{-1}(tx)} > \lambda t\big] \le \varepsilon.$$
It follows that for $t \ge T'$,
$$P\big[X_{\tau_i(t)} > \lambda t\big] \le 2\varepsilon,$$
which is the desired result. □
We now return to the problem raised after Proposition (2.2). We consider Borel functions $f_i$, $i = 1, 2, \ldots, k$ in $L^1(m) \cap L^2(m)$ which we assume to be pairwise orthogonal in $L^2$, i.e. $\int f_i f_j\,dm = 0$ for $i \ne j$. We set
$$M^n_i = \sqrt{n}\int_0^{\cdot} f_i(nB_s)\,dB_s.$$
(2.6) Theorem (Papanicolaou–Stroock–Varadhan). The $(k+1)$-dimensional process $(B, M^n_1, \ldots, M^n_k)$ converges in distribution to $\big(\beta,\ \|f_i\|_2\,\gamma^i_{l_t},\ i = 1, \ldots, k\big)$ where $(\beta, \gamma^1, \ldots, \gamma^k)$ is a $BM^{k+1}$ and $l$ is the local time of $\beta$ at zero.
Proof. We have
$$\langle M^n_i, M^n_j\rangle_t = n\int_0^t (f_i f_j)(nB_s)\,ds,$$
so that by Proposition (2.1), we have a.s.
$$\lim_{n\to\infty} \langle M^n_i, M^n_i\rangle_t = \|f_i\|_2^2\,L_t; \qquad \text{for } i \ne j, \quad \lim_{n\to\infty} \langle M^n_i, M^n_j\rangle_t = \lim_{n\to\infty} \langle M^n_i, B\rangle_t = 0,$$
uniformly in $t$ on compact intervals. Thus it is not difficult to see that the hypotheses of Theorem (2.3) obtain; as a result, if we call $B^n_i$ the DDS Brownian motion of $M^n_i$, the process $(B, B^n_1, \ldots, B^n_k)$ converges in distribution to $(\beta, \gamma^1, \ldots, \gamma^k)$. Now $(B, M^n_1, \ldots, M^n_k)$ is equal to $\big(B,\ B^n_i\big(\langle M^n_i, M^n_i\rangle\big),\ i = 1, \ldots, k\big)$ and it is plain that $\big(B,\ \langle M^n_i, M^n_i\rangle,\ i = 1, \ldots, k,\ B^n_i,\ i = 1, \ldots, k\big)$ converges in distribution to $\big(\beta,\ \|f_i\|_2^2\,l,\ i = 1, \ldots, k,\ \gamma^i,\ i = 1, \ldots, k\big)$. The result follows. □
Remark. Instead of $n$, we could use any sequence $(a_n)$ converging to $+\infty$ or, for that matter, consider the real-indexed family
$$M^{i,\lambda}_t = \sqrt{\lambda} \int_0^t f_i(\lambda B_s)\, dB_s$$
and let $\lambda$ tend to $+\infty$. The proof would go through just the same.
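To see the objects of Theorem (2.6) concretely, here is a small numerical sketch (ours, not part of the text; the functions, the seed, and the parameters $\lambda = 50$, $dt = 10^{-4}$ are illustrative assumptions). It discretizes $M^{i,\lambda}_t = \sqrt{\lambda}\int_0^t f_i(\lambda B_s)\,dB_s$ and the brackets $\langle M^{i,\lambda}, M^{j,\lambda}\rangle_t = \lambda\int_0^t (f_i f_j)(\lambda B_s)\,ds$; choosing $f_1, f_2$ with disjoint supports makes them orthogonal in $L^2$, and the discretized cross bracket then vanishes identically.

```python
import numpy as np

def brackets(f_i, f_j, B, dt, lam):
    """Riemann approximation of <M^{i,lam}, M^{j,lam}>_t = lam * int_0^t (f_i f_j)(lam B_s) ds."""
    vals = f_i(lam * B[:-1]) * f_j(lam * B[:-1])
    return lam * dt * np.cumsum(vals)

def ito_integral(f, B, lam):
    """Euler approximation of M^{lam}_t = sqrt(lam) * int_0^t f(lam B_s) dB_s."""
    dB = np.diff(B)
    return np.sqrt(lam) * np.cumsum(f(lam * B[:-1]) * dB)

rng = np.random.default_rng(0)
dt, n_steps, lam = 1e-4, 10_000, 50.0
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])

f1 = lambda x: np.where((x >= 0) & (x <= 1), 1.0, 0.0)    # support [0, 1]
f2 = lambda x: np.where((x >= -1) & (x < 0), 1.0, 0.0)    # support [-1, 0): disjoint from f1

cross = brackets(f1, f2, B, dt, lam)    # identically zero: f1 * f2 == 0 pointwise
diag = brackets(f1, f1, B, dt, lam)     # nonnegative and nondecreasing
M = ito_integral(f1, B, lam)
```

The vanishing of `cross` is the discrete counterpart of the orthogonality hypothesis $\int f_i f_j\,dm = 0$ that drives the asymptotic independence of the limit motions $Y^i$.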
§2. Asymptotic Behavior of Additive Functionals of Brownian Motion
527
We will now draw the consequences we have announced for an additive functional $A$ which is the difference of two integrable positive continuous additive functionals (see Exercise (2.22) Chap. X) and such that $\nu_A(1) = 0$. In order to be able to apply the representation result given in Theorem (2.9) of Chap. X, we will have to make the additional assumption that
$$\int |x|\, |\nu_A|(dx) < \infty.$$
As in Sect. 3 of the Appendix, we set
$$F(x) = \int |x - y|\, \nu_A(dy).$$
The function $F$ is the difference of two convex functions and its second derivative in the sense of distributions is $2\nu_A$. Let $F'_-$ be its left derivative which, since $\nu_A(1) = 0$, is equal to $2\nu_A(]-\infty, \cdot[)$. We have the
(2.7) Lemma. The function $F$ is bounded and $F'_-$ is in $L^1(m) \cap L^2(m)$. Moreover
$$\|F'_-\|_2^2 = 4\, I(\nu_A)$$
where $I(\nu_A) = -(1/2)\iint |x - y|\, \nu_A(dx)\, \nu_A(dy)$ is called the energy of $\nu_A$.
Proof. Since
$$\int |x - y|\, |\nu_A|(dy) \le |x|\, \|\nu_A\| + \int |y|\, |\nu_A|(dy),$$
the integral $I(\nu_A)$ is finite and we may apply Fubini's theorem to the effect that
$$I(\nu_A) = -\iint_{x>y} (x - y)\, \nu_A(dx)\, \nu_A(dy) = -\iiint_{x>z>y} \nu_A(dx)\, \nu_A(dy)\, dz = -\int_{-\infty}^{+\infty} dz \left(\int_z^{\infty} \nu_A(dx)\right)\left(\int_{-\infty}^z \nu_A(dy)\right).$$
The set of $z$'s such that $\nu_A(\{z\}) \ne 0$ is countable and for the other $z$'s, it follows from the hypothesis $\nu_A(1) = 0$ that
$$\int_z^{\infty} \nu_A(dx) = -\int_{-\infty}^z \nu_A(dy) = -\frac{1}{2}\, F'_-(z).$$
Thus the proof of the equality in the statement is complete.
By the same token,
$$\int_0^{\infty} |F'_-(x)|\, dx = 2\int_0^{\infty} \left|\nu_A(]x, \infty[)\right| dx \le 2\int_0^{\infty} |\nu_A|(]x, \infty[)\, dx \le 2\int_0^{\infty} x\, |\nu_A|(dx) < \infty,
$$
and likewise
$$\int_{-\infty}^0 |F'_-(x)|\, dx \le 2\int_{-\infty}^0 |x|\, |\nu_A|(dx) < \infty.$$
Consequently, $F'_-$ is in $L^1$ and it follows that $F$ is bounded. □
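With the normalizations $F(x) = \int|x-y|\,\nu_A(dy)$ and $I(\nu_A) = -\frac12\iint|x-y|\,\nu_A(dx)\,\nu_A(dy)$, the identity of the Lemma can be checked exactly on a two-atom measure. The following sketch is ours (function name and the example $\nu_A = \delta_0 - \delta_1$ are assumptions); since $z \mapsto \nu_A(]z,\infty[)$ is then a step function, both $I(\nu_A) = \int \nu_A(]z,\infty[)^2\,dz$ and $\|F'_-\|_2^2$ with $F'_-(z) = 2\nu_A(]-\infty,z[)$ are computed without any discretization error.

```python
import numpy as np

def energy_and_norm(atoms, weights):
    """For nu = sum_k w_k * delta_{x_k} with sum w_k = 0, compute exactly
    I(nu) = int nu(]z,oo[)^2 dz   and   ||F'_-||_2^2  with  F'_-(z) = 2*nu(]-oo,z[),
    using that z -> nu(]z,oo[) is a step function vanishing outside [x_1, x_m]."""
    order = np.argsort(atoms)
    x = np.asarray(atoms, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    left = np.cumsum(w)[:-1]            # nu(]-oo, z[) for z between x_k and x_{k+1}
    gaps = np.diff(x)
    energy = np.sum(left**2 * gaps)     # nu(]z,oo[) = -nu(]-oo,z[) because nu(1) = 0
    norm2 = np.sum((2.0 * left)**2 * gaps)
    return energy, norm2

# nu = delta_0 - delta_1:  F(x) = |x| - |x-1|,  F'_- = 2 on ]0,1],  so ||F'_-||^2 = 4, I(nu) = 1
I_nu, norm2 = energy_and_norm([0.0, 1.0], [1.0, -1.0])

# cross-check I(nu) against the defining double integral -(1/2) * sum |x_i - x_j| w_i w_j
pts, wts = [0.0, 1.0], [1.0, -1.0]
I_alt = -0.5 * sum(abs(xi - xj) * wi * wj
                   for xi, wi in zip(pts, wts) for xj, wj in zip(pts, wts))
```

The example confirms the factor in $\|F'_-\|_2^2 = 4\,I(\nu_A)$: here $\|F'_-\|_2^2 = 4$ while $I(\nu_A) = 1$.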
We may now prove that additive functionals satisfying the above set of hypotheses are, roughly speaking, of the order of $t^{1/4}$ as $t$ goes to infinity.
(2.8) Proposition. If $\nu_A(1) = 0$ and $\int |x|\, |\nu_A|(dx) < \infty$, the 2-dimensional process $\left(n^{-1/2} B_{n\cdot},\ n^{-1/4} A_{n\cdot}\right)$ converges in distribution to $\left(\beta,\ 2\, I(\nu_A)^{1/2}\, Y_l\right)$, where $(\beta, Y)$ is a $BM^2$ and $l$ the local time of $\beta$ at $0$.
Proof. By the representation result in Theorem (2.9) of Chap. X and Tanaka's formula,
$$n^{-1/4} A_{n\cdot} = n^{-1/4}\left[F(B_{n\cdot}) - F(0)\right] - n^{-1/4} \int_0^{n\cdot} F'_-(B_s)\, dB_s.$$
Since $F$ is bounded, the first term on the right goes to zero as $n$ goes to infinity and, therefore, it is enough to study the stochastic integral part.
Setting $s = nu$, we see that we might as well study the limit of
$$\left(n^{-1/2} B_{n\cdot},\ n^{-1/4} \int_0^{\cdot} F'_-(B_{nu})\, dB_{nu}\right),$$
and since $B_{n\cdot} \overset{(d)}{=} \sqrt{n}\, B$, this process has the same law as
$$\left(B_{\cdot},\ n^{1/4} \int_0^{\cdot} F'_-\!\left(\sqrt{n}\, B_u\right) dB_u\right).$$
Because $F'_-$ is in $L^1(m) \cap L^2(m)$, it remains to apply the remark following Theorem (2.6). □
Remark. Propositions (2.2) and (2.8) are statements about the speed at which
additive functionals of linear BM tend to infinity. In dimension d > 2, there is no
such question as integrable additive functionals are finite at infinity but, for the
planar BM, the same question arises and it was shown in Sect. 4 Chap. X that
integrable additive functionals are of the order of log t. However, as the limiting
process is not continuous, one has to use other notions of convergence.
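The proof of Proposition (2.8) rests on the pathwise identity $A_t = F(B_t) - F(0) - \int_0^t F'_-(B_s)\,dB_s$. The sketch below (ours; the choice $\nu_A(dx) = \left(1_{[0,1]}(x) - 1_{[-1,0[}(x)\right)dx$, the seed, and the step size are illustrative assumptions) checks this identity on one discretized path: $A_1$ is computed as an occupation-time Riemann sum and compared with the Tanaka-type right-hand side, the two agreeing up to Euler discretization error.

```python
import numpy as np

# signed density g, so that nu_A(dx) = g(x) dx satisfies nu_A(1) = 0 and has a first moment
g = lambda x: (np.where((x >= 0) & (x <= 1), 1.0, 0.0)
               - np.where((x >= -1) & (x < 0), 1.0, 0.0))

ygrid = np.linspace(-1.0, 1.0, 20_001)
def F(x):
    # F(x) = int |x - y| nu_A(dy), trapezoidal quadrature over the support of g
    vals = np.abs(x - ygrid) * g(ygrid)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ygrid))

def Fprime_left(x):
    # F'_-(x) = 2 * nu_A(]-oo, x[), here in closed form
    return 2.0 * np.where(x < -1, 0.0,
                  np.where(x < 0, -(x + 1.0),
                   np.where(x < 1, x - 1.0, 0.0)))

rng = np.random.default_rng(1)
dt, n = 1e-5, 100_000
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

A_1 = dt * np.sum(g(B[:-1]))                      # A_1 = int_0^1 g(B_s) ds
ito_1 = np.sum(Fprime_left(B[:-1]) * dB)          # int_0^1 F'_-(B_s) dB_s
tanaka_1 = F(B[-1]) - F(0.0) - ito_1              # should reproduce A_1
```

Rescaling this decomposition by $n^{-1/4}$ is exactly the first step of the proof: the bounded term $F(B_{n\cdot}) - F(0)$ is killed by the normalization and only the stochastic integral survives in the limit.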
* (2.9) Exercise. 1°) In the situation of Theorem (2.3), if there is a sequence of positive random variables $L_n$ such that
i) $\lim_n \langle M^n_i, M^n_i\rangle_{L_n} = +\infty$ in probability for each $i$;
ii) $\lim_n \sup_{s \le L_n} \left|\langle M^n_i, M^n_j\rangle_s\right| = 0$ in probability for each pair $i, j$ with $i \ne j$,
prove that the conclusion of the Theorem holds.
2°) Assume now that there are only two indexes and write $M^n$ for $M^n_1$ and $N^n$ for $M^n_2$. Prove that if there is a sequence $(L_n)$ of positive random variables such that
i') $\lim_n \langle M^n, M^n\rangle_{L_n} = \infty$ in probability,
ii') $\lim_n \sup_{s \le L_n} \left|\langle M^n, N^n\rangle_s\right| = 0$ in probability,
then the conclusion of the Theorem holds.
3°) Deduce from the previous question that if $\beta$ is a BM, and if $c$ converges to $+\infty$, then $\beta$ and $\beta^{(c)}$ are asymptotically independent.
Remark however that the criterion given in Corollary (2.4) does not apply in the particular case of a pair $(M, M^{(c)})$ as $c \to \infty$. Give a more direct proof of the asymptotic independence of $\beta$ and $\beta^{(c)}$.
* (2.10) Exercise. For $f$ in $L^2 \cap L^1$, prove that for fixed $t$, the random variables $\sqrt{n} \int_0^t f(nB_s)\, dB_s$ converge weakly to zero in $L^2$ as $n$ goes to infinity. As a result, the convergence in Theorem (2.6) cannot be improved to convergence in probability.
* (2.11) Exercise. Let $0 = a_0 < a_1 < \cdots < a_k < \infty$ be a finite sequence of real numbers. Prove that the $(k+1)$-dimensional process
$$\left(B_t,\ \frac{\sqrt{n}}{2}\left(L_t^{a_i/n} - L_t^{a_{i-1}/n}\right),\ i = 1, 2, \ldots, k\right)$$
converges in distribution to
$$\left(\beta,\ \sqrt{a_i - a_{i-1}}\; Y^i_l,\ i = 1, 2, \ldots, k\right)$$
where $(Y^i, i = 1, 2, \ldots, k)$ is a $k$-dimensional BM independent of $\beta$ and $l$ is the local time of $\beta$ at $0$.
* (2.12) Exercise. 1°) Let
$$X(t, a) = \int_0^t 1_{[0,a]}(B_s)\, dB_s.$$
Prove that for $p \ge 2$, there exists a constant $C_p$ such that for $0 \le s \le t \le 1$ and $0 \le a \le b \le 1$,
$$E\left[|X(t, b) - X(s, a)|^p\right] \le C_p\left((t - s)^{p/2} + (b - a)^{p/2}\right).$$
2°) Prove that the family of the laws $P_\lambda$ of the doubly indexed processes
$$\left(B_t,\ \lambda^{1/2} \int_0^t 1_{[0,a]}(\lambda B_s)\, dB_s\right)$$
is weakly relatively compact.
[Hint: Use Exercise (1.12).]
3°) Prove that, as $\lambda$ goes to infinity, the doubly indexed processes
$$\left(B_t,\ \lambda^{1/2}\left(L_t^{a/\lambda} - L_t^0\right)/2\right)$$
converge in distribution to $\left(B_t,\ W(L_t^0, a)\right)$, where $W$ is a Brownian sheet independent of $B$.
[Hint: Use the preceding Exercise (2.11).]
4°) For $\nu > 0$, prove that
$$\varepsilon^{\nu} \int_{\varepsilon}^{\infty} a^{-(3/2+\nu)}\left(L_1^a - L_1^0\right) da \;\xrightarrow[\varepsilon\to 0]{(d)}\; 2\int_0^{\infty} e^{-(\nu+1/2)u}\, W\!\left(L_1^0, e^u\right) du.$$
5°) Let $\tau_x = \inf\{u : L_u^0 > x\}$; the processes $\lambda^{1/2}\left(L_{\tau_x}^{a/\lambda} - x\right)/2$ converge in distribution, as $\lambda$ tends to $+\infty$, to the process $\sqrt{x}\, \gamma_a$ where $\gamma$ is a standard BM. This may be derived from 3°) but may also be proved as a consequence of the second Ray-Knight theorem (Sect. 2 Chap. XI).
* (2.13) Exercise. With the notation of Theorem (1.10) in Chap. VI, prove that
$$\lim_{\varepsilon\to 0} \frac{1}{\sqrt{\varepsilon}}\left(\varepsilon\, d_t^{\varepsilon} - \frac{1}{2} L_t\right) = \frac{1}{2}\, \gamma_{l_t}$$
in the sense of finite distributions, where as usual, $l$ is the local time at $0$ of a BM independent of $\gamma$.
[Hint: If $M_t^{\varepsilon} = \frac{1}{\sqrt{\varepsilon}} \int_0^t e_s^{\varepsilon}\, dB_s$ and $P^{\varepsilon}$ is the law of $(B_t, L_t, M_t^{\varepsilon})$, prove that the set $(P^{\varepsilon}, \varepsilon > 0)$ is relatively compact.]
* (2.14) Exercise. In the notation of this section, if $(x_i)$, $i = 1, \ldots, k$, is a sequence of real numbers, prove that $\left(B,\ \varepsilon^{-1/2}\left(L^{x_i+\varepsilon} - L^{x_i}\right),\ i = 1, \ldots, k\right)$ converges in distribution, as $\varepsilon \to 0$, to $\left(B,\ 2\beta^i_{L^{x_i}},\ i = 1, \ldots, k\right)$, where $(B, \beta^1, \ldots, \beta^k)$ is a $BM^{k+1}$.
** (2.15) Exercise. Prove, in the notation of this section, that for any $x \in \mathbb{R}$,
$$\varepsilon^{-1/2}\left[\varepsilon^{-1} \int_0^t 1_{[x,x+\varepsilon]}(B_s)\, ds - L_t^x\right]$$
converges in distribution to $(2/\sqrt{3})\, \beta_{L_t^x}$, as $\varepsilon$ tends to $0$. The reader will notice that this is the "central-limit" theorem associated with the a.s. result of Corollary (1.9) in Chap. VI.
[Hint: Extend the result of the preceding exercise to $\left(L^{x_i+\varepsilon z} - L^{x_i}\right)$ and get a doubly indexed limiting process.]
* (2.16) Exercise (A limit theorem for the Brownian motion on the unit sphere).
Let $Z$ be a $BM^d(a)$ with $a \ne 0$ and $d \ge 2$; set $\rho = |Z|$. Let $V$ be the process with values in the unit sphere of $\mathbb{R}^d$ defined by
$$Z_t = \rho_t\, V_{C_t},$$
where $C_t = \int_0^t \rho_s^{-2}\, ds$. This is the skew-product decomposition of $BM^d$.
1°) Prove that there is a $BM^d$, say $B$, independent of $\rho$ and such that
$$V_t = V_0 + \int_0^t \sigma(V_s)\, dB_s - \frac{d-1}{2} \int_0^t V_s\, ds$$
where $\sigma$ is the field of matrices given by $\sigma_{ij}(x) = \delta_{ij} - x_i x_j$.
2°) If $X_t = \int_0^t \sigma(V_s)\, dB_s$, prove that $\langle X^i, X^j\rangle_t = \langle X^i, B^j\rangle_t$.
[Hint: Observe that $\sigma(x)x = 0$, $\sigma(x)y = y$ if $(x, y) = 0$, hence $\sigma^2(x) = \sigma(x)$.]
3°) Show that
$$\lim_{t\to\infty} t^{-1} \langle X^i, B^j\rangle_t = \delta_{ij}\left(1 - d^{-1}\right) \quad \text{a.s.}$$
4°) Prove that the $2d$-dimensional process $\left(c^{-1} B_{c^2 t},\ (2c)^{-1} \int_0^{c^2 t} V_s\, ds\right)$ converges in distribution, as $c$ tends to $\infty$, to the process
$$\left(B_t,\ d^{-1}\left(B_t + (d-1)^{-1/2}\, B'_t\right)\right)$$
where $(B, B')$ is a $BM^{2d}$.
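The projection properties invoked in the hint of 2°) can be verified numerically; the sketch below is ours and assumes the field $\sigma(x) = \mathrm{Id} - x\,x^*$, i.e. the orthogonal projection onto the hyperplane orthogonal to the unit vector $x$, which is what the identities $\sigma(x)x = 0$, $\sigma(x)y = y$ for $y \perp x$, and $\sigma^2 = \sigma$ characterize.

```python
import numpy as np

def sigma(x):
    # sigma(x) = Id - x x^*: orthogonal projection onto the hyperplane orthogonal to x (|x| = 1)
    return np.eye(len(x)) - np.outer(x, x)

rng = np.random.default_rng(2)
d = 4
x = rng.normal(size=d)
x /= np.linalg.norm(x)           # a point of the unit sphere
y = rng.normal(size=d)
y -= (y @ x) * x                 # a tangent vector at x, i.e. (x, y) = 0
S = sigma(x)
```

Since $\mathrm{tr}\,\sigma(x) = d - 1$, averaging $\sigma_{ij}(V_s) = \delta_{ij} - V^i_s V^j_s$ against the uniform measure on the sphere (where each $V^iV^j$ averages to $\delta_{ij}/d$) is what produces the limit $\delta_{ij}(1 - d^{-1})$ in 3°).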
§3. Asymptotic Properties of Planar Brownian Motion
In this section, we take up the study of some asymptotic properties of complex BM
which was initiated in Sect. 4 of Chap. X. We will use the asymptotic version of
Knight's theorem (see the preceding section) which gives a sufficient condition for
the DDS Brownian motions of two sequences of local martingales to be asymptotically independent. We will also have to envisage below the opposite situation
in which these BM's are asymptotically equal. Thus, we start this section with a
sufficient condition to this effect.
(3.1) Theorem. Let $(M^n_i)$, $i = 1, 2$, be two sequences of continuous local martingales and $\beta^n_i$ their associated DDS Brownian motions. If $R_n(t)$ is a sequence of processes of time-changes such that the following limits exist in probability for every $t$,
i) $\lim_n \langle M^n_1, M^n_1\rangle_{R_n(t)} = \lim_n \langle M^n_2, M^n_2\rangle_{R_n(t)} = t$,
ii) $\lim_n \langle M^n_1 - M^n_2, M^n_1 - M^n_2\rangle_{R_n(t)} = 0$,
then $\lim_n \sup_{s\le t} \left|\beta^n_1(s) - \beta^n_2(s)\right| = 0$ in probability for every $t$.
Proof. If $\tau^n_i$ is the time-change associated with $\langle M^n_i, M^n_i\rangle$,
$$\left|\beta^n_1(t) - \beta^n_2(t)\right| \le \left|M^n_1\!\left(\tau^n_1(t)\right) - M^n_1\!\left(R_n(t)\right)\right| + \left|M^n_1\!\left(R_n(t)\right) - M^n_2\!\left(R_n(t)\right)\right| + \left|M^n_2\!\left(R_n(t)\right) - M^n_2\!\left(\tau^n_2(t)\right)\right|.$$
By Exercise (4.14) Chap. IV, for fixed $t$, the left-hand side converges in probability to zero if each of the terms
$$\left|\langle M^n_1, M^n_1\rangle_{\tau^n_1(t)} - \langle M^n_1, M^n_1\rangle_{R_n(t)}\right|, \qquad \langle M^n_1 - M^n_2, M^n_1 - M^n_2\rangle_{R_n(t)}, \qquad \left|\langle M^n_2, M^n_2\rangle_{\tau^n_2(t)} - \langle M^n_2, M^n_2\rangle_{R_n(t)}\right|$$
converges in probability to zero. Since $\langle M^n_i, M^n_i\rangle_{\tau^n_i(t)} = t$, this follows readily from the hypothesis.
As a result, $\beta^n_1 - \beta^n_2$ converges to $0$ in the sense of finite-dimensional distributions. On the other hand, Kolmogorov's criterion (1.8) entails that the set of laws of the processes $\beta^n_1 - \beta^n_2$ is weakly relatively compact; thus, $\beta^n_1 - \beta^n_2 \overset{(d)}{\longrightarrow} 0$. This implies (Exercise (1.11)) that $\sup_{s\le t} \left|\beta^n_1(s) - \beta^n_2(s)\right|$ converges in distribution, hence in probability, to zero. □
The following results are to be compared with Corollary (2.4). We now look
for conditions under which the DDS Brownian motions are asymptotically equal.
(3.2) Corollary. If $M_i$, $i = 1, 2$, are continuous local martingales and $R$ is a process of time-changes such that the following limits exist in probability
i) $\lim_{u\to\infty} \frac{1}{u}\langle M_i, M_i\rangle_{R(u)} = 1$ for $i = 1, 2$,
ii) $\lim_{u\to\infty} \frac{1}{u}\langle M_1 - M_2, M_1 - M_2\rangle_{R(u)} = 0$,
then $\frac{1}{\sqrt{u}}\left(\beta_1(u\cdot) - \beta_2(u\cdot)\right)$ converges in distribution to the zero process as $u$ tends to infinity.
Proof. By the remarks in Sect. 5 Chap. 0, it is equivalent to show that the convergence holds in probability uniformly on every bounded interval. Moreover, by Exercise (1.17) Chap. V (see the remarks before Corollary (2.4)), $\frac{1}{\sqrt{u}}\beta_i(u\cdot)$ is the DDS Brownian motion of $\frac{1}{\sqrt{u}} M_i$. Thus, we need only apply Theorem (3.1) to $\left(\frac{1}{\sqrt{u}} M_1, \frac{1}{\sqrt{u}} M_2\right)$. □
The above corollary will be useful later on. The most likely candidates for $R$ are mixtures of the time-changes $\mu^i$ associated with $\langle M_i, M_i\rangle$ and actually the following result shows that $\mu^1 \vee \mu^2$ will do.
(3.3) Proposition. The following two assertions are equivalent:
(i) $\lim_{t\to\infty} \frac{1}{t}\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^1_t \vee \mu^2_t} = 0$ in probability;
(ii) $\lim_{t\to\infty} \frac{1}{t}\langle M_1, M_1\rangle_{\mu^2_t} = \lim_{t\to\infty} \frac{1}{t}\langle M_2, M_2\rangle_{\mu^1_t} = 1$ in probability,
and $\lim_{t\to\infty} \frac{1}{t}\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^1_t \wedge \mu^2_t} = 0$ in probability.
Under these conditions, the convergence stated in Corollary (3.2) holds.
Proof. From the "Minkowski" inequality of Exercise (1.47) in Chap. IV, we conclude that
$$\left|\left(t^{-1}\langle M_1, M_1\rangle_{\mu^2_t}\right)^{1/2} - 1\right| \le \left(t^{-1}\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^2_t}\right)^{1/2}.$$
By means of this inequality, the proof that (i) implies (ii) is easily completed.
To prove the converse, notice that
$$\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^1_t \vee \mu^2_t} - \langle M_1 - M_2, M_1 - M_2\rangle_{\mu^1_t \wedge \mu^2_t} = \left|\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^1_t} - \langle M_1 - M_2, M_1 - M_2\rangle_{\mu^2_t}\right|$$
$$= \left|\langle M_2, M_2\rangle_{\mu^1_t} - \langle M_1, M_1\rangle_{\mu^2_t} + 2\left(\langle M_1, M_2\rangle_{\mu^2_t} - \langle M_1, M_2\rangle_{\mu^1_t}\right)\right|.$$
Since by the Kunita-Watanabe inequality
$$\left|\langle M_1, M_2\rangle_{\mu^2_t} - \langle M_1, M_2\rangle_{\mu^1_t}\right| \le \left(\langle M_1, M_1\rangle_{\mu^1_t \vee \mu^2_t} - \langle M_1, M_1\rangle_{\mu^1_t \wedge \mu^2_t}\right)^{1/2}\left(\langle M_2, M_2\rangle_{\mu^1_t \vee \mu^2_t} - \langle M_2, M_2\rangle_{\mu^1_t \wedge \mu^2_t}\right)^{1/2},$$
the equivalence of (ii) and (i) follows easily. □
The foregoing proposition will be used under the following guise.
(3.4) Corollary. If $\langle M_1, M_1\rangle_\infty = \langle M_2, M_2\rangle_\infty = \infty$ and
$$\lim_{t\to\infty} \langle M_1 - M_2, M_1 - M_2\rangle_t \big/ \langle M_i, M_i\rangle_t = 0 \quad \text{almost surely}$$
for $i = 1, 2$, then the conclusion of Proposition (3.3) holds.
Proof. The hypothesis implies that $\mu^i_t$ is finite and increases to $+\infty$ as $t$ goes to infinity. Moreover
$$\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^i_t} \big/ \langle M_i, M_i\rangle_{\mu^i_t} = \frac{1}{t}\langle M_1 - M_2, M_1 - M_2\rangle_{\mu^i_t},$$
so that condition (i) in the Proposition is easily seen to be satisfied. □
From now on, we consider a complex BM $Z$ such that $Z_0 = z_0$ a.s. and pick points $z_1, \ldots, z_p$ in $\mathbb{C}$ which differ from $z_0$. For each $j$, we set
$$X^j_t = \int_0^t \frac{dZ_s}{Z_s - z_j} = \log\left|\frac{Z_t - z_j}{z_0 - z_j}\right| + i\theta^j_t,$$
where $\theta^j$ is the continuous determination of $\arg\left(\frac{Z_t - z_j}{z_0 - z_j}\right)$ which vanishes for $t = 0$. The process $(2\pi)^{-1}\theta^j_t$ is the "winding number" of $Z$ around $z_j$ up to time $t$; we want to further the results of Sect. 4 in Chap. X by studying the simultaneous asymptotic properties of the $\theta^j$, $j = 1, \ldots, p$.
Let us set
$$c^j_t = \int_0^t |Z_s - z_j|^{-2}\, ds$$
and denote by $\tau^j$ the time-change process which is the inverse of $c^j$. As was shown in Sect. 2 Chap. V, for each $j$, there is a complex BM $\zeta^j = \beta^j + i\gamma^j$ such that
$$X^j_t = \zeta^j_{c^j_t}.$$
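On a discrete path, the continuous determination $\theta^j$ can be approximated by summing principal-value increments of the argument, since each small step turns by less than $\pi$. The sketch below is ours (the function name and the deterministic test path are assumptions; the method is valid only as long as no single step of the path jumps across $z_j$), and it recovers the winding number $(2\pi)^{-1}\theta^j_t$.

```python
import numpy as np

def winding(Z, z0):
    """(2*pi)^{-1} times the continuous increase of arg(Z_t - z0) along a
    discrete complex path Z, obtained by summing principal-value increments.
    Valid as long as no single step of the path crosses z0."""
    w = Z - z0
    dtheta = np.angle(w[1:] / w[:-1])   # increment of the continuous argument at each step
    return float(np.sum(dtheta)) / (2.0 * np.pi)

# deterministic sanity check: one counterclockwise loop of the unit circle
t = np.linspace(0.0, 2.0 * np.pi, 1_000)
circle = np.exp(1j * t)
```

Around the center the loop winds exactly once, while around a point outside it the total argument increase of a closed loop is zero; for a complex BM path one would apply the same function with $z_0 = z_j$ and a step size fine enough near $z_j$.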
We observe that up to a