
In probability theory, the optional stopping theorem (or Doob's optional sampling theorem) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future).

Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies. The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing.

== Statement of theorem ==

A discrete-time version of the theorem is given below:

Let X = (X_t)_{t∈ℕ₀} be a discrete-time martingale and τ a stopping time with values in ℕ₀ ∪ {∞}, both with respect to a filtration (F_t)_{t∈ℕ₀}. Assume that one of the following three conditions holds:

(a) The stopping time τ is almost surely bounded, i.e., there exists a constant c ∈ ℕ such that τ ≤ c a.s.

(b) The stopping time τ has finite expectation and the conditional expectations of the absolute value of the martingale increments are almost surely bounded; more precisely, E[τ] < ∞ and there exists a constant c such that

$$\mathbb{E}{\bigl[}|X_{t+1}-X_{t}|\,{\big\vert}\,{\mathcal{F}}_{t}{\bigr]}\leq c$$

almost surely on the event {τ > t} for all t ∈ ℕ₀.

(c) There exists a constant c such that |X_{t∧τ}| ≤ c a.s. for all t ∈ ℕ₀, where ∧ denotes the minimum operator.

Then X_τ is an almost surely well-defined random variable, and

$$\mathbb{E}[X_{\tau}]=\mathbb{E}[X_{0}].$$

Similarly, if the stochastic process X is a submartingale or a supermartingale and one of the above conditions holds, then

$$\mathbb{E}[X_{\tau}]\geq \mathbb{E}[X_{0}]$$

for a submartingale, and

$$\mathbb{E}[X_{\tau}]\leq \mathbb{E}[X_{0}]$$

for a supermartingale.

=== Remark ===

Under condition (c) it is possible that τ = ∞ happens with positive probability. On this event, X_τ is defined as the almost surely existing pointwise limit of X; see the proof below for details.

== Applications ==

The optional stopping theorem can be used to prove the impossibility of successful betting strategies for a gambler with a finite lifetime (which gives condition (a)) and a house limit on bets (condition (b)).

Suppose that the gambler can wager up to c dollars on a fair coin flip at times 1, 2, 3, etc., winning his wager if the coin comes up heads and losing it if the coin comes up tails. Suppose further that he can quit whenever he likes, but cannot predict the outcome of gambles that haven't happened yet. Then the gambler's fortune over time is a martingale, and the time τ at which he decides to quit (or goes broke and is forced to quit) is a stopping time. So the theorem says that E[X_τ] = E[X_0].

In other words, the gambler leaves with the same amount of money on average as when he started. (The same result holds if the gambler, instead of having a house limit on individual bets, has a finite limit on his line of credit or how far in debt he may go, though this is easier to show with another version of the theorem.)
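This conclusion can be checked numerically. The sketch below is illustrative: the "quit when ahead by 5 dollars, or after 100 flips" rule and the function name `simulate_gambler` are my own choices, chosen so the stopping time is bounded and condition (a) of the theorem applies.

```python
import random

def simulate_gambler(n_runs=20000, start=50, target_gain=5, max_flips=100, seed=0):
    """Estimate E[X_tau] for a gambler betting 1 dollar per fair coin flip,
    quitting when up by `target_gain` dollars or after `max_flips` flips.
    The stopping time is bounded by `max_flips`, so condition (a) holds."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        fortune = start
        for _ in range(max_flips):
            fortune += 1 if rng.random() < 0.5 else -1  # fair coin: +/- 1
            if fortune - start >= target_gain:
                break  # quit decision uses only outcomes seen so far
        total += fortune
    return total / n_runs
```

Running this gives an average final fortune very close to the initial 50 dollars, as the theorem predicts: the quit-when-ahead rule creates no edge.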

Consider a random walk that starts at a ≥ 0 and goes up or down by one with equal probability on each step. Suppose further that the walk stops if it reaches 0 or m ≥ a; the time at which this first occurs is a stopping time. If it is known that the expected time at which the walk ends is finite (say, from Markov chain theory), the optional stopping theorem predicts that the expected stop position is equal to the initial position a. Solving a = pm + (1 − p)0 for the probability p that the walk reaches m before 0 gives p = a/m.
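The prediction p = a/m is easy to test by simulation; a minimal sketch (the function name and parameters are illustrative):

```python
import random

def hit_probability(a, m, n_runs=20000, seed=1):
    """Estimate the probability that a symmetric +/-1 random walk started
    at a (0 <= a <= m) reaches m before 0.  The optional stopping theorem
    predicts p = a / m."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        x = a
        while 0 < x < m:  # walk until absorbed at 0 or m
            x += 1 if rng.random() < 0.5 else -1
        hits += (x == m)
    return hits / n_runs
```

For example, `hit_probability(3, 10)` comes out close to 3/10 = 0.3.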

Now consider a random walk X that starts at 0 and stops if it reaches −m or +m, and use the martingale Y_n = X_n² − n from the examples section. If τ is the time at which X first reaches ±m, then 0 = E[Y_0] = E[Y_τ] = m² − E[τ]. This gives E[τ] = m².
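The conclusion E[τ] = m² can likewise be checked empirically (a sketch; the helper name is mine):

```python
import random

def mean_stopping_time(m, n_runs=20000, seed=2):
    """Estimate E[tau] for a symmetric +/-1 walk from 0 stopped at +/-m.
    The martingale Y_n = X_n**2 - n predicts E[tau] = m**2."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        x, t = 0, 0
        while -m < x < m:  # walk until it hits either barrier
            x += 1 if rng.random() < 0.5 else -1
            t += 1
        total += t
    return total / n_runs
```

With m = 5, the estimate lands near 25.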

Care must be taken, however, to ensure that one of the conditions of the theorem holds. For example, suppose the last example had instead used a 'one-sided' stopping time, so that stopping only occurred at +m, not at −m. The value of X at this stopping time would therefore be m. Therefore, the expected value E[X_τ] must also be m, seemingly in violation of the theorem, which would give E[X_τ] = 0.
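This apparent violation can be illustrated numerically. Since here E[τ] is infinite and the walk is unbounded below, none of conditions (a)-(c) holds, and the simulation needs a cap on the number of steps as a practical necessity (the cap and the function name are my own choices):

```python
import random

def one_sided_stop(m=5, n_runs=2000, cap=10_000, seed=3):
    """Symmetric +/-1 walk from 0 stopped ONLY on hitting +m.  tau is
    almost surely finite by recurrence, yet E[X_tau] = m != 0 = E[X_0]:
    none of conditions (a)-(c) holds (E[tau] is infinite and the walk is
    unbounded below).  Runs that exceed `cap` steps are discarded."""
    rng = random.Random(seed)
    total, finished = 0, 0
    for _ in range(n_runs):
        x = 0
        for _ in range(cap):
            x += 1 if rng.random() < 0.5 else -1
            if x == m:
                break
        if x == m:
            finished += 1
            total += x  # every completed run stops exactly at +m
        # else: the run was cut off by the cap and contributes nothing
    return total / finished, finished / n_runs
```

Every completed run ends at exactly +m, so the empirical average of X_τ is m rather than 0; the discarded long excursions are precisely where the theorem's conditions break down.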

The failure of the optional stopping theorem shows that all three of the conditions fail.

== Proof ==

Let X^τ denote the stopped process; it is also a martingale (or a submartingale or supermartingale, respectively). Under condition (a) or (b), the random variable X_τ is well defined. Under condition (c) the stopped process X^τ is bounded, hence by Doob's martingale convergence theorem it converges a.s. pointwise to a random variable which we call X_τ.

If condition (c) holds, then the stopped process X^τ is bounded by the constant random variable M := c. Otherwise, writing the stopped process as

$$X_{t}^{\tau}=X_{0}+\sum_{s=0}^{\tau\land t-1}(X_{s+1}-X_{s}),\quad t\in\mathbb{N}_{0},$$

gives |X_t^τ| ≤ M for all t ∈ ℕ₀, where

$$M:=|X_{0}|+\sum_{s=0}^{\tau-1}|X_{s+1}-X_{s}|=|X_{0}|+\sum_{s=0}^{\infty}|X_{s+1}-X_{s}|\cdot\mathbf{1}_{\{\tau>s\}}.$$

By the monotone convergence theorem,

$$\mathbb{E}[M]=\mathbb{E}[|X_{0}|]+\sum_{s=0}^{\infty}\mathbb{E}{\bigl[}|X_{s+1}-X_{s}|\cdot\mathbf{1}_{\{\tau>s\}}{\bigr]}.$$

If condition (a) holds, then this series only has a finite number of non-zero terms, hence M is integrable.

If condition (b) holds, then we continue by inserting a conditional expectation and using that the event {τ > s} is known at time s (note that τ is assumed to be a stopping time with respect to the filtration), hence

$$\begin{aligned}\mathbb{E}[M]&=\mathbb{E}[|X_{0}|]+\sum_{s=0}^{\infty}\mathbb{E}{\bigl[}\underbrace{\mathbb{E}{\bigl[}|X_{s+1}-X_{s}|{\big|}{\mathcal{F}}_{s}{\bigr]}\cdot\mathbf{1}_{\{\tau>s\}}}_{\leq\,c\,\mathbf{1}_{\{\tau>s\}}{\text{ a.s. by (b)}}}{\bigr]}\\&\leq\mathbb{E}[|X_{0}|]+c\sum_{s=0}^{\infty}\mathbb{P}(\tau>s)\\&=\mathbb{E}[|X_{0}|]+c\,\mathbb{E}[\tau]<\infty,\end{aligned}$$

where a representation of the expected value of non-negative integer-valued random variables is used for the last equality.

Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable M. Since the stopped process X^τ converges almost surely to X_τ, the dominated convergence theorem implies

$$\mathbb{E}[X_{\tau}]=\lim_{t\to\infty}\mathbb{E}[X_{t}^{\tau}].$$

By the martingale property of the stopped process,

$$\mathbb{E}[X_{t}^{\tau}]=\mathbb{E}[X_{0}],\quad t\in\mathbb{N}_{0},$$

hence

$$\mathbb{E}[X_{\tau}]=\mathbb{E}[X_{0}].$$

Similarly, if X is a submartingale or supermartingale, respectively, change the equality in the last two formulas to the appropriate inequality.
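The martingale property of the stopped process used in the final step can be verified exactly for a small example, by propagating the full distribution of the walk stopped at ±m (a sketch; the recursion and the function name are my own construction, not part of the proof):

```python
from collections import defaultdict

def stopped_process_means(m=3, steps=12):
    """Exact distribution of the stopped walk X^tau (symmetric +/-1 walk
    from 0, absorbed on hitting +/-m), computed by forward recursion.
    The martingale property of the stopped process gives E[X_t^tau] = 0
    for every t, matching E[X_0] = 0."""
    dist = {0: 1.0}  # point mass at the starting position
    means = []
    for _ in range(steps + 1):
        means.append(sum(x * p for x, p in dist.items()))
        nxt = defaultdict(float)
        for x, p in dist.items():
            if abs(x) == m:
                nxt[x] += p          # already stopped: state is absorbing
            else:
                nxt[x + 1] += p / 2  # fair step up
                nxt[x - 1] += p / 2  # fair step down
        dist = dict(nxt)
    return means
```

All entries of the returned list are zero, i.e., E[X_t^τ] = E[X_0] for every t, exactly as the proof requires.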