

Principal Seminar of the Department of Probability Theory, Moscow State University
March 15, 2017 16:45, Moscow, MSU, auditorium 1224






Optimal stopping for Lévy processes with random observations
S. I. Boyarchenko, S. Z. Levendorskii

Abstract:
In standard optimal stopping problems, actions are artificially restricted to the moments at which costs or benefits are observed. In standard experimentation and learning models based on two-armed Poisson bandits, an action can be taken between two sequential observations, but these models do not recognize that the timing of decisions depends not only on the arrival rate of observations but also on the stochastic dynamics of costs or benefits.
This paper demonstrates that both the size of stochastic breakdowns or breakthroughs and beliefs about their arrival rates determine the optimal abandonment or adoption rules.
We present models of projects with random breakdowns and of investment into a startup with random breakthroughs. The arrival of breakdowns (respectively, breakthroughs) is modeled as a Poisson process, and the cost of breakdowns (respectively, the profitability of breakthroughs) follows a Lévy process independent of the Poisson process. In the model with breakdowns, the parameter of the Poisson process takes one of two values, and its true value is initially unknown. Given a prior, posterior beliefs about the true value of the parameter are updated according to Bayes' rule.
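Concretely, in the standard two-point setup for an unknown Poisson intensity (the notation below is ours, not taken from the talk), Bayes' rule yields a deterministic downward drift of beliefs between arrivals and an upward jump at each arrival:

```latex
% Unknown intensity \lambda \in \{\lambda_L, \lambda_H\}, \lambda_H > \lambda_L;
% p_t = posterior probability that \lambda = \lambda_H.
\[
  dp_t = -p_t(1-p_t)(\lambda_H - \lambda_L)\,dt
  \quad \text{(no arrival on } [t, t+dt] \text{)},
\]
\[
  p_{t} = \frac{p_{t-}\,\lambda_H}{p_{t-}\,\lambda_H + (1-p_{t-})\,\lambda_L}
  \quad \text{(a breakdown arrives at } t \text{)}.
\]
```

Thus the longer no breakdown is observed, the more weight the decision maker places on the low-intensity value $\lambda_L$, while each observed breakdown shifts beliefs toward $\lambda_H$.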
Our model shows that if it is not optimal to act at the time of the news arrival, then it is optimal to fix a time $T(x)$ that depends on the observed realization $x$ of the shock, and to exercise the option at time $T(x)$ unless a new piece of information arrives earlier. We study the regularity of the solutions and formulate a dichotomy: either the optimal exercise policy is regular at the boundary of the inaction region at the moment of the last breakdown and the smooth-pasting principle holds, or the optimal exercise policy has a jump and the value function has a kink at the boundary of the inaction region.
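The dichotomy can be stated in the usual language of free-boundary problems (a generic formulation under our notation, with $V$ the value function, $G$ the exercise payoff, and $b$ the boundary of the inaction region; the abstract itself does not give these formulas):

```latex
% Regular case: value matching and smooth pasting at the boundary b.
\[
  V(b) = G(b), \qquad V'(b) = G'(b).
\]
% Irregular case: value matching holds, but the one-sided derivatives
% disagree, so V has a kink at b.
\[
  V(b) = G(b), \qquad V'(b-) \neq G'(b).
\]
```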
Materials:
mgu_march17c.pdf (344.8 Kb)

