We will present a general framework for unconstrained adaptive optimization that encompasses standard methods, such as line search and trust region, which use stochastic function estimates and derivatives. In particular, methods that fall within this framework retain desirable practical features, such as a step acceptance criterion, trust-region radius adjustment, and the ability to use second-order models, while enjoying the same convergence rates as their deterministic counterparts. The assumptions on the stochastic derivatives are weaker than those standard in the literature, in that they are robust to the presence of outliers. The analysis is based on bounding the expected stopping time of a stochastic process that satisfies certain assumptions. Thus, this framework provides strong convergence guarantees under weaker conditions than alternative approaches in the literature. We will conclude with a discussion of some interesting open questions.
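To make the ingredients mentioned above concrete, the following is a minimal sketch, in Python, of the kind of stochastic trust-region iteration with a step acceptance criterion and radius adjustment that such frameworks cover. It is illustrative only, not the framework from the abstract: the oracles `f_est` and `grad_est`, the first-order model with a Cauchy step, and all constants (`eta`, `gamma`) are assumptions made for the example.

```python
import numpy as np

def stochastic_trust_region(f_est, grad_est, x0, delta0=1.0,
                            eta=0.1, gamma=2.0, max_iter=300):
    """Illustrative stochastic trust-region loop (not the paper's method).

    f_est and grad_est return noisy estimates of the objective and its
    gradient; a simple linear model is minimized over the trust region.
    """
    x = np.asarray(x0, dtype=float)
    delta = delta0                       # trust-region radius
    for _ in range(max_iter):
        g = grad_est(x)                  # stochastic gradient estimate
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-12 or delta < 1e-12:
            break
        s = -(delta / gnorm) * g         # Cauchy step of the linear model
        pred = delta * gnorm             # model-predicted decrease
        ared = f_est(x) - f_est(x + s)   # estimated actual decrease
        if ared >= eta * pred:           # step acceptance criterion
            x = x + s                    # accept step, expand the radius
            delta *= gamma
        else:
            delta /= gamma               # reject step, shrink the radius
    return x
```

A second-order variant would replace the linear model with a quadratic one built from a (possibly stochastic) Hessian estimate; the acceptance test and radius update stay the same, which is what makes this template fit the adaptive framework described above.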