This paper studies dynamic mechanism design in a quasilinear Markovian environment, analyzing a direct mechanism model of a principal-agent framework in which the agent is allowed to exit at any period. The agent's private information, referred to as his state, evolves over time. At each period, the agent decides whether to stop or continue and what to report to the principal. The principal, in turn, chooses decision rules consisting of an allocation rule and a set of payment rules to maximize her ex-ante expected payoff. To influence the agent's stopping decision, one of the terminal payment rules is posted-price, i.e., it depends only on the agent's realized stopping time. We define incentive compatibility in this dynamic environment in terms of Bellman equations, which we then simplify by establishing a one-shot deviation principle. Given the optimality of the stopping rule, a sufficient condition for incentive compatibility is obtained by constructing the state-dependent payment rules in terms of a set of functions parameterized by the allocation rule. A necessary condition is derived from the envelope theorem, which explicitly formulates the state-dependent payment rules in terms of the allocation rules. A class of monotone environments is considered in which the optimal stopping decision is characterized by a threshold rule. The posted-price payment rules are then pinned down, up to a constant, by the allocation rule and the threshold function. The incentive compatibility constraints restrict the design of the posted-price payment rule through a regularity condition.
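As a minimal illustrative sketch of the setting described above (the notation here is ours, not the paper's): with state $x_t$, allocation rule $a_t$, flow payment $c_t$, and a posted-price terminal payment $p(t)$ that depends only on the stopping time $t$, the agent's truthful continuation value in a quasilinear Markovian stopping environment would satisfy a Bellman equation of the form

```latex
V_t(x_t) \;=\; \max\Big\{\,
  \underbrace{p(t)}_{\text{stop at }t},\;
  \underbrace{u_t\big(x_t, a_t(x_t)\big) + c_t(x_t)
    + \mathbb{E}\big[\, V_{t+1}(X_{t+1}) \,\big|\, x_t \,\big]}_{\text{continue and report truthfully}}
\Big\}.
```

Incentive compatibility then requires that truthful reporting, together with the induced stopping decision, attain this maximum at every state; a one-shot deviation principle of the kind invoked in the abstract reduces checking all deviation strategies to checking single-period deviations from this recursion.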