Probability in the Engineering and Informational Sciences

Research Article

OPTIMAL MIXING OF MARKOV DECISION RULES FOR MDP CONTROL

Dinard van der Laan (a1)

(a1) Tinbergen Institute and Department of Econometrics and Operations Research, VU University, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands. E-mail: dalaan@feweb.vu.nl

Abstract

In this article we study Markov decision process (MDP) problems under the restriction that, at each decision epoch, only a finite set ${\cal D}$ of given Markov decision rules is admissible. For example, ${\cal D}$ could consist of easily implementable decision rules. Moreover, many open-loop control problems can be modeled as an MDP with such a restriction on the admissible decision rules. Within this class of policies, optimal policies are generally nonstationary, and it is difficult to prove that a particular policy is optimal. We give an example with two admissible decision rules, ${\cal D}$ = {d1, d2}, for which we conjecture that the nonstationary periodic Markov policy determined by its period cycle (d1, d1, d2, d1, d2, d1, d2, d1, d2) is optimal. This conjecture is supported by results that we obtain on the structure of optimal ${\cal D}$ Markov policies in general. We also present numerical results that give additional support for the conjecture in the particular example we consider.
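To illustrate the kind of policy the abstract describes, the following is a minimal sketch of evaluating the long-run average reward of a periodic Markov policy that cycles through a fixed sequence of decision rules. The two-state MDP below (transition matrices `P`, reward vectors `r`) is entirely hypothetical and is not the example from the article; it only shows how a period cycle such as (d1, d1, d2, d1, d2, d1, d2, d1, d2) can be compared numerically against the two stationary policies.

```python
import numpy as np

# Hypothetical 2-state MDP; not the article's example.
# P[d] is the transition matrix and r[d] the expected-reward vector
# when decision rule d is applied.
P = {
    "d1": np.array([[0.9, 0.1],
                    [0.6, 0.4]]),
    "d2": np.array([[0.2, 0.8],
                    [0.5, 0.5]]),
}
r = {
    "d1": np.array([1.0, 0.0]),
    "d2": np.array([0.0, 2.0]),
}

def average_reward(cycle, n_cycles=20000):
    """Long-run average reward per epoch of the periodic Markov policy
    that repeats the given cycle of decision rules, starting from the
    state distribution (1, 0)."""
    mu = np.array([1.0, 0.0])       # current state distribution
    total = 0.0
    for _ in range(n_cycles):
        for d in cycle:
            total += mu @ r[d]      # expected reward at this epoch
            mu = mu @ P[d]          # distribution after applying rule d
    return total / (n_cycles * len(cycle))

# The conjectured cycle from the abstract vs. the two stationary policies.
conjectured = ("d1", "d1", "d2", "d1", "d2", "d1", "d2", "d1", "d2")
for cycle in [conjectured, ("d1",), ("d2",)]:
    print(cycle, round(average_reward(cycle), 4))
```

For this toy instance, mixing the two rules periodically can outperform either stationary policy, which is the phenomenon the article studies; the optimal cycle for the article's actual example is, of course, the subject of its conjecture.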

(Online publication May 17, 2011)