Dynamic Information Acquisition
We consider optimal information acquisition for the control of linear discrete-time random systems with noisy observations, and we apply the findings to the problem of dynamically implementing emissions-reduction targets. The optimal policy, which is obtained in closed form, depends on a single composite parameter that determines the criticality of the system. For subcritical systems, it is optimal to perform “noise leveling,” that is, to reduce the variance of the uncertainty to an optimal level and then keep it constant through a steady feed of information updates. For critical systems, the optimal policy is “noise attenuation,” that is, to substantially decrease the variance once and never acquire information thereafter. Finally, for supercritical systems, acquiring information is never in the decision maker’s best interest. In each case, an explicit expression for the value function is obtained. The criticality of the system, and therefore the tradeoff between spending resources on the control or on information to improve the control, is governed by a “policy parameter” that captures the importance a decision maker places on uncertainty reduction. The dependence of system performance on this policy parameter is illustrated with a practical climate-control problem in which a regulator imposes state-contingent taxes to attain emissions targets probabilistically.
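The three-regime structure of the optimal policy can be sketched as follows. This is a minimal illustration, not the paper’s construction: the unit threshold, the `Regime` names, and the `classify` helper are assumptions introduced for exposition; the paper derives the actual cutoffs from the system and cost parameters.

```python
from enum import Enum


class Regime(Enum):
    """Policy classes described in the abstract."""
    SUBCRITICAL = "noise leveling"       # steady feed of updates holds variance at a target level
    CRITICAL = "noise attenuation"       # one substantial variance reduction, then no acquisition
    SUPERCRITICAL = "no acquisition"     # information is never worth its cost


def classify(criticality: float, threshold: float = 1.0) -> Regime:
    """Map the composite criticality parameter to a policy class.

    The threshold value 1.0 is a placeholder assumption; in the paper the
    boundary between regimes follows from the closed-form solution.
    """
    if criticality < threshold:
        return Regime.SUBCRITICAL
    if criticality == threshold:
        return Regime.CRITICAL
    return Regime.SUPERCRITICAL
```

For example, under the placeholder threshold, `classify(0.5)` falls in the subcritical (noise-leveling) regime, while `classify(2.0)` is supercritical, so no information is ever acquired.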
Keywords: Bayesian updating; Bellman equation; dynamic programming; emissions control; information acquisition; infinite-horizon optimal control; linear-quadratic systems; Markov decision problems; optimal filtering.
Record created on 2015-10-12, modified on 2016-08-09