Handbook of Econometrics, Volumes 1-5, Chapter 51

Chapter 51

STRUCTURAL ESTIMATION OF MARKOV DECISION PROCESSES

JOHN RUST
University of Wisconsin

Contents

1. Introduction
2. Solving MDPs via dynamic programming: a brief review
   2.1. Finite-horizon dynamic programming and the optimality of Markovian decision rules
   2.2. Infinite-horizon dynamic programming and Bellman's equation
   2.3. Bellman's equation, contraction mappings and optimality
   2.4. A geometric series representation for MDPs
   2.5. Overview of solution methods
3. Econometric methods for discrete decision processes
   3.1. Alternative models of the error term
   3.2. Maximum likelihood estimation of DDPs
   3.3. Alternative estimation methods: finite-horizon DDP problems
   3.4. Alternative estimation methods: infinite-horizon DDPs
   3.5. The identification problem
4. Empirical applications
   4.1. Optimal replacement of bus engines
   4.2. Optimal retirement from a firm
References

This is an abridged version of a monograph, Stochastic Decision Processes: Theory, Computation and Estimation, written for the Leif Johansen lectures at the University of Oslo in the fall of 1991. I am grateful for generous financial support from the Central Bank of Norway and the University of Oslo, and for comments from John Dagsvik, Peter Frenger and Steinar Strøm.

Handbook of Econometrics, Volume IV, edited by R.F. Engle and D.L. McFadden. © 1994 Elsevier Science. All rights reserved.

1. Introduction

Markov decision processes (MDPs) provide a broad framework for modelling sequential decision making under uncertainty. MDPs have two sorts of variables: state variables s_t and control variables d_t, both of which are indexed by time t = 0, 1, 2, 3, ..., T, where the horizon T may be infinity.
A decision-maker, or agent, can be represented by a set of primitives (u, p, β), where u(s_t, d_t) is a utility function representing the agent's preferences at time t, p(s_{t+1} | s_t, d_t) is a Markov transition probability representing the agent's subjective beliefs about uncertain future states, and β ∈ (0, 1) is the agent's discount factor.
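To make the primitives (u, p, β) concrete, the following sketch solves a small finite-horizon MDP by backward induction, computing the value function and the optimal Markovian decision rule. Every number in it (the two states, two actions, utilities, transition probabilities, discount factor, and horizon) is a hypothetical illustration, not taken from the chapter.

```python
import numpy as np

# Hypothetical primitives (u, p, beta) for a two-state, two-action MDP.
n_states, n_actions, T = 2, 2, 5
beta = 0.95  # discount factor

# u[s, d]: per-period utility of choosing control d in state s
u = np.array([[1.0, 0.0],
              [0.5, 2.0]])

# p[d, s, s2]: Markov transition probability P(s' = s2 | s, d)
p = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.1, 0.9]]])

# Backward induction: V_t(s) = max_d { u(s, d) + beta * E[V_{t+1}(s') | s, d] }
V = np.zeros(n_states)                        # terminal value V_{T} = 0
policy = np.zeros((T, n_states), dtype=int)   # optimal decision rule d_t(s)
for t in reversed(range(T)):
    # Q[s, d] = u(s, d) + beta * sum_{s'} p(s' | s, d) * V(s')
    Q = u + beta * np.einsum('dsk,k->sd', p, V)
    policy[t] = Q.argmax(axis=1)
    V = Q.max(axis=1)

print(V)       # value function at t = 0
print(policy)  # period-by-period Markovian decision rule
```

Because the maximand at each period depends only on the current state s_t, the resulting decision rule is Markovian, which is the optimality property reviewed in Section 2.1.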
