This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not, so please bear this in mind before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
Dynamic programming (DP) provides a powerful framework for modeling complex decision problems in which uncertainty is resolved and decisions are made over time, but it is difficult to scale to complex problems. Monte Carlo simulation methods, by contrast, typically scale well, yet they do not provide a good way to identify an optimal policy or a performance bound. To address these limitations, the authors review the information relaxation approach, which reduces a complex stochastic DP to a series of scenario-specific deterministic optimization problems solved within a Monte Carlo simulation.

Written in a tutorial style, the book summarizes the key ideas of information relaxation methods for stochastic DPs and demonstrates their use in several examples. It serves as a one-stop shop for researchers seeking to learn the key ideas and tools of information relaxation methods, and gives students, researchers and practitioners a comprehensive overview of a powerful technique.
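To make the idea in the description concrete, here is a minimal sketch, not taken from the book, of the simplest information relaxation (perfect information, with no penalty) for a hypothetical optimal-stopping example: simulating a heuristic policy forward gives a lower bound on the optimal value, while solving each sampled scenario as a deterministic problem with all uncertainty revealed gives an upper bound. The model, parameters and function names below are illustrative assumptions only.

# Information relaxation bound (perfect information, zero penalty) for a toy
# optimal-stopping problem: sell an asset within T periods, where the price
# follows a random walk.  Hypothetical example for illustration.
import random

random.seed(0)
T = 20          # horizon
GAMMA = 0.95    # discount factor
N_PATHS = 10_000

def simulate_path():
    """One scenario: a price path driven by i.i.d. Gaussian shocks."""
    price, path = 100.0, []
    for _ in range(T):
        price = max(0.0, price + random.gauss(0.0, 5.0))
        path.append(price)
    return path

def heuristic_value(path, threshold=105.0):
    """Lower bound piece: discounted payoff of a simple threshold policy
    (stop the first time the price exceeds the threshold, else stop at T)."""
    for t, p in enumerate(path):
        if p >= threshold or t == T - 1:
            return (GAMMA ** t) * p

def clairvoyant_value(path):
    """Upper bound piece: with the whole path revealed up front, the
    scenario-specific deterministic problem is just a max over stopping times."""
    return max((GAMMA ** t) * p for t, p in enumerate(path))

paths = [simulate_path() for _ in range(N_PATHS)]
lower = sum(heuristic_value(p) for p in paths) / N_PATHS
upper = sum(clairvoyant_value(p) for p in paths) / N_PATHS
print(f"heuristic policy (lower bound) : {lower:.2f}")
print(f"perfect-info relaxation (upper): {upper:.2f}")

The two averages bracket the optimal value of the stochastic problem. With a zero penalty this gap can be loose; the methods surveyed in the book add penalties that charge the clairvoyant for using future information, tightening the upper bound.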