This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not, so please be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
Vision has to deal with uncertainty. The sensors are noisy, the prior knowledge is uncertain or inaccurate, and the problems of recovering scene information from images are often ill-posed or underconstrained. This research monograph, based on Richard Szeliski’s Ph.D. dissertation at Carnegie Mellon University, presents a Bayesian model for representing and processing uncertainty in low-level vision. Probabilistic models have recently been proposed and used in vision, and Szeliski’s method has several distinguishing features that make this monograph important and attractive. First, he presents a systematic Bayesian probabilistic estimation framework in which the prior model, the sensor model, and the posterior model can each be defined and computed. Second, his method explicitly represents and computes not only the best estimates but also their level of uncertainty, using second-order statistics, i.e., the variance and covariance. Third, the algorithms developed are computationally tractable for dense fields, such as depth maps constructed from stereo or range-finder data, rather than just for sparse data sets. Finally, Szeliski demonstrates successful applications of the method to several real-world problems, including the generation of fractal surfaces, motion estimation without correspondence using sparse range data, and incremental depth from motion.
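To make the prior/sensor/posterior structure described above concrete, here is a minimal, hypothetical sketch in Python (not code from the book): a 1-D field is estimated from sparse, noisy point measurements using a Gaussian smoothness prior as the prior model and Gaussian measurement noise as the sensor model. The resulting Gaussian posterior yields both a best (MAP) estimate and per-sample uncertainties from its covariance. All parameter values and measurement locations here are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D illustration of Bayesian estimation of a dense field;
# sizes and parameters are assumptions chosen for clarity, not from the book.
n = 50                                   # number of field samples (e.g. a depth profile)

# Prior model: first-order smoothness ("membrane") prior with precision A.
# D takes first differences; A = lam * D^T D penalizes rough fields.
lam = 10.0                               # assumed smoothness weight
D = np.diff(np.eye(n), axis=0)
A = lam * D.T @ D

# Sensor model: sparse, noisy point measurements d = H u + noise.
idx = np.array([5, 20, 35, 45])          # assumed measurement locations
H = np.eye(n)[idx]
sigma = 0.1                              # assumed sensor noise std. dev.
true_u = np.sin(np.linspace(0, np.pi, n))
rng = np.random.default_rng(0)
d = H @ true_u + sigma * rng.standard_normal(len(idx))

# Posterior model: with Gaussian prior and sensor models, the posterior is
# Gaussian with precision A + H^T H / sigma^2. Its mean is the MAP estimate;
# the diagonal of its covariance gives the second-order uncertainty.
post_precision = A + (H.T @ H) / sigma**2
post_cov = np.linalg.inv(post_precision)
u_map = post_cov @ (H.T @ d) / sigma**2
u_std = np.sqrt(np.diag(post_cov))       # per-sample standard deviations

print("MAP estimate at measured points:", u_map[idx].round(3))
print("posterior std. dev. range: [%.3f, %.3f]" % (u_std.min(), u_std.max()))
```

Note that the uncertainty shrinks near the measurement locations and grows between them, which is exactly the kind of explicit second-order information the monograph argues for.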