This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not, so please be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
Classical time series methods are based on the assumption that a particular stochastic process model generates the observed data. The most commonly used assumption is that the data is a realization of a stationary Gaussian process. However, since the Gaussian assumption is a fairly stringent one, this assumption is frequently replaced by the weaker assumption that the process is wide-sense stationary and that only the mean and covariance sequence are specified. This approach of specifying the probabilistic behavior only up to second order has of course been extremely popular from a theoretical point of view because it has allowed one to treat a large variety of problems, such as prediction, filtering and smoothing, using the geometry of Hilbert spaces. While the literature abounds with a variety of optimal estimation results based on either the Gaussian assumption or the specification of second-order properties, time series workers have not always believed in the literal truth of either the Gaussian or second-order specification. They have nonetheless stressed the importance of such optimality results, probably for two main reasons: First, the results come from a rich and very workable theory. Second, the researchers often relied on a vague belief in a kind of continuity principle, according to which the results of time series inference would change only a small amount if the actual model deviated only a small amount from the assumed model.
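The continuity belief described above can be made concrete with a small numerical experiment. The following Python sketch (not from the book; the AR(1) parameters, the contamination scheme, and all function names are illustrative assumptions) compares a basic second-order estimate, the sample lag-1 autocorrelation, on an exactly Gaussian AR(1) series and on the same series after just 1% of the observations receive additive outliers:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi=0.6, sigma=1.0):
    # Gaussian AR(1): x[t] = phi * x[t-1] + e[t], e[t] ~ N(0, sigma^2)
    x = np.zeros(n)
    e = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def contaminate(x, eps=0.01, scale=10.0):
    # Additive-outlier contamination: a small fraction eps of the
    # observations receives a large Gaussian perturbation.
    y = x.copy()
    mask = rng.random(y.size) < eps
    y[mask] += rng.normal(0.0, scale, mask.sum())
    return y

def lag1_acf(x):
    # Sample lag-1 autocorrelation, a basic second-order estimate.
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

n = 20000
clean = ar1(n)
dirty = contaminate(clean)  # a "small" deviation from the Gaussian model

print(f"lag-1 autocorrelation, clean series:      {lag1_acf(clean):.3f}")
print(f"lag-1 autocorrelation, 1% outliers added: {lag1_acf(dirty):.3f}")
```

With these illustrative settings the clean estimate sits near the true value phi = 0.6, while the contaminated one is typically pulled well toward zero: a small deviation from the assumed model producing a disproportionately large change in the inference, which is exactly the scenario the continuity belief rules out.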