Composite NUV Priors and Applications
Paperback

$266.99

Normal with unknown variance (NUV) priors are a central idea of sparse Bayesian learning and allow variational representations of non-Gaussian priors. More specifically, such variational representations can be seen as parameterized Gaussians, wherein the parameters are generally unknown. The advantage is apparent: for fixed parameters, NUV priors are Gaussian, and hence computationally compatible with Gaussian models. Moreover, working with (linear-)Gaussian models is particularly attractive since the Gaussian distribution is closed under affine transformations, marginalization, and conditioning. Interestingly, the variational representation proves to be universal rather than restrictive: many common sparsity-promoting priors (among them, in particular, the Laplace prior) can be represented in this manner.

In estimation problems, parameters or variables of the underlying model are often subject to constraints (e.g., discrete-level constraints). Such constraints cannot be adequately represented by linear-Gaussian models and generally require special treatment. To handle such constraints within a linear-Gaussian setting, we extend the idea of NUV priors beyond their original use for sparsity. In particular, we study compositions of existing NUV priors, referred to as composite NUV priors, and show that many commonly used model constraints can be represented in this way.
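
As a quick illustration of the idea (a standard identity in this literature; the book's own notation and scaling may differ), the Laplace prior admits the variational NUV representation

\[
e^{-|x|} \;=\; \max_{\sigma^2 > 0} \exp\!\left(-\frac{x^2}{2\sigma^2}\right) \exp\!\left(-\frac{\sigma^2}{2}\right),
\qquad \text{since} \qquad
\min_{\sigma^2 > 0} \left( \frac{x^2}{2\sigma^2} + \frac{\sigma^2}{2} \right) \;=\; |x|,
\]

with the minimum attained at \(\sigma^2 = |x|\). For fixed \(\sigma^2\), the right-hand side is an unnormalized Gaussian in \(x\), which is what makes such priors computationally compatible with Gaussian models: estimation can alternate between Gaussian inference with \(\sigma^2\) held fixed and an update of \(\sigma^2\).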

In Shop: Out of stock
Shipping & Delivery

$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout

MORE INFO
Format: Paperback
Publisher: Hartung & Gorre
Date: 19 August 2022
Pages: 276
ISBN: 9783866287686
