Signal Decomposition Using Masked Proximal Operators
Paperback


$104.99

This title is printed to order. This book may have been self-published; if so, we cannot guarantee the quality of the content. Most books will have gone through an editing process, but some may not. We therefore suggest that you be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.

The decomposition of a time series signal into components is an age-old problem, with many different approaches proposed, including traditional filtering and smoothing, seasonal-trend decomposition, Fourier and other decompositions, PCA and newer variants such as nonnegative matrix factorization, various statistical methods, and many heuristic methods.

This monograph covers the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. It presents a general framework in which the components are defined by loss functions (which can include constraints), and the signal decomposition is carried out by minimizing the sum of the losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation, but it also includes many other interesting cases.
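The framework described above can be illustrated with a minimal two-component sketch: the signal is split into a residual and a smooth trend, each defined by a loss function, and the decomposition minimizes the sum of the losses subject to the components summing to the signal. This is our own toy example, not the monograph's notation; the function and parameter names are ours.

```python
import numpy as np

def decompose_two_components(y, lam=10.0):
    """Toy instance of loss-based signal decomposition: y = x1 + x2 with
    phi1(x1) = ||x1||^2 (small residual) and phi2(x2) = lam * ||D x2||^2
    (smooth trend, D = first-difference operator). Substituting
    x1 = y - x2 turns the sum-of-losses objective into a linear
    least-squares problem with a closed-form solution."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                       # (n-1) x n difference matrix
    x2 = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)   # smooth component
    x1 = y - x2                                          # residual component
    return x1, x2

y = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
x1, x2 = decompose_two_components(y, lam=5.0)
# by construction the components sum exactly to the signal
```

Larger `lam` weights the smoothness loss more heavily, producing a flatter trend component and a larger residual.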

Summarizing and clarifying prior results, two distributed optimization methods for computing the decomposition are presented; these find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each component loss function, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., they handle each component separately. Also included are tractable methods for evaluating the masked proximal operators of some loss functions that have not previously appeared in the literature.
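To make the masked proximal operator concrete, here is a sketch for one simple loss, the scaled sum of squares, where it has a closed form: only entries marked as known contribute to the quadratic coupling term, while unknown entries are driven by the loss alone. This is an illustrative assumption of ours, not code from the monograph, and the names are hypothetical.

```python
import numpy as np

def masked_prox_sum_squares(v, known, lam=1.0):
    """Masked proximal operator of phi(x) = (lam/2) * ||x||^2.

    Solves  minimize (lam/2)||x||^2 + (1/2)||known * (x - v)||^2,
    where the quadratic term couples x to v only at entries with
    known == True. At unknown positions the loss alone is minimized,
    which for this phi gives zero."""
    x = np.zeros_like(v, dtype=float)
    x[known] = v[known] / (1.0 + lam)   # known entries: shrink toward v
    return x                            # unknown entries stay at 0

v = np.array([2.0, np.nan, 4.0])
known = ~np.isnan(v)                    # mask of observed entries
x = masked_prox_sum_squares(np.nan_to_num(v), known, lam=1.0)
# x -> [1.0, 0.0, 2.0]
```

Losses without a closed-form masked prox would instead be handled by a small numerical solve, but the interface, a vector argument plus a mask of known entries, stays the same.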

In Shop
Out of stock
Shipping & Delivery

$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout

MORE INFO
Format: Paperback
Publisher: now publishers Inc
Country: United States
Date: 16 January 2023
Pages: 92
ISBN: 9781638281023
