This title is printed to order and may have been self-published, in which case we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not have, so please bear this in mind before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
This book presents results on the convergence behavior of algorithms that are vital tools for solving convex feasibility problems and common fixed point problems. According to known results, these algorithms converge to a solution when computations are exact. In this exposition the algorithms are studied taking into account computational errors, which are always present in practice; in this case convergence to a solution does not take place. The main goal, given a known level of computational error, is to determine what approximate solution can be obtained and how many iterates are needed to find it. We show that the algorithms generate a good approximate solution provided the computational errors are bounded from above by a small positive constant.
Beginning with an introduction, this monograph moves on to study:
* dynamic string-averaging methods for common fixed point problems in a Hilbert space
* dynamic string methods for common fixed point problems in a metric space
* dynamic string-averaging version of the proximal algorithm
* common fixed point problems in metric spaces
* common fixed point problems in spaces with distances of the Bregman type
* a proximal algorithm for finding a common zero of a family of maximal monotone operators
* subgradient projection algorithms for convex feasibility problems in Hilbert spaces (see the illustrative sketch below)
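To give a concrete (and deliberately simple) feel for the kind of behavior described above, the sketch below runs a cyclic subgradient projection method on a convex feasibility problem defined by half-space constraints, with each step perturbed by an error of norm at most delta. This is an illustrative example only, not code or notation from the book: the function name subgradient_projection, the parameter delta, and the test problem are all introduced here for demonstration. With delta = 0 the iterates reach the feasible set; with a small positive delta they only settle in a neighborhood of it, which is the phenomenon the monograph quantifies.

```python
import numpy as np

def subgradient_projection(A, b, x0, delta=0.0, iters=200, seed=0):
    """Cyclic subgradient projections onto {x : A @ x <= b}, with each
    iterate perturbed by a computational error of norm at most delta."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = A.shape[0]
    for k in range(iters):
        i = k % m                                  # cycle through the constraints
        violation = A[i] @ x - b[i]
        if violation > 0.0:
            # exact subgradient projection step onto the violated half-space
            x = x - (violation / (A[i] @ A[i])) * A[i]
        if delta > 0.0:
            # inject a bounded computational error
            e = rng.standard_normal(x.shape)
            x = x + delta * e / np.linalg.norm(e)
    return x

if __name__ == "__main__":
    # feasibility problem: find x with x1 + x2 <= 1 and -x1 + 2*x2 <= 2
    A = np.array([[1.0, 1.0], [-1.0, 2.0]])
    b = np.array([1.0, 2.0])
    x0 = np.array([5.0, 5.0])
    for delta in (0.0, 1e-3, 1e-1):
        x = subgradient_projection(A, b, x0, delta=delta)
        residual = np.maximum(A @ x - b, 0.0).max()
        print(f"delta={delta:g}: max constraint violation = {residual:.2e}")
```

Running it prints the maximum constraint violation for a few error levels, showing how the quality of the approximate solution degrades as the error bound grows.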