Gradient Descent, Stochastic Optimization, and Other Tales
Paperback

$140.99

The goal of this book is to debunk and dispel the magic behind black-box optimizers and stochastic optimizers, and to build a solid foundation for how and why these techniques work. It crystallizes this knowledge by deriving, from simple intuitions, the mathematics behind the strategies. The book does not shy away from addressing both the formal and the informal aspects of gradient descent and stochastic optimization methods, and in doing so it aims to give readers a deeper understanding of these techniques, as well as the when, the how, and the why of applying them.

Gradient descent is one of the most popular algorithms for performing optimization and by far the most common way to optimize machine learning tasks. Its stochastic version has received growing attention in recent years, particularly for training deep neural networks, where the gradient computed from a single sample or a small batch of samples is used to save computational resources and to escape saddle points. In 1951, Robbins and Monro published "A Stochastic Approximation Method", one of the first modern treatments of stochastic optimization, which estimates local gradients with a new batch of samples. Stochastic optimization has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this book is to give a self-contained introduction to the concepts and mathematical tools of gradient descent and stochastic optimization.
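As a taste of the subject matter, here is a minimal sketch of the minibatch stochastic gradient descent the blurb describes, applied to a least-squares problem. The code is illustrative only (Python with NumPy) and is not taken from the book; the learning rate, batch size, and synthetic data are assumptions chosen for the demonstration.

    # Minibatch SGD on a synthetic least-squares problem (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic linear-regression data: y = X @ w_true + noise.
    n, d = 1000, 5
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    w = np.zeros(d)   # initial parameters
    lr = 0.1          # step size (assumed for the demo)
    batch = 32        # minibatch size (assumed for the demo)

    for step in range(500):
        # Estimate the gradient from a random minibatch rather than the
        # full dataset, which saves computation per step.
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        grad = (2.0 / batch) * Xb.T @ (Xb @ w - yb)
        w -= lr * grad  # gradient descent update

    print("parameter error:", np.linalg.norm(w - w_true))

With a batch size equal to n this reduces to full-batch gradient descent, while a batch of a single sample gives an update in the classic Robbins-Monro style.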

In Shop
Out of stock
Shipping & Delivery

$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout

MORE INFO
Format: Paperback
Publisher: Eliva Press
Date: 22 July 2022
Pages: 96
ISBN: 9789994981557
