Learning Apache Spark 2
Paperback

$116.99

This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. While most books will have gone through an editing process, some may not have, so please bear this in mind before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.

Learn about the fastest-growing open source project in the world, and find out how it revolutionizes big data analytics

About This Book

* An exclusive guide to getting up and running with fast data processing using Apache Spark
* Explore what is possible with Apache Spark through real-world use cases
* A one-stop solution for performing efficient data processing in real time

Who This Book Is For

This guide is for big data engineers, analysts, architects, software engineers, and technical managers who need to perform efficient data processing on Hadoop in real time. Basic familiarity with Java or Scala will be helpful. Readers are expected to come from mixed backgrounds, typically engineering or data science, with no prior Spark experience, and to want to understand how Spark can help them on their analytics journey.

What You Will Learn

* Get an overview of big data analytics and its importance for organizations and data professionals
* Delve into Spark to see how it differs from existing processing platforms
* Understand the intricacies of various file formats and how to process them with Apache Spark
* Learn how to deploy Spark with YARN, Mesos, or a standalone cluster manager
* Learn the concepts of Spark SQL, SchemaRDD, and caching, and work with Hive and Parquet file formats
* Understand the architecture of Spark MLlib and some of the off-the-shelf algorithms that ship with Spark
* Get introduced to the deployment and usage of SparkR
* Walk through the importance of graph computation and the graph processing systems available on the market
* See a real-world example of Spark by building a recommendation engine using ALS
* Use a telco data set to predict customer churn using random forests
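To give a flavour of the programming model the book teaches, here is a stdlib-only sketch of the classic Spark word count. It mimics the flatMap → map → reduceByKey pipeline on a plain Python list rather than a real RDD, so it runs without a Spark installation; the sample lines are invented for illustration.

```python
# Spark expresses computation as chained transformations on distributed
# collections (RDDs). This sketch imitates the word-count flow —
# flatMap -> map -> reduceByKey — on an ordinary list, purely to
# illustrate the model; no Spark cluster is involved.
lines = ["spark makes big data simple", "big data big insights"]

# flatMap: split each line into individual words
words = [w for line in lines for w in line.split()]

# map: pair each word with an initial count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

print(counts["big"])  # 3
```

In real Spark the same logic is written as `rdd.flatMap(...).map(...).reduceByKey(...)` and executed lazily across a cluster; the shape of the computation is identical.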

In Detail

The Spark juggernaut keeps rolling, gaining more momentum every day. Spark provides key capabilities in the form of Spark SQL, Spark Streaming, Spark ML, and GraphX, all accessible via Java, Scala, Python, and R. Deploying these capabilities is crucial, whether on a standalone framework or as part of an existing Hadoop installation configured with YARN and Mesos. The next part of the journey after installation is using the key components: APIs, clustering, machine learning APIs, data pipelines, and parallel programming. It is important to understand why each framework component matters, how widely it is used, how stable it is, and what its pertinent use cases are. Once we understand the individual components, we will work through a couple of real-life advanced analytics examples, such as building a recommendation system and predicting customer churn. The objective of these examples is to give the reader confidence in using Spark for real-world problems.
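The recommendation example mentioned above uses ALS from Spark MLlib, which factorizes a sparse user-item rating matrix into low-rank user and item factors. The stdlib-only sketch below shows the factorization idea on an invented 3x3 toy matrix, using plain stochastic gradient descent rather than true alternating least squares, so it runs without Spark; all numbers here are illustrative.

```python
import random

# Toy user-item rating matrix; 0 means "unobserved".
ratings = [
    [5, 3, 0],
    [4, 0, 4],
    [0, 2, 5],
]
n_users, n_items, k = 3, 3, 2  # k = number of latent factors

random.seed(0)
U = [[random.random() for _ in range(k)] for _ in range(n_users)]
V = [[random.random() for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    # Predicted rating is the dot product of user and item factors.
    return sum(U[u][f] * V[i][f] for f in range(k))

lr, reg = 0.05, 0.02  # learning rate and L2 regularization
for _ in range(2000):
    for u in range(n_users):
        for i in range(n_items):
            r = ratings[u][i]
            if r == 0:
                continue  # skip unobserved cells
            err = r - predict(u, i)
            for f in range(k):
                U[u][f] += lr * (err * V[i][f] - reg * U[u][f])
                V[i][f] += lr * (err * U[u][f] - reg * V[i][f])

# After training, predictions for observed cells track the true ratings,
# and the model can also fill in the unobserved (0) cells.
print(round(predict(0, 0), 1))
```

Spark's actual ALS solves for `U` and `V` in alternating closed-form least-squares steps, which parallelizes far better across a cluster than this sequential SGD loop; the factor-model intuition is the same.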

Style and approach

With the help of practical examples and real-world use cases, this guide takes you from scratch to building efficient data applications using Apache Spark. You will learn about this data processing engine step by step, taking one aspect at a time. This highly practical guide covers working with data pipelines, DataFrames, clustering, Spark SQL, and parallel programming, among other topics, with the help of real-world use cases.

In Shop
Out of stock
Shipping & Delivery

$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout

MORE INFO
Format: Paperback
Publisher: Packt Publishing Limited
Country: United Kingdom
Date: 28 March 2017
Pages: 356
ISBN: 9781785885136
