This title is printed to order and may have been self-published, in which case we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not, so please be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
Learn how to use the Apache Hadoop projects, including MapReduce, HDFS, Apache Hive, Apache HBase, Apache Kafka, Apache Mahout, and Apache Solr. From setting up the environment to running sample applications, each chapter in this book is a practical tutorial on using an Apache Hadoop ecosystem project.
While several books on Apache Hadoop are available, most cover only the core projects, MapReduce and HDFS, and none discusses the wider Apache Hadoop ecosystem or how its projects work together as a cohesive big data development platform.
What You Will Learn:
Set up the environment in Linux for Hadoop projects using Cloudera Hadoop Distribution CDH 5
Run a MapReduce job (see the first code sketch below)
Store data with Apache Hive and Apache HBase
Index data in HDFS with Apache Solr
Develop a Kafka messaging system
Stream logs to HDFS with Apache Flume
Transfer data from a MySQL database to Hive, HDFS, and HBase with Sqoop
Create a Hive table over Apache Solr
Develop a Mahout user recommender system (see the second code sketch below)
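As a taste of the tutorial style, here is a minimal sketch of the classic word-count MapReduce job, written against the Hadoop 2.x Java API that CDH 5 ships with. The class name and the input and output paths are illustrative placeholders, not code taken from the book.

// Minimal word-count sketch against the Hadoop 2.x MapReduce API (as bundled with CDH 5).
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in an input line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (must not exist)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a jar, a job like this would typically be submitted with the standard launcher, for example: hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output (paths hypothetical); the output directory must not already exist in HDFS.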
Who This Book Is For: Apache Hadoop developers. Prerequisite knowledge of Linux and some knowledge of Hadoop are required.
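Likewise, the Mahout recommender topic maps onto Mahout's Taste collaborative-filtering API. A minimal user-based recommender along those lines might look like the sketch below; the data file name, neighborhood size, and user ID are placeholders rather than values from the book.

// Minimal user-based recommender sketch using Mahout's Taste API.
// ratings.csv, the neighborhood size, and the user ID are placeholders.
import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class UserRecommenderSketch {
  public static void main(String[] args) throws Exception {
    // Each line of ratings.csv: userID,itemID,preference
    DataModel model = new FileDataModel(new File("ratings.csv"));

    // Similarity between users based on their co-rated items.
    UserSimilarity similarity = new PearsonCorrelationSimilarity(model);

    // Consider the 10 nearest neighbors of each user.
    UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);

    Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

    // Top 3 recommendations for user 1 (placeholder ID).
    List<RecommendedItem> recommendations = recommender.recommend(1, 3);
    for (RecommendedItem item : recommendations) {
      System.out.println(item.getItemID() + " : " + item.getValue());
    }
  }
}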
$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout