Books: 26

Spark

KEY FEATURES
* This book is based on the latest 2.0 version of Apache Spark and the 2.7 version of Hadoop, integrated with the most commonly used tools.
* Learn all Spark stack components, including the latest topics such as DataFrames, Datasets, GraphFrames, Structured Streaming, DataFrame-based ML Pipelines, and SparkR.
* Integrations with frameworks such as HDFS and YARN, and tools such as Jupyter, Zeppelin, NiFi, Mahout, the HBase Spark Connector, GraphFrames, H2O, and Hivemall.

BOOK DESCRIPTION
Big Data Analytics aims to provide the fundamentals of Apache Spark and Hadoop. All Spark components (Spark Core, Spark SQL, DataFrames, Datasets, conventional streaming, Structured Streaming, MLlib, and GraphX) and the Hadoop core components (HDFS, MapReduce, and YARN) are explored in depth, with implementation examples on Spark + Hadoop clusters. The industry is moving away from MapReduce to Spark, so the advantages of Spark over MapReduce are explained at length to help you reap the benefits of in-memory speeds. The DataFrames API, the Data Sources API, and the new Dataset API are explained for building big data analytical applications. Real-time data analytics using Spark Streaming with Apache Kafka and HBase is covered to help you build streaming applications. The new Structured Streaming concept is explained with an IoT (Internet of Things) use case. Machine learning techniques are covered using MLlib, ML Pipelines, and SparkR, and graph analytics is covered with the GraphX and GraphFrames components of Spark. Readers will also get an opportunity to get started with web-based notebooks such as Jupyter and Apache Zeppelin, and the data flow tool Apache NiFi, to analyze and visualize data.

WHAT YOU WILL LEARN
* Find out about and implement the tools and techniques of big data analytics using Spark on Hadoop clusters, with the wide variety of tools used alongside Spark and Hadoop
* Understand all the Hadoop and Spark ecosystem components
* Get to know all the Spark components: Spark Core, Spark SQL, DataFrames, Datasets, conventional and Structured Streaming, MLlib, ML Pipelines, and GraphX
* See batch and real-time data analytics using Spark Core, Spark SQL, and conventional and Structured Streaming
* Get to grips with data science and machine learning using MLlib, ML Pipelines, H2O, Hivemall, GraphX, and SparkR

ABOUT THE AUTHOR
Venkat Ankam has over 18 years of IT experience, including over 5 years in big data technologies, working with customers to design and develop scalable big data applications. Having worked with multiple clients globally, he has tremendous experience in big data analytics using Hadoop and Spark. He is a Cloudera Certified Hadoop Developer and Administrator and a Databricks Certified Spark Developer. He is the founder and presenter of several Hadoop and Spark meetup groups globally and loves to share knowledge with the community. Venkat has delivered hundreds of trainings, presentations, and white papers in the big data sphere. While this is his first attempt at writing a book, many more books are in the pipeline.

TABLE OF CONTENTS
1. Big Data Analytics at a 10,000-Foot View
2. Getting Started with Apache Hadoop and Apache Spark
3. Deep Dive into Apache Spark
4. Big Data Analytics with Spark SQL, DataFrames, and Datasets
5. Real-Time Analytics with Spark Streaming and Structured Streaming
6. Notebooks and Dataflows with Spark and Hadoop
7. Machine Learning with Spark and Hadoop
8. Building Recommendation Systems with Spark and Mahout
9. Graph Analytics with GraphX
10. Interactive Analytics with SparkR
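As a rough illustration of the Spark 2.0 DataFrame and Data Sources APIs this description refers to, here is a minimal PySpark sketch (not taken from the book); the file path and column names are hypothetical.

```python
# A minimal sketch of the Spark 2.0 DataFrame API; the HDFS path and
# column names ("region", "amount") are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe-example").getOrCreate()

# Read a CSV file into a DataFrame via the Data Sources API.
df = spark.read.csv("hdfs:///data/sales.csv", header=True, inferSchema=True)

# Declarative transformations are optimized by Catalyst before execution.
df.groupBy("region").sum("amount").show()

spark.stop()
```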
2016
This book fills the need for an accessible and holistic book on essential big data technologies. Written in lucid, simple language free from jargon and code, it provides an intuition for big data from business as well as technological perspectives. The book is designed to give the reader the intuition behind this evolving area, along with a solid toolset of the major big data processing technologies such as Hadoop, MapReduce, Spark Streaming, and NoSQL databases. A complete case study of developing a web log analyzer is included. The book also contains two primers, on cloud computing and data mining, and two tutorials, on installing Hadoop and Spark, as well as caselets drawn from real-world stories. Students across a variety of academic disciplines, including business, computer science, statistics, and engineering, who are attracted to the idea of harnessing big data for new insights can use this as a textbook. Professionals in various domains, including executives, managers, analysts, professors, doctors, and accountants, can use this book to learn in a few hours how to make the most of big data to monitor their infrastructure, discover new insights, and develop new data-based products. It is a flowing book that one can finish in one sitting, or return to again and again for insights and techniques.

Table of Contents
1. Wholeness of Big Data
2. Big Data Applications
3. Big Data Architectures
4. Distributed Systems with Hadoop
5. Parallel Programming with MapReduce
6. Advanced NoSQL Databases
7. Stream Programming with Spark
8. Data Ingest with Kafka
9. Cloud Computing Primer
10. Web Log Analyzer Development
11. Data Mining Primer
12. Appendix 1: Installing Hadoop on the AWS Cloud
13. Appendix 2: Installing Spark
2016
A practical guide to obtaining, transforming, exploring, and analyzing data using Python, MongoDB, and Apache Spark
Key Features
* Learn to use various data analysis tools and algorithms to classify, cluster, visualize, simulate, and forecast your data
* Apply machine learning algorithms to different kinds of data, such as social networks, time series, and images
* A hands-on guide to understanding the nature of data and how to turn it into insight

Book Description
Beyond buzzwords like Big Data or Data Science, there are great opportunities to innovate in many businesses by using data analysis to build data-driven products. Data analysis involves asking many questions of data in order to discover insights and generate value for a product or a service. This book explains the basic data analysis algorithms without theoretical jargon, and you'll get hands-on experience turning data into insights using machine learning techniques. You will work through data-driven innovation with several types of data, such as text, images, social network graphs, documents, and time series, and see how to implement large-scale data processing with MongoDB and Apache Spark.

What you will learn
* Acquire, format, and visualize your data
* Build an image-similarity search engine
* Generate meaningful visualizations anyone can understand
* Get started with analyzing social network graphs
* Find out how to implement sentiment text analysis
* Install data analysis tools such as Pandas, MongoDB, and Apache Spark
* Get to grips with Apache Spark
* Implement machine learning algorithms such as classification or forecasting

About the Author
Hector Cuesta is the founder and Chief Data Scientist at Dataxios, a machine intelligence research company. He holds a BA in Informatics and an M.Sc. in Computer Science. He provides consulting services for data-driven product design, with experience in a variety of industries including financial services, retail, fintech, e-learning, and human resources. In his spare time he is a robotics enthusiast.
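To give a flavor of the MongoDB-plus-Python workflow described above, here is a small sketch of my own (not from the book) that pulls documents into pandas for analysis; the database and collection names are hypothetical, and a local MongoDB instance is assumed.

```python
# Hypothetical example: load documents from a local MongoDB collection
# ("analytics.tweets") into a pandas DataFrame for exploration.
import pandas as pd
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
docs = client["analytics"]["tweets"].find({}, {"_id": 0})

# Flatten the cursor of documents into a tabular DataFrame.
df = pd.DataFrame(list(docs))
print(df.describe())
```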
2016
Analyze your data and delve deep into the world of machine learning with the latest Spark version, 2.0
About This Book
* Perform data analysis and build predictive models on huge datasets by leveraging Apache Spark
* Learn to integrate data science algorithms and techniques with the fast and scalable computing features of Spark to address big data challenges
* Work through practical examples of real-world problems with sample code snippets

Who This Book Is For
This book is for anyone who wants to leverage Apache Spark for data science and machine learning. If you are a technologist who wants to expand your knowledge to perform data science operations in Spark, a data scientist who wants to understand how algorithms are implemented in Spark, or a newcomer with minimal development experience who wants to learn about big data analytics, this book is for you!

What You Will Learn
* Consolidate, clean, and transform data acquired from various data sources
* Perform statistical analysis of data to find hidden insights
* Explore graphical techniques to see what your data looks like
* Use machine learning techniques to build predictive models
* Build scalable data products and solutions
* Start programming using the RDD, DataFrame, and Dataset APIs
* Become an expert by improving your data analytical skills

In Detail
This is the era of big data. The words "Big Data" imply big innovation and enable a competitive advantage for businesses. Apache Spark was designed to perform big data analytics at scale, so Spark is equipped with the necessary algorithms and supports multiple programming languages. Whether you are a technologist, a data scientist, or a beginner in big data analytics, this book will provide you with the skills necessary to perform statistical data analysis and data visualization, build predictive models, and create scalable data products and solutions using Python, Scala, and R.
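As a hedged sketch (not from the book) of the RDD and DataFrame APIs listed above, the PySpark snippet below expresses the same aggregation both ways; the data values are made up. Note that the typed Dataset API is available only in Scala and Java, so in Python the DataFrame plays that role.

```python
# Contrast the low-level RDD API with the declarative DataFrame API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-comparison").getOrCreate()
sc = spark.sparkContext

# RDD API: functional transformations over key-value pairs.
rdd = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
print(rdd.reduceByKey(lambda x, y: x + y).collect())

# DataFrame API: the same aggregation, expressed declaratively.
df = spark.createDataFrame(rdd, ["key", "value"])
df.groupBy("key").sum("value").show()

spark.stop()
```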
2016
Discover how data science can help you gain in-depth insight into your business – the easy way! Jobs in data science abound, but few people have the data science skills needed to fill these increasingly important roles in organizations. Data Science For Dummies is the perfect starting point for IT professionals and students interested in making sense of their organization's massive data sets and applying their findings to real-world business scenarios. From uncovering rich data sources to managing large amounts of data within hardware and software limitations, ensuring consistency in reporting, merging various data sources, and beyond, you'll develop the know-how you need to effectively interpret data and tell a story that can be understood by anyone in your organization.
* Provides a background in data science fundamentals before moving on to working with relational databases and unstructured data and preparing your data for analysis
* Details different data visualization techniques that can be used to showcase and summarize your data
* Explains both supervised and unsupervised machine learning, including regression, model validation, and clustering techniques
* Includes coverage of big data processing tools like MapReduce, Hadoop, Dremel, Storm, and Spark
It's a big, big data world out there – let Data Science For Dummies help you harness its power and gain a competitive edge for your organization.
2015
Perform real-time analytics using Spark in a fast, distributed, and scalable way
Spark is a framework used for writing fast, distributed programs. Spark solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop and its built-in tools for interactive query analysis (Spark SQL), large-scale graph processing and analysis (GraphX), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big datasets.

Fast Data Processing with Spark - Second Edition covers how to write distributed programs with Spark. The book will guide you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API to developing analytics applications and tuning them for your purposes.

Who This Book Is For
Fast Data Processing with Spark - Second Edition is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have faced problems that were too big to be dealt with on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.
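As a minimal sketch (not from the book) of the interactive Spark SQL querying mentioned above, using the Spark 1.x API that was current when this edition was published; the file path and table name are hypothetical.

```python
# Interactive SQL over a big dataset with the Spark 1.x SQLContext API.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="sql-example")
sqlContext = SQLContext(sc)

# Load structured data and expose it to SQL as a temporary table.
df = sqlContext.read.json("hdfs:///data/events.json")
df.registerTempTable("events")

# Query the dataset interactively with plain SQL.
sqlContext.sql("SELECT type, COUNT(*) AS n FROM events GROUP BY type").show()

sc.stop()
```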
2015
Hadoop is mostly written in Java, but that doesn't exclude the use of other programming languages with this distributed storage and processing framework, particularly Python. With this concise book, you'll learn how to use Python with the Hadoop Distributed File System (HDFS), MapReduce, the Apache Pig platform and Pig Latin script, and the Apache Spark cluster-computing framework.

Authors Zachary Radtka and Donald Miner from the data science firm Miner & Kasch take you through the basic concepts behind Hadoop, MapReduce, Pig, and Spark. Then, through multiple examples and use cases, you'll learn how to work with these technologies by applying various Python tools.
* Use the Python library Snakebite to access HDFS programmatically from within Python applications
* Write MapReduce jobs in Python with mrjob, the Python MapReduce library
* Extend Pig Latin with user-defined functions (UDFs) in Python
* Use the Spark Python API (PySpark) to write Spark programs with Python
* Learn how to use the Luigi Python workflow scheduler to manage MapReduce jobs and Pig scripts
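As a hedged sketch (not taken from the book) of the kind of mrjob MapReduce job described in the list above, here is a classic word count; the input file name is hypothetical, and the job can be run locally with `python word_count.py input.txt`.

```python
# A word-count MapReduce job written with mrjob.
from mrjob.job import MRJob

class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit each word in the line with a count of one.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Sum the per-word counts produced by all mappers.
        yield word, sum(counts)

if __name__ == "__main__":
    MRWordCount.run()
```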
2015
Gain expertise in processing and storing data by using advanced techniques with Apache Spark
ABOUT THIS BOOK
* Explore the integration of Apache Spark with third-party applications such as H2O, Databricks, and Titan
* Evaluate how Cassandra and HBase can be used for storage
* An advanced guide with a combination of instructions and practical examples to extend the most up-to-date Spark functionality

WHO THIS BOOK IS FOR
If you are a developer with some experience of Spark and want to strengthen your knowledge of how to get around in the world of Spark, then this book is ideal for you. Basic knowledge of Linux, Hadoop, and Spark is assumed; reasonable knowledge of Scala is expected.

WHAT YOU WILL LEARN
* Extend the tools available for processing and storage
* Examine clustering and classification using MLlib
* Discover Spark stream processing via Flume and HDFS
* Create a schema in Spark SQL, and learn how a Spark schema can be populated with data
* Study Spark-based graph processing using Spark GraphX
* Combine Spark with H2O and deep learning, and learn why it is useful
* Evaluate how graph storage works with Apache Spark, Titan, HBase, and Cassandra
* Use Apache Spark in the cloud with Databricks and AWS

IN DETAIL
Apache Spark is an in-memory, cluster-based parallel processing system that provides a wide range of functionality such as graph processing, machine learning, stream processing, and SQL. It operates at unprecedented speeds, is easy to use, and offers a rich set of data transformations. This book aims to take your limited knowledge of Spark to the next level by teaching you how to expand its functionality. The book commences with an overview of the Spark ecosystem. You will learn how to use MLlib to create a fully working neural net for handwriting recognition. You will then discover how stream processing can be tuned for optimal performance and to ensure parallel processing. The book goes on to show how to incorporate H2O for machine learning, Titan for graph-based storage, and Databricks for cloud-based Spark. Intermediate Scala-based code examples are provided for Apache Spark module processing in a CentOS Linux and Databricks cloud environment.

STYLE AND APPROACH
This book is an extensive guide to Apache Spark modules and tools, and shows how Spark's functionality can be extended for real-time processing and storage with worked examples.
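The book's examples are in Scala; as a rough PySpark equivalent (my own sketch, not from the book) of the "create a schema in Spark SQL and populate it" item above, here is an explicit schema definition with made-up sample records.

```python
# Define a Spark SQL schema explicitly, then populate it with rows.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("schema-example").getOrCreate()

# Declare column names, types, and nullability up front.
schema = StructType([
    StructField("name", StringType(), nullable=False),
    StructField("age", IntegerType(), nullable=True),
])

# The sample records are hypothetical.
df = spark.createDataFrame([("alice", 34), ("bob", 28)], schema)
df.printSchema()
df.show()

spark.stop()
```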
2015
Create scalable machine learning applications to power a modern data-driven business using Spark
Apache Spark is a framework for distributed computing that is designed from the ground up to be optimized for low-latency tasks and in-memory data storage. It is one of the few frameworks for parallel computing that combines speed, scalability, in-memory processing, and fault tolerance with ease of programming and a flexible, expressive, and powerful API design.

This book guides you through the basics of Spark's API, used to load and process data and prepare it as input to the various machine learning models. Detailed examples and real-world use cases let you explore common machine learning models, including recommender systems, classification, regression, clustering, and dimensionality reduction. You will also cover advanced topics such as working with large-scale text data, and methods for online machine learning and model evaluation using Spark Streaming.
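As a minimal sketch (not from the book) of the RDD-based MLlib API that was current when this book was published, here is a k-means clustering model fitted to toy data; the points are made up for illustration.

```python
# Fit a k-means clustering model with the RDD-based MLlib API.
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="mllib-kmeans")

# Toy two-dimensional points forming two rough clusters.
data = sc.parallelize([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])

model = KMeans.train(data, k=2, maxIterations=10)
print(model.clusterCenters)

sc.stop()
```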
2014
High-speed distributed computing made easy with Spark
Spark is a framework for writing fast, distributed programs. Spark solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop and its built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book will guide you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.
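As a hedged sketch (not from the book) of the MapReduce-style RDD program the description above refers to, here is the classic word count in PySpark; the input path is hypothetical.

```python
# A MapReduce-style word count expressed with Spark's functional RDD API.
from pyspark import SparkContext

sc = SparkContext(appName="word-count")

counts = (sc.textFile("hdfs:///data/corpus.txt")
            .flatMap(lambda line: line.split())   # map phase: emit words
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))     # reduce phase: sum counts

print(counts.take(10))
sc.stop()
```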
2013