Apache Spark with Scala - Hands-On with Big Data!

  • 30 Day Money Back Guarantee
  • Completion Certificate
  • 24/7 Technical Support

Highlights

  • On-Demand course

  • 8 hours 55 minutes

  • All levels

Description

This is a comprehensive and practical Apache Spark course. In this course, you will learn and master the art of framing data analysis problems as Spark problems through 20+ hands-on examples, and then scale them up to run on cloud computing services. This edition covers Spark 3 and IntelliJ, features Structured Streaming, and puts a stronger focus on the DataSet API.

'Big data' analysis is a hot and highly valuable skill, and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive datasets across a fault-tolerant Hadoop cluster. You will learn those same techniques using your own Windows system right at home. It is easier than you think, and you will learn from a former engineer and senior manager at Amazon and IMDb.

In this course, you will learn the concepts of Spark's Resilient Distributed Datasets, DataFrames, and DataSets. A crash course in the Scala programming language is also included to help you along. You will learn how to develop and run Spark jobs quickly using Scala, IntelliJ, and SBT; how to translate complex analysis problems into iterative or multi-stage Spark scripts; and how to scale up to larger datasets using Amazon's Elastic MapReduce service, while understanding how Hadoop YARN distributes Spark across computing clusters. We will also practice using other Spark technologies, such as Spark SQL, DataFrames, DataSets, Spark Streaming, machine learning, and GraphX.

By the end of this course, you will be running code that analyzes gigabytes' worth of information, in the cloud, in a matter of minutes. All the code and supporting files for this course are available at https://github.com/PacktPublishing/Apache-Spark-with-Scala---Hands-On-with-Big-Data-

What You Will Learn

Learn the concepts of Spark's RDD, DataFrames, and Datasets
Get a crash course in the Scala programming language
Develop and run Spark jobs quickly using Scala, IntelliJ, and SBT
Translate complex analysis problems into iterative or multi-stage Spark scripts
Scale up to larger datasets using Amazon's Elastic MapReduce service
Understand how Hadoop YARN distributes Spark across computing clusters

Audience

This course is designed for software engineers who want to expand their skills into the world of big data processing on a cluster. Some prior programming or scripting experience is required.

Approach

This course is very hands-on; you will spend most of your time following along with the instructor as we write, analyze, and run real code together, both on your own system and in the cloud using Amazon's Elastic MapReduce service. Over eight hours of video content is included, with over 20 real examples of increasing complexity that you can build, run, and study yourself. Move through them at your own pace, on your own schedule.

Key Features

  • Understand the fundamentals of Scala and the Apache Spark ecosystem
  • Develop distributed code using the Scala programming language
  • Work through practical examples that help you develop real-world big data applications with Spark and Scala

GitHub Repo

https://github.com/PacktPublishing/Apache-Spark-with-Scala---Hands-On-with-Big-Data-

About the Author

Frank Kane

Frank Kane spent nine years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers around the clock. He holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and on teaching others about big data analysis.

Course Outline

1. Getting Started

This section introduces the course; you will set up the development environment for Spark and Scala and create a histogram of real movie ratings with Spark.

1. Introduction and Installing the Course Materials, IntelliJ, and Scala

A brief introduction to the course, and then we will get your development environment for Spark and Scala all set up on your desktop, using IntelliJ and SBT. A quick test application will confirm that Spark is working on your system!

2. Introduction to Apache Spark

This video is a brief introduction to Apache Spark.


2. Scala Crash Course (Optional)

This section is about Scala. You will learn the basics, flow control, functions, and data structures in Scala.

1. (Activity) Scala Basics

We will go over the basic syntax and structure of Scala code with lots of examples. Its syntax reads backward compared to most other languages, but you will quickly get used to it.

2. (Exercise) Flow Control in Scala

Building on the basics, we will cover flow control in Scala with lots of examples, and you will get some hands-on practice at the end of this video.

3. (Exercise) Functions in Scala

Scala is a functional programming language, so functions are central to the language. We will go over the many ways functions can be declared and used in Scala, and practice what you have learned.

4. (Exercise) Data Structures in Scala

We will cover the common data structures in Scala such as Map and List and put them into practice.


3. Using Resilient Distributed Datasets (RDDs)

In this section, you will learn how to use Spark RDDs.

1. The Resilient Distributed Dataset

The core object of Spark programming is the Resilient Distributed Dataset, or RDD. Once you know how to use RDDs, you know how to use Spark. We will go over what they are and what you can do with them.
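For a feel of what this looks like, here is a minimal sketch (not taken from the course) of creating an RDD locally and transforming it; the object name and local master setting are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RDDIntro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("RDDIntro").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Build an RDD from a local collection, then transform and collect it.
    val numbers = sc.parallelize(1 to 10)
    val squaresOfEvens = numbers.filter(_ % 2 == 0).map(n => n * n)

    squaresOfEvens.collect().foreach(println)
    sc.stop()
  }
}
```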

2. Ratings Histogram Example

Now that we understand Scala and have the theory of Spark behind us, let's start with a simple example of using RDDs to count up how many of each rating exists in the MovieLens dataset.

3. Spark Internals

How does Spark convert your script into a Directed Acyclic Graph and figure out how to distribute it on a cluster? Understanding how this process works under the hood can be important in writing optimal Spark driver scripts.

4. Key / Value RDDs, and the Average Friends by Age Example

RDDs that contain a tuple of two values are key/value RDDs, and you can use them much like you might use a NoSQL data store. We will use key/value RDDs to figure out the average number of friends by age in some fake social network data.
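As a hedged sketch of the key/value pattern, the snippet below averages friends by age, assuming a CSV laid out as id,name,age,numFriends per line (the file path is hypothetical):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object FriendsByAgeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FriendsByAge").setMaster("local[*]"))
    val lines = sc.textFile("data/fakefriends.csv") // hypothetical path

    // Map each line to an (age, numFriends) key/value pair.
    val ageAndFriends = lines.map { line =>
      val fields = line.split(",")
      (fields(2).toInt, fields(3).toInt)
    }

    // Sum and count per age key, then divide to get the average.
    val averages = ageAndFriends
      .mapValues(friends => (friends, 1))
      .reduceByKey { case ((f1, c1), (f2, c2)) => (f1 + f2, c1 + c2) }
      .mapValues { case (total, count) => total.toDouble / count }

    averages.collect().sorted.foreach(println)
    sc.stop()
  }
}
```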

5. (Activity) Running the Average Friends by Age Example

We will run the average friends by age example on your desktop and give you some ideas for further extending this script on your own.

6. Filtering RDDs, and the Minimum Temperature by Location Example

We will cover how to filter data out of an RDD efficiently and illustrate this with a new example that finds the minimum temperature by location using real weather data.
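A rough sketch of this filter-then-reduce pattern, assuming weather lines of the form stationID,date,entryType,temperature (the file name is illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MinTemperaturesSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MinTemps").setMaster("local[*]"))
    val lines = sc.textFile("data/1800.csv") // hypothetical path

    val parsed = lines.map { line =>
      val fields = line.split(",")
      (fields(0), fields(2), fields(3).toFloat) // (stationID, entryType, temperature)
    }

    // Keep only minimum-temperature records, then reduce to the min per station.
    val minTemps = parsed
      .filter { case (_, entryType, _) => entryType == "TMIN" }
      .map { case (station, _, temp) => (station, temp) }
      .reduceByKey((a, b) => math.min(a, b))

    minTemps.collect().foreach { case (station, temp) =>
      println(f"$station minimum: $temp%.2f")
    }
    sc.stop()
  }
}
```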

7. (Activity) Running the Minimum Temperature Example, and Modifying It for Maximum

We will run our minimum temperature by location example and modify it to find maximum temperatures as well. Plus, some ideas for extending this script on your own.

8. (Activity) Counting Word Occurrences Using flatMap()

flatMap() on an RDD can return a variable number of new entries in the resulting RDD. We will use this as part of a hands-on example that finds how often each word is used inside a real book's text.
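A minimal word-count sketch along these lines (the file path is illustrative, and the regex split anticipates the next lecture):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCount").setMaster("local[*]"))
    val lines = sc.textFile("data/book.txt") // hypothetical path

    // flatMap emits zero or more words per input line; countByValue tallies them.
    val words = lines
      .flatMap(_.split("\\W+"))
      .filter(_.nonEmpty)
      .map(_.toLowerCase)

    words.countByValue().toSeq.sortBy(-_._2).take(20).foreach(println)
    sc.stop()
  }
}
```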

9. (Activity) Improving the Word Count Script with Regular Expressions

We extend the previous lecture's example using regular expressions to better extract words from our book.

10. (Activity) Sorting the Word Count Results

Finally, we sort the final results to see what the most common words in this book really are! And some ideas to extend this script on your own.

11. (Exercise) Find the Total Amount Spent by Customer

Your assignment is to write a script that finds the total amount spent per customer in some fabricated e-commerce data, using what you have learned so far.

12. (Exercise) Check Your Results and Sort Them by Total Amount Spent

We will review my solution to the previous lecture's assignment and challenge you further to sort your results to find the biggest spenders.

13. Check Your Results and Implementation Against Mine

Check your results for finding the biggest spenders in our e-commerce data against my own solution.


4. SparkSQL, DataFrames, and DataSets

This section is about SparkSQL, DataFrames, and DataSets.

1. Introduction to SparkSQL

Understand SparkSQL and the DataFrame and DataSet APIs used for querying structured data in an efficient, scalable manner.

2. (Activity) Using SparkSQL

We will revisit our fabricated social network data but load it into a DataFrame and analyze it with actual SQL queries!
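The general flow might look like the sketch below, where the header options, inferred schema, and file path are assumptions standing in for the course's data:

```scala
import org.apache.spark.sql.SparkSession

object SparkSQLSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("SparkSQLSketch")
      .master("local[*]")
      .getOrCreate()

    // Load structured data into a DataFrame (path and schema are illustrative).
    val people = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/fakefriends.csv")

    // Register a temp view and query it with real SQL.
    people.createOrReplaceTempView("people")
    val teenagers = spark.sql("SELECT * FROM people WHERE age BETWEEN 13 AND 19")
    teenagers.show()

    spark.stop()
  }
}
```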

3. (Activity) Using DataSets

We will analyze our social network data another way, this time using SQL-like functions on a DataSet instead of actual SQL query strings.
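A possible shape for this, where the case class and column names are assumptions matching the social data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DataSetSketch {
  case class Person(id: Int, name: String, age: Int, friends: Int)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("DataSetSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // A typed DataSet instead of an untyped DataFrame (path is illustrative).
    val people = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/fakefriends.csv")
      .as[Person]

    // SQL-like functions instead of SQL query strings.
    people.groupBy("age")
      .agg(round(avg("friends"), 2).alias("avg_friends"))
      .sort("age")
      .show()

    spark.stop()
  }
}
```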

4. (Exercise) Implement the Friends by Age Example Using DataSets

Earlier, we broke down the average number of friends by age using RDDs; see if you can do it using DataSets instead!

5. Exercise Solution: Friends by Age, with DataSets

Let's check the solution for the Friends by Age exercise using DataSets.

6. (Activity) Word Count Example Using DataSets

You will learn how to implement the word count activity with DataSets using SQL functions.

7. (Activity) Revisiting the Minimum Temperature Example, with DataSets

You will learn how to implement the weather data activity with DataSets, using the withColumn() function.

8. (Exercise) Implement the Total Spent by Customer Problem with DataSets

We will take on the total-spent-by-customer problem again, this time with DataSets.

9. Exercise Solution: Total Spent by Customer with DataSets

We will have a look at a possible solution for the total-spent-by-customer problem with DataSets.


5. Advanced Examples of Spark Programs

In this section, we will be working on some advanced examples of Spark programming.

1. (Activity) Find the Most Popular Movie

We will revisit our movie ratings dataset and start off with a simple example to find the most-rated movie.

2. (Activity) Use Broadcast Variables to Display Movie Names

Broadcast variables can be used to share small amounts of data with all of the machines on your cluster. We will use one to share a lookup table of movie IDs to movie names and use it to show movie names in our final results.
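A hedged sketch of the pattern: broadcast a small ID-to-name map once, then reference it inside executor-side code. The loadMovieNames helper, the "::" delimiter, and the file paths are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import scala.io.Source

object BroadcastSketch {
  // Load a small lookup table on the driver (format and path are assumed).
  def loadMovieNames(): Map[Int, String] = {
    val source = Source.fromFile("data/movies.dat")
    try {
      source.getLines().map { line =>
        val fields = line.split("::")
        (fields(0).toInt, fields(1))
      }.toMap
    } finally source.close()
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Broadcast").setMaster("local[*]"))

    // Ship the lookup table to every executor once.
    val nameDict = sc.broadcast(loadMovieNames())

    val ratings = sc.textFile("data/ratings.dat") // userID::movieID::rating::timestamp
      .map(line => (line.split("::")(1).toInt, 1))
      .reduceByKey(_ + _)

    // Look names up on the executors via the broadcast value.
    val named = ratings.map { case (movieID, count) => (nameDict.value(movieID), count) }
    named.sortBy(-_._2).take(10).foreach(println)

    sc.stop()
  }
}
```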

3. (Activity) Find the Most Popular Superhero in a Social Graph

We introduce the Marvel superhero social network dataset and write a script to find the most-connected superhero in it. It's not who you might think!

4. (Exercise) Find the Most Obscure Superheroes

This exercise is all about finding the most obscure superheroes, using the same Marvel social network dataset as the most-popular-superhero example.

5. Exercise Solution: Find the Most Obscure Superheroes

We will work through a possible solution for finding the most obscure superheroes.

6. Superhero Degrees of Separation: Introducing Breadth-First Search

As a more complex example, we will apply a breadth-first-search (BFS) algorithm to the Marvel dataset to compute the degrees of separation between any two superheroes. In this lecture, we go over how BFS works.

7. Superhero Degrees of Separation: Accumulators, and Implementing BFS in Spark

We will go over our strategy for implementing BFS within a Spark script that can be distributed and introduce the use of Accumulators to maintain running totals that are synced across a cluster.
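For a sense of the mechanics, here is a minimal accumulator sketch; the divisibility check simply stands in for BFS's "target vertex found" test:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AccumulatorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Accumulator").setMaster("local[*]"))

    // A running total that stays synced across the cluster.
    val hitCounter = sc.longAccumulator("Hit Counter")
    val ids = sc.parallelize(1 to 1000)

    // Executors add to the accumulator; the driver reads the combined value.
    ids.foreach { id =>
      if (id % 7 == 0) hitCounter.add(1)
    }
    println(s"IDs divisible by 7: ${hitCounter.value}")

    sc.stop()
  }
}
```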

8. (Activity) Superhero Degrees of Separation: Review the Code and Run It!

Finally, we will review the code for finding the degrees of separation using breadth-first search, run it, and see the results!

9. Item-Based Collaborative Filtering in Spark, cache(), and persist()

Back to our movie ratings data: we will discover movies that are similar to each other based just on user ratings. We will cover the algorithm and how to implement it as a Spark script.

10. (Activity) Running the Similar Movies Script Using Spark's Cluster Manager

We will run our movie similarities script and see the results. In doing so, we will introduce the process of exporting your Spark script as a JAR file that can be run from the command line using the spark-submit script, instead of running from within the IDE.

11. (Exercise) Improve the Quality of Similar Movies

Your challenge is to make the movie similarity results even better! Here are some ideas for you to try out.


6. Running Spark on a Cluster

This section is about running Spark on a cluster.

1. (Activity) Using spark-submit to Run Spark Driver Scripts

In a production environment, you will use spark-submit to start your driver scripts from a command line, cron job, or the like. We will cover the details on what you need to do differently in this case.

2. (Activity) Packaging Driver Scripts with SBT

Spark/Scala scripts that have external dependencies can be bundled into self-contained packages using the SBT tool. We will use SBT to package up our movie similarities script as an exercise.
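A minimal build.sbt sketch for a project like this might look as follows; the versions are illustrative, and marking Spark as "provided" keeps it out of the packaged JAR, since the cluster supplies its own Spark:

```scala
name := "MovieSimilarities"
version := "1.0"
scalaVersion := "2.12.15"

// Spark itself is "provided" at runtime by spark-submit / the cluster.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.3.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "3.3.0" % "provided"
)
```

With the sbt-assembly plugin added, running sbt assembly then produces a self-contained JAR that you can hand to spark-submit.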

3. (Exercise) Package a Script with SBT and Run It Locally with spark-submit

We will work on an exercise where we will package a script with SBT and run it locally with spark-submit.

4. Exercise Solution: Using SBT and spark-submit

We will have a look at a possible solution for this exercise.

5. Introducing Amazon Elastic MapReduce

Amazon Web Services (AWS) offers the Elastic MapReduce (EMR) service, which gives us a way to rent time on a Hadoop cluster of our choosing, with Spark pre-installed on it. We will use EMR to illustrate running a Spark script on a real cluster, so let's first go over what EMR is and how it works.

6. Creating Similar Movies from One Million Ratings on EMR

Let's compute movie similarities on a real cluster in the cloud, using one million user ratings!

7. Partitioning

Explicitly partitioning your datasets and RDDs can be an important optimization; we will go over when and how to do this.
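As a sketch of the idea: hash-partition a pair RDD once and cache it, so downstream shuffle-heavy operations like joins can reuse that layout (the partition count here is illustrative):

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object PartitioningSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Partitioning").setMaster("local[*]"))

    val pairs = sc.parallelize(1 to 100000).map(n => (n % 100, n))

    // partitionBy + cache: the join below reuses this layout instead of reshuffling.
    val partitioned = pairs.partitionBy(new HashPartitioner(100)).cache()
    val joined = partitioned.join(partitioned)
    println(s"Joined count: ${joined.count()}")

    sc.stop()
  }
}
```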

8. Best Practices for Running on a Cluster

Other tips and tricks for taking your script to a real cluster and getting it to run as you expect.

9. Troubleshooting and Managing Dependencies

This video is on how to troubleshoot Spark jobs on a cluster using the Spark UI and logs, and more on managing dependencies of your script and data.


7. Machine Learning with Spark ML

This section is about machine learning with Spark's MLlib.

1. Introducing MLlib

MLlib offers several distributed machine learning algorithms that you can run on a Spark cluster. We will cover what MLlib can do and how it fits in.

2. (Activity) Using MLlib to Produce Movie Recommendations

We will use MLlib's Alternating Least Squares (ALS) recommender algorithm to produce movie recommendations from our MovieLens ratings data. The results are unexpected!
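The DataFrame-based ALS API looks roughly like this sketch; the tiny inline ratings and the hyperparameters are placeholders for the real MovieLens data:

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object ALSSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ALSSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // (userId, movieId, rating) triples; in the course these come from MovieLens.
    val ratings = Seq((0, 1, 5.0f), (0, 2, 1.0f), (1, 1, 4.0f), (1, 3, 5.0f))
      .toDF("userId", "movieId", "rating")

    val als = new ALS()
      .setMaxIter(5)
      .setRegParam(0.01)
      .setUserCol("userId")
      .setItemCol("movieId")
      .setRatingCol("rating")

    val model = als.fit(ratings)
    model.recommendForAllUsers(3).show(truncate = false)

    spark.stop()
  }
}
```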

3. Linear Regression with MLlib

A brief overview of what linear regression is and how it works, followed by a hands-on example of finding a regression and applying it to fabricated page speed versus revenue data.

4. (Activity) Running a Linear Regression with Spark

We will run our Spark ML example of linear regression using DataFrames.
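A compact sketch of that flow, with made-up page speed versus revenue pairs standing in for the course's fabricated data:

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

object LinearRegressionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("LinReg").master("local[*]").getOrCreate()
    import spark.implicits._

    // (pageSpeed, revenue) pairs; the values here are invented for the sketch.
    val data = Seq((1.0, 9.5), (2.0, 8.1), (3.0, 7.2), (4.0, 5.9), (5.0, 4.8))
      .toDF("pageSpeed", "revenue")

    // Spark ML models expect a single vector column of features.
    val assembled = new VectorAssembler()
      .setInputCols(Array("pageSpeed"))
      .setOutputCol("features")
      .transform(data)

    val model = new LinearRegression()
      .setLabelCol("revenue")
      .fit(assembled)

    println(s"Coefficients: ${model.coefficients}, intercept: ${model.intercept}")
    spark.stop()
  }
}
```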

5. (Exercise) Predict Real Estate Values with Decision Trees in Spark

This exercise is all about predicting real estate values with decision trees in Spark.

6. Exercise Solution: Predicting Real Estate with Decision Trees in Spark

We will have a look at the possible solution for this exercise.


8. Introduction to Spark Streaming

This section is about Spark Streaming.

1. The DStream API for Spark Streaming

Spark Streaming allows you to create Spark driver scripts that run indefinitely, continually processing data as it streams in! We will cover how it works and what it can do, using the original DStream micro-batch API.
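A minimal DStream sketch of this idea: one-second micro-batches of word counts from a local socket. The host and port are illustrative; something like nc -lk 9999 can feed it for testing:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DStreamSketch").setMaster("local[*]")

    // Micro-batches of one second each.
    val ssc = new StreamingContext(conf, Seconds(1))

    val lines = ssc.socketTextStream("localhost", 9999)
    val wordCounts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    wordCounts.print()

    ssc.start()
    ssc.awaitTermination() // runs indefinitely, processing data as it arrives
  }
}
```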

2. (Activity) Real-Time Monitoring of the Most Popular Hashtags on Twitter

You will learn how to do real-time monitoring of the most popular hashtags on Twitter.

3. Structured Streaming

Structured Streaming is a newer DataFrame-based API in Spark for writing continuous applications.
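In sketch form, a streaming DataFrame is queried much like a static one, and the query runs until stopped; Spark's built-in "rate" test source stands in here for a real data stream:

```scala
import org.apache.spark.sql.SparkSession

object StructuredStreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("Structured").master("local[*]").getOrCreate()

    // The "rate" source emits (timestamp, value) rows for testing.
    val stream = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

    // Query the streaming DataFrame like a static one; results update continuously.
    val evens = stream.filter("value % 2 = 0")
    val query = evens.writeStream.outputMode("append").format("console").start()

    query.awaitTermination()
  }
}
```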

4. (Activity) Using Structured Streaming for Real-Time Log Analysis

You will learn how to use Structured Streaming for real-time log analysis.

5. (Exercise) Windowed Operations with Structured Streaming

We will work on an exercise to find the top URLs streamed in the past 30 seconds.
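A hedged sketch of a 30-second window sliding every 10 seconds, with a fake URL column derived from the "rate" test source standing in for real log data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object WindowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("Window").master("local[*]").getOrCreate()
    import spark.implicits._

    // Fabricate a "url" column from the rate source's value column.
    val rows = spark.readStream.format("rate").option("rowsPerSecond", "10").load()
      .withColumn("url", concat(lit("/page"), ($"value" % 5).cast("string")))

    // Count hits per URL over a 30-second window sliding every 10 seconds.
    val windowed = rows
      .groupBy(window($"timestamp", "30 seconds", "10 seconds"), $"url")
      .count()
      .orderBy(desc("count"))

    windowed.writeStream.outputMode("complete").format("console").start().awaitTermination()
  }
}
```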

6. Exercise Solution: Top URLs in a 30-Second Window

We will look at a possible solution for this exercise.


9. Introduction to GraphX

This section is about GraphX.

1. GraphX, Pregel, and Breadth-First Search with Pregel

We will cover Spark's GraphX library and how it works.

2. Using the Pregel API with Spark GraphX

We will revisit our "superhero degrees of separation" example and see how its breadth-first search algorithm could be implemented using Pregel and GraphX.

3. (Activity) Superhero Degrees of Separation Using GraphX

We will use GraphX and Pregel to recreate our earlier results analyzing the superhero social network data, but with a lot less code!


10. You Made It! Where to Go from Here

This section covers resources for learning more, along with some career tips.

1. Learning More, and Career Tips

You made it to the end! Here are some book recommendations if you want to learn more, as well as some career advice on landing a job in "big data".

About The Provider

Packt
Birmingham
Founded in 2004 in Birmingham, UK, Packt’s mission is to help the world put software to work in new ways, through the delivery of effective learning and i...
