
Big Data Fundamentals with PySpark

Learn the fundamentals of working with big data using PySpark.

4 Hours · 16 Videos · 55 Exercises
45,636 Learners · Statement of Accomplishment




Course Description

There's been a lot of buzz about Big Data over the past few years, and it has finally become mainstream for many companies. But what exactly is Big Data? This course covers the fundamentals of Big Data via PySpark. Spark is a "lightning fast cluster computing" framework for Big Data. It provides a general-purpose data processing engine and lets you run programs up to 100x faster in memory, or 10x faster on disk, than Hadoop. You'll use PySpark, a Python package for Spark programming, along with its powerful higher-level libraries such as Spark SQL and MLlib (for machine learning). You will explore the works of William Shakespeare, analyze FIFA 2018 data, and perform clustering on genomic datasets. By the end of this course, you will have gained an in-depth understanding of PySpark and its application to general Big Data analysis.
  1. Introduction to Big Data analysis with Spark

    Free

    This chapter introduces the exciting world of Big Data, as well as the concepts and frameworks used to process it. You will understand why Apache Spark is considered the best framework for Big Data; a short sketch of the building blocks covered here follows the exercise list below.

    What is Big Data? (50 xp)
    The 3 V's of Big Data (50 xp)
    PySpark: Spark with Python (50 xp)
    Understanding SparkContext (100 xp)
    Interactive Use of PySpark (100 xp)
    Loading data in PySpark shell (100 xp)
    Review of functional programming in Python (50 xp)
    Use of lambda() with map() (100 xp)
    Use of lambda() with filter() (100 xp)
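
The exercises above build toward small scripts like the one below. This is a minimal sketch, not taken from the course materials, assuming a local Spark installation; the text file path is a placeholder. In the interactive PySpark shell, the SparkContext is pre-created as sc, while a standalone script creates its own.

from pyspark import SparkContext

# Understanding SparkContext: the entry point to a Spark cluster.
# (In the PySpark shell this object already exists as `sc`.)
sc = SparkContext(master="local[*]", appName="chapter1_sketch")

# Loading data in PySpark: the path below is a placeholder.
lines = sc.textFile("path/to/shakespeare.txt")

# Use of lambda() with filter(): keep only non-empty lines.
non_empty = lines.filter(lambda line: len(line.strip()) > 0)

# Use of lambda() with map(): count the words on each line.
word_counts = non_empty.map(lambda line: len(line.split()))

print(non_empty.count(), word_counts.take(5))

sc.stop()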

  4. Machine Learning with PySpark MLlib

    PySpark MLlib is Apache Spark's scalable machine learning library in Python, consisting of common learning algorithms and utilities. Throughout this last chapter, you'll learn important machine learning algorithms: you will build a movie recommendation engine and a spam filter, and use k-means clustering. A brief sketch of the k-means workflow appears below.

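As an illustration only, not the course's actual exercise, a k-means run with PySpark MLlib on a tiny in-memory dataset might look like this; the toy points and the choice of k=2 are made up for the sketch.

from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(master="local[*]", appName="mllib_kmeans_sketch")

# Toy 2-D points; the course clusters genomic data instead.
points = sc.parallelize([
    [0.0, 0.0], [0.5, 0.5],     # one cluster near the origin
    [9.0, 9.0], [9.5, 10.0],    # another cluster far away
])

# Train a k-means model with two clusters.
model = KMeans.train(points, k=2, maxIterations=10)

print(model.clusterCenters)         # learned centroids
print(model.predict([0.2, 0.3]))    # cluster index for a new point

sc.stop()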

In the following tracks

Big Data with PySpark

Collaborators

Hadrien Lacroix
Chester Ismay
Upendra Kumar Devisetty

Science Analyst at CyVerse

Upendra Kumar Devisetty is a Science Analyst at CyVerse, where he interacts scientifically with biologists, bioinformaticians, programming teams, and other members of the CyVerse team. He also coordinates development across projects and facilitates integration and cross-communication. His current work mainly focuses on integrative analysis of Big Data using high-throughput methods on advanced computing systems. As scientific computing becomes indispensable for Big Data research, he has started building a community to develop and propagate a set of best practices, including continuous testing, version control, virtualization, sharing code through notebooks, and standard data structures.


Join over 13 million learners and start Big Data Fundamentals with PySpark today!
