Data Engineering using Kafka and Spark Structured Streaming

Genre: eLearning | MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz
Language: English | Size: 4.04 GB | Duration: 9h 26m

As part of this course, you will be learning to build streaming pipelines by integrating Kafka and Spark Structured Streaming.


What you'll learn

Setting up a self-support lab with Hadoop (HDFS and YARN), Hive, Spark, and Kafka

Overview of Kafka to build streaming pipelines

Data Ingestion into Kafka topics using Kafka Connect with a File Source

Data Ingestion into HDFS using Kafka Connect with the HDFS 3 Sink Connector plugin

Overview of Spark Structured Streaming to process data as part of Streaming Pipelines

Incremental Data Processing using Spark Structured Streaming with a File Source and File Target

Integration of Kafka and Spark Structured Streaming - Reading Data from Kafka Topics

Description

Let us go through the details of what is covered in the course.

First of all, we need to have the proper environment to build streaming pipelines using Kafka and Spark Structured Streaming on top of Hadoop or any other distributed file system. As part of the course, you will start with setting up a self-support lab with all the key components such as Hadoop, Hive, Spark, and Kafka on a single node Linux-based system.

Once the environment is set up, you will go through the details of getting started with Kafka. As part of that process, you will create a Kafka topic, produce messages into the topic, and consume messages from the topic.
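As a flavor of those first steps, the commands below sketch creating a topic and producing/consuming messages with the CLI tools that ship with Apache Kafka. The broker address and topic name are placeholders, and on some distributions (e.g. Confluent Platform) the scripts are named without the `.sh` suffix:

```shell
# Create a topic (assumes a broker listening on localhost:9092)
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic demo_topic --partitions 3 --replication-factor 1

# Produce messages interactively: type lines, Ctrl+C to stop
kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic demo_topic

# Consume all messages from the beginning of the topic
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic demo_topic --from-beginning
```

These commands require a running Kafka broker, which the lab-setup sections of the course walk through.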

You will also learn how to use Kafka Connect to ingest data from web server logs into a Kafka topic, as well as from a Kafka topic into HDFS as a sink.
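To illustrate the shape of such a pipeline, here are two hypothetical Kafka Connect standalone configs: one using the built-in FileStreamSource connector to tail a log file into a topic, and one using Confluent's HDFS 3 Sink Connector to land that topic in HDFS. The file path, topic name, HDFS URL, and flush size are all placeholder assumptions:

```properties
# connect-file-source.properties (hypothetical example)
name=web-log-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/var/log/httpd/access.log
topic=web_server_logs

# connect-hdfs3-sink.properties (hypothetical example)
name=hdfs3-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=web_server_logs
hdfs.url=hdfs://localhost:9000
flush.size=1000
```

Each config would typically be passed to `connect-standalone.sh` along with a worker properties file; the sink additionally requires the HDFS 3 Connector plugin to be installed on the Connect worker.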

Once you understand Kafka from the perspective of Data Ingestion, you will get an overview of some of the key concepts related to Spark Structured Streaming.

After learning Kafka and Spark Structured Streaming separately, you will build a streaming pipeline that consumes data from a Kafka topic using Spark Structured Streaming, then processes it and writes it to different targets.
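A minimal PySpark sketch of such a pipeline is shown below, assuming a broker at localhost:9092 and a topic named web_server_logs (both placeholders). It reads the topic as a streaming DataFrame and writes to the console sink; running it requires submitting with the matching `spark-sql-kafka-0-10` package for your Spark version:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("KafkaToConsole")
         .getOrCreate())

# Subscribe to a Kafka topic as a streaming DataFrame
df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "web_server_logs")
      .option("startingOffsets", "earliest")
      .load())

# Kafka delivers key and value as binary; cast them to strings
lines = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Write each micro-batch to the console; the checkpoint directory
# records consumed offsets so the query can resume after a restart
query = (lines.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/kafka-console")
         .outputMode("append")
         .start())
query.awaitTermination()
```

In the course itself, the console sink would be swapped for real targets such as files on HDFS; the read side stays the same.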

You will also learn how to handle incremental data processing using Spark Structured Streaming.
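The essence of incremental processing with a file source can be sketched as follows. Directory paths, the schema, and the trigger limit are illustrative assumptions; the key idea is that the checkpoint tracks which input files have already been processed, so each run picks up only new arrivals:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType

spark = SparkSession.builder.appName("IncrementalFiles").getOrCreate()

# Streaming file sources require an explicit schema
schema = StructType().add("line", StringType())

# Files landing in the input directory are picked up incrementally;
# maxFilesPerTrigger caps how many files each micro-batch processes
src = (spark.readStream
       .format("csv")
       .schema(schema)
       .option("maxFilesPerTrigger", 10)
       .load("/data/incoming"))

# Append results as Parquet; the checkpoint remembers processed
# files, so restarting the query does not reprocess old data
query = (src.writeStream
         .format("parquet")
         .option("path", "/data/processed")
         .option("checkpointLocation", "/data/checkpoints/incremental")
         .outputMode("append")
         .start())
query.awaitTermination()
```

This pattern, a file source paired with a file target, is what the incremental-loads module of the course builds on.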

Course Outline

Here is a brief outline of the course. You can choose either Cloud9 or GCP to provision a server to set up the environment.

Setting up Environment using AWS Cloud9 or GCP

Setup Single Node Hadoop Cluster

Setup Hive and Spark on top of Single Node Hadoop Cluster

Setup Single Node Kafka Cluster on top of Single Node Hadoop Cluster

Getting Started with Kafka

Data Ingestion using Kafka Connect - Web server log files as a source to Kafka Topic

Data Ingestion using Kafka Connect - Kafka Topic to HDFS as a sink

Overview of Spark Structured Streaming

Kafka and Spark Structured Streaming Integration

Incremental Loads using Spark Structured Streaming

Udemy-based Support

In case you run into technical challenges while taking the course, feel free to raise your concerns using Udemy Messenger. We will make sure that the issue is resolved within 48 hours.

Who this course is for:

Experienced ETL Developers who want to learn Kafka and Spark to build streaming pipelines

Experienced PL/SQL Developers who want to learn Kafka and Spark to build streaming pipelines

Beginner or Experienced Data Engineers who want to learn Kafka and Spark to build streaming pipelines
