
Advanced Data Engineering with Databricks

Contact us to book this course
Learning Track

Data Engineering

Delivery methods

On-Site, Virtual

Duration

2 days

In this course, students will build upon their existing knowledge of Apache Spark, Structured Streaming, and Delta Lake to unlock the full potential of the data lakehouse by utilizing the suite of tools provided by Databricks. This course places a heavy emphasis on designs favoring incremental data processing, enabling systems optimized to continuously ingest and analyze ever-growing data. By designing workloads that leverage built-in platform optimizations, data engineers can reduce the burden of code maintenance and on-call emergencies, and quickly adapt production code to new demands with minimal refactoring or downtime. 
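As a flavor of the incremental designs the course emphasizes, here is a minimal, framework-agnostic Python sketch (not the Databricks or Spark API; the function and field names are illustrative assumptions) of processing only the records that arrived since the last checkpoint, rather than reprocessing the full dataset on every run:

```python
# Illustrative sketch only: plain Python, not the Databricks or Spark API.
# Incremental processing keeps a checkpoint (last processed offset) so each
# run touches only new records instead of recomputing over all history.

def process_increment(records, checkpoint):
    """Process records arriving after `checkpoint`; return results and the new checkpoint."""
    new = [r for r in records if r["offset"] > checkpoint]
    results = [r["value"] * 2 for r in new]  # stand-in for a real enrichment step
    new_checkpoint = max((r["offset"] for r in new), default=checkpoint)
    return results, new_checkpoint

# First run sees everything; the second run sees only what arrived since.
stream = [{"offset": i, "value": i} for i in range(5)]
out1, ckpt = process_increment(stream, checkpoint=-1)
stream += [{"offset": 5, "value": 5}]
out2, ckpt = process_increment(stream, ckpt)   # out2 covers only the new record
```

In Databricks, this bookkeeping is handled for you by Structured Streaming checkpoints and Auto Loader, which is a large part of why the course favors these tools over hand-rolled batch reprocessing.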

Objectives

Upon completion of the course, you will be able to:
  • Design databases and pipelines optimized for the Databricks Lakehouse Platform
  • Implement efficient incremental data processing to validate and enrich data driving business decisions and applications
  • Leverage Databricks-native features for managing access to sensitive data and fulfilling right-to-be-forgotten requests
  • Manage error troubleshooting, code promotion, task orchestration, and production job monitoring using Databricks tools

Prerequisites

  • Experience using PySpark APIs to perform advanced data transformations
  • Familiarity implementing classes with Python
  • Experience using SQL in production data warehouse or data lake implementations
  • Experience working in Databricks notebooks and configuring clusters
  • Familiarity with creating and manipulating data in Delta Lake tables with SQL


The prerequisites listed above can be learned by taking the instructor-led courses Data Engineering with Databricks and Apache Spark Programming with Databricks, in either order. They can be validated by passing the Databricks Certified Data Engineer Associate and Databricks Certified Associate Developer for Apache Spark certification exams.

Course outline

  • Streaming Data Concepts
  • Introduction to Structured Streaming
  • Aggregations, Time Windows, Watermarks
  • Delta Live Tables Review
  • Auto Loader
  • Data Ingestion Patterns
  • Data Quality Enforcement Patterns
  • Data Modeling
  • Streaming Joins and Statefulness
  • Store Data Securely
  • Streaming Data and Change Data Feed (CDF)
  • Deleting Data in Databricks
  • Spark Architecture
  • Designing the Foundation
  • Introduction to the Spark UI
  • Fine-Tuning - Choosing the Right Cluster
  • Code Optimization
    • Shuffles
    • Spill
    • Skew
    • Serialization
  • Introduction to the REST API and CLI
  • Deploy Batch and Streaming Jobs
  • Working with Terraform
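To hint at what the "Aggregations, Time Windows, Watermarks" module covers, here is a conceptual plain-Python sketch (not the Spark API; window length and lateness values are assumptions for illustration) of bucketing events into fixed event-time windows and dropping events that fall behind the watermark. In the course itself this is done with Structured Streaming's `withWatermark` and `window` APIs:

```python
# Conceptual sketch in plain Python (not the Spark API): bucket events into
# fixed event-time windows and drop events older than the watermark, which is
# how a streaming engine bounds the state it must keep for aggregations.
from collections import defaultdict

WINDOW = 10    # window length in seconds (illustrative)
LATENESS = 15  # allowed lateness / watermark delay in seconds (illustrative)

def aggregate(events):
    """events: (event_time, value) pairs in arrival order."""
    counts = defaultdict(int)
    max_seen = float("-inf")
    dropped = []
    for t, v in events:
        max_seen = max(max_seen, t)
        watermark = max_seen - LATENESS
        if t < watermark:            # too late: the watermark has passed this event
            dropped.append((t, v))
            continue
        counts[(t // WINDOW) * WINDOW] += v  # assign to the window's start time
    return dict(counts), dropped

counts, dropped = aggregate([(1, 1), (12, 1), (30, 1), (5, 1)])
# the event at t=5 arrives after max_seen=30 (watermark=15), so it is dropped
```

The watermark is what keeps state finite: without it, the engine would have to hold every window open forever in case a late event arrived.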

Ready to accelerate your team's innovation?