Data Engineering with Databricks
On-Site, Virtual
2 days
This introductory course serves as an entry point for learning Data Engineering with Databricks.
Below, we describe each of the four four-hour modules included in this course.
Data Ingestion with Delta Lake
This course is designed for data engineers who want to deepen their understanding of Delta Lake for data ingestion, transformation, and management. Using the latest features of Delta Lake, learners will explore real-world applications to enhance data workflows, optimize performance, and ensure data reliability.
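As a flavor of what ingestion with Delta Lake looks like (this sketch is not taken from the course materials, and the table, path, and option values are illustrative), raw files can be loaded incrementally into a Delta table with Databricks SQL's COPY INTO:

```sql
-- Create an (initially schemaless) Delta table; COPY INTO will evolve its schema
CREATE TABLE IF NOT EXISTS sales_bronze;

-- Idempotently load new JSON files from a raw landing path (path is illustrative)
COPY INTO sales_bronze
  FROM '/Volumes/demo/raw/sales/'
  FILEFORMAT = JSON
  FORMAT_OPTIONS ('inferSchema' = 'true')
  COPY_OPTIONS ('mergeSchema' = 'true');
```

COPY INTO tracks which files it has already loaded, so re-running the statement picks up only newly arrived data.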
Deploy Workloads with Databricks Workflows
This course is designed for data engineer professionals who are looking to leverage Databricks for streamlined and efficient data workflows. By the end of this course, you’ll be well-versed in using Databricks' Jobs and Workflows functionalities to automate, manage, and monitor complex data pipelines. The course includes hands-on labs and best practices to ensure a deep understanding and practical ability to manage workflows in production environments.
Build Data Pipelines with Delta Live Tables
This comprehensive course is designed to teach the Medallion Architecture using Delta Live Tables. Participants will learn how to create robust and efficient data pipelines for structured and unstructured data, understand the nuances of managing data quality, and unlock the potential of Delta Live Tables. By the end of this course, participants will have hands-on experience building pipelines, troubleshooting issues, and monitoring their data flows within the Delta Live Tables environment.
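To illustrate the bronze-to-silver pattern the course covers (a minimal sketch, not course material; table names, the source path, and the quality rule are all illustrative), a Delta Live Tables pipeline can be declared in SQL, with Auto Loader (`cloud_files`) feeding the bronze layer and an expectation guarding the silver layer:

```sql
-- Bronze: incrementally ingest raw JSON files with Auto Loader
CREATE OR REFRESH STREAMING LIVE TABLE orders_bronze
AS SELECT * FROM cloud_files('/Volumes/demo/raw/orders/', 'json');

-- Silver: cleaned records, with a data-quality expectation that drops bad rows
CREATE OR REFRESH STREAMING LIVE TABLE orders_silver (
  CONSTRAINT valid_order_id EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW
)
AS SELECT order_id, CAST(amount AS DOUBLE) AS amount, order_ts
FROM STREAM(LIVE.orders_bronze);
```

Because DLT reads the `LIVE.` references, it infers the dependency graph between the tables and runs them in the correct order automatically.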
Data Management and Governance with Unity Catalog
In this course, you'll learn about data management and governance using Databricks Unity Catalog. It covers foundational concepts of data governance, complexities in managing data lakes, Unity Catalog's architecture, security, administration, and advanced topics like fine-grained access control, data segregation, and privilege management.
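As a taste of Unity Catalog's governance model (a sketch under illustrative names; `main`, `sales`, and the `analysts` group are not from the course materials), access flows through the three-level namespace `catalog.schema.table`, and each level must be granted explicitly:

```sql
-- Navigate the metastore's three-level namespace
USE CATALOG main;
SHOW SCHEMAS;

-- Privileges are inherited downward, but USE is required at each level
GRANT USE CATALOG ON CATALOG main TO `analysts`;
GRANT USE SCHEMA ON SCHEMA main.sales TO `analysts`;
GRANT SELECT ON TABLE main.sales.orders TO `analysts`;
```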
* This course seeks to prepare students for the Databricks Certified Data Engineer Associate exam, and provides the requisite knowledge to take the course Advanced Data Engineering with Databricks.
Objectives
Four half-day modules make up this course. The learning objectives for each module are listed below:
Data Ingestion with Delta Lake
- Navigate and use the Databricks Data Science and Engineering Workspace for code development tasks.
- Utilize Spark SQL and PySpark to extract data from various sources.
- Apply common data cleaning transformations using Spark SQL and PySpark.
- Manipulate complex data structures with advanced functions in Spark SQL and PySpark.
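For a sense of what "advanced functions" means here (an illustrative sketch, not course material; the `orders` table and its `items` array column are assumed), Spark SQL's higher-order functions operate directly on nested array data:

```sql
-- Per-element computation, filtering, and reduction over an array column
SELECT
  order_id,
  TRANSFORM(items, i -> i.price * i.qty)              AS line_totals,     -- map
  FILTER(items, i -> i.qty > 1)                       AS multi_qty_items, -- filter
  AGGREGATE(TRANSFORM(items, i -> i.price * i.qty),
            0D, (acc, x) -> acc + x)                  AS order_total      -- reduce
FROM orders;
```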
Deploy Workloads with Databricks Workflows
- Orchestrate tasks with Databricks Workflow Jobs.
- Use Databricks SQL for on-demand queries.
- Configure and schedule dashboards and alerts to reflect updates to production data pipelines.
Build Data Pipelines with Delta Live Tables
- Describe how Delta Live Tables tracks data dependencies in data pipelines.
- Configure and run data pipelines using the Delta Live Tables UI.
- Use Python or Spark SQL to define data pipelines that ingest and process data through multiple tables in the lakehouse using Auto Loader and Delta Live Tables.
- Use APPLY CHANGES INTO syntax to process Change Data Capture feeds.
- Review event logs and data artifacts created by pipelines and troubleshoot DLT syntax.
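To show the shape of the CDC syntax named above (a minimal sketch with illustrative table and column names, not course material), APPLY CHANGES INTO merges a change feed into a target table, using a sequencing column to order out-of-order events:

```sql
-- Target table for the merged, current-state records
CREATE OR REFRESH STREAMING LIVE TABLE customers_silver;

-- Merge the CDC feed: upsert by key, delete on DELETE operations
APPLY CHANGES INTO LIVE.customers_silver
FROM STREAM(LIVE.customers_cdc_bronze)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = 'DELETE'
SEQUENCE BY sequence_num
COLUMNS * EXCEPT (operation, sequence_num)
STORED AS SCD TYPE 1;
```

STORED AS SCD TYPE 1 keeps only the latest state per key; SCD TYPE 2 would instead retain history with validity ranges.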
Data Management and Governance with Unity Catalog
- Explain the importance of data governance and challenges in traditional data lake environments.
- Differentiate between managed and external tables, and evaluate the architecture of Unity Catalog.
- Utilize SQL commands to navigate and inspect metastore components, and assess data segregation strategies.
- Identify query lifecycle steps and Databricks roles for effective data governance and security within Unity Catalog.
- Implement privilege assignments and fine-grained access control strategies using SQL syntax and dynamic views in Databricks.
- Assess the effectiveness and implications of different privilege scenarios, inheritance models, and access control mechanisms in Unity Catalog.
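As an illustration of the dynamic-view technique referenced above (a sketch with assumed table, column, and group names, not course material), a view can mask columns and filter rows based on the querying user's group membership:

```sql
CREATE OR REPLACE VIEW main.sales.orders_redacted AS
SELECT
  order_id,
  CASE WHEN is_account_group_member('admins') THEN customer_email
       ELSE 'REDACTED' END AS customer_email,   -- column masking
  amount
FROM main.sales.orders
WHERE is_account_group_member('admins')         -- row-level filtering
   OR region = 'US';
```

Granting SELECT on the view, rather than on the underlying table, enforces these rules for every consumer.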
Prerequisites
- Beginner familiarity with basic cloud concepts (virtual machines, object storage, identity management)
- Ability to perform basic code development tasks (create compute, run code in notebooks, use basic notebook operations, import repos from git, etc.)
- Intermediate familiarity with basic SQL concepts (CREATE, SELECT, INSERT, UPDATE, DELETE, WHERE, GROUP BY, JOIN, etc.), including aggregate functions, filters and sorting, indexes, tables, and views.
- Basic knowledge of Python programming, the Jupyter notebook interface, and PySpark fundamentals.
Course outline
- Delta Lake and Data Objects
- Set Up and Load Delta Tables
- Basic Transformations
- Load Data Lab
- Cleaning Data
- Complex Transformations
- SQL UDFs
- Advanced Delta Lake Features
- Manipulate Delta Tables Lab
- Introduction to Workflows
- Jobs Compute
- Scheduling Tasks with the Jobs UI
- Workflows Lab
- Jobs Features
- Explore Scheduling Options
- Conditional Tasks and Repairing Runs
- Modular Orchestration
- Databricks Workflows Best Practices
- The Medallion Architecture
- Introduction to Delta Live Tables
- Using the Delta Live Tables UI
- SQL Pipelines
- Python Pipelines
- Delta Live Tables Running Modes
- Pipeline Results
- Pipeline Event Logs
- Optional - Land New Data
- Data Governance Overview
- Demo: Populating the Metastore
- Lab: Navigating the Metastore
- Organization and Access Patterns
- Demo: Upgrading Tables to Unity Catalog
- Security and Administration in Unity Catalog
- Databricks Marketplace Overview
- Privileges in Unity Catalog
- Demo: Controlling Access to Data
- Fine-Grained Access Control
- Lab: Migrating and Managing Data in Unity Catalog