
Machine Learning Model Deployment

Learning Track

Machine Learning and AI

Delivery methods

On-Site, Virtual

Duration

1 day

This course introduces three primary machine learning model deployment strategies and illustrates how to implement each on Databricks. After covering the fundamentals of model deployment, the course turns to batch inference, with hands-on demonstrations and labs on using a model for batch scoring and the related performance-optimization considerations. The second part covers pipeline deployment, and the final segment focuses on real-time deployment, in which participants deploy models with Model Serving and query the serving endpoint for real-time inference.
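For orientation, the sketch below shows the kind of batch-inference pattern the course covers: loading a registered MLflow model as a Spark UDF and scoring a Delta table. It is a minimal sketch, not course material; the model URI, catalog, and table names are illustrative placeholders.

```python
import mlflow.pyfunc
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical registered-model URI; adjust to your own model registry.
model_uri = "models:/churn_classifier/Production"

# Load the registered model as a Spark UDF for distributed batch scoring.
predict_udf = mlflow.pyfunc.spark_udf(spark, model_uri=model_uri, result_type="double")

# Hypothetical input table; score new records and persist predictions as a Delta table.
features = spark.table("main.demo.new_customers")
feature_cols = [c for c in features.columns if c != "customer_id"]

scored = features.withColumn("prediction", predict_udf(*feature_cols))
scored.write.mode("overwrite").saveAsTable("main.demo.customer_predictions")
```

Loading the model as a Spark UDF lets scoring scale across the cluster, which relates to the performance-optimization considerations discussed in the batch-inference section.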

Objectives

After completing this course, you should be able to:

  • Define batch, pipeline, and real-time deployment methods and identify the scenarios for which each is best suited.

  • Discuss the advantages and limitations of each deployment method.

  • Describe MLflow’s deployment features.

  • Perform batch, pipeline, and real-time inference using related Databricks features such as Delta Live Tables (DLT) and Model Serving; a minimal serving-endpoint request is sketched after this list.

  • Describe the features and benefits of Databricks Model Serving.
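As a preview of the real-time objective above, the sketch below queries a Model Serving endpoint over its REST invocations API. It is a minimal sketch under assumptions: the endpoint name, feature fields, and environment variables are hypothetical placeholders for your own workspace configuration.

```python
import os
import requests

# Hypothetical workspace URL, token, and endpoint name; set these for your environment.
workspace_url = os.environ["DATABRICKS_HOST"]   # e.g. "https://<workspace>.cloud.databricks.com"
token = os.environ["DATABRICKS_TOKEN"]          # personal access token or service principal token
endpoint_name = "churn-model-endpoint"

# Model Serving endpoints accept JSON records and return predictions synchronously.
payload = {
    "dataframe_records": [
        {"tenure_months": 12, "monthly_charges": 70.5, "contract_type": "month-to-month"}
    ]
}

response = requests.post(
    f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```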

Prerequisites

At a minimum, you should be familiar with the following before taking this course:

  • Knowledge of fundamental machine learning models

  • Knowledge of model lifecycle and MLflow components

  • Familiarity with Databricks workspace and notebooks

  • Intermediate-level knowledge of Python

Course outline

  • Model Deployment Strategies
  • Model Deployment with MLflow
  • Introduction to Batch Deployment
  • Introduction to Pipeline Deployment
  • Introduction to Real-time Deployment
  • Databricks Model Serving
