Vertex AI and Generative AI Security
Contact us to book this course
Learning Track
Generative AI
Delivery methods
On-Site, Virtual
Duration
2 days
This course is designed to empower your organization to fully harness the transformative potential of Google’s Vertex AI and generative AI (gen AI) technologies, with a strong emphasis on security. Tailored for AI practitioners and security engineers, it provides targeted knowledge and hands-on skills to navigate and adopt AI safely and effectively. Participants will gain practical insights and develop a security-conscious approach, ensuring a secure and responsible integration of gen AI within their organization.
Course objectives
- Establish foundational knowledge of Vertex AI and its security challenges.
- Implement identity and access control measures to restrict access to Vertex AI resources.
- Configure encryption strategies and protect sensitive information.
- Enable logging, monitoring, and alerting for real-time security oversight of Vertex AI operations.
- Identify and mitigate unique security threats associated with generative AI.
- Apply testing techniques to validate and secure generative AI model responses.
- Implement best practices for securing data sources and responses within Retrieval-Augmented Generation (RAG) systems.
- Establish foundational knowledge of AI Safety.
Prerequisites
Fundamental knowledge of machine learning, in particular generative AI, and basic understanding of security on Google Cloud.
Audience
AI practitioners, security professionals, and cloud architects
Course outline
- Review Google Cloud Security fundamentals.
- Establish a foundational understanding of Vertex AI.
- Enumerate the security concerns related to Vertex AI features and components.
- Lab: Vertex AI: Training and Serving a Custom Model
- Control access with Identity Access Management.
- Simplify permission using organization hierarchies and policies.
- Use service accounts for least privileged access.
- Lab: Service Accounts and Roles: Fundamentals
- Configure encryption at rest and in-transit.
- Encrypt data using customer-managed encryption keys.
- Protect sensitive data using the Data Loss Prevention service.
- Prevent exfiltration of data using VPC Service Controls.
- Architect systems with disaster recovery in mind.
- Lab: Getting Started with Cloud KMS
- Lab: Creating a De-identified Copy of Data in Cloud Storage
- Deploy ML models using model endpoints.
- Secure model endpoints.
- Lab: Configuring Private Google Access and Cloud NAT
- Write to and analyze logs.
- Set up monitoring and alerting.
- Identify security risks specific to LLMs and gen AI applications.
- Understand methods for mitigating prompt hacking and injection attacks.
- Explore the fundamentals of securing generative AI models and applications.
- Introduce fundamentals of AI Safety.
- Lab: Safeguarding with Vertex AI Gemini API
- Lab: Gen AI and LLM Security for Developers
- Implement best practices for testing model responses.
- Apply techniques for improving response security in gen AI applications.
- Lab: Measure Gen AI Performance with the Generative AI Evaluation Service
- Lab: Unit Testing Generative AI Applications
- Understand RAG architecture and security implications.
- Implement best practices for grounding and securing data sources in RAG systems.
- Lab: Multimodal Retrieval Augmented Generation (RAG) Using the Vertex AI Gemini API
- Lab: Introduction to Function Calling with Gemini