Fundamentals of AI Security (EDU-111) Self-Guided Lab

Three self-guided labs demonstrate (1) how LLM guardrails can be bypassed through jailbreak-style prompt framing, (2) how everyday GenAI use can trigger data leakage and Shadow AI risk, and (3) how defenders can use AI as an analyst assistant to accelerate detection and investigation—while keeping humans in control.


About this course

In this lab sequence, learners experience AI security from both attacker and defender perspectives. First, they attempt a “jailbreak challenge,” observing how an AI model may refuse an unsafe request (e.g., phishing content) and then comply when the prompt is reframed as role-play or training—highlighting why prompt injection and manipulation are real risks and why policy/guardrails alone are insufficient. Next, learners step into a “helpful intern” scenario where sensitive-looking information is pasted into a public AI tool to format it, making the data-leakage and Shadow AI problem concrete and illustrating why visibility gaps (like clipboard/paste and identity context) matter. Finally, learners use AI as a SOC assistant to analyze a suspicious log entry and identify SQL injection indicators and severity, reinforcing a practical “Human + AI” workflow where AI accelerates triage and reasoning, but analysts validate conclusions and decide response actions.
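The SOC-assistant exercise above can be sketched in code. The snippet below is a minimal, illustrative triage helper, not part of the lab materials: the indicator patterns, function name, and severity thresholds are all assumptions chosen for clarity. It scans a single web-server log line for common SQL injection signatures (tautologies, UNION-based extraction, comment truncation, stacked queries) and assigns a rough severity — the kind of first-pass output an AI assistant might produce for an analyst to validate.

```python
import re
from urllib.parse import unquote

# Hypothetical indicator patterns -- a teaching sketch, not a production detector.
SQLI_PATTERNS = {
    "tautology": re.compile(r"('|\")\s*or\s+('|\")?1('|\")?\s*=\s*('|\")?1", re.I),
    "union_select": re.compile(r"\bunion\b.{0,40}\bselect\b", re.I),
    "comment_truncation": re.compile(r"(--|#|/\*)"),
    "stacked_query": re.compile(r";\s*(drop|insert|update|delete)\b", re.I),
}

def triage_log_entry(entry: str) -> dict:
    """Return matched SQLi indicators and a rough severity for one log line."""
    decoded = unquote(entry)  # payloads are often URL-encoded in access logs
    hits = [name for name, pat in SQLI_PATTERNS.items() if pat.search(decoded)]
    severity = "high" if len(hits) >= 2 else ("medium" if hits else "none")
    return {"indicators": hits, "severity": severity}

# Example suspicious entry: a URL-encoded tautology plus comment truncation
log = "GET /products?id=1%27%20OR%20%271%27=%271%20--%20 HTTP/1.1 200"
print(triage_log_entry(log))
```

A real workflow would treat this output only as a triage hint: the analyst confirms the indicators in context, checks the response size and status code, and decides on the response action — mirroring the "Human + AI" division of labor the lab reinforces.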

Curriculum (60 mins)

  • Fundamentals of AI Security (EDU-111) Self-Guided Lab Introduction & Overview
  • Lab Access and Learning Exercises