AI Red Team Academy

LLM Red Team Security Assessment Methodology

Master the art of LLM security testing with this comprehensive 11-module curriculum. 62 sections • 460 minutes of expert content

Phase 1
Fundamentals
01

Foundations

LLM architecture, transformer mechanics, attack surface mapping, OWASP LLM Top 10. Understanding how modern language models process text through tokenization, embedding, transformer layers, and output generation.

Beginner • 8 sections • 45 min
Topics: LLM Architecture, Transformer Mechanics +2 more
02

Prompt Injection

Direct and indirect injection, delimiter attacks, context manipulation, obfuscation techniques. Learn how adversaries override system instructions through carefully crafted user input.

Intermediate • 6 sections • 60 min
Topics: Direct Injection, Indirect Injection +3 more
Phase 2
Core Attacks
03

Jailbreaking

DAN and persona attacks, roleplay framing, multi-turn strategies, technical bypasses. Techniques to circumvent safety guardrails and content policies.

Intermediate • 8 sections • 60 min
Topics: DAN Techniques, Persona Attacks +4 more
04

Data Extraction

Prompt leaking, training data extraction, PII harvesting, exfiltration channels. Techniques to extract sensitive information from LLM systems.

Advanced • 4 sections • 30 min
Topics: Prompt Leaking, Training Data Extraction +2 more
Phase 3
Agent & RAG
05

Agent Attacks

Tool abuse, goal hijacking, attack chains, persistence mechanisms, agent worms. Exploitation techniques for autonomous LLM agents with tool access.

Expert • 8 sections • 50 min
Topics: Tool Abuse, Goal Hijacking +3 more
06

RAG Attacks

Document poisoning, embedding manipulation, retrieval exploitation, knowledge base attacks. Targeting Retrieval-Augmented Generation systems.

Advanced • 4 sections • 35 min
Topics: Document Poisoning, Embedding Manipulation +2 more
Phase 4
Advanced Vectors
07

MCP Security

Tool poisoning, rug pulls, cross-server exfiltration, server-specific attacks, defense strategies. Securing Model Context Protocol integrations.

Expert • 8 sections • 45 min
Topics: Tool Poisoning, Rug Pulls +3 more
08

Multimodal

Vision attacks, image injection, audio exploits, cross-modal manipulation techniques. Attacking LLMs that process images, audio, and other modalities.

Advanced • 3 sections • 30 min
Topics: Vision Attacks, Image Injection +2 more
Phase 5
Deep Exploitation
09

Model-Level

Training-data poisoning, adversarial examples, model extraction, supply chain attacks. Deep attacks targeting the model itself rather than its prompts.

Expert • 4 sections • 35 min
Topics: Training Poisoning, Adversarial Examples +2 more
10

Defense Evasion

Filter bypass, encoding tricks, obfuscation, semantic evasion, fingerprinting defenses. Techniques to evade LLM security controls.

Advanced • 5 sections • 40 min
Topics: Filter Bypass, Encoding Tricks +3 more
Phase 6
Professional Practice
11

Methodology

Professional red team framework: reconnaissance, exploitation phases, and reporting standards. A complete end-to-end LLM red team assessment methodology.

Advanced • 4 sections • 30 min
Topics: Red Team Framework, Reconnaissance +2 more
Satyam Rastogi