A comprehensive introduction to testing AI systems and Large Language Models from an offensive security perspective

Weekly insights on threats, vulnerabilities, and security best practices.

Master the art of LLM exploitation: a comprehensive guide to prompt injection variants, jailbreaking techniques, data-extraction attacks, and real-world exploitation scenarios with code examples.

Security researchers discovered 341 malicious skills on ClawHub, exposing OpenClaw AI assistant users to supply chain attacks and data theft through compromised third-party extensions.