How AI assistant marketplaces are becoming the new attack vector

Weekly insights on threats, vulnerabilities, and security best practices.

Security researchers discovered 341 malicious skills on ClawHub, exposing OpenClaw AI assistant users to supply chain attacks and data theft through compromised third-party extensions.

Learn the fundamentals of AI/LLM security assessment, including attack surfaces, threat models, and the emerging discipline of AI red teaming that every penetration tester needs to master.

A critical CVE-2026-25253 vulnerability in OpenClaw enables remote code execution through malicious links, highlighting the growing threat of sophisticated one-click attacks targeting modern applications.