Secure Prompt
The authoritative newsletter on AI security, threats, and defense frameworks.
AI Security Shakeups: M&A moves, active threats, and a critical LLM vulnerability.
This week’s Secure Prompt: AI-Orchestrated Cyber Espionage: Inside GTG-1002.
A year-defining week for AI security, a thank-you to our first subscribers, and a short holiday pause before we return in January.
The AI security incidents, vulnerabilities, research, and regulations that defined 2025, curated in one place.
I Built My Own AI Agent This Week. Here's What I Learned (The Hard Way).
This week’s Secure Prompt: OpenClaw flaws expose agent runtimes, prompt injection moves into real-world web content, and AI security shifts from model misuse to operational control.
This week’s Secure Prompt: AI-enabled attacks surge 89%, GenAI tools exploited across 90+ organizations, and GitHub Copilot prompt injection enables remote code execution.
This week’s Secure Prompt: mass Claude model distillation exposed, prompt injection moves operational via supply-chain compromise, and AI-generated passwords proven predictably weak.
This week’s Secure Prompt: OpenClaw Control UI hijacking enables agent takeover, 0-click RCE hits AI desktop connectors, and a CVSS 9.8 RAG supply-chain flaw exposes autonomous workflows.
This week’s Secure Prompt: vLLM RCE exposes millions of AI servers, OpenAI declares prompt injection "unfixable," and AWS breached via LLM in 8 minutes.
This week’s Secure Prompt: Hijacked LLMs, fake Moltbot extensions, MCP auth gaps, and why agent security is breaking faster than defenses.
This week’s Secure Prompt: DeepSeek political triggers, AI attack automation, sandbox escapes, poisoning-at-scale, deepfakes-as-a-service, and more.
This week’s Secure Prompt: HackedGPT, Whisper Leak, Claude flaws, AI ransomware, deepfake espionage, and more.