Indirect prompt injection is the most widespread and serious vulnerability in AI agents today, not just a theoretical risk. Research shows attacks can transfer across models and behaviors, revealing a fundamental weakness in how agents interpret context. More capable models aren’t safer, high performance often comes with equally high vulnerability. Attacks are especially dangerous because […]
China’s National Computer Network Emergency Response Technical Team warned that the open-source AI agent OpenClaw has weak default security settings that attackers could exploit to gain system control. Attackers can use prompt injection, embedding malicious instructions in web pages to trick the AI into leaking sensitive data. Researchers showed that features like link previews in […]
On 23 February 2026, a coalition led by the Global Privacy Assembly warned about AI systems generating realistic images and videos of individuals without consent. They highlighted rising harms such as non-consensual intimate imagery, defamation, cyberbullying, and risks to children. Organizations are urged to follow privacy laws, build strong safeguards, ensure transparency, and provide fast […]
Switzerland rejected Palantir after a technical review found data leakage cannot be reliably prevented—an architectural, not legal, flaw. The concern isn’t analytics power, but loss of control over data flows, updates, access, and revocation. Germany faces a contradiction: promoting digital sovereignty while using Palantir in several federal states. Bavaria’s Palantir-based VeRA system triggered legal challenges, […]
Simple prompt injections can trick LLM agents into exposing sensitive personal data. Even with safeguards, attackers extract details like balances, transactions, or identifiers. Such attacks succeed in ~20% of cases and degrade agent performance by 15–50%. Defensive measures exist but remain incomplete, leaving users exposed. Bottom line: data sovereignty requires stronger guardrails. Trusting LLMs “as […]

