AI Hack

Why AI Agents are easier to hack than you think

Indirect prompt injection is the most widespread and serious vulnerability in AI agents today, not just a theoretical risk. Research shows attacks can transfer across models and behaviors, revealing a fundamental weakness in how agents interpret context. More capable models aren’t safer, high performance often comes with equally high vulnerability. Attacks are especially dangerous because […]

AI Malware Vulnerability

OpenClaw AI security flaws expose systems to data theft

China’s National Computer Network Emergency Response Technical Team warned that the open-source AI agent OpenClaw has weak default security settings that attackers could exploit to gain system control. Attackers can use prompt injection, embedding malicious instructions in web pages to trick the AI into leaking sensitive data. Researchers showed that features like link previews in […]

AI Privacy

Global privacy alarm raised for AI without consent

On 23 February 2026, a coalition led by the Global Privacy Assembly warned about AI systems generating realistic images and videos of individuals without consent. They highlighted rising harms such as non-consensual intimate imagery, defamation, cyberbullying, and risks to children. Organizations are urged to follow privacy laws, build strong safeguards, ensure transparency, and provide fast […]

AI Privacy

When Palantir-AI becomes a sovereignty risk

Switzerland rejected Palantir after a technical review found data leakage cannot be reliably prevented—an architectural, not legal, flaw. The concern isn’t analytics power, but loss of control over data flows, updates, access, and revocation. Germany faces a contradiction: promoting digital sovereignty while using Palantir in several federal states. Bavaria’s Palantir-based VeRA system triggered legal challenges, […]

AI Data Breach Privacy

How LLMs leak your data while prompting

Simple prompt injections can trick LLM agents into exposing sensitive personal data. Even with safeguards, attackers extract details like balances, transactions, or identifiers. Such attacks succeed in ~20% of cases and degrade agent performance by 15–50%. Defensive measures exist but remain incomplete, leaving users exposed. Bottom line: data sovereignty requires stronger guardrails. Trusting LLMs “as […]