AI Privacy

When Palantir-AI becomes a sovereignty risk

Switzerland rejected Palantir after a technical review found data leakage cannot be reliably prevented—an architectural, not legal, flaw. The concern isn’t analytics power, but loss of control over data flows, updates, access, and revocation. Germany faces a contradiction: promoting digital sovereignty while using Palantir in several federal states. Bavaria’s Palantir-based VeRA system triggered legal challenges, […]

AI Data Breach Privacy

How LLMs leak your data while prompting

Simple prompt injections can trick LLM agents into exposing sensitive personal data. Even with safeguards, attackers extract details like balances, transactions, or identifiers. Such attacks succeed in ~20% of cases and degrade agent performance by 15–50%. Defensive measures exist but remain incomplete, leaving users exposed. Bottom line: data sovereignty requires stronger guardrails. Trusting LLMs “as […]

AI Privacy

When privacy becomes training data

Researchers found millions of passports, credit cards, résumés, and faces in DataComp CommonPool, a massive AI training dataset scraped from the web. Auditing just 0.1% revealed hundreds of millions of likely PII (personally identifiable information) items, including sensitive job and health details. Despite face-blurring tools, researchers estimate 102 million faces were missed, and metadata/captions still […]