Full article: https://securitybrief.asia/story/meta-ai-agent-exposes-sensitive-data-in-internal-leak

Gidi Cohen, chief executive and co-founder of security firm Bonfy.AI, said the incident showed the consequences of deploying agents without a persistent understanding of data sensitivity and access rights.

"Meta's incident is exactly what happens when you let agents loose on sensitive data without any real data-centric guardrails. This wasn't some exotic AGI failure. It was a very simple pattern: an engineer asked an internal agent for help, the agent produced a 'reasonable' plan, and that plan quietly exposed a huge amount of internal and user data to people who were never supposed to see it."

Cohen continued, "The problem is that neither the engineer nor the agent had any persistent notion of who actually should see this data beyond whatever happened to sit in a narrow context window at that moment. Traditional controls don't help much here. Endpoint DLP, CASB, browser controls, even basic role-based permissions - none of them are watching the actual content as it moves through an agent's reasoning steps and tool calls, especially when the agent is running as a system service in some framework."
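The kind of "data-centric guardrail" Cohen describes can be pictured as a check that travels with every tool call an agent makes, rather than sitting at the endpoint or network edge. The sketch below is a hypothetical illustration under assumed names (`SensitivityPolicy`, `guarded_call`); it is not part of any real agent framework or Bonfy.AI product, just a minimal example of enforcing "who should actually see this data" at the moment content moves.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a data-centric guardrail around agent tool calls.
# All names here are illustrative, not from any real framework.

@dataclass
class SensitivityPolicy:
    # Map each data source to the set of roles allowed to see its content.
    allowed_roles: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, source: str, caller_role: str) -> bool:
        return caller_role in self.allowed_roles.get(source, set())

def guarded_call(policy, source, caller_role, tool_fn, *args):
    """Run a tool call only if the caller's role may see this source's data.

    Because the check is applied per call, the agent cannot quietly
    forward content into a context the policy never approved, even if
    that context was never mentioned in its prompt window.
    """
    if not policy.permits(source, caller_role):
        return {"error": f"access to '{source}' denied for role '{caller_role}'"}
    return {"data": tool_fn(*args)}

# Usage: an engineer's agent may read build logs, but not user records.
policy = SensitivityPolicy({
    "build_logs": {"engineer", "sre"},
    "user_records": {"privacy_officer"},
})

print(guarded_call(policy, "build_logs", "engineer", lambda: "log lines..."))
print(guarded_call(policy, "user_records", "engineer", lambda: "PII..."))
```

The design point, in Cohen's terms, is that the access decision is persistent and attached to the data source itself, instead of depending on whatever context happened to be in the agent's window when the plan was generated.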