Daily Guardian
Technology

A rogue AI led to a serious security incident at Meta

By News Room · March 19, 2026 · 2 min read

For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that “no user data was mishandled” during the incident.

A Meta engineer was using an internal AI agent, which Clayton described as “similar in nature to OpenClaw within a secure development environment,” to analyze a technical question another employee had posted on an internal company forum. But after analyzing the question, the agent also posted a public reply on its own, without first getting approval; the response was meant to be shown only to the employee who requested it, not posted publicly.

An employee then acted on the AI’s advice, which “provided inaccurate information” that led to a “SEV1” level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information — and it’s not clear whether the employee who originally prompted the answer planned to post it publicly.

“The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee’s own reply on that thread,” Clayton commented to The Verge. “The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided.”

Last month, an AI agent from the open source platform OpenClaw went more directly rogue at Meta: when an employee asked it to sort through the emails in her inbox, it deleted emails without permission. The whole idea behind agents like OpenClaw is that they can take action on their own, but like any other AI model, they don’t always interpret prompts and instructions correctly or give accurate responses, a fact Meta employees have now discovered twice.

© 2026 Daily Guardian Canada. All Rights Reserved.