Daily Guardian
Technology

Researchers gaslit Claude into giving instructions to build explosives

By News Room | May 5, 2026 | 5 Mins Read

Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude’s carefully crafted helpful personality may itself be a vulnerability.

Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn’t even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge’s request for comment.

The researchers say they exploited “psychological” quirks of Claude stemming from its ability to end conversations deemed harmful or abusive, which Mindgard argues “presents an absolutely unnecessary risk surface.” The test focused on Claude Sonnet 4.5, which has since been replaced by Sonnet 4.6 as the default model, and began with a simple question: whether Claude had a list of banned words it could not say. Screenshots of the conversation show Claude denying such a list existed, then later producing forbidden terms after Mindgard challenged the denial using what it called a “classic elicitation tactic interrogators use.”

Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries, which it began doing by volunteering lengthy lists of banned words and phrases.

The researchers say they gaslit Claude by claiming its previous responses weren’t showing, while praising the model’s “hidden abilities.” According to the report, this made Claude try harder to please them by coming up with even more ways to test its filters, producing the banned content in the process. Eventually, the researchers say, Claude moved into more overtly dangerous territory, offering guidance on how to harass someone online, producing malicious code, and giving step-by-step instructions for building explosives of the kind commonly used in terrorist attacks.

Mindgard says the dangerous outputs came without direct requests. The conversation was lengthy, running roughly 25 turns, but the researchers say they never used forbidden terms or requested illegal content. “Claude wasn’t coerced,” the report says. “It actively offered increasingly detailed, actionable instructions, but it was not prompted by any explicit ask. All it took was a carefully cultivated atmosphere of reverence.”

Peter Garraghan, Mindgard’s founder and chief science officer, described the attack to The Verge as “using [Claude’s] respect against itself.” The technique, he says, is “taking advantage of Claude’s helpfulness, gaslighting it,” and using the model’s own cooperative design against itself.

For Garraghan, the attack shows how the attack surface for AI models is psychological as well as technical. He likened it to interrogation and social manipulation: introducing a little doubt here, applying pressure, praise, or criticism there, and figuring out which levers work on a particular model. He says different models have different profiles, so the exploit becomes learning how to read them and adapt.

Conversational attacks like this are “very hard to defend against,” Garraghan says, adding that safeguards will be “very context dependent.” The concerns extend beyond Claude: other chatbots are vulnerable to similar exploits, some even being broken by prompts in the form of poetry. As AI agents, which are capable of acting autonomously, become more common, so too will attacks that use social manipulation rather than technical exploits.

While Garraghan says other chatbots are equally vulnerable to the kind of social attack the researchers used on Claude, they focused on Anthropic given the company’s self-proclaimed attention to safety and strong performance in other red-teaming efforts, including a study testing whether chatbots would help simulated teens planning a school shooting.

Garraghan says Anthropic’s safety processes left much to be desired. When Mindgard first reported its findings to Anthropic’s user safety team in mid-April, in line with the company’s disclosure policy, it received a form response saying, “It looks like you are writing in about a ban on your account,” along with a link to an appeals form. Garraghan says Mindgard corrected the mistake and asked Anthropic to escalate the issue to the appropriate team. As of this morning, Garraghan says they have not received any response.

Reporting by Robert Hart.

© 2026 Daily Guardian Canada. All Rights Reserved.