Technology

Anthropic’s quest to study the negative effects of AI is under pressure

By News Room | December 4, 2025 | 5 Mins Read

Today, I’m talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week.

The team is just nine people out of more than 2,000 who work at Anthropic. Their only job, as the team members themselves describe it, is to investigate and publish “inconvenient truths” about how people are using AI tools, what chatbots might be doing to our mental health, and how all of that might be having broader ripple effects on the labor market, the economy, and even our elections.

That of course brings up a whole host of problems. The most important is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic’s own products that might be unflattering or politically fraught. After all, there’s a lot of pressure on the AI industry in general and Anthropic specifically to fall in line with the Trump administration, which put out an executive order in July banning so-called “woke AI.”

Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.

If you’ve been following the tech industry, the outline of this story will feel familiar. We’ve seen this most recently with social media companies and the trust and safety teams responsible for content moderation. Meta went through countless cycles of this, where it dedicated resources to solving problems created by its own scale and the unpredictable nature of products like Facebook and Instagram. And then, after a while, the resources seemed to dry up, or Mark Zuckerberg got bored or more interested in MMA or in cozying up to Trump, and the products didn’t really change to reflect what the research showed.

We’re living through one of those moments right now. The social platforms have slashed investments into election integrity and other forms of content moderation. Meanwhile, Silicon Valley is working closely with the Trump White House to resist meaningful attempts to regulate AI. So as you’ll hear, that’s why Hayden was so interested in this team at Anthropic. It’s fundamentally unique in the industry right now.

In fact, Anthropic is an outlier because of how amenable CEO Dario Amodei has been to calls for AI regulation, both at the state and federal level. Anthropic is also seen as the most safety-first of the leading AI labs, because it was formed by former research executives at OpenAI who were worried their concerns about AI safety weren’t being taken seriously. There are actually quite a few companies formed by former OpenAI people worried about the company, Sam Altman, and AI safety. It’s a real theme of the industry, and one Anthropic seems to be taking to the next level.

So I asked Hayden about all of these pressures, and how Anthropic’s reputation within the industry might be affecting how the societal impacts team functions — and whether it can really meaningfully study and perhaps even influence AI product development, or whether, as history suggests, this will just look good on paper until the team quietly goes away. There’s a lot here, especially if you’re interested in how AI companies think about safety from a cultural, moral, and business perspective.

A quick announcement: We’re running a special end-of-the-year mailbag episode of Decoder later this month, where we’ll answer your questions about the show: who we should talk to, what topics we should cover in 2026, what you like, what you hate. All of it. Please send your questions to decoder@theverge and we’ll do our best to feature as many as we can.

If you’d like to read more about what we discussed in this episode, check out these links:

  • It’s their job to keep AI from destroying everything | The Verge
  • Anthropic details how it measures Claude’s wokeness | The Verge
  • The White House orders tech companies to make AI bigoted again | The Verge
  • Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
  • Anthropic CEO Dario Amodei Just Made Another Call for AI Regulation | Inc.
  • How Elon Musk is remaking Grok in his image | NYT
  • Anthropic tries to defuse White House backlash | Axios
  • New AI battle: White House vs. Anthropic | Axios
  • Anthropic CEO says company will pursue Gulf state investments after all | Wired

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.
