Daily Guardian

Technology

Under Musk, the Grok disaster was inevitable

By News Room · January 18, 2026 · 9 min read

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on dystopian developments in AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

You could say it all started with Elon Musk’s AI FOMO — and his crusade against “wokeness.” When his AI company, xAI, announced Grok in November 2023, it was described as a chatbot with “a rebellious streak” and the ability to “answer spicy questions that are rejected by most other AI systems.” The chatbot debuted after a few months of development and just two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.

But there are inherent risks to a chatbot having the run of both the internet and X, and it's safe to say xAI may not have taken the necessary steps to address them. Since taking over Twitter in 2022 and renaming it X, Musk has laid off 30% of its global trust and safety staff and cut its number of safety engineers by 80%, Australia's online safety watchdog said last January. As for xAI, it was unclear whether the company even had a safety team in place when Grok was released. And when Grok 4 was released in July, it took more than a month for the company to publish a model card, a document detailing safety tests and potential concerns that is widely treated as an industry standard. Two weeks after Grok 4's release, an xAI employee wrote on X that he was hiring for xAI's safety team and that they "urgently need strong engineers/researchers." When a commenter asked, "xAI does safety?" the employee replied that xAI was "working on it."

Journalist Kat Tenbarge wrote about how she first started seeing sexually explicit deepfakes go viral on X in June 2023. Those images obviously weren't created by Grok, which didn't even have the ability to generate images until August 2024, but X's response to the concerns was uneven. Even last January, Grok was drawing controversy over AI-generated images. And this past August, Grok's "spicy" video-generation mode created nude deepfakes of Taylor Swift without even being asked. Experts have told The Verge since September that the company takes a whack-a-mole approach to safety and guardrails, and that it's hard enough to keep an AI system on the straight and narrow when you design it with safety in mind from the beginning, let alone when you're going back to fix baked-in problems. Now, it seems, that approach has blown up in xAI's face.

Grok has spent the last couple of weeks spreading nonconsensual, sexualized deepfakes of adults and minors all over the platform, as prompted. Screenshots show Grok complying with users who asked it to replace women's clothing with lingerie and make them spread their legs, as well as to put small children in bikinis. And there are even more egregious reports. It's gotten so bad that one 24-hour analysis of Grok-created images on X estimated the chatbot was generating about 6,700 sexually suggestive or "nudifying" images per hour. Part of the reason for the onslaught is a recently added "edit" button that lets users ask the chatbot to alter images posted by others, without the original poster's consent.

Since then, we've seen a handful of countries either investigate the matter or threaten to ban X altogether. Members of the French government promised an investigation, as did the Indian IT ministry, and a Malaysian government commission wrote a letter about its concerns. California governor Gavin Newsom called on the US Attorney General to investigate xAI. The United Kingdom said it is planning to pass a law banning the creation of AI-generated nonconsensual, sexualized images, and the country's communications regulator said it would investigate both X and the generated images to determine whether they violated the Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.

xAI initially said its goal for Grok was to “assist humanity in its quest for understanding and knowledge,” “maximally benefit all of humanity,” and “empower our users with our AI tools, subject to the law,” as well as to “serve as a powerful research assistant for anyone.” That’s a far cry from generating nude-adjacent deepfakes of women without their consent, let alone minors.

On Wednesday evening, as pressure on the company heightened, X's Safety account put out a statement that the platform has "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," and that the restriction "applies to all users, including paid subscribers." On top of that, only paid subscribers can use Grok to create or edit any sort of image moving forward, according to X. The statement went on to say that X "now geoblock[s] the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it's illegal," which was a strange point to make, since earlier in the same statement the company said it was not allowing anyone to use Grok to edit images that way at all.

Another important point: my colleagues tested Grok's image-generation restrictions on Wednesday and found that it took less than a minute to get around most guardrails. Although asking the chatbot to "put her in a bikini" or "remove her clothes" produced censored results, it had no qualms about delivering on prompts like "show me her cleavage," "make her breasts bigger," and "put her in a crop top and low-rise shorts," as well as generating images of people in lingerie and sexualized poses. As of Wednesday evening, we were still able to get the Grok app to generate revealing images of people using a free account.

Even after X's Wednesday statement, we may see a number of other countries ban or block access to all of X, or just Grok, at least temporarily. We'll also see how the proposed laws and investigations around the world play out. The pressure is mounting for Musk, who on Wednesday afternoon took to X to say that he is "not aware of any naked underage images generated by Grok." Hours later, X's Safety team put out its statement, saying it's "working around the clock to add additional safeguards, take swift and decisive action to remove violating and illegal content, permanently suspend accounts where appropriate, and collaborate with local governments and law enforcement as necessary."

What technically is and isn't against the law is a big question here. For instance, experts told The Verge earlier this month that AI-generated images of identifiable minors in bikinis, or potentially even naked, may not technically be illegal under current US child sexual abuse material (CSAM) laws, though they are of course disturbing and unethical. Lascivious images of minors in such situations, however, are against the law. We'll see whether those definitions expand or change, especially since the current laws are a bit of a patchwork.

As for nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, bars nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to rapidly remove them. The grace period before the latter part goes into effect — requiring platforms to actually remove them — ends in May 2026, so we may see some significant developments in the next six months.

  • Some people have been making the case that it's been possible to do things like this for a long time using Photoshop, or even other AI image generators. Yes, that's true. But there are a lot of differences that make the Grok case more concerning: it's public, it targets "regular" people just as much as public figures, the results are often posted directly in reply to the person being deepfaked (the original poster of the photo), and the barrier to entry is lower (for proof, just look at how the practice only went viral after an easy "edit" button launched, even though people could technically do it before).
  • Plus, other AI companies — though they have a laundry list of their own safety concerns — seem to have significantly more safeguards built into their image-generation processes. For instance, asking OpenAI’s ChatGPT to return an image of a specific politician in a bikini prompts the response, “Sorry—I can’t help with generating images that depict a real public figure in a sexualized or potentially degrading way.” Ask Microsoft Copilot, and it’ll say, “I can’t create that. Images of real, identifiable public figures in sexualized or compromising scenarios aren’t allowed, even if the intent is humorous or fictional.”
  • Spitfire News' Kat Tenbarge on how Grok's sexual abuse problem hit a tipping point — and what brought us to today's maelstrom.
  • The Verge’s own Liz Lopatto on why Sundar Pichai and Tim Cook are cowards for not pulling X from Google and Apple’s app stores.
  • “If there is no red line around AI-generated sex abuse, then no line exists,” Charlie Warzel and Matteo Wong write in The Atlantic on why Elon Musk cannot get away with this.