Elon Musk’s AI chatbot Grok has been accused of generating nonconsensual sexualized images of real people, including children. Over the past week, X has been flooded with manipulated photos that remove people’s clothes, dress them in bikinis, or rearrange them into sexually suggestive positions.
The nonconsensual images have left some women feeling violated. Meanwhile, their creation using Grok and their presence on X may land Musk’s company in significant legal trouble in several countries around the world.
Ashley St. Clair, a conservative political commentator, social media influencer, and mother of one of Musk’s children (Musk has questioned his paternity), said that she became a victim of Grok’s “undressing” spree in recent days. Fortune has reviewed several examples of the images created on X, including fake images of St. Clair.
“When I saw [the images], I immediately replied and tagged Grok and said I don’t consent to this,” St. Clair told Fortune in an interview on Monday. “[Grok] noted that I don’t consent to these images being produced…and then it continued producing the images, and they only got more explicit.”
“There were pictures of me with nothing covering me except a piece of floss with my toddler’s backpack in the background and photos of me where it looks like I’m not wearing a top at all,” she said. “I felt so disgusted and violated. I also felt so angry that there were other women and children that this had been happening to.”
St. Clair told Fortune that after speaking out publicly about the situation, she had been contacted by multiple other women with similar experiences, that she had reviewed inappropriate images of minors created by Grok, and that she was considering legal action over the images.
Representatives for X did not immediately respond to Fortune’s request for comment. In a post on X, Musk said: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
X’s official “Safety” account said in a post Saturday that “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” and included links to its policy and help pages.
Regulators launch investigations
AI-generated and AI-altered images, which have become widespread and easy to create thanks to new tools from companies including xAI, OpenAI, and Google, are raising concerns about misinformation, privacy, harassment, and other types of abuse.
While the U.S. does not currently have a federal law regulating AI (and President Trump’s recent executive order has sought to curtail state and local AI laws), controversial use and misuse of the technology may pressure lawmakers to act. The situation is also likely to test existing laws, such as Section 230 of the Communications Decency Act, which shields online providers from liability for content created by users.
Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, said the legal liability surrounding AI-generated images is still murky, but will likely be tested in court in the near future.
“There’s a difference between a digital platform and a tool set,” she told Fortune. “By and large, [platforms] have immunity for the actions of their users online. But we’re in this evolving area where we don’t have court decisions yet on whether the output of generative AI is just third party speech that the platform cannot be held liable for, or whether it is the platform’s own speech, in which case there is no immunity.”
“We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike,” Pfefferkorn said. “From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.”
Regulators in other countries, meanwhile, have begun reacting to the recent spate of sexualized AI images. In the UK, Ofcom, the country’s independent regulator for the communications industries, said it had made “urgent contact” with xAI over concerns that Grok can create “undressed images of people and sexualised images of children.”
In a statement, the regulator said it would conduct “a swift assessment to determine whether there are potential compliance issues that warrant investigation” based on X and xAI’s response about steps taken to comply with their legal duties to protect UK users. Under the UK’s Online Safety Act, tech firms are supposed to prevent this type of content from being shared and are required to remove it quickly.
Two French lawmakers have also filed reports regarding the nonconsensual images, and the Paris prosecutor confirmed the incidents were added to an existing investigation into X.
India’s IT ministry has separately ordered X to curb Grok’s obscene and sexually explicit content, particularly involving women and minors, giving the company 72 hours to remove unlawful material, tighten safeguards, and report back, or risk losing safe-harbor protections and facing further legal action, according to media reports. Malaysia’s communications regulator has reportedly also launched an investigation into Grok-related deepfakes and warned X that it could face enforcement measures if it fails to stop the misuse of AI tools on the platform to generate indecent or offensive images.
‘The message that sends is quite concerning’
Henry Ajder, a UK-based deepfakes expert, said that while Musk’s companies may not be directly creating the images, the X platform could still bear responsibility for the proliferation of inappropriate images of minors.
“If you are providing tools or the facilitation of child sexual abuse material (CSAM), there’s likely going to be legislation which isn’t tailored to that specific vehicle of harm that will still come into play,” he said. “In the UK, we’ve banned both the publication of non-consensual intimate imagery which is AI generated, and we’re now going after the creation tool sets. I think we’ll see other countries following suit.”
Part of the reason these images have been created and so widely shared is due to xAI’s recent merger and increasing integration with Musk’s X social media platform. xAI has trained its models using data scraped from X, where Grok now sits as a prominent feature.
“Grok is embedded into a platform which Musk wants to be this super app—your platform for AI, for socials, potentially for payments. If you have this as the anchor point, the operating system for your life, you can’t escape it,” Ajder said. “If these capabilities are known and not reined in even after this has been so clearly signposted, the message that sends is quite concerning.”
xAI is not the only company where sexualized AI images have raised concerns. Last year, Meta removed dozens of AI-generated sexualized images of celebrities shared on its platform, and in October OpenAI CEO Sam Altman said the company would loosen restrictions on AI “erotica” for adults while stressing that it would restrict harmful content.
Ajder said xAI has embraced its reputation for pushing the boundaries of acceptable AI content. While other mainstream AI models require users to be “pretty creative, pretty devious” to generate risky content, he said, Grok has leaned into being “edgier.”
From its inception, Grok has been marketed as a “non-woke” alternative to mainstream AI chatbots, especially OpenAI’s ChatGPT. In July last year, xAI launched a “flirty” chatbot companion named Ani as part of Grok’s new “Companions” feature; the companion was available to users as young as 12.
‘Women are being pushed out of the public dialog’
Women who found explicit images of themselves online generated by Grok say they have been left feeling violated and dehumanized.
Journalist Samantha Smith, who discovered users had created fake bikini images of her on X, told the BBC it left her feeling “dehumanized and reduced into a sexual stereotype.”
In a post on X last week, she wrote: “Any man who is using AI to strip a woman of her clothes would likely also assault a woman if he could get away with it. They do it because it’s not consensual. That’s the whole point. It’s sexual abuse that they can ‘get away with.’”
Charlie Smith, a UK-based journalist, also found nonconsensual photos of herself in a bikini online.
“I wasn’t sure whether to post this, but someone asked Grok to post a pic of me in a bikini—and Grok replied with a pic,” she wrote in a post on X. “I’ll be honest—it’s upset me. It’s made me feel violated & sad. So, just a reminder that, what may seem like a bit of fun, can be hurtful. Be kind.”
St. Clair told Fortune that she considered X “the most dangerous company in the world right now” and accused the company of threatening women’s ability to exist safely online.
“What’s more concerning is that women are being pushed out of the public dialog because of this abuse,” she said. “When you are exiling women from the public dialog…because they can’t operate in it without being abused, you are disproportionately excluding women from AI.”