Elon Musk’s social media platform X is under regulatory pressure in Europe and Asia after its Grok artificial intelligence tools were used to generate and share sexualized and manipulated images, including some involving children. The misuse has also prompted artists to come forward with accounts of real-world harm.
The scrutiny began after X launched a feature in late December that lets users edit images from any public post using Grok, the A.I. system made by Musk’s xAI and built into the platform.
India’s Ministry of Electronics and Information Technology said in a four-page letter sent Friday to X, and reviewed by Urgent Matter, that the social media company failed to meet statutory due diligence obligations under Indian law.
It warned that Grok was being misused to generate and disseminate obscene and sexually explicit content targeting women and children.
“Such conduct reflects a serious failure of platform-level safeguards and enforcement mechanisms, and amounts to gross misuse of artificial intelligence technologies in violation of applicable laws,” Joint Secretary for Cyber Laws Ajit Kumar said in the letter.
Kumar told X to conduct a “technical, procedural, and governance-level” review of Grok, remove illegal content right away, take action against users who break the rules, and submit an “Action Taken Report” within 72 hours.
The ministry warned that failure to comply could lead to the loss of intermediary liability protections and further legal consequences under Indian law, according to the letter.
In the United Kingdom, media regulator Ofcom said it has made “urgent contact” with X and xAI and is assessing whether potential compliance failures warrant investigation under British law.
“We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualized images of children,” Ofcom said in a public statement posted on X.
“We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.”
Ofcom said it would conduct a “swift assessment” based on X’s response to determine whether there are potential compliance issues that warrant investigation.
European Union officials have also publicly condemned the content. Thomas Regnier, a spokesperson for the European Commission, said Grok created sexualized content with “child-like images,” and called this output illegal under European law.
“This is not spicy. This is illegal,” Regnier said during a press briefing, Reuters reported Monday. Regnier said the Commission was “very seriously looking” at the matter.
French ministers have reported Grok-related content to prosecutors and flagged potential violations of platform obligations under European digital rules.
X and xAI have not issued a formal press release detailing changes to Grok or the image-editing feature since the backlash intensified. Elon Musk addressed the controversy in a post on X on Saturday.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk wrote in the post.
The statement framed responsibility around user enforcement and did not specify whether X had altered Grok’s design, added new safeguards, or restricted the image-editing feature following regulator complaints.
X’s in-platform materials describe the image-editing feature as allowing users to modify images from public posts and publish altered versions as replies. The company has not publicly said whether users can opt out of having their images edited or whether original posters are notified when their images are altered.
When artists have asked Grok not to use their images, the chatbot has agreed to their requests. But it was not immediately clear whether those agreements actually prevent further alterations from happening.
Such exchanges with Grok show how large language models can generate reassuring or compliant responses in conversation without making clear whether those statements correspond to enforceable technical controls.
Artists have said the feature leaves them exposed to abuse without their consent.
Brazilian musician Julie Yukari said she filed a police report after her image was manipulated by Grok and circulated online. In a thread posted on X, Yukari described the personal toll of the incident and the spread of the altered images beyond the platform.
“I filed a police report about the manipulation of my image by AI,” Yukari wrote in the post. She said the images circulated through messaging apps and reached her family, including her mother, who she said was recovering from cancer surgery.
“People who really have no idea about anything regarding me and about what I go through felt entitled to manipulate my images,” Yukari wrote. “If the image is artificial, the harm is very real.”
Another artist described using copyright law to force the removal of Grok-generated content. In a post on X, illustrator pangpang_19 said they discovered their artwork had been used through Grok without permission and that they filed a DMCA and copyright report to have the content taken down.
“I caught [a user] using my work through Grok today and successfully filed a DMCA/copyright report to have it taken down,” the artist wrote. “Using an artist’s work like this is incredibly rude and inconsiderate.”
It remains unclear how many images have been generated or altered using Grok’s image-editing feature, though real-time searches of the platform show users are still asking the tool to strip women of their clothing.
It is also unclear how many complaints X has received, or how many accounts it has suspended or terminated, in connection with the misuse cited by regulators.
It is also not publicly known whether X has modified the feature since the backlash, restricted its availability in certain countries, or adjusted Grok’s prompt handling or output filters.
Stories like this take time, documents, and a commitment to public transparency. Please support independent arts journalism by subscribing to Urgent Matter.