Lawmakers and regulators in the United Kingdom, the European Union and Southeast Asia have intensified scrutiny of Elon Musk’s X platform and its artificial intelligence image tool Grok after it was used to generate sexualized images of women and children without consent.

Urgent Matter previously reported that artists and online users had documented misuse of Grok’s image tools, prompting early regulatory action, including Ofcom quickly contacting xAI and India’s I.T. ministry issuing enforcement notices.

Now, the chair of the British Parliament’s Science, Innovation and Technology Committee has demanded urgent explanations from the government and Ofcom, warning that gaps in current law have left the public exposed to A.I.-driven abuse.

Chi Onwurah, the committee’s chair, wrote on Friday to Ofcom Chief Executive Melanie Dawes after the regulator contacted xAI, the company behind Grok, over reports that the tool had been used to create sexualized images of women and children. In her letter, Onwurah said existing laws were not keeping pace with the rapid spread of A.I.-generated imagery.

“Reports that xAI’s Grok has been used to create non-consensual sexualised deepfakes on X are extremely alarming,” Onwurah said in a statement released by the committee.


She said her committee warned last year that the Online Safety Act does not clearly regulate generative A.I., leaving British citizens “exposed to online harms while social media companies operate with apparent impunity.”

Onwurah’s letter to Ofcom questioned why the regulator had not opened a formal investigation or taken enforcement action, and whether it had the legal powers to respond.

She noted delays in putting into effect parts of the Data Use and Access Act, passed in July 2025, which would make it a crime to create non-consensual intimate images with A.I. She also mentioned the government’s plan to ban so-called “nudification” tools, but pointed out that there is no timeline for when the ban will start.

In a separate letter to the Secretary of State for Science, Innovation and Technology, Liz Kendall, Onwurah again called for the Online Safety Act to be changed to clearly include generative A.I. The government had rejected this recommendation in its response to a 2025 committee report on misinformation.

Onwurah asked when the promised ban on nudification tools would be introduced and how it would be enforced. She requested replies from Ofcom and the government by January 16.

Ofcom is also facing pressure from Parliament’s Culture, Media and Sport Committee. In its own letter to Dawes sent Friday, the committee said it was “deeply concerned” by reports that Grok was being used to create undressed images of real people and, allegedly, sexualized images of children.

The committee criticized X’s reported decision to limit Grok’s image-generation and editing tools to paying subscribers, saying the move seemed “not to stop the creation of such images but to turn it into a paid-for service.” The letter also asked how many enforcement actions Ofcom has taken since the Online Safety Act took effect, and whether any involved A.I.-generated sexualized images of children.

At the European level, officials said that limiting Grok’s image tools to paid users does not resolve regulators’ concerns.

During a European Commission midday press briefing in Brussels on Friday, spokesman Thomas Regnier said the move to a paid model “doesn’t change our fundamental issue.”

“Paid subscription or non-paid subscription, we don’t want to see such images. It’s as simple as that,” Regnier said.

Regnier said the European Commission does not order the removal of individual pieces of content. Instead, he said, enforcement under the Digital Services Act focuses on whether platform systems and design allow illegal content to be generated in the first place.

“What we’re asking platforms to do is to make sure that their design, that their systems, do not allow the generation of such illegal content,” he said, referring specifically to non-consensual sexual images of women and children.

Regnier said the European Commission is in dialogue with X over Grok and is examining how such output is possible, while national authorities pursue criminal investigations against individuals where appropriate. He said the bloc is not monitoring individual takedowns but is focused on whether platforms are meeting their obligations under European law.

Outside Europe, Indonesia has started investigating how Grok is being used.

Indonesia’s Ministry of Communication and Digital Affairs said it is probing the alleged misuse of Grok to generate explicit content, citing potential violations of national digital and content-moderation laws.

In a press release, the ministry said it is working with other agencies and assessing whether the platform’s safeguards are strong enough to stop harmful A.I.-generated material from spreading.
