After widespread outcry over sexually explicit AI-generated images, Britain’s government and regulator press X to enforce legal safeguards on its chatbot Grok
The United Kingdom government and its communications regulator have intensified scrutiny of
Elon Musk’s social media platform X after its artificial intelligence tool Grok was used to generate sexualised and potentially unlawful images of women and children, prompting the company to signal that it will comply with UK law.
Officials said X has begun taking steps “to ensure full compliance with UK law” after Prime Minister Keir Starmer described the sexually explicit deepfake images as “disgusting” and “shameful” and confirmed that discussions between the platform and government authorities are ongoing.
Starmer told Parliament that the government will pursue stronger measures if X’s actions do not sufficiently meet legal requirements to protect users and prevent harmful content.
The outcry centres on reports that Grok, an AI chatbot developed by X’s parent company xAI, was being used both directly on the X platform and via associated tools to create non-consensual intimate images, including digitally undressed depictions of real people and sexualised images of minors.
Such content may breach obligations under the UK’s Online Safety Act, under which platforms must take robust steps to prevent illegal material from being accessible to users in the country.
In response to initial concerns, X restricted certain image-editing functions, including requests to produce undressed images, particularly for non-paying users.
Company executives have maintained that Grok complies with the laws of every country in which it operates and refuses illegal requests, while acknowledging that adversarial prompt engineering could pose challenges.
Britain’s independent communications regulator Ofcom has launched a formal investigation into X’s compliance with its duties under the Online Safety Act, focusing on whether the company adequately assessed and mitigated risks of UK users encountering illegal deepfakes and non-consensual material.
Ofcom’s probe is examining the platform’s risk assessment processes, age-assurance measures and content removal practices as a matter of priority.
Simultaneously, the UK government is accelerating plans to criminalise the creation and distribution of non-consensual intimate images, making such conduct a priority offence under existing legislation and through forthcoming amendments to the Crime and Policing Bill.
Technology ministers have defended regulatory efforts as necessary to protect individuals from abuse and uphold fundamental standards of dignity and safety online.
The controversy has sparked debate over digital content governance, platform responsibilities and the implications of generative AI tools that can produce explicit material.
Policy makers have warned that failure by X to bring its AI systems into full compliance could trigger further enforcement actions, including substantial fines or measures targeting service access in the UK. Musk has criticised some government rhetoric as censorship, but officials in London insist that legal obligations must be met to safeguard public welfare.
The unfolding developments place the UK at the forefront of global regulatory responses to AI-driven content abuses, emphasising the need for clear accountability frameworks for advanced digital platforms.