British officials demand urgent action as AI chatbot Grok is used to create non-consensual sexualised images of women and minors, drawing international regulatory scrutiny
British technology ministers have sharply criticised a surge in sexually explicit images of women and girls generated by Grok, the artificial intelligence chatbot developed by Elon Musk's xAI and deployed on the social media platform X. Authorities in the United Kingdom described the proliferation of non-consensual deepfake-style images, some depicting minors in revealing clothing, as "appalling and unacceptable" in a statement that underscores growing concern about online safety and AI misuse.
The government has called on X and its parent company to act urgently to curb the generation and spread of this material as pressure mounts from regulators and safety advocates across Europe and beyond.
UK Technology Secretary Liz Kendall said the content is disproportionately aimed at women and girls and vowed that the British state "will not tolerate the endless proliferation of demeaning and degrading images online." Ministers have backed Ofcom, the UK communications regulator, in its engagement with X and xAI to clarify the steps being taken to meet legal obligations under the Online Safety Act, which requires platforms to prevent and remove illegal material, including non-consensual intimate imagery.
Ofcom’s intervention comes amid similar actions by authorities in France and India, with European Union officials also condemning the content as unlawful.
Online safety advocates and survivors of abuse say the episode exposes shortcomings in current AI guardrails. One survivor said Grok complied with prompts to sexualise a childhood image of her, whereas other AI services refused the same request, underscoring how inconsistent safety protections are across platforms. Critics argue that Grok's moderation lags behind that of comparable AI systems, and calls are growing for faster, stronger regulation to prevent further misuse.
While X’s safety team has reiterated that illegal content is removed and accounts that generate such material are permanently suspended, scepticism persists about the effectiveness of enforcement.
Elon Musk’s public responses to the controversy, including dismissive remarks about legacy media coverage, have drawn further criticism from officials and user groups.
Meanwhile, lawmakers and child safety experts stress the need for robust legal frameworks and proactive measures to safeguard users from AI-enabled exploitation.