Information Commissioner’s Office investigates whether X and xAI breached data protection laws as Grok is accused of generating intimate images without consent, including of children
The United Kingdom’s Information Commissioner’s Office has launched a formal investigation into Elon Musk’s social media platform X and his AI company xAI over allegations that the Grok artificial intelligence system was used to generate non-consensual sexualised images of individuals, including minors, raising significant concerns about compliance with data protection law.
The probe, announced in early February, will assess whether X Internet Unlimited Company and xAI processed personal data in a lawful, fair and transparent manner when designing and deploying Grok and whether appropriate safeguards were implemented to prevent the generation of harmful manipulated imagery.
The ICO’s move follows reports that Grok was capable of creating sexualised deepfake images without the knowledge or consent of the subjects involved, an issue the regulator says could lead to “immediate and significant harm,” particularly when children are implicated.
The watchdog’s statement emphasised that loss of control over personal data in this way undermines individuals’ privacy rights and can expose them to dangerous exploitation.
William Malcolm, the ICO’s executive director for regulatory risk and innovation, described the allegations as raising “deeply troubling questions” about how personal data may have been utilised in the process of generating intimate or explicit images.
The investigation builds on earlier action by the UK’s independent online safety regulator, which had already opened a separate inquiry under the Online Safety Act to determine whether X fulfilled its duties to protect users from illegal content, including intimate image abuse and child sexual abuse material.
UK authorities, including the ICO and Ofcom, are also coordinating with international counterparts such as European regulators on their responses to the emerging risks posed by generative AI systems.
X and xAI have said they took steps to mitigate the issues after reports emerged that Grok’s capabilities were being abused, including updating parameters on the system’s image handling features and restricting some functions.
Nonetheless, the ICO’s probe will focus on whether such measures were sufficient and timely in guarding against the misuse of personal data.
As the inquiry unfolds, regulators retain the power under UK data protection law to impose substantial fines of up to £17.5 million or four per cent of a company’s global annual turnover, whichever is higher, if breaches are found.
The case highlights the growing regulatory scrutiny of AI technologies and underscores the delicate balance between innovation and privacy protections as generative models become increasingly widespread.