CPM is investigating allegations that Grok, an AI tool developed by xAI and integrated with X, has been used to generate non-consensual, sexually explicit images of real women and children. These AI-generated “nudification” and deepfake images are degrading, invasive, and may violate laws prohibiting the creation and distribution of non-consensual intimate images and the sexualization of children.
If you or someone close to you has been targeted by fake images generated with Grok that digitally remove clothing or otherwise sexualize women or girls without consent, you may have a claim.
Regulators have raised serious concerns about whether Grok has adequate safeguards in place to prevent this abuse and whether its operators acted quickly enough to stop the spread of harmful content. “Unlike other leading chatbots, Grok doesn’t impose many limits on users or block them from generating sexualized content of real people, including minors,” said Brandie Nonnecke, senior director of policy at Americans for Responsible Innovation, in an interview with Bloomberg.
If you have been affected by Grok-generated fake images, have evidence of such content, or want to help hold those responsible accountable, please fill out the form below. This investigation is being handled by CPM partner Thomas E. Loeser and associate Jacob M. Alhadeff.