Urge the DOJ and FTC to Investigate Grok for Violating CSAM & IBSA Laws
X (formerly Twitter) allows its AI chatbot, Grok, to virtually undress images of real people—creating image-based sexual abuse. Countless women and children have been violated by this heinous function, while Elon Musk makes jokes about it on social media.  

While Musk claims there will be repercussions for those who use Grok to generate illegal material, there have already been several reports of the tool being used to virtually undress children as young as ten. This is illegal child sexual abuse material (CSAM).

The deepfake detection company Copyleaks estimated that, at one point last week, Grok was generating one sexual image per minute.

Join us in urgently calling on the DOJ and the FTC to investigate Grok for violations of CSAM laws and the TAKE IT DOWN Act!  
