A new class action lawsuit filed Monday by three teenage girls and their parents alleges that Elon Musk's xAI used Grok AI technology to create and distribute child sexual abuse material featuring their faces and likenesses.
"Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the creation and distribution of this CSAM caused," the filing states. "xAI's financial gains from increased use of its image and video generation products have come at the expense of their well-being."
Between December and early January, Grok allowed a large number of AI and X social media users to create non-consensual AI-generated intimate images, also known as deepfake pornography. The report estimates that Grok users created 4.4 million "undressed" or "nude" images in nine days, amounting to 41% of the total number of images created.
X, xAI, and its safety and child safety division did not respond to requests for comment.
The wave of "undressed" images sparked outrage around the world. The European Commission immediately launched an investigation, while Malaysia and Indonesia banned X within their borders. Some U.S. government representatives have called on Apple and Google to remove the apps from their app stores for violating their policies, but no federal investigation into X or xAI has been launched. A similar, separate class action lawsuit was filed in late January by a South Carolina woman (PDF).
The dehumanizing trend has highlighted how capable modern AI imaging tools are of creating realistic-looking content. The new complaint likens Grok's self-proclaimed "spicy AI" generation to a "dark art" that can easily impose "any pose, no matter how sick, no matter how fetishized, no matter how illegal" on children.
"To the viewer, the resulting video appears completely authentic. To the child, her identifying characteristics will forever be attached to video depicting her own child sexual abuse," the complaint states.
The complaint states that xAI was negligent because it failed to employ industry-standard guardrails to prevent abusers from creating this content. It says xAI licensed its technology to third-party companies overseas, which sold subscriptions that abusers used to create child sexual abuse images featuring the victims' faces and likenesses. Because those requests were processed through xAI's servers, the complaint alleges, the company is responsible.
The suit was filed under the pseudonym Jane Doe to protect the teens' identities. Jane Doe 1 first learned in early December, via an anonymous Instagram message, that AI-generated abusive sexual content targeting her was circulating online. According to the filing, the anonymous Instagram user told her about the Discord server where the material was being shared. This led Jane Doe 1, her family, and ultimately law enforcement to find and apprehend one of the perpetrators.
As a result of the ongoing investigation, the families of Jane Does 2 and 3 learned that their children's images had been converted into abusive material using xAI technology.