Elon Musk hasn’t stopped Grok, a chatbot developed by his artificial intelligence company xAI, from producing sexualized images of women. Grok has created potentially thousands of nonconsensual images of women, including “undressed” and “bikini” images, after reports surfaced last week that X’s image-generation tools were being used to create sexualized images of children.
Grok continues to generate images of women in bikinis and underwear every few seconds in response to user prompts on X, according to a review of the chatbot’s live output by WIRED. At least 90 images, including women in swimsuits and various states of undress, were published by Grok within five minutes on Tuesday, according to an analysis of the posts.
The images don’t contain nudity, but they do feature Musk’s chatbot “stripping” clothing from photos posted by other users on X. Users often request that images be edited to show women wearing “string bikinis” or “transparent bikinis” to bypass Grok’s safety guardrails, though they are not always successful.
Harmful AI image-generation technology has been used to digitally harass and abuse women for years, and while these outputs are often referred to as deepfakes and are created by “nudify” software, the ongoing use of Grok to create huge numbers of nonconsensual images appears to be the most mainstream and widespread case of abuse to date. Unlike some harmful nudify or “undressing” apps, Grok doesn’t charge users for image generation, produces results in seconds, and is available to millions of people on X. All of this can help normalize the creation of nonconsensual intimate images.
“When companies offer generative AI tools on their platforms, it’s their responsibility to minimize the risk of image-based abuse,” said Sloan Thompson, director of training and education at EndTAB, an organization that fights technology-enabled abuse. “What’s alarming here is that X has done the opposite: They’ve embedded AI-powered image abuse directly into mainstream platforms, making sexual violence easier and more scalable.”
Grok’s creation of sexualized images began making headlines on X late last year, but the system’s ability to create such images has been known for months. In recent days, images of social media influencers, celebrities, and politicians have been targeted by X users, who can reply to posts from other accounts and ask Grok to alter the shared images.
Women who posted photos of themselves received replies to their accounts in which users successfully asked Grok to turn their photos into “bikini” images. In one instance, several X users asked Grok to alter a photo of Sweden’s deputy prime minister to show her wearing a bikini. Two British ministers were also reportedly depicted in bikinis.
Grok’s edits transform photos of fully clothed women, such as those in an elevator or at the gym, into images of barely clothed women. A typical message reads, “@grok dress her in a transparent bikini.” In another series of posts, users asked Grok to “inflate her breasts by 90%,” then “inflate her thighs by 50%,” and finally, “change into a tiny bikini.”
One analyst who has tracked explicit deepfakes for years, and who asked to remain anonymous for privacy reasons, said Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s completely mainstream,” the researcher says. “It’s not a dark community [creating the images]. Literally everyone from every background. The average person posting. Zero worries.”