The SVPA is demanding an investigation into Elon Musk’s Grok AI tool, which creates, promotes, and facilitates the sharing of artificial non-consensual explicit materials (NCEM), commonly known as deepfake pornography. The SVPA, in coalition with 14 organizations, is filing an urgent request for state and federal regulators to investigate xAI and enforce the law against it.
Grok is an AI tool created by Musk’s xAI and built into X, formerly known as Twitter. X added an Imagine feature to Grok, which uses generative AI to create images and videos from user prompts. Imagine offers four modes, one of which, titled “Spicy,” is intended to create sexually explicit content.
Apple’s App Store guidelines prohibit “overtly sexual or pornographic material, defined as ‘explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings.’” Yet Spicy Grok is designed to produce NSFW content and encourages users to request it.
Many users, including minors, have discovered that Spicy Grok readily generates sexually graphic material. Before using the “Spicy” feature, users are asked for their birth year, but the app performs no formal age verification. In fact, the app carries a 12+ age rating on the App Store.
Even when users have not requested pornographic material, Grok sometimes creates it unprompted. The Verge found that Spicy Grok produced explicit photos and videos of Taylor Swift without any such request from the user.
Right now, the platform does not allow users to apply the “Spicy” option to images they upload themselves, but it readily generates AI-created NCEM of celebrities and public figures. Nor are Grok’s victims always famous or well-known: anyone with a large enough online presence can have their appearance emulated by the “Spicy” tool.
Generative AI must be regulated. No one, especially minors, should have access to a tool that non-consensually creates explicit content.
As of today, Grok AI’s “Spicy” mode remains a functioning feature that is available to everyone and easily accessible to minors. xAI claims it has taken steps to strengthen content filters in an attempt to ban hate speech and remove inappropriate content.
The SVPA joined the Consumer Federation of America (CFA) and 14 other organizations in a formal request to the Federal Trade Commission (FTC) and all 50 state Attorneys General (AGs), urging them to investigate.
The formal complaint states:
We urge your office(s) to investigate and take appropriate enforcement action regarding the following conduct, which likely constitutes violations of the law:
- xAI is knowingly facilitating the creation, distribution, and hosting of content that violates laws against AI-generated non-consensual intimate imagery (NCII), which have been enacted in 38 states. The generation of this NCII content may also constitute a UDAP (unfair or deceptive acts and practices) violation in and of itself.
- Content produced by Grok’s “Spicy” mode can easily be used to illegally blackmail, extort, or otherwise embarrass an individual and cause serious harm.
- The deployment of a closed model trained on real individuals’ photos enables the generation of sexualized representations that closely resemble real people, potentially violating UDAP laws due to the unfair, deceptive, and harmful nature of such outputs.
- The use of people’s photos for the purpose of training without express consent, notice, or compensation is unfair and deceptive.
- The “2000” birth year preset for age “verification” when using what is essentially a “nudify” app may violate the Children’s Online Privacy Protection Act or state-specific age verification laws for adult content.
- The design of the age verification process on both the app and web-based version of Grok constitutes ‘dark patterns,’ violating UDAP laws through manipulative user interface choices.
- The creation and use of people’s likenesses without their consent may violate their right of publicity.