Over the past week, there have been multiple viral incidents of people using the AI chatbot developed by Elon Musk's company to sexually harass women in a very specific way. It's an AI twist on a longstanding form of internet misogyny called "tributes." It's actually called something worse than that, but I've decided to spare you.
Let me explain. If you have had the misfortune of spending time in misogynistic web forums, you already know what I'm talking about. If not, sorry again. There's this whole phenomenon where men ejaculate on pictures of women. Sometimes women solicit these images, but it's more often a form of sexual harassment. This, like sexually explicit deepfakes, existed for years in the underbelly of the internet, but also like sexually explicit deepfakes, Musk's X has turned it into a mainstream trend that upward of 30 million people have seen this week.

This is a screenshot that 25 million people have seen, according to X's metrics, but I added the blur effect to the victim's face.
Here's how it works. X has integrated Musk's AI chatbot Grok into the site's basic functions, and anyone can tag Grok in a tweet and get a response to a question. Grok can also edit images per user requests. Grok, by the way, is powered by a data processing center in Memphis that is polluting the Black neighborhood around it so much that a resident said she can't breathe in her home.
So, people are simultaneously engaging in environmental racism and automated sexual harassment. They're tagging Grok in the replies of women's selfies and asking it to do things like "splatter glue all over [her] face," "sticking her tongue out," with a "pinkish blush spreading throughout her face primarily in her cheek area." The resulting image Grok creates gives the appearance of semen all over the victim's face.
Over the weekend, this happened to a popular streamer who goes by BrookeAB. She posted: "no joke, can anything legally be done about this" and "I know it's the 'internet' and people can edit or do stuff like this blah blah blah I am more concerned with the fact it's the official X AI making suggestive content of my face."
I reached out to Dr. Mary Anne Franks, a preeminent legal expert in this field who drafted the template for several laws against nonconsensual distribution of intimate imagery (what you might know as "revenge porn"). We talked about what kinds of legal recourse Brooke might have in this situation.
Brooke could try to sue the person who prompted Grok to edit her selfie, relying on civil torts for things like intentional infliction of emotional distress or misuse of her image. But she'd have to determine the identity of the user first. Then there's the question of whether she could go after X and whether X is protected under Section 230. Now, Section 230 shields tech platforms from liability for things users post on them. But this is something Grok posted, and Grok is operated by X.
Also, just a few weeks ago, Donald Trump signed something called the Take It Down Act into law. This makes the distribution of nonconsensual intimate imagery, including AI-generated deepfakes, a federal crime. Within a year, platforms like X have to set up a system to allow victims to request speedy takedowns of this material (a system that is unfortunately ripe for abuse). X doesn't have a system like this yet (it does have a way to report rule violations, which this material falls under, but it isn't always responsive). But there are other aspects of the Take It Down Act that could apply here, Franks said.
"Back when we were writing the model civil provisions, one of the things I insisted on including was the whole transfer of semen thing because it was such a common thing," she said, referring to tributes. "It would fall out of a lot of definitions of pornography or sexually explicit, because they didn't have nudity in them, necessarily. So we added that on purpose."
Franks thinks there's a plausible legal argument to prosecute Grok under the Take It Down Act, as long as Grok is considered a person. "The fact that it's an AI entity is going to be complicated," she said.
Neither Brooke nor X responded to a request for comment, but in recent days, Grok has been responding "I'm sorry, I'm not able to assist with that request" when users try to request "glue" edits.

A screenshot someone else posted on X, which has close to 4 million views, according to the platform's metrics.
Since Musk took over X and ground its trust and safety operations into dust, deepfakes and other AI-generated forms of sexual harassment have flourished on the platform. Not even celebrities have found recourse. Last year, a 17-year-old Marvel star said her team couldn't get sexually explicit deepfakes of her taken off X. Wednesday star Jenna Ortega said she deleted the app after seeing deepfakes of her underage self. And while celebrity and influencer deepfakes have repeatedly gone viral on X, it's far from the only tech platform to host deepfake content and ads for AI apps that "undress" pictures of teen girls. This kind of sexual abuse is a pillar of how generative AI came to be and how it's commonly used today.
There are major barriers to victims of any kind of sexual abuse getting justice, and the AI-generated kind is no different. A high-profile victim of deepfakes told me last year that her lawyer advised her not to take action, because it would be costly, time-consuming, unpleasant, and put a spotlight on the material.
"There are tools available, but we also don't want to give false hope to victims," Franks said, adding that the likelihood of any of these legal theories coming to fruition is "pretty slim still."
"You can try to blow it up, but it will become what you are identified with, maybe for the rest of your life," she said. "That's way too big of a burden on an individual, and it just won't move the systemic problems that we've got."
There's also the relationship between Trump and big tech CEOs like Musk, whose platform should be held accountable under the Take It Down Act. But while signing the bill into law, Trump spotted X CEO Linda Yaccarino in the crowd and complimented her for doing a "great job." Meanwhile, Federal Trade Commission head Andrew Ferguson is responsible for enforcing the Take It Down Act. He opened an investigation into Media Matters after Musk sued the media watchdog for reporting on advertisements appearing next to pro-Nazi posts on X.
"The FTC has made it clear that they're fighting for Trump. It's actually never going to be used against the very players who are the worst in this system," Franks said. "X is going to continue to be one of the worst offenders and probably one of the pioneers of horrible ways to hurt women."

U.S. President Donald Trump signs the TAKE IT DOWN Act into law alongside first lady Melania Trump, lawmakers and victims of AI deepfakes and revenge porn during a signing ceremony in the Rose Garden of the White House on May 19, 2025 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)
One of the things that disturbed me the most about Grok's sexual harassment was reading the comments below Brooke's plea for justice. There was a man telling her there's nothing she can do, so keep scrolling, and a man telling her she would have to stop posting selfies if she wanted it to stop. This is the ideology behind sexual harassment. It's a tool to oppress women individually and at scale. The message being sent is to stop participating in online life or endure violence and humiliation for being a woman. Brooke is a famous gamer, and many of the women I've seen targeted by this kind of harassment and rhetoric are succeeding in male-dominated spaces.
Generative AI tools like Grok give men the ability to sexually harass women and girls at scale. The phenomenon rose from niche misogynistic spaces online into a mainstream internet that is now a misogyny-driven radicalization pipeline. Men and boys have been victims, too.
"It's about changing a culture of casual sexual harassment," Franks said. "It's the same thing over and over again, even if there's a slightly new twist on it, that it's AI."
Correction: This article previously misstated that the Federal Communications Commission has the power to enforce the Take It Down Act. It is actually the Federal Trade Commission.
