Is Grok liable for sexual harassment?

Also, the trend of food influencers making anti-ICE content signals a bigger, notable political shift.

Over the past week, there have been multiple viral incidents of people using the AI chatbot developed by Elon Musk’s company to sexually harass women in a very specific way. It’s an AI twist on a longstanding form of internet misogyny called “tributes.” It’s actually called something worse than that, but I’ve decided to spare you.

Let me explain. If you have had the misfortune of spending time in misogynistic web forums, you already know what I’m talking about. If not, sorry again. There’s a whole phenomenon in which men ejaculate on pictures of women. Sometimes women solicit these images, but more often it’s a form of sexual harassment. Like sexually explicit deepfakes, this existed for years in the underbelly of the internet; and, also like deepfakes, Musk’s X has turned it into a mainstream trend that upward of 30 million people have seen this week.

This is a screenshot that 25 million people have seen, according to X’s metrics, but I added the blur effect to the victim’s face.

Here’s how it works. X has integrated Musk’s AI chatbot Grok into the site’s basic functions, and anyone can tag Grok in a tweet and get a response to a question. Grok can also edit images per user requests. Grok, by the way, is powered by a data processing center in Memphis that is polluting the Black neighborhood around it so much that a resident said she can’t breathe in her home.

So, people are simultaneously engaging in environmental racism and automated sexual harassment. They’re tagging Grok in the replies to women’s selfies and asking it to “splatter glue all over [her] face” while she’s “sticking her tongue out,” with a “pinkish blush spreading throughout her face primarily in her cheek area.” The resulting image Grok creates gives the appearance of semen all over the victim’s face.

Over the weekend, this happened to a popular streamer who goes by BrookeAB. She posted: “no joke, can anything legally be done about this” and “I know it’s the ‘internet’ and people can edit or do stuff like this blah blah blah I am more concerned with the fact it’s the official X AI making suggestive content of my face.”

I reached out to Dr. Mary Anne Franks, a preeminent legal expert in this field who drafted the template for several laws against nonconsensual distribution of intimate imagery (what you might know as “revenge porn”). We talked about what kinds of legal recourse Brooke might have in this situation.

Brooke could try to sue the person who prompted Grok to edit her selfie, relying on civil torts for things like intentional infliction of emotional distress or misuse of her image. But she’d have to determine the identity of the user first. Then there’s the question of whether she could go after X and whether X is protected under Section 230. Now, Section 230 shields tech platforms from liability for things users post on them. But this is something Grok posted, and Grok is operated by X.

Also, just a few weeks ago, Donald Trump signed something called the Take It Down Act into law. This makes the distribution of nonconsensual intimate imagery, including AI-generated deepfakes, a federal crime. Within a year, platforms like X have to set up a system to allow victims to request speedy takedowns of this material (a system that is unfortunately ripe for abuse). X doesn’t have a system like this yet (it does have a way to report rule violations, which this material falls under, but it isn’t always responsive). But there are other aspects of the Take It Down Act that could apply here, Franks said.

“Back when we were writing the model civil provisions, one of the things I insisted on including was the whole transfer of semen thing because it was such a common thing,” she said, referring to tributes. “It would fall out of a lot of definitions of pornography or sexually explicit, because they didn’t have nudity in them, necessarily. So we added that on purpose.”

Franks thinks there’s a plausible legal argument to prosecute Grok under the Take It Down Act, as long as a court holds that Grok counts as a person. “The fact that it’s an AI entity is going to be complicated,” she said.

Neither Brooke nor X responded to a request for comment, but in recent days, Grok has been responding “I’m sorry, I’m not able to assist with that request” when users try to request “glue” edits.

A screenshot someone else posted on X, which has close to 4 million views, according to the platform’s metrics.

Since Musk took over X and ground its trust and safety operations into dust, deepfakes and other AI-generated forms of sexual harassment have flourished on the platform. Not even celebrities have found recourse. Last year, a 17-year-old Marvel star said her team couldn’t get sexually explicit deepfakes of her taken off X. Wednesday star Jenna Ortega said she deleted the app after seeing deepfakes of her underage self. And while celebrity and influencer deepfakes have repeatedly gone viral on X, it’s far from the only tech platform to host deepfake content and ads for AI apps that “undress” pictures of teen girls. This kind of sexual abuse is a pillar of how generative AI came to be and how it’s commonly used today.

There are major barriers to victims of any kind of sexual abuse getting justice, and the AI-generated kind is no different. A high-profile victim of deepfakes told me last year that her lawyer advised her not to take action, because it would be costly, time-consuming, unpleasant, and put a spotlight on the material.

“There are tools available, but we also don’t want to give false hope to victims,” Franks said, adding that the likelihood of any of these legal theories coming to fruition is “pretty slim still.”

“You can try to blow it up, but it will become what you are identified with, maybe for the rest of your life,” she said. “That’s way too big of a burden on an individual, and it just won’t move the systemic problems that we’ve got.”

There’s also the relationship between Trump and big tech CEOs like Musk, whose platform should be held accountable under the Take It Down Act. But while signing the bill into law, Trump spotted X CEO Linda Yaccarino in the crowd and complimented her for doing a “great job.” Meanwhile, Federal Trade Commission head Andrew Ferguson is responsible for enforcing the Take It Down Act. He opened an investigation into Media Matters after Musk sued the media watchdog for reporting on advertisements appearing next to pro-Nazi posts on X.

“The FTC has made it clear that they’re fighting for Trump. It’s actually never going to be used against the very players who are the worst in this system,” Franks said. “X is going to continue to be one of the worst offenders and probably one of the pioneers of horrible ways to hurt women.”

U.S. President Donald Trump signs the TAKE IT DOWN Act into law alongside first lady Melania Trump, lawmakers and victims of AI deepfakes and revenge porn during a signing ceremony in the Rose Garden of the White House on May 19, 2025 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

One of the things that disturbed me the most about Grok’s sexual harassment was reading the comments below Brooke’s plea for justice. There was a man telling her there’s nothing she can do, so keep scrolling, and a man telling her she would have to stop posting selfies if she wanted it to stop. This is the ideology behind sexual harassment. It’s a tool to oppress women individually and at scale. The message being sent is to stop participating in online life or endure violence and humiliation for being a woman. Brooke is a famous gamer, and many of the women I’ve seen targeted by this kind of harassment and rhetoric are succeeding in male-dominated spaces.

Generative AI tools like Grok give men the ability to sexually harass women and girls at scale. The phenomenon rose from niche misogynistic spaces online into a mainstream internet that is now a misogyny-driven radicalization pipeline. Men and boys have been victims, too.

“It’s about changing a culture of casual sexual harassment,” Franks said. “It’s the same thing over and over again, even if there’s a slightly new twist on it, that it’s AI.”

Correction: This article previously misstated that the Federal Communications Commission has the power to enforce the Take It Down Act. It is actually the Federal Trade Commission.

Democrats could learn something from food influencers speaking out against ICE
