Why isn't there a bigger Grok boycott?
Advertisers, politicians, and investors are still all-in on X, despite a sexual abuse crisis.
As I was writing this newsletter, I found out that 37-year-old Renee Good was shot and killed by an ICE agent in Minneapolis. Immediately, the federal government began smearing her as the aggressor, despite video footage clearly showing what actually happened. Good was a victim of ICE’s unchecked violence against nonthreatening civilians. She wasn’t the first. She won’t be the last.
Soon after Good was killed, people on X began using Grok, the platform’s built-in AI chatbot, to edit images of her dead body into a bikini. Bellingcat founder Eliot Higgins was the first person I saw report this, but I’ve since seen one of these images myself live on X, and the conservative influencer behind it, who goes by Amiri King, has not been suspended as of publication. I’ve also seen other X users gleefully use AI-generated images to mock Good’s killing.
Needless to say, the trend of X users generating nonconsensual deepfakes with Elon Musk’s Grok hasn’t slowed down, despite massive attention and outrage over the past week. In fact, Bloomberg just reported that Grok is generating nearly 7,000 of these images per hour from user requests, meaning X has skyrocketed to become one of the largest producers of, and platforms for, AI sexual abuse material.

A portrait of Renee Nicole Good is pasted to a light pole near the site of her shooting on January 8, 2026 in Minneapolis, Minnesota. According to federal officials, an ICE agent shot and killed Good during a confrontation yesterday in south Minneapolis. (Photo by Stephen Maturen/Getty Images)
And yet, despite the growing scale of this crisis, there has been remarkably little response from the advertisers, politicians, and investors who are using and profiting from X and xAI. Many of these companies and figures have pledged to fight deepfake sexual abuse, but nonetheless maintain financial and professional ties to Musk’s X—running ads on it, posting daily to it, and funding it with hundreds of millions of dollars so it can keep growing. Those ads, posts, and payouts are enabling a massive pile of abusive imagery, one that grows by the day.

Some of this content is technically illegal, but no one is getting punished for it besides the victims, who are overwhelmingly women and girls (in my analysis, I also found that a majority of the victims in public posts on X were women and girls of color). The pile-on has already wreaked havoc on some of their lives, while others may not even know the material is out there. Even those of us whose identities aren’t stolen for these images have to bear witness to yet another global campaign of sexual humiliation. The scale and visibility of the attacks keep getting worse.
This abuse matters, and we deserve for it to be taken seriously. I reached out to 24 major advertisers on X, nine xAI investors, and the congresspeople behind legislation like the Take It Down Act and the Kids Online Safety Act, to see if anyone was planning to re-evaluate their relationship with X or work to hold it and Musk accountable. I also reached out to the Department of Defense about its $200 million contract with xAI and the General Services Administration about “Grok for Government,” its 18-month partnership with xAI. And I asked the FTC whether it will uphold its duty to enforce the law prohibiting this exact type of material. I’ll share some responses and lack thereof after the break.
Finally, I reached out to X and xAI, sending them 15 examples of sexualized and/or violent deepfakes that were still live on the platform. X did not remove them, and a spokesperson responded by sharing a statement from four days ago:
"We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
As it turns out, the “same consequences” appear to be roughly zero. And there are a few reasons why I think the public and private sectors have failed to address this mass campaign to dehumanize women and girls in front of everyone.