AI is 'pivot to video' 2.0
Plus, join my first member Q+A!

Senior Director of Product Management Sarah Ali speaks onstage at Made on YouTube at Pier 57 on September 18, 2024 in New York City. (Photo by Dave Kotinsky/Getty Images for Made on YouTube 2024)
Welcome back to Spitfire News! If you haven’t already, you can subscribe for free to get these dispatches delivered to your inbox.
Today’s article is free with reader support. If you want to read paywalled stories, join the members-only chat, and support my journalism, consider a $5 subscription!
I really cannot believe this, but it has already been two months since I launched Spitfire News. Time to get serious, I guess! Kidding. But seriously, thank you to everyone who has subscribed, read, shared, and helped me get this thing off the ground. The response so far has been bigger and better than I could have imagined.
As an even bigger thank you to Spitfire News members who are generously keeping this operation afloat, this weekend I’m doing my first-ever member Q+A. I’ll be taking questions over on the members-only Discord, or via email if you’re not on Discord (you can always reply to these emails to get in touch with me, and I try to respond to as many as I can). AMA!
I’ll be answering questions live on YouTube on Sunday at 1 PM EST. I’ll also be discussing what I’ve learned so far from doing this newsletter, the pros and cons of going independent, and what’s next for Spitfire News.
If you want your questions answered, upgrade to a membership now for just $5!
And now, onto the topic du jour: arguing with people about AI.
Whenever I post about how much I hate AI, people tend to respond with a few of the same arguments over and over again. One is that AI is too broad of a term. I agree. “Artificial intelligence” is marketing. It’s a misnomer a lot of the time. And AI has existed in various forms for decades. But the most popular definition of AI right now refers to a slate of products intended for everyday users to create media: large language models like ChatGPT that generate text, image generators like DALL-E, and voice generators like ElevenLabs. There’s also the growing sector of AI chatbots that are intended to replicate human-to-human connection. Gross! Ew! Yuck!
AI defenders will often posit that these products are inherently neutral tools, like a steering wheel (this is an actual comparison someone made to me). They’ll also compare the AI industry as a whole, which encompasses many different things, to the rise of the internet. They worry that if we don’t embrace AI, we’ll be left behind, like presumably people felt they were when they didn’t immediately cash in on the internet.
This is flawed logic! First of all, even if you were bullish on the internet, that doesn’t necessarily mean you gained anything from being an early adopter. I was already on TikTok in 2018, and that didn’t make me TikTok-famous in 2020. Nor do TikTok-famous people always get a happy ending. Just because a platform or a product or a company is ultimately successful does not mean the people who made it so will have anything to show for their support. They may have less to show for it!
The idea that we should all be using AI to protect our own jobs is a repeat of failed history. It reminds me of when Facebook encouraged the pivot to video that royally fucked over a lot of publishers and caused mass layoffs. The pivot to video, for those who aren’t familiar, involved Facebook inflating its video viewership numbers to bring in ad dollars.
Now, Meta is a major driver of the pivot to AI, which, based on the AI slop colonizing its platforms, is even worse and dumber than the pivot to video. Gee, maybe we should learn something from the cratering of the digital media industry and the enormous loss of jobs that came with those flimsy promises of more viewership and money. AI’s promises are even flimsier. You can just use your two eyeballs to see how garbage the output is.
And yet, some of the same news organizations that observed the failed pivot to video have gone all in on AI that is plagiarizing their reporting, stealing their time and resources, and ultimately devaluing the journalism they provide to enrich people like Sam Altman, who is a total fraud. This is all so stupid but so are media executives.
Ed Zitron, who has written a lot about the current incarnation of AI being a false promise, published another piece this week undermining OpenAI’s projected profitability. It sure seems like everyone is getting played!
The other part of this argument I hear a lot is that individuals have found that using ChatGPT has made them more productive at work or performed tasks they don’t like doing. That’s a slippery slope if you ask me! It seems to me like you’re creating the justification for your job to not even exist anymore—just like the people who created the product you’re using said it would—or raising expectations for your own output!
And these questions around productivity and whether there’s a valid use case for AI tend to ignore the consequences of AI that have already happened that vastly outweigh any perceived benefits. But what do I know—I’ve just been reporting on how actively harmful this technology is for the past several years!
Here’s a non-exhaustive list of things off the top of my head that I think make these AI products unsafe for public use regardless of whether they are able to boost anyone’s productivity:

- There has never been a way to definitively prove if something is AI-generated.
- Nonconsensual sexually explicit deepfakes are a growing exploitation economy relying on AI tools to abuse and oppress women and girls at scale.
- AI voiceovers and media have created a booming fake news industry.
- Tools like ChatGPT have been adopted by a generation of students who may not develop the same critical thinking skills as a result—and adults who rely on them too heavily will weaken their grasp on those skills, too.
- AI chatbots are capable of entertaining psychosis and harming the most vulnerable members of society, which everyone who thought about this technology for more than five seconds predicted.

I have not even touched on the apocalyptic vision for humanity’s future that the richest AI proponents are eagerly aiming to make a reality, or the way this technology accelerates climate change and depletes vital resources for our collective survival. But yeah, productivity at work. Totally worth it. Oh wait—studies show it doesn’t actually make people more productive at work at all! That link takes you to six whole pages of citations compiled by this lovely Bluesky user about all this stuff and much more.
As an internet enjoyer, I’m also irritated when people compare AI to the internet, because from where I’m sitting, AI is destroying everything good about the internet. The internet, AI defenders argue, has both good and bad consequences. I’d argue that the internet, for all its faults, was largely intended to be for humans. AI is anti-human. At its core, AI is about replacing humans, devaluing humanity, and swapping human connection for a pathetic imitation.
Back in September 2024, I was invited to a media event for YouTube at Google’s HQ in Manhattan. I went, because I love standing in the corners of industrial buildings overlooking scenic views and nibbling on hors d'oeuvres like a rat.

Guests attend Made on YouTube at Pier 57 on September 18, 2024 in New York City. (Photo by Dave Kotinsky/Getty Images for Made on YouTube 2024)
The press briefing was led by YouTube’s top executives and featured an explanation of a new suite of AI features. There were AI-generated video clips, video ideas, background music, and even suggested replies to comments. The clips were ugly, crude, and painful to watch, but they were applauded as the platform’s future.
What I had seen felt destined to fail. I didn’t know who would want to watch AI slop videos on YouTube featuring cartoon animals rescuing each other. But over the past few months, a YouTube channel called “Chengyu Movies” has quietly racked up over 1.6 billion views by pumping out exactly that. Now AI slop is dominating YouTube like it dominated Facebook feeds, and we don’t even know how many real people are watching it.
This is not what YouTube was made for twenty years ago, when co-founder Jawed Karim uploaded a video of himself at the zoo. YouTube was a place to post videos, usually of real people, filmed on cameras. The platform’s pivot to AI-generated engagement bait slop is a sudden and dramatic departure from the two previous decades of YouTube content. And while it may artificially inflate YouTube’s profits, it is undeniably a worse outcome for the people who live in reality.