Generative AI Has a Short Shelf Life

[Photo: bananas at a grocery store with the article title overlaid.]

While it's no surprise that I'm a generative AI skeptic, I thought today I would actually pitch the argument that generative AI has a short shelf life. In other words, I don't think it's a sustainable technology, at least as it currently stands. Let's talk about it!


Generative AI Is Susceptible to Model Collapse

As it currently stands, the internet is flooded with AI slop. Whether it's Google crowding its own search results with Gemini responses or AI "art" filling stock image sites, there is basically nowhere on the internet untouched by generative AI.

For the moment, these tools appear to be getting better every day: chatbots are holding conversations with better context, and image generators are producing more convincing images. Of course, there is a limit to how good they can get, and it's not due to a lack of training data, parameters, or hardware. It's because there's no way for these models to distinguish between real and synthetic data.

Because we like to personify AI models, I'll do just the same: these models are eating themselves. As models eat their own creations, the diversity of possible outputs shrinks. That's because generative AI models produce "generic" outputs (i.e., whatever is most salient in the original data set). It's why creating a glass filled to the top with wine was such an impossible task until OpenAI "patched" it with presumably homemade data.

Eventually, these generative AI models will eat so much of their own data that not only will it be harder to generate rarer concepts, but normal concepts will degrade as well. In other words, the overall performance of these models will just get worse, perhaps back to where they were at launch, or worse.
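If that mechanism sounds hand-wavy, here's a toy sketch of it in Python (my own illustration, not how any real lab trains anything): each "generation" fits a simple model to the previous generation's outputs but keeps only the most typical samples, and the diversity of the data collapses.

```python
# Toy model collapse: each generation "trains" on the previous generation's
# outputs but favors the most typical samples (the "generic" outputs these
# models tend to produce). Watch the diversity shrink.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # the original "real" data

for generation in range(10):
    mu, sigma = data.mean(), data.std()           # fit the current data
    samples = rng.normal(mu, sigma, size=10_000)  # generate synthetic data
    data = samples[np.abs(samples - mu) < 1.5 * sigma]  # keep the "generic" middle
    print(f"generation {generation}: std = {data.std():.3f}")
```

Run it and the standard deviation drops every round: the rare tails (your glass of wine filled to the brim) vanish first, and eventually even the ordinary middle of the distribution thins out.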

Now, I don't believe any of the current models undergo ongoing training (what I've seen called "continuous learning"). They seem to be trained and released periodically, which means that even if model collapse is happening, we probably would not know. There is just no way any of these companies would release an objectively worse model.

However, that does mean we'll see delays between new model releases. That's true in general because performance seems to follow a trend of diminishing returns, but I think it will be exacerbated by the impending model collapse. Side note: you should really check out that previous article. Even there, AI folks seem to be worried about the lack of "quality" data—talk about shooting yourself in the foot.

With that said, there is some hope for the AI bros. There are folks working on "watermarking" synthetic data so that it's not accidentally consumed during training. There are also folks who claim that model collapse won't happen as long as there is a steady stream of real data, so I suppose we'll see.
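For what it's worth, the proposed watermarking schemes are statistical (e.g., subtly biasing which tokens a model picks) rather than literal tags, but the filtering step would look something like this sketch, where the zero-width marker is purely my own stand-in:

```python
# A hypothetical watermark filter for a training crawl. Real schemes embed
# statistical signals in token choices; this zero-width marker is invented.
SYNTHETIC_MARK = "\u200b\u200d\u200b"  # made-up marker, not a real standard

def is_synthetic(document: str) -> bool:
    return SYNTHETIC_MARK in document

crawl = [
    "A human-written post about sourdough.",
    "A generated recipe." + SYNTHETIC_MARK,
]
training_set = [doc for doc in crawl if not is_synthetic(doc)]
print(training_set)  # only the human-written document survives
```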

Generative AI Is Susceptible to Adversarial Attacks

The interesting thing about an entire technology predicated on theft is that the average person isn’t going to put up with it. In the past month or so, I’ve seen probably a dozen articles and YouTube videos of people discussing ways to “poison” generative AI models.

In the world of art, there are tools like Glaze and Nightshade. Glaze is meant for hiding your particular art style from AI models by tricking them into reading your art as a different style. Meanwhile, Nightshade is meant for hiding concepts from AI models by tricking them into reading your art as a different concept.
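Both tools build on the idea of adversarial perturbations: pixel-level changes invisible to humans that steer a classifier toward the wrong answer. The real tools are far more sophisticated, but a bare-bones version of the underlying trick (a targeted gradient step, with a classifier you'd have to supply yourself) looks roughly like this:

```python
# A bare-bones sketch of the adversarial idea behind tools like Glaze and
# Nightshade (not their actual methods): nudge an image so a classifier
# reads it as the wrong label while the change stays imperceptible.
import torch
import torch.nn.functional as F

def poison(image: torch.Tensor, model: torch.nn.Module,
           wrong_label: int, epsilon: float = 0.03) -> torch.Tensor:
    # `image` is a (1, 3, H, W) tensor; `model` is any differentiable
    # image classifier (an assumption, not part of either tool).
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([wrong_label]))
    loss.backward()
    # Step *toward* the wrong label (down its loss gradient), capped at
    # epsilon per pixel so the perturbation stays invisible to humans.
    return (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
```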

While I'm not sure if there are similar tools in other spaces, I have seen a lot of homebrew solutions. For instance, I recently saw a video of someone poisoning YouTube video captions by placing fake captions outside the visible area of the video. At the moment, AI models seem to pick these up without fail and produce terrible summaries of the videos.

Also very recently, I saw a video on a technique for poisoning music-generating models. Much like Glaze and Nightshade, the music poisoning tool (called Poisonify) works by getting the generative AI model to misclassify instruments in a recording. As one article puts it, Poisonify might trick a model into identifying a piano as a flute.

Even in the blogging space—one I inhabit—there are tools for attacking models like ChatGPT directly. The one I've found most interesting is called Nepenthes, and it works very differently from the previous techniques. Rather than tricking the model through training data, Nepenthes works by trapping the model's crawler during the scraping process. It does this by generating human-like text very slowly, so the crawler thinks it's collecting tokens for training while it's actually being fed continuous garbage.
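To be clear, the sketch below is my own toy imitation of that concept, not Nepenthes itself: an endpoint that drips word salad forever, so any crawler that wanders in burns its time and bandwidth on garbage.

```python
# A toy Nepenthes-style tarpit (my imitation, not the real tool): answer
# every request with an endless, slow drip of plausible-looking filler.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["model", "data", "token", "glass", "wine", "banana", "slop"]

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        while True:
            chunk = " ".join(random.choices(WORDS, k=8)) + ". "
            try:
                self.wfile.write(chunk.encode())
                self.wfile.flush()
            except (BrokenPipeError, ConnectionResetError):
                return  # the crawler finally gave up
            time.sleep(2)  # slow enough to waste the crawler's time

HTTPServer(("", 8080), Tarpit).serve_forever()
```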

In even less formal ways, people are fighting against generative AI scraping. I've already seen goofy posts, like this one, which shows how we might even change the way we talk to each other to throw off these machines. Needless to say, I expect people to keep fighting generative AI tools for as long as these companies are allowed to treat the entirety of human experience as a money printer.

Generative AI Can’t Operate Cheaply Forever

Perhaps one of the biggest issues facing generative AI is cost, whether that be time, energy, or literal cash. Eventually, that cost will be passed onto the consumer.

Training the models alone takes forever, with some estimating that GPT-3 was trained for 34 days while GPT-4 was trained for nearly 3 months. You might argue that taking a quarter of a year to train a model is reasonable and will improve as these companies acquire more hardware, but the problem is that OpenAI was already training GPT-4 with 25,000 Nvidia A100 GPUs. For context, I found one seller on Amazon who listed that card for over $8,000. Even at a quarter of the cost per unit, we're talking $50 million in hardware alone (not to mention needlessly driving up the cost of hardware for the average consumer).
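The arithmetic there is simple enough to spell out (with my quarter-of-retail guess for the per-card price):

```python
# Back-of-the-envelope hardware cost: 25,000 A100s at a quarter of the
# ~$8,000 listing price I found (the $2,000 bulk price is my assumption).
gpus = 25_000
cost_per_gpu = 8_000 / 4
print(f"${gpus * cost_per_gpu / 1e6:.0f} million")  # -> $50 million
```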

Of course, anyone who knows anything about technology knows that electronics get hot. When we're running the best cards on the market at max performance for three straight months, we're going to generate a lot of heat. While it's unclear how much cooling costs during training, even running queries is expensive. After all, one study estimates that a single query generating around 100 words could consume up to 3 bottles of water. Somewhere, cooling has to factor into the $700,000 that OpenAI reportedly spends per day to keep ChatGPT running.

If we consider just the cost of running ChatGPT, we're talking about roughly $255 million in annual upkeep (i.e., enough to build five more of those GPU farms every year). Based on ChatGPT's current pricing, covering that would take over a million subscribers to the "Plus" tier at $20/month. Fortunately, OpenAI has reportedly surpassed 11 million paid subscribers.
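Again, the math is worth spelling out, since it's just the reported $700,000/day scaled up against the $50 million farm estimate and the $20/month subscription:

```python
# Upkeep vs. revenue, from the reported figures above.
annual_upkeep = 700_000 * 365                 # ~$255.5 million/year
farms_per_year = annual_upkeep / 50_000_000   # ~5 GPU farms' worth of cash
revenue_per_sub = 20 * 12                     # $240 per Plus subscriber/year
subscribers_needed = annual_upkeep / revenue_per_sub
print(f"${annual_upkeep / 1e6:.1f}M/year upkeep, {farms_per_year:.1f} farms, "
      f"{subscribers_needed / 1e6:.2f}M Plus subscribers to break even")
```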

As it turns out, despite the massive day-to-day upkeep and hardware costs, ChatGPT actually makes quite a bit of money toward covering its own costs. Even ignoring subscription fees, one source claims that OpenAI makes most of its money from API fees. So, clearly OpenAI is doing well, right? Absolutely not! OpenAI is still not profitable.

In 2024, OpenAI made about $3.7 billion while posting nearly $5 billion in losses. Apparently, their $200/month subscriptions are operating at a loss, and they also need to pay their employees for their labor and their "landlords" for the office space.

In response, OpenAI was planning to raise the price of its "Plus" tier by $2 before the end of 2024, which represents a 10% increase for consumers. While this apparently never happened, OpenAI does have plans to increase the price of the "Plus" tier to $44 by 2029. Clearly, they intend to pass the cost of their products off to consumers as quickly as possible.
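For a sense of how steep that trajectory is, going from $20 to $44 over roughly five years works out to about 17% compounded annually (assuming a 2024 baseline, since the exact schedule wasn't reported):

```python
# Implied annual price growth from $20/month (2024) to $44/month (2029).
start_price, end_price, years = 20, 44, 5
annual_growth = (end_price / start_price) ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")  # -> ~17.1% per year
```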

Personally, I would argue that, given what I've already discussed in this article, it's going to become a lot harder for these generative AI companies to scale without massively increasing their prices. They won't be able to improve their models without highly curated data sets (e.g., literally paying artists for their work), given the issues of adversarial attacks and model collapse. They won't be able to train any new models without even more data, more hardware, and better cooling—all of which probably don't scale linearly (e.g., doubling the size of the training set won't double the model's performance).

Therefore, expect generative AI tools to get a lot more expensive (while likely getting worse). If you don’t believe me, take a look at just about every major tech service of the last decade.

  • Netflix continues to increase its price while making their service worse by introducing ads and abandoning all your favorite shows.
  • Twitter is a bot-filled hellscape run by a narcissistic neo-Nazi that pushes a subscription on you for even the most basic features while taking away core features like blocking and being able to see likes.
  • Google continues to center ads and AI responses over genuine articles in its search results. This is perhaps unsurprising as Google’s core product, search, has been enshittifying for the last decade. Certainly, I’ve made no shortage of complaints. Not to mention that their Google Photos service, which I’ve previously bitched about, was paywalled after they mined all our photos for their classification models (and presumably their generative models).
  • Apple continues to remove features from the iPhone, like the headphone jack or the home button, to sell you on new features that you don’t even want, like the new action button or camera control button, for reasons.

There's a seemingly never-ending list of products and services that have only gotten worse, not better, over time. Yet, somehow, we expect that generative AI companies like OpenAI are going to be different in how they manage their services. The plan has always been to make new tech ubiquitous no matter the cost, so when prices inevitably rise to turn a profit, the little piglets are forced to cough up their coin.

Generative AI Is Destroying the Internet

While all of the things I have listed above get at some of the structural issues facing generative AI, there is also a major cultural issue that will determine the shelf life of generative AI: how long will people put up with its destruction of the internet?

Right now, people have a pretty high tolerance for slop. We love remakes. We don’t necessarily care how good TV shows are as long as we can binge them. We’re happy with another season of Love Island or another iteration of Call of Duty. Our standards are quite low in that regard.

With that said, I have to imagine that what generative AI is doing to the internet will get people off of it altogether. Currently, websites are rapidly appearing with loads of ChatGPT-generated content on them. This floods the internet with new "content," further skewing search results toward lower and lower quality sources. Hell, search engines themselves have given up on actually serving up webpages and instead feed you the AI slop directly.

Meanwhile, if you spend any time on social media, you've surely noticed that most of your "peers" are bots. It might seem weird to flood social media sites with bots, but it's apparently very lucrative. Now, you can create a whole bot army to drive discourse through rage bait, or you can use those same bots to scam kids and old people.

To me, all of this signals a total lack of respect for the internet as a social project. What launched in the early 90s as the "World Wide Web" has gone from a dumping ground for SEO-optimized junk for advertisers to a black hole for bot slop. We managed to destroy our contemporary Library of Alexandria in just a couple of decades—hell, in just my lifetime. I can't imagine folks are going to stick around for much longer.

Of course, I'll always promise this site is 100% slop free. Sure, I might write a bad article here or there, but you can be sure it was written entirely by my own hand.


On that last note, I do think that these generative AI companies are okay with destroying the internet. I think part of their strategy is to do exactly that. After all, if nothing is to be trusted on the internet, generative AI becomes the only trusted digital source of information. Suddenly, ChatGPT becomes a monopoly on information, and we’ll all be paying the subscription fee to avoid using the internet. In other words, it may be that generative AI has an incredibly long shelf life as it works to destroy all alternatives. What a nightmare.

While I'm here, I figured I'd also make a note of an interesting video I saw recently titled Something strange is happening on Google. The thumbnail lured me in because it shows a search for chatgpt 4o, so I was interested to see what generative AI was ruining this time. Of course, like most videos, the title and thumbnail are bait. The actual premise of the video is that Google search positions are bought by scammers to trick people into going to the wrong site. It's a problem for sure, but I can't say the video did a good job of convincing its audience. After all, the victims of this scam were folks who wanted to pay for ChatGPT premium but instead got scammed into paying for an impostor.

Now, I should mention that scammers are disgusting. I've been an avid Kitboga fan for a long time, and I subscribe to his belief that victims are never really at fault. Scammers are really good (and getting better) at scamming—plain and simple. After all, scamming is a multi-billion dollar (or perhaps even trillion dollar) industry, so it's no surprise that scammers can afford to be more sophisticated (not to mention that generative AI has become a wonderful weapon for them).

With that said, there are countless ways to protect yourself from these kinds of scams. Personally, I don't even use Google anymore, and even if I did, I don't set foot on the internet without an ad blocker. I also enjoy using a password manager, which won't even let me attempt to sign into a site that doesn't match the URL of the credentials I have stored. So, while I don't necessarily blame the victims for getting scammed, the irony is not lost on me that the group of people who relinquished their critical thinking to a chatbot are getting so easily scammed. "Hey, Chat! Should I enter my credit card info on this site?"

I suppose I’ll just keep pushing the same propaganda since my trip to Japan: people really need to stop trusting technology.
