Why Generative AI Makes the Future of Software Development Worse

A photo of a snail crossing a gap with the title of the article overlaid.

Today, I wanted to take a bit of a different spin on the usual “AI Sucks” rants by giving my take on the effect I think generative AI will have on the future of the software development discipline. Hint: there are three paths, and even the best case is bad.

The Ideal Case for Generative AI: Creating a Skill Gap

There is obviously a lot of hype around generative AI right now, especially in tech bro circles. It’s being pushed on us at every turn. These days, the same guys don’t just say “use Claude” or “use Cursor”; they say “make an agent.” Each new “innovation” is supposed to make our job as software developers easier and more efficient, but I think this mindset is incredibly myopic.

Let’s assume for a second that generative AI does everything we’re promised. Let’s say it can do most of the development for us. We just type in our prompts, and we get decent software solutions. Hell, I’ll take it a step further and even say that we can continue to prompt these models to make appropriate changes and add desired features.

What happens when something fails? The assumption right now seems to be that a trained software developer will be able to dig through the code and figure out what’s wrong. Of course, we can make that assumption because software developers are currently trained to do just that. In the future, if we push everyone to become “prompt engineers,” who then will fix the software?

Just think about that for a second. If the next crop of software engineers doesn’t learn how to code, how are any of them expected to troubleshoot systems? And you might think I’m fearmongering with this take. After all, surely universities aren’t going to stop teaching coding, right? Of course not. However, with the ubiquity of chatbots, there is now very little that universities can do to ensure students are actually learning.

Therefore, with each new round of junior developers (if short-sighted companies are even hiring them anymore), we’ll gradually see a degradation in skills. It might not be obvious because the generative AI tooling is doing a lot of the heavy lifting, but we’ll eventually be left with a skill gap where developers can’t debug software without the aid of an LLM.

Obviously, we’re banking on the idea that this is a non-issue. We’re praying that LLMs will get good enough that there will never be a bug they can’t solve or a feature they can’t implement. Is that a gamble we’re willing to make? And if so, are we cool with not understanding systems that we didn’t even build ourselves? To me, that seems like a recipe for disaster long term.

I suppose the silver lining is that anyone willing to learn the skills to fill that gap is infinitely employable.

The Most Likely Case for Generative AI: Bleeding Technical Debt

In the real world, generative AI is, at best, a cool gimmick. For every supposed good use case, there are probably dozens of evil use cases, from deepfakes to scams. Even as a lazy search engine, it lies. Meanwhile, the dream that increasing context size would lead to improved performance is dying—not to mention that using generative AI might even make you slower.

Therefore, I would argue that any sufficiently trained software developer can write, review, and debug code better than any LLM on the market. However, there may come a time when LLMs are better, not because LLMs have actually gotten better but because our trust in them has diminished our own skills. In that case, there will be no opportunity to write good code; all code will be shit.

To me, generative AI is (contrary to popular belief) an accelerator of technical debt, and it shouldn’t be too hard to see why. Technical debt is largely caused by folks taking the path of least resistance. Your boss needs a feature by the next sprint? You can probably whip together a quick prototype, but you’ll surely need more time to polish it. What’s that? Every temporary fix becomes the permanent solution?

See, we like to romanticize a future where technology is created by benevolent developers for benevolent corporations, but the reality is that everything is held together by duct tape. Rather than address this underlying problem (*cough* capitalism *cough*), we’ve found an even faster way to produce slop: generative AI. If you thought temporary solutions were entrenched before, get ready for code that no one even bothered to write, read, or test becoming the fabric of mainstream tech.

I suppose the silver lining is that anyone willing to deeply understand existing code bases to manage technical debt will be infinitely employable.

The Worst Case For Generative AI: Welcome to the Scam Economy

At this point, mountains of technical debt might actually seem like the best case scenario compared to where I think we’re headed. After all, generative AI is not a technology that is going to make developers better at their jobs. It’s a technology that is going to enable companies to produce new scams at an alarming rate.

Technical debt is not actually a problem for companies when they can just as quickly prototype their next product or service. Is your app becoming too much to maintain? Is it not a money printer like you had hoped? Kill it and pivot to the next one.

Does this pattern sound familiar? That’s because it’s basically Google’s entire strategy. Here’s a quick list of products and services they’ve killed just off the top of my head:

  • Google Hangouts
  • Google Domains
  • Google+
  • Google Stadia
  • Google Glass
  • Google Jamboard

Google gets away with this because it’s absolutely massive. It can take a shotgun approach with development and kill anything that isn’t massively profitable, regardless of who’s affected. For instance, I think about all the times I used Jamboard for classes during COVID. That’s gone now.

Likewise, Google can buy up smaller brands and shut them down at will. As someone who has worn a FitBit for the last decade, I have nothing to look forward to in wearable fitness technology because Google isn’t making any new FitBits. Instead, I’m expected to jump ship to their line of Pixel watches. No thanks.

If you hate what Google does, then you’ll really hate where the tech industry is heading: the scam economy. I’m not sure where that term originated, but Matt Binder has a wonderful podcast by the same name. Regardless, we’re living it. AI bros like to act like generative AI has democratized art while all it’s doing is democratizing scams.

Ultimately, being able to mass produce products and services is not an inherent good. In much the same way that we don’t need 40 varieties of ketchup, we don’t need 40 different chatbot apps. Except, in the digital space, it’s more like 4,000 different apps all algorithmically tailored to you, you insatiable you.

I, for one, am really excited for what will look like a renaissance of products and services flooding the market, only for 80+% of them to be outright scams. The quicker you can prototype some junk and get it in front of an investor, the quicker you can rug pull the public. It’ll be awesome.

Did I mention that I don’t really have a silver lining for this one?


Hello! Lately, I’ve been reading Dune, and I’ve been thinking about the sort of skills that have been lost to time—skills that we don’t teach or train because we have technology that makes them seem pointless to learn. For instance, we all rely on grocery stores for food, but so few of us have skills like gardening, foraging, hunting, and fishing. Sure, these are hobbies that folks have, but they’re not generalized skills that we all have like reading and math.

I think about this type of thing because our systems are built on abstractions. We tend to think that when we abstract away the “boring” or “hard” stuff, we can forget about it, and I have to wonder how long that sort of thinking can last. It’s why I harp so much on generative AI: prompting is yet another layer of abstraction on top of an industry of abstractions.

And while this might all seem like fearmongering, like I said before, I don’t think we have to look too far into the future to see how things might crumble apart. In the US, Trump has been putting tariffs on every import with the supposed goal of building out those industries domestically. It’s a nice idea in theory, but there are just skills (and resources) we do not have in this country to produce certain goods. We simply can’t produce advanced silicon chips at scale in the US, at least not without immense training programs.

Hell, we can look at the tech industry itself. There’s a running gag that COBOL is embedded in all sorts of financial systems. Of course, if you took a sample of software developers, you’d probably find that less than 1% of them have ever even seen the language—let alone written a program in it. I can only imagine that reality is going to get worse over time. We call it legacy code, but it’s really lost knowledge. I suppose we just have to pray that AI gets good enough to make these sorts of issues irrelevant, but I’m not exactly hopeful.

The Hater's Guide to Generative AI (14 Articles)—Series Navigation

As a self-described hater of generative AI, I figured I might as well group up all my related articles into one series. During the earlier moments in the series, I share why I’m skeptical of generative AI as a technology. Later, I share more direct critiques. Feel free to follow me along for the ride.

Jeremy Grifski

Jeremy grew up in a small town where he enjoyed playing soccer and video games, practicing taekwondo, and trading Pokémon cards. Once out of the nest, he pursued a Bachelor's in Computer Engineering with a minor in Game Design. After college, he spent about two years writing software for a major engineering company. Then, he earned a Master's in Computer Science and Engineering. Most recently, he earned a PhD in Engineering Education and now works as a Lecturer. In his spare time, Jeremy enjoys spending time with his wife and kid, playing Overwatch 2, Lethal Company, and Baldur's Gate 3, reading manga, watching Penguins hockey, and traveling the world.
