Not All Code Completion Is Generative AI

[Featured image: a photo of a steam engine with the title of the article overlaid.]

In an effort to accelerate AI hype, we now have people saying that every developer is using AI. Their evidence? Every IDE has some code completion feature. While that might be true, I think it’s important that we don’t muddy the waters by pretending that all code completion is generative AI.


The Steam AI Tag Drama

If you’re not terminally online (which I no longer am), you probably haven’t heard about the latest Steam drama. Steam, if you’re not a gamer, is a marketplace for video games. It’s where I get most of my indie games.

The latest drama revolves around Tim Sweeney, the CEO of Epic, a video game development company that also has its own marketplace. Sweeney criticized Steam for marking video games that include AI-generated assets with an AI tag. You can read his actual comments here.

Naturally, there was a bit of a debate on both sides of the issue. Some, like one Valve employee whose comments can be found here, argued that it’s important for video games to be tagged appropriately for the same reason that we like having ingredients listed on food. Others complained that the AI tag was too broad and could cover anything AI-generated, from images to audio to code.

Where the Drama Lost Me

Frankly, I’m not that interested in the debate. I think it’s good that Steam wants to label video games that include AI-generated assets, but I assume that developers are just going to start hiding the facts. Perhaps it doesn’t matter at all, since one of the biggest games right now, Arc Raiders, seems to be succeeding in spite of the pushback around its use of generative AI.

However, there has been a part of this debate that drives me completely insane. Folks who don’t want the AI tag on Steam keep citing the idea that almost all code produced now is generated by AI. I don’t know if there is any evidence for this claim, and I don’t even know how you would measure it. But here are a few comments to show you what I mean:

I don’t think any modern thing that involves programming is made without AI. It’s proliferated every aspect of the job. Without nuance, it’s a useless label.

Says Reddit user “HeyDudeImChill”

Yeah, that was my question on this whole thing. Any modern programmer is pretty much required to use AI. Does that mean they get the label?

Says Reddit user “PopularDemand213”

Now, I wouldn’t be so annoyed by this because maybe it’s true. Maybe everyone writing code now is just prompting Cursor to add the latest slop feature. I don’t know.

However, what bothers me is the evidence that folks keep citing. People keep saying things to the effect of “I know everyone uses AI to code because every modern editor has code completion,” such as in the following comments:

Yeah these posts are easy way to notice people who have no fucking idea about programming, game development and current landscape of genAI within it. I get that using genAI for things like art or writing pisses people off, it does piss me off too, but if you put all genAI usage under the same umbrella of “AI bad” and want to filter it out, then good luck. One hit of tab to autocomplete in current VS Code, that’s it. Now the game is AI slop in the mind of these people.

Says Reddit user “Talking-Nonsense-978” (you can’t make this shit up)

I get not using agentic programming but not even using the auto completion feature?

And I do get around quite a bit actually. Enough to know someone at your company is using it.

Says Reddit user “HeyDudeImChill”

I think it’s fine this tag exists, but people need a better/deeper understanding of how AI can be used in more than just art assets.

There’s quite a lot of ML/AI that can be customized, created from scratch, or implemented that isn’t just straight up art theft — this includes code/auto-complete, build tools, testing. Software development in general really.

Says Reddit user “drumstix42”

This actually drives me crazy because not all code completion is generative AI.

I Feel Like I’m Being Gaslit

Code completion is a feature that has existed in IDEs for probably decades. While I think it’s hard to find the actual origin of code completion, I would argue that it fell naturally from the invention of compilers in the 1950s—thanks to Grace Hopper and the folks who came before her.

Of course, for code completion to exist, you probably need an IDE. IDEs, according to one source, didn’t start showing up until the 80s. The one IDE I see mentioned a lot, both in the previous article and on Stack Overflow, is Turbo Pascal. Later, in the early 90s, Microsoft released Visual Basic.

According to both the aforementioned article and Wikipedia, the first tool to demonstrate code completion was IntelliSense, which shipped with Visual Studio in the mid-90s. By the early 2000s, several other tools came on the scene, including Eclipse and IntelliJ.

In other words, we’ve had code completion or autocomplete for basically my entire life, and definitely for as long as I’ve been developing software. Therefore, it’s a little odd to see people refer to it as AI.

How Generative AI Works

To meaningfully draw a distinction between traditional and AI-assisted code completion, I think it’s important to roughly explain how generative AI works.

Generative AI is a special kind of machine learning algorithm that works like most other machine learning algorithms. You feed it a massive amount of data, and it attempts to find patterns in that data. Think of it like a sophisticated statistical model. You give it data, and it fits a curve over that data. You can then give it new data, and it can predict the appropriate output based on the curve it fit (i.e., like a line of best fit).
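If that analogy feels abstract, here’s a toy sketch in Python (the data points are completely made up for illustration): we fit a line of best fit to a handful of observations, then use that line to “predict” an output for an input the model has never seen.

```python
import numpy as np

# Toy "training data": inputs and the outputs we observed for them.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# "Training" here is just fitting a line of best fit (degree-1 polynomial).
slope, intercept = np.polyfit(x, y, deg=1)

# "Inference" is plugging a new input into the fitted curve.
new_input = 6.0
prediction = slope * new_input + intercept
print(f"Predicted output for {new_input}: {prediction:.2f}")

# The prediction is plausible, but nothing guarantees it's correct,
# which is the same reason a generative model can "hallucinate."
```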

Hopefully it’s clear by now, but this is why generative AI “hallucinates.” It’s simply making a prediction based on some input, and sometimes that prediction will be incorrect. It’s also why it’s difficult to predict the output of generative AI; there’s no way to know what patterns it picked up from its training data. If you like an edgier description, I found this video really helpful.

In that video, the author draws a distinction between generative AI and procedural generation. The difference is that procedurally generated outputs are always understood: we design the algorithm, so we know how it works and how to change it. Generative AI outputs are never understood, so there is no easy way to make an appropriate adjustment. In fact, once the models are trained, they can’t really be changed; you typically have to start the training process over.
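For contrast, here’s a toy sketch of procedural generation (the loot table and seed are made up for illustration). Because we wrote the algorithm ourselves, the output is repeatable, inspectable, and easy to adjust:

```python
import random

def generate_loot(seed: int, count: int = 3) -> list[str]:
    """Procedurally pick loot from a hand-written table using a seeded RNG."""
    rng = random.Random(seed)
    loot_table = ["sword", "shield", "potion", "gold", "scroll"]
    return [rng.choice(loot_table) for _ in range(count)]

# Same seed, same output. If we want different behavior,
# we just edit the loot table or the selection logic.
print(generate_loot(seed=42))
print(generate_loot(seed=42))
```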

I’m not sure code completion would be classified as procedural generation, but the argument still holds. We know how code completion works; it’s an algorithm. Given a set of inputs, we know what the outputs are going to be. So, how does code completion actually work then?

How Does Code Completion Actually Work?

While I can’t speak to exactly how tools like IntelliSense work, it doesn’t take a genius to make an educated guess. That’s because, like I said previously, we know how compilers work, and there’s one aspect of compilers that is really useful for static analysis: parsing.

Most likely, static analysis works by creating an abstract syntax tree of the source code. It turns out that if you can convert your code into a tree, you can traverse that tree to determine how different symbols relate to each other.
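As a rough illustration (this is just a sketch using Python’s built-in ast module, not a claim about how IntelliSense is implemented), here’s how source code becomes a tree you can traverse:

```python
import ast

source = """
def greet(name):
    message = "Hello, " + name
    return message
"""

# Parse the source into an abstract syntax tree.
tree = ast.parse(source)

# Walk the tree and list every function definition and name reference.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print(f"function: {node.name}")
    elif isinstance(node, ast.Name):
        print(f"name: {node.id}")
```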

For example, if I have two variables with the same name in a Java program, I can distinguish between them by determining their scope. Are they local variables in separate functions? Then, they’re not related. When I rename one of them, I shouldn’t rename the other one.
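The same idea is easy to demonstrate in Python, which ships a standard library module, symtable, that exposes exactly this kind of scope information. Here’s a small sketch showing that two variables named total in separate functions live in separate scopes:

```python
import symtable

source = """
def first():
    total = 1
    return total

def second():
    total = 2
    return total
"""

# Build the symbol tables for the module and its nested scopes.
module = symtable.symtable(source, "example.py", "exec")

# Each function gets its own table, so each 'total' is a separate local.
for scope in module.get_children():
    symbol = scope.lookup("total")
    print(f"'{symbol.get_name()}' is local to {scope.get_name()}: {symbol.is_local()}")
```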

In that sense, code completion kicks off when certain characters trigger a lookup. For instance, if you’re looking to call a method on a variable, you’ll type a dot (i.e., “.”). That triggers the code completion function, which looks at your current context and checks it against the abstract syntax tree, which likely has an associated symbol table. That symbol table tells code completion the variable’s type, which can then be used to suggest appropriate methods.
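Here’s a deliberately simplified sketch of that lookup. The symbol table and the method lists are hard-coded stand-ins for what a real IDE would build by parsing your project:

```python
# Toy symbol table mapping variable names to their types.
SYMBOL_TABLE = {"count": "int", "name": "str"}

# Toy listing of methods available on each type.
METHODS_BY_TYPE = {
    "int": ["bit_length", "to_bytes"],
    "str": ["upper", "lower", "split", "startswith"],
}

def complete_after_dot(variable: str) -> list[str]:
    """Suggest methods for `variable` based on its type in the symbol table."""
    variable_type = SYMBOL_TABLE.get(variable)
    return METHODS_BY_TYPE.get(variable_type, [])

# Typing "name." would trigger something like this:
print(complete_after_dot("name"))  # ['upper', 'lower', 'split', 'startswith']
```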

Sure, completion can be made more sophisticated by suggesting the most relevant method in your context. For example, maybe you’re making a method call whose result is about to be stored in an integer. In that case, it might make sense to list the methods that return integers at the top of the list. All of this information can be gathered by quickly parsing the current line you’re writing.
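Building on the toy sketch above, the ranking could be as simple as sorting methods whose return type matches the surrounding context to the top (the return types here are hard-coded for illustration):

```python
# Toy table of return types for a handful of str methods.
RETURN_TYPES = {
    "upper": "str",
    "lower": "str",
    "split": "list",
    "startswith": "bool",
    "count": "int",
    "find": "int",
}

def rank_completions(methods: list[str], expected_type: str) -> list[str]:
    """Put methods that return the expected type first, keeping the rest in order."""
    return sorted(methods, key=lambda method: RETURN_TYPES.get(method) != expected_type)

# If the result is being assigned to an int, int-returning methods float to the top.
suggestions = ["upper", "lower", "split", "startswith", "count", "find"]
print(rank_completions(suggestions, expected_type="int"))
# ['count', 'find', 'upper', 'lower', 'split', 'startswith']
```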

Likewise, you can cut down on the list of options as a user types characters. Perhaps the best data structure for this is the Trie, which lets you model all the possible strings in a hierarchy. As the user plots their path down the tree, you can show fewer and fewer possible completions.
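A minimal trie sketch might look something like the following. Insert the known identifiers once, then narrow the suggestions with each character the user types:

```python
class TrieNode:
    def __init__(self):
        self.children: dict[str, "TrieNode"] = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        """Add a word to the trie, one character (node) at a time."""
        node = self.root
        for char in word:
            node = node.children.setdefault(char, TrieNode())
        node.is_word = True

    def completions(self, prefix: str) -> list[str]:
        """Return every inserted word that starts with `prefix`."""
        node = self.root
        for char in prefix:
            if char not in node.children:
                return []
            node = node.children[char]

        results = []

        def collect(current: TrieNode, path: str) -> None:
            if current.is_word:
                results.append(prefix + path)
            for char, child in current.children.items():
                collect(child, path + char)

        collect(node, "")
        return results

# Insert the identifiers we know about, then narrow as the user types.
trie = Trie()
for identifier in ["startswith", "strip", "split", "splitlines"]:
    trie.insert(identifier)

print(trie.completions("s"))    # all four identifiers
print(trie.completions("spl"))  # ['split', 'splitlines']
```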

Why Should Anyone Care?

This entire article might seem like a petty grievance, but I think it’s important to be clear that just because most developers use some form of code completion doesn’t mean that most developers are using generative AI. Code completion has been around for quite some time, and it’s largely been used to free you from memorizing every API.

What generative AI does to “enhance” code completion, as the VS Code docs put it, is far more than looking up a method name. It reads what you’ve written and treats that as an input to an LLM. Then, it spits out some code.

Obviously, I’ve been a hater of this technology for some time, but I think this is really bad news for the field. Just to illustrate why, I’ll share a quick personal story.

Recently, I had a student who was working on a project. They’re the type of student that comes to office hours to have me do their work for them. Seemingly, they think that if I show them how to do something, they will learn how to do it.

On this particular occasion, they wanted me to help them make sense of feedback that they had gotten from a grader. I restated the feedback to them a few times and watched as they almost purposely misunderstood it until I just gave them the answer.

While this was happening, I was watching Copilot suggest almost insane code blocks when what I needed the student to do was delete a couple of lines of code. In that moment, I was thinking about how their use of AI-assisted code completion was clearly harming their ability to learn how to code and instead enabling their learned helplessness.

Now, however, I’m thinking about how much more absurd it is that these tools “think” the solution to any problem is MORE code. If that’s the approach we’re going to take as developers, then forget what bloated applications used to look like. In response, we’ll be looking to AI to clean up our mess (or is that already here?).


With all that said, thanks again for checking out another article! I am getting a bit bored of the usual generative AI rants, but it seemingly causes me some kind of pain on a regular basis. Once that stops, I suppose I’ll have something else to talk about.

In the meantime, if you enjoy these rants, there are plenty more where that came from.

Likewise, you can support this series and the broader site by checking out my list of ways to grow the site. Otherwise, take care!

EDIT: a little pre-publication edit, but I saw on Reddit the latest backlash to Larian using generative AI in their concept art. The same sentiment about “everyone” using AI in development was all over the place. For example, the following is, to me, a completely insane take:

99.9% of all software development will use some sort of AI powered tool somewhere in their pipeline.

Its as ubiquitous as source control at this point. Saying “we use AI tool” is like saying “we use git”.

Says Reddit user “Which-House5837”

You know how I know this take is bullshit? When I was working at GE in 2016, I was on a team that wasn’t using version control at all. I had to transition us from using zips to using git. Frankly, there is no way that every team has transitioned to AI tooling in three years. I don’t buy that for a second.

Of course, it’s really easy to tell when someone is full of shit when they say things like this deeper in the thread:

Very strange to say this when copilot is baked into git. Very very strange.

Says Reddit user “Which-House5837”

If they mean Copilot is baked into GitHub, then they are correct. Microsoft has been rapidly shoving Copilot into every tool to try to maintain its relevance. On the other hand, git is a very different thing. It’s a command-line tool for versioning software. GitHub is just a way of centralizing a repository. You could just as easily host a repository on Codeberg, GitLab, Bitbucket, etc. It hardly surprises me when someone singing the praises of generative AI doesn’t know the most basic facts about version control.

Ultimately, I think people (though, I suppose they could be a network of bots) hide behind the term “AI” in its broadest sense, so they can continue to propagandize. There is a massive difference between generative AI and any other form of AI (e.g., classification, classical, etc.). Like binning every type of “AI” as the same thing is disingenuous at best. Of course, the dialog tree that NPCs use in Morrowind could be called AI. Hell, pathing algorithms like A* are a part of classical AI. These are obviously not the same as a technology built entirely on IP theft. Pretending that all AI is the same in the name of “progress”, whatever the hell that means, is absurd.

The Hater's Guide to Generative AI (16 Articles)—Series Navigation

As a self-described hater of generative AI, I figured I might as well group up all my related articles into one series. During the earlier moments in the series, I share why I’m skeptical of generative AI as a technology. Later, I share more direct critiques. Feel free to follow me along for the ride.

