What Happens When I’m Forced to Teach AI?

A photo of a Model T car with the title of the article overlaid.

Lately, I’ve been feeling like AI has infiltrated every facet of my life, and it’s starting to feel like teaching AI will be more of a “when” than an “if.” As a result, I wanted to take some time to write about what it will be like when I ultimately have to teach AI.

To be honest, I’m near the end of my semester, and this article is coming from a place of stress and burnout. So, it’s a bit incoherent at times.

I want to give a little caveat out of the gate here. I use the word “AI” in the same colloquial sense as the average person. Like every article in this series, I take issue specifically with generative AI, which is often abbreviated to genAI. In this particular article, I am mostly complaining about a subset of genAI called Large Language Models, or LLMs (i.e., the chatbots that everyone is using to write software). Hopefully, that helps you contextualize this piece a bit.

I Don’t Want to Teach AI

For all of the hating I do on AI, there’s always been this concern in the back of my head: what happens when I’m forced to teach AI?

As an educator, I want to continue to teach software development, not “prompt engineering.” I don’t want to “adapt” to this new landscape. I just don’t. It has nothing to do with not wanting to change and everything to do with how absurd this moment is. Let me try to explain my thinking with an example.

Working at a university with an education background gives me superpowers. I get to run my classroom in ways that my peers could never imagine, and it shows in my evaluations from students. In fact, by the time you’re reading this, I will have received another teaching award from my department.

So, it frustrates me when I see the world rapidly disrupted by what we’re colloquially calling “Artificial Intelligence.” Like, every day I’m seeing money get thrown at studies to figure out how helpful “AI” might be in the classroom. To me, this is baffling. We already know how to effectively teach because we have decades of academic literature on the topic from various fields (e.g., education, sociology, psychology, etc.). Yet, the average college classroom is run using pedagogy that is literally decades behind because research rarely makes it to practice.

I see this exact parallel in computer science. Our classes teach older software design principles. For example, we preach dogma like “never write multiple return statements in the same function.” Hell, we’re still teaching design-by-contract and using tools like Subversion and Eclipse.

Now, are software engineering techniques as rigorously tested as pedagogy? I don’t know. I’ve never seen an ounce of empirical evidence brought up in a discussion on best practices. Likewise, I really would hate to teach a class where we chase the latest web framework.

There’s a reason academia moves slowly. We like to be sure that we’re doing things the right way, with the right ontology, epistemology, and ethics. We must be able to justify everything that we do and say in academia, and that might require us to hold off on change until we’re certain we’re making the right choice.

So, forgive me if I find it a little absurd that I’m teaching a class that hasn’t changed in at least 15 years, yet I could soon find myself in a position where I’m forced to teach a technology that is only a couple of years older than my toddler.

This is even more absurd to me when compared to other inventions. For example, according to a video I was recently watching, flight was invented in 1903, but the first commercial flight didn’t happen until 1914. Meanwhile, the first international flight didn’t happen until 1917. That’s basically a 14-year journey to get planes off the ground, literally, and just 5 years to get from transformers to ChatGPT (i.e., 2017 to 2022). I’m sure there’s also an interesting parallel with how quickly we built out airports vs. data centers.

I won’t go down this rabbit hole, but I also looked into cars. As much as I love flying, I can’t share the same love for cars, and those seemed to take a while to develop. Wikipedia claims the first “modern” car was built in 1886, but cars weren’t affordable for the average person until 1913. Now look what we’re stuck with: cities that sprawl due to an obsession with cars. Forgive me if I don’t trust the US to “innovate.” Likewise, I find it almost comical how the economic model has shifted such that AI started cheap and will get progressively more expensive.

Now, it’s not like I directly oppose rapid innovation, right? For example, if a medical study uncovers a miracle treatment for some disease, it’s unethical to continue the study. You have to begin treating folks as soon as possible. I was a big fan when those vaccines were created for COVID. I just don’t think AI can be compared to a miracle treatment. If anything, it’s the inverse; it’s causing vastly more harm than good. Like, the whole “the genie is out of the bottle” argument is absurd if the genie is causing harm. You must put a stop to that genie.

What Teaching AI Looks Like

Hopefully, it’s clear that I don’t want to teach students how to launch a Ralph Loop, or whatever slop term we’re using this week. Even if that’s not clear from this piece, there’s enough hate in this series to feed nations.

Instead, let’s assume I’m forced to teach AI anyway. I know it’s coming. Our university has a silly “AI fluency” initiative, so it’s literally only a matter of time before I’m asked to deskill my students. In fact, I think about this a lot, especially because I’m certain more than half of my students are already using an LLM to get their work done. Therefore, students are learning to use AI, whether I teach it explicitly or not.

So, let’s consider the explicit teaching case. What will that look like?

  1. I teach a critical view of AI, one in which I encourage students to be skeptical of the outputs. We focus more on code reviews, and we talk more about how to vet the correctness and security of outputs. I would enjoy some aspects of this, but I would feel like Garp in One Piece—just a guy aspiring to make change from the inside while only perpetuating harm.
  2. I teach an uncritical view of AI, one in which I help students develop the skills to use AI in their jobs. This is what would probably be asked of me, and I would despise it.
  3. I teach an uncritical view of AI, one in which I largely go to work for my paycheck. I tell students to use the tools how they see fit, and everyone gets an A. Nothing about this would be fulfilling to me.
  4. I teach a critical view of AI, one in which I encourage students to use AI only for key aspects of the development process. Right now, I think one reasonable use case would be as an additional step in the static analysis pipeline (i.e., have the model review the code for possible exploits). You would still write your own code, and you would still understand how your system works. You save time by not having to read code written by a bot, you maintain your development skills, and your code is less buggy.

I’m not interested in (1) because I still strictly oppose the idea of using AI to write code in the same way I think it’s gross to use AI to “write” a book or “draw” a picture. In fact, I’m not really interested in using AI anywhere in the development process. I don’t want AI’s “help” with brainstorming, nor do I want its help with documentation. You need to be incredibly careful with what you offload to the bot because you will atrophy your skills. That much is obvious.

Of course, I’m not interested in (2) because I don’t know what skills I could possibly be teaching. To me, AI seemingly requires no skill. If you can talk, you can use AI. That’s basically the same takeaway from this video.

And even if there were some explicit skills I could teach, at the rate this technology seemingly changes, they would be outdated fairly quickly. Like, we went from “be nice/mean to the bot to get better code” to “Cursor is a no code experience.”

After all, whatever skills we could possibly be teaching seemingly exist because the tech itself is somewhat broken, right? Like, each of these “skills” is just a hack to get the model to do what you want, and they’re not even foolproof. So, there’s like an entire cottage industry of people promoting AI skills that are just folklore. It reminds me of the pre-internet era where people would spread urban legends about video games (e.g., being able to revive Aerith in Final Fantasy 7). Hell, I’ve contributed to those urban legends myself with my Halo article.

Obviously, (3) is a tempting option. I feel like a lot of folks get to this point where they treat their job as a paycheck, and they separate that entirely from their personal life. While I respect that, I could never personally do it. I try really hard to identify my personal values and hold myself to them. I could never live a double life where I claim to care about student learning while letting students do all their work with AI. At that point, I would just automate grading with AI, and it would be a mutual waste of everyone’s time.

To me, the only route that seems reasonable right now is (4). Since LLMs are basically pattern recognition machines, I’m not strictly opposed to using them to recognize exploit patterns. After all, we already use static analysis tools for this purpose, and this might cut back on testing, which most people already loathe. If AI is truly unavoidable, then I’m happy to hook it in as a part of quality control. That’s it.
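
To make that concrete, here’s a rough sketch of what I mean, with everything model-related stubbed out. The `llm_review` helper is hypothetical (plug in whatever client you’d actually use), and flake8 is just a stand-in for whatever linter you already run. The shape is the point: deterministic tools run first, the model runs last, and its findings are purely advisory.

```python
"""Sketch of option (4): an LLM as the last, advisory step in a
static analysis pipeline. A human still writes and reads the code."""
import subprocess
import sys


def deterministic_checks(path: str) -> list[str]:
    """Run the usual deterministic tools first (flake8 as a stand-in)."""
    result = subprocess.run(["flake8", path], capture_output=True, text=True)
    return result.stdout.splitlines()


def llm_review(source: str) -> list[str]:
    """Hypothetical helper: ask a model ONLY to flag possible exploit
    patterns (injection, unchecked input, etc.), never to rewrite code."""
    prompt = (
        "Review the following code for security issues only. "
        "Report findings as a list; do not rewrite anything.\n\n" + source
    )
    # Wire `prompt` into whatever model client you actually use here.
    return []


if __name__ == "__main__":
    file_to_check = sys.argv[1]
    findings = deterministic_checks(file_to_check)
    with open(file_to_check) as handle:
        findings += llm_review(handle.read())
    for finding in findings:  # Advisory output; the human decides what to fix.
        print(finding)
```

Nothing fancy, and that’s the appeal: if the bot’s review is wrong, nothing breaks, because it never had write access to the code in the first place.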

With that said, the technology is going to have to improve dramatically. Having read some of Copilot’s code reviews on GitHub, I get a little annoyed with how bad the feedback often is. Like, sometimes the feedback is good but the reasoning is all wrong. Other times, the feedback is just repetitive (e.g., “you should validate this function parameter x 1000”). It unironically makes me yearn for formal verification.

Ultimately, if I am forced to teach AI, I can’t see myself using it as anything more than a code review tool, and even that feels dubious. After all, why build a relationship with a peer through code review when you can be forever alone with your bot?

A Secret Fifth Choice

Realistically, if I were asked to teach AI, I would probably find a way out of teaching—or at least out of teaching computer science. I got into teaching because it was significantly more fulfilling for me than being a cog in the corporate machine. If teaching means my role becomes prepping the next generation of cogs (which, to be fair, is already the case to some extent), I just don’t see the value in it.

I don’t know what I would do going forward. I haven’t thought that far. At best, my writing starts to make me a living. Or maybe I travel back in time and pivot toward music or higher education. Perhaps I write a book, or I work on study abroad programs.

For now, I’m praying we’re in a dip, and people go back to developing skills again. If not, I’ll continue to teach for as long as I can hold myself together mentally. Who knows? Maybe it’s true that if you can’t beat ’em, you should join ’em.

Anyway, thanks as always for reading. If I didn’t have this outlet, I surely would have gone insane already. Perhaps these articles are cathartic for you as well. If so, there’s more where that came from in the series navigation below.

Likewise, you can take your support a step further by checking out my list of ways to grow the site. I updated it recently, so all the links should be good. Otherwise, take care!

The Hater's Guide to Generative AI (21 Articles)—Series Navigation

As a self-described hater of generative AI, I figured I might as well group up all my related articles into one series. During the earlier moments in the series, I share why I’m skeptical of generative AI as a technology. Later, I share more direct critiques. Feel free to follow me along for the ride.

Jeremy Grifski

Jeremy grew up in a small town where he enjoyed playing soccer and video games, practicing taekwondo, and trading Pokémon cards. Once out of the nest, he pursued a Bachelor’s in Computer Engineering with a minor in Game Design. After college, he spent about two years writing software for a major engineering company. Then, he earned a master’s in Computer Science and Engineering. Most recently, he earned a PhD in Engineering Education and now works as a Lecturer. In his spare time, Jeremy enjoys spending time with his wife and kid, playing Overwatch 2, Lethal Company, and Baldur’s Gate 3, reading manga, watching Penguins hockey, and traveling the world.