Generative AI Makes It Feel Bad to Be an Educator

A photo of an empty stadium, symbolizing empty chairs in a classroom, with the title of the article overlaid.

As an educator for the last seven years, I’ve seen a massive disruption in the way students approach learning since the release of generative AI tools like ChatGPT, and I don’t feel good about it.

Education Has Its Problems

I am under no illusion that education has no problems. I have long complained that engineers are often not as smart as their public perception would suggest. In fact, that was one of the eye-opening realizations I had when I first got to college: some of the people around me just weren’t very bright.

Later, I realized that part of the problem with some of my peers was that they were just good at memorization. In fact, one of my roommates for a time was one of those guys who did really well on tests, but I often felt like he didn’t really understand the material.

Then, I found out that he spent a ton of time just memorizing algorithms for solving specific problems. He never really understood the science or math. He just knew how to solve the same types of problems that would show up in the homework.

I know this for a fact because I once took a differential equations exam with him, and we got the same score. The difference was that he nailed three of the four questions, while I spread my work out across all four. The instructor ended up dropping one of the questions because so few people were able to complete it. As a result, I kept my grade, and he got a perfect score (that’s a rant for another time).

So, what’s the problem? It’s that engineering programs teach memorization. There’s no deeper thinking required if all you need to succeed is to memorize a set of algorithms for some known set of problems.

Side note: while rereading this, I thought about how memorization was also a problem in music programs. I remember auditioning with people who had memorized the audition piece perfectly but couldn’t play their way out of a paper bag otherwise. Education should probably be prioritizing the transfer of skills and not the memorization of them.

Changing the Landscape

As a lecturer myself, I approach education in a “radical” way. I have my students discuss the material before I explain it. That way, they have to at least think about the idea and try to come to a solution before I present the algorithm or approach. This act of thinking is where the learning happens.

Surely, some of my students hate this. In my mid-semester reviews this spring, I had a handful of students tell me that they wanted me to lecture more and that lecturing works better for them. This is something that I will never budge on. Learning cannot happen passively. You have to put in the work to learn, and that might mean reading, talking to people, making arguments, and synthesizing ideas.

Even in my early programming courses, I still make the students discuss things like “what do you think a language would look like without statements, only expressions?” rather than literally explaining the concept of functional programming. These types of discussions might not even go anywhere, but I know students are at least thinking about the ideas.
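
To give a sense of where that discussion is headed, here’s a minimal sketch of the contrast in Java (a hypothetical example of my own, not course material): the first version computes a value by mutating a variable inside an if statement, while the second treats the conditional itself as an expression, i.e., a value.

```java
public class ExpressionDemo {
    public static void main(String[] args) {
        int score = 87;

        // Statement style: the if statement produces no value, so we mutate
        // a variable as a side effect to capture the result.
        String label;
        if (score >= 60) {
            label = "pass";
        } else {
            label = "fail";
        }

        // Expression style: the conditional itself is a value that we can
        // bind directly, with no mutation.
        String labelAsExpression = (score >= 60) ? "pass" : "fail";

        System.out.println(label + " / " + labelAsExpression);
    }
}
```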

Of course, teaching is only half of the job of an educator. We’re also responsible for assessment. Personally, I don’t really care for the way we assess students either (or even the overall obsession with grades broadly). I could do without the high stakes exams and the repetitive tasks. It’s why I’ve put so much effort into more open-ended projects that incorporate student interests and require several iterations of feedback.

However, even with my efforts to improve education in computer science (and engineering broadly), I’ve found that I’m facing a rather painful new obstacle: generative AI. That’s what I want to talk about today.

What’s Wrong With Generative AI?

It’s no secret that I’m a generative AI hater. Regardless of how good the tech gets, I’m never going to see the output as more than slop (images, text, or otherwise). Of course, I’m not here to convince you of why the tech shouldn’t exist, why you should stop using it, or why it might even be appropriate to sabotage it. I’m here to share a little bit about why it makes my day-to-day worse.

Cheating Is Prevalent

It wouldn’t be an education article on generative AI if I didn’t at least acknowledge the role the tech plays in cheating. For a bit of context, my students have to submit a lot of work. Over the course of a semester, they submit 37 written homework assignments, 10 projects, and 3 exams.

The written assignments are graded on completion. In general, they serve as a bit of a “flipped classroom” exercise, where the students learn some of the material on their own and bring that knowledge into class with them. As a result, we don’t expect them to have the correct answers, but we do want to see an honest attempt. In total, the written assignments amount to 6% of their grade, so not completing one here or there isn’t really a big deal.

Despite that, a good portion of students just throw the assignment directly into a tool like ChatGPT and submit the output as-is. I can always tell because ChatGPT and tools like it produce code that is wildly different from the types of things we teach in class. Generally, they produce code with as many of the following features as possible (feel free to check out my recent reflection for examples, or the sketch after this list):

  • Hallucinated APIs,
  • Style that goes against our discipline,
  • Code structures that students could never explain,
  • Comments on every line,
  • And, compilation errors.
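
For a hedged illustration, here’s a Java sketch of my own construction (not real student output) showing the most recognizable tell, the comment on every line. I’ve left out the hallucinated APIs and compilation errors so that the sketch itself still compiles:

```java
public class SlopDemo {

    // Declare the main method.
    public static void main(String[] args) {
        // Initialize a variable to hold the sum.
        int sum = 0;
        // Loop from 1 to 10 inclusive.
        for (int i = 1; i <= 10; i++) {
            // Add the current number to the sum.
            sum = sum + i;
        }
        // Print the sum to the console.
        System.out.println("The sum is: " + sum);
    }
}
```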

This feels bad as an educator because you either have to report it as cheating or grade it knowing the student didn’t learn anything. Personally, I didn’t get into education to grade assignments, and I certainly don’t enjoy grading an assignment that a student didn’t bother to do. In my head, I just imagine that the student will eventually hit a wall where they will no longer be able to cheat their way through their education, but LLMs seem to be pushing that wall further and further back.

Interestingly, students also cheat on the projects and exams, but that’s to be expected as they’re worth more points. While the projects have always been a source of cheating, since the solutions can be found on GitHub in a quick search, the exams bug me.

I am extremely charitable with my exams, going as far as to literally tell students what questions will be on them beforehand. In fact, students are typically pretty receptive to this approach to exams, and they even respect my one rule: they must take the exam alone. Yet, students don’t seem to see LLMs as another person, so they happily ChatGPT their way through exams.

Fortunately, the code that ChatGPT produces is often bad enough that the student will get a poor grade. However, it’s still very painful to see this on an exam. Again, why would I bother grading it if the student isn’t even going to bother writing it?

Text Is Just Food for the LLM

It might come as no surprise, given that I’m an educator and blogger, but I like to write. I enjoy the process of explaining an idea in great detail, and that joy for writing translates into my classroom materials. Just about everything I assign is full of text meant for students to read and process. For example, I have rubrics to give students an idea of how they’ll be assessed. I also have background information to help students with their designs and ideas.

While I’m aware that this can sometimes look like a wall of text to students, I am often disappointed when they admit to me that they didn’t even read any of it. That’s normally not a big deal. Like sure, I’ll walk you through it if you want. It’s lazy, but I’ll do it.

The problem is that students aren’t even coming to me now. They just put the assignments directly into ChatGPT (colloquially called “chat” by the students) and ask for a summary or a task list. Then, they submit some garbage to me based on some reductionist output they got from an LLM. Naturally, when that’s reflected in the grade, they’re annoyed. They wonder how ChatGPT could have possibly missed a part of the rubric.

Of course, that’s not even the worst of it because I allow for resubmissions. In my mind, students should be able to make mistakes and work toward mastery. Yet, this approach also makes being an educator feel bad because students probably don’t even look at the feedback. They can pass it directly into a tool like Cursor and watch as their code is transformed. Then, we grade it again, spending time to tell the student why the new code isn’t just wrong but not even in the ballpark of what we expected. Rinse and repeat.

To give you an idea of what I mean, I want to share the story of how a student resubmitted the same assignment (i.e., project 2) four different times without getting full credit.

  • 0/10: First, they submitted the assignment without following the directions, so we gave them a zero.
  • 7/10: Second, they submitted the assignment correctly, but their work had major errors (mainly in testing). For reference, we generally expect them to test their code using the “zero, one, many” approach, which basically means three tests per method (see the sketch after this list). In this assignment, there are three methods and four constructors to test. Some of the constructors only need a single test, and some of the methods could benefit from more than three tests. In total, I expect around 21 test cases. They had seven: one for each method and constructor. Naturally, I explained very clearly how to test their work in the comments.
  • 8/10: Third, they submitted the assignment with five new tests. Two of the tests were for methods they did not write, two were in the ballpark of expectations, and the fifth just hit a constructor with a random input (essentially duplicating an existing test). Again, I explained myself in even more detail.
  • 9/10: Fourth, they submitted the assignment with nine new tests tacked onto the bottom of the test file. While these tests probably covered what we asked for, they were written so sloppily that they didn’t meet our expectations for professionalism.
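
To make the “zero, one, many” idea concrete, here’s a minimal sketch of what those three cases look like, assuming JUnit 4 and a hypothetical sum method of my own invention (our actual assignments use different components):

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Test;

public class SumTest {

    // Hypothetical method under test: totals a list of integers.
    private static int sum(List<Integer> values) {
        int total = 0;
        for (int value : values) {
            total += value;
        }
        return total;
    }

    // "Zero" case: the empty input.
    @Test
    public void sumOfNothingIsZero() {
        assertEquals(0, sum(Collections.emptyList()));
    }

    // "One" case: a single element.
    @Test
    public void sumOfOneValueIsThatValue() {
        assertEquals(5, sum(Arrays.asList(5)));
    }

    // "Many" case: several elements.
    @Test
    public void sumOfManyValuesIsTheirTotal() {
        assertEquals(10, sum(Arrays.asList(1, 2, 3, 4)));
    }
}
```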

And, this was for an assignment where they did well in the end. Their other assignments were much more disturbing. For example, they submitted their next project (i.e., project 3) four times as well:

  • 0/10: First, same exact issue: not following directions.
  • 7/10: Second, the assignment is submitted with minimal testing that does not follow our testing conventions. Some tests even exercise false preconditions, which goes against our discipline (i.e., looking up non-existent keys in maps; see the sketch after this list). Because I gave so much feedback on the last project, I left minimal feedback on this one.
  • 7.5/10: Third, the assignment is submitted with no meaningful changes to the existing tests, just new tests tacked on. The tests are more and more obviously written by ChatGPT, as they check for things like thrown exceptions. The methods also contain extensive use of control flow, like loops.
  • 7.5/10: Fourth, assignment is submitted and is somehow worse. All bad tests are still included. New tests are added for methods they did not write. Tests cover increasingly obscure situations like hashing collisions.
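
Since “testing a false precondition” might be unfamiliar, here’s a minimal sketch of what I mean, using a hypothetical method of my own (not our actual course components). When a method’s contract requires the key to be present, a test that passes an absent key is asserting things the contract never promised:

```java
import java.util.HashMap;
import java.util.Map;

public class PreconditionDemo {

    /**
     * Returns the value associated with {@code key}.
     * Precondition: {@code key} is in {@code map}.
     */
    static String valueFor(Map<String, String> map, String key) {
        // The contract guarantees the key is present, so there is no need
        // to defend against (or test for) its absence.
        return map.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", "apple");

        // A meaningful test exercises the method under its precondition.
        System.out.println(valueFor(map, "a")); // prints "apple"

        // A "false precondition" test would call valueFor(map, "z") and
        // then assert something about the result (e.g., that it is null).
        // The contract promises nothing in that case, so the test checks
        // behavior that was never specified.
    }
}
```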

When I look at how little feedback we gave on project 3, it makes me think that the student turned to AI tools to generate garbage tests. It also makes me think they didn’t actually write their tests in project 2 either, as they would have otherwise known how to write them for the later project. If that’s the case, then even our feedback is being fed into LLMs, which doesn’t feel great.

If there is a silver lining here, it’s that students seem to be losing the ability to cheat in more traditional ways. If it were me getting a bad grade three times on an assignment with no hope in sight, I would just look to GitHub to see if a solution existed. Instead, students just trust “chat” unconditionally. It almost makes my ULPT article seem both moral and absurd at this point.

Community Building Is Harder

While our education system places success or failure on students individually, I am of the opinion that education is communal. You do not learn or grow in a vacuum, and your education should not function in that way.

Unfortunately, for someone who wants to build community in the classroom, I’m finding generative AI to be extremely isolating. Students no longer have to talk to a friend or a teacher to make sense of a concept. They can simply throw their question at ChatGPT and have some answer in a moment.

Surely, this isn’t a unique aspect of ChatGPT. You could just as easily Google a question. That said, you had to at least synthesize the ideas yourself. You probably looked at a couple of sources, and you might have even checked with a peer to be sure. Hell, at least you knew you were reading the work of another person. At this point, I’ll take parasocial over antisocial.

Likewise, because classrooms have these localized cultures, the only way to truly get an answer is to talk to someone in class. Unfortunately, it’s normal for students to think that all knowledge is online and that there couldn’t possibly be any unique or untapped knowledge in the classroom. Why talk to a friend, a TA, or an instructor when “chat” is just better? It’s a very isolating mindset.

I literally saw this with one of my TAs this semester. He wanted to use generative AI to grade student work and give feedback. Fuck it, why even have schools at all? People of all ages can just have their education gamified, Duolingo-style. They don’t need to work with anyone. They don’t need to have hard discussions. They can simply “learn” in the pod while eating the bugs.

Other Educators Are Welcoming It With Open Arms

Perhaps the worst part of being an educator right now is being surrounded by people who are drinking the Kool-Aid. So many educators are out there talking about how generative AI is the new calculator and how we have to accept it or be left behind.

Side note: I’ve been referring to generative AI as the new cigarette. Go ahead. Take another hit.

Everything I read in education now is about how we have to rapidly adapt what we’re doing to incorporate generative AI into our work. We should be accepting it because “the toothpaste is out of the tube.” To me, it’s total madness. If we hopped on every trend, we’d be teaching students to make NFTs and cryptocurrencies. Yet somehow generative AI is the exception, and we just need to adapt.

Fortunately, it wasn’t long before I stumbled upon the r/Professors subreddit and found that there is a cohort of educators who are having the same problems with LLM usage (though I’m praying half of these posts and replies aren’t already written by LLMs). ChatGPT is mentioned in basically every other post, and broader problems like boundary pushing are prevalent topics. I’d encourage you to take a look, but naturally I’ll share some of my favorites:

I’m not the target audience for your post, but I have an anecdote to share…

My mother is one of those people who loves to claim that, because AI is here to stay, we might as well embrace it. I had little luck explaining why “here to stay” is not a morally persuasive argument until I made a comparison to cannibalism.

I said, “Mom, if cannibalism became popular, and seemed as if it were here to stay, would you take us all out to eat at a cannibal restaurant?”

Mom: “Of course not!”

Me: “There you go.”

u/Justalocal1

And the response to that was also excellent:

Oh, I love that. Here’s another good one. Give your mother a two-page letter about what a wonderful mother she is and how much you love her, and then, as she is teary-eyed, tell her AI wrote every word.

I’ve actually used similar examples in my classes for short writing assignments. I ask my students, would you be OK if your spouse wrote their wedding vows using AI, or you were fired via an AI-written letter, and so on. You get the point, I am sure. It becomes pretty clear that so many students don’t want other people to use AI; they certainly don’t want other people to use it to communicate with them. But, they can’t build that bridge to be critical of their own AI use, unfortunately.

u/larrymiller1982

So, it’s not like everyone is on board with ChatGPT. That said, I am certainly annoyed that it’s being pushed by even our teaching institute on campus, which I’ve ranted about previously.

Not All Students Are Like This

Honestly, I don’t think generative AI is the main problem here: it’s a symptom of some larger problems in our broader society. I could point to the commodification of education. That’s certainly a problem. I could point to the proliferation of social media and short form content. That’s certainly a problem. I could point to the pandemic. That’s certainly a problem.

Regardless, as an educator, I will say that it’s been painful trying to teach in an environment where some students do not want to learn. There have always been students like this, and I’ve had no problem ignoring them in the past. Yet, somehow teaching today feels different (or I’m fully locked into some “kids these days”-ass boomer mindset). Even with all the effort I put in to make the space different, new, and engaging, students are always going to have the temptation to offload cognitive tasks to a bot. It’s truly a sad state of affairs.

That said, I feel like I should wrap this article up by saying that not all students use ChatGPT to get through their classes. Many students are curious and value the learning process; in fact, I would say most of them do. In the past, I would have said that maybe 5% of my students make my life difficult. These days, it’s pushing 10% to 15%.

I’m hoping that this is just a blip, and we can go back to teaching normally. If not, I suppose I’ll be embracing our technofascist overlords sooner or later.

In the meantime, why not take a peek at some of these related articles:

In addition, I’d love it if you ran over to my list of ways to grow the site. Otherwise, take care!


For once, I’m actually sneaking this part in before I publish the work. Basically, I wanted to share a few of the links that helped me feel like I wasn’t the only one battling generative AI in the educational trenches:

I’m not a redditor by any means, but I found these threads to be some of the last few places on the internet where seemingly real people talk.

The Hater's Guide to Generative AI (14 Articles)—Series Navigation

As a self-described hater of generative AI, I figured I might as well group up all my related articles into one series. During the earlier moments in the series, I share why I’m skeptical of generative AI as a technology. Later, I share more direct critiques. Feel free to follow me along for the ride.

Jeremy Grifski

Jeremy grew up in a small town where he enjoyed playing soccer and video games, practicing taekwondo, and trading Pokémon cards. Once out of the nest, he pursued a Bachelor’s in Computer Engineering with a minor in Game Design. After college, he spent about two years writing software for a major engineering company. Then, he earned a master’s in Computer Science and Engineering. Most recently, he earned a PhD in Engineering Education and now works as a Lecturer. In his spare time, Jeremy enjoys spending time with his wife and kid, playing Overwatch 2, Lethal Company, and Baldur’s Gate 3, reading manga, watching Penguins hockey, and traveling the world.
