Is Anyone Else Bothered by How Quickly We Adopted Generative AI?

A photo of fast-moving cars with the title of the article overlaid

Typically, I lock these kinds of posts behind a paywall, but I’m actually interested in your thoughts. Generative AI is everywhere. Is anyone else bothered by it?

The Ubiquity of AI

Artificial Intelligence (AI) is not exactly new—even if we ignore classical AI (e.g., graph algorithms, clustering algorithms, etc.) and video game AI. When I entered graduate school in 2018, basically every research group in computer science had incorporated AI into its work. There are the obvious areas like machine learning and computer vision, but even fields like data visualization and computational audition got wrapped up in the AI hype.

While I have generally resisted AI tools, traditional machine learning has plenty of legitimate uses. For instance, classification is one of the nicest things to come out of AI. I love being able to use a tool like Google Photos or Amazon Photos, which can tag photos with the names of my friends and family. No longer do I need to dig through mountains of JPEGs to find that one photo from Christmas “a few years back.”

In a similar vein, I really like machine learning for discovering new music. For the longest time, I only ever found out about new music through the radio, by going to shows, or by word of mouth. Now, Spotify creates niche playlists of songs that all share a similar vibe. Even today, I was listening to my “Moody Mix,” which features songs from some of my favorite artists with new artists sprinkled in.

The Introduction of Generative AI

However, something happened with the introduction of generative AI. Suddenly, AI wasn’t just classifying existing data; it was creating new data. Even I got caught up in the hype a couple of years ago when I was asked to participate in the Copilot beta in VS Code. At the time, I was really impressed: I could write down a comment, and suddenly an entire function would appear. I even used it to help me put together answer keys for homework assignments, since writing them all myself would have taken too long. I learned my lesson, though, when the solutions turned out to be full of bugs.

Then, ChatGPT came around, and I played with it for a bit. However, it felt really gimmicky, like its only utility was for chatbots. Yet, everyone around me started using it to complete their work. Even as recently as this semester, I’ve had so many people tell me they use ChatGPT regularly. For instance, one of my peers told me they use it to rewrite emails to get the right professional tone. Meanwhile, students have told me they use it to summarize lengthy homework assignments or explain concepts in another way. Others in the tech space have told me they use it to write code where documentation is otherwise nonexistent. In other words, there is some genuine utility to it.

Unfortunately, generative AI began to be used in places it frankly should never be used. For instance, I’ve seen it used in video game dialogue to create a supposed infinite open world sandbox, which kind of defeats the purpose of a game, right? There’s no story. There’s no direction. There’s no human touch.

Beyond video games, you can see generative AI slop in the form of images, videos, poetry, and even music now. All forms of creative human expression are now mass-produced in some inherently derivative form by AI models, and a lot of folks have the audacity to call it art.

Needless to say, even given some of the genuine utility that generative AI provides, I really have a hard time adopting the technology given all of its faults. Let me try to illustrate why.

My Human Analysis of the State of Generative AI

What prompted me to write this article was actually an email. I had decided this semester to take part in a mid-semester evaluation from my students. Because this is an opt-in experience, I was sent a variety of notes on how to conduct the evaluation as well as analyze it.

Naturally, I was expecting some tips and tricks for how to process qualitative data. After all, given that the students took the time to write responses, I would argue it’s a good idea to sit down with their responses and read them. However, the advice I got was to feed the responses into Copilot using structured prompts.

To me, this is really gross. As educators, we hate it when students use generative AI to write essays or code because we know they are not learning by using these tools. Therefore, how could we possibly think we’re learning from our students when we use generative AI in the same way?

Not to mention that generative AI is famously known for hallucinating. Who is to say that the “analysis” conducted by Copilot would even be reflective of the data? If you have to go back to the data to confirm there was no hallucination, wouldn’t it make sense just to conduct the analysis yourself? I see this all the time in the way generative AI evangelists say “you just have to double check its work.” Surely, I could do the work more quickly myself.

As someone who conducts qualitative research, I am also deeply interested in the positionality of an author (i.e., how their background may manifest certain biases) when they make any analysis. Copilot does not have a positionality because it’s not human. I can’t account for whatever biases the model might have because it doesn’t make them apparent.

The other thing that sort of bothers me about generative AI is that it’s a replacement for genuine human connection. If I use generative AI to “analyze” my data, then what’s stopping students from providing generative AI responses? Suddenly, it’s not me and my students having a dialogue; it’s two bots. That’s the sort of thing that makes me laugh when I see the latest iPhone commercial pitching the power of summarized texts and generated text responses. Surely, I don’t actually want to talk to my friends and family. I want a bot to talk to their bot.

Speaking of bots talking to bots, I hope we’re all aware that generative AI models rely on stolen data. That’s kind of their whole thing. I’m not a huge fan of copyright laws myself, but it’s a little different when a human intentionally steals your work as opposed to a machine indiscriminately harvesting everyone’s data. The consequence of this is actually quite funny: if enough of us use generative AI to produce “new” text, those same generative AI models will feed on themselves. I suspect it will not be long before generative AI is unusable due to its indiscriminate stealing.

On the topic of feeding AI models, let’s talk about privacy. Is it not a little weird that I’m being asked to feed student data into an AI model? Surely, their comments would be considered confidential and maybe even protected by FERPA if there’s any identifiable information attached. Obviously, I am not absolved from this sin because I store student comments in a publicly available CSV, but I clean those comments in good faith.

The New Normal

Given all of the concerns I’ve outlined just from this one single example, you’d think we would be more skeptical of generative AI as a society. Yet, it almost seems like our modern approach to technology is to simply adopt anything that comes along without critique, lest we be left behind. In fact, that’s actually how generative AI is pitched, and it’s why so much money is being thrown away to further develop this technology.

A screenshot of an email subject line which reads: "AI Has Forever Changed the SEO Game... Are You Keeping Up?"
The subject line of an email I got right after this article first published.

To me, generative AI is a bubble. It might not be. It might be that we adopt it the same way we did the internet and social media. However, I don’t ever remember a technology that needed to be marketed as hard as generative AI except other recent garbage like cryptocurrency and NFTs.

Yet, somehow generative AI is being pushed even harder. It’s on our phones. It’s in every app. Hell, Word ships with Copilot by default now, and the little star graphic is everywhere. Even Google has an AI slop result that tells you to put glue on pizza. I spend more time opting out of generative AI than I would ever spend opting in. It’s everywhere.

As you can probably imagine, I am deeply bothered by how quickly we adopted generative AI. It’s already being used for such evil things as deepfakes and content farms. The internet is a shell of its former self, and we’re perfectly content with it.

I didn’t even have time to talk about the resource cost of running an AI model, the fact that the machine learning tools I love are going to be ruined by generative AI outputs, or that any productivity gains from generative AI are just going to billionaires. Of course, I’ve never really trusted anyone in tech to think about their impact on society or the future (thanks, Zuckerberg, for paving the way for sociopathy).

On the bright side, if my prediction that generative AI is a bubble comes true, any craft that is currently overshadowed by generative AI—like software development and art—will suddenly become very lucrative. I foresee a demand for genuine human craftsmanship, so keep building your talents. They will surely be needed. If not, you can sit alongside me like the contemporary version of a woodworker while your cousin brags about their kids who work as “prompt engineers.”

With that said, I’ll wrap this rant up for the day. Given the overwhelming adoption of generative AI, I expect to be in the minority with this opinion. That’s fine! I’m wrong about a lot of things, so maybe this is just my personal boomer take. You’ll have to let me know.

In the meantime, here are some related articles/rants that were written by an actual human being:

If you want to help a real human continue to create real articles, check out my list of ways to grow the site. Otherwise, thanks for reading. I’ll hopefully see you back here soon.

The Hater's Guide to Generative AI (3 Articles)—Series Navigation

As a self-described hater of generative AI, I figured I might as well group up all my related articles into one series. During the earlier moments in the series, I share why I’m skeptical of generative AI as a technology. Later, I share more direct critiques. Feel free to follow me along for the ride.

Jeremy Grifski

Jeremy grew up in a small town where he enjoyed playing soccer and video games, practicing taekwondo, and trading Pokémon cards. Once out of the nest, he pursued a bachelor’s in Computer Engineering with a minor in Game Design. After college, he spent about two years writing software for a major engineering company. Then, he earned a master’s in Computer Science and Engineering. Most recently, he earned a PhD in Engineering Education and now works as a Lecturer. In his spare time, Jeremy enjoys spending time with his wife and kid; playing Overwatch 2, Lethal Company, and Baldur’s Gate 3; reading manga; watching Penguins hockey; and traveling the world.
