We’re probably all tired of talking about ChatGPT at this point. Yet here it is, continuing to permeate practically every conversation, especially those about schoolwork. I’ve heard all the opinions, from “oh my god, the world as we know it is ending” to “thank god I never have to do homework again.” As I see it, both hold a kernel of truth: we will need to adapt at least somewhat, but that’s not necessarily bad.
As covered in the Chronicle, some professors see the chatbot as capable of, at best, average work; the technology cannot yet perform the higher-level analysis that disciplines in the humanities try to teach. It is useful for ideation, whether to get a high-level start on an idea or to refine a specific one, but going through the whole pipeline of solving a difficult problem or answering a complex prompt is still beyond ChatGPT.
To me, it is essential that students understand the topic they are asking ChatGPT about well enough to determine whether its answer is correct. It’s a bit like how Google can lead you down rabbit holes that take longer than consulting your textbook, even though it feels like less work. I understand the temptation many students feel to use this kind of tool for more menial assignments; however, if we use it to complete anything it can complete, what happens when we reach a higher level in a field without a solidly built foundation?
Regardless of how universities decide to teach in the wake of ChatGPT, my main qualm with this discourse is that some of us act as though academic dishonesty is something novel, as though this is the beginning of a new era of cheating. For students who don’t want to learn, options have existed for years. But maybe we can pretend that Greek organizations don’t keep test banks. Perhaps we can gloss over the existence of Chegg and the like, and forget that wealthy people often pay their way to a learning disability diagnosis to secure special academic accommodations, assuming they don’t just pay someone else to do the work for them. The list goes on, I’m sure.
Beyond the fact that you can simply Google many things these days, these forms of dishonesty share a common trait: a privileged barrier to entry. So what’s so bad about ChatGPT? That it allows non-rich people to cheat, too? However you feel about it, you can’t deny that it’s currently the most equitable form of academic dishonesty, if we decide to consider it academic dishonesty at all.
Sure, it’s concerning if we students just ask the chatbot for all the answers to our homework. What’s more concerning is the general trend of wanting to outsource learning and schoolwork by relying on resources, online and otherwise, that existed well before ChatGPT. Worse still is that this was more acceptable when it was primarily the connected and wealthy who leveraged those connections to glide through college more easily. Now that everyone can do it to some extent? Not so benign, apparently.
It’s not too fun to grapple with the question of the role of the university in this day and age, especially when this technology threatens how we are used to doing things. ChatGPT is best at straightforward tasks, like summarizing a text or answering multiple-choice questions; what it lacks most is nuance. I wrote about my opinions on pedagogy last semester—namely, my distaste for new-wave, bite-sized educational styles—but these discussions are now more relevant than ever. ChatGPT is much more capable of producing a mediocre Sakai discussion post than a 5-7 page analytical essay. Perhaps it’s time to re-embrace the old in cultivating a new sort of learning environment that will most encourage actual learning in our present time.
This isn’t to say that this innovation doesn’t threaten more traditional models of education as well. Reading in the library for several hours a day looks pretty silly when you can just feed texts to a model and have it spit out the relevant details. Freeing up our attention for higher-level tasks by delegating the menial stuff to machines is good, in theory, though we have to be able to evaluate the quality of the results. The question is whether we will put the time we can save by using ChatGPT towards more meaningful things—or if we plan on regressing intellectually.
We shouldn’t be asking whether students are cheating via ChatGPT, but rather why they feel the need to. In a perfect world, everyone studies something they love out of genuine passion for the subject, and a reasonable amount of effort yields successful results. So, if students are going to ChatGPT for the answers, something is out of balance. Are classes too hard? Too trivial? Or do we just not care enough to genuinely study what we choose to study?
The general sentiment I've heard from people who have been using ChatGPT to do large parts of their homework is that they don't really care about the class, whether it be a T-Req or a class for a major they feel ambivalent about, and just want help passing. At the same time, I'm seeing friends with more passion for their courses using ChatGPT as a study tool to explain concepts. Either way, it’s here now, and people will use it. Let’s find fruitful uses for the program while diagnosing, and attempting to treat, the causes of its misuse. That’s a better path than either extreme: vilifying the technology or relying on it entirely.
Heidi Smith is a Trinity junior. Her column runs on alternate Mondays.