Banning ChatGPT won’t stop cheating

Ah, the first day of class. You brush your hair, show up on time and pick your seat for the rest of the semester (or, at least, for the days you end up attending). The professor goes through the usual spiel: how to access the course website, the grading scheme and a little about themselves. As they share the syllabus and read the Duke Community Standard, you notice something new: the class policy on the use of AI.

With the launch of ChatGPT less than a year and a half ago, the entire education system is scrambling to figure out how to adapt. Essays can be generated in seconds, coding problems solved by copying and pasting and reflections written with minimal prompting. Some schools have chosen to ban the use of large language models (LLMs) like ChatGPT entirely, whereas other educators have embraced them.

Duke has provided recommendations to professors on AI policies but has ultimately left it to individual instructors to set their own class rules. My professors have chosen either to ban the use of LLMs or to allow them when cited and used as an auxiliary tool (usually with points deducted as a result).

But, allow me to let you in on a little secret: Regardless of professor policies on AI use, some (arguably many) students are going to use ChatGPT to aid or fully complete their assignments. This isn’t because students want to cheat, but rather because their classes aren’t serving their learning interests. Goal misalignment between students and Duke’s curriculum is causing the rampant use of ChatGPT on assignments.

To understand this, we must first consider why students go to college. Duke has little survey data on this question, but through informal interviews I've identified three main reasons. One, it was expected or the clear next step. Two, to get a better job. And three, for the social experience. Only a small group of students choose to attend college primarily out of love for intellectual discovery.

The problem is that institutions of higher education design their curricula as if this small group's motivation were the reason most students attend. So, classes focus on theory with little connection to the real world or to students' lives and goals.

I know far too many computer science students with 4.0 GPAs who can't create an API, much less call one. That skill is fundamental to the work of a software engineer, the career most computer science students are aiming for. Instead, students spend 40 hours in a single week designing a basic computer for one class, something far less relevant to the roles they actually want. Seen that way, it starts to make sense that students cheat, using LLMs to get coursework done faster. Our classes feel like a distraction from our goals.

Setting aside the students who go to college simply because it was expected, a motivation satisfied by enrollment alone, our classes are not serving the other two motivators: better career opportunities and the social experience. So, students are taking their education into their own hands. They use ChatGPT to buy themselves more time for job preparation and socializing. In computer science, that means using AI to quickly complete assignments so they can work on LeetCode interview questions, side projects or tenting in K-Ville. And using LLMs to cheat is hardly leaving them unprepared: a recent survey found that 92% of professional software engineers use generative AI while they code, so students are simply getting familiar with the tools they will use in the workforce.

Duke selects ambitious and exceptional undergraduates. Wouldn’t it be more concerning if students didn’t try to orient their time towards their goals?

One student told me that he uses ChatGPT to generate extra example problems and act as a tutor, essentially giving him access to 24/7 office hours. If we ban the use of AI, we harm him without actually affecting the cheaters; there are a million other ways to cheat. If classes continue to fail to provide the value students hope to derive from college, it won't matter whether AI is banned or not. Students will always find a way to cheat. Banning AI only hurts the students who use it to aid their in-class learning.

If we really want to stop cheating, Duke could make classes more aligned with student goals by offering more project-based courses, emphasizing real-world impact and nurturing class community.

So I urge us to think past the idea that banning the use of AI will stop cheating and instead tackle the root cause of why students use AI to cheat. It's not that they're lazy; it's that they're choosing to take agency over their collegiate experience and derive the value they want from college.

Aaron Price is a Trinity senior. His column typically runs on alternate Tuesdays.
