Since the release of ChatGPT in 2022, artificial intelligence (AI) has rapidly permeated society, finding use in hospitals, corporations, and schools. In a July 2024 survey, 86 percent of students worldwide reported they had used AI for schoolwork, and 24 percent reported using it daily.
Professors at the University of Chicago, which has no standardized policy for AI use in classes, have adopted varying approaches in the face of its expansion.
Adam Shaw, an associate senior instructional professor of computer science—a field he noted was particularly vulnerable to the impacts of AI—has prohibited the use of large language models (LLMs) in his introductory classes.
Shaw told the Maroon that the threats of generative AI use are greater for “introductory skill-based courses” than for advanced courses.
“We’ve had tools for a long time that can do things like solve algebra problems, solve systems of equations, calculate integrals, etc., but we still have students doing those things by hand… I think for computer science students, it’s still valuable to walk some of the same paths that your predecessors have walked and write some of the tried-and-true programming exercises that have helped train programmers for generations.” Shaw continued, “I do think it’s possible that the way people write code in industry will change and shift to a more automated model, but I think learning how to wrestle with difficult programming problems and to break problems down and find a way forward is helpful no matter what you’re going to do.”
Still, he acknowledged the challenges of enforcing a prohibition on using LLMs for coding. “It’s extremely hard to tell what’s been produced by the student directly and what might’ve been produced by a tool,” he said. To address this, he is considering transitioning more of his coursework to in-class assignments to prevent students from turning in work created by AI.
Assistant instructional professor James Vaughn, who teaches the Power, Identity, and Resistance social sciences Core sequence and several history electives, takes a similar “maximalist approach.”
“I believe very strongly that we always are improving in our writing, thinking and reading, and we get the most improvement in our writing, thinking, reading, when we’re doing it wholly on our own, and then discussing with others, being evaluated by others, improving in light of others’ feedback,” Vaughn said. “Whether that’s feedback from simply a friend reading your work or that’s feedback from a professor grading your assignment, I believe still very strongly in that model, and so I adhere to the approach of a kind of prohibition on AI.”
Vaughn continued by saying that when a student uses an LLM, they are “avoiding having to confront the process of objectifying thinking in the written word, and that’s a really worthwhile process because it’s one of the hardest things to do in the whole world.”
Nonetheless, Vaughn acknowledged the stances of other professors who seek to anticipate the coming academic era and the new generation of students raised on AI. “AI is just going to be a natural part of [future students’] toolkit like social media is, and as such, it might be more constructive over time, rather than to try and resist it, to try and find ways to incorporate it responsibly, and I’m open to that.”
Jason de Stefano, a collegiate assistant professor who teaches the Human Being and Citizen sequence in the humanities Core, holds a more permissive AI policy: his syllabus permits the use of AI for “brainstorming, revising, and outlining written assignments,” as long as students provide citations.
However, he does warn of its limitations: “Relying on an AI to write your essays robs you of the opportunity to develop your own voice, to realize and reflect on your own habits of thought, to grow as a writer and a thinker,” his course syllabus reads. To that end, he requires students who choose to use AI for written assignments to meet with him during office hours and write a short reflection describing their experience using AI.
“I recognize I can’t eliminate the possibility that there are students who are using it and then just not writing the reflection,” de Stefano said. “But the students who do and then are open about it and write the reflection and then come see me tend to find that the usefulness of these programs as writing tools is really limited.”
“It’s my concern that more restrictive AI policies might have the unintended consequence of creating a kind of climate of suspicion between instructors and students that I think is not good pedagogically in general because it bleeds into other aspects of the instructor-student dynamic,” he continued. He also noted that a blanket prohibition on AI usage would be unenforceable and contrary to his ambition of fostering an open classroom culture.
The Chicago Center for Teaching and Learning, part of the Office of the Dean of Students in the College, provides guidance to University professors on matters of pedagogy. It outlines four possible AI policies that instructors may adopt.
The first, which Shaw and Vaughn’s policies draw from, is a broad prohibition on using AI tools. The second allows AI only when a professor explicitly grants permission. The third, which is most similar to de Stefano’s, permits use “only with proper citation.” The fourth policy allows free use of AI with no required citations.
Still, the absence of a standardized policy can leave students confused as they navigate many different sets of rules. In 2024, *Inside Higher Ed*’s annual Student Voice survey reported that 31 percent of undergraduates were unsure of when or how to use generative AI for coursework. In addition, only 20 percent of college provosts said their institution had published a school-wide policy outlining appropriate AI use.
In an email to the Maroon, Amadis Davis, a head student advocate for the Student Advocate’s Office, said she advises students to “ask their professors in advance (at the beginning of the quarter, for example) to elaborate on what their policies on AI usage are.”
“Additionally, it can be very difficult to prove that AI was not used on an assignment such as a discussion post because there’s no text tracking—like edit history—in the writing Canvas tabs,” Davis said. “It’s important that students understand that even asking ChatGPT to summarize a reading rather than reading it themselves might put them at risk of being accused of academic dishonesty.”
De Stefano advocates for a more standardized approach where policy comes from the administration rather than the faculty. “It’s an issue that I think the administration should take more of a stance on than they have,” de Stefano said. “Forcing us in an ad hoc individualized way to get our arms around this big problem is creating a lot more work for us—work that isn’t exactly in our job descriptions as instructors.”
Darius Johnson, a fourth-year philosophy major, echoed this sentiment. “I do wish there was a standardized AI policy across the University. I think that would streamline things a bit more and help to make clear some ambiguities that exist in how you can use AI and how you can’t use it.”
In particular, Johnson believes that attempting to discourage and police AI use only increases its use among students.
“Across the University, the standard should be that, instead of discouraging AI use, we should try and teach the students how to use the AI properly, as to stay at pace with the changing society.… I think especially at a research institution, it’s important to know how to use AI to your advantage.”
Shaw, on the other hand, disagrees. “I would not just take the position that we should have a one-size-fits-all policy, because I think our courses are very different, even within the computer science department,” he said. “There is a diversity of thought within the department. There are people who are more open to allowing students to use it than others, and we don’t have a settled consensus on it, and I don’t expect that we will.”
Merina Diaz, a first-year data science student, takes a similar view. “I kind of like the different policies because I feel like for humanities classes—or like [Hum] and Sosc—it makes sense not to use AI… because it’s supposed to be more original ideas. But for math and like computer science or like data science classes, I feel like you should be able to use AI.” In particular, in her Data Science 120 class, she appreciated the permission to use AI to review student-written code.
“AI technology is too new and professors are too old to adjust to it [and have a standardized policy] immediately,” Vaughn said. “One should be open to the possibility that as we get a better sense of the technology, as we see it develop, as people make adjustments and people figure out what works and what doesn’t, perhaps the University should offer uniform guideposts.”