Chicago Maroon

The University of Chicago’s Independent Student Newspaper since 1892
ChatGPT vs. the UChicago Core

Tools like ChatGPT are a type of forbidden fruit, tempting even those who wouldn’t normally fudge their assignments.

“In the hallowed halls of the University of Chicago, where intellectual prowess reigns supreme, an unexpected companion has emerged in the pursuit of academic excellence: ChatGPT. As students navigate the labyrinth of scholarly pursuits, this digital oracle has proven to be more than a mere tool; it’s a confidant, a sounding board, and an indispensable ally in the quest for eloquence,” ChatGPT wrote.

I thought that if I was going to write an article about ChatGPT at UChicago, I should go straight to the source. ChatGPT suggested that I begin this article with the paragraph above: stylistically overwrought and unnecessarily wordy.

For many at UChicago and other universities, this style has become practically ubiquitous. Since November 30, 2022, students have used the large language model known as ChatGPT to write discussion posts on The Odyssey and score dating-app matches, and it’s become a frequent companion in our classrooms.

UChicago is distinctive for its Core, which preaches that students should respect and commit themselves to multiple forms of learning. The school prides itself on encouraging all students to critically engage with qualitative and quantitative work, no matter what subject a particular student is inclined toward for their major.

These ideals don’t always hold up in practice. Many students face the temptation to cop out of some facet of the Core, and ChatGPT can provide an easy way to do so. A math major taking honors STEM classes and IBL Calculus along with the Humanities sequence might not want to use more time on assigned readings. An art history major immersed in reading- and writing-based classes might see Core Biology as a mere chore. Tools like ChatGPT are a type of forbidden fruit, tempting even those who wouldn’t normally fudge their assignments.

ChatGPT is a large language model (LLM). It uses natural language processing to mathematically map out connections in language that humans understand instinctively. It’s called a large language model because of the enormous dataset of forums, articles, and books on which it is trained. The newest iteration, GPT-4, was trained on a snapshot of the internet as it appeared in September 2021 and is rumored to have around 1 trillion parameters, or learned variables.
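For a concrete sense of what a “parameter” is, here is a deliberately tiny sketch (the layer sizes are invented for illustration and say nothing about GPT-4’s actual architecture): even one small building block of a neural network carries hundreds of thousands of learned variables, and models like GPT-4 stack a great many far larger blocks.

    # Toy illustration of "parameters," i.e., learned variables.
    # A fully connected neural-network layer learns one weight per
    # input-output pair plus one bias per output; training adjusts
    # exactly these numbers. The sizes below are made up for scale.
    def layer_params(n_inputs: int, n_outputs: int) -> int:
        return n_inputs * n_outputs + n_outputs

    # A hypothetical layer mapping a 512-number encoding of a word
    # to another 512-number encoding:
    print(layer_params(512, 512))  # 262,656 learned variables

Stack enough far larger blocks on top of one another and the count climbs toward that rumored trillion.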

Other AI assistants are showing up everywhere, from Handshake’s Coco to X’s Grok. New digital tools like ChatGPT are part of a quest to optimize our daily lives and eliminate their drudgery, perhaps at the expense of some humanity. If an AI can drive your car, check your resume, or mediate a breakup with your situationship, what can’t it do?

Though ChatGPT can appear to be an independent actor, having conversations and answering questions, it’s more like your phone’s predictive text feature set to a massive scale—it considers each word in a sentence as a statistical likelihood to try to determine which word should come next. And, like the internet upon which it’s based, ChatGPT has a fuzzy relationship with truth. Science fiction writer Ted Chiang wrote in the New Yorker that ChatGPT is a “blurry JPEG of the web.” Just as a JPEG compresses an image file, ChatGPT condenses the vast information of the internet into grammatically correct sentences. But that doesn’t mean that it’s always correct.
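To make the predictive-text comparison concrete, here is a minimal sketch of the idea: count which word tends to follow which in some training text, then repeatedly emit the statistically likeliest next word. (This toy bigram model and its invented three-sentence “corpus” are illustrative assumptions only; real LLMs learn far richer patterns over far longer stretches of context.)

    from collections import Counter, defaultdict

    # A tiny "training corpus," made up for illustration.
    corpus = ("the core teaches students to read closely . "
              "the core teaches students to question texts . "
              "students read texts closely .").split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # Generate text by always choosing the likeliest next word.
    word, output = "the", ["the"]
    for _ in range(6):
        word = following[word].most_common(1)[0][0]
        output.append(word)
    print(" ".join(output))  # -> "the core teaches students to read closely"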

This compression is the reason ChatGPT is often unable to pull genuine quotes from specific sources. In an end-of-year discussion on LLMs, students in my Self, Culture, and Society class shared secondhand anecdotes of essays written entirely with ChatGPT. Some students realized only after the deadline that not a single quote the LLM included in the essay existed. The professor who led the discussion, Eléonore Rimbault, revealed that the department had indeed seen some bogus quotes in the past two quarters and that ChatGPT usage was more obvious than students might think.

For a user accessing ChatGPT, the clean design and free-floating blocks of text appear to come from nowhere. The chatbot doesn’t cite the sources from which it draws, which has raised questions about the ownership of ideas. Not only is a coalition of authors filing a class-action lawsuit claiming that it’s a copyright violation to train on preexisting works of fiction and nonfiction, but the New York Times is also suing OpenAI for lifting near-verbatim chunks from its news articles.

When AI usage appears ubiquitous, students begin to see it as an alternative to falling behind on classwork. In the Core especially, students confront subjects unfamiliar to them, adding strain to an academic environment that is already challenging. Students may want to bypass the confusion that occurs when learning new content, which involves understanding dense texts, concepts, or diagrams and requires a significant time commitment. During class discussions in my Media Aesthetics class, a friend told me that his whole side of the room was just “a bunch of screens of ChatGPT.” Another friend completely avoided the task of reading The Odyssey, using ChatGPT to write discussion posts for her Human Being and Citizen class.

I spoke to a second-year student who used ChatGPT extensively in his first year and asked to remain anonymous out of concern for future employment. He was assigned Dante’s Inferno for his Humanities class, Human Being and Citizen. “I found it very interesting, but extensively dense,” he said. To answer discussion posts, he’d have to sit with the text for an hour or two and “think really hard.” When ChatGPT became available, he started using it as a time-saver to guide the questions he would ask in discussion posts and essays.

“For the discussion posts that looked harder, I needed a catalyst,” he said. ChatGPT gave him a starting point. He would ask it to help structure an argument in response to the prompt or suggest evidence from the text to support a point. Over time, he got better at creating prompts for the LLM, which he says translated over to asking better questions during in-class discussions. In a way, the LLM functioned less like a crutch than like a pair of training wheels.

The same student stressed the importance of fact-checking everything the LLM outputs. GPT-4 generates text more slowly and in more depth than its predecessor, but it still makes a lot of mistakes. This fall quarter, he was using it for neuroscience classes. Though AI helps him understand complex diagrams and organize his thoughts for discussion posts and essays, he’s trying to limit his dependence on it.

“I’m trying not to use it at all this week,” he said. “For having high-level thoughts, I feel like my soul has to be attached to that,” he continued. “ChatGPT is an easy cop-out, and there’s a part to learning and critical analysis that I’ve missed. It detaches me from what I write about.”

Discussion posts and speaking in class are worth much less than exams, large projects, and essays in most grading breakdowns at UChicago. A student might think this makes them less essential components of a course. Amid the rhetoric of optimization and suggestions to “work smarter, not harder” at UChicago, students may want to leapfrog past the state of confusion just to get these assignments done.

But studies show that confusion is good for the deeper kind of learning that allows one to apply knowledge to new situations. Being forced to sit with a new concept longer encourages reflection and deliberation, allowing one to make sense of contradictions and gain a fuller understanding. So even minor ChatGPT use could be hurting students.

Later on in our conversation, the same student came to a conclusion I found surprising based on his prior behavior. “I know this is hypocritical, given how much I’ve used it, but I wholeheartedly think it should be banned,” he said. “I really regret using it so heavily in my first year. And if you go to the A-level [of the Regenstein Library] now, you’ll see so many screens with ChatGPT!”

The University does not have an official stance on ChatGPT. Instead, it is explicit only about intellectual property, forbidding outright plagiarism. “Individual instructors have the discretion to set expectations regarding the use of artificial intelligence, including whether the use of artificial intelligence is pedagogically appropriate,” said a University spokesperson. They’re allowed to determine to what extent using AI to clarify dense topics is helpful or harmful to overall learning, which could vary across disciplines.

“Our basic goal here is that students learn,” said Navneet Bhasin, a biology instructor whose stance on ChatGPT is that the LLMs are no replacement for the true work of learning. “[Students] have to take in the material, research it, and call it their own, and then be able to integrate it into their Core education and an informed society.”

Now, the biology department must consider the potential use of LLMs when evaluating students’ work. Instructors have changed assignments to focus more on writing during class, without the use of the internet. Bhasin has altered the weighting of points for lab reports, prioritizing results over introductions and discussion sections, the basic summaries for which ChatGPT is most suited.

“Students are not just a conduit for answers,” she told me. “They have to learn how to relate to the material.” To her, this is the true work of learning, and it’s the purpose of the Core Biology curriculum. Bhasin said that directly copying from ChatGPT is no different from plagiarism, and it makes students lose the opportunity to apply the writing and expression skills that are a big piece of Core Bio.

The Biological Sciences Division requires that students cite their sources if ChatGPT is used as a supplemental tool in any assignment. “Policing is something we as educators don’t like to do,” Bhasin said. She pointed out that, in many ways, these issues will regulate themselves. An LLM’s products are often “misinterpretations of reality,” which must be verified. “If students use an LLM and need to fact-check it, our goal has been fulfilled. They have learnt it.”

“Eventually, I see [ChatGPT] being integrated, like phones and computers, or considering an LLM like a part of your study group,” she said. For now, “there’s too much at stake in the real world” from trying to bypass learning.

Bhasin told me she isn’t currently seeing a lot of student work that looks like an LLM wrote it. But from my conversations with students, I wondered if that was really the case. I could see how a student who understands the material and knows what buzzwords to hit could use ChatGPT to put the sentences together, avoiding doing the work itself. Perhaps direct plagiarism isn’t the only way to be intellectually dishonest.

Indeed, as students and professors have continued to discuss the use of ChatGPT, some have tried to incorporate it formally as a learning tool. After all, maybe there’s utility in having access to the sum total of human knowledge found online, providing some automated version of a popular consensus. To that end, I spoke to Felix Farb, a second-year in the College, about his experience encountering ChatGPT in the Power, Identity, Resistance sequence this past autumn quarter.

As a supplement to reading about Rousseau’s concept of the body politic, the instructor assigned the students to make a body politic of their own and to create a law together. Near the end of the class period, a student suggested asking ChatGPT to complete the task. The class decided this was a valid idea on the basis that Rousseau concludes in The Social Contract that “Gods would be needed to give laws to men.” Rousseau’s legislator is all-knowing of human nature but divorced from it, a being with “superior intelligence” who “has no connection to our nature and yet understood it completely”—that sounds a whole lot like ChatGPT. Another student with a subscription to GPT-4 pulled out a computer and prompted it to design the law.

The students wondered if artificial intelligence could function like this supreme, law-giving being. The LLM reached conclusions similar to those the class had spent 80 minutes of discussion developing. But its text was “far more formulated,” Farb said. “It was doing more, faster.” They watched as the law unfurled on the screen.

Many share this awe at ChatGPT’s power. Farb described in detail one note-taking method he has witnessed at the University, which has involved his classmates asking GPT to provide them with questions to pose during class discussions. “I don’t look down on that at all,” he said. “It’s someone using a tool to help them understand a dense topic in a simple manner…If it’s better than us, is it wrong to use it for some insight?”

Farb’s overall feelings about LLMs are more mixed. He admitted to occasionally using the tool to improve specific sentences but said that he doesn’t “bounce ideas” off ChatGPT like some of his friends do. He drew a distinction between GPT-3.5, an earlier version, and GPT-4. To him, GPT-3.5 wrote like “a 16-year-old trying to write an essay”—it was stylistically lacking in a way that didn’t tempt him to use it. Not a resounding endorsement.

Additionally, “It’s easy to fall into the trap of thinking you understand something,” he said, and it’s a trap he wants to avoid. It is possible that, in its frequent users, ChatGPT atrophies the logical and creative muscles that are fundamental to generating new thought.

ChatGPT’s inadequacies can also be pedagogically helpful. Farb brought up a problem set for Real Analysis, a math class, in which the instructor assigned students to ask ChatGPT to do a specific proof, then try to fix where it went wrong. The LLM’s solutions, Farb told me, were unimpressive.

“ChatGPT isn’t even in its infant stages for math,” he said. As with its predictive text, it can produce proofs that look real to non-experts at a cursory glance. But, like a house built on a shoddy foundation, the logic isn’t valid, and the proofs fall apart upon examination by a mathematician.
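For a sense of how a proof can look right and still collapse (this is a classic classroom fallacy, not an actual ChatGPT output), consider the following “proof” that 2 = 1; every step reads as routine algebra until the cancellation of a − b, which is division by zero:

    \begin{align*}
      \text{Assume } a &= b.\\
      a^2 &= ab\\
      a^2 - b^2 &= ab - b^2\\
      (a+b)(a-b) &= b(a-b)\\
      a + b &= b \qquad \text{(invalid: cancels } a - b = 0\text{)}\\
      2b &= b\\
      2 &= 1
    \end{align*}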

Andre Uhl, a theorist at the Institute for the Formation of Knowledge, is hopeful about the possibilities of “critically engaging with ChatGPT.” Uhl’s background is in visual arts and film, but he’s recently been doing policy work in AI ethics and teaching courses on AI literacy. He studies technologies and the frameworks of knowledge around them as they relate to humanity’s search for collective meaning.

“There’s a prevalent anxiety, and the shortcut is to ban [ChatGPT] and take a step back,” he said. I thought of the student I’d interviewed whose learning had been so impacted that he believed a ban was for the best. Uhl’s perspective is different. He thinks universities will ultimately need to adapt classroom and research practices to embrace new tools like LLMs and delineate spaces that encourage or preclude their use.

A year is a short time, Uhl told the Maroon in an interview, and ChatGPT is still a new tool that people are testing out, one whose use we should treat similarly to a library or the internet. “Just as we have best practices for navigating a library or the internet in order to retrieve resources in ways that foster a productive research process, I believe that we also need to learn how to navigate emerging AI systems and the content they produce,” he wrote in an email to the Maroon.

While Uhl, whose first language is German, was writing academically in English long before ChatGPT and does not depend on it in his work, he believes that ChatGPT could be a valuable proofreading tool for other non-native speakers.

“We need to create spaces where it is safe to experiment and collaborate across generations to create new forms of expertise,” Uhl said. Uhl himself is in the process of developing a comprehensive AI literacy curriculum with tailored modules for students across various fields of study and professionalization.

People in my generation, Gen Z, are often referred to as “digital natives”: we’ve grown up navigating digital environments as much as physical ones and relate to each other through our participation in a diverse set of online communities. It makes sense that we’d be the most comfortable doing homework hand-in-hand with an AI. However, Uhl also spoke to the importance of understanding ChatGPT’s limits and where we should be cautious about using it.

Uhl stressed that ChatGPT is a tool, not an actor or entity, and it should be treated as such. “Even calling it artificial ‘intelligence’ may not serve the right purpose,” he said. I thought about how the tool’s name has become a verb; students say, “I’ll just ChatGPT that” in reference to low-level Core writing assignments. You could compare this with the common “I’ll Google that” or “I’ll Wikipedia that.” “Verbifying” these human-created machines seems to give them a life of their own, imbuing them with a new level of certainty, and eventually, a new level of power.

Thinking about how AI tools have appeared and been used as agents, I asked Uhl about AI-generated images, which often appear alongside art created by humans. In cases of intellectual and creative property, ethics become especially important, Uhl told me, and AI is a genuine threat; he pointed to the then-ongoing SAG-AFTRA strike. If human writers and actors protest poor working conditions, LLMs and digital likenesses could replace them, à la Black Mirror. I recognize this danger. On Instagram and TikTok, AI-generated “art” or “photography” has started to dominate suggested content. Within minutes, AI image creators like DALL-E can show you everything from “the cutest possible kitten” to “Succession characters in the style of Wes Anderson.”

“The point of the creative practice is to learn about the other person’s soul,” Uhl suggested. The presence of artificial intelligence that can throw together surface-level elements doesn’t take away from the value of learning how to write or make art. There are places, perhaps, where we don’t need to hear what a chatbot has to say. Uhl said, regarding AI art: “We could, but should we?”

This seems to be the central question of how we will approach artificial intelligence for learning and creating. ChatGPT is, after all, constantly getting better. Most of our interactions up until this point have been with GPT-3.5; GPT-4 is more sophisticated and harder for plagiarism tools to detect.

We’ll have to start drawing lines between what work can be done by the robots, and what work belongs to us. David Graeber pointed out in his book Bullshit Jobs that automation could pave the way towards eliminating drudgery—writing formulaic emails, entering data into spreadsheets, or scheduling events and projects. How much more would this university be able to indulge the Life of the Mind if we let future iterations of these tools do the boring stuff? Or should we forgo AI completely, following the basic idea behind the Core—that all work and all learning, even if you don’t recognize it as such, has meaning?

After all, we may not be able to cleanly divide work which is drudgery and that which builds important organizational skills. Constructing and reconstructing arguments, interpreting charts, and finding relevant text passages could all fall in a gray area, since student and ChatGPT alike can accomplish these tasks. Universal categorization of situations in which ChatGPT usage is helpful or detrimental to learning could be impossible, posing a genuine problem for the instructors who are tasked with setting clear rules around new technology.

Regardless of what course syllabi may say, one only needs to enter a study space to spot someone with ChatGPT open for a discussion post. Imagine a campus tour group peering through the windows of Ex Libris cafe as a tour guide launches into a spiel about the Core Curriculum. UChicago students, the guide says, have a respect for all forms of learning, and the Core helps clarify their individual interests. The guide gestures towards the Reg, a place where collaborative learning is sure to occur. Inside, students sit in groups, laptop screens flickering with ChatGPT as they copy and paste essay prompts and type requests for explanations of complicated concepts. Is this a horror-inducing transformation? Or could it be the start of something greater? 

Editor’s Note, April 13, 2024, 4:48 p.m.: This article, previously published in the April 4 print edition and online, has been updated to better contextualize Andre Uhl’s comments.

About the Contributor
Josie Barboriak, Grey City Reporter
Josie Barboriak is a student from Durham, North Carolina majoring in Fundamentals and Sociology. She started writing for Grey City in the winter quarter of her first year and is especially interested in questions of urban policy and education. Outside of the Maroon, she summons her inner Patrick Bateman rowing on the Chicago River with the club crew team, assembles an eclectic collection of student poetry and art at Sliced Bread, and walks backwards around the campus performing mediocre standup as a tour guide. Her hobbies include learning moody songs on the guitar, writing poetry about the ocean, and interrogating econ majors about their motivations. You can find her making terrible puns over a game of pool at Hallowed Grounds.