As someone who avoids using AI, it’s becoming harder and harder to go a day without encountering it. If I search something up, Google returns an AI overview before showing website links. When I updated my iPhone to iOS 18, it automatically prompted me to create AI emojis with new Apple Intelligence. Even in conversations with peers, I hear about a new ChatGPT model every week. Now, UChicago has eagerly hopped on the bandwagon, trying to incorporate its very own artificial intelligence service into student life.
PhoenixAI is the University’s official AI service, available to all UChicago students and staff. It runs OpenAI’s latest large language model and claims to be specifically for the UChicago community. The system looks like the perfect tool for students and staff alike. There’s just one problem that the University has overlooked: no one even wanted PhoenixAI in the first place.
Like many of my friends, I had seen posters advertising PhoenixAI plastered all over campus bulletin boards, and like many of my friends, I had mostly brushed them off. As I became more aware of AI, however, these posters became harder to ignore. I wanted to know what the point of PhoenixAI was and whom it was intended to serve.
According to the UChicago website, PhoenixAI is designed to “assist students, faculty, and staff in generating content, conducting research, and enhancing academic and professional work.” The system gives users options to run either the standard GPT-4.1 or o3-mini for more complex reasoning. People can also create their own personalized “assistant,” or generative pre-trained transformer (GPT) chatbot, and can turn on “Incognito Chat” mode, which automatically erases chat history 30 minutes after it has been created. All of these features sound great, but, from what I’ve learned, they’re all functions already available on ChatGPT.
PhoenixAI provides only a few benefits compared to ChatGPT. Everything offered on the site is free, whereas ChatGPT requires monthly subscriptions to access options such as personalized GPTs and sending unlimited photos. Additionally, data fed into PhoenixAI is not used to train other AI models; it stays within the UChicago environment.
Containing user information within UChicago servers, however, has its own drawbacks. Since the AI program operates under University management, the University has the right to monitor input data and hold users accountable. PhoenixAI’s service usage guidelines explicitly state that “[t]he University is not liable for any actions taken based on the information provided by the service, nor for any consequences arising from the use or misuse of this service.” Data monitoring enables accountability, but it also discourages use at a time when many students are already using AI to cheat their way through a class.
Even when AI isn’t being used for cheating, data monitoring can put people off. As someone who already shies away from using AI, the prospect of the University monitoring my activity does nothing to make me want to start. So, whom is PhoenixAI even made for?
UChicago advertises the service as “customized” for the University community. That’s one reason why students might choose to use PhoenixAI. I decided to put that to the test by asking both ChatGPT and PhoenixAI—both running GPT-4o, then the model used by the service—a few UChicago-specific questions.
- How many students are in the Class of 2028 at the University of Chicago?
ChatGPT: [screenshot of response]

PhoenixAI: [screenshot of response]
Almost immediately, I noticed that ChatGPT provided sources for the data that it gave, whereas PhoenixAI didn’t; it just redirected me to seek the answer somewhere else. Additionally, contrary to PhoenixAI’s claim that the University hasn’t released enrollment numbers yet, a quick Google search immediately revealed that the University, in fact, has.
I decided to ask a more specific question.
- Who are the Viewpoints editors of the Chicago Maroon?
ChatGPT: [screenshot of response]

PhoenixAI: [screenshot of response]
The accuracy of ChatGPT shocked me. PhoenixAI, on the other hand, didn’t provide an answer at all. Once again, it tried redirecting me to seek the information myself. I reasoned that ChatGPT can provide a more accurate answer because the service is able to search the web for a response, whereas PhoenixAI is not. Additionally, PhoenixAI’s knowledge cutoff date is October 2023, making the service unusable for anyone trying to find up-to-date information.
For my final test, I asked a non-time-sensitive question.
- How many floors are there in the Regenstein Library?
ChatGPT: [screenshot of response]

PhoenixAI: [screenshot of response]
While PhoenixAI actually managed to provide a response this time, it was questionably accurate. As a frequent visitor to the A Level, I have yet to meet a person who calls it “floor 1.”
For a resource created specifically for UChicago, it’s ironic how poorly PhoenixAI fared compared to ChatGPT. PhoenixAI is only in its beta stage, so future updates might improve the reliability of the service. However, compared to stronger AI programs that already exist elsewhere, students won’t find many benefits by switching to UChicago’s AI service.
But PhoenixAI demonstrates a bigger issue with AI integration than just poor performance. Today, organizations jump at any chance to integrate AI into services before considering whether it actually improves user experience. Google’s AI overview feels intrusive when I’m just trying to get to a website. Apple Intelligence’s “AI summary” for every text feels pointless when I just want to see what my friends are actually saying. PhoenixAI, a University-funded AI program, feels counterintuitive when some professors view the use of AI as cheating. The school providing a resource that could facilitate academic dishonesty in these classes sends a mixed message about the acceptability of AI usage.
Rapid adoption of artificial intelligence just for the sake of having it does nothing to improve the quality of online experience. Yes, some tools like ChatGPT can definitely benefit users by providing easy access to information or by simplifying small tasks. AI becomes a problem, however, when hundreds of different sites adopt notably lackluster versions of the same chatbot, when a generated overview appears for every small detail, or when unwanted features like AI emojis are shoved down users’ throats. At a certain point it becomes more like a gimmick: interesting for the first couple of uses but quickly oversaturated to the point of irrelevancy.
I have no doubt that the market for AI will continue to expand and become ever more integrated into day-to-day life. For it to be truly useful, however, institutions need to consider the people using the tools and what those people actually want. As PhoenixAI has shown, not everything needs to have a big red AI label attached to it.
And one last thing: [image]
Kaci Sziraki is a second-year in the College.