I was heartened to read the Maroon’s coverage of Professor Ben Zhao. His project, Glaze, employs developments in AI to hinder the ability of image models to recognize and mimic artists’ work. It’s the kind of technology that will make the lives of some people, namely image and video model developers, a little more difficult for some time to come. It seems perverse that this technology is now being deployed against itself. And yet, this was the first AI-related story I’d read in months that genuinely excited me.
As a graduating senior in the Data Science Institute, I’m in an unusual position. While the DSI curriculum covers a wide range of topics, AI undeniably constitutes a significant portion of it, one that has only grown in the last few years: the creation of the major was followed closely by the release of ChatGPT and the subsequent ascent of AI in the public consciousness.
Seemingly overnight, AI went from sci-fi to staple. Concerns like AI safety transformed from fringe speculation into subjects of mass controversy. Every old product is getting an AI-enhanced makeover, which is probably good news for DSI graduates. But in the face of all this development, much of it professionally and academically fascinating, I can’t help but feel deeply ambivalent about AI.
Some detachment is inevitable while in the weeds of a given field. When your exposure to machine learning includes linear algebra and notebooks full of SciPy, it’s hard to look at flashy news stories speculating on future AI capabilities without thinking, ‘It’s matrices. It’s just a gigantic stack of matrices.’ You lose a lot of the starry-eyed enthusiasm.
But you can leave aside science journalism (a genre that can never live up to the expectations of scientists in the field it covers) and find real, compelling papers that push the boundaries of the possible, like Anthropic’s October 2023 paper on monosemanticity, which explores methods for interpreting the internal processes of AI models. Yet my ambivalence does not abate.
On reflection, I can’t seem to conjure a vision of an AI-enabled future that is better at the human level. I don’t necessarily disbelieve projections claiming that AI will have such-and-such impact on GDP, improve existing technology, or finally ensure that our children are learning, but none of these professed outcomes compels me on an intuitive level. I recently interviewed with a fund that is feverishly seeking out startups built on retrieval augmented generation (RAG), the new hotness for language models, in which a large language model (LLM) is connected to a database to improve its accuracy and usefulness in specific domains. RAGs for your dentistry practice! RAGs for your smart fridge! It is all part of a bubble of AI-enabled services that I expect to pop in five to ten years, and I can’t help but look across this vast, WiFi-enabled landscape and wonder what it’s all for.
When I look at the problems our world faces, the ones I expect to get worse as time goes on, they are broadly problems I expect AI to aggravate, not solve. The scarcity of the written word was not a problem before 2020, and it certainly isn’t one now. My inability to draw so much as a stick figure is not a problem. That social relations increasingly migrate into virtual spaces, that language has begun to shape itself around content moderation, and that even the bottomless pit of entertainment has been displaced by mere doomscrolling, however, are problems.
Technologies like Glaze, I believe, are the first tremors of a rising backlash against the decomposition of the human into the mechanical. It feels a bit like studying aerospace engineering on the cusp of the First World War: I know my chosen field will be the nexus of a vast contest of offensive and defensive ability, one in which my own contributions, however small, cannot be innocent. This conflict, though it will have humans on both sides, will be fought to negotiate the frontier between human and machine.
This is why I take pleasure in technology that is, formally speaking, perverse. It delays and obstructs the straightforward goal of AI, which is to make the world more legible, categorizable, and processable. It carves out a little more space for the human. In the face of my former ambivalence, I am struck with inspiration: there is a possibility for this technology not to clarify and illuminate but to obscure and mystify. It is a possibility that is strange and capricious, but also genuinely exciting in a way that new applications of AI to online advertising are not.
This is also great news for the major. Engineers don’t go hungry in war. You might think what I’ve written here is pure humanities babble, but if you see me across the trenches in a few years, have some mercy. We’re probably keeping each other employed.
Nicolas Posner is a 2024 alum of The College.
Tim Tavern / Oct 19, 2024 at 4:07 am
…ok? And? A textbook example of simplistic academic navel-gazing. The author wallows in self-importance, lamenting AI’s impact while conveniently positioning himself as both critic and beneficiary of the very system he critiques. It’s the same old pseudo-intellectual routine—agonizing over technology’s march forward while reveling in the supposed nobility of resistance. Obstructing progress isn’t innovation, but to him, it’s apparently thrilling enough to indulge in some contrarian philosophizing. Ultimately, it’s little more than an overcomplicated excuse to feel superior.
Poorly written. Do better.
G / Oct 21, 2024 at 8:05 am
As with most comments in the Maroon, your critique reads like a feeble attempt to settle a personal vendetta, thinly veiled as intellectual commentary.