Is Anybody Out There? The Totalizing Ideology of Social Media

The lesson that emerges from the onslaught of social media is that totalizing ideologies kill, and that the human aversion to being proven mistaken means that extermination of the opposition tends to be chosen more frequently than conciliation or revision of the totalizing ideology.

By Eric Vanderwall

Social media, both the phenomenon itself and the term designating it, are by now so commonplace as to escape direct questioning. It is in fact worthwhile to remind ourselves that the term denotes a set of products, provided by multinational corporations, that people use by creating virtual simulacra of themselves while viewing advertising. What is the purpose of these media? What ends do these platforms serve, and to whose benefit? The questions are so obvious, so plain, that it is no surprise they are rarely posed in a society that assumes each new iteration of the glowing screen looking into the vast expanse of digital content is good in and of itself, requiring no justification. This is misguided. Every tool should serve some purpose, and its ability to do so must be assessed. So too must its costs of operation, its repercussions. 

Consider first the term itself: social media. Which is it? Is it social, a set of interactions among people? Or is it something akin to the mass media, radio, and television that preceded it? What exactly are social media? Communication technologies have a history of clunky compound coinages and linguistic vagueness (e.g. the telegraph, the television, and the internet), which fostered widespread adoption with minimal scrutiny because of the implied promise of a brighter, merrier future of better living through applied science. The signifier “social media” has similarly enabled widespread adoption that, in less than a decade, transformed a class of services at first used only by a small segment of the population into something that sways elections and permits rapid distribution of inflammatory rhetoric. The term “social media” seems to promise some kind of personal autonomy, a way for people to form communities in the electronic medium. What it delivers is a service that allows people to look at advertising and reduce themselves to commodified simulacra, all while furnishing information useful in developing translation and face recognition programs for the profit of others. Social media are services that let people pay, with their time and with computing equipment they must supply themselves, for the privilege of entering data without compensation. 

In addition to the obfuscation perpetrated by the phrase “social media,” the vocabulary of social media troubles the conscience. People are not people, but “consumers,” “users,” “followers,” “connections,” or, most insidiously, “friends.” There are no ideas or pieces of writing or art, simply an undifferentiated morass of information called “content.” 

Notice how the first two terms envision people as something like semi-conscious blobs, slithering through the digital world in search of the next mainline shot. “Consumers” and “users” are passive, defined as little more than mouths. These terms are part of a larger endeavor, conducted largely in subtleties of usage, that attempts to inculcate in people a form of nihilism: the belief that their lives have no meaning and that they can never rise to the level of thinking and doing, but must content themselves with consuming. 

“Followers” strips people of autonomy and reduces them to a mob, valued only for the correlation between its size and the “followed” one’s perpetual need for external validation. That the acquisition of a large following is an acceptable and even laudable goal should trouble thoughtful people. What kind of value system valorizes amassing huge followings without being of service? In what kind of society are the most prominent people those whose chief merit is their ability to commodify their existence? Such a society—ours—is one that has lost a sense of the past, lacks a vision of the future, and lives in one long present moment. Nothing of lasting worth can emerge from a culture that turns everything and everybody into a product for consumption, a culture in which what little discourse there is on wisdom (a far subtler matter than “content”) is relegated to isolated enclaves. 

“Connections,” although lacking the scent of authoritarian megalomania wafting from “followers” and “thought-leaders,” dehumanizes people to the extent that it equates them to nodes in an electronic network. Likewise for the now ubiquitous and disconcertingly vague verb “connect,” which seems to have supplanted more informative and precise alternatives such as “meet,” “acquaint,” “converse,” or, simply, “talk.”  

In presentations he gave several years ago, Mark Zuckerberg said that he hoped Facebook would eventually “connect the world.” Like many of the utopian claims examined in this article, this statement’s self-assured tone and vision of a totalizing system encompassing every living human being may at first sound promising. What does it mean to “connect”? It obviously means more than simply soldering wires and joining lengths of conduit, but what exactly? What benefits accrue when the world is connected? To whom do they accrue? More importantly, what are the costs and who must pay them? 

In the last several years, we have seen the resurrection of fascism and authoritarianism, the creation of extremist political movements, renewed devotion to science denial, and the effective bifurcation of the United States into two nations: ours (whichever one that is) and the other country. It is no accident that public life has sunk to this nadir of intellect and discourse in proportion to the extent that “the world” gets “connected.” It is no coincidence that from national politicians to plumbers, people have lost the ability to talk to one another, even as unprecedented amounts of information are created and distributed at incredible speed. It is clear that although “connecting the world” might be good for advertisers and an interesting problem for programmers, it brings only discord for those so “connected.” In his 2011 TED Talk and subsequent book The Filter Bubble, Eli Pariser examined how the algorithms programmed to direct people to their preferences segregate them, withhold information that challenges those preferences, and generally reinforce those same preferences, all of which distort people’s sense of the world and damage their capacities for learning and examining their beliefs. The positive feedback effect is amplified by the fact that many of the accounts posting on social media belong not to people at all, but to bots. Algorithms designed to show people shoes to their taste are often redeployed by unknown actors, possibly in the employ of hostile powers, to spread misinformation and sow discord. This is an ongoing problem, according to Jaron Lanier in Ten Arguments for Deleting Your Social Media Accounts Right Now. Bots were a significant part of the swirl of activity leading up to the 2020 election, just as they were in 2016.
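The feedback loop Pariser describes is concrete enough to sketch in code. The following toy simulation is an illustrative assumption of mine, not any platform’s actual algorithm: a recommender always serves the unseen item nearest a user’s current taste, and each item served pulls that taste toward itself.

```python
# A deliberately simplified toy model of the filter-bubble feedback loop:
# a recommender that always serves the unseen item nearest the user's
# current taste, while each item served pulls that taste toward itself.
# Nothing here reflects any real platform's code; it only illustrates
# the dynamic Pariser describes.
import random

random.seed(42)

# A catalog of 100 items spread evenly along an abstract "opinion axis" 0..1.
catalog = [i / 99 for i in range(100)]

taste = 0.5   # the user's starting position on that axis
seen = []

for _ in range(20):
    # The filter: serve only the unseen item closest to the current taste.
    unseen = [x for x in catalog if x not in seen]
    item = min(unseen, key=lambda x: abs(x - taste))
    seen.append(item)
    # Engagement nudges taste toward whatever was just consumed.
    taste += 0.5 * (item - taste)

exposure_range = max(seen) - min(seen)

# Baseline: 20 items chosen without the filter tend to span most of the axis.
unfiltered = random.sample(catalog, 20)
random_range = max(unfiltered) - min(unfiltered)

print(f"opinion range shown by the filter: {exposure_range:.2f}")
print(f"opinion range without the filter:  {random_range:.2f}")
```

Run with different seeds, the filtered range stays pinned in a narrow band around the starting taste while the unfiltered baseline spans nearly the whole axis: the “sinkhole of the self” in miniature.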

One aspect of “connecting” every person and machine in this way has been a flattening of meaning. First, this so-called connecting reduces people to electronic representations, which are in turn nothing but the sum of pieces of information (age, relationship status, likes and dislikes, and so on). Indeed, the totalizing connectivity reduces everything to data, which strips it of all value except commercial potential. Everything is simply content, no constituent part of which is better or worse than any other except as it determines advertising decisions. The term “content” is bland and bureaucratic, a manifestation of what it describes: something denuded of meaning that exists to fill up space. 

Driven as it is by advertising, so-called social media must be understood as an enormous set of behavior modification experiments on humanity with no oversight and no beneficiaries other than the experimenters. Their basic architecture trains people to behave in ways that benefit the arbiters of the experiment, not humanity generally. The experiment continually reinforces the notion that the best way to understand the world is through one’s likes and dislikes. This is a logical endpoint of the form of individualist thinking that denies the existence of any purpose higher than self-gratification. With the tracking technologies and the algorithms of most sites tailored to one’s likes, the result is a kind of sinkhole of the self: the filter prevents one from encountering anything one does not already like, so that at every turn one sees only what has already been seen. Take Amazon: at one time the site was, in addition to a vast all-purpose store, a good place to find book recommendations by looking at the list of what other people browsed. Less than a year ago, the presentation of recommendations changed without announcement (in an erasure of history and memory typical of the internet) to show not what other people liked, but what you (the consumer, the user, the holder of the all-important preferences) have already seen. It is as though one looked out the window and found it to be a distorted mirror. It is an attempt to find the world that leads only to the self. And there is no record of anything having ever been otherwise. Facebook’s frequent adjustments to its interface function similarly: manipulation and distortion of temporal sense through the creation of a false world that appears to exist statically, in one long moment. 

This, then, is what “connection” seems to really mean, in the ideology of so-called social media: people are connected to machines, which are connected to each other, but the machines insert as many layers of mediation as possible between people so that the real relationship is with the vast wasteland of “content,” not people. However profitable this is for firms, it is bad for human society. If the network is programmed to present the most distilled form of each person’s preferences, then it is no surprise that we have seen the ascendance of extreme politics (i.e. niche preference), a collapse in discourse, and the proliferation of irrationality (e.g. the flat earth model, creationism, climate change denial, and anti-intellectual hostility) in conjunction with the “connecting” of the world. These developments are by design—perhaps not consciously but in the ideology of the network. Even if the arbiters of these technologies intend only good, which seems doubtful, the system’s axioms inevitably lead to this state of affairs. As it appears to me, the central axiom is that preferences are the immutable measure of a person and ought to be pursued (rather than, say, capacity for empathy or any other conceivable trait). These foundational beliefs cannot give us anything other than what they contain. 

There is a common misconception that a machine is neutral, that it is not prey to the fallacies that beset human thinking. In truth, the values of the designers are encoded in the machines. No matter how fast or impressive their feats of computing, the machines are still contrivances embodying their makers’ beliefs about what deserves attention and what does not. Some of the most obvious axioms for the network are that problems are best solved by acquiring more information, that only that which is quantifiable matters, that any technological development constitutes progress (which itself is predicated on the idea that there exists some attainable state of perfection reached through progressive, linear improvement), and that the best solutions are elaborate, technological ones. These, and the many other foundational assumptions of the machine network from which arises so-called social media, are not neutral or incontestable. These beliefs, coded into the physical and electronic infrastructures, are a totalizing ideology, a worldview that allows no room for nuance or dissent and that offers itself as the only acceptable solution. Recorded history is littered with totalizing ideologies. The lesson that emerges is that totalizing ideologies kill, and that the human aversion to being proven mistaken means that extermination of the opposition tends to be chosen more frequently than conciliation or revision of the totalizing ideology. 

All of this might sound extreme for platforms on which people post photos of themselves in swimsuits or memes about fast food. It is, however, no exaggeration. The false promise of “connection” fosters the positive feedback loops that lead to coronavirus denial protests, failed coups, mob vilification, science denial—in short, everything that has come to define the last few years. The world has been connected, and this is the result. 

But what’s a failed totalizing ideology among “friends”?