The Deliberate Awfulness of Social Media

Social-media platforms know what you’re seeing, and they know how you acted in the immediate aftermath of seeing it, and they can decide what you will see next. Illustration by Jack Sachs

Twitter, as everyone knows, is Hell. Its most hellish aspect is a twofold, self-reinforcing contradiction: you know that you could leave at any time and you know that you will not. (Its pleasures, in this sense, are largely masochistic.) My relationship with the Web site, which has, for years now, been the platform most deeply embedded in my daily—hourly, minutely—routine, has come to feel increasingly perverse. It mostly seems to offer a relentless confirmation that everything is both as awful as possible and somehow getting worse. And everyone else on Twitter appears to feel the same way. (You can check this claim right now by doing a Twitter search for phrases including “extremely normal website” and “I’m losing my mind.”) Last month, the writer Julius Sharpe posted the following exquisitely relatable sentiment: “Whenever someone stops tweeting, I feel like Ben Affleck going to Matt Damon’s house at the end of ‘Good Will Hunting.’ So happy for them.”

So why hasn’t Sharpe done a runner, like Matt Damon lighting out for the territory? And why, more to the point, haven’t I? The obvious answer is that social media is an addiction. The first argument in Jaron Lanier’s recent book, “Ten Arguments for Deleting Your Social Media Accounts Right Now,” is that the nexus of consumer technologies and submerged algorithms, which forms so large a part of contemporary reality, is deliberately engineered to get us hooked. “We’re being hypnotized little by little by technicians we can’t see, for purposes we don’t know,” he writes. “We’re all lab animals now.”

The problem, for Lanier, is not technology, per se. The problem is the business model based on the manipulation of individual behavior. Social-media platforms know what you’re seeing, and they know how you acted in the immediate aftermath of seeing it, and they can decide what you will see next in order to further determine how you act—a feedback loop that gets progressively tighter until it becomes a binding force on an individual’s free will. One of the more insidious aspects of this model is the extent to which we, as social-media users, replicate its logic at the level of our own activity: we perform market analysis of our own utterances, calculating the reaction a particular post will generate and adjusting our output accordingly. Negative emotions like outrage and contempt and anxiety tend to drive significantly more engagement than positive ones. This toxic miasma of bad vibes—of masochistic pleasures—is not, in Lanier’s view, an epiphenomenon of social media, but rather the fuel on which it has been engineered to run.

Lanier has coined a term for this process: he calls it BUMMER, which stands for “Behaviors of Users Modified, and Made into an Empire for Rent.” (Sample BUMMER-based sentence: “Your identity is packified by BUMMER.” Sample marginalia, scrawled by this reviewer with sufficient desperate emphasis to literally tear the page: “Please stop saying BUMMER!”) In Lanier’s view, BUMMER is responsible, in whole or in part, for a disproportionate number of our contemporary ailments, from the election of Donald Trump to the late-career resurgence of measles due to online anti-vaccine paranoia.

Before he emerged as a prominent diagnostician of our technological malaise, in 2010, with his book “You Are Not a Gadget,” Lanier was mostly known as one of the chief architects of virtual reality and a tutelary spirit of the Internet’s freewheeling early days. (He is nowadays an employee of Microsoft, a fact that he acknowledges in the book.) His major selling point as a public figure is the notion that he’s critiquing from the inside. But that insider status can be a disadvantage. One way of framing the problem would be to say that he thinks like an engineer, in that his argument is an explanation of how a particular system, social media, operates, and how it might be improved by tinkering with certain aspects of it. Which is to say that “Ten Arguments” is relentlessly focussed on the few BUMMER apples, without giving much serious consideration to the barrel. His peers in Silicon Valley, he repeatedly reassures us, are fundamentally well-meaning—which even if it were true wouldn’t be especially relevant—and capitalism, he maintains, is a basically just and rational social arrangement, albeit one that is open to corruption by bad actors and bad systems. He goes so far as to suggest that even Trump would be a “nicer, better person” if Twitter suddenly ceased to exist. “As a lefty,” he writes, “I don’t think a BUMMER-style lefty leader would be any better than Trump. Debasement is debasement, whatever direction it comes from.” I would, I suppose, prefer a lefty leader who didn’t tweet from a West Wing en suite at 5 A.M. to a lefty leader who did, but I would take either over a right-wing President who pursues tax cuts for the super-rich, dismantles environmental regulations, and implements border-protection policies specifically designed to victimize immigrants and their children. Stephen Miller does not appear to tweet much; it’s hard to imagine him being a worse person if he did.

There is a tendency toward overgeneralization of this sort throughout the book. Social-media posts, Lanier argues, are peculiarly vulnerable to deliberate or incidental misinterpretation, because context can be applied to what you say after the fact. “You have to become crazy extreme if you want to say something that will survive even briefly in an unpredictable context,” he writes. “Only asshole communication can achieve that.” But this is straightforwardly untrue, and it’s untrue in a way that reveals a fault line in Lanier’s whole argument. Any regular Twitter user will immediately tell you that the communications that survive in the unpredictable context of the platform are not extreme statements but extremely funny statements. What Lanier seems not to appreciate is that we keep firing up our timelines, scrolling downward through the linear abyss of utterances, in large part because of the ever-present possibility that we might read something that makes us howl with laughter. It is, granted, not a vision of a flourishing utopia, but it’s not nothing.

Lanier is, to the very marrow of his bones, a Silicon Valley sage: his prose, despite its politely resistant stance, is a medley of TED talks and keynotes and takeaways. Reading his book, I kept wanting him to go deeper. And then I read James Bridle’s “New Dark Age: Technology and the End of the Future,” which wades in so deep that I began to fear I might never come back. “New Dark Age” is among the most unsettling and illuminating books I’ve read about the Internet, which is to say that it is among the most unsettling and illuminating books I’ve read about contemporary life. Bridle doesn’t want to convince you to delete your social-media accounts, although you might be more likely to do so as a result of having read his book than Lanier’s. Instead, he wants you to see more clearly what it’s like to live in a world where you can never really go offline anyway, where there is no workable possibility of evading the network.

Bridle, like Lanier, has a background in computer programming: he is an artist whose work examines the hidden operations of technology in the public realm. Among his better-known pieces are “Drone Shadow,” for which he painted life-size outlines of military drones in urban spaces, and “Autonomous Trap 001,” a high-concept prank involving “trapping” self-driving cars by surrounding them with ritualistic circles of salt, whose resemblance to road markings confused the cars’ A.I. navigation systems into helpless stasis. He also gained prominence last year for his viral essay “Something Is Wrong on the Internet,” a harrowing exposition of creepy, algorithmically generated kids’ videos on YouTube, an expanded version of which forms a chapter of “New Dark Age.”

Bridle argues that the Enlightenment-era equation of knowledge and power has collapsed under the sheer tonnage of information—data, news, opinion, political spectacle, fact, falsehood—mobilized by contemporary technology. Not only is knowledge no longer power, it isn’t even really knowledge anymore. It is a strange fact, verifiable by people still living, that the Internet was once thought of as a grand superstructure by which all of us would be elevated to a state of technological enlightenment. This is not how things have panned out. Here’s how Bridle puts it:

We find ourselves today connected to vast repositories of knowledge and yet we have not learned to think. In fact, the opposite is true: that which was intended to enlighten the world in practice darkens it. The abundance of information and the plurality of worldviews now accessible to us through the internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics. It is on this contradiction that the idea of a new dark age turns: an age in which the value we have placed upon knowledge is destroyed by the abundance of that profitable commodity, and in which we look about ourselves in search of new ways to understand the world.

The book delineates the ways in which the future is becoming darker and less knowable, even as our tools for predicting it become more sophisticated. The book’s most fascinating and disturbing chapter is about how the Internet, the primary vector of information about climate change, is increasingly a vector of the problem itself. The world’s data centers already have roughly the same carbon footprint as the global aviation industry, even as people continue to speak of “the cloud” as though it were a barely corporeal entity. As temperatures rise, our information technologies will function less efficiently—increased heat and humidity will hamper the flow of wireless transmissions and satellite communications—and a vicious cycle will commence. (Bridle makes a similar point about cryptocurrency, that supposedly revolutionary and transformative technology: if its rate of growth continues, by next year Bitcoin alone will account for the same level of carbon output as the entire United States.) Even more depressing is the contention that climate change could actually wind up making us stupider: he cites research showing that human cognitive ability decreases significantly with higher atmospheric concentrations of carbon dioxide. “Carbon dioxide clouds the mind: it directly degrades our ability to think clearly, and we are walling it into our places of education and pumping it into the atmosphere,” he writes. “The crisis of global warming is a crisis of the mind, a crisis of thought, a crisis in our ability to think another way to be. Soon, we shall not be able to think at all.”

If I’ve encountered a more forbidding outline of the future in any work of nonfiction, I’ve obviously forgotten it, possibly as a result of carbon-induced cognitive decline. (The silver lining to this toxic cloud is that pretty soon we’ll be able to blame all our idiocies on climate change.) Bridle’s apocalyptic vision can itself be mind-numbing, in its way: it is a relentlessly gloomy book, and to read it is to risk suffocating any remaining hope you might have for the future, any sense that catastrophe might yet be averted or mitigated. (This is not an outcome that its author seems to intend.) Like Lanier’s book, though in a very different register, it risks presenting the Internet as both the manifestation and cause of all of our deepest problems. Yes, social media contributed to a Trump Presidency, but so did the financial collapse of 2008, reality television, misogyny, and enduring structures of white supremacy. So, too, with Brexit: the surveillance stratagems of Cambridge Analytica might have pushed the U.K. over the line, but it wouldn’t have approached that line without a confused sense of its own savage colonial history, a thwarted cultural superiority complex, and a self-perpetuating class system that elevates mediocre old-Etonian opportunists at the cost of the national interest. The chronic condition is the disproportionate power and wealth of a tiny minority; technology is a means by which its symptoms manifest.

Bridle does establish beyond all doubt the viciousness and complexity of those symptoms. At one point, he describes the logistics system employed by Amazon at its warehouses, in which stock-pickers walk briskly among the shelves, following directions relayed by a handheld device that also tracks their speed and efficiency. The arrangement of the shelves makes no sense to human eyes—books stacked next to saucepans, televisions beside children’s toys—but is perfectly rational to the machine intelligence that configured it. It’s a system that is incomprehensible without the aid of computers, and in which the traditional relationship of authority between human and machine is inverted. (“Reducing humans to meat algorithms, useful only for their ability to move and follow orders, makes them easier to hire, fire, and abuse,” Bridle notes.) As with so much else in the book, it’s difficult not to read this as a metaphor for a much broader truth: we are all of us increasingly negotiating a world that makes sense only from the point of view of machines. For some of us—Amazon workers, Uber drivers—it’s less a metaphor than a literal reality. As William Gibson has put it, “The future is already here—it’s just not very evenly distributed.”

To be alive and online in our time is to feel at once incensed and stultified by the onrush of information, helpless against the rising tide of bad news and worse opinions. Nobody understands anything: not the global economy governed by the unknowable whims of algorithms, not our increasingly volatile and fragile political systems, not the implications of the impending climate catastrophe that forms the backdrop of it all. We have created a world that defies our capacity to understand it—though not, of course, the capacity of a small number of people to profit from it. Deleting your social-media accounts might be a means of making it more bearable, and even of maintaining your sanity. But one way or another, the world being what it is, we are going to have to learn to live in it.