Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril

A handful of companies have outsize influence on the world’s artificial intelligence. Policymakers must act now to stem the rise of powerful monopolies.

Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they stagnated for the three decades prior. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this languishing can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s productivity impact has so far been disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy, whether by amplifying misinformation or enabling mass surveillance. The past year and a half has underscored the impact of algorithmically powered social media, not just on the health of democracy but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge. Unless AI R&D and applications are channeled with wider societal and economic benefits in mind, the balance could tip the wrong way. How can we ensure that it doesn’t?

A handful of US tech companies, including Amazon, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize budgets for AI dwarf others’, including the federal government’s, they also emphasize building AI internally rather than buying it. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions, meaning the tech giants integrate the startups’ products into their own portfolios, or take IP off the market if it doesn’t suit their strategic purposes and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies ranked six through 24 is about 2,500—and it drops significantly from there. Moreover, these companies have near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon does the same on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may attract excessive investment simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental, and ask how investments are being allocated overall. AI, arguably, could have a more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.

Biden's antitrust revolutionaries need a four-step plan to confront the AI revolution.

Antitrust authorities must first be forward-looking. They must recognize that the AI chess pieces being moved today will shape tomorrow’s endgame, particularly in a tech industry with high barriers to entry and early moves that are hard to reverse after scale. Historically, tech antitrust action has often come too late. Policymakers should also trace the outlines of multiple future AI scenarios, including a dystopian one. They must imagine, for example, a society that suffers from “algorithmic poverty,” in which users generate data as unpaid “labor,” which is used to train algorithms that in turn displace wage-producing labor.

Policymakers must also separate AI applications that are value-enhancing for society, like speeding up scientific research, from others that might be value-destroying, like rapidly creating misinformation echo chambers, even if such developments are valuable for the firms bringing them to market. The economic impact can be broken down into the ways in which AI augments and substitutes existing activities and where it imposes negative social costs. Such a framework can help regulators provide guidance and guardrails to AI development. Selective taxes, tax breaks, and credits and subsidies can nudge corporate decisionmakers in their investment choices. Reconsidering existing tax codes that create incentives for companies to replace labor with “excessive automation”—through heavy taxation of labor and low taxes on capital—should also be part of this overhaul. In addition, grants from government agencies, such as the NSF, Darpa, and the NIH, which have been used to steer research and development on other technologies, should also be a critical part of the tool set used to help steer AI.

Third, regulators ought to scrutinize acquisitions of AI startups by the major tech companies more closely. Biden’s latest executive order on “promoting competition in the American economy” advises scrutiny with “particular attention to the acquisition of nascent competitors.” This is too limited. Google, for example, has made 81 acquisitions of companies that could be considered nascent competitors, and a whopping 187 acquisitions since 2003 in entirely new areas, with 30 in AI startups alone.

Fourth, policymakers should establish a “creative commons” for AI R&D as a way to mitigate the risks of concentration of power. This can be done in several ways. One involves getting the major tech companies to pool anonymized user data, in concert with the new Data Protection Agency that Senator Kirsten Gillibrand has already proposed. Data.gov or the Opportunity Insights Economic Tracker could be great models for this. Another approach is to create incentives for the dominant companies to open up their machine-learning platforms, as Google has done with TensorFlow, or to encourage R&D on AI with societal spillover effects, modeled on public health or humanitarian response projects at companies such as Microsoft and Google. Yet another idea can be imported from India, where several major banks have started experimenting with “account aggregators,” which consolidate all of a user’s financial data in one place so that they can open accounts and access financial services more efficiently.

The government could also mandate open IP. The administration has leverage with the multiple antitrust actions against Big Tech under consideration. Regulators could take a page from the 1956 federal consent decree against AT&T’s Bell System. That decree kept the telephone company intact, but in exchange it was required to license all its patents royalty-free to other businesses. This led to some of the most profound technological innovations in history, including the transistor, the solar cell, and the laser. Similar consent decrees with conditions on AI patents could be considered as a way of settling with the tech companies while utilizing their assets to advance AI for the greater good.

We must get beyond considering antitrust in the rearview mirror. The road ahead points to a growing concentration in AI. The risks of monopolization have changed in ways that Teddy Roosevelt never would’ve imagined as he was setting the rules of competitiveness for America’s Machine Age. If the Biden antitrust revolution were updated to the needs of the “second machine age,” history would thank us for it.
