AI Nationalism

For the past 9 months I have been presenting versions of this talk to AI researchers, investors, politicians and policy makers. I felt it was time to share these ideas with a wider audience. Thanks to the Ditchley conference on Machine Learning in 2017 for giving me a fantastic platform to get early feedback on my ideas. Thanks also to Nathan Benaich, Jack Clark, Matt Clifford, Jeff Ding, Paul Graham, Michael Page, Nick Srnicek, Yancey Strickler and Michelle You for helpful conversations and feedback on this piece.

Summary


The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society. The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific illustration of this dynamic. This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing it is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result, and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good.

 

Progress in machine learning

The last few years have seen developments in machine learning research and commercialisation that have been pretty astounding. As just a few examples:

  • Image recognition has started to achieve human-level accuracy at complex tasks, for example skin cancer classification.

  • Big steps forward in applying neural networks to machine translation at Baidu, Google, Microsoft and others, with Microsoft’s system achieving human parity on Chinese-English translation of news stories (when compared with non-expert translators).

  • In March 2016, DeepMind’s AlphaGo became the first computer program to defeat a world champion at Go, beating Lee Sedol 4-1. This is significant given that researchers had been trying for decades to develop a system that could defeat a professional player. AlphaGo was trained on 30 million moves played by human experts.

  • About 18 months later, DeepMind released AlphaZero. Unlike AlphaGo, AlphaZero did not use any moves from human experts to train. Instead, it learned solely by playing against itself. AlphaZero was not only able to defeat its predecessor AlphaGo, but the same general algorithm, trained from scratch on each game, was also able to defeat best-in-class chess and shogi computers. Leading ML researchers I have spoken with have consistently remarked on the ‘uncanny’ significance of a simpler algorithm that used zero human data ending up being more competent and more general. There is a huge gulf between the achievement of AlphaZero and Artificial General Intelligence, but nonetheless there is a sense that this could be another small step in that direction. (A toy sketch of the self-play idea follows below.)
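
To make the self-play idea concrete, here is a deliberately minimal sketch: a tabular value learner that teaches itself tic-tac-toe from nothing but its own games. AlphaZero itself combines deep neural networks with Monte Carlo tree search and is vastly more sophisticated; the game, update rule and parameter values below are illustrative assumptions, not DeepMind’s method.

```python
# Toy self-play learner: no human game data, only games played against itself.
import random
from collections import defaultdict

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # board state -> value for the player who just moved
ALPHA, EPSILON = 0.1, 0.1     # learning and exploration rates (illustrative)

def choose_move(board, player):
    moves = [i for i in range(9) if board[i] == "."]
    if random.random() < EPSILON:
        return random.choice(moves)              # occasionally explore
    # otherwise pick the move leading to the highest-valued position
    return max(moves, key=lambda m: values[board[:m] + player + board[m+1:]])

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        m = choose_move(board, player)
        board = board[:m] + player + board[m+1:]
        history.append((board, player))
        w = winner(board)
        if w or "." not in board:                # game over: win or draw
            for state, p in history:             # update each position toward the outcome
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                values[state] += ALPHA * (reward - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):   # all training data is generated by the agent itself
    self_play_game()
```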

Beyond research, there has been incredible progress in applying machine learning to large markets, from search engines (Baidu) to ad targeting (Facebook) to warehouse automation (Amazon) to many new areas like self-driving cars, drug discovery, cybersecurity and robotics. CB Insights provides a good overview of all the markets that start-ups are applying machine learning to today.

This rapid pace of change has caused leading AI practitioners to think seriously about its impact on society. Even at Google, the quintessential applied machine learning company of my lifetime, leadership seems to be shifting away from a techno-utopian stance and starting to publicly acknowledge the risks attendant on accelerating machine learning research and commercialisation:

“How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?” - Sergey Brin, April 2018

 

Three forms of instability

So why does this matter to nation states? There are 3 main ways in which accelerating progress in machine learning could create instability in the international order:

  1. Commercial applications of machine learning will create vast new businesses and destroy millions of jobs. In the extreme case, the country that invests the most effectively may end up the strongest economically.

  2. Machine learning will enable new modes of warfare: sophisticated cyber offense and defense capabilities, but also various forms of autonomous and semi-autonomous weaponry, for example Lockheed Martin’s Long Range Anti-Ship Missile. In the most extreme case, the country that invests the earliest and most aggressively may end up in a position of military supremacy.

  3. Eventually, more general purpose AI will enable a fundamental speedup in science and technology research. In my opinion, this might actually be the most profound source of instability. Consider for example the state whose leadership in AI enables it to be the first to develop a viable fusion reactor for power generation. Again, in the extreme case this might enable a country to achieve Wakandan technological supremacy.

Machine learning, to use Jack Clark’s term, is a uniquely omni-use technology that could impact almost every area of national policy. Human intelligence has shaped everything we see around us, so our ability to build machines with greater and greater intelligence could eventually have the same impact. Despite that uniqueness, we can find some historical parallels to help us think through how things might unfold. Nuclear technology is a dual-use technology with both civilian and military applications (nuclear weapons, radiography, power generation), as is oil (the use of which expanded from lighting to heating to an incredibly broad range of industrial and military uses). Both of these technologies have had enormous influence on geopolitics, and in both cases governments relatively rapidly became primary actors and remain so today (consider America’s 6,800 nuclear warheads or the 695 million barrels of oil in the Strategic Petroleum Reserve).

Ambitious governments have already started to see machine learning as the core differentiating technology of the twenty-first century, and a race has already commenced. This race will come to bear some similarity to the nuclear arms race of the last century and to the geopolitical tensions and alliances between nation states and multinational companies over oil. Economic, military and technological supremacy have always been extremely powerful motivators for countries.

 

Industry mix, labour costs, domestic champions, redistribution

While the broad threats and rewards of forward-thinking AI policy are common across states, the impact of machine learning is going to vary substantially by country:

Firstly, each country has a different mix of dominant industries and automation is not affecting all industries at the same pace. Compare for example the manufacturing and construction sectors. The construction sector has only recently started to be transformed by digital technologies like Building Information Modeling, whereas manufacturing has seen substantial applications of robotics and automation. That is clear when you look at their comparative productivity gains since 1995:

[Chart: comparative productivity gains in manufacturing vs construction since 1995]

The impact on wages and jobs will be felt very differently by countries whose core industries are automated sooner. Consider for example Germany, where the automotive industry represents over 10% of GDP; it is going to be more affected by the dynamics around self-driving cars than, for example, the UK, where the automotive industry contributes 4% of GDP.

Secondly, every country has a different labour cost that machines will compete against. I have seen this most clearly with a cleaning robotics company called Avidbots (full disclosure: I am an investor). The start-up is headquartered in Waterloo, Canada and produces industrial robots that use computer vision to clean large commercial spaces at a lower price than human cleaning teams in most developed countries. They are seeing orders for their robots from all over the world; however, growth is fastest in Australia due to higher labour costs in the cleaning sector there.

This chart captures well how the economic consequences of automation may vary by country:

[Chart: “Wage against the machine” - share of jobs at risk of automation by country, based on OECD analysis]

If the OECD’s analysis is directionally correct then Slovakia will face a greater challenge in the near term than Norway, with twice as many jobs at risk of automation.

Thirdly, as Kai-Fu Lee articulated very eloquently in his recent New York Times article, America and China are currently the only countries that are home to the largest AI companies - Google, Apple, Amazon, Facebook, Baidu, Tencent and Alibaba. National industrial strategy is very different when you are the home of these companies vs. just a customer state. I discuss this in more detail in the later section on domestic champions.

Finally, in a time when AI is going to materially impact the labour market, different countries have very different attitudes to redistribution, and this will significantly affect how they approach sharing the value created by automation. It is worth noting that while both China and America are home to the leading AI companies, they also both have levels of income inequality at or near their historic peaks.

 

Blurring line between public & private sectors

This is complicated by the fact that there are incredibly powerful non-state actors who are also competing furiously to develop this technology. All of the 7 most important technology companies in the world--Google, Apple, Amazon, Facebook, Alibaba, Tencent, Baidu--are making huge investments in AI, from low level frameworks and silicon to consumer products. It goes without saying that their expertise in machine learning currently leads that of any state actor.

As the applications of machine learning grow, the interactions between these companies and different nation states will grow in complexity. Consider for example road transportation, where we are gradually moving towards on-demand, autonomous cars. This will increasingly blur the line between publicly funded mass transportation (e.g. a bus) and private transport (a shared Uber). If this leads to a new natural monopoly in road transportation, should it be managed by the state (e.g. the call in London for “Khan’s Cars”), by a British company, or by a multinational company like Uber?

As Mariana Mazzucato outlined in her fantastic book The Entrepreneurial State, states have historically played a crucial role in underwriting long-term, high-risk research in science and technology by funding either academic research or the military. These technologies are often then commercialised by private companies. With the rise of visionary and wealthy technology companies like Google, we are seeing more high-risk, long-term research being funded by the private sector. DeepMind is a prime example of this. This creates tension when the interests of a private company like Google and those of a state are not aligned. An example is the recent interactions between Google and the Pentagon: over 4,000 Google employees protested against Google’s participation in “warfare technologies”, and as a result Google decided not to renew its contract with the Pentagon. This is a rapidly evolving topic. Only a week earlier Sergey Brin had said that “he understood the controversy and had discussed the matter extensively with Mr. Page and Mr. Pichai. However, he said he thought that it was better for peace if the world’s militaries were intertwined with international organizations like Google rather than working solely with nationalistic defense contractors”.

 

AI with Chinese Characteristics

In developing a national strategy for AI, China is way out ahead of everyone else. Call it ‘AI with Chinese Characteristics.’ For China over the past couple of decades, protectionism has been a winning strategy for developing enduring domestic technology companies, and it has ultimately enabled China to become the only other country in the world with AI companies to rival America’s. Beyond this, China’s technology companies are far more coupled to national policy than those in the UK or US, with talk of the Chinese government taking equity ownership in them via 1% ‘special management shares’.

Some notable aspects of China’s early efforts in AI nationalism:

  • China has an explicit goal, developed at the highest level of government, to make itself the global leader in AI by 2030. As Jeff Ding notes, China viewed itself as behind the US in AI policy and this was a major effort to catch up.
  • China has committed to a $2 billion AI technology park in Beijing.
  • China has developed the ‘Big Fund’ (Credit Suisse estimates total investment at ~$140 billion) to grow the Chinese semiconductor industry. Semiconductor performance is a key driver behind progress in machine learning research and applications.
  • The state appears to be explicitly focusing its domestic champions on key fields, for example Tencent on computer vision for medical imaging and Baidu on autonomous driving.
  • The Chinese state appears to have recognised the importance of data to its AI nationalism efforts. China’s latest cybersecurity law mandates that data being exported out of China must be reviewed.
  • China is implementing specific incentives for key foreign AI talent to relocate there.

The effects of this are starting to be felt. Andrew Moore, Dean of Computer Science at Carnegie Mellon, has estimated that the percentage of papers submitted from China to big AI conferences has increased from 5% a decade ago to 50% today (discussed eight minutes into this interview). This assumes that China is openly publishing all its research. Quantity is obviously not the same as quality and for now researchers based in North America and Europe remain the most influential (for example see Google Scholar ranking by citation). It seems reasonable to assume that this gap will start to close.

Beyond research, Chinese AI startups accounted for an astonishing 48% of global AI funding to startups last year, up from 11% in 2016.

Arguably the weakest link in China’s AI strategy at present is semiconductors, hence the centrality of chips to both the Big Fund and the 2030 plan, and the tension between the US and China in this area, e.g. the US blocking Broadcom’s $117 billion takeover of Qualcomm. China’s annual imports of semiconductor-related products are now $260 billion and have recently risen above its spending on oil imports.

The following graphics illustrate the gaps that China is trying to close in semiconductors and how much smaller the Chinese companies are than the US, Taiwanese or South Korean market leaders. This would also suggest that Taiwan and the Korean peninsula will become an even more geopolitically fraught area for US and Chinese foreign policy.   

[Charts: the gaps China is trying to close in semiconductors, and the relative size of Chinese semiconductor companies vs the US, Taiwanese and South Korean market leaders (source)]

 

Key events in the arms race so far

While China has the most developed public position on AI Nationalism, there is a clear and growing competition between major countries to lead the world in AI. When referring to an arms race I am primarily using the term figuratively, to describe a competitive dynamic between actors where the value they are creating is partly a function of their relative strength over a competitor. There is also a smaller component of this that is a literal arms race, where states are focused on autonomous and semi-autonomous weapons and machine-learning-enabled capabilities for cyberattack and defense. Here are the key events so far as I see them.

2014:

  • China launches the National Integrated Circuit Industry Investment Fund (aka the ’Big Fund’) with 138 billion yuan ($21.9 billion) to boost its fledgling semiconductor industry.

2016:

  • Obama White House releases report on future of artificial intelligence. Report is widely read and discussed in China.
  • US government spends $1.2 billion on unclassified AI-related R&D.
  • AlphaGo as a ‘Sputnik Moment’ for China and AI. Sixty million people watch AlphaGo vs. Lee Sedol live. For Westerners who don’t understand the historical significance and popularity of Go in China, consider AlphaGo’s victory as analogous to a scenario where Tencent developed a team of humanoid robots that could play American football and then went on to defeat the New England Patriots at the Super Bowl. Given Go’s deep history as a vehicle for military strategy, the PLA also takes note. Workshops like “A Summary of the Workshop on the Game between AlphaGo and Lee Sedol and the Intelligentization of Military Command and Decision-Making” start to be held.
  • Partly in response to AlphaGo, South Korea announces investment of $863 million in AI research over the following 5 years.
  • Germany fails to prevent €4.5 billion Chinese takeover of industrial robotics manufacturer Kuka.

2017:

  • AlphaGo defeats world No. 1 Ke Jie 3-0 in Wuzhen, China. Live video coverage of the match is blocked in China.
  • China announces deeply ambitious plan to become the world leader in AI by 2030.
  • Pentagon publicly raises concerns around technology transfer from US to China in various AI related areas.
  • Increasing use of CFIUS (Committee on Foreign Investment in the United States) to block acquisitions of and investments in US technology companies by Chinese companies or investors. Not limited to US companies--for example, CFIUS was also used to block the Chinese takeover of Aixtron (a German chip equipment maker whose technology is used in US weapons systems).

2018 so far:

  • January: France announces that foreign takeovers of AI companies will be subject to government approval.
  • March: France announces its AI plan: €1.5 billion of investment over 4 years, with a meaningful vision for France’s role laid out by Cédric Villani. Trump uses CFIUS to block Broadcom’s takeover of Qualcomm.
  • April: The UK announces its AI plan to invest £600 million over the coming years (exact annual spend unclear). The European Commission announces its desire to invest €20 billion in AI by 2020. US considers using the International Emergency Economic Powers Act to move beyond blocking Chinese investment and acquisitions to potentially blocking business partnerships between American and Chinese companies.
  • May: South Korea expands 2016 AI plan to $2.2 billion including 6 new AI institutes, a $1 billion fund for AI semiconductors and an overarching goal to reach the “global [AI] top four by 2022”.

 

AI Nationalism policies

It is helpful to consider the various fundamental actions a state can take in trying to advance its interests in AI. I am listing these roughly in order of how commonly governments have taken them over the past decade:

  • Invest money in research or academic institutions focused on machine learning.
  • Help to set standards/regulations so that the technology develops in a way that is most aligned/beneficial to the state’s domestic concerns and companies.
  • Indirectly invest money in the sector by subsidising venture capital.
  • Directly invest money in key companies.
  • Have the state become a key customer for your domestic champions e.g. the relationship between SenseTime and Chinese local and national government.
  • Block acquisitions of your domestic AI companies by foreign companies to preserve their independence.
  • Block investment into your domestic AI companies by foreign investors.
  • Block partnerships between your domestic AI companies and foreign companies.
  • Nationalise key domestic AI companies.

My personal belief is that we will see a lot more activity at the bottom of the list over the next few years. In particular, political leaders will start to question whether acquisitions of key AI startups should be blocked or perhaps even reversed. The canonical example for me is Google and DeepMind, which I will discuss more towards the end of this essay.

 

Domestic champions

Domestic champions are companies that are global commercial leaders in AI and closely identified with a home country, for example Baidu with China or Google with the US. It is worth discussing domestic champions in more detail:

[Chart: tax rates of the leading technology companies (source)]

This presents issues for the US and China and even bigger issues for other countries when it comes to redistributing the gains from automation and reducing inequality. If these companies continue to take a larger and larger share of the global economy the delta between tax revenues for China or America and everyone else becomes a bigger and bigger issue for politicians.

Kai-Fu Lee, formerly of Google China and now a leading venture capitalist in Beijing, presents a bleak view of how this plays out for countries that are not the US or China:

"[I]f most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances."

This kind of dependency would be tantamount to a new kind of colonialism.

We can see small examples of new geopolitical relationships emerging. In March, Zimbabwe’s government signed a strategic cooperation framework agreement with a Guangzhou-based startup, CloudWalk Technology, for a large-scale facial recognition program. Zimbabwe will export a database of its citizens’ faces to China, allowing CloudWalk to improve its underlying algorithms with more data while Zimbabwe gets access to CloudWalk’s computer vision technology. This is part of the Chinese government’s much broader Belt and Road Initiative.

There are historical parallels in all of this with the development of the oil industry. As Daniel Yergin explains in The Prize, his masterful history of oil:

“two contradictory, even schizophrenic, strands of public policy towards the major oil companies have appeared and reappeared in the United States. On occasion, Washington would champion the companies and their expansion in order to promote America’s political and economic interests, protect its strategic objectives, and enhance the nation’s well-being. At other times, these same companies were subjected to populist assaults against “big oil” for their allegedly greedy, monopolistic ways and indeed for being arrogant and secretive”.

My prediction is that domestic antitrust action against Google and Amazon will not materialise, because for now Washington will care more about strengthening its hand against China. The notes Mark Zuckerberg prepared for his Senate hearing capture this pithily:

“Break up FB? US tech companies key asset for America, break up strengthens Chinese companies.”

 

What can countries that aren’t China or America do?

To answer that question we need to consider the resources that are important to a country in the race to develop a leading position in AI:

  • Compute. The compute resources associated with machine learning progress are increasing rapidly; consider for example this OpenAI analysis (see the short calculation after this list). While compute costs run into the hundreds of millions for the leading machine learning corporations, this is still small compared to government budgets, so in theory smaller states like Germany, Singapore, the UK or Canada can compete head to head with the US and China.
  • Deeply specific talent. At present, progress in machine learning is very sensitive to a talent pool that is microscopically small compared to the world’s population. There are perhaps 700 people in the world who can contribute to the leading edge of AI research, perhaps 70,000 who can understand their work and participate actively in commercialising it, and 7 billion people who will be impacted by it. There are parallels with nuclear weapons, where the pool of scientists like Fermi, Szilard, Segre, Hahn, Frisch and Heisenberg capable of designing an atomic bomb was incredibly small compared to the consequences of their work. This suggests that specific talent could be a huge determinant in any AI arms race. China certainly thinks so. In this regard, some smaller countries--notably the UK and Canada--punch massively above their weight.
  • General STEM talent. The alternative is that you don’t need a Fermi or an Oppenheimer, you just need a lot of competent engineers, mathematicians and physicists. If so, the balance tips in favour of the largest, most developed countries, with the US and China squarely at the forefront.
  • Adjacent technologies. I have restricted this discussion to machine learning, but it is worth noting that there are various technologies that could contribute to progress in it. For example, if quantum computing enables a breakthrough in computing power, this would further accelerate progress in machine learning. A state’s ability to win an AI arms race will be partly enabled by a broader set of technology investments, in particular in software and semiconductors.
  • Political environment. Clearly any state action around AI will consume a portion of the leadership’s political capital and will trade off against other key issues consuming the country. If a country’s political leadership is absorbed by dealing with another form of instability, for example climate change or Brexit, then it will be harder for it to focus attention on AI.
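
On the compute point above: OpenAI’s analysis found the amount of compute used in the largest training runs doubling roughly every 3.5 months. A back-of-the-envelope projection (the horizons chosen here are illustrative) shows how quickly that compounds:

```python
# Projecting training-compute growth from the ~3.5-month doubling time
# reported in OpenAI's "AI and Compute" analysis; horizons are illustrative.
DOUBLING_TIME_MONTHS = 3.5

def growth_factor(years: float) -> float:
    """Multiple by which the largest training runs grow over `years`."""
    return 2 ** (years * 12 / DOUBLING_TIME_MONTHS)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{growth_factor(years):,.0f}x more compute")
# Prints roughly 11x after 1 year, 116x after 2 years and ~145,000x after 5.
```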

 

The strange case of the UK

My interest in this topic partly stems from my concern that the UK government is not getting its AI strategy right.

The UK finds itself in the fortunate position of having DeepMind--arguably the most important AI lab on the planet--headquartered in London. DeepMind has the magical combination of visionary, exceptional leadership in Demis Hassabis, Shane Legg and Mustafa Suleyman, as well as the greatest density of AI research talent in the world. If humanity builds Artificial General Intelligence, many of the deepest thinkers on the topic believe that it will happen in King’s Cross. If you were looking for a domestic champion for the UK, you would be hard pressed to find a better candidate.

However, DeepMind is no longer an independent British company. It was acquired by Google in 2014 for £400 million at a critical inflection point: after its success with Atari DQN, but before the big AlphaGo/AlphaZero breakthroughs. It was a brilliant acquisition. In general, it appears that Google has been an excellent parent company for DeepMind, providing substantial resources to increase both the compute spend and the talent base (reported by Quartz as $160 million in 2016), as well as the ability to tap into Google’s existing talent in machine learning--for example the Google Brain team. For a pre-revenue startup, remaining independent would have required DeepMind to raise close to half a billion dollars between 2014 and now to execute a similar plan. Today, in the middle of a bull market for AI startups, that seems reasonable, but looking back at 2014--before SoftBank’s Vision Fund and the escalation in huge growth rounds for pre-revenue companies--it would have been a tall order. Ultimately, DeepMind probably chose the highest-impact, most ambitious path available to it in 2014 by selling to Google. I have always had enormous respect for Google, and the principled and visionary leadership there is likely a very good fit with the DeepMind culture.

However, I find it hard to believe that the UK would not be better off were DeepMind still an independent company. How much would Google sell DeepMind for today? $5 billion? $10 billion? $50 billion? It’s hard to imagine Google selling DeepMind to Amazon, Tencent or Facebook at almost any price. With hindsight, would it have been better for the UK government to block the acquisition and help keep DeepMind independent? Even now, is there a case for the UK to reverse the acquisition, buying DeepMind out of Google and reinstating it as some kind of independent entity?

The two main political parties in the UK both struggle with this kind of question for different reasons. The Conservative MPs I have spoken to about this topic will always cite the troubled history of British Leyland; that spectre of failed market interference still looms large over their thinking. They remain convinced that the only path is laissez-faire economics.

The Labour party has a different challenge. They assert the importance of state action, for example Jeremy Corbyn’s desire to nationalise railways, water and energy companies. But this thinking focuses on those historic battles over privatisation and doesn’t look to the future. Corbyn and McDonnell today are more interested in Great Western Rail than DeepMind.

All of this is further complicated by the fact that the government is hugely distracted by Brexit.

DeepMind is not the only example of an exceptional British company working on cutting-edge machine learning. The UK has made many fundamental contributions to the field and is home to some of the world’s very best universities for machine learning research, including Cambridge, Edinburgh, Imperial, Oxford and UCL. With the growth of the UK’s startup sector over the past decade, there are now many great teams working to combine the UK’s expertise in building great technology companies, like Arm, with its academic talent in machine learning. Prowler is applying reinforcement learning to the general field of decision making. Graphcore is building a new type of processor for machine learning. Ocado is arguably the most sophisticated global player in warehouse automation after Amazon. Darktrace is one of the leading companies applying machine learning to cybersecurity. Benevolent is doing pioneering work in applying machine learning to drug discovery. All these companies are growing incredibly quickly, doing transformational work in their fields and building deep talent pools. They are all still independent startups. What will the UK government do when Amazon, Google or Tencent makes them a multi-billion dollar offer? At present, nothing. This is a good thing if you’re Google, Amazon or Alibaba looking to further cement your position, and indirectly a good thing for the US or China. Is it a good thing for the average UK citizen?

 

Rogue actors

Most of this essay has focused on the national interests of countries, but there are other political actors who also have to be considered, for example terrorist cells or rogue states. This is most relevant when it comes to machine-learning-enabled cyberattacks and autonomous weaponry. For those interested in learning more, these risks were covered well in this report on the malicious use of AI. The key question for me is the extent to which key labs, corporations or nation states ‘go dark’ in terms of publishing AI research to avoid enabling malicious actors. The risk is well captured by P.W. Singer and Allan Friedman in Cybersecurity and Cyberwar:

“To make a historic comparison, building Stuxnet the first time may have required an advanced team that was the cyber equivalent to the Manhattan Project. But once it was used, it was like the Americans didn’t just drop this new kind of bomb on Hiroshima, but also kindly dropped leaflets with the design plan so anyone else could also build it, with no nuclear reactor required… the proliferation of cyber weapons happens at Internet speed”

This is also complicated by the fact that cyberattacks may not be easily attributed:

“The problem is that, unlike in the Cold War, there is no simple bipolar arrangement, since, as we saw, the weapons are proliferating far more widely. Even more, there are no cyber equivalents to the clear and obvious tracing mechanism of a missile’s smoky exhaust plume heading your way, since the attacks can be networked, globalized, and of course, hidden. Nuclear explosions also present their own, rather irrefutable evidence that atomic weapons have been used, while a successful covert cyber operation could remain undetected for months or years”

The most likely outcome here is that certain key machine learning research ceases to be shared in the public domain to avoid enabling malicious actors. This thinking is captured most clearly in OpenAI’s recent charter:

“We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.”

If we do see key research labs or countries ‘go dark’ on some of their research output, a Cold War dynamic could emerge that will reward the most established and largest state or corporate actors. Ultimately, this reinforces the AI Nationalism dynamic.

 

The great wall of money

So far the amounts invested by states are an order of magnitude lower than those invested by Google, Alibaba etc. McKinsey estimates that the largest technology multinationals spent $20-30 billion on AI in 2016.

I believe that current government spending on AI is tiny compared to the investment we will see once governments realise what is at stake. What if, rather than spending ~£500 million of public money on AI over a number of years, the UK spent something closer to its annual defence budget of £45 billion, roughly ninety times as much?

Consider again the parallel with nuclear weapons, where the US government went from ignoring key scientists like Leo Szilard to recognising the existential importance of nuclear weapons to initiating the Manhattan Project. The Manhattan Project went from employing zero people in 1941 to within 3 years spending $25 billion (in 2016 dollars), employing over 100,000 people and building industrial capacity as large as the entire US automobile industry. States have tremendous inertia, but once they move they can have incredible momentum.

If this happens, then the amount of investment in AI research and commercialisation could be 10-100X what it is today. It is not always the case that more funding enables more progress but nonetheless I think it is prudent to assume that if states substantially increase their investment in machine learning then progress is likely to speed up further. This only reinforces the importance of investing now in research that helps to mitigate risks and ensure that these developments go well for humanity.

 

Engineers without borders

It is also worth acknowledging that there are connections that transcend the state and nationalism as Jeff Ding notes in his excellent report “Deciphering China’s AI Dream”:

“It is important to consider the interdependent, positive-sum aspects of various AI drivers….Cross-border AI investments, with respect to the U.S. and China, have significantly increased in the past few years. From 2016 to 2017, China-backed equity deals to U.S. startups rose from 19 to 31 and U.S.-backed equity deals to Chinese startups quadrupled from 5 to 20. Moreover, what is often forgotten is the fact that both Tencent and Alibaba are multinational, public companies that are owned in significant portions by international stakeholders (Naspers has a 33.3% stake in Tencent and Yahoo has a 15 percent stake in Alibaba).”

It is also true that economies and fundamental science and technology progress do not neatly track state borders. Talent and capital are global: DeepMind’s initial investors were from Silicon Valley and Hong Kong, its team is extremely international and it now has offices in Canada and France. There is a weakness to viewing things too narrowly through a state-centric lens. However, I believe that overall the economic and military consequences of machine learning will be such a dramatic cause of instability that nation states will be forced to put their citizens ahead of broader goals around internationalism.

Up until now I have just tried to outline what I think will happen: machine learning becomes a huge differentiator between states--economically, militarily and technologically--and triggers an arms race, which causes progress in AI to speed up further.

However, there is a difference between predicting that something will happen and believing it is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result. George Orwell, writing on nationalism in 1945, captures the tension between a patriotism that is primarily defensive and a nationalism that seeks to dominate:

“Nationalism is not to be confused with patriotism. Both words are normally used in so vague a way that any definition is liable to be challenged, but one must draw a distinction between them, since two different and even opposing ideas are involved. By ‘patriotism’ I mean devotion to a particular place and a particular way of life, which one believes to be the best in the world but has no wish to force on other people. Patriotism is of its nature defensive, both militarily and culturally. Nationalism, on the other hand, is inseparable from the desire for power. The abiding purpose of every nationalist is to secure more power and more prestige, not for himself but for the nation or other unit in which he has chosen to sink his own individuality.”

Personally, I believe that AI should become a global public good--like GPS, HTTP, TCP/IP or the English language--and that the best long-term structure for bringing this to fruition is a non-profit, global organisation with governance mechanics that reflect the interests of all countries and people. The best shorthand I have for this is some kind of cross between Wikipedia and the UN. One organisation that has made a step in this direction is OpenAI, which operates as a non-profit entity focused on AI research. This doesn’t solve many of the economic issues around machine learning that I have discussed in this essay, but it is a great improvement on machine learning research being primarily the economic domain of large technology companies and the military domain of nation states.

While the idea of AI as a public good provides me personally with a true north, I think it is naive to hope we can make a giant leap there today, given the vested interests and misaligned incentives of nation states and for-profit technology companies, and the weakness of international institutions. I believe that we are likely to go through a period of AI Nationalism before we get to a place where AI is treated like a public good, and that, to use Orwell’s distinction, a kind of AI Patriotism is likely to be a good thing for smaller countries in the short term.

Taking the example of the UK again, I am in favour of a more expansive national AI strategy to protect the UK’s economic, military and technological interests and to give the UK a credible seat at the table when global issues around AI are being worked out. That will help ensure that the UK’s economic interests and values are considered. I believe that the stronger the position of smaller countries like the UK, Canada, Singapore or South Korea in the short term, the more likely we are to move in the longer term to AI as a global public good. For that reason I believe it is necessary for the UK government to take steps towards investing in and protecting its homegrown AI companies and institutions, to allow them to play a larger role on the world stage independent of America and China. I have lived in both America and China, and during that time developed enormous respect and affection for both countries. That does not prevent me from believing the UK should protect the economic interests of its citizens, and I would like to see the UK play a material role in shaping the future of AI. Once again I come back to DeepMind: I believe that the UK and the world would be in a better place were DeepMind an independent entity, ideally, in the longer term, a non-profit, international organisation focused on AI as a global public good.

During the coming phase of AI Nationalism that this essay predicts, I believe we need a simultaneous investment in organisations and technologies that can counterbalance this trend and drive an international rather than national agenda: something analogous to the Baruch Plan, led by organisations like DeepMind and OpenAI. I plan to write more about that soon.