“The Coming Wave” explores the advantages and risks associated with a surge of emerging technologies. Authored by Mustafa Suleyman, a British artificial intelligence researcher and co-founder of DeepMind and Inflection AI, the book extends beyond just AI. It delves into the merits and drawbacks of other developing technologies, such as quantum computing and synthetic biology.

In its initial sections, the book paints a picture of the potential benefits of these emerging technologies and reads like utopian science fiction. It then shifts to outlining the potential threats, evoking a dystopian hellscape. The book ends with a series of recommendations on navigating the fine line between these two extremes. The following review reflects my interpretation of the author’s opinions, except where I specifically state my own views.

The Wave

The central metaphor of the book is the “wave,” defined by Suleyman as a convergence of new, general-purpose technologies that emerge simultaneously and carry significant societal impact. Historically, there have been only 24 technologies of this magnitude, such as farming, the factory system, and electricity. Suleyman identifies the upcoming wave as comprising three primary general-purpose technologies: artificial intelligence, synthetic biology, and quantum computing. Everett Rogers, a scholar of technology diffusion, characterizes technologies as “clusters of innovations” whose component innovations are closely interrelated. According to this perspective, the impending wave represents a supercluster.

Once established, technological waves are nearly impossible to halt. Technology tends to disseminate regardless of obstacles, a process primarily driven by two factors: demand and the subsequent reduction in costs to meet that demand. Historical attempts to impede technological proliferation, such as the Ottomans’ resistance to the printing press and the Luddites’ opposition to mechanized looms, have largely been unsuccessful. A notable exception is the proliferation of nuclear technology, which was curtailed through a concerted multinational effort, with demand confined mainly to nation-states. As long as technology remains useful, desirable, affordable, and accessible, it not only persists but also grows in influence, with these attributes reinforcing one another.

How people end up using technology is never certain. Johannes Gutenberg, for instance, did not intend to spark the Reformation; his aim was simply to sell more Bibles. The more general-purpose a technology is, the less control its inventors have over its applications.

Artificial Intelligence

As an AI researcher and founder, Suleyman naturally dedicates significant attention to the subject. He asserts that AI will eventually supplant what he terms “intellectual manual labor.” Over the coming decade, he anticipates that large language models (LLMs) will become dramatically more capable while their costs fall by several orders of magnitude.

Instead of concentrating on the pursuit of the elusive concept of artificial general intelligence (AGI), Suleyman proposes a focus on what he calls artificial capable intelligence (ACI). He views ACI, defined as the stage where AI can accomplish complex objectives with minimal human oversight, as the next significant phase in the evolution of artificial intelligence.

Suleyman introduces the concept of the “modern Turing test,” which he defines as an AI’s ability to “make $1 million on Amazon in a few months with just a $100,000 investment.” He believes this feat is achievable with minimal human intervention within the next year, and likely to become fully autonomous within three to six years. The key challenge lies in developing an AI capable of hierarchical planning: coordinating multiple goals and subgoals toward a singular objective. This concept bears a striking resemblance to the well-known “paperclip maximizer,” a thought experiment illustrating the potential risks of an AI system single-mindedly pursuing a defined goal without ethical or safety considerations.

Artificial intelligence introduces asymmetric risks, affecting entire societies rather than just individuals. Traditional risks, such as a car crash, typically impact only the parties directly involved in the incident. In contrast, an asymmetric risk in this context would be akin to a hypothetical scenario where a malfunction in Tesla’s autopilot system causes all Teslas to spontaneously take a sharp right turn (note: this analogy is my own, not Suleyman’s). This type of risk implies a broader, systemic impact, originating from a single source but affecting a wide array of users or stakeholders.

The advancement of artificial intelligence will invariably lead to more sophisticated cyberweapons. Incidents like the WannaCry and NotPetya cyberattacks had significant impacts and were only neutralized because of exploitable flaws in their own design, flaws their authors could easily have fixed. Future generations of AI-powered cyberweapons are anticipated to modify their code in real time, enabling them to persistently scan networks, autonomously identify vulnerabilities, and exploit them.

The advent of generative AI technologies also raises concerns about the proliferation of fake news and synthetic identities. With these tools, it becomes increasingly feasible to create convincing yet entirely fictitious accounts of events, complete with rich, detailed histories. This development poses a significant challenge to the discernment of truth, as the authenticity of information can be easily obscured.

This situation is further complicated by individual biases, where people’s perceptions of reality are shaped by their pre-existing beliefs and opinions. Reactions to footage from the Israel-Gaza conflict, such as a rocket landing on a hospital, illustrate the challenge: people perceive the same footage as either fabricated or real, largely depending on their preconceived notions or allegiances. As generative AI continues to evolve, distinguishing genuine from artificially created content will become increasingly difficult, amplifying the risk of misinformation and the manipulation of public opinion.

The first known case of an AI-caused human fatality involved an Israeli robot sharpshooter. This autonomous system, satellite-operated and able to fire 600 rounds per minute, was used in the killing of an Iranian nuclear scientist. Suleyman foresees that the full automation of militaries will lower the barriers to conflict, as the human cost for at least one side diminishes. He speculates that future wars might be triggered by AI systems responding to perceived threats, similar to overreactions observed in algorithmic trading, leading to conflicts initiated for reasons not fully understood by humans.

Suleyman challenges the optimistic view that billions of people will transition to high-end jobs in the future. He posits that few domains will remain in which human capabilities surpass those of machines. This perspective aligns with that of Nick Bostrom, who suggests that human labor might eventually be valued solely for its artistic or sentimental significance, prized precisely because a human performed it. Contrary to the historical trend in which technological advancements created new, unforeseen job categories (like “social media influencer”), Suleyman believes this pattern will not continue in the era of advanced AI and automation.

If humanity succeeds in creating an artificial superintelligence, we would encounter what is termed the “gorilla problem.” This analogy draws on the fact that while gorillas are physically stronger than humans, they are kept in zoos by humans due to our superior intelligence. Thus, if an entity possessing intelligence surpassing that of humans were to emerge, it could potentially dominate us in a similar manner. This scenario underscores the concern that a superintelligent AI might not inherently share human goals or values, leading to what is known as the “alignment problem” — the challenge of ensuring that such an AI’s objectives are aligned with human interests and ethics.

Synthetic Biology

Suleyman turns his attention to another emerging technology: synthetic biology. His predictions for this field are diverse and profound. They include the development of organisms engineered with the precision characteristic of modern software, highly personalized health treatments tailored to individual genetic profiles, and the creation of “enhanced” humans with improved physical attributes. Additionally, he envisions the use of carbon nanotubes as interfaces connecting humans directly with the digital world, enabling new forms of interaction and integration between biological and digital systems. These advancements, according to Suleyman, represent just a few of the potential breakthroughs in the rapidly evolving field of synthetic biology.

DeepMind, a pioneering company in the field of artificial intelligence, developed AlphaFold. This deep learning system has made a significant breakthrough in the scientific community by being able to predict the three-dimensional structures of proteins based solely on their amino acid sequences. This capability represents a major advancement in understanding protein folding, a complex and crucial aspect of biology, with far-reaching implications for medical research, drug discovery, and our overall understanding of life processes.

Technologies like AlphaFold and CRISPR unlock the ability to manipulate biology at the molecular level. They also enable more advanced gain-of-function experiments, which deliberately modify pathogens to enhance properties such as lethality or infectiousness. While this research often aims to better understand diseases and develop treatments, it carries significant risks. One potential danger is the creation of bioweapons designed to target specific populations based on their DNA; such weapons could cause selective, large-scale harm, posing a grave threat to global health security. The ethical and safety concerns surrounding gain-of-function research are substantial, especially given the possibility of such advanced biological agents being misused.

Suleyman also foresees a biohacking arms race, where some individuals enhance themselves to “post-human” levels. This scenario could lead to stark inequalities between enhanced and non-enhanced humans, raising critical questions about rights, access to technology, and the very definition of being human. The emergence of a biologically enhanced class could profoundly shift societal dynamics, governance, and ethical frameworks, necessitating a reevaluation of fairness and opportunity in a radically altered world.

Quantum Computers

Quantum computers, leveraging the principles of quantum mechanics, offer computing speeds vastly superior to traditional computers for certain classes of problems. Each quantum bit, or “qubit,” can exist in multiple states simultaneously, as opposed to the binary 0 or 1 of a classical bit, and the state space of a quantum computer effectively doubles with each added qubit. This allows quantum computers to process certain complex calculations at an unprecedented pace: Google’s quantum computer, for instance, completed in roughly 200 seconds a task estimated to take a traditional supercomputer over 10,000 years.

This immense speed makes quantum computing exceptionally well-suited for optimization problems, which are central to many modern technological challenges. Advances in quantum computing will not only boost AI development, given AI’s reliance on computing power, but also significantly aid synthetic biology, which in turn benefits from AI’s capabilities. Quantum computers are thus poised to catalyze the rapid progression of both fields.
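To make the qubit doubling concrete, here is a minimal Python sketch (my illustration, not from the book) that builds the statevector of n qubits in an equal superposition; each additional qubit doubles the number of amplitudes a classical machine would have to track:

```python
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Statevector of n qubits after a Hadamard on each:
    an equal superposition over all 2**n basis states."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim))

for n in range(1, 6):
    print(f"{n} qubit(s): {uniform_superposition(n).size} amplitudes")
# 1 qubit(s): 2 amplitudes
# 2 qubit(s): 4 amplitudes
# ...each added qubit doubles the amplitude count
```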

On the downside, quantum computing poses a serious threat to modern cryptography. Current public-key encryption is secure because it relies on problems, such as factoring very large numbers, that are extremely time-consuming for classical computers to solve. A sufficiently large quantum computer running Shor’s algorithm could solve these problems quickly, breaking the underlying encryption. This vulnerability extends to a wide range of technologies, including cryptocurrencies like Bitcoin. The advent of quantum computing therefore necessitates the development of new, post-quantum forms of cryptography that can withstand quantum attacks.
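As a toy illustration of what is at stake (mine, not Suleyman’s, and with comically small numbers), here is RSA in a few lines of Python. Its security rests entirely on the attacker being unable to factor n; Shor’s algorithm on a large quantum computer would do exactly that:

```python
from math import gcd

# Toy RSA. Real deployments use 2048-bit moduli, which classical
# computers cannot factor in any practical timeframe.
p, q = 61, 53              # secret primes
n = p * q                  # public modulus; factoring n breaks everything
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
assert gcd(e, phi) == 1    # e must be coprime with phi
d = pow(e, -1, phi)        # private exponent; derivable only by knowing p, q

message = 42
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # decryption recovers the message
```

An attacker who factors n recovers p and q and can derive d. For numbers this small, trial division suffices, which is precisely why real key sizes are so large; a quantum computer would collapse that safety margin.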

Impact on the nation-state

Suleyman emphasizes that technology inherently embodies a form of power, making it intrinsically political. The emerging technologies discussed significantly amplify the power accessible to individuals. This development leads to a paradoxical situation where power is simultaneously concentrated and dispersed, resulting in both the strengthening and weakening of existing power structures, including nation-states.

The original formation and justification of the nation-state revolved around the Hobbesian bargain, a trade-off between individual liberty and collective security. However, as Suleyman points out, if advancements in technology reach a point where a nation-state can no longer assure security, this foundational social contract comes into question. The evolution of technology, therefore, not only reshapes our physical and digital worlds but also has profound implications for our social structures and the very concept of governance.

Suleyman envisions two primary paths that nation-states might take, with various possibilities in between. The first is what he refers to as the “zombie state.” In this scenario, a country maintains the outward appearance of a liberal democracy but becomes functionally hollowed-out. Core services in such states would be significantly weakened, and they would suffer from political instability and divisiveness. Suleyman observes early signs of this trend in the United States, indicating a potential shift towards this “zombie state” model.

At the opposite end of the spectrum, Suleyman foresees the potential emergence of totalitarian governments, which would make the oppressive regime depicted in George Orwell’s “1984” seem almost utopian by comparison. The uncritical and unchecked adoption of emerging technologies could pave the way for extreme state control. Such governments would possess unprecedented means to monitor and repress their populations, including the potential for genetic manipulation.

Suleyman points out that early indications of technology-enabled population surveillance are already visible in countries like China. This development suggests a trajectory towards more invasive and comprehensive state monitoring and control, underlining the profound implications of how emerging technologies are adopted and regulated by governments.

Winner take all

The “superstar effect” is a phenomenon where leading players in a field capture a disproportionately large share of the rewards, significantly overshadowing everyone else. It is closely related to the “power law,” a principle fundamental to venture capital and many other domains. In this context, power-law dynamics dictate that a small number of successful investments can return 100 times or more their initial value, while the remaining majority of ventures compete for a substantially smaller portion of the overall gains. The result is a highly skewed distribution of success and rewards, with a few top performers reaping most of the benefits.
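A quick simulation (my own illustration, not from the book) makes the skew vivid: drawing 100 hypothetical investment outcomes from a heavy-tailed Pareto distribution, a handful of winners capture most of the total return:

```python
import random

random.seed(0)  # reproducible illustration

# 100 hypothetical venture outcomes drawn from a power-law (Pareto)
# distribution; the shape parameter 1.2 is chosen purely for illustration.
outcomes = sorted((random.paretovariate(1.2) for _ in range(100)), reverse=True)

top5_share = sum(outcomes[:5]) / sum(outcomes)
print(f"Top 5 of 100 investments capture {top5_share:.0%} of total returns")
```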

The advent of the “coming wave” of technological advancements is expected to intensify the “superstar effect.” This will result in the creation of even wealthier and more successful superstars in various domains. The returns on intelligence, driven by advancements in AI, quantum computing, and synthetic biology, are anticipated to compound exponentially. Consequently, a small number of highly capable organizations will reap massive benefits, leading to significant concentrations of wealth and power.

These developments will enable the rise of vast, automated megacorporations that shift value transfer from human capital to raw capital. In this scenario, the economic and societal influence of human labor and skills diminishes, while capital, especially in the form of advanced technology and infrastructure, becomes the primary driver of economic value. This shift could further exacerbate economic inequalities and alter the traditional dynamics of labor, capital, and production.

The trend of technological dominance and the concentration of power in a few large entities is already evident in the current business landscape. Big tech companies are among the largest in the world and, as a category, are larger and growing more rapidly than traditional Fortune 500 companies. These companies possess both the financial capital and technological resources to capitalize on the forthcoming wave of technological advancements, often leading the charge in developing these transformative technologies.

For instance, Amazon’s investment in research and development (R&D) is a testament to this trend. With an R&D expenditure of $78 billion against the roughly $700 billion in global R&D spending the book cites, Amazon alone accounts for about 11 percent of the total. This level of investment highlights not only Amazon’s role in driving technological innovation but also the scale at which these tech giants operate and their capacity to shape future technological landscapes.

China

The game of Go holds significant cultural and intellectual importance in China, so the defeat of the world Go champion by DeepMind’s AlphaGo was a momentous event, often likened to a “Sputnik moment” for China. Just as the launch of Sputnik by the Soviet Union in 1957 galvanized the United States to invest heavily in space technology, rocketry, and computing, leading to its emergence as a superpower in these fields, AlphaGo’s victory may have a similar catalyzing effect on China’s pursuit of leadership in artificial intelligence.

China has since adopted an explicit national strategy to become the world leader in AI by 2030. This ambition is evident in the significant resources being allocated to AI research and development. For instance, Tsinghua University in Beijing has emerged as a leading academic institution in AI research, publishing more papers in this field than any other university worldwide.

These developments have not gone unnoticed in other countries. In 2021, the Pentagon’s chief software officer resigned, expressing a stark viewpoint on the global AI competition. He was quoted in the Financial Times saying, “We have no fighting chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over in my opinion.” This statement reflects the growing concern about the pace and scale of China’s advancements in AI and its implications for global technological and strategic balances.

China’s commitment to developing advanced technologies, including AI, is partly driven by demographic challenges. The country’s total fertility rate is among the lowest globally, leading to projections of significant population decline. For instance, the Shanghai Academy of Social Sciences predicts that China’s population could drop to around 600 million by the end of the 21st century.

This demographic trend means that China could face challenges similar to those currently experienced by Japan, where a shrinking working-age population is burdened with supporting an increasingly large retired population. The development and implementation of technologies like AI become crucial under these circumstances, as they can help mitigate the impacts of a declining workforce. AI and automation can potentially compensate for labor shortages, maintain economic productivity, and support the elderly, thus playing a vital role in managing the socio-economic consequences of demographic shifts.

Tech can save us

The challenges highlighted by Suleyman, including climate change, an expanding retired population, and the need to sustain rising living and healthcare standards, are indeed pressing global issues. The anticipated decline in global population over the next century will further exacerbate these problems. As the ratio of workers to retirees shifts, traditional economic models and welfare systems will be strained, potentially making it difficult to maintain current living standards.

In light of these challenges, Suleyman presents a somewhat paradoxical argument: despite the various risks and drawbacks associated with new technologies (as discussed earlier in the book), he believes that these technologies are also crucial for addressing these major global challenges. This viewpoint aligns with the idea that technological advancements, if harnessed responsibly, can provide innovative solutions to complex problems like climate change, healthcare, and economic productivity.

The need for new technology becomes a balancing act between mitigating its potential negative impacts and leveraging its capabilities for the greater good. This perspective reflects a growing recognition that technology, while presenting risks, is also an indispensable tool in tackling some of the most daunting challenges facing humanity.

What is to be done?

In the concluding section of his book, Suleyman presents a comprehensive strategy for managing the upcoming wave of technological advancements. His aim is to maximize their benefits while mitigating associated risks. He introduces the concept of ‘containment’, which he describes as an array of interconnected and mutually supportive mechanisms spanning technical, cultural, legal, and political domains. These mechanisms are designed to maintain societal control over technology during periods of rapid and exponential change.

Suleyman emphasizes that regulation alone is insufficient to manage these challenges, but acknowledges it as a necessary component of a broader strategy. To this end, he advocates for a large-scale initiative akin to the Apollo program, specifically targeting AI safety and biosafety. This program would involve hundreds of thousands of professionals working to ensure the safe development and deployment of these technologies.

Furthermore, Suleyman suggests the implementation of legislation mandating that a certain percentage of corporate R&D budgets be allocated specifically for safety measures. This proposal aims to institutionalize safety as a core aspect of technological development, ensuring that as companies innovate, they also consistently invest in mitigating potential risks associated with their technologies. This approach reflects a proactive stance towards technological stewardship, prioritizing not just advancement but also the safety and well-being of society.

Suleyman points out that the generality of an AI model correlates with its potential threat level. Thus, AI laboratories working on foundational AI capabilities, such as OpenAI, Anthropic, and Google DeepMind, require particular scrutiny and oversight.

He observes that, as of now, there is no comprehensive, formalized global effort for routinely testing deployed AI systems, nor are there adequate tools for such testing. To address this gap, Suleyman proposes the creation of an AI Audit Authority (AAA), a body dedicated to fact-finding and auditing the scale of AI models.

The AAA’s role would involve monitoring AI development, particularly when AI systems reach certain capability thresholds, and informing the public about these advancements. Its function would include posing critical questions about the systems’ capabilities, such as whether they show signs of self-improvement or can set their own goals. The establishment of such an authority aims to provide a systematic and transparent approach to AI oversight, ensuring that advancements in AI are closely monitored for safety, ethical considerations, and potential societal impacts.

Suleyman’s proposal for managing risks in synthetic biology includes the SecureDNA program, which envisages connecting every DNA synthesizer to a centralized, encrypted system that screens orders for pathogenic sequences. I’m not sure how well this would work; I assume a motivated actor could simply remove or spoof the synthesizer’s communications with SecureDNA.
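To make the idea concrete, here is a hypothetical sketch of hash-based order screening in the spirit of SecureDNA; the window length, hashing scheme, and interface are my illustrative assumptions, not the actual protocol:

```python
import hashlib

WINDOW = 42  # window length in bases; illustrative, not SecureDNA's real value

def window_hashes(sequence: str, window: int = WINDOW) -> set[str]:
    """Hash every fixed-length window of a sequence so that the
    screener never sees raw sequences."""
    return {
        hashlib.sha256(sequence[i:i + window].encode()).hexdigest()
        for i in range(len(sequence) - window + 1)
    }

# The screening service holds only hashes of known hazardous fragments.
hazard_db = window_hashes("ATG" + "ACGT" * 20)  # stand-in pathogenic sequence

def screen_order(order: str) -> bool:
    """Return True if any window of the order matches a hazardous fragment."""
    return not window_hashes(order).isdisjoint(hazard_db)
```

The weakness I worry about is visible even in this sketch: the check only matters if the synthesizer actually makes the call. Firmware that skips or falsifies it defeats the scheme entirely.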

Suleyman also emphasizes the responsibility of those creating new technologies. He advocates for scientists and developers to adhere to ethical standards akin to the Hippocratic Oath, helping ensure that only responsible parties produce the most sophisticated AI systems, DNA synthesizers, and quantum computers. At Inflection, for example, the company’s AI, Pi (Personal Intelligence), is designed to exhibit caution, express self-doubt, and generally defer to human judgment. This seems to me a rather weak defense, at least as described in the book.

He concludes by saying that safe, contained technology is not a final end state, but rather an ongoing process. Containment is a narrow and treacherous path.

My thoughts

I can’t say this was an entirely enjoyable book, though I did like the sci-fi scenarios. It is prone to making readers paranoid and/or filling them with existential dread, and some sections felt somewhat dry and stiff. However, I do think it is an important book, because it describes the very real dangers we face with the coming (or already arrived?) wave of technologies.

Suleyman’s book echoes themes from several influential works on technology and societal change. It recalls “Guns, Germs, and Steel” by Jared Diamond in its historical analysis of technological revolutions. Similarities with Max Tegmark’s “Life 3.0” are evident in the discussions on biotechnology. The book also resonates with Alvin Toffler’s “Future Shock” in addressing the challenges of adapting to rapid technological change. Furthermore, it parallels “The Singularity is Near” by Ray Kurzweil and “Superintelligence” by Nick Bostrom in exploring the exponential growth and potential risks of AI. Unique to Suleyman’s narrative is the emphasis on the interplay and cumulative acceleration of various emerging technologies, highlighting compounded risks not typically addressed in these other works.

One could take the cynical view that Suleyman is advocating for regulation in order to cement his own firm’s position. However, he does say that companies building general capabilities should face increased scrutiny, which would seem to include Inflection. For the most part, I think his suggested changes would allow continued innovation without too much stifling red tape. That said, model size might be the wrong parameter to monitor, as LLMs continue to gain capability at ever smaller parameter counts (hence the new term Small Language Models, or SLMs).

Time will tell whether Suleyman turns out to be a modern-day Malthus. In Malthus’s time, it was easy to extrapolate the rise in population, but far harder to predict the even greater increase in food production, because that prediction required the generation of new knowledge, which is inherently unpredictable. Likewise, we simply can’t predict what new technologies will be developed that will help curtail or eliminate the threats outlined in this book.