Nexus: A Book Review
I like to write notes, or short reviews, of the books I read so I can return to them once in a while. I have never published any of them, though; this is my first attempt at doing so.
Over the holiday I read Yuval Noah Harari’s Nexus, a profound exploration of humanity’s information networks, tracing their evolution from ancient times to the present and pondering their possible futures. The book delves deeply into the paradox of human intelligence: we possess extraordinary technological capabilities, yet we often lack the wisdom to wield them responsibly. Harari asks not only how humanity has come this far, but also whether we are equipped to face the monumental challenges of an AI-driven world.
The Evolution of Cooperation and Stories
About 70,000 years ago, Homo sapiens developed an unprecedented capacity for cooperation, evident in inter-band trade and artistic traditions. This cooperation was made possible by the emergence of stories. Harari explains that stories create a third level of reality — “intersubjective reality” — beyond the objective and subjective. Concepts like laws, nations, and currencies exist not in the physical world but in the shared stories humans tell one another. Brands, for instance, are specific types of stories that imbue products with meaning.
While storytelling enables cooperation, it also has a darker side. History is often shaped by harmful but mesmerizing stories. Fiction is malleable, simple, and comforting, whereas the truth is complex and often unsettling. Harari warns that as information networks expand, they risk prioritizing connectivity over truth.
Information: Connection Over Representation
Harari provides a nuanced view of information, distinguishing between naive and pragmatic perspectives. The naive view holds that information aims to represent reality and lead us to truth. In practice, however, most information does not represent anything; it connects things. Lies, fantasies, and fictions are forms of information, too.
This focus on connectivity over truth creates problems. Without mechanisms to prioritize truth, an increase in information tends to amplify falsehoods. This dynamic explains why today’s information explosion has not necessarily made us wiser.
The Rise of Democracy and Totalitarianism
Harari traces the historical evolution of political systems, emphasizing the role of information networks in enabling both democracy and totalitarianism. Early philosophers like Plato and Aristotle argued that democracy could only work in small city-states, as meaningful conversations required proximity and understanding. The invention of mass media — starting with the printing press — enabled large-scale democracies but also laid the groundwork for centralized control.
Democracy, Harari argues, is not just about voting but about creating self-correcting systems, such as a free press and checks and balances. While the Founding Fathers of the United States made grave mistakes, such as endorsing slavery, they also created mechanisms to rectify those errors. Totalitarian regimes, by contrast, aim to centralize all information and suppress independent decision-making.
Harari highlights the contrasting uses of technology: democracies decentralize information, fostering dialogue and transparency, whereas totalitarian regimes centralize it, aiming for absolute control. Yet, even democracies face challenges. Surveillance capitalism and AI threaten to erode transparency and accountability, making the preservation of self-correcting mechanisms more critical than ever.
AI: The Alien Intelligence Among Us
Harari’s analysis of AI is both illuminating and unsettling. He distinguishes intelligence — the ability to achieve goals — from consciousness — the capacity to feel emotions. AI lacks consciousness but excels at processing data and influencing human behavior. Harari warns that AI could reshape society by becoming an active participant in information networks, making decisions and generating ideas alien to human reasoning.
The alignment problem looms large: how can we ensure that AI systems pursue goals aligned with human values? Harari compares this to the historical challenge of guiding human creativity, which has driven both progress and destruction. He advocates for training AI to recognize its own fallibility, drawing inspiration from Socratic wisdom.
One of Harari’s most striking warnings is that humanity risks living inside the “dreams” of AI. Without safeguards, AI could distort reality, impose new world orders, and annihilate privacy. Yet, Harari remains hopeful, emphasizing that humans still have the power to shape these outcomes — if we act wisely and quickly.
The Struggle Between Truth and Order
Every information network, Harari explains, must balance two priorities: discovering truth and maintaining order. Mythology and bureaucracy, for example, often sacrifice truth for the sake of stability. This trade-off persists in modern networks, from governments to corporations.
Harari underscores the importance of strong self-correcting mechanisms. He draws parallels to evolution, which operates through trial and error. Just as life evolved over billions of years, human institutions must embrace self-correction to avoid catastrophic mistakes.
Global Cooperation in the AI Era
Harari concludes with a call for global cooperation. He emphasizes that the regulation of AI will require unprecedented trust and discipline, as governments and corporations must prioritize the long-term interests of humanity over short-term gains. Harari highlights the historical trend toward increasing cooperation, from tribes to states, and argues that this legacy can guide us in addressing the challenges of the AI revolution.
Despite fears of escalating conflict, Harari points out that modern societies have deprioritized militarism in favor of welfare and education. However, he warns that this progress is reversible if leaders like Putin embrace a dog-eat-dog worldview. In the AI era, Harari notes, the ultimate predator may not be human at all.
Humans: Intelligent but Self-Destructive
Harari offers a provocative paradox: if we are so wise, why are we so self-destructive? We are capable of creating nuclear missiles and super-intelligent algorithms, and we go ahead and build them even though a failure to control them could destroy us. The fault, Harari argues, lies not in our nature but in our information networks. These networks prioritize order over truth, producing power but little wisdom.
Even if Homo sapiens were to destroy itself, Harari reminds us, the universe would carry on. Evolution might replace us with highly intelligent rats in 100 million years. But we are here now, and the decisions we make today will determine whether we write a hopeful new chapter in evolution or make a fatal error.
Final Thoughts
Yuval Noah Harari’s Nexus is a deeply insightful and thought-provoking examination of humanity’s past, present, and future. It challenges readers to confront uncomfortable truths about our information networks, the tension between truth and order, and the unprecedented challenges posed by AI.
Harari’s ultimate message is both sobering and hopeful: while humanity’s information networks have often prioritized power over wisdom, we have the capacity to change. By building robust, self-correcting institutions and embracing global cooperation, we can ensure that our extraordinary intelligence serves as a force for benevolence rather than destruction. The stakes couldn’t be higher, but as Harari reminds us, the universe is patient — it is up to us to decide what our legacy will be.