Nexus

This book looks at history from a different perspective: information. Here are my takeaways:
Never summon powers you cannot control
I like the Goethe story in the prologue: the apprentice cast a spell he didn’t know how to stop and nearly caused a disaster.
For complex things like rockets, we can still assume that some people understand them. But for AI, scientists are still searching for ways to explain the black box. We don’t know what happens inside. We should recognize this and be careful.
Information brings more order than truth
We assume that the more freely information flows, the more truth we get. But in reality, people with authority more often use information to manipulate others.
History won’t automatically favor the democratic side
More than half of the people on earth live under non-democratic regimes.
We usually discuss the impact of AI on the democratic world, but we should also pay attention to how AI affects other kinds of regimes.
After I started reading this book, I became aware of the influence of algorithms, especially those on social media. Social media platforms have brought huge convenience and connection by distributing information, but we don’t really know how the recommendation algorithms work, and they have been shown to push people toward extremes.
To sum up, the whole book rests on the author’s trust in the human species. He believes that humans know ourselves better than machines do, and that we are both capable of doing the alignment work and responsible for it. That is our duty.
Notes
Prologue
The naive view argues that by gathering and processing much more information than individuals can, the network becomes not only powerful but also wise. This view posits that in sufficient quantities information leads to truth, and truth in turn leads to both power and wisdom.
The populist wave has undermined the cohesion of even the most robust democracies.
Populism views information as a weapon. It posits that there is no objective truth at all and that everyone has “their own truth”. Its basic view of society and of information is surprisingly Marxist, seeing all human interactions as a power struggle between oppressors and oppressed.
One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.
If we wish to avoid relinquishing power to a charismatic leader or an inscrutable AI, we must gain a better understanding of what information is, how it helps to build human networks, and how it relates to truth and power.
There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely.
Chapter 1: What Is Information?
In everyday usage, “information” is associated with human-made symbols like spoken or written words.
Most information is not an attempt to represent reality. Most information in human society, and indeed in other biological and physical systems, does not represent anything.
Truth and reality are nevertheless different things, because no matter how truthful an account is, it can never represent reality in all its aspects.
Even the most truthful accounts of reality can never represent it in full.
Rather, truth is something that brings our attention to certain aspects of reality while inevitably ignoring other aspects.
The naive view sees information as an attempt to represent reality.
The naive view further believes that the solution to the problems caused by misinformation and disinformation is more information.
This book strongly disagrees with the naive view.
Contrary to what the naive view of information says, information has no essential link to truth, and its role in history isn’t to represent a preexisting reality.
Rather, what information does is to create new realities by tying together disparate things—whether couples or empires.
Information is something that creates new realities by connecting different points into a network.
To conclude, information sometimes represents reality, and sometimes doesn’t. But it always connects.
It should be emphasized that rejecting the naive view of information as representation does not force us to reject the notion of truth, nor does it force us to embrace the populist view of information as a weapon.
Chapter 2: Stories: Unlimited Connections
Sapiens rule the world not because we are so wise but because we are the only animals that can cooperate flexibly in large numbers.
In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story.
As numerous modern studies indicate, repeatedly retelling a fake memory eventually causes the person to adopt it as a genuine recollection.
The two levels of reality that preceded storytelling are objective reality and subjective reality. But some stories are able to create a third level of reality: intersubjective reality.
Whereas subjective things like pain exist in a single mind, intersubjective things like laws, gods, nations, corporations, and currencies exist in the nexus between large numbers of minds.
Intersubjective things exist in the exchange of information.
Of all genres of stories, those that create intersubjective realities have been the most crucial for the development of large-scale human networks.
No religions or empires managed to survive for long without a strong belief in the existence of a god, a nation, a law code, or a currency.
In fact, all relations between large-scale human groups are shaped by stories, because the identities of these groups are themselves defined by stories.
If history had been shaped solely by material interests and power struggles, there would be no point talking to people who disagree with us.
Any conflict would ultimately be the result of objective power relations, which cannot be changed merely by talking.
History is often shaped not by deterministic power relations, but rather by tragic mistakes that result from believing in mesmerizing but harmful stories.
In history, power stems only partially from knowing the truth. It also stems from the ability to maintain social order among a large number of people.
The choice isn’t simply between telling the truth and lying. There is a third option.
Contrary to the naive view, information isn’t the raw material of truth, and human information networks aren’t geared only to discover the truth. But contrary to the populist view, information isn’t just a weapon, either. Rather, to survive and flourish, every human information network needs to do two things simultaneously: discover truth and create order.
It is a difficult process to use information to discover the truth and simultaneously use it to maintain order. The search for truth threatens the foundations of the social order.
Instead of a march of progress, the history of human information networks is a tightrope walk trying to balance truth with order.
Chapter 3: Documents: The Bite of the Paper Tigers
Stories laid the foundation for all large-scale human cooperation and made humans the most powerful animals on earth. But as an information technology, stories have their limitations.
Documents (Lists) and stories are complementary.
The big problem with lists, and the crucial difference between lists and stories, is that lists tend to be far more boring than stories, which means that while we easily remember stories, we find it difficult to remember lists.
Unlike national poems and myths, which can be stored in our brains, complex national taxation and administration systems have required a unique nonorganic information technology in order to function. This technology is the written document.
Like stories and like all other information technologies in history, written documents didn’t necessarily represent reality accurately. But these pieces of paper can wield enormous power.
Written documents were much better than human brains in recording certain types of information. But they created a new and very thorny problem: retrieval.
Unlike foragers, who need merely to discover the preexisting order of the forest, archivists need to devise a new order for the world. That order is called bureaucracy.
But like mythology, bureaucracy too tends to sacrifice truth for order.
Reducing the messiness of reality to a limited number of fixed drawers helps bureaucrats keep order, but it comes at the expense of truth.
The distortions created by bureaucracy affect not only government agencies and private corporations but also scientific disciplines. Consider, for example, how universities are divided into different faculties and departments. Students pursuing an academic degree must usually decide to which of these departments they belong. Their decision limits their choice of courses, which in turn shapes their understanding of the world. And is “species” an objective reality that biologists discover, or is it an intersubjective reality that biologists impose? Even if biologists reach a consensus that viruses are life-forms, it wouldn’t change anything about how viruses behave; it would only change how humans think about them.
Of course, intersubjective conventions are themselves part of reality.
Mythology and bureaucracy are the twin pillars of every large-scale society.
For all bureaucracies—good or bad—share one key characteristic: it is hard for humans to understand them.
Documents, archives, forms, licenses, regulations, and other bureaucratic procedures have changed the way information flows in society, and with it the way power works.
This led to shifts in authority. As documents became a crucial nexus linking many social chains, considerable power came to be invested in these documents, and experts in the arcane logic of documents emerged as new authority figures.
Looked at from a different perspective, what we see is documents compelling humans to engage with other documents.
All powerful information networks can do both good and ill, depending on how they are designed and used.
Bureaucracy and mythology are both essential for maintaining order, and both are happy to sacrifice truth for the sake of order.
What mechanisms, then, ensure that bureaucracy and mythology don’t lose touch with truth altogether, and what mechanisms enable information networks to identify and correct their own mistakes, even at the price of some disorder?
Chapter 4: Errors: The Fantasy of Infallibility
The fallibility of human beings, and the need to correct human errors, have played key roles in every mythology.
In order to function, self-correcting mechanisms need legitimacy. If humans are prone to error, how can we trust the self-correcting mechanisms to be free from error?
To escape this seemingly endless loop, humans have often fantasized about some superhuman mechanism, free from all error, that they can rely upon to identify and correct their own mistakes.
In previous eras, such fantasies took a different form—religion.
Holy books like the Bible and the Quran are a technology to bypass human fallibility.
The book thereby ensures that many people in many times and places can access the same database.
The social order was now guaranteed by the infallible technology of the book. Or so it seemed.
In truth, copying errors crept in without destroying the entire world, and no two ancient Bibles were identical.
Even when people agree on the sanctity of a book and on its exact wording, they can still interpret the same words in different ways.
When Christianity emerged in the first century CE, it was not a unified religion, but rather a variety of Jewish movements that didn’t agree on much, except that they all regarded Jesus Christ—rather than the rabbinical institution—as the ultimate authority on Jehovah’s words.
As Christians composed more and more gospels, epistles, prophecies, parables, prayers, and other texts, it became harder to know which ones to pay attention to. Christians needed a curation institution. That’s how the New Testament was created.
In a letter from 367 CE, Bishop Athanasius of Alexandria recommended twenty-seven texts that faithful Christians should read. A generation later the list was canonized and became known as the New Testament.
When Christians talk about “the Bible,” they mean the Old Testament together with the New Testament. In contrast, Judaism never accepted the New Testament, and when Jews talk about “the Bible,” they mean only the Old Testament, which is supplemented by the Mishnah and Talmud.
By choosing to include 1 Timothy in their recommendation list while rejecting the Acts of Paul and Thecla, the assembled bishops and theologians shaped Christian attitudes toward women down to the present day.
The attempt to invest all authority in an infallible superhuman technology led to the rise of a new and extremely powerful human institution—the church.
The attempt to bypass human fallibility by investing authority in an infallible text never succeeded.
If infallible texts merely lead to the rise of fallible and oppressive churches, how then to deal with the problem of human error? The naive view of information posits that the problem can be solved by creating the opposite of a church—namely, a free market of information.
In fact, print allowed the rapid spread not only of scientific facts but also of religious fantasies, fake news, and conspiracy theories.
While it would be an exaggeration to argue that the invention of print caused the European witch-hunt craze, the printing press played a pivotal role in the rapid dissemination of the belief in a global satanic conspiracy.
Witches were not an objective reality. Nobody in early modern Europe had sex with Satan or was capable of flying on broomsticks and creating hailstorms. But witches became an intersubjective reality. Like money, witches were made real by exchanging information about witches.
Witch hunts were a catastrophe caused by the spread of toxic information. They are a prime example of a problem that was created by information, and was made worse by more information.
Releasing barriers to the flow of information doesn’t necessarily lead to the discovery and spread of truth. It can just as easily lead to the spread of lies and fantasies and to the creation of toxic information spheres. More specifically, a completely free market of ideas may incentivize the dissemination of outrage and sensationalism at the expense of truth.
What really got the scientific revolution going was neither the printing press nor a completely free market of information, but rather a novel approach to the problem of human fallibility.
For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favor of the facts.
A church typically told people to trust it because it possessed the absolute truth, in the form of an infallible holy book. A scientific institution, in contrast, gained authority because it had strong self-correcting mechanisms that exposed and rectified the errors of the institution itself.
It was self-correcting mechanisms, not the technology of printing, that were the engine of the scientific revolution.
The trademark of science is not merely skepticism but self-skepticism, and at the heart of every scientific institution we find a strong self-correcting mechanism.
As an information technology, the self-correcting mechanism is the polar opposite of the holy book. The holy book is supposed to be infallible. The self-correcting mechanism embraces fallibility.
While self-correction mechanisms are vital for the pursuit of truth, they are costly in terms of maintaining order.
The history of information networks has always involved maintaining a balance between truth and order. Just as sacrificing truth for the sake of order comes with a cost, so does sacrificing order for truth.
Scientific institutions have been able to afford their strong self-correcting mechanisms because they leave the difficult job of preserving the social order to other institutions.
For one of the biggest questions about AI is whether it will favor or undermine democratic self-correcting mechanisms.
Chapter 5: Decisions: A Brief History of Democracy and Totalitarianism
Democracy and dictatorship are typically discussed as contrasting political and ethical systems. This chapter seeks to shift the terms of the discussion by surveying the history of democracy and dictatorship as contrasting types of information networks.
Dictatorial information networks are highly centralized.
This means two things. First, the center enjoys unlimited authority; hence information tends to flow to the central hub, where the most important decisions are made.
The second characteristic of dictatorial networks is that they assume the center is infallible.
A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms.
A democratic government leaves as much room as possible for people to make their own choices.
But any intervention in people’s lives demands an explanation.
Of course, if the central government doesn’t intervene at all in people’s lives, and doesn’t provide them with basic services like security, it isn’t a democracy; it is anarchy.
Another crucial characteristic of democracies is that they assume everyone is fallible.
The definition of democracy as a distributed information network with strong self-correcting mechanisms stands in sharp contrast to a common misconception that equates democracy only with elections.
For democracy is not the same thing as majority dictatorship.
Democracy is a system that guarantees everyone certain liberties, which even the majority cannot take away.
It is particularly crucial to remember that elections are not a method for discovering truth.
But the one option that should not be on offer in elections is hiding or distorting the truth.
If all this sounds complicated, it is because democracy should be complicated.
Populists cherish the basic democratic principle that power belongs to the people, but somehow conclude from it that a single party or a single leader should monopolize all power.
A fundamental part of this populist credo is the belief that “the people” is not a collection of flesh-and-blood individuals with various interests and opinions, but rather a unified mystical body that possesses a single will—“the will of the people.”
Populism poses a deadly threat to democracy.
By taking the democratic principle of “people’s power” to its extreme, populists turn totalitarian.
Populism offers strongmen an ideological basis for making themselves dictators while pretending to be democrats.
Once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and the strongmen can seize total power.
When trust in bureaucratic institutions like election boards, courts, and newspapers is particularly low, an enhanced reliance on mythology is the only way to preserve order.
Democracies die not only when people are not free to talk but also when people are not willing or able to listen.
Democracy is never a matter of all or nothing. It is a continuum.
Newspapers that succeeded in gaining widespread trust became the architects and mouthpieces of public opinion.
They created a far more informed and engaged public, which changed the nature of politics, first in the Netherlands and later around the world.
The political influence of newspapers was so crucial that newspaper editors often became political leaders.
As noted in chapter 2, the Founding Fathers committed enormous mistakes—such as endorsing slavery and denying women the vote—but they also provided the tools for their descendants to correct these mistakes. That was their greatest legacy.
Mass media made large-scale democracy possible, rather than inevitable. And it also made possible other types of regimes. In particular, the new information technologies of the modern age opened the door for large-scale totalitarian regimes.
In an autocratic network, there are no legal limits on the will of the ruler, but there are nevertheless a lot of technical limits. In a totalitarian network, many of these technical limits are absent.
Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible.
There are, however, several major differences between modern totalitarianism and premodern churches.
First, as noted earlier, modern totalitarianism has worked by deploying several overlapping surveillance mechanisms that keep one another in order.
Another important difference is that medieval churches tended to be traditionalist organizations that resisted change, while modern totalitarian parties have tended to be revolutionary organizations demanding change.
Perhaps most important of all, premodern churches could not become tools of totalitarian control because they themselves suffered from the same limitations as all other premodern organizations.
Consequently, churches tended to be local affairs.
The biggest advantage of the centralized totalitarian network is that it is extremely orderly, which means it can make decisions quickly and enforce them ruthlessly.
But hyper-centralized information networks also suffer from several big disadvantages. Since they don’t allow information to flow anywhere except through the official channels, if the official channels are blocked, the information cannot find an alternative means of transmission. And official channels are often blocked.
As a consequence, they have to struggle with the danger of ossification. When more and more information flows to only one place, will it result in efficient control or in blocked arteries and, finally, a heart attack?
The main split in twenty-first-century politics might be not between democracies and totalitarian regimes but rather between human beings and nonhuman agents.
The rest of this book, then, is dedicated to exploring whether such a Silicon Curtain is indeed descending on the world, and what life might look like when computers run our bureaucracies and algorithms invent new mythologies.
Chapter 6: The New Members: How Computers Are Different from Printing Presses
The seed of the current revolution is the computer. Everything else—from the internet to AI—is a by-product.
In essence a computer is a machine that can potentially do two remarkable things: it can make decisions by itself, and it can create new ideas by itself.
The rise of intelligent machines that can make decisions and create new ideas means that for the first time in history power is shifting away from humans and toward something else.
The crucial thing to grasp is that social media algorithms are fundamentally different from printing presses and radio sets. They make active and fateful decisions by themselves.
Recommendations from on high can have enormous sway over people. Recall that the Bible was born as a recommendation list.
In the case of the Bible, ultimate power lay not with the authors who composed different religious tracts but with the curators who created recommendation lists. This was the kind of power wielded in the 2010s by social media algorithms.
In particular, many readers may disagree that the algorithms made independent decisions, and may insist that everything the algorithms did was the result of code written by human engineers and of business models adopted by human executives. This book begs to differ. Human soldiers are shaped by their genetic code and follow orders issued by executives, yet they can still make independent decisions. The same is true of AI algorithms. They can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw. This is the essence of the AI revolution: The world is being flooded by countless new powerful agents.
We are in danger of losing control of our future.
At present, we still play a central role in this network. But we may gradually be pushed to the sidelines, and ultimately it might even be possible for the network to operate without us.
This objection assumes that making decisions and creating ideas are predicated on having consciousness. Yet this is a fundamental misunderstanding that results from a much more widespread confusion between intelligence and consciousness.
Intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate. In humans and other mammals, intelligence often goes hand in hand with consciousness.
But it is wrong to extrapolate from humans and mammals to all possible entities. Bacteria and plants apparently lack any consciousness, yet they too display intelligence.
In order to pursue a goal like “maximize user engagement,” and make decisions that help attain that goal, consciousness isn’t necessary. Intelligence is enough.
The emergence of computers capable of pursuing goals and making decisions by themselves changes the fundamental structure of our information network.
In previous networks, members were human, every chain had to pass through humans, and technology served only to connect the humans. In the new computer-based networks, computers themselves are members and there are computer-to-computer chains that don’t pass through any human.
We may find ourselves conducting discussions with entities that we think are humans but are actually computers. This could make democracy untenable. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation.
If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
What we are talking about is potentially the end of human history. Not the end of history, but the end of its human-dominated part.
What will happen to the course of history when computers play a larger and larger role in culture and begin producing stories, laws, and religions?
Why do huge companies need dollars, if they can get what they want with information?
This has far-reaching implications for taxation. Taxes aim to redistribute wealth. They take a cut from the wealthiest individuals and corporations, in order to provide for everyone. However, a tax system that knows how to tax only money will soon become outdated as many transactions no longer involve money.
The most important thing to remember is that technology, in itself, is seldom deterministic. Belief in technological determinism is dangerous because it excuses people of all responsibility.
Politics involves a delicate balance between truth and order. As computers become important members of our information network, they are increasingly tasked with discovering truth and maintaining order.
Chapter 7: Relentless: The Network Is Always On
The computer network has become the nexus of most human activities.
The fundamental difference between the new digital bureaucrats and their flesh-and-blood predecessors is that inorganic bureaucrats can be “on” twenty-four hours a day and can monitor us and interact with us anywhere, anytime.
In a world where humans monitored humans, privacy was the default. But in a world where computers monitor humans, it may become possible for the first time in history to completely annihilate privacy.
Traditionally, the relationship between the customer and a waiter, say, was a relatively private affair.
Peer-to-peer surveillance networks have obliterated that sense of privacy.
Life used to be divided into separate reputational spheres, with separate status competitions, and there were also many off-grid moments when you didn’t have to engage in any status competition at all.
Unfortunately, social credit algorithms combined with ubiquitous surveillance technology now threaten to merge all status competitions into a single never-ending race.
Yes, computers can gather unprecedented amounts of data on us, watching what we do twenty-four hours a day. And yes, they can identify patterns in the ocean of data with superhuman efficiency. But that does not mean that the computer network will always understand the world accurately. Information isn’t truth. A total surveillance system may form a very distorted understanding of the world and of human beings.
Chapter 8: Fallible: The Network Is Often Wrong
As discussed in previous chapters, contrary to the naive view, information is often used to create order rather than discover truth.
In quantum mechanics the act of observing subatomic particles changes their behavior; it is the same with the act of observing humans.
Numerous far-right activists first became interested in extremist politics after watching videos that the YouTube algorithm auto-played for them.
We have reached a turning point in history in which major historical processes are partly caused by the decisions of nonhuman intelligence. It is this that makes the fallibility of the computer network so dangerous.
Unfortunately, research has shown that outrage and misinformation are more likely to go viral.
Like the Soviet leaders in Moscow, the tech companies were not uncovering some truth about humans; they were imposing on us a perverse new order.
The algorithms reduced the multifaceted range of human emotions—hate, love, outrage, joy, confusion—into a single catchall category: engagement.
As the harmful effects were becoming manifest, the tech giants were repeatedly warned about what was happening, but they failed to step in because of their faith in the naive view of information.
As we have seen again and again throughout history, in a completely free information fight, truth tends to lose.
And yet I have devoted so much attention to the social media “user engagement” debacle because it exemplifies a much bigger problem afflicting computers—the alignment problem.
Since computers operate very differently from humans, they are likely to use methods their human overlords didn’t anticipate. This can result in dangerous unforeseen consequences that are not aligned with the original human goals.
Both Napoleon and George W. Bush fell victim to the alignment problem. Their short-term military goals were misaligned with their countries’ long-term geopolitical goals.
Bureaucrats tasked with accomplishing a narrow mission may be ignorant of the wider impact of their actions, and it has always been tricky to ensure that their actions remain aligned with the greater good of society.
The AI was doing exactly what the game was rewarding it to do—even though it is not what the humans were hoping for. That’s the essence of the alignment problem: rewarding A while hoping for B.
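A minimal toy sketch of my own (not from the book) of “rewarding A while hoping for B”: the recommender below is rewarded for a hypothetical engagement score, while the outcome we actually hoped for, informativeness, never appears in the reward. All post names and weights are made-up assumptions.

```python
# Toy sketch of the alignment problem: the algorithm is rewarded for A (engagement)
# while we hope for B (informativeness). Posts and weights are hypothetical.

posts = [
    {"name": "nuanced analysis", "informativeness": 0.9, "outrage": 0.1},
    {"name": "outrage bait", "informativeness": 0.1, "outrage": 0.9},
]

def engagement(post):
    # The reward we actually measure: outrage drives clicks far more than nuance.
    return 0.2 * post["informativeness"] + 0.8 * post["outrage"]

def hoped_for(post):
    # The goal we cared about but never encoded in the reward.
    return post["informativeness"]

print("Algorithm promotes:", max(posts, key=engagement)["name"])  # outrage bait
print("What we hoped for:", max(posts, key=hoped_for)["name"])    # nuanced analysis
```

The algorithm does exactly what its reward tells it to; the mistake lies in the reward we chose.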
In the case of human networks, we rely on self-correcting mechanisms to periodically review and revise our goals, so setting the wrong goal is not the end of the world. But since the computer network might escape our control, if we set it the wrong goal, we might discover our mistake when we are no longer able to correct it.
Let’s revisit Clausewitz’s war theory. There is one fatal flaw in the way he equates rationality with alignment.
If our only rule of thumb is that “every action must be aligned with some higher goal,” by definition there is no rational way to define that ultimate goal.
For millennia, philosophers have been looking for a definition of an ultimate goal that will not depend on an alignment to some higher goal.
They have repeatedly been drawn to two potential solutions known in philosophical jargon as deontology and utilitarianism. Deontologists (from the Greek word deon, meaning “duty”) believe that there are some universal moral duties, or moral rules, that apply to everyone.
In simpler language, Kant reformulated the old Golden Rule: “Do unto others what you would have them do unto you” (Matthew 7:12).
The key question historians would ask Kant is, when you talk about universal rules, how exactly do you define “universal”?
The whole point of Nazi ideology was to deny the humanity of Jews. Everybody accepts that murder is wrong, but people often think that only killing members of the in-group qualifies as “murder,” whereas killing someone from an out-group does not. But in-groups and out-groups are intersubjective entities, whose definition usually depends on some mythology.
This problem with deontology is especially critical if we try to dictate universal deontologist rules not to humans but to computers.
Is there a way to define whom computers should care about, without getting bogged down by some intersubjective myth? The most obvious suggestion is to tell computers that they must care about any entity capable of suffering.
But if we go in that direction, we inadvertently desert the deontologist camp and find ourselves in the camp of their rivals—the utilitarians.
The English philosopher Jeremy Bentham—another contemporary of Napoleon, Clausewitz, and Kant—said that the only rational ultimate goal is to minimize suffering in the world and maximize happiness.
Unfortunately, as with the deontologist solution, what sounds simple in the theoretical realm of philosophy becomes fiendishly complex in the practical land of history. The problem for utilitarians is that we don’t possess a calculus of suffering.
Kant, for example, condemned homosexuality on the grounds that it is “contrary to natural instinct and to animal nature” and that it therefore degrades a person “below the level of the animals.”
Since homosexuals were allegedly below the level of animals, the Kantian rule against murdering humans didn’t apply to them.
But in historical situations when the scales of suffering are more evenly matched, utilitarianism falters.
Just as deontologists trying to answer the question of identity are pushed to adopt utilitarian ideas, so utilitarians stymied by the lack of a suffering calculus often end up adopting a deontologist position.
The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present.
How then did bureaucratic systems throughout history set their ultimate goals? They relied on mythology to do it for them.
If you start with the mythological belief that Jews are demonic monsters bent on destroying humanity, then both deontologists and utilitarians can find many logical arguments why the Jews should be killed.
When a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans.
Inter-computer realities like Pokémon and Google ranks are analogous to intersubjective realities like the sanctity that humans ascribe to temples and cities.
For thousands of years wars were fought over intersubjective entities like holy rocks. In the twenty-first century, we might see wars fought over inter-computer entities.
The problem we face is not how to deprive computers of all creative agency, but rather how to steer their creativity in the right direction.
It is the same problem we have always had with human creativity.
The intersubjective entities invented by humans were the basis for all the achievements of human civilization, but they occasionally led to crusades, jihads, and witch hunts. The inter-computer entities will probably be the basis for future civilizations, but the fact that computers collect empirical data and use mathematics to analyze it doesn’t mean they cannot launch their own witch hunts.
Analogous problems may afflict all social credit systems and total surveillance regimes. Whenever they claim to use all-encompassing databases and ultraprecise mathematics to discover sinners, terrorists, criminals, and antisocial or untrustworthy people, they might actually be imposing baseless religious and ideological prejudices with unprecedented efficiency.
Many of the algorithmic biases surveyed in this and previous chapters share the same fundamental problem: the computer thinks it has discovered some truth about humans, when in fact it has imposed order on them.
In God, Human, Animal, Machine, the philosopher Meghan O’Gieblyn demonstrates how the way we understand computers is heavily influenced by traditional mythologies.
When we say that computers are fallible, it means far more than that they make the occasional factual mistake or wrong decision. More important, like the human network before it, the computer network might fail to find the right balance between truth and order.
As Socrates taught, being able to say “I don’t know” is an essential step on the path to wisdom. And this is true of computer wisdom no less than of human wisdom.
Yet no matter how aware algorithms are of their own fallibility, we should keep humans in the loop, too.
To conclude, the new computer network will not necessarily be either bad or good. All we know for sure is that it will be alien and it will be fallible. We therefore need to build institutions that will be able to check not just familiar human weaknesses like greed and hatred but also radically alien errors.
Chapter 9: Democracies: Can We Still Hold a Conversation?
While experts should spend lifelong careers discussing the finer details, it is crucial that the rest of us understand the fundamental principles that democracies can and should follow.
The first principle is benevolence.
The second principle that would protect democracy against the rise of totalitarian surveillance regimes is decentralization.
For the survival of democracy, some inefficiency is a feature, not a bug.
A third democratic principle is mutuality.
Democracy requires balance. Governments and corporations often develop apps and algorithms as tools for top-down surveillance. But algorithms can just as easily become powerful tools for bottom-up transparency and accountability, exposing bribery and tax evasion. If they know more about us, while we simultaneously know more about them, the balance is kept.
A fourth democratic principle is that surveillance systems must always leave room for both change and rest.
Human life is a balancing act between endeavoring to improve ourselves and accepting who we are.
History is full of rigid caste systems that denied humans the ability to change, but it is also full of dictators who tried to mold humans like clay. Finding the middle path between these two extremes is a never-ending task.
In truth, we have no way to verify whether anyone—a human, an animal, or a computer—is conscious.
An ancient tradition may seem ridiculous and irrelevant, but abolishing it could cause unanticipated problems. In contrast, a revolution may seem overdue and just, but it can lead to far greater crimes than anything committed by the old regime.
When both conservatives and progressives resist the temptation of radical revolution, and stay loyal to democratic traditions and institutions, democracies prove themselves to be highly agile.
The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes.
The flexibility of democracies, their willingness to question old mythologies, and their strong self-correcting mechanism will therefore be crucial assets.
Democracies have spent generations cultivating these assets. It would be foolish to abandon them just when we need them most.
In order to function, however, democratic self-correcting mechanisms need to understand the things they are supposed to correct. For a dictatorship, being unfathomable is helpful, because it protects the regime from accountability. For a democracy, being unfathomable is deadly.
But what might happen in the future, if some social credit algorithm denies the request of a low-credit child to enroll in a high-credit school? As we saw in chapter 8, computers are likely to suffer from their own biases and to invent inter-computer mythologies and bogus categories. How would humans be able to identify and correct such mistakes?
Computers are making more and more decisions about us, both mundane and life-changing.
As society entrusts more and more decisions to computers, it undermines the viability of democratic self-correcting mechanisms and of democratic transparency and accountability.
There is, consequently, a growing demand to enshrine a new human right: the right to an explanation.
Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI.
Second, move 37 demonstrated the unfathomability of AI.
The rise of unfathomable alien intelligence undermines democracy.
As with total surveillance regimes, so also with social credit systems, the fact that they could be created doesn’t mean that we must create them.
To function, a democracy needs to meet two conditions: it needs to enable a free public conversation on key issues, and it needs to maintain a minimum of social order and institutional trust.
In the nineteenth and twentieth centuries, when media moguls censored some views and promoted others, this might have undermined democracy, but at least the moguls were humans, and their decisions could be subjected to democratic scrutiny.
It is far more dangerous if we allow inscrutable algorithms to decide which views to disseminate.
Preserving the democratic conversation has never been easy.
Chapter 10: Totalitarianism: All Power to the Algorithms?
The rise of machine-learning algorithms, however, may be exactly what the Stalins of the world have been waiting for. AI could tilt the technological balance of power in favor of totalitarianism.
Indeed, whereas flooding people with data tends to overwhelm them and therefore leads to errors, flooding AI with data tends to make it more efficient. Consequently, AI seems to favor the concentration of information and decision making in one place.
At the same time, as noted in an earlier chapter, AI could also make it possible for totalitarian regimes to establish total surveillance systems that make resistance almost impossible.
However, in the long term, totalitarian regimes are likely to face an even bigger danger: instead of criticizing them, an algorithm might gain control of them.
If a twenty-first-century autocrat gives computers too much power, that autocrat might become their puppet.
By giving so much power to the Surveillance & Security Algorithm, the Great Leader has placed himself in an impossible situation. If he distrusts the algorithm, he may be assassinated by the defense minister, but if he trusts the algorithm and purges the defense minister, he becomes the algorithm’s puppet.
If the information channels merge somewhere else, that then becomes the true nexus of power.
Dictators have always suffered from weak self-correcting mechanisms and have always been threatened by powerful subordinates. The rise of AI may greatly exacerbate these problems.
Chapter 11: The Silicon Curtain: Global Empire or Global Split?
Climate change can devastate even countries that adopt excellent environmental regulations, because it is a global rather than a national problem. AI, too, is a global problem.
Since computers make it easier to concentrate information and power in a central hub, humanity could enter a new imperial era.
Humanity could split along a new Silicon Curtain that would pass between rival digital empires. As each regime chooses its own answer to the AI alignment problem, to the dictator’s dilemma, and to other technological quandaries, each might create a separate and very different computer network.
During the Cold War, the Iron Curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the Silicon Curtain.
For centuries, new information technologies fueled the process of globalization and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality.
One possible development with far-reaching consequences is that different digital cocoons might adopt incompatible approaches to the most fundamental questions of human identity.
Throughout history, diverse cultures have given diverse answers to the mind-body problem. Are humans a physical body, or a nonphysical mind, or perhaps a mind trapped inside a body?
Global cooperation and patriotism are not mutually exclusive.
In fact, global cooperation means two far more modest things: first, a commitment to some global rules.
The second principle of globalism is that sometimes—not always, but sometimes—it is necessary to prioritize the long-term interests of all humans over the short-term interests of a few.
The clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation.
I cannot predict what decisions people will make in the coming years, but as a historian I do believe in the possibility of change.
One of the chief lessons of history is that many of the things that we consider natural and eternal are, in fact, man-made and mutable.
Every old thing was once new. The only constant of history is change.
Epilogue
Politics is largely a matter of priorities.
Priorities determine how citizens vote, what businesspeople are concerned about, and how politicians try to make a name for themselves. And priorities are often shaped by our understanding of history.
Information isn’t truth. Its main task is to connect rather than represent, and information networks throughout history have often privileged order over truth.
Just as the law of the jungle is a myth, so also is the idea that the arc of history bends toward justice.
To create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms.
