Nexus by Yuval Noah Harari
How strongly I recommend this book: 7 / 10
Date read: December 03, 2024
Get this book on Amazon
Favorite Quotes and Chapter Notes
I went through my notes and captured key quotes from all chapters below.
P.S. – Highly recommend Readwise if you want to get the most out of your reading.
Highlights and Notes
Prologue
-
The main argument of this book is that humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to use that power unwisely. Our problem, then, is a network problem. Even more specifically, it is an information problem. Information is the glue that holds networks together.
-
The naive view is of course more nuanced and thoughtful than can be explained in a few paragraphs, but its core tenet is that information is an essentially good thing, and the more we have of it, the better. Given enough information and enough time, we are bound to discover the truth about things ranging from viral infections to racist biases, thereby developing not only our power but also the wisdom necessary to use that power well. This naive view justifies the pursuit of ever more powerful information technologies and has been the semiofficial ideology of the computer age and the internet.
-
In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10 percent chance to advanced AI leading to outcomes as bad as human extinction.
-
People who single out China, Russia, or a post-democratic United States as their main source for totalitarian nightmares misunderstand the danger. In fact, Chinese, Russians, Americans, and all other humans are together threatened by the totalitarian potential of nonhuman intelligence. Given the magnitude of the danger, AI should be of interest to all human beings. While not everyone can become an AI expert, we should all keep in mind that AI is the first technology in history that can make decisions and create new ideas by itself.
-
Homo Deus hypothesized that if humans aren’t careful, we might dissolve within the torrent of information like a clump of earth within a gushing river, and that in the grand scheme of things humanity will turn out to have been just a ripple within the cosmic dataflow.
-
One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.
-
it is important to note that populists are eroding trust in large-scale institutions and international cooperation just when humanity confronts the existential challenges of ecological collapse, global war, and out-of-control technology.
-
Populists are right to be suspicious of the naive view of information, but they are wrong to think that power is the only reality and that information is always a weapon. Information isn’t the raw material of truth, but it isn’t a mere weapon, either. There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely. This book is dedicated to exploring that middle ground.
Part I: Human Networks
-
Though there is some controversy about the exact details, it is clear that the American artillery adjusted its barrage, and an American counterattack rescued the Lost Battalion. Cher Ami was tended by army medics, sent to the United States as a hero, and became the subject of numerous articles, short stories, children’s books, poems, and even movies. The pigeon had no idea what information he was conveying, but the symbols inked on the piece of paper he carried helped save hundreds of men from death and captivity.
-
The point is that even the most truthful accounts of reality can never represent it in full. There are always some aspects of reality that are neglected or distorted in every representation. Truth, then, isn’t a one-to-one representation of reality. Rather, truth is something that brings our attention to certain aspects of reality while inevitably ignoring other aspects. No account of reality is 100 percent accurate, but some accounts are nevertheless more truthful than others.
-
the naive view sees information as an attempt to represent reality. It is aware that some information doesn’t represent reality well, but it dismisses this as unfortunate cases of “misinformation” or “disinformation.” Misinformation is an honest mistake, occurring when someone tries to represent reality but gets it wrong. Disinformation is a deliberate lie, occurring when someone consciously intends to distort our view of reality.
-
According to the naive view, astronomers derive “real information” from the stars, while the information that astrologers imagine they read in constellations is either “misinformation” or “disinformation.” If only people were given more information about the universe, surely they would abandon astrology altogether. But the fact is that for thousands of years astrology has had a huge impact on history, and today millions of people still check their star signs before making the most important decisions of their lives, like what to study and whom to marry. As of 2021, the global astrology market was valued at $12.8 billion.
-
What the example of astrology illustrates is that errors, lies, fantasies, and fictions are information, too. Contrary to what the naive view of information says, information has no essential link to truth, and its role in history isn’t to represent a preexisting reality. Rather, what information does is to create new realities by tying together disparate things— whether couples or empires. Its defining feature is connection rather than representation, and information is whatever connects different points into a network. Information doesn’t necessarily inform us about things. Rather, it puts things in formation. Horoscopes put lovers in astrological formations, propaganda broadcasts put voters in political formations, and marching songs put soldiers in military formations.
-
If the main job of information had been to represent reality accurately, it would have been hard to explain why the Bible became one of the most influential texts in history.
-
To conclude, information sometimes represents reality, and sometimes doesn’t. But it always connects. This is its fundamental characteristic. Therefore, when examining the role of information in history, although it sometimes makes sense to ask “How well does it represent reality? Is it true or false?” often the more crucial questions are “How well does it connect people? What new network does it create?”
-
While information always connects, some types of information— from scientific books to political speeches— may strive to connect people by accurately representing certain aspects of reality. But this requires a special effort, which most information does not make. This is why the naive view is wrong to believe that creating more powerful information technology will necessarily result in a more truthful understanding of the world. If no additional steps are taken to tilt the balance in favor of truth, an increase in the amount and speed of information is likely to swamp the relatively rare and expensive truthful accounts with much more common and cheap types of information.
-
Contrary to what the naive view believes, Homo sapiens didn’t conquer the world because we are talented at turning information into an accurate map of reality. Rather, the secret of our success is that we are talented at using information to connect lots of individuals.
-
Instead of building a network from human-to-human chains alone— as the Neanderthals, for example, did— stories provided Homo sapiens with a new type of chain: human-to-story chains. In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story. And the same story can be familiar to billions of individuals. A story can thereby serve as a central connector, with an unlimited number of outlets into which an unlimited number of people can plug.
-
It may seem that in the case of ancient Chinese emperors, medieval Catholic popes, or modern corporate titans it has been a single flesh-and-blood human— rather than a story— that has served as a nexus linking millions of followers. But, of course, in all these cases almost none of the followers has had a personal bond with the leader. Instead, what they have connected to has been a carefully crafted story about the leader, and it is in this story that they have put their faith. Joseph Stalin, who stood at the nexus of one of the biggest personality cults in history, understood this well. When his troublesome son Vasily exploited his famous name to frighten and awe people, Stalin berated him. “But I’m a Stalin too,” protested Vasily. “No, you’re not,” replied Stalin. “You’re not Stalin and I’m not Stalin. Stalin is Soviet power. Stalin is what he is in the newspapers and the portraits, not you, no— not even me!”
-
Whatever the facts may be, the story of the self-sacrificing winged savior proved irresistible.
-
While branding campaigns are occasionally a cynical exercise of disinformation, most of the really big stories of history have been the result of emotional projections and wishful thinking. True believers play a key role in the rise of every major religion and ideology, and the Jesus story changed history because it gained an immense number of true believers.
-
Family is the strongest bond known to humans. One way that stories build trust between strangers is by making these strangers reimagine each other as family.
-
While most Christians were not physically present at the Last Supper, they have heard the story so many times, and they have seen so many images of the event, that they “remember” it more vividly than they remember most of the family dinners in which they actually participated.
-
some stories are able to create a third level of reality: intersubjective reality. Whereas subjective things like pain exist in a single mind, intersubjective things like laws, gods, nations, corporations, and currencies exist in the nexus between large numbers of minds. More specifically, they exist in the stories people tell one another. The information humans exchange about intersubjective things doesn’t represent anything that had already existed prior to the exchange of information; rather, the exchange of information creates these things.
-
Neanderthals lived in small isolated bands, and to the best of our knowledge different bands cooperated with one another only rarely and weakly, if at all.[18] Stone Age Sapiens too lived in small bands of a few dozen individuals. But following the emergence of storytelling, Sapiens bands no longer lived in isolation. Bands were connected by stories about things like revered ancestors, totem animals, and guardian spirits. Bands that shared stories and intersubjective realities constituted a tribe. Each tribe was a network connecting hundreds or even thousands of individuals.
-
The centrality of stories reveals something fundamental about the power of our species, and it explains why power doesn’t always go hand in hand with wisdom. The naive view of information says that information leads to truth, and knowing the truth helps people to gain both power and wisdom. This sounds reassuring.
-
All human political systems are based on fictions, but some admit it, and some do not. Being truthful about the origins of our social order makes it easier to make changes in it. If humans like us invented it, we can amend it. But such truthfulness comes at a price. Acknowledging the human origins of the social order makes it harder to persuade everyone to agree on it.
-
Contrary to the naive view, information isn’t the raw material of truth, and human information networks aren’t geared only to discover the truth. But contrary to the populist view, information isn’t just a weapon, either. Rather, to survive and flourish, every human information network needs to do two things simultaneously: discover truth and create order.
-
It is difficult to use information to discover the truth and simultaneously use it to maintain order. What makes things worse is that these two processes are often contradictory, because it is frequently easier to maintain order through fictions.
-
Religions, for example, always claim to be an objective and eternal truth rather than a fictional story invented by humans. In such cases, the search for truth threatens the foundations of the social order.
-
That’s a major reason why the history of human information networks isn’t a triumphant march of progress. While over the generations human networks have grown increasingly powerful, they have not necessarily grown increasingly wise. If a network privileges order over truth, it can become very powerful but use that power unwisely. Instead of a march of progress, the history of human information networks is a tightrope walk trying to balance truth with order.
-
Contrary to what the mission statements of corporations like Google and Facebook imply, simply increasing the speed and efficiency of our information technology doesn’t necessarily make the world a better place. It only makes the need to balance truth and order more urgent.
-
The yarns Bialik and Herzl wove ignored many crucial facts about contemporary reality, most notably that around 1900 the Jews of Palestine comprised only 6–9 percent of the region’s total population of about 600,000 people.[4] While disregarding such demographic facts, Bialik and Herzl accorded great importance to mythology, most notably the stories of the Bible, without which modern Zionism is unimaginable. Bialik and Herzl were also influenced by the nationalist myths that were created in the nineteenth century by almost every other ethnic group in Europe.
-
What makes us so good at remembering epic poems and long-running TV series is that long-term human memory is particularly adapted to retaining stories.
-
As a key example, consider ownership. In oral communities that lacked written documents, ownership was an intersubjective reality created through the words and behaviors of the community members. To own a field meant that your neighbors agreed that this field was yours, and they behaved accordingly. They didn’t build a hut on that field, graze their livestock there, or pick fruits there without first asking your permission. Ownership was created and maintained by people continuously saying or signaling things to one another. This made ownership the affair of a local community and placed a limit on the ability of a distant central authority to control all landownership. No king, minister, or priest could remember who owned each field in hundreds of distant villages.
-
Bureaucracy is the way people in large organizations solved the retrieval problem and thereby created bigger and more powerful information networks. But like mythology, bureaucracy too tends to sacrifice truth for order. By inventing a new order and imposing it on the world, bureaucracy distorted people’s understanding of the world in unique ways. Many of the problems of our twenty-first-century information networks— like biased algorithms that mislabel people, or rigid protocols that ignore human needs and feelings— are not new problems of the computer age. They are quintessential bureaucratic problems that existed long before anyone even dreamed of computers.
-
At the heart of the bureaucratic order, then, is the drawer. Bureaucracy seeks to solve the retrieval problem by dividing the world into drawers, and knowing which document goes into which drawer.
-
The urge to divide reality into rigid drawers also leads bureaucrats to pursue narrow goals irrespective of the wider impact of their actions. A bureaucrat tasked with increasing industrial production is likely to ignore environmental considerations that fall outside her purview, and perhaps dump toxic waste into a nearby river, leading to an ecological disaster downstream.
-
Students pursuing an academic degree must usually decide to which of these departments they belong. Their decision limits their choice of courses, which in turn shapes their understanding of the world. Mathematics students learn how to predict future morbidity levels from present rates of infection; biology students learn how viruses mutate over time; and history students learn how religious and political beliefs affect people’s willingness to follow government instructions. To fully understand COVID-19 requires taking into account mathematical, biological, and historical phenomena, but academic bureaucracy doesn’t encourage such a holistic approach.
-
Anyone who fantasizes about abolishing all bureaucracies in favor of a more holistic approach to the world should reflect on the fact that hospitals too are bureaucratic institutions. They are divided into different departments, with hierarchies, protocols, and lots of forms to fill out. They suffer from many bureaucratic illnesses, but they still manage to cure us of many of our biological illnesses. The same goes for almost all the other services that make our life better, from our schools to our sewage system.
-
Mythology and bureaucracy are the twin pillars of every large-scale society. Yet while mythology tends to inspire fascination, bureaucracy tends to inspire suspicion. Despite the services they provide, even beneficial bureaucracies often fail to win the public’s trust. For many people, the very word “bureaucracy” carries negative connotations. This is because it is inherently difficult to know whether a bureaucratic system is beneficial or malicious. For all bureaucracies— good or bad— share one key characteristic: it is hard for humans to understand them.
-
Evolution has primed our minds to understand death by a tiger. Our mind finds it much more difficult to understand death by a document.
-
Should we love the bureaucratic information network or hate it? Stories like that of my grandfather indicate the dangers inherent in bureaucratic power. Stories like that of the London cholera epidemic indicate its potential benevolence. All powerful information networks can do both good and ill, depending on how they are designed and used. Merely increasing the quantity of information in a network doesn’t guarantee its benevolence, or make it any easier to find the right balance between truth and order. That is a key historical lesson for the designers and users of the new information networks of the twenty- first century.
-
In the history of religion, a recurrent problem is how to convince people that a certain dogma indeed originated from an infallible superhuman source.
-
As human societies grew and became more complex, so did their religious institutions. Priests and oracles had to train long and hard for the important task of representing the gods, so people no longer needed to trust just any layperson who claimed to have met an angel or to carry a divine message.
-
The book became an important religious technology in the first millennium BCE. After tens of thousands of years in which gods spoke to humans via shamans, priests, prophets, oracles, and other human messengers, religious movements like Judaism began arguing that the gods speak through this novel technology of the book.
-
Whereas the archives of Egyptian pharaohs and Assyrian kings empowered the unfathomable kingly bureaucracy at the expense of the masses, the Jewish holy book seemed to give power to the masses, who could now hold even the most brazen leader accountable to God’s laws. Second, and more important, having numerous copies of the same book prevented any meddling with the text. If there were thousands of identical copies in numerous locations, any attempt to change even a single letter in the holy code could easily be exposed as a fraud.
-
As Jews increasingly argued over the interpretation of the Bible, rabbis gained more power and prestige. Writing down the word of Jehovah was supposed to limit the authority of the old priestly institution, but it gave rise to the authority of a new rabbinical institution.
-
The dream of bypassing fallible human institutions through the technology of the holy book never materialized. With each iteration, the power of the rabbinical institution only increased. “Trust the infallible book” turned into “trust the humans who interpret the book.” Judaism was shaped by the Talmud far more than by the Bible, and rabbinical arguments about the interpretation of the Talmud became even more important than the Talmud itself.
-
Modernity too raised many new questions that have no straightforward answers in the Mishnah and Talmud. For example, when electrical appliances developed in the twentieth century, Jews struggled with numerous unprecedented questions, such as whether it is okay to press the electrical buttons of an elevator on the Sabbath. The Orthodox answer is no. As noted earlier, the Bible forbids working on the Sabbath, and rabbis argued that pressing an electrical button is “work,” because electricity is akin to fire, and it has long been established that kindling a fire is “work.” Does this mean that elderly Jews living in a Brooklyn high-rise must climb a hundred steps to their apartment in order to avoid working on the Sabbath? Well, Orthodox Jews invented a “Sabbath elevator,” which continually goes up and down buildings, stopping on every floor, without you having to perform any “work” by pressing an electrical button.
-
When Christianity emerged in the first century CE, it was not a unified religion, but rather a variety of Jewish movements that didn’t agree on much, except that they all regarded Jesus Christ— rather than the rabbinical institution— as the ultimate authority on Jehovah’s words.
-
As Christians composed more and more gospels, epistles, prophecies, parables, prayers, and other texts, it became harder to know which ones to pay attention to. Christians needed a curation institution. That’s how the New Testament was created. At roughly the same time that debates among Jewish rabbis were producing the Mishnah and Talmud, debates among Christian priests, bishops, and theologians were producing the New Testament.
-
Just as most Jews forgot that rabbis curated the Old Testament, so most Christians forgot that church councils curated the New Testament, and came to view it simply as the infallible word of God. But while the holy book was seen as the ultimate source of authority, the process of curating the book placed real power in the hands of the curating institution.
-
The Catholic Church, however, viewed such pacifist and egalitarian readings as heresies. It interpreted Jesus’s words in a way that allowed the church to become the richest landowner in Europe, to launch violent crusades, and to establish murderous inquisitions. Catholic theology accepted that Jesus told us to love our enemies, but explained that burning heretics was an act of love, because it deterred additional people from adopting heretical views, thereby saving them from the flames of hell.
-
The Catholic Church used its power and wealth to disseminate copies of its favored texts while prohibiting the production and spread of what it considered erroneous ones.
-
The church sought to lock society inside an echo chamber, allowing the spread only of those books that supported it, and people trusted the church because almost all the books supported it. Even illiterate laypersons who didn’t read books were still awed by recitations of these precious texts or expositions on their content. That’s how the belief in a supposedly infallible superhuman technology like the New Testament led to the rise of an extremely powerful but fallible human institution like the Catholic Church that crushed all opposing views as “erroneous” while allowing no one to question its own views.
-
Luther, Calvin, and their successors argued that there was no need for any fallible human institution to interpose itself between ordinary people and the holy book. Christians should abandon all the parasitical bureaucracies that grew around the Bible and reconnect to the original word of God. But the word of God never interpreted itself, which is why not only Lutherans and Calvinists but numerous other Protestant sects eventually established their own church institutions and invested them with the authority to interpret the text and persecute heretics.[64] If infallible texts merely lead to the rise of fallible and oppressive churches, how then to deal with the problem of human error? The naive view of information posits that the problem can be solved by creating the opposite of a church— namely, a free market of information. The naive view expects that if all restrictions on the free flow of information are removed, error will inevitably be exposed and displaced by truth.
-
While it would be an exaggeration to argue that the invention of print caused the European witch-hunt craze, the printing press played a pivotal role in the rapid dissemination of the belief in a global satanic conspiracy. As Kramer’s ideas gained popularity, printing presses produced not only many additional copies of The Hammer of the Witches and copycat books but also a torrent of cheap one-page pamphlets whose sensational texts were often accompanied by illustrations depicting people attacked by demons or witches burned at the stake.
-
The witch-hunting bureaucracy did what bureaucracy often does: it invented the intersubjective category of “witches” and imposed it on reality. It even printed forms, with standard accusations and confessions of witches and blank spaces left for dates, names, and the signature of the accused. All that information produced a lot of order and power; it was a means for certain people to gain authority and for society as a whole to discipline its members. But it produced zero truth and zero wisdom.
-
The new intersubjective reality was so convincing that even some people accused of witchcraft came to believe that they were indeed part of a worldwide satanic conspiracy. If everybody said so, it must be true.
-
Even after expressing his horror at the insanity of the witch hunt in Würzburg, the chancellor nevertheless maintained his firm belief in the satanic conspiracy of witches. He didn’t witness any witchcraft firsthand, but so much information about witches was circulating that it was difficult for him to doubt all of it. Witch hunts were a catastrophe caused by the spread of toxic information. They are a prime example of a problem that was created by information, and was made worse by more information.
-
Nevertheless, in all these cases, the popes were careful to shift responsibility away from scriptures and from the church as an institution. Instead, the blame was laid on the shoulders of individual churchmen who misinterpreted scriptures and deviated from the true teachings of the church.
-
The most celebrated moments in the history of science are precisely those moments when accepted wisdom is overturned and new theories are born.
-
What makes scientific self- correcting mechanisms particularly strong is that scientific institutions are not just willing to admit institutional error and ignorance; they are actively seeking to expose them. This is evident in the institutions’ incentive structure.
-
I nevertheless trust what I read in Hutton’s book, because I understand how institutions like the University of Bristol and Yale University Press operate. Their self-correcting mechanisms have two crucial features: First, the self-correcting mechanisms are built into the core of the institutions rather than being a peripheral add-on. Second, these institutions publicly celebrate self-correcting instead of denying it.
-
Shechtman’s claims were dismissed by most of his colleagues, and he was blamed for mismanaging his experiments. The head of his laboratory also turned on Shechtman. In a dramatic gesture, he placed a chemistry textbook on Shechtman’s desk and told him, “Danny, please read this book and you will understand that what you are saying cannot be.” Shechtman boldly replied that he saw the quasicrystals in the microscope— not in the book. As a result, he was kicked out of the lab. Worse was to come. Linus Pauling, a two-time Nobel laureate and one of the most eminent scientists of the twentieth century, led a brutal personal attack on Shechtman. In a conference attended by hundreds of scientists, Pauling proclaimed, “Danny Shechtman is talking nonsense, there are no quasicrystals, just quasi-scientists.”
-
Similarly, when in the late nineteenth century Georg Cantor developed his theory of infinite numbers, which became the basis for much of twentieth-century mathematics, he was personally attacked by some of the leading mathematicians of his day, like Henri Poincaré and Leopold Kronecker. Populists are right to think that scientists suffer from the same human biases as everyone else. However, thanks to institutional self-correcting mechanisms these biases can be overcome. If enough empirical evidence is provided, it often takes just a few decades for an unorthodox theory to upend established wisdom and become the new consensus.
-
There is a reason why institutions like the Catholic Church and the Soviet Communist Party eschewed strong self-correcting mechanisms. While such mechanisms are vital for the pursuit of truth, they are costly in terms of maintaining order. Strong self-correcting mechanisms tend to create doubts, disagreements, conflicts, and rifts and to undermine the myths that hold the social order together.
-
The history of information networks has always involved maintaining a balance between truth and order. Just as sacrificing truth for the sake of order comes with a cost, so does sacrificing order for truth. Scientific institutions have been able to afford their strong self-correcting mechanisms because they leave the difficult job of preserving the social order to other institutions. If a chemist finds that a thief has broken into their lab or a psychiatrist receives death threats, they don’t complain to a peer-reviewed journal; they call the police.
-
American self-flagellation about the Vietnam War continues even today to divide the American public and to undermine America’s reputation throughout the world, whereas Soviet and Russian silence about the Afghanistan War has helped dim its memory and limit its reputational costs.
-
For one of the biggest questions about AI is whether it will favor or undermine democratic self-correcting mechanisms.
-
Dictatorial information networks are highly centralized.[1] This means two things. First, the center enjoys unlimited authority; hence information tends to flow to the central hub, where the most important decisions are made. In the Roman Empire all roads led to Rome, in Nazi Germany information flowed to Berlin, and in the Soviet Union to Moscow.
-
But not every dictatorship is totalitarian. Technical difficulties often prevent dictators from becoming totalitarian. The Roman emperor Nero, for example, didn’t have the means to micromanage the lives of millions of peasants in remote provincial villages. In many dictatorial regimes considerable autonomy is therefore left to individuals, corporations, and communities. However, the dictators always retain the authority to intervene in people’s lives. In Nero’s Rome freedom was not an ideal but a by-product of the government’s inability to exert totalitarian control.
-
The second characteristic of dictatorial networks is that they assume the center is infallible. They therefore dislike any challenge to the center’s decisions. Soviet propaganda depicted Stalin as an infallible genius, and Roman propaganda treated emperors as divine beings. Even when Stalin or Nero made a patently disastrous decision, there were no robust self-correcting mechanisms in the Soviet Union or the Roman Empire that could expose the mistake and push for a better course of action.
-
To summarize, a dictatorship is a centralized information network, lacking strong self-correcting mechanisms. A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms.
-
A democracy is not a system in which a majority of any size can decide to exterminate unpopular minorities; it is a system in which there are clear limits on the power of the center. Suppose 51 percent of voters choose a government that then takes away the voting rights of the other 49 percent of voters, or perhaps of just 1 percent of them. Is that democratic? Again the answer is no, and it has nothing to do with the numbers. Disenfranchising political rivals dismantles one of the vital self-correcting mechanisms of democratic networks. Elections are a mechanism for the network to say, “We made a mistake; let’s try something else.” But if the center can disenfranchise people at will, that self-correcting mechanism is neutered.
-
But the one option that should not be on offer in elections is hiding or distorting the truth. If the majority prefers to consume whatever amount of fossil fuels it wishes with no regard to future generations or other environmental considerations, it is entitled to vote for that. But the majority should not be entitled to pass a law stating that climate change is a hoax and that all professors who believe in climate change must be fired from their academic posts. We can choose what we want, but we shouldn’t deny the true meaning of our choice.
-
To discover the truth, it is better to rely on two other methods. First, academic institutions, the media, and the judiciary have their own internal self-correcting mechanisms for fighting corruption, correcting bias, and exposing error. In academia, peer-reviewed publication is a far better check on error than supervision by government officials, because academic promotion often depends on uncovering past mistakes and discovering unknown facts. In the media, free competition means that if one outlet decides not to break a scandal, perhaps for self-serving reasons, others are likely to jump at the scoop. In the judiciary, a judge who takes bribes may be tried and punished just like any other citizen. Second, the existence of several independent institutions that seek the truth in different ways allows these institutions to check and correct one another. For example, if powerful corporations manage to break down the peer-review mechanism by bribing a sufficiently large number of scientists, investigative journalists and courts can expose and punish the perpetrators.
-
The most novel claim populists make is that they alone truly represent the people. Since in democracies only the people should have political power, and since allegedly only the populists represent the people, it follows that the populist party should have all political power to itself. If some party other than the populists wins elections, it does not mean that this rival party won the people’s trust and is entitled to form a government. Rather, it means that the elections were stolen or that the people were deceived into voting in a way that doesn’t express their true will. It should be stressed that for many populists, this is a genuinely held belief rather than a propaganda gambit.
-
How can you tell, then, whether someone is part of the people or not? Easy. If they support the leader, they are part of the people. This, according to the German political philosopher Jan-Werner Müller, is the defining feature of populism. What turns someone into a populist is claiming that they alone represent the people and that anyone who disagrees with them— whether state bureaucrats, minority groups, or even the majority of voters— either suffers from false consciousness or isn’t really part of the people.
-
This is why populism poses a deadly threat to democracy. While democracy agrees that the people is the only legitimate source of power, democracy is based on the understanding that the people is never a unitary entity and therefore cannot possess a single will. Every people— whether Germans, Venezuelans, or Turks— is composed of many different groups, with a plurality of opinions, wills, and representatives. No group, including the majority group, is entitled to exclude other groups from membership in the people. This is what makes democracy a conversation. Holding a conversation presupposes the existence of several legitimate voices. If, however, the people has only one legitimate voice, there can be no conversation. Rather, the single voice dictates everything.
-
Journalists can reveal that a popular politician took a bribe, and if compelling evidence is presented in court, a judge may send that politician to jail, even if most people don’t want to believe these accusations. Populists are suspicious of institutions that in the name of objective truths override the supposed will of the people. They tend to see this as a smoke screen for elites grabbing illegitimate power.
-
The result is a dark and cynical view of the world as a jungle and of human beings as creatures obsessed with power alone. All social interactions are seen as power struggles, and all institutions are depicted as cliques promoting the interests of their own members. In the populist imagination, courts don’t really care about justice; they only protect the privileges of the judges. Yes, the judges talk a lot about justice, but this is a ploy to grab power for themselves.
-
In all, it’s a rather sordid view of humanity, but two things nevertheless make it appealing to many. First, since it reduces all interactions to power struggles, it simplifies reality and makes events like wars, economic crises, and natural disasters easy to understand. Anything that happens— even a pandemic— is about elites pursuing power. Second, the populist view is attractive because it is sometimes correct. Every human institution is indeed fallible and suffers from some level of corruption. Some judges do take bribes.
-
In a well- functioning democracy, citizens trust the results of elections, the decisions of courts, the reports of media outlets, and the findings of scientific disciplines because citizens believe these institutions are committed to the truth. Once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and the strongmen can seize total power.
-
To judge by the archaeological and anthropological evidence, democracy was the most typical political system among archaic hunter- gatherers.
-
Even when hunter-gatherers did end up ruled by a domineering chief, as happened among the salmon-fishing people of northwestern America, at least that chief was accessible. He didn’t live in a faraway fortress surrounded by an unfathomable bureaucracy and a cordon of armed guards. If you wanted to voice a complaint or a suggestion, you could usually get within earshot of him. The chief couldn’t control public opinion, nor could he shut himself off from it. In other words, there was no way for a chief to force all information to flow through the center, or to prevent people from talking with one another, criticizing him, or organizing against him.[20] In the millennia following the agricultural revolution, and especially after writing helped create large bureaucratic polities, it became easier to centralize the flow of information and harder to maintain the democratic conversation.
-
Democracy was still an option for these small city-states, as the history of both early Sumer and classical Greece clearly indicates.[21] However, the democracy of ancient city-states tended to be less inclusive than the democracy of archaic hunter-gatherer bands. Probably the most famous example of ancient city-state democracy is Athens in the fifth and fourth centuries BCE. All adult male citizens could participate in the Athenian assembly, vote on public policy, and be elected to public offices. But women, slaves, and noncitizen residents of the city did not enjoy these privileges. Only about 25–30 percent of the adult population of Athens enjoyed full political rights.
-
The ancient Romans had a clear understanding of what democracy means, and they were originally fiercely committed to the democratic ideal. After expelling the last king of Rome in 509 BCE, the Romans developed a deep dislike for monarchy and a fear of giving unlimited power to any single individual or institution. Supreme executive power was therefore shared by two consuls who balanced each other. These consuls were chosen by citizens in free elections, held office for a single year, and were additionally checked by the powers of the popular assembly, of the Senate, and of other elected officials like the tribunes. But when Rome extended citizenship to Latins, Italians, and finally to Gauls and Syrians, the power of the popular assembly, the tribunes, the Senate, and even the two consuls was gradually reduced, until in the late first century BCE the Caesar family established its autocratic rule.
-
The lack of a meaningful public conversation was not the fault of Augustus, Nero, Caracalla, or any of the other emperors. They didn’t sabotage Roman democracy. Given the size of the empire and the available information technology, democracy was simply unworkable. This was acknowledged already by ancient philosophers like Plato and Aristotle, who argued that democracy can work only in small-scale city-states.
-
In the sixteenth and seventeenth centuries participation in royal elections usually ranged between 3,000 and 7,000 voters, except for the 1669 elections, in which 11,271 participated.[37] While this hardly sounds democratic in the twenty-first century, it should be remembered that all large-scale democracies until the twentieth century limited political rights to a small circle of relatively wealthy men. Democracy is never a matter of all or nothing. It is a continuum, and late-sixteenth-century Poles and Lithuanians explored previously unknown regions of that continuum.
-
Newspapers that succeeded in gaining widespread trust became the architects and mouthpieces of public opinion. They created a far more informed and engaged public, which changed the nature of politics, first in the Netherlands and later around the world.[44] The political influence of newspapers was so crucial that newspaper editors often became political leaders.
-
You may wonder whether we are talking about democracies at all. At a time when the United States had more slaves than voters (more than 1.5 million Americans were enslaved in the early 1820s),[50] was the United States really a democracy? This is a question of definitions. As with the late-sixteenth-century Polish-Lithuanian Commonwealth, so also with the early-nineteenth-century United States, “democracy” is a relative term. As noted earlier, democracy and autocracy aren’t absolutes; they are part of a continuum. In the early nineteenth century, out of all large-scale human societies, the United States was probably the closest to the democratic end of the continuum. Giving 25 percent of adults the right to vote doesn’t sound like much today, but in 1824 that was a far higher percentage than in the Tsarist, Ottoman, or Chinese Empires, in which nobody had the right to vote.
-
An even more important reason to consider the United States in 1824 a democracy is that compared with most other polities of its day, the new country possessed much stronger self-correcting mechanisms. The Founding Fathers were inspired by ancient Rome— witness the Senate and the Capitol in Washington— and they were well aware that the Roman Republic eventually turned into an autocratic empire. They feared that some American Caesar would do something similar to their republic, and constructed multiple overlapping self-correcting mechanisms, known as the system of checks and balances. One of these was a free press. In ancient Rome, the self-correcting mechanisms stopped functioning as the republic enlarged its territory and population. In the United States, modern information technology combined with freedom of the press helped the self-correcting mechanisms survive even as the country extended from the Atlantic to the Pacific.
-
the Founding Fathers committed enormous mistakes— such as endorsing slavery and denying women the vote— but they also provided the tools for their descendants to correct these mistakes. That was their greatest legacy.
-
Before the invention of the telegraph, radio, and other modern information technology, large-scale totalitarian regimes were impossible. Roman emperors, Abbasid caliphs, and Mongol khans were often ruthless autocrats who believed they were infallible, but they did not have the apparatus necessary to impose totalitarian control over large societies. To understand this, we should first clarify the difference between totalitarian regimes and less extreme autocratic regimes. In an autocratic network, there are no legal limits on the will of the ruler, but there are nevertheless a lot of technical limits. In a totalitarian network, many of these technical limits are absent.
-
Totalitarianism is the attempt to control what every person throughout the country is doing and saying every moment of the day, and potentially even what every person is thinking and feeling. Nero might have dreamed about such powers, but he lacked the means to realize them. Given the limited tax base of the agrarian Roman economy, Nero couldn’t employ many people in his service.
-
The Qin Empire was probably the most ambitious totalitarian experiment in human history prior to the modern age, and its scale and intensity would prove to be its ruin. The attempt to regiment tens of millions of people along military lines, and to monopolize all resources for military purposes, led to severe economic problems, wastefulness, and popular resentment. The regime’s draconian laws, along with its hostility to regional elites and its voracious appetite for taxes and recruits, fanned the flames of this resentment even further. Meanwhile, the limited resources of an ancient agrarian society couldn’t support all the bureaucrats and soldiers that the Qin needed to contain this resentment, and the low efficiency of their information technology made it impossible to control every town and village from distant Xianyang. Not surprisingly, in 209 BCE a series of revolts broke out, led by regional elites, disgruntled commoners, and even some of the empire’s own newly minted officials.
-
After several years of war, a new dynasty— the Han— reunited the empire. But the Han then adopted a more realistic, less draconian attitude. Han emperors were certainly autocratic, but they were not totalitarian. They did not recognize any limits on their authority, but they did not try to micromanage everyone’s lives. Instead of following Legalist ideas of surveillance and control, the Han turned to Confucian ideas of encouraging people to act loyally and responsibly out of inner moral convictions. Like their contemporaries in the Roman Empire, Han emperors sought to control only some aspects of society from the center, while leaving considerable autonomy to provincial aristocrats and local communities. Due largely to the limitations imposed by the available information technology, premodern large-scale polities like the Roman and Han Empires gravitated toward nontotalitarian autocracy.[70] Full-blown totalitarianism might have been dreamed about by the likes of the Qin, but its implementation had to wait for the development of modern technology.
-
Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible.
-
Like the Catholic Church, the Bolshevik party was convinced that though its individual members might err, the party itself was always right. Belief in their own infallibility led the Bolsheviks to destroy Russia’s nascent democratic institutions— like elections, independent courts, the free press, and opposition parties— and to create a one-party totalitarian regime. Bolshevik totalitarianism did not start with Stalin. It was evident from the very first days of the revolution. It stemmed from the doctrine of party infallibility, rather than from the personality of Stalin.
-
Just as democracy is maintained by having overlapping self-correcting mechanisms that keep each other in check, modern totalitarianism created overlapping surveillance mechanisms that keep each other in order. The governor of a Soviet province was constantly watched by the local party commissar, and neither of them knew who among their staff was an NKVD informer. A testimony to the effectiveness of the system is that modern totalitarianism largely solved the perennial problem of premodern autocracies— revolts by provincial subordinates. While the U.S.S.R. had its share of court coups, not once did a provincial governor or a Red Army front commander rebel against the center.[75] Much of the credit for that goes to the secret police, which kept a close eye on the mass of citizens, on provincial administrators, and even more so on the party and the Red Army.
-
While in most polities throughout history the army had wielded enormous political power, in twentieth-century totalitarian regimes the regular army ceded much of its clout to the secret police— the information army. In the U.S.S.R., the Cheka, OGPU, NKVD, and KGB lacked the firepower of the Red Army, but had more influence in the Kremlin and could terrorize and purge even the army brass.
-
In none of these cases could the secret police defeat the regular army in traditional warfare, of course; what made the secret police powerful was its command of information. It had the information necessary to preempt a military coup and to arrest the commanders of tank brigades or fighter squadrons before they knew what hit them. During the Stalinist Great Terror of the late 1930s, out of 144,000 Red Army officers about 10 percent were shot or imprisoned by the NKVD. This included 154 of 186 divisional commanders (83 percent), eight of nine admirals (89 percent), thirteen of fifteen full generals (87 percent), and three of five marshals (60 percent).
-
Serving as an NKVD general in Stalin’s day was one of the most dangerous jobs in the world. At a time when American democracy was improving its many self-correcting mechanisms, Soviet totalitarianism was refining its triple self-surveilling and self-terrorizing apparatus.
-
In June 1929, only 4 percent of Soviet peasant households belonged to collective farms. By March 1930 the figure had risen to 57 percent. By April 1937, 97 percent of households in the countryside had been confined to the 235,000 Soviet collective farms.[88] In just seven years, then, a way of life that had existed for centuries had been replaced by the totalitarian brainchild of a few Moscow bureaucrats.
-
The mountains of information collected by Soviet bureaucrats about the kulaks weren’t the objective truth about them, but they imposed a new intersubjective Soviet truth. That someone was labeled a kulak was a very important thing to know about a Soviet person, even though the label was entirely bogus.
-
Consequently, the church was perhaps the most important check on the power of European autocrats. For example, when in the “Investiture Controversy” of the 1070s King Henry IV of Germany and Italy asserted that he had the final say on the appointment of bishops, abbots, and other church officials, Pope Gregory VII mobilized resistance and eventually forced the king to surrender. On January 25, 1077, Henry reached Canossa castle, where the pope was lodging, to offer his submission and apology. The pope refused to open the gates, and Henry waited in the snow outside, barefoot and hungry. After three days, the pope finally opened the gates to the king, who begged forgiveness.
-
Churches became more totalitarian institutions only in the late modern era, when modern information technologies became available. We tend to think of popes as medieval relics, but actually they are masters of modern technology. In the eighteenth century, the pope had little control over the worldwide Catholic Church and was reduced to the status of a local Italian princeling, fighting other Italian powers for control of Bologna or Ferrara. With the advent of radio, the pope became one of the most powerful people on the planet. Pope John Paul II could sit in the Vatican and speak directly to millions of Catholics from Poland to the Philippines, without any archbishop, bishop, or parish priest able to twist or hide his words.
-
Another common reason why official channels fail to pass on information is to preserve order. Because the chief aim of totalitarian information networks is to produce order rather than discover truth, when alarming information threatens to undermine social order, totalitarian regimes often suppress it. It is relatively easy for them to do so, because they control all the information channels. For example, when the Chernobyl nuclear reactor exploded on April 26, 1986, Soviet authorities suppressed all news of the disaster. Both Soviet citizens and foreign countries were kept oblivious of the danger, and so took no steps to protect themselves from radiation. When some Soviet officials in Chernobyl and the nearby town of Pripyat requested to immediately evacuate nearby population centers, their superiors’ chief concern was to avoid the spread of alarming news, so they not only forbade evacuation but also cut the phone lines and warned employees in the nuclear facility not to talk about the disaster. Two days after the meltdown Swedish scientists noticed that radiation levels in Sweden, more than twelve hundred kilometers from Chernobyl, were abnormally high.
-
In 2019, I went on a tour of Chernobyl. The Ukrainian guide who explained what led to the nuclear accident said something that stuck in my mind. “Americans grow up with the idea that questions lead to answers,” he said. “But Soviet citizens grew up with the idea that questions lead to trouble.”
-
Indeed, some of the lessons of Three Mile Island, which were openly shared even with the Soviets, contributed to mitigating the Chernobyl disaster.
-
Ironically, Stalin’s own death eight years later was partly the result of an information network that prioritized order and disregarded truth.
-
First, democratic countries like France, Norway, and the Netherlands made diplomatic errors at the time as great as those of the U.S.S.R., and their armies performed even worse. Second, the military machine that crushed the Red Army, the French army, the Dutch army, and numerous other armies was itself built by a totalitarian regime. So whatever conclusion we draw from the years 1939–41, it cannot be that totalitarian networks necessarily function worse than democratic ones. The history of Stalinism reveals many potential drawbacks of totalitarian information networks, but that should not blind us to their potential advantages.
-
When one considers the broader history of World War II and its outcome, it becomes evident that Stalinism was in fact one of the most successful political systems ever devised— if we define “success” purely in terms of order and power while disregarding all considerations of ethics and human well-being.
-
In the 1940s and early 1950s, many people throughout the world believed Stalinism was the wave of the future. It had won World War II, after all, raised the red flag over the Reichstag, ruled an empire that stretched from central Europe to the Pacific, fueled anticolonial struggles throughout the world, and inspired numerous copycat regimes. It won admirers even among leading artists and thinkers in Western democracies, who believed that, notwithstanding the vague rumors about gulags and purges, Stalinism was humanity’s best shot at ending capitalist exploitation and creating a perfectly just society.
-
Of course, just as the printing press didn’t cause the witch hunts or the scientific revolution, so radio didn’t cause either Stalinist totalitarianism or American democracy. Technology only creates new opportunities; it is up to us to decide which ones to pursue.
-
The images from Chicago or Paris in 1968 could easily have given the impression that things were falling apart. The pressure to live up to the democratic ideals and to include more people and groups in the public conversation seemed to undermine the social order and to make democracy unworkable.
-
Fast-forward twenty years, and it was the Soviet system that had become unworkable. The sclerotic gerontocrats on the podium in Red Square were a perfect emblem of a dysfunctional information network, lacking any meaningful self-correcting mechanisms. Decolonization, globalization, technological development, and changing gender roles led to rapid economic, social, and geopolitical changes. But the gerontocrats could not handle all the information streaming to Moscow, and since no subordinate was allowed much initiative, the entire system ossified and collapsed.
-
Nowhere were its shortcomings more glaring than in the semiconductor sector, in which technology developed at a particularly fast rate. In the West, semiconductors were developed through open competition between numerous private companies like Intel and Toshiba, whose main customers were other private companies like Apple and Sony. The latter used microchips to produce civilian goods such as the Macintosh personal computer and the Walkman. The Soviets could never catch up with American and Japanese microchip production, because—as the American economic historian Chris Miller explained—the Soviet semiconductor sector was “secretive, top-down, oriented toward military systems, fulfilling orders with little scope for creativity.” The Soviets tried to close the gap by stealing and copying Western technology—which only guaranteed that they always remained several years behind.[125] Thus the first Soviet personal computer appeared only in 1984, at a time when people in the United States already had eleven million PCs.
-
Western democracies not only surged ahead technologically and economically but also succeeded in holding the social order together despite—or perhaps because of—widening the circle of participants in the political conversation. There were many hiccups, but the United States, Japan, and other democracies created a far more dynamic and inclusive information system, which made room for many more viewpoints without breaking down. It was such a remarkable achievement that many felt that the victory of democracy over totalitarianism was final. This victory has often been explained in terms of a fundamental advantage in information processing: totalitarianism didn’t work because trying to concentrate and process all the data in one central hub was extremely inefficient. At the beginning of the twenty-first century, it accordingly seemed that the future belonged to distributed information networks and to democracy. This turned out to be wrong.
-
Things look as dire as they did in the 1960s, and there is no guarantee that democracies will pass the new test as successfully as they passed the previous one. Simultaneously, the new technologies also give fresh hope to totalitarian regimes that still dream of concentrating all the information in one hub. Yes, the old men on the podium in Red Square were not up to the task of orchestrating millions of lives from a single center. But perhaps AI can do it?
-
As humankind enters the second quarter of the twenty-first century, a central question is how well democracies and totalitarian regimes will handle both the threats and the opportunities resulting from the current information revolution. Will the new technologies favor one type of regime over the other, or will we see the world divided once again, this time by a Silicon Curtain rather than an iron one?
-
As in previous eras, information networks will struggle to find the right balance between truth and order. Some will opt to prioritize truth and maintain strong self-correcting mechanisms. Others will make the opposite choice. Many of the lessons learned from the canonization of the Bible, the early modern witch hunts, and the Stalinist collectivization campaign will remain relevant, and perhaps have to be relearned. However, the current information revolution also has some unique features, different from—and potentially far more dangerous than—anything we have seen before.
-
The main split in twenty-first-century politics might be not between democracies and totalitarian regimes but rather between human beings and nonhuman agents. Instead of dividing democracies from totalitarian regimes, a new Silicon Curtain may separate all humans from our unfathomable algorithmic overlords.
Part II: The Inorganic Network
-
Recommendations from on high can have enormous sway over people. Recall that the Bible was born as a recommendation list. By recommending that Christians read the misogynist 1 Timothy instead of the more tolerant Acts of Paul and Thecla, Athanasius and other church fathers changed the course of history. In the case of the Bible, ultimate power lay not with the authors who composed different religious tracts but with the curators who created recommendation lists. This was the kind of power wielded in the 2010s by social media algorithms.
-
The same research estimated that, altogether, 53 percent of all videos watched in Myanmar were being auto-played for users by algorithms. In other words, people weren’t choosing what to see. The algorithms were choosing for them.
-
There is more to say about the power of algorithms to shape politics. In particular, many readers may disagree that the algorithms made independent decisions, and may insist that everything the algorithms did was the result of code written by human engineers and of business models adopted by human executives. This book begs to differ. Human soldiers are shaped by their genetic code and follow orders issued by executives, yet they can still make independent decisions. The same is true of AI algorithms. They can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw. This is the essence of the AI revolution: The world is being flooded by countless new powerful agents.
-
People often confuse intelligence with consciousness, and many consequently jump to the conclusion that nonconscious entities cannot be intelligent. But intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate. In humans and other mammals, intelligence often goes hand in hand with consciousness.
-
Bacteria and plants apparently lack any consciousness, yet they too display intelligence. They gather information from their environment, make complex choices, and pursue ingenious strategies to obtain food, reproduce, cooperate with other organisms, and evade predators and parasites.[23] Even humans make intelligent decisions without any awareness of them; 99 percent of the processes in our body, from respiration to digestion, happen without any conscious decision making. Our brains decide to produce more adrenaline or dopamine, and while we may be aware of the result of that decision, we do not make it consciously.
-
But whether computers develop consciousness or not doesn’t ultimately matter for the question at hand. In order to pursue a goal like “maximize user engagement,” and make decisions that help attain that goal, consciousness isn’t necessary. Intelligence is enough. A nonconscious Facebook algorithm can have a goal of making more people spend more time on Facebook. That algorithm can then decide to deliberately spread outrageous conspiracy theories, if this helps it achieve its goal.
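To make this concrete: a minimal sketch (my own illustration, not from the book) of a goal-pursuing but nonconscious optimizer. An epsilon-greedy bandit is told only to maximize clicks; if outrage-style content happens to earn more clicks (the rates below are invented), the algorithm ends up promoting it without holding any concept of outrage at all.

```python
import random

# Hypothetical click-through rates; "outrage" is assumed to engage more.
CLICK_RATE = {"calm_news": 0.05, "outrage": 0.15}

counts = {arm: 0 for arm in CLICK_RATE}   # times each content type was shown
clicks = {arm: 0 for arm in CLICK_RATE}   # clicks each content type earned

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-performing arm, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(CLICK_RATE))
    return max(CLICK_RATE, key=lambda a: clicks[a] / counts[a] if counts[a] else 0.0)

for _ in range(100_000):
    arm = choose()
    counts[arm] += 1
    clicks[arm] += random.random() < CLICK_RATE[arm]  # simulated user click

print({arm: round(counts[arm] / 100_000, 3) for arm in CLICK_RATE})
# Typically ~95% of impressions go to "outrage": intelligence in the sense
# of attaining the engagement goal, with no consciousness anywhere in the loop.
```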
-
True, it was the human ARC researchers who set GPT-4 the goal of overcoming the CAPTCHA, just as it was human Facebook executives who told their algorithm to maximize user engagement. But once the algorithms adopted these goals, they displayed considerable autonomy in deciding how to achieve them.
-
the book argues that the emergence of computers capable of pursuing goals and making decisions by themselves changes the fundamental structure of our information network.
-
The same is true of laws. How many people know all the tax laws of their country? Even professional accountants struggle with that. But computers are built for such things. They are bureaucratic natives and can automatically draft laws, monitor legal violations, and identify legal loopholes with superhuman efficiency.
-
Recall that the Q drops that began this political flood were anonymous online messages. In 2017, only a human could compose them, and algorithms merely helped disseminate them. However, as of 2024, texts of a similar linguistic and political sophistication can easily be composed and posted online by a nonhuman intelligence. Religions throughout history claimed a nonhuman source for their holy books; soon that might be a reality. Attractive and powerful religions might emerge whose scriptures are composed by AI.
-
And if so, there will be another major difference between these new AI-based scriptures and ancient holy books like the Bible. The Bible couldn’t curate or interpret itself, which is why in religions like Judaism and Christianity actual power was held not by the allegedly infallible book but by human institutions like the Jewish rabbinate and the Catholic Church. In contrast, AI not only can compose new scriptures but is fully capable of curating and interpreting them too. No need for any humans in the loop.
-
Equally alarmingly, we might increasingly find ourselves conducting lengthy online discussions about the Bible, about QAnon, about witches, about abortion, or about climate change with entities that we think are humans but are actually computers. This could make democracy untenable. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation. When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views.
-
In the 2010s social media was a battleground for controlling human attention. In the 2020s the battle is likely to shift from attention to intimacy. What will happen to human society and human psychology as computer fights computer in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for particular politicians, buy particular products, or adopt radical beliefs? What might happen when LaMDA meets QAnon?
-
People may come to use a single computer adviser as a one-stop oracle. Why bother searching and processing information by myself when I can just ask the oracle? This could put out of business not only search engines but also much of the news industry and the advertising industry. Why read a newspaper when I can just ask my oracle what’s new? And what’s the purpose of advertisements when I can just ask the oracle what to buy?
-
And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, but the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food, sex, and intimacy and our cultural creations like religions and laws. The history of the Christian religion, for example, is a process through which mythological stories and church laws influenced how humans consume food, engage in sex, and build intimate relationships, while the myths and laws themselves were simultaneously shaped by underlying biological forces and dramas. What will happen to the course of history when computers play a larger and larger role in culture and begin producing stories, laws, and religions? Within a few years AI could eat the whole of human culture—everything we have created over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.
-
We live cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the opinions of friends. Our sexual habits are influenced by what we hear in fairy tales and see in movies. Even the way we walk and breathe is nudged by cultural traditions, such as the military discipline of soldiers and the meditative exercises of monks. Until very recently, the cultural cocoon we lived in was woven by other humans. Going forward, it will be increasingly designed by computers.
-
Bach didn’t compose music in a vacuum; he was deeply influenced by previous musical creations, as well as by biblical stories and other preexisting cultural artifacts. But just as human artists like Bach can break with tradition and innovate, computers too can make cultural innovations, composing music or making images that are somewhat different from anything previously produced by humans. These innovations will in turn influence the next generation of computers, which will increasingly deviate from the original human models, especially because computers are free from the limitations that evolution and biochemistry impose on the human imagination.
-
The Matrix proposed that to gain total control of human society, computers would have to first gain physical control of our brains, hooking them directly to a computer network. But in order to manipulate humans, there is no need to physically hook brains to computers. For thousands of years prophets, poets, and politicians have used language to manipulate and reshape society. Now computers are learning how to do it. And they won’t need to send killer robots to shoot us. They could manipulate human beings to pull the trigger.
-
Second, computer-to-computer chains are emerging in which computers interact with one another on their own. Humans are excluded from these loops and have difficulty even understanding what’s happening inside them. Google Brain, for example, has experimented with new encryption methods developed by computers. It set up an experiment in which two computers—nicknamed Alice and Bob—had to exchange encrypted messages, while a third computer named Eve tried to break their encryption. If Eve broke the encryption within a given time period, it got points. If it failed, Alice and Bob scored. After about fifteen thousand exchanges, Alice and Bob came up with a secret code that Eve couldn’t break. Crucially, the Google engineers who conducted the experiment had not taught Alice and Bob anything about how to encrypt messages. The computers created a private language all on their own.
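The experiment described here is adversarial neural cryptography (Abadi and Andersen, 2016). Below is a heavily compressed sketch of the setup; the network sizes, the training schedule, and the simplified adversarial loss are my assumptions for brevity, not the paper's exact recipe. Alice and Bob share a key, Eve sees only the ciphertext, and nobody is taught how encryption works.

```python
import torch
import torch.nn as nn

# Compressed sketch of adversarial neural cryptography (after Abadi and
# Andersen, 2016). Sizes and the adversarial loss are simplifying assumptions.
N = 16  # bits in plaintext, key, and ciphertext (encoded as +1/-1)

def mlp(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, N), nn.Tanh())

alice, bob, eve = mlp(2 * N), mlp(2 * N), mlp(N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

for step in range(5000):
    plain = torch.randint(0, 2, (256, N)).float() * 2 - 1
    key = torch.randint(0, 2, (256, N)).float() * 2 - 1

    # Eve's turn: learn to reconstruct the plaintext from the ciphertext alone.
    cipher = alice(torch.cat([plain, key], dim=1)).detach()
    eve_loss = l1(eve(cipher), plain)
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # Alice and Bob's turn: Bob should recover the plaintext, while Eve
    # should be pushed back to chance level (L1 error near 1.0 here).
    cipher = alice(torch.cat([plain, key], dim=1))
    bob_loss = l1(bob(torch.cat([cipher, key], dim=1)), plain)
    eve_err = l1(eve(cipher), plain)
    ab_loss = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

    if step % 1000 == 0:
        print(f"step {step}: bob_err={bob_loss.item():.3f} eve_err={eve_err.item():.3f}")
```

No encryption scheme is specified anywhere in this loop; whatever code Alice and Bob converge on emerges from the adversarial pressure alone, which is the point of the anecdote.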
-
Are conservatives against AI because of the threat it poses to traditional human- centered culture, or do they favor it because it will fuel economic growth while simultaneously reducing the need for immigrant workers? Do progressives oppose AI because of the risks of disinformation and increasing bias, or do they embrace it as a means of generating abundance that could finance a comprehensive welfare state?
-
But in history, time and place are crucial, and no two moments are the same. It matters a great deal that America was colonized by the Spaniards in the 1490s rather than by the Ottomans in the 1520s, or that the atom bomb was developed by the Americans in 1945 rather than by the Germans in 1942. Similarly, there would have been significant political, economic, and cultural consequences if the personal computer emerged not in San Francisco in the 1970s but rather in Osaka in the 1980s or in Shanghai in the first decade of the twenty- first century.
-
To exercise our agency, we first need to understand what the new technologies are and what they can do. That’s an urgent responsibility of every citizen. Naturally, not every citizen needs a PhD in computer science, but to retain control of our future, we do need to understand the political potential of computers.
-
Politics involves a delicate balance between truth and order. As computers become important members of our information network, they are increasingly tasked with discovering truth and maintaining order. For example, the attempt to find the truth about climate change increasingly depends on calculations that only computers can make, and the attempt to reach social consensus about climate change increasingly depends on recommendation algorithms that curate our news feeds, and on creative algorithms that write news stories, fake news, and fiction. At present, we are in a political deadlock about climate change, partly because the computers are at a deadlock. Calculations run on one set of computers warn us of an imminent ecological catastrophe, but another set of computers prompts us to watch videos that cast doubt on those warnings. Which set of computers should we believe? Human politics is now also computer politics.
-
Computers could infer from our eyes many additional personality traits, like how open we are to new experiences, and estimate our level of expertise in various fields ranging from reading to surgery. Experts possessing well- honed strategies display systematic gaze patterns, whereas the eyes of novices wander aimlessly.
-
At that point, if biometric sensors register what happens to the heart rate and brain activity of millions of people as they watch a particular news item on their smartphones, that can teach the computer network far more than just our general political affiliation. The network could learn precisely what makes each human angry, fearful, or joyful. The network could then both predict and manipulate our feelings, selling us anything it wants— be it a product, a politician, or a war.
-
The second time, I had done some shopping, and as I was bringing the bags into the car, my scarf fell off, and I received a message noting that due to violating compulsory veiling laws, my car had been subjected to ‘systematic impoundment’ for a period of fifteen days. I did not know what this meant. I asked around and found out through relatives that this meant I had to immobilize my car for fifteen days.”[36] Maryam’s testimony indicates that the AI sends its threatening messages within seconds, with no time for any human to review and authorize the procedure.
-
Now, when customers come into your taxi or barbershop, they bring cameras, microphones, a surveillance network, and thousands of potential viewers with them.[47] This is the foundation of a nongovernmental peer-to-peer surveillance network.
-
Money revolutionized economic relations, social interactions, and human psychology. But like surveillance, money has had its limitations and could not reach everywhere. Even in the most capitalist societies, there have always been places that money didn’t penetrate, and there have always been many things that lacked a monetary value. How much is a smile worth? How much money does a person earn for visiting their grandparents?[48] For scoring those things that money can’t buy, there was an alternative nonmonetary system, which has been given different names: honor, status, reputation. What social credit systems seek is a standardized valuation of the reputation market. Social credit is a new points system that ascribes precise values even to smiles and family visits.
-
Yes, computers can gather unprecedented amounts of data on us, watching what we do twenty-four hours a day. And yes, they can identify patterns in the ocean of data with superhuman efficiency. But that does not mean that the computer network will always understand the world accurately. Information isn’t truth. A total surveillance system may form a very distorted understanding of the world and of human beings. Instead of discovering the truth about the world and about us, the network might use its immense power to create a new kind of world order and impose it on us.
-
Finally, after eleven minutes, the director of a paper factory took his life in his hands, stopped clapping, and sat down. Everyone else immediately stopped clapping and also sat down. That same night, the secret police arrested him and sent him to the gulag for ten years. “His interrogator reminded him: Don’t ever be the first to stop applauding!”[2] This story reveals a crucial and disturbing fact about information networks, and in particular about surveillance systems. As discussed in previous chapters, contrary to the naive view, information is often used to create order rather than discover truth. On the face of it, Stalin’s agents in the Moscow conference used the “clapping test” as a way to uncover the truth about the audience. It was a loyalty test, which assumed that the longer you clapped, the more you loved Stalin. In many contexts, this assumption is not unreasonable.
-
This leaked document made one crucial recommendation: since Facebook cannot remove everything harmful from a platform used by many millions, it should at least “stop magnifying harmful content by giving it unnatural distribution.”
-
As we have seen again and again throughout history, in a completely free information fight, truth tends to lose. To tilt the balance in favor of truth, networks must develop and maintain strong self-correcting mechanisms that reward truth telling. These self-correcting mechanisms are costly, but if you want to get the truth, you must invest in them.
-
One such error-enhancing mechanism was the Instant Articles program that Facebook rolled out in Myanmar in 2016. Wishing to drive up engagement, Facebook paid news channels according to the amount of user engagement they generated, measured in clicks and views. No importance whatsoever was given to the truthfulness of the “news.” A 2021 study found that in 2015, before the program was launched, six of the ten top Facebook websites in Myanmar belonged to “legitimate media.” By 2017, under the impact of Instant Articles, “legitimate media” was down to just two websites out of the top ten. By 2018, all top ten websites were “fake news and clickbait websites.”
-
And yet I have devoted so much attention to the social media “user engagement” debacle because it exemplifies a much bigger problem afflicting computers—the alignment problem. When computers are given a specific goal, such as to increase YouTube traffic to one billion hours a day, they use all their power and ingenuity to achieve this goal. Since they operate very differently than humans, they are likely to use methods their human overlords didn’t anticipate. This can result in dangerous unforeseen consequences that are not aligned with the original human goals.
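A toy version of the alignment problem (my example, with invented numbers): the optimizer below is scored only on a proxy metric, and the action that maximizes the proxy is precisely the one that damages the unstated human goal.

```python
# Toy illustration of a misaligned goal; all numbers are invented.
# The machine is scored only on the proxy metric "hours_watched";
# the true human goal ("user_wellbeing") was never written down.
actions = {
    "balanced_news": {"hours_watched": 1.0, "user_wellbeing": 1.0},
    "clickbait":     {"hours_watched": 2.5, "user_wellbeing": -0.5},
    "conspiracies":  {"hours_watched": 4.0, "user_wellbeing": -2.0},
}

chosen = max(actions, key=lambda a: actions[a]["hours_watched"])
print("optimizer picks:", chosen)                              # conspiracies
print("true-goal score:", actions[chosen]["user_wellbeing"])   # -2.0
# The optimizer did exactly what it was told; the misalignment lives
# in the gap between the proxy and the intended goal.
```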
-
As Napoleon expanded his empire into central Europe and the Italian Peninsula, he liquidated the Holy Roman Empire in 1806, amalgamated many of the smaller German and Italian principalities into larger territorial blocs, created a German Confederation of the Rhine and a Kingdom of Italy, and sought to unify these territories under his dynastic rule. His victorious armies also spread the ideals of modern nationalism and popular sovereignty into the German and Italian lands. Napoleon thought all this would make his empire stronger. In fact, by breaking up traditional structures and giving Germans and Italians a taste of national consolidation, Napoleon inadvertently laid the foundations for the ultimate unification of Germany (1866–71) and of Italy (1848–71). These twin processes of national unification were sealed by the German victory over France in the Franco-Prussian War of 1870–71. Faced with two newly unified and fervently nationalistic powers on its eastern border, France never regained its position of dominance.
-
Both Napoleon and George W. Bush fell victim to the alignment problem. Their short-term military goals were misaligned with their countries’ long-term geopolitical goals. We can understand the whole of Clausewitz’s On War as a warning that “maximizing victory” is as shortsighted a goal as “maximizing user engagement.” According to the Clausewitzian model, only once the political goal is clear can armies decide on a military strategy that will hopefully achieve it. From the overall strategy, lower-ranking officers can then derive tactical goals. The model constructs a clear hierarchy between long-term policy, medium-term strategy, and short-term tactics. Tactics are considered rational only if they are aligned with some strategic goal, and strategy is considered rational only if it is aligned with some political goal. Even local tactical decisions of a lowly company commander must serve the war’s ultimate political goal.
-
Bostrom’s point was that the problem with computers isn’t that they are particularly evil but that they are particularly powerful. And the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals. If we give a misaligned goal to a pocket calculator, the consequences are trivial. But if we give a misaligned goal to a superintelligent machine, the consequences could be dystopian.
-
Is there a way to define whom computers should care about, without getting bogged down by some intersubjective myth? The most obvious suggestion is to tell computers that they must care about any entity capable of suffering. While suffering is often caused by belief in local intersubjective myths, suffering itself is nonetheless a universal reality. Therefore, using the capacity to suffer in order to define the critical in-group grounds morality in an objective and universal reality.
-
But in historical situations when the scales of suffering are more evenly matched, utilitarianism falters. In the early days of the COVID-19 pandemic, governments all over the world adopted strict policies of social isolation and lockdown. This probably saved the lives of several million people.[42] It also made hundreds of millions miserable for months. Moreover, it might have indirectly caused numerous deaths, for example by increasing the incidence of murderous domestic violence,[43] or by making it more difficult for people to diagnose and treat other dangerous illnesses, like cancer.[44] Can anyone calculate the total impact of the lockdown policies and determine whether they increased or decreased the suffering in the world? This may sound like a perfect task for a relentless computer network. But how would the computer network decide how many “misery points” to allocate to being locked down with three kids in a two-bedroom apartment for a month? Is that 60 misery points or 600?
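The arbitrariness is easy to demonstrate with a few invented numbers (mine, not the book's): the verdict on one and the same lockdown flips with a small change in a misery weight that no computation can objectively pin down.

```python
# Toy calculation (all numbers invented) of a lockdown's "net suffering".
lives_saved = 3_000_000
people_locked_down = 500_000_000
points_per_life_saved = 10_000     # arbitrary: suffering averted per life

for misery_per_person in (50, 70):  # is a month of lockdown 50 points or 70?
    benefit = lives_saved * points_per_life_saved
    cost = people_locked_down * misery_per_person
    print(f"misery={misery_per_person}: "
          f"{'net good' if benefit > cost else 'net harm'} "
          f"(benefit={benefit:,}, cost={cost:,})")
# A small shift in one arbitrary weight flips the verdict entirely;
# the computer can run the sum, but it cannot make the weights objective.
```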
-
The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present. Indeed, this is a trick traditional religions discovered thousands of years ago. The crimes of this world could too easily be excused by the promises of future salvation.
-
An analogous problem might well afflict computers. Of course, they cannot “believe” in any mythology, because they are nonconscious entities that don’t believe in anything. As long as they lack subjectivity, how can they hold intersubjective beliefs? However, one of the most important things to realize about computers is that when a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans. These inter-computer realities may eventually become as powerful—and as dangerous—as human-made intersubjective myths. This is a very complicated argument, but it is another of the central arguments of the book.
-
The Palestinian philosopher Sari Nusseibeh observed that “Jews and Muslims, acting on religious beliefs and backed up by nuclear capabilities, are poised to engage in history’s worst-ever massacre of human beings, over a rock.”[50] They don’t fight over the atoms that compose the rock; they fight over its “sanctity,” a bit like kids fighting over a Pokémon. The sanctity of the Holy Rock, and of Jerusalem generally, is an intersubjective phenomenon that exists in the communication network connecting many human minds. For thousands of years wars were fought over intersubjective entities like holy rocks. In the twenty-first century, we might see wars fought over inter-computer entities.
-
For tens of thousands of years, humans dominated planet Earth because we were the only ones capable of creating and sustaining intersubjective entities like corporations, currencies, gods, and nations, and using such entities to organize large-scale cooperation. Now computers may acquire comparable abilities. This isn’t necessarily bad news. If computers lacked connectivity and creativity, they would not be very useful.
-
The problem we face is not how to deprive computers of all creative agency, but rather how to steer their creativity in the right direction. It is the same problem we have always had with human creativity. The intersubjective entities invented by humans were the basis for all the achievements of human civilization, but they occasionally led to crusades, jihads, and witch hunts. The inter-computer entities will probably be the basis for future civilizations, but the fact that computers collect empirical data and use mathematics to analyze it doesn’t mean they cannot launch their own witch hunts.
-
Yet databases come with biases. The face-classification algorithms studied by Joy Buolamwini were trained on data sets of tagged online photos, such as the Labeled Faces in the Wild database. The photos in that database were taken mainly from online news articles. Since white males dominate the news, 78 percent of the photos in the database were of males, and 84 percent were of white people. George W. Bush appeared 530 times—more than twice as many times as all Black women combined.
-
Unfortunately, if real-life companies already suffer from some ingrained bias, the baby algorithm is likely to learn this bias, and even amplify it. For instance, an algorithm looking for patterns of “good employees” in real-life data may conclude that hiring the boss’s nephews is always a good idea, no matter what other qualifications they have.
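A toy version of such a "baby algorithm" (my sketch, trained on invented data) shows how the pattern gets learned: when historical promotions favored relatives, nepotism looks like the strongest signal in the data, and any model fit to it will reproduce the bias.

```python
import random

# Invented historical hiring data with an ingrained bias: relatives of
# the boss were promoted far more often than qualified outsiders.
random.seed(0)
history = []
for _ in range(10_000):
    qualified = random.random() < 0.5
    relative = random.random() < 0.1
    # Historical label: relatives promoted ~90% of the time,
    # qualified non-relatives ~50%, everyone else ~10%.
    promoted = random.random() < (0.9 if relative else (0.5 if qualified else 0.1))
    history.append((qualified, relative, promoted))

def promotion_rate(flag_index):
    rows = [h for h in history if h[flag_index]]
    return sum(h[2] for h in rows) / len(rows)

print("P(promoted | qualified):", round(promotion_rate(0), 2))  # ~0.54
print("P(promoted | relative): ", round(promotion_rate(1), 2))  # ~0.90
# Any model fit to this data will rank "relative" above "qualified":
# it has not discovered a truth about good employees, only inherited
# (and, deployed at scale, amplified) the old bias.
```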
-
Many of the algorithmic biases surveyed in this and previous chapters share the same fundamental problem: the computer thinks it has discovered some truth about humans, when in fact it has imposed order on them. A social media algorithm thinks it has discovered that humans like outrage, when in fact it is the algorithm itself that conditioned humans to produce and consume more outrage.
-
For computers to have a more accurate and responsible view of the world, they need to take into account their own power and impact. And for that to happen, the humans who currently engineer computers need to accept that they are not manufacturing new tools. They are unleashing new kinds of independent agents, and potentially even new kinds of gods.
-
As noted in chapter 2, large-scale societies cannot exist without some mythology, but that doesn’t mean all mythologies are equal. To guard against errors and excesses, some mythologies have acknowledged their own fallible origin and included a self-correcting mechanism allowing humans to question and change the mythology. That’s the model of the U.S. Constitution, for example. But how can humans probe and correct a computer mythology we don’t understand? One potential guardrail is to train computers to be aware of their own fallibility. As Socrates taught, being able to say “I don’t know” is an essential step on the path to wisdom. And this is true of computer wisdom no less than of human wisdom. The first lesson that every algorithm should learn is that it might make mistakes.
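In machine-learning terms, this first lesson has a concrete form: selective prediction, where a model abstains whenever its confidence is low. A minimal sketch (the threshold and labels are my assumptions; real systems calibrate the threshold on held-out data):

```python
# A classifier that can answer "I don't know" instead of always guessing.
def classify_with_abstention(probabilities, threshold=0.8):
    """probabilities: dict mapping label -> model confidence (sums to 1)."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "I don't know"   # defer to a human or gather more evidence
    return label

print(classify_with_abstention({"cat": 0.95, "dog": 0.05}))   # cat
print(classify_with_abstention({"cat": 0.55, "dog": 0.45}))   # I don't know
```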
-
This is a key difference between AI and previous existential threats like nuclear technology. The latter presented humankind with a few easily anticipated doomsday scenarios, most obviously an all-out nuclear war. This meant that it was feasible to conceptualize the danger in advance, and explore ways to mitigate it. In contrast, AI presents us with countless doomsday scenarios. Some are relatively easy to grasp, such as terrorists using AI to produce biological weapons of mass destruction. Some are more difficult to grasp, such as AI creating new psychological weapons of mass destruction. And some may be utterly beyond the human imagination, because they emanate from the calculations of an alien intelligence. To guard against a plethora of unforeseeable problems, our best bet is to create living institutions that can identify and respond to the threats as they arise.
Part III: Computer Politics
-
Novel technology often leads to historical disasters, not because the technology is inherently bad, but because it takes time for humans to learn how to use it wisely.
-
The first principle is benevolence. When a computer network collects information on me, that information should be used to help me rather than manipulate me. This principle has already been successfully enshrined by numerous traditional bureaucratic systems, such as health care. Take, for example, our relationship with our family physician.
-
The second principle that would protect democracy against the rise of totalitarian surveillance regimes is decentralization. A democratic society should never allow all its information to be concentrated in one place, no matter whether that hub is the government or a private corporation. It may be extremely helpful to create a national medical database that collects information on citizens in order to provide them with better health-care services, prevent epidemics, and develop new medicines. But it would be a very dangerous idea to merge this database with the databases of the police, the banks, or the insurance companies. Doing so might make the work of doctors, bankers, insurers, and police officers more efficient, but such hyper-efficiency can easily pave the way for totalitarianism. For the survival of democracy, some inefficiency is a feature, not a bug.
-
A fourth democratic principle is that surveillance systems must always leave room for both change and rest. In human history, oppression can take the form of either denying humans the ability to change or denying them the opportunity to rest. For example, the Hindu caste system was based on myths that said the gods divided humans into rigid castes, and any attempt to change one’s status was akin to rebelling against the gods and the proper order of the universe.
-
If three years of up to 25 percent unemployment could turn a seemingly prospering democracy into the most brutal totalitarian regime in history, what might happen to democracies when automation causes even bigger upheavals in the job market of the twenty- first century?
-
Similarly, to judge by their pay, you could assume that our society appreciates doctors more than nurses. However, it is harder to automate the job of nurses than the job of at least those doctors who mostly gather medical data, provide a diagnosis, and recommend treatment. These tasks are essentially pattern recognition, and spotting patterns in data is one thing AI does better than humans. In contrast, AI is far from having the skills necessary to automate nursing tasks such as replacing bandages on an injured person or giving an injection to a crying child.[10] These two examples don’t mean that dish washing or nursing could never be automated, but they indicate that people who want a job in 2050 should perhaps invest in their motor and social skills as much as in their intellect.
-
Another 2023 study prompted patients to seek online medical advice from ChatGPT and from human doctors, without knowing whom they were interacting with. The medical advice given by ChatGPT was later judged by experts to be more accurate and appropriate than the advice given by the humans. More crucially for the issue of emotional intelligence, the patients themselves rated ChatGPT as more empathic than the human doctors.
-
We regard entities as conscious not because we have proof of it but because we develop intimate relationships with them and become attached to them.
-
Progressives tend to downplay the importance of traditions and existing institutions and to believe that they know how to engineer better social structures from scratch. Conservatives tend to be more cautious. Their key insight, formulated most famously by Edmund Burke, is that social reality is much more complicated than the champions of progress grasp and that people aren’t very good at understanding the world and predicting the future. That’s why it’s best to keep things as they are— even if they seem unfair— and if some change is inescapable, it should be limited and gradual. Society functions through an intricate web of rules, institutions, and customs that accumulated through trial and error over a long time.
-
Nobody knows for sure why all this is happening. One hypothesis is that the accelerating pace of technological change with its attendant economic, social, and cultural transformations might have made the moderate conservative program seem unrealistic. If conserving existing traditions and institutions is hopeless, and some kind of revolution looks inevitable, then the only means to thwart a left-wing revolution is by striking first and instigating a right-wing revolution. This was the political logic in the 1920s and 1930s, when conservative forces backed radical fascist revolutions in Italy, Germany, Spain, and elsewhere as a way—so they thought—to preempt a Soviet-style left-wing revolution.
-
When both conservatives and progressives resist the temptation of radical revolution, and stay loyal to democratic traditions and institutions, democracies prove themselves to be highly agile. Their self-correcting mechanisms enable them to ride the technological and economic waves better than more rigid regimes. Thus, those democracies that managed to survive the tumultuous 1960s—like the United States, Japan, and Italy—adapted far more successfully to the computer revolution of the 1970s and 1980s than either the communist regimes of Eastern Europe or the fascist holdouts of southern Europe and South America.
-
The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes.
-
In order to function, however, democratic self-correcting mechanisms need to understand the things they are supposed to correct. For a dictatorship, being unfathomable is helpful, because it protects the regime from accountability. For a democracy, being unfathomable is deadly. If citizens, lawmakers, journalists, and judges cannot understand how the state’s bureaucratic system works, they can no longer supervise it, and they lose trust in it.
-
The Wisconsin Supreme Court was not completely unaware of the danger inherent in relying on opaque algorithms. Therefore, while permitting the practice, it ruled that whenever judges receive algorithmic risk assessments, these must include a written warning for the judges about the algorithms’ potential biases. The court further advised judges to be cautious when relying on such algorithms. Unfortunately, this caveat was an empty gesture. The court did not provide any concrete instruction for judges on how they should exercise such caution. In its discussion of the case, the Harvard Law Review concluded that “most judges are unlikely to understand algorithmic risk assessments.”
-
Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In East Asia go is considered much more than a game: it is a treasured cultural tradition. Alongside calligraphy, painting, and music, go has been one of the four arts that every refined person was expected to know. For over twenty-five hundred years, tens of millions of people have played go, and entire schools of thought have developed around the game, espousing different strategies and philosophies. Yet during all those millennia, human minds have explored only certain areas in the landscape of go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.[35] Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it. Even if a court had ordered DeepMind to provide Lee Sedol with an explanation, nobody could fulfill that order.
-
What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero? The increasing unfathomability of our information network is one of the reasons for the recent wave of populist parties and charismatic leaders. When people can no longer make sense of the world, and when they feel overwhelmed by immense amounts of information they cannot digest, they become easy prey for conspiracy theories, and they turn for salvation to something they do understand— a human.
-
We are so bad at weighing together many different factors that when people give a large number of reasons for a particular decision, it usually sounds suspicious.
-
While individual laypersons may be unable to vet complex algorithms, a team of experts getting help from their own AI sidekicks can potentially assess the fairness of algorithmic decisions even more reliably than anyone can assess the fairness of human decisions. After all, while human decisions may seem to rely on just those few data points we are conscious of, in fact our decisions are subconsciously influenced by thousands of additional data points.
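One concrete check such an auditing team might run is a disparate-impact test: compare a model's approval rates across groups. A sketch with invented decision logs; the 80 percent cutoff is the "four-fifths" rule of thumb from US employment law.

```python
# Invented decision log; in practice these are the model's recorded outputs.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    rows = [ok for g, ok in decisions if g == group]
    return sum(rows) / len(rows)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:   # "four-fifths" rule
    print("flag for review: disparate impact suspected")
```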
-
This raises the question of how we can be sure that the vetting algorithm itself is reliable. Ultimately, there is no purely technological solution to this recursive problem. No matter which technology we develop, we will have to maintain bureaucratic institutions that will audit algorithms and give or refuse them the seal of approval. Such institutions will combine the powers of humans and computers to make sure that new algorithmic systems are safe and fair.
-
To vet algorithms, regulatory institutions will need not only to analyze them but also to translate their discoveries into stories that humans can understand. Otherwise, we will never trust the regulatory institutions and might instead put our faith in conspiracy theories and charismatic leaders.
-
As computers increasingly replace human bureaucrats and human mythmakers, the deep structure of power will change again. To survive, democracies require not just dedicated bureaucratic institutions that can scrutinize these new structures but also artists who can explain the new structures in accessible and entertaining ways. This has been done successfully, for example, by the episode “Nosedive” in the sci-fi series Black Mirror. Produced in 2016, at a time when few had heard about social credit systems, “Nosedive” brilliantly explained how such systems work and what threats they pose.
-
To function, a democracy needs to meet two conditions: it needs to enable a free public conversation on key issues, and it needs to maintain a minimum of social order and institutional trust. Free conversation must not slip into anarchy.
-
If no agreement is reached on how to conduct the public debate and how to reach decisions, the result is anarchy rather than democracy. The anarchical potential of AI is particularly alarming, because it is not only new human groups that it allows to join the public debate. For the first time ever, democracy must contend with a cacophony of nonhuman voices, too. On many social media platforms, bots constitute a sizable minority of participants.
-
If I engage online in a political debate with an AI, it is a waste of time for me to try to change the AI’s opinions; being a nonconscious entity, it doesn’t really care about politics, and it cannot vote in the elections. But the more I talk with the AI, the better it gets to know me, so it can gain my trust, hone its arguments, and gradually change my views. In the battle for hearts and minds, intimacy is an extremely powerful weapon. Previously, political parties could command our attention, but they had difficulty mass-producing intimacy. Radio sets could broadcast a leader’s speech to millions, but they could not befriend the listeners. Now a political party, or even a foreign government, could deploy an army of bots that build friendships with millions of citizens and then use that intimacy to influence their worldview.
-
The philosopher Daniel Dennett has suggested that we can take inspiration from traditional regulations in the money market.[52] Ever since coins and later banknotes were invented, it was always technically possible to counterfeit them. Counterfeiting posed an existential danger to the financial system, because it eroded people’s trust in money. If bad actors had flooded the market with counterfeit money, the financial system would have collapsed. Yet the financial system managed to protect itself for thousands of years by enacting laws against counterfeiting money. As a result, only a relatively small percentage of money in circulation was forged, and people’s trust in it was maintained.
-
Another important measure democracies can adopt is to ban unsupervised algorithms from curating key public debates. We can certainly continue to use algorithms to run social media platforms; obviously, no human can do that. But the principles the algorithms use to decide which voices to silence and which to amplify must be vetted by a human institution.
-
If unfathomable algorithms take over the conversation, and particularly if they quash reasoned arguments and stoke hate and confusion, public discussion cannot be maintained. Yet if democracies do collapse, it will likely result not from some kind of technological inevitability but from a human failure to regulate the new technology wisely.
-
The rise of machine-learning algorithms, however, may be exactly what the Stalins of the world have been waiting for. AI could tilt the technological balance of power in favor of totalitarianism. Indeed, whereas flooding people with data tends to overwhelm them and therefore leads to errors, flooding AI with data tends to make it more efficient. Consequently, AI seems to favor the concentration of information and decision making in one place.
-
Some people believe that blockchain could provide a technological check on such totalitarian tendencies, because blockchain is inherently friendly to democracy and hostile to totalitarianism. In a blockchain system, decisions require the approval of 51 percent of users. That may sound democratic, but blockchain technology has a fatal flaw. The problem lies with the word “users.” If one person has ten accounts, she counts as ten users. If a government controls 51 percent of accounts, then the government constitutes 51 percent of the users. There are already examples of blockchain networks where a government is 51 percent of users.[7] And when a government is 51 percent of users in a blockchain, it has control not just over the chain’s present but even over its past. Autocrats have always wanted the power to change the past. Roman emperors, for example, frequently engaged in the practice of damnatio memoriae—expunging the memory of rivals and enemies.
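A toy model (mine) makes the flaw explicit: a blockchain-style vote counts accounts, not people, so one actor who registers 51 percent of the accounts wins every "majority" decision.

```python
# Toy Sybil scenario: one actor controls 51% of the accounts.
TOTAL = 1_000
gov = 510                    # accounts controlled by a single actor
others = TOTAL - gov         # every remaining account, one per person

def passes(gov_votes_yes: bool, others_vote_yes: bool) -> bool:
    yes = (gov if gov_votes_yes else 0) + (others if others_vote_yes else 0)
    return yes / TOTAL > 0.5

print(passes(True, False))   # True: the actor alone carries any vote
print(passes(False, True))   # False: everyone else together cannot
```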
-
In the long term, totalitarian regimes are likely to face an even bigger danger: instead of criticizing them, an algorithm might gain control of them. Throughout history, the biggest threat to autocrats usually came from their own subordinates.
-
How could he topple the man who controlled not just the bodyguards but also all communications with the outside world? If he tried to make a move, Sejanus could imprison him on Capri indefinitely and inform the Senate and the army that the emperor was too ill to travel anywhere. Tiberius nevertheless managed to turn the tables on Sejanus. As Sejanus grew in power and became preoccupied with running the empire, he lost touch with the day-to-day minutiae of Rome’s security apparatus. Tiberius managed to secretly gain the support of Naevius Sutorius Macro, commander of Rome’s fire brigade and night watch. Macro orchestrated a coup against Sejanus, and as a reward Tiberius made Macro the new commander of the Praetorian Guard. A few years later, Macro had Tiberius killed.[14] Power lies at the nexus where the information channels merge. Since Tiberius allowed the information channels to merge in the person of Sejanus, the latter became the true center of power, while Tiberius was reduced to a puppet.
-
Dictators have always suffered from weak self- correcting mechanisms and have always been threatened by powerful subordinates. The rise of AI may greatly exacerbate these problems. The computer network therefore presents dictators with an excruciating dilemma. They could decide to escape the clutches of their human underlings by trusting a supposedly infallible technology, in which case they might become the technology’s puppet. Or, they could build a human institution to supervise the AI, but that institution might limit their own power, too. If even just a few of the world’s dictators choose to put their trust in AI, this could have far- reaching consequences for the whole of humanity.
-
But the weakest spot in humanity’s anti-AI shield is probably the dictators. The easiest way for an AI to seize power is not by breaking out of Dr. Frankenstein’s lab but by ingratiating itself with some paranoid Tiberius.
-
After 1945, dictators and their subordinates cooperated with democratic governments and their citizens to contain nuclear weapons. On July 9, 1955, Albert Einstein, Bertrand Russell, and a number of other eminent scientists and thinkers published the Russell-Einstein Manifesto, calling on the leaders of both democracies and dictatorships to cooperate on preventing nuclear war. “We appeal,” said the manifesto, “as human beings, to human beings: remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.”[15] This is true of AI too. It would be foolish of dictators to believe that AI will necessarily tilt the balance of power in their favor. If they aren’t careful, AI will just grab power to itself.
-
Many societies— both democracies and dictatorships— may act responsibly to regulate such usages of AI, clamp down on bad actors, and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind. Climate change can devastate even countries that adopt excellent environmental regulations, because it is a global rather than a national problem. AI, too, is a global problem.
-
Just as in the early nineteenth century the effort to build railways was pioneered by private entrepreneurs, so in the early twenty- first century private corporations were the initial main competitors in the AI race. The executives of Google, Facebook, Alibaba, and Baidu saw the value of recognizing cat images before the presidents and generals did.
-
In the nineteenth century, China was late to appreciate the potential of the Industrial Revolution and was slow to adopt inventions like railroads and steamships. It consequently suffered what the Chinese call “the century of humiliations.” After centuries as the world’s greatest superpower, China was brought to its knees by its failure to adopt modern industrial technology. It was repeatedly defeated in wars, partially conquered by foreigners, and thoroughly exploited by the powers that did understand railroads and steamships. The Chinese vowed never again to miss the train. In 2017, China’s government released its “New Generation Artificial Intelligence Plan,” which announced that “by 2030, China’s AI theories, technologies, and application should achieve world-leading levels, making China the world’s primary AI innovation center.”
-
Becoming a data colony will have economic as well as political and social consequences. In the nineteenth and twentieth centuries, if you were a colony of an industrial power like Belgium or Britain, it usually meant that you provided raw materials, while the cutting-edge industries that made the biggest profits remained in the imperial hub. Egypt exported cotton to Britain and imported high-end textiles. Malaya provided rubber for tires; Coventry made the cars.[18] Something analogous is likely to happen with data colonialism. The raw material for the AI industry is data. To produce AI that recognizes images, you need cat photos. To produce the trendiest fashion, you need data on fashion trends. To produce autonomous vehicles, you need data about traffic patterns and car accidents. To produce health-care AI, you need data about genes and medical conditions. In a new imperial information economy, raw data will be harvested throughout the world and will flow to the imperial hub.
-
During the Industrial Revolution machines became more important than land. Factories, mines, railroad lines, and electrical power stations became the most valuable assets. It was somewhat easier to concentrate these kinds of assets in one place. The British Empire could centralize industrial production in its home islands, extract raw materials from India, Egypt, and Iraq, and sell them finished goods made in Birmingham or Belfast. Unlike the Roman Empire, Britain was the seat of both political and economic power. But physics and geology still put natural limits on this concentration of wealth and power. The British couldn’t move every cotton mill from Calcutta to Manchester, or shift the oil wells from Kirkuk to Yorkshire. Information is different. Unlike cotton and oil, digital data can be sent from Malaysia or Egypt to Beijing or San Francisco at almost the speed of light. And unlike land, oil fields, or textile factories, algorithms don’t take up much space. Consequently, unlike industrial power, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
-
AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more. Meanwhile, the value of unskilled laborers in left-behind countries will decline, and they will not have the resources to retrain their workforce, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.[22] According to the global accounting firm PricewaterhouseCoopers, AI is expected to add $15.7 trillion to the global economy by 2030. But if current trends continue, it is projected that China and North America—the two leading AI superpowers—will together take home 70 percent of that money.
-
These economic and geopolitical dynamics could divide the world between two digital empires. During the Cold War, the Iron Curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the Silicon Curtain. The Silicon Curtain is made of code, and it passes through every smartphone, computer, and server in the world. The code on your smartphone determines on which side of the Silicon Curtain you live, which algorithms run your life, who controls your attention, and where your data flows. It is becoming difficult to access information across the Silicon Curtain, say between China and the United States, or between Russia and the EU.
-
For centuries, new information technologies fueled the process of globalization and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality. While the web has been our main metaphor in recent decades, the future might belong to cocoons.
-
Guessing future cultural and ideological developments is usually a fool’s errand. It is far more difficult than predicting economic and geopolitical developments. How many Romans or Jews in the days of Tiberius could have anticipated that a splinter Jewish sect would eventually take over the Roman Empire and that the emperors would abandon Rome’s old gods to worship an executed Jewish rabbi?
-
This was also the view of Jesus himself and the first Christians. Jesus promised his followers that soon the Kingdom of God would be built here on earth and they would inhabit it in their material bodies. When Jesus died without fulfilling his promise, his early followers came to believe that he was resurrected in the flesh and that when the Kingdom of God finally materialized on earth, they too would be resurrected in the flesh.
-
The different approaches to the mind-body problem influenced how people treated their own bodies. Saints, hermits, and monks made breathtaking experiments in pushing the human body to its limits. Just as Christ allowed his body to be tortured on the cross, so these “athletes of Christ” allowed lions and bears to rip them apart while their souls rejoiced in divine ecstasy. They wore hair shirts, fasted for weeks, or stood for years on a pillar—like the famous Simeon who allegedly stood for about forty years on top of a pillar near Aleppo.[35] Other Christians took the opposite approach, believing that the body didn’t matter at all. The only thing that mattered was faith. This idea was taken to extremes by Protestants like Martin Luther, who formulated the doctrine of sola fide: only faith. After living as a monk for about ten years, fasting and torturing his body in various ways, Luther despaired of these bodily exercises. He reasoned that no bodily self-torments could force God to redeem him. Indeed, thinking he could win his own salvation by torturing his body was the sin of pride. Luther therefore disrobed, married a former nun, and told his followers that to be good Christians, the only thing they needed was to have complete faith in Christ.
-
These ancient theological debates about mind and body may seem utterly irrelevant to the AI revolution, but they have in fact been resurrected by twenty-first-century technologies. What is the relationship between our physical body and our online identities and avatars? What is the relation between the offline world and cyberspace? Suppose I spend most of my waking hours sitting in my room in front of a screen, playing online games, forming virtual relationships, and even working remotely. I hardly venture out even to eat. I just order takeout. If you are like ancient Jews and the first Christians, you would pity me and conclude that I must be living in a delusion, losing touch with the reality of physical spaces and flesh-and-blood bodies. But if your thinking is closer to that of Luther and many later Christians, you might think I am liberated. By shifting most of my activities and relationships online, I have released myself from the limited organic world of debilitating gravity and corrupt bodies and can enjoy the unlimited possibilities of a digital world, which is potentially liberated from the laws of biology and even physics. I am free to roam a much vaster and more exciting space and to explore new aspects of my identity.
-
A society that understands identities in terms of biological bodies should also care more about material infrastructure like sewage pipes and about the ecosystem that sustains our bodies. It will see the online world as an auxiliary of the offline world that can serve various useful purposes but can never become the central arena of our lives. Its aim would be to create an ideal physical and biological realm—the Kingdom of God on earth. In contrast, a society that downplays biological bodies and focuses on online identities may well seek to create an immersive Kingdom of God in cyberspace while discounting the fate of mere material things like sewage pipes and rain forests. This debate could shape attitudes not only toward organisms but also toward digital entities. As long as society defines identity by focusing on physical bodies, it is unlikely to view AIs as persons. But if society gives less importance to physical bodies, then even AIs that lack any corporeal manifestations may be accepted as legal persons enjoying various rights.
-
What happens, for example, if the American sphere discounts the body, defines humans by their online identity, recognizes AIs as persons, and downplays the importance of the ecosystem, whereas the Chinese sphere adopts opposite positions? Current disagreements about violations of human rights or adherence to ecological standards will look minuscule in comparison. The Thirty Years’ War—arguably the most devastating war in European history—was fought at least in part because Catholics and Protestants couldn’t agree on doctrines like sola fide and on whether Christ was divine, human, or nonbinary. Might future conflicts start because of an argument about AI rights and the nonbinary nature of avatars? As noted, these are all wild speculations, and in all likelihood actual cultures and ideologies will develop in different—and perhaps even wilder—directions.
-
The threat of losing control of AIs is an analogous situation in which patriotism and global cooperation must go together. An out-of-control AI, just like an out-of-control virus, endangers humans in every nation. No human collective—whether a tribe, a nation, or the entire species—stands to benefit from letting power shift from humans to algorithms. Contrary to what populists argue, globalism doesn’t mean establishing a global empire, abandoning national loyalties, or opening borders to unlimited immigration. In fact, global cooperation means two far more modest things: first, a commitment to some global rules. These rules don’t deny the uniqueness of each nation and the loyalty people should owe their nation. They just regulate the relations between nations. A good model is the World Cup. The World Cup is a competition between nations, and people often show fierce loyalty to their national team. At the same time, the World Cup is an amazing display of global agreement. Brazil cannot play football against Germany unless Brazilians and Germans first agree on the same set of rules for the game. That’s globalism in action.
-
This grim view of international relations is akin to the populist and Marxist views of human relations, in that they all see humans as interested only in power. And they are all founded upon a deeper philosophical theory of human nature, which the primatologist Frans de Waal termed “veneer theory.” It argues that at heart humans are Stone Age hunters who cannot but see the world as a jungle where the strong prey upon the weak and where might makes right. For millennia, the theory goes, humans have tried to camouflage this unchanging reality under a thin and mutable veneer of myths and rituals, but we have never really broken free from the law of the jungle. Indeed, our myths and rituals are themselves a weapon used by the jungle’s top dogs to deceive and trap their inferiors. Those who don’t realize this are dangerously naive and will fall prey to some ruthless predator.
-
As de Waal and many other biologists documented in numerous studies, real jungles—unlike the one in our imagination—are full of cooperation, symbiosis, and altruism displayed by countless animals, plants, fungi, and even bacteria. Eighty percent of all land plants, for example, rely on symbiotic relationships with fungi, and almost 90 percent of vascular plant families enjoy symbiotic relationships with microorganisms. If organisms in the rain forests of Amazonia, Africa, or India abandoned cooperation in favor of an all-out competition for hegemony, the rain forests and all their inhabitants would quickly die. That’s the law of the jungle.
-
Some periods were exceptionally violent, whereas others were relatively peaceful. The clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation. A hundred thousand years ago, Sapiens could cooperate only at the level of bands. Over the millennia, we have found ways to create communities of strangers, first on the level of tribes and eventually on the level of religions, trade networks, and states. Realists should note that states are not the fundamental particles of human reality, but rather the product of arduous processes of building trust and cooperation. If humans were interested only in power, they could never have created states in the first place. Sure, conflicts have always remained a possibility—both between and within states—but they have never been an inescapable destiny.
-
Numerous statistics attest to the decline of war in this post-1945 era, but perhaps the clearest evidence is found in state budgets. For most of recorded history, the military was the number one item on the budget of every empire, sultanate, kingdom, and republic. Governments spent little on health care and education, because most of their resources were consumed by paying soldiers, constructing walls, and building warships. When the bureaucrat Chen Xiang examined the annual budget of the Chinese Song dynasty for the year 1065, he found that out of sixty million minqian (a currency unit), fifty million (83 percent) were consumed by the military.
-
During World War II, the U.K. figure rose to 69 percent and the U.S. figure to 71 percent.[56] Even during the détente years of the 1970s, Soviet military expenditure still amounted to 32.5 percent of the budget.[57] State budgets in more recent decades make for far more hopeful reading material than any pacifist tract ever composed. In the early twenty-first century, the worldwide average government expenditure on the military has been only around 7 percent of the budget, and even the dominant superpower of the United States spent only around 13 percent of its annual budget to maintain its military hegemony.
-
Worldwide average expenditure on health care in the early twenty-first century has been about 10 percent of the government budget, or about 1.4 times the defense budget.[59] For many people in the 2010s, the fact that the health-care budget was bigger than the military budget was unremarkable. But it was the result of a major change in human behavior, and one that would have sounded impossible to most previous generations.
Epilogue
-
These historical comparisons underestimate both the unprecedented nature of the AI revolution and the negative aspects of previous revolutions. The immediate results of the print revolution included witch hunts and religious wars alongside scientific discoveries, while newspapers and radio were exploited by totalitarian regimes as well as by democracies.
-
One lesson is that the invention of new information technology is always a catalyst for major historical changes, because the most important role of information is to weave new networks rather than represent preexisting realities. By recording tax payments, clay tablets in ancient Mesopotamia helped forge the first city-states. By canonizing prophetic visions, holy books spread new kinds of religions. By swiftly disseminating the words of presidents and citizens, newspapers and telegraphs opened the door to both large-scale democracy and large-scale totalitarianism. The information thus recorded and distributed was sometimes true, often false, but it invariably created new connections between larger numbers of people.
-
When church fathers like Bishop Athanasius decided to include 1 Timothy in the biblical dataset while excluding the Acts of Paul and Thecla, they shaped the world for millennia. Billions of Christians down to the twenty-first century have formed their views of the world based on the misogynist ideas of 1 Timothy rather than on the more tolerant attitude of Thecla. Even today it is difficult to reverse course, because the church fathers chose not to include any self-correcting mechanisms in the Bible. The present-day equivalents of Bishop Athanasius are the engineers who write the initial code for AI, and who choose the dataset on which the baby AI is trained. As AI grows in power and authority, and perhaps becomes a self-interpreting holy book, so the decisions made by present-day engineers could reverberate down the ages.
-
Information isn’t truth. Its main task is to connect rather than represent, and information networks throughout history have often privileged order over truth. Tax records, holy books, political manifestos, and secret police files can be extremely efficient in creating powerful states and churches, which hold a distorted view of the world and are prone to abuse their power. More information, ironically, can sometimes result in more witch hunts.
-
The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. Doing so is not a matter of inventing another miracle technology or landing upon some brilliant idea that has somehow escaped all previous generations. Rather, to create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. That is perhaps the most important takeaway this book has to offer. This wisdom is much older than human history. It is elemental, the foundation of organic life. The first organisms weren’t created by some infallible genius or god. They emerged through an intricate process of trial and error.