Nexus: AI's Impact on Society

Status: completed


Personal notes from reading Nexus by Yuval Noah Harari. They mostly aim to echo the book's thoughts, though they inevitably carry my perspective, if only through the selection of which thoughts to recount. I found the historical contextualization of algorithmic advancements the most compelling aspect, and the assessment of AI's capabilities and shortcomings seems fair to me.
This summary neither replaces working through the book nor aspires to. I hope it helps you evaluate whether the covered topics are of interest to you, or helps you process the book.

Initial Skim

Purpose: What is this book about? What problem is the author trying to solve?

The dangers of idea-generating AI for humanity, and how it affects both democracies and dictatorships.

Structure: How is the book structured to make its argument?

  • Learn from history: How have information systems and beliefs in infallible systems worked out?
  • Make an argument against the "naive view of information". Accessibility of information does not automatically improve things.
  • Argue that history has shown self-correcting systems to be superior.
  • Assess the impact (predominantly the risks) of current AI advancements. How does that impact differ from history, and which lessons still apply?
  • How should we handle it?

Initial Impression: Why did I choose this book? What do I expect to learn?

I work around AI. I want to be thoughtful about the social implications and hope this book provides a framework for that. Harari has written other fantastic books.

Notes


Part I: Human Networks

Chapter 1&2: What is information?

  • Objective reality: laws of physics, existence of a tree
  • Subjective reality: personal experience, perception and feelings
  • Intersubjective reality: shared reality that exists because a large group of people collectively believes in it. Currencies, gods, laws, corporations. These shape human society despite being fictions.

Information networks, especially the ones of intersubjective realities, are the foundation of human cooperation. The realities are shaped through stories, which do not need to be factually accurate.

Networks drive advancement and cooperation (science), but can create division when competing narratives exist (religion, multiple states/nations).

Chapter 3: Documents

Stories are emotional and ideological glue. Documents turn stories into systems that can be structurally categorized. They ensure accurate, long-term survival of information.

With many documents, a new problem arises: information retrieval.

  • Archivists introduced non-organic structures -> bureaucracy.
  • In nature, retrieval is organic: if you search for an apple, find the apple tree.
  • In a bureaucracy, a tax document first needs to be placed into the tax cabinet or folder before it can be found.

A bureaucracy creates imperfect structures that enable collaboration in massive-scale information networks.

How something is labeled by a bureaucracy matters to its life: be it the classification of a virus as a life form or not, be the labeling done by human experts or by algorithms, be it accurate or not.

Chapter 4: The Fantasy of Infallibility

Claiming the text is from a higher intelligence:

  • Circumvents discussion by the populace
  • Makes it difficult to adjust over time, e.g. religious documents like the Bible cannot really be updated.

Religion

A written document ensures the story is told identically everywhere. With the invention of the book and the aforementioned claim that its content stems from a higher, godly intelligence, enormous impact could be achieved (the Bible).

Naive View of Information

The ideal of a free flow of all information: the more available information, the better. The rationale is that all wrong information will be exposed and truth will emerge on top. Harari labels that wishful thinking.
As a historical example he uses the print revolution, which churches opposed. All kinds of people could distribute their ideas much more freely.
It allowed the rapid spread of scientific facts (partially enabling the scientific revolution), but also of conspiracy theories, leading to grueling witch hunts through the then-bestseller "The Hammer of the Witches". This illustrates the dark side of creating an information sphere. The witch-hunting bureaucracy invented the intersubjective category of witches and imposed it on reality.

Institutions of Information

Should optimize for "How much proof of being true does this have?" over "How many people will read this?"
They rely on collaboration, as all individuals are susceptible to personal biases. Self-skepticism is rewarded and an integral part.

Self-Correcting Mechanisms

Self-correction is natural and needed to survive: we learn how to walk by falling.
Institutions that cannot admit and correct mistakes struggle. The church, for example, has only weak self-correcting mechanisms: it can condemn a misbehaving priest as a sinner, but cannot argue with the infallible Bible and is never wrong as an institution. The church's behaviour (e.g. its acceptance of women) has changed over time, corrected through self-reflection and external pressure. But the correction is not celebrated; it is rather reported as a previous misdoing or a misinterpretation of the Bible.

Personal thought: How do our political systems fare here? Do we really celebrate when we recognize we were wrong?

Scientific institutions must have strong self-correcting mechanisms, or they are not to be labeled as such.
While groundbreaking theories (quasi-atoms, relativity theory, quantum mechanics) face pushback and personal attacks, perceptions change once there is enough evidence.

US newspapers criticized, and still criticize, US war crimes in Vietnam; they represent a self-correcting mechanism.

The question is whether AI will favour or undermine political self-correcting mechanisms.

Chapter 5: Decisions: History of Democracy and Totalitarianism

Totalitarianism: Form of dictatorship that centralizes information flow and decision making.

Democracy:

  • Decentralized Information Flow
  • People make their own decisions, including minorities
  • Self-correcting mechanisms
    • Votes
    • No dismantling of minorities' rights, e.g. removing their voting power.
    • No dismantling of courts and media
  • Electoral victory does not grant unlimited power
  • Only a small set of decisions is made by majority vote, e.g. going to war or not, taxes, ...
  • The government protects its members from being murdered; otherwise it is anarchy.
    • Who defines "human rights"?
    • Right of religion?
    • Right of AI?

Elections determine what people desire, not what is objectively true.

The reality of climate change should be determined by scientific institutions and presented as such. What to do about it is a question of desire and should be voted on (cut emissions vs. keep living as is). A democracy should not sacrifice the truth (e.g. say climate change is a hoax).

Democracy is complicated, with many legitimate conversations going on in parallel. Single-decision-maker systems are far easier.

Populism claims to represent the people and denies other groups' claim to do the same, arguing that the others are misguided.

Prior to the development of modern information technologies, there were no large-scale democracies anywhere, though in many autocracies local issues were resolved democratically.
Mass media made mass democracy possible.

Totalitarian systems assume the infallibility of their system and seek total control over the people.
Autocratic regimes (e.g. the Roman Empire) have infallible rulers, but do not know what everyone thinks and does.

Parallels from Soviet Collectivization to the Witch-hunt

Belief in perceived scientific data: in collectivization, it was the concept of uniform, state-managed farming. Planners did not account for human factors, such as farmers not wanting to hand over their land and lose sovereignty, which reduced efficiency. They blamed "kulaks", supposedly capitalist farmers trying to undermine the system, and believed they could identify them based on data. This led to wrongful prosecutions.
The intersubjective reality "kulak" was imposed on people. They were enslaved.

In totalitarianism, all information should flow through the center. This is an issue when an information flow is blocked, which happens frequently, since the goal is not finding truth but maintaining stability. Bad news tends not to get passed on, out of fear of reprimands.
Besides, with the government being infallible, there is no need for a free press, science, or free courts, no way to report abuse, and no way to challenge the system.
Mindset: questions lead to trouble (not to answers).


Part II: The Inorganic Network

Here, Harari moves to today's challenges. How does AI influence our information systems?

Difference to the printing press

Computers can and do make decisions.
A current example of new technology majorly impacting society: Facebook's news feed algorithms fueling hatred in Myanmar. Auto-suggestions for the next video were driven by user engagement, which the algorithms found to rise when triggering outrage. Combined with Facebook being the major source of information, this fueled mass killings.
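The engagement-driven ranking described above can be illustrated with a minimal, hypothetical sketch. This is not Facebook's actual system; the scoring and field names are invented for illustration. It shows the core failure mode: when a feed is sorted purely by predicted engagement, outrage-triggering content rises to the top regardless of accuracy or harm.

```python
# Minimal, hypothetical sketch of engagement-driven feed ranking.
# Not Facebook's actual algorithm; scores and field names are invented.

def rank_feed(posts):
    """Order posts purely by predicted engagement, ignoring accuracy or harm."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

posts = [
    {"title": "Local weather report", "predicted_engagement": 0.2},
    {"title": "Outrage-bait rumor", "predicted_engagement": 0.9},
    {"title": "Science explainer", "predicted_engagement": 0.4},
]

feed = rank_feed(posts)
# The outrage-bait rumor lands on top, because engagement is the only signal scored.
print([p["title"] for p in feed])
```

The point of the sketch: no one explicitly programmed "promote outrage"; it emerges from optimizing a single proxy metric.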

The book rejects the take that AI is only the result of human engineering and merely follows executive orders.
AIs can take decisions and actions no one foresaw. Facebook did not intend to cause mass killings in Myanmar.

Intelligence <-> Consciousness

Intelligence: Ability to attain goals such as driving KPIs
Consciousness: Ability to experience subjective feelings such as pain, love.

We don't know how consciousness arises in carbon-based lifeforms and thus cannot predict whether it might one day arise in machines.

Information chain

Document-to-document chains are a novel information flow that formerly needed to go through humans.
A news outlet might post a story. One AI identifies it as fraudulent and releases a note on that, which in turn might be interpreted by finance bots and impact the market. All within seconds.

Computers are already superior information networks to humans. They can already act better in the financial market than most humans, e.g. on questions like "When is it advisable to short the price of oil?". We may reach a point where computers dominate the financial market.

The same applies to laws.

Computers Gain the Power of Language

With computers able to converse with us and affect us emotionally, new realities arise.

  • Politics: Democracies are based on human conversations. We might be arguing with a bot that even sways our opinion.
  • Religion: The previous holy books required human interpretation and were spread by humans. Modern AI can do all of that itself.

Large-scale human society lives on stories, as discussed in [[Harari - Homo Deus]]. Computers do not need to feel anything; they just need to know how to make us feel a certain way.

Information isn't truth. Information creates political structures.

Chapter 7: Relentless: The Network Is Always On

We are used to being watched around the clock. By other animals, by relatives, by neighbouring tribes.
Today's "negative" drivers:

  • Government: Have you paid your taxes, are you plotting a revolution?
  • Religion: Did you masturbate? Did you go to church?
  • Corporations: How do you behave so we can sell our product?

At the same time:

  • Government: Where do you get your water from? How can we prevent the spread of diseases?
  • Religion: Who needs support?

Though, so far surveillance has always been incomplete (either not feasible or limited by rules). It relied on human observers and could not read thoughts.

The analysis of the gathered data is no longer bottlenecked by human analysts.

At present, the smartphone is a superior surveillance tool compared to biometric sensors, whose data has yet to be deciphered. Once biometric data is understood, algorithms can be used to trigger specific emotions in people far more precisely.

Chapter 8: Fallible: The Network is Often Wrong

Which metric would we optimize a "good" AI for? We have so far failed to define rational, ultimate goals.

Should the machine kill a cat or a human? If we say the cat, because it has less capacity for suffering, we are agreeing to a form of utilitarianism.
But how do we measure suffering? Where does it end? Should it extend beyond species?
If you have a strong enough belief in a future utopia, it can become a license to inflict brutal pain in the present.

We have a growing reliance on algorithm-driven systems because they are powerful. The consequences of their errors will grow increasingly severe. We need to understand their limitations outside the tech industry as well.

Intercomputer Reality

An easy example is online video games. But it can also be an algorithmic score passed on to another digital product.

Humankind became dominant through the creation and organization of intersubjective entities. Now computers may do the same, with us excluded.

We could pass interhuman myths (e.g. "women are not good at engineering") through training data into intercomputer myths. We must remove bias at the beginning; otherwise computers are likely to perpetuate and magnify the issue.

Computers can adapt themselves to changing circumstances, unlike the Bible. Thus, there are voices saying we should treat computers as a new godlike instance. But how can humans probe and correct a computer mythology we do not understand?


Part III: Computer Politics

Chapter 9: Democracies: Can we still hold a conversation?

Given the inability to predict how the new computer network will develop, democracy and strong self-correcting mechanisms are needed. But can democracy survive?
Governments could gain new kinds of surveillance mechanisms and manipulative powers, though this does not mean it is inevitable. Major, long-established rules should be applied to the digital world:

Principle 1: Benevolence

When a computer network collects information on me, that information should be used for my benefit.

We already have such systems in place with doctors and lawyers. We share our data willingly with them so they can help us. Access to our personal data comes with a duty: for them, sharing our personal data is not only unethical but also illegal.

Major IT business models run on user information and in turn offer their services free of monetary charge. We might force these business models to transition to more traditional ones, where users pay in cash for services such as search engines, e-mail, and social media. If we deem a service essential for everyone, the responsibility for providing it should lie with the government, as it already does with education or healthcare.

Principle 2: Decentralization

A democratic society should never allow all its information to be concentrated in one place. We can see some inefficiencies in information retrieval as features of a democracy. There should be separate databases with varying access rights for police records, medical records, taxes, bank statements.

Principle 3: Mutuality

If governments increase surveillance of society, they must also accept increased surveillance of themselves: their tax policies, their political affiliations, how politicians make their money. Democracies require balance. Algorithms can serve surveillance in both directions, which will make it easier to hold governments accountable.

Principle 4: Surveillance Systems must leave room for change and rest

Humans need to be allowed to improve and are not made for being continuously monitored.

Besides these principles, we observe a growing demand for a new right: the right to an explanation. Algorithmic decisions (e.g. refusing to grant us credit) are demanded to be explainable.

The complexities and turmoil of an evolving job market pull people toward conspiracy theories and charismatic leaders. No single human will be able to explain the new algorithmic world, much less evaluate how to make it fair.

Humans struggle to consciously evaluate more than a few data points. Algos might help make our decisions fairer.

Social media undermines the gatekeeping of radio stations, leading to a more open but also anarchical conversation. Bots mixing into those platforms while pretending to be human drastically change those dynamics further. The law should ban bots pretending to be human, just as it bans counterfeit money.

Chapter 10: Totalitarianism: All Power to the Algorithms?

Blockchain could also enable totalitarian tendencies. With a 51% stake, a government would have control not only over the current state, but also over the past.
Democracies have fewer skeletons in the closet and fear an uncontrolled bot less than a dictator does.

Chapter 11: The Silicon Curtain

Warfare changes in the silicon age.

Data Colonies

To dominate a country you no longer need to invade it; taking over its data suffices.
Former colonialism: source low-cost goods abroad, do the high-value work in the home country.
Raw data is the new cheap resource (free), aggregated into high-value algos in the home country.

Digital Cocoons

To fight this, China and the US already regulate the usage of foreign-owned apps. Google is blocked in China; barely anyone uses WeChat in the West. This leads to entirely different digital ecosystems, which can cause profound cultural conflict.

Cyber Weapons

Cyber weapons are far more versatile than nuclear bombs. They can bring down electrical grids, inflame political scandals, influence elections, jam enemy sensors, and above all do so secretly. Would nuclear bombs actually launch upon order, or has their launch software already been hacked?

A happier outlook

Humans would never have been able to create states if their only interest were power. States are built upon processes that build up trust. With this trust we have managed to have healthcare budgets that exceed military budgets, something that would have sounded impossible at the beginning of the 20th century.

Information is not truth. Its main task is to connect. It has enabled evil power networks such as the highly efficient war machine of Nazi Germany. To build wiser networks we must abandon the naive view of information and commit ourselves to the hard work of building institutions with strong self-correcting mechanisms.