I have an interest in Artificial Intelligence for four reasons:
- I’m interested in when the first AI might be created
  - Expert opinion ranges from years, to decades, to a century from now
- What impact it will have on human jobs
  - You may have noticed that every few months you’ll see the headline in the media “The robots are taking our jobs!”
  - Some jobs we welcome robots to take (dangerous, menial, physical), and others we think are ours to keep
- How an AI will treat humanity once it becomes self-aware
  - Will it remove us from the surface of the earth (a popular idea in many movies)?
  - Or usher us into a period of leisure and prosperity?
- I’m fascinated by the idea of an AI rebuilding itself by rewriting its own code
  - Perhaps releasing a new, improved version of itself every day, or every minute
  - Very quickly we will have no idea what it’s thinking and no understanding of its code
We humans have a fear of the unknown and we simply don’t know what the future will bring when it comes to computers being smarter than humans.
The author’s purpose for this book is to acknowledge this uncertainty and to prompt us to collectively make some choices now. Choices like:
- Do we want A.I. to serve all of humanity and provide peace and prosperity for everyone?
- Or build autonomous weapons for a select few to allow us to fight more efficiently with each other?
- Or is there a chance that, if we do not prepare, the AI will squash us like ants because we are an inferior lifeform?
The book begins with a prelude, which is a story of a possible near-future. I found it so fascinating that I have copied it out in full (it’s just over 6,000 words, so it’s a 20-minute read).
Have you seen Netflix’s “Black Mirror”? The prelude reminds me of an episode of that show, which explores frighteningly plausible technological futures.
The book ends with an epilogue telling the story of how the author founded the Future of Life Institute (FLI). FLI held a conference where every notable AI researcher on the planet came together and co-wrote the “Asilomar AI Principles”, which I have included at the end of this article.
And in the middle, the author, Tegmark, provides a comprehensive discussion of the benefits and dangers of AI.
At the end of each chapter, Tegmark provides useful chapter summaries. I have provided most of those below, plus a few of my other favourite passages.
Before we get into it, I have a confession to make. I found Chapter 6 unfathomable. It got really deep into physics and I understood none of it. I encourage you to buy Life 3.0 and read it yourself (but don’t be afraid to just skim-read Chapter 6!).
Here are my notes on “Life 3.0: Being Human in the Age of Artificial Intelligence” by Max Tegmark.
CHAPTER 1 Welcome to the Most Important Conversation of Our Time
- Life, defined as a process that can retain its complexity and replicate, can develop through three stages: a biological stage (1.0), where its hardware and software are evolved, a cultural stage (2.0), where it can design its software (through learning) and a technological stage (3.0), where it can design its hardware as well, becoming the master of its own destiny.
- Artificial intelligence may enable us to launch Life 3.0 this century, and a fascinating conversation has sprung up regarding what future we should aim for and how this can be accomplished. There are three main camps in the controversy: techno-skeptics, digital utopians and the beneficial-AI movement.
- Techno-skeptics view building superhuman AGI (Artificial General Intelligence) as so hard that it won’t happen for hundreds of years, making it silly to worry about it (and Life 3.0) now.
- AGI is when the system can apply itself to a wide variety of tasks, more like a human brain, instead of having narrow expertise, such as only being good at Chess or Jeopardy – Sheldon
- Digital utopians view it as likely this century and wholeheartedly welcome Life 3.0, viewing it as the natural and desirable next step in the cosmic evolution.
- The beneficial-AI movement also views it as likely this century, but views a good outcome not as guaranteed, but as something that needs to be ensured by hard work in the form of AI-safety research.
- Beyond such legitimate controversies where world-leading experts disagree, there are also boring pseudo-controversies caused by misunderstandings. For example, never waste time arguing about “life,” “intelligence,” or “consciousness” before ensuring that you and your protagonist are using these words to mean the same thing! This book uses the definitions in table 1.1.
- Also beware the common misconceptions in figure 1.5: “Superintelligence by 2100 is inevitable/impossible.” “Only Luddites worry about AI.” “The concern is about AI turning evil and/or conscious, and it’s just years away.” “Robots are the main concern.” “AI can’t control humans and can’t have goals.”
- In chapters 2 through 6, we’ll explore the story of intelligence from its humble beginning billions of years ago to possible cosmic futures billions of years from now. We’ll first investigate near-term challenges such as jobs, AI weapons and the quest for human-level AGI, then explore possibilities for a fascinating spectrum of possible futures with intelligent machines and/or humans. I wonder which options you’ll prefer!
- In chapters 7 through 9, we’ll switch from cold factual descriptions to an exploration of goals, consciousness and meaning, and investigate what we can do right now to help create the future we want.
- I view this conversation about the future of life with AI as the most important one of our time—please join it!
CHAPTER 2 Matter Turns Intelligent
What is Intelligence?
Intelligence = ability to accomplish complex goals
This is broad enough to include all above-mentioned definitions, since understanding, self-awareness, problem solving, learning, etc. are all examples of complex goals that one might have.
It’s also broad enough to subsume the Oxford Dictionary definition—“the ability to acquire and apply knowledge and skills”—since one can have as a goal to apply knowledge and skills.
- Intelligence, defined as ability to accomplish complex goals, can’t be measured by a single IQ, only by an ability spectrum across all goals.
- Today’s artificial intelligence tends to be narrow, with each system able to accomplish only very specific goals, while human intelligence is remarkably broad.
- Memory, computation, learning and intelligence have an abstract, intangible and ethereal feel to them because they’re substrate-independent: able to take on a life of their own that doesn’t depend on or reflect the details of their underlying material substrate.
- Any chunk of matter can be the substrate for memory as long as it has many different stable states.
- Any matter can be computronium, the substrate for computation, as long as it contains certain universal building blocks that can be combined to implement any function. NAND gates and neurons are two important examples of such universal “computational atoms.”
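The universality of NAND gates mentioned above can be made concrete with a small sketch (my own illustration, not from the book): every other basic Boolean function can be wired up from NAND alone.

```python
# Illustrative sketch: NAND is a "universal computational atom".
# NOT, AND, OR and XOR are all built here from nand() alone.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # AND(a, b) = NOT(NAND(a, b))
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # OR(a, b) = NAND(NOT(a), NOT(b))  (De Morgan)
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    # Classic four-NAND construction of XOR
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))
```

Since any Boolean function can be composed from these, a substrate that can implement NAND can, in principle, compute anything computable.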
- A neural network is a powerful substrate for learning because, simply by obeying the laws of physics, it can rearrange itself to get better and better at implementing desired computations.
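To make "rearranging itself to implement a desired computation" concrete, here is a minimal sketch (my own, not from the book) of a single artificial neuron adjusting its weights by a simple learning rule until it implements logical OR:

```python
# Minimal sketch: a single neuron "rearranges itself" (adjusts its
# weights) via the classic perceptron learning rule until it
# implements a desired computation -- here, logical OR.

def step(x: float) -> int:
    return 1 if x > 0 else 0

def train_perceptron(examples, lr=0.1, epochs=50):
    w = [0.0, 0.0]  # weights start knowing nothing
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Nudge each weight in the direction that reduces the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_examples)
```

OR is linearly separable, so the perceptron rule is guaranteed to converge; real neural networks apply the same idea (small error-driven weight adjustments) across millions of neurons.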
- Because of the striking simplicity of the laws of physics, we humans only care about a tiny fraction of all imaginable computational problems, and neural networks tend to be remarkably good at solving precisely this tiny fraction.
- Once technology gets twice as powerful, it can often be used to design and build technology that’s twice as powerful in turn, triggering repeated capability doubling in the spirit of Moore’s law. The cost of information technology has now halved roughly every two years for about a century, enabling the information age.
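The power of that repeated doubling is easy to underestimate; a quick back-of-the-envelope calculation (mine, not the book's) shows what halving cost every two years for a century adds up to:

```python
# Back-of-the-envelope: if the cost of computation halves every
# 2 years, how much cheaper does it get over a century?

years_per_halving = 2
years = 100

halvings = years // years_per_halving   # 50 halvings in a century
improvement = 2 ** halvings             # cost falls by a factor of 2^50

print(f"Cost falls by a factor of about {improvement:.2e}")
```

That is roughly a quadrillion-fold (10^15) drop, which is why steady exponential progress, rather than any single breakthrough, is what enabled the information age.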
- If AI progress continues, then long before AI reaches human level for all skills, it will give us fascinating opportunities and challenges involving issues such as bugs, laws, weapons and jobs—which we’ll explore in the next chapter.
CHAPTER 3 The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs
- How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?
- How can we update our legal systems to be more fair and efficient and to keep pace with the rapidly changing digital landscape?
- How can we make weapons smarter and less prone to killing innocent civilians without triggering an out-of-control arms race in lethal autonomous weapons?
- How can we grow our prosperity through automation without leaving people lacking income or purpose?
- AI for space exploration
- AI for finance
- AI for manufacturing
- AI for transportation
- Elon Musk envisions that future self-driving cars will not only be safer, but will also earn money for their owners while they’re not needed, by competing with Uber and Lyft.
- AI for energy
- AI for Healthcare
- AI for Communication
- All pending cases to be processed in parallel rather than in series, each case getting its own robojudge for as long as it takes
- They could make it dramatically cheaper to get justice through the courts
- Legal controversies
- fMRI scanners to determine what a person is thinking about and, in particular, whether they’re telling the truth or lying
- AI becomes able to generate fully realistic fake videos of you committing crimes
- So if a self-driving car causes an accident, who should be liable—its occupants, its owner or its manufacturer? Legal scholar David Vladeck has proposed a fourth answer: the car itself! Specifically, he proposes that self-driving cars be allowed (and required) to hold car insurance.
- The Next Arms Race?
- Should There Be an International Treaty?
- Jobs and Wages
- Technology and Inequality
- Career Advice for Kids
- Does it require interacting with people and using social intelligence?
- Does it involve creativity and coming up with clever solutions?
- Does it require working in an unpredictable environment?
- The more of these questions you can answer with a yes, the better your career choice is likely to be.
- This means that relatively safe bets include becoming a teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist.
- Will Humans Eventually Become Unemployable?
- The vast majority of today’s occupations are ones that already existed a century ago, and when we sort them by the number of jobs they provide, we have to go all the way down to twenty-first place in the list before we encounter a new occupation: software developers, who make up less than 1% of the U.S. job market.
- The main trend on the job market isn’t that we’re moving into entirely new professions. Rather, we’re crowding into those pieces of terrain in figure 2.2 that haven’t yet been submerged by the rising tide of technology!
- Giving People Income Without Jobs
- Technological progress can end up providing many valuable products and services for free even without government intervention.
- For example, people used to pay for encyclopedias, atlases, sending letters and making phone calls, but now anyone with an internet connection gets access to all these things at no cost—together with free videoconferencing, photo sharing, social media, online courses and countless other new services.
- Human-Level Intelligence?
- Near-term AI progress has the potential to greatly improve our lives in myriad ways, from making our personal lives, power grids and financial markets more efficient to saving lives with self-driving cars, surgical bots and AI diagnosis systems.
- When we allow real-world systems to be controlled by AI, it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security and control.
- This need for improved robustness is particularly pressing for AI-controlled weapon systems, where the stakes can be huge.
- Many leading AI researchers and roboticists have called for an international treaty banning certain kinds of autonomous weapons, to avoid an out-of-control arms race that could end up making convenient assassination machines available to everybody with a full wallet and an axe to grind.
- AI can make our legal systems more fair and efficient if we can figure out how to make robojudges transparent and unbiased.
- Our laws need rapid updating to keep up with AI, which poses tough legal questions involving privacy, liability and regulation.
- Long before we need to worry about intelligent machines replacing us altogether, they may increasingly replace us on the job market.
- This need not be a bad thing, as long as society redistributes a fraction of the AI-created wealth to make everyone better off.
- Otherwise, many economists argue, inequality will greatly increase.
- With advance planning, a low-employment society should be able to flourish not only financially, with people getting their sense of purpose from activities other than jobs.
- Career advice for today’s kids: Go into professions that machines are bad at—those involving people, unpredictability and creativity.
- There’s a non-negligible possibility that AGI progress will proceed to human levels and beyond—we’ll explore that in the next chapter!
CHAPTER 4 Intelligence Explosion?
- If we one day succeed in building human-level AGI, this may trigger an intelligence explosion, leaving us far behind.
- If a group of humans manage to control an intelligence explosion, they may be able to take over the world in a matter of years.
- If humans fail to control an intelligence explosion, the AI itself may take over the world even faster.
- Whereas a rapid intelligence explosion is likely to lead to a single world power, a slow one dragging on for years or decades may be more likely to lead to a multipolar scenario with a balance of power between a large number of rather independent entities.
- The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control. Superintelligence is likely to enable coordination on ever-larger cosmic scales, but it’s unclear whether it will ultimately lead to more totalitarian top-down control or more individual empowerment.
- Cyborgs and uploads are plausible, but arguably not the fastest route to advanced machine intelligence.
- The climax of our current race toward AI may be either the best or the worst thing ever to happen to humanity, with a fascinating spectrum of possible outcomes that we’ll explore in the next chapter.
- We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.
CHAPTER 5 Aftermath: The Next 10,000 Years
- The current race toward AGI can end in a fascinatingly broad range of aftermath scenarios for upcoming millennia.
- Superintelligence can peacefully coexist with humans either because it’s forced to (enslaved-god scenario) or because it’s “friendly AI” that wants to (libertarian-utopia, protector-god, benevolent-dictator and zookeeper scenarios).
- Superintelligence can be prevented by an AI (gatekeeper scenario) or by humans (1984 scenario), by deliberately forgetting the technology (reversion scenario) or by lack of incentives to build it (egalitarian-utopia scenario).
- Humanity can go extinct and get replaced by AIs (conqueror and descendant scenarios) or by nothing (self-destruction scenario).
- There’s absolutely no consensus on which, if any, of these scenarios are desirable, and all involve objectionable elements. This makes it all the more important to continue and deepen the conversation around our future goals, so that we don’t inadvertently drift or steer in an unfortunate direction.
CHAPTER 6 Our Cosmic Endowment: The Next Billion Years and Beyond
Here’s where the book got deep into physics. I understood very little of this chapter so no notes from me – Sheldon.
CHAPTER 7 Goals
Figuring out how to align the goals of a superintelligent AI with our goals isn’t just important, but also hard. In fact, it’s currently an unsolved problem. It splits into three tough subproblems, each of which is the subject of active research by computer scientists and other thinkers:
- Making AI learn our goals
- Making AI adopt our goals
- Making AI retain our goals
To learn our goals, an AI must figure out not what we do, but why we do it. We humans accomplish this so effortlessly that it’s easy to forget how hard the task is for a computer, and how easy it is to misunderstand. If you ask a future self-driving car to take you to the airport as fast as possible and it takes you literally, you’ll get there chased by helicopters and covered in vomit. If you exclaim, “That’s not what I wanted!,” it can justifiably answer, “That’s what you asked for.”
For example, suppose a bunch of ants create you to be a recursively self-improving robot, much smarter than them, who shares their goals and helps them build bigger and better anthills, and that you eventually attain the human-level intelligence and understanding that you have now.
Do you think you’ll spend the rest of your days just optimizing anthills, or do you think you might develop a taste for more sophisticated questions and pursuits that the ants have no ability to comprehend?
If so, do you think you’ll find a way to override the ant-protection urge that your formicine creators endowed you with in much the same way that the real you overrides some of the urges your genes have given you? And in that case, might a superintelligent friendly AI find our current human goals as uninspiring and vapid as you find those of the ants, and evolve new goals different from those it learned and adopted from us?
CHAPTER 8 Consciousness
- There’s no undisputed definition of “consciousness.” I use the broad and non-anthropocentric definition consciousness = subjective experience.
- Whether AIs are conscious in that sense is what matters for the thorniest ethical and philosophical problems posed by the rise of AI: Can AIs suffer? Should they have rights? Is uploading a subjective suicide? Could a future cosmos teeming with AIs be the ultimate zombie apocalypse?
- The problem of understanding intelligence shouldn’t be conflated with three separate problems of consciousness: the “pretty hard problem” of predicting which physical systems are conscious, the “even harder problem” of predicting qualia, and the “really hard problem” of why anything at all is conscious.
- The “pretty hard problem” of consciousness is scientific, since a theory that predicts which of your brain processes are conscious is experimentally testable and falsifiable, while it’s currently unclear how science could fully resolve the two harder problems.
- Neuroscience experiments suggest that many behaviors and brain regions are unconscious, with much of our conscious experience representing an after-the-fact summary of vastly larger amounts of unconscious information.
- Generalizing consciousness predictions from brains to machines requires a theory. Consciousness appears to require not a particular kind of particle or field, but a particular kind of information processing that’s fairly autonomous and integrated, so that the whole system is rather autonomous but its parts aren’t.
- Consciousness might feel so non-physical because it’s doubly substrate-independent: if consciousness is the way information feels when being processed in certain complex ways, then it’s merely the structure of the information processing that matters, not the structure of the matter doing the information processing.
- If artificial consciousness is possible, then the space of possible AI experiences is likely to be huge compared to what we humans can experience, spanning a vast spectrum of qualia and timescales—all sharing a feeling of having free will.
- Since there can be no meaning without consciousness, it’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
- This suggests that as we humans prepare to be humbled by ever smarter machines, we take comfort mainly in being Homo sentiens, not Homo sapiens.
Epilogue: The Tale of the FLI Team
The book finishes with the author’s creation of the foundation “FutureOfLife.org”, which began with a conference where every notable AI researcher on the planet came together and co-wrote the “Asilomar AI Principles”.
Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people’s resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
To date, the Principles have been signed by 1273 AI/Robotics researchers and 2541 others.