The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030
Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030?
18:33 AI agents, reasoning, and planning
23:39 Automating coding and mathematics
31:26 Uncertainty about investing in AI
40:34 Gap between investment and returns
45:30 Compute, software and data
51:54 Inference-time compute
01:08:49 Returns to software R&D
01:19:22 Limits to expanding compute
11/10/2024 • 1 hour, 30 minutes, 29 seconds
Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI. You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt
Timestamps:
00:00 AI control
09:35 Challenges to AI control
23:48 AI control as a bridge to alignment
26:54 Policy and coordination for AI safety
29:25 Slowing down around human-level AI
49:14 Scheming and misalignment
01:27:27 AI timelines and takeoff speeds
01:58:15 Human cognition versus AI cognition
27/09/2024 • 2 hours, 8 minutes, 44 seconds
Tom Barnes on How to Build a Resilient World
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world. Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence
Timestamps:
00:00 Spending on safety vs capabilities
09:06 Racing dynamics - is the classic story true?
28:15 How are governments preparing for advanced AI?
49:06 US-China dialogues on AI
57:44 Coordination failures
1:04:26 Global resilience
1:13:09 Patient philanthropy
The John von Neumann biography we reference: https://www.penguinrandomhouse.com/books/706577/the-man-from-the-future-by-ananyo-bhattacharya/
12/09/2024 • 1 hour, 19 minutes, 41 seconds
Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more. Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai
Timestamps:
00:00 Is AI plateauing or accelerating?
06:55 How do we get AI agents?
16:12 Do agency and reasoning emerge?
23:57 Compute thresholds in regulation
28:59 Superintelligence as an ideological goal
37:09 General progress vs superintelligence
44:22 Meta and open source AI
49:09 Technological change and regime change
01:03:06 How will governments react to AI?
01:07:50 Will the US nationalize AGI corporations?
01:17:05 Economics of an intelligence explosion
01:31:38 AI cognition vs human cognition
01:48:03 AI and future religions
01:56:40 Is consciousness functional?
02:05:30 AI and children
22/08/2024 • 2 hours, 16 minutes, 11 seconds
Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home
Timestamps:
00:00 Innovation prizes at XPRIZE
08:25 Deciding which prizes to create
19:00 Creating new markets
29:51 How far can prizes scale?
35:25 When are prizes successful?
46:06 100M dollar carbon removal prize
54:40 Upcoming prizes
59:52 Anousheh's time in space
09/08/2024 • 1 hour, 3 minutes, 10 seconds
Mary Robinson (Former President of Ireland) on Long-View Leadership
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org
Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
25/07/2024 • 30 minutes, 1 second
Emilia Javorsky on how AI Concentrates Power
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/
Timestamps:
00:00 Power concentration
07:43 RFP: Mitigating AI-driven power concentration
14:15 Open source AI
26:50 Institutions and incentives
35:20 Techno-optimism
43:44 Global monoculture
53:55 Imagining utopia
11/07/2024 • 1 hour, 3 minutes, 35 seconds
Anton Korinek on Automating Work and the Economics of an Intelligence Explosion
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com
Timestamps:
00:00 Automation and wages
14:32 Complexity for people and machines
20:31 Moravec's paradox
26:15 Can people switch careers?
30:57 Intelligence explosion economics
44:08 The lump of labor fallacy
51:40 An industry for nostalgia?
57:16 Universal basic income
01:09:28 Market structure in AI
21/06/2024 • 1 hour, 32 minutes, 24 seconds
Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com
Timestamps:
00:00 US-China competition and risk
18:01 The security dilemma
30:21 Official and unofficial diplomacy
39:53 Hotlines between countries
01:01:54 Preventing escalation after war
01:09:58 Catastrophic biological risks
01:20:42 Ultraviolet germicidal light
01:25:54 Ancient civilizational collapse
07/06/2024 • 1 hour, 36 minutes, 1 second
Christian Nunes on Deepfakes (with Max Tegmark)
Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org
Timestamps:
00:00 The National Organization for Women (NOW)
05:37 Deepfakes and women
10:12 Protecting ordinary victims of deepfakes
16:06 Deepfake legislation
23:38 Current harm from deepfakes
30:20 Bodily autonomy as a right
34:44 NOW's work on AI
Here are FLI's recommended amendments to legislative proposals on deepfakes: https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/
24/05/2024 • 37 minutes, 12 seconds
Dan Faggella on the Race to AGI
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com
Timestamps:
00:00 Value differences in AI
12:07 Should we eventually create AGI?
28:22 What is a worthy successor?
43:19 AI changing power dynamics
59:00 Open source AI
01:05:07 What drives AI progress?
01:16:36 What limits AI progress?
01:26:31 Which industries are using AI?
03/05/2024 • 1 hour, 45 minutes, 20 seconds
Liron Shapira on Superintelligence Goals
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
19/04/2024 • 1 hour, 26 minutes, 30 seconds
Annie Jacobsen on Nuclear War - a Second by Second Timeline
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com
Timestamps:
00:00 A scenario of nuclear war
06:56 Who would launch an attack?
13:50 Detecting nuclear attacks
19:37 The first critical seconds
29:42 Decisions under time pressure
34:27 Lessons from insiders
44:18 Submarines
51:06 How did we end up like this?
59:40 Interceptor missiles
1:11:25 Nuclear weapons and cyberattacks
1:17:35 Concentration of power
05/04/2024 • 1 hour, 26 minutes, 28 seconds
Katja Grace on the Largest Survey of AI Researchers
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.
Timestamps:
0:20 AI Impacts surveys
18:11 What AI will look like in 20 years
22:43 Experts’ extinction risk predictions
29:35 Opinions on slowing down AI development
31:25 AI “arms races”
34:00 AI risk areas with the most agreement
40:41 Do “high hopes and dire concerns” go hand-in-hand?
42:00 Intelligence explosions
45:37 Discontinuous progress
49:43 Impacts of AI crossing the human-level intelligence threshold
59:39 What does AI learn from human culture?
1:02:59 AI scaling
1:05:04 What should we do?
14/03/2024 • 1 hour, 8 minutes
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info
Timestamps:
00:00 Pausing AI
10:23 Risks during an AI pause
19:41 Hardware overhang
29:04 Technological progress
37:00 Safety research during a pause
54:42 Social dynamics of AI risk
1:10:00 What prevents cooperation?
1:18:21 What about China?
1:28:24 Protesting AGI corporations
29/02/2024 • 1 hour, 36 minutes, 5 seconds
Sneha Revanur on the Social Effects of AI
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org
Timestamps:
00:00 Encode Justice
06:11 AI ethics and AI safety
15:49 Humans in the loop
23:59 AI in social media
30:42 Deteriorating social skills?
36:00 AIs identifying as AIs
43:36 AI influence in elections
50:32 AIs interacting with human systems
16/02/2024 • 57 minutes, 48 seconds
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/
Timestamps:
00:00 Is AI like a Shoggoth?
09:50 Scaling laws
16:41 Are humans more general than AIs?
21:54 Are AI models explainable?
27:49 Using AI to explain AI
32:36 Evidence for AI being uncontrollable
40:29 AI verifiability
46:08 Will AI be aligned by default?
54:29 Creating human-like AI
1:03:41 Robotics and safety
1:09:01 Obstacles to AI in the economy
1:18:00 AI innovation with current models
1:23:55 AI accidents in the past and future
02/02/2024 • 1 hour, 31 minutes, 13 seconds
Special: Flo Crivello on AI as a New Form of Life
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.
Timestamps:
00:00 Technological progress
07:59 Regulatory capture and AI
11:53 AI as a new form of life
15:44 Can AI development be paused?
20:12 Biden's executive order on AI
22:54 How would a GPU kill switch work?
27:00 Regulating models or applications?
32:13 AGI in 2-8 years
42:00 China and US collaboration on AI
19/01/2024 • 47 minutes, 38 seconds
Carl Robichaud on Preventing Nuclear War
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/
Timestamps:
00:00 A new nuclear arms race
08:07 How much do world leaders matter?
18:04 How much does ideology matter?
22:14 Do nuclear weapons cause stable peace?
31:29 North Korea
34:01 Have we overestimated nuclear risk?
43:24 Time pressure in nuclear decisions
52:00 Why so many nuclear warheads?
1:02:17 Has containment been successful?
1:11:34 Coordination mechanisms
1:16:31 Technological innovations
1:25:57 Public perception of nuclear risk
1:29:52 Easier access to nuclear weapons
1:33:31 Reaching a stable, low-risk era
06/01/2024 • 1 hour, 39 minutes, 3 seconds
Frank Sauer on Autonomous Weapon Systems
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/
Timestamps:
00:00 Autonomy in weapon systems
12:19 Balance of offense and defense
20:05 Killer drone systems
28:53 Is autonomy like nuclear weapons?
37:20 Low-tech defenses against drones
48:29 Autonomy and power balance
1:00:24 Tricking autonomous systems
1:07:53 Unpredictability of autonomous systems
1:13:16 Will we trust autonomous systems too much?
1:27:28 Legal terminology
1:32:12 Political possibilities
14/12/2023 • 1 hour, 42 minutes, 40 seconds
Darren McKee on Uncontrollable Superintelligence
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
Timestamps:
00:00 Uncontrollable superintelligence
16:41 AI goals and the "virus analogy"
28:36 Speed of AI cognition
39:25 Narrow AI and autonomy
52:23 Reliability of current and future AI
1:02:33 Planning for multiple AI scenarios
1:18:57 Will AIs seek self-preservation?
1:27:57 Is there a unified solution to AI alignment?
1:30:26 Concrete AI safety proposals
01/12/2023 • 1 hour, 40 minutes, 37 seconds
Mark Brakel on the UK AI Summit and the Future of AI Policy
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.
Timestamps:
00:00 AI Safety Summit in the UK
12:18 Are officials up to date on AI?
23:22 Objections to AI policy
31:27 The EU AI Act
43:37 The right level of regulation
57:11 Risks and regulatory tools
1:04:44 Open-source AI
1:14:56 Subsidising AI safety research
1:26:29 Global institutions for safe AI
1:34:34 Autonomy in weapon systems
17/11/2023 • 1 hour, 48 minutes, 36 seconds
Dan Hendrycks on Catastrophic AI Risks
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai
Timestamps:
00:00 X.ai - Elon Musk's new AI venture
02:41 How AI risk thinking has evolved
12:58 AI bioengineering
19:16 AI agents
24:55 Preventing autocracy
34:11 AI race - corporations and militaries
48:04 Bulletproofing AI organizations
1:07:51 Open-source models
1:15:35 Dan's textbook on AI safety
1:22:58 Rogue AI
1:28:09 LLMs and value specification
1:33:14 AI goal drift
1:41:10 Power-seeking AI
1:52:07 AI deception
1:57:53 Representation engineering
03/11/2023 • 2 hours, 7 minutes, 24 seconds
Samuel Hammond on AGI and Institutional Disruption
Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca
Timestamps:
00:00 Is AGI close?
06:56 Compute versus data
09:59 Information theory
20:36 Universality of learning
24:53 Hard steps in evolution
30:30 Governments and advanced AI
40:33 How will AI transform the economy?
55:26 How will AI change transaction costs?
1:00:31 Isolated thinking about AI
1:09:43 AI and Leviathan
1:13:01 Informational resolution
1:18:36 Open-source AI
1:21:24 AI will decrease state power
1:33:17 Timeline of a techno-feudalist future
1:40:28 Alignment difficulty and AI scale
1:45:19 Solving robotics
1:54:40 A constrained Leviathan
1:57:41 An Apollo Project for AI safety
2:04:29 Secure "gain-of-function" AI research
2:06:43 Is the market expecting AGI soon?
20/10/2023 • 2 hours, 14 minutes, 51 seconds
Imagine A World: What if AI advisors helped us make better decisions?
Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the eighth and final episode of Imagine A World, we explore the fictional worldbuild titled 'Computing Counsel', one of the third-place winners of FLI’s worldbuilding contest.
Guillaume Riesen talks to Mark L, one of the three members of the team behind 'Computing Counsel', a third-place winner of the FLI Worldbuilding Contest. Mark is a machine learning expert with a chemical engineering degree, as well as an amateur writer. His teammates are Patrick B, a mechanical engineer and graphic designer, and Natalia C, a biological anthropologist and amateur programmer.
This world paints a vivid, nuanced picture of how emerging technologies shape society. We have advertisers competing with ad-filtering technologies and an escalating arms race that eventually puts an end to the internet as we know it. There is AI-generated art so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers.
While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become. The impact of any technology on society is complex and multifaceted. This world does a great job of capturing that.
While social networking technologies become ever more powerful, the networks of people they connect don't necessarily just get wider and shallower. Instead, they tend to be smaller and more intimately interconnected. The world's inhabitants also have nuanced attitudes towards AI tools, embracing or avoiding their applications based on their religious or philosophical beliefs.
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this worldbuild: https://worldbuild.ai/computing-counsel
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
17/10/2023 • 59 minutes, 44 seconds
Imagine A World: What if narrow AI fractured our shared reality?
Let’s imagine a future where AGI is developed but kept from having much practical impact on the world, while narrow AI remakes the world completely. Most people don’t know or care about the difference and have no idea how they could tell a human stranger from an artificial one. Inequality sticks around, and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and therapy, and those bubbles help to sustain their inhabitants. Can you get excited about a world with these tradeoffs?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the seventh episode of Imagine A World we explore a fictional worldbuild titled 'Hall of Mirrors', which was a third-place winner of FLI's worldbuilding contest.
Michael Vassar joins Guillaume Riesen to discuss his imagined future, which he created with the help of Matija Franklin and Bryce Hidysmith. Vassar was formerly the president of the Singularity Institute and co-founded MetaMed; more recently he has worked on communication across political divisions. Franklin is a PhD student at UCL working on AI ethics and alignment. Finally, Hidysmith began in fashion design and passed through fortune-telling before winding up in finance and policy research, at places like Numerai, the Median Group, Bismarck Analysis, and Eco.com.
Hall of Mirrors is a deeply unstable world where nothing is as it seems. The structures of power that we know today have eroded away, survived only by shells of expectation and appearance. People are isolated by perceptual bubbles and struggle to agree on what is real.
This team put a lot of effort into creating a plausible, empirically grounded world, but their work is also notable for its irreverence and dark humor. In some ways, this world is kind of a caricature of the present. We see deeper isolation and polarization caused by media, and a proliferation of powerful but ultimately limited AI tools that further erode our sense of objective reality. A deep instability threatens. And yet, on a human level, things seem relatively calm. It turns out that the stories we tell ourselves about the world have a lot of inertia, and so do the ways we live our lives.
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this worldbuild: https://worldbuild.ai/hall-of-mirrors
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
10/10/2023 • 50 minutes, 36 seconds
Steve Omohundro on Provably Safe AGI
Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf
Timestamps:
00:00 Provably safe AI systems
12:17 Alignment and evaluations
21:08 Proofs about language model behavior
27:11 Can we formalize safety?
30:29 Provable contracts
43:13 Digital replicas of actual systems
46:32 Proof-carrying code
56:25 Can language models think logically?
1:00:44 Can AI do proofs for us?
1:09:23 Hard to prove, easy to verify
1:14:31 Digital neuroscience
1:20:01 Risks of totalitarianism
1:22:29 Can we guarantee safety?
1:25:04 Real-world provable safety
1:29:29 Tamper-proof hardware
1:35:35 Mortal and throttled AI
1:39:23 Least-privilege guarantee
1:41:53 Basic AI drives
1:47:47 AI agency and world models
1:52:08 Self-improving AI
1:58:21 Is AI overhyped now?
05/10/2023 • 2 hours, 2 minutes, 32 seconds
Imagine A World: What if AI enabled us to communicate with animals?
What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the sixth episode of Imagine A World, we explore the fictional worldbuild titled 'AI for the People', a third-place winner of the worldbuilding contest.
Our host Guillaume Riesen welcomes Chi Rainer Bornfree, part of this three-person worldbuilding team alongside her husband Micah White, and their collaborator, J.R. Harris. Chi has a PhD in Rhetoric from UC Berkeley and has taught at Bard, Princeton, and NY State Correctional facilities, in the meantime writing fiction, essays, letters, and more. Micah, best-known as the co-creator of the 'Occupy Wall Street' movement and the author of 'The End of Protest', now focuses primarily on the social potential of cryptocurrencies, while Harris is a freelance illustrator and comic artist.
The name 'AI for the People' does a great job of capturing this team's activist perspective and their commitment to empowerment. They imagine social and political shifts that bring power back into the hands of individuals, whether that means serving as lawmakers on randomly selected committees, or gaining income by choosing to sell their personal data online. But this world isn't just about human people. Its biggest bombshell is an AI breakthrough that allows humans to communicate with other animals. What follows is an existential reconsideration of humanity's place in the universe. This team has created an intimate, complex portrait of a world shared by multiple parties: AIs, humans, other animals, and the environment itself. As these entities find their way forward together, their goals become enmeshed and their boundaries increasingly blurred.
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this worldbuild: https://worldbuild.ai/ai-for-the-people
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Media and resources referenced in the episode:
https://en.wikipedia.org/wiki/Life_3.0
https://en.wikipedia.org/wiki/1_the_Road
https://ignota.org/products/pharmako-ai
https://en.wikipedia.org/wiki/The_Ministry_for_the_Future
https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/
https://en.wikipedia.org/wiki/Occupy_Wall_Street
https://en.wikipedia.org/wiki/Sortition
https://en.wikipedia.org/wiki/Iroquois
https://en.wikipedia.org/wiki/The_Ship_Who_Sang
https://en.wikipedia.org/wiki/The_Sparrow_(novel)
https://en.wikipedia.org/wiki/After_Yang
03/10/2023 • 1 hour, 4 minutes, 7 seconds
Imagine A World: What if some people could live forever?
If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the fifth episode of Imagine A World, we explore the fictional worldbuild titled 'To Light'. Our host Guillaume Riesen speaks to Mako Yass, the first-place winner of the FLI Worldbuilding Contest we ran last year. Mako lives in Auckland, New Zealand. He describes himself as a 'stray philosopher-designer' and has a background in computer programming and analytic philosophy.
Mako’s world is particularly imaginative, with richly interwoven narrative threads and high-concept sci-fi inventions. By 2045, his world has been deeply transformed. There’s an AI-designed miracle pill that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed freely by dove-shaped drones. There’s a kind of mind uploading which lets anyone become whatever they wish, live indefinitely and gain augmented intelligence. The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. Some people move into space, building massive structures around the sun where they practice esoteric arts in pursuit of a more perfect peace.
While this peaceful, flourishing end state is deeply optimistic, Mako is also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He’s particularly concerned with the risks presented by artificial intelligence systems as they surpass us. An AI system that is more capable than a human at all tasks - not just playing chess or driving a car - is what we’d call an Artificial General Intelligence - abbreviated ‘AGI’.
Mako proposes that we could build safe AIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they are released into the world.
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this worldbuild: https://worldbuild.ai/to-light
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Terra_Ignota
https://en.wikipedia.org/wiki/The_Transparent_Society
https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain
https://en.wikipedia.org/wiki/The_Matrix
https://aboutmako.makopool.com/
26/09/2023 • 58 minutes, 53 seconds
Johannes Ackva on Managing Climate Change
Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate
Timestamps:
00:00 Johannes's journey as an environmentalist
13:21 The drivers of climate change
23:00 Oil, coal, and gas
38:05 Solar, wind, and hydro
49:34 Nuclear energy
57:03 Geothermal energy
1:00:41 Most promising technologies
1:05:40 Government subsidies
1:13:28 Carbon taxation
1:17:10 Planting trees
1:21:53 Influencing government policy
1:26:39 Different climate scenarios
1:34:49 Economic growth and emissions
1:37:23 Social stability
References:
Emissions by sector: https://ourworldindata.org/emissions-by-sector
Energy density of different energy sources: https://www.nature.com/articles/s41598-022-25341-9
Emissions forecasts: https://www.lse.ac.uk/granthaminstitute/publication/the-unconditional-probability-distribution-of-future-emissions-and-temperatures/ and https://www.science.org/doi/10.1126/science.adg6248
Risk management: https://www.youtube.com/watch?v=6JJvIR1W-xI
Carbon pricing: https://www.cell.com/joule/pdf/S2542-4351(18)30567-1.pdf
Why not simply plant trees?: https://climate.mit.edu/ask-mit/how-many-new-trees-would-we-need-offset-our-carbon-emissions
Deforestation: https://www.science.org/doi/10.1126/science.ade3535
Decoupling of economic growth and emissions: https://www.globalcarbonproject.org/carbonbudget/22/highlights.htm
Premature deaths from air pollution: https://www.unep.org/interactives/air-pollution-note/
21/09/2023 • 1 hour, 40 minutes, 13 seconds
Imagine A World: What if we had digital nations untethered to geography?
How do low-income countries affected by climate change imagine their futures? How do they overcome these twin challenges? Will all nations eventually choose or be forced to go digital?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the fourth episode of Imagine A World, we explore the fictional worldbuild titled 'Digital Nations'.
Conrad Whitaker and Tracey Kamande join Guillaume Riesen on 'Imagine a World' to talk about their worldbuild, 'Digital Nations', which they created with their teammate, Dexter Findley. All three worldbuilders were based in Kenya while crafting their entry, though Dexter has just recently moved to the UK. Conrad is a Nairobi-based startup advisor and entrepreneur, Dexter works in humanitarian aid, and Tracey is the Co-founder of FunKe Science, a platform that promotes interactive learning of science among school children.
As the name suggests, this world is a deep dive into virtual communities. It explores how people might find belonging and representation on the global stage through digital nations that aren't tied to any physical location. This world also features a fascinating and imaginative kind of artificial intelligence that they call 'digital persons'. These are inspired by biological brains and have a rich internal psychology. Rather than being trained on data, they're considered to be raised in digital nurseries. They have a nuanced but mostly loving relationship with humanity, with some even going on to found their own digital nations for us to join.
In an incredible turn of events, last year the South Pacific state of Tuvalu was the first to “go virtual” in response to sea levels threatening the island nation's physical territory. This happened in real life just months after it was written into this imagined world in our worldbuilding contest, showing how rapidly ideas that seem ‘out there’ can become reality. Will all nations eventually go digital? And might AGIs be assimilated, 'brought up' rather than merely trained, as 'digital people', citizens to live communally alongside humans in these futuristic states?
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this worldbuild: https://worldbuild.ai/digital-nations
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Media and concepts referenced in the episode:
https://www.tuvalu.tv/
https://en.wikipedia.org/wiki/Trolley_problem
https://en.wikipedia.org/wiki/Climate_change_in_Kenya
https://en.wikipedia.org/wiki/John_von_Neumann
https://en.wikipedia.org/wiki/Brave_New_World
https://thenetworkstate.com/the-network-state
https://en.wikipedia.org/wiki/Culture_series
19/09/2023 • 55 minutes, 38 seconds
Imagine A World: What if global challenges led to more centralization?
What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In the third episode of Imagine A World, we explore the fictional worldbuild titled 'Core Central'.
How does a team of seven academics agree on one cohesive imagined world? That's a question the team behind 'Core Central', a second-place prizewinner in the FLI Worldbuilding Contest, had to figure out as they went along. In the end, the entry's realistic sense of multipolarity and messiness reflects well on its organic formulation. The team settled on one core, centralised AGI system as the governance model for their entire world. This eventually moves their world 'beyond' nation states. Could this really work?
In this third episode of 'Imagine a World', Guillaume Riesen speaks to two of the academics in this team, John Burden and Henry Shevlin, representing the team that created 'Core Central'. The full team includes seven members, three of whom (Henry, John and Beba Cibralic) are researchers at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and five of whom (Jessica Bland, Lara Mani, Clarissa Rios Rojas, Catherine Richards alongside John) work with the Centre for the Study of Existential Risk, also at Cambridge University.
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this imagined world: https://worldbuild.ai/core-central
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Media and Concepts referenced in the episode:
https://en.wikipedia.org/wiki/Culture_series
https://en.wikipedia.org/wiki/The_Expanse_(TV_series)
https://www.vox.com/authors/kelsey-piper
https://en.wikipedia.org/wiki/Gratitude_journal
https://en.wikipedia.org/wiki/The_Diamond_Age
https://www.scientificamerican.com/article/the-mind-of-an-octopus/
https://en.wikipedia.org/wiki/Global_workspace_theory
https://en.wikipedia.org/wiki/Alien_hand_syndrome
https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)
12/09/2023 • 1 hour, 28 seconds
Tom Davidson on How Quickly AI Could Automate the Economy
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.
Timestamps:
00:00 The current pace of AI
03:58 Near-term risks from AI
09:34 Historical analogies to AI
13:58 AI benchmarks vs economic impact
18:30 AI takeoff speed and bottlenecks
31:09 Tom's model of AI takeoff speed
36:21 How AI could automate AI research
41:49 Bottlenecks to AI automating AI hardware
46:15 How much of AI research is automated now?
48:26 From 20% to 100% automation
53:24 AI takeoff in 3 years
1:09:15 Economic impacts of fast AI takeoff
1:12:51 Bottlenecks slowing AI takeoff
1:20:06 Does the market predict a fast AI takeoff?
1:25:39 "Hard to avoid AGI by 2060"
1:27:22 Risks from AI over the next 20 years
1:31:43 AI progress without more compute
1:44:01 What if AI models fail safety evaluations?
1:45:33 Cybersecurity at AI companies
1:47:33 Will AI turn out well for humanity?
1:50:15 AI and board games
08/09/2023 • 1 hour, 56 minutes, 22 seconds
Imagine A World: What if we designed and built AI in an inclusive way?
How does who is involved in the design of AI affect the possibilities for our future? Why isn’t the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In this second episode of Imagine A World, we explore the fictional worldbuild titled 'Crossing Points', a second-place entry in FLI's worldbuilding contest.
Joining Guillaume Riesen on the Imagine a World podcast this time are two members of the Crossing Points team, Elaine Czech and Vanessa Hampshire, both academics at the University of Bristol. Elaine has a background in art and design, and is studying the accessibility of technologies for the elderly. Vanessa is studying responsible AI practices of technologists, using methods like storytelling to promote diverse voices in AI research. Their teammates in the contest were Tashi Namgyal, a University of Bristol PhD studying the controllability of deep generative models, Dr. Susan Lechelt, who researches the applications and implications of emerging technologies at the University of Edinburgh, and Nicole Oxton, a British civil servant.
There's an emphasis on the unanticipated impacts of new technologies on those who weren't considered during their development. From urban families in Indonesia to anti-technology extremists in America, we're shown that there's something to learn from every human story. This world emphasizes the importance of broadening our lens and empowering marginalized voices in order to build a future that would be bright for more than just a privileged few.
The world of Crossing Points looks pretty different from our own, with advanced AIs debating philosophy on TV and hybrid 3D-printed meats in grocery stores. But the people in this world are still basically the same. Our hopes and dreams haven't fundamentally changed, and neither have our blind spots and shortcomings. Crossing Points embraces humanity in all its diversity and looks for the solutions that human nature presents alongside the problems. It shows that there's something to learn from everyone's experience and that even the most radical attitudes can offer insights that help to build a better world.
Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this worldbuild: https://worldbuild.ai/crossing-points
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Works referenced in this episode:
https://en.wikipedia.org/wiki/The_Legend_of_Zelda
https://en.wikipedia.org/wiki/Ainu_people
https://www.goodreads.com/book/show/34846958-radicals
http://www.historyofmasks.net/famous-masks/noh-mask/
05/09/2023 • 52 minutes, 51 seconds
Imagine A World: What if new governance mechanisms helped us coordinate?
Are today's democratic systems equipped well enough to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together?
Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.
In this first episode of Imagine A World we explore the fictional worldbuild titled 'Peace Through Prophecy'.
Host Guillaume Riesen speaks to the makers of 'Peace Through Prophecy', a second-place entry in FLI's Worldbuilding Contest. The worldbuild was created by Jackson Wagner, Diana Gurvich and Holly Oatley. In the episode, Jackson and Holly discuss just a few of the many ideas bubbling around in their imagined future.
At its core, this world is arguably about community. It asks how technology might bring us closer together and allow us to reinvent our social systems. Many roads are explored: a whole garden of governance systems bolstered by artificial intelligence and other technologies. Overall, there's a shift towards more intimate and empowered communities. Even the AI systems eventually come to see their emotional and creative potentials realized. While progress is uneven and littered with many human setbacks, a pretty good case is made for how everyone's best interests can lead us to a more positive future.
Please note: This episode explores the ideas created as part of FLI’s Worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.
Explore this imagined world: https://worldbuild.ai/peace-through-prophecy
The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Prediction_market
https://forum.effectivealtruism.org/
'Veil of ignorance' thought experiment: https://en.wikipedia.org/wiki/Original_position
https://en.wikipedia.org/wiki/Isaac_Asimov
https://en.wikipedia.org/wiki/Liquid_democracy
https://en.wikipedia.org/wiki/The_Dispossessed
https://en.wikipedia.org/wiki/Terra_Ignota
https://equilibriabook.com/
https://en.wikipedia.org/wiki/John_Rawls
https://en.wikipedia.org/wiki/Radical_transparency
https://en.wikipedia.org/wiki/Audrey_Tang
https://en.wikipedia.org/wiki/Quadratic_voting#Quadratic_funding
05/09/2023 • 1 hour, 2 minutes, 35 seconds
New: Imagine A World Podcast [TRAILER]
Coming Soon…
The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster and major wars. Instead, AI and other new technologies are helping to make the world more peaceful, happy and equal. How? This was what we asked the entrants of our Worldbuilding Contest to imagine last year.
Our new podcast series digs deeper into the eight winning entries, their ideas and solutions, the diverse teams behind them and the challenges they faced. You might love some; others you might not choose to inhabit. FLI is not endorsing any one idea. Rather, we hope to grow the conversation about what futures people get excited about.
Ask yourself, with each episode, is this a world you’d want to live in? And if not, what would you prefer?
Don’t miss the first two episodes coming to your feed at the start of September!
In the meantime, do explore the winning worlds, if you haven’t already: https://worldbuild.ai/
29/08/2023 • 2 minutes
Robert Trager on International AI Governance and Cybersecurity at AI Companies
Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance. We also discuss Robert's forthcoming paper, 'International Governance of Civilian AI: A Jurisdictional Certification Approach'. You can read more about Robert's work at https://www.governance.ai
Timestamps:
00:00 The goals of AI governance
08:38 Incentives of governments and companies
18:58 Benefits of regulatory diversity
28:50 The track record of anticipatory regulation
37:55 The security dilemma in AI
46:20 Offense-defense balance in AI
53:27 Failure rates and international agreements
1:00:33 Verification of compliance
1:07:50 Controlling AI supply chains
1:13:47 Cybersecurity at AI companies
1:21:30 The jurisdictional certification approach
1:28:40 Objections to AI governance
20/08/2023 • 1 hour, 44 minutes, 17 seconds
Jason Crawford on Progress and Risks from AI
Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org
Timestamps:
00:00 Eras of human progress
06:47 Flywheels of progress
17:56 Main causes of progress
21:01 Progress and risk
32:49 Safety as part of progress
45:20 Slowing down specific technologies?
52:29 Four lenses on AI risk
58:48 Analogies causing disagreement
1:00:54 Solutionism about AI
1:10:43 Insurance, subsidies, and bug bounties for AI risk
1:13:24 How is AI different from other technologies?
1:15:54 Future scenarios of economic growth
21/07/2023 • 1 hour, 25 minutes, 43 seconds
Special: Jaan Tallinn on Pausing Giant AI Experiments
On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.
Timestamps:
0:00 Nathan introduces Jaan
4:22 AI safety and Future of Life Institute
5:55 Jaan's first meeting with Eliezer Yudkowsky
12:04 Future of AI evolution
14:58 Jaan's investments in AI companies
23:06 The emerging danger paradigm
26:53 Economic transformation with AI
32:31 AI supervising itself
34:06 Language models and validation
38:49 Lack of insight into evolutionary selection process
41:56 Current estimate for life-ending catastrophe
44:52 Inverse scaling law
53:03 Our luck given the softness of language models
55:07 Future of language models
59:43 The Moore's law of mad science
1:01:45 GPT-5 type project
1:07:43 The AI race dynamics
1:09:43 AI alignment with the latest models
1:13:14 AI research investment and safety
1:19:43 What a six-month pause buys us
1:25:44 AI passing the Turing Test
1:28:16 AI safety and risk
1:32:01 Responsible AI development
1:40:03 Neuralink implant technology
06/07/2023 • 1 hour, 41 minutes, 8 seconds
Joe Carlsmith on How We Change Our Minds About AI Risk
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com.
Timestamps:
00:00 Predictable updating on AI risk
07:27 Abstract models versus gut feelings
22:06 How Joe began believing in AI risk
29:06 Is AI risk falsifiable?
35:39 Types of skepticisms about AI risk
44:51 Are we fundamentally confused?
53:35 Becoming alienated from ourselves?
1:00:12 What will change people's minds?
1:12:34 Outline of different futures
1:20:43 Humanity losing touch with reality
1:27:14 Can we understand AI sentience?
1:36:31 Distinguishing real from fake sentience
1:39:54 AI doomer epistemology
1:45:23 AI benchmarks versus real-world AI
1:53:00 AI improving AI research and development
2:01:08 What if transformative AI comes soon?
2:07:21 AI safety if transformative AI comes soon
2:16:52 AI systems interpreting other AI systems
2:19:38 Philosophy and transformative AI
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
22/06/2023 • 2 hours, 24 minutes, 23 seconds
Dan Hendrycks on Why Evolution Favors AIs over Humans
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai
Timestamps:
00:00 Corporate AI race
06:28 Evolutionary dynamics in AI
25:26 Why evolution applies to AI
50:58 Deceptive AI
1:06:04 Competition erodes safety
1:17:40 Evolutionary fitness: humans versus AI
1:26:32 Different paradigms of AI risk
1:42:57 Interpreting AI systems
1:58:03 Honest AI and uncertain AI
2:06:52 Empirical and conceptual work
2:12:16 Losing touch with reality
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
08/06/2023 • 2 hours, 26 minutes, 37 seconds
Roman Yampolskiy on Objections to AI Safety
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/
Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?
1:12:27 Will AI accidents increase?
1:23:56 Will we know who was right about AI?
1:33:33 Difference between AI output and AI model
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
26/05/2023 • 1 hour, 42 minutes, 13 seconds
Nathan Labenz on How AI Will Transform the Economy
Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 Economic transformation from AI
11:15 Productivity increases from technology
17:44 AI effects on employment
28:43 Life without jobs
38:42 Losing contact with reality
42:31 Catastrophic risks from AI
53:52 Scaling AI training runs
1:02:39 Stable opinions on AI?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
11/05/2023 • 1 hour, 6 minutes, 54 seconds
Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI
Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at
https://www.cognitiverevolution.ai
Timestamps:
00:00 The cognitive revolution
07:47 Red teaming GPT-4
24:00 Coming to believe in transformative AI
30:14 Is AI depth or breadth most impressive?
42:52 Potential near-term dangers from AI
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
04/05/2023 • 59 minutes, 43 seconds
Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology
Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures
Timestamps:
00:00 How does venture capital work?
09:01 Failure and success for startups
13:22 Is overconfidence necessary?
19:20 Repeat entrepreneurs
24:38 Long-term investing
30:36 Feedback loops from investments
35:05 Timing investments
38:35 The hardware-software dichotomy
42:19 Innovation prizes
45:43 VC lessons for philanthropy
51:03 Creating new markets
54:01 Investing versus philanthropy
56:14 Technology preying on human frailty
1:00:55 Are good ideas getting harder to find?
1:06:17 Artificial intelligence
1:12:41 Funding ethics research
1:14:25 Is philosophy useful?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
27/04/2023 • 1 hour, 17 minutes, 46 seconds
Connor Leahy on the State of AI and Alignment Research
Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Landscape of AI research labs
10:13 Is AGI a useful term?
13:31 AI predictions
17:56 Reinforcement learning from human feedback
29:53 Mechanistic interpretability
33:37 Yudkowsky and Christiano
41:39 Cognitive Emulations
43:11 Public reactions to AI
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
20/04/2023 • 52 minutes, 7 seconds
Connor Leahy on AGI and Cognitive Emulation
Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev
Timestamps:
00:00 GPT-4
16:35 "Magic" in machine learning
27:43 Cognitive emulations
38:00 Machine learning vs. explainability
48:00 Human data = human AI?
1:00:07 Analogies for cognitive emulations
1:26:03 Demand for human-like AI
1:31:50 Aligning superintelligence
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
13/04/2023 • 1 hour, 36 minutes, 34 seconds
Lennart Heim on Compute Governance
Lennart Heim joins the podcast to discuss options for governing the compute used by AI labs and potential problems with this approach to AI safety. You can read more about Lennart's work here: https://heim.xyz/about/
Timestamps:
00:00 Introduction
00:37 AI risk
03:33 Why focus on compute?
11:27 Monitoring compute
20:30 Restricting compute
26:54 Subsidising compute
34:00 Compute as a bottleneck
38:41 US and China
42:14 Unintended consequences
46:50 Will AI be like nuclear energy?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
06/04/2023 • 50 minutes, 25 seconds
Lennart Heim on the AI Triad: Compute, Data, and Algorithms
Lennart Heim joins the podcast to discuss how we can forecast AI progress by researching AI hardware. You can read more about Lennart's work here: https://heim.xyz/about/
Timestamps:
00:00 Introduction
01:00 The AI triad
06:26 Modern chip production
15:54 Forecasting AI with compute
27:18 Running out of data?
32:37 Three eras of AI training
37:58 Next chip paradigm
44:21 AI takeoff speeds
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
30/03/2023 • 47 minutes, 51 seconds
Liv Boeree on Poker, GPT-4, and the Future of AI
Liv Boeree joins the podcast to discuss poker, GPT-4, human-AI interaction, whether this is the most important century, and building a dataset of human wisdom. You can read more about Liv's work here: https://livboeree.com
Timestamps:
00:00 Introduction
00:36 AI in Poker
09:35 Game-playing AI
13:45 GPT-4 and generative AI
26:41 Human-AI interaction
32:05 AI arms race risks
39:32 Most important century?
42:36 Diminishing returns to intelligence?
49:14 Dataset of human wisdom/meaning
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
23/03/2023 • 51 minutes, 31 seconds
Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI
Liv Boeree joins the podcast to discuss Moloch, beauty filters, game theory, institutional change, and artificial intelligence. You can read more about Liv's work here: https://livboeree.com
Timestamps:
00:00 Introduction
01:57 What is Moloch?
04:13 Beauty filters
10:06 Science citations
15:18 Resisting Moloch
20:51 New institutions
26:02 Moloch and WinWin
28:41 Changing systems
33:37 Artificial intelligence
39:14 AI acceleration
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
16/03/2023 • 42 minutes, 9 seconds
Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence
Tobias Baumann joins the podcast to discuss suffering risks, space colonization, and cooperative artificial intelligence. You can read more about Tobias' work here: https://centerforreducingsuffering.org.
Timestamps:
00:00 Suffering risks
02:50 Space colonization
10:12 Moral circle expansion
19:14 Cooperative artificial intelligence
36:19 Influencing governments
39:34 Can we reduce suffering?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
09/03/2023 • 43 minutes, 20 seconds
Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering
Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future. You can read more about Tobias' work here: https://centerforreducingsuffering.org.
Timestamps:
00:00 Introduction
00:52 What are suffering risks?
05:40 Artificial sentience
17:18 Is reducing suffering hopelessly difficult?
26:06 Can we know how to reduce suffering?
31:17 Why are suffering risks neglected?
37:31 How do we avoid accidentally increasing suffering?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
02/03/2023 • 47 minutes, 4 seconds
Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI
Neel Nanda joins the podcast for a lightning round on mathematics, technological progress, aging, living up to our values, and generative AI. You can find his blog here: https://www.neelnanda.io
Timestamps:
00:00 Introduction
00:55 How useful is advanced mathematics?
02:24 Will AI replace mathematicians?
03:28 What are the key drivers of tech progress?
04:13 What scientific discovery would disrupt Neel's worldview?
05:59 How should humanity view aging?
08:03 How can we live up to our values?
10:56 What can we learn from a person who lived 1,000 years ago?
12:05 What should we do after we have aligned AGI?
16:19 What important concept is often misunderstood?
17:22 What is the most impressive scientific discovery?
18:08 Are language models better learning tools than textbooks?
21:22 Should settling Mars be a priority for humanity?
22:44 How can we focus on our work?
24:04 Are human-AI relationships morally okay?
25:18 Are there aliens in the universe?
26:02 What are Neel's favourite books?
27:15 What is an overlooked positive aspect of humanity?
28:33 Should people spend more time prepping for disaster?
30:41 Neel's advice for teens
31:55 How will generative AI evolve over the next five years?
32:56 How much can AIs achieve through a web browser?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
23/02/2023 • 34 minutes, 47 seconds
Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability
Neel Nanda joins the podcast to talk about mechanistic interpretability and how it can make AI safer. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io
Timestamps:
00:00 Introduction
00:46 How early is the field of mechanistic interpretability?
03:12 Why should we care about mechanistic interpretability?
06:38 What are some successes in mechanistic interpretability?
16:29 How promising is mechanistic interpretability?
31:13 Is machine learning analogous to evolution?
32:58 How does mechanistic interpretability make AI safer?
36:54 Does mechanistic interpretability help us control AI?
39:57 Will AI models resist interpretation?
43:43 Is mechanistic interpretability fast enough?
54:10 Does mechanistic interpretability give us a general understanding?
57:44 How can you help with mechanistic interpretability?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
16/02/2023 • 1 hour, 1 minute, 39 seconds
Neel Nanda on What is Going on Inside Neural Networks
Neel Nanda joins the podcast to explain how we can understand neural networks using mechanistic interpretability. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io
Timestamps:
00:00 Who is Neel?
04:41 How did Neel choose to work on AI safety?
12:57 What does an AI safety researcher do?
15:53 How analogous are digital neural networks to brains?
21:34 Are neural networks like alien beings?
29:13 Can humans think like AIs?
35:00 Can AIs help us discover new physics?
39:56 How advanced is the field of AI safety?
45:56 How did Neel form independent opinions on AI?
48:20 How does AI safety research decrease the risk of extinction?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
09/02/2023 • 1 hour, 4 minutes, 52 seconds
Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education
Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
02/02/2023 • 1 hour, 5 minutes, 53 seconds
Connor Leahy on AI Safety and Why the World is Fragile
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
26/01/2023 • 1 hour, 5 minutes, 5 seconds
Connor Leahy on AI Progress, Chimps, Memes, and Markets
Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev
Timestamps:
00:00 Introduction
01:00 Defining artificial general intelligence
04:52 What makes humans more powerful than chimps?
17:23 Would AIs have to be social to be intelligent?
20:29 Importing humanity's memes into AIs
23:07 How do we measure progress in AI?
42:39 Gut feelings about AI progress
47:29 Connor's predictions about AGI
52:44 Is predicting AGI soon betting against the market?
57:43 How accurate are prediction markets about AGI?
19/01/2023 • 1 hour, 4 minutes, 10 seconds
Sean Ekins on Regulating AI Drug Discovery
On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery.
Timestamps:
00:00 Introduction
00:31 Ethical guidelines and regulation of AI drug discovery
06:11 How do we balance innovation and safety in AI drug discovery?
13:12 Keeping dangerous chemical data safe
21:16 Sean’s personal story of voicing concerns about AI drug discovery
32:06 How Sean will continue working on AI drug discovery
12/01/2023 • 36 minutes, 32 seconds
Sean Ekins on the Dangers of AI Drug Discovery
On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean discovered an extremely toxic chemical (VX) by reversing an AI drug discovery algorithm.
Timestamps:
00:00 Introduction
00:46 Sean’s professional journey
03:45 Can computational models replace animal models?
07:24 The risks of AI drug discovery
12:48 Should scientists disclose dangerous discoveries?
19:40 How should scientists handle dual-use technologies?
22:08 Should we open-source potentially dangerous discoveries?
26:20 How do we control autonomous drug creation?
31:36 Surprising chemical discoveries made by black-box AI systems
36:56 How could the dangers of AI drug discovery be mitigated?
05/01/2023 • 39 minutes, 10 seconds
Anders Sandberg on the Value of the Future
Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future.
Learn more about Anders' work: https://www.fhi.ox.ac.uk
Timestamps:
00:00 Introduction
00:54 Humanity as an immature teenager
04:24 How should we respond to our values changing over time?
18:53 How quickly should we change our values?
24:58 Are there limits to what future morality could become?
29:45 Could the universe contain infinite value?
36:00 How do we balance weird philosophy with common sense?
41:36 Lightning round: mind uploading, aliens, interstellar travel, cryonics
29/12/2022 • 49 minutes, 42 seconds
Anders Sandberg on Grand Futures and the Limits of Physics
Anders Sandberg joins the podcast to discuss how big the future could be and what humanity could achieve at the limits of physics.
Learn more about Anders' work: https://www.fhi.ox.ac.uk
Timestamps:
00:00 Introduction
00:58 Does it make sense to write long books now?
06:53 Is it possible to understand all of science now?
10:44 What is exploratory engineering?
15:48 Will humanity develop a completed science?
21:18 How much of possible technology has humanity already invented?
25:22 Which sciences have made the most progress?
29:11 How materially wealthy could humanity become?
39:34 Do grand futures depend on space travel?
49:16 Trade between proponents of different moral theories
53:13 How does physics limit our ethical options?
55:24 How much could our understanding of physics change?
1:02:30 The next episode
22/12/2022 • 1 hour, 2 minutes, 47 seconds
Anders Sandberg on ChatGPT and the Future of AI
Anders Sandberg from The Future of Humanity Institute joins the podcast to discuss ChatGPT, large language models, and what he's learned about the risks and benefits of AI.
Timestamps:
00:00 Introduction
00:40 ChatGPT
06:33 Will AI continue to surprise us?
16:22 How do language models fail?
24:23 Language models trained on their own output
27:29 Can language models write college-level essays?
35:03 Do language models understand anything?
39:59 How will AI models improve in the future?
43:26 AI safety in light of recent AI progress
51:28 AIs should be uncertain about values
15/12/2022 • 58 minutes, 15 seconds
Vincent Boulanin on Military Use of Artificial Intelligence
Vincent Boulanin joins the podcast to explain how modern militaries use AI, including in nuclear weapons systems.
Learn more about Vincent's work: https://sipri.org
Timestamps:
00:00 Introduction
00:45 Categorizing risks from AI and nuclear
07:40 AI being used by non-state actors
12:57 Combining AI with nuclear technology
15:13 A human should remain in the loop
25:05 Automation bias
29:58 Information requirements for nuclear launch decisions
35:22 Vincent's general conclusion about military machine learning
37:22 Specific policy measures for decreasing nuclear risk
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
08/12/2022 • 48 minutes, 7 seconds
Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems
Vincent Boulanin joins the podcast to explain the dangers of incorporating artificial intelligence in nuclear weapons systems.
Learn more about Vincent's work: https://sipri.org
Timestamps:
00:00 Introduction
00:55 What is strategic stability?
02:45 How can AI be a positive factor in nuclear risk?
10:17 Remote sensing of nuclear submarines
19:50 Using AI in nuclear command and control
24:21 How does AI change the game theory of nuclear war?
30:49 How could AI cause an accidental nuclear escalation?
36:57 How could AI cause an inadvertent nuclear escalation?
43:08 What is the most important problem in AI nuclear risk?
44:39 The next episode
01/12/2022 • 44 minutes, 52 seconds
Robin Hanson on Predicting the Future of Artificial Intelligence
Robin Hanson joins the podcast to discuss AI forecasting methods and metrics.
Timestamps:
00:00 Introduction
00:49 Robin's experience working with AI
06:04 Robin's views on AI development
10:41 Should we care about metrics for AI progress?
16:56 Is it useful to track AI progress?
22:02 When should we begin worrying about AI safety?
29:16 The history of AI development
39:52 AI progress that deviates from current trends
43:34 Is this AI boom different than past booms?
48:26 Different metrics for predicting AI
24/11/2022 • 51 minutes, 48 seconds
Robin Hanson on Grabby Aliens and When Humanity Will Meet Them
Robin Hanson joins the podcast to explain his theory of grabby aliens and its implications for the future of humanity.
Learn more about the theory here: https://grabbyaliens.com
Timestamps:
00:00 Introduction
00:49 Why should we care about aliens?
05:58 Loud alien civilizations and quiet alien civilizations
08:16 Why would some alien civilizations be quiet?
14:50 The moving parts of the grabby aliens model
23:57 Why is humanity early in the universe?
28:46 Couldn't we just be alone in the universe?
33:15 When will humanity expand into space?
46:05 Will humanity be more advanced than the aliens we meet?
49:32 What if we discovered aliens tomorrow?
53:44 Should the way we think about aliens change our actions?
57:48 Can we reasonably theorize about aliens?
53:39 The next episode
17/11/2022 • 59 minutes, 53 seconds
Ajeya Cotra on Thinking Clearly in a Rapidly Changing World
Ajeya Cotra joins us to talk about thinking clearly in a rapidly changing world.
Learn more about the work of Ajeya and her colleagues: https://www.openphilanthropy.org
Timestamps:
00:00 Introduction
00:44 The default versus the accelerating picture of the future
04:25 The role of AI in accelerating change
06:48 Extrapolating economic growth
08:53 How do we know whether the pace of change is accelerating?
15:07 How can we cope with a rapidly changing world?
18:50 How could the future be utopian?
22:03 Is accelerating technological progress immoral?
25:43 Should we imagine concrete future scenarios?
31:15 How should we act in an accelerating world?
34:41 How Ajeya could be wrong about the future
41:41 What if change accelerates very rapidly?
10/11/2022 • 44 minutes, 41 seconds
Ajeya Cotra on How Artificial Intelligence Could Cause Catastrophe
Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe.
Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org
Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode
03/11/2022 • 54 minutes, 18 seconds
Ajeya Cotra on Forecasting Transformative Artificial Intelligence
Ajeya Cotra joins us to discuss forecasting transformative artificial intelligence.
Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org
Timestamps:
00:00 Introduction
00:53 Ajeya's report on AI
01:16 What is transformative AI?
02:09 Forecasting transformative AI
02:53 Historical growth rates
05:10 Simpler forecasting methods
09:01 Biological anchors
16:31 Different paths to transformative AI
17:55 Which year will we get transformative AI?
25:54 Expert opinion on transformative AI
30:08 Are today's machine learning techniques enough?
33:06 Will AI be limited by the physical world and regulation?
38:15 Will AI be limited by training data?
41:48 Are there human abilities that AIs cannot learn?
47:22 The next episode
27/10/2022 • 47 minutes, 40 seconds
Alan Robock on Nuclear Winter, Famine, and Geoengineering
Alan Robock joins us to discuss nuclear winter, famine and geoengineering.
Learn more about Alan's work: http://people.envsci.rutgers.edu/robock/
Follow Alan on Twitter: https://twitter.com/AlanRobock
Timestamps:
00:00 Introduction
00:45 What is nuclear winter?
06:27 A nuclear war between India and Pakistan
09:16 Targets in a nuclear war
11:08 Why does the world have so many nuclear weapons?
19:28 Societal collapse in a nuclear winter
22:45 Should we prepare for a nuclear winter?
28:13 Skepticism about nuclear winter
35:16 Unanswered questions about nuclear winter
20/10/2022 • 41 minutes, 21 seconds
Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity
Brian Toon joins us to discuss the risk of nuclear winter.
Learn more about Brian's work: https://lasp.colorado.edu/home/people/brian-toon/
Read Brian's publications: https://airbornescience.nasa.gov/person/Brian_Toon
Timestamps:
00:00 Introduction
01:02 Asteroid impacts
04:20 The discovery of nuclear winter
13:56 Comparing volcanoes and asteroids to nuclear weapons
19:42 How did life survive the asteroid impact 65 million years ago?
25:05 How humanity could go extinct
29:46 Nuclear weapons as a great filter
34:32 Nuclear winter and food production
40:58 The psychology of nuclear threat
43:56 Geoengineering to prevent nuclear winter
46:49 Will humanity avoid nuclear winter?
13/10/2022 • 49 minutes, 19 seconds
Philip Reiner on Nuclear Command, Control, and Communications
Philip Reiner joins us to talk about nuclear command, control, and communications systems.
Learn more about Philip’s work: https://securityandtechnology.org/
Timestamps:
[00:00:00] Introduction
[00:00:50] Nuclear command, control, and communications
[00:03:52] Old technology in nuclear systems
[00:12:18] Incentives for nuclear states
[00:15:04] Selectively enhancing security
[00:17:34] Unilateral de-escalation
[00:18:04] Nuclear communications
[00:24:08] The CATALINK System
[00:31:25] AI in nuclear command, control, and communications
[00:40:27] Russia's war in Ukraine
06/10/2022 • 47 minutes, 21 seconds
Daniela and Dario Amodei on Anthropic
Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Topics discussed in this episode include:
-Anthropic's mission and research strategy
-Recent research and papers by Anthropic
-Anthropic's structure as a "public benefit corporation"
-Career opportunities
You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/
Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A
Careers at Anthropic: https://www.anthropic.com/#careers
Anthropic's Transformer Circuits research: https://transformer-circuits.pub/
Follow Anthropic on Twitter: https://twitter.com/AnthropicAI
microCOVID Project: https://www.microcovid.org/
Follow Lucas on Twitter: https://twitter.com/lucasfmperry
Have any feedback about the podcast? You can share your thoughts here:
www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:44 What was the intention behind forming Anthropic?
6:28 Do the founders of Anthropic share a similar view on AI?
7:55 What is Anthropic's focused research bet?
11:10 Does AI existential safety fit into Anthropic's work and thinking?
14:14 Examples of AI models today that have properties relevant to future AI existential safety
16:12 Why work on large scale models?
20:02 What does it mean for a model to lie?
22:44 Safety concerns around the open-endedness of large models
29:01 How does safety work fit into race dynamics to more and more powerful AI?
36:16 Anthropic's mission and how it fits into AI alignment
38:40 Why explore large models for AI safety and scaling to more intelligent systems?
43:24 Is Anthropic's research strategy a form of prosaic alignment?
46:22 Anthropic's recent research and papers
49:52 How difficult is it to interpret current AI models?
52:40 Anthropic's research on alignment and societal impact
55:35 Why did you decide to release tools and videos alongside your interpretability research?
1:01:04 What is it like working with your sibling?
1:05:33 Inspiration around creating Anthropic
1:12:40 Is there an upward bound on capability gains from scaling current models?
1:18:00 Why is it unlikely that continuously increasing the number of parameters on models will lead to AGI?
1:21:10 Bootstrapping models
1:22:26 How does Anthropic see itself as positioned in the AI safety space?
1:25:35 What does being a public benefit corporation mean for Anthropic?
1:30:55 Anthropic's perspective on windfall profits from powerful AI systems
1:34:07 Issues with current AI systems and their relationship with long-term safety concerns
1:39:30 Anthropic's plan to communicate its work to technical researchers and policy makers
1:41:28 AI evaluations and monitoring
1:42:50 AI governance
1:45:12 Careers at Anthropic
1:48:30 What it's like working at Anthropic
1:52:48 Why hire people of a wide variety of technical backgrounds?
1:54:33 What's a future you're excited about or hopeful for?
1:59:42 Where to find and follow Anthropic
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
04/03/2022 • 2 hours, 1 minute, 27 seconds
Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest
Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest.
Topics discussed in this episode include:
-Motivations behind the contest
-The importance of worldbuilding
-The rules of the contest
-What a submission consists of
-Due date and prizes
Learn more about the contest here: https://worldbuild.ai/
Join the discord: https://discord.com/invite/njZyTJpwMz
You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-yelizarova-on-flis-worldbuilding-contest/
Watch the video version of this episode here: https://www.youtube.com/watch?v=WZBXSiyienI
Follow Lucas on Twitter here: twitter.com/lucasfmperry
Have any feedback about the podcast? You can share your thoughts here:
www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:30 What is "worldbuilding" and FLI's Worldbuilding Contest?
6:32 Why do worldbuilding for 2045?
7:22 Why is it important to practice worldbuilding?
13:50 What are the rules of the contest?
19:53 What does a submission consist of?
22:16 Due dates and prizes?
25:58 Final thoughts and how the contest contributes to creating beneficial futures
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
09/02/2022 • 33 minutes, 17 seconds
David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy
David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy.
Topics discussed in this episode include:
-Virtual reality as genuine reality
-Why VR is compatible with the good life
-Why we can never know whether we're in a simulation
-Consciousness in virtual realities
-The ethics of simulated beings
You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-reality-virtual-worlds-and-the-problems-of-philosophy/
Watch the video version of this episode here: https://www.youtube.com/watch?v=hePEg_h90KI
Check out David's book and website here: http://consc.net/
Follow Lucas on Twitter here: https://twitter.com/lucasfmperry
Have any feedback about the podcast? You can share your thoughts here:
www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:43 How this book fits into David's philosophical journey
9:40 David's favorite part(s) of the book
12:04 What is the thesis of the book?
14:00 The core areas of philosophy and how they fit into Reality+
16:48 Techno-philosophy
19:38 What is "virtual reality?"
21:06 Why is virtual reality "genuine reality?"
25:27 What is the dust theory and what's it have to do with the simulation hypothesis?
29:59 How does the dust theory fit in with arguing for virtual reality as genuine reality?
34:45 Exploring criteria for what it means for something to be real
42:38 What is the common sense view of what is real?
46:19 Is your book intended to address common sense intuitions about virtual reality?
48:51 Nozick's experience machine and how questions of value fit in
54:20 Technological implementations of virtual reality
58:40 How does consciousness fit into all of this?
1:00:18 Substrate independence and if classical computers can be conscious
1:02:35 How do problems of identity fit into virtual reality?
1:04:54 How would David upload himself?
1:08:00 How does the mind body problem fit into Reality+?
1:11:40 Is consciousness the foundation of value?
1:14:23 Does your moral theory affect whether you can live a good life in a virtual reality?
1:17:20 What does a good life in virtual reality look like?
1:19:08 David's favorite VR experiences
1:20:42 What is the moral status of simulated people?
1:22:38 Will there be unconscious simulated people with moral patiency?
1:24:41 Why we can never know we're not in a simulation
1:27:56 David's credences for whether we live in a simulation
1:30:29 Digital physics and what it says about the simulation hypothesis
1:35:21 Imperfect realism and how David sees the world after writing Reality+
1:37:51 David's thoughts on God
1:39:42 Moral realism or anti-realism?
1:40:55 Where to follow David and find Reality+
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
26/01/2022 • 1 hour, 42 minutes, 30 seconds
Rohin Shah on the State of AGI Safety Research in 2021
Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI researcher might decide whether to work on AI safety; and why we don't know that AI systems won't lead to existential risk.
Topics discussed in this episode include:
- Inner Alignment versus Outer Alignment
- Foundation Models
- Structural AI Risks
- Unipolar versus Multipolar Scenarios
- The Most Important Thing That Impacts the Future of Life
You can find the page for the podcast here:
https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021
Watch the video version of this episode here:
https://youtu.be/_5xkh-Rh6Ec
Follow the Alignment Newsletter here: https://rohinshah.com/alignment-newsletter/
Have any feedback about the podcast? You can share your thoughts here:
https://www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
00:02:22 What is AI alignment?
00:06:00 How has your perspective of this problem changed over the past year?
00:06:28 Inner Alignment
00:13:00 Ways that AI could actually lead to human extinction
00:18:53 Inner Alignment and mesa-optimizers
00:20:15 Outer Alignment
00:23:12 The core problem of AI alignment
00:24:54 Learning Systems versus Planning Systems
00:28:10 AI and Existential Risk
00:32:05 The probability of AI existential risk
00:51:31 Core problems in AI alignment
00:54:46 How has AI alignment, as a field of research, changed in the last year?
00:54:02 Large scale language models
00:54:50 Foundation Models
00:59:58 Why don't we know that AI systems won't totally kill us all?
01:09:05 How much of the alignment and safety problems in AI will be solved by industry?
01:14:44 Do you think about what beneficial futures look like?
01:19:31 Moral Anti-Realism and AI
01:27:25 Unipolar versus Multipolar Scenarios
01:35:33 What is the safety team at DeepMind up to?
01:35:41 What is the most important thing that impacts the future of life?
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
02/11/2021 • 1 hour, 43 minutes, 50 seconds
Future of Life Institute's $25M Grants Program for Existential Risk Reduction
Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program.
Topics discussed in this episode include:
- The reason Future of Life Institute is offering AI Existential Safety Grants
- Max speaks about how receiving a grant changed his career early on
- Daniel and Andrea provide details on the fellowships and future grant priorities
Check out our grants programs here: https://grants.futureoflife.org/
Join our AI Existential Safety Community:
https://futureoflife.org/team/ai-exis...
Have any feedback about the podcast? You can share your thoughts here:
https://www.surveymonkey.com/r/DRBFZCT
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
18/10/2021 • 24 minutes, 44 seconds
Filippa Lentzos on Global Catastrophic Biological Risks
Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and life sciences, and governance in biological risk.
Topics discussed in this episode include:
- The most pressing issue in biosecurity
- Stories from when biosafety labs failed to contain dangerous pathogens
- The lethality of pathogens being worked on at biolaboratories
- Lessons from COVID-19
You can find the page for the podcast here:
https://futureoflife.org/2021/10/01/filippa-lentzos-on-emerging-threats-in-biosecurity/
Watch the video version of this episode here:
https://www.youtube.com/watch?v=I6M34oQ4v4w
Have any feedback about the podcast? You can share your thoughts here:
https://www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:35 What are the least understood aspects of biological risk?
8:32 Which groups are interested in biotechnologies that could be used for harm?
16:30 Why countries may pursue the development of dangerous pathogens
18:45 Dr. Lentzos' strands of research
25:41 Stories from when biosafety labs failed to contain dangerous pathogens
28:34 The most pressing issue in biosecurity
31:06 What is gain of function research? What are the risks?
34:57 Examples of gain of function research
36:14 What are the benefits of gain of function research?
37:54 The lethality of pathogens being worked on at biolaboratories
40:25 Benefits and risks of big data in biology and the life sciences
45:03 Creating a bioweather map or using big data for biodefense
48:35 Lessons from COVID-19
53:46 How does governance fit in to biological risk?
55:59 Key takeaways from Dr. Lentzos
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
01/10/2021 • 58 minutes, 14 seconds
Susan Solomon and Stephen Andersen on Saving the Ozone Layer
Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster.
Topics discussed in this episode include:
-The industrial and commercial uses of chlorofluorocarbons (CFCs)
-How we discovered the atmospheric effects of CFCs
-The Montreal Protocol and its significance
-Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial roles in helping to solve the ozone hole crisis
-Lessons we can take away for climate change and other global catastrophic risks
You can find the page for this podcast here: https://futureoflife.org/2021/09/16/susan-solomon-and-stephen-andersen-on-saving-the-ozone-layer/
Check out the video version of the episode here: https://www.youtube.com/watch?v=7hwh-uDo-6A&ab_channel=FutureofLifeInstitute
Check out the story of the ozone hole crisis here: https://undsci.berkeley.edu/article/0_0_0/ozone_depletion_01
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
3:13 What are CFCs and what was their role in society?
7:09 James Lovelock discovering an abundance of CFCs in the lower atmosphere
12:43 F. Sherwood Rowland's and Mario Molina's research on the atmospheric science of CFCs
19:52 How a single chlorine atom from a CFC molecule can destroy a large amount of ozone
23:12 Moving from models of ozone depletion to empirical evidence of the ozone depleting mechanism
24:41 Joseph Farman and discovering the ozone hole
30:36 Susan Solomon's discovery that the surfaces of high-altitude Antarctic clouds are crucial for ozone depletion
47:22 The Montreal Protocol
1:00:00 Who were the key stakeholders in the Montreal Protocol?
1:03:46 Stephen Andersen's efforts to phase out CFCs as the co-chair of the Montreal Protocol Technology and Economic Assessment Panel
1:13:28 The Montreal Protocol helping to prevent 11 billion metric tons of CO2 emissions per year
1:18:30 Susan and Stephen's key takeaways from their experience with the ozone hole crisis
1:24:24 What world did we avoid through our efforts to save the ozone layer?
1:28:37 The lessons Stephen and Susan take away from their experience working to phase out CFCs from industry
1:34:30 Is action on climate change practical?
1:40:34 Does the Paris Agreement have something like the Montreal Protocol Technology and Economic Assessment Panel?
1:43:23 Final words from Susan and Stephen
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
16/09/2021 • 1 hour, 44 minutes, 54 seconds
James Manyika on Global Economic and Technological Trends
James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it.
Topics discussed in this episode include:
-The modern social contract
-Reskilling, wage stagnation, and inequality
-Technology induced unemployment
-The structure of the global economy
-The geographic concentration of economic growth
You can find the page for this podcast here: https://futureoflife.org/2021/09/06/james-manyika-on-global-economic-and-technological-trends/
Check out the video version of the episode here: https://youtu.be/zLXmFiwT0-M
Check out the McKinsey Global Institute here: https://www.mckinsey.com/mgi/overview
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:14 What are the most important problems in the world today?
4:30 The issue of inequality
8:17 How the structure of the global economy is changing
10:21 How does the role of incentives fit into global issues?
13:00 How the social contract has evolved in the 21st century
18:20 A billion people lifted out of poverty
19:04 What drives economic growth?
29:28 How does AI automation affect the virtuous and vicious versions of productivity growth?
38:06 Automation and reflecting on jobs lost, jobs gained, and jobs changed
43:15 AGI and automation
48:00 How do we address the issue of technology induced unemployment
58:05 Developing countries and economies
1:01:29 The central forces in the global economy
1:07:36 The global economic center of gravity
1:09:42 Understanding the core impacts of AI
1:12:32 How do global catastrophic and existential risks fit into the modern global economy?
1:17:52 The economics of climate change and AI risk
1:20:50 Will we use AI technology like we've used fossil fuel technology?
1:24:34 The risks of AI contributing to inequality and bias
1:31:45 How do we integrate developing countries' voices in the development and deployment of AI systems?
1:33:42 James' core takeaway
1:37:19 Where to follow and learn more about James' work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
07/09/2021 • 1 hour, 38 minutes, 12 seconds
Michael Klare on the Pentagon's View of Climate Change and the Risks of State Collapse
Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great powers conflict and state collapse.
Topics discussed in this episode include:
-How the US military views and takes action on climate change
-Examples of existing climate related difficulties and what they tell us about the future
-Threat multiplication from climate change
-The risks of climate change catalyzed nuclear war and major conflict
-The melting of the Arctic and the geopolitical situation which arises from that
-Messaging on climate change
You can find the page for this podcast here: https://futureoflife.org/2021/07/30/michael-klare-on-the-pentagons-view-of-climate-change-and-the-risks-of-state-collapse/
Check out the video version of the episode here: https://www.youtube.com/watch?v=bn57jxEoW24
Check out Michael's website here: http://michaelklare.com/
Apply for the Podcast Producer position here: futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:28 How does the Pentagon view climate change and why are they interested in it?
5:30 What are the Pentagon's main priorities besides climate change?
8:31 What are the objectives of career officers at the Pentagon and how do they see climate change?
10:32 The relationship between Pentagon career officers and the Trump administration on climate change
15:47 How is the Pentagon's view of climate change unique and important?
19:54 How climate change exacerbates existing difficulties and the issue of threat multiplication
24:25 How will climate change increase the tensions between the nuclear weapons states of India, Pakistan, and China?
26:32 What happened to Tacloban City and how is it relevant?
32:27 Why does the US military provide global humanitarian assistance?
34:39 How has climate change impacted the conditions in Nigeria and how does this inform the Pentagon's perspective?
39:40 What is the ladder of escalation for climate change related issues?
46:54 What is "all hell breaking loose?"
48:26 What is the geopolitical situation arising from the melting of the Arctic?
52:48 Why does the Bering Strait matter for the Arctic?
54:23 The Arctic as a main source of conflict for the great powers in the coming years
58:01 Are there ongoing proposals for resolving territorial disputes in the Arctic?
1:01:40 Nuclear weapons risk and climate change
1:03:32 How does the Pentagon intend to address climate change?
1:06:20 Hardening US military bases and going green
1:11:50 How climate change will affect critical infrastructure
1:15:47 How do lethal autonomous weapons fit into the risks of escalation in a world stressed by climate change?
1:19:42 How does this all affect existential risk?
1:24:39 Are there timelines for when climate change induced stresses will occur?
1:27:03 Does tying existential risks to national security issues benefit awareness around existential risk?
1:30:18 Does relating climate change to migration issues help with climate messaging?
1:31:08 A summary of the Pentagon's interest, view, and action on climate change
1:33:00 Final words from Michael
1:34:33 Where to find more of Michael's work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
30/07/2021 • 1 hour, 35 minutes, 13 seconds
Avi Loeb on UFOs and If They're Alien in Origin
Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat.
Topics discussed in this episode include:
-Evidence bearing on the natural, human, and extraterrestrial origins of UAPs
-The culture of science and how it deals with UAP reports
-How humanity should respond if we discover UAPs are alien in origin
-A project for collecting high quality data on UAPs
You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-ufos-and-if-theyre-alien-in-origin/
Apply for the Podcast Producer position here: futureoflife.org/job-postings/
Check out the video version of the episode here: https://www.youtube.com/watch?v=AyNlLaFTeFI&ab_channel=FutureofLifeInstitute
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:41 Why is the US Government report on UAPs significant?
7:08 Multiple different sensors detecting the same phenomena
11:50 Are UAPs a US technology?
13:20 Incentives to deploy powerful technology
15:48 What are the flight and capability characteristics of UAPs?
17:53 The similarities between 'Oumuamua and UAP reports
20:11 Are UAPs some form of spoofing technology?
22:48 What is the most convincing natural or conventional explanation of UAPs?
25:09 UAPs as potentially containing artificial intelligence
28:15 Can you give a credence to UAPs being alien in origin?
29:32 Why aren't UAPs far more technologically advanced?
32:15 How should humanity respond if UAPs are found to be alien in origin?
35:15 A plan to get better data on UAPs
38:56 Final thoughts from Avi
39:40 Getting in contact with Avi to support his project
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
09/07/2021 • 40 minutes, 34 seconds
Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures
Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, if we've already encountered alien technology, and whether we're ultimately alone in the cosmos.
Topics discussed in this episode include:
-Whether 'Oumuamua is alien or natural in origin
-The culture of science and how it affects fruitful inquiry
-Looking for signs of alien life throughout the solar system and beyond
-Alien artefacts and galactic treaties
-How humanity should handle a potential first contact with extraterrestrials
-The relationship between what is true and what is good
You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-oumuamua-aliens-space-archeology-great-filters-and-superstructures/
Apply for the Podcast Producer position here: https://futureoflife.org/job-postings/
Check out the video version of the episode here: https://www.youtube.com/watch?v=qcxJ8QZQkwE&ab_channel=FutureofLifeInstitute
See our second interview with Avi here: https://soundcloud.com/futureoflife/avi-loeb-on-ufos-and-if-theyre-alien-in-origin
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
3:28 What is 'Oumuamua's wager?
11:29 The properties of 'Oumuamua and how they lend credence to the theory of it being artificial in origin
17:23 Theories of 'Oumuamua being natural in origin
21:42 Why was the smooth acceleration of 'Oumuamua significant?
23:35 What are comets and asteroids?
28:30 What we know about Oort clouds and how 'Oumuamua relates to what we expect of Oort clouds
33:40 Could there be exotic objects in Oort clouds that would account for 'Oumuamua?
38:08 What is your credence that 'Oumuamua is alien in origin?
44:50 Bayesian reasoning and 'Oumuamua
46:34 How do UFO reports and sightings affect your perspective of 'Oumuamua?
54:35 Might alien artefacts be more common than we expect?
58:48 The Drake equation
1:01:50 Where are the most likely great filters?
1:11:22 Difficulties in scientific culture and how they affect fruitful inquiry
1:27:03 The cosmic endowment, traveling to galactic clusters, and galactic treaties
1:31:34 Why don't we find evidence of alien superstructures?
1:36:36 Looking for the bio and techno signatures of alien life
1:40:27 Do alien civilizations converge on beneficence?
1:43:05 Is there a necessary relationship between what is true and good?
1:47:02 Is morality evidence-based knowledge?
1:48:18 Axiom-based knowledge and testing moral systems
1:54:08 International governance and making contact with alien life
1:55:59 The need for an elite scientific body to advise on global catastrophic and existential risk
1:59:57 What are the most fundamental questions?
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
09/07/2021 • 2 hours, 4 minutes
Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI
Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology and ideas in the 21st century.
Topics discussed in this episode include:
-What wisdom consists of
-The role of ideas in society and civilization
-The increasing concentration of power and wealth
-The technological displacement of human labor
-Democracy, universal basic income, and universal basic capital
-Living an examined life
You can find the page for this podcast here: https://futureoflife.org/2021/05/31/nicolas-berggruen-on-the-dynamics-of-power-wisdom-technology-and-ideas-in-the-age-of-ai/
Check out Nicolas' thoughts archive here: www.nicolasberggruen.com
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:45 The race between the power of our technology and the wisdom with which we manage it
5:19 What is wisdom?
8:30 The power of ideas
11:06 Humanity’s investment in wisdom vs the power of our technology
15:39 Why does our wisdom lag behind our power?
20:51 Technology evolving into an agent
24:28 How ideas play a role in the value alignment of technology
30:14 Wisdom for building beneficial AI and mitigating the race to power
34:37 Does Mark Zuckerberg have control of Facebook?
36:39 Safeguarding the human mind and maintaining control of AI
42:26 The importance of the examined life in the 21st century
45:56 An example of the examined life
48:54 Important ideas for the 21st century
52:46 The concentration of power and wealth, and a proposal for universal basic capital
1:03:07 Negative and positive futures
1:06:30 Final thoughts from Nicolas
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
01/06/2021 • 1 hour, 8 minutes, 16 seconds
Bart Selman on the Promises and Perils of Artificial Intelligence
Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence.
Topics discussed in this episode include:
-Negative and positive outcomes from AI in the short, medium, and long-terms
-The perils and promises of AGI and superintelligence
-AI alignment and AI existential risk
-Lethal autonomous weapons
-AI governance and racing to powerful AI systems
-AI consciousness
You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:35 Futures that Bart is excited about
4:08 Positive futures in the short, medium, and long-terms
7:23 AGI timelines
8:11 Bart’s research on “planning” through the game of Sokoban
13:10 If we don’t go extinct, is the creation of AGI and superintelligence inevitable?
15:28 What’s exciting about futures with AGI and superintelligence?
17:10 How long does it take for superintelligence to arise after AGI?
21:08 Would a superintelligence have something intelligent to say about income inequality?
23:24 Are there true or false answers to moral questions?
25:30 Can AGI and superintelligence assist with moral and philosophical issues?
28:07 Do you think superintelligences converge on ethics?
29:32 Are you most excited about the short or long-term benefits of AI?
34:30 Is existential risk from AI a legitimate threat?
35:22 Is the AI alignment problem legitimate?
43:29 What are futures that you fear?
46:24 Do social media algorithms represent an instance of the alignment problem?
51:46 The importance of educating the public on AI
55:00 Income inequality, cyber security, and negative futures
1:00:06 Lethal autonomous weapons
1:01:50 Negative futures in the long-term
1:03:26 How have your views of AI alignment evolved?
1:06:53 Bart’s plans and intentions for the Association for the Advancement of Artificial Intelligence
1:13:45 Policy recommendations for existing AIs and the AI ecosystem
1:15:35 Solving the parts of AI alignment that won’t be solved by industry incentives
1:18:17 Narratives of an international race to powerful AI systems
1:20:42 How does an international race to AI affect the chances of successful AI alignment?
1:23:20 Is AI a zero sum game?
1:28:51 Lethal autonomous weapons governance
1:31:38 Does the governance of autonomous weapons affect outcomes from AGI?
1:33:00 AI consciousness
1:39:37 Alignment is important and the benefits of AI can be great
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
20/05/2021 • 1 hour, 41 minutes, 3 seconds
Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century
Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.
Topics discussed in this episode include:
-Intelligence and coordination
-Existential risk from AI, synthetic biology, and unknown unknowns
-AI adoption as a delegation process
-Jaan's investments and philanthropic efforts
-International coordination and incentive structures
-The short-term and long-term AI safety communities
You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
1:29 How can humanity improve?
3:10 The importance of intelligence and coordination
8:30 The bottlenecks of input and output bandwidth as well as processing speed between AIs and humans
15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks
17:15 How Jaan evaluates and thinks about existential risk
18:30 Nuclear weapons as the first existential risk we faced
20:47 The likelihood of unknown unknown existential risks
25:04 Why Jaan doesn't see nuclear war as an existential risk
27:54 Climate change
29:00 Existential risk from synthetic biology
31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge
36:23 AI adoption as a delegation process
42:52 Attractors in the design space of AI
44:24 The regulation of AI
45:31 Jaan's investments and philanthropy in AI
55:18 International coordination issues from AI adoption as a delegation process
57:29 AI today and the negative impacts of recommender algorithms
1:02:43 Collective, institutional, and interpersonal coordination
1:05:23 The benefits and risks of longevity research
1:08:29 The long-term and short-term AI safety communities and their relationship with one another
1:12:35 Jaan's current philanthropic efforts
1:16:28 Software as a philanthropic target
1:19:03 How do we move towards beneficial futures with AI?
1:22:30 An idea Jaan finds meaningful
1:23:33 Final thoughts from Jaan
1:25:27 Where to find Jaan
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
21/04/2021 • 1 hour, 26 minutes, 37 seconds
Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures
Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.
Topics discussed in this episode include:
-Understanding the universe through digital physics
-How human consciousness operates and is structured
-The path to aligned AGI and bottlenecks to beneficial futures
-Incentive structures and collective coordination
You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/
You can find FLI's three new policy focused job postings here: futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
3:17 What is truth and knowledge?
11:39 What is subjectivity and objectivity?
14:32 What is the universe ultimately?
19:22 Is the universe a cellular automaton? Is the universe ultimately digital or analogue?
24:05 Hilbert's hotel from the point of view of computation
35:18 Seeing the world as a fractal
38:48 Describing human consciousness
51:10 Meaning, purpose, and harvesting negentropy
55:08 The path to aligned AGI
57:37 Bottlenecks to beneficial futures and existential security
1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?
1:19:39 Non-duality and collective coordination
1:22:53 What difficulties are there for an idealist worldview that involves computation?
1:27:20 Which features of mind and consciousness are necessarily coupled and which aren't?
1:36:40 Joscha's final thoughts on AGI
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
01/04/2021 • 1 hour, 38 minutes, 17 seconds
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI
Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.
Topics discussed in this episode include:
-Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
-The relationship between AI safety, control, and alignment
-Virtual worlds as a proposal for solving multi-multi alignment
-AI security
You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/
You can find FLI's three new policy focused job postings here: https://futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:35 Roman’s primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman’s results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman’s results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman’s final thoughts
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
20/03/2021 • 1 hour, 12 minutes, 1 second
Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons
Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest risk and most destabilizing aspects of lethal autonomous weapons.
Topics discussed in this episode include:
-The current state of the deployment and development of lethal autonomous weapons and swarm technologies
-Drone swarms as a potential weapon of mass destruction
-The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons
-The difficulty of attribution, verification, and accountability with autonomous weapons
-Autonomous weapons governance as norm setting for global AI issues
You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/
You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:23 Emilia Javorsky on lethal autonomous weapons
7:27 What is a lethal autonomous weapon?
11:33 Autonomous weapons that exist today
16:57 The concerns of collateral damage, accidental escalation, scalability, control, and error risk
26:57 The proliferation risk of autonomous weapons
32:30 To what extent are global superpowers pursuing these weapons? What is the state of industry's pursuit of the research and manufacturing of this technology?
42:13 A possible proposal for a selective ban on small anti-personnel autonomous weapons
47:20 Lethal autonomous weapons as a potential weapon of mass destruction
53:49 The unpredictability of autonomous weapons, especially when swarms are interacting with other swarms
58:09 The risk of autonomous weapons escalating conflicts
01:10:50 The risk of drone swarms proliferating
01:20:16 The risk of assassination
01:23:25 The difficulty of attribution and accountability
01:26:05 The governance of autonomous weapons being relevant to the global governance of AI
01:30:11 The importance of verification for responsibility, accountability, and regulation
01:35:50 Concerns about the beginning of an arms race and the need for regulation
01:38:46 Wrapping up
01:39:23 Outro
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
25/02/2021 • 1 hour, 39 minutes, 48 seconds
John Prendergast on Non-dual Awareness and Wisdom for the 21st Century
John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existential risk issues.
Topics discussed in this episode include:
-The experience of egocentricity and ego-identification
-Waking up into heart awareness
-The movement towards and qualities of non-dual consciousness
-The ways in which the condition of our minds collectively affects the world
-How waking up may be relevant to the creation of AGI
You can find the page for this podcast here: https://futureoflife.org/2021/02/09/john-prendergast-on-non-dual-awareness-and-wisdom-for-the-21st-century/
Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
7:10 The modern human condition
9:29 What egocentricity and ego-identification are
15:38 Moving beyond the experience of self
17:38 The origins and structure of self
20:25 A pointing out instruction for noticing ego-identification and waking up out of it
24:34 A pointing out instruction for abiding in heart-mind or heart awareness
28:53 The qualities of and moving into heart awareness and pure awareness
33:48 An explanation of non-dual awareness
40:50 Exploring the relationship between awareness, belief, and action
46:25 Growing up and improving the egoic structure
48:29 Waking up as recognizing true nature
51:04 Exploring awareness as primitive and primary
53:56 John's dream of Sri Nisargadatta Maharaj
57:57 The use and value of conceptual thought and the mind
1:00:57 The epistemics of heart-mind and the conceptual mind as we shift levels of identity
1:17:46 A pointing out instruction for inquiring into core beliefs
1:27:28 The universal heart, qualities of awakening, and the ethical implications of such shifts
1:31:38 Wisdom, waking up, and growing up for the transgenerational issues of the 21st century
1:38:44 Waking up and its applicability to the creation of AGI
1:43:25 Where to find, follow, and reach out to John
1:45:56 Outro
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
09/02/2021 • 1 hour, 46 minutes, 16 seconds
Beatrice Fihn on the Total Elimination of Nuclear Weapons
Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear weapons free world.
Topics discussed in this episode include:
-The current nuclear weapons geopolitical situation
-The risks and mechanics of accidental and intentional nuclear war
-Policy proposals for reducing the risks of nuclear war
-Deterrence theory
-The Treaty on the Prohibition of Nuclear Weapons
-Working towards the total elimination of nuclear weapons
You can find the page for this podcast here: https://futureoflife.org/2021/01/21/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/
Timestamps:
0:00 Intro
4:28 Overview of the current nuclear weapons situation
6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war
9:27 Accidental nuclear war and human systems
12:08 The risks of nuclear war in 2021 and nuclear stability
17:49 Toxic personalities and the human component of nuclear weapons
23:23 Policy proposals for reducing the risk of nuclear war
23:55 New START Treaty
25:42 What does it mean to maintain credible deterrence?
26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons
28:00 Deterrence theoretic arguments for nuclear weapons
32:36 The reduction of nuclear weapons, no first use, removing ground based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons
39:13 Arguments for and against nuclear risk reduction policy proposals
46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines
48:27 Working towards and the theory of the total elimination of nuclear weapons
1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons
1:14:26 Elevating activism around nuclear weapons and messaging more skillfully
1:15:40 What the public needs to understand about nuclear weapons
1:16:35 World leaders' views of the treaty
1:17:15 How to get involved
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
22/01/2021 • 1 hour, 17 minutes, 56 seconds
Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year
Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.
Topics discussed in this episode include:
-FLI's perspectives on 2020 and hopes for 2021
-What our favorite projects from 2020 were
-The biggest lessons we've learned from 2020
-What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety
You can find the page for this podcast here: https://futureoflife.org/2021/01/08/max-tegmark-and-the-fli-team-on-2020-and-existential-risk-reduction-in-the-new-year/
Timestamps:
0:00 Intro
00:52 First question: What was your favorite project from 2020?
1:03 Max Tegmark on the Future of Life Award
4:15 Anthony Aguirre on AI Loyalty
9:18 David Nicholson on the Future of Life Award
12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation
14:03 Jared Brown on developing comments on the European Union's White Paper on AI through community collaboration
16:40 Tucker Davey on editing the biography of Victor Zhdanov
19:49 Lucas Perry on the podcast and Pindex video
23:17 Second question: What lessons do you take away from 2020?
23:26 Max Tegmark on human fragility and vulnerability
25:14 Max Tegmark on learning from history
26:47 Max Tegmark on the growing threats of AI
29:45 Anthony Aguirre on the inability of present-day institutions to deal with large unexpected problems
33:00 David Nicholson on the need for self-reflection on the use and development of technology
38:05 Emilia Javorsky on the global community coming to awareness about tail risks
39:48 Jared Brown on our vulnerability to low probability, high impact events and the importance of adaptability and policy engagement
41:43 Tucker Davey on taking existential risks more seriously and ethics-washing
43:57 Lucas Perry on the fragility of human systems
45:40 Third question: What is needed in 2021 to make progress on existential risk mitigation?
45:50 Max Tegmark on holding Big Tech accountable, repairing geopolitics, and fighting the myth of the technological zero-sum game
49:58 Anthony Aguirre on the importance of spreading understanding of expected value reasoning and fixing the information crisis
53:41 David Nicholson on the need to reflect on our values and relationship with technology
54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue
56:00 Jared Brown on the need for robust government engagement
57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation
1:00:10 Outro
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
08/01/2021 • 1 hour, 41 seconds
Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox
The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their personal experience of the events.
Topics discussed in this episode include:
-William Foege's and Victor Zhdanov's efforts to eradicate smallpox
-Personal stories from Foege's and Zhdanov's lives
-The history of smallpox
-Biological issues of the 21st century
You can find the page for this podcast here: https://futureoflife.org/2020/12/11/future-of-life-award-2020-saving-200000000-lives-by-eradicating-smallpox/
You can watch the 2020 Future of Life Award ceremony here: https://www.youtube.com/watch?v=73WQvR5iIgk&feature=emb_title&ab_channel=FutureofLifeInstitute
You can learn more about the Future of Life Award here: https://futureoflife.org/future-of-life-award/
Timestamps:
0:00 Intro
3:13 Part 1: How William Foege got into smallpox efforts and his work in Eastern Nigeria
14:12 The USSR's smallpox eradication efforts and convincing the WHO to take up global smallpox eradication
15:46 William Foege's efforts in and with the WHO for smallpox eradication
18:00 Surveillance and containment as a viable strategy
18:51 Implementing surveillance and containment throughout the world after success in West Africa
23:55 Wrapping up with eradication and dealing with the remnants of smallpox
25:35 Lab escape of smallpox in Birmingham, England, and the final natural case
27:20 Part 2: Introducing Michael Burkinsky as well as Victor and Katia Zhdanov
29:45 Introducing Victor Zhdanov Sr. and Alissa Zhdanov
31:05 Michael Burkinsky's memories of Victor Zhdanov Sr.
39:26 Victor Zhdanov Jr.'s memories of Victor Zhdanov Sr.
46:15 Mushrooms with meat
47:56 Stealing the family car
49:27 Victor Zhdanov Sr.'s efforts at the WHO for smallpox eradication
58:27 Exploring Alissa's book on Victor Zhdanov Sr.'s life
1:06:09 Michael's view that Victor Zhdanov Sr. is unsung, especially in Russia
1:07:18 Part 3: William Foege on the history of smallpox and biology in the 21st century
1:07:32 The origin and history of smallpox
1:10:34 The origin and history of variolation and the vaccine
1:20:15 West African "healers" who would create smallpox outbreaks
1:22:25 The safety of the smallpox vaccine vs. modern vaccines
1:29:40 A favorite story of William Foege's
1:35:50 Larry Brilliant and people central to the eradication efforts
1:37:33 Foege's perspective on modern pandemics and human bias
1:47:56 What should we do after COVID-19 ends?
1:49:30 Bio-terrorism, existential risk, and synthetic pandemics
1:53:20 Foege's final thoughts on the importance of global health experts in politics
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
11/12/2020 • 1 hour, 54 minutes, 18 seconds
Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress
Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far.
Topics discussed in this episode include:
-Important intellectual movements and their merits
-The evolution of metaphysical and epistemological views over human history
-Consciousness, free will, and philosophical blunders
-Lessons for the 21st century
You can find the page for this podcast here: https://futureoflife.org/2020/12/01/sean-carroll-on-consciousness-physicalism-and-the-history-of-intellectual-progress/
You can find the video for this podcast here: https://youtu.be/6HNjL8_fsTk
Timestamps:
0:00 Intro
2:06 The problem of beliefs and the strengths and weaknesses of religion
6:40 The Age of Enlightenment and importance of reason
10:13 The importance of humility and the is-ought gap
17:53 The advantages of religion and mysticism
19:50 Materialism and Newtonianism
28:00 Duality, self, suffering, and philosophical blunders
36:56 Quantum physics as a paradigm shift
39:24 Physicalism, the problem of consciousness, and free will
01:01:50 What does it mean for something to be real?
01:09:40 The hard problem of consciousness
01:21:16 The many-worlds interpretation of quantum mechanics and utilitarianism
01:21:16 The importance of being charitable in conversation
1:24:55 Sean's position in the philosophy of consciousness
01:27:29 Sean's metaethical position
01:29:36 Where to find and follow Sean
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
02/12/2020 • 1 hour, 30 minutes, 33 seconds
Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity
Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation.
Topics discussed in this episode include:
-How Big Tobacco uses its wealth to obfuscate the harm of tobacco and appear socially responsible
-The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
-How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers
-How to combat the problem of ethics-washing in Big Tech
You can find the page for this podcast here: https://futureoflife.org/2020/11/17/mohamed-abdalla-on-big-tech-ethics-washing-and-the-threat-on-academic-integrity/
The Future of Life Institute AI policy page: https://futureoflife.org/AI-policy/
Timestamps:
0:00 Intro
1:55 How Big Tech actively distorts the academic landscape and what counts as big tech
6:00 How Big Tobacco has shaped industry research
12:17 The four tactics of Big Tobacco and Big Tech
13:34 Big Tech and Big Tobacco working to appear socially responsible
22:15 Big Tech and Big Tobacco working to influence the decisions made by funded universities
32:25 Big Tech and Big Tobacco working to influence research questions and the plans of individual scientists
51:53 Big Tech and Big Tobacco finding their own skeptics and critics and funding them to give the impression of social responsibility
1:00:24 Big Tech and being authentically socially responsible
1:11:41 Transformative AI, social responsibility, and the race to powerful AI systems
1:16:56 Ethics-washing as systemic
1:17:30 Action items for solving ethics-washing
1:19:42 Has Mohamed received criticism for this paper?
1:20:07 Final thoughts from Mohamed
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
17/11/2020 • 1 hour, 22 minutes, 21 seconds
Maria Arpa on the Power of Nonviolent Communication
Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication.
Topics discussed in this episode include:
-What nonviolent communication (NVC) consists of
-How NVC is different from normal discourse
-How NVC is composed of observations, feelings, needs, and requests
-NVC for systemic change
-Foundational assumptions in NVC
-An NVC exercise
You can find the page for this podcast here: https://futureoflife.org/2020/11/02/maria-arpa-on-the-power-of-nonviolent-communication/
Timestamps:
0:00 Intro
2:50 What is nonviolent communication?
4:05 How is NVC different from normal discourse?
18:40 NVC’s four components: observations, feelings, needs, and requests
34:50 NVC for systemic change
54:20 The foundational assumptions of NVC
58:00 An exercise in NVC
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
02/11/2020 • 1 hour, 12 minutes, 43 seconds
Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism
Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats.
Topics discussed in this episode include:
-The projects of awakening and growing the wisdom with which to manage technologies
-What might become possible by embarking on the project of waking up
-Facets of human nature that contribute to existential risk
-The dangers of the problem solving mindset
-Improving the effective altruism and existential risk communities
You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/
Timestamps:
0:00 Intro
3:40 Albert Einstein and the quest for awakening
8:45 Non-self, emptiness, and non-duality
25:48 Stephen's conception of awakening, and making the wise more powerful vs the powerful more wise
33:32 The importance of insight
49:45 The present moment, creativity, and suffering/pain/dukkha
58:44 Stephen's article, Embracing Extinction
1:04:48 The dangers of the problem solving mindset
1:26:12 Improving the effective altruism and existential risk communities
1:37:30 Where to find and follow Stephen
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
15/10/2020 • 1 hour, 39 minutes, 26 seconds
Kelly Wanser on Climate Change as a Possible Existential Threat
Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human induced climate change.
Topics discussed in this episode include:
- The risks of climate change in the short-term
- Tipping points and tipping cascades
- Climate intervention via marine cloud brightening and releasing particles in the stratosphere
- The benefits and risks of climate intervention techniques
- The international politics of climate change and weather modification
You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/
Video recording of this podcast here: https://youtu.be/CEUEFUkSMHU
Timestamps:
0:00 Intro
2:30 What is SilverLining’s mission?
4:27 Why is climate change thought to be very risky in the next 10-30 years?
8:40 Tipping points and tipping cascades
13:25 Is climate change an existential risk?
17:39 Earth systems that help to stabilize the climate
21:23 Days when it will be unsafe to work outside
25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in
41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions?
50:20 International politics of weather modification
53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight?
57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous?
59:33 What are the main arguments of those skeptical of climate intervention approaches?
01:13:21 The international problem of coordinating on climate change
01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?
01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention?
01:37:48 What can listeners do to help with this issue?
01:40:00 Climate change and Mars colonization
01:44:55 Where to find and follow Kelly
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
30/09/2020 • 1 hour, 45 minutes, 48 seconds
Andrew Critch on AI Research Considerations for Human Existential Safety
In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.
Topics discussed in this episode include:
- The mainstream computer science view of AI existential risk
- Distinguishing AI safety from AI existential safety
- The need for more precise terminology in the field of AI existential safety and alignment
- The concept of prepotent AI systems and the problem of delegation
- Which alignment problems get solved by commercial incentives and which don’t
- The threat of diffusion of responsibility on AI existential safety considerations not covered by commercial incentives
- Prepotent AI risk types that lead to unsurvivability for humanity
You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/
Timestamps:
0:00 Intro
2:53 Why Andrew wrote ARCHES and what it’s about
6:46 The perspective of the mainstream CS community on AI existential risk
13:03 ARCHES in relation to AI existential risk literature
16:05 The distinction between safety and existential safety
24:27 Existential risk is most likely to obtain through externalities
29:03 The relationship between existential safety and safety for current systems
33:17 Research areas that may not be solved by natural commercial incentives
51:40 What’s an AI system and an AI technology?
53:42 Prepotent AI
59:41 Misaligned prepotent AI technology
01:05:13 Human frailty
01:07:37 The importance of delegation
01:14:11 Single-single, single-multi, multi-single, and multi-multi
01:15:26 Control, instruction, and comprehension
01:20:40 The multiplicity thesis
01:22:16 Risk types from prepotent AI that lead to human unsurvivability
01:34:06 Flow-through effects
01:41:00 Multi-stakeholder objectives
01:49:08 Final words from Andrew
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
16/09/2020 • 1 hour, 51 minutes, 28 seconds
Iason Gabriel on Foundational Philosophical Questions in AI Alignment
In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings its own normative and metaethical beliefs that will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment.
Topics discussed in this episode include:
-How moral philosophy and political theory are deeply related to AI alignment
-The problem of dealing with a plurality of preferences and philosophical views in AI alignment
-How the is-ought problem and metaethics fits into alignment
-What we should be aligning AI systems to
-The importance of democratic solutions to questions of AI alignment
-The long reflection
You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/
Timestamps:
0:00 Intro
2:10 Why Iason wrote Artificial Intelligence, Values and Alignment
3:12 What AI alignment is
6:07 The technical and normative aspects of AI alignment
9:11 The normative being dependent on the technical
14:30 Coming up with an appropriate alignment procedure given the is-ought problem
31:15 What systems are subject to an alignment procedure?
39:55 What is it that we're trying to align AI systems to?
01:02:30 Single agent and multi agent alignment scenarios
01:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals?
01:30:28 The long reflection
01:53:55 Where to follow and contact Iason
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
03/09/2020 • 1 hour, 54 minutes, 48 seconds
Peter Railton on Moral Learning and Metaethics in AI Systems
From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow for us to successfully navigate our way through complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics.
Topics discussed in this episode include:
-Moral epistemology
-The potential relevance of metaethics to AI alignment
-The importance of moral learning in AI systems
-Peter Railton's, Derek Parfit's, and Peter Singer's metaethical views
You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/
Timestamps:
0:00 Intro
3:05 Does metaethics matter for AI alignment?
22:49 Long-reflection considerations
26:05 Moral learning in humans
35:07 The need for moral learning in artificial intelligence
53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit
1:38:50 The need for engagement between philosophers and the AI alignment community
1:40:37 Where to find Peter's work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
18/08/2020 • 1 hour, 41 minutes, 46 seconds
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI
It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.
Topics discussed in this episode include:
-Inner and outer alignment
-How and why inner alignment can fail
-Training competitiveness and performance competitiveness
-Evaluating imitative amplification, AI safety via debate, and microscope AI
You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/
Timestamps:
0:00 Intro
2:07 How Evan got into AI alignment research
4:42 What is AI alignment?
7:30 How Evan approaches AI alignment
13:05 What are inner alignment and outer alignment?
24:23 Gradient descent
36:30 Testing for inner alignment
38:38 Wrapping up on outer alignment
44:24 Why is inner alignment a priority?
45:30 How inner alignment fails
01:11:12 Training competitiveness and performance competitiveness
01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness
01:17:30 Imitative amplification
01:23:00 AI safety via debate
01:26:32 Microscope AI
01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
01:34:45 Where to follow Evan and find more of his work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
01/07/2020 • 1 hour, 37 minutes, 5 seconds
Barker - Hedonic Recalibration (Mix)
This is a mix by Barker, a Berlin-based music producer, featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape.
You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/
Tracklist:
Delta Rain Dance - 1
John Beltran - A Different Dream
Rrose - Horizon
Alexandroid - lvpt3
Datassette - Drizzle Fort
Conrad Sprenger - Opening
JakoJako - Wavetable#1
Barker & David Goldberg - #3
Barker & Baumecker - Organik (Intro)
Anthony Linell - Fractal Vision
Ametsub - Skydroppin’
Ladyfish\Mewark - Comfortable
JakoJako & Barker - [unreleased]
Where to follow Sam Barker :
Soundcloud: @voltek
Twitter: twitter.com/samvoltek
Instagram: www.instagram.com/samvoltek/
Website: www.voltek-labs.net/
Bandcamp: sambarker.bandcamp.com/
Where to follow Sam's label, Ostgut Ton:
Soundcloud: @ostgutton-official
Facebook: www.facebook.com/Ostgut.Ton.OFFICIAL/
Twitter: twitter.com/ostgutton
Instagram: www.instagram.com/ostgut_ton/
Bandcamp: ostgut.bandcamp.com/
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
26/06/2020 • 43 minutes, 43 seconds
Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)
Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the author of euphoric sound landscapes inspired by the writings of David Pearce, largely exemplified in his latest album — aptly named "Utility." Sam's artistic excellence, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the biological domestication of heaven are a perfect match made for the fusion of artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content.
Topics discussed in this episode include:
-The relationship between Sam's music and David's writing
-Existential hope
-Ideas from the Hedonistic Imperative
-Sam's albums
-The future of art and music
You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/
You can find the mix with no interview portion of the podcast here: https://soundcloud.com/futureoflife/barker-hedonic-recalibration-mix
Where to follow Sam Barker :
Soundcloud: https://soundcloud.com/voltek
Twitter: https://twitter.com/samvoltek
Instagram: https://www.instagram.com/samvoltek/
Website: https://www.voltek-labs.net/
Bandcamp: https://sambarker.bandcamp.com/
Where to follow Sam's label, Ostgut Ton:
Soundcloud: https://soundcloud.com/ostgutton-official
Facebook: https://www.facebook.com/Ostgut.Ton.OFFICIAL/
Twitter: https://twitter.com/ostgutton
Instagram: https://www.instagram.com/ostgut_ton/
Bandcamp: https://ostgut.bandcamp.com/
Timestamps:
0:00 Intro
5:40 The inspiration around Sam's music
17:38 Barker - Maximum Utility
20:03 David and Sam on their work
23:45 Do any of the tracks evoke specific visions or hopes?
24:40 Barker - Die-Hards Of The Darwinian Order
28:15 Barker - Paradise Engineering
31:20 Barker - Hedonic Treadmill
33:05 The future and evolution of art
54:03 David on how good the future can be
58:36 Guest mix by Barker
Tracklist:
Delta Rain Dance – 1
John Beltran – A Different Dream
Rrose – Horizon
Alexandroid – lvpt3
Datassette – Drizzle Fort
Conrad Sprenger – Opening
JakoJako – Wavetable#1
Barker & David Goldberg – #3
Barker & Baumecker – Organik (Intro)
Anthony Linell – Fractal Vision
Ametsub – Skydroppin’
Ladyfish\Mewark – Comfortable
JakoJako & Barker – [unreleased]
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
24/06/2020 • 1 hour, 42 minutes, 14 seconds
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, best selling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.
Topics discussed in this episode include:
-The historical and intellectual foundations of AI
-How AI systems achieve or do not achieve intelligence in the same way as the human mind
-The rise of AI and what it signifies
-The benefits and risks of AI in both the short and long term
-Whether superintelligent AI will pose an existential risk to humanity
You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/
You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/
Timestamps:
0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
15/06/2020 • 1 hour, 52 minutes, 42 seconds
Sam Harris on Global Priorities, Existential Risk, and What Matters Most
Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.
Topics discussed in this episode include:
-The problem of communication
-Global priorities
-Existential risk
-Animal suffering in both wild animals and factory farmed animals
-Global poverty
-Artificial general intelligence risk and AI alignment
-Ethics
-Sam’s book, The Moral Landscape
You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/
You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/
Timestamps:
0:00 Intro
3:52 What are the most important problems in the world?
13:14 Global priorities: existential risk
20:15 Why global catastrophic risks are more likely than existential risks
25:09 Longtermist philosophy
31:36 Making existential and global catastrophic risk more emotionally salient
34:41 How analyzing the self makes longtermism more attractive
40:28 Global priorities & effective altruism: animal suffering and global poverty
56:03 Is machine suffering the next global moral catastrophe?
59:36 AI alignment and artificial general intelligence/superintelligence risk
01:11:25 Expanding our moral circle of compassion
01:13:00 The Moral Landscape, consciousness, and moral realism
01:30:14 Can bliss and wellbeing be mathematically defined?
01:31:03 Where to follow Sam and concluding thoughts
Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
01/06/2020 • 1 heure, 32 minutes, 46 secondes
FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church
Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?
Topics discussed in this episode include:
-Existential risk
-Computational substrates and AGI
-Genetics and aging
-Risks of synthetic biology
-Obstacles to space colonization
-Great Filters, consciousness, and eliminating suffering
You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synthetic-biology-and-life-with-george-church/
You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/
Timestamps:
0:00 Intro
3:58 What are the most important issues in the world?
12:20 Collective intelligence, AI, and the evolution of computational systems
33:06 Where we are with genetics
38:20 Timeline on progress for anti-aging technology
39:29 Synthetic biology risk
46:19 George's thoughts on COVID-19
49:44 Obstacles to overcome for space colonization
56:36 Possibilities for "Great Filters"
59:57 Genetic engineering for combating climate change
01:02:00 George's thoughts on the topic of "consciousness"
01:08:40 Using genetic engineering to phase out involuntary suffering
01:12:17 Where to find and follow George
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
15/05/2020 • 1 heure, 13 minutes, 24 secondes
FLI Podcast: On Superforecasting with Robert de Neufville
Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of "superforecasters" are attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making.
Topics discussed in this episode include:
-What superforecasting is and what the community looks like
-How superforecasting is done and its potential use in decision making
-The challenges of making predictions
-Predictions about and lessons from COVID-19
You can find the page for this podcast here: https://futureoflife.org/2020/04/30/on-superforecasting-with-robert-de-neufville/
You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/
Timestamps:
0:00 Intro
5:00 What is superforecasting?
7:22 Who are superforecasters and where did they come from?
10:43 How is superforecasting done and what are the relevant skills?
15:12 Developing a better understanding of probabilities
18:42 How is it that superforecasters are better at making predictions than subject matter experts?
21:43 COVID-19 and a failure to understand exponentials
24:27 What organizations and platforms exist in the space of superforecasting?
27:31 What's up for consideration in an actual forecast
28:55 How are forecasts aggregated? Are they used?
31:37 How accurate are superforecasters?
34:34 How is superforecasting complementary to global catastrophic risk research and efforts?
39:15 The kinds of superforecasting platforms that exist
43:00 How accurate can we get around global catastrophic and existential risks?
46:20 How to deal with extremely rare risks and how to evaluate your predictions after the fact
53:33 Superforecasting, expected value calculations, and their use in decision making
56:46 Failure to prepare for COVID-19 and whether superforecasting will be increasingly applied to critical decision making
01:01:55 What can we do to improve the use of superforecasting?
01:02:54 Forecasts about COVID-19
01:11:43 How do you convince others of your ability as a superforecaster?
01:13:55 Expanding the kinds of questions we do forecasting on
01:15:49 How to utilize subject experts and superforecasters
01:17:54 Where to find and follow Robert
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
30/04/2020 • 1 heure, 20 minutes, 22 secondes
AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin — along with fellow researcher Buck Shlegeris — back for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing.
Topics discussed in this episode include:
-Rohin's and Buck's optimism and pessimism about different approaches to aligned AI
-Traditional arguments for AI as an x-risk
-Modeling agents as expected utility maximizers
-Ambitious value learning and specification learning/narrow value learning
-Agency and optimization
-Robustness
-Scaling to superhuman abilities
-Universality
-Impact regularization
-Causal models, oracles, and decision theory
-Discontinuous and continuous takeoff scenarios
-Probability of AI-induced existential risk
-Timelines for AGI
-Information hazards
You can find the page for this podcast here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/
Timestamps:
0:00 Intro
3:48 Traditional arguments for AI as an existential risk
5:40 What is AI alignment?
7:30 Back to a basic analysis of AI as an existential risk
18:25 Can we model agents in ways other than as expected utility maximizers?
19:34 Is it skillful to try and model human preferences as a utility function?
27:09 Suggestions for alternatives to modeling humans with utility functions
40:30 Agency and optimization
45:55 Embedded decision theory
48:30 More on value learning
49:58 What is robustness and why does it matter?
01:13:00 Scaling to superhuman abilities
01:26:13 Universality
01:33:40 Impact regularization
01:40:34 Causal models, oracles, and decision theory
01:43:05 Forecasting as well as discontinuous and continuous takeoff scenarios
01:53:18 What is the probability of AI-induced existential risk?
02:00:53 Likelihood of continuous and discontinuous takeoff scenarios
02:08:08 What would you both do if you had more power and resources?
02:12:38 AI timelines
02:14:00 Information hazards
02:19:19 Where to follow Buck and Rohin and learn more
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
15/04/2020 • 2 heures, 21 minutes, 27 secondes
FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre
The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute's Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us for global catastrophic and existential risk.
Topics discussed in this episode include:
-The importance of taking expected value calculations seriously
-The need for making accurate predictions
-The difficulty of taking probabilities seriously
-Human psychological bias around estimating and acting on risk
-The massive online prediction solicitation and aggregation engine, Metaculus
-The risks and benefits of synthetic biology in the 21st Century
You can find the page for this podcast here: https://futureoflife.org/2020/04/08/lessons-from-covid-19-with-emilia-javorsky-and-anthony-aguirre/
Timestamps:
0:00 Intro
2:35 How has COVID-19 demonstrated weakness in human systems and risk preparedness
4:50 The importance of expected value calculations and considering risks over timescales
10:50 The importance of being able to make accurate predictions
14:15 The difficulty of trusting probabilities and acting on low probability high cost risks
21:22 Taking expected value calculations seriously
24:03 The lack of transparency, explanation, and context around how probabilities are estimated and shared
28:00 Diffusion of responsibility and other human psychological weaknesses in thinking about risk
38:19 What Metaculus is and its relevance to COVID-19
45:57 What is the accuracy of predictions on Metaculus and what has it said about COVID-19?
50:31 Lessons for existential risk from COVID-19
58:42 The risk of synthetic-biology-enabled pandemics in the 21st century
01:17:35 The extent to which COVID-19 poses challenges to democratic institutions
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
09/04/2020 • 1 heure, 26 minutes, 36 secondes
FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord
Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. “The Precipice” thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.
Topics discussed in this episode include:
-An overview of Toby's new book
-What it means to be standing at the precipice and how we got here
-Useful arguments for why existential risk matters
-The risks themselves and their likelihoods
-What we can do to safeguard humanity's potential
You can find the page for this podcast here: https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/
Timestamps:
0:00 Intro
03:35 What the book is about
05:17 What does it mean for us to be standing at the precipice?
06:22 Historical cases of global catastrophic and existential risk in the real world
10:38 The development of humanity’s wisdom and power over time
15:53 Reaching existential escape velocity and humanity’s continued evolution
22:30 On effective altruism and writing the book for a general audience
25:53 Defining “existential risk”
28:19 What is compelling or important about humanity’s potential or future persons?
32:43 Various and broadly appealing arguments for why existential risk matters
50:46 Short overview of natural existential risks
54:33 Anthropogenic risks
58:35 The risks of engineered pandemics
01:02:43 Suggestions for working to mitigate x-risk and safeguard the potential of humanity
01:09:43 How and where to follow Toby and pick up his book
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
01/04/2020 • 1 heure, 10 minutes, 50 secondes
AIAP: On Lethal Autonomous Weapons with Paul Scharre
Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life and where we draw lines around the acceptable and unacceptable uses of this technology will set precedents and grounds for future international AI collaboration and governance. Such regulation efforts or lack thereof will also shape the kinds of weapons technologies that proliferate in the 21st century. On this episode of the AI Alignment Podcast, Paul Scharre joins us to discuss autonomous weapons, their potential benefits and risks, and the ongoing debate around the regulation of their development and use.
Topics discussed in this episode include:
-What autonomous weapons are and how they may be used
-The debate around acceptable and unacceptable uses of autonomous weapons
-Ways and degrees of integrating human decision-making into autonomous weapons
-Risks and benefits of autonomous weapons
-Whether there is an arms race for autonomous weapons
-How autonomous weapons issues may matter for AI alignment and long-term AI safety
You can find the page for this podcast here: https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/
Timestamps:
0:00 Intro
3:50 Why care about autonomous weapons?
4:31 What are autonomous weapons?
06:47 What does “autonomy” mean?
09:13 Will we see autonomous weapons in civilian contexts?
11:29 How do we draw lines of acceptable and unacceptable uses of autonomous weapons?
24:34 Defining and exploring human "in the loop," "on the loop," and "out of the loop"
31:14 The possibility of generating international lethal laws of robotics
36:15 Whether autonomous weapons will sanitize war and psychologically distance humans in detrimental ways
44:57 Are persons studying the psychological aspects of autonomous weapons use?
47:05 Risks of the accidental escalation of war and conflict
52:26 Is there an arms race for autonomous weapons?
01:00:10 Further clarifying what autonomous weapons are
01:05:33 Does the successful regulation of autonomous weapons matter for long-term AI alignment considerations?
01:09:25 Does Paul see AI as an existential risk?
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
16/03/2020 • 1 heure, 16 minutes, 20 secondes
FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O'Keefe
As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally.
Topics discussed in this episode include:
-What the Windfall Clause is and how it might function
-The need for such a mechanism given AGI generated economic windfall
-Problems the Windfall Clause would help to remedy
-The mechanism for distributing windfall profit and the function for defining such profit
-The legal permissibility of the Windfall Clause
-Objections and alternatives to the Windfall Clause
You can find the page for this podcast here: https://futureoflife.org/2020/02/28/distributing-the-benefits-of-ai-via-the-windfall-clause-with-cullen-okeefe/
Timestamps:
0:00 Intro
2:13 What is the Windfall Clause?
4:51 Why do we need a Windfall Clause?
06:01 When we might reach windfall profit and what that profit looks like
08:01 Motivations for the Windfall Clause and its ability to help with job loss
11:51 How the Windfall Clause improves allocation of economic windfall
16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems
18:45 The Windfall Clause as assisting with general norm setting
20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk
23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation
25:03 The windfall function and desiderata for guiding its formation
26:56 How the Windfall Clause differs from a new taxation scheme
30:20 Developing the mechanism for distributing the windfall
32:56 The legal permissibility of the Windfall Clause in the United States
40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands
43:28 Historical precedents for the Windfall Clause
44:45 Objections to the Windfall Clause
57:54 Alternatives to the Windfall Clause
01:02:51 Final thoughts
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
28/02/2020 • 1 heure, 4 minutes, 32 secondes
AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown
From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and policy related solutions become an attractive area of consideration. But just what can anyone do in the present day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and the importance of AGI-risk sensitive persons' involvement in present day AI policy discourse.
Topics discussed in this episode include:
-The importance of current AI policy work for long-term AI risk
-Where we currently stand in the process of forming AI policy
-Why persons worried about existential risk should care about present day AI policy
-AI and the global community
-The rationality and irrationality around AI race narratives
You can find the page for this podcast here: https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/
Timestamps:
0:00 Intro
4:58 Why it’s important to work on AI policy
12:08 Our historical position in the process of AI policy
21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant?
33:46 AI policy and shorter-term global catastrophic and existential risks
38:18 The Brussels and Sacramento effects
41:23 Why is racing on AI technology bad?
48:45 The rationality of racing to AGI
58:22 Where is AI policy currently?
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
18/02/2020 • 1 heure, 11 minutes, 10 secondes
FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre
Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity.
Topics discussed in this episode include:
- Views on the nature of reality
- Quantum mechanics and the implications of quantum uncertainty
- Identity, information and description
- Continuum of objectivity/subjectivity
You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/31/fli-podcast-identity-information-the-nature-of-reality-with-anthony-aguirre/
Timestamps:
3:35 - General history of views on fundamental reality
9:45 - Quantum uncertainty and observation as interaction
24:43 - The universe as constituted of information
29:26 - What is information and what does the view of reality as information have to say about objects and identity
37:14 - Identity as on a continuum of objectivity and subjectivity
46:09 - What makes something more or less objective?
58:25 - Emergence in physical reality and identity
1:15:35 - Questions about the philosophy of identity in the 21st century
1:27:13 - Differing views on identity changing human desires
1:33:28 - How the reality as information perspective informs questions of identity
1:39:25 - Concluding thoughts
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
31/01/2020 • 1 heure, 45 minutes, 20 secondes
AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson
In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide?
Would you step into this machine? Is the person who emerges on Mars really you? Questions like these — those that explore the nature of personal identity and challenge our commonly held intuitions about it — are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI-enabled bioengineering will allow for human-species divergence via upgrades, and as we arrive at AGI and beyond, we may see a world where it is possible to merge with AI directly, upload ourselves, copy and duplicate ourselves arbitrarily, or even manipulate and re-program our sense of identity. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction?
Topics discussed in this episode include:
-Identity from epistemic, ontological, and phenomenological perspectives
-Identity formation in biological evolution
-Open, closed, and empty individualism
-The moral relevance of views on identity
-Identity in the world today and on the path to superintelligence and beyond
You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/15/identity-and-the-ai-revolution-with-david-pearce-and-andres-gomez-emilsson/
Timestamps:
0:00 - Intro
6:33 - What is identity?
9:52 - Ontological aspects of identity
12:50 - Epistemological and phenomenological aspects of identity
18:21 - Biological evolution of identity
26:23 - Functionality or arbitrariness of identity / whether or not there are right or wrong answers
31:23 - Moral relevance of identity
34:20 - Religion as codifying views on identity
37:50 - Different views on identity
53:16 - The hard problem and the binding problem
56:52 - The problem of causal efficacy, and the palette problem
1:00:12 - Navigating views of identity towards truth
1:08:34 - The relationship between identity and the self model
1:10:43 - The ethical implications of different views on identity
1:21:11 - The consequences of different views on identity on preference weighting
1:26:34 - Identity and AI alignment
1:37:50 - Nationalism and AI alignment
1:42:09 - Cryonics, species divergence, immortality, uploads, and merging.
1:50:28 - Future scenarios from Life 3.0
1:58:35 - The role of identity in the AI itself
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
16/01/2020 • 2 heures, 3 minutes, 19 secondes
On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark
Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.
Topics discussed include:
-Max and Yuval's views and intuitions about consciousness
-How they ground and think about morality
-Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk
-The function of myths and stories in human society
-How emerging science, technology, and global paradigms challenge the foundations of many of our stories
-Technological risks of the 21st century
You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/
Timestamps:
0:00 Intro
3:14 Grounding morality and the need for a science of consciousness
11:45 The effective altruism community and its main cause areas
13:05 Global health
14:44 Animal suffering and factory farming
17:38 Existential risk and the ethics of the long-term future
23:07 Nuclear war as a neglected global risk
24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence
28:37 On creating new stories for the challenges of the 21st century
32:33 The risks of big data and AI-enabled human hacking and monitoring
47:40 What does it mean to be human and what should we want to want?
52:29 On positive global visions for the future
59:29 Goodbyes and appreciations
01:00:20 Outro and supporting the Future of Life Institute Podcast
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
31/12/2019 • 1 heure, 58 secondes
FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team
As 2019 comes to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction of Earth-originating intelligent life, or to the permanent and drastic curtailment of its potential. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we've come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-year episode focused on meeting and introducing the FLI team, discussing what we've accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond.
Topics discussed include:
-Introductions to the FLI team and our work
-Motivations for our projects and existential risk mitigation efforts
-The goals and outcomes of our work
-Our favorite projects at FLI in 2019
-Optimistic directions for projects in 2020
-Reasons for existential hope going into 2020 and beyond
You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/27/existential-hope-in-2020-and-beyond-with-the-fli-team/
Timestamps:
0:00 Intro
1:30 Meeting the Future of Life Institute team
18:30 Motivations for our projects and work at FLI
30:04 What we hope will result from our work at FLI
44:44 Favorite accomplishments of FLI in 2019
01:06:20 Project directions we are most excited about for 2020
01:19:43 Reasons for existential hope in 2020 and beyond
01:38:30 Outro
28/12/2019 • 1 heure, 39 minutes, 2 secondes
AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike
Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each team focuses on different aspects of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research — why empirical safety research is important and how this has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind.
Topics discussed in this episode include:
-Theoretical and empirical AI safety research
-Jan's and DeepMind's approaches to AI safety
-Jan's work and thoughts on recursive reward modeling
-AI safety benchmarking at DeepMind
-The potential modularity of AGI
-Comments on the cultural and intellectual differences between the AI safety and mainstream AI communities
-Joining the DeepMind safety team
You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/
Timestamps:
0:00 Intro
2:15 Jan's intellectual journey in computer science to AI safety
7:35 Transitioning from theoretical to empirical research
11:25 Jan's and DeepMind's approach to AI safety
17:23 Recursive reward modeling
29:26 Experimenting with recursive reward modeling
32:42 How recursive reward modeling serves AI safety
34:55 Pessimism about recursive reward modeling
38:35 How this research direction fits in the safety landscape
42:10 Can deep reinforcement learning get us to AGI?
42:50 How modular will AGI be?
44:25 Efforts at DeepMind for AI safety benchmarking
49:30 Differences between the AI safety and mainstream AI communities
55:15 Most exciting piece of empirical safety work in the next 5 years
56:35 Joining the DeepMind safety team
16/12/2019 • 58 minutes, 5 secondes
FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert
We could all be more altruistic and effective in our service of others, but what exactly is it that's stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at the University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can.
Topics discussed include:
-The psychology of existential risk, longtermism, effective altruism, and speciesism
-Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction"
-Various works and studies Stefan Schubert has co-authored in these spaces
-How this enables us to be more altruistic
You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/02/the-psychology-of-existential-risk-and-effective-altruism-with-stefan-schubert/
Timestamps:
0:00 Intro
2:31 Stefan's academic and intellectual journey
5:20 How large is this field?
7:49 Why study the psychology of X-risk and EA?
16:54 What does a better understanding of psychology here enable?
21:10 What are the cognitive limitations psychology helps to elucidate?
23:12 Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction"
34:45 Messaging on existential risk
37:30 Further areas of study
43:29 Speciesism
49:18 Further studies and work by Stefan
02/12/2019 • 58 minutes, 39 secondes
Not Cool Epilogue: A Climate Conversation
In this brief epilogue, Ariel reflects on what she's learned during the making of Not Cool, and the actions she'll be taking going forward.
27/11/2019 • 4 minutes, 39 secondes
Not Cool Ep 26: Naomi Oreskes on trusting climate science
It’s the Not Cool series finale, and by now we’ve heard from climate scientists, meteorologists, physicists, psychologists, epidemiologists and ecologists. We’ve gotten expert opinions on everything from mitigation and adaptation to security, policy and finance. Today, we’re tackling one final question: why should we trust them? Ariel is joined by Naomi Oreskes, Harvard professor and author of seven books, including the newly released "Why Trust Science?" Naomi lays out her case for why we should listen to experts, how we can identify the best experts in a field, and why we should be open to the idea of more than one type of "scientific method." She also discusses industry-funded science, scientists’ misconceptions about the public, and the role of the media in proliferating bad research.
Topics discussed include:
-Why Trust Science?
-5 tenets of reliable science
-How to decide which experts to trust
-Why non-scientists can't debate science
-Industry disinformation
-How to communicate science
-Fact-value distinction
-Why people reject science
-Shifting arguments from climate deniers
-Individual vs. structural change
-State- and city-level policy change
26/11/2019 • 51 minutes, 13 secondes
Not Cool Ep 25: Mario Molina on climate action
Most Americans believe in climate change — yet far too few are taking part in climate action. Many aren't even sure what effective climate action should look like. On Not Cool episode 25, Ariel is joined by Mario Molina, Executive Director of Protect our Winters, a non-profit aimed at increasing climate advocacy within the outdoor sports community. In this interview, Mario looks at climate activism more broadly: he explains where advocacy has fallen short, why it's important to hold corporations responsible before individuals, and what it would look like for the US to be a global leader on climate change. He also discusses the reforms we should be implementing, the hypocrisy allegations sometimes leveled at the climate advocacy community, and the misinformation campaign undertaken by the fossil fuel industry in the '90s.
Topics discussed include:
-Civic engagement and climate advocacy
-Recent climate policy rollbacks
-Local vs. global action
-Energy and transportation reform
-Agricultural reform
-Overcoming lack of political will
-Creating cultural change
-Air travel and hypocrisy allegations
-Individual vs. corporate carbon footprints
-Collective action
-Divestment
-The unique influence of the US
21/11/2019 • 35 minutes, 10 secondes
Not Cool Ep 24: Ellen Quigley and Natalie Jones on defunding the fossil fuel industry
Defunding the fossil fuel industry is one of the biggest factors in addressing climate change and lowering carbon emissions. But with international financing and powerful lobbyists on their side, fossil fuel companies often seem out of public reach. On Not Cool episode 24, Ariel is joined by Ellen Quigley and Natalie Jones, who explain why that’s not the case, and what you can do — without too much effort — to stand up to them. Ellen and Natalie, both researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), explain what government regulation should look like, how minimal interactions with our banks could lead to fewer fossil fuel investments, and why divestment isn't enough on its own. They also discuss climate justice, Universal Ownership theory, and the international climate regime.
Topics discussed include:
-Divestment
-Universal Ownership theory
-Demand side and supply side regulation
-Impact investing
-Nationally determined contributions
-Low greenhouse gas emission development strategies
-Just transition
-Economic diversification
For more on universal ownership: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3457205
19/11/2019 • 54 minutes, 24 secondes
AIAP: Machine Ethics and AI Governance with Wendell Wallach
Wendell Wallach has been at the forefront of contemporary emerging technology issues for decades now. As an interdisciplinary thinker, he has engaged at the intersections of ethics, governance, AI, bioethics, robotics, and philosophy since the earliest formulations of what we now know as AI alignment were being codified. Wendell began with a broad interest in the ethics of emerging technology and has since become focused on machine ethics and AI governance. This conversation with Wendell explores his intellectual journey and participation in these fields.
Topics discussed in this episode include:
-Wendell’s intellectual journey in machine ethics and AI governance
-The history of machine ethics and alignment considerations
-How machine ethics and AI alignment serve to produce beneficial AI
-Soft law and hard law for shaping AI governance
-Wendell’s and broader efforts for the global governance of AI
-Social and political mechanisms for mitigating the risks of AI
-Wendell’s forthcoming book
You can find the page and transcript here: https://futureoflife.org/2019/11/15/machine-ethics-and-ai-governance-with-wendell-wallach/
Important timestamps:
0:00 Intro
2:50 Wendell's evolution in work and thought
10:45 AI alignment and machine ethics
27:05 Wendell's focus on AI governance
34:04 How much can soft law shape hard law?
37:27 What does hard law consist of?
43:25 Contextualizing the International Congress for the Governance of AI
45:00 How AI governance efforts might fail
58:40 AGI governance
1:05:00 Wendell's forthcoming book
15/11/2019 • 1 heure, 12 minutes, 36 secondes
Not Cool Ep 23: Brian Toon on nuclear winter: the other climate change
Though climate change and global warming are often used synonymously, there’s a different kind of climate change that also deserves attention: nuclear winter. A period of extreme global cooling that would likely follow a major nuclear exchange, nuclear winter is as of now — unlike global warming — still avoidable. But as Cold War era treaties break down and new nations gain nuclear capabilities, it's essential that we understand the potential climate impacts of nuclear war. On Not Cool Episode 23, Ariel talks to Brian Toon, one of the five authors of the 1983 paper that first outlined the concept of nuclear winter. Brian discusses the global tensions that could lead to a nuclear exchange, the process by which such an exchange would drastically reduce the temperature of the planet, and the implications of this kind of drastic temperature drop for humanity. He also explains how nuclear weapons have evolved since their invention, why our nuclear arsenal doesn't need an upgrade, and why modern building materials would make nuclear winter worse.
Topics discussed include:
-Causes and impacts of nuclear winter
-History of nuclear weapons development
-History of disarmament
-Current nuclear arsenals
-Mutually assured destruction
-Fires and climate
-Greenhouse gases vs. aerosols
-Black carbon and plastics
-India/Pakistan tensions
-US/Russia tensions
-Unknowns
-Global food storage and shortages
For more:
https://futureoflife.org/2016/10/31/nuclear-winter-robock-toon-podcast/
https://futureoflife.org/2017/04/27/climate-change-podcast-toon-trenberth/
15/11/2019 • 1 heure, 3 minutes, 2 secondes
Not Cool Ep 22: Cullen Hendrix on climate change and armed conflict
Right before civil war broke out in 2011, Syria experienced a historic five-year drought. This particular drought, which exacerbated economic and political insecurity within the country, may or may not have been caused by climate change. But as climate change increases the frequency of such extreme events, it’s almost certain to inflame pre-existing tensions in other countries — and in some cases, to trigger armed conflict. On Not Cool episode 22, Ariel is joined by Cullen Hendrix, co-author of “Climate as a risk factor for armed conflict.” Cullen, who serves as Director of the Sié Chéou-Kang Center for International Security and Diplomacy and Senior Research Advisor at the Center for Climate & Security, explains the main drivers of conflict and the impact that climate change may have on them. He also discusses the role of climate change in current conflicts like those in Syria, Yemen, and northern Nigeria; the political implications of such conflicts for Europe and other developed regions; and the chance that climate change might ultimately foster cooperation.
Topics discussed include:
-4 major drivers of conflict
-Yemeni & Syrian civil wars
-Boko Haram conflict
-Arab Spring
-Decline in predictability of at-risk countries
-Instability in South/Central America
-Climate-driven migration
-International conflict
-Implications for developing vs. developed countries
-Impact of Syrian civil war/migrant crisis on EU
-Backlash in domestic European politics
-Brexit
-Dealing with uncertainty
-Actionable steps for governments
13/11/2019 • 35 minutes, 40 secondes
Not Cool Ep 21: Libby Jewett on ocean acidification
The increase of CO2 in the atmosphere is doing more than just warming the planet and threatening the lives of many terrestrial species. A large percentage of that carbon is actually reabsorbed by the oceans, causing a phenomenon known as ocean acidification — that is, our carbon emissions are literally changing the chemistry of ocean water and threatening ocean ecosystems worldwide. On Not Cool episode 21, Ariel is joined by Libby Jewett, founding Director of the Ocean Acidification Program at the National Oceanic and Atmospheric Administration (NOAA), who explains the chemistry behind ocean acidification, its impact on animals and plant life, and the strategies for helping organisms adapt to its effects. She also discusses the vulnerability of human communities that depend on marine resources, the implications for people who don't live near the ocean, and the relationship between ocean acidification and climate change.
Topics discussed include:
-Chemistry of ocean acidification
-Impact on animals and plant life
-Coral reefs
-Variation in acidification between oceans
-Economic repercussions
-Vulnerability of resources and human communities
-Global effects of ocean acidification
-Adaptation and management
-Mitigation
-Acidification of freshwater bodies
-Geoengineering
07/11/2019 • 39 minutes, 16 secondes
Not Cool Ep 20: Deborah Lawrence on deforestation
This summer, the world watched in near-universal horror as thousands of square miles of rainforest went up in flames. But what exactly makes forests so precious — and deforestation so costly? On the 20th episode of Not Cool, Ariel explores the many ways in which forests impact the global climate — and the profound price we pay when we destroy them. She’s joined by Deborah Lawrence, Environmental Science Professor at the University of Virginia, whose research focuses on the ecological effects of tropical deforestation. Deborah discusses the causes of this year's Amazon rainforest fires, the varying climate impacts of different types of forests, and the relationship between deforestation, agriculture, and carbon emissions. She also explains why the Amazon is not the lungs of the planet, what makes tropical forests so good at global cooling, and how putting a price on carbon emissions could slow deforestation.
Topics discussed include:
-Amazon rain forest fires
-Deforestation of the rainforest
-Tipping points in deforestation
-Climate impacts of forests: local vs. global
-Evapotranspiration
-Why tropical forests do the most cooling
-Non-climate impacts of forests
-Global rate of deforestation
-Why the Amazon is not the lungs of the planet
-Impacts of agriculture on forests
-Using degraded land for new crops
-Connection between forests and other greenhouse gases
-Individual actions and policies
06/11/2019 • 42 minutes, 31 secondes
FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre
There exist many facts about the nature of reality which stand at odds with our commonly held intuitions and experiences of the world. Ultimately, there is a relativity of the simultaneity of events and there is no universal "now." Are these facts baked into our experience of the world? Or are our experiences and intuitions at odds with these facts? When we consider this, the origins of our mental models, and what modern physics and cosmology tell us about the nature of reality, we are beckoned to identify our commonly held experiences and intuitions, to analyze them in the light of modern science and philosophy, and to come to new implicit, explicit, and experiential understandings of reality. In his book Cosmological Koans: A Journey to the Heart of Physical Reality, FLI co-founder Anthony Aguirre explores the nature of space, time, motion, quantum physics, cosmology, the observer, identity, and existence itself through Zen koans fueled by science and designed to elicit questions, experiences, and conceptual shifts in the reader. The universe can be deeply counter-intuitive at many levels and this conversation, rooted in Anthony's book, is an attempt at exploring this problem and articulating the contemporary frontiers of science and philosophy.
Topics discussed include:
-What is skillful about a synergy of Zen and scientific reasoning
-The history and philosophy of science
-The role of the observer in science and knowledge
-The nature of information
-What counts as real
-The world in and of itself and the world we experience as populated by our concepts and models of it
-Identity in human beings and future AI systems
-Questions of how identity should evolve
-Responsibilities and open questions associated with architecting life 3.0
You can find the podcast page, including the transcript, here: https://futureoflife.org/2019/10/31/cosmological-koans-a-journey-to-the-heart-of-physical-reality-with-anthony-aguirre/
31/10/2019 • 1 heure, 31 minutes, 20 secondes
Not Cool Ep 19: Ilissa Ocko on non-carbon causes of climate change
Carbon emissions account for about 50% of warming, yet carbon overwhelmingly dominates the climate change discussion. On Episode 19 of Not Cool, Ariel is joined by Ilissa Ocko for a closer look at the non-carbon causes of climate change — like methane, sulphur dioxide, and an aerosol known as black carbon — that are driving the other 50% of warming. Ilissa is a senior climate scientist with the Environmental Defense Fund and an expert on short-lived climate pollutants. She explains how these non-carbon pollutants affect the environment, where they’re coming from, and why they’ve received such little attention relative to carbon. She also discusses a major problem with the way we model climate impacts over 100-year time scales, the barriers to implementing a solution, and more.
Topics discussed include:
-Anthropogenic aerosols
-Non-CO2 climate forcers: black carbon, methane, etc.
-Warming vs. cooling pollutants
-Environmental impacts of methane emissions
-Modeling methane vs. carbon
-Why we need to look at climate impacts on different timescales
-Why we shouldn't geoengineer with cooling aerosols
-How we can reduce methane emissions
31/10/2019 • 37 minutes, 52 secondes
Not Cool Ep 18: Glen Peters on the carbon budget and global carbon emissions
In many ways, the global carbon budget is like any other budget. There’s a maximum amount we can spend, and it must be allocated to various countries and various needs. But how do we determine how much carbon each country can emit? Can developing countries grow their economies without increasing their emissions? And if a large portion of China’s emissions come from products made for American and European consumption, who’s to blame for those emissions? On episode 18 of Not Cool, Ariel is joined by Glen Peters, Research Director at the Center for International Climate Research (CICERO) in Oslo. Glen explains the components that make up the carbon budget, the complexities of its calculation, and its implications for climate policy and mitigation efforts. He also discusses how emissions are allocated to different countries, how emissions are related to economic growth, what role China plays in all of this, and more.
Topics discussed include:
-Global carbon budget
-Carbon cycle
-Mitigation
-Calculating carbon footprints
-Allocating emissions
-Equity issues in allocation and climate policy
-U.S.-China trade war
-Emissions from fossil fuels
-Land use change
-Uncertainties in estimates
-Greenhouse gas inventories
-Reporting requirements for developed vs. developing nations
-Emissions trends
-Negative emissions
-Policies and individual actions
30/10/2019 • 50 minutes, 58 secondes
Not Cool Ep 17: Tackling Climate Change with Machine Learning, part 2
It’s time to get creative in the fight against climate change, and machine learning can help us do that. Not Cool episode 17 continues our discussion of “Tackling Climate Change with Machine Learning,” a nearly 100-page report co-authored by 22 researchers from some of the world’s top AI institutes. Today, Ariel talks to Natasha Jaques and Tegan Maharaj, the respective authors of the report’s “Tools for Individuals” and “Tools for Society” chapters. Natasha and Tegan explain how machine learning can help individuals lower their carbon footprints and aid politicians in implementing better climate policies. They also discuss uncertainty in climate predictions, the relative price of green technology, and responsible machine learning development and use.
Topics discussed include:
-Reinforcement learning
-Individual carbon footprints
-Privacy concerns
-Residential electricity use
-Asymmetrical uncertainty
-Natural language processing and sentiment analysis
-Multi-objective optimization and multi-criteria decision making
-Hedonic pricing
-Public goods problems
-Evolutionary game theory
-Carbon offsets
-Nuclear energy
-Interdisciplinary collaboration
-Descriptive vs. prescriptive uses of ML
24/10/2019 • 57 minutes, 45 secondes
Not Cool Ep 16: Tackling Climate Change with Machine Learning, part 1
How can artificial intelligence, and specifically machine learning, be used to combat climate change? In an ambitious recent report, machine learning researchers provided a detailed overview of the ways that their work can be applied to both climate mitigation and adaptation efforts. The massive collaboration, titled “Tackling Climate Change with Machine Learning,” involved 22 authors from 16 of the world's top AI institutions. On Not Cool episodes 16 and 17, Ariel speaks directly to some of these researchers about their specific contributions, as well as the paper's significance more widely. Today, she’s joined by lead author David Rolnick; Priya Donti, author of the electricity systems chapter; Lynn Kaack, author of the transportation chapter and co-author of the buildings and cities chapter; and Kelly Kochanski, author of the climate prediction chapter. David, Priya, Lynn, and Kelly discuss the origins of the paper, the solutions it proposes, the importance of this kind of interdisciplinary work, and more.
Topics discussed include:
-Translating data to action
-Electricity systems
-Transportation
-Buildings and cities
-Climate prediction
-Adaptation
-Demand response
-Climate informatics
-Accelerated science
-Climate finance
-Responses to paper
-Next steps
-Challenges
22/10/2019 • 1 heure, 26 minutes, 41 secondes
Not Cool Ep 15: Astrid Caldas on equitable climate adaptation
Despite the global scale of the climate crisis, its impacts will vary drastically at the local level. Not Cool Episode 15 looks at the unique struggles facing different communities — both human and non-human — and the importance of equity in climate adaptation. Ariel is joined by Astrid Caldas, a senior climate scientist at the Union of Concerned Scientists, to discuss the types of climate adaptation solutions we need and how we can implement them. She also talks about biodiversity loss, ecological grief, and psychological barriers to change.
Topics discussed include:
-Climate justice and equity in climate adaptation
-How adaptation differs for different communities
-Local vs. larger scale solutions
-Potential adaptation measures and how to implement them
-Active vs. passive information
-Adaptation for non-human species
-How changes in biodiversity will affect humans
-Impact of climate change on Indigenous and frontline communities
17/10/2019 • 35 minutes, 14 secondes
Not Cool Ep 14: Filippo Berardi on carbon finance and the economics of climate change
As the world nears the warming limit set forth by international agreement, carbon emissions have become a costly commodity. Not Cool episode 14 examines the rapidly expanding domain of carbon finance, along with the wider economic implications of the changing climate. Ariel is joined by Filippo Berardi, an environmental management and international development specialist at the Global Environment Facility (GEF). Filippo explains the international carbon market, the economic risks of not addressing climate change, and the benefits of a low carbon economy. He also discusses where international funds can best be invested, what it would cost to fully operationalize the Paris Climate Agreement, and how the fall of the Soviet Union impacted carbon finance at the international level.
Topics discussed include:
-UNFCCC: funding, allocation of resources
-Cap and trade system vs. carbon tax
-Emission trading
-Carbon offsets
-Planetary carbon budget
-Economic risks of not addressing climate change
-Roles for public sector vs. private sector
-What a low carbon economy would look like
15/10/2019 • 40 minutes, 43 secondes
Not Cool Ep 13: Val Kapos on ecosystem-based adaptation
What is ecosystem-based adaptation, and why should we be implementing it? The thirteenth episode of Not Cool explores how we can conserve, restore, and manage natural ecosystems in ways that also help us adapt to the impacts of climate change. Ariel is joined by Val Kapos, Head of the Climate Change and Biodiversity Programme at UN Environment’s World Conservation Monitoring Centre, who explains the benefits of ecosystem-based adaptation along with some of the strategies for executing it. Val also describes how ecosystem-based adaptation is being used today, why it’s an effective strategy for developed and developing nations alike, and what could motivate more communities to embrace it.
Topics discussed include:
-Importance of biodiversity
-Ecosystem-based vs. engineered approaches to adaptation
-Potential downsides/risks of ecosystem-based adaptation
-Linking ecosystem-based adaptation to other societal objectives
-Obstacles to implementation
-Private sector acceptance of ecosystem-based adaptation
-Nationally Determined Contributions
-Importance of stakeholder involvement
10/10/2019 • 32 minutes, 3 secondes
AIAP: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell
Stuart Russell is one of AI's true pioneers and has been at the forefront of the field for decades. His expertise and forward thinking have culminated in his newest work, Human Compatible: Artificial Intelligence and the Problem of Control. The book is a cornerstone piece, alongside Superintelligence and Life 3.0, that articulates the civilization-scale problem we face of aligning machine intelligence with human goals and values. Not only is this a further articulation and development of the AI alignment problem, but Stuart also proposes a novel solution that brings us to a better understanding of what it will take to create beneficial machine intelligence.
Topics discussed in this episode include:
-Stuart's intentions in writing the book
-The history of intellectual thought leading up to the control problem
-The problem of control
-Why tool AI won't work
-Messages for different audiences
-Stuart's proposed solution to the control problem
You can find the page for this podcast here: https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/
Important timestamps:
0:00 Intro
2:10 Intentions and background on the book
4:30 Human intellectual tradition leading up to the problem of control
7:41 Summary of the structure of the book
8:28 The issue with the current formulation of building intelligent machine systems
10:57 Beginnings of a solution
12:54 Might tool AI be of any help here?
16:30 Core message of the book
20:36 How the book is useful for different audiences
26:30 Inferring the preferences of irrational agents
36:30 Why does this all matter?
39:50 What is really at stake?
45:10 Risks and challenges on the path to beneficial AI
54:55 We should consider laws and regulations around AI
01:03:54 How is this book differentiated from those like it?
08/10/2019 • 1 heure, 8 minutes, 22 secondes
Not Cool Ep 12: Kris Ebi on climate change, human health, and social stability
We know that climate change has serious implications for human health, including the spread of vector-borne disease and the global increase of malnutrition. What we don’t yet know is how expansive these health issues could become or how these problems will impact social stability. On episode 12 of Not Cool, Ariel is joined by Kris Ebi, professor at the University of Washington and founding director of its Center for Health and the Global Environment. Kris explains how increased CO2 affects crop quality, why malnutrition might alter patterns of human migration, and what we can do to reduce our vulnerability to these impacts. She also discusses changing weather patterns, the expanding geographic range of disease-carrying insects, and more.
Topics discussed include:
-Human health and social stability
-Climate related malnutrition
-Knowns and unknowns
-Extreme events and changing weather patterns
-Vulnerability and exposure
-Steps to reduce vulnerability
-Vector-borne disease
-Endemic vs. epidemic malaria
-Effects of increased CO2 on crop quality
-Actions individuals can take
08/10/2019 • 31 minutes, 6 secondes
Not Cool Ep 11: Jakob Zscheischler on climate-driven compound weather events
While a single extreme weather event can wreak considerable havoc, it's becoming increasingly clear that such events often don't occur in isolation. Not Cool Episode 11 focuses on compound weather events: what they are, why they’re dangerous, and how we've failed to prepare for them. Ariel is joined by Jakob Zscheischler, an Earth system scientist at the University of Bern, who discusses the feedback processes that drive compound events, the impacts they're already having, and the reasons we've underestimated their gravity. He also explains how extreme events can reduce carbon uptake, how human impacts can amplify climate hazards, and why we need more interdisciplinary research.
Topics discussed include:
-Carbon cycle
-Climate-driven changes in vegetation
-Land-atmosphere feedbacks
-Extreme events
-Compound events and why they’re under-researched
-Risk assessment
-Spatially compounding impacts
-Importance of working across disciplines
-Important policy measures
03/10/2019 • 24 minutes, 32 secondes
Not Cool Ep 10: Stephanie Herring on extreme weather events and climate change attribution
One of the most obvious markers of climate change has been the increasing frequency and intensity of extreme weather events in recent years. In the tenth episode of Not Cool, Ariel takes a closer look at the research linking climate change and extreme events — and, in turn, linking extreme events and socioeconomic patterns. She’s joined by Stephanie Herring, a climate scientist at the National Oceanic and Atmospheric Administration whose work on extreme event attribution has landed her on Foreign Policy magazine’s list of Top 100 Global Thinkers. Stephanie discusses the changes she’s witnessed in the field of attribution research, the concerning trends that have begun to emerge, the importance of data in the decision-making process, and more.
Topics discussed include:
-Extreme events & how they’re defined
-Attribution research
-Risk management
-Selection bias in climate research
-Insurance analysis
-Compound events and impacts
-Knowns and unknowns
01/10/2019 • 33 minutes, 14 secondes
FLI Podcast: Feeding Everyone in a Global Catastrophe with Dave Denkenberger & Joshua Pearce
Most of us working on catastrophic and existential threats focus on trying to prevent them — not on figuring out how to survive the aftermath. But what if, despite everyone’s best efforts, humanity does undergo such a catastrophe? This month’s podcast is all about what we can do in the present to ensure humanity’s survival in a future worst-case scenario. Ariel is joined by Dave Denkenberger and Joshua Pearce, co-authors of the book Feeding Everyone No Matter What, who explain what would constitute a catastrophic event, what it would take to feed the global population, and how their research could help address world hunger today. They also discuss infrastructural preparations, appropriate technology, and why it’s worth investing in these efforts.
Topics discussed include:
-Causes of global catastrophe
-Planning for catastrophic events
-Getting governments onboard
-Application to current crises
-Alternative food sources
-Historical precedents for societal collapse
-Appropriate technology
-Hardwired optimism
-Surprising things that could save lives
-Climate change and adaptation
-Moral hazards
-Why it’s in the best interest of the global wealthy to make food more available
30/09/2019 • 50 minutes, 6 secondes
Not Cool Ep 9: Andrew Revkin on climate communication, vulnerability, and information gaps
In her speech at Monday’s UN Climate Action Summit, Greta Thunberg told a roomful of global leaders, “The world is waking up.” Yet the science, as she noted, has been clear for decades. Why has this awakening taken so long, and what can we do now to help it along? On Episode 9 of Not Cool, Ariel is joined by Andy Revkin, acclaimed environmental journalist and founding director of the new Initiative on Communication and Sustainability at Columbia University’s Earth Institute. Andy discusses the information gaps that have left us vulnerable, the difficult conversations we need to be having, and the strategies we should be using to effectively communicate climate science. He also talks about inertia, resilience, and creating a culture that cares about the future.
Topics discussed include:
-Inertia in the climate system
-The expanding bullseye of vulnerability
-Managed retreat
-Information gaps
-Climate science literacy levels
-Renewable energy in conservative states
-Infrastructural inertia
-Climate science communication strategies
-Increasing resilience
-Balancing inconvenient realities with productive messaging
-Extreme events
26/09/2019 • 36 minutes, 51 secondes
Not Cool Ep 8: Suzanne Jones on climate policy and government responsibility
On the eighth episode of Not Cool, Ariel tackles the topic of climate policy from the local level to the federal. She's joined by Suzanne Jones, the current mayor of Boulder, Colorado, who is also a public policy veteran and climate activist. Suzanne explains the climate threats facing communities like Boulder, the measures local governments can take to combat the crisis, and the ways she’d like to see the federal government step up. She also discusses the economic value of going green, the importance of promoting equity in climate solutions, and more.
Topics discussed include:
-Paris Climate Agreement
-Roles for local/state/federal governments
-Surprise costs of climate change
-Equality/equity in climate solutions
-Increasing community engagement
-Nonattainment zones
-Electrification of transportation sector
-Municipalization of electric utility
-Challenges, roadblocks, and what she’d like to see accomplished
-Affordable, sustainable development
-What individuals should be doing
-Carbon farming and sustainable agriculture
24/09/2019 • 37 minutes, 13 secondes
Not Cool Ep 7: Lindsay Getschel on climate change and national security
The impacts of the climate crisis don’t stop at rising sea levels and changing weather patterns. Episode 7 of Not Cool covers the national security implications of the changing climate, from the economic fallout to the uptick in human migration. Ariel is joined by Lindsay Getschel, a national security and climate change researcher who briefed the UN Security Council this year on these threats. Lindsay also discusses how hard-hit communities are adapting, why UN involvement is important, and more.
Topics discussed include:
-Threat multipliers
-Economic impacts of climate change
-Impacts of climate change on migration
-The importance of UN involvement
-Ecosystem-based adaptation
-Action individuals can take
20/09/2019 • 23 minutes, 23 secondes
Not Cool Ep 6: Alan Robock on geoengineering
What is geoengineering, and could it really help us solve the climate crisis? The sixth episode of Not Cool features Dr. Alan Robock, meteorologist and climate scientist, on types of geoengineering solutions, the benefits and risks of geoengineering, and the likelihood that we may need to implement such technology. He also discusses a range of other solutions, including economic and policy reforms, shifts within the energy sector, and the type of leadership that might make these transformations possible.
Topics discussed include:
-Types of geoengineering, including carbon dioxide removal and solar radiation management
-Current geoengineering capabilities
-The Year Without a Summer
-The termination problem
-Feasibility of geoengineering solutions
-Social cost of carbon
-Fossil fuel industry
-Renewable energy solutions and economic accessibility
-Biggest risks of stratospheric geoengineering
17/09/2019 • 44 minutes, 51 secondes
AIAP: Synthesizing a human's preferences into a utility function with Stuart Armstrong
In his Research Agenda v0.9: Synthesizing a human's preferences into a utility function, Stuart Armstrong develops an approach for generating friendly artificial intelligence. His alignment proposal can broadly be understood as a kind of inverse reinforcement learning where most of the task of inferring human preferences is left to the AI itself. It's up to us to build the correct assumptions, definitions, preference learning methodology, and synthesis process into the AI system such that it will be able to meaningfully learn human preferences and synthesize them into an adequate utility function. In order to get this all right, his agenda looks at how to understand and identify human partial preferences, how to ultimately synthesize these learned preferences into an "adequate" utility function, the practicalities of developing and estimating the human utility function, and how this agenda can assist in other methods of AI alignment.
Topics discussed in this episode include:
-The core aspects and ideas of Stuart's research agenda
-Human values being changeable, manipulable, contradictory, and underdefined
-This research agenda in the context of the broader AI alignment landscape
-What the proposed synthesis process looks like
-How to identify human partial preferences
-Why a utility function anyway?
-Idealization and reflective equilibrium
-Open questions and potential problem areas
Here you can find the podcast page: https://futureoflife.org/2019/09/17/synthesizing-a-humans-preferences-into-a-utility-function-with-stuart-armstrong/
Important timestamps:
0:00 Introductions
3:24 A story of evolution (inspiring just-so story)
6:30 How does your “inspiring just-so story” help to inform this research agenda?
8:53 The two core parts to the research agenda
10:00 How this research agenda is contextualized in the AI alignment landscape
12:45 The fundamental ideas behind the research project
15:10 What are partial preferences?
17:50 Why reflexive self-consistency isn’t enough
20:05 How are humans contradictory and how does this affect the difficulty of the agenda?
25:30 Why human values being underdefined presents the greatest challenge
33:55 Expanding on the synthesis process
35:20 How to extract the partial preferences of the person
36:50 Why a utility function?
41:45 Are there alternative goal ordering or action producing methods for agents other than utility functions?
44:40 Extending and normalizing partial preferences and covering the rest of section 2
50:00 Moving into section 3, synthesizing the utility function in practice
52:00 Why this research agenda is helpful for other alignment methodologies
55:50 Limits of the agenda and other problems
58:40 Synthesizing a species wide utility function
1:01:20 Concerns over the alignment methodology containing leaky abstractions
1:06:10 Reflective equilibrium and the agenda not being a philosophical ideal
1:08:10 Can we check the result of the synthesis process?
01:09:55 How did the Mahatma Armstrong idealization process fail?
01:14:40 Any clarifications for the AI alignment community?
You can take a short (4 minute) survey to share your feedback about the podcast here: www.surveymonkey.com/r/YWHDFV7
17/09/2019 • 1 heure, 16 minutes, 32 secondes
Not Cool Ep 5: Ken Caldeira on energy, infrastructure, and planning for an uncertain climate future
Planning for climate change is particularly difficult because we're dealing with such big unknowns. How, exactly, will the climate change? Who will be affected and how? What new innovations are possible, and how might they help address or exacerbate the current problem? Etc. But we at least know that in order to minimize the negative effects of climate change, we need to make major structural changes — to our energy systems, to our infrastructure, to our power structures — and we need to start now. On the fifth episode of Not Cool, Ariel is joined by Ken Caldeira, a climate scientist in the Carnegie Institution for Science's Department of Global Ecology and a professor in Stanford University's Department of Earth System Science. Ken shares his thoughts on the changes we need to be making, the obstacles standing in the way, and what it will take to overcome them.
Topics discussed include:
-Relationship between policy and science
-Climate deniers and why it isn't useful to argue with them
-Energy systems and replacing carbon
-Planning in the face of uncertainty
-Sociopolitical/psychological barriers to climate action
-Most urgently needed policies and actions
-Economic scope of climate solution
-Infrastructure solutions and their political viability
-Importance of political/systemic change
12/09/2019 • 27 minutes, 46 secondes
Not Cool Ep 4: Jessica Troni on helping countries adapt to climate change
The reality is, no matter what we do going forward, we’ve already changed the climate. So while it’s critical to try to minimize those changes, it’s also important that we start to prepare for them. On Episode 4 of Not Cool, Ariel explores the concept of climate adaptation — what it means, how it’s being implemented, and where there’s still work to be done. She’s joined by Jessica Troni, head of UN Environment’s Climate Change Adaptation Unit, who talks warming scenarios, adaptation strategies, implementation barriers, and more.
Topics discussed include:
Climate adaptation: ecology-based, infrastructure
Funding sources
Barriers: financial, absorptive capacity
Developed vs. developing nations: difference in adaptation approaches, needs, etc.
UN Environment
Policy solutions
Social unrest in relation to climate
Feedback loops and runaway climate change
Warming scenarios
What individuals can do
10/09/2019 • 25 minutes, 24 secondes
Not Cool Ep 3: Tim Lenton on climate tipping points
What is a climate tipping point, and how do we know when we’re getting close to one? On Episode 3 of Not Cool, Ariel talks to Dr. Tim Lenton, Professor and Chair in Earth System Science and Climate Change at the University of Exeter and Director of the Global Systems Institute. Tim explains the shifting system dynamics that underlie phenomena like glacial retreat and the disruption of monsoons, as well as their consequences. He also discusses how to deal with low certainty/high stakes risks, what types of policies we most need to be implementing, and how humanity’s unique self-awareness impacts our relationship with the Earth.
Topics discussed include:
Climate tipping points: impacts, warning signals
Evidence that climate is nearing tipping point?
IPCC warming targets
Risk management under uncertainty
Climate policies
Human tipping points: social, economic, technological
The Gaia Hypothesis
05/09/2019 • 38 minutes, 4 secondes
Not Cool Ep 2: Joanna Haigh on climate modeling and the history of climate change
On the second episode of Not Cool, Ariel delves into some of the basic science behind climate change and the history of its study. She is joined by Dr. Joanna Haigh, an atmospheric physicist whose work has been foundational to our current understanding of how the climate works. Joanna is a fellow of The Royal Society and recently retired as Co-Director of the Grantham Institute on Climate Change and the Environment at Imperial College London. Here, she gives a historical overview of the field of climate science and the major breakthroughs that moved it forward. She also discusses her own work on the stratosphere, radiative forcing, solar variability, and more.
Topics discussed include:
History of the study of climate change
Overview of climate modeling
Radiative forcing
What’s changed in climate science in the past few decades
How to distinguish between natural climate variation and human-induced global warming
Solar variability, sun spots, and the effect of the sun on the climate
History of climate denial
03/09/2019 • 28 minutes, 3 secondes
Not Cool Ep 1: John Cook on misinformation and overcoming climate silence
On the premier of Not Cool, Ariel is joined by John Cook: psychologist, climate change communication researcher, and founder of SkepticalScience.com. Much of John’s work focuses on misinformation related to climate change, how it’s propagated, and how to counter it. He offers a historical analysis of climate denial and the motivations behind it, and he debunks some of its most persistent myths. John also discusses his own research on perceived social consensus, the phenomenon he’s termed “climate silence,” and more.
Topics discussed include:
History of the study of climate change
Climate denial: history and motivations
Persistent climate myths
How to overcome misinformation
How to talk to climate deniers
Perceived social consensus and climate silence
03/09/2019 • 36 minutes, 12 secondes
Not Cool Prologue: A Climate Conversation
In this short trailer, Ariel Conn talks about FLI's newest podcast series, Not Cool: A Climate Conversation.
Climate change, to state the obvious, is a huge and complicated problem. Unlike the threats posed by artificial intelligence, biotechnology or nuclear weapons, you don’t need to have an advanced science degree or be a high-ranking government official to start having a meaningful impact on your own carbon footprint. Each of us can begin making lifestyle changes today that will help. We started this podcast because the news about climate change seems to get worse with each new article and report, but the solutions, at least as reported, remain vague and elusive. We wanted to hear from the scientists and experts themselves to learn what’s really going on and how we can all come together to solve this crisis.
03/09/2019 • 3 minutes, 56 secondes
FLI Podcast: Beyond the Arms Race Narrative: AI and China with Helen Toner and Elsa Kania
Discussions of Chinese artificial intelligence often center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond this narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward.
Topics discussed in this episode include:
The rise of AI in China
The escalation of tensions between U.S. and China in AI realm
Chinese AI Development plans and policy initiatives
The AI arms race narrative and the problems with it
Civil-military fusion in China vs. U.S.
The regulation of Chinese-American technological collaboration
AI and authoritarianism
Openness in AI research and when it is (and isn’t) appropriate
The relationship between privacy and advancement in AI
30/08/2019 • 49 minutes, 28 secondes
AIAP: China's AI Superpower Dream with Jeffrey Ding
"In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China." (FLI's AI Policy - China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China's AI development and strategy, as well as China's approach to strategic technologies more generally.
Topics discussed in this episode include:
-China's historical relationships with technology development
-China's AI goals and some recently released principles
-Jeffrey Ding's work, Deciphering China's AI Dream
-The central drivers of AI and the resulting Chinese AI strategy
-Chinese AI capabilities
-AGI and superintelligence awareness and thinking in China
-Dispelling AI myths, promoting appropriate memes
-What healthy competition between the US and China might look like
Here you can find the page for this podcast: https://futureoflife.org/2019/08/16/chinas-ai-superpower-dream-with-jeffrey-ding/
Important timestamps:
0:00 Intro
2:14 Motivations for the conversation
5:44 Historical background on China and AI
8:13 AI principles in China and the US
16:20 Jeffrey Ding’s work, Deciphering China’s AI Dream
21:55 Does China’s government play a central hand in setting regulations?
23:25 Can Chinese implementation of regulations and standards move faster than in the US? Is China buying shares in companies to have decision making power?
27:05 The components and drivers of AI in China and how they affect Chinese AI strategy
35:30 Chinese government guidance funds for AI development
37:30 Analyzing China’s AI capabilities
44:20 Implications for the future of AI and AI strategy given the current state of the world
49:30 How important are AGI and superintelligence concerns in China?
52:30 Are there explicit technical AI research programs in China for AGI?
53:40 Dispelling AI myths and promoting appropriate memes
56:10 Relative and absolute gains in international politics
59:11 On Peter Thiel’s recent comments on superintelligence, AI, and China
1:04:10 Major updates and changes since Jeffrey wrote Deciphering China’s AI Dream
1:05:50 What does healthy competition between China and the US look like?
1:11:05 Where to follow Jeffrey and read more of his work
You can take a short (4 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
Deciphering China's AI Dream: https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
FLI AI Policy - China page: https://futureoflife.org/ai-policy-china/
ChinAI Newsletter: https://chinai.substack.com
Jeff's Twitter: https://twitter.com/jjding99
Previous podcast with Jeffrey: https://youtu.be/tm2kmSQNUAU
16/08/2019 • 1 heure, 12 minutes, 20 secondes
FLI Podcast: The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield
Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Center for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems. They also discussed our species’ unique strengths and vulnerabilities –– and the ways in which technology has heightened both –– with respect to the changing climate.
01/08/2019 • 1 heure, 9 minutes, 34 secondes
AIAP: On the Governance of AI with Jade Leung
In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.
Topics discussed in this episode include:
-The landscape of AI governance
-The Center for the Governance of AI’s research agenda and priorities
-Aligning government and companies with ideal governance and the common good
-Norms and efforts in the AI alignment community in this space
-Technical AI alignment vs. AI Governance vs. malicious use cases
-Lethal autonomous weapons
-Where we are in terms of our efforts and what further work is needed in this space
You can take a short (3 minute) survey to share your feedback about the podcast here: www.surveymonkey.com/r/YWHDFV7
22/07/2019 • 1 heure, 14 minutes, 17 secondes
FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell
Nuclear weapons testing is mostly a thing of the past: The last nuclear weapon test explosion on US soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate?
In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT).
This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.
Topics discussed in this episode:
- The validity of the U.S. allegations -- Is Russia really testing weapons?
- The International Monitoring System -- How effective is it if the treaty isn’t in effect?
- The modernization of U.S./Russian/Chinese nuclear arsenals and what that means.
- Why there’s a push for nuclear testing.
- Why opposing nuclear testing can help ensure the US maintains nuclear superiority.
28/06/2019 • 37 minutes, 37 secondes
FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi
In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM TJ Watson Research Lab and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let's build our technology around our visions for the future.
31/05/2019 • 38 minutes, 32 secondes
AIAP: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson
Consciousness is a concept which is at the forefront of much scientific and philosophical thinking. At the same time, there is large disagreement over what consciousness exactly is and whether it can be fully captured by science or is best explained away by a reductionist understanding. Some believe consciousness to be the source of all value and others take it to be a kind of delusion or confusion generated by algorithms in the brain. The Qualia Research Institute takes consciousness to be something substantial and real in the world that they expect can be captured by the language and tools of science and mathematics. To understand this position, we will have to unpack the philosophical motivations which inform this view, the intuition pumps which lend themselves to these motivations, and then explore the scientific process of investigation which is born of these considerations. Whether you take consciousness to be something real or illusory, the implications of these possibilities certainly have tremendous moral and empirical implications for life's purpose and role in the universe. Is existence without consciousness meaningful?
In this podcast, Lucas spoke with Mike Johnson and Andrés Gómez Emilsson of the Qualia Research Institute. Andrés is a consciousness researcher at QRI and is also the Co-founder and President of the Stanford Transhumanist Association. He has a Master's in Computational Psychology from Stanford. Mike is Executive Director at QRI and is also a co-founder. Mike is interested in neuroscience, philosophy of mind, and complexity theory.
Topics discussed in this episode include:
-Functionalism and qualia realism
-Views that are skeptical of consciousness
-What we mean by consciousness
-Consciousness and causality
-Marr's levels of analysis
-Core problem areas in thinking about consciousness
-The Symmetry Theory of Valence
-AI alignment and consciousness
You can take a very short survey about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
23/05/2019 • 1 heure, 26 minutes, 40 secondes
The Unexpected Side Effects of Climate Change with Fran Moore and Nick Obradovich
It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act.
In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change, and they shared some of their most remarkable findings.
Topics discussed in this episode include:
- How getting used to climate change may make it harder for us to address the issue
- The social cost of carbon
- The effect of temperature on mood, exercise, and sleep
- The effect of temperature on public safety and democratic processes
- Why it’s hard to get people to act
- What we can all do to make a difference
- Why we should still be hopeful
30/04/2019 • 51 minutes, 15 secondes
AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 2)
The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going in more depth with regards to the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.
Topics discussed in this episode include:
-Embedded agency
-The field of "getting AI systems to do what we want"
-Ambitious value learning
-Corrigibility, including iterated amplification, debate, and factored cognition
-AI boxing and impact measures
-Robustness through verification, adversarial ML, and adversarial examples
-Interpretability research
-Comprehensive AI Services
-Rohin's relative optimism about the state of AI alignment
You can take a short (3 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
25/04/2019 • 1 heure, 6 minutes, 50 secondes
AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 1)
The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI alignment research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space and carves through the field by its varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop/preserve, and hear Rohin's take on these different approaches.
You can take a short (3 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7
In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.
Topics discussed in this episode include:
- The perspectives of CHAI, MIRI, OpenAI, DeepMind, FHI, and others
- Where and why they disagree on technical alignment
- The kinds of properties and features we are trying to ensure in our AI systems
- What Rohin is excited and optimistic about
- Rohin's recommended reading and advice for improving at AI alignment research
11/04/2019 • 1 heure, 16 minutes, 30 secondes
Why Ban Lethal Autonomous Weapons
Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts –– one physician, one lawyer, and two human rights specialists –– all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion.
We've compiled their arguments, along with many of our own, and now, we want to turn the discussion over to you. We’ve set up a comments section on the FLI podcast page (www.futureoflife.org/whyban), and we want to know: Which argument(s) do you find most compelling? Why?
03/04/2019 • 49 minutes, 2 secondes
AIAP: AI Alignment through Debate with Geoffrey Irving
See full article here: https://futureoflife.org/2019/03/06/ai-alignment-through-debate-with-geoffrey-irving/
"To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information... In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment. " AI safety via debate (https://arxiv.org/pdf/1805.00899.pdf)
Debate is something that we are all familiar with. Usually it involves two or more people giving arguments and counterarguments over some question in order to prove a conclusion. At OpenAI, debate is being explored as an AI alignment methodology for reward learning (learning what humans want) and as part of their scalability efforts (how to train/evolve systems to solve questions of increasing complexity). Debate might sometimes seem like a fruitless process, but when optimized and framed as a two-player zero-sum perfect-information game, we can see properties of debate and synergies with machine learning that may make it a powerful truth-seeking process on the path to beneficial AGI.
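To make the mechanics concrete, here is a minimal Python sketch of that debate loop under stated assumptions: agent_a, agent_b, and human_judge are hypothetical placeholder callables, and the turn limit and zero-sum scoring convention are illustrative choices rather than OpenAI's actual implementation.

from typing import Callable, List, Tuple

def run_debate(
    question: str,
    agent_a: Callable[[str, List[str]], str],      # (question, transcript so far) -> next statement
    agent_b: Callable[[str, List[str]], str],
    human_judge: Callable[[str, List[str]], int],  # returns 0 if agent A won, 1 if agent B won
    max_turns: int = 6,
) -> Tuple[Tuple[int, int], List[str]]:
    """Two agents alternate short statements about a question; a human judge then picks a winner."""
    transcript: List[str] = []
    for turn in range(max_turns):
        speaker = agent_a if turn % 2 == 0 else agent_b
        transcript.append(speaker(question, transcript))
    winner = human_judge(question, transcript)
    rewards = (1, -1) if winner == 0 else (-1, 1)  # zero-sum: +1 to the winner, -1 to the loser
    return rewards, transcript

Training would then update each agent to maximize its own reward, which is what frames the procedure as a two-player zero-sum game.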
On today's episode, we are joined by Geoffrey Irving. Geoffrey is a member of the AI safety team at OpenAI. He has a PhD in computer science from Stanford University, and has worked at Google Brain on neural network theorem proving, cofounded Eddy Systems to autocorrect code as you type, and has worked on computational physics and geometry at Otherlab, D. E. Shaw Research, Pixar, and Weta Digital. He has screen credits on Tintin, Wall-E, Up, and Ratatouille.
Topics discussed in this episode include:
-What debate is and how it works
-Experiments on debate in both machine learning and social science
-Optimism and pessimism about debate
-What amplification is and how it fits in
-How Geoffrey took inspiration from amplification and AlphaGo
-The importance of interpretability in debate
-How debate works for normative questions
-Why AI safety needs social scientists
07/03/2019 • 1 heure, 10 minutes, 4 secondes
Part 2: Anthrax, Agent Orange, and Yellow Rain With Matthew Meselson and Max Tegmark
In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University.
Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in Russia, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy in the early 80s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.
28/02/2019 • 51 minutes, 35 secondes
Part 1: From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark
In this special two-part podcast Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in disarmament, working with the US government to halt the use of Agent Orange in Vietnam and developing the Biological Weapons Convention. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also towards the mitigation of existential threats.
In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook to get them banned. Dr. Meselson was a key force behind the U.S. ratification of the Geneva Protocol, a 1925 treaty banning biological warfare, as well as the conception and implementation of the Biological Weapons Convention, the international treaty that bans biological and toxin weapons.
28/02/2019 • 56 minutes, 30 secondes
AIAP: Human Cognition and the Nature of Intelligence with Joshua Greene
See the full article here: https://futureoflife.org/2019/02/21/human-cognition-and-the-nature-of-intelligence-with-joshua-greene/
"How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind's eyes and ears? How does your brain distinguish what it's thinking about from what it actually believes? If I tell you a made up story, yesterday I played basketball with LeBron James, maybe you'd believe me, and then I say, oh I was just kidding, didn't really happen. You still have the idea in your head, but in one case you're representing it as something true, in another case you're representing it as something false, or maybe you're representing it as something that might be true and you're not sure. For most animals, the ideas that get into its head come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe that they're false or you could just be agnostic, and that's essential not just for idle speculation, but it's essential for planning. You have to be able to imagine possibilities that aren't yet actual. So these are all things we're trying to understand. And then I think the project of understanding how humans do it is really quite parallel to the project of trying to build artificial general intelligence." -Joshua Greene
Josh Greene is a Professor of Psychology at Harvard who focuses on moral judgment and decision making. His recent work focuses on cognition, and his broader interests include philosophy, psychology and neuroscience. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Joshua Greene's research focuses on further understanding key aspects of both individual and collective intelligence. Deepening our knowledge of these subjects allows us to understand the key features which constitute human general intelligence, and how human cognition aggregates and plays out through group choice and social decision making. By better understanding the one general intelligence we know of, namely humans, we can gain insights into the kinds of features that are essential to general intelligence and thereby better understand what it means to create beneficial AGI. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.
Topics discussed in this episode include:
-The multi-modal and combinatorial nature of human intelligence
-The symbol grounding problem
-Grounded cognition
-Modern brain imaging
-Josh's psychology research using John Rawls’ veil of ignorance
-Utilitarianism reframed as 'deep pragmatism'
21/02/2019 • 37 minutes, 41 secondes
The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi
Three generals are voting on whether to attack or retreat from their siege of a castle. One of the generals is corrupt and two of them are not. What happens when the corrupted general sends different answers to the other two generals?
A Byzantine fault is "a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine Generals' Problem", developed to describe this condition, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable."
The Byzantine Generals' Problem and associated issues in maintaining reliable distributed computing networks is illuminating for both AI alignment and modern networks we interact with like Youtube, Facebook, or Google. By exploring this space, we are shown the limits of reliable distributed computing, the safety concerns and threats in this space, and the tradeoffs we will have to make for varying degrees of efficiency or safety.
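As a toy illustration of the three-generals story above (nothing from the episode itself), here is a short Python sketch in which the corrupt general sends a different vote to each loyal general; the names and the simple majority rule are invented assumptions.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class General:
    name: str
    vote: str                  # this general's own preference: "attack" or "retreat"
    traitor: bool = False
    inbox: dict = field(default_factory=dict)   # sender name -> vote received

    def broadcast(self, others):
        # A loyal general sends its true vote to everyone; the traitor tells
        # each recipient something different.
        lies = ["attack", "retreat"]
        for i, other in enumerate(others):
            other.inbox[self.name] = lies[i % 2] if self.traitor else self.vote

    def decide(self):
        # Simple majority over this general's own vote plus every vote received.
        tally = Counter(list(self.inbox.values()) + [self.vote])
        return tally.most_common(1)[0][0]

alice = General("Alice", vote="attack")                # loyal
bob = General("Bob", vote="retreat")                   # loyal
carol = General("Carol", vote="attack", traitor=True)  # corrupt

generals = [alice, bob, carol]
for g in generals:
    g.broadcast([other for other in generals if other is not g])

print(alice.name, "decides:", alice.decide())  # attack  (own attack, Bob's retreat, Carol's "attack")
print(bob.name, "decides:", bob.decide())      # retreat (own retreat, Alice's attack, Carol's "retreat")

Even though both loyal generals apply the same rule to the messages they see, Carol's conflicting messages push them to opposite decisions, which is exactly the kind of failure (impossible to rule out here, since agreement with faulty nodes classically requires n > 3f) that Byzantine-resilient protocols are designed to prevent.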
The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi is the ninth podcast in the AI Alignment Podcast series, hosted by Lucas Perry. El Mahdi pioneered Byzantine-resilient machine learning, devising a series of provably safe algorithms that he recently presented at NeurIPS and ICML. Interested in theoretical biology, his work also includes the analysis of error propagation in networks, applied to both neural and biomolecular networks. This particular episode was recorded at the Beneficial AGI 2019 conference in Puerto Rico. We hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, iTunes, Google Play, Stitcher, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.
If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.
Topics discussed in this episode include:
The Byzantine Generals' Problem
What this has to do with artificial intelligence and machine learning
Everyday situations where this is important
How systems and models are to update in the context of asynchrony
Why it's hard to do Byzantine-resilient distributed ML
Why this is important for long-term AI alignment
An overview of Adversarial Machine Learning and where Byzantine-resilient Machine Learning stands on the map is available in this (9min) video. A specific focus on Byzantine Fault Tolerant Machine Learning is available here (~7min).
In particular, El Mahdi argues in the first interview (and in the podcast) that technical AI safety is not only relevant to long-term concerns, but is crucial for current pressing issues such as the poisoning of public debates on social media and the propagation of misinformation, both of which fall under poisoning resilience. Another example he likes to use is social media addiction, which could be seen as a case of (non) safely interruptible learning. This value misalignment is already an issue with the primitive forms of AI that optimize our world today as they maximize our watch time all over the internet.
The latter (safe interruptibility) is another technical AI safety question El Mahdi works on, in the context of reinforcement learning. This line of research was initially dismissed as "science fiction"; in this interview (5min), El Mahdi explains why it is a realistic question that arises naturally in reinforcement learning.
El Mahdi's work on Byzantine-resilient Machine Learning and other relevant topics is available on his Google Scholar profile.
07/02/2019 • 50 minutes, 4 secondes
AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy
Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition.
Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI Safety researcher and professor at the University of Louisville. He also recently published the book, Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He's also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 hours.
Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.
31/01/2019 • 1 heure, 2 minutes, 56 secondes
Artificial Intelligence: American Attitudes and Trends with Baobao Zhang
Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown.
Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings--most Americans, for example, don’t trust Facebook--were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed.
This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a PhD candidate in Yale University's political science department and research affiliate with the Center for the Governance of AI at the University of Oxford. Her work focuses on American politics, international relations, and experimental methods.
In this episode, Zhang spoke about her take on some of the report’s most interesting findings, the new questions it raised, and future research directions for her team. Topics discussed include:
-Demographic differences in perceptions of AI
-Discrepancies between expert and public opinions
-Public trust (or lack thereof) in AI developers
-The effect of information on public perceptions of scientific issues
AIAP: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell
What motivates cooperative inverse reinforcement learning? What can we gain from recontextualizing our safety efforts from the CIRL point of view? What possible role can pre-AGI systems play in amplifying normative processes?
Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell is the eighth podcast in the AI Alignment Podcast series, hosted by Lucas Perry, and was recorded at the Beneficial AGI 2019 conference in Puerto Rico. For those of you that are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, Lucas will speak with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
In this podcast, Lucas spoke with Dylan Hadfield-Menell. Dylan is a 5th year PhD student at UC Berkeley advised by Anca Dragan, Pieter Abbeel and Stuart Russell, where he focuses on technical AI alignment research.
Topics discussed in this episode include:
-How CIRL helps to clarify AI alignment and adjacent concepts
-The philosophy of science behind safety theorizing
-CIRL in the context of varying alignment methodologies and its role
-If short-term AI can be used to amplify normative processes
17/01/2019 • 51 minutes, 52 secondes
Existential Hope in 2019 and Beyond
Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended.
The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts--Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg--about their views on the present, the future, and the path between them.
Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7th Future, she’s focusing on bringing people and organizations together to imagine and figure out how to build a better future. Allison is a researcher and program coordinator at the Foresight Institute and creator of the website existentialhope.com. Josh is cohost on the Stuff You Should Know Podcast, and he recently released a 10-part series on existential risks called The End of the World with Josh Clark. Anders is a senior researcher at the Future of Humanity Institute with a background in computational neuroscience, and for the past 20 years, he’s studied the ethics of human enhancement, existential risks, emerging technology, and life in the far future.
We hope you’ll come away feeling inspired and motivated--not just to prevent catastrophe, but to facilitate greatness.
Topics discussed in this episode include:
How technology aids us in realizing personal and societal goals.
FLI’s successes in 2018 and our goals for 2019.
Worldbuilding and how to conceptualize the future.
The possibility of other life in the universe and its implications for the future of humanity.
How we can improve as a species and strategies for doing so.
The importance of a shared positive vision for the future, what that vision might look like, and how a shared vision can still represent a wide enough set of values and goals to cover the billions of people alive today and in the future.
Existential hope and what it looks like now and far into the future.
21/12/2018 • 2 heures, 6 minutes, 15 secondes
AIAP: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah
What role does inverse reinforcement learning (IRL) have to play in AI alignment? What issues complicate IRL and how does this affect the usefulness of this preference learning methodology? What sort of paradigm of AI alignment ought we to take up given such concerns?
Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah is the seventh podcast in the AI Alignment Podcast series, hosted by Lucas Perry. For those of you that are new, this series is covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter.
Topics discussed in this episode include:
- The role of systematic bias in IRL
- The metaphilosophical issues of IRL
- IRL's place in preference learning
- Rohin's take on the state of AI alignment
- What Rohin has changed his mind about
18/12/2018 • 1 heure, 7 minutes, 35 secondes
Governing Biotechnology: From Avian Flu to Genetically-Modified Babies With Catherine Rhodes
A Chinese researcher recently made international news with claims that he had edited the first human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. Gain-of-function research a few years ago, which made avian flu more virulent, also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning.
As biotechnology and other emerging technologies become more powerful, the dual-use nature of research -- that is, research that can have both beneficial and risky outcomes -- is increasingly important to address. How can scientists and policymakers work together to ensure regulations and governance of technological development will enable researchers to do good with their work, while decreasing the threats?
On this month’s podcast, Ariel spoke with Catherine Rhodes about these issues and more. Catherine is a senior research associate and deputy director of the Center for the Study of Existential Risk. Her work has broadly focused on understanding the intersection and combination of risks stemming from technologies and risks stemming from governance. She has particular expertise in international governance of biotechnology, including biosecurity and broader risk management issues.
Topics discussed in this episode include:
~ Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
~ The roles of scientists, policymakers, and the public to ensure that technology is developed safely and ethically
~ The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
~ How scientists can anticipate whether the results of their research could be misused by someone else
~ To what extent does risk stem from technology, and to what extent does it stem from how we govern it?
30/11/2018 • 32 minutes, 40 secondes
Avoiding the Worst of Climate Change with Alexander Verbeek and John Moorhead
“There are basically two choices. We're going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don't care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” - Alexander Verbeek
On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative, where representatives from 75 countries meet annually to discuss the relationship between climate change and security. John is President of Drawdown Switzerland, an act tank to support Project Drawdown and other science-based climate solutions that reverse global warming. He blogs at Thomson Reuters, The Economist, and sciencebasedsolutions.com, and he advises on climate solutions that are positive for the economy, society, and the environment.
31/10/2018 • 1 heure, 21 minutes, 20 secondes
AIAP: On Becoming a Moral Realist with Peter Singer
Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and then over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI?
On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
In this podcast, Lucas spoke with Peter Singer. Peter is a world-renowned moral philosopher known for his work on animal ethics, utilitarianism, global poverty, and altruism. He's a leading bioethicist, the founder of The Life You Can Save, and currently holds positions at both Princeton University and The University of Melbourne.
Topics discussed in this episode include:
-Peter's transition from moral anti-realism to moral realism
-Why emotivism ultimately fails
-Parallels between mathematical/logical truth and moral truth
-Reason's role in accessing logical spaces, and its limits
-Why Peter moved from preference utilitarianism to hedonic utilitarianism
-How objectivity in ethics might affect AI alignment
18/10/2018 • 51 minutes, 14 secondes
On the Future: An Interview with Martin Rees
How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks? In this special podcast episode, Ariel speaks with cosmologist Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future.
Topics discussed in this episode include:
- Why Martin remains a technical optimist even as he focuses on existential risks
- The economics and ethics of climate change
- How AI and automation will make it harder for Africa and the Middle East to develop economically
- How high expectations for health care and quality of life also put society at risk
- Why growing inequality could be our most underappreciated global risk
- Martin’s view that biotechnology poses greater risk than AI
- Earth’s carrying capacity and the dangers of overpopulation
- Space travel and why Martin is skeptical of Elon Musk’s plan to colonize Mars
- The ethics of artificial meat, life extension, and cryogenics
- How intelligent life could expand into the galaxy
- Why humans might be unable to answer fundamental questions about the universe
11/10/2018 • 53 minutes, 2 secondes
AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.
Topics discussed in this episode include:
The sophisticated military robots developed by Soviets during the Cold War
How technology shapes human decision-making in war
“Automation bias” and why having a “human in the loop” is much trickier than it sounds
The United States’ stance on automation with nuclear weapons
Why weaker countries might have more incentive to build AI into warfare
How the US and Russia perceive first-strike capabilities
“Deep fakes” and other ways AI could sow instability and provoke crisis
The multipolar nuclear world of US, Russia, China, India, Pakistan, and North Korea
The perceived obstacles to reducing nuclear arsenals
28/09/2018 • 51 minutes, 12 secondes
AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill
How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty?
Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.
In this podcast, Lucas spoke with William MacAskill. Will is a professor of philosophy at the University of Oxford and is a co-founder of the Center for Effective Altruism, Giving What We Can, and 80,000 Hours. Will helped to create the effective altruism movement and his writing is mainly focused on issues of normative and decision theoretic uncertainty, as well as general issues in ethics.
Topics discussed in this episode include:
-Will’s current normative and metaethical credences
-The value of moral information and moral philosophy
-A taxonomy of the AI alignment problem
-How we ought to practice AI alignment given moral uncertainty
-Moral uncertainty in preference aggregation
-Moral uncertainty in deciding where we ought to be going as a society
-Idealizing persons and their preferences
-The most neglected portion of AI alignment
18/09/2018 • 56 minutes, 56 secondes
AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins
Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction?
On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to effectively govern. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute, and she's also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance.
Topics discussed in this episode include:
- Three lenses through which to view AI’s transformative power
- Emerging international and national AI governance strategies
- The risks and benefits of regulating artificial intelligence
- The importance of public trust in AI systems
- The dangers of an AI race
- How AI will change the nature of wealth and power
31/08/2018 • 44 minutes, 17 secondes
The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce
What role does metaethics play in AI alignment and safety? How might paths to AI alignment change given different metaethical views? How do issues in moral epistemology, motivation, and justification affect value alignment? What might be the metaphysical status of suffering and pleasure? What's the difference between moral realism and anti-realism and how is each view grounded? And just what does any of this really have to do with AI?
The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
If you're interested in exploring the interdisciplinary nature of AI alignment, we suggest you take a look here at a preliminary landscape which begins to map this space.
In this podcast, Lucas spoke with David Pearce and Brian Tomasik. David is a co-founder of the World Transhumanist Association, currently rebranded Humanity+. You might know him for his work on The Hedonistic Imperative, a book focusing on our moral obligation to work towards the abolition of suffering in all sentient life. Brian is a researcher at the Foundational Research Institute. He writes about ethics, animal welfare, and future scenarios on his website "Essays On Reducing Suffering."
Topics discussed in this episode include:
-What metaethics is and how, if at all, it ties into AI alignment
-Brian and David's ethics and metaethics
-Moral realism vs antirealism
-Emotivism
-Moral epistemology and motivation
-Different paths to and effects on AI alignment given different metaethics
-Moral status of hedonic tones vs preferences
-Whether we can make moral progress and what this would mean
-Moving forward given moral uncertainty
16/08/2018 • 1 heure, 45 minutes, 56 secondes
Six Experts Explain the Killer Robots Debate
Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated.
In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre, artificial intelligence professor Toby Walsh, Article 36 founder Richard Moyes, Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty, and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro.
If you don't have time to listen to the podcast in full, or if you want to skip around through the interviews, each interview starts at the timestamp below:
Paul Scharre: 3:40
Toby Walsh: 40:50
Richard Moyes: 53:30
Mary Wareham & Bonnie Docherty: 1:03:35
Peter Asaro: 1:32:40
31/07/2018 • 2 heures, 12 secondes
AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy
What role does cyber security play in AI alignment and safety? What is AI completeness? What is the space of mind design, and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leak-proof the singularity to ensure we are able to test AGI? And what is computational complexity theory anyway?
AI Safety, Possible Minds, and Simulated Worlds is the third podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
In this podcast, Lucas spoke with Roman Yampolskiy, a tenured associate professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Dr. Yampolskiy’s main areas of interest are AI safety, artificial intelligence, behavioral biometrics, cybersecurity, digital forensics, games, genetic algorithms, and pattern recognition. He is the author of over 100 publications, including multiple journal articles and books.
Topics discussed in this episode include:
-Cyber security applications to AI safety
-Key concepts in Roman's papers and books
-Is AI alignment solvable?
-The control problem
-The ethics of and detecting qualia in machine intelligence
-Machine ethics and its role, or lack thereof, in AI safety
-Simulated worlds and whether detecting base reality is possible
-AI safety publicity strategy
16/07/2018 • 1 heure, 22 minutes, 30 secondes
Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams
How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but AI's development?
On this month's podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include everyone's voice.
Charlie is the founder and CEO of the digital media strategy company Served Fresh Media, and she's also the founder of Tech 2025, which is a platform and community for people to learn about emerging technologies and discuss the implications of emerging tech on society. Randi is a doctoral student in the personal robotics group at the MIT Media Lab. She wants to understand children's interactions with AI, and she wants to develop educational platforms that empower non-experts to develop their own AI systems.
29/06/2018 • 52 minutes, 48 secondes
AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala
In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks which are both terminal in severity and transgenerational in scope. If we were to maintain the scope of a risk as transgenerational and increase its severity past terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity?
In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He has previously worked for the Machine Intelligence Research Institute, and has publications on AI safety, AI timeline forecasting, and consciousness research.
Topics discussed in this episode include:
-The definition of and a taxonomy of suffering risks
-How superintelligence has special leverage for generating or mitigating suffering risks
-How different moral systems view suffering risks
-What is possible for minds in general and how this plays into suffering risks
-The probability of suffering risks
-What we can do to mitigate suffering risks
14/06/2018 • 1 heure, 14 minutes, 40 secondes
Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler
With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense?
To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as to learn how nuclear programs in these countries are monitored, Ariel spoke with Melissa Hanham and Dave Schmerler on this month's podcast. Melissa and Dave are both nuclear weapons experts with the Center for Nonproliferation Studies at the Middlebury Institute of International Studies, where they research weapons of mass destruction with a focus on North Korea.
Topics discussed in this episode include:
the progression of North Korea's quest for nukes,
what happened and what’s next regarding the Iran deal,
how to use open-source data to monitor nuclear weapons testing, and
how younger generations can tackle nuclear risk.
In light of the on-again/off-again situation regarding the North Korea Summit, Melissa sent us a quote after the podcast was recorded, saying:
"Regardless of whether the summit in Singapore takes place, we all need to set expectations appropriately for disarmament. North Korea is not agreeing to give up nuclear weapons anytime soon. They are interested in a phased approach that will take more than a decade, multiple parties, new legal instruments, and new technical verification tools."
31/05/2018 • 42 minutes, 26 secondes
What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville
What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices like former US Secretary of Defense, William Perry, argue that the threat of nuclear conflict is growing.
On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability of Nuclear War. The report examines 60 historical incidents that could have escalated to nuclear war and presents a model for determining the odds that we could have some type of nuclear war in the future.
30/04/2018 • 57 minutes, 55 secondes
AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell
Inverse Reinforcement Learning and Inferring Human Preferences is the first podcast in the new AI Alignment series, hosted by Lucas Perry. This series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across a variety of areas, such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on Youtube, SoundCloud, or your preferred podcast site/application.
In this podcast, Lucas spoke with Dylan Hadfield-Menell, a fifth-year PhD student at UC Berkeley. Dylan’s research focuses on the value alignment problem in artificial intelligence. He is ultimately concerned with designing algorithms that can learn about and pursue the intended goal of their users, designers, and society in general. His recent work primarily focuses on algorithms for human-robot interaction with unknown preferences and reliability engineering for learning systems.
Topics discussed in this episode include:
-Inverse reinforcement learning
-Goodhart’s Law and its relation to value alignment
-Corrigibility and obedience in AI systems
-IRL and the evolution of human values
-Ethics and moral psychology in AI alignment
-Human preference aggregation
-The future of IRL
25/04/2018 • 1 heure, 25 minutes, 1 secondes
Navigating AI Safety -- From Malicious Use to Accidents
Is the malicious use of artificial intelligence inevitable? If the history of technological progress has taught us anything, it's that every "beneficial" technological breakthrough can be used to cause harm. How can we keep bad actors from using otherwise beneficial AI technology to hurt others? How can we ensure that AI technology is designed thoughtfully to prevent accidental harm or misuse?
On this month's podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Center for the Study of Existential Risk (CSER). They talk about CSER's recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts to ensure safe and beneficial AI.
30/03/2018 • 58 minutes, 3 secondes
AI, Ethics And The Value Alignment Problem With Meia Chita-Tegmark And Lucas Perry
What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can't even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration?
Ariel spoke with FLI's Meia Chita-Tegmark and Lucas Perry on this month's podcast about the value alignment problem: the challenge of aligning the goals and actions of AI systems with the goals and intentions of humans.
28/02/2018 • 49 minutes, 35 secondes
Top AI Breakthroughs and Challenges of 2017
AlphaZero, progress in meta-learning, the role of AI in fake news, the difficulty of developing fair machine learning -- 2017 was another year of big breakthroughs and big challenges for AI researchers!
To discuss this more, we invited FLI's Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month's podcast. They talked about some of the progress they were most excited to see last year and what they're looking forward to in the coming year.
31/01/2018 • 30 minutes, 52 secondes
Beneficial AI And Existential Hope In 2018
For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we've built, including: the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how we've honored one of civilization's greatest heroes.
21/12/2017 • 37 minutes, 29 secondes
Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe
What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future?
To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona State University School for the Future of Innovation in Society, where his work focuses on exploring how emerging and converging technologies can be developed and used responsibly within an increasingly complex world. Jack is a senior lecturer in science and technology studies at University College London where he works on science and innovation policy with a particular interest in emerging technologies.
30/11/2017 • 35 minutes, 2 secondes
AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene And Iyad Rahwan
As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI?
To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the podcast. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University. Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab.
31/10/2017 • 45 minutes, 29 secondes
80,000 Hours with Rob Wiblin and Brenton Mayer
If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer, or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group 80,000 Hours tries to answer. They try to figure out how individuals can set themselves up to help as many people as possible in as big a way as possible.
To learn more about their research, Ariel invited Rob Wiblin and Brenton Mayer of 80,000 Hours to the FLI podcast. In this podcast we discuss "earning to give", building career capital, the most effective ways for individuals to help solve the world's most pressing problems -- including artificial intelligence, nuclear weapons, biotechnology and climate change. If you're interested in tackling these problems, or simply want to learn more about them, this podcast is the perfect place to start.
29/09/2017 • 58 minutes, 46 secondes
Life 3.0: Being Human in the Age of Artificial Intelligence with Max Tegmark
Elon Musk has called it a compelling guide to the challenges and choices in our quest for a great future of life on Earth and beyond, while Stephen Hawking and Ray Kurzweil have referred to it as an introduction and guide to the most important conversation of our time. “It” is Max Tegmark's new book, Life 3.0: Being Human in the Age of Artificial Intelligence.
In this interview, Ariel speaks with Max about the future of artificial intelligence. What will happen when machines surpass humans at every task? Will superhuman artificial intelligence arrive in our lifetime? Can and should it be controlled, and if so, by whom? Can humanity survive in the age of AI? And if so, how can we find meaning and purpose if super-intelligent machines provide for all our needs and make all our contributions superfluous?
29/08/2017 • 34 minutes, 51 secondes
The Art Of Predicting With Anthony Aguirre And Andrew Critch
How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, what it takes to make money on the stock market, and the bystander effect regarding existential risks.
Visit metaculus.com to try your hand at the art of predicting.
Anthony is a professor of physics at the University of California, Santa Cruz. He's one of the founders of the Future of Life Institute, of the Foundational Questions Institute, and most recently of Metaculus.com, which is an online effort to crowdsource predictions about the future of science and technology. Andrew is on a two-year leave of absence from MIRI to work with UC Berkeley's Center for Human-Compatible AI. He cofounded the Center for Applied Rationality and previously worked as an algorithmic stock trader at Jane Street Capital.
31/07/2017 • 57 minutes, 59 secondes
Banning Nuclear & Autonomous Weapons With Richard Moyes And Miriam Struyk
How does a weapon go from one of the most feared to being banned? And what happens once the weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36. He's worked closely with the International Campaign to Abolish Nuclear Weapons, he helped found the Campaign to Stop Killer Robots, and he coined the phrase “meaningful human control” regarding autonomous weapons.
30/06/2017 • 41 minutes, 5 secondes
Creative AI With Mark Riedl & Scientists Support A Nuclear Ban
This is a special two-part podcast. First, Mark and Ariel discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining "common sense reasoning." They also discuss the “big red button” problem in AI safety research, the process of teaching "rationalization" to AIs, and computational creativity. Mark is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work has focused on human-AI interaction and how humans and AI systems can understand each other.
Then, we hear from scientists, politicians and concerned citizens about why they support the upcoming UN negotiations to ban nuclear weapons. Ariel interviewed a broad range of people over the past two months, and highlights are compiled here, including comments by Congresswoman Barbara Lee, Nobel Laureate Martin Chalfie, and FLI president Max Tegmark.
01/06/2017 • 43 minutes, 54 secondes
Climate Change With Brian Toon And Kevin Trenberth
I recently visited the National Center for Atmospheric Research in Boulder, CO and met with climate scientists Dr. Kevin Trenberth and CU Boulder’s Dr. Brian Toon to have a different climate discussion: not about whether climate change is real, but about what it is, what its effects could be, and how we can prepare for the future.
27/04/2017 • 47 minutes
Law and Ethics of AI with Ryan Jenkins and Matt Scherer
The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, where he studies the ethics of technology.
In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.
31/03/2017 • 58 minutes, 25 secondes
UN Nuclear Weapons Ban With Beatrice Fihn And Susi Snyder
Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. Previous nuclear treaties have included the Test Ban Treaty and the Non-Proliferation Treaty, but in the more than 70 years of the United Nations, member countries have yet to agree on a treaty to completely ban nuclear weapons. The negotiations will begin this March. To discuss the importance of this event, I interviewed Beatrice Fihn and Susi Snyder. Beatrice is the Executive Director of the International Campaign to Abolish Nuclear Weapons, also known as ICAN, where she is leading a global campaign consisting of about 450 NGOs working together to prohibit nuclear weapons. Susi is the Nuclear Disarmament Program Manager for PAX in the Netherlands and the principal author of the Don’t Bank on the Bomb series. She is an International Steering Group member of ICAN.
(Edited by Tucker Davey.)
28/02/2017 • 41 minutes, 15 secondes
AI Breakthroughs With Ian Goodfellow And Richard Mallah
2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the director of AI projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks.
31/01/2017 • 54 minutes, 19 secondes
FLI 2016 - A Year In Review
FLI's founders and core team -- Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Victoria Krakovna, Richard Mallah, Lucas Perry, David Stanley, and Ariel Conn -- discuss the developments of 2016 they were most excited about, as well as why they're looking forward to 2017.
30/12/2016 • 32 minutes, 24 secondes
Heather Roff and Peter Asaro on Autonomous Weapons
Drs. Heather Roff and Peter Asaro, two experts in autonomous weapons, talk about their work to understand and define the role of autonomous weapons, the problems with autonomous weapons, and why the ethical issues surrounding autonomous weapons are so much more complicated than other AI systems.
30/11/2016 • 34 minutes
Nuclear Winter With Alan Robock and Brian Toon
I recently sat down with meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.
31/10/2016 • 46 minutes, 48 secondes
Robin Hanson On The Age Of Em
Dr. Robin Hanson talks about the Age of Em, the future and evolution of humanity, and his research for his next book.
28/09/2016 • 24 minutes, 41 secondes
Nuclear Risk In The 21st Century
In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe.
20/09/2016 • 15 minutes, 35 secondes
Concrete Problems In AI Safety With Dario Amodei And Seth Baum
Interview with Dario Amodei of OpenAI and Seth Baum of the Global Catastrophic Risk Institute about studying short-term vs. long-term risks of AI, plus lots of discussion about Amodei's recent paper, Concrete Problems in AI Safety.
30/08/2016 • 43 minutes, 21 secondes
Earthquakes As Existential Risks?
Could an earthquake become an existential or catastrophic risk that puts all of humanity at risk? Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of the Future of Life Institute consider extreme earthquake scenarios to figure out if such a risk is plausible. Featuring seismologist Martin Chapman of Virginia Tech. (Edit: This was just for fun, in a similar vein to MythBusters. We wanted to see just how far we could go.)
25/07/2016 • 27 minutes, 39 secondes
Nuclear Interview with David Wright
An interview with David Wright by the Future of Life Institute.
14/01/2016 • 27 minutes, 30 secondes
Climate interview with Seth Baum
An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute, about whether the Paris Climate Agreement can be considered a success.