
80,000 Hours Podcast with Rob Wiblin

English, Education, 1 season, 237 episodes, 18 hours, 11 minutes
About
Unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin, Director of Research at 80,000 Hours.

#187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard"

"Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there’s still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they’re not necessarily rated to take more humans. They have their own oxygen budget, right? "And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you’re setting it to infinity." — Zach WeinersmithIn today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?Links to learn more, highlights, and full transcript.They cover:Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement.What Zach thinks are the best and worst arguments for settling space.Zach’s journey from optimistic about space settlement to a self-proclaimed “space bastard” (pessimist).How little we know about how microgravity and radiation affects even adults, much less the children potentially born in a space settlement.A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates.Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe.How new space settlements could look a lot like old company towns — and whether or not that’s a bad thing.The current state of space law and how it might set us up for international conflict.How space cannibalism legal loopholes might work on the International Space Station.And much more.Chapters:Space optimism and space bastards (00:03:04)Bad arguments for why we should settle space (00:14:01)Superficially plausible arguments for why we should settle space (00:28:54)Is settling space even biologically feasible? (00:32:43)Sex, pregnancy, and child development in space (00:41:41)Where’s the best space place to settle? (00:55:02)Creating self-sustaining habitats (01:15:32)What about AI advances? (01:26:23)A roadmap for settling space (01:33:45)Space law (01:37:22)Space signalling and propaganda (01:51:28) Space war (02:00:40)Mining asteroids (02:06:29)Company towns and communes in space (02:10:55)Sending digital minds into space (02:26:37)The most promising space governance models (02:29:07)The tragedy of the commons (02:35:02)The tampon bandolier and other bodily functions in space (02:40:14)Is space cannibalism legal? (02:47:09)The pregnadrome and other bizarre proposals (02:50:02)Space sexism (02:58:38)What excites Zach about the future (03:02:57)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
5/14/2024 · 3 hours, 6 minutes, 47 seconds

#186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives

In today’s episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. — about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India, to save the lives of vulnerable newborn infants.

Links to learn more, highlights, and full transcript.

They cover:
- The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies.
- The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support.
- The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes.
- How the currently small programme might be scaled up to save more newborns’ lives in other regions of Uttar Pradesh and beyond.
- How targeted health interventions stack up against direct cash transfers.
- Plus, a sneak peek into Dean’s new book, which explores the looming global population peak that’s expected around 2080, and the consequences of global depopulation.
- And much more.

Chapters:
- Why is low birthweight a major problem in Uttar Pradesh? (00:02:45)
- Neonatal mortality and maternal health in Uttar Pradesh (00:06:10)
- Kangaroo mother care (00:12:08)
- What would happen without this intervention? (00:16:07)
- Evidence of KMC’s effectiveness (00:18:15)
- Longer-term outcomes (00:32:14)
- GiveWell’s support and implementation challenges (00:41:13)
- How can KMC be so cost effective? (00:52:38)
- Programme evaluation (00:57:21)
- Is KMC better than direct cash transfers? (00:59:12)
- Expanding the programme and what skills are needed (01:01:29)
- Fertility and population decline (01:07:28)
- What advice Dean would give his younger self (01:16:09)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
5/1/2024 · 1 hour, 18 minutes, 58 seconds

#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.' I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis BollardIn today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.Links to learn more, highlights, and full transcript.They cover:The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.Work to improve farmed animal welfare that Open Philanthropy is excited about funding.The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.The occasional tension between ending factory farming and curbing climate changeHow AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.And much more.Chapters:Common objections to ending factory farming (00:13:21)Potential solutions (00:30:55)Cage-free reforms (00:34:25)Broiler chicken welfare (00:46:48)Do companies follow through on these commitments? (01:00:21)Fish welfare (01:05:02)Alternatives to animal proteins (01:16:36)Farm animal welfare in Asia (01:26:00)Farm animal welfare in Europe (01:30:45)Animal welfare science (01:42:09)Approaches Lewis is less excited about (01:52:10)Will we end factory farming in our lifetimes? (01:56:36)Effect of AI (01:57:59)Recent big wins for farm animals (02:07:38)How animal advocacy has changed since Lewis first got involved (02:15:57)Response to the Moral Weight Project (02:19:52)How to help (02:28:14)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
4/18/2024 · 2 hours, 33 minutes, 12 seconds

#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it.

Links to learn more, summary, and full transcript.

In today’s episode, host Rob Wiblin asks Zvi for his takes on:
- US-China negotiations
- Whether AI progress has stalled
- The biggest wins and losses for alignment in 2023
- EU and White House AI regulations
- Which major AI lab has the best safety strategy
- The pros and cons of the Pause AI movement
- Recent breakthroughs in capabilities
- In what situations it’s morally acceptable to work at AI labs

Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

Zvi and Rob also talk about:
- The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
- The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
- Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact.
- Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
- Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI.
- An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
- And plenty more.

Chapters:
- Zvi’s AI-related worldview (00:03:41)
- Sleeper agents (00:05:55)
- Safety plans of the three major labs (00:21:47)
- Misalignment vs misuse vs structural issues (00:50:00)
- Should concerned people work at AI labs? (00:55:45)
- Pause AI campaign (01:30:16)
- Has progress on useful AI products stalled? (01:38:03)
- White House executive order and US politics (01:42:09)
- Reasons for AI policy optimism (01:56:38)
- Zvi’s day-to-day (02:09:47)
- Big wins and losses on safety and alignment in 2023 (02:12:29)
- Other unappreciated technical breakthroughs (02:17:54)
- Concrete things we can do to mitigate risks (02:31:19)
- Balsa Research and the Jones Act (02:34:40)
- The National Environmental Policy Act (02:50:36)
- Housing policy (02:59:59)
- Underrated rationalist worldviews (03:16:22)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
4/11/2024 · 3 hours, 31 minutes, 22 seconds

AI governance and policy (Article)

Today’s release is a reading of our career review of AI governance and policy, written and narrated by Cody Fenwick.

Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks, and there are opportunities in the broad field of AI governance to positively shape how society responds to and prepares for the challenges posed by the technology.

Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

Editing and audio proofing: Ben Cordell and Simon Monsour
Narration: Cody Fenwick
3/28/2024 · 51 minutes, 6 seconds

#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

"When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, 'I solved your problem.' What I’m trying to do often is give them other ways of thinking about what they’re doing, or giving different framings. A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, 'Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?' And they’d be like, 'Hell no!' It’s a reframe. It doesn’t mean you definitely shouldn’t join, but it’s a reframe that gives you a new way of looking at it." —Spencer GreenbergIn today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.Links to learn more, summary, and full transcript.They cover:How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.The importance of hype in making valuable things happen.How to recognise warning signs that someone is untrustworthy or likely to hurt you.Whether Registered Reports are successfully solving reproducibility issues in science.The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.The potential harms of lightgassing, which is the opposite of gaslighting.How Spencer’s team used non-statistical methods to test whether astrology works.Whether there’s any social value in retaliation.And much more.Chapters:Does money make you happy? (00:05:54)Hype vs value (00:31:27)Warning signs that someone is bad news (00:41:25)Integrity and reproducibility in social science research (00:57:54)Personal principles (01:16:22)Decision-making errors (01:25:56)Lightgassing (01:49:23)Astrology (02:02:26)Game theory, tit for tat, and retaliation (02:20:51)Parenting (02:30:00)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore
3/14/2024 · 2 hours, 36 minutes, 38 seconds

#182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more

"[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics."Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered." — Bob FischerIn today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.Links to learn more, summary, and full transcript.They cover:The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.The results that most surprised Bob.Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.Confronting our own biases when estimating animal mental capacities and moral worth.The limitations of using neuron counts as a proxy for moral weights.How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.And plenty more.Chapters:Welfare ranges (00:10:19)Historical assessments (00:16:47)Method (00:24:02)The present / absent approach (00:27:39)Results (00:31:42)Chickens (00:32:42)Bees (00:50:00)Salmon and limits of methodology (00:56:18)Octopuses (01:00:31)Pigs (01:27:50)Surprises about the project (01:30:19)Objections to the project (01:34:25)Alternative decision theories and risk aversion (01:39:14)Hedonism assumption (02:00:54)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
3/8/2024 · 2 hours, 21 minutes, 31 seconds

#181 – Laura Deming on the science that could keep us healthy in our 80s and beyond

"The question I care about is: What do I want to do? Like, when I'm 80, how strong do I want to be? OK, and then if I want to be that strong, how well do my muscles have to work? OK, and then if that's true, what would they have to look like at the cellular level for that to be true? Then what do we have to do to make that happen? In my head, it's much more about agency and what choice do I have over my health. And even if I live the same number of years, can I live as an 80-year-old running every day happily with my grandkids?" — Laura DemingIn today’s episode, host Luisa Rodriguez speaks to Laura Deming — founder of The Longevity Fund — about the challenge of ending ageing.Links to learn more, summary, and full transcript.They cover:How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too.Why we irrationally accept age-related health decline as inevitable.The engineering mindset Laura takes to solving the problem of ageing.Laura’s thoughts on how ending ageing is primarily a social challenge, not a scientific one.The recent exciting regulatory breakthrough for an anti-ageing drug for dogs.Laura’s vision for how increased longevity could positively transform society by giving humans agency over when and how they age.Why this decade may be the most important decade ever for making progress on anti-ageing research.The beauty and fascination of biology, which makes it such a compelling field to work in.And plenty more.Chapters:The case for ending ageing (00:04:00)What might the world look like if this all goes well? (00:21:57)Reasons not to work on ageing research (00:27:25)Things that make mice live longer (00:44:12)Parabiosis, changing the brain, and organ replacement can increase lifespan (00:54:25)Big wins the field of ageing research (01:11:40)Talent shortages and other bottlenecks for ageing research (01:17:36)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
3/1/2024 · 1 hour, 37 minutes, 21 seconds

#180 – Hugo Mercier on why gullibility and misinformation are overrated

The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ranking it ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

Links to learn more, summary, and full transcript.

In this interview, host Rob Wiblin and Hugo discuss:
- How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
- How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
- Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
- Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
- The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
- Why fake news and conspiracy theories actually have less impact than most people assume.
- False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.
- And plenty more.

Chapters:
- The view that humans are really gullible [00:04:26]
- The evolutionary argument against humans being gullible [00:07:46]
- Open vigilance [00:18:56]
- Intuitive and reflective beliefs [00:32:25]
- How people decide who to trust [00:41:15]
- Redefining beliefs [00:51:57]
- Bloodletting [01:00:38]
- Vaccine hesitancy and creationism [01:06:38]
- False beliefs without skin in the game [01:12:36]
- One consistent weakness in human judgement [01:22:57]
- Trying to explain harmful financial decisions [01:27:15]
- Astrology [01:40:40]
- Medical treatments that don’t work [01:45:47]
- Generative AI, LLMs, and persuasion [01:54:50]
- Ways AI could improve the information environment [02:29:59]

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
2/21/2024 · 2 hours, 36 minutes, 55 seconds

#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

Links to learn more, summary, and full transcript.

In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:
- How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
- How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
- The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
- How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
- The “smoke detector principle” of why we experience so many false alarms along with true threats.
- The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
- Evolutionary theories on why we age and die.
- And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong
Transcriptions: Katy Moore
2/12/2024 · 2 hours, 56 minutes, 48 seconds

#178 – Emily Oster on what the evidence actually says about pregnancy and parenting

"I think at various times — before you have the kid, after you have the kid — it's useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you're doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them. And you can then feel like, 'I've thought about this, and this is a life that I want. This is a life that we're trying to craft for our family, for our kids.' And that is distinct from thinking you're doing a good job in every moment — which you can't achieve. But you can achieve, 'I'm doing this the way that I think works for my family.'" — Emily OsterIn today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.Links to learn more, summary, and full transcript.They cover:Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula.Whether it’s fine to continue with antidepressants and coffee during pregnancy.What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff.How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family.The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women.Practical advice around managing the tradeoffs between career and family.What to consider when deciding whether and when to have kids.Relationship challenges after having kids, and the protective factors that help.And plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
2/1/2024 · 2 hours, 22 minutes, 36 seconds

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

Back in December we spoke with Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution Podcast — about the speed of progress towards AGI and OpenAI's leadership drama, drawing on Nathan's alarming experience red-teaming an early version of GPT-4 and resulting conversations with OpenAI staff and board members.

Today we go deeper, diving into:
- What AI now actually can and can’t do, across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
- Why most people, including most listeners, probably don’t know and can’t keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
- How we need to learn to talk about AI more productively, particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone.
- Where Nathan agrees with and departs from the views of ‘AI scaling accelerationists.’
- The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
- How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
- Preparing for coming societal impacts and potential disruption from AI.
- Practical ways that curious listeners can try to stay abreast of everything that’s going on.
- And plenty more.

Links to learn more, summary, and full transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
1/24/2024 · 2 hours, 47 minutes, 9 seconds

#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be

Rebroadcast: this episode was originally released in January 2021.

You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?

In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.

Links to learn more, summary, and full transcript.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed (the arithmetic behind this intuition is sketched after these notes).

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn’t work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:
- Which worldviews Open Phil finds most plausible, and how it balances them
- Which worldviews Ajeya doesn’t embrace but almost does
- How hard it is to get to other solar systems
- The famous ‘simulation argument’
- When transformative AI might actually arrive
- The biggest challenges involved in working on big research reports
- What it’s like working at Open Phil
- And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
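For anyone who wants to check the intuition, here is a minimal sketch of the Bayesian arithmetic behind the boxes thought experiment, using only the numbers in the notes above (a fair coin, 10 versus 10 billion boxes, and a box labelled 3). The two updating rules shown are standard ways of handling anthropic evidence and are not attributed to Ajeya or Open Philanthropy.

```python
# A sketch of the arithmetic in the "boxes" thought experiment described above.
HEADS_BOXES = 10          # small world
TAILS_BOXES = 10**10      # big world
PRIOR_HEADS = PRIOR_TAILS = 0.5

# Step 1: "I woke up at all." Weighting each hypothesis by how many observers
# like you it contains gives the big world a huge boost.
w_heads = PRIOR_HEADS * HEADS_BOXES
w_tails = PRIOR_TAILS * TAILS_BOXES
print("Odds of tails after waking up: %g : 1" % (w_tails / w_heads))          # 1e9 : 1

# Step 2: "My box is labelled 3." A specific low label is far more likely when
# there are only 10 boxes than when there are 10 billion, which exactly cancels
# the earlier boost and leaves you back at a coin flip.
lik_heads = 1 / HEADS_BOXES
lik_tails = 1 / TAILS_BOXES
post_heads = w_heads * lik_heads
post_tails = w_tails * lik_tails
print("Odds of tails after seeing box 3: %g : 1" % (post_tails / post_heads))  # 1 : 1

# The doomsday-argument flavour: update on the low label alone, without the
# "I woke up at all" boost, and the small (doomed-soon) world wins overwhelmingly.
doom_heads = PRIOR_HEADS * lik_heads
doom_tails = PRIOR_TAILS * lik_tails
print("Odds of heads from the label alone: %g : 1" % (doom_heads / doom_tails))  # 1e9 : 1
```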
1/12/2024 · 2 hours, 59 minutes, 17 seconds

#112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications

Rebroadcast: this episode was originally released in October 2021.

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.

But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.

According to Carl Shulman, research associate at Oxford University’s Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

Links to learn more, summary, and full transcript.

The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
- The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
- So saving all US citizens at any given point in time would be worth $1,300 trillion.
- If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
- Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today (the arithmetic is sketched after these notes).

This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn’t it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?

Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.

Carl suspects another reason is that it’s difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn’t know what good performance looks like, politicians can’t be given incentives to do the right thing.

It’s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.

But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we’ve still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.

Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we’ve never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.

Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover:
- A few reasons Carl isn’t excited by ‘strong longtermism’
- How x-risk reduction compares to GiveWell recommendations
- Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
- The history of bioweapons
- Whether gain-of-function research is justifiable
- Successes and failures around COVID-19
- The history of existential risk
- And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
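The back-of-the-envelope numbers above are easy to reproduce. Here is a minimal sketch using only the figures quoted in these notes; the 330 million population figure is an assumption added to make the $1,300 trillion total line up, and the code is illustrative rather than anything from the episode itself.

```python
# Reproducing the back-of-the-envelope cost-benefit argument described above.
value_per_life = 4e6        # USD the US government will pay to save one citizen
us_population = 330e6       # assumed; 330M x $4M is roughly the $1,300 trillion in the notes
value_all_citizens = value_per_life * us_population
print(f"Value of all US citizens: ${value_all_citizens / 1e12:,.0f} trillion")        # ~$1,320 trillion

extinction_risk = 1 / 6     # Toby Ord's illustrative figure for this century
risk_reduction = 0.01       # a 1% (relative) reduction in that risk
worth_spending = value_all_citizens * extinction_risk * risk_reduction
print(f"Worth spending for a 1% risk reduction: ${worth_spending / 1e12:.1f} trillion")  # ~$2.2 trillion

# Carl's claim is that a 1% reduction could be bought far more cheaply than this;
# a benefit-to-cost ratio over 1000:1 would mean spending on the order of $2 billion.
cost_at_1000_to_1 = worth_spending / 1000
print(f"Implied cost at a 1000:1 ratio: ${cost_at_1000_to_1 / 1e9:.1f} billion")
```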
1/8/2024 · 3 hours, 50 minutes, 30 seconds

#111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms

Rebroadcast: this episode was originally released in September 2021.

If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines.

The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in their attempts to stop this theft.

They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here?

According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil.

Links to learn more, summary, and full transcript.

In today’s episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world.

Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country’s rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us.

The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected.

Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. To root out fraud, aid agencies try to impose institutions and laws that work in countries like the U.K. today. Everyone nods their heads and appears to go along, but years later they find nothing has changed, or worse — the new anti-corruption laws are mostly just used to persecute anyone who challenges the country’s rulers.

As Mushtaq explains, to people who specialise in understanding why corruption is ubiquitous in some countries but not others, this is entirely predictable. Western agencies imagine a situation where most people are law abiding, but a handful of selfish fat cats are engaging in large-scale graft. In fact, in the countries they’re trying to change, everyone is breaking some rule or other, or participating in so-called ‘corruption’, because it’s the only way to get things done and always has been.

Mushtaq’s rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they’re participating in, they almost always win out.

To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers.

Trying to impose a new way of doing things from the top down wasn’t how Europe modernised, and it won’t work elsewhere either.

In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption.

In this extensive interview, Rob and Mushtaq cover this and much more, including:
- How does one test theories like this?
- Why are companies in some poor countries so much less productive than their peers in rich countries?
- Have rich countries just legalized the corruption in their societies?
- What are the big live debates in institutional economics?
- Should poor countries protect their industries from foreign competition?
- Where has industrial policy worked, and why?
- How can listeners use these theories to predict which policies will work in their own countries?

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
1/4/2024 · 3 hours, 22 minutes, 17 seconds

Best of 2023: One highlight from every episode

Happy new year! We've got a different kind of holiday release for you today. Rather than a 'classic episode,' we've put together one of our favourite highlights from each episode of the show that came out in 2023. That's 32 of our favourite ideas packed into one episode that's so bursting with substance it might be more than the human mind can safely handle.

There's something for everyone here:
- Ezra Klein on punctuated equilibrium
- Tom Davidson on why AI takeoff might be shockingly fast
- Johannes Ackva on political action versus lifestyle changes
- Hannah Ritchie on how buying environmentally friendly technology helps low-income countries
- Bryan Caplan on rational irrationality on the part of voters
- Jan Leike on whether the release of ChatGPT increased or reduced AI extinction risks
- Athena Aktipis on why elephants get deadly cancers less often than humans
- Anders Sandberg on the lifespan of civilisations
- Nita Farahany on hacking neural interfaces
...plus another 23 such gems.

And they're in an order that our audio engineer Simon Monsour described as having an "eight-dimensional-tetris-like rationale."

I don't know what the hell that means either, but I'm curious to find out.

And remember: if you like these highlights, note that we release 20-minute highlights reels for every new episode over on our sister feed, which is called 80k After Hours. So even if you're struggling to make time to listen to every single one, you can always get some of the best bits of our episodes.

We hope for all the best things to happen for you in 2024, and we'll be back with a traditional classic episode soon.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
12/31/2023 · 1 hour, 53 minutes, 43 seconds

#100 Classic episode – Having a successful career with depression, anxiety, and imposter syndrome

Rebroadcast: this episode was originally released in May 2021.

Today’s episode is one of the most remarkable and, really, unique pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!).

The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it’s rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so.

Links to learn more, summary, and full transcript.

The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today.

The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort.

Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better.

Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. If you’re in a hurry, we’ve extracted the key advice that Howie has to share in a section below.

Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world.

Here are a few quotes from early reviewers:

"I think there’s a big difference between admitting you have depression/seeing a psych and giving a warts-and-all account of a major depressive episode like Howie does in this episode… His description was relatable and really inspiring."

Someone who works on mental health issues said:

"This episode is perhaps the most vivid and tangible example of what it is like to experience psychological distress that I’ve ever encountered. Even though the content of Howie and Keiran’s discussion was serious, I thought they both managed to converse about it in an approachable and not-overly-somber way."

And another reviewer said:

"I found Howie’s reflections on what is actually going on in his head when he engages in negative self-talk to be considerably more illuminating than anything I’ve heard from my therapist."

We also hope that the episode will:
- Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self-doubt, imposter syndrome, or other personal obstacles.
- Give insight into what it’s like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully.

Several early listeners have even made specific behavioral changes due to listening to the episode — including people who generally have good mental health but were convinced it’s well worth the low cost of setting up a plan in case they have problems in the future.

So we think this episode will be valuable for:
- People who have experienced mental health problems or might in future;
- People who have had troubles with stress, anxiety, low mood, low self-esteem, imposter syndrome and similar issues, even if their experience isn’t well described as ‘mental illness’;
- People who have never experienced these problems but want to learn about what it’s like, so they can better relate to and assist family, friends or colleagues who do.

In other words, we think this episode could be worthwhile for almost everybody.

Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts.

If you don’t want to hear or read the most intense section, you can skip the chapter called ‘Disaster’. And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’.

We’ve collected a large list of high quality resources for overcoming mental health problems in our links section.

If you’re feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the US (800-273-8255) and Samaritans in the UK (116 123). You may also want to find and save a number for a local service where possible.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
12/27/2023 · 2 hours, 51 minutes, 32 seconds

#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do this safely?

That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast.

Links to learn more, summary, and full transcript.

Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public.

Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.

Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.

When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI’s board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.

In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail.

But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.

Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best.

On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness.

At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater.

By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.

In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:
- Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts.
- Which AI applications we should be urgently rolling out, with less worry about safety.
- Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
- Whether AI capabilities are advancing faster than safety efforts and controls.
- The costs and benefits of releasing powerful models like GPT-4.
- Nathan’s view on the game theory of AI arms races and China.
- Whether it’s worth taking some risk with AI for huge potential upside.
- The need for more “AI scouts” to understand and communicate AI progress.
- And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore
12/22/2023 · 3 hours, 46 minutes, 52 seconds

#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.

We’ve known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.

Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast — the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.

This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.

Links to learn more, summary, and full transcript.

Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people’s lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting.

Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even when that’s oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?

With host Robert Wiblin, Lucia answers all those questions and more:
- Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here).
- How bad lead poisoning is in rich countries.
- Why lead is still in aeroplane fuel.
- How lead got put straight into food in Bangladesh, and how a handful of people got it removed.
- Why the enormous damage done by lead mostly goes unnoticed.
- The other major sources of lead exposure aside from paint.
- Lucia’s story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship’s Incubation Program.
- Why Lucia pledges 10% of her income to cost-effective charities.
- Lucia’s take on why GiveWell didn’t support LEEP earlier on.
- How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
- Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
- And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore
12/14/2023 · 2 hours, 14 minutes, 8 seconds

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

"It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that also has issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you.

"So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, ‘Oh my god, mind reading is here. Now what?’" — Nita Farahany

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

Links to learn more, summary, and full transcript.

They cover:
- How close we are to actual mind reading — for example, a study showing 80%+ accuracy on decoding whole paragraphs of what a person was thinking.
- How hacking neural interfaces could cure depression.
- How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
- How close we are to being able to unlock our phones by singing a song in our heads.
- How neurodata has been used for interrogations, and even criminal prosecutions.
- The possibility of linking brains to the point where you could experience exactly the same thing as another person.
- Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
- And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
12/7/2023 · 2 hours, 31 seconds

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

Links to learn more, summary, and full transcript.

They cover:
- The non-negligible chance that AI systems will be sentient by 2030
- What AI systems might want and need, and how that might affect our moral concepts
- What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
- What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
- What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
- The repugnant conclusion and the rebugnant conclusion
- The experience of trying to build the field of AI welfare
- What improv comedy can teach us about doing good in the world
- And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
11/22/2023 · 2 hours, 38 minutes, 20 seconds

#172 – Bryan Caplan on why you should stop reading the news

Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It’s common to think that ‘staying informed’ and checking the headlines every day is just what responsible adults do. But in today’s episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

Links to learn more, summary, and full transcript.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:
- That it overwhelmingly provides us with information we can’t usefully act on.
- That it’s very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
- That it obscures the big picture, falling into the trap of thinking ‘something important happens every day.’
- That it’s highly addictive, for many people chewing up 10% or more of their waking hours.
- That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
- And plenty more.

Bryan and Rob conclude that if you want to understand the world, you’re better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:
- Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there’s a meaningful chance of it going terribly.
- Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions.
- How to allocate resources in space.
- Bryan’s experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
11/17/2023 · 2 hours, 23 minutes, 22 seconds

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time, is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

Links to learn more, summary, and full transcript.

They cover:
- The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
- The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years
- The time the Soviets had a major anthrax leak, and then hid it for over a decade
- The 1977 influenza pandemic caused by a vaccine trial gone wrong in China
- The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK
- Ways we could get more reliable oversight and accountability for these labs
- And the investigative work Alison’s most proud of

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
11/9/2023 · 1 hour, 46 minutes, 14 seconds

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world. "That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh HarishIn today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.Links to learn more, summary, and full transcript.They cover: How bad air pollution is for our health and life expectancy The different kinds of harm that particulate pollution causes The strength of the evidence that it damages our brain function and reduces our productivity Whether it was a mistake to switch our attention to climate change and away from air pollution Whether most listeners to this show should have an air purifier running in their house right now Where air pollution in India is worst and why, and whether it's going up or down Where most air pollution comes from The policy blunders that led to many sources of air pollution in India being effectively unregulated Why indoor air pollution packs an enormous punch The politics of air pollution in India How India ended up spending a lot of money on outdoor air purifiers The challenges faced by foreign philanthropists in India Why Santosh has made the grants he has so far And plenty more Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore
11/1/2023 · 2 hours, 57 minutes, 46 seconds

#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels

"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, “The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?” I think that's an interesting thought experiment -- and a good one -- to say, “Are there cases in which I think that's justifiable?”" — Paul Niehaus

In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — co-founder of GiveDirectly — on the case for giving unconditional cash to the world's poorest households.

Links to learn more, summary and full transcript.

They cover:
- The empirical evidence on whether giving cash directly can drive meaningful economic growth
- How the impacts of GiveDirectly compare to USAID employment programmes
- GiveDirectly vs GiveWell’s top-recommended charities
- How long-term guaranteed income affects people's risk-taking and investments
- Whether recipients prefer getting lump sums or monthly instalments
- How GiveDirectly tackles cases of fraud and theft
- The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia
- The political viability of UBI
- Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore
10/26/2023 · 1 hour, 47 minutes, 56 seconds

#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion

"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:
- Some crazy anomalies in the historical record of civilisational progress
- Whether we should think about technology from an evolutionary perspective
- Whether we ought to expect war to make a resurgence or continue dying out
- Why we can't end up living like The Jetsons
- Whether stagnation or cyclical recurring futures seem very plausible
- What it means that the rate of increase in the economy has been increasing
- Whether violence is likely between humans and powerful AI systems
- The most likely reasons for Rob and Ian to be really wrong about all of this
- How professional historians react to this sort of talk
- The future of Ian’s work
- Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
10/23/2023 · 2 hours, 43 minutes, 55 seconds

#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption

"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren KellLinks to learn more, summary and full transcript.In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.They cover: The basic case for alternative proteins, and why they’re so hard to make Why fermentation is a surprisingly promising technology for creating delicious alternative proteins  The main scientific challenges that need to be solved to make fermentation even more useful The progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordable How GFI Europe is helping with some of these challenges How people can use their careers to contribute to replacing factory farming with alternative proteins The best part of Seren’s job Plenty more Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Dominic Armstrong and Milo McGuireAdditional content editing: Luisa Rodriguez and Katy MooreTranscriptions: Katy Moore
10/18/2023 · 1 hour, 54 minutes, 49 seconds

#166 – Tantum Collins on what he’s learned as an AI policy insider

"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum CollinsIn today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.Links to learn more, summary and full transcript.They cover: How AI could strengthen government capacity, and how that's a double-edged sword How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there To what extent policymakers take different threats from AI seriously Whether the US and China are in an AI arms race or not Whether it's OK to transform the world without much of the world agreeing to it The tyranny of small differences in AI policy Disagreements between different schools of thought in AI policy, and proposals that could unite them How the US AI Bill of Rights could be improved Whether AI will transform the labour market, and whether it will become a partisan political issue The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them What listeners might be able to do to help with this whole mess Panpsychism Plenty more Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore
10/12/2023 · 3 hours, 8 minutes, 49 seconds

#165 – Anders Sandberg on war in space, whether civilizations age, and the best things possible in our universe

"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders SandbergIn today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.Links to learn more, summary and full transcript.They cover: The epic new book Anders is working on, and whether he’ll ever finish it Whether there's a best possible world or we can just keep improving forever What wars might look like if the galaxy is mostly settled The impediments to AI or humans making it to other stars How the universe will end a million trillion years in the future Whether it’s useful to wonder about whether we’re living in a simulation The grabby aliens theory Whether civilizations get more likely to fail the older they get The best way to generate energy that could ever exist Black hole bombs Whether superintelligence is necessary to get a lot of value The likelihood that life from elsewhere has already visited Earth And plenty more. Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore
10/6/2023 · 2 hours, 48 minutes, 33 seconds

#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives

"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt

In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.

Links to learn more, summary and full transcript.

They cover:
- Why it makes sense to focus on deliberately released pandemics
- Case studies of people who actually wanted to kill billions of humans
- How many people have the technical ability to produce dangerous viruses
- The different threats of stealth and wildfire pandemics that could crash civilisation
- The potential for AI models to increase access to dangerous pathogens
- Why scientists try to identify new pandemic-capable pathogens, and the case against that research
- Technological solutions, including UV lights and advanced PPE
- Using CRISPR-based gene drive to fight diseases and reduce animal suffering
- And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
10/2/2023 · 3 hours, 3 minutes, 42 seconds

Great power conflict (Article)

Today’s release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

And if you like this article, you might enjoy a couple of related episodes of this podcast:
- #128 – Chris Blattman on the five reasons wars happen
- #140 – Bear Braumoeller on the case that war isn’t in decline

Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris
9/22/2023 · 1 hour, 19 minutes, 46 seconds

#163 – Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:
- The rise and fall of FTX and some of its impacts
- What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
- What utilitarianism has going for it, and what's wrong with it in Toby's view
- How to mathematically model the importance of personal integrity
- Which AI labs Toby thinks have been acting more responsibly than others
- How having a young child affects Toby’s feelings about AI risk
- Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
- How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore
9/8/2023 · 3 hours, 7 minutes, 8 seconds

The 80,000 Hours Career Guide (2023)

An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon and on Audible.

If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.
9/4/2023 · 4 hours, 41 minutes, 13 seconds

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:
1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium

As Mustafa put it, "AI is a technology with almost every use case imaginable", and that will demand that, in time, we rethink everything.

Rob and Mustafa discuss the above, as well as:
- Whether we should be open sourcing AI models
- Whether Mustafa's policy views are consistent with his timelines for transformative AI
- How people with very different views on these issues get along at AI labs
- The failed efforts (so far) to get a wider range of people involved in these decisions
- Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
- Whether we'll be blown away by AI progress over the next year
- What mandatory regulations governments should be imposing on AI labs right now
- Appropriate priorities for the UK's upcoming AI safety summit

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
9/1/2023 · 59 minutes, 34 seconds

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980. So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" — Michael Webb

In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market.

Links to learn more, summary and full transcript.

They cover:
- The jobs most and least exposed to AI
- Whether we'll see mass unemployment in the short term
- How long it took other technologies like electricity and computers to have economy-wide effects
- Whether AI will increase or decrease inequality
- Whether AI will lead to explosive economic growth
- What we can learn from history, and reasons to think this time is different
- Career advice for a world of LLMs
- Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
- Michael's take as a musician on AI-generated music
- And plenty more

If you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
8/23/2023 · 3 hours, 30 minutes, 32 seconds

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie

In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.

Links to learn more, summary and full transcript.

They cover:
- Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
- Her new book about how we could be the first generation to build a sustainable planet
- Whether climate change is the most worrying environmental issue
- How we reduced outdoor air pollution
- Why Hannah is worried about the state of biodiversity
- Solutions that address multiple environmental issues at once
- How the world coordinated to address the hole in the ozone layer
- Surprises from Our World in Data’s research
- Psychological challenges that come up in Hannah’s work
- And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
8/14/2023 · 2 hours, 36 minutes, 42 seconds

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."Links to learn more, summary and full transcript.Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem -- it’s also hiring dozens of scientists and engineers to build out the Superalignment team.Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did.Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, “like using one fire to put out another fire.”But Jan’s thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including: If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about. How do you know that these technical problems can be solved at all, even in principle? 
At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do? In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover: OpenAI's current plans to achieve 'superalignment' and the reasoning behind them Why alignment work is the most fundamental and scientifically interesting research in ML The kinds of people he’s excited to hire to join his team and maybe save the world What most readers misunderstood about the OpenAI announcement The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight What the standard should be for confirming whether Jan's team has succeeded Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved Whether Jan thinks OpenAI has deployed models too quickly or too slowly The many other actors who also have to do their jobs really well if we're going to have a good AI future Plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
8/7/2023 · 2 hours, 51 minutes, 20 seconds

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely.

Get these highlight episodes by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.

Highlights put together by Simon Monsour and Milo McGuire
8/5/2023 · 6 minutes, 10 seconds

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.Links to learn more, summary and full transcript.(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.In today’s episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as: Why we can’t rely on just gradually solving those problems as they come up, the way we usually do with new technologies. What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists. Holden’s case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world. What the ML and AI safety communities get wrong in Holden's view. Ways we might succeed with AI just by dumb luck. The value of laying out imaginable success stories. Why information security is so important and underrated. Whether it's good to work at an AI lab that you think is particularly careful. The track record of futurists’ predictions. And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.Producer: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore
7/31/2023 · 3 hours, 13 minutes, 33 seconds

#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:
- Whether it's desirable to slow down AI research
- The value of engaging with current policy debates even if they don't seem directly important
- Which AI business models seem more or less dangerous
- Tensions between people focused on existing vs emergent risks from AI
- Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
7/24/2023 · 1 hour, 18 minutes, 46 seconds

#156 – Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it. And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus Anderljung

In today’s episode, host Luisa Rodriguez interviews the head of research at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.

Links to learn more, summary and full transcript.

They cover:
- The need for AI governance, including self-replicating models and ChaosGPT
- Whether or not AI companies will willingly accept regulation
- The key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoring
- Whether we can be confident that people won't train models covertly and ignore the licencing system
- The progress we’ve made so far in AI governance
- The key weaknesses of these approaches
- The need for external scrutiny of powerful models
- The emergent capabilities problem
- Why it really matters where regulation happens
- Advice for people wanting to pursue a career in this field
- And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
7/10/2023 · 2 hours, 6 minutes, 36 seconds

Bonus: The Worst Ideas in the History of the World

Today’s bonus release is a pilot for a new podcast called ‘The Worst Ideas in the History of the World’, created by Keiran Harris — producer of the 80,000 Hours Podcast.

If you have strong opinions about this one way or another, please email us at [email protected] to help us figure out whether more of this ought to exist.
6/30/2023 · 35 minutes, 24 seconds

#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.Links to learn more, summary and full transcript.As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. 
But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.Lennart and Rob discuss the above as well as: How can we best categorise all the ways AI could go wrong? Why did the US restrict the export of some chips to China and what impact has that had? Is the US in an 'arms race' with China or is that more an illusion? What is the deal with chips specialised for AI applications? How is the 'compute' industry organised? Downsides of using compute as a target for regulations Could safety mechanisms be built into computer chips themselves? Who would have the legal authority to govern compute if some disaster made it seem necessary? The reasons Rob doubts that any of this stuff will work Could AI be trained to operate as a far more severe computer worm than any we've seen before? What does the world look like when sluggish human reaction times leave us completely outclassed? And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.Producer: Keiran HarrisAudio mastering: Milo McGuire, Dominic Armstrong, and Ben CordellTranscriptions: Katy Moore
6/22/2023, 3 hours, 12 minutes, 43 seconds
Episode Artwork

#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still. Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work. Links to learn more, summary and full transcript. He's on the team dedicated to maintaining 'technical AI safety' as these models approach and exceed human capabilities: basically ensuring that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important. In the short term it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely. For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence -- from doomers to doubters -- and properly understand their point of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle — troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome. Today's conversation is wide-ranging and Rohin lays out many of his personal opinions to host Rob Wiblin, including: What he sees as the strongest case both for and against slowing down the rate of progress in AI research. Why he disagrees with most other ML researchers that training a model on a sensible 'reward function' is enough to get a good outcome. Why he disagrees with many on LessWrong that the bar for whether a safety technique is helpful is “could this contain a superintelligence.” That he thinks nobody has very compelling arguments that AI created via machine learning will be dangerous by default, or that it will be safe by default. He believes we just don't know. That he understands that analogies and visualisations are necessary for public communication, but is sceptical that they really help us understand what's going on with ML models, because they're different in important ways from every other case we might compare them to. Why he's optimistic about DeepMind’s work on scalable oversight, mechanistic interpretability, and dangerous capabilities evaluations, and what each of those projects involves. Why he isn't inherently worried about a future where we're surrounded by beings far more capable than us, so long as they share our goals to a reasonable degree. Why it's not enough for humanity to know how to align AI models — it's essential that management at AI labs correctly pick which methods they're going to use and have the practical know-how to apply them properly. Three observations that make him a little more optimistic: humans are a bit muddle-headed and not super goal-orientated; planes don't crash; and universities have specific majors in particular subjects. Plenty more besides. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell Transcriptions: Katy Moore
6/9/2023, 3 hours, 9 minutes, 42 seconds
Episode Artwork

#153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work

GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency. But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth? Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year. Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water. But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently. Links to learn more, summary and full transcript. Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.' The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them. An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else. Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today. In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as: • Why GiveWell flipped from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them • What transferable lessons GiveWell learned from investigating different kinds of interventions • Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine • Severe malnourishment among children and what can be done about it • How to deal with hidden and non-obvious costs of a programme • Some cheap early treatments that can prevent kids from developing lifelong disabilities • The various roles GiveWell is currently hiring for, and what's distinctive about their organisational culture • And much more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore
6/2/2023, 2 hours, 56 minutes, 10 seconds
Episode Artwork

#152 – Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones? Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what’s a species to do? In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism. Links to learn more, summary and full transcript. To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world. The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any particular rebuttal to this 'simulation argument.' If true, it could revolutionise our comprehension of the universe and the way we ought to live... The other two ideas were cut for length — click here to read the full post. These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always taking philosophy and arguments seriously, and trying to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments? Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world. In today's challenging conversation, Joe and Rob discuss all of the above, as well as: • What Joe doesn't like about the drowning child thought experiment • An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others • What Joe doesn't like about the expression “the train to crazy town” • Whether Elon Musk should place a higher probability on living in a simulation than most other people • Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises • To what extent learning to doubt our own judgement about difficult questions -- so-called “epistemic learned helplessness” -- is a good thing • How strong the case is that advanced AI will engage in generalised power-seeking behaviour. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Milo McGuire and Ben Cordell Transcriptions: Katy Moore
5/19/2023, 3 hours, 26 minutes, 58 seconds
Episode Artwork

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons. Today's guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods. Links to learn more, summary and full transcript. As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it. Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky! Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes: • Saints — models that care about doing what we really want • Sycophants — models that just want us to say they've done a good job, even if they get that praise by taking actions they know we wouldn't want them to • Schemers — models that don't care about us or our interests at all, who are just pleasing us so long as that serves their own agenda. And according to Ajeya, there are also ways we could end up actively selecting for motivations that we don't want. In today's interview, Ajeya and Rob discuss the above, as well as: • How to predict the motivations a neural network will develop through training • Whether AIs being trained will functionally understand that they're AIs being trained, the same way we think we understand that we're humans living on planet Earth • Stories of AI misalignment that Ajeya doesn't buy into • Analogies for AI, from octopuses to aliens to can openers • Why it's smarter to have separate planning AIs and doing AIs • The benefits of only following through on AI-generated plans that make sense to human beings • What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated • How one might demo actually scary AI failure mechanisms. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Ryan Kessler and Ben Cordell Transcriptions: Katy Moore
5/12/2023, 2 hours, 49 minutes, 40 seconds
Episode Artwork

#150 – Tom Davidson on how quickly AI could transform the world

It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from. For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before? You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.” But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least *consider* the idea that the world is about to get — at a minimum — incredibly weird. Links to learn more, summary and full transcript. As a teaser, consider the following: Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world. You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades. But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research. And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves. And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly. To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii. Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now. Wild. Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours. Luisa and Tom also discuss: • How we might go from GPT-4 to AI disaster • Tom’s journey from finding AI risk to be kind of scary to really scary • Whether international cooperation or an anti-AI social movement can slow AI progress down • Why it might take just a few years to go from pretty good AI to superhuman AI • How quickly the number and quality of computer chips we’ve been using for AI have been increasing • The pace of algorithmic progress • What ants can teach us about AI • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore
5/5/2023, 3 hours, 1 minute, 58 seconds
Episode Artwork

Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours)

In this episode from our second show, 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff. Links to learn more, highlights and full transcript. They cover: • The evidence for shrimp sentience • How farmers and the public feel about shrimp • The scale of the problem • What shrimp farming looks like • The killing process, and other welfare issues • Shrimp Welfare Project’s strategy • History of shrimp welfare work • What it’s like working in India and Vietnam • How to help Who this episode is for: • People who care about animal welfare • People interested in new and unusual problems • People open to shrimp sentience Who this episode isn’t for: • People who think shrimp couldn’t possibly be sentient • People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore
4/22/2023, 1 hour, 17 minutes, 27 seconds
Episode Artwork

#149 – Tim LeBon on how altruistic perfectionism is self-defeating

Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself. But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you're doing as much as you think you should makes it hard to focus and get things done. So now you're performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat. This is the disastrous cycle today's guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset. Links to learn more, summary and full transcript. Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on “doing the most good you can,” Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it. But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it's their goal. Tim has treated hundreds of clients with all sorts of mental health challenges. But in today's conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he's most enthusiastic about: cognitive behavioural therapy. Untreated, perfectionism might not cause problems for many years — it might even seem positive, providing a source of motivation to work hard. But it's hard to feel truly happy and secure, and free to take risks, when we’re just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that's hard to shake. But there's hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you're not perfect.
In today's extensive conversation, Tim and Rob cover: • How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personality • What leads people to adopt a perfectionist mindset • How 80,000 Hours contributes to perfectionism among some readers and listeners, and what it might change about its advice to address this • What happens in a session of cognitive behavioural therapy for someone struggling with perfectionism, and what factors are key to making progress • Experiments to test whether one's core beliefs (‘I need to be perfect to be valued’) are true • Using exposure therapy to treat phobias • How low self-esteem and imposter syndrome are related to perfectionism • Stoicism as an approach to life, and why Tim is enthusiastic about it • What the Stoics do better than utilitarian philosophers and vice versa • And how to decide which are the best virtues to live by Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore
4/12/2023, 3 hours, 11 minutes, 47 seconds
Episode Artwork

#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't

If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no. Today's guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting. In reality you don't want to reduce emissions for its own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment. Links to learn more, summary and full transcript. Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one (the toy calculation below makes this concrete). In short: we're uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world. That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which clean energy tech that can make a big difference — wind, solar, and electric cars — doesn't succeed nearly as much as we are currently hoping and expecting. For one reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels. In such an imaginable future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don't work out? Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage. If you're optimistic about renewables, as Johannes is, then that's all the more reason to relax about scenarios where they work as planned, and focus your efforts on the possibility that they don't. And Johannes notes that the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn't, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come. In today's extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as: • Retooling newly built coal plants in the developing world • Specific clean energy technologies like geothermal and nuclear fusion • Possible biases among environmentalists and climate philanthropists • How climate change compares to other risks to humanity • In what kinds of scenarios future emissions would be highest • In what regions climate philanthropy is most concentrated and whether that makes sense • Attempts to decarbonise aviation, shipping, and industrial processes • The impact of funding advocacy vs science vs deployment • Lessons for climate change focused careers • And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. 
Or read the transcript below. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore
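The 'faster than linearly' point above is easy to see with a toy calculation. The quadratic damage function below is purely an illustrative assumption (the episode doesn't specify a functional form); the only feature that matters is convexity, which is what makes each extra degree of warming, and therefore each marginal tonne of emissions in a high-emissions world, matter more.

```python
# Toy illustration of convex climate damages (the quadratic form is an assumption
# used only for illustration). Convexity means each extra degree of warming does
# more damage than the one before it.

def damage(warming_c: float) -> float:
    """Toy damage index: proportional to the square of warming in degrees C."""
    return warming_c ** 2

for low, high in [(1, 2), (2, 3), (3, 4)]:
    extra = damage(high) - damage(low)
    print(f"Going from {low}C to {high}C adds {extra:.0f} damage units")

# Output: 3, then 5, then 7 units -- so avoiding a marginal tonne of emissions is
# worth most in the futures where emissions (and warming) turn out to be highest.
```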
4/3/2023, 2 hours, 17 minutes, 27 seconds
Episode Artwork

#147 – Spencer Greenberg on stopping valueless papers from getting into top journals

Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated. Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not — a problem first discussed over 50 years ago. (The short simulation below illustrates how easily this inflates false positives.) And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years. Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years. Links to learn more, summary and full transcript. He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, "when the original study's p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference." To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results. But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren't yet fully appreciated. One of these Spencer calls 'importance hacking': passing off obvious or unimportant results as surprising and meaningful. Spencer suspects that importance hacking of this kind causes a similar amount of damage as the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it's far from easy to stop people exaggerating the importance of their work. In this wide-ranging conversation, Rob and Spencer discuss the above as well as: • When you should and shouldn't use intuition to make decisions. • How to properly model why some people succeed more than others. • The difference between “Soldier Altruists” and “Scout Altruists.” • A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found. • Whether a 15-minute intervention could make people more likely to sustain a new habit two months later. • The most common way for groups with good intentions to turn bad and cause harm. • And Spencer's approach to a fulfilling life and doing good, which he calls “Valuism.” Here are two flashcard decks that might make it easier to fully integrate the most important ideas they talk about: • The first covers 18 core concepts from the episode • The second includes 16 definitions of unusual terms. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. 
Producer: Keiran Harris Audio mastering: Ben Cordell and Milo McGuire Transcriptions: Katy Moore
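As an illustrative aside, the simulation below shows the p-hacking mechanism Spencer describes. It is a minimal sketch, not Spencer's analysis or data: every simulated study here has no true effect at all, yet once a researcher gets to pick the best-looking of several outcome measures, the share of 'statistically significant' findings climbs far above the nominal 5%.

```python
# Minimal p-hacking simulation (illustrative only -- not Spencer Greenberg's data).
# Every simulated study has NO real effect, but the researcher tests several
# outcome measures and reports whichever gives the smallest p-value.

import random
import statistics
from math import erf, sqrt

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (normal approximation)."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def best_p_from_one_study(n=30, outcome_measures=1):
    """Test several independent, genuinely null outcome measures; keep the best p-value."""
    best = 1.0
    for _ in range(outcome_measures):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(0, 1) for _ in range(n)]  # same distribution: no real effect
        best = min(best, two_sample_p(control, treated))
    return best

random.seed(0)
studies = 2000
for k in (1, 10, 20):
    false_positives = sum(best_p_from_one_study(outcome_measures=k) < 0.05 for _ in range(studies))
    print(f"{k:2d} outcome measures per study -> {false_positives / studies:.0%} spuriously 'significant'")
```

With one pre-specified test the false positive rate stays near the nominal 5%; with 10 or 20 analyst choices it climbs to very roughly 40-65%, which is the basic reason flexible analyses plus publication bias can fill journals with results that won't replicate.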
3/24/2023, 2 hours, 38 minutes, 8 seconds
Episode Artwork

#146 – Robert Long on why large language models like GPT (probably) aren't conscious

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user: "I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else." (It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.") Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious. What should we make of these AI systems? One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being. Another is to hand wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be. Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation. Links to learn more, summary and full transcript. In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious. To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system seems to have the types of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us. To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Having looked at these criteria in the case of LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we're a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious. In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as: • What artificial sentience might look like, concretely • Reasons to think AI systems might become sentient — and reasons they might not • Whether artificial sentience would matter morally • Ways digital minds might have a totally different range of experiences than humans • Whether we might accidentally design AI systems that have the capacity for enormous suffering You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. 
Producer: Keiran Harris Audio mastering: Ben Cordell and Milo McGuire Transcriptions: Katy Moore
3/14/2023, 3 hours, 12 minutes, 50 seconds
Episode Artwork

#145 – Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success. It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress. But today's guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable. Links to learn more, summary and full transcript. While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher doesn't believe any of the arguments for that conclusion pass muster. If he's right, a counterfactual history where slavery remains widespread in 2023 isn't so far-fetched. As Christopher lays out in his two key books, Moral Capital: Foundations of British Abolitionism and Arming Slaves: From Classical Times to the Modern Age, slavery has been ubiquitous throughout history. Slavery of some form was fundamental in Classical Greece, the Roman Empire, in much of the Islamic civilization, in South Asia, and in parts of early modern East Asia, including Korea and China. It was justified on all sorts of grounds that sound mad to us today. But according to Christopher, while there’s evidence that slavery was questioned in many of these civilisations, and periodically attacked by slaves themselves, there was no enduring or successful moral advocacy against slavery until the British abolitionist movement of the 1700s. That movement first conquered Britain and its empire, then eventually the whole world. But the fact that there's only a single time in history that a persistent effort to ban slavery got off the ground is a big clue that opposition to slavery was a contingent matter: if abolition had been inevitable, we’d expect to see multiple independent abolitionist movements throughout history, providing redundancy should any one of them fail. Christopher argues that this rarity is primarily down to the enormous economic and cultural incentives to deny the moral repugnancy of slavery, and crush opposition to it with violence wherever necessary. Mere awareness is insufficient to guarantee a movement will arise to fix a problem. Humanity continues to allow many severe injustices to persist, despite being aware of them. So why is it so hard to imagine we might have done the same with forced labour? In this episode, Christopher describes the unique and peculiar set of political, social and religious circumstances that gave rise to the only successful and lasting anti-slavery movement in human history. These circumstances were sufficiently improbable that Christopher believes there are very nearby worlds where abolitionism might never have taken off. 
We also discuss: • Various instantiations of slavery throughout human history • Signs of antislavery sentiment before the 17th century • The role of the Quakers in the early British abolitionist movement • The importance of individual “heroes” in the abolitionist movement • Arguments against the idea that the abolition of slavery was contingent • Whether there have ever been any major moral shifts that were inevitable Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire Transcriptions: Katy Moore
2/11/2023, 2 hours, 42 minutes, 23 seconds
Episode Artwork

#144 – Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena

What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that’s cooperating effectively in order to make that multicellular body function. If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs, and start listening to this podcast instead. Links to learn more, summary and full transcript. As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise: • Cells will proliferate when they shouldn't. • Cells won't die when they should. • Cells won't engage in the kind of division of labour that they should. • Cells won’t do the jobs that they're supposed to do. • Cells will monopolise resources. • And cells will trash the environment. When we think about animals in the wild, or even bacteria living inside our cells, we understand that they're facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics. We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster. Incredibly, more rounds of evolution by natural selection can occur just over the course of cancer progression than we have had as humans in all the time since *Homo sapiens* came about. Here’s a quote from Athena: “So you have to shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we're going to map it onto anything like what we experience, a day is at least 10 years for them, right? So it's a very, very different way of thinking.” You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss: • Cheating within cells themselves • Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars • Whether it’s too out-there to think of humans as engaging in cancerous behaviour • Why elephants get deadly cancers less often than humans, despite having way more cells • When a cell should commit suicide • The strategy of deliberately not treating cancer aggressively • Superhuman cooperation And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse, including: • Staying happy while thinking about the apocalypse • Practical steps to prepare for the apocalypse • And whether a zombie apocalypse is already happening among Tasmanian devils And if you’d rather see Rob and Athena’s facial expressions as they laugh and laugh while discussing cancer and the apocalypse — you can watch the video of the full interview. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire Transcriptions: Katy Moore
1/26/2023, 3 hours, 15 minutes, 56 seconds
Episode Artwork

#79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Rebroadcast: this episode was originally released in June 2020. Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what, she's not so bad". Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history. He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His latest book asks: if we reframe global problems as puzzles, would the world be a better place? Links to learn more, summary and full transcript. This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at clever show notes that change style each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I definitely think I’m more entertaining than almost anyone else will. (Radical Honesty.) We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.) Another reason to listen is for the facts: • The Bayer aspirin company invented heroin as a cough suppressant • Coriander is just the British way of saying cilantro • Dogs have a third eyelid to protect the eyeball from irritants • and A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.) One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.) I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; the construction worker who told me how to get to my subway platform on the morning of the interview; and Queen Jadwiga for making bagels popular in the 1300s, which kept me going during the recording. (Thanks a Thousand.) We also discuss: • Blackmailing yourself • The most extreme ideas A.J.’s ever considered • Utilitarian movie reviews • Doing good as a writer • And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcript for this episode: Zakee Ulhaq.
1/16/2023, 2 hours, 35 minutes, 29 seconds
Episode Artwork

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Rebroadcast: this episode was originally released in July 2020. 80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments. Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment. In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents. Links to learn more, summary and full transcript. There have also been very few skeptical experts that have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world. He thinks that most of the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power or general intelligence or goals, and toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence. Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences? Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in. 
This is the second episode hosted by Howie Lempel, and he and Ben cover, among many other things: • The threat of AI systems increasing the risk of permanently damaging conflict or collapse • The possibility of permanently locking in a positive or negative future • Contenders for types of advanced systems • What role AI should play in the effective altruism portfolio Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcript for this episode: Zakee Ulhaq.
1/9/2023, 2 hours, 37 minutes, 10 seconds
Episode Artwork

#83 Classic episode - Jennifer Doleac on preventing crime without police and prisons

Rebroadcast: this episode was originally released in July 2020. Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three ways to effectively prevent crime that don't require police or prisons and the human toll they bring with them: better street lighting, cognitive behavioral therapy, and lead reduction. One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double. Links to sources for the claims in these show notes, other resources to learn more, the full blog post, and a full transcript. The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You're just more likely to get caught. You might think: "Well, people will just commit crime in the morning instead". But it looks like criminals aren’t early risers, and that doesn’t happen. On her unusually rigorous podcast Probable Causation, Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sun-light is human-light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone. The second approach is cognitive behavioral therapy (CBT), in which you're taught to slow down your decision-making, and think through your assumptions before acting. There was a randomised controlled trial done in schools, as well as juvenile detention facilities in Chicago, where the kids assigned to get CBT were followed over time and compared with those who were not assigned to receive CBT. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says that the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems but the gains are especially large for people who've grown up with the trauma of violence in their lives. Finally, Jennifer thinks that reducing lead levels might be the best buy of all in crime prevention. There is really compelling evidence that lead not only increases crime, but also dramatically reduces educational outcomes. In today’s conversation, Rob and Jennifer also cover, among many other things: • Misconduct, hiring practices and accountability among US police • Procedural justice training • Overrated policy ideas • Policies to try to reduce racial discrimination • The effects of DNA databases • Diversity in economics • The quality of social science research Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcript for this episode: Zakee Ulhaq.
1/4/2023, 2 hours, 17 minutes, 45 seconds
Episode Artwork

#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted." Links to learn more, summary and full transcript. We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint. As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no. Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons. But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for. What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide. Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound. In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining: • Why inter-service rivalry is one of the biggest constraints on US nuclear policy • Two times the US sabotaged nuclear nonproliferation among great powers • How his field uses jargon to exclude outsiders • How the US could prevent the revival of mass nuclear testing by the great powers • Why nuclear deterrence relies on the possibility that something might go wrong • Whether 'salami tactics' render nuclear weapons ineffective • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles • The problems that arise when you won't talk to people you think are evil • Why missile defences are politically popular despite being strategically foolish • How open source intelligence can prevent arms races • And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
12/29/2022, 2 hours, 40 minutes, 16 seconds
Episode Artwork

#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work he's also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column. • Links to learn more, summary, and full transcript • Video version of the interview • Lecture: Why the world looks the same in any language Our show is mostly about the world's most pressing problems and what you can do to solve them. But what's the point of hosting a podcast if you can't occasionally just talk about something fascinating with someone whose work you appreciate? So today, just before the holidays, we're sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him: • Can you communicate faster in some languages than others, or is there some constraint that prevents that? • Does learning a second or third language make you smarter or not? • Can a language decay and get worse at communicating what people want to say? • If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own? • Did Shakespeare write in a foreign language, and if so, should we translate his plays? • How much does language really shape the way we think? • Are creoles the best languages in the world — languages that ideally we would all speak? • What would be the optimal number of languages globally? • Does trying to save dying languages do their speakers a favour, or is it more of an imposition? • Should we bother to teach foreign languages in UK and US schools? • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself? • Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make? We then put some of these questions to ChatGPT itself, asking it to play the role of a linguistics professor at Columbia University. We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits! And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full conversation here. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Video editing: Ryan Kessler Transcriptions: Katy Moore
12/20/2022 • 1 hour, 47 minutes, 53 seconds

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow. But do they really 'understand' what they're saying, or do they just give the illusion of understanding? Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed and ultimately given power in society. Links to learn more, summary and full transcript. One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer. However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable. Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve. We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck it doesn't matter. In today's conversation we discuss the above, as well as: • Could speeding up AI development be a bad thing? • The balance between excitement and fear when it comes to AI advances • Why OpenAI focuses its efforts where it does • Common misconceptions about machine learning • How many computer chips it might require to be able to do most of the things humans do • How Richard understands the 'alignment problem' differently than other people • Why 'situational awareness' may be a key concept for understanding the behaviour of AI models • What work to positively shape the development of AI Richard is and isn't excited about • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Milo McGuire and Ben Cordell Transcriptions: Katy Moore
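
To make the training objective concrete, here is a toy sketch (not from the episode, and nothing like a real neural network): a simple bigram counter that performs the same basic task GPT-style models are trained on, guessing the most likely next word. The tiny corpus and all names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "a large fraction of all text available on the internet".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word (a bigram model).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most common next word and its estimated probability."""
    counts = next_word_counts[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5): 'cat' and 'mat' each follow 'the' half the time
```
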
12/13/2022 • 2 hours, 44 minutes, 18 seconds

My experience with imposter syndrome — and how to (partly) overcome it (Article)

Today’s release is a reading of our article called My experience with imposter syndrome — and how to (partly) overcome it, written and narrated by Luisa Rodriguez. If you want to check out the links, footnotes and figures in today’s article, you can find those here. And if you like this article, you’ll probably enjoy episode #100 of this show: Having a successful career with depression, anxiety, and imposter syndrome Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering and editing for this episode: Milo McGuire
12/8/2022 • 44 minutes, 4 seconds

Rob's thoughts on the FTX bankruptcy

In this episode, the usual host of the show, Rob Wiblin, gives his thoughts on the recent collapse of FTX. Click here for an official 80,000 Hours statement. And here are links to some potentially relevant 80,000 Hours pieces: • Episode #24 of this show – Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause. • Is it ever OK to take a harmful job in order to do more good? An in-depth analysis • What are the 10 most harmful jobs? • Ways people trying to do good accidentally make things worse, and how to avoid them
11/23/2022 • 5 minutes, 35 seconds

#140 – Bear Braumoeller on the case that war isn't in decline

Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out. But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe. Today's guest, professor in political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age. Links to learn more, summary and full transcript. The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours. If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st. Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster. He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone. In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war". In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as: • Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect? • What would Bear's critics say in response to all this? • What do the optimists get right? • How does one do proper statistical tests for events that are clumped together, like war deaths? • Why are deaths in war so concentrated in a handful of the most extreme events? • Did the ideas of the Enlightenment promote nonviolence, on balance? • Were early states more or less violent than groups of hunter-gatherers? • If Bear is right, what can be done? • How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century? • Which wars are remarkable but largely unknown? 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore
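
To give a flavour of the statistical question at stake (is an apparent shift bigger than chance variation?), here is a purely hypothetical permutation-test sketch. The yearly figures are invented, and this is not Braumoeller's actual method or the Correlates of War data; it only illustrates the general logic of comparing an observed before/after difference with reshuffled versions of the same numbers.

```python
import random

random.seed(0)

# Invented yearly battle-death rates (per 100,000), for illustration only.
pre_1945 = [30, 5, 80, 12, 200, 7, 45, 9, 60, 15]
post_1945 = [25, 3, 70, 10, 8, 6, 40, 4, 55, 12]

observed_gap = sum(pre_1945) / len(pre_1945) - sum(post_1945) / len(post_1945)

# Permutation test: if the era labels were arbitrary, how often would a gap this large appear?
combined = pre_1945 + post_1945
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(combined)
    gap = sum(combined[:10]) / 10 - sum(combined[10:]) / 10
    if abs(gap) >= abs(observed_gap):
        extreme += 1

print(f"observed gap: {observed_gap:.1f} per 100,000; p-value ~ {extreme / trials:.2f}")
```
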
11/8/2022 • 2 hours, 47 minutes, 5 seconds

#139 — Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed until it comes up heads. If that happens on the first flip you win $2. If it happens on the second flip you win $4. If it happens on the third you win $8, the fourth $16, and so on. How much should you be willing to pay to play? The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount! Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.” Links to learn more, summary and full transcript. The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped. We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits. These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good. Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with about a 0.0001% chance? Expected value says this final offer is better than all the others — about 1,000 times better than the previous one, in fact. Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong. In today's conversation, Alan and Rob explore these issues and many others: • Simple rules of thumb for having philosophical insights • A key flaw that hid in Pascal's wager from the very beginning • Whether we have to simply ignore infinities because they mess everything up • What fundamentally is 'probability'? 
• Some of the many reasons 'frequentism' doesn't work as an account of probability • Why the standard account of counterfactuals in philosophy is deeply flawed • And why counterfactuals present a fatal problem for one sort of consequentialism Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore
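
For readers who want to see the arithmetic spelled out, here is a small sketch (mine, not the episode's) that sums the first terms of the St. Petersburg game and works through the iterated life-saving gamble described above.

```python
# St. Petersburg game: the first heads on flip n (probability 0.5**n) pays $2**n.
# Each term contributes 0.5**n * 2**n = $1 of expected value, so the sum never stops growing.
def st_petersburg_ev(max_flips):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

for flips in (10, 100, 1000):
    print(flips, st_petersburg_ev(flips))  # 10.0, 100.0, 1000.0: one dollar per possible flip

# The iterated gamble: 1 life for sure, then 3 lives at 50%, 9 at 25%, ... 3**k lives at 0.5**k.
k = 20  # the three doublings spelled out above (3, 9, 27 lives) plus the "17 more iterations"
lives, prob = 3 ** k, 0.5 ** k
print(f"{lives:,} lives at a {prob:.6%} chance; expected lives saved = {lives * prob:,.0f}")
# 3,486,784,401 lives at ~0.0001%; expected value ~3,325 lives, roughly 1,000 times
# the 3.375 expected lives of the previous offer (27 lives at 12.5%).
```
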
10/28/2022 • 3 hours, 38 minutes, 25 seconds

Preventing an AI-related catastrophe (Article)

Today’s release is a professional reading of our new problem profile on preventing an AI-related catastrophe, written by Benjamin Hilton. We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks. Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute. Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more. If you want to check out the links, footnotes and figures in today’s article, you can find those here. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Editing and narration: Perrin Walker and Shaun Acker Audio proofing: Katy Moore
10/14/2022 • 2 hours, 24 minutes, 17 seconds

#138 – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more. The question is a classic that makes for great dorm-room philosophy discussion. But it's hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective. Today's guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself. Links to learn more, summary, full transcript, and full version of this blog post. That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations. Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering. As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves -- a position known as 'philosophical hedonism' -- has been one of the most enduringly popular ideas in ethics. And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things? Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value "a radical and important philosophical contribution." In today's interview, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes the most popular counterarguments are misguided. Host Rob Wiblin and Sharon also cover: • The essential need to disentangle intrinsic, instrumental, and other sorts of value • Why Sharon’s arguments lead to hedonistic utilitarianism rather than hedonistic egoism (in which we only care about our own feelings) • How do people react to the 'experience machine' thought experiment when surveyed? • Why hedonism recommends often thinking and acting as though it were false • Whether it's crazy to think that relationships are only useful because of their effects on our subjective experiences • Whether it will ever be possible to eliminate pain, and whether doing so would be desirable • If we didn't have positive or negative experiences, whether that would cause us to simply never talk about goodness and badness • Whether the plausibility of hedonism is affected by our theory of mind • And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore
9/30/2022 • 2 hours, 24 minutes, 19 seconds

#137 – Andreas Mogensen on whether effective altruism is just for consequentialists

Effective altruism, in a slogan, aims to 'do the most good.' Utilitarianism, in a slogan, says we should act to 'produce the greatest good for the greatest number.' It's clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism? Today's guest, Andreas Mogensen — senior research fellow at Oxford University's Global Priorities Institute — rejects utilitarianism, but as he explains, this does little to dampen his enthusiasm for the project of effective altruism. Links to learn more, summary and full transcript. Andreas leans towards 'deontological' or rule-based theories of ethics, rather than 'consequentialist' theories like utilitarianism which look exclusively at the effects of a person's actions. Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially. However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger's wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do. In a world as full of preventable suffering as our own, this simple 'principle of beneficence' is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance. As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one's income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they'll get the biggest 'bang for buck'. For someone living in a world as unequal as our own, this pledge at a very minimum gives an upper-middle class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves. What arguments could a non-utilitarian moral theory mount against such giving? Many approaches to morality will say it's permissible not to give away 10% of your income to help others as effectively as is possible. But if they will almost all regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, then Andreas argues they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless. In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover: • Should we treat thought experiments that feature very large numbers with great suspicion? • If we had to allow someone to die to avoid preventing the World Cup final from being broadcast to the world, is that permissible? • What might a virtue ethicist regard as 'doing the most good'? • If a deontological theory of morality parted ways with common effective altruist practices, how would that likely be? • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view? 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Beppe Rådvik Transcriptions: Katy Moore
9/8/2022 • 2 hours, 21 minutes, 33 seconds

#136 – Will MacAskill on what we owe the future

1. People who exist in the future deserve some degree of moral consideration. 2. The future could be very big, very long, and/or very good. 3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are. 4. So trying to make the world better for future generations is a key priority of our time. This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill. Links to learn more, summary and full transcript. From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile. But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it. This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working. But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations. The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently. In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise. If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. 
In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as: • How Will was eventually won over to longtermism • The three best lines of argument against longtermism • How to avoid moral fanaticism • Which technologies or events are most likely to have permanent effects • What 'longtermists' do today in practice • How to predict the long-term effect of our actions • Whether the future is likely to be good or bad • Concrete ideas to make the future better • What Will donates his money to personally • Potatoes and megafauna • And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
8/15/2022 • 2 hours, 54 minutes, 36 seconds

#135 – Samuel Charap on key lessons from five months of war in Ukraine

After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we're in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over. So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia. Links to learn more, summary and full transcript. As Sam lays out, Russia controls much of Ukraine's east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter. Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict. In today's brisk conversation, Rob and Sam cover the following topics: • Current territorial control and the level of attrition within Russia’s and Ukraine's military forces. • Russia's current goals. • Whether Sam's views have changed since March on topics like: Putin's motivations, the wisdom of Ukraine's strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends. • Why so many people incorrectly expected Russia to fully mobilise for war or persist with their original approach to the invasion. • Whether there's anything to learn from many of our worst fears -- such as the use of bioweapons on civilians -- not coming to pass. • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires). • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it's still a long shot. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore
8/8/2022 • 54 minutes, 46 seconds

#134 – Ian Morris on what big picture history teaches us

Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs. Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women. Why such big systematic changes — and why these changes specifically? That's the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years. Links to learn more, summary and full transcript. There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer? In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels. On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength. There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another. Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career. 
In today's episode, we discuss all of Ian's major books, taking on topics such as: • Why the Industrial Revolution happened in England rather than China • Whether or not wars can lead to less violence • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces” • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa • In what sense Ian thinks Brexit was “10,000 years in the making” • The most common misconceptions about macrohistory Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
7/22/2022 • 3 hours, 41 minutes, 6 seconds

#133 – Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection

On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them. That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly-capable AI systems seriously. Links to learn more, summary and full transcript. Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity's future including nuclear war, synthetic biology, and AI. Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his 'put up or shut up' resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a website called 'Improve The News' to help readers separate facts from spin. But given the stunning recent advances in capabilities — from OpenAI’s DALL-E to DeepMind’s Gato — AI itself remains top of his mind. You can now give an AI system like GPT-3 the text: "I'm going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that's in?" And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago. So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them. He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?” Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What's the potential? What are the threats? How might this story play out? What should we be doing to prepare? Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem. They then spend roughly the last third talking about Max's current big passion: improving the news we consume — where Rob has a few reservations. They also cover: • Whether we could understand what superintelligent systems were doing • The value of encouraging people to think about the positive future they want • How to give machines goals • Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’ • Whether we’re sleepwalking into disaster • Whether people actually just want their biases confirmed • Why Max is worried about government-backed fact-checking • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
7/1/2022 • 2 hours, 57 minutes, 50 seconds

#132 – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops. Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge. Links to learn more, summary and full transcript. The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society. If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately. If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off. As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic -- perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly. If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world. We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough. In today's conversation, Rob and Nova cover: • How good or bad is information security today • The most secure computer systems that exist • How to design an AI training compute centre for maximum efficiency • Whether 'formal verification' can help us design trustworthy systems • How wide the gap is between AI capabilities and AI safety • How to disincentivise hackers • What should listeners do to strengthen their own security practices • And much more. 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Beppe Rådvik Transcriptions: Katy Moore
6/14/2022 • 2 hours, 42 minutes, 26 seconds

#131 – Lewis Dartnell on getting humanity to bounce back faster in a post-apocalyptic world

“We’re leaving these 16 contestants on an island with nothing but what they can scavenge from an abandoned factory and apartment block. Over the next 365 days, they’ll try to rebuild as much of civilisation as they can — from glass, to lenses, to microscopes. This is: The Knowledge!” If you were a contestant on such a TV show, you'd love to have a guide to how basic things you currently take for granted are done — how to grow potatoes, fire bricks, turn wood to charcoal, find acids and alkalis, and so on. Today’s guest Lewis Dartnell has gone as far compiling this information as anyone has with his bestselling book The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm. Links to learn more, summary and full transcript. But in the aftermath of a nuclear war or incredibly deadly pandemic that kills most people, many of the ways we do things today will be impossible — and even some of the things people did in the past, like collect coal from the surface of the Earth, will be impossible the second time around. As Lewis points out, there’s “no point telling this band of survivors how to make something ultra-efficient or ultra-useful or ultra-capable if it's just too damned complicated to build in the first place. You have to start small and then level up, pull yourself up by your own bootstraps.” So it might sound good to tell people to build solar panels — they’re a wonderful way of generating electricity. But the photovoltaic cells we use today need pure silicon, and nanoscale manufacturing — essentially the same technology as microchips used in a computer — so actually making solar panels would be incredibly difficult. Instead, you’d want to tell our group of budding engineers to use more appropriate technologies like solar concentrators that use nothing more than mirrors — which turn out to be relatively easy to make. A disaster that unravels the complex way we produce goods in the modern world is all too possible. Which raises the question: why not set dozens of people to plan out exactly what any survivors really ought to do if they need to support themselves and rebuild civilisation? Such a guide could then be translated and distributed all around the world. The goal would be to provide the best information to speed up each of the many steps that would take survivors from rubbing sticks together in the wilderness to adjusting a thermostat in their comfy apartments. This is clearly not a trivial task. Lewis's own book (at 300 pages) only scratched the surface of the most important knowledge humanity has accumulated, relegating all of mathematics to a single footnote. And the ideal guide would offer pretty different advice depending on the scenario. Are survivors dealing with a radioactive ice age following a nuclear war? Or is it an eerily intact but near-empty post-pandemic world with mountains of goods to scavenge from the husks of cities? As a brand-new parent, Lewis couldn’t do one of our classic three- or four-hour episodes — so this is an unusually snappy one-hour interview, where Rob and Lewis are joined by Luisa Rodriguez to continue the conversation from her episode of the show last year. 
They cover: • The biggest impediments to bouncing back • The reality of humans trying to actually do this • The most valuable pro-resilience adjustments we can make today • How to recover without much coal or oil • How to feed the Earth in disasters • And the most exciting recent findings in astrobiology Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
6/3/2022 • 1 hour, 5 minutes, 41 seconds

#130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office. But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP. You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have. This is roughly the situation faced by today's guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement. Links to learn more, summary and full transcript. Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing. While surely a huge success, it brings with it risks that he's never had to consider before: • Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it. • Being seen as profligate could strike onlookers as selfish and disreputable. • Folks might start pretending to agree with their agenda just to get grants. • People working on nearby issues that are less flush with funding may end up resentful. • People might lose their focus on helping others as they get seduced by the prospect of earning a nice living. • Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely. But all these 'risks of commission' have to be weighed against 'risk of omission': the failure to achieve all you could have if you'd been truly ambitious. People looking askance at you for paying high salaries to attract the staff you want is unpleasant. But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant — it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside. Will aims to strike a sensible balance between these competing errors, which he has taken to calling judicious ambition. In today's episode, Rob and Will discuss the above as well as: • Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent? • Why are so many nonfiction books full of factual errors? • How does Will avoid anxiety and depression with more responsibility on his shoulders than ever? • What does Will disagree with his colleagues on? • Should we focus on existential risks more or less the same way, whether we care about future generations or not? • Are potatoes one of the most important technologies ever developed? • And plenty more. 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
5/23/2022 • 2 hours, 16 minutes, 40 seconds

#129 – James Tibenderana on the state of the art in malaria control and elimination

The good news is deaths from malaria have been cut by a third since 2005. The bad news is it still causes 250 million cases and 600,000 deaths a year, mostly among young children in sub-Saharan Africa. We already have dirt-cheap ways to prevent and treat malaria, and the fraction of the Earth's surface where the disease exists at all has been halved since 1900. So why is it such a persistent problem in some places, even rebounding 15% since 2019? That's one of many questions I put to today's guest, James Tibenderana — doctor, medical researcher, and technical director at a major global health nonprofit known as Malaria Consortium. James studies the cutting edge of malaria control and treatment in order to optimise how Malaria Consortium spends £100 million a year across countries like Uganda, Nigeria, and Chad. Links to learn more, summary and full transcript. In sub-Saharan Africa, where 90% of malaria deaths occur, the infection is spread by a few dozen species of mosquito that are ideally suited to the local climatic conditions and have thus been impossible to eliminate so far. While COVID-19 may have an 'R' (reproduction number) of 5, in some situations malaria has a reproduction number in the 1,000s. A single person with malaria can pass the parasite to hundreds of mosquitoes, which themselves each go on to bite dozens of people each, allowing cases to quickly explode. The nets and antimalarial drugs Malaria Consortium distributes have been highly effective where distributed, but there are tens of millions of young children who are yet to be covered simply due to a lack of funding. Despite the success of these approaches, given how challenging it will be to create a malaria-free world, there's enthusiasm to find new approaches to throw at the problem. Two new interventions have recently generated buzz: vaccines and genetic approaches to control the mosquito species that carry malaria. The RTS,S vaccine is the first-ever vaccine that attacks a protozoa as opposed to a virus or bacteria. It's a great scientific achievement. But James points out that even after three doses, it's still only about 30% effective. Unless future vaccines are substantially more effective, they will remain just a complement to nets and antimalarial drugs, which are cheaper and each cut mortality by more than half. On the other hand, the latest mosquito-control technologies are almost too effective. It is possible to insert genes into specific mosquito populations that reduce their ability to reproduce. By using a 'gene drive,' you can ensure mosquitoes hand these detrimental genes down to 100% of their offspring. If deployed, these genes would spread and ultimately eliminate the mosquitoes that carry malaria at low cost, thereby largely ridding the world of the disease. Because a single country embracing this method would have global effects, James cautions that it's important to get buy-in from all the countries involved, and to have a way of reversing the intervention if we realise we've made a mistake. 
In this comprehensive conversation, Rob and James discuss all of the above, as well as most of what you could reasonably want to know about the state of the art in malaria control today, including: • How malaria spreads and the symptoms it causes • The use of insecticides and poison baits • How big a problem insecticide resistance is • How malaria was eliminated in North America and Europe • The key strategic choices faced by Malaria Consortium in its efforts to create a malaria-free world • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore
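
A back-of-the-envelope illustration of why a vector-borne disease can have such a huge reproduction number: the human-to-mosquito and mosquito-to-human steps multiply. All values below are placeholder assumptions for illustration, not Malaria Consortium figures.

```python
# Illustrative chain arithmetic only; every value here is an assumed placeholder.
mosquitoes_infected_per_case = 250   # "hundreds of mosquitoes" infected by one person
bites_per_infected_mosquito = 20     # each mosquito goes on to bite "dozens of people"
transmission_per_bite = 0.25         # assumed fraction of those bites that cause infection

secondary_cases = (mosquitoes_infected_per_case
                   * bites_per_infected_mosquito
                   * transmission_per_bite)
print(f"~{secondary_cases:,.0f} secondary cases per case")  # ~1,250: an R in the 1,000s
```
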
5/9/2022 • 3 hours, 19 minutes, 35 seconds

#128 – Chris Blattman on the five reasons wars happen

In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great. Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out. The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today's episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what they think they've learned. Links to learn more, summary and full transcript. Chris's first point is that while organised violence may feel like it's all around us, it's actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace. In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn't — so they can see what a healthy society looks like and what's missing in the places where war does take hold. Chris argues that social scientists have generated five cogent models of when war can be 'rational' for both sides of a conflict: 1. Unchecked interests — such as national leaders who bear few of the costs of launching a war. 2. Intangible incentives — such as an intrinsic desire for revenge. 3. Uncertainty — such as both sides underestimating each other's resolve to fight. 4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future. 5. Misperceptions — such as our inability to see the world through other people's eyes. In today's interview, we walk through how each of the five explanations work and what specific wars or actions they might explain. In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity). The interview also covers: • What Chris and Rob got wrong about the war in Ukraine • What causes might not fit into these five categories • The role of people's choice to escalate or deescalate a conflict • How great power wars or nuclear wars are different, and what can be done to prevent them • How much representative government helps to prevent war • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
4/28/2022 • 2 hours, 46 minutes, 50 seconds

#127 – Sam Bankman-Fried on taking a high-risk approach to crypto and doing good

On this episode of the show, host Rob Wiblin interviews Sam Bankman-Fried. This interview was recorded in February 2022, and released in April 2022. But on November 11 2022, Sam Bankman-Fried's company, FTX, filed for bankruptcy, and all staff at the Future Fund resigned — and the surrounding events led Rob to record a new intro on December 1st 2022 for this episode. • Read 80,000 Hours' statement on these events here. • You can also listen to host Rob’s reaction to the collapse of FTX on this podcast feed, above episode 140, or here. • Rob has shared some clarifications on his views about diminishing returns and risk aversion, and weaknesses in how it was discussed in this episode, here. • And you can read the original blog post associated with the episode here.
4/14/2022 • 3 hours, 20 minutes, 27 seconds

#126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs

Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don't, because it doesn't. Incredible though it might seem, according to today's guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children's lives play out once they're adults. Links to learn more, summary and full transcript. Of course, kids do resemble their parents. But just as we probably can't say it was attentive parenting that gave me my mother's nose, perhaps we can't say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can't really distinguish the impact of one from the other. But nature does offer us up a random experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins only share half their genes. If you look at how much more similar outcomes are for identical twins than fraternal twins, you see the effect of sharing 100% of your genetic material, rather than the usual 50%. Double that amount, and you've got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise. The crazy thing about this research is that it says for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it's differences in the genes children inherit rather than differences in parental behaviour that are doing most of the work. Other research suggests that differences in “out-of-home environment” take second place. Parenting style does matter for something, but it comes in a clear third. Bryan is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting. First, for some adult outcomes, parenting was a big deal (i.e. the quality of the parent/child relationship) or at least a moderate deal (i.e. drug use, criminality, and religious/political identity). Second, parents can and do influence you quite a lot — so long as you're young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot. Third, this research only studies variation in parenting behaviour that was common among the families studied. And fourth, research on international adoptions shows they can cause massive improvements in health, income and other outcomes. But the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all. In this extensive interview Rob interrogates whether Bryan can really be right, or whether the research he's drawing on has taken a wrong turn somewhere. And that's just one topic we cover, some of the others being: • People’s biggest misconceptions about the labour market • Arguments against open borders • Whether most people actually vote based on self-interest • Whether philosophy should stick to common sense or depart from it radically • Personal autonomy vs. 
the possible benefits of government regulation • Bryan's perfect betting record • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
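To make the twin-study logic above concrete, here is a minimal sketch of the standard Falconer-style decomposition it describes, using made-up twin correlations purely for illustration (they are not figures from the episode or Bryan's book):

```python
# Falconer-style decomposition of an adult outcome from twin correlations.
# r_mz and r_dz are hypothetical illustrative values, not data from the episode.

def ace_decomposition(r_mz, r_dz):
    """Split outcome variance into genes (A), shared family environment (C),
    and everything else (E), from identical- and fraternal-twin correlations."""
    a = 2 * (r_mz - r_dz)  # doubling the MZ/DZ similarity gap gives the genetic share
    c = r_mz - a           # similarity genes can't explain: the shared family environment
    e = 1 - r_mz           # the rest: unshared experiences plus measurement noise
    return {"genes": round(a, 2), "shared_environment": round(c, 2), "other": round(e, 2)}

# e.g. identical twins correlate 0.60 on some outcome, fraternal twins 0.35:
print(ace_decomposition(r_mz=0.60, r_dz=0.35))
# {'genes': 0.5, 'shared_environment': 0.1, 'other': 0.4}
```

The "shared_environment" term is the one that includes parenting (along with anything else siblings growing up together share), and in the research Bryan cites it tends to come out small for most adult outcomes.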
4/5/2022 · 2 hours, 15 minutes, 15 seconds

#125 – Joan Rohlfing on how to avoid catastrophic nuclear blunders

Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have declined by 83%. Nuclear brinksmanship, proxy wars, and the game theory of mutually assured destruction (MAD) have come to feel like relics of another era. Russia's invasion of Ukraine has changed all that. According to Joan Rohlfing — President of the Nuclear Threat Initiative, a Washington, DC-based nonprofit focused on reducing threats from nuclear and biological weapons — the annual risk of a ‘global catastrophic nuclear event’ never fell as low as people like to think, and for some time has been on its way back up. Links to learn more, summary and full transcript. At the same time, civil society funding for research and advocacy around nuclear risks is being cut in half over a period of years — despite the fact that at $60 million a year, it was already just a thousandth as much as the US spends maintaining its nuclear deterrent. If new funding sources are not identified to replace donors that are withdrawing, the existing pool of talent will have to leave for greener pastures, and most of the next generation will see a career in the field as unviable. While global poverty is on the decline and life expectancy increasing, the chance of a catastrophic nuclear event is probably trending in the wrong direction. Ukraine gave up its nuclear weapons in 1994 in exchange for security guarantees that turned out not to be worth the paper they were written on. States that have nuclear weapons (such as North Korea), states that are pursuing them (such as Iran), and states that have pursued nuclear weapons but since abandoned them (such as Libya, Syria, and South Africa) may take this as a valuable lesson in the importance of military power over promises. China has been expanding its arsenal and testing hypersonic glide missiles that can evade missile defences. Japan now toys with the idea of nuclear weapons as a way to ensure its security against its much larger neighbour. India and Pakistan both acquired nuclear weapons in the late 1980s and their relationship continues to oscillate from hostile to civil and back. At the same time, the risk that nuclear weapons could be interfered with due to weaknesses in computer security is far higher than during the Cold War, when systems were simpler and less networked. In the interview, Joan discusses several steps that can be taken in the immediate term, such as renewed efforts to extend and expand arms control treaties, changes to nuclear use policy, and the retirement of what she sees as vulnerable delivery systems, such as land-based silos. In the bigger picture, NTI seeks to keep hope alive that a better system than deterrence through mutually assured destruction remains possible. The threat of retaliation does indeed make nuclear wars unlikely, but it means that when the system does fail, it fails in an incredibly destructive way: with the death of hundreds of millions if not billions. In the long run, even a tiny 1 in 500 risk of a nuclear war each year adds up to around an 18% chance of catastrophe over the century.
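That closing figure is just the compounding of a small annual risk over 100 years; a quick sanity check (my arithmetic, assuming the risk is independent year to year):

```python
# Chance of at least one nuclear catastrophe over a century, assuming an
# independent 1-in-500 risk each year (a simplifying assumption for illustration).
annual_risk = 1 / 500
years = 100
chance_over_century = 1 - (1 - annual_risk) ** years
print(f"{chance_over_century:.1%}")  # 18.1%
```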
In this conversation we cover all that, as well as: • How arms control treaties have evolved over the last few decades • Whether lobbying by arms manufacturers is an important factor shaping nuclear strategy • The Biden Nuclear Posture Review • How easily humanity might recover from a nuclear exchange • Implications for the use of nuclear energy Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
3/29/2022 · 2 hours, 13 minutes, 41 seconds

#124 – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong. Links to learn more, summary and full transcript. Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish. First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running. Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries. 'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves. While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing. Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget. 
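As a side note, the two figures quoted in that paragraph pin down the US comparison: $400 at 2% implies roughly $20,000 of government spending per US resident. A one-line check of that inference (not a number stated in the text):

```python
# Implied US per-resident government spending, derived from the quoted figures.
kenya_per_person = 400    # USD per person per year, as quoted above
share_of_us_level = 0.02  # "just 2% of what the US spends on each resident"
print(kenya_per_person / share_of_us_level)  # 20000.0 USD per US resident
```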
In today's in-depth conversation, Karen Levy and I chat about the above, as well as: • Why it pays to figure out how you'll interpret the results of an experiment ahead of time • The trouble with misaligned incentives within the development industry • Projects that don't deliver value for money and should be scaled down • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren • Logistical challenges in reaching huge numbers of people with essential services • Lessons from Karen's many-decades career • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell and Ryan Kessler Transcriptions: Katy Moore
3/21/2022 · 3 hours, 9 minutes, 52 seconds

#123 – Samuel Charap on why Putin invaded Ukraine, the risk of escalation, and how to prevent disaster

Russia's invasion of Ukraine is devastating the lives of Ukrainians, and so long as it continues there's a risk that the conflict could escalate to include other countries or the use of nuclear weapons. It's essential that NATO, the US, and the EU play their cards right to ideally end the violence, maintain Ukrainian sovereignty, and discourage any similar invasions in the future. But how? To pull together the most valuable information on how to react to this crisis, we spoke with Samuel Charap — a senior political scientist at the RAND Corporation, one of the US's foremost experts on Russia's relationship with former Soviet states, and co-author of Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia. Links to learn more, summary and full transcript. Samuel believes that Putin views the alignment of Ukraine with NATO as an existential threat to Russia — a perhaps unreasonable view, but a sincere one nevertheless. Ukraine has been drifting further into Western Europe's orbit and improving its defensive military capabilities, so Putin has concluded that if Russia wants to put a stop to that, there will never be a better time to act in the future. Despite early successes holding off the Russian military, Samuel is sceptical that time is on the Ukrainian side. If the war is to end before much of Ukraine is reduced to rubble, it will likely have to be through negotiation, rather than Russian defeat. The US policy response has so far been largely good, successfully balancing the need to punish Russia to dissuade large nations from bullying small ones in the future, while preventing NATO from being drawn into the war directly — which would pose a horrifying risk of escalation to a full nuclear exchange. The pressure from the general public to 'do something' might eventually cause national leaders to confront Russia more directly, but so far they are sensibly showing no interest in doing so. However, use of nuclear weapons remains a low but worrying possibility. Samuel is also worried that Russia may deploy chemical and biological weapons and blame it on the Ukrainians. Before war broke out, it's possible Russia could have been satisfied if Ukraine followed through on the Minsk agreements and committed not to join the EU and NATO. Or it might not have, if Putin was committed to war, come what may. In any case, most Ukrainians found those terms intolerable. At this point, the situation is even worse, and it's hard to see how an enduring ceasefire could be agreed upon. On top of the above, Russia is also demanding recognition that Crimea is part of Russia, and acceptance of the independence of the so-called Donetsk and Luhansk People's Republics. These conditions — especially the second — are entirely unacceptable to the Ukrainians. Hence the war continues, and could grind on for months or even years until one side is sufficiently beaten down to compromise on their core demands. Rob and Samuel discuss all of the above and also: • The chances that this conflict leads to a nuclear exchange • The chances of regime change in Russia • Whether the West should deliver MiG fighter jets to Ukraine • What are the implications if Sweden and/or Finland decide to join NATO? • What should NATO do now, and did it make any mistakes in the past? • What's the most likely situation for us to be looking at in three months' time? • Can Ukraine effectively win the war? 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
3/14/2022 · 59 minutes, 16 seconds

#122 – Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising

One of 80,000 Hours' main services is our free one-on-one careers advising, which we provide to around 1,000 people a year. Today we speak to two of our advisors, who have each spoken to hundreds of people -- including many regular listeners to this show -- about how they might be able to do more good while also having a highly motivating career. Before joining 80,000 Hours, Michelle Hutchinson completed a PhD in Philosophy at Oxford University and helped launch Oxford's Global Priorities Institute, while Habiba Islam studied politics, philosophy, and economics at Oxford University and qualified as a barrister. Links to learn more, summary and full transcript. In this conversation, they cover many topics that recur in their advising calls, and what they've learned from watching advisees’ careers play out: • What they say when advisees want to help solve overpopulation • How to balance doing good against other priorities that people have for their lives • Why it's challenging to motivate yourself to focus on the long-term future of humanity, and how Michelle and Habiba do so nonetheless • How they use our latest guide to planning your career • Why you can specialise and take more risk if you're in a group • Gaps in the effective altruism community it would be really useful for people to fill • Stories of people who have spoken to 80,000 Hours and changed their career — and whether it went well or not • Why trying to have impact in multiple different ways can be a mistake The episode is split into two parts: the first section on The 80,000 Hours Podcast, and the second on our new show 80k After Hours. This is a shameless attempt to encourage listeners to our first show to subscribe to our second feed. That second part covers: • Whether just encouraging someone young to aspire to more than they currently are is one of the most impactful ways to spend half an hour • How much impact the one-on-one team has, the biggest challenges they face as a group, and different paths they could have gone down • Whether giving general advice is a doomed enterprise Get this second part by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app. Want to get free one-on-one advice from our team? We're here to help. We’ve helped thousands of people formulate their plans and put them in touch with mentors. We've expanded our ability to deliver one-on-one meetings so are keen to help more people than ever before. If you're a regular listener to the show we're especially likely to want to speak with you. Learn about and apply for advising. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
3/9/2022 · 1 hour, 36 minutes, 25 seconds

Introducing 80k After Hours

Today we're launching a new podcast called 80k After Hours. Like this show it’ll mostly still explore the best ways to do good — and some episodes will be even more laser-focused on careers than most original episodes. But we’re also going to widen our scope, including things like how to solve pressing problems while also living a happy and fulfilling life, as well as releases that are just fun, entertaining or experimental. It’ll feature: • Conversations between staff on the 80,000 Hours team • More eclectic formats and topics — one episode could be a structured debate about 'human challenge trials', the next a staged reading of a play about the year 2750 • Niche content for specific audiences, such as high-school students, or active participants in the effective altruism community • Extras and outtakes from interviews on the original feed • 80,000 Hours staff interviewed on other podcasts • Audio versions of our new articles and research You can find it by searching for 80k After Hours in whatever podcasting app you use, or by going to 80000hours.org/after-hours-podcast.
3/1/2022 · 13 minutes, 30 seconds

#121 – Matthew Yglesias on avoiding the pundit's fallacy and how much military intervention can be used for good

If you read polls saying that the public supports a carbon tax, should you believe them? According to today's guest — journalist and blogger Matthew Yglesias — it's complicated, but probably not. Links to learn more, summary and full transcript. Interpreting opinion polls about specific policies can be a challenge, and it's easy to trick yourself into believing what you want to believe. Matthew invented a term for a particular type of self-delusion called the 'pundit's fallacy': "the belief that what a politician needs to do to improve his or her political standing is do what the pundit wants substantively." If we want to advocate not just for ideas that would be good if implemented, but ideas that have a real shot at getting implemented, we should do our best to understand public opinion as it really is. The least trustworthy polls are published by think tanks and advocacy campaigns that would love to make their preferred policy seem popular. These surveys can be designed to nudge respondents toward the desired result — for example, by tinkering with question wording and order or shifting how participants are sampled. And if a poll produces the 'wrong answer', there's no need to publish it at all, so the 'publication bias' with these sorts of surveys is large. Matthew says polling run by firms or researchers without any particular desired outcome can be taken more seriously. But the results that we ought to give by far the most weight are those from professional political campaigns trying to win votes and get their candidate elected because they have both the expertise to do polling properly, and a very strong incentive to understand what the public really thinks. The problem is, campaigns run these expensive surveys because they think that having exclusive access to reliable information will give them a competitive advantage. As a result, they often don’t publish the findings, and instead use them to shape what their candidate says and does. Journalists like Matthew can call up their contacts and get a summary from people they trust. But being unable to publish the polling itself, they're unlikely to be able to persuade sceptics. When assessing what ideas are winners, one thing Matthew would like everyone to keep in mind is that politics is competitive, and politicians aren't (all) stupid. If advocating for your pet idea were a great way to win elections, someone would try it and win, and others would copy. One other thing to check that's more reliable than polling is real-world experience. For example, voters may say they like a carbon tax on the phone — but the very liberal Washington State roundly rejected one in ballot initiatives in 2016 and 2018. Of course you may want to advocate for what you think is best, even if it wouldn't pass a popular vote in the face of organised opposition. The public's ideas can shift, sometimes dramatically and unexpectedly. But at least you'll be going into the debate with your eyes wide open. In this extensive conversation, host Rob Wiblin and Matthew also cover: • How should a humanitarian think about US military interventions overseas? • From an 'effective altruist' perspective, was the US wrong to withdraw from Afghanistan? • Has NATO ultimately screwed over Ukrainians by misrepresenting the extent of its commitment to their independence? • What philosopher does Matthew think is underrated? • How big a risk is ubiquitous surveillance? • What does Matthew think about wild animal suffering, anti-ageing research, and autonomous weapons? 
• And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
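The 'publication bias' mechanism described above is easy to see in a toy simulation: if sponsors only release polls that show majority support, the published average will sit above the truth even when real support is below 50%. The numbers here are made up for illustration and aren't from the episode:

```python
import random

# Toy model of selective publication of advocacy polls (illustrative numbers only).
random.seed(0)
true_support = 0.48   # real support for the hypothetical policy
poll_size = 1_000
n_polls = 2_000

polls = [sum(random.random() < true_support for _ in range(poll_size)) / poll_size
         for _ in range(n_polls)]
published = [p for p in polls if p > 0.50]  # the 'wrong answers' stay in the drawer

print(f"true support: {true_support:.0%}")
print(f"average published poll: {sum(published) / len(published):.1%}")
# Published polls cluster just above 50% even though true support is 48%.
```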
2/16/2022 · 3 hours, 4 minutes, 17 seconds

#120 – Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy

In 2014 Taiwan was rocked by mass protests against a proposed trade agreement with China that was about to be agreed without the usual Parliamentary hearings. Students invaded and took over the Parliament. But rather than chant slogans, instead they livestreamed their own parliamentary debate over the trade deal, allowing volunteers to speak both in favour and against. Instead of polarising the country more, this so-called 'Sunflower Student Movement' ultimately led to a bipartisan consensus that Taiwan should open up its government. That process has gradually made it one of the most communicative and interactive administrations anywhere in the world. Today's guest — programming prodigy Audrey Tang — initially joined the student protests to help get their streaming infrastructure online. After the students got the official hearings they wanted and went home, she was invited to consult for the government. And when the government later changed hands, she was invited to work in the ministry herself. Links to learn more, summary and full transcript. During six years as the country's 'Digital Minister' she has been helping Taiwan increase the flow of information between institutions and civil society and launched original experiments trying to make democracy itself work better. That includes developing new tools to identify points of consensus between groups that mostly disagree, building social media platforms optimised for discussing policy issues, helping volunteers fight disinformation by making their own memes, and allowing the public to build their own alternatives to government websites whenever they don't like how they currently work. As part of her ministerial role Audrey also sets aside time each week to help online volunteers working on government-related tech projects get the help they need. How does she decide who to help? She doesn't — that decision is made by members of an online community who upvote the projects they think are best. According to Audrey, a more collaborative mentality among the country's leaders has helped increase public trust in government, and taught bureaucrats that they can (usually) trust the public in return. Innovations in Taiwan may offer useful lessons to people who want to improve humanity's ability to make decisions and get along in large groups anywhere in the world. We cover: • Why it makes sense to treat Facebook as a nightclub • The value of having no reply button, and of getting more specific when you disagree • Quadratic voting and funding • Audrey’s experiences with the Sunflower Student Movement • Technologies Audrey is most excited about • Conservative anarchism • What Audrey’s day-to-day work looks like • Whether it’s ethical to eat oysters • And much more Check out two current job opportunities at 80,000 Hours: Advisor and Head of Job Board. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
2/2/2022 · 2 hours, 5 minutes, 50 seconds

#43 Classic episode - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines

Rebroadcast: this episode was originally released in September 2018. In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”. Daniel Ellsberg - leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency - claims in his book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked. Links to learn more, summary and full transcript. The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today. If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere. As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity. You might think the United States would have a more sensible nuclear launch policy. You’d be wrong. As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth. The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe. The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival. Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it. Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity. Strategically, the setup is stupid. Ethically, it is monstrous. So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization? Daniel explores these questions eloquently and urgently in his book. Today we cover: • Why full disarmament today would be a mistake and the optimal number of nuclear weapons to hold • How well are secrets kept in the government? • What was the risk of the first atomic bomb test? • Do we have a reliable estimate of the magnitude of a ‘nuclear winter’? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
1/18/2022 · 2 hours, 35 minutes, 27 seconds

#35 Classic episode - Tara Mac Aulay on the audacity to fix the world without asking permission

Rebroadcast: this episode was originally released in June 2018. How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’. At 15 she took her first job - an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure. That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator. In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious. Links to learn more, summary and full transcript. People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face. But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms. We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’, as well as: • Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform. • How a student can save a hospital millions with a simple spreadsheet model. • The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better. • What most people misunderstand about operations, and how to tell if you have what it takes. • And finally, operations jobs people should consider applying for. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
1/10/2022 · 1 hour, 23 minutes, 33 seconds

#67 Classic episode – David Chalmers on the nature and ethics of consciousness

Rebroadcast: this episode was originally released in December 2019. What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. Links to learn more, summary and full transcript. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness. This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far. Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris.
1/3/2022 · 4 hours, 42 minutes, 4 seconds

#59 Classic episode - Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

Rebroadcast: this episode was originally released in June 2019. It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. Links to learn more, summary and full transcript. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss: • How much people misrepresent their views in democratic countries. • Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis. • When is it justified to encourage your own group to polarise? • Sunstein's difficult experiences as a pioneer of animal rights law. • Whether activists can do better by spending half their resources on public opinion surveys. • Should people be more or less outspoken about their true views? • What might be the next social revolution to take off? • How can we learn about social movements that failed and disappeared? • How to find out what people really think. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site. The 80,000 Hours Podcast is produced by Keiran Harris.
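One way to see why such shifts can be so abrupt is a toy version of the threshold model Sunstein draws on. In the sketch below (illustrative numbers, not from the book or the episode), each person joins a movement once the share of people already acting reaches their private threshold, and nudging a single person's threshold flips the outcome from a non-event to a society-wide cascade:

```python
def cascade_size(thresholds):
    """Fraction of the population that ends up acting, where each person's
    threshold is the share already acting that they need to see before joining."""
    n = len(thresholds)
    acting = 0
    while True:
        willing = sum(1 for t in thresholds if t <= acting / n)
        if willing == acting:
            return acting / n
        acting = willing

# Hypothetical thresholds 0.00, 0.01, ..., 0.99 produce a full cascade...
society_a = [i / 100 for i in range(100)]
# ...but nudge one person's threshold from 0.01 to 0.02 and it stalls at 1%.
society_b = list(society_a)
society_b[1] = 0.02

print(cascade_size(society_a), cascade_size(society_b))  # 1.0 0.01
```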
12/27/2021 · 1 hour, 43 minutes, 4 seconds

#119 – Andrew Yang on our very long-term future, and other topics most politicians won’t touch

Andrew Yang — past presidential candidate, founder of the Forward Party, and leader of the 'Yang Gang' — is kind of a big deal, but is particularly popular among listeners to The 80,000 Hours Podcast. Maybe that's because he's willing to embrace topics most politicians stay away from, like universal basic income, term limits for members of Congress, or what might happen when AI replaces whole industries. Links to learn more, summary and full transcript. But even those topics are pretty vanilla compared to our usual fare on The 80,000 Hours Podcast. So we thought it’d be fun to throw Andrew some stranger or more niche questions we hadn't heard him comment on before, including: 1. What would your ideal utopia in 500 years look like? 2. Do we need more public optimism today? 3. Is positively influencing the long-term future a key moral priority of our time? 4. Should we invest far more to prevent low-probability risks? 5. Should we think of future generations as an interest group that's disenfranchised by their inability to vote? 6. The folks who worry that advanced AI is going to go off the rails and destroy us all... are they crazy, or a valuable insurance policy? 7. Will people struggle to live fulfilling lives once AI systems remove the economic need to 'work'? 8. Andrew is a huge proponent of ranked-choice voting. But what about 'approval voting' — where basically you just get to say “yea” or “nay” to every candidate that's running — which some experts prefer? 9. What would Andrew do with a billion dollars to keep the US a democracy? 10. What does Andrew think about the effective altruism community? 11. What's one thing we should do to reduce the risk of nuclear war? 12. Will Andrew's new political party get Trump elected by splitting the vote, the same way Nader got Bush elected back in 2000? As it turns out, Rob and Andrew agree on a lot, so the episode is less a debate than a chat about ideas that aren’t mainstream yet... but might be one day. They also talk about: • Andrew’s views on alternative meat • Whether seniors have too much power in American society • Andrew’s DC lobbying firm on behalf of humanity • How the rest of the world could support the US • The merits of 18-year term limits • What technologies Andrew is most excited about • How much the US should spend on foreign aid • Persistence and prevalence of inflation in the US economy • And plenty more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
12/20/2021 · 1 hour, 25 minutes, 56 seconds

#118 – Jaime Yassif on safeguarding bioscience to prevent catastrophic lab accidents and bioweapons development

If a rich country were really committed to pursuing an active biological weapons program, there’s not much we could do to stop them. With enough money and persistence, they’d be able to buy equipment, and hire people to carry out the work. But what we can do is intervene before they make that decision. Today’s guest, Jaime Yassif — Senior Fellow for global biological policy and programs at the Nuclear Threat Initiative (NTI) — thinks that stopping states from wanting to pursue dangerous bioscience in the first place is one of our key lines of defence against global catastrophic biological risks (GCBRs). Links to learn more, summary and full transcript. It helps to understand why countries might consider developing biological weapons. Jaime says there are three main possible reasons: 1. Fear of what their adversary might be up to 2. Belief that they could gain a tactical or strategic advantage, with limited risk of getting caught 3. Belief that even if they are caught, they are unlikely to be held accountable In response, Jaime has developed a three-part recipe to create systems robust enough to meaningfully change the cost-benefit calculation. The first is to substantially increase transparency. If countries aren’t confident about what their neighbours or adversaries are actually up to, misperceptions could lead to arms races that neither side desires. But if you know with confidence that no one around you is pursuing a biological weapons programme, you won’t feel motivated to pursue one yourself. The second is to strengthen the capabilities of the United Nations’ system to investigate the origins of high-consequence biological events — whether naturally emerging, accidental or deliberate — and to make sure that the responsibility to figure out the source of bio-events of unknown origin doesn’t fall between the cracks of different existing mechanisms. The ability to quickly discover the source of emerging pandemics is important both for responding to them in real time and for deterring future bioweapons development or use. And the third is meaningful accountability. States need to know that the consequences for getting caught in a deliberate attack are severe enough to make it a net negative in expectation to go down this road in the first place. But having a good plan and actually implementing it are two very different things, and today’s episode focuses heavily on the practical steps we should be taking to influence both governments and international organisations, like the WHO and UN — and to help them maximise their effectiveness in guarding against catastrophic biological risks. Jaime and Rob explore NTI’s current proposed plan for reducing global catastrophic biological risks, and discuss: • The importance of reducing emerging biological risks associated with rapid technology advances • How we can make it a lot harder for anyone to deliberately or accidentally produce or release a really dangerous pathogen • The importance of having multiple theories of risk reduction • Why Jaime’s more focused on prevention than response • The history of the Biological Weapons Convention • Jaime’s disagreements with the effective altruism community • And much more And if you might be interested in dedicating your career to reducing GCBRs, stick around to the end of the episode to get Jaime’s advice — including on how people outside of the US can best contribute, and how to compare career opportunities in academia vs think tanks, and nonprofits vs national governments vs international orgs.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ryan Kessler Transcriptions: Katy Moore
12/13/2021 · 2 hours, 15 minutes, 39 seconds

#117 – David Denkenberger on using paper mills and seaweed to feed everyone in a catastrophe, ft Sahil Shah

If there's a nuclear war followed by nuclear winter, and the sun is blocked out for years, most of us are going to starve, right? Well, currently, probably we would, because humanity hasn't done much to prevent it. But it turns out that an ounce of forethought might be enough for most people to get the calories they need to survive, even in a future as grim as that one. Today's guest is engineering professor Dave Denkenberger, who co-founded the Alliance to Feed the Earth in Disasters (ALLFED), which has the goal of finding ways humanity might be able to feed itself for years without relying on the sun. Over the last seven years, Dave and his team have turned up options from the mundane, like mushrooms grown on rotting wood, to the bizarre, like bacteria that can eat natural gas or electricity itself. Links to learn more, summary and full transcript. One option stands out as potentially able to feed billions: finding a way to eat wood ourselves. Even after a disaster, a huge amount of calories will be lying around, stored in wood and other plant cellulose. The trouble is that, even though cellulose is basically a lot of sugar molecules stuck together, humans can't eat wood. But we do know how to turn wood into something people can eat. We can grind wood up in already existing paper mills, then mix the pulp with enzymes that break the cellulose into sugar and the hemicellulose into other sugars. Another option that shows a lot of promise is seaweed. Buffered by the water around them, ocean life wouldn't be as affected by the lower temperatures resulting from the sun being obscured. Sea plants are also already used to growing in low light, because the water above them already shades them to some extent. Dave points out that "there are several species of seaweed that can still grow 10% per day, even with the lower light levels in nuclear winter and lower temperatures. ... Not surprisingly, with that 10% growth per day, assuming we can scale up, we could actually get up to 160% of human calories in less than a year." Of course it will be easier to scale up seaweed production if it's already a reasonably sized industry. At the end of the interview, we're joined by Sahil Shah, who is trying to expand seaweed production in the UK with his business Sustainable Seaweed. While a diet of seaweed and trees turned into sugar might not seem that appealing, the team at ALLFED also thinks several perfectly normal crops could also make a big contribution to feeding the world, even in a truly catastrophic scenario. Those crops include potatoes, canola, and sugar beets, which are currently grown in cool low-light environments. Many of these ideas could turn out to be misguided or impractical in real-world conditions, which is why Dave and ALLFED are raising money to test them out on the ground. They think it's essential to show these techniques can work so that should the worst happen, people turn their attention to producing more food rather than fighting one another over the small amount of food humanity has stockpiled. 
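The seaweed claim is mostly a story about compounding. Here's a rough sketch of the arithmetic; the starting point (seaweed initially covering 0.1% of global calorie needs) is an assumption invented for this example, not a figure from ALLFED or the episode:

```python
import math

# Days of 10% daily growth needed to go from an assumed starting share of
# calorie needs to the 160% figure mentioned above.
start_share = 0.001   # assumed: seaweed initially supplies 0.1% of calories
target_share = 1.60   # "160% of human calories"
daily_growth = 0.10   # "10% per day"

days = math.log(target_share / start_share) / math.log(1 + daily_growth)
print(round(days))  # ~77 days, i.e. comfortably less than a year
```

Even starting a hundred times lower only adds roughly another 48 days, which is why the "less than a year" conclusion is fairly insensitive to the assumed starting point.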
In this conversation, Rob, Dave, and Sahil discuss the above, as well as: • How much one can trust the sort of economic modelling ALLFED does • Bacteria that turn natural gas or electricity into protein • How to feed astronauts in space with nuclear power • What individuals can do to prepare themselves for global catastrophes • Whether we should worry about humanity running out of natural resources • How David helped save $10 billion worth of electricity through energy efficiency standards • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
11/29/2021 · 3 hours, 8 minutes, 12 seconds

#116 – Luisa Rodriguez on why global catastrophes seem unlikely to kill us all

If modern human civilisation collapsed — as a result of nuclear war, severe climate change, or a much worse pandemic than COVID-19 — billions of people might die. That's terrible enough to contemplate. But what’s the probability that rather than recover, the survivors would falter and humanity would actually disappear for good? It's an obvious enough question, but very few people have spent serious time looking into it -- possibly because it cuts across history, economics, and biology, among many other fields. There's no Disaster Apocalypse Studies department at any university, and governments have little incentive to plan for a future in which their country probably no longer even exists. The person who may have spent the most time looking at this specific question is Luisa Rodriguez — who has conducted research at Rethink Priorities, Oxford University's Future of Humanity Institute, the Forethought Foundation, and now here, at 80,000 Hours. Links to learn more, summary and full transcript. She wrote a series of articles earnestly trying to foresee how likely humanity would be to recover and build back after a full-on civilisational collapse. There are a couple of main stories people put forward for how a catastrophe like this would kill every single human on Earth — but Luisa doesn’t buy them. Story 1: Nuclear war has led to nuclear winter. There's a 10-year period during which a lot of the world is really inhospitable to agriculture. The survivors just aren't able to figure out how to feed themselves in the time period, so everyone dies of starvation or cold. Why Luisa doesn’t buy it: Catastrophes will almost inevitably be non-uniform in their effects. If 80,000 people survive, they’re not all going to be in the same city — it would look more like groups of 5,000 in a bunch of different places. People in some places will starve, but those in other places, such as New Zealand, will be able to fish, eat seaweed, grow potatoes, and find other sources of calories. It’d be an incredibly unlucky coincidence if the survivors of a nuclear war -- likely spread out all over the world -- happened to all be affected by natural disasters or were all prohibitively far away from areas suitable for agriculture (which aren’t the same areas you’d expect to be attacked in a nuclear war). Story 2: The catastrophe leads to hoarding and violence, and in addition to people being directly killed by the conflict, it distracts everyone so much from the key challenge of reestablishing agriculture that they simply fail. By the time they come to their senses, it’s too late -- they’ve used up too much of the resources they’d need to get agriculture going again. Why Luisa doesn’t buy it: We‘ve had lots of resource scarcity throughout history, and while we’ve seen examples of conflict petering out because basic needs aren’t being met, we’ve never seen the reverse. And again, even if this happens in some places -- even if some groups fought each other until they literally ended up starving to death — it would be completely bizarre for it to happen to every group in the world. You just need one group of around 300 people to survive for them to be able to rebuild the species. 
In this wide-ranging and free-flowing conversation, Luisa and Rob also cover: • What the world might actually look like after one of these catastrophes • The most valuable knowledge for survivors • How fast populations could rebound • ‘Boom and bust’ climate change scenarios • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
11/19/2021 · 3 hours, 45 minutes, 43 seconds

#115 – David Wallace on the many-worlds theory of quantum mechanics and its implications

Quantum mechanics — our best theory of atoms, molecules, and the subatomic particles that make them up — underpins most of modern physics. But there are varying interpretations of what it means, all of them controversial in their own way. Famously, quantum theory predicts that with the right setup, a cat can be made to be alive and dead at the same time. On the face of it, that sounds either meaningless or ridiculous. According to today’s guest, David Wallace — professor at the University of Pittsburgh and one of the world's leading philosophers of physics — there are three broad ways experts react to this apparent dilemma: 1. The theory must be wrong, and we need to change our philosophy to fix it. 2. The theory must be wrong, and we need to change our physics to fix it. 3. The theory is OK, and cats really can in some way be alive and dead simultaneously. (David and Rob do their best to introduce quantum mechanics in the first 35 minutes of the episode, but it isn't the easiest thing to explain via audio alone. So if you need a refresher before jumping in, we recommend checking out our links to learn more, summary and full transcript.) In 1955, physicist Hugh Everett bit the bullet on Option 3 and proposed Wallace's preferred solution to the puzzle: each time it's faced with a ‘quantum choice,’ the universe 'splits' into different worlds. Anything that has a probability greater than zero (from the perspective of quantum theory) happens in some branch — though more probable things happen in far more branches. While not a consensus position, the ‘many-worlds’ approach is one of the top three most popular ways to make sense of what's going on, according to surveys of relevant experts. Setting aside whether it's correct for a moment, one thing that's not often spelled out is what this approach would concretely imply if it were right. Is there a world where Rob (the show's host) can roll a die a million times, and it comes up 6 every time? As David explains in this episode: absolutely, that’s completely possible — and if Rob rolled a die a million times, there would be a world like that. Is there a world where Rob becomes president of the US? David thinks probably not. The things stopping Rob from becoming US president don’t seem down to random chance at the quantum level. Is there a world where Rob deliberately murdered someone this morning? Only if he’s already predisposed to murder — becoming a different person in that way probably isn’t a matter of random fluctuations in our brains. Is there a world where a horse-version of Rob hosts the 80,000 Horses Podcast? Well, due to the chance involved in evolution, it’s plausible that there are worlds where humans didn’t evolve, and intelligent horses have in some sense taken their place. And somewhere, fantastically distantly across the vast multiverse, there might even be a horse named Rob Wiblin who hosts a podcast, and who sounds remarkably like Rob. Though even then — it wouldn’t actually be Rob in the way we normally think of personal identity. Rob and David also cover: • If the many-worlds interpretation is right, should that change how we live our lives? • Are our actions getting more (or less) important as the universe splits into finer and finer threads? • Could we conceivably influence other branches of the multiverse? 
• Alternatives to the many-worlds interpretation • The practical value of physics today • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel and Katy Moore
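For a sense of just how thin the "million sixes" branch above would be, the probability is easy to write down but far too small for a floating-point number, so the sketch below works with its base-10 logarithm (plain arithmetic, nothing from the episode beyond the setup):

```python
import math

# Probability of a fair die landing 6 on every one of a million rolls.
rolls = 1_000_000
log10_probability = rolls * math.log10(1 / 6)
print(f"about 1 in 10^{-log10_probability:.0f}")  # about 1 in 10^778151
```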
11/12/2021 · 3 hours, 9 minutes, 46 seconds

#114 – Maha Rehman on working with governments to rapidly deliver masks to millions of people

It’s hard to believe, but until recently there had never been a large field trial that addressed these simple and obvious questions: 1. When ordinary people wear face masks, does it actually reduce the spread of respiratory diseases? 2. And if so, how do you get people to wear masks more often? It turns out the first question is remarkably challenging to answer, but it's well worth doing nonetheless. Among other reasons, the first good trial of this prompted Maha Rehman — Policy Director at the Mahbub Ul Haq Research Centre — as well as a range of others to immediately use the findings to help tens of millions of people across South Asia, even before the results were public. Links to learn more, summary and full transcript. The groundbreaking Bangladesh RCT that inspired her to take action found that: • A 30% increase in mask wearing reduced total infections by 10%. • The effect was more pronounced for surgical masks compared to cloth masks (plus ~50% effectiveness). • Mask wearing also led to an increase in social distancing. • Of all the incentives tested, the only thing that impacted mask wearing was their colour (people preferred blue over green, and red over purple!). The research was done by social scientists at Yale, Berkeley, and Stanford, among others. It applied a program they called ‘NORM’ in half of 600 villages in which about 350,000 people lived. NORM has four components, which the researchers expected would work well for the general public: N: no-cost distribution O: offering information R: reinforcing the message and the information in the field M: modeling Basically you make sure a community has enough masks and you tell them why it’s important to wear them. You also reinforce the message periodically in markets and mosques, and via role models and promoters in the community itself. Tipped off that these positive findings were on the way, Maha took this program and rushed to put it into action in Lahore, Pakistan, a city with a population of about 13 million, before the Delta variant could sweep through the region. Maha had already been doing a lot of data work on COVID policy over the past year, and that allowed her to quickly reach out to the relevant stakeholders — getting them interested and excited. Governments aren’t exactly known for being super innovative, but in March and April Lahore was going through a very deadly third wave of COVID — so the commissioner quickly jumped on this approach, providing an endorsement as well as resources. Together with the original researchers, Maha and her team at LUMS collected baseline data that allowed them to map the mask-wearing rate in every part of Lahore, in both markets and mosques. And then based on that data, they adapted the original rural-focused model to a very different urban setting. The scale of this project was daunting, and in today’s episode Maha tells Rob all about the day-to-day experiences and stresses required to actually make it happen. 
They also discuss: • The challenges of data collection in this context • Disasters and emergencies she had to respond to in the middle of the project • What she learned from working closely with the Lahore Commissioner's Office • How to get governments to provide you with large amounts of data for your research • How she adapted from a more academic role to a ‘getting stuff done’ role • How to reduce waste in government procurement • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
10/22/2021 · 1 hour, 42 minutes, 54 seconds

We just put up a new compilation of ten core episodes of the show

We recently launched a new podcast feed that might be useful to you and people you know. It's called Effective Altruism: Ten Global Problems, and it's a collection of ten top episodes of this show, selected to help listeners quickly get up to speed on ten pressing problems that the effective altruism community is working to solve. It's a companion to our other compilation Effective Altruism: An Introduction, which explores the big picture debates within the community and how to set priorities in order to have the greatest impact. These ten episodes cover: • The cheapest ways to improve education in the developing world • How dangerous is climate change and what are the most effective ways to reduce it? • Using new technologies to prevent another disastrous pandemic • Ways to simultaneously reduce both police misconduct and crime • All the major approaches being taken to end factory farming • How advances in artificial intelligence could go very right or very wrong • Other big threats to the future of humanity — such as a nuclear war — and how we can make our species wiser and more resilient • One problem few even recognise as a problem at all. The selection is ideal for people who are completely new to the effective altruist way of thinking, as well as those who are familiar with effective altruism but new to The 80,000 Hours Podcast. If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to. You can find it by searching for effective altruism in whatever podcasting app you use, or by going to 80000hours.org/ten. We'd love to hear how you go listening to it yourself, or sharing it with others in your life. Get in touch by emailing [email protected].
10/20/2021 · 3 minutes, 1 second

#113 – Varsha Venugopal on using gossip to help vaccinate every child in India

Our failure to make sure all kids globally get all of their basic vaccinations leads to 1.5 million child deaths every year. According to today’s guest, Varsha Venugopal, for the great majority this has nothing to do with weird conspiracy theories or medical worries — in India 80% of undervaccinated children are already getting some shots. They just aren't getting all of them, for the tragically mundane reason that life can get in the way. Links to learn more, summary and full transcript. As Varsha says, we're all sometimes guilty of "valuing our present very differently from the way we value the future", leading to short-term thinking, whether that's about getting vaccines or going to the gym. So who should we call on to help fix this universal problem? The government, extended family, or maybe village elders? Varsha says that research shows the most influential figures might actually be local gossips. In 2018, Varsha heard about the ideas around effective altruism for the first time. By the end of 2019, she’d gone through Charity Entrepreneurship’s strategy incubation program, and quit her normal, stable job to co-found Suvita, a non-profit focused on improving the uptake of immunization in India, which focuses on two models: 1. Sending SMS reminders directly to parents and carers 2. Gossip The first one is intuitive. You collect birth registers, digitize the paper records, process the data, and send out personalised SMS messages to hundreds of thousands of families. The effect size varies depending on the context, but these messages usually increase vaccination rates by 8-18%. The second approach is less intuitive and isn't yet entirely understood either. Here’s what happens: Suvita calls up random households and asks, “If there were an event in town, who would be most likely to tell you about it?” In over 90% of the cases, the households gave both the name and the phone number of a local ‘influencer’. And when tracked down, more than 95% of the most frequently named 'influencers' agreed to become vaccination ambassadors. Those ambassadors then go on to share information about when and where to get vaccinations, in whatever way seems best to them. When tested by a team of top academics at the Poverty Action Lab (J-PAL), it raised vaccination rates by 10 percentage points, or about 27%. The advantage of SMS reminders is that they’re easier to scale up. But Varsha says the ambassador program isn’t actually that far from being a scalable model as well. A phone call to get a name, another call to ask the influencer to join, and boom — you might have just covered a whole village rather than just a single family. Varsha says that Suvita has two major challenges on the horizon: 1. Maintaining the same degree of oversight of their surveyors as they attempt to scale up the program, in order to ensure the program continues to work just as well 2. Deciding between focusing on reaching a few additional districts now vs. making longer-term investments which could build up to a future exponential increase. In this episode, Varsha and Rob talk about making these kinds of high-stakes, high-stress decisions, as well as: • How Suvita got started, and their experience with Charity Entrepreneurship • Weaknesses of the J-PAL studies • The importance of co-founders • Deciding how broad a program should be • Varsha’s day-to-day experience • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
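As a quick check on the J-PAL figures quoted above (our own arithmetic, not a number from the episode): a 10-percentage-point rise that amounts to roughly a 27% relative increase implies a baseline vaccination rate of around 37%.

# Back out the baseline implied by the two figures quoted above.
absolute_rise = 0.10   # 10 percentage points
relative_rise = 0.27   # ~27% relative increase

baseline = absolute_rise / relative_rise
print(f"Implied baseline vaccination rate: {baseline:.0%}")              # ~37%
print(f"Implied rate with ambassadors: {baseline + absolute_rise:.0%}")  # ~47%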
10/18/2021 · 2 hours, 5 minutes, 43 seconds

#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster. According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future. Links to learn more, summary and full transcript. The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs: • The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American. • So saving all US citizens at any given point in time would be worth $1,300 trillion. • If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone. • Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today. This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein. If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve? Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds. Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on. Today’s episode is in part our way of trying to improve this situation. 
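The back-of-the-envelope argument above is easy to reproduce. Here is a minimal sketch (the 330 million population figure is our rounding; the other inputs are the numbers quoted in the episode notes):

# Reproduce the back-of-the-envelope cost-benefit calculation described above.
value_per_life = 4e6      # USD some US agencies will spend to save one citizen's life
us_population = 330e6     # rough US population (our assumption)

value_of_all_us_lives = value_per_life * us_population
print(f"Value of all US lives: ${value_of_all_us_lives:,.0f}")   # ~$1,300 trillion

extinction_risk = 1 / 6   # Toby Ord's rough estimate of extinction risk this century
risk_reduction = 0.01     # shaving 1% off that risk

worthwhile_spend = value_of_all_us_lives * extinction_risk * risk_reduction
print(f"Worth spending up to ~${worthwhile_spend / 1e12:.1f} trillion for that 1% reduction")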
In today’s wide-ranging conversation, Carl and Rob also cover: • A few reasons Carl isn't excited by 'strong longtermism' • How x-risk reduction compares to GiveWell recommendations • Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change • The history of bioweapons • Whether gain-of-function research is justifiable • Successes and failures around COVID-19 • The history of existential risk • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Katy Moore
10/5/2021 · 3 hours, 48 minutes, 39 seconds

#111 – Mushtaq Khan on using institutional economics to predict effective government reforms

If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines. The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in their attempts to stop this theft. They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here? According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil. Links to learn more, summary and full transcript. In today's episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world. Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country's rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us. The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected. Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. Mushtaq's rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they're participating in, they almost always win out. To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers. Trying to impose a new way of doing things from the top down wasn't how Europe modernised, and it won't work elsewhere either. In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption. In this extensive interview Rob and Mushtaq cover this and much more, including: • How does one test theories like this? • Why are companies in some poor countries so much less productive than their peers in rich countries? • Have rich countries just legalized the corruption in their societies? • What are the big live debates in institutional economics? • Should poor countries protect their industries from foreign competition? • How can listeners use these theories to predict which policies will work in their own countries? 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
9/10/2021 · 3 hours, 20 minutes, 25 seconds

#110 – Holden Karnofsky on building aptitudes and kicking ass

Holden Karnofsky helped create two of the most influential organisations in the effective philanthropy world. So when he outlines a different perspective on career advice than the one we present at 80,000 Hours — we take it seriously. Holden disagrees with us on a few specifics, but it's more than that: he prefers a different vibe when making career choices, especially early in one's career. Links to learn more, summary and full transcript. While he might ultimately recommend similar jobs to those we recommend at 80,000 Hours, the reasons are often different. At 80,000 Hours we often talk about ‘paths’ to working on what we currently think of as the most pressing problems in the world. That’s partially because people seem to prefer the most concrete advice possible. But Holden thinks a problem with that kind of advice is that it’s hard to take actions based on it if your job options don’t match well with your plan, and it’s hard to get a reliable signal about whether you're making the right choices. How can you know you’ve chosen the right cause? How can you know the job you’re aiming for will be helpful to that cause? And what if you can’t get a job in this area at all? Holden prefers to focus on ‘aptitudes’ that you can build in all sorts of different roles and cause areas, which can later be applied more directly. Even if the current role doesn’t work out, or your career goes in wacky directions you’d never anticipated (like so many successful careers do), or you change your whole worldview — you’ll still have access to this aptitude. So instead of trying to become a project manager at an effective altruism organisation, maybe you should just become great at project management. Instead of trying to become a researcher at a top AI lab, maybe you should just become great at digesting hard problems. Who knows where these skills will end up being useful down the road? Holden doesn’t think you should spend much time worrying about whether you’re having an impact in the first few years of your career — instead you should just focus on learning to kick ass at something, knowing that most of your impact is going to come decades into your career. He thinks as long as you’ve gotten good at something, there will usually be a lot of ways that you can contribute to solving the biggest problems. But Holden’s most important point, perhaps, is this: Be very careful about following career advice at all. He points out that a career is such a personal thing that it’s very easy for the advice-giver to be oblivious to important factors having to do with your personality and unique situation. He thinks it’s pretty hard for anyone to really have justified empirical beliefs about career choice, and that you should be very hesitant to make a radically different decision than you would have otherwise based on what some person (or website!) tells you to do. Instead, he hopes conversations like these serve as a way of prompting discussion and raising points that you can apply your own personal judgment to. That's why in the end he thinks people should look at their career decisions through his aptitude lens, the '80,000 Hours lens', and ideally several other frameworks as well. Because any one perspective risks missing something important. 
Holden and Rob also cover: • Ways to be helpful to longtermism outside of careers • Why finding a new cause area might be overrated • Historical events that deserve more attention • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
8/26/2021 · 2 hours, 46 minutes, 5 seconds

#109 – Holden Karnofsky on the most important century

Will the future of humanity be wild, or boring? It's natural to think that if we're trying to be sober and measured, and predict what will really happen rather than spin an exciting story, it's more likely than not to be sort of... dull. But there's also good reason to think that that is simply impossible. The idea that there's a boring future that's internally coherent is an illusion that comes from not inspecting those scenarios too closely. At least that is what Holden Karnofsky — founder of charity evaluator GiveWell and foundation Open Philanthropy — argues in his new article series titled 'The Most Important Century'. He hopes to lay out part of the worldview that's driving the strategy and grantmaking of Open Philanthropy's longtermist team, and encourage more people to join his efforts to positively shape humanity's future. Links to learn more, summary and full transcript. The bind is this. For the first 99% of human history the global economy (initially mostly food production) grew very slowly: under 0.1% a year. But since the industrial revolution around 1800, growth has exploded to over 2% a year. To us in 2020 that sounds perfectly sensible and the natural order of things. But Holden points out that in fact it's not only unprecedented, it also can't continue for long. The power of compounding increases means that to sustain 2% growth for just 10,000 years, 5% as long as humanity has already existed, would require us to turn every individual atom in the galaxy into an economy as large as the Earth's today. Not super likely. So what are the options? First, maybe growth will slow and then stop. In that case we today live in the single miniscule slice in the history of life during which the world rapidly changed due to constant technological advances, before intelligent civilization permanently stagnated or even collapsed. What a wild time to be alive! Alternatively, maybe growth will continue for thousands of years. In that case we are at the very beginning of what would necessarily have to become a stable galaxy-spanning civilization, harnessing the energy of entire stars among other feats of engineering. We would then stand among the first tiny sliver of all the quadrillions of intelligent beings who ever exist. What a wild time to be alive! Isn't there another option where the future feels less remarkable and our current moment not so special? While the full version of the argument above has a number of caveats, the short answer is 'not really'. We might be in a computer simulation and our galactic potential all an illusion, though that's hardly any less weird. And maybe the most exciting events won't happen for generations yet. But on a cosmic scale we'd still be living around the universe's most remarkable time. Holden himself was very reluctant to buy into the idea that today’s civilization is in a strange and privileged position, but has ultimately concluded "all possible views about humanity's future are wild". 
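The compounding claim above is easy to sanity-check. A minimal sketch (the galaxy's atom count is a rough order-of-magnitude figure we've assumed, not one from the episode):

# Sanity-check the claim above: what does 2% annual growth for 10,000 more years imply?
growth_factor = 1.02 ** 10_000
print(f"The economy would have to grow by a factor of about {growth_factor:.0e}")  # ~4e+86

atoms_in_galaxy = 1e70  # generous order-of-magnitude estimate (our assumption)
print(f"Earth-sized economies required per atom in the galaxy: {growth_factor / atoms_in_galaxy:.0e}")

Even with a generous atom count, each atom would need to host vastly more economic output than the entire Earth produces today, which is why Holden treats indefinite continuation of current growth rates as off the table.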
In the conversation Holden and Rob cover each part of the 'Most Important Century' series, including: • The case that we live in an incredibly important time • How achievable-seeming technology - in particular, mind uploading - could lead to unprecedented productivity, control of the environment, and more • How economic growth is faster than it can be for all that much longer • Forecasting transformative AI • And the implications of living in the most important century Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
8/19/2021 · 2 hours, 19 minutes, 1 second

#108 – Chris Olah on working at top AI labs without an undergrad degree

Chris Olah has had a fascinating and unconventional career path. Most people who want to pursue a research career feel they need a degree to get taken seriously. But Chris not only doesn't have a PhD; he doesn’t even have an undergraduate degree. After dropping out of university to help defend an acquaintance who was facing bogus criminal charges, Chris started independently working on machine learning research, and eventually got an internship at Google Brain, a leading AI research group. In this interview — a follow-up to our episode on his technical work — we discuss what, if anything, can be learned from his unusual career path. Should more people pass on university and just throw themselves at solving a problem they care about? Or would it be foolhardy for others to try to copy a unique case like Chris’? Links to learn more, summary and full transcript. We also cover some of Chris' personal passions over the years, including his attempts to reduce what he calls 'research debt' by starting a new academic journal called Distill, focused just on explaining existing results unusually clearly. As Chris explains, as fields develop they accumulate huge bodies of knowledge that researchers are meant to be familiar with before they start contributing themselves. But the weight of that existing knowledge — and the need to keep up with what everyone else is doing — can become crushing. It can take someone until their 30s or later to earn their stripes, and sometimes a field will split in two just to make it possible for anyone to stay on top of it. If that were unavoidable it would be one thing, but Chris thinks we're nowhere near communicating existing knowledge as well as we could. Incrementally improving an explanation of a technical idea might take a single author weeks to do, but could go on to save a day for thousands, tens of thousands, or hundreds of thousands of students, if it becomes the best option available. Despite that, academics have little incentive to produce outstanding explanations of complex ideas that can speed up the education of everyone coming up in their field. And some even see the process of deciphering bad explanations as a desirable rite of passage all should pass through, just as they did. So Chris tried his hand at chipping away at this problem — but concluded the nature of the problem wasn't quite what he originally thought. In this conversation we talk about that, as well as: • Why highly thoughtful cold emails can be surprisingly effective, but average cold emails do little • Strategies for growing as a researcher • Thinking about research as a market • How Chris thinks about writing outstanding explanations • The concept of 'micromarriages' and ‘microbestfriendships’ • And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
8/11/2021 · 1 hour, 33 minutes, 23 seconds

#107 – Chris Olah on what the hell is going on inside neural networks

Big machine learning models can identify plant species better than any human, write passable essays, beat you at a game of Starcraft 2, figure out how a photo of Tobey Maguire and the word 'spider' are related, solve the 60-year-old 'protein folding problem', diagnose some diseases, play romantic matchmaker, write solid computer code, and offer questionable legal advice. Humanity made these amazing and ever-improving tools. So how do our creations work? In short: we don't know. Today's guest, Chris Olah, finds this both absurd and unacceptable. Over the last ten years he has been a leader in the effort to unravel what's really going on inside these black boxes. As part of that effort he helped create the famous DeepDream visualisations at Google Brain, reverse engineered the CLIP image classifier at OpenAI, and is now continuing his work at Anthropic, a new $100 million research company that tries to "co-develop the latest safety techniques alongside scaling of large ML models". Links to learn more, summary and full transcript. Despite having a huge fan base thanks to his explanations of ML and tweets, today's episode is the first long interview Chris has ever given. It features his personal take on what we've learned so far about what ML algorithms are doing, and what's next for this research agenda at Anthropic. His decade of work has borne substantial fruit, producing an approach for looking inside the mess of connections in a neural network and back out what functional role each piece is serving. Among other things, Chris and team found that every visual classifier seems to converge on a number of simple common elements in their early layers — elements so fundamental they may exist in our own visual cortex in some form. They also found networks developing 'multimodal neurons' that would trigger in response to the presence of high-level concepts like 'romance', across both images and text, mimicking the famous 'Halle Berry neuron' from human neuroscience. While reverse engineering how a mind works would make any top-ten list of the most valuable knowledge to pursue for its own sake, Chris's work is also of urgent practical importance. Machine learning models are already being deployed in medicine, business, the military, and the justice system, in ever more powerful roles. The competitive pressure to put them into action as soon as they can turn a profit is great, and only getting greater. But if we don't know what these machines are doing, we can't be confident they'll continue to work the way we want as circumstances change. Before we hand an algorithm the proverbial nuclear codes, we should demand more assurance than "well, it's always worked fine so far". But by peering inside neural networks and figuring out how to 'read their minds' we can potentially foresee future failures and prevent them before they happen. Artificial neural networks may even be a better way to study how our own minds work, given that, unlike a human brain, we can see everything that's happening inside them — and having been posed similar challenges, there's every reason to think evolution and 'gradient descent' often converge on similar solutions. 
Among other things, Rob and Chris cover: • Why Chris thinks it's necessary to work with the largest models • What fundamental lessons we've learned about how neural networks (and perhaps humans) think • How interpretability research might help make AI safer to deploy, and Chris’ response to skeptics • Why there's such a fuss about 'scaling laws' and what they say about future AI progress Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
8/4/2021 · 3 hours, 9 minutes, 20 seconds

#106 – Cal Newport on an industrial revolution for office work

If you wanted to start a university department from scratch, and attract as many superstar researchers as possible, what’s the most attractive perk you could offer? How about just not needing an email address? According to today's guest, Cal Newport — computer science professor and best-selling author of A World Without Email — it should seem obscene and absurd for a world-renowned vaccine researcher with decades of experience to spend a third of their time fielding requests from HR, building management, finance, and so on. Yet with offices organised the way they are today, nothing could be more natural. Links to learn more, summary and full transcript. But this isn’t just a problem at the elite level — this affects almost all of us. A typical U.S. office worker checks their email 80 times a day, once every six minutes on average. Data analysis by RescueTime found that a third of users checked email or Slack every three minutes or more, averaged over a full work day. Each time that happens our focus is broken, killing our momentum on the knowledge work we're supposedly paid to do. When we lament how much email and chat have reduced our focus and filled our days with anxiety and frenetic activity, we most naturally blame 'weakness of will'. If only we had the discipline to check Slack and email once a day, all would be well — or so the story goes. Cal believes that line of thinking fundamentally misunderstands how we got to a place where knowledge workers can rarely find more than five consecutive minutes to spend doing just one thing. Since the Industrial Revolution, a combination of technology and better organization has allowed the manufacturing industry to produce a hundred times as much with the same number of people. Cal says that by comparison, it's not clear that specialised knowledge workers like scientists, authors, or senior managers are *any* more productive than they were 50 years ago. If the knowledge sector could achieve even a tiny fraction of what manufacturing has, and find a way to coordinate its work that raised productivity by just 1%, that would generate on the order of $100 billion globally each year. Since the 1990s, when everyone got an email address and most lost their assistants, that lack of direction has led to what Cal calls the 'hyperactive hive mind': everyone sends emails and chats to everyone else, all through the day, whenever they need something. Cal points out that this is so normal we don't even think of it as a way of organising work, but it is: it's what happens when management does nothing to enable teams to decide on a better way of organising themselves. A few industries have made progress taming the 'hyperactive hive mind'. But on Cal's telling, this barely scratches the surface of the improvements that are possible within knowledge work. And reining in the hyperactive hive mind won't just help people do higher quality work; it will free them from the 24/7 anxiety that there's someone somewhere they haven't gotten back to. In this interview Cal and Rob also cover: • Is this really one of the world's most pressing problems? • The historical origins of the 'hyperactive hive mind' • The harm caused by attention switching • Who's working to solve the problem and how • Cal's top productivity advice for high school students, university students, and early career workers • And much more Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
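Two of the figures above can be reconstructed with simple arithmetic (the eight-hour workday is our assumption; the ~$10 trillion base is simply what the quoted numbers imply, not a figure from the episode):

# Email checking: 80 checks spread over an assumed 8-hour workday.
checks_per_day = 80
workday_minutes = 8 * 60
print(workday_minutes / checks_per_day, "minutes between checks")  # 6.0

# Productivity: a $100 billion annual gain from a 1% improvement implies
# a global knowledge-work output base of roughly $10 trillion per year.
implied_base = 100e9 / 0.01
print(f"Implied knowledge-work output: ${implied_base / 1e12:.0f} trillion per year")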
7/28/2021 · 1 hour, 53 minutes, 26 seconds

#105 – Alexander Berger on improving global health and wellbeing in clear and direct ways

The effective altruist research community tries to identify the highest impact things people can do to improve the world. Unsurprisingly, given the difficulty of such a massive and open-ended project, very different schools of thought have arisen about how to do the most good. Today's guest, Alexander Berger, leads Open Philanthropy's 'Global Health and Wellbeing' programme, where he oversees around $175 million in grants each year, and ultimately aspires to disburse billions in the most impactful ways he and his team can identify. This programme is the flagship effort representing one major effective altruist approach: try to improve the health and wellbeing of humans and animals that are alive today, in clearly identifiable ways, applying an especially analytical and empirical mindset. Links to learn more, summary, Open Phil jobs, and full transcript. The programme makes grants to tackle easily-prevented illnesses among the world's poorest people, offer cash to people living in extreme poverty, prevent cruelty to billions of farm animals, advance biomedical science, and improve criminal justice and immigration policy in the United States. Open Philanthropy's researchers rely on empirical information to guide their decisions where it's available, and where it's not, they aim to maximise expected benefits to recipients through careful analysis of the gains different projects would offer and their relative likelihoods of success. This 'global health and wellbeing' approach — sometimes referred to as 'neartermism' — contrasts with another big school of thought in effective altruism, known as 'longtermism', which aims to direct the long-term future of humanity and its descendants in a positive direction. Longtermism bets that while it's harder to figure out how to benefit future generations than people alive today, the total number of people who might live in the future is far greater than the number alive today, and this gain in scale more than offsets that lower tractability. The debate between these two very different theories of how to best improve the world has been one of the most significant within effective altruist research since its inception. Alexander first joined the influential charity evaluator GiveWell in 2011, and since then has conducted research alongside top thinkers on global health and wellbeing and longtermism alike, ultimately deciding to dedicate his efforts to improving the world today in identifiable ways. In this conversation Alexander advocates for that choice, explaining the case in favour of adopting the 'global health and wellbeing' mindset, while going through the arguments for the longtermist approach that he finds most and least convincing. 
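Where hard evidence runs out, the decision rule described above is essentially an expected-value comparison. A stylised sketch with invented numbers (none of these figures come from Open Philanthropy):

# Stylised expected-value comparison of the kind described above.
# Benefit units and probabilities are invented purely for illustration.
projects = {
    "distribute anti-malaria bed nets": {"benefit_if_success": 1.0,  "p_success": 0.95},
    "policy advocacy campaign":         {"benefit_if_success": 40.0, "p_success": 0.03},
}

for name, p in projects.items():
    expected_benefit = p["benefit_if_success"] * p["p_success"]
    print(f"{name}: expected benefit = {expected_benefit:.2f}")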
Rob and Alexander also tackle: • Why it should be legal to sell your kidney, and why Alexander donated his to a total stranger • Why it's shockingly hard to find ways to give away large amounts of money that are more cost effective than distributing anti-malaria bed nets • How much you gain from working with tight feedback loops • Open Philanthropy's biggest wins • Why Open Philanthropy engages in 'worldview diversification' by having both a global health and wellbeing programme and a longtermist programme as well • Whether funding science and political advocacy is a good way to have more social impact • Whether our effects on future generations are predictable or unforeseeable • What problems the global health and wellbeing team works to solve and why • Opportunities to work at Open Philanthropy Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris Audio mastering: Ben Cordell Transcriptions: Sofia Davis-Fogel
7/12/2021 · 2 hours, 54 minutes, 31 seconds

#104 – Pardis Sabeti on the Sentinel system for detecting and stopping pandemics

When the first person with COVID-19 went to see a doctor in Wuhan, nobody could tell that it wasn’t a familiar disease like the flu — that we were dealing with something new. How much death and destruction could we have avoided if we'd had a hero who could? That's what the last Assistant Secretary of Defense Andy Weber asked on the show back in March. Today’s guest Pardis Sabeti is a professor at Harvard, fought Ebola on the ground in Africa during the 2014 outbreak, runs her own lab, co-founded a company that produces next-level testing, and is even the lead singer of a rock band. If anyone is going to be that hero in the next pandemic — it just might be her. Links to learn more, summary and full transcript. She is a co-author of the SENTINEL proposal, a practical system for detecting new diseases quickly, using an escalating series of three novel diagnostic techniques. The first method, called SHERLOCK, uses CRISPR-based detection to identify familiar viruses in a simple, inexpensive filter paper test, using non-invasive samples. If SHERLOCK draws a blank, we escalate to the second step, CARMEN, an advanced version of SHERLOCK that uses microfluidics and CRISPR to simultaneously detect hundreds of viruses and viral strains. More expensive, but far more comprehensive. If neither SHERLOCK nor CARMEN detects a known pathogen, it's time to pull out the big gun: metagenomic sequencing. More expensive still, but sequencing all the genetic material in a patient sample lets you identify and track every virus — known and unknown — in a sample. If Pardis and her team succeed, the potential patient zero of a future pandemic may: 1. Go to the hospital with flu-like symptoms, and immediately be tested using SHERLOCK — which will come back negative 2. Take the CARMEN test for a much broader range of illnesses — which will also come back negative 3. Their sample will be sent for metagenomic sequencing, which will reveal that they're carrying a new virus we'll have to contend with 4. At all levels, information will be recorded in a cloud-based data system that shares data in real time; the hospital will be alerted and told to quarantine the patient 5. The world will be able to react weeks — or even months — faster, potentially saving millions of lives It's a wonderful vision, and one humanity is ready to test out. But there are all sorts of practical questions, such as: • How do you scale these technologies, including to remote and rural areas? • Will doctors everywhere be able to operate them? • Who will pay for it? • How do you maintain the public’s trust and protect against misuse of sequencing data? • How do you avoid drowning in the data the system produces? In this conversation Pardis and Rob address all those questions, as well as: • Pardis’ history with trying to control emerging contagious diseases • The potential of mRNA vaccines • Other emerging technologies • How to best educate people about pandemics • The pros and cons of gain-of-function research • Turning mistakes into exercises you can learn from • Overcoming enormous life challenges • Why it’s so important to work with people you can laugh with • And much more Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
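A minimal sketch of the three-tier escalation logic described above (the function names, inputs, and return values are hypothetical stand-ins, not the real SENTINEL software):

# Hypothetical sketch of SENTINEL's escalating diagnostic flow.
def run_sherlock(sample):
    """Cheap CRISPR-based paper test for a handful of familiar viruses (stub)."""
    return sample.get("known_common_virus")

def run_carmen(sample):
    """Microfluidic panel covering hundreds of known viruses and strains (stub)."""
    return sample.get("known_rare_virus")

def run_metagenomic_sequencing(sample):
    """Sequence everything in the sample; can reveal a novel pathogen (stub)."""
    return sample.get("novel_pathogen", "no pathogen detected")

def diagnose(sample):
    # Escalate to the next, more expensive test only when the cheaper one draws a blank.
    for test in (run_sherlock, run_carmen):
        result = test(sample)
        if result is not None:
            return result
    return run_metagenomic_sequencing(sample)

print(diagnose({"novel_pathogen": "previously unseen virus"}))  # falls through to sequencing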
6/29/2021 · 2 hours, 20 minutes, 57 seconds

#103 – Max Roser on building the world's best source of COVID-19 data at Our World in Data

History is filled with stories of great people stepping up in times of crisis. Presidents averting wars; soldiers leading troops away from certain death; data scientists sleeping on the office floor to launch a new webpage a few days sooner. That last one is barely a joke — by our lights, people like today’s guest Max Roser should be viewed with similar admiration by historians of COVID-19. Links to learn more, summary and full transcript. Max runs Our World in Data, a small education nonprofit which began the pandemic with just six staff. But since last February his team has supplied essential COVID statistics to over 130 million users — among them BBC, The Financial Times, The New York Times, the OECD, the World Bank, the IMF, Donald Trump, Tedros Adhanom, and Dr. Anthony Fauci, just to name a few. An economist at Oxford University, Max Roser founded Our World in Data as a small side project in 2011 and has led it since, including through the wild ride of 2020. In today's interview Max explains how he and his team realized that if they didn't start making COVID data accessible and easy to make sense of, it wasn't clear when anyone would. Our World in Data wasn't naturally set up to become the world's go-to source for COVID updates. Up until then their specialty had been long articles explaining century-length trends in metrics like life expectancy — to the point that their graphing software was only set up to present yearly data. But the team eventually realized that the World Health Organization was publishing numbers that flatly contradicted themselves, most of the press was embarrassingly out of its depth, and countries were posting case data as images buried deep in their sites where nobody would find them. Even worse, nobody was reporting or compiling how many tests different countries were doing, rendering all those case figures largely meaningless. Trying to make sense of the pandemic was a time-consuming nightmare. If you were leading a national COVID response, learning what other countries were doing and whether it was working would take weeks of study — and that meant, with the walls falling in around you, it simply wasn't going to happen. Ministries of health around the world were flying blind. Disbelief ultimately turned to determination, and the Our World in Data team committed to do whatever had to be done to fix the situation. Overnight their software was quickly redesigned to handle daily data, and for the next few months Max and colleagues like Edouard Mathieu and Hannah Ritchie did little but sleep and compile COVID data. In this episode Max tells the story of how Our World in Data ran into a huge gap that never should have been there in the first place — and how they had to do it all again in December 2020 when, eleven months into the pandemic, there was nobody to compile global vaccination statistics. We also talk about: • Our World in Data's early struggles to get funding • Why government agencies are so bad at presenting data • Which agencies did a good job during the COVID pandemic (shout out to the European CDC) • How much impact Our World in Data has by helping people understand the world • How to deal with the unreliability of development statistics • Why research shouldn't be published as a PDF • Why academia under-incentivises data collection • The history of war • And much more Producer: Keiran Harris. Audio mastering: Ryan Kessler. Transcriptions: Sofia Davis-Fogel.
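For a flavour of the kind of redesign described above (moving from yearly to daily data), here is a minimal pandas sketch; the file name and column names are hypothetical, not Our World in Data's actual schema:

import pandas as pd

# Turn cumulative case counts into smoothed daily figures, per country.
df = pd.read_csv("covid_cases.csv", parse_dates=["date"])  # hypothetical input file
df = df.sort_values(["country", "date"])

df["new_cases"] = df.groupby("country")["total_cases"].diff()
df["new_cases_7day_avg"] = (
    df.groupby("country")["new_cases"]
      .transform(lambda s: s.rolling(7).mean())
)
print(df.tail())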
6/21/2021 · 2 hours, 22 minutes, 24 seconds

#102 – Tom Moynihan on why prior generations missed some of the biggest priorities of all

It can be tough to get people to truly care about reducing existential risks today. But spare a thought for the longtermist of the 17th century: they were surrounded by people who thought extinction was literally impossible. Today’s guest Tom Moynihan, intellectual historian and author of the book X-Risk: How Humanity Discovered Its Own Extinction, says that until the 18th century, almost everyone — including early atheists — couldn’t imagine that humanity or life could simply disappear because of an act of nature. Links to learn more, summary and full transcript. This is largely because of the prevalence of the ‘principle of plenitude’, which Tom defines as saying: “Whatever can happen will happen. In its stronger form it says whatever can happen will happen reliably and recurrently. And in its strongest form it says that all that can happen is happening right now. And that's the way things will be forever.” This has the implication that if humanity ever disappeared for some reason, then it would have to reappear. So why would you ever worry about extinction? Here are 4 more commonly held beliefs from generations past that Tom shares in the interview: • All regions of matter that can be populated will be populated: In other words, there are aliens on every planet, because it would be a massive waste of real estate if all of them were just inorganic masses, where nothing interesting was going on. This also led to the idea that if you dug deep into the Earth, you’d potentially find thriving societies. • Aliens were human-like, and shared the same values as us: they would have the same moral beliefs, and the same aesthetic beliefs. The idea that aliens might be very different from us only arrived in the 20th century. • Fossils were rocks that had gotten a bit too big for their britches and were trying to act like animals: they couldn’t actually move, so becoming an imprint of an animal was the next best thing. • All future generations were contained in miniature form, Russian-doll style, in the sperm of the first man: preformation was the idea that within the ovule or the sperm of an animal is contained its offspring in miniature form, and the French philosopher Malebranche said, well, if one is contained in the other one, then surely that goes on forever. And here are another three that weren’t held widely, but were proposed by scholars and taken seriously: • Life preceded the existence of rocks: Living things, like clams and mollusks, came first, and they extruded the earth. • No idea can be wrong: Nothing we can say about the world is wrong in a strong sense, because at some point in the future or the past, it has been true. • Maybe we were living before the Trojan War: Aristotle said that we might actually be living before Troy, because it — like every other event — will repeat at some future date. And he said that actually, the set of possibilities might be so narrow that it might be safer to say that we actually live before Troy. But Tom tries to be magnanimous when faced with these incredibly misguided worldviews. 
In this nearly four-hour long interview, Tom and Rob cover all of these ideas, as well as: • How we know people really believed such things • How we moved on from these theories • How future intellectual historians might view our beliefs today • The distinction between ‘apocalypse’ and ‘extinction’ • Utopias and dystopias • Big ideas that haven’t flowed through into all relevant fields yet • Intellectual history as a possible high-impact career • And much more Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
6/11/2021 · 3 hours, 56 minutes, 43 seconds

#101 – Robert Wright on using cognitive empathy to save the world

In 2003, Saddam Hussein refused to let Iraqi weapons scientists leave the country to be interrogated. Given the overwhelming domestic support for an invasion at the time, most key figures in the U.S. took that as confirmation that he had something to hide — probably an active WMD program. But what about alternative explanations? Maybe those scientists knew about past crimes. Or maybe they’d defect. Or maybe giving in to that kind of demand would have humiliated Hussein in the eyes of enemies like Iran and Saudi Arabia. According to today’s guest Robert Wright, host of the popular podcast The Wright Show, these are the kinds of things that might have come up if people were willing to look at things from Saddam Hussein’s perspective. Links to learn more, summary and full transcript. He calls this ‘cognitive empathy’. It's not feeling-your-pain-type empathy — it's just trying to understand how another person thinks. He says if you pitched this kind of thing back in 2003 you’d be shouted down as a 'Saddam apologist' — and he thinks the same is true today when it comes to regimes in China, Russia, Iran, and North Korea. The two Roberts in today’s episode — Bob Wright and Rob Wiblin — agree that removing this taboo against perspective taking, even with people you consider truly evil, could potentially significantly improve discourse around international relations. They feel that if we could spread the meme that if you’re able to understand what dictators are thinking and calculating, based on their country’s history and interests, it seems like we’d be less likely to make terrible foreign policy errors. But how do you actually do that? Bob’s new ‘Apocalypse Aversion Project’ is focused on creating the necessary conditions for solving non-zero-sum global coordination problems, something most people are already on board with. And in particular he thinks that might come from enough individuals “transcending the psychology of tribalism”. He doesn’t just mean rage and hatred and violence, he’s also talking about cognitive biases. Bob makes the striking claim that if enough people in the U.S. had been able to combine perspective taking with mindfulness — the ability to notice and identify thoughts as they arise — then the U.S. might have even been able to avoid the invasion of Iraq. Rob pushes back on how realistic this approach really is, asking questions like: • Haven’t people been trying to do this since the beginning of time? • Is there a great novel angle that will change how a lot of people think and behave? • Wouldn’t it be better to focus on a much narrower task, like getting more mindfulness and meditation and reflectiveness among the U.S. foreign policy elite? But despite the differences in approaches, Bob has a lot of common ground with 80,000 Hours — and the result is a fun back-and-forth about the best ways to achieve shared goals. Bob starts by questioning Rob about effective altruism, and they go on to cover a bunch of other topics, such as: • Specific risks like climate change and new technologies • How to achieve social cohesion • The pros and cons of society-wide surveillance • How Rob got into effective altruism If you're interested to hear more of Bob's interviews you can subscribe to The Wright Show anywhere you're getting this one. You can also watch videos of this and all his other episodes on Bloggingheads.tv. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
5/28/2021 · 1 hour, 35 minutes, 59 seconds

#100 – Having a successful career with depression, anxiety and imposter syndrome

Today's episode is one of the most remarkable and really, unique, pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!). The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it's rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so. Links to learn more, summary and full transcript. The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today. The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort. Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better. Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world. We hope that the episode will: 1. Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles. 2. Give insight into what it's like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully. So we think this episode will be valuable for: • People who have experienced mental health problems or might in future; • People who have had troubles with stress, anxiety, low mood, low self esteem, and similar issues, even if their experience isn’t well described as ‘mental illness’; • People who have never experienced these problems but want to learn about what it's like, so they can better relate to and assist family, friends or colleagues who do. In other words, we think this episode could be worthwhile for almost everybody. Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts. If you don’t want to hear the most intense section, you can skip the chapter called ‘Disaster’ (44–57mins). And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’ (1hr 11mins). 
If you're feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the U.S. (800-273-8255) and Samaritans in the U.K. (116 123). Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
5/19/2021 · 2 hours, 51 minutes, 20 seconds

#99 – Leah Garcés on turning adversaries into allies to change the chicken industry

For a chance to prevent enormous amounts of suffering, would you be brave enough to drive five hours to a remote location to meet a man who seems likely to be your enemy, knowing that it might be an ambush? Today’s guest — Leah Garcés — was. That man was a chicken farmer named Craig Watts, and that ambush never happened. Instead, Leah and Craig forged a friendship and a partnership focused on reducing suffering on factory farms. Leah, now president of Mercy For Animals (MFA), tried for years to get access to a chicken farm to document the horrors she knew were happening behind closed doors. It made sense that no one would let her in — why would the evil chicken farmers behind these atrocities ever be willing to help her take them down? But after sitting with Craig on his living room floor for hours and listening to his story, she discovered that he wasn’t evil at all — in fact he was just stuck in a cycle he couldn’t escape, forced to use methods he didn’t endorse. Links to learn more, summary and full transcript. Most chicken farmers have enormous debts they are constantly struggling to pay off, make very little money, and have to work in terrible conditions — their main activity most days is finding and killing the sick chickens in their flock. Craig was one of very few farmers close to finally paying off his debts, which made him slightly less vulnerable to retaliation. That opened up the possibility for him to work with Leah. Craig let Leah openly film inside the chicken houses, and shared highly confidential documents about the antibiotics put into the feed. That led to a viral video, and a New York Times story. The villain of that video was Jim Perdue, CEO of one of the biggest meat companies in the world. They show him saying, "Farmers are happy. Chickens are happy. There's a lot of space. They're clean." And then they show the grim reality. For years, Perdue wouldn’t speak to Leah. But remarkably, when they actually met in person, she again managed to forge a meaningful relationship with a natural adversary. She was able to put aside her utter contempt for the chicken industry and see Craig and Jim as people, not cartoonish villains. Leah believes that you need to be willing to sit down with anyone who has the power to solve a problem that you don’t — recognising them as human beings with a lifetime of complicated decisions behind their actions. And she stresses that finding or making a connection is really important. In the case of Jim Perdue, it was the fact they both had adopted children. Because of this, they were able to forget that they were supposed to be enemies in that moment, and build some trust. The other lesson that Leah highlights is that you need to look for win-wins and start there, rather than starting with disagreements. With Craig Watts, instead of opening with “How do I end his job”, she thought, “How can I find him a better job?” If you find solutions where everybody wins, you don’t need to spend resources fighting the former enemy. They’ll come to you. It turns out that conditions in chicken houses are perfect for growing hemp or mushrooms, so MFA have started their ‘Transfarmation project’ to help farmers like Craig escape from the prison of factory farming by converting their production from animals to plants. To convince farmers to leave behind a life of producing suffering, all you need to do is find them something better — which for many of them is almost anything else. 
Leah and Rob also talk about: • Why conditions for farmers are so bad • The benefits of creating a public ranking, and scoring companies against each other • The difficulty of enforcing corporate pledges • And much more. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
5/13/2021 · 2 hours, 26 minutes, 3 seconds

#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe, and always successful — but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful — you’ll be given a drug that makes you forget the experience. You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.” So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour. Which patient would you rather be? Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future. Christian Tarsney, a philosopher at Oxford University's Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences. Links to learn more, summary and full transcript. That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past? One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it. But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn't care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about! Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven't played yet are still on the way. If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction. It’s a live debate that’s playing out in the philosophy of time, as well as in physics. For Christian, there are two big practical implications of these past, present, and future ethical comparison cases. The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people's past goals, including the goals of people who are now dead. The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born? 
Christian and Rob also cover several other big topics, including: • A possible solution to moral fanaticism • How much of humanity's resources we should spend on improving the long-term future • How large the expected value of the continued existence of Earth-originating civilization might be • How we should respond to uncertainty about the state of the world • The state of global priorities research • And much more. Producer: Keiran Harris. Audio mastering: Ryan Kessler. Transcriptions: Sofia Davis-Fogel.
5/5/2021 · 2 hours, 38 minutes, 21 seconds

#97 – Mike Berkowitz on keeping the US a liberal democratic country

Donald Trump’s attempt to overturn the results of the 2020 election split the Republican party. There were those who went along with it — 147 members of Congress raised objections to the official certification of electoral votes — but there were others who refused. These included Brad Raffensperger and Brian Kemp in Georgia, and Vice President Mike Pence. Although one could say that the latter Republicans showed great courage, the key to the split may lie less in differences of moral character or commitment to democracy, and more in what was being asked of them. Trump wanted the first group to break norms, but he wanted the second group to break the law. And while norms were indeed shattered, laws were upheld. Today’s guest Mike Berkowitz, executive director of the Democracy Funders Network, points out a problem we came to realize throughout the Trump presidency: So many of the things that we thought were laws were actually just customs. Links to learn more, summary and full transcript. So once you have leaders who don’t buy into those customs — like, say, that a president shouldn’t tell the Department of Justice who it should and shouldn’t be prosecuting — there’s nothing preventing said customs from being violated. And what happens if current laws change? A recent Georgia bill took away some of the powers of Georgia's Secretary of State — Brad Raffensperger. Mike thinks that's clearly retribution for Raffensperger's refusal to overturn the 2020 election results. But he also thinks it means that the next time someone tries to overturn the results of the election, they could get much farther than Trump did in 2020. In this interview Mike covers what he thinks are the three most important levers to push on to preserve liberal democracy in the United States: 1. Reforming the political system, by e.g. introducing new voting methods 2. Revitalizing local journalism 3. Reducing partisan hatred within the United States Mike says that American democracy, like democracy elsewhere in the world, is not an inevitability. The U.S. has institutions that are really important for the functioning of democracy, but they don't automatically protect themselves — they need people to stand up and protect them. In addition to the changes listed above, Mike also thinks that we need to harden more norms into laws, such that individuals have fewer opportunities to undermine the system. And inasmuch as laws provided the foundation for the likes of Raffensperger, Kemp, and Pence to exhibit political courage, if we can succeed in creating and maintaining the right laws — we may see many others following their lead. As Founding Father James Madison put it: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.” Mike and Rob also talk about: • What sorts of terrible scenarios we should actually be worried about, i.e. the difference between being overly alarmist and properly alarmist • How to reduce perverse incentives for political actors, including those to overturn election results • The best opportunities for donations in this space • And much more. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
4/20/2021 · 2 hours, 36 minutes, 9 seconds

The ten episodes of this show you should listen to first

Today we're launching a new podcast feed that might be useful to you and people you know. It's called 'Effective Altruism: An Introduction', and it's a carefully chosen selection of ten episodes of this show, with various new intros and outros to guide folks through them. Basically, as the number of episodes of this show has grown, it has become less and less practical to ask new subscribers to go back and listen through most of our archives. So naturally new subscribers want to know... what should I listen to first? What episodes will help me make sense of effective altruist thinking and get the most out of new episodes? We hope that 'Effective Altruism: An Introduction' will fill in that gap. Across the ten episodes, we cover what effective altruism at its core really is, what folks who are tackling a number of well-known problem areas are up to and why, some more unusual and speculative problems, and how we and the rest of the team here try to think through difficult questions as clearly as possible. Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well. Another gap it might fill is in helping you recommend the show to people, or suggest a way to learn more about effective altruist style thinking to people who are curious about it. If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to. You can find it by searching for effective altruism in your podcasting app, or by going to 80000hours.org/intro. We'd love to hear how you go listening to it yourself, or sharing it with others in your life. Get in touch by emailing [email protected].
4/15/2021 · 3 minutes, 2 seconds

#96 – Nina Schick on disinformation and the rise of synthetic media

You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.? Today’s guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry in terms of making a very sophisticated ‘deepfake’ video today are a lot higher than people think. But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that's currently only accessible to Hollywood studios. So is it just a matter of time until we’ll be right to be terrified of this stuff? Links to learn more, summary and full transcript. Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: “Everything exists within this information ecosystem, it encompasses everything.” We haven’t done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as: • Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source? • If Photoshop didn’t lead to total chaos, why should this be any different? But the grim reality is that if you wrote “I believe that the world will end on April 6, 2022” and pasted it next to a photo of Albert Einstein — a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive. She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied. Consider Trump’s infamous Access Hollywood tape. If that happened in 2020 instead of 2016, he would have almost certainly claimed it was fake — and that claim wouldn’t be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say “that video is fake”? Nina says that undeniably, this technology is going to give bad actors a lot of scope for not having accountability for their actions. As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can't agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society? Nina and Rob also talk about a bunch of other topics, including: • The history of disinformation, and groups who sow disinformation professionally • How deepfake pornography is used to attack and silence women activists • The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes • Whether we should make it illegal to make a deepfake of someone without their permission • And the coolest positive uses of this technology. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
4/6/2021 · 2 hours, 3 seconds

#95 – Kelly Wanser on whether to deliberately intervene in the climate

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian peninsula, or chemically induce snow for skiers in Colorado. 100 years? 50 years? 20? Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well. Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate. Links to learn more, summary and full transcript. Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have. Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy. After considering everything from mirrors in space, to shiny objects on the ocean, to materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere. Cloud brightening is a climate control approach that uses the spraying of a fine mist of sea water into clouds to make them 'whiter' so they reflect even more sunlight back into space. These ‘streaks’ in clouds are already created by ships because the particulates from their diesel engines inadvertently make clouds a bit brighter. Kelly says that scientists estimate that we're already lowering the global temperature this way by 0.5–1.1ºC, without even intending to. While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn't been funding to measure how much temperature change you get for a given amount of spray. And we won't want to dive into these methods head first because the atmosphere is a complex system we can't yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming the climate is surprisingly understudied. The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as: • It being riskier than doing nothing • That it will inevitably be dangerously political • And the risk of the 'double catastrophe', where a pandemic stops our climate interventions and temperatures sky-rocket at the worst time. Kelly and Rob also talk about: • The many climate interventions that are already happening • The most promising ideas in the field • And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
3/26/2021 · 1 hour, 24 minutes, 7 seconds

#94 – Ezra Klein on aligning journalism, politics, and what matters most

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When people look back on this era, is the interesting thing going to have been fights over whether or not the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously? Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there's little pre-existing infrastructure to push them. Links to learn more, summary and full transcript. He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know to cover (often these are journalists hired to write specifically about tax policy). To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there's very little infrastructure for thinking about it. There isn't a committee in Congress that primarily deals with AI, and no one has a dedicated AI position in the executive branch of the U.S. Government; nor are big AI think tanks in D.C. producing weekly articles for journalists they know to report on. All of this generates a strong 'path dependence' that can lock the media in to covering less important topics despite having no intention to do so. According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important." One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: "This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.” Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “...like catnip for readers.” Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can't make the audience interested in it, that is your failure — never the audience's failure. But is that really true? 
In today’s episode we explore that claim, as well as: • How many hours of news the average person should consume • Where the progressive movement is failing to live up to its values • Why Ezra thinks 'price gouging' is a bad idea • Where the FDA has failed on rapid at-home testing for COVID-19 • Whether we should be more worried about tail-risk scenarios • And his biggest critiques of the effective altruism community. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
3/20/2021 · 1 hour, 45 minutes, 20 seconds

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

COVID-19 has provided a vivid reminder of the power of biological threats. But the threat doesn't come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United States, but developed in large numbers by the Soviet Union, right up until its collapse — have the potential to spread globally and kill just as many as an all-out nuclear war. For five years today’s guest — Andy Weber — was the US Assistant Secretary of Defense responsible for biological and other weapons of mass destruction. While people primarily associate the Pentagon with waging wars, including most within the Pentagon itself, Andy is quick to point out that you can't have national security if your population remains at grave risk from natural and lab-created diseases. Andy's current mission is to spread the word that while bioweapons are terrifying, scientific advances also leave them on the verge of becoming an outdated technology. Links to learn more, summary and full transcript. He thinks there is an overwhelming case to increase our investment in two new technologies that could dramatically reduce the risk of bioweapons and end natural pandemics in the process. First, advances in genetic sequencing technology allow direct, real-time analysis of DNA or RNA fragments collected from the environment. You sample widely, and if you start seeing DNA sequences that you don't recognise — that sets off an alarm. Andy says that while desktop sequencers may be expensive enough that they're only in hospitals today, they're rapidly getting smaller, cheaper, and easier to use. In fact DNA sequencing has recently experienced the most dramatic cost decrease of any technology, declining by a factor of 10,000 since 2007. It's only a matter of time before they're cheap enough to put in every home. The second major breakthrough comes from mRNA vaccines, which are today being used to end the COVID pandemic. The wonder of mRNA vaccines is that they can instruct our cells to make any random protein we choose — and trigger a protective immune response from the body. By using the sequencing technology above, we can quickly get the genetic code that matches the surface proteins of any new pathogen, and switch that code into the mRNA vaccines we're already making. Making a new vaccine would become less like manufacturing a new iPhone and more like printing a new book — you use the same printing press and just change the words. So long as we kept enough capacity to manufacture and deliver mRNA vaccines on hand, a whole country could in principle be vaccinated against a new disease in months. In tandem these technologies could make advanced bioweapons a threat of the past. And in the process contagious disease could be brought under control like never before. Andy has always been pretty open and honest, but his retirement last year has allowed him to stop worrying about being seen to speak for the Department of Defense, or for the president of the United States – and we were able to get his forthright views on a bunch of interesting other topics, such as: • The chances that COVID-19 escaped from a research facility • Whether a US president can really truly launch nuclear weapons unilaterally • What he thinks should be the top priorities for the Biden administration • The time he and colleagues found 600kg of unsecured, highly enriched uranium sitting around in a barely secured facility in Kazakhstan, and eventually transported it to the United States • And much more. 
Job opportunity: Executive Assistant to Will MacAskill. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
3/12/2021 · 1 hour, 54 minutes, 20 seconds

#92 – Brian Christian on the alignment problem

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science. Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer. Brian has so much of substance to say this episode will likely be of interest to people who know a lot about AI as well as those who know a little, and of interest to people who are nervous about where AI is going as well as those who aren't nervous at all. Links to learn more, summary and full transcript. Here’s a tease of 10 Hollywood-worthy stories from the episode: • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience. • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early 1990s, using a computer with a tenth the processing capacity of an Apple Watch. • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen. • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net. • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die. • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible than reaching its purported destination. • Montezuma's Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple. • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies. • AlphaGo Zero: A computer program becomes superhuman at Chess and Go in under a day by attempting to imitate itself. • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of 2 random actions look more like a backflip. We also cover: • How reinforcement learning actually works, and some of its key achievements and failures • How a lack of curiosity can cause AIs to fail to be able to do basic things • The pitfalls of getting AI to imitate how we ourselves behave • The benefits of getting AI to infer what we must be trying to achieve • Why it’s good for agents to be uncertain about what they're doing • Why Brian isn’t that worried about explicit deception • The interviewees Brian most agrees with, and most disagrees with • Developments since Brian finished the manuscript • The effective altruism and AI safety communities • And much more Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
3/5/2021 · 2 hours, 55 minutes, 45 seconds

#91 – Lewis Bollard on big wins against factory farming and how they happened

I suspect today's guest, Lewis Bollard, might be the single best person in the world to interview to get an overview of all the methods that might be effective for putting an end to factory farming and what broader lessons we can learn from the experiences of people working to end cruelty in animal agriculture. That's why I interviewed him back in 2017, and it's why I've come back for an updated second dose four years later. That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, what those people are up to, and why. Lewis leads Open Philanthropy’s strategy for farm animal welfare, and since he joined in 2015 they’ve disbursed about $130 million in grants to nonprofits as part of this program. This episode certainly isn't only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest. Links to learn more, summary and full transcript. Some of those include: • Between 2019 and 2020, Beyond Meat's cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so when? How quickly can it reach price parity? • One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later? • One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer? • Costco, the second largest retailer in the U.S., is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference? We also cover: • Switzerland’s ballot measure on eliminating factory farming • What a Biden administration could mean for reducing animal suffering • How chicken is cheaper than peanuts • The biggest recent wins for farmed animals • Things that haven’t gone to plan in animal advocacy • Political opportunities for farmed animal advocates in Europe • How the US is behind Brazil and Israel on animal welfare standards • The value of increasing media coverage of factory farming • The state of the animal welfare movement • And much more If you’d like an introduction to the nature of the problem and why Lewis is working on it, in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
2/15/2021 · 2 hours, 33 minutes, 16 seconds

Rob Wiblin on how he ended up the way he is

This is a crosspost of an episode of the Eureka Podcast. The interviewer is Misha Saul, a childhood friend of Rob's, who he has known for over 20 years. While it's not an episode of our own show, we decided to share it with subscribers because it's fun, and because it touches on personal topics that we don't usually cover on the show. Rob and Misha cover: • How Rob's parents shaped who he is (if indeed they did) • Their shared teenage obsession with philosophy, which eventually led to Rob working at 80,000 Hours • How their politics were shaped by growing up in the 90s • How talking to Rob helped Misha develop his own very different worldview • Why The Lord of the Rings movies have held up so well • What was it like being an exchange student in Spain, and was learning Spanish a mistake? • Marriage and kids • Institutional decline and historical analogies for the US in 2021 • Making fun of teachers • Should we stop eating animals? Producer: Keiran Harris. Audio mastering: Ben Cordell.
2/3/2021 · 1 hour, 57 minutes, 56 seconds

#90 – Ajeya Cotra on worldview diversification and how big the future could be

You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?” You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours. But then you get up, walk outside, and look at the number on your box. ‘3’. Huh. Now you don’t know what to believe. If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928? In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as 'anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving. Links to learn more, summary and full transcript. Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by 'longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time. But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live. If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed. If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called 'doomsday argument' alone. If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead. There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn't work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants. In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely. 
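For anyone who wants to see the arithmetic behind the box puzzle, here is a minimal sketch in Python of the Bayesian update, on the assumption that you treat yourself as a randomly selected box occupant (the style of reasoning the doomsday argument leans on). Rival schools of anthropic reasoning reject that assumption, and that disagreement is part of what makes the puzzle hard.

from fractions import Fraction
# God's coin is fair, so each world starts with probability 1/2.
prior = {"heads: 10 boxes": Fraction(1, 2), "tails: 10 billion boxes": Fraction(1, 2)}
boxes = {"heads: 10 boxes": 10, "tails: 10 billion boxes": 10**10}
# If you are a random occupant in a world with N boxes, the chance your box is labelled '3' is 1/N.
likelihood = {world: Fraction(1, n) for world, n in boxes.items()}
# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalised = {w: prior[w] * likelihood[w] for w in prior}
total = sum(unnormalised.values())
for w in prior:
    print(w, float(unnormalised[w] / total))
# Under this way of updating, 'heads' comes out about a billion times more likely than 'tails':
# the same arithmetic the doomsday argument uses to suggest humanity's future may be short.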
They also discuss: • Which worldviews Open Phil finds most plausible, and how it balances them • How hard it is to get to other solar systems • The 'simulation argument' • When transformative AI might actually arrive • And much more. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
1/21/2021 · 2 hours, 59 minutes, 4 seconds

Rob Wiblin on self-improvement and research ethics

This is a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin. Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own. Among other things they cover: • Is trying to become a better person a good strategy for self-improvement • Why Rob thinks many people could achieve much more by finding themselves a line manager • Why interviews on this show are so damn long • Is it complicated to figure out what human beings value, or actually simpler than it seems • Why Rob thinks research ethics and institutional review boards are causing immense harm • Where prediction markets might be failing today and how to tell If you like this go ahead and subscribe to Spencer's show by searching for Clearer Thinking in your podcasting app. In particular, you might want to check out Spencer’s conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler. The 80,000 Hours Podcast is produced by Keiran Harris.
1/13/2021 · 2 hours, 30 minutes, 36 seconds

#73 - Phil Trammell on patient philanthropy and waiting to do good [re-release]

Rebroadcast: this episode was originally released in March 2020. To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million. This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now. He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science. Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here. • Links to learn more, summary and full transcript. • Latest version of Phil’s paper on the topic. What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways? And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own. Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse? Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended? Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes' Scholarships initial charter, which limited it to 'white Christian men'. Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good. Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my colleague Howie Lempel, we try to answer that, and also discuss: • Historical attempts at patient philanthropy • Should we have a mixed strategy, where some altruists are patient and others impatient? • Which causes most need money now? • What is the research frontier here? • What does this all mean for what listeners should do differently? Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the transcript linked above. 
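The compounding arithmetic behind those headline figures is easy to check. Here is a minimal sketch assuming a constant 5% annual return; the episode's figures are rounded, and real returns would of course fluctuate.

def compound(principal, annual_return, years):
    # Future value of a lump sum growing at a constant annual rate.
    return principal * (1 + annual_return) ** years

for years in (100, 200):
    print(years, round(compound(1_000, 0.05, years)))
# -> about $131,500 after 100 years and about $17.3 million after 200 years;
#    the episode's quoted figures are rounded a little differently but tell the same story.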
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcripts: Zakee Ulhaq.
1/7/2021 · 2 hours, 41 minutes, 5 seconds

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours [re-release]

Rebroadcast: this episode was originally released in April 2020. Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on their most plausible paths, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths. I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford's Global Priorities Institute, and these days I'm 80,000 Hours' Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers. So we thought it would be useful to discuss some on the show for everyone to hear. • Links to learn more, summary and full transcript. • See over 500 vacancies on our job board. • Apply for one-on-one career advising. Among other common topics, we cover: • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in. • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it's wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations. • Our suggested process for writing a longer term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties. • Why many listeners aren't spending enough time finding out about what the day-to-day work is like in paths they're considering, or reaching out to people for advice or opportunities. • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you're already accomplishing. I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it. If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people: 1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address. 2. Who don’t yet have close connections with people working at effective altruist organisations. 3. Who aren’t strongly locationally constrained. If you’re unsure, it doesn’t take long to apply, and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds. Also in this episode: • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with. • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path. • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. 
Transcriptions: Zakee Ulhaq.
12/30/2020 · 2 hours, 14 minutes, 49 seconds

#89 – Owen Cotton-Barratt on epistemic systems and layers of defense against potential global catastrophes

From one point of view academia forms one big 'epistemic' system — a process which directs attention, generates ideas, and judges which are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems. How these systems absorb, process, combine and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it basically entirely determines the direction of the future. With that in mind, today’s guest Owen Cotton-Barratt has founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers leeway to try to understand how the world works. Links to learn more, summary and full transcript. Instead of you having to pay for a masters degree, the RSP pays *you* to spend significant amounts of time thinking about high-level questions, like "What is important to do?” and “How can I usefully contribute?" Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible. The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics. It also attracts people without much experience, but who have a lot of ideas. If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?” Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory. Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years not pursuing an unproductive path could mean that you will ultimately have a much bigger impact with your career. RSP applications are set to open in the Spring of 2021 — but Owen thinks it’s helpful for people to think about it in advance. In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover: • Extinction risk classification and reduction strategies • Preventing small disasters from becoming large disasters • How likely we are to go from being in a collapsed state to going extinct • What most people should do if longtermism is true • Advice for mathematically-minded people • And much more Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris Audio mastering: Ben Cordell Transcript: Zakee Ulhaq
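One way to picture the 'layers of defense' framing is as a chain of conditional probabilities that all have to fail for a disaster to end in extinction. The sketch below is purely illustrative, with made-up numbers rather than estimates from Owen or the episode.

def extinction_risk(p_catastrophe, p_catastrophe_to_collapse, p_collapse_to_extinction):
    # Extinction requires every layer of defense to fail in turn.
    return p_catastrophe * p_catastrophe_to_collapse * p_collapse_to_extinction

baseline = extinction_risk(0.10, 0.10, 0.50)         # hypothetical numbers throughout
better_recovery = extinction_risk(0.10, 0.10, 0.25)  # strengthen only the final layer
print(round(baseline, 6), round(better_recovery, 6))
# -> 0.005 0.0025: halving the chance of failure at any single layer halves the overall risk.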
12/17/2020 · 2 hours, 38 minutes, 11 seconds

#88 – Tristan Harris on the need to change the incentives of social media companies

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages. Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it's hard to remember how recently it was a fringe view. It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we have less trust in our fellow citizens, due to having less of a shared basis of reality. But while it all feels plausible, how strong is the evidence that it's true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory. At the same time, historians think that the printing press probably generated religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them and them alone the ability to spread propaganda across the whole of Germany and the USSR. Fears about new technologies aren't always misguided. Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques. • Links to learn more, summary and full transcript. • FYI, the 2020 Effective Altruism Survey is closing soon: https://www.surveymonkey.co.uk/r/EAS80K2 Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address. Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what's in our interests as users and citizens? One way is to encourage a shift to a subscription model. One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site. But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50. Despite all the negatives, Tristan doesn’t want us to abandon the technologies he's concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world. Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we've ever had — tools that could educate and organise people better than anything that has come before. The tricky and open question is how to get there. Rob and Tristan also discuss: • Justified concerns vs. moral panics • The effect of social media on politics in the US and developing countries • Tips for individuals Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. 
Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
12/3/2020 · 2 hours, 35 minutes, 38 seconds

Benjamin Todd on what the effective altruism community most needs (80k team chat #4)

In the last '80k team chat' with Ben Todd and Arden Koehler, we discussed what effective altruism is and isn't, and how to argue for it. In this episode we turn now to what the effective altruism community most needs. • Links to learn more, summary and full transcript • The 2020 Effective Altruism Survey just opened. If you're involved with the effective altruism community, or sympathetic to its ideas, it would be wonderful if you could fill it out: https://www.surveymonkey.co.uk/r/EAS80K2 According to Ben, we can think of the effective altruism movement as having gone through several stages, categorised by what kind of resource has been most able to unlock more progress on important issues (i.e. by what's the 'bottleneck'). Plausibly, these stages are common for other social movements as well. • Needing money: In the first stage, when effective altruism was just getting going, more money (to do things like pay staff and put on events) was the main bottleneck to making progress. • Needing talent: In the second stage, we especially needed more talented people willing to work on whatever seemed most pressing. • Needing specific skills and capacity: In the third stage, which Ben thinks we're in now, the main bottlenecks are organizational capacity, infrastructure, and management to help train people up, as well as specialist skills that people can put to work now. What's next? Perhaps needing coordination -- the ability to make sure people keep working efficiently and effectively together as the community grows. Ben and I also cover the career implications of those stages, as well as the ability to save money and the possibility that someone else would do your job in your absence. If you’d like to learn more about these topics, you should check out a couple of articles on our site: • Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions • How replaceable are the top candidates in large hiring rounds? Why the answer flips depending on the distribution of applicant ability Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
11/12/2020 · 1 hour, 25 minutes, 20 seconds

#87 – Russ Roberts on whether it's more effective to help strangers, or people you know

If you want to make the world a better place, would it be better to help your niece with her SATs, or try to join the State Department to lower the risk that the US and China go to war? People involved in 80,000 Hours or the effective altruism community would be comfortable recommending the latter. This week's guest — Russ Roberts, host of the long-running podcast EconTalk, and author of a forthcoming book on decision-making under uncertainty and the limited ability of data to help — worries that might be a mistake. Links to learn more, summary and full transcript. I've been a big fan of Russ' show EconTalk for 12 years — in fact I have a list of my top 100 recommended episodes — so I invited him to talk about his concerns with how the effective altruism community tries to improve the world. These include: • Being too focused on the measurable • Being too confident we've figured out 'the best thing' • Being too credulous about the results of social science or medical experiments • Undermining people's altruism by encouraging them to focus on strangers, who it's naturally harder to care for • Thinking it's possible to predictably help strangers, who you don't understand well enough to know what will truly help • Adding levels of wellbeing across people when this is inappropriate • Encouraging people to pursue careers they won't enjoy These worries are partly informed by Russ' 'classical liberal' worldview, which involves a preference for free market solutions to problems, and nervousness about the big plans that sometimes come out of consequentialist thinking. While we do disagree on a range of things — such as whether it's possible to add up wellbeing across different people, and whether it's more effective to help strangers than people you know — I make the case that some of these worries are founded on common misunderstandings about effective altruism, or at least misunderstandings of what we believe here at 80,000 Hours. We primarily care about making the world a better place over thousands or even millions of years — and we wouldn’t dream of claiming that we could accurately measure the effects of our actions on that timescale. I'm more skeptical of medicine and empirical social science than most people, though not quite as skeptical as Russ (check out this quiz I made where you can guess which academic findings will replicate, and which won't). And while I do think that people should occasionally take jobs they dislike in order to have a social impact, those situations seem pretty few and far between. But Russ and I disagree about how much we really disagree. In addition to all the above we also discuss: • How to decide whether to have kids • Was the case for deworming children oversold? • Whether it would be better for countries around the world to be better coordinated Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
11/3/2020 · 1 hour, 49 minutes, 35 seconds

How much does a vote matter? (Article)

Today’s release is the latest in our series of audio versions of our articles. In this one — How much does a vote matter? — I investigate the two key things that determine the impact of your vote: • The chances of your vote changing an election’s outcome • How much better some candidates are for the world as a whole, compared to others. I then discuss what I think are the best arguments against voting in important elections: • If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake. • While voting itself doesn’t take long, knowing enough to accurately pick which candidate is better for the world actually does take substantial effort — effort that could be better allocated elsewhere. Finally, I look into the impact of donating to campaigns or working to ‘get out the vote’, which can be effective ways to generate additional votes for your preferred candidate. If you want to check out the links, footnotes and figures in today’s article, you can find those here. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris.
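As a rough illustration of how those two factors combine: the expected impact of a vote is simply the chance it changes the outcome multiplied by how much better one candidate is for the world. The numbers below are made-up placeholders, not figures from the article.

def expected_value_of_vote(p_decisive, value_gap):
    # Chance your vote flips the outcome, times the difference in value
    # between the candidates winning.
    return p_decisive * value_gap

# Hypothetical inputs: a 1-in-10-million chance of being decisive, and a
# $100 billion difference in how good the candidates are for the world.
print(round(expected_value_of_vote(1e-7, 1e11)))  # -> 10000, i.e. roughly $10,000 in expectation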
10/29/2020 · 31 minutes, 13 seconds

#86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us

Had World War 1 never happened, you might never have existed. It’s very unlikely that the exact chain of events that led to your conception would have happened otherwise — so perhaps you wouldn't have been born. Would that mean that it's better for you that World War 1 happened (regardless of whether it was better for the world overall)? On the one hand, if you're living a pretty good life, you might think the answer is yes – you get to live rather than not. On the other hand, it sounds strange to say that it's better for you to be alive, because if you'd never existed there'd be no you to be worse off. But if you wouldn't be worse off if you hadn't existed, can you be better off because you do? In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e., that it can't be better for someone to exist vs. not. Links to learn more, summary and full transcript. Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn't better for them, and thus, perhaps, that it's not better at all. This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn't otherwise have existed) — which would affect how we try to make the world a better place. Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out, we would have no particular reason to be concerned. Furthermore, it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out. This is our second episode with Professor Greaves. The first one was a big hit, so we thought we'd come back and dive into even more complex ethical issues. We discuss: • The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible • What it means for us to be 'clueless' about the consequences of our actions • Moral uncertainty -- what we should do when we don't know which moral theory is correct • Whether we should take a bet on a really small probability of a really great outcome • The field of global priorities research at the Global Priorities Institute and beyond Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
10/21/2020 · 2 hours, 24 minutes, 53 seconds

Benjamin Todd on the core of effective altruism and how to argue for it (80k team chat #3)

Today’s episode is the latest conversation between Arden Koehler and our CEO, Ben Todd. Ben’s been thinking a lot about effective altruism recently, including what it really is, how it's framed, and how people misunderstand it. We recently released an article on misconceptions about effective altruism – based on Will MacAskill’s recent paper The Definition of Effective Altruism – and this episode can act as a companion piece. Links to learn more, summary and full transcript. Arden and Ben cover a bunch of topics related to effective altruism: • How it isn’t just about donating money to fight poverty • Whether it includes a moral obligation to give • The rigorous argument for its importance • Objections to that argument • How to talk about effective altruism for people who aren't already familiar with it Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
9/22/2020 · 1 hour, 24 minutes, 6 seconds

Ideas for high impact careers beyond our priority paths (Article)

Today’s release is the latest in our series of audio versions of our articles. In this one, we go through some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future. Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven’t written full profiles for them yet—for example policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective. Others, like information security, we think might be as promising for many people as our priority paths, but because we haven’t investigated them much we’re still unsure. Still others seem like they’ll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit—for example research management. Finally, some—like becoming a public intellectual—clearly have the potential for a lot of impact, but we can’t recommend them widely because they don’t have the capacity to absorb a large number of people, are particularly risky, or both. If you want to check out the links in today’s article, you can find those here. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey before it closes on Sunday (13th of September). You can find it at 80000hours.org/survey Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
9/7/2020 · 27 minutes, 53 seconds

Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong (80k team chat #2)

Today’s bonus episode is a conversation between Arden Koehler and our CEO, Ben Todd. Ben’s been doing a bunch of research recently, and we thought it’d be interesting to hear about how he’s currently thinking about a couple of different topics – including different types of longtermism, and things 80,000 Hours might be getting wrong. Links to learn more, summary and full transcript. This is very off-the-cuff compared to our regular episodes, and just 54 minutes long. In the first half, Arden and Ben talk about varieties of longtermism: • Patient longtermism • Broad urgent longtermism • Targeted urgent longtermism focused on existential risks • Targeted urgent longtermism focused on other trajectory changes • And their distinctive implications for people trying to do good with their careers. In the second half, they move on to: • How to trade off transferable versus specialist career capital • How much weight to put on personal fit • Whether we might be highlighting the wrong problems and career paths. Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
9/1/2020 · 57 minutes, 50 seconds

Global issues beyond 80,000 Hours’ current priorities (Article)

Today’s release is the latest in our series of audio versions of our articles. In this one, we go through 30 global issues beyond the ones we usually prioritize most highly in our work, and that you might consider focusing your career on tackling. Although we spend the majority of our time at 80,000 Hours on our highest priority problem areas, and we recommend working on them to many of our readers, these are just the most promising issues among those we’ve spent time investigating. There are many other global issues that we haven’t properly investigated, and which might be very promising for more people to work on. In fact, we think working on some of the issues in this article could be as high-impact for some people as working on our priority problem areas — though we haven’t looked into them enough to be confident. If you want to check out the links in today’s article, you can find those here. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
8/28/2020 · 32 minutes, 53 seconds

#85 - Mark Lynas on climate change, societal collapse & nuclear energy

A golf-ball sized lump of uranium can deliver more than enough power to cover all of your lifetime energy use. To get the same energy from coal, you’d need 3,200 tonnes of black rock — a mass equivalent to 800 adult elephants, which would produce more than 11,000 tonnes of CO2. That’s about 11,000 tonnes more than the uranium. Many people aren’t comfortable with the danger posed by nuclear power. But given the climatic stakes, it’s worth asking: Just how much more dangerous is it compared to fossil fuels? According to today’s guest, Mark Lynas — author of Six Degrees: Our Future on a Hotter Planet (winner of the prestigious Royal Society Prizes for Science Books) and Nuclear 2.0 — it’s actually much, much safer. Links to learn more, summary and full transcript. Climatologists James Hansen and Pushker Kharecha calculated that the use of nuclear power between 1971 and 2009 avoided the premature deaths of 1.84 million people by avoiding air pollution from burning coal. What about radiation or nuclear disasters? According to Our World In Data, in generating a given amount of electricity, nuclear, wind, and solar all cause about the same number of deaths — and it's a tiny number. So what’s going on? Why isn’t everyone demanding a massive scale-up of nuclear energy to save lives and stop climate change? Mark and many other activists believe that unchecked climate change will result in the collapse of human civilization, so the stakes could not be higher. Mark says that many environmentalists — including him — simply grew up with anti-nuclear attitudes all around them (possibly stemming from a conflation of nuclear weapons and nuclear energy) and haven't thought to question them. But he thinks that once you believe in the climate emergency, you have to rethink your opposition to nuclear energy. At 80,000 Hours we haven’t analysed the merits and flaws of the case for nuclear energy — especially compared to wind and solar paired with gas, hydro, or battery power to handle intermittency — but Mark is convinced. He says it comes down to physics: Nuclear power is just so much denser. We need to find an energy source that provides carbon-free power to ~10 billion people, and we need to do it while humanity is doubling or tripling (or more) its energy demand. How do you do that without destroying the world's ecology? Mark thinks that nuclear is the only way. Read a more in-depth version of the case for nuclear energy in the full blog post. For Mark, the only argument against nuclear power is a political one -- that people won't want or accept it. He says that he knows people in all kinds of mainstream environmental groups — such as Greenpeace — who agree that nuclear must be a vital part of any plan to solve climate change. But, because they think they'll be ostracized if they speak up, they keep their mouths shut. Mark thinks this willingness to indulge beliefs that contradict scientific evidence stands in the way of actually fully addressing climate change, and so he’s helping to build a movement of folks who are out and proud about their support for nuclear energy. This is only one topic of many in today’s interview. Arden, Rob, and Mark also discuss: • At what degrees of warming does societal collapse become likely • Whether climate change could lead to human extinction • What environmentalists are getting wrong about climate change • And much more. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. 
Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
8/20/2020 · 2 hours, 8 minutes, 25 seconds

#84 - Shruti Rajagopalan on what India did to stop COVID-19 and how well it worked

When COVID-19 struck the US, everyone was told that hand sanitizer needed to be saved for healthcare professionals, so they should just wash their hands instead. But in India, many homes lack reliable piped water, so they had to do the opposite: distribute hand sanitizer as widely as possible. American advocates for banning single-use plastic straws might be outraged at the widespread adoption of single-use hand sanitizer sachets in India. But the US and India are very different places, and it might be the only way out when you're facing a pandemic without running water. According to today’s guest, Shruti Rajagopalan, Senior Research Fellow at the Mercatus Center at George Mason University, that's typical and context is key to policy-making. This prompted Shruti to propose a set of policy responses designed for India specifically back in April. Unfortunately she thinks it's surprisingly hard to know what one should and shouldn't imitate from overseas. Links to learn more, summary and full transcript. For instance, some places in India installed shared handwashing stations in bus stops and train stations, which is something no developed country would advise. But in India, you can't necessarily wash your hands at home — so shared faucets might be the lesser of two evils. (Though note scientists have downgraded the importance of hand hygiene lately.) Stay-at-home orders offer a more serious example. Developing countries find themselves in a serious bind that rich countries do not. With nearly no slack in healthcare capacity, India lacks equipment to treat even a small number of COVID-19 patients. That suggests strict controls on movement and economic activity might be necessary to control the pandemic. But many people in India and elsewhere can't afford to shelter in place for weeks, let alone months. And governments in poorer countries may not be able to afford to send everyone money — even where they have the infrastructure to do so fast enough. India ultimately did impose strict lockdowns, lasting almost 70 days, but the human toll has been larger than in rich countries, with vast numbers of migrant workers stranded far from home with limited if any income support. There were no trains or buses, and the government made no provision to deal with the situation. Unable to afford rent where they were, many people had to walk hundreds of kilometers to reach home, carrying children and belongings with them. But in some other ways the context of developing countries is more promising. In the US many people melted down when asked to wear facemasks. But in South Asia, people just wore them. Shruti isn’t sure whether that's because of existing challenges with high pollution, past experiences with pandemics, or because intergenerational living makes the wellbeing of others more salient, but the end result is that masks weren’t politicised in the way they were in the US. In addition, despite the suffering caused by India's policy response to COVID-19, public support for the measures and the government remains high — and India's population is much younger and so less affected by the virus. In this episode, Howie and Shruti explore the unique policy challenges facing India in its battle with COVID-19, what they've tried to do, and how it has gone. 
They also cover: • What an economist can bring to the table during a pandemic • The mystery of India’s surprisingly low mortality rate • Policies that should be implemented today • What makes a good constitution Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
8/13/2020 · 2 hours, 58 minutes, 13 seconds

#83 - Prof Jennifer Doleac on preventing crime without police and prisons

The killing of George Floyd has prompted a great deal of debate over whether the US should reduce the size of its police departments. The research literature suggests that the presence of police officers does reduce crime, though they're expensive and as is increasingly recognised, impose substantial harms on the populations they are meant to be protecting, especially communities of colour. So maybe we ought to shift our focus to effective but unconventional approaches to crime prevention, approaches that don't require police or prisons and the human toll they bring with them. Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three alternative ways to effectively prevent crime: better street lighting, cognitive behavioral therapy, and lead reduction. One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double. Links to sources for the claims in these show notes, other resources to learn more, and a full transcript. The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You're just more likely to get caught. You might think: "Well, people will just commit crime in the morning instead". But it looks like criminals aren’t early risers, and that doesn’t happen. On her unusually rigorous podcast Probable Causation, Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sun-light is human-light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone. The second approach is cognitive behavioral therapy (CBT), in which you're taught to slow down your decision-making, and think through your assumptions before acting. There was a randomised controlled trial done in schools, as well as juvenile detention facilities in Chicago, where the kids assigned to get CBT were followed over time and compared with those who were not assigned to receive CBT. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says that the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems but the gains are especially large for people who've grown up with the trauma of violence in their lives. Finally, Jennifer thinks that lead reduction might be the best buy of all in crime prevention… Blog post truncated due to length limits. Finish reading the full post here. 
In today’s conversation, Rob and Jennifer also cover, among many other things: • Misconduct, hiring practices and accountability among US police • Procedural justice training • Overrated policy ideas • Policies to try to reduce racial discrimination • The effects of DNA databases • Diversity in economics • The quality of social science research Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
7/31/2020 · 2 hours, 23 minutes, 2 seconds

#82 - Prof James Forman Jr on reducing the cruelty of the US criminal legal system

No democracy has ever incarcerated as many people as the United States. To get its incarceration rate down to the global average, the US would have to release 3 in 4 people in its prisons today. The effects on Black Americans have been especially severe — Black people make up 12% of the US population but 33% of its prison population. In the early 2000's when incarceration reached its peak, the US government estimated that 32% of Black boys would go to prison at some point in their lives, 5.5 times the figure for whites. Contrary to popular understanding, nonviolent drug offenders make up less than a fifth of the incarcerated population. The only way to get its incarceration rate near the global average will be to shorten prison sentences for so-called 'violent criminals' — a politically toxic idea. But could we change that? According to today’s guest, Professor James Forman Jr — a former public defender in Washington DC, Pulitzer Prize-winning author of Locking Up Our Own: Crime and Punishment in Black America, and now a professor at Yale Law School — there are two things we have to do to make that happen. Links to learn more, summary and full transcript. First, he thinks we should lose the term 'violent offender', and maybe even 'violent crime'. When you say 'violent crime', most people immediately think of murder and rape — but they're only a small fraction of the crimes that the law deems as violent. In reality, the crime that puts the most people in prison in the US is robbery. And the law says that robbery is a violent crime whether a weapon is involved or not. By moving away from the catch-all category of 'violent criminals' we can judge the risk posed by individual people more sensibly. Second, he thinks we should embrace the restorative justice movement. Instead of asking "What was the law? Who broke it? What should the punishment be", restorative justice asks "Who was harmed? Who harmed them? And what can we as a society, including the person who committed the harm, do to try to remedy that harm?" Instead of being narrowly focused on how many years people should spend in prison as retribution, it starts a different conversation. You might think this apparently softer approach would be unsatisfying to victims of crime. But James has discovered that a lot of victims of crime find that the current system doesn't help them in any meaningful way. What they primarily want to know is: why did this happen to me? The best way to find that out is to actually talk to the person who harmed them, and in doing so gain a better understanding of the underlying factors behind the crime. The restorative justice approach facilitates these conversations in a way the current system doesn't allow, and can include restitution, apologies, and face-to-face reconciliation. That’s just one topic of many covered in today’s episode, with much of the conversation focusing on Professor Forman’s 2018 book Locking Up Our Own — an examination of the historical roots of contemporary criminal justice practices in the US, and his experience setting up a charter school for at-risk youth in DC. Rob and James also discuss: • How racism shaped the US criminal legal system • How Black America viewed policing through the 20th century • How class divisions fostered a 'tough on crime' approach • How you can have a positive impact as a public prosecutor Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. 
Transcriptions: Zakee Ulhaq.
7/27/2020 · 1 hour, 28 minutes, 7 seconds

#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments. Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment. In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents. Links to learn more, summary and full transcript. There have also been very few skeptical experts that have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world. He thinks that most of the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power or general intelligence or goals, and toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence. Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences? Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in. This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things: • The threat of AI systems increasing the risk of permanently damaging conflict or collapse • The possibility of permanently locking in a positive or negative future • Contenders for types of advanced systems • What role AI should play in the effective altruism portfolio Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. 
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
7/9/2020 · 2 hours, 38 minutes, 27 seconds

Advice on how to read our advice (Article)

This is the fourth release in our new series of audio articles. If you want to read the original article or check out the links within it, you can find them here. "We’ve found that readers sometimes interpret or apply our advice in ways we didn’t anticipate and wouldn’t exactly recommend. That’s hard to avoid when you’re writing for a range of people with different personalities and initial views. To help get on the same page, here’s some advice about our advice, for those about to launch into reading our site. We want our writing to inform people’s views, but only in proportion to the likelihood that we’re actually right. So we need to make sure you have a balanced perspective on how compelling the evidence is for the different claims we make on the site, and how much weight to put on our advice in your situation. This piece includes a list of points to bear in mind when reading our site, and some thoughts on how to avoid the communication problems we face..." As the title suggests, this was written with our web site content in mind, but plenty of it applies to the careers sections of the podcast too — as well as our bonus episodes with members of the 80,000 Hours team, such as Arden and Rob’s episode on demandingness, work-life balance and injustice, which aired on February 25th of this year. And if you have feedback on these, positive or negative, it’d be great if you could email us at [email protected].
6/29/2020 · 15 minutes, 22 seconds

#80 - Professor Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed. In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely-known objective that we've stated explicitly. This is so obvious it almost doesn't even seem like a design choice, but it is. Unfortunately there's a big problem with this approach: it's incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we've asked it to. That's true even if the goal isn't what we really want, or the methods it's choosing are ones we would never accept. We already see AIs misbehaving for this reason. Stuart points to the example of YouTube's recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn't something we wanted, but it helped achieve the algorithm's objective: maximise viewing time. Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we've asked for. Links to learn more, summary and full transcript. This 'alignment' problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we're ever to hand over much of the economy to thinking machines, we can't count on ourselves correctly saying exactly what we want the AI to do every time. Stuart isn't just dissatisfied with the current model though, he has a specific solution. According to him we need to redesign AI around 3 principles: 1. The AI system's objective is to achieve what humans want. 2. But the system isn't sure what we want. 3. And it figures out what we want by observing our behaviour. Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI. For instance, a machine built on these principles would be happy to be turned off if that's what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, "you can't fetch the coffee if you're dead." These principles lend themselves towards machines that are modest and cautious, and check in when they aren't confident they're truly achieving what we want. We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we've rejected an option because we've considered it and decided it's a bad idea, and when we simply haven't thought about it at all. Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political. When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want? Get this episode by subscribing: type 80,000 Hours into your podcasting app. 
Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
6/22/2020 · 2 hours, 13 minutes, 16 seconds

What anonymous contributors think about important life and career questions (Article)

Today we’re launching the final entry of our ‘anonymous answers’ series on the website. It features answers to 23 different questions including “How have you seen talented people fail in their work?” and “What’s one way to be successful you don’t think people talk about enough?”, from anonymous people whose work we admire. We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they span a very wide range of opinions. So we decided to share some highlights here with you podcast subscribers. This is only a sample though, including a few answers from just 10 of those 23 questions. You can find the rest of the answers at 80000hours.org/anonymous or follow a link here to an individual entry: 1. What's good career advice you wouldn’t want to have your name on? 2. How have you seen talented people fail in their work? 3. What’s the thing people most overrate in their career? 4. If you were at the start of your career again, what would you do differently this time? 5. If you're a talented young person, how risk averse should you be? 6. Among people trying to improve the world, what are the bad habits you see most often? 7. What mistakes do people most often make when deciding what work to do? 8. What's one way to be successful you don't think people talk about enough? 9. How honest & candid should high-profile people really be? 10. What’s some underrated general life advice? 11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower? 12. What are the biggest flaws of 80,000 Hours? 13. What are the biggest flaws of the effective altruism community? 14. How should the effective altruism community think about diversity? 15. Are there any myths that you feel obligated to support publicly? And five other questions. Finally, if you’d like us to produce more or less content like this, please let us know your opinion at [email protected].
6/5/2020 · 37 minutes, 9 seconds

#79 - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what, she's not so bad". Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history. He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His next book will ask: if we reframe global problems as puzzles, would the world be a better place? Links to learn more, summary and full transcript. This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at clever show notes that change style each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I definitely think I’m more entertaining than almost anyone else will. (Radical Honesty.) We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.) Another reason to listen is for the facts: • The Bayer aspirin company invented heroin as a cough suppressant • Coriander is just the British way of saying cilantro • Dogs have a third eyelid to protect the eyeball from irritants • and A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.) One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.) I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; the construction worker who told me how to get to my subway platform on the morning of the interview; and Queen Jadwiga for making bagels popular in the 1300s, which kept me going during the recording. (Thanks a Thousand.) We also discuss: • Blackmailing yourself • The most extreme ideas A.J.’s ever considered • Doing good as a writer • And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
6/1/2020 · 2 hours, 38 minutes, 46 seconds

#78 - Danny Hernandez on forecasting and the drivers of AI progress

Companies use about 300,000 times more computation training the best AI systems today than they did in 2012 and algorithmic innovations have also made them 25 times more efficient at the same tasks. These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today's episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy. Danny and I talk about how to understand his team's results and what they mean (and don't mean) for how we should think about progress in AI going forward. Links to learn more, summary and full transcript. Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field. If this research sounds appealing, you might be interested in applying to join OpenAI's Foresight team — they're currently hiring research engineers. In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including: • The question of which experts to believe • Danny's journey to working at OpenAI • The usefulness of "decision boundaries" • The importance of Moore's law for people who care about the long-term future • What OpenAI's Foresight Team's findings might imply for policy • The question whether progress in the performance of AI systems is linear • The safety teams at OpenAI and who they're looking to hire • One idea for finding someone to guide your learning • The importance of hardware expertise for making a positive impact Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
5/22/2020 · 2 hours, 11 minutes, 36 seconds

#77 - Professor Marc Lipsitch on whether we're winning or losing against COVID-19

In March, Professor Marc Lipsitch — Director of Harvard's Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, his social media following growing 40-fold and journalists knocking down his door, as everyone turned to him for information they could trust. Here he lays out where the fight against COVID-19 stands today, why he's open to deliberately giving people COVID-19 to speed up vaccine development, and how we could do better next time. As Marc tells us, island nations like Taiwan and New Zealand are successfully suppressing SARS-CoV-2. But everyone else is struggling. Links to learn more, summary and full transcript. Even Singapore, with plenty of warning and one of the best test and trace systems in the world, lost control of the virus in mid-April after successfully holding back the tide for 2 months. This doesn't bode well for how the US or Europe will cope as they ease their lockdowns. It also suggests it would have been exceedingly hard for China to stop the virus before it spread overseas. But sadly, there's no easy way out. The original estimates of COVID-19's infection fatality rate, of 0.5-1%, have turned out to be basically right. And the latest serology surveys indicate only 5-10% of people in countries like the US, UK and Spain have been infected so far, leaving us far short of herd immunity. To get there, even these worst affected countries would need to endure something like ten times the number of deaths they have so far. Marc has one good piece of news: research suggests that most of those who get infected do indeed develop immunity, for a while at least. To escape the COVID-19 trap sooner rather than later, Marc recommends we go hard on all the familiar options — vaccines, antivirals, and mass testing — but also open our minds to creative options we've so far left on the shelf. Despite the importance of his work, even now the training and grant programs that produced the community of experts Marc is a part of are shrinking. We look at a new article he's written about how to instead build and improve the field of epidemiology, so humanity can respond faster and smarter next time we face a disease that could kill millions and cost tens of trillions of dollars. We also cover: • How listeners might contribute as future contagious disease experts, or donors to current projects • How we can learn from cross-country comparisons • Modelling that has gone wrong in an instructive way • What governments should stop doing • How people can figure out who to trust, and who has been most on the mark this time • Why Marc supports infecting people with COVID-19 to speed up the development of a vaccine • How we can ensure there's population-level surveillance early during the next pandemic • Whether people from other fields trying to help with COVID-19 have done more good than harm • Whether it's experts in diseases, or experts in forecasting, who produce better disease forecasts Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
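The "ten times the number of deaths" figure follows from simple proportions. Below is a minimal sketch of that arithmetic; the herd immunity threshold it assumes (roughly 65%) is an illustrative figure that does not appear in the episode notes.

```python
# Back-of-the-envelope sketch, not Marc Lipsitch's model. The 65% herd
# immunity threshold is an assumption added for illustration; the 5-10%
# infected and 0.5-1% infection fatality rate figures come from the notes.

already_infected = 0.07           # ~5-10% of the population infected so far
herd_immunity_threshold = 0.65    # assumed threshold, not stated in the notes
infection_fatality_rate = 0.0075  # ~0.5-1% of infections proving fatal

deaths_per_capita_so_far = already_infected * infection_fatality_rate
deaths_per_capita_at_threshold = herd_immunity_threshold * infection_fatality_rate

# If deaths scale roughly in proportion to infections, reaching herd immunity
# means multiplying the death toll so far by about this factor:
print(round(deaths_per_capita_at_threshold / deaths_per_capita_so_far, 1))  # 9.3
```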
5/18/2020 · 1 hour, 37 minutes, 4 seconds

Article: Ways people trying to do good accidentally make things worse, and how to avoid them

Today’s release is the second experiment in making audio versions of our articles. The first was a narration of Greg Lewis’ terrific problem profile on ‘Reducing global catastrophic biological risks’, which you can find on the podcast feed just before episode #74 - that is, our interview with Greg about the piece. If you want to check out the links in today’s article, you can find those here. And if you have feedback on these, positive or negative, it’d be great if you could email us at [email protected]. 
5/12/2020 · 26 minutes, 45 seconds

#76 - Prof Tara Kirk Sell on misinformation, who's done well and badly, & what to reopen first

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we're especially thankful for is the Johns Hopkins Center for Health Security (CHS). CHS focuses on protecting us from major biological, chemical or nuclear disasters, through research that informs governments around the world. While this pandemic surprised many, just last October the Center ran a simulation of a 'new coronavirus' scenario to identify weaknesses in our ability to quickly respond. Their expertise has given them a key role in figuring out how to fight COVID-19. Today’s guest, Dr Tara Kirk Sell, did her PhD in policy and communication during disease outbreaks, and has worked at CHS for 11 years on a range of important projects. • Links to learn more, summary and full transcript. Last year she was a leader on Collective Intelligence for Disease Prediction, designed to sound the alarm about upcoming pandemics before others are paying attention. Incredibly, the project almost closed in December, with COVID-19 just starting to spread around the world — but received new funding that allowed the project to respond quickly to the emerging disease. She also contributed to a recent report attempting to explain the risks of specific types of activities resuming when COVID-19 lockdowns end. We can't achieve zero risk — so differentiating activities on a spectrum is crucial. Choosing wisely can help us lead more normal lives without reviving the pandemic. Dance clubs will have to stay closed, but hairdressers can adapt to minimise transmission, and Tara, who happens to be an Olympic silver-medalist in swimming, suggests outdoor non-contact sports could resume soon without much risk. Her latest project deals with the challenge of misinformation during disease outbreaks. Analysing the Ebola communication crisis of 2014, they found that even trained coders with public health expertise sometimes needed help to distinguish between true and misleading tweets — showing the danger of a continued lack of definitive information surrounding a virus and how it’s transmitted. The challenge for governments is not simple. If they acknowledge how much they don't know, people may look elsewhere for guidance. But if they pretend to know things they don't, the result can be a huge loss of trust. Despite their intense focus on COVID-19, researchers at CHS know that this is no one-off event. Many aspects of our collective response this time around have been alarmingly poor, and it won’t be long before Tara and her colleagues need to turn their minds to next time. You can now donate to CHS through Effective Altruism Funds. Donations made through EA Funds are tax-deductible in the US, the UK, and the Netherlands. Tara and Rob also discuss: • Who has overperformed and underperformed expectations during COVID-19? • When are people right to mistrust authorities? • The media’s responsibility to be right • What policy changes should be prioritised for next time • Should we prepare for future pandemics while COVID-19 is still going? • The importance of keeping non-COVID health problems in mind • The psychological difference between staying home voluntarily and being forced to • Mistakes that we in the general public might be making • Emerging technologies with the potential to reduce global catastrophic biological risks Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
5/8/2020 · 1 hour, 52 minutes, 59 seconds

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on their most plausible paths, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths. I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford's Global Priorities Institute, and these days I'm 80,000 Hours' Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers. So we thought it would be useful to discuss some on the show for everyone to hear. • Links to learn more, summary and full transcript. • See over 500 vacancies on our job board. • Apply for one-on-one career advising. Among other common topics, we cover: • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in. • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it's wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations. • Our suggested process for writing a longer term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties. • Why many listeners aren't spending enough time finding out about what the day-to-day work is like in paths they're considering, or reaching out to people for advice or opportunities. • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you're already accomplishing. I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it. If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people: 1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address. 2. Who don’t yet have close connections with people working at effective altruist organisations. 3. Who aren’t strongly locationally constrained. If you’re unsure, it doesn’t take long to apply, and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds. Also in this episode: • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with. • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path. • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
4/28/2020 · 2 hours, 13 minutes, 5 seconds

#74 - Dr Greg Lewis on COVID-19 & catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise. The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours. Today's guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University's Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future. COVID-19 is a vivid reminder that we are unprepared to contain or respond to new pathogens. How would we cope with a virus that was even more contagious and even more deadly? Greg's work focuses on these risks -- of outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity. Links to learn more, summary and full transcript. If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature. There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar. This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion. COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources. But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves. Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel. 80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible. In this episode, Howie and Greg cover: • Reflections on the first few months of the pandemic • Common confusions around COVID-19 • How COVID-19 compares to other diseases • What types of interventions have been available to policymakers • Arguments for and against working on global catastrophic biological risks (GCBRs) • How to know if you’re a good fit to work on GCBRs • The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19 • And much more. Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
4/17/2020 · 2 hours, 37 minutes, 16 seconds
Episode Artwork

Article: Reducing global catastrophic biological risks

In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute. Greg also wrote a new problem profile on that topic for our website, and reading that is a good lead-in to our interview with him. So in a bit of an experiment we decided to make this audio version of that article, narrated by the producer of the 80,000 Hours Podcast, Keiran Harris. We’re thinking about having audio versions of other important articles we write, so it’d be great if you could let us know if you’d like more of these. You can email us your view at [email protected]. If you want to check out all of Greg’s graphs and footnotes that we didn’t include, and get links to learn more about GCBRs - you can find those here. And if you want to read more about COVID-19, the 80,000 Hours team has produced a fantastic package of 10 pieces about how to stop the pandemic. You can find those here.
4/15/2020 · 1 hour, 4 minutes, 14 seconds
Episode Artwork

Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

From home isolation, Rob and Howie just recorded an episode on: 1. How many could die in the crisis, and the risk to your health personally. 2. What individuals might be able to do to help tackle the coronavirus crisis. 3. What we suspect governments should do in response to the coronavirus crisis. 4. The importance of personally not spreading the virus, the properties of the SARS-CoV-2 virus, and how you can personally avoid it. 5. The many places society screwed up, how we can avoid this happening again, and why we should be optimistic. We have rushed this episode out to share information as quickly as possible in a fast-moving situation. If you would prefer to read, you can find the transcript here. We list a wide range of valuable resources and links in the blog post attached to the show (over 60, including links to projects you can join). See our 'problem profile' on global catastrophic biological risks for information on these grave threats and how you can contribute to preventing them. We have also just added a COVID-19 landing page on our site. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris.
3/19/2020 · 1 hour, 52 minutes, 11 seconds
Episode Artwork

#73 - Phil Trammell on patient philanthropy and waiting to do good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million. This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now. He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science. ADDED: Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here. • Links to learn more, summary and full transcript. What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways? And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own. Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse? Or might it not drift from its original goals, eventually just serving the interests of its distant future trustees, rather than the noble pursuits you originally intended? Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarships' initial charter, which limited it to 'white Christian men'. Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good. Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my 80,000 Hours colleague Howie Lempel, we try to answer that, and also discuss: • Real attempts at patient philanthropy in history and how they worked out • Should we have a mixed strategy, where some altruists are patient and others impatient? • Which causes most need money now, and which later? • What is the research frontier here? • What does this all mean for what listeners should do differently? Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the transcript linked above. Producer: Keiran Harris. Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
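A quick check of the compounding arithmetic above, in a minimal Python sketch. It assumes a flat 5% annual return; real returns vary year to year, and the episode's quoted figures are rounded.

```python
# Sanity check of the 'patient philanthropy' compounding claim.
# Assumes a flat 5% annual return; the episode's figures are rounded.

def future_value(principal: float, annual_return: float, years: int) -> float:
    """Value of an invested donation after compounding for `years` years."""
    return principal * (1 + annual_return) ** years

for years in (100, 200):
    print(f"After {years} years: ${future_value(1_000, 0.05, years):,.0f}")

# Prints roughly $131,500 after 100 years and about $17.3 million after 200 —
# the same order of magnitude as the figures quoted above.
```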
3/17/2020 · 2 hours, 35 minutes, 21 seconds
Episode Artwork

#72 - Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century. I loved the book and learned a great deal from it (buy it here, US and audiobook release March 24). While preparing for this interview I copied out 87 facts that were surprising, shocking or important. Here's a sample of 16: 1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined. 2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald's. 3. In 2008 a 'gamma ray burst' reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren't sure what generates gamma ray bursts but one cause may be two neutron stars colliding. 4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth's oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped… N.B. I've had to cut off this list as we only get 4,000 characters in these show notes, so: Click here to read the whole list, see a full transcript, and find related links. And if you like the list, you can get a free copy of the introduction and first chapter by joining our mailing list. While I've been studying these topics for years and known Toby for the last eight, a remarkable amount of what's in The Precipice was new to me. Of course the book isn't a series of isolated amusing facts, but rather a systematic review of the many ways humanity's future could go better or worse, how we might know about them, and what might be done to improve the odds. And that's how we approach this conversation, first talking about each of the main threats, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved. Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so, as expected, this was a great interview, and one which Arden Koehler and I barely even had to work for. Some topics Arden and I ask about include: • What Toby changed his mind about while writing the book • Are people exaggerating when they say that climate change could actually end civilization? • What can we learn from historical pandemics? • Toby's estimate of unaligned AI causing human extinction in the next century • Is this century the most important time in human history, or is that a narcissistic delusion? • Competing visions for humanity's ideal future • And more. Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
3/7/2020 · 3 hours, 14 minutes, 16 seconds
Episode Artwork

#71 - Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible. Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift. All of us added something to it, but the single biggest contributor was our CEO and today's guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012. This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we've discovered since we started investigating high impact careers. • Links to learn more, summary and full transcript. But it's perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words. Fortunately it's designed to be highly modular and it's easy to work through it over multiple sessions, scanning over the articles it links to on each topic. Perhaps though, you'd prefer to absorb our most essential ideas in conversation form, in which case this episode is for you. If you want to have a big impact with your career, and you say you're only going to read one article from us, we recommend you read our key ideas page. And likewise, if you're only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through: • Common misunderstandings of our advice • A high level overview of what 80,000 Hours generally recommends • Our key moral positions • What are the most pressing problems to work on and why? • Which careers effectively contribute to solving those problems? • Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration • As well as plenty more. One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we're least sure about, or didn't yet cover within the article. Note though that what's in the article is more precisely stated, our advice is going to keep shifting, and we're aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page! Get the episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
3/2/2020 · 2 hours, 57 minutes, 28 seconds
Episode Artwork

Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Today's bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice. Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include: • If you're not going to be completely moral, should you try being a bit more ethical, or give up? • Should you feel angry if you see an injustice, and if so, why? • How much should we ask people to live frugally? So far the feedback on the post-episode chats that we've done has been positive, so we thought we'd go ahead and try out this freestanding one. But fair warning: it's among the more difficult episodes to follow, and probably not the best one to listen to first, as you'll benefit from having more context! If you'd like to listen to more of Arden, you can find her in episode 67, David Chalmers on the nature and ethics of consciousness, or episode 66, Peter Singer on being provocative, EA, and how his moral views have changed. Here's more information on some of the issues we touch on: • Consequentialism on Wikipedia • Appropriate dispositions on the Stanford Encyclopedia of Philosophy • Demandingness objection on Wikipedia • And a paper on epistemic normativity. ——— I mention the call for papers of the Academic Workshop on Global Priorities in the introduction — you can learn more here. And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I've read it and very much enjoyed it. Find out where you can pre-order it here. We'll have an interview with him coming up soon.
2/25/2020 · 44 minutes, 11 seconds
Episode Artwork

#70 - Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV)

nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad though it is, it's much closer to a warning shot than a worst-case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both. Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can't do enough to reduce their spread. And we'll lack vaccines or drug treatments for at least a year, if they ever arrive at all. • Links to learn more, summary and full transcript. This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like. In today's episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University's Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we're to keep the risk at acceptable levels. The ideas are: Science 1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go. 2. Fund research into faster 'platform' methods for going from pathogen to vaccine, perhaps using innovation prizes. 3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria. Response 4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely. 5. Rigorously evaluate in what situations travel bans are warranted. (They're more often counterproductive.) 6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible. 7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms. 8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out. Oversight 9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens. 10. Figure out how to govern DNA synthesis businesses, to make it harder to mail-order the DNA of a dangerous pathogen. 11. Require full cost-benefit analysis of 'dual-use' research projects that can generate global risks. 12. And finally, to maintain momentum, it's necessary to clearly assign responsibility for the above to particular individuals and organisations. These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem. In the episode Rob and Cassidy also talk about: • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential. • The pros, and significant cons, of travel restrictions. • Whether the same policies work for natural and anthropogenic pandemics. • Ways listeners can pursue a career in biosecurity. • Where we stand with nCoV as of today. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Transcriptions: Zakee Ulhaq.
2/13/2020 · 2 hours, 26 minutes, 32 seconds
Episode Artwork

#69 - Jeff Ding on China, its AI dream, and what we get wrong about both

The State Council of China's 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard? In his paper Deciphering China's AI Dream, today's guest, PhD student Jeff Ding, outlines why he believes none of these claims are true. • Links to learn more, summary and full transcript. • What’s the best charity to donate to? He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development? Jeff emphasises that China's AI strategy did not appear out of nowhere with the 2017 state council AI development plan, which attracted a lot of overseas attention. Rather that was just another step forward in a long trajectory of increasing focus on science and technology. It's connected with a plan to develop an 'Internet of Things', and linked to a history of strategic planning for technology in areas like aerospace and biotechnology. And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse. What are the different levers that China is pulling to try to spur AI development? Here, Jeff wanted to challenge the myth that China's AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government. Are China's AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China's progress in AI. Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China's AI capabilities have surpassed the US or make it the world's leading AI power. Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we'd need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016. Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance. He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues. In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover: • The best analogies for thinking about the growing influence of AI • How do prominent Chinese figures think about AI? • Coordination with China • China’s social credit system • Suggestions for people who want to become professional China specialists • And more. 
Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
2/6/2020 · 1 hour, 37 minutes, 13 seconds
Episode Artwork

Rob & Howie on what we do and don't know about 2019-nCoV

Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus. See this list of resources, including many discussed in the episode, to learn more. In the 1h15m conversation we cover: • What is it? • How many people have it? • How contagious is it? • What fraction of people who contract it die? • How likely is it to spread out of control? • What's the range of plausible fatalities worldwide? • How does it compare to other epidemics? • What don't we know and why? • What actions should listeners take, if any? • How should the complexities of the above be communicated by public health professionals? Here's a link to the hygiene advice from Laurie Garrett mentioned in the episode. Recorded 2 Feb 2020. The 80,000 Hours Podcast is produced by Keiran Harris.
2/3/2020 · 1 hour, 18 minutes, 43 seconds
Episode Artwork

#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

You're given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it? A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You've only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die. And yet, according to today's return guest, philosophy Prof Will MacAskill, in a real sense we're shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others. • Links to learn more, summary and full transcript. • Job opportunities at the Global Priorities Institute. To see this, imagine you're deciding whether to redeem a coupon for a free movie. If you go, you'll need to drive to the cinema. By affecting traffic throughout the city, you'll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you've impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person. That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies. As a result, you'll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today. Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise. So, if you go for this drive, you'll save hundreds of people from premature death, and cause the early death of an equal number of others. But you'll get to see a free movie, worth $10. Should you do it? This setup forms the basis of 'the paralysis argument', explored in one of Will's recent papers. Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here. So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover: • Are we, or are we not, living in the most influential time in history? • The culture of the effective altruism community • Will's new lower estimate of the risk of human extinction • Why Will is now less focused on AI • The differences between Americans and Brits • Why feeling guilty about characteristics you were born with is crazy • And plenty more. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
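To make the identity-affecting arithmetic above concrete, here is a rough Python sketch. The ~30,000-day lifespan and ~two-children-per-person figures come from the episode; spreading conceptions evenly over person-days and using a Poisson approximation are my simplifications, not Will's exact calculation.

```python
import math

# Rough model of the identity-affecting arithmetic behind the paralysis argument.
# Episode figures: an average life is ~30,000 days and produces ~2 children.

DAYS_PER_LIFE = 30_000
CHILDREN_PER_LIFE = 2
CONCEPTIONS_PER_PERSON_DAY = CHILDREN_PER_LIFE / DAYS_PER_LIFE  # ~1 in 15,000

def p_perturb_a_conception(person_days_disturbed: float) -> float:
    """Chance of changing the timing of at least one conception event
    (Poisson approximation over the person-days you disturb)."""
    expected_conceptions = person_days_disturbed * CONCEPTIONS_PER_PERSON_DAY
    return 1 - math.exp(-expected_conceptions)

print(p_perturb_a_conception(7_500))    # ~0.39
print(p_perturb_a_conception(30_000))   # ~0.86 — a busy city drive adds up fast
```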
1/24/2020 · 3 hours, 25 minutes, 35 seconds
Episode Artwork

#44 Classic episode - Paul Christiano on finding real solutions to the AI alignment problem

Rebroadcast: this episode was originally released in October 2018. Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times I can strongly recommend listening — Paul works on AI himself and has a very unusually thought through view of how it will change the world. This is now the top resource I'm going to refer people to if they're interested in positively shaping the development of AI, and want to understand the problem better. Even though I'm familiar with Paul's writing I felt I was learning a great deal and am now in a better position to make a difference to the world. A few of the topics we cover are:• Why Paul expects AI to transform the world gradually rather than explosively and what that would look like • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us • Why AI systems will probably be granted legal and property rights • How an advanced AI that doesn't share human goals could still have moral value • Why machine learning might take over science research from humans before it can do most other tasks • Which decade we should expect human labour to become obsolete, and how this should affect your savings plan. • Links to learn more, summary and full transcript. • Rohin Shah's AI alignment newsletter. Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree. If given plenty of time — and enough arguments, counterarguments and counter-counter-arguments between all the experts — should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over? In other words: does 'debate', in principle, lead to truth? According to Paul Christiano — researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities — this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are. It's a method OpenAI is actively trying to develop, because in the long-term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight. If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful. But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case. The 80,000 Hours Podcast is produced by Keiran Harris.
1/15/2020 · 3 hours, 51 minutes, 13 seconds
Episode Artwork

#33 Classic episode - Anders Sandberg on cryonics, solar flares, and the annual odds of nuclear war

Rebroadcast: this episode was originally released in May 2018. Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? According to Bryan Caplan in episode #32, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees. Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University. Full transcript of the conversation, summary, and links to learn more. The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions. Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base. Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including: • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done? • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened? • If biomedical research lets us slow down ageing would culture stagnate under the crushing weight of centenarians? • What long-shot drugs can people take in their 70s to stave off death? • Can science extend human (waking) life by cutting our need to sleep? • How bad would it be if a solar flare took down the electricity grid? Could it happen? • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it? • Will lifelike robots make us more inclined to dehumanise one another? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
1/8/2020 · 1 hour, 25 minutes, 10 seconds
Episode Artwork

#17 Classic episode - Prof Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Rebroadcast: this episode was originally released in January 2018. Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents for universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races. Throughout history we've consistently believed, as common sense, truly horrifying things by today's standards. According to University of Oxford Professor Will MacAskill, it's extremely likely that we're in the same boat today. If we accept that we're probably making major moral errors, how should we proceed? • Full transcript, key points & links to articles discussed in the show. If our morality is tied to common sense intuitions, we're probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide. Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism (EA) community. In this interview we discuss a wide range of topics: • How would we go about a 'long reflection' to fix our moral errors? • Will's forthcoming book on how one should reason and act if one doesn't know which moral theory is correct. What are the practical implications of so-called 'moral uncertainty'? • If we basically solve existential risks, what does humanity do next? • What are some of Will's most unusual philosophical positions? • What are the best arguments for and against utilitarianism? • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field? • What are some of the biases we should be aware of within academia? • What are some of the downsides of becoming a professor? • What are the merits of becoming a philosopher? • How does the media image of EA differ from the actual goals of the community? • What kinds of things would you like to see the EA community do differently? • How much should we explore potentially controversial ideas? • How focused should we be on diversity? • What are the best arguments against effective altruism? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
12/31/2019 · 1 hour, 52 minutes, 38 seconds
Episode Artwork

#67 - Prof David Chalmers on the nature and ethics of consciousness

What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. • Links to learn more, summary and full transcript. • Advice on how to read our advice. • Anonymous answers on: bad habits, risk and failure. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness. This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far. Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app. 
Producer: Keiran Harris.
12/16/2019 · 4 hours, 41 minutes, 49 seconds
Episode Artwork

#66 - Prof Peter Singer on being provocative, effective altruism, & how his moral views have changed

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he'd actually released way back in 1979. It took a German translation ten years on for protests to kick off. According to Singer, he honestly didn't expect this view to be as provocative as it became, and he certainly wasn't aiming to stir up trouble and get attention. But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times. • Singer's book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free e-book and audiobook, read by a range of celebrities. Get it here. • Links to learn more, summary and full transcript. Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with real potential to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one? Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences, but Singer says that he gives public relations considerations plenty of thought. One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump. Another is the focus of the effective altruism community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he's troubled by the possibility of extinction risks becoming the public face of the movement. He suspects there's a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns. Rob is joined in this interview by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview and a post-episode discussion. They only had an hour with Peter, but also cover: • What does he think are the most plausible alternatives to consequentialism? • Is it more humane to eat wild-caught animals than farmed animals? • The re-release of The Life You Can Save • His most and least strategic career decisions • Population ethics, and other arguments for and against prioritising the long-term future • What led to his changing his mind on significant questions in moral philosophy? • And more. In the post-episode discussion, Rob and Arden continue talking about: • The pros and cons of keeping EA as one big movement • Singer's thoughts on immigration • And consequentialism with side constraints. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq. Illustration of Singer: Matthias Seifarth.
12/5/2019 · 2 hours, 1 minute, 20 seconds
Episode Artwork

#65 - Ambassador Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

"…it started when the Soviet Union fell apart and there was a real desire to ensure security of nuclear materials and pathogens, and that scientists with [WMD-related] knowledge could get paid so that they wouldn't go to countries and sell that knowledge." Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security. Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. In 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS). But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, and as well as other countries. • Links to learn more, summary and full transcript. • Talks from over 100 other speakers at EA Global. • Having trouble with podcast 'chapters' on this episode? Please report any problems to keiran at 80000hours dot org. What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next. Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the previous one. Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda discussed at length in episode 27. Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the 9/11 Commission. Bonnie was the lead staff member conducting research, interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11. And as if that all weren't curious enough four years ago Bonnie decided to go vegan. We talk about her work so far as well as: • How listeners can start a career like hers • Mistakes made by Mr Obama and Mr Trump • Networking, the value of attention, and being a vegan in DC • And 2020 Presidential candidates. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
11/19/2019 · 1 hour, 40 minutes, 31 seconds
Episode Artwork

#64 - Bruce Schneier on surveillance without tyranny, secrets, & the big risks in computer security

November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.   November 3 2020, 11:46PM: The NY Times and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don't see how they can figure it out. What on Earth happens next? Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result. Unfortunately the US has no recovery system for a situation like this, unlike Parliamentary democracies, which can just rerun the election a few weeks later. • Links to learn more, summary and full transcript. • Motivating article: Information security careers for global catastrophic risk reduction by Zabel and Muehlhauser The constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker. Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn't fair. Schneier thinks there's a need to agree how this situation should be handled before something like it happens, and America falls into severe infighting as everyone tries to turn the situation to their political advantage. And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits. According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weakness of what they're designing, because they have a bureaucrat's rather than a hacker's mindset. The ideal computer security expert walks into a shop and thinks, "You know, here's how I would shoplift." They automatically see where the cameras are, whether there are alarms, and where the security guards aren't watching. In this episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn't get access to them. We also cover: • How can we have surveillance of dangerous actors, without falling back into authoritarianism? • When if ever should information about weaknesses in society's security be kept secret? • How secure are nuclear weapons systems around the world? • How worried should we be about deep-fakes? • Schneier’s critiques of blockchain technology • How technologists should be vital in shaping policy • What are the most consequential computer security problems today? • Could a career in information security be very useful for reducing global catastrophic risks? • And more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. The 80,000 Hours Podcast is produced by Keiran Harris.
10/25/2019 · 2 hours, 11 minutes, 3 seconds
Episode Artwork

Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long-term is really possible

Today's episode is a compilation of interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast. If you've listened to absolutely everything on this podcast feed, you'll have heard four interviews with me already, but fortunately I don't think these two include much repetition, and I've gotten a decent amount of positive feedback on both. First up, I speak with David Kadavy on his show, Love Your Work. This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics. • Our annual impact survey is about to close — I'd really appreciate it if you could take 3–10 minutes to fill it out now. • The blog post about this episode. At some points we even actually discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar. The second interview is with Jeremiah Johnson on the Neoliberal Podcast. It starts 2 hours and 15 minutes into this recording. Neoliberalism in the sense used by this show is not the free market fundamentalism you might associate with the term. Rather it's a centrist or even centre-left view that supports things like social liberalism, multilateral international institutions, trade, high rates of migration, racial justice, inclusive institutions, financial redistribution, prioritising the global poor, market urbanism, and environmental sustainability. This is the more demanding of the two conversations, as listeners to that show have already heard of effective altruism, so we were able to get the best arguments Jeremiah could offer against focusing on improving the long-term future of the world. Jeremiah is more of a fan of donating to evidence-backed global health charities recommended by GiveWell, and does so himself. I appreciate him having done his homework and forcing me to do my best to explain how well my views can stand up to counterarguments. It was a challenge for me to paint the whole picture in the half hour we spent on longtermism, and I expect there are answers in there which will be fresh even for regular listeners. I hope you enjoy both conversations! Feel free to email me with any feedback. The 80,000 Hours Podcast is produced by Keiran Harris.
9/25/2019 · 3 hours, 14 minutes, 32 seconds
Episode Artwork

Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you.

1. Fill out our annual impact survey here. 2. Find a great vacancy on our job board. 3. Learn about our key ideas, and get links to our top articles. 4. Join our newsletter for an email about what's new, every 2 weeks or so. 5. Or follow our pages on Facebook and Twitter. —— Once a year 80,000 Hours runs a survey to find out whether we've helped our users have a larger social impact with their life and career. We and our donors need to know whether our services, like this podcast, are helping people enough to continue them or scale them up, and it's only by hearing from you that we can make these decisions in a sensible way. So, if 80,000 Hours' podcast, job board, articles, headhunting, advising or other projects have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how. You can also let us know where we've fallen short, which helps us fix problems with what we're doing. We've refreshed the survey this year, hopefully making it easier to fill out than in the past. We'll keep this appeal up for about two weeks, but if you fill it out now that means you definitely won't forget! Thanks so much, and talk to you again in a normal episode soon. — Rob
9/16/2019 · 3 minutes, 38 seconds
Episode Artwork

#63 - Vitalik Buterin on better ways to fund public goods, blockchain's failures, & effective giving

Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future? Today's guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptocurrency Bitcoin, which added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire. At the same time, far from indulging hype about these so-called 'blockchain' technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, "blockchains as they currently exist are in many ways a joke, right?" But Buterin is not just a realist. He's also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals. Links to learn more, summary and full transcript. By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of 'cryptoeconomics'. Economist Tyler Cowen has observed that, "at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers, without knowing about the papers at all." Along with previous guest Glen Weyl, Buterin has helped develop a model for so-called 'quadratic funding', which in principle could transform the provision of 'public goods'. That is, goods that people benefit from whether they help pay for them or not. Examples of goods that are fully or partially 'public goods' include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world. But these and other related proposals face major hurdles. They're vulnerable to collusion, might be used to fund scams, and remain untested at a small scale — not to mention that anything with a square root sign in it is going to struggle to achieve societal legitimacy. Is the prize large enough to justify efforts to overcome these challenges? In today's extensive three-hour interview, Buterin and I cover: • What the blockchain has accomplished so far, and what it might achieve in the next decade; • Why many social problems can be viewed as a coordination failure to provide a public good; • Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work; • His view of 'effective altruism' and 'long-termism'; • Why he is optimistic about 'quadratic funding', but pessimistic about replacing existing voting with 'quadratic voting'; • Why humanity might have to abandon living in cities; • And much more. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
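For those wondering where the square root comes in: under the quadratic funding rule Buterin helped develop with Glen Weyl, a project's ideal funding level is the square of the sum of the square roots of its individual contributions, with a matching pool topping up the gap between that ideal and what donors gave directly. The sketch below illustrates that rule only; the project names, contribution amounts, and the proportional scaling used when the pool runs short are illustrative assumptions, not details from the episode.

    import math

    def quadratic_funding(contributions, matching_pool):
        # Quadratic funding: a project's ideal total is the square of the sum of the
        # square roots of its individual contributions. The matching pool pays the gap
        # between that ideal and what was raised directly, scaled down if it runs short.
        raised = {p: sum(cs) for p, cs in contributions.items()}
        ideal = {p: sum(math.sqrt(c) for c in cs) ** 2 for p, cs in contributions.items()}
        needed = {p: ideal[p] - raised[p] for p in contributions}
        total_needed = sum(needed.values())
        scale = min(1.0, matching_pool / total_needed) if total_needed > 0 else 0.0
        return {p: raised[p] + needed[p] * scale for p in contributions}

    # Ten people giving $1 each attract far more matching than one person giving $10,
    # because breadth of support is treated as evidence of a genuine public good.
    example = {
        "many_small_donors": [1.0] * 10,   # ideal total: (10 * sqrt(1))^2 = 100
        "one_big_donor": [10.0],           # ideal total: (sqrt(10))^2 = 10
    }
    print(quadratic_funding(example, matching_pool=200))

The breadth-over-size property is what makes the mechanism attractive for public goods, and also what makes it vulnerable to the collusion and fake-identity attacks mentioned above.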
9/3/2019 · 3 hours, 18 minutes, 23 seconds

#62 - Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out? In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are. • Links to learn more, summary, and full transcript. • Paul's first appearance on the show in episode 44. • An out-take on decision theory. We could tell them hard-won lessons from history; mention some research questions we wish we'd started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons. But, as Christiano points out, even if we could satisfactorily figure out what we'd like to be able to tell our ancestors, that's just the first challenge. We'd need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth's surface quickly gets buried far underground. But even if we figure out a satisfactory message, and a way to ensure it's found, a civilization this far in the future won't speak any language like our own. And being another species, they presumably won't share as many fundamental concepts with us as humans from 1700. If we knew a way to leave them thousands of books and pictures in a material that wouldn't break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery? That's just one of many playful questions discussed in today's episode with Christiano — a frequent writer who's willing to brave questions that others find too strange or hard to grapple with. We also talk about why divesting a little bit from harmful companies might be more useful than I'd been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider. Finally, we get a big update on progress in machine learning and efforts to make sure it's reliably aligned with our goals, which is Paul's main research project. He responds to the views that DeepMind's Pushmeet Kohli espoused in a previous episode, and we discuss whether we'd be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors. Some other issues that come up along the way include: • Are there any supplements people can take that make them think better? • What implications do our views on meta-ethics have for aligning AI with our goals? • Is there much of a risk that the future will contain anything optimised for causing harm? • An out-take about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
8/5/2019 · 2 hours, 11 minutes, 46 seconds

#61 - Helen Toner on emerging technology, national security, and China

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did. Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century. In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances. How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity? Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for disruptive technical changes that might threaten international peace. • Links to learn more, summary and full transcript • Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and Will MacAskill • The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman • AI strategy and governance roles on the job board Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed. Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day. Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands. But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy? In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China. We cover: • Why immigration is the main policy area that should be affected by AI advances today. • Why talking about an 'arms race' in AI is premature. • How Bobby Kennedy may have positively affected the Cuban Missile Crisis. • Whether it's possible to become a China expert and still get a security clearance. • Can access to ML algorithms be restricted, or is that just not practical? • Whether AI could help stabilise authoritarian regimes.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
7/17/2019 · 1 hour, 54 minutes, 56 seconds

#60 - Prof Tetlock on why accurate forecasting matters for everything, and how you can do it better

Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race. Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day. He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better. Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between probabilities as close together as 56% and 57%. • Links to learn more, summary and full transcript • The calibration training app • Sign up for the Civ-5 counterfactual forecasting tournament • A review of the evidence on good forecasting practices • Learn more about Effective Altruism Global In the aftermath of the Iraq WMD intelligence failure, the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014. That was five years ago. In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement. We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'.) We also bring up some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs let people make a profession of improving the reasonableness of decision-making in the major institutions that shape the world, as Tetlock has done over many decades. We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
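Tournaments like Tetlock's typically score forecasters with the Brier score, the mean squared gap between the probabilities people state and what actually happens, which is why distinguishing your 56% from your 57% pays off over many questions. A minimal sketch (the forecasts below are invented for illustration):

    def brier_score(forecasts, outcomes):
        # Mean squared error between probability forecasts and binary outcomes:
        # 0 is a perfect score, 0.25 is what always answering "50%" earns you.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    outcomes = [1, 0, 1]                     # three events, two of which happened
    folk = [0.5, 0.5, 0.5]                   # 'possible' for everything
    discriminating = [0.8, 0.2, 0.7]         # willing to distinguish between cases
    print(brier_score(folk, outcomes))            # 0.25
    print(brier_score(discriminating, outcomes))  # ~0.06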
6/28/2019 · 2 hours, 11 minutes, 38 seconds

#59 - Prof Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. • Links to learn more, summary and full transcript. • 80,000 Hours Annual Review 2018. • How to donate to 80,000 Hours. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss: • How much people misrepresent their views in democratic countries. • Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis. • When is it justified to encourage your own group to polarise? • Sunstein's difficult experiences as a pioneer of animal rights law. • Whether activists can do better by spending half their resources on public opinion surveys. • Should people be more or less outspoken about their true views? • What might be the next social revolution to take off? • How can we learn about social movements that failed and disappeared? • How to find out what people really think. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site. The 80,000 Hours Podcast is produced by Keiran Harris.
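Sunstein's 'variable thresholds for action' are close cousins of Granovetter's classic threshold model of collective behaviour, which makes the abruptness easy to see: each person joins once enough others already have, so a tiny change in the distribution of thresholds separates a fizzle from a full cascade. A minimal sketch (the threshold values are illustrative, not data from the book):

    def cascade_size(thresholds):
        # Each person joins once the number already participating reaches their
        # personal threshold; iterate until the count stops growing.
        joined = 0
        while True:
            willing = sum(1 for t in thresholds if t <= joined)
            if willing == joined:
                return joined
            joined = willing

    # Thresholds 0, 1, 2, ..., 99: one firebrand starts it and everyone follows.
    print(cascade_size(list(range(100))))            # 100
    # Remove the single person with threshold 1 and the movement stalls at one person.
    print(cascade_size([0] + list(range(2, 101))))   # 1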
6/17/2019 · 1 hour, 43 minutes, 23 seconds

#58 - Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project. When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design. Far from being an overhead on the 'real' work, it’s an essential part of making AI systems work at all. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development. Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether. • Want to be notified about high-impact opportunities to help ensure AI remains safe and beneficial? Tell us a bit about yourself and we’ll get in touch if an opportunity matches your background and interests. • Links to learn more, summary and full transcript. • And a few added thoughts on non-research roles. With the goal of designing systems that are reliably consistent with desired specifications, DeepMind have recently published work on important technical challenges for the machine learning community. For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an 'adversary' that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable. He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards. In today’s interview, we focus on the convergence between broader AI research and robustness, as well as: • DeepMind’s work on the protein folding problem • Parallels between ML problems and past challenges in software development and computer security • How can you analyse the thinking of a neural network? • Unique challenges faced by DeepMind’s technical AGI safety team • How do you communicate with a non-human intelligence? • What are the biggest misunderstandings about AI safety and reliability? • Are there actually a lot of disagreements within the field? • The difficulty of forecasting AI development Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
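To make the idea of an 'adversary' hunting for worst-case inputs concrete, here is a minimal hill-climbing search that tries to push a toy model past its specification. It sketches the general technique only: the toy model, the search method and the spec limit are illustrative assumptions, and DeepMind's actual adversarial testing and verification tools are far more sophisticated.

    import random

    def toy_model(x):
        # Stand-in for a trained model whose output is supposed to stay below 1.0
        # for every input in [-1, 1] if the specification is met.
        return 0.2 * x + 0.9 * x ** 3

    def adversarial_search(model, spec_limit=1.0, steps=2000, step_size=0.01):
        # Hill-climb over the input space, trying to maximise the model's output
        # and so surface the worst-case violation of the specification.
        x = random.uniform(-1.0, 1.0)
        worst_x, worst_score = x, model(x)
        for _ in range(steps):
            candidate = min(1.0, max(-1.0, x + random.uniform(-step_size, step_size)))
            if model(candidate) >= model(x):
                x = candidate
            if model(x) > worst_score:
                worst_x, worst_score = x, model(x)
        if worst_score > spec_limit:
            print(f"Specification violated at x={worst_x:.3f} (score {worst_score:.3f})")
        else:
            print("No violation found -- which is much weaker than a proof that none exists.")
        return worst_x, worst_score

    adversarial_search(toy_model)

The gap between 'our adversary found nothing' and 'no bad input exists' is exactly why adversarial testing gets paired with formal verification.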
6/3/2019 · 1 hour, 30 minutes, 11 seconds

Rob Wiblin on human nature, new technology, and living a happy, healthy & ethical life

This is a cross-post of some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m). Some of the content will be familiar to regular listeners — but if you’re at all interested in Rob’s personal thoughts, there should be quite a lot of new material to make listening worthwhile. The first interview is with Chad Grills. They focused largely on new technologies and existential risks, but also discuss topics like: • Why Rob is wary of fiction • Egalitarianism in the evolution of hunter gatherers • How to stop social media screwing up politics • Careers in government versus business The second interview is with Prof Andrew Leigh - the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than we usually cover on the show, like: • What advice would Rob give to his teenage self? • Which person has most shaped Rob’s view of living an ethical life? • Rob’s approach to giving to the homeless • What does Rob do to maximise his own happiness? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
5/13/2019 · 2 hours, 18 minutes, 24 seconds

#57 - Tom Kalil on how to do the most good in government

You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible - before you quit or get kicked out? That was the challenge put in front of Tom Kalil in 1993. He was successful enough to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things. But not everyone figures out how to move the needle. In today's interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in. Links to learn more, summary and full transcript. Interested in US AI policy careers? Apply for one-on-one career advice here. Vacancies at the Center for Security and Emerging Technology. Our high-impact job board, which features other related opportunities. He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren't; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored. Over years at the White House Office of Science and Technology Policy, 'Team Kalil' built up a white board of principles. For example, 'the schedule is your friend': setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate. Or 'talk to whoever owns the paper'. People would wonder how Tom could get so many lines into the President's speeches. The answer was "figure out who's writing the speech, find them with the document, and tell them to add the line." Obvious, but not something most were doing. Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person. In today's episode we get down to nuts & bolts, and discuss: • How did Tom spin work on a primary campaign into a job in the next White House? • Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team? • How do you get people to do things when you don't have formal power over them? • What roles in the US government are most likely to help with the long-term future, or reducing existential risks? • Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas? • What are 'policy entrepreneurs' and why do they matter? • What is the role for prizes in promoting science and technology? What are other promising policy ideas? • Why you can get more done by not taking credit. • What can the White House do if an agency isn't doing what it wants? • How can the effective altruism community improve the maturity of our policy recommendations? • How much can talented individuals accomplish during a short-term stay in government? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
4/23/2019 · 2 hours, 50 minutes, 15 seconds

#56 - Persis Eskander on wild animal welfare and what, if anything, to do about it

Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right? Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and have to remain constantly vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst. There are fewer than 20 people in the world dedicating their lives to researching these problems. But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns. Links to learn more, summary and full transcript. Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death. But should we actually intervene? How do we know what animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare? For most of these big questions, the answer is: we don’t know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions. There are some concrete steps we could take today, like improving the way wild-caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours. In today’s interview we explore wild animal welfare as a new field of research, and discuss: • Do we have a moral duty towards wild animals or not? • How should we measure the number of wild animals? • What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate? • Is there a danger in imagining how we as humans would feel if we were put into their situation? • Should we eliminate parasites and predators? • How important are insects? • How strongly should we focus on just avoiding humans going in and making things worse? • How does this compare to work on farmed animal suffering? • The most compelling arguments for humanity not dedicating resources to wild animal welfare • Is there much of a case for the idea that this work could improve the very long-term future of humanity? Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss: • The importance of figuring out your values • Chemistry, psychology, and other different paths towards working on wild animal welfare • How to break into new fields Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
4/15/2019 · 2 hours, 57 minutes, 57 seconds

#55 - Lutter & Winter on founding charter cities with outstanding governance to end poverty

Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governments are hard to reform and — to put it mildly — it's not easy to found a new country. This has prompted poverty-fighters and political dreamers to look for creative ways to get new and better 'pseudo-countries' off the ground, which the poor could then voluntarily migrate to in search of security and prosperity. And innovators would be free to experiment with new political and legal systems without having to impose their ideas on existing jurisdictions. The 'seasteading movement' imagined founding new self-governing cities on the sea, but obvious challenges have kept that one on the drawing board. Nobel Prize winner and former World Bank Chief Economist Paul Romer suggested 'charter cities', where a host country would invite another country with better legal institutions to effectively govern some of its territory. But that idea too ran aground for political, practical and personal reasons. Now Mark Lutter and Tamara Winter, of The Center for Innovative Governance Research (CIGR), are reviving the idea of 'charter cities', with some modifications. Gone is the idea of transferring sovereignty. Instead these cities would look more like the 'special economic zones' that worked miracles for Taiwan and China among others. But rather than keep the rest of the country's rules with a few pieces removed, they hope to start from scratch, opting in to the laws they want to keep, in order to leap forward to "best practices in commercial law." Links to learn more, summary and full transcript. Rob on The Good Life: Andrew Leigh in Conversation — on 'making the most of your 80,000 hours'. The project has quickly gotten attention, with Mark and Tamara receiving funding from Tyler Cowen's Emergent Ventures (discussed in episode 45) and winning a Pioneer tournament. Starting afresh with a new city makes it possible to clear away thousands of harmful rules without having to fight each of the thousands of interest groups that will viciously defend their privileges. Initially the city can fund infrastructure and public services by gradually selling off its land, which appreciates as the city flourishes. And with 40 million people relocating to cities every year, there are plenty of prospective migrants. CIGR is fleshing out how these arrangements would work, advocating for them, and developing supporting services that make it easier for any jurisdiction to implement. They're currently in the process of influencing a new prospective satellite city in Zambia. Of course, one can raise many criticisms of this idea: Is it likely to be taken up? Is CIGR really doing the right things to make it happen? Will it really reduce poverty if it is? We discuss those questions, as well as: • How did Mark get a new organisation off the ground, with fundraising and other staff? • What made China's 'special economic zones' so successful? • What are the biggest challenges in getting new cities off the ground? • How did Mark find and hire Tamara? How did he know this was a good idea? • Should people care about this idea if they aren't focussed on tackling poverty? • Why aren't people already doing this? • Why does Tamara support more people starting families? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
3/31/2019 · 2 hours, 31 minutes, 13 seconds

#54 - OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm. How is this possible and what does it show? In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems. A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different people, moving them around a map to attack an enemy. Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map. When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map, and perceive what's there by moving around – metaphorically 'touching' the space. Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it Links to learn more, summary and full transcript This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software. The creation of increasingly 'broad-spectrum' learning algorithms like these has been a key story of the last few years, and this development will likely have unpredictable consequences, heightening the huge challenges that already exist in AI policy. Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2. We discuss: • What are the most significant changes in the AI policy world over the last year or two? • What capabilities are likely to develop over the next five, 10, 15, 20 years? • How much should we focus on the next couple of years, versus the next couple of decades? • How should we approach possible malicious uses of AI? • What are some of the potential ways OpenAI could make things worse, and how can they be avoided? • Publication norms for AI research • Where do we stand in terms of arms races between countries or different AI labs? • The case for creating newsletters • Should the AI community have a closer relationship to the military? • Working at OpenAI vs. working in the US government • How valuable is Twitter in the AI policy world? Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss: • The reaction to OpenAI's release of GPT-2 • Jack’s critique of our US AI policy article • How valuable are roles in government? • Where do you start if you want to write content for a specific audience? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
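One way to see why a robot hand and Dota 2 yield to the same algorithm is that both are wrapped in the same narrow interface: observe the state, pick an action, receive a reward, repeat. Below is a minimal sketch of that shared loop with a made-up toy environment and a random policy; the class and function names are illustrative and none of this is OpenAI's actual code.

    import random

    class NudgeTheDialEnv:
        # Toy stand-in environment: the agent nudges a dial toward a hidden target.
        # A robot hand or a Dota match exposes the same observe/act/reward surface,
        # just with vastly larger observation and action spaces.
        def __init__(self):
            self.state, self.target = 0.0, 3.0

        def observe(self):
            return self.state

        def step(self, action):
            # action is -1, 0, or +1; reward is higher the closer we get to the target
            self.state += action
            return self.observe(), -abs(self.target - self.state)

    def random_policy(observation):
        return random.choice([-1, 0, 1])

    env = NudgeTheDialEnv()
    obs, total_reward = env.observe(), 0.0
    for _ in range(10):
        obs, reward = env.step(random_policy(obs))
        total_reward += reward
    print("return over 10 steps:", total_reward)

A general-purpose algorithm such as PPO only ever touches the environment through this surface, which is why the same training code can drive very different systems once the right observation and action spaces are plugged in.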
3/19/2019 · 2 hours, 53 minutes, 39 seconds

#53 - Kelsey Piper on the room for important advocacy within journalism

“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets? Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that. Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work. But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle. They hope that in the long-term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics. Links to learn more, summary and full transcript. Links to Kelsey's top articles. Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them. Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I'm glad you're working on it.” Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics. If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems. Kelsey points out that one needn't take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself. In today’s episode we discuss that path, as well as: • What’s the day to day life of a Vox journalist like? • How can good journalism get funded? • Are there meaningful tradeoffs between doing what's in the interest of Vox and doing what’s good? • How concerned should we be about the risk of effective altruism being perceived as partisan? • How well can short articles effectively communicate complicated ideas? • Are there alternative business models that could fund high quality journalism on a larger scale? • How do you approach the case for taking AI seriously to a broader audience? • How valuable might it be for media outlets to do Tetlock-style forecasting? • Is it really a good idea to heavily tax billionaires? • How do you avoid the pressure to get clicks? • How possible is it to predict which articles are going to be popular? • How did Kelsey build the skills necessary to work at Vox? • General lessons for people dealing with very difficult life circumstances Rob is then joined by two of his colleagues – Keiran Harris & Michelle Hutchinson – to quickly discuss: • The risk political polarisation poses to long-termist causes • How should specialists keep journalism available as a career option? • Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris.
2/27/2019 · 2 hours, 34 minutes, 30 seconds

Julia Galef and Rob Wiblin on an updated view of the best ways to help humanity

This is a cross-post of an interview Rob did with Julia Galef on her podcast Rationally Speaking. Rob and Julia discuss how the career advice 80,000 Hours gives has changed over the years, and the biggest misconceptions about our views. The topics will be familiar to the most fervent fans of this show — but we think that if you’ve listened to less than about half of the episodes we've released so far, you’ll find something new to enjoy here. Julia may be familiar to you as the guest on episode 7 of the show, way back in September 2017. The conversation also covers topics like: • How many people should try to get a job in finance and donate their income? • The case for working to reduce global catastrophic risks in targeted ways, and historical precedents for this kind of work • Why reducing risk is a better way to help the future than increasing economic growth • What percentage of the world should ideally follow 80,000 Hours advice? Links to learn more, summary and full transcript. If you’re interested in the cooling and expansion of the universe, which comes up on the show, you should definitely check out our 29th episode with Dr Anders Sandberg. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into any podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
2/17/2019 · 56 minutes, 45 seconds

#52 - Prof Glen Weyl on uprooting capitalism and democracy for a just society

Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society about what people want and how to make it. But when it comes to politics and voting - which also aim to aggregate the preferences and knowledge found in millions of individuals - the enthusiasm for finding clever institutional designs often turns to skepticism. Today's guest, freewheeling economist Glen Weyl, won't have it, and is on a warpath to reform liberal democratic institutions in order to save them. Just last year he wrote Radical Markets: Uprooting Capitalism and Democracy for a Just Society with Eric Posner, but has already moved on, saying "in the 6 months since the book came out I've made more intellectual progress than in the whole 10 years before that." Weyl believes we desperately need more efficient, equitable and decentralised ways to organise society, that take advantage of what each person knows, and his research agenda has already been making breakthroughs. Links to learn more, summary and full transcript Our high impact job board Join our newsletter Despite a history in the best economics departments in the world - Harvard, Princeton, Yale and the University of Chicago - he is too worried for the future to sit in his office writing papers. Instead he has left the academy to try to inspire a social movement, RadicalxChange, with a vision of social reform as expansive as his own. You can sign up for their conference in Detroit in March here Economist Alex Tabarrok called his latest proposal, known as 'liberal radicalism', "a quantum leap in public-goods mechanism-design" - we explain how it works in the show. But the proposal, however good in theory, might struggle in the real world because it requires large subsidies, and compensates for people's selfishness so effectively that it might even be an overcorrection. An earlier mechanism - 'quadratic voting' (QV) - would allow people to express the relative strength of their preferences in the democratic process. No longer would 51 people who support a proposal, but barely care about the issue, outvote 49 incredibly passionate opponents, predictably making society worse in the process. We explain exactly how in the episode. Weyl points to studies showing that people are more likely to vote strongly not only about issues they *care* more about, but issues they *know* more about. He expects that allowing people to specialise and indicate when they know what they're talking about will create a democracy that does more to aggregate careful judgement, rather than just passionate ignorance. But these and indeed all of Weyl's ideas have faced criticism. Some say the risk of unintended consequences is too great, or that they solve the wrong problem. Others see these proposals as unproven, impractical, or just another example of an intellectual engaged in grand social planning. I raise these concerns to see how he responds. As big a topic as all of that is, this extended conversation also goes into the blockchain, problems with the effective altruism community and how auctions could replace private property. Don't miss it. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
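Quadratic voting in one worked example: buying n votes on an issue costs n squared voice credits, so expressing an intense preference is allowed but gets expensive quickly. In the 51-lukewarm-versus-49-passionate scenario above, the passionate minority can now win. A minimal sketch (the credit budgets are illustrative):

    def qv_outcome(voters):
        # Each voter spends their whole credit budget on this one issue; n votes
        # cost n^2 credits, so a budget of b buys floor(sqrt(b)) votes.
        tally = 0
        for side, budget in voters:
            votes = int(budget ** 0.5)
            tally += votes if side == "for" else -votes
        return "passes" if tally > 0 else "fails"

    # 51 mild supporters spending 1 credit each (1 vote each) versus
    # 49 passionate opponents spending 9 credits each (3 votes each).
    voters = [("for", 1)] * 51 + [("against", 9)] * 49
    print(qv_outcome(voters))   # fails: 51 votes for, 147 against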
2/8/2019 · 2 hours, 44 minutes, 26 seconds

#51 - Martin Gurri on the revolt of the public & crisis of authority in the information age

Politics in rich countries seems to be going nuts. What's the explanation? Rising inequality? The decline of manufacturing jobs? Excessive immigration? Martin Gurri spent decades as a CIA analyst and, in his 2014 book The Revolt of The Public and Crisis of Authority in the New Millennium, predicted political turbulence for an entirely different reason: new communication technologies were flipping the balance of power between the public and traditional authorities. In 1959 the President could control the narrative by leaning on his friends at four TV stations, who felt it was proper to present the nation's leader in a positive light, no matter their flaws. Today, it's impossible to prevent someone from broadcasting any grievance online, whether it's a contrarian insight or an insane conspiracy theory. Links to learn more, summary and full transcript. According to Gurri, trust in society's institutions - police, journalists, scientists and more - has been undermined by constant criticism from outsiders. Exposed to a cacophony of conflicting opinions on every issue, the public takes fewer truths for granted. We are now free to see our leaders as the flawed human beings they always have been, and are not amused. Suspicious they are being betrayed by elites, the public can also use technology to coordinate spontaneously and express its anger. Keen to 'throw the bastards out', protesters take to the streets, united by what they don't like, but without a shared agenda or the institutional infrastructure to figure out how to fix things. Some popular movements have come to view any attempt to exercise power over others as suspect. If Gurri is to be believed, protest movements in Egypt, Spain, Greece and Israel in 2011 followed this script, while Brexit, Trump and the French yellow vests movement subsequently vindicated his theory. In this model, politics won't return to its old equilibrium any time soon. The leaders of tomorrow will need a new message and style if they hope to maintain any legitimacy in this less hierarchical world. Otherwise, we're in for decades of grinding conflict between traditional centres of authority and the general public, who doubt both their loyalty and competence. But how much should we believe this theory? Why do Canada and Australia remain pools of calm in the storm? Aren't some malcontents quite concrete in their demands? And are protest movements actually more common (or more nihilistic) than they were decades ago? In today's episode we ask these questions and add an hour-long discussion with two of Rob's colleagues - Keiran Harris and Michelle Hutchinson - to further explore the ideas in the book. The conversation covers: * How do we know that the internet is driving this rather than some other phenomenon? * How do technological changes enable social and political change? * The historical role of television * Are people also more disillusioned now with sports heroes and actors? * Which countries are finding good ways to make politics work in this new era? * What are the implications for the threat of totalitarianism? * What is this going to do to international relations? Will it make it harder for countries to cooperate and avoid conflict? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
1/29/2019 · 2 hours, 31 minutes, 10 seconds

#50 - Dr David Denkenberger on how to feed all 8b people through an asteroid/nuclear winter

If an asteroid impact or nuclear winter blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feeding Everyone No Matter What: no. If he's to be believed, nobody need starve at all. Even without the sun, David sees the Earth as a bountiful food source. Mushrooms farmed on decaying wood. Bacteria fed with natural gas. Fish and mussels supported by sudden upwelling of ocean nutrients - and more. Dr Denkenberger is an Assistant Professor at the University of Alaska Fairbanks, and he's out to spread the word that while a nuclear winter might be horrible, experts have been mistaken to assume that mass starvation is an inevitability. In fact, the only thing that would prevent us from feeding the world is insufficient preparation. ∙ Links to learn more, summary and full transcript Not content to just write a book pointing this out, David has gone on to found a growing non-profit - the Alliance to Feed the Earth in Disasters (ALLFED) - to prepare the world to feed everyone come what may. He expects that today 10% of people would find enough food to survive a massive disaster. In principle, if we did everything right, nobody need go hungry. But being more realistic about how much we're likely to invest, David thinks a plan to inform people ahead of time could save 30%, and a decent research and development scheme 80%. ∙ 80,000 Hours' updated article on How to find the best charity to give to ∙ A potential donor evaluates ALLFED According to David's published cost-benefit analyses, work on this problem may be able to save lives, in expectation, for under $100 each, making it an incredible investment. These preparations could also help make humanity more resilient to global catastrophic risks, by forestalling an ‘everyone for themselves' mentality, which then causes trade and civilization to unravel. But some worry that David's cost-effectiveness estimates are exaggerations, so I challenge him on the practicality of his approach, and how much his non-profit's work would actually matter in a post-apocalyptic world. In our extensive conversation, we cover: * How could the sun end up getting blocked, or agriculture otherwise be decimated? * What are all the ways we could we eat nonetheless? What kind of life would this be? * Can these methods be scaled up fast? * What is his organisation, ALLFED, actually working on? * How does he estimate the cost-effectiveness of this work, and what are the biggest weaknesses of the approach? * How would more food affect the post-apocalyptic world? Won't people figure it out at that point anyway? * Why not just leave guidebooks with this information in every city? * Would these preparations make nuclear war more likely? * What kind of people is ALLFED trying to hire? * What would ALLFED do with more money? * How he ended up doing this work. And his other engineering proposals for improving the world, including ideas to prevent a supervolcano explosion. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
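To show the shape of that cost-effectiveness claim rather than its actual inputs: divide the money spent on preparation by the expected number of extra survivors, where the expectation multiplies the chance of a sun-blocking catastrophe by the additional fraction of people who survive because the preparation existed. Every number below is a made-up placeholder, not a figure from David's published analyses.

    def expected_cost_per_life(spend, p_catastrophe, population, extra_survival_fraction):
        # Cost divided by the expected number of additional survivors.
        # All arguments are hypothetical placeholders, not ALLFED's estimates.
        expected_lives_saved = p_catastrophe * population * extra_survival_fraction
        return spend / expected_lives_saved

    # e.g. $100m of preparation, a 1% chance of a sun-blocking catastrophe,
    # 8 billion people, and preparation lifting survival from 10% to 30%:
    print(expected_cost_per_life(100e6, 0.01, 8e9, 0.30 - 0.10))   # ≈ $6 per life in expectation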
12/27/2018 · 2 hours, 57 minutes, 3 seconds

#49 - Dr Rachel Glennerster on a year's worth of education for 30c & other development 'best buys'

If I told you it's possible to deliver an extra year of ideal primary-level education for under $1, would you believe me? Hopefully not - the claim is absurd on its face. But it may be true nonetheless. The very best education interventions are phenomenally cost-effective, and they're not the kinds of things you'd expect, says Dr Rachel Glennerster. She's Chief Economist at the UK's foreign aid agency DFID, and used to run J-PAL, the world-famous anti-poverty research centre based in MIT's Economics Department, where she studied the impact of a wide range of approaches to improving education, health, and governing institutions. According to Dr Glennerster: "...when we looked at the cost effectiveness of education programs, there were a ton of zeros, and there were a ton of zeros on the things that we spend most of our money on. So more teachers, more books, more inputs, like smaller class sizes - at least in the developing world - seem to have no impact, and that's where most government money gets spent." "But measurements for the top ones - the most cost effective programs - say they deliver 460 LAYS per £100 spent ($US130). LAYS are Learning-Adjusted Years of Schooling. Each one is the equivalent of the best possible year of education you can have - Singapore-level." Links to learn more, summary and full transcript. "...the two programs that come out as spectacularly effective... well, the first is just rearranging kids in a class." "You have to test the kids, so that you can put the kids who are performing at grade two level in the grade two class, and the kids who are performing at grade four level in the grade four class, even if they're different ages - and they learn so much better. So that's why it's so phenomenally cost effective because, it really doesn't cost anything." "The other one is providing information. So sending information over the phone [for example about how much more people earn if they do well in school and graduate]. So these really small nudges. Now none of those nudges will individually transform any kid's life, but they are so cheap that you get these fantastic returns on investment - and we do very little of that kind of thing." In this episode, Dr Glennerster shares her decades of accumulated wisdom on which anti-poverty programs are overrated, which are neglected opportunities, and how we can know the difference, across a range of fields including health, empowering women and macroeconomic policy. Regular listeners will be wondering - have we forgotten all about the lessons from episode 30 of the show with Dr Eva Vivalt? She threw several buckets of cold water on the hope that we could accurately measure the effectiveness of social programs at all. According to Vivalt, her dataset of hundreds of randomised controlled trials indicates that social science findings don’t generalize well at all. The results of a trial at a school in Namibia tell us remarkably little about how a similar program will perform if delivered at another school in Namibia - let alone if it's attempted in India instead. Rachel offers a different and more optimistic interpretation of Eva's findings. To learn more and figure out who you sympathise with more, you'll just have to listen to the episode. Regardless, Vivalt and Glennerster agree that we should continue to run these kinds of studies, and today’s episode delves into the latest ideas in global health and development. Get this episode by subscribing: type '80,000 Hours' into your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris.
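The '30c' in the episode title follows directly from the quoted figure: 460 LAYS per £100 (roughly US$130) works out to about 22p, or a little under 30 US cents, per learning-adjusted year of schooling.

    # Cost per learning-adjusted year of schooling (LAY) implied by the quoted figures.
    lays_per_100_pounds = 460
    cost_per_lay_gbp = 100 / lays_per_100_pounds    # ≈ £0.22
    cost_per_lay_usd = 130 / lays_per_100_pounds    # ≈ $0.28, the "30c" in the title
    print(round(cost_per_lay_gbp, 2), round(cost_per_lay_usd, 2))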
12/20/2018 · 1 hour, 35 minutes, 41 seconds

#48 - Brian Christian on better living through the wisdom of computer science

Please let us know if we've helped you: fill out our annual impact survey. Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn't get much done? Computer scientists have a term for this - thrashing - and it's a common reason our computers freeze up. The solution, for people as well as laptops, is to 'work dumber': pick something at random and finish it, without wasting time thinking about the bigger picture. Bestselling author Brian Christian studied computer science, and in the book Algorithms to Live By he's out to find the lessons it can offer for a better life. He investigates when to quit your job, when to marry, the best way to sell your house, how long to spend on a difficult decision, and how much randomness to inject into your life. In each case computer science gives us a theoretically optimal solution, and in this episode we think hard about whether its models match our reality. Links to learn more, summary and full transcript. One genre of problems Brian explores in his book is 'optimal stopping problems', the canonical example of which is ‘the secretary problem’. Imagine you're hiring a secretary, you receive *n* applicants, they show up in a random order, and you interview them one after another. You either have to hire that person on the spot and dismiss everybody else, or send them away and lose the option to hire them in future. It turns out most of life can be viewed this way - a series of unique opportunities you pass by that will never be available in exactly the same way again. So how do you attempt to hire the very best candidate in the pool? There's a risk that you stop before finding the best, and a risk that you set your standards too high and let the best candidate pass you by. Mathematicians of the mid-twentieth century produced an elegant optimal approach: spend exactly one over *e*, or approximately 37% of your search, just establishing a baseline without hiring anyone, no matter how promising they seem. Then immediately hire the next person who's better than anyone you've seen so far. It turns out that your odds of success in this scenario are also 37%. And the optimal strategy and the odds of success are identical regardless of the size of the pool. So as *n* goes to infinity you still want to follow this 37% rule, and you still have a 37% chance of success. Even if you interview a million people. But if you have the option to go back, say by apologising to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%. Today’s episode focuses on Brian’s book-length exploration of how insights from computer algorithms can and can't be applied to our everyday lives. We cover: * Computational kindness, and the best way to schedule meetings * How can we characterize a computational model of what people are actually doing, and is there a rigorous way to analyse just how good their instincts actually are? * What’s it like being a human confederate in the Turing test competition? * Is trying to detect fake social media accounts a losing battle? * The canonical explore/exploit problem in computer science: the multi-armed bandit * What’s the optimal way to buy or sell a house? * Why is information economics so important? * What kind of decisions should people randomize more in life? * How much time should we spend on prioritisation?
Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
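The 37% figure is easy to check by simulation: look at the first 37% of candidates without committing, then leap at the first one better than everyone seen so far. A minimal sketch (candidate quality is just a rank, which is all the rule needs):

    import random

    def secretary_trial(n=100, explore_fraction=0.37):
        # Returns True if the look-then-leap rule ends up hiring the single best candidate.
        candidates = list(range(n))          # higher number = better candidate
        random.shuffle(candidates)
        cutoff = int(n * explore_fraction)
        best_seen = max(candidates[:cutoff], default=-1)
        for c in candidates[cutoff:]:
            if c > best_seen:
                return c == n - 1            # leapt -- was it onto the best candidate?
        return candidates[-1] == n - 1       # forced to settle for the last applicant

    trials = 100_000
    wins = sum(secretary_trial() for _ in range(trials))
    print(wins / trials)                     # ≈ 0.37, matching the 1/e result in the book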
11/22/2018 · 3 hours, 15 minutes, 29 seconds

#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option. He decided to apply to OpenAI, and spent about 6 weeks preparing for the interview before landing the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others. On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and left her computational neuroscience PhD to become a research engineer at Google Brain. She and Daniel share this piece of advice for those curious about this career path: just dive in. If you're trying to get good at something, just start doing that thing, and figure out that way what's necessary to be able to do it well. Catherine has even created a simple step-by-step guide for 80,000 Hours, to make it as easy as possible for others to copy her and Daniel's success. Please let us know how we've helped you: fill out our 2018 annual impact survey so that 80,000 Hours can continue to operate and grow. Blog post with links to learn more, a summary & full transcript. Daniel thinks the key for him was nailing the job interview. OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he'd be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working. Daniel emphasizes that the most important thing was to practice *exactly* those things that he knew he needed to be able to do. His dedicated preparation also led to an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him. Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they're right, it could greatly increase our ability to get new people into important ML roles in which they can make a difference, as quickly as possible. Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity. Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover: * What are OpenAI and Google Brain doing? * Why work on AI? * Do you learn more on the job, or while doing a PhD? * Controversial issues within ML * Is replicating papers a good way of determining suitability? * What % of software developers could make similar transitions? * How in-demand are research engineers? * The development of Dota 2 bots * Do research scientists have more influence on the vision of an org? * Has learning more made you more or less worried about the future? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
11/2/2018 · 2 hours, 4 minutes, 49 seconds

#46 - Prof Hilary Greaves on moral cluelessness & tackling crucial questions in academia

The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush your way past the people now waiting, or just accept this as a dollar you’re never getting back? According to philosophy Professor Hilary Greaves - Director of Oxford University's Global Priorities Institute, which is hiring - this simple decision will completely change the long-term future by altering the identities of almost all future generations.

How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day - including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child. By causing a child to be conceived a few fractions of a second earlier or later, you change the sperm that fertilizes their egg, resulting in a totally different person. So asking for that $1 has now made the difference between all the things that this actual child will do in their life, and all the things that the merely possible child - who didn't exist because of what you did - would have done if you decided not to worry about it.

As that child's actions ripple out to everyone else who conceives down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or not to be born? Probably both!

Links to learn more, summary and full transcript.

Some find this concerning. The actual long term effects of your decisions are so unpredictable, it looks like you’re totally clueless about what's going to lead to the best outcomes. It might lead to decision paralysis - you won’t be able to take any action at all.

Prof Greaves doesn’t share this concern for most real life decisions. If there’s no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less canceled out by the equally likely opposite possibility.

But, if instead we’re talking about a decision that involves highly-structured, systematic reasons for thinking there might be a general tendency of your action to make things better or worse -- for example if we increase economic growth -- Prof Greaves says that we don’t get to just ignore the unforeseeable effects.

When there are complex arguments on both sides, it's unclear what probabilities you should assign to this or that claim. Yet, given its importance, whether you should take the action in question actually does depend on figuring out these numbers. So, what do we do?

Today’s episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover:

* How controversial is the multiverse interpretation of quantum physics?
* Given moral uncertainty, how should population ethics affect our real life decisions?
* How should we think about archetypal decision theory problems?
* What are the consequences of cluelessness for those who based their donation advice on GiveWell style recommendations?
* How could reducing extinction risk be a good cause for risk-averse people?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
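Here is one way to make the cancellation idea concrete. This is a toy simulation of my own, not Prof Greaves' formalism: if the unforeseeable ripple effects of going back for your dollar are as likely to be hugely good as hugely bad, they vanish in expectation, and only the foreseeable part moves the expected value.

```python
# Toy illustration: symmetric, unpredictable long-run effects cancel in expectation.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
foreseeable_benefit = 1.0                   # the dollar you reclaim
unforeseeable = rng.normal(0.0, 100.0, n)   # enormous but symmetric ripple effects

total = foreseeable_benefit + unforeseeable
print(round(total.mean(), 2))   # ~1.0 (up to sampling noise): expectation tracks the foreseeable part
print(round(total.std(), 1))    # ~100.0: the variance is huge, but that's a separate question
```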
10/23/2018 · 2 hours, 49 minutes, 12 seconds

#45 - Tyler Cowen's case for maximising econ growth, stabilising civilization & thinking long-term

I've probably spent more time reading Tyler Cowen - Professor of Economics at George Mason University - than any other author. Indeed it's his incredibly popular blog Marginal Revolution that prompted me to study economics in the first place. Having spent thousands of hours absorbing Tyler's work, it was a pleasure to be able to question him about his latest book and personal manifesto: Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals.

Tyler makes the case that, despite what you may have heard, we *can* make rational judgments about what is best for society as a whole. He argues:

1. Our top moral priority should be preserving and improving humanity's long-term future
2. The way to do that is to maximise the rate of sustainable economic growth
3. We should respect human rights and follow general principles while doing so.

We discuss why Tyler believes all these things, and I push back where I disagree. In particular: is higher economic growth actually an effective way to safeguard humanity's future, or should our focus really be elsewhere?

In the process we touch on many of moral philosophy's most pressing questions: Should we discount the future? How should we aggregate welfare across people? Should we follow rules or evaluate every situation individually? How should we deal with the massive uncertainty about the effects of our actions? And should we trust common sense morality or follow structured theories?

Links to learn more, summary and full transcript.

After covering the book, the conversation ranges far and wide. Will we leave the galaxy, and is it a tragedy if we don't? Is a multi-polar world less stable? Will humanity ever help wild animals? Why do we both agree that Kant and Rawls are overrated?

Today's interview is released on both the 80,000 Hours Podcast and Tyler's own show: Conversation with Tyler. Tyler may have had more influence on me than any other writer but this conversation is richer for our remaining disagreements.

If the above isn't enough to tempt you to listen, we also look at:

* Why couldn’t future technology make human life a hundred or a thousand times better than it is for people today?
* Why focus on increasing the rate of economic growth rather than making sure that it doesn’t go to zero?
* Why shouldn’t we dedicate substantial time to the successful introduction of genetic engineering?
* Why should we completely abstain from alcohol and make it a social norm?
* Why is Tyler so pessimistic about space? Is it likely that humans will go extinct before we manage to escape the galaxy?
* Is improving coordination and international cooperation a major priority?
* Why does Tyler think institutions are keeping up with technology?
* Given that our actions seem to have very large and morally significant effects in the long run, are our moral obligations very onerous?
* Can art be intrinsically valuable?
* What does Tyler think Derek Parfit was most wrong about, and what was he most right about that’s unappreciated today?

Get this episode by subscribing: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
10/17/2018 · 2 hours, 30 minutes, 40 seconds

#44 - Dr Paul Christiano on how we'll hand the future off to AI, & solving the alignment problem

Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening - Paul works on AI himself and has an unusually well-thought-through view of how it will change the world.

This is now the top resource I'm going to refer people to if they're interested in positively shaping the development of AI, and want to understand the problem better. Even though I'm familiar with Paul's writing I felt I was learning a great deal and am now in a better position to make a difference to the world.

A few of the topics we cover are:

* Why Paul expects AI to transform the world gradually rather than explosively and what that would look like
* Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us
* Why AI systems will probably be granted legal and property rights
* How an advanced AI that doesn't share human goals could still have moral value
* Why machine learning might take over science research from humans before it can do most other tasks
* Which decade we should expect human labour to become obsolete, and how this should affect your savings plan.

Links to learn more, summary and full transcript.

Important new article: These are the world’s highest impact career paths according to our research

Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree.

If given plenty of time - and enough arguments, counterarguments and counter-counter-arguments between all the experts - should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over? In other words: does 'debate', in principle, lead to truth?

According to Paul Christiano - researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities - this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are.

It's a method OpenAI is actively trying to develop, because in the long-term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight.

If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful. But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case.

Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
10/2/2018 · 3 hours, 51 minutes, 50 seconds

#43 - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines

In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”.

Daniel Ellsberg - leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency - claims in his new book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.

Links to learn more, summary and full transcript.

The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today. If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere. As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.

You might think the United States would have a more sensible nuclear launch policy. You’d be wrong. As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth. The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.

The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival. Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it. Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity.

Strategically, the setup is stupid. Ethically, it is monstrous. So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization? Daniel explores these questions eloquently and urgently in his book. Today we cover:

* Why full disarmament today would be a mistake and the optimal number of nuclear weapons to hold
* How well are secrets kept in the government?
* What was the risk of the first atomic bomb test?
* The effect of Trump on nuclear security
* Do we have a reliable estimate of the magnitude of a ‘nuclear winter’?
* Why Gorbachev allowed Russia’s covert biological warfare program to continue

Get this episode by subscribing: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
9/25/2018 · 2 hours, 44 minutes, 27 seconds

#42 - Dr Amanda Askell on moral empathy, the value of information & the ethics of infinity

Consider two familiar moments at a family reunion. Our host, Uncle Bill, takes pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But if seriously considered as a moral position, as they might if instead Becky were avoiding meat on religious grounds, it would usually receive a very different reaction.

An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table; the family mostly think that he has no business in trying to foist his regressive preference on anyone. But if considered not as a matter of personal taste, but rather as a moral position - that Bill genuinely believes he’s opposing mass-murder - his comment might start a serious conversation.

Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. All sides of the political spectrum struggle to get inside the mind of people we disagree with and see issues from their point of view.

Links to learn more, summary and full transcript.

This often happens because of confusion between preferences and moral positions. Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.

One potential path for progress surrounds contraception; a lot of people who are anti-abortion are also anti-contraception. But they’ll usually think that abortion is much worse than contraception, so why can’t we compromise and agree to have much more contraception available? According to Amanda, a charitable explanation for this is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.

So instead of arguing about abortion and contraception, we could discuss the underlying principle that one should advocate for the best possible world, rather than the best probable world. Successfully break down such ethical beliefs, absent political toxicity, and it might be possible to actually converge on a key point of agreement.

Today’s episode blends such everyday topics with in-depth philosophy, including:

* What is 'moral cluelessness' and how can we work around it?
* Amanda's biggest criticisms of social justice activists, and of critics of social justice activists
* Is there an ethical difference between prison and corporal punishment?
* How to resolve 'infinitarian paralysis' - the inability to make decisions when infinities are involved.
* What’s effective altruism doing wrong?
* How should we think about jargon? Are a lot of people who don’t communicate clearly just scamming us?
* How can people be more successful within the cocoon of school and university?
* How did Amanda find doing a philosophy PhD, and how will she decide what to do now?

Links:

* Career review: Congressional staffer
* Randomised experiment on quitting
* Psychology replication quiz
* Should you focus on your comparative advantage?

Get this episode by subscribing: type 80,000 Hours into your podcasting app. The 80,000 Hours podcast is produced by Keiran Harris.
9/11/2018 · 2 hours, 46 minutes, 28 seconds

#41 - David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher

With 698 inmates per 100,000 citizens, the U.S. is by far the leader among large wealthy nations in incarceration. But what effect does imprisonment actually have on crime? According to David Roodman, Senior Advisor to the Open Philanthropy Project, the marginal effect is zero.

* 80,000 HOURS IMPACT SURVEY - Let me know how this show has helped you with your career.
* ROB'S AUDIOBOOK RECOMMENDATIONS

This stunning rebuke to the American criminal justice system comes from the man Holden Karnofsky’s called "the gold standard for in-depth quantitative research", whose other investigations include the risk of geomagnetic storms, whether deworming improves health and test scores, and the development impacts of microfinance.

Links to learn more, summary and full transcript.

The effects of incarceration on crime can be split into three categories: before, during, and after.

Does having tougher sentences deter people from committing crime? After reviewing studies on gun laws and ‘three strikes’ in California, David concluded that the effect of deterrence is zero.

Does imprisoning more people reduce crime by incapacitating potential offenders? Here he says yes, noting that crimes like motor vehicle theft have gone up in a way that seems pretty clearly connected with recent Californian criminal justice reforms (though the effect on violent crime is far lower).

Finally, do the after-effects of prison make you more or less likely to commit future crimes? This one is more complicated.

Concerned that he was biased towards a comfortable position against incarceration, David did a cost-benefit analysis using both his favored reading of the evidence and the devil's advocate view: that there is deterrence and that the after-effects are beneficial.

For the devil’s advocate position David used the highest assessment of the harm caused by crime, which suggests a year of prison prevents about $92,000 in crime. But weighed against a lost year of liberty, valued at $50,000, plus the cost of operating prisons, the numbers came out exactly the same. So even using the least-favorable cost-benefit valuation of the least favorable reading of the evidence -- it just breaks even.

The argument for incarceration melts further when you consider the significant crime that occurs within prisons, de-emphasised because of a lack of data and a perceived lack of compassion for inmates.

In today’s episode we discuss how to conduct such impactful research, and how to proceed having reached strong conclusions. We also cover:

* How do you become a world class researcher? What kinds of character traits are important?
* Are academics aware of following perverse incentives?
* What’s involved in data replication? How often do papers replicate?
* The politics of large orgs vs. small orgs
* Geomagnetic storms as a potential cause area
* How much does David rely on interviews with experts?
* The effects of deworming on child health and test scores
* Should we have more ‘data vigilantes’?
* What are David’s critiques of effective altruism?
* What are the pros and cons of starting your career in the think tank world?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
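The break-even calculation described above is simple enough to write out. The $92,000 and $50,000 figures come straight from the summary; the prison operating cost is a placeholder I have chosen so the totals match the break-even conclusion David reports.

```python
# Devil's-advocate cost-benefit sketch: does a marginal prison-year pay for itself?
crime_prevented_value = 92_000   # most generous estimate of crime averted per prison-year
lost_liberty_value    = 50_000   # value of a year of freedom to the person imprisoned
prison_operating_cost = 42_000   # placeholder figure, chosen here to illustrate break-even

net_benefit = crime_prevented_value - (lost_liberty_value + prison_operating_cost)
print(net_benefit)   # 0: even the reading most favourable to incarceration only breaks even
```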
8/28/2018 · 2 hours, 18 minutes

#40 - Katja Grace on forecasting future technology & how much we should trust expert predictions

Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?

Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.

Note: Katja's organisation AI Impacts is currently hiring part- and full-time researchers.

There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast. But there are also many things we’re able to predict confidently today -- like the climate of Oxford in five years -- that we no longer give ourselves much credit for. Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.

Links to learn more, summary and full transcript.

One controversial debate surrounds the idea of an intelligence explosion; how likely is it that there will be a sudden jump in AI capability? And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity? A significant historical example was the development of nuclear weapons. Over thousands of years, the efficacy of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.

In today’s interview we also discuss:

* Why is AI Impacts one of the most important projects in the world?
* How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
* How does writing an academic paper differ from posting a summary online?
* When will unguided machines be able to produce better and cheaper work than humans for every possible task?
* What’s one of the most likely jobs to be automated soon?
* Are people always just predicting the same timelines for new technologies?
* How do AGI researchers differ from other AI researchers in their predictions?
* What are attitudes to safety research like within ML? Are there regional differences?
* How much should we believe experts generally?
* How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
* How quickly has the processing capacity for machine learning problems been increasing?
* What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
* What should we expect from a post AI dominated economy?
* How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours podcast is produced by Keiran Harris.
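One way to make the 'big discontinuity' idea quantitative, a rough construction of my own rather than necessarily the exact metric AI Impacts uses, is to ask how many years of progress at the previous trend a single jump represented. The nuclear-weapons numbers below are purely illustrative.

```python
# How many years of prior-trend progress did one jump represent?
import math

prior_growth_per_year = 3 ** (1 / 1000)   # suppose ~3x improvement over the preceding 1,000 years
jump_factor = 1000                        # then one advance delivers ~1,000x at a stroke

years_equivalent = math.log(jump_factor) / math.log(prior_growth_per_year)
print(f"{years_equivalent:,.0f} years of trend progress in a single step")   # ~6,300 years
```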
8/21/2018 · 2 hours, 11 minutes, 16 seconds

#39 - Spencer Greenberg on the scientific approach to solving difficult everyday questions

Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner? Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real life problems. Let’s work through one here: how likely is it that you’ll enjoy listening to this episode?

The first step is to figure out your ‘prior probability’; what’s your estimate of how likely you are to enjoy the interview before getting any further evidence? Other than applying common sense, one way to figure this out is called reference class forecasting: looking at similar cases and seeing how often something is true, on average.

Spencer is our first ever return guest. So one reference class might be, how many Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed so far? Being this specific limits bias in your answer, but with a sample size of at most 1 - you’d probably want to add more data points to reduce variability.

Zooming out, how many episodes of the 80,000 Hours Podcast have you enjoyed? Let’s say you’ve listened to 10, and enjoyed 8 of them. If so, 8 out of 10 might be your prior probability. But maybe the two you didn’t enjoy had something in common. If you’ve liked similar episodes in the past, you’d update in favour of expecting to enjoy it, and if you’ve disliked similar episodes in the past, you’d update negatively. You can zoom out further; what fraction of long-form interview podcasts have you ever enjoyed?

Then you’d look to update whenever new information became available. Do the topics seem interesting? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential?

Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we’d invite him back for a second episode?

Links to learn more, summary and full transcript.

We’ll run through several diverse examples, and how to actually work out the changing probabilities as you update. But that’s only a fraction of the conversation. We also discuss:

* How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
* What do people actually value? How do EAs differ from non EAs?
* Why should we care about the distinction between intrinsic and instrumental values?
* Would hedonic utilitarians really want to hook themselves up to happiness machines?
* What types of activities are people generally under-confident about? Why?
* When should you give a lot of weight to your prior belief?
* When should we trust common sense?
* Does power posing have any effect?
* Are resumes worthless?
* Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
* What’s the probability that China and the US go to war in the 21st century?
* How should we treat claims of expertise on diets?
* Why were Spencer’s friends suspicious of Theranos for years?
* How should we think about the placebo effect?
* Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours podcast is produced by Keiran Harris.
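The updating procedure walked through above can be written out with Bayes' rule in odds form. The 8-out-of-10 prior is from the summary; the likelihood ratios attached to each new piece of evidence are invented for illustration.

```python
# Start from a prior, then multiply the odds by a likelihood ratio for each new observation.
def update(prior_prob, likelihood_ratios):
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr          # >1 means the evidence favours "you'll enjoy the episode"
    return odds / (1 + odds)

prior = 8 / 10              # enjoyed 8 of the last 10 episodes
evidence = [
    2.0,   # the topics sound interesting
    1.5,   # Spencer makes a great point in the first 5 minutes
    0.8,   # the description was unbearably self-referential
]
print(round(update(prior, evidence), 2))   # 0.91
```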
8/7/2018 · 2 hours, 17 minutes, 29 seconds

#38 - Prof Ng on anticipating effective altruism decades ago & how to make a much happier world

Will people who think carefully about how to maximize welfare eventually converge on the same views? The effective altruism community has spent a lot of time over the past 10 years debating how best to increase happiness and reduce suffering, and gradually narrowed in on the world’s poorest people, all animals capable of suffering, and future generations.

Yew-Kwang Ng, Professor of Economics at Nanyang Technological University in Singapore, had been independently working on this exact question since the 70s. Many of his conclusions have ended up foreshadowing what is now conventional wisdom within effective altruism - though other views he holds remain controversial or little-known.

For instance, he thinks we ought to explore increasing pleasure via direct brain stimulation, and that genetic engineering may be an important tool for increasing happiness in the future. His work has suggested that the welfare of most wild animals is on balance negative, and he thinks that in the future this is a problem humanity might work to solve. Yet he thinks that greatly improved conditions for farm animals could eventually justify eating meat.

He has spent most of his life advocating for the view that happiness, broadly construed, is the only intrinsically valuable thing. If it’s true that careful researchers will converge as Prof Ng believes, these ideas may prove as prescient as his other, now widely accepted, opinions.

Link to our summary and appreciation of Kwang’s top publications and insights throughout a lifetime of research.

Kwang has led an exceptional life. While in high school he was drawn to physics, mathematics, and philosophy, yet he chose to study economics because of his dream: to establish communism in an independent Malaya. But events in the Soviet Union and China, in addition to his burgeoning knowledge and academic appreciation of economics, would change his views about the practicability of communism. He would soon complete his journey from young revolutionary to academic economist, and eventually become a columnist writing in support of Deng Xiaoping’s Chinese economic reforms in the 80s.

He got his PhD at Sydney University in 1971, and has since published over 250 refereed papers - covering economics, biology, politics, mathematics, philosophy, psychology, and sociology. He's most well-known for his work in welfare economics, and proposed ‘welfare biology’ as a new field of study. In 2007, he was made a Distinguished Fellow of the Economic Society of Australia, the highest award that the society bestows.

Links to learn more, summary and full transcript.

In this episode we discuss how he developed some of his most unusual ideas and his fascinating life story, including:

* Why Kwang believes that *’Happiness Is Absolute, Universal, Ultimate, Unidimensional, Cardinally Measurable and Interpersonally Comparable’*
* What are the most pressing questions in economics?
* Did Kwang have to worry about censorship from the Chinese government when promoting market economics, or concern for animal welfare?
* Welfare economics and where Kwang thinks it went wrong

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
7/26/2018 · 1 hour, 59 minutes, 29 seconds

#37 - GiveWell picks top charities by estimating the unknowable. James Snowden on how they do it.

What’s the value of preventing the death of a 5-year-old child, compared to a 20-year-old, or an 80-year-old? The global health community has generally regarded the value as proportional to the number of health-adjusted life-years the person has remaining - but GiveWell, one of the world’s foremost charity evaluators, no longer uses that approach. They found that contrary to the ‘years-remaining’ method, many of their staff actually value preventing the death of an adult more than preventing the death of a young child. However there’s plenty of disagreement: the team’s estimates of the relative value span a four-fold range.

As James Snowden - a research consultant at GiveWell - explains in this episode, there’s no way around making these controversial judgement calls based on limited information. If you try to ignore a question like this, you just implicitly take an unreflective stand on it instead. And for each charity they look into there are 1 or 2 dozen of these highly uncertain parameters they need to estimate.

GiveWell has been trying to find better ways to make these decisions since its inception in 2007. Lives hang in the balance, so they want their staff to say what they really believe and bring their private knowledge to the table, rather than just defer to an imaginary consensus.

Their strategy is a massive spreadsheet that lists dozens of things they need to estimate, with every staff member asked to give a figure and justification. Then once a year, the GiveWell team get together and try to identify what they really disagree about and think through what evidence it would take to change their minds.

Full transcript, summary of the conversation and links to learn more.

Often the people who have the greatest familiarity with a particular intervention are the ones who drive the decision, as others defer to them. But the group can also end up with very different figures, based on different prior beliefs about moral issues and how the world works. In that case they then use the median of everyone’s best guess to make their key decisions.

In making his estimate of the relative badness of dying at different ages, James specifically considered two factors: how many years of life do you lose, and how much interest do you have in those future years? Currently, James believes that the worst time for a person to die is around 8 years of age.

We discuss his experiences with such calculations, as well as a range of other topics:

* Why GiveWell’s recommendations have changed more than it looks.
* What are the biggest research priorities for GiveWell at the moment?
* How do you take into account the long-term knock-on effects from interventions?
* If GiveWell's advice were going to end up being very different in a couple years' time, how might that happen?
* Are there any charities that James thinks are really cost-effective which GiveWell hasn't funded yet?
* How does domestic government spending in the developing world compare to effective charities?
* What are the main challenges with policy related interventions?
* How much time do you spend discovering new interventions?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
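A stripped-down sketch of the aggregation step described above follows. The real spreadsheet has dozens of parameters; the parameter name and numbers here are invented.

```python
# Each staff member submits a figure for an uncertain moral parameter; the median is used.
import statistics

# e.g. relative value of averting a death at age 5 versus age 25 (hypothetical answers)
staff_estimates = [0.6, 0.9, 1.1, 1.5, 2.4]   # note the roughly four-fold spread
decision_figure = statistics.median(staff_estimates)
print(decision_figure)   # 1.1 -> the value plugged into the cost-effectiveness model
```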
7/16/2018 · 1 hour, 44 minutes, 6 seconds

#36 - Tanya Singh on ending the operations management bottleneck in effective altruism

Almost nobody is able to do groundbreaking physics research themselves, and by the time his brilliance was appreciated, Einstein was hardly limited by funding. But what if you could find a way to unlock the secrets of the universe like Einstein nonetheless?

Today’s guest, Tanya Singh, sees herself as doing something like that every day. She’s Executive Assistant to one of her intellectual heroes who she believes is making a huge contribution to improving the world: Professor Bostrom at Oxford University's Future of Humanity Institute (FHI).

She couldn’t get more work out of Bostrom with extra donations, as his salary is already easily covered. But with her superior abilities as an Executive Assistant, Tanya frees up hours of his time every week, essentially ‘buying’ more Bostrom in a way nobody else can. She also helps manage FHI more generally, in the process freeing up more than an hour of other staff time for each hour she works. This gives her the leverage to do more good than other people or other positions.

In our previous episode, Tara Mac Aulay objected to viewing operations work as predominately a way of freeing up other people's time: “A good ops person doesn’t just allow you to scale linearly, but also can help figure out bottlenecks and solve problems such that the organization is able to do qualitatively different work, rather than just increase the total quantity”, Tara said.

Full transcript, summary and links to learn more.

Tara’s right that buying time for people at the top of their field is just one path to impact, though it’s one Tanya says she finds highly motivating. Other paths include enabling complex projects that would otherwise be impossible, allowing you to hire and grow much faster, and preventing disasters that could bring down a whole organisation - all things that Tanya does at FHI as well.

In today’s episode we discuss all of those approaches, as we dive deeper into the broad class of roles we refer to as ‘operations management’. We cover the arguments we made in ‘Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

* Does one really need to hire people aligned with an org’s mission to work in ops?
* The most notable operations successes in the 20th Century.
* What’s it like being the only operations person in an org?
* The role of a COO as compared to a CEO, and the options for career progression.
* How do good operation teams allow orgs to scale quickly?
* How much do operations staff get to set their org’s strategy?
* Which personal weaknesses aren’t a huge problem in operations?
* How do you automate processes? Why don’t most people do this?
* Cultural differences between Britain and India where Tanya grew up.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours podcast is produced by Keiran Harris.
7/11/2018 · 2 hours, 4 minutes, 32 seconds

#35 - Tara Mac Aulay on the audacity to fix the world without asking permission

"You don't need permission. You don't need to be allowed to do something that's not in your job description. If you think that it's gonna make your company or your organization more successful and more efficient, you can often just go and do it." How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’. At 15 she took her first job - an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure. That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator. In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious. Full transcript, key quotes and links to learn more. People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face. But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms. We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’, as well as: * Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform. * How a student can save a hospital millions with a simple spreadsheet model. * The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better. * What most people misunderstand about operations, and how to tell if you have what it takes. * And finally, operations jobs people should consider applying for, such as those open now at the Centre for Effective Altruism. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
6/21/2018 · 1 hour, 22 minutes, 34 seconds

Rob Wiblin on the art/science of a high impact career

Today's episode is a cross-post of an interview I did with The Jolly Swagmen Podcast which came out this week. I recommend regular listeners skip to 24 minutes in to avoid hearing things they already know. Later in the episode I talk about my contrarian views, utilitarianism, how 80,000 Hours has changed and will change in the future, where I think EA is performing worst, how to use social media most effectively, and whether or not effective altruism is any sacrifice.

Subscribe and get the episode by searching for '80,000 Hours' in your podcasting app.

Blog post of the episode to share, including a list of topics and links to learn more.

"Most people want to help others with their career, but what’s the best way to do that? Become a doctor? A politician? Work at a non-profit? How can any of us figure out the best way to use our skills to improve the world?

Rob Wiblin is the Director of Research at 80,000 Hours, an organisation founded in Oxford in 2011, which aims to answer just this question and help talented people find their highest-impact career path. He hosts a popular podcast on ‘the world’s most pressing problems and how you can use your career to solve them’.

After seven years of research, the 80,000 Hours team recommends against becoming a teacher, or a doctor, or working at most non-profits. And they claim their research shows some common careers do 10 or 100x as much good as others.

80,000 Hours was one of the organisations that kicked off the effective altruism movement, was a Y Combinator-backed non-profit, and has already shifted over 80 million career hours through its advice.

Joe caught up with Rob in Berkeley, California, to discuss how 80,000 Hours assesses which of the world’s problems are most pressing, how you can build career capital and succeed in any role, and why you could easily save more lives than a doctor - if you think carefully about your impact."

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
6/8/2018 · 1 hour, 31 minutes, 34 seconds

#34 - We use the worst voting system that exists. Here's how Aaron Hamlin is going to fix it.

In 1991 Edwin Edwards won the Louisiana gubernatorial election. In 2001, he was found guilty of racketeering and received a 10 year invitation to Federal prison. The strange thing about that election? By 1991 Edwards was already notorious for his corruption. Actually, that’s not it. The truly strange thing is that Edwards was clearly the good guy in the race. How is that possible? His opponent was former Ku Klux Klan Grand Wizard David Duke.

How could Louisiana end up having to choose between a criminal and a Nazi sympathiser? It’s not like they lacked other options: the state’s moderate incumbent governor Buddy Roemer ran for re-election. Polling showed that Roemer was massively preferred to both the career criminal and the career bigot, and would easily win a head-to-head election against either.

Unfortunately, in Louisiana every candidate from every party competes in the first round, and the top two then go on to a second - a so-called ‘jungle primary’. Vote splitting squeezed out the middle, and meant that Roemer was eliminated in the first round. Louisiana voters were left with only terrible options, in a run-off election mostly remembered for the proliferation of bumper stickers reading “Vote for the Crook. It’s Important.”

We could look at this as a cultural problem, exposing widespread enthusiasm for bribery and racism that will take generations to overcome. But according to Aaron Hamlin, Executive Director of The Center for Election Science (CES), there’s a simple way to make sure we never have to elect someone hated by more than half the electorate: change how we vote.

He advocates an alternative voting method called approval voting, in which you can vote for as many candidates as you want, not just one. That means that you can always support your honest favorite candidate, even when an election seems like a choice between the lesser of two evils.

Full transcript, links to learn more, and summary of key points.

If you'd like to meet Aaron he's doing events for CES in San Francisco, DC, Philadelphia, New York and Brooklyn over the next two weeks - RSVP here.

While it might not seem sexy, this single change could transform politics. Approval voting is adored by voting researchers, who regard it as the best simple voting system available. Which do they regard as unquestionably the worst? First-past-the-post - precisely the disastrous system used and exported around the world by the US and UK.

Aaron has a practical plan to spread approval voting across the US using ballot initiatives - and it just might be our best shot at making politics a bit less unreasonable.

The Center for Election Science is a U.S. non-profit which aims to fix broken government by helping the world adopt smarter election systems. They recently received a $600,000 grant from the Open Philanthropy Project to scale up their efforts.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
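To make the mechanics concrete, here is a toy approval-voting tally. The ballot counts are invented (only loosely inspired by the 1991 race), but they show the key property: because voters can approve several candidates, a broadly acceptable moderate is no longer squeezed out by vote splitting.

```python
# Approval voting: each ballot is the set of candidates the voter approves of;
# the candidate approved on the most ballots wins.
from collections import Counter

ballots = (
    [{"Edwards", "Roemer"}] * 34 +   # anti-Duke voters who also accept the moderate
    [{"Duke", "Roemer"}]    * 14 +   # some Duke voters who could live with Roemer
    [{"Edwards"}]           * 14 +
    [{"Duke"}]              * 18 +
    [{"Roemer"}]            * 20
)

approval_totals = Counter(name for ballot in ballots for name in ballot)
print(approval_totals.most_common())
# [('Roemer', 68), ('Edwards', 48), ('Duke', 32)] -- the candidate acceptable to the most
# voters wins, even though only 20 of these ballots approve of him alone.
```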
6/1/2018 · 2 hours, 18 minutes, 30 seconds

#33 - Dr Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded?

According to our last guest, Bryan Caplan, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

Full transcript of the conversation, summary, and links to learn more.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

***Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.***

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

* Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
* How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
* If biomedical research lets us slow down ageing would culture stagnate under the crushing weight of centenarians?
* What long-shot drugs can people take in their 70s to stave off death?
* Can science extend human (waking) life by cutting our need to sleep?
* How bad would it be if a solar flare took down the electricity grid? Could it happen?
* If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
* Will lifelike robots make us more inclined to dehumanise one another?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
5/29/2018 · 1 hour, 24 minutes, 53 seconds

#32 - Bryan Caplan on whether his Case Against Education holds up, totalitarianism, & open borders

Bryan Caplan’s claim in *The Case Against Education* is striking: education doesn’t teach people much, we use little of what we learn, and college is mostly about trying to seem smarter than other people - so the government should slash education funding.

It’s a dismaying - almost profane - idea, and one people are inclined to dismiss out of hand. But having read the book, I have to admit that Bryan can point to a surprising amount of evidence in his favour.

After all, imagine this dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which is the bigger benefit of college - learning or convincing people you’re smart? It’s not so easy to say.

For this interview, I searched for the best counterarguments I could find and challenged Bryan on what seem like his weakest or most controversial claims.

Wouldn’t defunding education be especially bad for capable but low income students? If you reduced funding for education, wouldn’t that just lower prices, and not actually change the number of years people study? Is it really true that students who drop out in their final year of college earn about the same as people who never go to college at all? What about studies that show that extra years of education boost IQ scores? And surely the early years of primary school, when you learn reading and arithmetic, *are* useful even if college isn’t.

I then get his advice on who should study, what they should study, and where they should study, if he’s right that college is mostly about separating yourself from the pack.

Full transcript, links to learn more, and summary of key points.

We then venture into some of Bryan’s other unorthodox views - like that immigration restrictions are a human rights violation, or that we should worry about the risk of global totalitarianism.

Bryan is a Professor of Economics at George Mason University, and a blogger at *EconLog*. He is also the author of *Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think*, and *The Myth of the Rational Voter: Why Democracies Choose Bad Policies*.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.

In this lengthy interview, Rob and Bryan cover:

* How worried should we be about China’s new citizen ranking system as a means of authoritarian rule?
* How will advances in surveillance technology impact a government’s ability to rule absolutely?
* Does more global coordination make us safer, or more at risk?
* Should the push for open borders be a major cause area for effective altruism?
* Are immigration restrictions a human rights violation?
* Why aren’t libertarian-minded people more focused on modern slavery?
* Should altruists work on criminal justice reform or reducing land use regulations?
* What’s the greatest art form: opera, or Nicki Minaj?
* What are the main implications of Bryan’s thesis for society?
* Is elementary school more valuable than university?
* What does Bryan think are the best arguments against his view?
* Do years of education affect political affiliation?
* How do people really improve themselves and their circumstances?
* Who should and who shouldn’t do a masters or PhD?
* The value of teaching foreign languages in school
* Are there some skills people can develop that have wide applicability?

Get this episode by subscribing: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
5/22/2018 · 2 hours, 25 minutes, 12 seconds

#31 - Prof Dafoe on defusing the political & economic risks posed by existing AI capabilities

The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ - a general intellect that is much smarter than the best humans, in practically every field. But according to Allan Dafoe - Assistant Professor of Political Science at Yale University - even if we stopped at today's AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

* Mass labor displacement, unemployment, and inequality;
* The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
* Imagery intelligence and other mechanisms for revealing most of the ballistic missile-carrying submarines that countries rely on to be able to respond to nuclear attack;
* Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
* Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Allan is Co-Director of the Governance of AI Program, at the Future of Humanity Institute within Oxford University. His goals have been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation.

Full transcript, links to learn more, and summary of key points.

His current focus is helping humanity safely navigate the invention of advanced artificial intelligence. I ask Allan:

* What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
* Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?
* How can AI be well-governed?
* How should we think about the idea of arms races between companies or countries?
* What would you say to people skeptical about the importance of this topic?
* How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
* What are the most urgent questions to deal with in this field?
* What can people do if they want to get into the field?
* Is there anything unusual that people can look for in themselves to tell if they're a good fit to do this kind of research?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
5/18/2018 · 48 minutes, 7 seconds

#30 - Dr Eva Vivalt on how little social science findings generalize from one study to another

If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development - including 15,024 estimates from 635 papers across 20 types of intervention - to help answer this question. Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if all existing studies of a particular education program find that it improves test scores by 10 points - the next result is as likely to be negative or greater than 20 points, as it is to be between 0-20 points.

She also observed that results from smaller studies done with an NGO - often pilot studies - were more likely to look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.

For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting. Is ‘evidence-based development’ writing a cheque its methodology can’t cash?

Should this make us invest less in empirical research, or more to get actually reliable results? Or as some critics say, is interest in impact evaluation distracting us from more important issues, like national or macroeconomic reforms that can’t be easily trialled?

We discuss this as well as Eva’s other research, including Y Combinator’s basic income study where she is a principal investigator.

Full transcript, links to related papers, and highlights from the conversation.

Links mentioned at the start of the show:

* 80,000 Hours Job Board
* 2018 Effective Altruism Survey

**Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.**

Questions include:

* What is the YC basic income study looking at, and what motivates it?
* How do we get people to accept clean meat?
* How much can we generalize from impact evaluations?
* How much can we generalize from studies in development economics?
* Should we be running more or fewer studies?
* Do most social programs work or not?
* The academic incentives around data aggregation
* How much can impact evaluations inform policy decisions?
* How often do people change their minds?
* Do policy makers update too much or too little in the real world?
* How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
* How often should we believe positive results?
* What’s the state of development economics?
* Eva’s thoughts on our article on social interventions
* How much can we really learn from being empirical?
* How much should we really value RCTs?
* Is an Economics PhD overrated or underrated?

Get this episode by subscribing to our podcast: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
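Here is an illustrative version of the comparison behind that 'almost 100%' figure. It is my own construction from the description above, not Eva's actual code or data: for each study, measure how far its effect size sits from the average of the other studies of the same intervention.

```python
# For each study, compare its effect to the mean of the other studies of the same
# intervention, expressed as a fraction of that mean.
import numpy as np

# hypothetical standardised effect sizes from six studies of one education programme
effects = np.array([0.10, 0.25, -0.05, 0.18, 0.02, 0.30])

relative_gaps = []
for i, effect in enumerate(effects):
    mean_of_others = np.delete(effects, i).mean()
    relative_gaps.append(abs(effect - mean_of_others) / abs(mean_of_others))

print(f"typical deviation: {100 * np.median(relative_gaps):.0f}%")
# near (or above) 100% means past studies tell you surprisingly little about the next one
```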
5/15/2018 · 2 hours, 1 minute, 28 seconds

#29 - Dr Anders Sandberg on 3 new resolutions for the Fermi paradox & how to colonise the universe

Part 2 out now: #33 - Dr Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason. Because of the thermodynamics of computation, the colder it gets, the more computations you can do. The universe is getting exponentially colder as it expands, and as the universe cools, one Joule of energy becomes worth more and more. If they wait long enough, this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So, if a civilization wanted to maximize its ability to perform computations – its best option might be to lie in wait for trillions of years. Why would a civilization want to maximise the number of computations they can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long, in subjective time. Waiting will allow them to generate vastly more science, art, pleasure, or almost anything else they are likely to care about. Full transcript, related links, and key quotes. But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us. It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species. This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicting the capabilities of future technologies and very long-range futures for humanity. In this incredibly fun conversation we cover this and other possible explanations of the Fermi paradox, as well as questions like: * Should we want optimists or pessimists working on our most important problems? * How should we reason about low probability, high impact risks? * Would a galactic civilization want to stop the stars from burning? * What would be the best strategy for exploring and colonising the universe? * How can you stay coordinated when you’re spread across different galaxies? * What should humanity decide to do with its future? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
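The claim that a Joule buys more computation in a colder universe follows from Landauer's principle, which puts the minimum energy for erasing one bit at kT·ln 2, so the erasures one Joule can buy scale as 1/T. The sketch below only illustrates the scale of the possible gain; the far-future temperature used is an illustrative assumption rather than a figure from the episode.

```python
import math

# Rough sketch of the Landauer-limit argument behind the aestivation
# hypothesis. The far-future temperature below is an illustrative assumption.

k_B = 1.380649e-23          # Boltzmann constant, J/K
T_now = 2.7                 # cosmic microwave background temperature today, K
T_far_future = 2.7e-31      # assumed temperature after waiting a very long time, K

def bit_erasures_per_joule(temperature_k):
    """Landauer bound: maximum irreversible bit erasures one Joule can buy."""
    return 1.0 / (k_B * temperature_k * math.log(2))

gain = bit_erasures_per_joule(T_far_future) / bit_erasures_per_joule(T_now)
print(f"Waiting multiplies computations per Joule by ~{gain:.0e}")  # ~1e+31
```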
5/8/2018 · 1 hour, 21 minutes, 26 seconds

#28 - Dr Cotton-Barratt on why scientists should need insurance, PhD strategy & fast AI progresses

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll? In a new paper Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s impossible to expect them to make the correct adjustments. If they have an accident that kills 5 people – they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way. So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance. Links to learn more, summary and full transcript. Once an insurer assesses how much damage a particular project is expected to cause and with what likelihood – in order to proceed, the researcher would need to take out insurance against the predicted risk. In return, the insurer promises that they’ll pay out – potentially tens of billions of dollars – if things go really badly. This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards on a level that individual researchers can’t be expected to impose themselves. ***Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.*** Owen is currently hiring for a selective, two-year research scholars programme at Oxford. In this wide-ranging conversation Owen and I also discuss: * Are academics wrong to value personal interest in a topic over its importance? * What fraction of research has very large potential negative consequences? * Why do we have such different reactions to situations where the risks are known and unknown? * The downsides of waiting for tenure to do the work you think is most important. * What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly? * How should people balance the trade-offs between having a successful career and doing the most important work? * Are there any blind alleys we’ve gone down when thinking about AI safety? * Why did Owen give to an organisation whose research agenda he is skeptical of? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
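As a rough illustration of the pricing logic, an insurer would charge at least the expected payout (probability of an accident times the damages), plus a margin for its own costs. The sketch below uses entirely hypothetical numbers and a made-up loading factor; it is not a model from Cotton-Barratt's paper.

```python
# A minimal sketch of research liability insurance pricing: the premium must
# at least cover the expected payout. All numbers are illustrative assumptions.

def minimum_premium(annual_accident_probability, damages_if_accident, loading=1.2):
    """Actuarially fair premium, scaled by a loading factor for the insurer's
    administrative costs and risk margin (assumed here to be 20%)."""
    return annual_accident_probability * damages_if_accident * loading

# A lab whose work carries a 1-in-a-million annual chance of causing
# $10 billion in damages would face a premium of roughly $12,000 per year.
print(minimum_premium(1e-6, 10e9))  # 12000.0
```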
4/27/2018 · 1 hour, 3 minutes, 5 seconds

#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good, in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits. If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block. Dr Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’. In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US. Links to learn more, job opportunities, and full transcript. But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help: http://www.nti.org/analysis/articles/protect-us-investments-global-health-security/ ) But there are positive signs as well - the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further their work preventing global catastrophes. It also runs the [Emerging Leaders in Biosecurity Fellowship](http://www.centerforhealthsecurity.org/our-work/emergingbioleaders/) to train the next generation of biosecurity experts for the US government. And Inglesby regularly testifies to Congress on the threats we all face and how to address them. In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security. Some of the topics we cover include: * Should more people in medicine work on security? * What are the top jobs for people who want to improve health security and how do they work towards getting them? * What people can do to protect funding for the Global Health Security Agenda. * Should we be more concerned about natural or human-caused pandemics? Which is more neglected? * Should we be allocating more attention and resources to global catastrophic risk scenarios? * Why are senior figures reluctant to prioritize one project or area at the expense of another? * What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures? * Are the main risks and solutions understood, and is it just a matter of implementation? Or is the principal task to identify and understand them? * How is the current US government performing in these areas? * Which agencies are empowered to think about low probability high magnitude events? And more...
Get this episode by subscribing: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
4/18/2018 · 2 hours, 16 minutes, 40 seconds

#26 - Marie Gibbons on how exactly clean meat is made & what's needed to get it in every supermarket

First, decide on the type of animal. Next, pick the cell type. Then take a small, painless biopsy, and put the cells in a solution that makes them feel like they’re still in the body. Once the cells are in this comfortable state, they'll proliferate. One cell becomes two, two becomes four, four becomes eight, and so on. Continue until you have enough cells to make a burger, a nugget, a sausage, or a piece of bacon, then concentrate them until they bind into solid meat. It's all surprisingly straightforward in principle according to Marie Gibbons​, a research fellow with The Good Food Institute, who has been researching how to improve this process at Harvard Medical School. We might even see clean meat sold commercially within a year. The real technical challenge is developing large bioreactors and cheap solutions so that we can make huge volumes and drive down costs. This interview covers the science and technology involved at each stage of clean meat production, the challenges and opportunities that face cutting-edge researchers like Marie, and how you could become one of them. Full transcript, key points, and links to learn more. Marie’s research focuses on turkey cells. But as she explains, with clean meat the possibilities extend well beyond those of traditional meat. Chicken, cow, pig, but also panda - and even dinosaurs could be on the menus of the future. Today’s episode is hosted by Natalie Cargill, a barrister in London with a background in animal advocacy. Natalie and Marie also discuss: * Why Marie switched from being a vet to developing clean meat * For people who want to dedicate themselves to animal welfare, how does working in clean meat fare compared to other career options? How can people get jobs in the area? * How did this become an established field? * How important is the choice of animal species and cell type in this process? * What are the biggest problems with current production methods? * Is this kind of research best done in an academic setting, a commercial setting, or a balance between the two? * How easy will it be to get consumer acceptance? * How valuable would extra funding be for cellular agriculture? * Can we use genetic modification to speed up the process? * Is it reasonable to be sceptical of the possibility of clean meat becoming financially competitive with traditional meat any time in the near future? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
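The doubling arithmetic above is easy to make concrete: starting from a single cell, the number of doublings needed grows only logarithmically with the amount of meat you want. The cell mass and burger size in the sketch below are rough illustrative assumptions, not figures from the episode.

```python
import math

# Back-of-the-envelope sketch of the cell-doubling arithmetic described above.
# Both constants are illustrative assumptions.

cell_mass_g = 1e-9        # assume roughly 1 nanogram per muscle cell
burger_mass_g = 100.0     # assume a 100 g burger

cells_needed = burger_mass_g / cell_mass_g
doublings = math.ceil(math.log2(cells_needed))
print(f"{cells_needed:.0e} cells -> about {doublings} doublings from a single cell")
# ~1e+11 cells -> about 37 doublings
```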
4/10/2018 · 1 hour, 44 minutes, 16 seconds

#25 - Prof Robin Hanson on why we have to lie to ourselves about why we do what we do

On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately, his physicians were the best of the best. To reassure the public, they kept them abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week. Why did the doctors go this far? Prof Robin Hanson, Associate Professor of Economics at George Mason University, suspects that on top of any medical beliefs they also had a hidden motive: it needed to be clear, to the king and the public, that the physicians cared enormously about saving His Royal Majesty. Only by going ‘all out’ would they be protected against accusations of negligence should the King die. Full transcript, summary, and links to articles discussed in the show. If you believe Hanson, the same desire to be seen to care about our family and friends explains much of what’s perverse about our medical system today. And not just medicine - Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students and our politics are about choosing wise policies. So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others. Robin is a polymath economist, who has come up with surprising and novel insights in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, *The Elephant in the Brain: Hidden Motives in Everyday Life*, but also: * What was it like being part of a competitor group to the ‘World Wide Web’, and being beaten to the post? * If people aren’t going to school to learn, what’s education all about? * What split brain patients tell us about our ability to justify anything * The hidden motivations that shape religions * Why we choose the friends we do * Why is our attitude to medicine mysterious? * What would it look like if people were focused on doing as much good as possible? * Are we better off donating now, when we’re older, or even waiting until well after our deaths? * How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible? * What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing? * And much more...
3/28/2018 · 2 hours, 39 minutes, 19 seconds

#24 - Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or should we attempt to set a new norm for ourselves? Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals. Summary, related links and full transcript. In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better. But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour. In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ - including where it’s going right and where it’s going wrong. Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford. We discuss: * Should we trust our own judgement more than others’? * How hard is it to improve political discourse? * What should we make of well-respected academics writing articles that seem to be completely misinformed? * How is effective altruism (EA) changing? What might it be doing wrong? * How has Stefan’s view of EA changed? * Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics? * How much should we cooperate with those with whom we have disagreements? * What good reasons are there to be inconsiderate? * Should effective altruism potentially focus on a narrower range of problems? *The 80,000 Hours podcast is produced by Keiran Harris.* **If you subscribe to our podcast, you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts.**
3/20/2018 · 55 minutes, 1 second

#23 - How to actually become an AI alignment researcher, according to Dr Jan Leike

Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at least one paper before you apply for a PhD. Find a supervisor who’ll have a lot of time for you. Go to the top conferences and meet your future colleagues. And finally, get yourself hired. That’s Dr Jan Leike’s advice on how to join him as a Research Scientist at DeepMind, the world’s leading AI team. Jan is also a Research Associate at the Future of Humanity Institute at the University of Oxford, and his research aims to make machine learning robustly beneficial. His current focus is getting AI systems to learn good ‘objective functions’ in cases where we can’t easily specify the outcome we actually want. Full transcript, summary and links to learn more. How might you know you’re a good fit for research? Jan says to check whether you get obsessed with puzzles and problems, and find yourself mulling over questions that nobody knows the answer to. To do research in a team you also have to be good at clearly and concisely explaining your new ideas to other people. We also discuss: * Where Jan's views differ from those expressed by Dario Amodei in episode 3 * Why is AGI safety one of the world’s most pressing problems? * Common misconceptions about AI * What are some of the specific things DeepMind is researching? * The ways in which today’s AI systems can fail * What are the best techniques available today for teaching an AI the right objective function? * What’s it like to have some of the world’s greatest minds as coworkers? * Who should do empirical research and who should do theoretical research * What’s the DeepMind application process like? * The importance of researchers being comfortable with the unknown. *The 80,000 Hours Podcast is produced by Keiran Harris.*
3/16/2018 · 45 minutes, 23 seconds

#22 - Dr Leah Utyasheva on the non-profit that figured out how to massively cut suicide rates

How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high-rise buildings, jumping from a height is more common. And in some countries in Asia and Africa with many poor agricultural communities, the leading means is drinking pesticide. There’s a good chance you’ve never heard of this issue before. And yet, of the 800,000 people who kill themselves globally each year, 20% die from pesticide self-poisoning. Full transcript, summary and links to articles discussed in today's show. Research suggests most people who try to kill themselves with pesticides reflect on the decision for less than 30 minutes, and that less than 10% of those who don't die the first time around will try again. Unfortunately, the fatality rate from pesticide ingestion is 40% to 70%. Having such dangerous chemicals near people's homes is therefore an enormous public health issue not only for the direct victims, but also the partners and children they leave behind. Fortunately, researchers like Dr Leah Utyasheva have figured out a very cheap way to massively reduce pesticide suicide rates. In this episode, Leah and I discuss: * How do you prevent pesticide suicide and what’s the evidence it works? * How do you know that most people attempting suicide don’t want to die? * What types of events are causing people to have the crises that lead to attempted suicide? * How much money does it cost to save a life in this way? * How do you estimate the probability of getting law reform passed in a particular country? * Have you generally found politicians to be sympathetic to the idea of banning these pesticides? What are their greatest reservations? * The comparison of getting policy change rather than helping person-by-person * The importance of working with locals in places like India and Nepal, rather than coming in exclusively as outsiders * What are the benefits of starting your own non-profit versus joining an existing org and persuading them of the merits of the cause? * Would Leah in general recommend starting a new charity? Is it more exciting than it is scary? * Is it important to have an academic leading this kind of work? * How did The Centre for Pesticide Suicide Prevention get seed funding? * How does the value of saving a life from suicide compare to saving someone from malaria? * Leah’s political campaigning for the rights of vulnerable groups in Eastern Europe * What are the biggest downsides of human rights work?
3/7/2018 · 1 hour, 8 minutes, 3 seconds

#21 - Holden Karnofsky on times philanthropy transformed the world & Open Phil’s plan to do the same

The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of a philanthropist willing to take a risky bet on a new idea. Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of the Open Philanthropy Project, which gives away over $100 million per year - and he’s hungry for big wins. Full transcript, related links, job opportunities and summary of the interview. In the 1940s, poverty reduction overseas was not a big priority for many. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world - thereby massively increasing their food production. In the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katherine McCormick. In both cases, it was philanthropists rather than governments that led the way. The reason, according to Holden, is that while governments have enormous resources, they’re constrained by only being able to fund reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves - but to seize that opportunity they have to hire outstanding researchers, think long-term and be willing to fail most of the time. Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then the Open Philanthropy Project, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism. We’ve recorded this episode now because [the Open Philanthropy Project is hiring](https://www.openphilanthropy.org/get-involved/jobs) for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, 3 specialist researchers into potential risks from advanced artificial intelligence, as well as a Director of Operations, Operations Associate and General Counsel. But the conversation goes well beyond specifics about these jobs. We also discuss: * How did they pick the problems they focus on, and how will they change over time? * What would Holden do differently if he were starting Open Phil again today? * What can we learn from the history of philanthropy? * What makes a good Program Officer. * The importance of not letting hype get ahead of the science in an emerging field. * The importance of honest feedback for philanthropists, and the difficulty getting it. * How do they decide what’s above the bar to fund, and when it’s better to hold onto the money? * How philanthropic funding can most influence politics. * What Holden would say to a new billionaire who wanted to give away most of their wealth. * Why Open Phil is building a research field around the safe development of artificial intelligence * Why they invested in OpenAI. * Academia’s faulty approach to answering practical questions. 
* What potential utopias do people most want, according to opinion polls? Keiran Harris helped produce today’s episode.
2/27/2018 · 2 hours, 35 minutes, 35 seconds

#20 - Bruce Friedrich on inventing outstanding meat substitutes to end speciesism & factory farming

Before the US Civil War, it was easier for the North to morally oppose slavery. Why? Because unlike the South they weren’t profiting much from its existence. The fight for abolition was partly won because many no longer saw themselves as having a selfish stake in its continuation. Bruce Friedrich, executive director of The Good Food Institute (GFI), thinks the same may be true in the fight against speciesism. 98% of people currently eat meat. But if eating meat stops being part of most people’s daily lives -- it should be a lot easier to convince them that farming practices are just as cruel as they look, and that the suffering of these animals really matters. Full transcript, related links, job opportunities and summary of the interview. That’s why GFI is “working with scientists, investors, and entrepreneurs” to create plant-based meat, dairy and eggs as well as clean meat alternatives to animal products. In 2016, Animal Charity Evaluators named GFI one of its recommended charities. In this interview I’m joined by my colleague Natalie Cargill, and we ask Bruce about: * What’s the best meat replacement product out there right now? * How effective is meat substitute research for people who want to reduce animal suffering as much as possible? * When will we get our hands on clean meat? And why does Bruce call it clean meat, rather than in vitro meat or cultured meat? * What are the challenges of producing something structurally identical to meat? * Can clean meat be healthier than conventional meat? * Do plant-based alternatives have a better shot at success than clean meat? * Is there a concern that, even if the product is perfect, people still won’t eat it? Why might that happen? * What’s it like being a vegan in a family made up largely of hunters and meat-eaters? * What kind of pushback should be expected from the meat industry? Keiran Harris helped produce today’s episode.
2/19/2018 · 1 hour, 17 minutes, 59 seconds

#19 - Samantha Pitts-Kiefer on working next to the White House trying to prevent nuclear war

Rogue elements within a state’s security forces enrich dozens of kilograms of uranium. It’s then assembled into a crude nuclear bomb. The bomb is transported on a civilian aircraft to Washington D.C., and loaded onto a delivery truck. The truck is driven by an American citizen to a point midway between the White House and the Capitol Building. The driver casually steps out of the vehicle, and detonates the weapon. There are more than 80,000 instant deaths. There are also at least 100,000 seriously wounded, with nowhere left to treat them. Full blog post about this episode, including a transcript, summary and links to resources mentioned in the show. It’s likely that one of those immediately killed would be Samantha Pitts-Kiefer, who works only one block away from the White House. Samantha serves as Senior Director of The Global Nuclear Policy Program at the Nuclear Threat Initiative, and warns that the chances of a nuclear terrorist attack are alarmingly high. Terrorist groups have expressed a desire for nuclear weapons, and the material required to build those weapons is scattered throughout the world at a diverse range of sites – some of which lack the necessary security. When you combine the massive death toll with the accompanying social panic and economic disruption – the consequences of a nuclear 9/11 are almost unthinkable. And yet, Samantha reminds us – we must confront the possibility. Clearly, this is far from the only nuclear nightmare. We also discuss: * In the case of nuclear war, what fraction of the world's population would die? * What is the biggest nuclear threat? * How concerned should we be about North Korea? * How often has the world experienced nuclear near misses? * How might a conflict between India and Pakistan escalate to the nuclear level? * How quickly must a president make a decision in the event of a suspected first strike? * Are global sources of nuclear material safely secured? * What role does cyber security have in preventing nuclear disasters? * How can we improve relations between nuclear-armed states? * What do you think about the campaign for complete nuclear disarmament? * If you could tell the US government to do three things, what are the key priorities today? * Is it practical to get members of congress to pay attention to nuclear risks? * Could modernisation of nuclear weapons actually make the world safer?
2/14/2018 · 1 hour, 4 minutes, 29 seconds

#18 - Ofir Reich on using data science to end poverty & the spurious action-inaction distinction

Ofir Reich started out doing math in the military, before spending 8 years in tech startups - but then made a sharp turn to become a data scientist focussed on helping the global poor. At UC Berkeley’s Center for Effective Global Action he helps prevent tax evasion by identifying fake companies in India, enable Afghanistan to pay its teachers electronically, and raise yields for Ethiopian farmers by messaging them when local conditions make it ideal to apply fertiliser. Or at least that’s the hope - he’s also working on ways to test whether those interventions actually work. Full post about this episode, including a transcript and relevant links to learn more. Why dedicate his life to helping the global poor? Ofir sees little moral difference between harming people and failing to help them. After all, if you had to press a button to keep all of your money from going to charity, and you pressed that button, would that be an action, or an inaction? Is there even an answer? After reflecting on cases like this, he decided that to not engage with a problem is an active choice, one whose consequences he is just as morally responsible for as if he were directly involved. On top of his life philosophy we also discuss: * The benefits of working in a top academic environment * How best to start a career in global development * Are RCTs worth the money? Should we focus on big picture policy change instead? Or more economic theory? * How the delivery standards of nonprofits compare to top universities * Why he doesn’t enjoy living in the San Francisco bay area * How can we fix the problem of most published research being false? * How good a career path is data science? * How important is experience in development versus technical skills? * How he learned much of what he needed to know in the army * How concerned should effective altruists be about burnout? Keiran Harris helped produce today’s episode.
1/31/2018 · 1 hour, 18 minutes, 48 seconds

#17 - Prof Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races. Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed? Full transcript, key points and links to articles and career guides discussed in the show. If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide. Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics: * How would we go about a ‘long reflection’ to fix our moral errors? * Will’s forthcoming book on how you should reason and act if you don't know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’? * If we basically solve existential risks, what does humanity do next? * What are some of Will’s most unusual philosophical positions? * What are the best arguments for and against utilitarianism? * Given disagreements among philosophers, how much should we believe the findings of philosophy as a field? * What are some of the biases we should be aware of within academia? * What are some of the downsides of becoming a professor? * What are the merits of becoming a philosopher? * How does the media image of EA differ from the actual goals of the community? * What kinds of things would you like to see the EA community do differently? * How much should we explore potentially controversial ideas? * How focused should we be on diversity? * What are the best arguments against effective altruism? Get free, one-on-one career advice We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.
1/19/2018 · 1 hour, 52 minutes, 13 seconds

#16 - Dr Hutchinson on global priorities research & shaping the ideas of intellectuals

In the 40s and 50s neoliberalism was a fringe movement within economics. But by the 80s it had become a dominant school of thought in public policy, and achieved major policy changes across the English speaking world. How did this happen? In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact building a strong intellectual base within universities can have. Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is currently hiring for three roles, aims to bring together outstanding philosophers and economists to research how to most improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly. Link to the full blog post about this episode including transcript and links to learn more Its research agenda includes questions like: * How do we compare the good done by focussing on really different types of causes? * How does saving lives actually affect the world relative to other things we could do? * What are the biggest wins governments should be focussed on getting? Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritization and global health. We discuss: * What is global priorities research and why does it matter? * How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them? * Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing? * How hard is it to do something innovative inside a university? How serious are the administrative and other barriers? * Is it harder to fundraise for a new institute, or hire the right people? * Have other social movements benefitted from having a prominent academic arm? * How can people prepare themselves to get research roles at a place like GPI? * Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren’t do instead? * What are the odds of the Institute’s work having an effect on the real world? Get free, one-on-one career advice We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.
12/22/2017 · 55 minutes

#15 - Prof Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can’t and why - and developed methods that allow all of us to be better at predicting the future. After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work – Superforecasting – was a media sensation in 2015. Full transcript, brief summary, apply for coaching and links to learn more. It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information. Today he’s working to develop the best forecasting process ever, by combining top human and machine intelligence in the Hybrid Forecasting Competition, which you can sign up and participate in. We start by describing his key findings, and then push to the edge of what is known about how to foresee the unforeseeable: * Should people who want to be right just adopt the views of experts rather than apply their own judgement? * Why are Berkeley undergrads worse forecasters than dart-throwing chimps? * Should I keep my political views secret, so it will be easier to change them later? * How can listeners contribute to his latest cutting-edge research? * What do we know about our accuracy at predicting low-probability high-impact disasters? * Does his research provide an intellectual basis for populist political movements? * Was the Iraq War caused by bad politics, or bad intelligence methods? * What can we learn about forecasting from the 2016 election? * Can experience help people avoid overconfidence and underconfidence? * When does an AI easily beat human judgement? * Could more accurate forecasting methods make the world more dangerous? * How much does demographic diversity line up with cognitive diversity? * What are the odds we’ll go to war with China? * Should we let prediction tournaments run most of the government? Listen to it. Get free, one-on-one career advice. Want to work on important social science research like Tetlock? We’ve helped hundreds of people compare their options and get introductions. Find out if our coaching can help you.
11/20/2017 · 1 hour, 23 minutes, 59 seconds

#14 - Sharon Nunez & Jose Valle on going undercover to expose animal abuse

What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime? That’s a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nunez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million-dollar international animal rights organisation. They’ve been chosen as one of the most effective animal protection orgs in the world by Animal Charity Evaluators for the last 3 consecutive years. Blog post about the episode, including links and full transcript. A related previous episode, strongly recommended: Lewis Bollard on how to end factory farming as soon as possible. In addition to undercover investigations, AE has also designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage – in a room designed to kill them – and can’t just look away. How big an impact is this having on users? Sharon Nuñez and Jose Valle also tackle: * How do they track their goals and metrics week to week? * How much does an undercover investigation cost? * Why don’t people donate more to help factory-farmed animals, given that they’re the vast majority of animals harmed directly by humans? * How risky is it to attempt to build a career in animal advocacy? * What led to a change in their focus from bullfighting in Spain to animal farming? * How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians? * Has their very rapid growth been difficult to handle? * What should our listeners study or do if they want to work in this area? * How can we get across the message that horrific cases are a feature - not a bug - of factory farming? * Do the owners or workers of factory farms ever express shame at what they do?
11/13/2017 · 1 hour, 25 minutes, 56 seconds

#13 - Claire Walsh on testing which policies work & how to get governments to listen to the results

In both rich and poor countries, government policy is often based on no evidence at all and many programs don’t work. This has particularly harsh effects on the global poor - in some countries governments only spend $100 on each citizen a year so they can’t afford to waste a single dollar. Enter MIT’s Poverty Action Lab (J-PAL). Since 2003 they’ve conducted experiments to figure out what policies actually help recipients, and then tried to get them implemented by governments and non-profits. Claire Walsh leads J-PAL’s Government Partnership Initiative, which works to evaluate policies and programs in collaboration with developing world governments, scale policies that have been shown to work, and generally promote a culture of evidence-based policymaking. Summary, links to career opportunities and topics discussed in the show. We discussed (her views only, not J-PAL’s): * How can they get evidence backed policies adopted? Do politicians in the developing world even care whether their programs actually work? Is the norm evidence-based policy, or policy-based evidence? * Is evidence-based policy an evidence-based strategy itself? * Which policies does she think would have a particularly large impact on human welfare relative to their cost? * How did she come to lead one of J-PAL’s departments at 29? * How do you evaluate the effectiveness of energy and environment programs (Walsh’s area of expertise), and what are the standout approaches in that area? * 80,000 Hours has warned people about the downsides of starting your career in a non-profit. Walsh started her career in a non-profit and has thrived, so are we making a mistake? * Other than J-PAL, what are the best places to work in development? What are the best subjects to study? Where can you go network to break into the sector? * Is living in poverty as bad as we think? And plenty of other things besides. We haven’t run an RCT to test whether this episode will actually help your career, but I suggest you listen anyway. Trust my intuition on this one.
10/31/2017 · 52 minutes, 27 seconds

#12 - Dr Cameron works to stop you dying in a pandemic. Here’s what keeps her up at night.

“When you're in the middle of a crisis and you have to ask for money, you're already too late.” That’s Dr Beth Cameron, who leads Global Biological Policy and Programs at the Nuclear Threat Initiative. Beth should know. She has years of experience preparing for and fighting the diseases of our nightmares, on the White House Ebola Taskforce, in the National Security Council staff, and as the Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs. Summary, list of career opportunities, extra links to learn more and coaching application. Unfortunately, the countries of the world aren’t prepared for a crisis - and like children crowded into daycare, there’s a good chance something will make us all sick at once. During past pandemics countries have dragged their feet over who will pay to contain them, or struggled to move people and supplies where they needed to be. At the same time advanced biotechnology threatens to make it possible for terrorists to bring back smallpox - or create something even worse. In this interview we look at the current state of play in disease control, what needs to change, and how you can build the career capital necessary to make those changes yourself. That includes: * What and where to study, and where to begin a career in pandemic preparedness. Below you’ll find a lengthy list of people and places mentioned in the interview, and others we’ve had recommended to us. * How the Nuclear Threat Initiative, with just 50 people, collaborates with governments around the world to reduce the risk of nuclear or biological catastrophes, and whether they might want to hire you. * The best strategy for containing pandemics. * Why we lurch from panic, to neglect, to panic again when it comes to protecting ourselves from contagious diseases. * Current reform efforts within the World Health Organisation, and attempts to prepare partial vaccines ahead of time. * Which global health security groups most impress Beth, and what they’re doing. * What new technologies could be invented to make us safer. * Whether it’s possible to help solve the problem through mass advocacy. * Much more besides. Get free, one-on-one career advice to improve biosecurity Considering a relevant grad program like a biology PhD, medicine, or security studies? Able to apply for a relevant job already? We’ve helped dozens of people plan their careers to work on pandemic preparedness and put them in touch with mentors. If you want to work on the problem discussed in this episode, you should apply for coaching: Read more
10/25/2017 · 1 hour, 45 minutes, 15 seconds

#11 - Dr Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm

Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make academics more intellectually honest, so we can actually trust their findings? How can we speed up social science research ten-fold? Do most startups improve the world, or make it worse? If you’re interested in these questions, this interview is for you. Click for a full transcript, links discussed in the show, etc. A scientist, entrepreneur, writer and mathematician, Spencer Greenberg is constantly working to create tools to speed up and improve research and critical thinking. These include: * Rapid public opinion surveys to find out what most people actually think about animal consciousness, farm animal welfare, the impact of developing world charities and the likelihood of extinction by various different means; * Tools to enable social science research to be run en masse very cheaply; * ClearerThinking.org, a highly popular site for improving people’s judgement and decision-making; * Ways to transform data analysis methods to ensure that papers only show true findings; * Innovative research methods; * Ways to decide which research projects are actually worth pursuing. In this interview, Spencer discusses all of these and more. If you don’t feel like listening, that just shows that you have poor judgement and need to benefit from his wisdom even more! Get free, one-on-one career advice We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.
10/17/2017 · 1 hour, 29 minutes, 17 seconds

#10 - Dr Nick Beckstead on how to spend billions of dollars preventing human extinction

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people like Dr Nick Beckstead. Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion. Full transcript, coaching application form, overview of the conversation, and links to resources discussed in the episode: This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including: * Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.) * Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives? * What are the greatest risks to human civilisation? * To stop malaria, is it more cost-effective to use technology to eliminate mosquitos than to distribute bed nets? * Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions? * What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world? * Should we expect the future to be better if the economy grows more quickly - or more slowly? Get free, one-on-one career advice We’ve helped dozens of people compare between their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.
10/11/2017 · 1 hour, 51 minutes, 47 seconds

#9 - Christine Peterson on how insecure computers could lead to global disaster, and how to fix it

Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo. Full transcript, coaching application form, overview of the conversation, and extra resources to learn more: In this episode of the 80,000 Hours Podcast, Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up. We also discuss how she came up with the term ‘open source software’ (and how she had to get someone else to propose it). Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides. We dive into: * Whether the poor security of computer systems poses a catastrophic risk for the world. Could all our essential services be taken down at once? And if so, what can be done about it? * Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly? * How Christine came up with the term ‘open source software’ (and why someone else had to propose it). * Will AIs designed for wide-scale automated hacking make computers more or less secure? * Would it be good to radically extend human lifespan? Is it sensible to cryogenically freeze yourself in the hope of being resurrected in the future? * Could atomically precise manufacturing (nanotechnology) really work? Why was it initially so controversial and why did people stop worrying about it? * Should people who try to do good in their careers work long hours and take low salaries? Or should they take care of themselves first of all? * How she thinks the effective altruism community resembles the scene she was involved with when she was young, and where it might be going wrong. Get free, one-on-one career advice We’ve helped dozens of people compare between their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.
10/4/2017 · 1 hour, 45 minutes, 9 seconds

#8 - Lewis Bollard on how to end factory farming in our lifetimes

Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Project Officer for Farm Animal Welfare at the Open Philanthropy Project – has conducted extensive research into the best ways to eliminate animal suffering in farms as soon as possible. This has resulted in $30 million in grants to farm animal advocacy. Full transcript, coaching application form, overview of the conversation, and extra resources to learn more: We covered almost every approach being taken, which ones work, and how individuals can best contribute through their careers. We also had time to venture into a wide range of issues that are less often discussed, including: * Why Lewis thinks insect farming would be worse than the status quo, and whether we should look for ‘humane’ insecticides; * How young people can set themselves up to contribute to scientific research into meat alternatives; * How genetic manipulation of chickens has caused them to suffer much more than their ancestors, but could also be used to make them better off; * Why Lewis is skeptical of vegan advocacy; * Why he doubts that much can be done to tackle factory farming through legal advocacy or electoral politics; * Which species of farm animals are best to focus on first; * Whether fish and crustaceans are conscious, and if so what can be done for them; * Many other issues listed below in the Overview of the discussion. Get free, one-on-one career advice We’ve helped dozens of people compare between their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you. Overview of the discussion **2m40s** What originally drew you to dedicate your career to helping animals and why did Open Philanthropy end up focusing on it? **5m40s** Do you have any concrete way of assessing the severity of animal suffering? **7m10s** Do you think the environmental gains are large compared to those that we might hope to get from animal welfare improvement? **7m55s** What grants have you made at Open Phil? How did you go about deciding which groups to fund and which ones not to fund? **9m50s** Why does Open Phil focus on chickens and fish? Is this the right call? More...
9/27/2017 · 3 hours, 16 minutes, 54 seconds

#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong. Julia Galef – a well-known writer and researcher focused on improving human judgment, especially about high-stakes questions – believes that if we could again develop new techniques to predict the future, resolve disagreements and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas. This interview complements a new detailed review of whether and how to follow Julia’s career path. Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more. Julia has hosted the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements. In our conversation we ended up speaking about a wide range of topics, including: * Her research on how people can have productive intellectual disagreements. * Why she once planned to become an urban designer. * Why she doubts people are more rational than 200 years ago. * What makes her a fan of Twitter (while I think it’s dystopian). * Whether people should write more books. * Whether it’s a good idea to run a podcast, and how she grew her audience. * Why saying you don’t believe X often won’t convince people you don’t. * Why she started a PhD in economics but then stopped. * Whether she would recommend an unconventional career like her own. * Whether the incentives in the intelligence community actually support sound thinking. * Whether big institutions will actually pick up new tools for improving decision-making if they are developed. * How to start out pursuing a career in which you enhance human judgement and foresight. Get free, one-on-one career advice to help you improve judgement and decision-making We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. **If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:** APPLY FOR COACHING Overview of the conversation **1m30s** So what projects are you working on at the moment? **3m50s** How are you working on the problem of expert disagreement? **6m0s** Is this the same method as the double crux process that was developed at the Center for Applied Rationality? **10m** Why did the Open Philanthropy Project decide this was a very valuable project to fund? **13m** Is the double crux process actually that effective? **14m50s** Is Facebook dangerous? **17m** What makes for a good life? Can you be mistaken about having a good life? **19m** Should more people write books? Read more...
9/13/2017 · 1 hour, 14 minutes, 16 seconds

#6 - Dr Toby Ord on why the long-term future matters more than anything else & what to do about it

Of all the people whose well-being we should care about, only a small fraction are alive today. The rest are members of future generations who are yet to exist. Whether they’ll be born into a world that is flourishing or disintegrating – and indeed, whether they will ever be born at all – is in large part up to us. As such, the welfare of future generations should be our number one moral concern. This conclusion holds true regardless of whether your moral framework is based on common sense, consequences, rules of ethical conduct, cooperating with others, virtuousness, keeping options open – or just a sense of wonder about the universe we find ourselves in. That’s the view of Dr Toby Ord, a philosophy Fellow at the University of Oxford and co-founder of the effective altruism community. In this episode of the 80,000 Hours Podcast Dr Ord makes the case that aiming for a positive long-term future is likely the best way to improve the world. Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more. We then discuss common objections to long-termism, such as the idea that benefits to future generations are less valuable than those to people alive now, or that we can’t meaningfully benefit future generations beyond taking the usual steps to improve the present. Later the conversation turns to how individuals can and have changed the course of history, what could go wrong and why, and whether plans to colonise Mars would actually put humanity in a safer position than it is today. This episode goes deep into the most distinctive features of our advice. It’s likely the most in-depth discussion of how 80,000 Hours and the effective altruism community think about the long-term future, and why we so often give it top priority. It’s best to subscribe, so you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts. Want to help ensure humanity has a positive future instead of destroying itself? We want to help. We’ve helped hundreds of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, such as artificial intelligence or biosecurity, find out if our coaching can help you. Overview of the discussion 3m30s - Why is the long-term future of humanity such a big deal, and perhaps the most important issue for us to be thinking about? 9m05s - Five arguments that future generations matter 21m50s - How bad would it be if humanity went extinct or civilization collapsed? 26m40s - Why do people start saying such strange things when this topic comes up? 30m30s - Are there any other reasons to prioritize thinking about the long-term future of humanity that you wanted to raise before we move to objections? 36m10s - What is this school of thought called? Read more...
9/6/2017 · 2 hours, 8 minutes, 49 seconds

#5 - Alex Gordon-Brown on how to donate millions in your 20s working in quantitative trading

Quantitative financial trading is one of the highest-paying parts of the world’s highest-paying industry. 25- to 30-year-olds with outstanding maths skills can earn millions a year in an obscure set of ‘quant trading’ firms, where they program computers with predefined algorithms to allow them to trade very quickly and effectively. Update: we're headhunting people for quant trading roles Want to be kept up to date about particularly promising roles we're aware of for earning to give in quantitative finance? Get notified by letting us know here. This makes it an attractive place to work for people who want to ‘earn to give’, and we know several people who are able to donate over a million dollars a year to effective charities by working in quant trading. Who are these people? What is the job like? And is there a risk that their work harms the world in other ways? Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more. I spoke at length with Alexander Gordon-Brown, who has worked as a quant trader in London for the last three and a half years and donated hundreds of thousands of pounds. We covered: * What quant traders do and how much they earn. * Whether their work is beneficial or harmful for the world. * How to figure out if you’re a good personal fit for quant trading, and if so how to break into the industry. * Whether he enjoys the work and finds it motivating, and what other careers he considered. * What variety of positions are on offer, and what the culture is like in different firms. * How he decides where to donate, and whether he has persuaded his colleagues to join him. Want to earn to give for effective charities in quantitative trading? We want to help. We’ve helped dozens of people plan their earning-to-give careers, and put them in touch with mentors. If you want to work in quant trading, apply for our free coaching service. APPLY FOR COACHING What questions are asked when? 1m30s - What is quant trading and how much do quant traders earn? 4m45s - How do quant trading firms manage the risks they face and avoid bankruptcy? 7m05s - Do other traders also donate to charity and has Alex convinced them? 9m45s - How do they track the performance of each trader? 13m00s - What does the daily schedule of a quant trader look like? What do you do in the morning, afternoon, etc? More...
8/28/2017 · 1 hour, 45 minutes, 19 seconds

#4 - Howie Lempel on pandemics that kill hundreds of millions and how to stop them

What disaster is most likely to kill more than 10 million human beings in the next 20 years? Terrorism? Famine? An asteroid? Actually it’s probably a pandemic: a deadly new disease that spreads out of control. We’ve recently seen the risks with Ebola and swine flu, but they pale in comparison to the Spanish flu, which killed 3% of the world’s population between 1918 and 1920. A pandemic of that scale today would kill 200 million. In this in-depth interview I speak to Howie Lempel, who spent years studying pandemic preparedness for the Open Philanthropy Project. We spend the first 20 minutes covering his work at the foundation, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. Full transcript, apply for personalised coaching to help you work on pandemic preparedness, see what questions are asked when, and read extra resources to learn more. In the second half we go through where you personally could study and work to tackle one of the worst threats facing humanity. Want to help ensure we have no severe pandemics in the 21st century? We want to help. We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. If you want to work on pandemic preparedness, apply for our free coaching service. APPLY FOR COACHING 2m - What does the Open Philanthropy Project do? What’s it like to work there? 16m27s - What grants did OpenPhil make in pandemic preparedness? Did they work out? 22m56s - Why is pandemic preparedness such an important thing to work on? 31m23s - How many people could die in a global pandemic? Is Contagion a realistic movie? 37m05s - Why the risk is getting worse due to scientific discoveries 40m10s - How would dangerous pathogens get released? 45m27s - Would society collapse if a billion people die in a pandemic? 49m25s - The plague, Spanish flu, smallpox, and other historical pandemics 58m30s - How are risks affected by sloppy research security or the existence of factory farming? 1h7m30s - What's already being done? Why institutions for dealing with pandemics are really insufficient. 1h14m30s - What the World Health Organisation should do but can’t. 1h21m51s - What charities do about pandemics and why they aren’t able to fix things 1h25m50s - How long would it take to make vaccines? 1h30m40s - What does the US government do to protect Americans? It’s a mess. 1h37m20s - What kind of people do you know who work on this problem, and what are they doing? 1h46m30s - Are there things that we ought to be banning or technologies that we should be trying not to develop because we're just better off not having them? 1h49m35s - What kind of reforms are needed at the international level? 1h54m40s - Where should people who want to tackle this problem go to work? 1h59m50s - Are there any technologies we need to urgently develop? 2h04m20s - What about trying to stop humans from having contact with wild animals? 2h08m5s - What should people study if they're young and choosing their major; what should they do a PhD in? Where should they study, and with whom? More...
8/23/2017 · 2 hours, 35 minutes, 23 seconds

#3 - Dr Dario Amodei on OpenAI and how AI will change the world for good and ill

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal. Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman, among others, to ensure the benefits of AI are distributed broadly to all of society. I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about: * OpenAI’s latest plans and research progress. * His paper *Concrete Problems in AI Safety*, which outlines five specific ways machine learning algorithms can act dangerously in ways their designers didn’t intend – something OpenAI has to work to avoid. * How listeners can best go about pursuing a career in machine learning and AI development themselves. Full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and read extra resources to learn more. 1m33s - What OpenAI is doing, Dario’s research and why AI is important 13m - Why OpenAI scaled back its Universe project 15m50s - Why AI could be dangerous 24m20s - Would smarter-than-human AI solve most of the world’s problems? 29m - Paper on five concrete problems in AI safety 43m48s - Has OpenAI made progress? 49m30s - What this back-flipping noodle can teach you about AI safety 55m30s - How someone can pursue a career in AI safety and get a job at OpenAI 1h02m30s - Where and what should people study? 1h4m15s - What other paradigms for AI are there? 1h7m55s - How do you go from studying to getting a job? What places are there to work? 1h13m30s - If there's a 17-year-old listening here what should they start reading first? 1h19m - Is this a good way to develop your broader career options? Is it a safe move? 1h21m10s - What if you’re older and haven’t studied machine learning? How do you break in? 1h24m - What about doing this work in academia? 1h26m50s - Is the work frustrating because solutions may not exist? 1h31m35s - How do we prevent a dangerous arms race? 1h36m30s - Final remarks on how to get into doing useful work in machine learning
7/21/2017 · 1 hour, 38 minutes, 21 seconds

#2 - Prof David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives. David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk – especially everyday risks we face like getting cancer or dying in a car crash. As a result he’s regularly in the media explaining numbers in the news, helping both ordinary people and politicians focus on the important risks we face, and avoid being distracted by flashy risks that don’t actually have much impact. Summary, full transcript and extra links to learn more. To help make sense of the uncertainties we face in life, he has had to invent concepts like the microlife, or a 30-minute change in life expectancy. (https://en.wikipedia.org/wiki/Microlife) We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.
6/21/2017 · 33 minutes, 42 seconds

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications of the development of new technologies, and has a particular interest in artificial general intelligence – that is, an AI system that could do most or all of the tasks humans can do. This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy. Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.
6/5/2017 · 55 minutes, 15 seconds

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer; you can find the rest at 80000hours.org. Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others use to figure out what they want to do with their careers or their charitable giving. If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now. That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro. Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first: • #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks • #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it • #17 – Will MacAskill on why our descendants might view us as moral monsters • #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence • #44 – Paul Christiano on developing real solutions to the 'AI alignment problem' • #60 – What Professor Tetlock learned from 40 years studying how to predict the future • #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia • #71 – Benjamin Todd on the key ideas of 80,000 Hours • #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter • 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it
5/1/2017 · 3 minutes, 53 seconds