
Brain Inspired

English, Sciences, 1 season, 200 episodes, 4 days, 12 hours, 47 minutes
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

BI 189 Joshua Vogelstein: Connectomes and Prospective Learning

Support the show to get full episodes and join the Discord community. Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is currently the world's largest map of an entire brain... the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful and what to do with it, and so on. The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward. At some point some audio/video sync issues crop up, so we switched to another recording method and fixed them... so just hang tight if you're viewing the podcast... it'll get better soon. 0:00 - Intro 05:25 - Jovo's approach 13:10 - Connectome of a fruit fly 26:39 - What to do with a connectome 37:04 - How important is a connectome? 51:48 - Prospective learning 1:15:20 - Efficiency 1:17:38 - AI doomerism
6/29/2024 · 1 hour, 27 minutes, 19 seconds

BI 188 Jolande Fooken: Coordinating Action and Perception

Support the show to get full episodes and join the Discord community. Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple and we do it all the time to make meals for our children day in, and day out, and day in, and day out. But it seems far less simple as soon as you learn how we make various kinds of eye movements, and how we make various kinds of hand movements, and use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, and thoughts, and ideas around those and related topics. Jolande's website. Twitter: @ookenfooken. Related papers I am a parent. I am a scientist. Eye movement accuracy determines natural interception strategies. Perceptual-cognitive integration for goal-directed action in naturalistic environments. 0:00 - Intro 3:27 - Eye movements 8:53 - Hand-eye coordination 9:30 - Hand-eye coordination and naturalistic tasks 26:45 - Levels of expertise 34:02 - Yarbus and eye movements 42:13 - Varieties of experimental paradigms, varieties of viewing the brain 52:46 - Career vision 1:04:07 - Evolving view about the brain 1:10:49 - Coordination, robots, and AI
5/27/2024 · 1 hour, 28 minutes, 14 seconds

BI 187: COSYNE 2024 Neuro-AI Panel

Support the show to get full episodes and join the Discord community. Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience. COSYNE.
4/20/2024 · 1 hour, 3 minutes, 35 seconds

BI 186 Mazviita Chirimuuta: The Brain Abstracted

Support the show to get full episodes and join the Discord community. Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. She largely argues that when we try to understand something complex, like the brain, using models, and math, and analogies, for example - we should keep in mind these are all ways of simplifying and abstracting away details to give us something we actually can understand. And, when we do science, every tool we use and perspective we bring, every way we try to attack a problem, these are all both necessary to do the science and limit the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today. Mazviita's University of Edinburgh page. The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. Previous Brain Inspired episodes: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind 0:00 - Intro 5:28 - Neuroscience to philosophy 13:39 - Big themes of the book 27:44 - Simplifying by mathematics 32:19 - Simplifying by reduction 42:55 - Simplification by analogy 46:33 - Technology precedes science 55:04 - Theory, technology, and understanding 58:04 - Cross-disciplinary progress 58:45 - Complex vs. simple(r) systems 1:08:07 - Is science bound to study stability? 1:13:20 - 4E for philosophy but not neuroscience? 1:28:50 - ANNs as models 1:38:38 - Study of mind
3/25/2024 · 1 hour, 43 minutes, 34 seconds

BI 185 Eric Yttri: Orchestrating Behavior

Support the show to get full episodes and join the Discord community. As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University. Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. And to study that, he uses tools like optogenetics, neuronal recordings, and stimulation, while mice perform certain tasks, or, in my case, while they freely behave wandering around an enclosed space. We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more. Yttri Lab Twitter: @YttriLab Related papers Opponent and bidirectional control of movement velocity in the basal ganglia. B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors. 0:00 - Intro 2:36 - Eric's background 14:47 - Different animal models 17:59 - ANNs as models for animal brains 24:34 - Main question 25:43 - How circuits produce appropriate behaviors 26:10 - Cerebellum 27:49 - What do motor cortex and basal ganglia do? 49:12 - Neuroethology 1:06:09 - What is a behavior? 1:11:18 - Categorize behavior (B-SOiD) 1:22:01 - Real behavior vs. ANNs 1:33:09 - Best era in neuroscience
3/6/2024 · 1 hour, 44 minutes, 50 seconds

BI 184 Peter Stratton: Synthesize Neural Principles

Support the show to get full episodes and join the Discord community. Peter Stratton is a research scientist at Queensland University of Technology. I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman. What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. Because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right? Peter's website. Related papers Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI? Making a Spiking Net Work: Robust brain-like unsupervised machine learning. Global segregation of cortical activity and metastable dynamics. 
Unlocking neural complexity with a robotic key 0:00 - Intro 3:50 - AI background, neuroscience principles 8:00 - Overall view of modern AI 14:14 - Moravec's paradox and robotics 20:50 - Understanding movement to understand cognition 30:01 - How close are we to understanding brains/minds? 32:17 - Pete's goal 34:43 - Principles from neuroscience to build AI 42:39 - Levels of abstraction and implementation 49:57 - Mental disorders and robustness 55:58 - Function vs. implementation 1:04:04 - Spiking networks 1:07:57 - The roadmap 1:19:10 - AGI 1:23:48 - The terms AGI and AI 1:26:12 - Consciousness
2/20/2024 · 1 hour, 30 minutes, 47 seconds

BI 183 Dan Goodman: Neural Reckoning

Support the show to get full episodes and join the Discord community. You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute. All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious. Because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick. We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains. So what does it mean that modern neural networks disregard spiking altogether? Maybe spiking really isn't important to process and transmit information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics. Neural Reckoning Group. Twitter: @neuralreckoning. Related papers Neural heterogeneity promotes robust learning. Dynamics of specialization in neural modules under resource constraints. 
Multimodal units fuse-then-accumulate evidence across channels. Visualizing a joint future of neuroscience and neuromorphic engineering. 0:00 - Intro 3:47 - Why spiking neural networks, and a mathematical background 13:16 - Efficiency 17:36 - Machine learning for neuroscience 19:38 - Why not jump ship from SNNs? 23:35 - Hard and easy tasks 29:20 - How brains and nets learn 32:50 - Exploratory vs. theory-driven science 37:32 - Static vs. dynamic 39:06 - Heterogeneity 46:01 - Unifying principles vs. a hodgepodge 50:37 - Sparsity 58:05 - Specialization and modularity 1:00:51 - Naturalistic experiments 1:03:41 - Projects for SNN research 1:05:09 - The right level of abstraction 1:07:58 - Obstacles to progress 1:12:30 - Levels of explanation 1:14:51 - What has AI taught neuroscience? 1:22:06 - How has neuroscience helped AI?
2/6/2024 · 1 hour, 28 minutes, 54 seconds

BI 182: John Krakauer Returns… Again

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately: whether brains actually reorganize after damage, the role of brain plasticity in general, the path toward (and the path not toward) understanding higher cognition, how to fix motor problems after strokes, AGI, functionalism, consciousness, and much more. Relevant links: John's Lab. Twitter: @blamlab Related papers What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond. Against cortical reorganisation. Other episodes with John: BI 025 John Krakauer: Understanding Cognition BI 077 David and John Krakauer: Part 1 BI 078 David and John Krakauer: Part 2 BI 113 David Barack and John Krakauer: Two Views On Cognition Time stamps 0:00 - Intro 2:07 - It's a podcast episode! 6:47 - Stroke and Sherrington neuroscience 19:26 - Thinking vs. moving, representations 34:15 - What's special about humans? 56:35 - Does cortical reorganization happen? 1:14:08 - Current era in neuroscience
1/19/2024 · 1 hour, 25 minutes, 42 seconds

BI 181 Max Bennett: A Brief History of Intelligence

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. Over countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination. The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book. Twitter: @maxsbennett Book: A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. 0:00 - Intro 5:26 - Why evolution is important 7:22 - Maclean's triune brain 14:59 - Breakthrough 1: Steering 29:06 - Fish intelligence 40:38 - Breakthrough 3: Mentalizing 52:44 - How could we improve the human brain? 1:00:44 - What is intelligence? 
1:13:50 - Breakthrough 5: Speaking
12/25/2023 · 1 hour, 27 minutes, 30 seconds

BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding

Support the show to get full episodes and join the Discord community. Welcome to another special panel discussion episode. I was recently invited to moderate a discussion among six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before on episode 103. Ken helps me introduce the meetup and panel discussion for a few minutes. The goal in general was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount those obstacles, and so on. There isn't video of the event, just audio, and because we were all sharing microphones and they were being passed around, you'll hear some microphone type noise along the way - but I did my best to optimize the audio quality, and I believe it turned out quite listenable. Aspirational Neuroscience Panelists: Anton Arkhipov, Allen Institute for Brain Science. @AntonSArkhipov Konrad Kording, University of Pennsylvania. @KordingLab Tomás Ryan, Trinity College Dublin. @TJRyan_77 Srinivas Turaga, Janelia Research Campus. Dong Song, University of Southern California. @dongsong Zhihao Zheng, Princeton University. @zhihaozheng 0:00 - Intro 1:45 - Ken Hayworth 14:09 - Panel Discussion
12/11/2023 · 1 hour, 29 minutes, 27 seconds

BI 179 Laura Gradowski: Include the Fringe with Pluralism

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, and methods, and goals, when doing science. Pluralism is kind of a buzz word right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc. We discuss a wide range of topics, but also discuss some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. So, there are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more. Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh. Facing the Fringe. Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe 0:00 - Intro 3:57 - What is fringe? 
10:14 - What makes a theory fringe? 14:31 - Fringe to mainstream 17:23 - Garcia effect 28:17 - Fringe to mainstream: other examples 32:38 - Fringe and consciousness 33:19 - Words meanings change over time 40:24 - Pseudoscience 43:25 - How fringe becomes mainstream 47:19 - More fringe characteristics 50:06 - Pluralism as a solution 54:02 - Progress 1:01:39 - Encyclopedia of theories 1:09:20 - When to reject a theory 1:20:07 - How fringe becomes fringe 1:22:50 - Marginalization 1:27:53 - Recipe for fringe theorist
11/27/2023 · 1 hour, 39 minutes, 6 seconds

BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus just going about one's day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions. Eric's website. Related papers Predictive learning as a network mechanism for extracting low-dimensional latent space representations. A scale-dependent measure of system dimensionality. From lazy to rich to exclusive task representations in neural networks and neural codes. Feedback through graph motifs relates structure and function in complex networks. 0:00 - Intro 4:15 - Reflecting on the rise of dynamical systems in neuroscience 11:15 - DST view on macro scale 15:56 - Intuitions 22:07 - Eric's approach 31:13 - Are brains more or less impressive to you now? 38:45 - Why is dimensionality important? 50:03 - High-D in Low-D 54:14 - Dynamical motifs 1:14:56 - Theory for its own sake 1:18:43 - Rich vs. lazy learning 1:22:58 - Latent variables 1:26:58 - What assumptions give you most pause?
11/13/2023 · 1 hour, 35 minutes, 31 seconds

BI 177 Special: Bernstein Workshop Panel

Support the show to get full episodes and join the Discord community. I was recently invited to moderate a panel at the annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion! Program: How can machine learning be used to generate insights and theories in neuroscience? Panelists: Katrin Franke Lab website. Twitter: @kfrankelab. Ralf Haefner Haefner lab. Twitter: @haefnerlab. Martin Hebart Hebart Lab. Twitter: @martin_hebart. Johannes Jaeger Yogi's website. Twitter: @yoginho. Fred Wolf Fred's university webpage. Organizers: Alexander Ecker | University of Göttingen, Germany Fabian Sinz | University of Göttingen, Germany Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany
10/30/2023 · 1 hour, 13 minutes, 54 seconds

BI 176 David Poeppel Returns

Support the show to get full episodes and join the Discord community. David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis. David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki. Poeppel lab Twitter: @davidpoeppel. Related papers We don’t know how the brain stores anything, let alone words. Memory in humans and deep language models: Linking hypotheses for model augmentation. The neural ingredients for a language of thought are available. 0:00 - Intro 11:17 - Across levels 14:59 - Nature of memory 24:12 - Using the right tools for the right question 35:46 - LLMs, what they need, how they've shaped David's thoughts 44:55 - Across levels 54:07 - Speed of progress 1:02:21 - Neuroethology and mental illness - patreon 1:24:42 - Language of Thought
10/14/2023 · 1 hour, 23 minutes, 57 seconds

BI 175 Kevin Mitchell: Free Agents

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is written very well and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity of kinds of agency, the richness of our agency, evolved as organisms became more complex. We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more. Kevin's website. Twitter: @WiringtheBrain Book: Free Agents: How Evolution Gave Us Free Will 4:27 - From Innate to Free Agents 9:14 - Thinking of the whole organism 15:11 - Who the book is for 19:49 - What bothers Kevin 27:00 - Indeterminacy 30:08 - How it all began 33:08 - How indeterminacy helps 43:58 - Libet's free will experiments 50:36 - Creativity 59:16 - Selves, subjective experience, agency, and free will 1:10:04 - Levels of agency and free will 1:20:38 - How much free will can we have? 
1:28:03 - Hierarchy of mind constraints 1:36:39 - Artificial agents and free will 1:42:57 - Next book?
10/3/2023 · 1 hour, 46 minutes, 32 seconds

BI 174 Alicia Juarrero: Context Changes Everything

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool. In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other. I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses still aren't comfortable to think about the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics In Action, which we also discuss and which I also recommend if you want more of a primer to her newer more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains. 
Book: Context Changes Everything: How Constraints Create Coherence 0:00 - Intro 3:37 - 25 years thinking about constraints 8:45 - Dynamics in Action and eliminativism 13:08 - Efficient and other kinds of causation 19:04 - Complexity via context independent and dependent constraints 25:53 - Enabling and limiting constraints 30:55 - Across scales 36:32 - Temporal constraints 42:58 - A constraint cookbook? 52:12 - Constraints in a mechanistic worldview 53:42 - How to explain using constraints 56:22 - Concepts and multiple realizability 59:00 - Kevin Mitchell question 1:08:07 - Mac Shine Question 1:19:07 - 4E 1:21:38 - Dimensionality across levels 1:27:26 - AI and constraints 1:33:08 - AI and life
9/13/2023 · 1 hour, 45 minutes

BI 173 Justin Wood: Origins of Visual Intelligence

Support the show to get full episodes and join the Discord community. In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience?. Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin! Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to embrace both nature and nurture. Wood lab. Related papers: Controlled-rearing studies of newborn chicks and deep neural networks. Development of collective behavior in newborn artificial agents. A newborn embodied Turing test for view-invariant object recognition. 
Justin mentions these papers: Untangling invariant object recognition (DiCarlo & Cox, 2007) 0:00 - Intro 5:39 - Origins of Justin's current research 11:17 - Controlled rearing approach 21:52 - Comparing newborns and AI models 24:11 - Nativism vs. empiricism 28:15 - CNNs and early visual cognition 29:35 - Smoothness and slowness 50:05 - Early biological development 53:27 - Naturalistic vs. highly controlled 56:30 - Collective behavior in animals and machines 1:02:34 - Curiosity and critical periods 1:09:05 - Controlled rearing vs. other developmental studies 1:13:25 - Breaking natural rules 1:16:33 - Deep RL collective behavior 1:23:16 - Bottom-up and top-down
8/30/2023 · 1 hour, 35 minutes, 45 seconds

BI 172 David Glanzman: Memory All The Way Down

Support the show to get full episodes and join the Discord community. David runs his lab at UCLA, where he's also a distinguished professor. David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including challenges trying to get funded for it, and so on. David's Faculty Page. Related papers The central importance of nuclear mechanisms in the storage of memory. David mentions Arc and virus-like transmission: The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer. Structure of an Arc-ane virus-like capsid. David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium. Related episodes: BI 126 Randy Gallistel: Where Is the Engram? BI 127 Tomás Ryan: Memory, Instinct, and Forgetting
8/7/2023 · 1 hour, 30 minutes, 58 seconds

BI 171 Mike Frank: Early Language and Cognition

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition. We discuss that, along with his love for developing open data sets that anyone can use, the dance he dances between bottom-up data-driven approaches in this big data era, traditional experimental approaches, and top-down theory-driven approaches, how early language learning in children differs from LLM learning, and Mike's rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue. Language & Cognition Lab Twitter: @mcxfrank. I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions: Related papers: Pragmatic language interpretation as probabilistic inference. Toward a “Standard Model” of Early Language Learning. The pervasive role of pragmatics in early language. The Structure of Developmental Variation in Early Childhood. Relational reasoning and generalization using non-symbolic neural networks. Unsupervised neural network models of the ventral visual stream.
7/22/2023 · 1 hour, 24 minutes, 40 seconds

BI 170 Ali Mohebi: Starting a Research Lab

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future. Ali's website. Twitter: @mohebial
7/11/2023 · 1 hour, 17 minutes, 15 seconds

BI 169 Andrea Martin: Neural Dynamics and Language

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains. Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast; they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time. One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near others. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition.
We talk about modeling in general and what models do and don't tell us, and much more. Andrea's website. Twitter: @andrea_e_martin. Related papers A Compositional Neural Architecture for Language An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions Neural dynamics differentially encode phrases and sentences during spoken language comprehension Hierarchical structure in language and action: A formal comparison Andrea mentions this book: The Geometry of Biological Time.
6/28/2023 · 1 hour, 41 minutes, 30 seconds

BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art potentially change our scientific attitudes and perspectives? Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives. This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion! AWARE: Glimpses of Consciousness Umbrella Films 0:00 - Intro 19:42 - Mechanistic reductionism 45:33 - Changing views during lifetime 53:49 - Did making the film alter your views? 57:49 - ChatGPT 1:04:20 - Materialist assumption 1:11:00 - Science of consciousness 1:20:49 - Transhumanism 1:32:01 - Integrity 1:36:19 - Aesthetics 1:39:50 - Response to the film
6/2/2023 · 1 hour, 54 minutes, 42 seconds

BI 167 Panayiota Poirazi: AI Brains Need Dendrites

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology, and Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many varieties of important signal transformations before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward. Poirazi Lab Twitter: @YiotaPoirazi. Related papers Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks. Illuminating dendritic function with computational models. Introducing the Dendrify framework for incorporating dendrites to spiking neural networks. Pyramidal Neuron as Two-Layer Neural Network 0:00 - Intro 3:04 - Yiota's background 6:40 - Artificial networks and dendrites 9:24 - Dendrites special sauce? 14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains 29:00 - Functional unit of the brain 42:43 - Engrams 51:03 - Dendrites and nonlinearity 54:51 - Spiking neural networks 56:02 - Best level of biological detail 57:52 - Dendrify 1:05:41 - Experimental work 1:10:58 - Dendrites across species and development 1:16:50 - Career reflection 1:17:57 - Evolution of Yiota's thinking
5/27/2023 · 1 hour, 27 minutes, 43 seconds

BI 166 Nick Enfield: Language vs. Reality

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us to think our wonderful human thoughts, or to communicate those thoughts to each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go. For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those 4 words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately. From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.
Nick's website Twitter: @njenfield Book: Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. Papers: Linguistic concepts are self-generating choice architectures 0:00 - Intro 4:23 - Is learning about language important? 15:43 - Linguistic Anthropology 28:56 - Language and truth 33:57 - How special is language 46:19 - Choice architecture and framing 48:19 - Language for thinking or communication 52:30 - Agency and language 56:51 - Large language models 1:16:18 - Getting language right 1:20:48 - Social relationships and language
5/9/2023 · 1 hour, 27 minutes, 12 seconds

BI 165 Jeffrey Bowers: Psychology Gets No Respect

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image; researchers have compared the activity in models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning type models are the best models of how our brains and cognition work. However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests, like those performed in psychology, to ask whether the models are indeed solving tasks like our brains and minds do.
Jeff and his group, among others, have been doing just that, discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more. Website Twitter: @jeffrey_bowers Related papers: Deep Problems with Neural Network Models of Human Vision. Parallel Distributed Processing Theory in the Age of Deep Networks. Successes and critical failures of neural networks in capturing human-like speech recognition. 0:00 - Intro 3:52 - Testing neural networks 5:35 - Neuro-AI needs psychology 23:36 - Experiments in AI and neuroscience 23:51 - Why build networks like our minds? 44:55 - Vision problem spaces, solution spaces, training data 55:45 - Do we implement algorithms? 1:01:33 - Relational and combinatorial cognition 1:06:17 - Comparing representations in different networks 1:12:31 - Large language models 1:21:10 - Teaching LLMs nonsense languages
4/12/2023 · 1 hour, 38 minutes, 45 seconds

BI 164 Gary Lupyan: How Language Affects Thought

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Gary Lupyan runs the Lupyan Lab at University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language - naming things, categorizing things - changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics. And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test. Lupyan Lab. Twitter: @glupyan. Related papers: Hidden Differences in Phenomenal Experience. Verbal interference paradigms: A systematic review investigating the role of language in cognition. Gary mentioned Richard Feynman's Ways of Thinking video. Gary and Andy Clark's Aeon article: Super-cooperators. 0:00 - Intro 2:36 - Words and communication 14:10 - Phenomenal variability 26:24 - Co-operating minds 38:11 - Large language models 40:40 - Neuro-symbolic AI, scale 44:43 - How LLMs have changed Gary's thoughts about language 49:26 - Meaning, grounding, and language 54:26 - Development of language 58:53 - Symbols and emergence 1:03:20 - Language evolution in the LLM era 1:08:05 - Concepts 1:11:17 - How special is language? 1:18:08 - AGI
4/1/2023 · 1 hour, 31 minutes, 54 seconds

BI 163 Ellie Pavlick: The Mind of a Language Model

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more. Language Understanding and Representation Lab Twitter: @Brown_NLP Related papers Semantic Structure in Deep Learning. Pretraining on Interactions for Learning Grounded Affordance Representations. Mapping Language Models to Grounded Conceptual Spaces. 0:00 - Intro 2:34 - Will LLMs make us dumb? 9:01 - Evolution of language 17:10 - Changing views on language 22:39 - Semantics, grounding, meaning 37:40 - LLMs, humans, and prediction 41:19 - How to evaluate LLMs 51:08 - Structure, semantics, and symbols in models 1:00:08 - Dimensionality 1:02:08 - Limitations of LLMs 1:07:47 - What do linguists think? 1:14:23 - What is language for?
3/20/2023 · 1 hour, 21 minutes, 34 seconds

BI 162 Earl K. Miller: Thoughts are an Emergent Property

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition. Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low frequency oscillations exert a top-down control on incoming sensory stimuli, and this is directly in agreement with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity is on or off, and hence contributes or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics. Miller lab. Twitter: @MillerLabMIT. Related papers: An integrative theory of prefrontal cortex function. Annual Review of Neuroscience. Working Memory Is Complex and Dynamic, Like Your Thoughts. Traveling waves in the prefrontal cortex during working memory.
0:00 - Intro 6:22 - Evolution of Earl's thinking 14:58 - Role of the prefrontal cortex 25:21 - Spatial computing 32:51 - Homunculus problem 35:34 - Self 37:40 - Dimensionality and thought 46:13 - Reductionism 47:38 - Working memory and capacity 1:01:45 - Capacity as a principle 1:05:44 - Silent synapses 1:10:16 - Subspaces in dynamics
3/8/2023 · 1 hour, 23 minutes, 27 seconds

BI 161 Hugo Spiers: Navigation and Spatial Cognition

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, and menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, the different factors that seem to matter more for our navigation skills, and so on. Spiers Lab. Twitter: @hugospiers. Related papers Predictive maps in rats and humans for spatial navigation. From cognitive maps to spatial schemas. London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London. Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest.
2/24/2023 · 1 hour, 34 minutes, 38 seconds

BI 160 Ole Jensen: Rhythms of Cognition

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Ole Jensen is co-director of the Centre for Human Brain Health at University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations are linked to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey-Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence. The Neuronal Oscillations Group. Twitter: @neuosc. Related papers Shaping functional architecture by oscillatory alpha activity: gating by inhibition FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex The theta-gamma neural code A pipelining mechanism supporting previewing during visual exploration and reading.
Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity. 0:00 - Intro 2:58 - Oscillations import over the years 5:51 - Oscillations big picture 17:62 - Oscillations vs. traveling waves 22:00 - Oscillations and algorithms 28:53 - Alpha oscillations and working memory 44:46 - Alpha as the controller 48:55 - Frequency tagging 52:49 - Timing of attention 57:41 - Pipelining neural processing 1:03:38 - Previewing during reading 1:15:50 - Previewing, prediction, and large language models 1:24:27 - Dyslexia
2/7/2023 · 1 hour, 28 minutes, 39 seconds

BI 159 Chris Summerfield: Natural General Intelligence

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Chris Summerfield runs the Human Information Processing Lab at University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples. Human Information Processing Lab. Twitter: @summerfieldlab. Book: Natural General Intelligence: How understanding the brain can help us build AI. Other books mentioned: Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal The Mind is Flat by Nick Chater. 0:00 - Intro 2:20 - Natural General Intelligence 8:05 - AI and Neuro interaction 21:42 - How to build AI 25:54 - Umwelts and affordances 32:07 - Different kind of intelligence 39:16 - Ecological validity and AI 48:30 - Is reward enough? 1:05:14 - Beyond brains 1:15:10 - Large language models and brains
1/26/2023 · 1 hour, 28 minutes, 53 seconds

BI 158 Paul Rosenbloom: Cognitive Architectures

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR anymore, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things, Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain. He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to, and can communicate effectively among your peers. All of what I just said, and much of what we discuss, can be found in Paul's memoir, In Search of Insight: My Life as an Architectural Explorer. Paul's website. Related papers Working memoir: In Search of Insight: My Life as an Architectural Explorer. Book: On Computing: The Fourth Great Scientific Domain.
A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains. Common Model of Cognition Bulletin. 0:00 - Intro 3:26 - A career of exploration 7:00 - Allen Newell 14:47 - Relational model and dichotomic maps 24:22 - Cognitive architectures 28:31 - SOAR cognitive architecture 41:14 - Sigma cognitive architecture 43:58 - SOAR vs. Sigma 53:06 - Cognitive architecture community 55:31 - Common model of cognition 1:11:13 - What's missing from the common model 1:17:48 - Brains vs. cognitive architectures 1:21:22 - Mapping the common model onto the brain 1:24:50 - Deep learning 1:30:23 - AGI
1/16/2023 · 1 hour, 35 minutes, 12 seconds
Episode Artwork

BI 157 Sarah Robins: Philosophy of Memory

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 Tomás Ryan: Memory, Instinct, and Forgetting). Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory. We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early unstable version to a more stable long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea of a stable memory trace. We also discuss the impact of optogenetics on the philosophy and neuroscience of memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory. Sarah's website. Twitter: @SarahKRobins. Related papers: Her Memory chapter, with Felipe de Brigard, in the book Mind, Cognition, and Neuroscience: A Philosophical Introduction. Memory and Optogenetic Intervention: Separating the engram from the ecphory. Stable Engrams and Neural Dynamics. 
0:00 - Intro 4:18 - Philosophy of memory 5:10 - Making a move 6:55 - State of philosophy of memory 11:19 - Memory traces or the engram 20:44 - Taxonomy of memory 25:50 - Cognitive ontologies, neuroscience, and psychology 29:39 - Optogenetics 33:48 - Memory traces vs. neural dynamics and consolidation 40:32 - What is the boundary of a memory? 43:00 - Process philosophy and memory 45:07 - Memory vs. imagination 49:40 - Constructivist view of memory and imagination 54:05 - Is memory for the future? 58:00 - Memory errors and intelligence 1:00:42 - Memory and AI 1:06:20 - Creativity and memory errors
1/2/2023 · 1 hour, 20 minutes, 59 seconds

BI 156 Mariam Aly: Memory, Attention, and Perception

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Mariam Aly runs the Aly Lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health. Aly Lab. Twitter: @mariam_s_aly. Related papers: Attention promotes episodic encoding by stabilizing hippocampal representations. The medial temporal lobe is critical for spatial relational perception. Cholinergic modulation of hippocampally mediated attention and perception. Preparation for upcoming attentional states in the hippocampus and medial prefrontal cortex. How hippocampal memory shapes, and is shaped by, attention. Attentional fluctuations and the temporal organization of memory. 0:00 - Intro 3:50 - Mariam's background 9:32 - Hippocampus history and current science 12:34 - Hippocampus and perception 13:42 - Relational information 18:30 - How much memory is explicit? 22:32 - How attention affects hippocampus 32:40 - fMRI levels vs. stability 39:04 - How is hippocampus necessary for attention 57:00 - How much does attention affect memory? 1:02:24 - How memory affects attention 1:06:50 - Attention and memory relation big picture 1:07:42 - Current state of memory and attention 1:12:12 - Modularity 1:17:52 - Practical advice to improve attention/memory 1:21:22 - Mariam's challenges
12/23/2022 · 1 hour, 40 minutes, 45 seconds

BI 155 Luiz Pessoa: The Entangled Brain

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Luiz Pessoa runs his Laboratory of Cognition and Emotion at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X does function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also BI 152 Michael L. Anderson: After Phrenology: Neural Reuse). Luiz and I discuss the implications of studying the brain from a complex systems perspective, why we need to go beyond thinking about anatomy and instead think about functional organization, some of the brain's principles of organization, and a lot more. Laboratory of Cognition and Emotion. Twitter: @PessoaBrain. Book: The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together. 0:00 - Intro 2:47 - The Entangled Brain 16:24 - How to think about complex systems 23:41 - Modularity thinking 28:16 - How to train one's mind to think complex 33:26 - Problem or principle? 44:22 - Complex behaviors 47:06 - Organization vs. structure 51:09 - Principles of organization: Massive Combinatorial Anatomical Connectivity 55:15 - Principles of organization: High Distributed Functional Connectivity 1:00:50 - Principles of organization: Networks as Functional Units 1:06:15 - Principles of Organization: Interactions via Cortical-Subcortical Loops 1:08:53 - Open and closed loops 1:16:43 - Principles of organization: Connectivity with the Body 1:21:28 - Consciousness 1:24:53 - Emotions 1:32:49 - Emotions and AI 1:39:47 - Emotion as a concept 1:43:25 - Complexity and functional organization in AI
12/10/2022 · 1 hour, 54 minutes, 26 seconds

BI 154 Anne Collins: Learning with Working Memory

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated and how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like model-free vs. model-based RL, system-1 vs. system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI. Computational Cognitive Neuroscience Lab. Twitter: @ccnlab or @Anne_On_Tw. Related papers: How Working Memory and Reinforcement Learning Are Intertwined: A Cognitive, Neural, and Computational Perspective. Beyond simple dichotomies in reinforcement learning. The Role of Executive Function in Shaping Reinforcement Learning. What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience. 0:00 - Intro 5:25 - Dimensionality of learning 11:19 - Modularity of function and computations 16:51 - Is working memory a thing? 19:33 - Model-free model-based dichotomy 30:40 - Working memory and RL 44:43 - How working memory and RL interact 50:50 - Working memory and attention 59:37 - Computations vs. implementations 1:03:25 - Interpreting results 1:08:00 - Working memory and AI
11/29/2022 · 1 hour, 22 minutes, 27 seconds

BI 153 Carolyn Dicey-Jennings: Attention and the Self

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Carolyn Dicey Jennings is a philosopher and a cognitive scientist at University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject, that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow longer-range oscillations in our brains can alter or entrain the activity of more local neural activity, and this is a candidate for mental causation. We unpack that more in our discussion, and how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception. Carolyn's website. Book: The Attending Mind. Aeon article: I Attend, Therefore I Am. Related papers: The Subject of Attention. Consciousness and Mind. Practical Realism about the Self. 0:00 - Intro 12:15 - Reconceptualizing attention 16:07 - Types of attention 19:02 - Predictive processing and attention 23:19 - Consciousness, identity, and self 30:39 - Attention and the brain 35:47 - Integrated information theory 42:05 - Neural attention 52:08 - Decoupling oscillations from spikes 57:16 - Selves in other organisms 1:00:42 - AI and the self 1:04:43 - Attention, consciousness, conscious perception 1:08:36 - Meaning and attention 1:11:12 - Conscious entrainment 1:19:57 - Is attention a switch or knob?
11/18/2022 · 1 hour, 25 minutes, 30 seconds

BI 152 Michael L. Anderson: After Phrenology: Neural Reuse

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively. Michael's website. Twitter: @mljanderson. Book: After Phrenology: Neural Reuse and the Interactive Brain. Related papers: Neural reuse: a fundamental organizational principle of the brain. Some dilemmas for an account of neural representation: A reply to Poldrack. Debt-free intelligence: Ecological information in minds and machines. Describing functional diversity of brain regions and brain networks. 0:00 - Intro 3:02 - After Phrenology 13:18 - Typical neuroscience experiment 16:29 - Neural reuse 18:37 - 4E cognition and representations 22:48 - John Krakauer question 27:38 - Gibsonian perception 36:17 - Autoencoders without representations 49:22 - Pluralism 52:42 - Alex Gomez-Marin question - metaphysics 1:01:26 - Stimulus-response historical neuroscience 1:10:59 - After Phrenology influence 1:19:24 - Origins of neural reuse 1:35:25 - The way forward
11/8/2022 · 1 hour, 45 minutes, 11 seconds

BI 151 Steve Byrnes: Brain-like AGI Safety

Support the show to get full episodes and join the Discord community. Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI. Steve's website. Twitter: @steve47285. Intro to Brain-Like-AGI Safety.
10/30/2022 · 1 hour, 31 minutes, 17 seconds

BI 150 Dan Nicholson: Machines, Organisms, Processes

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects: why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more. Dan's website. Google Scholar. Twitter: @NicholsonHPBio. Book: Everything Flows: Towards a Processual Philosophy of Biology. Related papers: Is the Cell Really a Machine? The Machine Conception of the Organism in Development and Evolution: A Critical Analysis. On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology. Related episode: BI 118 Johannes Jäger: Beyond Networks. 0:00 - Intro 2:49 - Philosophy and science 16:37 - Role of history 23:28 - What Is Life? And interaction with James Watson 38:37 - Arguments against the machine conception of organisms 49:08 - Organisms as streams (processes) 57:52 - Process philosophy 1:08:59 - Alfred North Whitehead 1:12:45 - Process and consciousness 1:22:16 - Artificial intelligence and process 1:31:47 - Language and symbols and processes
10/15/2022 · 1 hour, 38 minutes, 29 seconds

BI 149 William B. Miller: Cell Intelligence

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous amount and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more. William's website. Twitter: @BillMillerMD. Book: Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. 0:00 - Intro 3:43 - Bioverse 7:29 - Bill's cell appreciation origins 17:03 - Microbiomes 27:01 - Complexity of microbiomes and the "Era of the cell" 46:00 - Robustness 55:05 - Cell vs. human intelligence 1:10:08 - Artificial intelligence 1:21:01 - Neuro-AI 1:25:53 - Hard problem of consciousness
10/5/2022 · 1 hour, 33 minutes, 54 seconds

BI 148 Gaute Einevoll: Brain Simulations

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail; and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs ugly models", and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception). Gaute's website. Twitter: @GauteEinevoll. Related papers: The Scientific Case for Brain Simulations. Brain signal predictions from multi-scale networks using a linearized framework. Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex. LFPy: a Python module for calculation of extracellular potentials from multicompartment neuron models. Gaute's Sense and Science podcast. 0:00 - Intro 3:25 - Beautiful and messy models 6:34 - In Silico 9:47 - Goals of human brain project 15:50 - Brain simulation approach 21:35 - Degeneracy in parameters 26:24 - Abstract principles from simulations 32:58 - Models as tools 35:34 - Predicting brain signals 41:45 - LFPs closer to average 53:57 - Plasticity in simulations 56:53 - How detailed should we model neurons? 59:09 - Lessons from predicting signals 1:06:07 - Scaling up 1:10:54 - Simulation as a tool 1:12:35 - Oscillations 1:16:24 - Manifolds and simulations 1:20:22 - Modeling cortex like Hodgkin and Huxley
9/25/2022 · 1 hour, 28 minutes, 48 seconds

BI 147 Noah Hutton: In Silico

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project. In Silico website. Rent or buy In Silico. Noah's website. Twitter: @noah_hutton. 0:00 - Intro 3:36 - Release and premier 7:37 - Noah's background 9:52 - Origins of In Silico 19:39 - Recurring visits 22:13 - Including the critics 25:22 - Markram's shifting outlook and salesmanship 35:43 - Promises and delivery 41:28 - Computer and brain terms interchange 49:22 - Progress vs. illusion of progress 52:19 - Close to quitting 58:01 - Salesmanship vs bad at estimating timelines 1:02:12 - Brain simulation science 1:11:19 - AGI 1:14:48 - Brain simulation vs. neuro-AI 1:21:03 - Opinion on TED talks 1:25:16 - Hero worship 1:29:03 - Feedback on In Silico
9/13/2022 · 1 hour, 37 minutes, 8 seconds

BI 146 Lauren Ross: Causal and Non-Causal Explanation

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints. Lauren's website. Twitter: @ProfLaurenRoss. Related papers: A call for more clarity around causality in neuroscience. The explanatory nature of constraints: Law-based, mathematical, and causal. Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters. Distinguishing topological and causal explanation. Multiple Realizability from a Causal Perspective. Cascade versus mechanism: The diversity of causal structure in science. 0:00 - Intro 2:46 - Lauren's background 10:14 - Jim Woodward legacy 15:37 - Golden era of causality 18:56 - Mechanistic explanation 28:51 - Pathways 31:41 - Cascades 36:25 - Topology 41:17 - Constraint 50:44 - Hierarchy of explanations 53:18 - Structure and function 57:49 - Brain and mind 1:01:28 - Reductionism 1:07:58 - Constraint again 1:14:38 - Multiple realizability
9/7/2022 · 1 hour, 22 minutes, 51 seconds

BI 145 James Woodward: Causation with a Human Face

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures. Jim's website. Making Things Happen: A Theory of Causal Explanation. Causation with a Human Face: Normative Theory and Descriptive Psychology. 0:00 - Intro 4:14 - Causation with a Human Face & Functionalist approach 6:16 - Interventionist causality; Epistemology and metaphysics 9:35 - Normative and descriptive 14:02 - Rationalist approach 20:24 - Normative vs. descriptive 28:00 - Varying notions of causation 33:18 - Invariance 41:05 - Causality in complex systems 47:09 - Downward causation 51:14 - Natural laws 56:38 - Proportionality 1:01:12 - Intuitions 1:10:59 - Normative and descriptive relation 1:17:33 - Causality across disciplines 1:21:26 - What would help our understanding of causation
8/28/2022 · 1 hour, 25 minutes, 52 seconds

BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

Check out my short video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Large language models, often now called "foundation models", are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more. Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words. Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more. EvLab. Emily's website. Twitter: @ev_fedorenko; @emilymbender. Related papers: Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko) The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 
(Bender) Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. (Bender) 0:00 - Intro 4:35 - Language and cognition 15:38 - Grasping for meaning 21:32 - Are large language models producing language? 23:09 - Next-word prediction in brains and models 32:09 - Interface between language and thought 35:18 - Studying language in nonhuman animals 41:54 - Do we understand language enough? 45:51 - What do language models need? 51:45 - Are LLMs teaching us about language? 54:56 - Is meaning necessary, and does it matter how we learn language? 1:00:04 - Is our biology important for language? 1:04:59 - Future outlook
8/17/2022 · 1 hour, 11 minutes, 41 seconds

BI 143 Rodolphe Sepulchre: Mixed Feedback Control

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the mixed digital and analog brain signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics. Rodolphe's website. Related papers: Spiking Control Systems. Control Across Scales by Positive and Negative Feedback. Neuromorphic control (arXiv version). Related episodes: BI 130 Eve Marder: Modulation of Networks. BI 119 Henry Yin: The Crisis in Neuroscience. 0:00 - Intro 4:38 - Control engineer 9:52 - Control vs. dynamical systems 13:34 - Building vs. understanding 17:38 - Mixed feedback signals 26:00 - Robustness 28:28 - Eve Marder 32:00 - Loneliness 37:35 - Across levels 44:04 - Neuromorphics and neuromodulation 52:15 - Barrier to adopting neuromorphics 54:40 - Deep learning influence 58:04 - Beyond energy efficiency 1:02:02 - Deep learning for neuro 1:14:15 - Role of philosophy 1:16:43 - Doing it right
8/5/2022 · 1 hour, 24 minutes, 53 seconds

BI 142 Cameron Buckner: The New DoGMA

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together in a way they can cooperate in a general and flexible manner, it will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world. Cameron's website. Twitter: @cameronjbuckner. Related papers: Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks. A Forward-Looking Theory of Content. Other sources Cameron mentions: Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus). Radical Empiricism and Machine Learning Research (Judea Pearl). Fodor's guide to the Humean mind (Tamás Demeter). 0:00 - Intro 4:55 - Interpreting old philosophy 8:26 - AI and philosophy 17:00 - Empiricism vs. rationalism 27:09 - Domain-general faculties 33:10 - Faculty psychology 40:28 - New faculties? 
46:11 - Human faculties 51:15 - Cognitive architectures 56:26 - Language 1:01:40 - Beyond dichotomous thinking 1:04:08 - Lower-level faculties 1:10:16 - Animal cognition 1:14:31 - A Forward-Looking Theory of Content
7/26/2022 · 1 hour, 43 minutes, 16 seconds

BI 141 Carina Curto: From Structure to Dynamics

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background skills in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on "combinatorial linear threshold networks" (CLTNs). Unlike the large deep learning models popular today as models of brain activity, the CLTNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CLTNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition. Carina's website. The Mathematical Neuroscience Lab. Related papers: A major obstacle impeding progress in brain science is the lack of beautiful models. What can topology tell us about the neural code? Predicting neural network dynamics via graphical analysis. 0:00 - Intro 4:25 - Background: Physics and math to study brains 20:45 - Beautiful and ugly models 35:40 - Topology 43:14 - Topology in hippocampal navigation 56:04 - Topology vs. dynamical systems theory 59:10 - Combinatorial linear threshold networks 1:25:26 - How much more math do we need to invent?
7/12/2022 · 1 hour, 31 minutes, 40 seconds

BI 140 Jeff Schall: Decisions and Eye Movements

Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers around studying the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions by Davida Teller are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference by John Platt is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, we compare the relatively small models he employs with the huge deep learning models, many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of his review papers we discuss. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other 2-ish years ago (Accumulators, Neurons, and Response Time). Schall Lab. Twitter: @LabSchall. Related papers: Linking Propositions. Strong Inference. On Building a Bridge Between Brain and Behavior. Accumulators, Neurons, and Response Time. 
0:00 - Intro 6:51 - Neurophysiology old and new 14:50 - Linking propositions 24:18 - Psychology working with neurophysiology 35:40 - Neuron doctrine, population doctrine 40:28 - Strong Inference and deep learning 46:37 - Model mimicry 51:56 - Scientific fads 57:07 - Current projects 1:06:38 - On leaving academia 1:13:51 - How academia has changed for better and worse
6/30/2022 · 1 hour, 20 minutes, 22 seconds

BI 139 Marc Howard: Compressed Time and Memory

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use to support a wide variety of cognitive processes. We also discuss some of the ways Marc is incorporating this mathematical operation in deep learning nets to improve their ability to handle information at different time scales. Theoretical Cognitive Neuroscience Lab. Twitter: @marcwhoward777.Related papers:Memory as perception of the past: Compressed time in mind and brain.Formal models of memory based on temporally-varying representations.Cognitive computation using neural representations of time and space in the Laplace domain.Time as a continuous dimension in natural and artificial networks.DeepSITH: Efficient learning via decomposition of what and when across time scales. 0:00 - Intro 4:57 - Main idea: Laplace transforms 12:00 - Time cells 20:08 - Laplace, compression, and time cells 25:34 - Everywhere in the brain 29:28 - Episodic memory 35:11 - Randy Gallistel's memory idea 40:37 - Adding Laplace to deep nets 48:04 - Reinforcement learning 1:00:52 - Brad Wyble Q: What gets filtered out? 1:05:38 - Replay and complementary learning systems 1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki 1:15:10 - Obstacles
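The core idea can be sketched as a bank of leaky integrators with log-spaced decay rates, which together approximate the Laplace transform of the input's history. This is a hedged toy version of the idea, not Marc's actual model; the rates, time step, and impulse input are assumptions for illustration:

```python
import numpy as np

# Bank of leaky integrators, one per decay rate s, driven by a shared input.
# Each unit approximates one point of the Laplace transform of the past:
# F(s, t) ~ integral of f(t') * exp(-s (t - t')) dt'.
s = np.logspace(-2, 1, 50)     # log-spaced decay rates (assumed values)
dt, T = 0.01, 2000
f = np.zeros(T)
f[100] = 1.0                   # a single impulse "event" early in the record

F = np.zeros((T, s.size))
for t in range(1, T):
    F[t] = F[t - 1] - dt * s * F[t - 1]   # each unit decays at its own rate
    F[t] += f[t]                          # all units receive the same input

# Long after the event, fast-decaying units have forgotten it while slow
# units still hold a trace: the population stores a log-compressed timeline
# of the past, which an (approximate) inverse transform can read out as the
# "time cells" discussed in the episode.
```

Older moments are represented by fewer, slower units, which is the log-scale compression in the description: temporal resolution "spreads out" as memories age.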
6/20/2022 · 1 hour, 20 minutes, 11 seconds

BI 138 Matthew Larkum: The Dendrite Hypothesis

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers - and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more. Larkum Lab.Twitter: @mattlark.Related papersCellular Mechanisms of Conscious Processing.Perirhinal input to neocortical layer 1 controls learning (bioRxiv link).Are dendrites conceptually useful?Memories off the top of your head.Do Action Potentials Cause Consciousness?Blake Richards' episode discussing back-propagation in the brain (based on Matthew's experiments) 0:00 - Intro 5:31 - Background: Dendrites 23:20 - Cortical neuron bodies vs. branches 25:47 - Theories of cortex 30:49 - Feedforward and feedback hierarchy 37:40 - Dendritic integration hypothesis 44:32 - DIT vs. other consciousness theories 51:30 - Mac Shine Q1 1:04:38 - Are dendrites conceptually useful? 1:09:15 - Insights from implementation level 1:24:44 - How detailed to model? 1:28:15 - Do action potentials cause consciousness? 1:40:33 - Mac Shine Q2
6/6/2022 · 1 hour, 51 minutes, 42 seconds

BI 137 Brian Butterworth: Can Fish Count?

Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics. Brian's website: The Mathematical BrainTwitter: @b_butterworthThe book:Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds 0:00 - Intro 3:19 - Why Counting? 5:31 - Dyscalculia 12:06 - Dyslexia 19:12 - Counting 26:37 - Origins of counting vs. language 34:48 - Counting vs. higher math 46:46 - Counting some things and not others 53:33 - How to test counting 1:03:30 - How does the brain count? 1:13:10 - Are numbers real?
5/27/2022 · 1 hour, 17 minutes, 49 seconds

BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence: our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more. Michel's websiteAlex's Lab: The Behavior of Organisms Laboratory.Twitter: @behaviOrganisms (Alex)Related papersThe Blind Spot of Neuroscience.The Life of Behavior.A Clash of Umwelts. Related events:The Future Scientist (a conversation series) 0:00 - Intro 4:32 - The Blind Spot 15:53 - Phenomenology and interpretation 22:51 - Personal stories: appreciating phenomenology 37:42 - Quantum physics example 47:16 - Scientific explanation vs. phenomenological description 59:39 - How can phenomenology and science complement each other? 
1:08:22 - Neurophenomenology 1:17:34 - Use of language 1:25:46 - Mutual constraints
5/17/2022 · 1 hour, 34 minutes, 12 seconds

BI 135 Elena Galea: The Stars of the Brain

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to study how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and - Elena's favorite current hypothesis - their integrative role in negative feedback control. Elena's website.Twitter: @elenagalea1Related papersA roadmap to integrate astrocytes into Systems Neuroscience.Elena recommended this paper: Biological feedback control—Respect the loops. 0:00 - Intro 5:23 - The changing story of astrocytes 14:58 - Astrocyte research lags neuroscience 19:45 - Types of astrocytes 23:06 - Astrocytes vs neurons 26:08 - Computational roles of astrocytes 35:45 - Feedback control 43:37 - Energy efficiency 46:25 - Current technology 52:58 - Computational astroscience 1:10:57 - Do names for things matter
5/6/2022 · 1 hour, 17 minutes, 25 seconds

BI 134 Mandyam Srinivasan: Bee Flight and Cognition

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to judge their speed, distance traveled, and proximity to objects, and to land gracefully. These abilities are largely governed via control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience. Srini's Website.Related papersVision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics. 0:00 - Intro 3:34 - Background 8:20 - Bee experiments 14:30 - Bee flight and navigation 28:05 - Landing 33:06 - Umwelt and perception 37:26 - Bee-inspired aerial robotics 49:10 - Motion camouflage 51:52 - Cognition in bees 1:03:10 - Small vs. big brains 1:06:42 - Pain in bees 1:12:50 - Subjective experience 1:15:25 - Deep learning 1:23:00 - Path forward
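The corridor-centering behavior Srini describes (balancing left and right optic flow) can be sketched as a toy control loop. This is an illustration under assumed parameters, not a model from Srini's papers; the corridor geometry, gain `k`, and speed `v` are all made up for demonstration:

```python
# Corridor-centering sketch: walls at x = -1 and x = +1, and perceived optic
# flow from each wall scales as (forward speed) / (distance to that wall).
# Steering away from the faster-moving image drives the bee to the midline.
def center(x0=0.6, v=1.0, k=0.1, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        flow_right = v / (1.0 - x)               # closer wall -> faster image motion
        flow_left = v / (1.0 + x)
        x += dt * -k * (flow_right - flow_left)  # steer away from faster flow
    return x

x_final = center()   # starts off-center at 0.6, ends near the midline
```

The appeal of this strategy, as discussed in the episode, is that it needs no explicit distance measurement: equalizing the two perceptual signals against each other centers the flight path automatically.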
4/27/2022 · 1 hour, 26 minutes, 17 seconds

BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep

Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App, which is freely available via his lab. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning. Ken's Cognitive Neuroscience Laboratory.Twitter: @kap101.The Lucid Dreaming App.Related papersMemory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better.Does memory reactivation during sleep support generalization at the cost of memory specifics?Real-time dialogue between experimenters and dreamers during REM sleep. 0:00 - Intro 2:48 - Background and types of memory 14:44 - Consciousness and memory 23:32 - Phases of sleep and wakefulness 28:19 - Sleep, memory, and learning 33:50 - Targeted memory reactivation 48:34 - Problem solving during sleep 51:50 - 2-way communication with lucid dreamers 1:01:43 - Confounds to the paradigm 1:04:50 - Limitations and future studies 1:09:35 - Lucid dreaming app 1:13:47 - How sleep can inform AI 1:20:18 - Advice for students
4/15/2022 · 1 hour, 29 minutes, 14 seconds

BI 132 Ila Fiete: A Grid Scaffold for Memory

Announcement: I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here. Support the show to get full episodes and join the Discord community. Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework. The Fiete Lab.Related papersA structured scaffold underlies activity in the hippocampus.Attractor and integrator networks in the brain. 0:00 - Intro 3:36 - "Neurophysicist" 9:30 - Bottom-up vs. top-down 15:57 - Tool scavenging 18:21 - Cognitive maps and hippocampus 22:40 - Hopfield networks 27:56 - Internal scaffold 38:42 - Place cells 43:44 - Grid cells 54:22 - Grid cells encoding place cells 59:39 - Scaffold model: stacked Hopfield networks 1:05:39 - Attractor landscapes 1:09:22 - Landscapes across scales 1:12:27 - Dimensionality of landscapes
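Since Hopfield networks come up as a building block of the scaffold model, here is a minimal classic Hopfield example - the standard textbook construction, not Ila's model; the network size, pattern count, and corruption level are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns with the classic Hebbian rule, then
# recover one of them from a corrupted cue by iterating the dynamics.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)          # no self-connections

def recall(cue, steps=20):
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1             # break ties consistently
    return x

cue = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1                   # corrupt 15% of the bits
overlap = (recall(cue) @ patterns[0]) / N   # 1.0 means perfect recovery
```

Each stored pattern is an attractor, which is why this kind of network is a natural ingredient for the attractor-landscape view of memory discussed in the episode.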
4/3/2022 · 1 hour, 17 minutes, 20 seconds

BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs

Support the show to get full episodes and join the Discord community. Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs". Neural Circuits Laboratory.Twitter: Sri: @srikipedia; Jie: @neuro_Mei.Related papersInforming deep neural networks by multiscale principles of neuromodulatory systems. 0:00 - Intro 3:10 - Background 9:19 - Bottom-up vs. top-down 14:42 - Levels of abstraction 22:46 - Biological neuromodulation 33:18 - Inventing neuromodulators 41:10 - How far along are we? 53:31 - Multiple realizability 1:09:40 - Modeling dendrites 1:15:24 - Across-species neuromodulation
3/26/2022 · 1 hour, 26 minutes, 52 seconds

BI 130 Eve Marder: Modulation of Networks

Support the show to get full episodes and join the Discord community. Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons and its connections and neurophysiology are well-understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a wide variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains. The Marder Lab.Twitter: @MarderLab.Related to our conversation:Understanding Brains: Details, Intuition, and Big Data.Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks). 0:00 - Intro 3:58 - Background 8:00 - Levels of ambiguity 9:47 - Stomatogastric nervous system 17:13 - Structure vs. function 26:08 - Role of theory 34:56 - Technology vs. understanding 38:25 - Higher cognitive function 44:35 - Adaptability, resilience, evolution 50:23 - Climate change 56:11 - Deep learning 57:12 - Dynamical systems
3/13/2022 · 1 hour, 56 seconds

BI 129 Patryk Laurent: Learning from the Real World

Support the show to get full episodes and join the Discord community. Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world. Patryk's homepage.Twitter: @paklnet.Related papersUnsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network. 0:00 - Intro 2:22 - Patryk's background 8:37 - Importance of diverse skills 16:14 - What is intelligence? 20:34 - Important brain principles 22:36 - Learning from the real world 35:09 - Language models 42:51 - AI contribution to neuroscience 48:22 - Criteria for "real" AI 53:11 - Neuroscience for AI 1:01:20 - What can we ignore about brains? 1:11:45 - Advice to past self
3/2/2022 · 1 hour, 21 minutes, 1 second

BI 128 Hakwan Lau: In Consciousness We Trust

Support the show to get full episodes and join the Discord community. Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness. Hakwan's lab: Consciousness and Metacognition Lab.Twitter: @hakwanlau.Book:In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. 0:00 - Intro 4:37 - In Consciousness We Trust 12:19 - Too many consciousness theories? 19:26 - Philosophy and neuroscience of consciousness 29:00 - Local vs. global theories 31:20 - Perceptual reality monitoring and GANs 42:43 - Functions of consciousness 47:17 - Mental quality space 56:44 - Cognitive maps 1:06:28 - Performance capacity confounds 1:12:28 - Blindsight 1:19:11 - Philosophy vs. empirical work
2/20/2022 · 1 hour, 25 minutes, 40 seconds

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Support the show to get full episodes and join the Discord community. Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram? Ryan Lab.Twitter: @TJRyan_77.Related papersEngram cell connectivity: an evolving substrate for information storage.Forgetting as a form of adaptive engram cell plasticity.Memory and Instinct as a Continuum of Information Storage in The Cognitive Neurosciences.The Bandwagon by Claude Shannon. 0:00 - Intro 4:05 - Response to Randy Gallistel 10:45 - Computation in the brain 14:52 - Instinct and memory 19:37 - Dynamics of memory 21:55 - Wiring vs. connection strength plasticity 24:16 - Changing one's mind 33:09 - Optogenetics and memory experiments 47:24 - Forgetting as learning 1:06:35 - Folk psychological terms 1:08:49 - Memory becoming instinct 1:21:49 - Instinct across the lifetime 1:25:52 - Boundaries of memories 1:28:52 - Subjective experience of memory 1:31:58 - Interdisciplinary research 1:37:32 - Communicating science
2/10/2022 · 1 hour, 42 minutes, 39 seconds

BI 126 Randy Gallistel: Where Is the Engram?

Support the show to get full episodes and join the Discord community. Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. We also talk about some research and theoretical work since then that support his views. Randy's Rutgers website.Book:Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.Related papers:The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation.Evidence for intracellular engram in cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells.The exchange between Randy and John Lisman.The blog post Randy mentions about universal function approximation: The Truth About the [Not So] Universal Approximation Theorem 0:00 - Intro 6:50 - Cognitive science vs. computational neuroscience 13:23 - Brain as computing device 15:45 - Noam Chomsky's influence 17:58 - Memory must be stored within cells 30:58 - Theoretical support for the idea 34:15 - Cerebellum evidence supporting the idea 40:56 - What is the write mechanism? 51:11 - Thoughts on deep learning 1:00:02 - Multiple memory mechanisms? 1:10:56 - The role of plasticity 1:12:06 - Trying to convince molecular biologists
1/31/2022 · 1 hour, 19 minutes, 57 seconds

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Support the show to get full episodes and join the Discord community. Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence. From Neuroscience to Artificially Intelligent Systems (NAISys).Doris:@doristsao.Tsao Lab.Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.Tony:@TonyZador.Zador Lab.A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.Blake:@tyrell_turing.The Learning in Neural Circuits Lab.The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning. 0:00 - Intro 4:16 - Tony Zador 5:38 - Doris Tsao 10:44 - Blake Richards 15:46 - Deductive, inductive, abductive inference 16:32 - NAISys 33:09 - Evolution, development, learning 38:23 - Learning: plasticity vs. dynamical structures 54:13 - Different kinds of understanding 1:03:05 - Do we understand evolution well enough? 1:04:03 - Neuro-AI fad? 1:06:26 - Are your problems bigger or smaller now?
1/19/2022 · 1 hour, 11 minutes, 5 seconds

BI 124 Peter Robin Hiesinger: The Self-Assembling Brain

Support the show to get full episodes and join the Discord community. Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks. Hiesinger Neurogenetics LaboratoryTwitter: @HiesingerLab.Book: The Self-Assembling Brain: How Neural Networks Grow Smarter 0:00 - Intro 3:01 - The Self-Assembling Brain 21:14 - Including growth in networks 27:52 - Information unfolding and algorithmic growth 31:27 - Cellular automata 40:43 - Learning as a continuum of growth 45:01 - Robustness, autonomous agents 49:11 - Metabolism vs. connectivity 58:00 - Feedback at all levels 1:05:32 - Generality vs. specificity 1:10:36 - Whole brain emulation 1:20:38 - Changing view of intelligence 1:26:34 - Popular and wrong vs. unknown and right
1/5/2022 · 1 hour, 39 minutes, 27 seconds

BI 123 Irina Rish: Continual Learning

Support the show to get full episodes and join the Discord community. Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically-plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks. Irina's website.Twitter: @irinarishRelated papers:Beyond Backprop: Online Alternating Minimization with Auxiliary Variables.Towards Continual Reinforcement Learning: A Review and Perspectives.Lifelong learning video tutorial: DLRL Summer School 2021 - Lifelong Learning - Irina Rish. 0:00 - Intro 3:26 - AI for Neuro, Neuro for AI 14:59 - Utility of philosophy 20:51 - Artificial general intelligence 24:34 - Back-propagation alternatives 35:10 - Inductive bias vs. scaling generic architectures 45:51 - Continual learning 59:54 - Neuro-inspired continual learning 1:06:57 - Learning trajectories
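Catastrophic forgetting is easy to demonstrate in miniature: train one model on task A, then on task B, and performance on task A collapses. This is a hedged toy sketch with made-up linear-regression tasks, not an example from Irina's work; the task dimensions, learning rate, and seeds are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two linear regression "tasks" with different ground-truth weights.
def make_task(w_true, n=200):
    X = rng.normal(size=(n, 10))
    return X, X @ w_true

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, steps=200):
    # plain full-batch gradient descent, with no protection against forgetting
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

wA, wB = rng.normal(size=10), rng.normal(size=10)
XA, yA = make_task(wA)
XB, yB = make_task(wB)

w = train(np.zeros(10), XA, yA)
loss_A_before = mse(w, XA, yA)    # low: task A has been learned
w = train(w, XB, yB)
loss_A_after = mse(w, XA, yA)     # large again: task A has been overwritten
```

The continual-learning strategies discussed in the episode (replay, regularization toward old weights, meta-learning) are all ways of keeping `loss_A_after` low while still learning task B.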
12/26/2021 · 1 hour, 18 minutes, 59 seconds

BI 122 Kohitij Kar: Visual Intelligence

Support the show to get full episodes and join the Discord community. Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity, and performing perturbation studies to better understand the networks involved in our visual cognition. VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LABTwitter: @KohitijKar.Related papersEvidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior.Neural population control via deep image synthesis.BI 075 Jim DiCarlo: Reverse Engineering Vision 0:00 - Intro 3:49 - Background 13:51 - Where are we in understanding vision? 19:46 - Benchmarks 21:21 - Falsifying models 23:19 - Modeling vs. experiment speed 29:26 - Simple vs complex models 35:34 - Dorsal visual stream and deep learning 44:10 - Modularity and brain area roles 50:58 - Chemogenetic perturbation, DREADDs 57:10 - Future lab vision, clinical applications 1:03:55 - Controlling visual neurons via image synthesis 1:12:14 - Is it enough to study nonhuman animals? 1:18:55 - Neuro/AI intersection 1:26:54 - What is intelligence?
12/12/2021 · 1 hour, 33 minutes, 18 seconds

BI 121 Mac Shine: Systems Neurobiology

Support the show to get full episodes and join the Discord community. Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence. Shine LabTwitter: @jmacshineRelated papersThe thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics.Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics. 0:00 - Intro 6:32 - Background 10:41 - Holistic approach 18:19 - Importance of thalamus 35:19 - Thalamus circuitry 40:30 - Cerebellum 46:15 - Predictive processing 49:32 - Brain as dynamical attractor landscape 56:48 - System 1 and system 2 1:02:38 - How to think about the thalamus 1:06:45 - Causality in complex systems 1:11:09 - Clinical applications 1:15:02 - Ascending arousal system and neuromodulators 1:27:48 - Implications for AI 1:33:40 - Career serendipity 1:35:12 - Advice
12/2/2021 · 1 hour, 43 minutes, 12 seconds

BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories

Support the show to get full episodes and join the Discord community. James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories for individual events, full of particular details. And through a complementary process, it slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning has yet to achieve. We discuss their theory's prediction that the "correct" consolidation process depends on how much noise and variability there is in the learning environment, how their model handles this, and how it relates to our brain and behavior. James' Janelia page.Weinan's Janelia page.Andrew's website.Twitter: Andrew: @SaxeLabWeinan: @sunw37Paper we discuss:Organizing memories for generalization in complementary learning systems.Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory 0:00 - Intro 3:57 - Guest Intros 15:04 - Organizing memories for generalization 26:48 - Teacher, student, and notebook models 30:51 - Shallow linear networks 33:17 - How to optimize generalization 47:05 - Replay as a generalization regulator 54:57 - Whole greater than sum of its parts 1:05:37 - Unpredictability 1:10:41 - Heuristics 1:13:52 - Theoretical neuroscience for AI 1:29:42 - Current personal thinking
11/21/2021 · 1 hour, 40 minutes, 2 seconds

BI 119 Henry Yin: The Crisis in Neuroscience

Support the show to get full episodes and join the Discord community. Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, each trying to control its output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are externally supplied... by the experimenter. Yin lab at Duke. Twitter: @HenryYin19. Related papers: The Crisis in Neuroscience; Restoring Purpose in Behavior; Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control. 0:00 - Intro 5:40 - Kuhnian crises 9:32 - Control theory and cybernetics 17:23 - How much of brain is control system? 20:33 - Higher order control representation 23:18 - Prediction and control theory 27:36 - The way forward 31:52 - Compatibility with mental representation 38:29 - Teleology 45:53 - The right number of subjects 51:30 - Continuous measurement 57:06 - Artificial intelligence and control theory
11/11/2021 · 1 hour, 6 minutes, 36 seconds

BI 118 Johannes Jäger: Beyond Networks

Support the show to get full episodes and join the Discord community. Johannes (Yogi) is a freelance philosopher, researcher & educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell. Yogi's website and blog: Untethered in the Platonic Realm. Twitter: @yoginho. His youtube course: Beyond Networks: The Evolution of Living Systems. Kevin Mitchell's previous episode: BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness. 0:00 - Intro 4:10 - Yogi's background 11:00 - Beyond Networks - limits of dynamical systems models 16:53 - Kevin Mitchell question 20:12 - Process metaphysics 26:13 - Agency in evolution 40:37 - Agent-environment interaction, open-endedness 45:30 - AI and agency 55:40 - Life and intelligence 59:08 - Deep learning and neuroscience 1:03:21 - Mental autonomy 1:06:10 - William Wimsatt's biopsychological thicket 1:11:23 - Limitations of mechanistic dynamic explanation 1:18:53 - Synthesis versus multi-perspectivism 1:30:31 - Specialization versus generalization
11/1/2021 · 1 hour, 36 minutes, 8 seconds

BI 117 Anil Seth: Being You

Support the show to get full episodes and join the Discord community. Anil and I discuss a range of topics from his book, BEING YOU: A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", David Chalmers' term for our enduring difficulty explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science. Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states in order to control them. We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests. Anil's website. Twitter: @anilkseth. Anil's book: BEING YOU: A New Science of Consciousness. Megan's previous episode: BI 073 Megan Peters: Consciousness and Metacognition. Steve's previous episodes: BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness; BI 107 Steve Fleming: Know Thyself. 0:00 - Intro 6:32 - Megan Peters Q: Communicating Consciousness 15:58 - Human vs. animal consciousness 19:12 - BEING YOU: A New Science of Consciousness 20:55 - Megan Peters Q: Will the hard problem go away? 30:55 - Steve Fleming Q: Contents of consciousness 41:01 - Megan Peters Q: Phenomenal character vs. content 43:46 - Megan Peters Q: Lempels of complexity 52:00 - Complex systems and emergence 55:53 - Psychedelics 1:06:04 - Free will 1:19:10 - Consciousness vs. life vs. intelligence
10/19/2021 · 1 hour, 32 minutes, 9 seconds

BI 116 Michael W. Cole: Empirical Neural Networks

Support the show to get full episodes and join the Discord community. Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), and/or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent. The Cole Neurocognition lab. Twitter: @TheColeLab. Related papers: Discovering the Computational Relevance of Brain Network Organization; Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior. Kendrick Kay's previous episode: BI 026 Kendrick Kay: A Model By Any Other Name. Kanaka Rajan's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors? 0:00 - Intro 4:58 - Cognitive control 7:44 - Rapid Instructed Task Learning and Flexible Hub Theory 15:53 - Patryk Laurent question: free will 26:21 - Kendrick Kay question: fMRI limitations 31:55 - Empirically-estimated neural networks (ENNs) 40:51 - ENNs vs. deep learning 45:30 - Clinical relevance of ENNs 47:32 - Kanaka Rajan question: a proposed collaboration 56:38 - Advantage of modeling multiple regions 1:05:30 - How ENNs work 1:12:48 - How ENNs might benefit artificial intelligence 1:19:04 - The need for causality 1:24:38 - Importance of luck and serendipity
10/12/2021 · 1 hour, 31 minutes, 20 seconds

BI 115 Steve Grossberg: Conscious Mind, Resonant Brain

Support the show to get full episodes and join the Discord community. Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer. Steve's BU website. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory. 0:00 - Intro 2:38 - Conscious Mind, Resonant Brain 11:49 - Theoretical method 15:54 - ART, learning, and consciousness 22:58 - Conscious vs. unconscious resonance 26:56 - György Buzsáki question 30:04 - Remaining mysteries in visual system 35:16 - John Krakauer question 39:12 - Jay McClelland question 51:34 - Any missing principles to explain human cognition? 1:00:16 - Importance of an early good career start 1:06:50 - Has modeling training caught up to experiment training? 1:17:12 - Universal development code
10/2/2021 · 1 hour, 23 minutes, 41 seconds

BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

Support the show to get full episodes and join the Discord community. Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more. Mark's website. Mazviita's University of Edinburgh page. Twitter (Mark): @msprevak. Mazviita's previous Brain Inspired episode: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality. The related book we discuss: The Routledge Handbook of the Computational Mind (2018), Mark Sprevak and Matteo Colombo (Editors). 0:00 - Intro 5:26 - Philosophy contributing to mind science 15:45 - Trend toward hyperspecialization 21:38 - Practice-focused philosophy of science 30:42 - Computationalism 33:05 - Philosophy of mind: identity theory, functionalism 38:18 - Computations as descriptions 41:27 - Pluralism and perspectivalism 54:18 - How much of brain function is computation? 1:02:11 - AI as computationalism 1:13:28 - Naturalizing representations 1:30:08 - Are you doing it right?
9/22/2021 · 1 hour, 38 minutes, 7 seconds

BI 113 David Barack and John Krakauer: Two Views On Cognition

Support the show to get full episodes and join the Discord community. David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher. David's webpage. John's Lab. Twitter: David: @DLBarack; John: @blamlab. Paper: Two Views on the Cognitive Brain. John's previous episodes: BI 025 John Krakauer: Understanding Cognition; BI 077 David and John Krakauer: Part 1; BI 078 David and John Krakauer: Part 2. Timestamps 0:00 - Intro 3:13 - David's philosophy and neuroscience experience 20:01 - Renaissance person 24:36 - John's medical training 31:58 - Two Views on the Cognitive Brain 44:18 - Representation 49:37 - Studying populations of neurons 1:05:17 - What counts as representation 1:18:49 - Does this approach matter for AI?
9/12/2021 · 1 hour, 30 minutes, 38 seconds

BI ViDA Panel Discussion: Deep RL and Dopamine

9/2/2021 · 57 minutes, 25 seconds

BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine

8/26/2021 · 1 hour, 13 minutes, 56 seconds

BI NMA 06: Advancing Neuro Deep Learning Panel

8/19/2021 · 1 hour, 20 minutes, 32 seconds

BI NMA 05: NLP and Generative Models Panel

8/13/2021 · 1 hour, 23 minutes, 50 seconds

BI NMA 04: Deep Learning Basics Panel

8/6/2021 · 59 minutes, 21 seconds

BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness

Erik, Kevin, and I discuss... well, a lot of things. Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insights to emergent phenomena, and the role of agency with respect to what counts as intelligence. Kevin's website. Erik's website. Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik). Books: INNATE – How the Wiring of Our Brains Shapes Who We Are; The Revelations. Papers (Erik): Falsification and consciousness; The emergence of informative higher scales in complex networks; Emergence as the conversion of information: A unifying theory. Timestamps 0:00 - Intro 3:28 - The Revelations - Erik's novel 15:15 - Innate - Kevin's book 22:56 - Cycle of progress 29:05 - Brains for movement or consciousness? 46:46 - Freud's influence 59:18 - Theories of consciousness 1:02:02 - Meaning and emergence 1:05:50 - Reduction in neuroscience 1:23:03 - Micro and macro - emergence 1:29:35 - Agency and intelligence
7/28/2021 · 1 hour, 38 minutes, 4 seconds

BI NMA 03: Stochastic Processes Panel

Panelists: Yael Niv.@yael_nivKonrad [email protected] BI episodes:BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments.BI 014 Konrad Kording: Regulators, Mount Up!Sam [email protected] BI episodes:BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?BI 028 Sam Gershman: Free Energy Principle & Human Machines.Tim [email protected] BI episodes:BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay.BI 024 Tim Behrens: Cognitive Maps. This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. The other panels: First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Second panel, about linear systems, real neurons, and dynamic networks. Fourth panel, about basics in deep learning, including linear deep learning, Pytorch, multi-layer perceptrons, optimization, & regularization. Fifth panel, about “doing more with fewer parameters”: Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
7/22/2021 · 1 hour, 48 seconds

BI NMA 02: Dynamical Systems Panel

Panelists: Adrienne [email protected] [email protected] [email protected] 054 Kanaka Rajan: How Do We Switch Behaviors? This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks. Other panels: First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. Fourth panel, about basics in deep learning, including linear deep learning, Pytorch, multi-layer perceptrons, optimization, & regularization. Fifth panel, about “doing more with fewer parameters”: Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
7/15/2021 · 1 hour, 15 minutes, 28 seconds

BI NMA 01: Machine Learning Panel

Panelists: Athena Akrami: @AthenaAkrami. Demba Ba. Gunnar Blohm: @GunnarBlohm. Kunlin Wei. This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning. Other panels: Second panel, about linear systems, real neurons, and dynamic networks. Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality. Fourth panel, about basics in deep learning, including linear deep learning, Pytorch, multi-layer perceptrons, optimization, & regularization. Fifth panel, about “doing more with fewer parameters”: Convnets, RNNs, attention & transformers, generative models (VAEs & GANs). Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
7/12/2021 · 1 hour, 27 minutes, 12 seconds

BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation

Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more. Catherine's website. Jessica's blog. Twitter: Jess: @tsonj. Related papers: From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence (Catherine); Forms of explanation and understanding for neuroscience and artificial intelligence (Jess). Jess is a postdoc in Chris Summerfield's lab, and Chris and Sam Gershman were on a recent episode. Understanding Scientific Understanding by Henk de Regt. Timestamps: 0:00 - Intro 11:11 - Background and approaches 27:00 - Understanding distinct from explanation 36:00 - Explanations as programs (early explanation) 40:42 - Explaining classes of phenomena 52:05 - Constitutive (neuro) vs. etiological (AI) explanations 1:04:04 - Do nonphysical objects count for explanation? 1:10:51 - Advice for early philosopher/scientists
7/6/2021 · 1 hour, 25 minutes, 2 seconds

BI 109 Mark Bickhard: Interactivism

Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, which can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they function to maintain the organism's far-from-equilibrium thermodynamics for self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern "encoding" version of representation. We also compare Interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle. For related discussions on the foundations (and issues) of representations, check out episode 60 with Michael Rescorla, episode 61 with Jörn Diedrichsen and Niko Kriegeskorte, and especially episode 79 with Romain Brette. Mark's website. Related papers: Interactivism: A manifesto. Plenty of other papers are available via his website. Also mentioned: The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes (2006), Haluk Ögmen and Bruno G. Breitmeyer; Maiken Nedergaard's work on sleep. Timestamps 0:00 - Intro 5:06 - Previous and upcoming book 9:17 - Origins of Mark's thinking 14:31 - Process vs. substance metaphysics 27:10 - Kinds of emergence 32:16 - Normative emergence to normative function and representation 36:33 - Representation in Interactivism 46:07 - Situation knowledge 54:02 - Interactivism vs. Enactivism 1:09:37 - Interactivism vs. Predictive/Bayesian brain 1:17:39 - Interactivism vs. Free energy principle 1:21:56 - Microgenesis 1:33:11 - Implications for neuroscience 1:38:18 - Learning as variation and selection 1:45:07 - Implications for AI 1:55:06 - Everything is a clock 1:58:14 - Is Mark a philosopher?
6/26/2021 · 2 hours, 3 minutes, 43 seconds

BI 108 Grace Lindsay: Models of the Mind

Grace's website. Twitter: @neurograce. Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain. We talked about Grace's work using convolutional neural networks to study vision and attention way back on episode 11. Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to studying minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book. Timestamps 0:00 - Intro 4:19 - Cognition beyond vision 12:38 - Models of the Mind - book overview 14:00 - The good and bad of using math 21:33 - I quiz Grace on her own book 25:03 - Birth of AI and computational approach 38:00 - Rediscovering old math for new neuroscience 41:00 - Topology as good math to know now 45:29 - Physics vs. neuroscience math 49:32 - Neural code and information theory 55:03 - Rate code vs. timing code 59:18 - Graph theory - can you deduce function from structure? 1:06:56 - Multiple realizability 1:13:01 - Grand Unified theories of the brain
6/16/2021 · 1 hour, 26 minutes, 12 seconds

BI 107 Steve Fleming: Know Thyself

Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skills tell us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea. Steve's lab: The MetaLab. Twitter: @smfleming. Steve and Hakwan Lau on episode 99 about consciousness. Papers: Metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive Training. The book: Know Thyself: The Science of Self-Awareness. Timestamps 0:00 - Intro 3:25 - Steve's Career 10:43 - Sub-personal vs. personal metacognition 17:55 - Meditation and metacognition 20:51 - Replay tools for mind-wandering 30:56 - Evolutionary cultural origins of self-awareness 45:02 - Animal metacognition 54:25 - Aging and self-awareness 58:32 - Is more always better? 1:00:41 - Political dogmatism and overconfidence 1:08:56 - Reliance on AI 1:15:15 - Building self-aware AI 1:23:20 - Future evolution of metacognition
6/6/2021 · 1 hour, 29 minutes, 24 seconds

BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity

Jackie and Bob discuss their research and thinking about curiosity. Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she recently has focused on behavioral strategies to exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavioral and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model for curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes. We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie is slightly worried that that will be the time to worry about AI). Jackie's lab: Jacqueline Gottlieb Laboratory at Columbia University. Bob's lab: Neuroscience of Reinforcement Learning and Decision Making. Twitter: Bob: @NRDLab (Jackie's not on twitter). Related papers: Curiosity, information demand and attentional priority; Balancing exploration and exploitation with information and randomization; Deep exploration as a unifying account of explore-exploit behavior. Bob mentions an influential talk by Benjamin Van Roy: Generalization and Exploration via Value Function Randomization. Bob mentions his paper with Anne Collins: Ten simple rules for the computational modeling of behavioral data. Timestamps: 0:00 - Intro 4:15 - Central scientific interests 8:32 - Advent of mathematical models 12:15 - Career exploration vs. exploitation 28:03 - Eye movements and active sensing 35:53 - Status of eye movements in neuroscience 44:16 - Why are we curious? 50:26 - Curiosity vs. Exploration vs. Intrinsic motivation 1:02:35 - Directed vs. random exploration 1:06:16 - Deep exploration 1:12:52 - How to know what to pay attention to 1:19:49 - Does AI need curiosity? 1:26:29 - What trait do you wish you had more of?
5/27/2021 · 1 hour, 31 minutes, 53 seconds

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given earlier assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and therefore are a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking of how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that occur as a result of different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn't share the current neuroscience optimism comparing brains to deep nets. Sanjeev's website. His research group website. His blog: Off The Convex Path. Papers we discuss: On Exact Computation with an Infinitely Wide Neural Net; An Exponential Learning Rate Schedule for Deep Learning. Related: The episode with Andrew Saxe covers related deep learning theory in episode 52. Omri Barak discusses the importance of learning trajectories to understand RNNs in episode 97. Sanjeev mentions Christos Papadimitriou. Timestamps 0:00 - Intro 7:32 - Computational complexity 12:25 - Algorithms 13:45 - Deep learning vs. traditional optimization 17:01 - Evolving view of deep learning 18:33 - Reproducibility crisis in AI? 21:12 - Surprising effectiveness of deep learning 27:50 - "Optimization" isn't the right framework 30:08 - Infinitely wide nets 35:41 - Exponential learning rates 42:39 - Data as the next frontier 44:12 - Neuroscience and AI differences 47:13 - Focus on algorithms, architecture, and objective functions 55:50 - Advice for deep learning theorists 58:05 - Decoding minds
5/17/2021 · 1 hour, 1 minute, 43 seconds

BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight

What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its "wild west" days. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more. John Kounios. Secret Chord Laboratories (David's company). Twitter: @JohnKounios; @NeuroBassDave. John's book (with Mark Beeman) on insight and creativity: The Eureka Factor: Aha Moments, Creative Insight, and the Brain. The papers we discuss or mention: All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians; Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts; Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study. Timestamps 0:00 - Intro 16:20 - Where are we broadly in science of creativity? 18:23 - Origins of creativity research 22:14 - Divergent and convergent thought 26:31 - Secret Chord Labs 32:40 - Familiar surprise 38:55 - The Eureka Factor 42:27 - Dual process model 52:54 - Creativity and jazz expertise 55:53 - "Be creative" behavioral study 59:17 - Stimulating the creative brain 1:02:04 - Brain circuits underlying creativity 1:14:36 - What does this tell us about creativity? 1:16:48 - Intelligence vs. creativity 1:18:25 - Switching between creative modes 1:25:57 - Flow states and insight 1:34:29 - Creativity and insight in AI 1:43:26 - Creative products vs. process
5/7/20211 hour, 50 minutes, 32 seconds
Episode Artwork

BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading

Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material and yet retaining a functioning mind. Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements. Randal A KoeneTwitter: @randalkoeneCarboncopies Foundation.Randal's website.Ken HayworthTwitter: @KennethHayworthBrain Preservation Foundation.YouTube videos. Timestamps 0:00 - Intro 6:14 - What Ken wants 11:22 - What Randal wants 22:29 - Brain preservation 27:18 - Aldehyde stabilized cryopreservation 31:51 - Scan and copy vs. gradual replacement 38:25 - Building a roadmap 49:45 - Limits of current experimental paradigms 53:51 - Our evolved brains 1:06:58 - Counterarguments 1:10:31 - Animal models for whole brain emulation 1:15:01 - Understanding vs. emulating brains 1:22:37 - Current challenges
4/26/20211 hour, 27 minutes, 26 seconds
Episode Artwork

BI 102 Mark Humphries: What Is It Like To Be A Spike?

Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes,  how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking more in depth about some of the work we discuss in this episode! The Humphries Lab.Twitter: @markdhumphriesBook: The Spike: An Epic Journey Through the Brain in 2.1 Seconds.Related papersA spiral attractor network drives rhythmic locomotion. Timestamps: 0:00 - Intro 3:25 - Writing a book 15:37 - Mark's main interest 19:41 - Future explanation of brain/mind 27:00 - Stochasticity and excitation/inhibition balance 36:56 - Dendritic computation for network dynamics 39:10 - Do details matter for AI? 44:06 - Spike failure 51:12 - Dark neurons 1:07:57 - Intrinsic spontaneous activity 1:16:16 - Best scientific moment 1:23:58 - Failure 1:28:45 - Advice
4/16/20211 hour, 32 minutes, 20 seconds
Episode Artwork

BI 101 Steve Potter: Motivating Brains In and Out of Dishes

Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book. The first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own  learning. Potter Lab.Twitter: @stevempotter.The Book: How to Motivate Your Students to Love Learning.The glial cell activity movie. 0:00 - Intro 6:38 - Brain organoids 18:48 - Glial cell plasticity 24:50 - Whole brain emulation 35:28 - Industry vs. academia 45:32 - Intro to book: How To Motivate Your Students To Love Learning 48:29 - Steve's childhood influences 57:21 - Developing one's own intrinsic motivation 1:02:30 - Real-world assignments 1:08:00 - Keys to motivation 1:11:50 - Peer pressure 1:21:16 - Autonomy 1:25:38 - Wikipedia real-world assignment 1:33:12 - Relation to running a lab
4/6/20211 hour, 45 minutes, 22 seconds
Episode Artwork

BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?

We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests: Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not? Timestamps: 0:00 - Intro 5:04 - Andrew Saxe 7:04 - Thomas Naselaris 7:46 - John Krakauer 9:03 - Federico Turkheimer 11:57 - Steve Potter 13:31 - David Krakauer 17:22 - Dean Buonomano 20:28 - Konrad Kording 22:00 - Uri Hasson 23:15 - Rodrigo Quian Quiroga 24:41 - Jim DiCarlo 25:26 - Marcel van Gerven 28:02 - Mazviita Chirimuuta 29:27 - Brad Love 31:23 - Patrick Mayo 32:30 - György Buzsáki 37:07 - Pieter Roelfsema 37:26 - David Poeppel 40:22 - Paul Cisek 44:52 - Talia Konkle 47:03 - Steve Grossberg
3/28/202150 minutes, 3 seconds
Episode Artwork

BI 100.4 Special: What Ideas Are Holding Us Back?

In the 4th installment of our 100th episode celebration, previous guests responded to the question: What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why? As usual, the responses are varied and wonderful! Timestamps: 0:00 - Intro 6:41 - Pieter Roelfsema 7:52 - Grace Lindsay 10:23 - Marcel van Gerven 11:38 - Andrew Saxe 14:05 - Jane Wang 16:50 - Thomas Naselaris 18:14 - Steve Potter 19:18 - Kendrick Kay 22:17 - Blake Richards 27:52 - Jay McClelland 30:13 - Jim DiCarlo 31:17 - Talia Konkle 33:27 - Uri Hasson 35:37 - Wolfgang Maass 38:48 - Paul Cisek 40:41 - Patrick Mayo 41:51 - Konrad Kording 43:22 - David Poeppel 44:22 - Brad Love 46:47 - Rodrigo Quian Quiroga 47:36 - Steve Grossberg 48:47 - Mark Humphries 52:35 - John Krakauer 55:13 - György Buzsáki 59:50 - Stefan Leijnen 1:02:18 - Nathaniel Daw
3/21/20211 hour, 4 minutes, 26 seconds
Episode Artwork

BI 100.3 Special: Can We Scale Up to AGI with Current Tech?

Part 3 in our 100th episode celebration. Previous guests answered the question: Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (eg. GPT-3): Do you think the current trend of scaling compute can lead to human level AGI? If not, what's missing? It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there is differing opinion on what's missing. Timestamps: 0:00 - Intro 3:56 - Wolfgang Maass 5:34 - Paul Humphreys 9:16 - Chris Eliasmith 12:52 - Andrew Saxe 16:25 - Mazviita Chirimuuta 18:11 - Steve Potter 19:21 - Blake Richards 22:33 - Paul Cisek 26:24 - Brad Love 29:12 - Jay McClelland 34:20 - Megan Peters 37:00 - Dean Buonomano 39:48 - Talia Konkle 40:36 - Steve Grossberg 42:40 - Nathaniel Daw 44:02 - Marcel van Gerven 45:28 - Kanaka Rajan 48:25 - John Krakauer 51:05 - Rodrigo Quian Quiroga 53:03 - Grace Lindsay 55:13 - Konrad Kording 57:30 - Jeff Hawkins 1:02:12 - Uri Hasson 1:04:08 - Jess Hamrick 1:06:20 - Thomas Naselaris
3/17/20211 hour, 8 minutes, 43 seconds
Episode Artwork

BI 100.2 Special: What Are the Biggest Challenges and Disagreements?

In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on. Timestamps: 0:00 - Intro 7:10 - Rodrigo Quian Quiroga 8:33 - Mazviita Chirimuuta 9:15 - Chris Eliasmith 12:50 - Jim DiCarlo 13:23 - Paul Cisek 16:42 - Nathaniel Daw 17:58 - Jessica Hamrick 19:07 - Russ Poldrack 20:47 - Pieter Roelfsema 22:21 - Konrad Kording 25:16 - Matt Smith 27:55 - Rafal Bogacz 29:17 - John Krakauer 30:47 - Marcel van Gerven 31:49 - György Buzsáki 35:38 - Thomas Naselaris 36:55 - Steve Grossberg 48:32 - David Poeppel 49:24 - Patrick Mayo 50:31 - Stefan Leijnen 54:24 - David Krakauer 58:13 - Wolfgang Maass 59:13 - Uri Hasson 59:50 - Steve Potter 1:01:50 - Talia Konkle 1:04:30 - Matt Botvinick 1:06:36 - Brad Love 1:09:46 - Jon Brennan 1:19:31 - Grace Lindsay 1:22:28 - Andrew Saxe
3/12/20211 hour, 25 minutes
Episode Artwork

BI 100.1 Special: What Has Improved Your Career or Well-being?

Brain Inspired turns 100 (episodes) today! To celebrate, my patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well being?" See below for links to each previous guest. And away we go... Timestamps: 0:00 - Intro 6:13 - David Krakauer 8:50 - David Poeppel 9:32 - Jay McClelland 11:03 - Patrick Mayo 11:45 - Marcel van Gerven 12:11 - Blake Richards 12:25 - John Krakauer 14:22 - Nicole Rust 15:26 - Megan Peters 17:03 - Andrew Saxe 18:11 - Federico Turkheimer 20:03 - Rodrigo Quian Quiroga 22:03 - Thomas Naselaris 23:09 - Steve Potter 24:37 - Brad Love 27:18 - Steve Grossberg 29:04 - Talia Konkle 29:58 - Paul Cisek 32:28 - Kanaka Rajan 34:33 - Grace Lindsay 35:40 - Konrad Kording 36:30 - Mark Humphries
3/9/202142 minutes, 32 seconds
Episode Artwork

BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness

Hakwan, Steve, and I discuss many issues around the scientific study of consciousness. Steve and Hakwan focus on higher order theories (HOTs) of consciousness, related to metacognition. So we discuss HOTs in particular and their relation to other approaches/theories, the idea of approaching consciousness as a computational problem to be tackled with computational modeling, we talk about the cultural, social, and career aspects of choosing to study something as elusive and controversial as consciousness, we talk about two of the models they're working on now to account for various properties of conscious experience, and, of course, the prospects of consciousness in AI. For more on metacognition and awareness, check out episode 73 with Megan Peters. Hakwan's lab: Consciousness and Metacognition Lab.Steve's lab: The MetaLab.Twitter: @hakwanlau; @smfleming.Hakwan's brief Aeon article: Is consciousness a battle between your beliefs and perceptions?Related papersAn Informal Internet Survey on the Current State of Consciousness Science.Opportunities and challenges for a maturing science of consciousness.What is consciousness, and could machines have it?Understanding the higher-order approach to consciousness.Awareness as inference in a higher-order state space. (Steve's bayesian predictive generative model)Consciousness, Metacognition, & Perceptual Reality Monitoring. (Hakwan's reality-monitoring model a la generative adversarial networks) Timestamps 0:00 - Intro 7:25 - Steve's upcoming book 8:40 - Challenges to study consciousness 15:50 - Gurus and backscratchers 23:58 - Will the problem of consciousness disappear? 27:52 - Will an explanation feel intuitive? 29:54 - What do you want to be true? 
38:35 - Lucid dreaming 40:55 - Higher order theories 50:13 - Reality monitoring model of consciousness 1:00:15 - Higher order state space model of consciousness 1:05:50 - Comparing their models 1:10:47 - Machine consciousness 1:15:30 - Nature of first order representations 1:18:20 - Consciousness prior (Yoshua Bengio) 1:20:20 - Function of consciousness 1:31:57 - Legacy 1:40:55 - Current projects
2/28/20211 hour, 46 minutes, 35 seconds
Episode Artwork

BI 098 Brian Christian: The Alignment Problem

Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that will compromise our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about: The history of machine learning and how we got to this point;Some methods researchers are creating to understand what's being represented in neural nets and how they generate their output;Some modern proposed solutions to the alignment problem, like programming the machines to learn our preferences so they can help achieve those preferences - an idea called inverse reinforcement learning;The thorny issue of accurately knowing our own values: if we get those wrong, will machines also get it wrong? Links: Brian's website.Twitter: @brianchristian.The Alignment Problem: Machine Learning and Human Values.Related papersNorbert Wiener from 1960: Some Moral and Technical Consequences of Automation. Timestamps: 4:22 - Increased work on AI ethics 8:59 - The Alignment Problem overview 12:36 - Stories as important for intelligence 16:50 - What is the alignment problem 17:37 - Who works on the alignment problem? 25:22 - AI ethics degree? 29:03 - Human values 31:33 - AI alignment and evolution 37:10 - Knowing our own values? 46:27 - What have we learned about ourselves? 58:51 - Interestingness 1:00:53 - Inverse RL for value alignment 1:04:50 - Current progress 1:10:08 - Developmental psychology 1:17:36 - Models as the danger 1:25:08 - How worried are the experts?
2/18/20211 hour, 32 minutes, 38 seconds
Episode Artwork

BI 097 Omri Barak and David Sussillo: Dynamics and Structure

Omri, David and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss: The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space;Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);The difference between classical approaches to modeling brains and the machine learning approach;The concept of universality - that the variety of artificial RNNs and natural RNNs (brains) adhere to some similar dynamical structure despite differences in the computations they perform;How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state).David was on episode 5, for a more introductory episode on dynamics, RNNs, and brains. Barak LabTwitter: @SussilloDavidThe papers we discuss or mention:Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.Computation Through Neural Population Dynamics.Implementing Inductive bias for different navigation tasks through diverse RNN attractors.Dynamics of random recurrent networks with correlated low-rank structure.Quality of internal representation shapes learning performance in feedback neural networks.Feigenbaum's universality constant original paper: Feigenbaum, M. J. 
(1976) "Universality in complex discrete dynamics", Los Alamos Theoretical Division Annual Report 1975-1976TalksUniversality and individuality in neural dynamics across large populations of recurrent networks.World Wide Theoretical Neuroscience Seminar: Omri Barak, January 6, 2021 Timestamps: 0:00 - Intro 5:41 - Best scientific moment 9:37 - Why do you do what you do? 13:21 - Computation via dynamics 19:12 - Evolution of thinking about RNNs and brains 26:22 - RNNs vs. minds 31:43 - Classical computational modeling vs. machine learning modeling approach 35:46 - What are models good for? 43:08 - Ecological task validity with respect to using RNNs as models 46:27 - Optimization vs. learning 49:11 - Universality 1:00:47 - Solutions dictated by tasks 1:04:51 - Multiple solutions to the same task 1:11:43 - Direct fit (Uri Hasson) 1:19:09 - Thinking about the bigger picture
2/8/20211 hour, 23 minutes, 57 seconds
Episode Artwork

BI 096 Keisuke Fukuda and Josh Cosman: Forking Paths

K, Josh, and I were postdocs together in Jeff Schall's and Geoff Woodman's labs. K and Josh had backgrounds in psychology and were getting their first experience with neurophysiology, recording single neuron activity in awake behaving primates. This episode is a discussion surrounding their reflections and perspectives on neuroscience and psychology, given their backgrounds and experience (we reference episode 84 with György Buzsáki and David Poeppel). We also talk about their divergent paths - K stayed in academia and runs an EEG lab studying human decision-making and memory, and Josh left academia and has worked for three different pharmaceutical and tech companies. So this episode doesn't get into gritty science questions, but is a light discussion about the state of neuroscience, psychology, and AI, and reflections on academia and industry, life in lab, and plenty more. The Fukuda Lab.Josh's website.Twitter: @KeisukeFukuda4 Time stamps 0:00 - Intro 4:30 - K intro 5:30 - Josh Intro 10:16 - Academia vs. industry 16:01 - Concern with legacy 19:57 - Best scientific moment 24:15 - Experiencing neuroscience as a psychologist 27:20 - Neuroscience as a tool 30:38 - Brain/mind divide 33:27 - Shallow vs. deep knowledge in academia and industry  36:05 - Autonomy in industry 42:20 - Is this a turning point in neuroscience? 46:54 - Deep learning revolution 49:34 - Deep nets to understand brains 54:54 - Psychology vs. neuroscience 1:06:42 - Is language sufficient? 1:11:33 - Human-level AI 1:13:53 - How will history view our era of neuroscience? 1:23:28 - What would you have done differently? 1:26:46 - Something you wish you knew
1/29/20211 hour, 34 minutes, 10 seconds
Episode Artwork

BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?

It's generally agreed machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. Should computer scientists and engineers care about how brains compute, or will it just slow them down, for example? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering the past and present. This discussion also leads into related topics, like the role of prediction versus understanding, AGI, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe. Also, check out Sam's previous appearance on the podcast. Chris's lab: Human Information Processing lab.Sam's lab: Computational Cognitive Neuroscience Lab.Twitter: @gershbrain; @summerfieldlabPapers we discuss, mention, or that are related:If deep learning is the answer, then what is the question?Neuroscience-Inspired Artificial Intelligence.Building Machines that Learn and Think Like People. 0:00 - Intro 5:00 - Good ol' days 13:50 - AI for neuro, neuro for AI 24:25 - Intellectual diversity in AI 28:40 - Role of philosophy 30:20 - Operationalization and benchmarks 36:07 - Prediction vs. understanding 42:48 - Role of humans in the loop 46:20 - Value alignment 51:08 - Andrew Saxe question 53:16 - Explainable AI 58:55 - Generalization 1:01:09 - What has AI revealed about us? 1:09:38 - Neuro for AI 1:20:30 - Concluding remarks
1/19/20211 hour, 25 minutes, 28 seconds
Episode Artwork

BI 094 Alison Gopnik: Child-Inspired AI

Alison and I discuss her work to accelerate learning and thus improve AI by studying how children learn, as Alan Turing suggested in his famous 1950 paper. Children learn via imitation, by building abstract causal models, and through active learning with a high exploration/exploitation ratio. We also discuss child consciousness, psychedelics, the concept of life history, the role of grandparents and elders, and lots more. Alison's Website.Cognitive Development and Learning Lab.Twitter: @AlisonGopnik.Related papers:Childhood as a solution to explore-exploit tensions.The Aeon article about grandparents, children, and evolution: Vulnerable Yet Vital.Books:The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children.The Scientist in the Crib: What Early Learning Tells Us About the Mind.The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life. Take-home points: Children learn by imitation, and not just unthinking imitation. They pay attention to and evaluate the intentions of others and judge whether a person seems to be a reliable source of information. That is, they learn by sophisticated socially-constrained imitation.Children build abstract causal models of the world. This allows them to simulate potential outcomes and test their actions against those simulations, accelerating learning.Children keep their foot on the exploration pedal, actively learning by exploring a wide spectrum of actions to determine what works. As we age, our exploratory cognition decreases, and we begin to exploit more what we've learned. 
Timestamps 0:00 - Intro 4:40 - State of the field 13:30 - Importance of learning 20:12 - Turing's suggestion 22:49 - Patience for one's own ideas 28:53 - Learning via imitation 31:57 - Learning abstract causal models 41:42 - Life history 43:22 - Learning via exploration 56:19 - Explore-exploit dichotomy 58:32 - Synaptic pruning 1:00:19 - Breakthrough research in careers 1:04:31 - Role of elders 1:09:08 - Child consciousness 1:11:41 - Psychedelics as child-like brain 1:16:00 - Build consciousness into AI?
1/8/20211 hour, 19 minutes, 13 seconds
Episode Artwork

BI 093 Dileep George: Inference in Brain Microcircuits

Dileep and I discuss his theoretical account of how the thalamus and cortex work together to implement visual inference. We talked previously about his Recursive Cortical Network (RCN) approach to visual inference, which is a probabilistic graph model that can solve hard problems like CAPTCHAs, and more recently we talked about using his RCNs with cloned units to account for cognitive maps related to the hippocampus. On this episode, we walk through how RCNs can map onto thalamo-cortical circuits so a given cortical column can signal whether it believes some concept or feature is present in the world, based on bottom-up incoming sensory evidence, top-down attention, and lateral related features. We also briefly compare this bio-RCN version with Randy O'Reilly's Deep Predictive Learning account of thalamo-cortical circuitry. Vicarious website - Dileep's AGI robotics company.Twitter: @dileeplearningThe papers we discuss or mention:A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model.From CAPTCHA to Commonsense: How Brain Can Teach Us About Artificial Intelligence.Probabilistic graphical models.Hierarchical temporal memory. Time Stamps: 0:00 - Intro 5:18 - Levels of abstraction 7:54 - AGI vs. AHI vs. AUI 12:18 - Ideas and failures in startups 16:51 - Thalamic cortical circuitry computation  22:07 - Recursive cortical networks 23:34 - bio-RCN 27:48 - Cortical column as binary random variable 33:37 - Clonal neuron roles 39:23 - Processing cascade 41:10 - Thalamus 47:18 - Attention as explaining away 50:51 - Comparison with O'Reilly's predictive coding framework 55:39 - Subjective contour effect 1:01:20 - Necker cube
12/29/20201 hour, 6 minutes, 31 seconds
Episode Artwork

BI 092 Russ Poldrack: Cognitive Ontologies

Russ and I discuss cognitive ontologies - the "parts" of the mind and their relations - as an ongoing dilemma of how to map onto each other what we know about brains and what we know about minds. We talk about whether we have the right ontology now, how he uses both top-down and data-driven approaches to analyze and refine current ontologies, and how all this has affected his own thinking about minds. We also discuss some of the current meta-science issues and challenges in neuroscience and AI, and Russ answers guest questions from Kendrick Kay and David Poeppel. Russ’s website.Poldrack Lab.Stanford Center For Reproducible Neuroscience.Twitter: @russpoldrack.Book:The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts.The papers we discuss or mention:Atlases of cognition with large-scale human brain mapping.Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed?From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.Uncovering the structure of self-regulation through data-driven ontology discoveryTalks:Reproducibility: NeuroHackademy: Russell Poldrack - Reproducibility in fMRI: What is the problem?Cognitive Ontology: Cognitive Ontologies, from Top to BottomA good series of talks about cognitive ontologies: Online Seminar Series: Problem of Cognitive Ontology. Some take-home points: Our folk psychological cognitive ontology hasn't changed much since early Greek Philosophy, and especially since William James wrote about attention, consciousness, and so on.Using encoding models, we can predict brain responses pretty well based on what task a subject is performing or what "cognitive function" a subject is engaging, at least to a coarse approximation.Using a data-driven approach has potential to help determine mental structure, but important human decisions must still be made regarding how exactly to divide up the various "parts" of the mind. 
Time points 0:00 - Introduction 5:59 - Meta-science issues 19:00 - Kendrick Kay question 23:00 - State of the field 30:06 - fMRI for understanding minds 35:13 - Computational mind 42:10 - Cognitive ontology 45:17 - Cognitive Atlas 52:05 - David Poeppel question 57:00 - Does ontology matter? 59:18 - Data-driven ontology 1:12:29 - Dynamical systems approach 1:16:25 - György Buzsáki's inside-out approach 1:22:26 - Ontology for AI 1:27:39 - Deep learning hype 
12/15/20201 hour, 42 minutes, 12 seconds
Episode Artwork

BI 091 Carsen Stringer: Understanding 40,000 Neurons

Carsen and I discuss how she uses 2-photon calcium imaging data from over 10,000 neurons to understand the information processing of such large neural population activity. We talk about the tools she makes and uses to analyze the data, and the type of high-dimensional neural activity structure they found, which seems to allow efficient and robust information processing. We also talk about how these findings may help build better deep learning networks, and Carsen's thoughts on how to improve the diversity, inclusivity, and equality in neuroscience research labs. Guest question from Matt Smith. Stringer Lab.Twitter: @computingnature.The papers we discuss or mention:High-dimensional geometry of population responses in visual cortexSpontaneous behaviors drive multidimensional, brain-wide population activity. Timestamps: 0:00 - Intro 5:51 - Recording > 10k neurons 8:51 - 2-photon calcium imaging 14:56 - Balancing scientific questions and tools 21:16 - Unsupervised learning tools and rastermap 26:14 - Manifolds 32:13 - Matt Smith question 37:06 - Dimensionality of neural activity 58:51 - Future plans 1:00:30- What can AI learn from this? 1:13:26 - Diversity, inclusivity, equality
12/4/20201 hour, 28 minutes, 19 seconds
Episode Artwork

BI 090 Chris Eliasmith: Building the Human Brain

Chris and I discuss his Spaun large scale model of the human brain (Semantic Pointer Architecture Unified Network), as detailed in his book How to Build a Brain. We talk about his philosophical approach, how Spaun compares to Randy O'Reilly's Leabra networks, the Applied Brain Research Chris co-founded, and I have guest questions from Brad Aimone, Steve Potter, and Randy O'Reilly. Chris's website.Applied Brain Research.The book: How to Build a Brain.Nengo (you can run Spaun).Paper summary of Spaun: A large-scale model of the functioning brain. Some takeaways: Spaun is an embodied fully functional cognitive architecture with one eye for task instructions and an arm for responses.Chris uses elements from symbolic, connectionist, and dynamical systems approaches in cognitive science.The neural engineering framework (NEF) is how functions get instantiated in spiking neural networks.The semantic pointer architecture (SPA) is how representations are stored and transformed - i.e. the symbolic-like cognitive processing. Time Points: 0:00 - Intro 2:29 - Sense of awe 6:20 - Large-scale models 9:24 - Descriptive pragmatism 15:43 - Asking better questions 22:48 - Brad Aimone question: Neural engineering framework 29:07 - Engineering to build vs. understand 32:12 - Why is AI world not interested in brains/minds? 37:09 - Steve Potter neuromorphics question 44:51 - Spaun 49:33 - Semantic Pointer Architecture 56:04 - Representations 58:21 - Randy O'Reilly question 1 1:07:33 - Randy O'Reilly question 2 1:10:31 - Spaun vs. Leabra 1:32:43 - How would Chris start over?
11/23/20201 hour, 38 minutes, 57 seconds
Episode Artwork

BI 089 Matt Smith: Drifting Cognition

Matt and I discuss how cognition and behavior drifts over the course of minutes and hours, and how global brain activity drifts with it. How does the brain continue to produce steady perception and action in the midst of such drift? We also talk about how to think about variability in neural activity. How much of it is noise and how much of it is hidden important activity? Finally, we discuss the effect of recording more and more neurons simultaneously, collecting bigger and bigger datasets, plus guest questions from Adam Snyder and Patrick Mayo. Smith Lab.Twitter: @SmithLabNeuro.Related:Slow drift of neural activity as a signature of impulsivity in macaque visual and prefrontal cortex.Artwork by Melissa Neely Take home points: The “noise” in the variability of neural activity is likely just activity devoted to processing other things.Recording lots of neurons simultaneously helps resolve the question of what’s noise and how much information is in a population of neurons.There’s a neural signature of the behavioral “slow drift” of our internal cognitive state.The neural signature is global, and it’s an open question how the brain compensates to produce steady perception and action. Timestamps: 0:00 - Intro 4:35 - Adam Snyder question  15:26 - Multi-electrode recordings  17:48 - What is noise in the brain?  23:55 - How many neurons is enough?  27:43 - Patrick Mayo question  33:17 - Slow drift  54:10 - Impulsivity  57:32 - How does drift happen?  59:49 - Relation to AI  1:06:58 - What AI and neuro can teach each other  1:10:02 - Ecologically valid behavior  1:14:39 - Brain mechanisms vs. mind  1:17:36 - Levels of description  1:21:14 - Hard things to make in AI  1:22:48 - Best scientific moment 
11/12/2020 · 1 hour, 26 minutes, 52 seconds

BI 088 Randy O’Reilly: Simulating the Human Brain

Randy and I discuss his Leabra cognitive architecture that aims to simulate the human brain, plus his current theory about how a loop between cortical regions and the thalamus could implement predictive learning and thus solve how we learn with so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more. Computational Cognitive Neuroscience Laboratory. The papers we discuss or mention: The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win! Deep Predictive Learning in Neocortex and Pulvinar. Unraveling the Mysteries of Motivation. His YouTube series detailing the theory and workings of Leabra: Computational Cognitive Neuroscience. The free textbook: Computational Cognitive Neuroscience. A few take-home points: Leabra has been a slow, incremental project, inspired in part by Allen Newell's suggested approach. Randy began by developing a learning algorithm that incorporated both kinds of biological learning (error-driven and associative). Leabra's core is 3 brain areas - frontal cortex, parietal cortex, and hippocampus - and has grown from there. There's a constant balance between biological realism and computational feasibility. It's important that a cognitive architecture address multiple levels - micro-scale, macro-scale, mechanisms, functions, and so on. Deep predictive learning is a possible brain mechanism whereby predictions from higher-layer cortex precede input from lower-layer cortex in the thalamus, where an error is computed and used to drive learning. Randy believes our metacognitive ability to know what we do and don't know is a key next function to build into AI. Timestamps: 0:00 - Intro 3:54 - Skip Intro 6:20 - Being in awe 18:57 - How current AI can inform neuro 21:56 - Anna Schapiro question - how current neuro can inform AI 29:20 - Learned vs. innate cognition 33:43 - Leabra 38:33 - Developing Leabra 40:30 - Macroscale 42:33 - Thalamus as microscale 43:22 - Thalamocortical circuitry 47:25 - Deep predictive learning 56:18 - Deep predictive learning vs. backprop 1:01:56 - 10 Hz learning cycle 1:04:58 - Better theory vs. more data 1:08:59 - Leabra vs. Spaun 1:13:59 - Biological realism 1:21:54 - Bottom-up inspiration 1:27:26 - Biggest mistake in Leabra 1:32:14 - AI consciousness 1:34:45 - How would Randy begin again?
11/2/2020 · 1 hour, 39 minutes, 8 seconds

BI 087 Dileep George: Cloning for Cognitive Maps

When a waiter hands me the bill, how do I know whether to pay it myself or let my date pay? On this episode, I get a progress update from Dileep on his company, Vicarious, since Dileep's last episode. We also talk broadly about his experience running Vicarious to develop AGI and robotics. Then we turn to his latest brain-inspired AI efforts using cloned structured probabilistic graph models to develop an account of how the hippocampus makes a model of the world and represents our cognitive maps in different contexts, so we can simulate possible outcomes to choose how to act. Special guest questions from Brad Love (episode 70: How We Learn Concepts). Vicarious website - Dileep's AGI robotics company. Twitter: @dileeplearning. Papers we discuss: Learning cognitive maps as structured graphs for vicarious evaluation. A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model. Probabilistic graphical models. Hierarchical temporal memory. Time stamps: 0:00 - Intro 3:00 - Skip Intro 4:00 - Previous Dileep episode 10:22 - Is brain-inspired AI over-hyped? 14:38 - Competition in robotics field 15:53 - Vicarious robotics 22:12 - Choosing what product to make 28:13 - Running a startup 30:52 - Old brain vs. new brain 37:53 - Learning cognitive maps as structured graphs 41:59 - Graphical models 47:10 - Cloning and merging, hippocampus 53:36 - Brad Love question 1 1:00:39 - Brad Love question 2 1:02:41 - Task examples 1:11:56 - What does hippocampus do? 1:14:14 - Intro to thalamic cortical microcircuit 1:15:21 - What AI folks think of brains 1:16:57 - Which levels inform which levels 1:20:02 - Advice for an AI startup
10/23/2020 · 1 hour, 23 minutes

BI 086 Ken Stanley: Open-Endedness

Ken and I discuss open-endedness, the pursuit of ambitious goals by seeking novelty and interesting products instead of advancing directly toward defined objectives. We talk about evolution as a prime example of an open-ended system that has produced astounding organisms, Ken relates how open-endedness could help advance artificial intelligence and neuroscience, we discuss a range of topics related to the general concept of open-endedness, and Ken takes a couple of questions from Stefan Leijnen and Melanie Mitchell. Related: Ken's website. Twitter: @kenneth0stanley. The book: Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth Stanley and Joel Lehman. Papers: Evolving Neural Networks Through Augmenting Topologies (2002). Minimal Criterion Coevolution: A New Approach to Open-Ended Search. Some key take-aways: Many of the best inventions were not the result of trying to achieve a specific objective. Open-endedness is the pursuit of ambitious advances without a clearly defined objective. Evolution is a quintessential example of an open-ended process: it produces a vast array of complex beings by searching the space of possible organisms, constrained by the environment, survival, and reproduction. Perhaps the key to developing artificial general intelligence is following an open-ended path rather than pursuing objectives (solving the same old benchmark tasks, etc.). 0:00 - Intro 3:46 - Skip Intro 4:30 - Evolution as an open-ended process 8:25 - Why Greatness Cannot Be Planned 20:46 - Open-endedness in AI 29:35 - Constraints vs. objectives 36:26 - The adjacent possible 41:22 - Serendipity 44:33 - Stefan Leijnen question 53:11 - Melanie Mitchell question 1:00:32 - Efficiency 1:02:13 - Gentle Earth 1:05:25 - Learning vs. evolution 1:10:53 - AGI 1:14:06 - Neuroscience, AI, and open-endedness 1:26:06 - OpenAI
10/12/2020 · 1 hour, 35 minutes, 43 seconds

BI 085 Ida Momennejad: Learning Representations

Ida and I discuss the current landscape of reinforcement learning in both natural and artificial intelligence, and how the old story of two RL systems in brains - model-free and model-based - is giving way to a more nuanced story of these two systems constantly interacting and additional RL strategies between model-free and model-based to drive the vast repertoire of our habits and goal-directed behaviors. We discuss Ida’s work on one of those “in-between” strategies, the successor representation RL strategy, which maps onto brain activity and accounts for behavior. We also discuss her interesting background and how it affects her outlook and research pursuit, and the role philosophy has played and continues to play in her thought processes. Related links: Ida’s website.Twitter: @criticalneuro.A nice review of what we discuss:Learning Structures: Predictive Representations, Replay, and Generalization. Time stamps: 0:00 - Intro 4:50 - Skip intro 9:58 - Core way of thinking 19:58 - Disillusionment 27:22 - Role of philosophy 34:51 - Optimal individual learning strategy 39:28 - Microsoft job 44:48 - Field of reinforcement learning 51:18 - Learning vs. innate priors 59:47 - Incorporating other cognition into RL 1:08:24 - Evolution 1:12:46 - Model-free and model-based RL 1:19:02 - Successor representation 1:26:48 - Are we running all algorithms all the time? 1:28:38 - Heuristics and intuition 1:33:48 - Levels of analysis 1:37:28 - Consciousness
9/30/2020 · 1 hour, 43 minutes, 41 seconds

BI 084 György Buzsáki and David Poeppel

David, Gyuri, and I discuss the issues they argue for in their back-and-forth commentaries about the importance of neuroscience and psychology, or implementation-level and computational-level, to advance our understanding of brains and minds - and the names we give to the things we study. Gyuri believes it's time we use what we know and discover about brain mechanisms to better describe the psychological concepts we refer to as explanations for minds; David believes the psychological concepts are constantly being refined and are just as valid as objects of study to understand minds. They both agree these are important and enjoyable topics to debate. Also, special guest questions from Paul Cisek and John Krakauer. Related: Buzsáki lab; Poeppel lab. Twitter: @davidpoeppel. The papers we discuss or mention: Calling Names by Christophe Bernard. The Brain–Cognitive Behavior Problem: A Retrospective by György Buzsáki. Against the Epistemological Primacy of the Hardware: The Brain from Inside Out, Turned Upside Down by David Poeppel. Books: The Brain from Inside Out by György Buzsáki. The Cognitive Neurosciences (edited by David Poeppel et al). Timeline: 0:00 - Intro 5:31 - Skip intro 8:42 - Gyuri and David summaries 25:45 - Guest questions 36:25 - Gyuri new language 49:41 - Language and oscillations 53:52 - Do we know what cognitive functions we're looking for? 58:25 - Psychiatry 1:00:25 - Steve Grossberg approach 1:02:12 - Neuroethology 1:09:08 - AI as tabula rasa 1:17:40 - What's at stake? 1:36:20 - Will the space between neuroscience and psychology disappear?
9/15/2020 · 1 hour, 56 minutes, 1 second

BI 083 Jane Wang: Evolving Altruism in AI

Jane and I discuss the relationship between AI and neuroscience (cognitive science, etc.), from her perspective at DeepMind after a career researching natural intelligence. We also talk about her meta-reinforcement learning work that connects deep reinforcement learning with known brain circuitry and processes, and finally we talk about her recent work using evolutionary strategies to develop altruism and cooperation among the agents in a multi-agent reinforcement learning environment. Related: Jane's website. Twitter: @janexwang. The papers we discuss or mention: Learning to reinforcement learn. Blog post with a link to the paper: Prefrontal cortex as a meta-reinforcement learning system. Deep Reinforcement Learning and its Neuroscientific Implications. Evolving Intrinsic Motivations for Altruistic Behavior. Books she recommended: Human Compatible: AI and the Problem of Control, by Stuart Russell. Algorithms to Live By, by Brian Christian and Tom Griffiths. Timeline: 0:00 - Intro 3:36 - Skip Intro 4:45 - Transition to DeepMind 19:56 - Changing perspectives on neuroscience 24:49 - Is neuroscience useful for AI? 33:11 - Is deep learning hitting a wall? 35:57 - Meta-reinforcement learning 52:00 - Altruism in multi-agent RL
9/5/2020 · 1 hour, 13 minutes, 16 seconds

BI 082 Steve Grossberg: Adaptive Resonance Theory

Steve and I discuss his long and productive career as a theoretical neuroscientist. We cover his tried-and-true method of taking a large body of psychological behavioral findings, determining how they fit together and what's paradoxical about them, developing design principles, theories, and models from that body of data, and using experimental neuroscience to inform and confirm his model predictions. We talk about his Adaptive Resonance Theory (ART), which describes how our brains are self-organizing, adaptive, and deal with changing environments. We also talk about his complementary computing paradigm, which describes how two systems can complement each other to create emergent properties neither system can create on its own, how the resonant states in ART support consciousness, his place in the history of both neuroscience and AI, and quite a bit more. Related: Steve's BU website. Some papers we discuss or mention (much more on his website): Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Towards solving the Hard Problem of Consciousness: The varieties of brain resonances and the conscious experiences that they support. A Path Toward Explainable AI and Autonomous Adaptive Intelligence: Deep Learning, Adaptive Resonance, and Models of Perception, Emotion, and Action. Time stamps: 0:00 - Intro 5:48 - Skip Intro 9:42 - Beginnings 18:40 - Modeling method 44:05 - Physics vs. neuroscience 54:50 - Historical credit for Hopfield network 1:03:40 - Steve's upcoming book 1:08:24 - Being shy 1:11:21 - Stability-plasticity dilemma 1:14:10 - Adaptive resonance theory 1:18:25 - ART matching rule 1:21:35 - Consciousness as resonance 1:29:15 - Complementary computing 1:38:58 - Vigilance to re-orient 1:54:58 - Deep learning vs. ART
8/26/2020 · 2 hours, 15 minutes, 38 seconds

BI 081 Pieter Roelfsema: Brain-propagation

Pieter and I discuss his ongoing quest to figure out how the brain implements learning that solves the credit assignment problem, like backpropagation does for neural networks. We also talk about his work to understand how we perceive individual objects in a crowded scene, his neurophysiological recordings in support of the global neuronal workspace hypothesis of consciousness, and the visual prosthetic device he’s developing to cure blindness by directly stimulating early visual cortex.  Related: Pieter's lab website.Twitter: @Pieters_Tweet.His startup to cure blindness: Phosphoenix.Talk:Seeing and thinking with your visual brainThe papers we discuss or mention:Control of synaptic plasticity in deep cortical networks.A Biologically Plausible Learning Rule for Deep Learning in the Brain.Conscious Processing and the Global Neuronal Workspace Hypothesis.Pieter's neuro-origin book inspiration (like so many others): Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter.
8/16/2020 · 1 hour, 22 minutes, 5 seconds

BI 080 Daeyeol Lee: Birth of Intelligence

Daeyeol and I discuss his book Birth of Intelligence: From RNA to Artificial Intelligence, which argues intelligence is a function of and inseparable from life, bound by self-replication and evolution. The book covers a ton of neuroscience related to decision making and learning, though we focused on a few theoretical frameworks and ideas like division of labor and principal-agent relationships to understand how our brains and minds are related to our genes, how AI is related to humans (for now), metacognition, consciousness, and a ton more. Related: Lee Lab for Learning and Decision Making.Twitter: @daeyeol_lee.Daeyeol’s side passion, creating music.His book: Birth of Intelligence: From RNA to Artificial Intelligence.
8/6/2020 · 1 hour, 31 minutes, 9 seconds

BI 079 Romain Brette: The Coding Brain Metaphor

Romain and I discuss his theoretical/philosophical work examining how neuroscientists rampantly misuse the word "code" when making claims about information processing in brains. We talk about the coding metaphor, various notions of information, the different roles and facets of mental representation, perceptual invariance, subjective physics, process versus substance metaphysics, and the experience of writing a Behavioral and Brain Sciences article (spoiler: it's a demanding yet rewarding experience). Romain's website. Twitter: @RomainBrette. The papers we discuss or mention: Philosophy of the spike: rate-based vs. spike-based theories of the brain. Is coding a relevant metaphor for the brain? (bioRxiv link). Subjective physics. Related works: The Ecological Approach to Visual Perception by James Gibson. Why Red Doesn't Sound Like a Bell by Kevin O'Regan.
7/27/2020 · 1 hour, 19 minutes, 4 seconds

BI 078 David and John Krakauer: Part 2

In this second part of our conversation, David, John, and I continue to discuss the role of complexity science in the study of intelligence, brains, and minds. We also get into functionalism and multiple realizability, dynamical systems explanations, the role of time in thinking, and more. Be sure to listen to the first part, which lays the foundation for what we discuss in this episode. Notes: David's page at the Santa Fe Institute. John's BLAM lab website. Follow SFI on twitter: @sfiscience. BLAM on Twitter: @blamlab. Related Krakauer stuff: At the limits of thought, an Aeon article by David. Complex Time: Cognitive Regime Shift II - When/Why/How the Brain Breaks, a video conversation with both John and David. Complexity Podcast. Books mentioned: Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute, ed. David Krakauer. Understanding Scientific Understanding by Henk de Regt. The Idea of the Brain by Matthew Cobb. New Dark Age: Technology and the End of the Future by James Bridle. The River of Consciousness by Oliver Sacks.
7/17/2020 · 1 hour, 14 minutes, 37 seconds

BI 077 David and John Krakauer: Part 1

David, John, and I discuss the role of complexity science in the study of intelligence. In this first part, we talk about complexity itself, its role in neuroscience, emergence and levels of explanation, understanding, epistemology and ontology, and really quite a bit more. Notes: David’s page at the Santa Fe Institute.John’s BLAM lab website.Follow SFI on twitter: @sfiscience.BLAM on Twitter: @blamlab Related Krakauer stuff:At the limits of thought. An Aeon article by DavidComplex Time: Cognitive Regime Shift II - When/Why/How the Brain Breaks. A video conversation with both John and David.Complexity Podcast.Books mentioned:Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute, ed. David Krakauer.Understanding Scientific Understanding by Henk de Regt.The Idea of the Brain by Matthew Cobb.New Dark Age: Technology and the End of the Future by James Bridle.The River of Consciousness by Oliver Sacks.
7/14/2020 · 1 hour, 33 minutes, 4 seconds

BI 076 Olaf Sporns: Network Neuroscience

Olaf and I discuss the explosion of network neuroscience, which uses network science tools to map the structure (connectome) and activity of the brain at various spatial and temporal scales. We talk about the possibility of bridging physical and functional connectivity via communication dynamics, and about the relation between network science and artificial neural networks and plenty more. Notes: Computational Cognitive Neuroscience Laboratory.Twitter: @spornslabHis excellent book: Networks of the Brain.Related papers:Network Neuroscience.The economy of brain network organization.Communication dynamics in complex brain networks.
7/4/2020 · 1 hour, 45 minutes, 57 seconds

BI 075 Jim DiCarlo: Reverse Engineering Vision

Jim and I discuss his reverse engineering approach to visual intelligence, using deep models optimized to perform object recognition tasks. We talk about the history of his work developing models to match the neural activity in the ventral visual stream, how deep learning connects with those models, and some of his recent work: adding recurrence to the models to account for more difficult object recognition, using unsupervised learning to account for plasticity in the visual stream, and controlling neural activity  by creating specific images for subjects to view. Notes: The DiCarlo Lab at MIT.Related papers:Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks.Fast recurrent processing via ventral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition.Unsupervised changes in core object recognition behavioral performance are accurately predicted by unsupervised neural plasticity in inferior temporal cortex.Neural population control via deep image synthesis.
6/24/2020 · 1 hour, 16 minutes, 3 seconds

BI 074 Ginger Campbell: Are You Sure?

Ginger and I discuss her book Are You Sure? The Unconscious Origins of Certainty, which summarizes Robert Burton's work exploring the experience and phenomenal origin of feeling confident, and how the vast majority of our brain processing occurs outside our conscious awareness. Are You Sure? The Unconscious Origins of Certainty. Brain Science Podcast.
6/16/2020 · 1 hour, 22 minutes, 35 seconds

BI 073 Megan Peters: Consciousness and Metacognition

Megan and I discuss her work using metacognition as a way to study subjective awareness, or confidence. We talk about using computational and neural network models to probe how decisions are related to our confidence, the current state of the science of consciousness, and her newest project using fMRI decoded neurofeedback to induce particular brain states in subjects so we can learn about conscious and unconscious brain processing. Notes: Visit Megan's cognitive & neural computation lab.Twitter: @meganakpetersThe papers we discuss or mention:Human intracranial electrophysiology suggests suboptimal calculations underlie perceptual confidenceTuned normalization in perceptual decision-making circuits can explain seemingly suboptimal confidence behavior.
6/10/2020 · 1 hour, 25 minutes, 10 seconds

BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality

Mazviita and I discuss the growing divide between prediction and understanding as neuroscience models and deep learning networks become bigger and more complex. She describes her non-factive account of understanding, which among other things suggests that the best predictive models may deliver less understanding. We also discuss the brain as a computer metaphor, and whether it's really possible to ignore all the traditionally "non-computational" parts of the brain like metabolism and other life processes. Show notes: Her website.Outside color website (with links to more of her publications)Her book Outside Color: Perceptual Science and the Puzzle of Color in Philosophy.Papers we discuss or mention:Prediction Versus Understanding in Computationally Enhanced Neuroscience.Your brain is like a computer: function, analogy, simplification.Charting the Heraclitean Brain: Perspectivism and Simplification in Models of the Motor Cortex.
6/1/2020 · 1 hour, 18 minutes, 53 seconds

BI 071 J. Patrick Mayo: The Path To Faculty

Patrick and I mostly discuss his path from a technician in the then-nascent Jim DiCarlo lab, through his graduate school and two postdoc experiences, to finally landing a faculty position, plus the culture and issues in academia in general. We also cover plenty of science, like the role of eye movements in the study of vision, the neuroscience (and concept) of attention, what Patrick thinks of the deep learning hype, and more. But this is a special episode, less about the science and more about the experience of an academic neuroscience trajectory/life. Episodes like this will appear in Patreon supporters' private feeds from now on. Show notes: His university page (pre-lab website). Twitter: @mayo_lab. Here's the paper he recommends to understand attention: Attention can be subdivided into neurobiological components corresponding to distinct behavioral effects.
5/25/2020 · 1 hour, 10 minutes, 57 seconds

BI 070 Bradley Love: How We Learn Concepts

Brad and I discuss his battle-tested, age-defying cognitive model for how we learn and store concepts by forming and rearranging clusters, how the model maps onto brain areas, and how he's using deep learning models to explore how attention and sensory information interact with concept formation. We also discuss the cognitive modeling approach, Marr's levels of analysis, the term "biological plausibility", emergence and reduction, and plenty more. Notes: Visit Brad’s website.Follow Brad on twitter: @ProfData.Related papers:Levels of Biological Plausibility.Models in search of a brain.A non-spatial account of place and grid cells based on clustering models of concept learning.Abstract neural representations of category membership beyond information coding stimulus or response.Ventromedial prefrontal cortex compression during concept learning.The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural NetworksLearning as the unsupervised alignment of conceptual systems.
5/15/2020 · 1 hour, 47 minutes, 7 seconds

BI 069 David Ferrucci: Machines To Understand Stories

David and I discuss the latest efforts he and his Elemental Cognition team have made to create machines that can understand stories the way humans can and do. The long term vision is to create what David calls "thought partners", which are virtual assistants that can learn and synthesize a massive amount of information for us when we need that information for whatever project we're working on. We also discuss the nature of understanding, language, the role of the biological sciences for AI, and more. Dave’s business Elemental Cognition.The paper we discuss:To Test Machine Comprehension, Start by Defining Comprehension.
5/5/2020 · 1 hour, 26 minutes, 35 seconds

BI 068 Rodrigo Quian Quiroga: NeuroScience Fiction

Rodrigo and I discuss concept cells and his latest book, NeuroScience Fiction. The book is a whirlwind tour of many of the big questions in neuroscience, each one framed by one of Rodrigo's favorite science fiction films and buttressed by tons of history, literature, and philosophy. We discuss a few of the topics in the book, like AI, identity, free will, consciousness, and immortality, and we keep returning to concept cells and the role of abstraction in human cognition. Notes: Rodrigo's lab website: Centre for Systems Neuroscience at the University of Leicester, UK. His book: NeuroScience Fiction: From "2001: A Space Odyssey" to "Inception," How Neuroscience Is Transforming Sci-Fi into Reality―While Challenging Our Beliefs About the Mind, Machines, and What Makes us Human. Papers we discuss or mention: Concept cells: the building blocks of declarative memory functions. Neural representations across species. Searching for the neural correlates of human intelligence. Talks: Concept cells and their role in memory - Part 1 and Part 2.
4/24/2020 · 1 hour, 34 minutes, 44 seconds

BI 067 Paul Cisek: Backward Through The Brain

In this second part of my conversation with Paul (listen to the first part), we continue our discussion about how to understand brains as feedback control mechanisms - controlling our internal state and extending that control into the world - and how Paul thinks the key to understanding intelligence is to trace our evolutionary past through phylogenetic refinement. Paul's lab website. (A few of) his papers we discuss or mention: Resynthesizing behavior through phylogenetic refinement. Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Neural Mechanisms for Interacting with a World Full of Action Choices. Books Paul recommends about these topics: The Ecological Approach to Visual Perception by Gibson. Brains Through Time: A Natural History of Vertebrates by Striedter and Northcutt. The Neurobiology of the Prefrontal Cortex: Anatomy, Evolution, and the Origin of Insight by Passingham and Wise. The Evolution of Memory Systems: Ancestors, Anatomy, and Adaptations by Murray, Wise, and Graham. The Ancient Origins of Consciousness: How the Brain Created Experience by Feinberg and Mallatt. Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought by Hendriks-Jansen. In case, like me, you didn't know what an amphioxus is… here you go.
4/18/2020 · 49 minutes

BI 066 Paul Cisek: Forward Through Evolution

In this first part of our conversation, Paul and I discuss his approach to understanding how the brain (and intelligence) works. Namely, he believes we are fundamentally action- and movement-oriented - all of our behavior and cognition is based on controlling ourselves and our environment through feedback control mechanisms, and basically all neural activity should be understood through that lens. This contrasts with the view that we serially perceive the environment, make internal representations of what we perceive, do some cognition on those representations, and transform that cognition into decisions about how to move. From that premise, Paul also believes the best (and perhaps only) way to understand our current brains is by tracing out the evolutionary steps that took us from the first single-celled organisms all the way to us - a process he calls phylogenetic refinement. Paul's lab website. (A few of) his papers we discuss or mention: Resynthesizing behavior through phylogenetic refinement. Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Neural Mechanisms for Interacting with a World Full of Action Choices. Books Paul recommends about these topics: The Ecological Approach to Visual Perception by Gibson. Brains Through Time: A Natural History of Vertebrates by Striedter and Northcutt. The Neurobiology of the Prefrontal Cortex: Anatomy, Evolution, and the Origin of Insight by Passingham and Wise. The Evolution of Memory Systems: Ancestors, Anatomy, and Adaptations by Murray, Wise, and Graham. The Ancient Origins of Consciousness: How the Brain Created Experience by Feinberg and Mallatt. Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought by Hendriks-Jansen. In case, like me, you didn't know what an amphioxus is… here you go.
4/15/2020 · 1 hour, 34 minutes, 11 seconds

BI 065 Thomas Serre: How Recurrence Helps Vision

Thomas and I discuss the role of recurrence in visual cognition: how brains somehow excel with so few “layers” compared to deep nets, how feedback recurrence can underlie visual reasoning, how LSTM gate-like processing could explain the function of canonical cortical microcircuits, the current limitations of deep learning networks like adversarial examples, and a bit of history in modeling our hierarchical visual system, including his work with the HMAX model and interacting with the deep learning folks as convolutional neural networks were being developed. Show Notes: Visit the Serre Lab website. Follow Thomas on twitter: @tserre.Good reviews that references all the work we discussed, including the HMAX model: Beyond the feedforward sweep: feedback computations in the visual cortex. Deep learning: the good, the bad and the ugly. Papers about the topics we discuss: Complementary Surrounds Explain Diverse Contextual Phenomena Across Visual Modalities. Recurrent neural circuits for contour detection.Learning long-range spatial dependencies with horizontal gated-recurrent units.
4/5/2020 · 1 hour, 40 minutes, 13 seconds

BI 064 Galit Shmueli: Explanation vs. Prediction

Galit and I discuss the independent roles of prediction and explanation in scientific models, their history and eventual separation in the philosophy of science, how they can inform each other, and how statisticians like Galit view the current deep learning explosion. Galit's website. Follow her on twitter: @gshmueli. The papers we discuss or mention: To Explain or To Predict?Predictive Analytics in Information Systems Research.
3/28/2020 · 1 hour, 28 minutes, 25 seconds

BI 063 Uri Hasson: The Way Evolution Does It

Uri and I discuss his recent perspective that conceives of brains as super-over-parameterized models that try to fit everything as exactly as possible, rather than trying to abstract the world into usable models. He was inspired by the way artificial neural networks overfit data when they can, and how evolution works the same way on a much slower timescale. Show notes: Uri's lab website. Follow his lab on twitter: @HassonLab. The paper we discuss: Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Here's the bioRxiv version in case the above doesn't work. Uri mentioned his newest paper: Keep it real: rethinking the primacy of experimental control in cognitive neuroscience.
3/15/2020 · 1 hour, 32 minutes, 28 seconds

BI 062 Stefan Leijnen: Creativity and Constraint

Stefan and I discuss creativity and constraint in artificial and biological intelligence. We talk about his Asimov Institute and its goal of artificial creativity and constraint, different types and functions of creativity, the neuroscience of creativity and its relation to intelligence, how constraint is an essential factor in all creative processes, and how computational accounts of intelligence may need to be discarded to account for our unique creative abilities. Show notes: The Asimov Institute. Get that Zoo of Networks poster we talk about! See preview below. His site at Utrecht University of Applied Sciences. Stefan's personal website. Follow the Asimov Institute on twitter: @asimovinstitute. Stuff mentioned: Creativity and Constraint in Artificial Systems (Leijnen 2014 Dissertation). Incomplete Nature - Terrence Deacon's long, challenging read with fascinating original ideas. Neither Ghost Nor Machine - Jeremy Sherman's succinct, readable summary of some arguments in Incomplete Nature.
3/4/2020 · 1 hour, 57 minutes, 16 seconds

BI 061 Jörn Diedrichsen and Niko Kriegeskorte: Brain Representations

Jörn, Niko and I continue the discussion of mental representation from last episode with Michael Rescorla, then we discuss their review paper, Peeling The Onion of Brain Representations, about different ways to extract and understand what information is represented in measured brain activity patterns. Show notes: Jörn's lab website. Niko's lab website. Jörn on twitter: @DiedrichsenLab. Niko on twitter: @KriegeskorteLab. The papers we discuss or mention: Peeling the Onion of Brain Representations. Annual Review of Neuroscience, 2019. Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS, 2017.
2/21/2020 · 1 hour, 29 minutes, 17 seconds

BI 060 Michael Rescorla: Mind as Representation Machine

Michael and I discuss the philosophy and a bit of history of mental representation including the computational theory of mind and the language of thought hypothesis, how science and philosophy interact, how representation relates to computation in brains and machines, levels of computational explanation, and we discuss some examples of representational approaches to mental processes like Bayesian modeling. Show notes: Michael's website (with links to a ton of his publications). Science and Philosophy: Why science needs philosophy by Laplane et al., 2019. Why Cognitive Science Needs Philosophy and Vice Versa by Paul Thagard, 2009. Some of Michael's papers/articles we discuss or mention: The Computational Theory of Mind. Levels of Computational Explanation. Computational Modeling of the Mind: What Role for Mental Representation? From Ockham to Turing - and Back Again. Talks: Predictive coding “debate” with Michael and a few other folks. An overview and history of the philosophy of representation. Books we mentioned: The Structure of Scientific Revolutions by Thomas Kuhn. Memory and the Computational Brain by Randy Gallistel and Adam King. Representation In Cognitive Science by Nicholas Shea. Types and Tokens: On Abstract Objects by Linda Wetzel. Probabilistic Robotics by Thrun, Burgard, and Fox.
2/11/2020 · 1 hour, 36 minutes, 3 seconds

BI 059 Wolfgang Maass: How Do Brains Compute?

In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general, principles of brain computation he finds promising for implementing better network models, and we quickly overview some of his recent work on using these principles to build models with biologically plausible learning mechanisms, a spiking network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing. Wolfgang's website. Advice To a Young Investigator (has the quote at the beginning of the episode) by Santiago Ramon y Cajal. Papers we discuss or mention: Searching for principles of brain computation. Brain Computation: A Computer Science Perspective. Long short-term memory and learning-to-learn in networks of spiking neurons. A solution to the learning dilemma for recurrent networks of spiking neurons. Reservoirs learn to learn. Talks that cover some of these topics: Computation in Networks of Neurons in the Brain I. Computation in Networks of Neurons in the Brain II.
1/22/2020 · 1 hour, 6 seconds

BI 058 Wolfgang Maass: Computing Brains and Spiking Nets

In this first part of our conversation (here's the second part), Wolfgang and I discuss the state of theoretical and computational neuroscience, and how experimental results in neuroscience should guide theories and models to understand and explain how brains compute. We also discuss brain-machine interfaces, neuromorphics, and more. In the next part (here), we discuss principles of brain processing to inform and constrain theories of computations, and we briefly talk about some of his most recent work making spiking neural networks that incorporate some of these brain processing principles. Wolfgang's website. The book Wolfgang recommends: The Brain from Inside Out by György Buzsáki. Papers we discuss or mention: Searching for principles of brain computation. Brain Computation: A Computer Science Perspective. Long short-term memory and learning-to-learn in networks of spiking neurons. A solution to the learning dilemma for recurrent networks of spiking neurons. Reservoirs learn to learn. Talks that cover some of these topics: Computation in Networks of Neurons in the Brain I. Computation in Networks of Neurons in the Brain II.
1/15/2020 · 55 minutes, 10 seconds

BI 057 Nicole Rust: Visual Memory and Novelty

Nicole and I discuss how a signature for visual memory can be coded among the same population of neurons known to encode object identity, how the same coding scheme arises in convolutional neural networks trained to identify objects, and how neuroscience and machine learning (reinforcement learning) can join forces to understand how curiosity and novelty drive efficient learning. Check out Nicole's Visual Memory Laboratory website. Follow her on twitter: @VisualMemoryLab. The papers we discuss or mention: Single-exposure visual memory judgments are reflected in inferotemporal cortex. Population response magnitude variation in inferotemporal cortex predicts image memorability. Visual novelty, curiosity, and intrinsic reward in machine learning and the brain. The work by Dan Yamins's group that Nicole mentions: Local Aggregation for Unsupervised Learning of Visual Embeddings.
1/3/2020 · 1 hour, 21 minutes, 12 seconds

BI 056 Tom Griffiths: The Limits of Cognition

I speak with Tom Griffiths about his “resource-rational framework”, inspired by Herb Simon's bounded rationality and Stuart Russell's bounded optimality concepts. The resource-rational framework illuminates how the constraints of optimizing our available cognition can help us understand what algorithms our brains use to get things done, and can serve as a bridge between Marr's computational, algorithmic, and implementation levels of understanding. We also talk cognitive prostheses, artificial general intelligence, consciousness, and more. Visit Tom's Computational Cognitive Science Lab. Check out his book with Brian Christian, Algorithms To Live By. Some of the papers we discuss or mention: Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Data on the mind - the data repository we discussed briefly. A paper that discusses it: Finding the traces of behavioral and cognitive processes in big data and naturally occurring datasets.
12/22/2019 · 1 hour, 27 minutes, 37 seconds

BI 055 Thomas Naselaris: Seeing Versus Imagining

Thomas and I talk about what happens in the brain's visual system when you see something versus imagine it. He uses generative encoding and decoding models and brain signals like fMRI and EEG to test the nature of mental imagery. We also discuss the huge fMRI dataset of natural images he's collected to infer models of the entire visual system, how we've still not tapped the potential of fMRI, and more. Thomas's lab website. Papers we discuss or mention: Resolving Ambiguities of MVPA Using Explicit Models of Representation. Human brain activity during mental imagery exhibits signatures of inference in a hierarchical generative model.
12/9/2019 · 1 hour, 26 minutes, 18 seconds

BI 054 Kanaka Rajan: How Do We Switch Behaviors?

Support the Podcast. Kanaka and I discuss a few different ways she uses recurrent neural networks to understand how brains give rise to behaviors. We talk about her work showing how neural circuits transition from active to passive coping behavior in zebrafish, and how RNNs could be used to understand how we switch tasks in general and how we multi-task. Plus the usual fun speculation, advice, and more. Kanaka's google scholar profile. Follow her on twitter: @rajankdr. Papers we discuss: Neuronal Dynamics Regulating Brain and Behavioral State Transitions. How to study the neural mechanisms of multiple tasks. Gilbert Strang's linear algebra video lectures, which Kanaka suggested.
11/27/2019 · 1 hour, 15 minutes, 24 seconds

BI 053 Jon Brennan: Linguistics in Minds and Machines

Support the Podcast. Jon and I discuss understanding the syntax and semantics of language in our brains. He uses linguistic knowledge at the level of sentences and words, neuro-computational models, and neural data like EEG and fMRI to figure out how we process and understand language while listening to the natural language found in everyday conversations and stories. I also get his take on the current state of natural language processing and other AI advances, and how linguistics, neurolinguistics, and AI can contribute to each other. Jon's Computational Neurolinguistics Lab. His personal website. The papers we discuss or mention: Hierarchical structure guides rapid linguistic predictions during naturalistic listening. Finding syntax in human encephalography with beam search.
11/17/2019 · 1 hour, 33 minutes, 24 seconds

BI 052 Andrew Saxe: Deep Learning Theory

Support the Podcast. Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e. deep network theory. We talk about what he's learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains. Show notes: Visit Andrew's website. The papers we discuss or mention: Are Efficient Deep Representations Learnable? A theory of memory replay and generalization performance in neural networks. A mathematical theory of semantic development in deep neural networks. A good talk: High-Dimensional Dynamics Of Generalization Errors. A few recommended texts to dive deeper: Introduction To The Theory Of Neural Computation. Statistical Mechanics of Learning. Theoretical Neuroscience.
11/6/2019 · 1 hour, 25 minutes, 48 seconds

BI 051 Jess Hamrick: Mental Simulation and Construction

Support the Podcast. Jess and I discuss construction using graph neural networks. She makes AI agents that build structures to solve tasks in a simulated blocks-and-glue world using graph neural networks and deep reinforcement learning. We also discuss her work modeling mental simulation in humans and how it could be implemented in machines, and plenty more. Show notes: Jess's website. Follow her on twitter: @jhamrick. The papers we discuss or mention: Analogues of mental simulation and imagination in deep learning. Structured agents for physical construction. Relational inductive biases, deep learning, and graph networks. Build your own graph networks: Open source graph network library.
10/27/2019 · 1 hour, 28 minutes, 21 seconds

BI 050 Kyle Dunovan: Academia to Industry

Kyle and I talk about his work modeling the basal ganglia and its circuitry to control whether we take an action and how we select among alternative actions. We also reflect on his experiences in academia, the larger picture of what it's like in graduate school and after - at least in a computational neuroscience program - why he left, what he's doing now, and how it all fits together. Show notes: Kyle's website. Follow him on twitter: @dunovank. Examples of his work on basal ganglia and decision-making and control: Believer-Skeptic Meets Actor-Critic: Rethinking the Role of Basal Ganglia Pathways during Decision-Making and Reinforcement Learning. Reward-driven changes in striatal pathway competition shape evidence evaluation in decision-making. Errors in Action Timing and Inhibition Facilitate Learning by Tuning Distinct Mechanisms in the Underlying Decision Process. Mark Humphries' article on Medium: Academia is the Alternative Career Path. For fun, a bit about the “free will” experiments of Benjamin Libet.
10/17/2019 · 1 hour, 36 minutes, 2 seconds

BI 049 Phillip Alvelda: Trustworthy Brain Machines

Support the Podcast. Phillip and I discuss his company Brainworks, which uses the latest neuroscience to build AI into its products. We talk about their first product, Ambient Biometrics, which measures vital signs using your smartphone's camera. We also dive into entrepreneurship in the AI startup world, ethical issues in AI and social media companies, his early days using neural networks at NASA, where he thinks this is all headed, and more. Show notes: His company, Brainworks. Follow Phillip on twitter: @alvelda. Here's a talk he gave: Building Synthetic Brains. A guest post on Rodney Brooks's blog: Pondering the Empathy Gap.
10/6/2019 · 1 hour, 24 minutes, 45 seconds

BI 048 Liz Spelke: What Makes Us Special?

Support the Podcast. Liz and I discuss her work on cognitive development, especially in infants, and what it can tell us about what makes human cognition different from other animals, what core cognitive abilities we're born with, and how those abilities may form the foundation for much of our other cognitive abilities to develop. We also talk about natural language as the potential key faculty that synthesizes our early core abilities into the many higher cognitive functions that make us unique as a species, the potential for AI to capitalize on what we know about cognition in infants, plus plenty more. Show notes: Visit Liz's lab website. Related talks/lectures by Liz: The Power and Limits of Artificial Intelligence. A developmental perspective on brains, minds and machines. Visit the CCN conference website to learn more and see more talks.
9/25/2019 · 1 hour, 24 minutes, 37 seconds

BI 047 David Poeppel: Wrong in Interesting Ways

In this second part of our conversation (listen to the first part), David and I discuss his thoughts about current language and speech techniques in AI, his thoughts about the prospects of artificial general intelligence, the challenge of mapping the parts of linguistics onto the parts of neuroscience, the state of graduate training, and more. Visit David's lab website at NYU. He's also a director at the Max Planck Institute for Empirical Aesthetics. Follow him on twitter: @davidpoeppel. Some of the papers we discuss or mention (lots more on his website): The cortical organization of speech processing. The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. A good talk: What Language Processing in the Brain Tells Us About the Structure of the Mind. Transformer model: How do Transformers Work in NLP? A Guide to the Latest State-of-the-Art Models. Attention Is All You Need.
9/15/2019 · 48 minutes, 12 seconds

BI 046 David Poeppel: From Sounds to Meanings

Support the Podcast. David and I talk about his work to understand how sound waves floating in the air get transformed into meaningful concepts in your mind. He studies speech processing and production, language, music, and everything in between, approaching his work with steadfast principles to help frame what it means to understand something scientifically. We discuss many of the hurdles to understanding how our brains work and making real progress in science, plus a ton more. Show Notes: Visit David's lab website at NYU. He's also a director at the Max Planck Institute for Empirical Aesthetics. Follow him on twitter: @davidpoeppel. For a related episode (philosophically), you might re-visit my discussion with John Krakauer. Some of the papers we discuss or mention (lots more on his website): The cortical organization of speech processing. The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. A good talk: What Language Processing in the Brain Tells Us About the Structure of the Mind. NLP Transformer model: How do Transformers Work in NLP? A Guide to the Latest State-of-the-Art Models. Attention Is All You Need.
9/8/2019 · 1 hour, 37 minutes, 12 seconds

BI 045 Raia Hadsell: Robotics and Deep RL

Support the Podcast. Show notes: Raia and I discuss her work at DeepMind figuring out how to build robots using deep reinforcement learning to do things like navigate cities and generalize intelligent behaviors across different tasks. We also talk about challenges specific to embodied AI (robots), how much of it takes inspiration from neuroscience, and lots more. Raia's website. Follow her on Twitter: @RaiaHadsell. Papers relevant to our discussion: Learning to Navigate in Cities without a Map. Overcoming catastrophic forgetting in neural networks. Progressive neural networks. A few talks: Deep reinforcement learning in complex environments. Progressive Nets & Transfer. The new Neuro-AI conference she's starting with Tony Zador and Blake Richards: From Neuroscience to Artificially Intelligent Systems (NAISys)
8/28/2019 · 1 hour, 16 minutes, 52 seconds

BI 044 Talia Konkle: Turning Vision On Its Side

Talia and I discuss her work on how our visual system is organized topographically, and divides into three main categories: big inanimate things, small inanimate things, and animals. Her work is unique in that it focuses not on the classic hierarchical processing of vision (though she does that, too), but on what kinds of things are represented along that hierarchy. She also uses deep networks to learn more about the visual system. We also talk about her keynote talk at the Cognitive Computational Neuroscience conference and plenty more. Show notes: Talia's lab website. Follow her on twitter: @talia_konkle. Check out the Cognitive Computational Neuroscience conference, where she'll give a keynote address. Papers we discuss/reference: Early work on the tripartite organization: Tripartite Organization of the Ventral Stream by Animacy and Object Size. A more recent update, with the texforms we discuss and a comparison to the deep learning CNNs used to model the ventral visual stream: Mid-level visual features underlie the high-level categorical organization of the ventral stream. The article Talia references about an elegant solution to an old problem in computer science.
8/18/2019 · 1 hour, 15 minutes, 33 seconds

BI 043 Anna Schapiro: Learning in Hippocampus and Cortex

How does knowledge in the world get into our brains and integrated with the rest of our knowledge and memories? Anna and I talk about the complementary learning systems theory introduced in 1995 that posits a fast episodic hippocampal learning system and a slower statistical cortical learning system. We then discuss her work that advances and adds missing pieces to the CLS framework, and explores how sleep and sleep cycles contribute to the process. We also discuss how her work might contribute to AI systems by using multiple types of memory buffers, a little about being a woman in science, and how it's going with her brand new lab. Show Notes: Anna's Penn Computational Cognitive Neuroscience Lab. Follow Anna on Twitter: @annaschapiro. Papers we discuss: The original Complementary Learning Systems paper: Complementary Learning Systems Theory and Its Recent Update. Anna's work on CLS and Hippocampus: The hippocampus is necessary for the consolidation of a task that does not require the hippocampus for initial learning. Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning. Examples of her work on sleep: Active and effective replay: Systems consolidation reconsidered again. Switching between internal and external modes: A multiscale learning principle. Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information.
8/7/2019 · 1 hour, 30 minutes, 30 seconds

BI 042 Brad Aimone: Brains at the Funeral of Moore’s Law

This is part 2 of my conversation with Brad (listen to part 1 here). We discuss how Moore's law is on its last legs, and his ideas for how neuroscience - in particular neural algorithms - may help computing continue to scale in a post-Moore's-law world. We also discuss neuromorphics in general, and more. Brad's homepage. Follow Brad on Twitter: @jbimaknee. The paper we discuss: Neural Algorithms and Computing Beyond Moore's Law. Check out the Neuro Inspired Computing Elements (NICE) workshop - lots of great talks and panel discussions.
7/28/2019 · 59 minutes, 35 seconds

BI 041 Brad Aimone: Neurogenesis and Spiking in Deep Nets

In this first part of our discussion, Brad and I discuss the state of neuromorphics and its relation to neuroscience and artificial intelligence. He describes his work adding new neurons to deep learning networks during training, called neurogenesis deep learning, inspired by how neurogenesis in the dentate gyrus of the hippocampus helps learn new things while keeping previous memories intact. We also talk about his method to transform deep learning networks into spiking neural networks so they can run on neuromorphic hardware, and the neuromorphics workshop he puts on every year, the Neuro Inspired Computational Elements (NICE) workshop. Show Notes: Brad's homepage. Follow Brad on Twitter: @jbimaknee. The papers we discuss: Computational Influence of Adult Neurogenesis on Memory Encoding. Neurogenesis Deep Learning. Training deep neural networks for binary communication with the Whetstone method. And here's the arXiv version. Check out the Neuro Inspired Computing Elements (NICE) workshop - lots of great talks and panel discussions.
7/19/2019 · 1 hour, 6 minutes, 33 seconds

BI 040 Nando de Freitas: Enlightenment, Compassion, Survival

Show Notes: Nando's CIFAR page. Follow Nando on Twitter: @NandoDF. He's giving a keynote address at the Cognitive Computational Neuroscience Meeting 2020. Check out his famous machine learning lectures on YouTube. Papers we (more allude to than) discuss: Neural Programmer-Interpreters. Learning to learn by gradient descent by gradient descent. Dueling Network Architectures for Deep Reinforcement Learning. Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions. One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL.
7/9/2019 · 1 hour, 2 minutes, 19 seconds

BI 039 Anne Churchland: Decisions, Lapses, and Fidgets

Show notes: Check out Anne's lab website. Follow her on twitter: @anne_churchland. Anne's List, the list of female systems neuroscientists to invite as speakers. The papers we discuss: Single-trial neural dynamics are dominated by richly varied movements. Lapses in perceptual judgments reflect exploration. Complexity vs Stimulus-Response Compatibility vs Stimulus-response ethological validity. Perceptual Decision-Making: A Field in the Midst of a Transformation.
6/29/2019 · 1 hour, 19 minutes, 9 seconds

BI 038 Máté Lengyel: Probabilistic Perception and Learning

Show notes: Máté's Cambridge website. He's part of the Computational Learning and Memory Group there. Here's his webpage at Central European University. A review to introduce his subsequent work: Statistically optimal perception and learning: from behavior to neural representations. Related recent talks: Bayesian models of perception, cognition and learning - CCCN 2017. Sampling: coding, dynamics, and computation in the cortex (Cosyne 2018).
6/19/2019 · 1 hour, 18 minutes, 37 seconds

BI 037 Nathaniel Daw: Thinking the Right Thoughts

Show notes: Nathaniel will deliver a keynote address at the upcoming CCN conference. Check out his lab website. Follow him on Twitter: @nathanieldaw. The paper we discuss: Prioritized memory access explains planning and hippocampal replay. Or see a related talk: Rational planning using prioritized experience replay.
6/9/2019 · 1 hour, 29 minutes, 52 seconds

BI 036 Roshan Cools: Cognitive Control and Dopamine

Show notes: Roshan will deliver a keynote address at the upcoming CCN conference. Roshan's Motivational and Cognitive Control lab. Follow her on Twitter: @CoolsControl. Her TED Talk on Trusting Science. Papers related to the research we discuss: The costs and benefits of brain dopamine for cognitive control. Or see her variety of related works.
5/30/2019 · 1 hour, 11 minutes, 8 seconds

BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay

Show notes: This is the first in a series of episodes where I interview keynote speakers at the upcoming Cognitive Computational Neuroscience conference in September in Berlin. Thomas Naselaris summarizes the origins and vision of the CCN. Tim's Neuroscience homepage. The papers we discuss: Generalisation of structural knowledge in the hippocampal-entorhinal system (referred to in the podcast as “The Tolman-Eichenbaum Machine”). Human replay spontaneously reorganizes experience. (In press at Cell - below is an abstract for it from COSYNE 2018.) Inference in replay through factorized representations.
5/20/2019 · 1 hour, 11 minutes, 3 seconds

BI 034 Tony Zador: How DNA and Evolution Can Inform AI

Show notes: Tony’s lab site, where there are links to his auditory decision making work and connectome work we discuss. Here are a few talks online about that: Corticostriatal circuits underlying auditory decisions. Can we upload our mind to the cloud?. Follow Tony on Twitter: @TonyZador The paper we discuss: A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains. Conferences we talk about: COSYNE conference. Neural Information and Coding workshops. Neural Information Processing conference.
5/9/2019 · 1 hour, 18 minutes, 36 seconds

BI 033 Federico Turkheimer: Weak Versus Strong Emergence

Show Notes: Federico's website. Federico's papers we discuss: Conflicting emergences. Weak vs. strong emergence for the modelling of brain function. From homeostasis to behavior: balanced activity in an exploration of embodied dynamic environmental-neural interaction. Free Energy Principle. Integrated Information Theory. The Tononi paper about Integrated Information Theory and its relation to emergence: Quantifying causal emergence shows that macro can beat micro. Default mode as large scale oscillation: The brain's code and its canonical computational motifs. From sensory cortex to the default mode network: A multi-scale model of brain function in health and disease.
4/29/2019 · 1 hour, 6 minutes, 5 seconds

BI 032 Rafal Bogacz: Back-Propagation in Brains

Show notes: Visit Rafal’s Lab Website. Rafal's papers we discuss: Theories of Error Back-Propagation in the Brain. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.A tutorial on the free-energy framework for modelling perception and learning.Check out Episode 9 with Blake Richards about how apical dendrites could do back-prop. The Randall O’Reilly early paper describing biologically plausible back propagation: O'Reilly, R.C. (1996). Biologically Plausible Error-driven Learning using Local Activation Differences: The Generalized Recirculation Algorithm. Neural Computation, 8, 895-938.
4/19/2019 · 1 hour, 15 minutes, 44 seconds

BI 031 Francisco de Sousa Webber: Natural Language Understanding

Cortical.io. The white paper we discuss: Semantic Folding Theory And its Application in Semantic Fingerprinting. A nice talk Francisco gave: Semantic fingerprinting: Democratising natural language processing. Francisco was influenced by Jeff Hawkins' work and book On Intelligence. See episode 017 to learn more about Jeff Hawkins' approach to modeling cortex. Douglas Hofstadter's Analogy as the Core of Cognition.
4/12/2019 · 1 hour, 44 minutes, 47 seconds

BI 030 Jay McClelland: Mathematical Reasoning and PDP

Jay's homepage at Stanford. Implementing mathematical reasoning in machines: The video lecture. The paper. Parallel Distributed Processing by Rumelhart and McClelland. Complementary Learning Systems Theory and Its Recent Update. Episode 28 with Sam Gershman about building machines that learn and think like humans. Check out my interview on Ginger Campbell's Brain Science podcast.
3/27/2019 · 1 hour, 4 minutes, 40 seconds

BI 029 Paul Humphreys & Zac Irving: Emergence & Mind Wandering

Show notes: Paul Humphreys' website. Zac Irving's website. Emergence: Emergence: A Philosophical Account (book by Paul). The Oxford Handbook of Philosophy of Science. Mind Wandering: Mind-Wandering is Unguided Attention. The Philosophy of Mind-Wandering. The Neuroscience of Spontaneous Thought.
3/6/2019 · 1 hour, 44 minutes, 23 seconds

BI 028 Sam Gershman: Free Energy Principle & Human Machines

Show notes: Sam's Computational Cognitive Neuroscience Lab. Follow Sam on Twitter: @gershbrain. The papers we discuss: What does the free energy principle tell us about the brain? Building machines that learn and think like people. A video summarizing that work. The book Sam recommended: What Is Thought by Eric Baum.
2/19/2019 · 1 hour, 14 minutes, 9 seconds

BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments

Show notes: Websites: Ioana Marinescu, Konrad Kording. Twitter: @mioana; @KordingLab. The paper we discuss: Quasi-experimental causality in neuroscience and behavioral research. A pre-print version. Judea Pearl's The Book of Why. Judea Pearl's online lecture about causality: The Art and Science of Cause and Effect. Ioana's review of Universal Basic Income: No Strings Attached. The post on hedge drift by Stefan Schubert: Hedge drift and advanced motte-and-bailey. Books recommended by Ioana and Konrad to understand causality: Mostly Harmless Econometrics. Mastering 'Metrics: The Path from Cause to Effect.
2/12/2019 · 1 hour, 20 minutes, 4 seconds

BI 026 Kendrick Kay: A Model By Any Other Name

Image courtesy of Kendrick Kay: Brain art. Show notes: Check out Kendrick's lab website: CVN lab. Follow him on twitter: @cvnlab. The papers we discuss: Bottom-up and top-down computations in word- and face-selective cortex. Principles for models of neural information processing. Appreciating diversity of goals in computational neuroscience. A nice talk about the model we discuss: Bottom Up and Top Down Computations in Word- and Face-Selective Cortex. Cognitive Computational Neuroscience conference.
1/31/2019 · 1 hour, 22 minutes, 30 seconds

BI 025 John Krakauer: Understanding Cognition

Show notes: BLAM (Brain, Learning, Animation, and Movement) Lab homepage. BLAM on Twitter: @blamlab. Papers we discuss: Neuroscience Needs Behavior: Correcting a Reductionist Bias. John and his brother David's interview in Current Biology: John and David Krakauer. Mark Humphries' piece on John's “Neuroscience Needs Behavior” paper. Tinbergen's 4 Questions. John's book on stroke recovery: Broken Movement: The Neurobiology of Motor Recovery after Stroke (The MIT Press). David Marr's classic work, Vision. More is Different by Philip Anderson, and the hierarchical nature of scientific disciplines. Understanding Scientific Understanding by Henk W. De Regt. Inventing Temperature: Measurement and Scientific Progress by Hasok Chang. Soul Dust: The Magic of Consciousness by Nicholas Humphrey. The Organisation of Mind by Tim Shallice and Richard Cooper.
1/25/20191 hour, 46 minutes, 16 seconds
Episode Artwork

BI 024 Tim Behrens: Cognitive Maps

Show notes: Tim’s neuroscience homepage. Follow Tim on Twitter: @behrenstim. Edward Tolman’s cognitive maps work: Cognitive maps in rats and men. Place cells and grid cells: O’Keefe’s early place cell paper: The Hippocampus as a Spatial Map. O’Keefe’s book about the work: The Hippocampus as a Cognitive Map. The Mosers and their students discover grid cells: Microstructure of a spatial map in the entorhinal cortex. Overview. Here’s a good talk Tim gives on the subject: Building models of the world for behavioural control. Tim’s papers we discuss: What is a cognitive map? (Neuron, 2018). Organizing conceptual knowledge in humans with a grid-like code (Science, 2016). Two of the books he recommended: Theoretical Neuroscience by Dayan and Abbott. Information Theory, Inference and Learning Algorithms by David MacKay.
1/15/20191 hour, 9 minutes, 27 seconds
Episode Artwork

BI 023 Marcel van Gerven: Mind Decoding with GANs

Show notes: Donders Institute for Brain, Cognition and Behaviour Artificial Cognitive Systems on Twitter. Artificial Cognitive Systems research group. The paper we discuss: Generative adversarial networks for reconstructing natural images from brain activity. Read their Mindcodec blog. Nice overview of GANs (appropriately titled): Generative Adversarial Networks: An Overview.
12/28/20181 hour, 4 minutes, 5 seconds
Episode Artwork

BI 022 Melanie Mitchell: Complexity, and AI Shortcomings

Show notes: Follow Melanie on Twitter: @MelMitchell1. Learn more about what she does at her homepage. Here is her New York Times Op-Ed about AI. And here is a talk which goes into more depth: AI and the Barrier of Meaning. Her book, Complexity: A Guided Tour, has won awards. Check out all the free courses and tutorials at Complexity Explorer. Here’s a talk she gave: Introduction to Complexity. Melanie’s PhD advisor was Douglas Hofstadter, author of Gödel, Escher, Bach: An Eternal Golden Braid. I mentioned Terrence Deacon’s book Incomplete Nature: How Mind Emerged from Matter.
12/18/201859 minutes, 22 seconds
Episode Artwork

BI 021 Matt Botvinick: Neuroscience and AI at DeepMind

Show notes: DeepMind. The papers we discuss: Neuroscience-Inspired Artificial Intelligence. A nice summary of the meta-reinforcement learning work. Learning to reinforcement learn. Prefrontal cortex as a meta-reinforcement learning system. Machine Theory of Mind. Reinforcement learning.
12/11/20181 hour, 19 minutes, 39 seconds
Episode Artwork

BI 020 Anna Wexler: Stimulate Your Brain?

Show notes: Anna’s website. Follow Anna on Twitter: @anna_wexler. Check out her documentary Unorthodox. The papers we discuss: Recurrent themes in the history of the home use of electrical stimulation: Transcranial direct current stimulation (tDCS) and the medical battery (1870–1920). The Social Context of “Do-It-Yourself” Brain Stimulation: Neurohackers, Biohackers, and Lifehackers. Who Uses Direct-to-Consumer Brain Stimulation Products, and Why? A Study of Home Users of tDCS Devices. Mind-Reading or Misleading? Assessing Direct-to-Consumer Electroencephalography (EEG) Devices Marketed for Wellness and Their Ethical and Regulatory Implications. Anna mentioned a site that’s a great place to see what’s really in the supplements you take. The P300 ERP component as a signature of a person’s reaction to a stimulus.
11/29/20181 hour, 9 minutes, 4 seconds
Episode Artwork

BI 019 Julie Grollier: Spintronic Neuromorphic Nano-Oscillators!

11/22/201853 minutes, 14 seconds
Episode Artwork

BI 018 Dean Buonomano: Time in Brains and AI

Show notes: Follow Dean on Twitter: @deanbuono. Visit his lab website at UCLA. The review we discuss: The Neural Basis of Timing: Distributed Mechanisms for Diverse Functions. The two books we discuss:
11/15/20181 hour, 6 minutes, 3 seconds
Episode Artwork

BI 017 Jeff Hawkins: Location, Location, Location

Follow Jeff and Numenta on Twitter: @JeffCHawkins and @Numenta. The Numenta website, where you can access all the previous papers leading up to this point. The paper we discuss: A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex. His influential and excellent book On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines.
11/8/201857 minutes, 14 seconds
Episode Artwork

BI 016 Ryota Kanai: Artificial Consciousness

Show notes: Ryota founded the company Araya. Follow him on Twitter: @kanair. Integrated Information Theory. Panpsychism. The paper we discuss: A unified strategy for implementing curiosity and empowerment driven reinforcement learning. An excellent article by Ryota in Nautilus: We Need Conscious Robots. Higher-order theories of consciousness. The hard problem of consciousness. The free energy principle.
11/1/201855 minutes, 36 seconds
Episode Artwork

BI 015 Terrence Sejnowski: How to Start a Deep Learning Revolution

Show notes: His new book, The Deep Learning Revolution. His Computational Neurobiology Laboratory at the Salk Institute. His faculty page at UCSD. His first book, The Computational Brain. His online course with Barbara Oakley, Learning How To Learn. Steven Johnson’s book Where Good Ideas Come From.
10/25/201849 minutes, 34 seconds
Episode Artwork

BI 014 Konrad Kording: Regulators, Mount Up!

Show Notes: Follow Konrad on Twitter: @KordingLab. Konrad’s lab website. The paper we discuss: Bioscience-scale automated detection of figure element reuse.
10/18/201841 minutes, 34 seconds
Episode Artwork

BI 013 Dileep George: Vicarious Robot AI

Dileep’s homepage. Dileep on Twitter: @dileeplearning. Vicarious, the general AI robotics company Dileep cofounded. Vicarious on Twitter: @vicariousai. The papers we discuss: A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs. Cortical Microcircuits from a Generative Vision Model. Probabilistic graphical models. Hierarchical temporal memory. HTM School on YouTube. Dileep’s earlier paper with Jeff Hawkins of Numenta: Towards a Mathematical Theory of Cortical Micro-circuits.
10/11/201852 minutes, 27 seconds
Episode Artwork

BI 012 Niko Kriegeskorte: Black Box, White Box

Mentioned in the show: Follow Niko on Twitter: @KriegeskorteLab. Visit his lab website. The Cognitive Computational Neuroscience Conference. The review papers we base the conversation on: Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. Cognitive computational neuroscience. Deep Neural Networks in Computational Neuroscience. Want a thorough breakdown of recent research articles? Try his blog. Not mentioned in the show, but check out the computational cognitive neuroscience summer school.
10/5/20181 hour, 6 minutes, 25 seconds
Episode Artwork

BI 011 Grace Lindsay: Visual Attention in CNNs

Mentioned in the show: The paper we discuss: How biological attention mechanisms improve task performance in a large-scale visual system model. Follow Grace on Twitter: @neurograce. She co-hosts the AI/neuroscience/science podcast Unsupervised Thinking. Her excellent blog Neurdiness. The CNN blog post I highly recommend for a deeper dive: Deep Convolutional Neural Networks as Models of the Visual System: Q&A. Or listen to my interview with Dan Yamins. The podcast that inspired Grace’s Unsupervised Thinking podcast: Partially Examined Life. The (famous) Cocktail Party Effect. Three excellent books by Yuval Noah Harari (whose name I mar), which I mention in passing: Sapiens: A Brief History of Humankind; Homo Deus: A Brief History of Tomorrow; 21 Lessons for the 21st Century.
9/27/201850 minutes, 30 seconds
Episode Artwork

BI 010 Adam Marblestone: Brain Cost Functions

Mentioned in the show: Adam’s website. Follow him on Twitter. He made Technology Review’s 35 Innovators Under 35. The paper we discuss: Toward an Integration of Deep Learning and Neuroscience. Some of the peripheral things we discussed: Cortical microcircuits: here’s an example paper. Autoencoders are unsupervised learning methods that reconstruct their input. A nice introduction to Generative Adversarial Networks. Marvin Minsky’s classic Society of Mind. Some of his talks online: A Billion-Year-Old Information Technology. What sets the exponent of neuroscience progress?
9/20/20181 hour, 4 minutes, 46 seconds
Episode Artwork

BI 009 Blake Richards: Deep Learning in the Brain

Mentioned in the show: Follow Blake on Twitter: @tyrell_turing. Blake’s Learning in Neural Circuits (LiNC) Laboratory. He’s a Fellow with the Learning in Machines and Brains Program of the Canadian Institute for Advanced Research (CIFAR). The paper we discuss: Towards Deep Learning With Segregated Dendrites. Code to run the model on GitHub. If you’d rather watch a talk, here’s the same topic in a great talk by Blake. The idea of approaching neuroscience from the perspective that there are general principles of computation applicable to both brains and AI: Geoffrey Hinton. Cybernetics. The McCulloch and Pitts artificial neuron: their original paper and a nice tutorial. Frank Rosenblatt. Demis Hassabis, who founded DeepMind, wrote a great review of how AI and neuroscience can work together. Donald Hebb of the famed Hebbian learning, in his famous book The Organization of Behavior: A Neuropsychological Theory. Konrad Kording’s 2001 paper articulating the same idea we discuss: Supervised and Unsupervised Learning with Two Sites of Synaptic Integration. The MNIST dataset of handwritten digits, used to train and test a lot of machine learning networks. Eliminative materialism, the idea that our common-sense conception of the mind is false.
9/13/20181 hour, 11 minutes
Episode Artwork

BI 008 Joshua Glaser: Supervised ML for Neuroscience

Mentioned in the show: The two papers we discuss: The Roles of Supervised Machine Learning in Systems Neuroscience. Machine learning for neural decoding. The Kording lab, where Josh did his PhD work.
9/7/201853 minutes, 34 seconds
Episode Artwork

BI 007 Daniel Yamins: Infant AI and CNNs

Mentioned in the show: Dan’s Stanford Neuroscience and Artificial Intelligence Laboratory. The two papers we discuss: Performance-optimized hierarchical models predict neural responses in higher visual cortex. Learning to Play with Intrinsically-Motivated Self-Aware Agents. ImageNet as one of the most important things to stimulate research in AI, developed by these folks. The ventral visual stream (as opposed to the dorsal stream). Retinotopy. Convolutional neural networks were inspired by Kunihiko Fukushima’s Neocognitron. Two modern approaches to solving ImageNet: Google’s NASNet architecture, with about 12 layers, and Microsoft’s super-deep ResNet. Object permanence, and some video examples. The distinction between the intrinsic motivation of Dan and colleagues’ AI agent and the reinforcement-learning motivation of the OpenAI 5 team.
9/2/20181 hour, 1 minute, 48 seconds
Episode Artwork

BI 006 Ryan Poplin: Deep Solutions

Mentioned in the show: Ryan Poplin. What is a convolutional neural network? Here’s a good summary. Here’s another good summary. Deep learning on eye images: CV risk factors can be predicted from retinal fundus images. DeepVariant: deep learning on genomics. The paper: Creating a universal SNP and small indel variant caller with deep neural networks. A nice, less technical summary of the work. The code on GitHub.
8/25/201837 minutes, 16 seconds
Episode Artwork

BI 005 David Sussillo: RNNs are Back!

Mentioned in the show: David’s Twitter account. Papers we discuss: Sussillo, D.S. & Abbott, L. F. (2009). Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron 63(4). Sussillo, D. (2014). Neural circuits as computational dynamical systems. Current Opinion in Neurobiology 25:156-63. Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation 25(3). Sussillo, D., Jozefowicz, R., Abbott, L.F., & Pandarinath, C. (2016). LFADS – Latent Factor Analysis via Dynamical Systems. arXiv:1608.06315. What’s a recurrent neural network? Here and here. What’s a dynamical system? Neuropixels. Hopfield networks.
8/2/201845 minutes, 39 seconds
Episode Artwork

BI 004 Mark Humphries: Learning to Remember

Mentioned in the show: Mark’s lab. The excellent blog he writes on Medium. The paper we discuss: An ensemble code in medial prefrontal cortex links prior events to outcomes during learning. The code to replicate their findings. Dynamical networks: Finding, measuring, and tracking neural population activity using network science.
8/2/201841 minutes, 39 seconds
Episode Artwork

BI 003 Blake Porter: Effortful Rats

Mentioned during the show: The paper we discuss: Hippocampal place cell encoding of sloping terrain. Blake’s website, where he writes his blog. Noise mystery story. DeepMind released a package to design your own bots for StarCraft. OpenAI and Dota 2: Blake says they hadn’t beaten pro teams, but we agreed it’s inevitable. And now they have…
8/2/201842 minutes, 47 seconds
Episode Artwork

BI 002 Steven Potter Part 2: Brains in Dishes

Find out more about Steve at his website. Things mentioned during the show: Papers we talked about: Publishing negative results! Wagenaar, D. A., Pine, J., & Potter, S. M. (2006). Searching for plasticity in dissociated cortical cultures on multi-electrode arrays. Journal of Negative Results in BioMedicine 5:16. Download. Solving the bursting neurons problem: Wagenaar, D. A., Madhavan, R., Pine, J., and Potter, S. M. (2005). Controlling bursting in cortical cultures with closed-loop multi-electrode stimulation. J. Neuroscience 25: 680-68. Download. Training the cultured networks: Chao, Z. C., Bakkum, D. J., & Potter, S. M. (2008). Shaping Embodied Neural Networks for Adaptive Goal-directed Behavior. PLoS Computational Biology, 4(3): e1000042. Online Open-Access paper, supplement, and movie. Bakkum, D. J., Chao, Z. C. (Co-First Authors), & Potter, S. M. (2008). Spatio-temporal electrical stimuli shape behavior of an embodied cortical network in a goal-directed learning task. Journal of Neural Engineering, 5, 310-323. Download reprint (3MB PDF). The richness of the bursting activity, and how to get good signals from neurons in dishes: Wagenaar, D. A., Pine, J., and Potter, S. M. (2006). An extremely rich repertoire of bursting patterns during the development of cortical cultures. BMC Neuroscience 7:11. Reprint (2.79 MB PDF). You can find tons (over 40GB) of data from that paper here. Non-synaptic plasticity (what?!): Bakkum, D. J., Chao, Z. C., & Potter, S. M. (2008). Long-term activity-dependent plasticity of action potential propagation delay and amplitude in cortical networks. PLoS One, 3(5), e2088. Online Open-Access paper. DIY neuroscience: Backyard Brains. Citizen neuroscience: Elizabeth Rickers’ citizen science efforts at Neuroeducate. Sapiens Labs. Steve’s lab provides open-source electrophysiology rig plans, called NeuroRighter. Follow Steve’s Instructables projects. Go make Steve’s window light!
Or watch a time-lapse of him building an awesome work bench. Here are detailed posters of his SunRisa sun-like alarm clock. Extra fun stuff: The world-record-setting skinny-dippers. Pop-locking dancing robots!
8/2/20181 hour, 11 minutes, 23 seconds
Episode Artwork

BI 001 Steven Potter: Brains in Dishes

Find out more about Steve at his website. I discovered him when I found his book chapter “What Can AI Get from Neuroscience?” in the following: “50 Years of Artificial Intelligence: Essays Dedicated to the 50th Anniversary of Artificial Intelligence,” M. Lungarella, J. Bongard, & R. Pfeifer (eds.) (pp. 174-185). Berlin: Springer-Verlag. Download the chapter. Link to the whole book at Springer. These days Steve is semi-retired, but is an active consultant for high-tech startups, companies, and individuals. Things mentioned in the show (check out his part 2 episode for more links!): Papers we talked about: Publishing negative results! Wagenaar, D. A., Pine, J., & Potter, S. M. (2006). Searching for plasticity in dissociated cortical cultures on multi-electrode arrays. Journal of Negative Results in BioMedicine 5:16. Download. Solving the bursting neurons problem: Wagenaar, D. A., Madhavan, R., Pine, J., and Potter, S. M. (2005). Controlling bursting in cortical cultures with closed-loop multi-electrode stimulation. J. Neuroscience 25: 680-68. Download. Training the cultured networks: Chao, Z. C., Bakkum, D. J., & Potter, S. M. (2008). Shaping Embodied Neural Networks for Adaptive Goal-directed Behavior. PLoS Computational Biology, 4(3): e1000042. Online Open-Access paper, supplement, and movie. Bakkum, D. J., Chao, Z. C. (Co-First Authors), & Potter, S. M. (2008). Spatio-temporal electrical stimuli shape behavior of an embodied cortical network in a goal-directed learning task. Journal of Neural Engineering, 5, 310-323. Download reprint (3MB PDF). The richness of the bursting activity: Wagenaar, D. A., Pine, J., and Potter, S. M. (2006). An extremely rich repertoire of bursting patterns during the development of cortical cultures. BMC Neuroscience 7:11. Reprint (2.79 MB PDF). You can find tons (over 40GB) of data from that paper here.
From Animals to Animats. Douglas Hofstadter: Gödel, Escher, Bach: An Eternal Golden Braid. Metamagical Themas: Questing for the Essence of Mind and Pattern. Rodney Brooks: embodied cognition and AI. Neuromorphics: Carver Mead. Real-World Teaching (Steve’s award-winning teaching method). Science as Psychology: Sense-Making and Identity in Science Practice — the book about Steve’s and others’ process of dealing with failure, etc. MEART: the semi-living artist. Silent Barrage: noisy pole robots. SymbioticA at the University of Western Australia.
8/2/201841 minutes, 58 seconds