
Machine Learning Street Talk (MLST)

English, Technology, 1 season, 183 episodes, 5 days, 1 hour, 24 minutes
About
Welcome! The team at MLST is inspired by academic research, and each week we engage in dynamic discussion with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field without succumbing to hype. MLST is run by Tim Scarfe, PhD (https://www.linkedin.com/in/ecsquizor/).

Michael Levin - Why Intelligence Isn't Limited To Brains.

Professor Michael Levin explores the revolutionary concept of diverse intelligence, demonstrating how cognitive capabilities extend far beyond traditional brain-based intelligence. Drawing from his groundbreaking research, he explains how even simple biological systems like gene regulatory networks exhibit learning, memory, and problem-solving abilities. Levin introduces key concepts like "cognitive light cones" - the scope of goals a system can pursue - and shows how these ideas are transforming our approach to cancer treatment and biological engineering. His insights challenge conventional views of intelligence and agency, with profound implications for both medicine and artificial intelligence development. This deep discussion reveals how understanding intelligence as a spectrum, from molecular networks to human minds, could be crucial for humanity's future technological development. Contains technical discussion of biological systems, cybernetics, and theoretical frameworks for understanding emergent cognition. Prof. Michael Levin https://as.tufts.edu/biology/people/faculty/michael-levin https://x.com/drmichaellevin Sponsor message: DO YOU WANT WORK ON ARC with the MindsAI team (current ARC winners)? Interested? Apply for an ML research position: [email protected] TOC 1. Intelligence Fundamentals and Evolution [00:00:00] 1.1 Future Evolution of Human Intelligence and Consciousness [00:03:00] 1.2 Science Fiction's Role in Exploring Intelligence Possibilities [00:08:15] 1.3 Essential Characteristics of Human-Level Intelligence and Relationships [00:14:20] 1.4 Biological Systems Architecture and Intelligence 2. Biological Computing and Cognition [00:24:00] 2.1 Agency and Intelligence in Biological Systems [00:30:30] 2.2 Learning Capabilities in Gene Regulatory Networks [00:35:37] 2.3 Biological Control Systems and Competency Architecture [00:39:58] 2.4 Scientific Metaphors and Polycomputing Paradigm 3. 
Systems and Collective Intelligence [00:43:26] 3.1 Embodiment and Problem-Solving Spaces [00:44:50] 3.2 Perception-Action Loops and Biological Intelligence [00:46:55] 3.3 Intelligence, Wisdom and Collective Systems [00:53:07] 3.4 Cancer and Cognitive Light Cones [00:57:09] 3.5 Emergent Intelligence and AI Agency Shownotes: https://www.dropbox.com/scl/fi/i2vl1vs009thg54lxx5wc/LEVIN.pdf?rlkey=dtk8okhbsejryiu2vrht19qp6&st=uzi0vo45&dl=0 REFS: [0:05:30] A Fire Upon the Deep - Vernor Vinge sci-fi novel on AI and consciousness [0:05:35] Maria Chudnovsky - MacArthur Fellow, Princeton mathematician, graph theory expert [0:14:20] Bow-tie architecture in biological systems - Network structure research by Csete & Doyle [0:15:40] Richard Watson - Southampton Professor, evolution and learning systems expert [0:17:00] Levin paper on human issues in AI and evolution [0:19:00] Bow-tie architecture in Darwin's agential materialism - Levin [0:22:55] Philip Goff - Work on panpsychism and consciousness in Galileo's Error [0:23:30] Strange Loop - Hofstadter's work on self-reference and consciousness [0:25:00] The Hard Problem of Consciousness - Van Gulick [0:26:15] Daniel Dennett - Theories on consciousness and intentional systems [0:29:35] Principle of Least Action - Light path selection in physics [0:29:50] Free Energy Principle - Friston's unified behavioral framework [0:30:35] Gene regulatory networks - Learning capabilities in biological systems [0:36:55] Minimal networks with learning capacity - Levin [0:38:50] Multi-scale competency in biological systems - Levin [0:41:40] Polycomputing paradigm - Biological computation by Bongard & Levin [0:45:40] Collective intelligence in biology - Levin et al. [0:46:55] Niche construction and stigmergy - Torday [0:53:50] Tasmanian Devil Facial Tumor Disease - Transmissible cancer research [0:55:05] Cognitive light cone - Computational boundaries of self - Levin [0:58:05] Cognitive properties in sorting algorithms - Zhang, Goldstein & Levin
10/24/2024 · 1 hour, 3 minutes, 35 seconds

Speechmatics CTO - Next-Generation Speech Recognition

Will Williams is CTO of Speechmatics in Cambridge. In this sponsored episode - he shares deep technical insights into modern speech recognition technology and system architecture. The episode covers several key technical areas: * Speechmatics' hybrid approach to ASR, which focusses on unsupervised learning methods, achieving comparable results with 100x less data than fully supervised approaches. Williams explains why this is more efficient and generalizable than end-to-end models like Whisper. * Their production architecture implementing multiple operating points for different latency-accuracy trade-offs, with careful latency padding (up to 1.8 seconds) to ensure consistent user experience. The system uses lattice-based decoding with language model integration for improved accuracy. * The challenges and solutions in real-time ASR, including their approach to diarization (speaker identification), handling cross-talk, and implicit source separation. Williams explains why these problems remain difficult even with modern deep learning approaches. * Their testing and deployment infrastructure, including the use of mirrored environments for catching edge cases in production, and their strategy of maintaining global models rather than allowing customer-specific fine-tuning. * Technical evolution in ASR, from early days of custom CUDA kernels and manual memory management to modern frameworks, with Williams offering interesting critiques of current PyTorch memory management approaches and arguing for more efficient direct memory allocation in production systems. Get coding with their API! This is their URL: https://www.speechmatics.com/ DO YOU WANT WORK ON ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs: Focus: ARC, LLMs, test-time-compute, active inference, system2 reasoning, and more. Interested? Apply for an ML research position: [email protected] TOC 1. ASR Core Technology & Real-time Architecture [00:00:00] 1.1 ASR and Diarization Fundamentals [00:05:25] 1.2 Real-time Conversational AI Architecture [00:09:21] 1.3 Neural Network Streaming Implementation [00:12:49] 1.4 Multi-modal System Integration 2. Production System Optimization [00:29:38] 2.1 Production Deployment and Testing Infrastructure [00:35:40] 2.2 Model Architecture and Deployment Strategy [00:37:12] 2.3 Latency-Accuracy Trade-offs [00:39:15] 2.4 Language Model Integration [00:40:32] 2.5 Lattice-based Decoding Architecture 3. Performance Evaluation & Ethical Considerations [00:44:00] 3.1 ASR Performance Metrics and Capabilities [00:46:35] 3.2 AI Regulation and Evaluation Methods [00:51:09] 3.3 Benchmark and Testing Challenges [00:54:30] 3.4 Real-world Implementation Metrics [01:00:51] 3.5 Ethics and Privacy Considerations 4. ASR Technical Evolution [01:09:00] 4.1 WER Calculation and Evaluation Methodologies [01:10:21] 4.2 Supervised vs Self-Supervised Learning Approaches [01:21:02] 4.3 Temporal Learning and Feature Processing [01:24:45] 4.4 Feature Engineering to Automated ML 5. Enterprise Implementation & Scale [01:27:55] 5.1 Future AI Systems and Adaptation [01:31:52] 5.2 Technical Foundations and History [01:34:53] 5.3 Infrastructure and Team Scaling [01:38:05] 5.4 Research and Talent Strategy [01:41:11] 5.5 Engineering Practice Evolution Shownotes: https://www.dropbox.com/scl/fi/d94b1jcgph9o8au8shdym/Speechmatics.pdf?rlkey=bi55wvktzomzx0y5sic6jz99y&st=6qwofv8t&dl=0
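
Background sketch (not Speechmatics code): the word error rate covered in section 4.1 of the TOC is conventionally computed as a word-level Levenshtein edit distance normalised by the reference length, for example:

def wer(reference: str, hypothesis: str) -> float:
    # WER = (substitutions + deletions + insertions) / reference word count,
    # computed via word-level Levenshtein edit distance.
    ref, hyp = reference.split(), hypothesis.split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[-1][-1] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion over 6 words ≈ 0.167
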
10/23/2024 · 1 hour, 46 minutes, 23 seconds

Dr. Sanjeev Namjoshi - Active Inference

Dr. Sanjeev Namjoshi, a machine learning engineer who recently submitted a book on Active Inference to MIT Press, discusses the theoretical foundations and practical applications of Active Inference, the Free Energy Principle (FEP), and Bayesian mechanics. He explains how these frameworks describe how biological and artificial systems maintain stability by minimizing uncertainty about their environment. DO YOU WANT WORK ON ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs: Focus: ARC, LLMs, test-time-compute, active inference, system2 reasoning, and more. Future plans: Expanding to complex environments like Warcraft 2 and Starcraft 2. Interested? Apply for an ML research position: [email protected] Namjoshi traces the evolution of these fields from early 2000s neuroscience research to current developments, highlighting how Active Inference provides a unified framework for perception and action through variational free energy minimization. He contrasts this with traditional machine learning approaches, emphasizing Active Inference's natural capacity for exploration and curiosity through epistemic value. He sees Active Inference as being at a similar stage to deep learning in the early 2000s - poised for significant breakthroughs but requiring better tools and wider adoption. While acknowledging current computational challenges, he emphasizes Active Inference's potential advantages over reinforcement learning, particularly its principled approach to exploration and planning. Dr. Sanjeev Namjoshi https://snamjoshi.github.io/ TOC: 1. Theoretical Foundations: AI Agency and Sentience [00:00:00] 1.1 Intro [00:02:45] 1.2 Free Energy Principle and Active Inference Theory [00:11:16] 1.3 Emergence and Self-Organization in Complex Systems [00:19:11] 1.4 Agency and Representation in AI Systems [00:29:59] 1.5 Bayesian Mechanics and Systems Modeling 2. Technical Framework: Active Inference and Free Energy [00:38:37] 2.1 Generative Processes and Agent-Environment Modeling [00:42:27] 2.2 Markov Blankets and System Boundaries [00:44:30] 2.3 Bayesian Inference and Prior Distributions [00:52:41] 2.4 Variational Free Energy Minimization Framework [00:55:07] 2.5 VFE Optimization Techniques: Generalized Filtering vs DEM 3. Implementation and Optimization Methods [00:58:25] 3.1 Information Theory and Free Energy Concepts [01:05:25] 3.2 Surprise Minimization and Action in Active Inference [01:15:58] 3.3 Evolution of Active Inference Models: Continuous to Discrete Approaches [01:26:00] 3.4 Uncertainty Reduction and Control Systems in Active Inference 4. Safety and Regulatory Frameworks [01:32:40] 4.1 Historical Evolution of Risk Management and Predictive Systems [01:36:12] 4.2 Agency and Reality: Philosophical Perspectives on Models [01:39:20] 4.3 Limitations of Symbolic AI and Current System Design [01:46:40] 4.4 AI Safety Regulation and Corporate Governance 5. Socioeconomic Integration and Modeling [01:52:55] 5.1 Economic Policy and Public Sentiment Modeling [01:55:21] 5.2 Free Energy Principle: Libertarian vs Collectivist Perspectives [01:58:53] 5.3 Regulation of Complex Socio-Technical Systems [02:03:04] 5.4 Evolution and Current State of Active Inference Research 6. 
Future Directions and Applications [02:14:26] 6.1 Active Inference Applications and Future Development [02:22:58] 6.2 Cultural Learning and Active Inference [02:29:19] 6.3 Hierarchical Relationship Between FEP, Active Inference, and Bayesian Mechanics [02:33:22] 6.4 Historical Evolution of Free Energy Principle [02:38:52] 6.5 Active Inference vs Traditional Machine Learning Approaches Transcript and shownotes with refs and URLs: https://www.dropbox.com/scl/fi/qj22a660cob1795ej0gbw/SanjeevShow.pdf?rlkey=w323r3e8zfsnve22caayzb17k&st=el1fdgfr&dl=0
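
For readers new to the framework, the variational free energy referred to throughout can be written in standard notation (this is the textbook identity, not material specific to Namjoshi's book):

F[q] = \mathbb{E}_{q(s)}\,[\ln q(s) - \ln p(o, s)]
     = D_{\mathrm{KL}}\,[\, q(s) \,\|\, p(s \mid o) \,] - \ln p(o)
     \;\ge\; -\ln p(o)

Minimising F over the recognition density q(s) tightens the bound on surprise, -\ln p(o), which corresponds to perception; selecting actions expected to minimise future free energy gives the curiosity-driven, uncertainty-reducing behaviour contrasted with reinforcement learning above.
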
10/22/2024 · 2 hours, 45 minutes, 32 seconds

Joscha Bach - Why Your Thoughts Aren't Yours.

Dr. Joscha Bach discusses advanced AI, consciousness, and cognitive modeling. He presents consciousness as a virtual property emerging from self-organizing software patterns, challenging panpsychism and materialism. Bach introduces "Cyberanima," reinterpreting animism through information processing, viewing spirits as self-organizing software agents. He addresses limitations of current large language models and advocates for smaller, more efficient AI models capable of reasoning from first principles. Bach describes his work with Liquid AI on novel neural network architectures for improved expressiveness and efficiency. The interview covers AI's societal implications, including regulation challenges and impact on innovation. Bach argues for balancing oversight with technological progress, warning against overly restrictive regulations. Throughout, Bach frames consciousness, intelligence, and agency as emergent properties of complex information processing systems, proposing a computational framework for cognitive phenomena and reality. SPONSOR MESSAGE: DO YOU WANT WORK ON ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs: Focus: ARC, LLMs, test-time-compute, active inference, system2 reasoning, and more. Future plans: Expanding to complex environments like Warcraft 2 and Starcraft 2. Interested? Apply for an ML research position: [email protected] TOC [00:00:00] 1.1 Consciousness and Intelligence in AI Development [00:07:44] 1.2 Agency, Intelligence, and Their Relationship to Physical Reality [00:13:36] 1.3 Virtual Patterns and Causal Structures in Consciousness [00:25:49] 1.4 Reinterpreting Concepts of God and Animism in Information Processing Terms [00:32:50] 1.5 Animism and Evolution as Competition Between Software Agents 2. Self-Organizing Systems and Cognitive Models in AI [00:37:59] 2.1 Consciousness as self-organizing software [00:45:49] 2.2 Critique of panpsychism and alternative views on consciousness [00:50:48] 2.3 Emergence of consciousness in complex systems [00:52:50] 2.4 Neuronal motivation and the origins of consciousness [00:56:47] 2.5 Coherence and Self-Organization in AI Systems 3. Advanced AI Architectures and Cognitive Processes [00:57:50] 3.1 Second-Order Software and Complex Mental Processes [01:01:05] 3.2 Collective Agency and Shared Values in AI [01:05:40] 3.3 Limitations of Current AI Agents and LLMs [01:06:40] 3.4 Liquid AI and Novel Neural Network Architectures [01:10:06] 3.5 AI Model Efficiency and Future Directions [01:19:00] 3.6 LLM Limitations and Internal State Representation 4. AI Regulation and Societal Impact [01:31:23] 4.1 AI Regulation and Societal Impact [01:49:50] 4.2 Open-Source AI and Industry Challenges Refs in shownotes and MP3 metadata Shownotes: https://www.dropbox.com/scl/fi/g28dosz19bzcfs5imrvbu/JoschaInterview.pdf?rlkey=s3y18jy192ktz6ogd7qtvry3d&st=10z7q7w9&dl=0
10/20/2024 · 1 hour, 52 minutes, 45 seconds

Decompiling Dreams: A New Approach to ARC? - Alessandro Palmarini

Alessandro Palmarini is a post-baccalaureate researcher at the Santa Fe Institute working under the supervision of Melanie Mitchell. He completed his undergraduate degree in Artificial Intelligence and Computer Science at the University of Edinburgh. Palmarini's current research focuses on developing AI systems that can efficiently acquire new skills from limited data, inspired by François Chollet's work on measuring intelligence. His work builds upon the DreamCoder program synthesis system, introducing a novel approach called "dream decompiling" to improve library learning in inductive program synthesis. Palmarini is particularly interested in addressing the Abstraction and Reasoning Corpus (ARC) challenge, aiming to create AI systems that can perform abstract reasoning tasks more efficiently than current approaches. His research explores the balance between computational efficiency and data efficiency in AI learning processes. DO YOU WANT WORK ON ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs: Focus: ARC, LLMs, test-time-compute, active inference, system2 reasoning, and more. Future plans: Expanding to complex environments like Warcraft 2 and Starcraft 2. Interested? Apply for an ML research position: [email protected] TOC: 1. Intelligence Measurement in AI Systems [00:00:00] 1.1 Defining Intelligence in AI Systems [00:02:00] 1.2 Research at Santa Fe Institute [00:04:35] 1.3 Impact of Gaming on AI Development [00:05:10] 1.4 Comparing AI and Human Learning Efficiency 2. Efficient Skill Acquisition in AI [00:06:40] 2.1 Intelligence as Skill Acquisition Efficiency [00:08:25] 2.2 Limitations of Current AI Systems in Generalization [00:09:45] 2.3 Human vs. AI Cognitive Processes [00:10:40] 2.4 Measuring AI Intelligence: Chollet's ARC Challenge 3. Program Synthesis and ARC Challenge [00:12:55] 3.1 Philosophical Foundations of Program Synthesis [00:17:14] 3.2 Introduction to Program Induction and ARC Tasks [00:18:49] 3.3 DreamCoder: Principles and Techniques [00:27:55] 3.4 Trade-offs in Program Synthesis Search Strategies [00:31:52] 3.5 Neural Networks and Bayesian Program Learning 4. Advanced Program Synthesis Techniques [00:32:30] 4.1 DreamCoder and Dream Decompiling Approach [00:39:00] 4.2 Beta Distribution and Caching in Program Synthesis [00:45:10] 4.3 Performance and Limitations of Dream Decompiling [00:47:45] 4.4 Alessandro's Approach to ARC Challenge [00:51:12] 4.5 Conclusion and Future Discussions Refs: Full reflist on YT VD, Show Notes and MP3 metadata Show Notes: https://www.dropbox.com/scl/fi/x50201tgqucj5ba2q4typ/Ale.pdf?rlkey=0ubvk7p5gtyx1gpownpdadim8&st=5pniu3nq&dl=0
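
A toy illustration of the inductive program synthesis setting (not DreamCoder or Palmarini's code): enumerate compositions of DSL primitives and return the first program consistent with the input/output examples. DreamCoder adds a learned library and a neural recognition model to guide exactly this kind of search.

from itertools import product

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "neg":    lambda x: -x,
}

def synthesize(examples, max_depth=3):
    # Enumerate programs = sequences of primitive names, shortest first.
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return names
    return None

print(synthesize([(3, 8), (5, 12)]))  # ('inc', 'double'): f(x) = (x + 1) * 2
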
10/19/2024 · 51 minutes, 34 seconds

It's Not About Scale, It's About Abstraction - Francois Chollet

François Chollet discusses the limitations of Large Language Models (LLMs) and proposes a new approach to advancing artificial intelligence. He argues that current AI systems excel at pattern recognition but struggle with logical reasoning and true generalization. This was Chollet's keynote talk at AGI-24, filmed in high-quality. We will be releasing a full interview with him shortly. A teaser clip from that is played in the intro! Chollet introduces the Abstraction and Reasoning Corpus (ARC) as a benchmark for measuring AI progress towards human-like intelligence. He explains the concept of abstraction in AI systems and proposes combining deep learning with program synthesis to overcome current limitations. Chollet suggests that breakthroughs in AI might come from outside major tech labs and encourages researchers to explore new ideas in the pursuit of artificial general intelligence. TOC 1. LLM Limitations and Intelligence Concepts [00:00:00] 1.1 LLM Limitations and Composition [00:12:05] 1.2 Intelligence as Process vs. Skill [00:17:15] 1.3 Generalization as Key to AI Progress 2. ARC-AGI Benchmark and LLM Performance [00:19:59] 2.1 Introduction to ARC-AGI Benchmark [00:20:05] 2.2 Introduction to ARC-AGI and the ARC Prize [00:23:35] 2.3 Performance of LLMs and Humans on ARC-AGI 3. Abstraction in AI Systems [00:26:10] 3.1 The Kaleidoscope Hypothesis and Abstraction Spectrum [00:30:05] 3.2 LLM Capabilities and Limitations in Abstraction [00:32:10] 3.3 Value-Centric vs Program-Centric Abstraction [00:33:25] 3.4 Types of Abstraction in AI Systems 4. Advancing AI: Combining Deep Learning and Program Synthesis [00:34:05] 4.1 Limitations of Transformers and Need for Program Synthesis [00:36:45] 4.2 Combining Deep Learning and Program Synthesis [00:39:59] 4.3 Applying Combined Approaches to ARC Tasks [00:44:20] 4.4 State-of-the-Art Solutions for ARC Shownotes (new!): https://www.dropbox.com/scl/fi/i7nsyoahuei6np95lbjxw/CholletKeynote.pdf?rlkey=t3502kbov5exsdxhderq70b9i&st=1ca91ewz&dl=0 [0:01:15] Abstraction and Reasoning Corpus (ARC): AI benchmark (François Chollet) https://arxiv.org/abs/1911.01547 [0:05:30] Monty Hall problem: Probability puzzle (Steve Selvin) https://www.tandfonline.com/doi/abs/10.1080/00031305.1975.10479121 [0:06:20] LLM training dynamics analysis (Tirumala et al.) https://arxiv.org/abs/2205.10770 [0:10:20] Transformer limitations on compositionality (Dziri et al.) https://arxiv.org/abs/2305.18654 [0:10:25] Reversal Curse in LLMs (Berglund et al.) https://arxiv.org/abs/2309.12288 [0:19:25] Measure of intelligence using algorithmic information theory (François Chollet) https://arxiv.org/abs/1911.01547 [0:20:10] ARC-AGI: GitHub repository (François Chollet) https://github.com/fchollet/ARC-AGI [0:22:15] ARC Prize: $1,000,000+ competition (François Chollet) https://arcprize.org/ [0:33:30] System 1 and System 2 thinking (Daniel Kahneman) https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 [0:34:00] Core knowledge in infants (Elizabeth Spelke) https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf [0:34:30] Embedding interpretive spaces in ML (Tennenholtz et al.) https://arxiv.org/abs/2310.04475 [0:44:20] Hypothesis Search with LLMs for ARC (Wang et al.) https://arxiv.org/abs/2309.05660 [0:44:50] Ryan Greenblatt's high score on ARC public leaderboard https://arcprize.org/
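
For context, an ARC task is a handful of demonstration input/output grids (integers 0-9 standing for colours) plus held-out test inputs, and a solver must infer the transformation from the demonstrations alone. A minimal sketch mirroring the public ARC-AGI JSON format, with a hand-written hypothesis for one trivial made-up task:

# Demonstration pairs plus a test input, in the "train"/"test" layout used by the ARC-AGI repo.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[3, 3], [0, 3]]}],
}

def solve(grid):
    # Hypothesis consistent with both demonstrations: mirror each row left-to-right.
    return [list(reversed(row)) for row in grid]

assert all(solve(p["input"]) == p["output"] for p in task["train"])
print(solve(task["test"][0]["input"]))  # [[3, 3], [3, 0]]
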
10/12/2024 · 46 minutes, 21 seconds

Bold AI Predictions From Cohere Co-founder

Ivan Zhang, co-founder of Cohere, discusses the company's enterprise-focused AI solutions. He explains Cohere's early emphasis on embedding technology and training models for secure environments. Zhang highlights their implementation of Retrieval-Augmented Generation in healthcare, significantly reducing doctor preparation time. He explores the shift from monolithic AI models to heterogeneous systems and the importance of improving various AI system components. Zhang shares insights on using synthetic data to teach models reasoning, the democratization of software development through AI, and how his gaming skills transfer to running an AI company. He advises young developers to fully embrace AI technologies and offers perspectives on AI reliability, potential risks, and future model architectures. https://cohere.com/ https://ivanzhang.ca/ https://x.com/1vnzh TOC: 00:00:00 Intro 00:03:20 AI & Language Model Evolution 00:06:09 Future AI Apps & Development 00:09:29 Impact on Software Dev Practices 00:13:03 Philosophical & Societal Implications 00:16:30 Compute Efficiency & RAG 00:20:39 Adoption Challenges & Solutions 00:22:30 GPU Optimization & Kubernetes Limits 00:24:16 Cohere's Implementation Approach 00:28:13 Gaming's Professional Influence 00:34:45 Transformer Optimizations 00:36:45 Future Models & System-Level Focus 00:39:20 Inference-Time Computation & Reasoning 00:42:05 Capturing Human Thought in AI 00:43:15 Research, Hiring & Developer Advice REFS: 00:02:31 Cohere, https://cohere.com/ 00:02:40 The Transformer architecture, https://arxiv.org/abs/1706.03762 00:03:22 The Innovator's Dilemma, https://www.amazon.com/Innovators-Dilemma-Technologies-Management-Innovation/dp/1633691780 00:09:15 The actor model, https://en.wikipedia.org/wiki/Actor_model 00:14:35 John Searle's Chinese Room Argument, https://plato.stanford.edu/entries/chinese-room/ 00:18:00 Retrieval-Augmented Generation, https://arxiv.org/abs/2005.11401 00:18:40 Retrieval-Augmented Generation, https://docs.cohere.com/v2/docs/retrieval-augmented-generation-rag 00:35:39 Let’s Verify Step by Step, https://arxiv.org/pdf/2305.20050 00:39:20 Adaptive Inference-Time Compute, https://arxiv.org/abs/2410.02725 00:43:20 Ryan Greenblatt ARC entry, https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt Disclaimer: This show is part of our Cohere partnership series
10/10/2024 · 47 minutes, 17 seconds

Open-Ended AI: The Key to Superhuman Intelligence? - Prof. Tim Rocktäschel

Prof. Tim Rocktäschel, AI researcher at UCL and Google DeepMind, talks about open-ended AI systems. These systems aim to keep learning and improving on their own, like evolution does in nature. Ad: Are you a hardcore ML engineer who wants to work for Daniel Cahn at SlingshotAI building AI for mental health? Give him an email! - [email protected] TOC: 00:00:00 Introduction to Open-Ended AI and Key Concepts 00:01:37 Tim Rocktäschel's Background and Research Focus 00:06:25 Defining Open-Endedness in AI Systems 00:10:39 Subjective Nature of Interestingness and Learnability 00:16:22 Open-Endedness in Practice: Examples and Limitations 00:17:50 Assessing Novelty in Open-ended AI Systems 00:20:05 Adversarial Attacks and AI Robustness 00:24:05 Rainbow Teaming and LLM Safety 00:25:48 Open-ended Research Approaches in AI 00:29:05 Balancing Long-term Vision and Exploration in AI Research 00:37:25 LLMs in Program Synthesis and Open-Ended Learning 00:37:55 Transition from Human-Based to Novel AI Strategies 00:39:00 Expanding Context Windows and Prompt Evolution 00:40:17 AI Intelligibility and Human-AI Interfaces 00:46:04 Self-Improvement and Evolution in AI Systems Show notes (New!) https://www.dropbox.com/scl/fi/5avpsyz8jbn4j1az7kevs/TimR.pdf?rlkey=pqjlcqbtm3undp4udtgfmie8n&st=x50u1d1m&dl=0 REFS: 00:01:47 - UCL DARK Lab (Rocktäschel) - AI research lab focusing on RL and open-ended learning - https://ucldark.com/ 00:02:31 - GENIE (Bruce) - Generative interactive environment from unlabelled videos - https://arxiv.org/abs/2402.15391 00:02:42 - Promptbreeder (Fernando) - Self-referential LLM prompt evolution - https://arxiv.org/abs/2309.16797 00:03:05 - Picbreeder (Secretan) - Collaborative online image evolution - https://dl.acm.org/doi/10.1145/1357054.1357328 00:03:14 - Why Greatness Cannot Be Planned (Stanley) - Book on open-ended exploration - https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237 00:04:36 - NetHack Learning Environment (Küttler) - RL research in procedurally generated game - https://arxiv.org/abs/2006.13760 00:07:35 - Open-ended learning (Clune) - AI systems for continual learning and adaptation - https://arxiv.org/abs/1905.10985 00:07:35 - OMNI (Zhang) - LLMs modeling human interestingness for exploration - https://arxiv.org/abs/2306.01711 00:10:42 - Observer theory (Wolfram) - Computationally bounded observers in complex systems - https://writings.stephenwolfram.com/2023/12/observer-theory/ 00:15:25 - Human-Timescale Adaptation (Rocktäschel) - RL agent adapting to novel 3D tasks - https://arxiv.org/abs/2301.07608 00:16:15 - Open-Endedness for AGI (Hughes) - Importance of open-ended learning for AGI - https://arxiv.org/abs/2406.04268 00:16:35 - POET algorithm (Wang) - Open-ended approach to generate and solve challenges - https://arxiv.org/abs/1901.01753 00:17:20 - AlphaGo (Silver) - AI mastering the game of Go - https://deepmind.google/technologies/alphago/ 00:20:35 - Adversarial Go attacks (Dennis) - Exploiting weaknesses in Go AI systems - https://www.ifaamas.org/Proceedings/aamas2024/pdfs/p1630.pdf 00:22:00 - Levels of AGI (Morris) - Framework for categorizing AGI progress - https://arxiv.org/abs/2311.02462 00:24:30 - Rainbow Teaming (Samvelyan) - LLM-based adversarial prompt generation - https://arxiv.org/abs/2402.16822 00:25:50 - Why Greatness Cannot Be Planned (Stanley) - 'False compass' and 'stepping stone collection' concepts - https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237 00:27:45 - AI Debate (Khan) - Improving LLM 
truthfulness through debate - https://proceedings.mlr.press/v235/khan24a.html 00:29:40 - Gemini (Google DeepMind) - Advanced multimodal AI model - https://deepmind.google/technologies/gemini/ 00:30:15 - How to Take Smart Notes (Ahrens) - Effective note-taking methodology - https://www.amazon.com/How-Take-Smart-Notes-Nonfiction/dp/1542866502 (truncated, see shownotes)
10/4/2024 · 55 minutes, 10 seconds

Ben Goertzel on "Superintelligence"

Ben Goertzel discusses AGI development, transhumanism, and the potential societal impacts of superintelligent AI. He predicts human-level AGI by 2029 and argues that the transition to superintelligence could happen within a few years after. Goertzel explores the challenges of AI regulation, the limitations of current language models, and the need for neuro-symbolic approaches in AGI research. He also addresses concerns about resource allocation and cultural perspectives on transhumanism. TOC: [00:00:00] AGI Timeline Predictions and Development Speed [00:00:45] Limitations of Language Models in AGI Development [00:02:18] Current State and Trends in AI Research and Development [00:09:02] Emergent Reasoning Capabilities and Limitations of LLMs [00:18:15] Neuro-Symbolic Approaches and the Future of AI Systems [00:20:00] Evolutionary Algorithms and LLMs in Creative Tasks [00:21:25] Symbolic vs. Sub-Symbolic Approaches in AI [00:28:05] Language as Internal Thought and External Communication [00:30:20] AGI Development and Goal-Directed Behavior [00:35:51] Consciousness and AI: Expanding States of Experience [00:48:50] AI Regulation: Challenges and Approaches [00:55:35] Challenges in AI Regulation [00:59:20] AI Alignment and Ethical Considerations [01:09:15] AGI Development Timeline Predictions [01:12:40] OpenCog Hyperon and AGI Progress [01:17:48] Transhumanism and Resource Allocation Debate [01:20:12] Cultural Perspectives on Transhumanism [01:23:54] AGI and Post-Scarcity Society [01:31:35] Challenges and Implications of AGI Development New! PDF Show notes: https://www.dropbox.com/scl/fi/fyetzwgoaf70gpovyfc4x/BenGoertzel.pdf?rlkey=pze5dt9vgf01tf2wip32p5hk5&st=svbcofm3&dl=0 Refs: 00:00:15 Ray Kurzweil's AGI timeline prediction, Ray Kurzweil, https://en.wikipedia.org/wiki/Technological_singularity 00:01:45 Ben Goertzel: SingularityNET founder, Ben Goertzel, https://singularitynet.io/ 00:02:35 AGI Conference series, AGI Conference Organizers, https://agi-conf.org/2024/ 00:03:55 Ben Goertzel's contributions to AGI, Wikipedia contributors, https://en.wikipedia.org/wiki/Ben_Goertzel 00:11:05 Chain-of-Thought prompting, Subbarao Kambhampati, https://arxiv.org/abs/2405.04776 00:11:35 Algorithmic information content, Pieter Adriaans, https://plato.stanford.edu/entries/information-entropy/ 00:12:10 Turing completeness in neural networks, Various contributors, https://plato.stanford.edu/entries/turing-machine/ 00:16:15 AlphaGeometry: AI for geometry problems, Trieu, Li, et al., https://www.nature.com/articles/s41586-023-06747-5 00:18:25 Shane Legg and Ben Goertzel's collaboration, Shane Legg, https://en.wikipedia.org/wiki/Shane_Legg 00:20:00 Evolutionary algorithms in music generation, Yanxu Chen, https://arxiv.org/html/2409.03715v1 00:22:00 Peirce's theory of semiotics, Charles Sanders Peirce, https://plato.stanford.edu/entries/peirce-semiotics/ 00:28:10 Chomsky's view on language, Noam Chomsky, https://chomsky.info/1983____/ 00:34:05 Greg Egan's 'Diaspora', Greg Egan, https://www.amazon.co.uk/Diaspora-post-apocalyptic-thriller-perfect-MIRROR/dp/0575082097 00:40:35 'The Consciousness Explosion', Ben Goertzel & Gabriel Axel Montes, https://www.amazon.com/Consciousness-Explosion-Technological-Experiential-Singularity/dp/B0D8C7QYZD 00:41:55 Ray Kurzweil's books on singularity, Ray Kurzweil, https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889 00:50:50 California AI regulation bills, California State Senate, 
https://sd18.senate.ca.gov/news/senate-unanimously-approves-senator-padillas-artificial-intelligence-package 00:56:40 Limitations of Compute Thresholds, Sara Hooker, https://arxiv.org/abs/2407.05694 00:56:55 'Taming Silicon Valley', Gary F. Marcus, https://www.penguinrandomhouse.com/books/768076/taming-silicon-valley-by-gary-f-marcus/ 01:09:15 Kurzweil's AGI prediction update, Ray Kurzweil, https://www.theguardian.com/technology/article/2024/jun/29/ray-kurzweil-google-ai-the-singularity-is-nearer
10/1/2024 · 1 hour, 37 minutes, 18 seconds

Taming Silicon Valley - Prof. Gary Marcus

AI expert Prof. Gary Marcus doesn't mince words about today's artificial intelligence. He argues that despite the buzz, chatbots like ChatGPT aren't as smart as they seem and could cause real problems if we're not careful. Marcus is worried about tech companies putting profits before people. He thinks AI could make fake news and privacy issues even worse. He's also concerned that a few big tech companies have too much power. Looking ahead, Marcus believes the AI hype will die down as reality sets in. He wants to see AI developed in smarter, more responsible ways. His message to the public? We need to speak up and demand better AI before it's too late. Buy Taming Silicon Valley: https://amzn.to/3XTlC5s Gary Marcus: https://garymarcus.substack.com/ https://x.com/GaryMarcus Interviewer: Dr. Tim Scarfe (Refs in top comment) TOC [00:00:00] AI Flaws, Improvements & Industry Critique [00:16:29] AI Safety Theater & Image Generation Issues [00:23:49] AI's Lack of World Models & Human-like Understanding [00:31:09] LLMs: Superficial Intelligence vs. True Reasoning [00:34:45] AI in Specialized Domains: Chess, Coding & Limitations [00:42:10] AI-Generated Code: Capabilities & Human-AI Interaction [00:48:10] AI Regulation: Industry Resistance & Oversight Challenges [00:54:55] Copyright Issues in AI & Tech Business Models [00:57:26] AI's Societal Impact: Risks, Misinformation & Ethics [01:23:14] AI X-risk, Alignment & Moral Principles Implementation [01:37:10] Persistent AI Flaws: System Limitations & Architecture Challenges [01:44:33] AI Future: Surveillance Concerns, Economic Challenges & Neuro-Symbolic AI YT version with refs: https://youtu.be/o9MfuUoGlSw
9/24/2024 · 1 hour, 56 minutes, 55 seconds

Prof. Mark Solms - The Hidden Spring

Prof. Mark Solms, a neuroscientist and psychoanalyst, discusses his groundbreaking work on consciousness, challenging conventional cortex-centric views and emphasizing the role of brainstem structures in generating consciousness and affect. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Key points discussed: The limitations of vision-centric approaches to consciousness studies. Evidence from decorticated animals and hydranencephalic children supporting the brainstem's role in consciousness. The relationship between homeostasis, the free energy principle, and consciousness. Critiques of behaviorism and modern theories of consciousness. The importance of subjective experience in understanding brain function. The discussion also explored broader topics: The potential impact of affect-based theories on AI development. The role of the SEEKING system in exploration and learning. Connections between neuroscience, psychoanalysis, and philosophy of mind. Challenges in studying consciousness and the limitations of current theories. Mark Solms: https://neuroscience.uct.ac.za/contacts/mark-solms Show notes and transcript: https://www.dropbox.com/scl/fo/roipwmnlfmwk2e7kivzms/ACjZF-VIGC2-Suo30KcwVV0?rlkey=53y8v2cajfcgrf17p1h7v3suz&st=z8vu81hn&dl=0 TOC (*) are best bits 00:00:00 1. Intro: Challenging vision-centric approaches to consciousness * 00:02:20 2. Evidence from decorticated animals and hydranencephalic children * 00:07:40 3. Emotional responses in hydranencephalic children 00:10:40 4. Brainstem stimulation and affective states 00:15:00 5. Brainstem's role in generating affective consciousness * 00:21:50 6. Dual-aspect monism and the mind-brain relationship 00:29:37 7. Information, affect, and the hard problem of consciousness * 00:37:25 8. Wheeler's participatory universe and Chalmers' theories 00:48:51 9. Homeostasis, free energy principle, and consciousness * 00:59:25 10. Affect, voluntary behavior, and decision-making 01:05:45 11. Psychoactive substances, REM sleep, and consciousness research 01:12:14 12. Critiquing behaviorism and modern consciousness theories * 01:24:25 13. The SEEKING system and exploration in neuroscience Refs: 1. Mark Solms' book "The Hidden Spring" [00:20:34] (MUST READ!) https://amzn.to/3XyETb3 2. Karl Friston's free energy principle [00:03:50] https://www.nature.com/articles/nrn2787 3. Hydranencephaly condition [00:07:10] https://en.wikipedia.org/wiki/Hydranencephaly 4. Periaqueductal gray (PAG) [00:08:57] https://en.wikipedia.org/wiki/Periaqueductal_gray 5. Positron Emission Tomography (PET) [00:13:52] https://en.wikipedia.org/wiki/Positron_emission_tomography 6. Paul MacLean's triune brain theory [00:03:30] https://en.wikipedia.org/wiki/Triune_brain 7. Baruch Spinoza's philosophy of mind [00:23:48] https://plato.stanford.edu/entries/spinoza-epistemology-mind 8. Claude Shannon's "A Mathematical Theory of Communication" [00:32:15] https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf 9. Francis Crick's "The Astonishing Hypothesis" [00:39:57] https://en.wikipedia.org/wiki/The_Astonishing_Hypothesis 10. Frank Jackson's Knowledge Argument [00:40:54] https://plato.stanford.edu/entries/qualia-knowledge/ 11. 
Mesolimbic dopamine system [01:11:51] https://en.wikipedia.org/wiki/Mesolimbic_pathway 12. Jaak Panksepp's SEEKING system [01:25:23] https://en.wikipedia.org/wiki/Jaak_Panksepp#Affective_neuroscience
9/18/2024 · 1 hour, 26 minutes, 45 seconds

Patrick Lewis (Cohere) - Retrieval Augmented Generation

Dr. Patrick Lewis, who coined the term RAG (Retrieval Augmented Generation) and now works at Cohere, discusses the evolution of language models, RAG systems, and challenges in AI evaluation. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Key topics covered: - Origins and evolution of Retrieval Augmented Generation (RAG) - Challenges in evaluating RAG systems and language models - Human-AI collaboration in research and knowledge work - Word embeddings and the progression to modern language models - Dense vs sparse retrieval methods in information retrieval The discussion also explored broader implications and applications: - Balancing faithfulness and fluency in RAG systems - User interface design for AI-augmented research tools - The journey from chemistry to AI research - Challenges in enterprise search compared to web search - The importance of data quality in training AI models Patrick Lewis: https://www.patricklewis.io/ Cohere Command Models, check them out - they are amazing for RAG! https://cohere.com/command TOC 00:00:00 1. Intro to RAG 00:05:30 2. RAG Evaluation: Poll framework & model performance 00:12:55 3. Data Quality: Cleanliness vs scale in AI training 00:15:13 4. Human-AI Collaboration: Research agents & UI design 00:22:57 5. RAG Origins: Open-domain QA to generative models 00:30:18 6. RAG Challenges: Info retrieval, tool use, faithfulness 00:42:01 7. Dense vs Sparse Retrieval: Techniques & trade-offs 00:47:02 8. RAG Applications: Grounding, attribution, hallucination prevention 00:54:04 9. UI for RAG: Human-computer interaction & model optimization 00:59:01 10. Word Embeddings: Word2Vec, GloVe, and semantic spaces 01:06:43 11. Language Model Evolution: BERT, GPT, and beyond 01:11:38 12. AI & Human Cognition: Sequential processing & chain-of-thought Refs: 1. Retrieval Augmented Generation (RAG) paper / Patrick Lewis et al. [00:27:45] https://arxiv.org/abs/2005.11401 2. LAMA (LAnguage Model Analysis) probe / Petroni et al. [00:26:35] https://arxiv.org/abs/1909.01066 3. KILT (Knowledge Intensive Language Tasks) benchmark / Petroni et al. [00:27:05] https://arxiv.org/abs/2009.02252 4. Word2Vec algorithm / Tomas Mikolov et al. [01:00:25] https://arxiv.org/abs/1301.3781 5. GloVe (Global Vectors for Word Representation) / Pennington et al. [01:04:35] https://nlp.stanford.edu/projects/glove/ 6. BERT (Bidirectional Encoder Representations from Transformers) / Devlin et al. [01:08:00] https://arxiv.org/abs/1810.04805 7. 'The Language Game' book / Nick Chater and Morten H. Christiansen [01:11:40] https://amzn.to/4grEUpG Disclaimer: This is the sixth video from our Cohere partnership. We were not told what to say in the interview. Filmed in Seattle in June 2024.
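
A minimal sketch of the dense half of the dense vs sparse retrieval discussion (the embed function below is a stand-in for any trained encoder, not Cohere's API; a sparse method such as BM25 would rank by weighted term overlap instead):

import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder encoder: deterministic pseudo-random vector per string.
    # A real RAG system would call a trained passage/query encoder here.
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(64)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    def cosine(doc: str) -> float:
        v = embed(doc)
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(docs, key=cosine, reverse=True)[:k]

corpus = [
    "RAG conditions a generator on retrieved passages.",
    "BM25 ranks documents by weighted term overlap.",
    "Word2Vec learns distributed word representations.",
]
context = retrieve("How does retrieval augmented generation work?", corpus)
# The top-k passages are then placed in the generator's prompt so the answer is grounded in them.
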
9/16/2024 · 1 hour, 13 minutes, 46 seconds

Ashley Edwards - Genie Paper (DeepMind/Runway)

Ashley Edwards, who was working at DeepMind when she co-authored the Genie paper and is now at Runway, covered several key aspects of the Genie AI system and its applications in video generation, robotics, and game creation. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Genie's approach to learning interactive environments, balancing compression and fidelity. The use of latent action models and VQE models for video processing and tokenization. Challenges in maintaining action consistency across frames and integrating text-to-image models. Evaluation metrics for AI-generated content, such as FID and PS&R diff metrics. The discussion also explored broader implications and applications: The potential impact of AI video generation on content creation jobs. Applications of Genie in game generation and robotics. The use of foundation models in robotics and the differences between internet video data and specialized robotics data. Challenges in mapping AI-generated actions to real-world robotic actions. Ashley Edwards: https://ashedwards.github.io/ TOC (*) are best bits 00:00:00 1. Intro to Genie & Brave Search API: Trade-offs & limitations * 00:02:26 2. Genie's Architecture: Latent action, VQE, video processing * 00:05:06 3. Genie's Constraints: Frame consistency & image model integration 00:07:26 4. Evaluation: FID, PS&R diff metrics & latent induction methods 00:09:44 5. AI Video Gen: Content creation impact, depth & parallax effects 00:11:39 6. Model Scaling: Training data impact & computational trade-offs 00:13:50 7. Game & Robotics Apps: Gamification & action mapping challenges * 00:16:16 8. Robotics Foundation Models: Action space & data considerations * 00:19:18 9. Mask-GPT & Video Frames: Real-time optimization, RL from videos 00:20:34 10. Research Challenges: AI value, efficiency vs. quality, safety 00:24:20 11. Future Dev: Efficiency improvements & fine-tuning strategies Refs: 1. Genie (learning interactive environments from videos) / Ashley and DM collegues [00:01] https://arxiv.org/abs/2402.15391 2. VQ-VAE (Vector Quantized Variational Autoencoder) / Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu [02:43] https://arxiv.org/abs/1711.00937 3. FID (Fréchet Inception Distance) metric / Martin Heusel et al. [07:37] https://arxiv.org/abs/1706.08500 4. PS&R (Precision and Recall) metric / Mehdi S. M. Sajjadi et al. [08:02] https://arxiv.org/abs/1806.00035 5. Vision Transformer (ViT) architecture / Alexey Dosovitskiy et al. [12:14] https://arxiv.org/abs/2010.11929 6. Genie (robotics foundation models) / Google DeepMind [17:34] https://deepmind.google/research/publications/60474/ 7. Chelsea Finn's lab work on robotics datasets / Chelsea Finn [17:38] https://ai.stanford.edu/~cbfinn/ 8. Imitation from observation in reinforcement learning / YuXuan Liu [20:58] https://arxiv.org/abs/1707.03374 9. Waymo's autonomous driving technology / Waymo [22:38] https://waymo.com/ 10. Gen3 model release by Runway / Runway [23:48] https://runwayml.com/ 11. Classifier-free guidance technique / Jonathan Ho and Tim Salimans [24:43] https://arxiv.org/abs/2207.12598
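
A minimal sketch of the vector-quantization step behind the VQ-VAE style tokenization discussed above (illustrative only, not the Genie codebase): each encoder output is snapped to its nearest codebook entry, and the codebook index becomes the discrete token fed to downstream sequence models.

import numpy as np

def quantize(z: np.ndarray, codebook: np.ndarray):
    # z: (n, d) encoder outputs; codebook: (K, d) learned code vectors.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, K) squared distances
    idx = d2.argmin(axis=1)   # discrete token ids
    z_q = codebook[idx]       # quantized latents passed to the decoder
    return idx, z_q           # (training propagates gradients through z_q with a straight-through estimator)

rng = np.random.default_rng(0)
codebook = rng.standard_normal((8, 4))   # K = 8 codes of dimension 4
latents = rng.standard_normal((3, 4))    # 3 latent vectors to tokenize
ids, z_q = quantize(latents, codebook)
print(ids)                               # three token ids in [0, 8)
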
9/13/2024 · 25 minutes, 4 seconds

Cohere's SVP Technology - Saurabh Baji

Saurabh Baji discusses Cohere's approach to developing and deploying large language models (LLMs) for enterprise use. * Cohere focuses on pragmatic, efficient models tailored for business applications rather than pursuing the largest possible models. * They offer flexible deployment options, from cloud services to on-premises installations, to meet diverse enterprise needs. * Retrieval-augmented generation (RAG) is highlighted as a critical capability, allowing models to leverage enterprise data securely. * Cohere emphasizes model customization, fine-tuning, and tools like reranking to optimize performance for specific use cases. * The company has seen significant growth, transitioning from developer-focused to enterprise-oriented services. * Major customers like Oracle, Fujitsu, and TD Bank are using Cohere's models across various applications, from HR to finance. * Baji predicts a surge in enterprise AI adoption over the next 12-18 months as more companies move from experimentation to production. * He emphasizes the importance of trust, security, and verifiability in enterprise AI applications. The interview provides insights into Cohere's strategy, technology, and vision for the future of enterprise AI adoption. https://www.linkedin.com/in/saurabhbaji/ https://x.com/sbaji https://cohere.com/ https://cohere.com/business MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. TOC (*) are best bits 00:00:00 1. Introduction and Background 00:04:24 2. Cloud Infrastructure and LLM Optimization 00:06:43 2.1 Model deployment and fine-tuning strategies * 00:09:37 3. Enterprise AI Deployment Strategies 00:11:10 3.1 Retrieval-augmented generation in enterprise environments * 00:13:40 3.2 Standardization vs. customization in cloud services * 00:18:20 4. AI Model Evaluation and Deployment 00:18:20 4.1 Comprehensive evaluation frameworks * 00:21:20 4.2 Key components of AI model stacks * 00:25:50 5. Retrieval Augmented Generation (RAG) in Enterprise 00:32:10 5.1 Pragmatic approach to RAG implementation * 00:33:45 6. AI Agents and Tool Integration 00:33:45 6.1 Leveraging tools for AI insights * 00:35:30 6.2 Agent-based AI systems and diagnostics * 00:42:55 7. AI Transparency and Reasoning Capabilities 00:49:10 8. AI Model Training and Customization 00:57:10 9. Enterprise AI Model Management 01:02:10 9.1 Managing AI model versions for enterprise customers * 01:04:30 9.2 Future of language model programming * 01:06:10 10. AI-Driven Software Development 01:06:10 10.1 AI bridging human expression and task achievement * 01:08:00 10.2 AI-driven virtual app fabrics in enterprise * 01:13:33 11. Future of AI and Enterprise Applications 01:21:55 12. Cohere's Customers and Use Cases 01:21:55 12.1 Cohere's growth and enterprise partnerships * 01:27:14 12.2 Diverse customers using generative AI * 01:27:50 12.3 Industry adaptation to generative AI * 01:29:00 13. Technical Advantages of Cohere Models 01:29:00 13.1 Handling large context windows * 01:29:40 13.2 Low latency impact on developer productivity * Disclaimer: This is the fifth video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview. Filmed in Seattle in Aug 2024.
9/12/2024 · 1 hour, 30 minutes, 25 seconds

David Hanson's Vision for Sentient Robots

David Hanson, CEO of Hanson Robotics and creator of the humanoid robot Sofia, explores the intersection of artificial intelligence, ethics, and human potential. In this thought-provoking interview, Hanson discusses his vision for developing AI systems that embody the best aspects of humanity while pushing beyond our current limitations, aiming to achieve what he calls "super wisdom." YT version: https://youtu.be/LFCIEhlsozU MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. The interview with David Hanson covers: The importance of incorporating biological drives and compassion into AI systems Hanson's concept of "existential pattern ethics" as a basis for AI morality The potential for AI to enhance human intelligence and wisdom Challenges in developing artificial general intelligence (AGI) The need to democratize AI technologies globally Potential future advancements in human-AI integration and their societal impacts Concerns about technological augmentation exacerbating inequality The role of ethics in guiding AI development and deployment Hanson advocates for creating AI systems that embody the best aspects of humanity while surpassing current human limitations, aiming for "super wisdom" rather than just artificial super intelligence. David Hanson: https://www.hansonrobotics.com/david-hanson/ https://www.youtube.com/watch?v=9u1O954cMmE TOC 1. Introduction and Background [00:00:00] 1.1. David Hanson's interdisciplinary background [0:01:49] 1.2. Introduction to Sofia, the realistic robot [0:03:27] 2. Human Cognition and AI [0:03:50] 2.1. Importance of social interaction in cognition [0:03:50] 2.2. Compassion as distinguishing factor [0:05:55] 2.3. AI augmenting human intelligence [0:09:54] 3. Developing Human-like AI [0:13:17] 3.1. Incorporating biological drives in AI [0:13:17] 3.2. Creating AI with agency [0:20:34] 3.3. Implementing flexible desires in AI [0:23:23] 4. Ethics and Morality in AI [0:27:53] 4.1. Enhancing humanity through AI [0:27:53] 4.2. Existential pattern ethics [0:30:14] 4.3. Expanding morality beyond restrictions [0:35:35] 5. Societal Impact of AI [0:38:07] 5.1. AI adoption and integration [0:38:07] 5.2. Democratizing AI technologies [0:38:32] 5.3. Human-AI integration and identity [0:43:37] 6. Future Considerations [0:50:03] 6.1. Technological augmentation and inequality [0:50:03] 6.2. Emerging technologies for mental health [0:50:32] 6.3. Corporate ethics in AI development [0:52:26] This was filmed at AGI-24
9/10/2024 · 53 minutes, 14 seconds

The Fabric of Knowledge - David Spivak

David Spivak, a mathematician known for his work in category theory, discusses a wide range of topics related to intelligence, creativity, and the nature of knowledge. He explains category theory in simple terms and explores how it relates to understanding complex systems and relationships. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. We discuss abstract concepts like collective intelligence, the importance of embodiment in understanding the world, and how we acquire and process knowledge. Spivak shares his thoughts on creativity, discussing where it comes from and how it might be modeled mathematically. A significant portion of the discussion focuses on the impact of artificial intelligence on human thinking and its potential role in the evolution of intelligence. Spivak also touches on the importance of language, particularly written language, in transmitting knowledge and shaping our understanding of the world. David Spivak http://www.dspivak.net/ TOC: 00:00:00 Introduction to category theory and functors 00:04:40 Collective intelligence and sense-making 00:09:54 Embodiment and physical concepts in knowledge acquisition 00:16:23 Creativity, open-endedness, and AI's impact on thinking 00:25:46 Modeling creativity and the evolution of intelligence 00:36:04 Evolution, optimization, and the significance of AI 00:44:14 Written language and its impact on knowledge transmission REFS: Mike Levin's work https://scholar.google.com/citations?user=luouyakAAAAJ&hl=en Eric Smith's videos on complexity and early life https://www.youtube.com/watch?v=SpJZw-68QyE Richard Dawkins' book "The Selfish Gene" https://amzn.to/3X73X8w Carl Sagan's statement about the cosmos knowing itself https://amzn.to/3XhPruK Herbert Simon's concept of "satisficing" https://plato.stanford.edu/entries/bounded-rationality/ DeepMind paper on open-ended systems https://arxiv.org/abs/2406.04268 Karl Friston's work on active inference https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind MIT category theory lectures by David Spivak (available on the Topos Institute channel) https://www.youtube.com/watch?v=UusLtx9fIjs
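
A loose programming analogy for the functor concept mentioned above (not Spivak's categorical formalism): the list construction sends each type A to list[A] and each function f: A -> B to an elementwise map over lists, preserving identity and composition.

def fmap(f, xs):
    # The list functor's action on morphisms: apply f elementwise.
    return [f(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2
xs = [1, 2, 3]

# Functor laws, checked on an example:
assert fmap(lambda x: x, xs) == xs                           # identity is preserved
assert fmap(lambda x: g(f(x)), xs) == fmap(g, fmap(f, xs))   # composition is preserved
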
9/5/2024 · 46 minutes, 28 seconds

Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

Jürgen Schmidhuber, the father of generative AI shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe. YT version: https://youtu.be/DP454c1K_vQ MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. TOC 00:00:00 Intro 00:03:38 Reasoning 00:13:09 Potential AI Breakthroughs Reducing Computation Needs 00:20:39 Memorization vs. Generalization in AI 00:25:19 Approach to the ARC Challenge 00:29:10 Perceptions of Chat GPT and AGI 00:58:45 Abstract Principles of Jurgen's Approach 01:04:17 Analogical Reasoning and Compression 01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI 01:15:50 Use of LSTM in Language Models by Tech Giants 01:21:08 Neural Network Aspect Ratio Theory 01:26:53 Reinforcement Learning Without Explicit Teachers Refs: ★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber): ★ Chain Rule For Backward Credit Assignment (Leibniz, 1676) ★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800) ★ First 20th Century Pioneer of Practical AI (Quevedo, 1914) ★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925) ★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34) ★ Unpublished ideas about evolving RNNs (Turing, 1948) ★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958) ★ First Published Learning RNNs (Amari and others, ~1972) ★ First Deep Learning (Ivakhnenko & Lapa, 1965) ★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68) ★ ReLUs (Fukushima, 1969) ★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960) ★ Backpropagation for NNs (Werbos, 1982) ★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988). ★ Metalearning or Learning to Learn (Schmidhuber, 1987) ★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT) ★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990) ★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT) ★ Deep Learning by Self-Supervised Pre-Training. 
Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT) ★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber) ★ LSTM journal paper (1997, most cited AI paper of the 20th century) ★ xLSTM (Hochreiter, 2024) ★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber 2015) ★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team) https://arxiv.org/abs/2305.17066 ★ Bremermann's physical limit of computation (1982) EXTERNAL LINKS CogX 2018 - Professor Juergen Schmidhuber https://www.youtube.com/watch?v=17shdT9-wuA Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997) https://sferics.idsia.ch/pub/juergen/loconet.pdf The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy https://www.youtube.com/watch?v=I4pQbo5MQOs (Refs truncated, full version on YT VD)
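
Since the LSTM features heavily in the references above, here is a minimal single-step sketch of the standard cell (illustrative, not the original code): input, forget and output gates control what is written to and read from the cell state, which is what lets gradients survive long time lags.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    # x: input (d_in,), h: previous hidden state (d,), c: previous cell state (d,)
    # W: (4d, d_in + d) stacked gate weights, b: (4d,) biases
    z = W @ np.concatenate([x, h]) + b
    d = h.shape[0]
    i, f, o, g = sigmoid(z[:d]), sigmoid(z[d:2*d]), sigmoid(z[2*d:3*d]), np.tanh(z[3*d:])
    c_new = f * c + i * g            # gated additive update (the "constant error carousel")
    h_new = o * np.tanh(c_new)       # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d = 3, 5
h, c = np.zeros(d), np.zeros(d)
W, b = rng.standard_normal((4 * d, d_in + d)) * 0.1, np.zeros(4 * d)
h, c = lstm_step(rng.standard_normal(d_in), h, c, W, b)
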
8/28/2024 1 hour, 39 minutes, 39 seconds
Episode Artwork

"AI should NOT be regulated at all!" - Prof. Pedro Domingos

Professor Pedro Domingos is an AI researcher and professor of computer science. He expresses skepticism about current AI regulation efforts and argues for faster AI development rather than slowing it down. He also discusses the need for new innovations to fulfil the promises of current AI techniques. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Show notes: * Domingos' views on AI regulation and why he believes it's misguided * His thoughts on the current state of AI technology and its limitations * Discussion of his novel "2040", a satirical take on AI and tech culture * Explanation of his work on "tensor logic", which aims to unify neural networks and symbolic AI * Critiques of other approaches in AI, including those of OpenAI and Gary Marcus * Thoughts on the AI "bubble" and potential future developments in the field Prof. Pedro Domingos: https://x.com/pmddomingos 2040: A Silicon Valley Satire [Pedro's new book] https://amzn.to/3T51ISd TOC: 00:00:00 Intro 00:06:31 Bio 00:08:40 Filmmaking skit 00:10:35 AI and the wisdom of crowds 00:19:49 Social Media 00:27:48 Master algorithm 00:30:48 Neurosymbolic AI / abstraction 00:39:01 Language 00:45:38 Chomsky 01:00:49 2040 Book 01:18:03 Satire as a shield for criticism? 01:29:12 AI Regulation 01:35:15 Gary Marcus 01:52:37 Copyright 01:56:11 Stochastic parrots come home to roost 02:00:03 Privacy 02:01:55 LLM ecosystem 02:05:06 Tensor logic Refs: The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World [Pedro Domingos] https://amzn.to/3MiWs9B Rebooting AI: Building Artificial Intelligence We Can Trust [Gary Marcus] https://amzn.to/3AAywvL Flash Boys [Michael Lewis] https://amzn.to/4dUGm1M
8/25/2024 2 hours, 12 minutes, 15 seconds
Episode Artwork

Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

Andrew Ilyas is a PhD student at MIT who is about to start as a professor at CMU. We discuss data modeling and understanding how datasets influence model predictions, adversarial examples in machine learning and why they occur, robustness of machine learning models, black-box attacks on machine learning systems, biases in data collection and dataset creation (particularly in ImageNet), and self-selection bias in data and methods to address it. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api Andrew's site: https://andrewilyas.com/ https://x.com/andrew_ilyas TOC: 00:00:00 - Introduction and Andrew's background 00:03:52 - Overview of the machine learning pipeline 00:06:31 - Data modeling paper discussion 00:26:28 - TRAK: Evolution of data modeling work 00:43:58 - Discussion on abstraction, reasoning, and neural networks 00:53:16 - "Adversarial Examples Are Not Bugs, They Are Features" paper 01:03:24 - Types of features learned by neural networks 01:10:51 - Black box attacks paper 01:15:39 - Work on data collection and bias 01:25:48 - Future research plans and closing thoughts References: Adversarial Examples Are Not Bugs, They Are Features https://arxiv.org/pdf/1905.02175 TRAK: Attributing Model Behavior at Scale https://arxiv.org/pdf/2303.14186 Datamodels: Predicting Predictions from Training Data https://arxiv.org/pdf/2202.00622 ImageNet-trained CNNs https://arxiv.org/pdf/1811.12231 ZOO: Zeroth Order Optimization Based Black-box https://arxiv.org/pdf/1708.03999 A Spline Theory of Deep Networks https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf Scaling Monosemanticity https://transformer-circuits.pub/2024/scaling-monosemanticity/ Adversarial Examples Are Not Bugs, They Are Features https://gradientscience.org/adv/ Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies https://proceedings.mlr.press/v235/bartoldson24a.html Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors https://arxiv.org/abs/1807.07978 Estimation of Standard Auction Models https://arxiv.org/abs/2205.02060 From ImageNet to Image Classification: Contextualizing Progress on Benchmarks https://arxiv.org/abs/2005.11295 What Makes A Good Fisherman? Linear Regression under Self-Selection Bias https://arxiv.org/abs/2205.03246 Towards Tracing Factual Knowledge in Language Models Back to the Training Data [Akyürek] https://arxiv.org/pdf/2205.11482
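To make the notion of an adversarial example concrete for listeners new to the topic, here is a minimal sketch of the classic fast gradient sign method (FGSM). This is a generic illustration rather than Andrew's code or the method of the papers above; the toy linear "classifier", the epsilon value, and the random "image" are placeholders.

```python
# Minimal FGSM sketch (illustrative only): nudge an input in the direction of the
# loss gradient to show why tiny, targeted pixel changes can flip a classifier.
import torch
import torch.nn as nn

def fgsm_example(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x (untargeted FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step by epsilon in the sign of the gradient: the largest per-pixel increase
    # in loss allowed under an L-infinity budget.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with a placeholder linear "classifier" and a fake 32x32 RGB image.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])                      # an arbitrary label
x_adv = fgsm_example(model, x, y)
print((x_adv - x).abs().max())             # perturbation stays within epsilon
```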
8/22/2024 1 hour, 28 minutes
Episode Artwork

Joscha Bach - AGI24 Keynote (Cyberanimism)

Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible. Dr. Joscha Bach https://x.com/Plinz This is video 2/9 from our coverage of AGI-24 in Seattle https://agi-conf.org/2024/ Watch the official MLST interview with Joscha which we did right after this talk on our Patreon now on early access - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private discord and biweekly calls) TOC: 00:00:00 Introduction: AGI and Cyberanimism 00:03:57 The Nature of Consciousness 00:08:46 Aristotle's Concepts of Mind and Consciousness 00:13:23 The Hard Problem of Consciousness 00:16:17 Functional Definition of Consciousness 00:20:24 Comparing LLMs and Human Consciousness 00:26:52 Testing for Consciousness in AI Systems 00:30:00 Animism and Software Agents in Nature 00:37:02 Plant Consciousness and Ecosystem Intelligence 00:40:36 The California Institute for Machine Consciousness 00:44:52 Ethics of Conscious AI and Suffering 00:46:29 Philosophical Perspectives on Consciousness 00:49:55 Q&A: Formalisms for Conscious Systems 00:53:27 Coherence, Self-Organization, and Compute Resources YT version (very high quality, filmed by us live) https://youtu.be/34VOI_oo-qM Refs: Aristotle's work on the soul and consciousness Richard Dawkins' work on genes and evolution Gerald Edelman's concept of Neural Darwinism Thomas Metzinger's book "Being No One" Yoshua Bengio's concept of the "consciousness prior" Stuart Hameroff's theories on microtubules and consciousness Christof Koch's work on consciousness Daniel Dennett's "Cartesian Theater" concept Giulio Tononi's Integrated Information Theory Mike Levin's work on organismal intelligence The concept of animism in various cultures Freud's model of the mind Buddhist perspectives on consciousness and meditation The Genesis creation narrative (for its metaphorical interpretation) California Institute for Machine Consciousness
8/21/2024 57 minutes, 21 seconds
Episode Artwork

Gary Marcus' keynote at AGI-24

Prof Gary Marcus revisited his keynote from AGI-21, noting that many of the issues he highlighted then are still relevant today despite significant advances in AI. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Gary Marcus criticized current large language models (LLMs) and generative AI for their unreliability, tendency to hallucinate, and inability to truly understand concepts. Marcus argued that the AI field is experiencing diminishing returns with current approaches, particularly the "scaling hypothesis" that simply adding more data and compute will lead to AGI. He advocated for a hybrid approach to AI that combines deep learning with symbolic AI, emphasizing the need for systems with deeper conceptual understanding. Marcus highlighted the importance of developing AI with innate understanding of concepts like space, time, and causality. He expressed concern about the moral decline in Silicon Valley and the rush to deploy potentially harmful AI technologies without adequate safeguards. Marcus predicted a possible upcoming "AI winter" due to inflated valuations, lack of profitability, and overhyped promises in the industry. He stressed the need for better regulation of AI, including transparency in training data, full disclosure of testing, and independent auditing of AI systems. Marcus proposed the creation of national and global AI agencies to oversee the development and deployment of AI technologies. He concluded by emphasizing the importance of interdisciplinary collaboration, focusing on robust AI with deep understanding, and implementing smart, agile governance for AI and AGI. YT Version (very high quality filmed) https://youtu.be/91SK90SahHc Pre-order Gary's new book here: Taming Silicon Valley: How We Can Ensure That AI Works for Us https://amzn.to/4fO46pY Filmed at the AGI-24 conference: https://agi-conf.org/2024/ TOC: 00:00:00 Introduction 00:02:34 Introduction by Ben G 00:05:17 Gary Marcus begins talk 00:07:38 Critiquing current state of AI 00:12:21 Lack of progress on key AI challenges 00:16:05 Continued reliability issues with AI 00:19:54 Economic challenges for AI industry 00:25:11 Need for hybrid AI approaches 00:29:58 Moral decline in Silicon Valley 00:34:59 Risks of current generative AI 00:40:43 Need for AI regulation and governance 00:49:21 Concluding thoughts 00:54:38 Q&A: Cycles of AI hype and winters 01:00:10 Predicting a potential AI winter 01:02:46 Discussion on interdisciplinary approach 01:05:46 Question on regulating AI 01:07:27 Ben G's perspective on AI winter
8/17/2024 1 hour, 12 minutes, 16 seconds
Episode Artwork

Is ChatGPT an N-gram model on steroids?

DeepMind Research Scientist / MIT scholar Dr. Timothy Nguyen discusses his recent paper on understanding transformers through n-gram statistics. Nguyen explains his approach to analyzing transformer behavior using a kind of "template matching" (N-grams), providing insights into how these models process and predict language. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Key points covered include: A method for describing transformer predictions using n-gram statistics without relying on internal mechanisms. The discovery of a technique to detect overfitting in large language models without using holdout sets. Observations on curriculum learning, showing how transformers progress from simpler to more complex rules during training. Discussion of distance measures used in the analysis, particularly the variational distance. Exploration of model sizes, training dynamics, and their impact on the results. We also touch on philosophical aspects of describing versus explaining AI behavior, and the challenges in understanding the abstractions formed by neural networks. Nguyen concludes by discussing potential future research directions, including attempts to convert descriptions of transformer behavior into explanations of internal mechanisms. Timothy Nguyen earned his B.S. and Ph.D. in mathematics from Caltech and MIT, respectively. He held positions as Research Assistant Professor at the Simons Center for Geometry and Physics (2011-2014) and Visiting Assistant Professor at Michigan State University (2014-2017). During this time, his research expanded into high-energy physics, focusing on mathematical problems in quantum field theory. His work notably provided a simplified and corrected formulation of perturbative path integrals. Since 2017, Nguyen has been working in industry, applying his expertise to machine learning. He is currently at DeepMind, where he contributes to both fundamental research and practical applications of deep learning to solve real-world problems. Refs: The Cartesian Cafe https://www.youtube.com/@TimothyNguyen Understanding Transformers via N-Gram Statistics https://www.researchgate.net/publication/382204056_Understanding_Transformers_via_N-Gram_Statistics TOC 00:00:00 Timothy Nguyen's background 00:02:50 Paper overview: transformers and n-gram statistics 00:04:55 Template matching and hash table approach 00:08:55 Comparing templates to transformer predictions 00:12:01 Describing vs explaining transformer behavior 00:15:36 Detecting overfitting without holdout sets 00:22:47 Curriculum learning in training 00:26:32 Distance measures in analysis 00:28:58 Model sizes and training dynamics 00:30:39 Future research directions 00:32:06 Conclusion and future topics
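As a rough illustration of the template-matching idea (a sketch under our own simplifying assumptions, not the paper's method or code), the snippet below builds a hash table of n-gram statistics from a toy corpus and uses it as a rule-based next-token predictor of the kind one could compare against a transformer's predictions. The corpus, the whitespace tokenizer, and the choice of n are placeholders.

```python
# Hedged sketch of n-gram "template matching": collect next-token counts for
# every length-(n-1) context in a corpus, then use them as a rule-based predictor.
from collections import Counter, defaultdict

def build_ngram_table(tokens, n=3):
    """Map each (n-1)-token context to a Counter of observed next tokens."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        table[context][tokens[i + n - 1]] += 1
    return table

def ngram_predict(table, context):
    """Return the most frequent continuation for a context, if it was ever seen."""
    counts = table.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

# Toy usage on whitespace tokens; a real comparison would use the model's own
# tokenizer and training corpus.
tokens = "the cat sat on the mat the cat sat on the rug".split()
table = build_ngram_table(tokens, n=3)
print(ngram_predict(table, ["the", "cat"]))   # -> 'sat'
```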
8/15/2024 32 minutes, 57 seconds
Episode Artwork

Jay Alammar on LLMs, RAG, and AI Engineering

Jay Alammar, renowned AI educator and researcher at Cohere, discusses the latest developments in large language models (LLMs) and their applications in industry. Jay shares his expertise on retrieval augmented generation (RAG), semantic search, and the future of AI architectures. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Cohere Command R model series: https://cohere.com/command Jay Alammar: https://x.com/jayalammar Buy Jay's new book here! Hands-On Large Language Models: Language Understanding and Generation https://amzn.to/4fzOUgh TOC: 00:00:00 Introduction to Jay Alammar and AI Education 00:01:47 Cohere's Approach to RAG and AI Re-ranking 00:07:15 Implementing AI in Enterprise: Challenges and Solutions 00:09:26 Jay's Role at Cohere and the Importance of Learning in Public 00:15:16 The Evolution of AI in Industry: From Deep Learning to LLMs 00:26:12 Expert Advice for Newcomers in Machine Learning 00:32:39 The Power of Semantic Search and Embeddings in AI Systems 00:37:59 Jay Alammar's Journey as an AI Educator and Visualizer 00:43:36 Visual Learning in AI: Making Complex Concepts Accessible 00:47:38 Strategies for Keeping Up with Rapid AI Advancements 00:49:12 The Future of Transformer Models and AI Architectures 00:51:40 Evolution of the Transformer: From 2017 to Present 00:54:19 Preview of Jay's Upcoming Book on Large Language Models Disclaimer: This is the fourth video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview. Note also that this combines several previously unpublished interviews with Jay into one: the earlier one, at Tim's house, was shot in Aug 2023, and the more recent one in Toronto in May 2024. Refs: The Illustrated Transformer https://jalammar.github.io/illustrated-transformer/ Attention Is All You Need https://arxiv.org/abs/1706.03762 The Unreasonable Effectiveness of Recurrent Neural Networks http://karpathy.github.io/2015/05/21/rnn-effectiveness/ Neural Networks in 11 Lines of Code https://iamtrask.github.io/2015/07/12/basic-python-network/ Understanding LSTM Networks (Chris Olah's blog post) http://colah.github.io/posts/2015-08-Understanding-LSTMs/ Luis Serrano's YouTube Channel https://www.youtube.com/channel/UCgBncpylJ1kiVaPyP-PZauQ Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks https://arxiv.org/abs/1908.10084 GPT (Generative Pre-trained Transformer) models https://jalammar.github.io/illustrated-gpt2/ https://openai.com/research/gpt-4 BERT (Bidirectional Encoder Representations from Transformers) https://jalammar.github.io/illustrated-bert/ https://arxiv.org/abs/1810.04805 RoPE (Rotary Positional Encoding) https://arxiv.org/abs/2104.09864 (Linked paper discussing rotary embeddings) Grouped Query Attention https://arxiv.org/pdf/2305.13245 RLHF (Reinforcement Learning from Human Feedback) https://openai.com/research/learning-from-human-preferences https://arxiv.org/abs/1706.03741 DPO (Direct Preference Optimization) https://arxiv.org/abs/2305.18290
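For listeners new to RAG, here is a minimal, self-contained sketch of the retrieval step (our own toy illustration, not Cohere's API or Jay's code): documents and a query are embedded, ranked by cosine similarity, and the top passages are prepended to the prompt sent to a generator. The bag-of-words "embedding" is a toy stand-in for a real embedding model.

```python
# Toy retrieval-augmented generation (RAG) sketch: rank documents by similarity
# to the query, then build a grounded prompt for the language model.
import numpy as np

DOCS = [
    "Cohere builds large language models for enterprise use.",
    "Retrieval augmented generation grounds answers in retrieved documents.",
    "Semantic search ranks passages by meaning rather than exact keywords.",
]

def embed(text, vocab):
    """Toy embedding: a normalized bag-of-words vector over a fixed vocabulary."""
    counts = np.array([text.lower().split().count(w) for w in vocab], dtype=float)
    norm = np.linalg.norm(counts)
    return counts / norm if norm else counts

vocab = sorted({w for d in DOCS for w in d.lower().split()})
doc_vecs = np.stack([embed(d, vocab) for d in DOCS])

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    scores = doc_vecs @ embed(query, vocab)
    return [DOCS[i] for i in np.argsort(-scores)[:k]]

query = "how does retrieval augmented generation work"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be passed to the generator LLM
```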
8/11/2024 57 minutes, 28 seconds
Episode Artwork

Can AI therapy be more effective than drugs?

Daniel Cahn, co-founder of Slingshot AI, joins us to discuss the potential of AI in therapy. Why are anxiety and depression affecting such a large share of the population? To what extent are these real categories? Why is mental health getting worse? How often do you want an AI to agree with you? What are the ethics of persuasive AI? You will discover all of this in the conversation. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Daniel Cahn (who is also hiring ML engineers by the way!) https://x.com/thecahnartist?lang=en / cahnd https://thinkingmachinespodcast.com/ TOC: 00:00:00 Intro 00:01:56 Therapy effectiveness vs drugs and societal implications 00:04:02 Mental health categories: Iatrogenesis and social constructs 00:10:19 Psychiatric treatment models and cognitive perspectives 00:13:30 AI design and human-like interactions: Intentionality debates 00:20:04 AI in therapy: Ethics, anthropomorphism, and loneliness mitigation 00:28:13 Therapy efficacy: Neuroplasticity, suffering, and AI placebos 00:33:29 AI's impact on human agency and cognitive modeling 00:41:17 Social media's effects on brain structure and behavior 00:50:46 AI ethics: Altering values and free will considerations 01:00:00 Work value perception and personal identity formation 01:13:37 Free will, agency, and mutable personal identity in therapy 01:24:27 AI in healthcare: Challenges, ethics, and therapy improvements 01:53:25 AI development: Societal impacts and cultural implications Full references in the YT video description: https://www.youtube.com/watch?v=7hwX6OZyNC0 (and baked into mp3 metadata)
8/8/2024 2 hours, 14 minutes, 7 seconds
Episode Artwork

Prof. Subbarao Kambhampati - LLMs don't reason, they memorize (ICML2024 2/13)

Prof. Subbarao Kambhampati argues that while LLMs are impressive and useful tools, especially for creative tasks, they have fundamental limitations in logical reasoning and cannot provide guarantees about the correctness of their outputs. He advocates for hybrid approaches that combine LLMs with external verification systems. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. TOC (sorry, the ones baked into the MP3 were wrong due to an LLM hallucination!) [00:00:00] Intro [00:02:06] Bio [00:03:02] LLMs are n-gram models on steroids [00:07:26] Is natural language a formal language? [00:08:34] Natural language is formal? [00:11:01] Do LLMs reason? [00:19:13] Definition of reasoning [00:31:40] Creativity in reasoning [00:50:27] Chollet's ARC challenge [01:01:31] Can we reason without verification? [01:10:00] LLMs can't solve some tasks [01:19:07] LLM Modulo framework [01:29:26] Future trends of architecture [01:34:48] Future research directions Youtube version: https://www.youtube.com/watch?v=y1WnHpedi2A Refs: (we didn't have space for URLs here, check YT video description instead) Can LLMs Really Reason and Plan? On the Planning Abilities of Large Language Models : A Critical Investigation Chain of Thoughtlessness? An Analysis of CoT in Planning On the Self-Verification Limitations of Large Language Models on Reasoning and Planning Tasks LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve "Task Success" is not Enough Partition function (number theory) (Srinivasa Ramanujan and G.H. Hardy's work) Poincaré conjecture Gödel's incompleteness theorems ROT13 (Rotate13, "rotate by 13 places") A Mathematical Theory of Communication (C. E. SHANNON) Sparks of AGI Kambhampati thesis on speech recognition (1983) PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change Explainable human-AI interaction Tree of Thoughts On the Measure of Intelligence (ARC Challenge) Getting 50% (SoTA) on ARC-AGI with GPT-4o (Ryan Greenblatt ARC solution) PROGRAMS WITH COMMON SENSE (John McCarthy) - "AI should be an advice taker program" Original chain of thought paper ICAPS 2024 Keynote: Dale Schuurmans on "Computing and Planning with Large Generative Models" (COT) The Hardware Lottery (Hooker) A Path Towards Autonomous Machine Intelligence (JEPA/LeCun) AlphaGeometry FunSearch Emergent Abilities of Large Language Models Language models are not naysayers (Negation in LLMs) The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A" Embracing negative results
7/29/2024 1 hour, 42 minutes, 27 seconds
Episode Artwork

Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)

How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don't worry too much about x-risk from alien invasions. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at brave.com/api. Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. Kapoor has previously worked on AI in both industry and academia, with experience at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. Notably, Kapoor was included in TIME's inaugural list of the 100 most influential people in AI. Sayash Kapoor https://x.com/sayashk https://www.cs.princeton.edu/~sayashk/ Arvind Narayanan (other half of the AI Snake Oil duo) https://x.com/random_walker AI existential risk probabilities are too unreliable to inform policy https://www.aisnakeoil.com/p/ai-existential-risk-probabilities Pre-order AI Snake Oil Book https://amzn.to/4fq2HGb AI Snake Oil blog https://www.aisnakeoil.com/ AI Agents That Matter https://arxiv.org/abs/2407.01502 Shortcut learning in deep neural networks https://www.semanticscholar.org/paper/Shortcut-learning-in-deep-neural-networks-Geirhos-Jacobsen/1b04936c2599e59b120f743fbb30df2eed3fd782 77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/ TOC: 00:00:00 Intro 00:01:57 How seriously should we take Xrisk threat? 00:02:55 Risk too unreliable to inform policy 00:10:20 Overinflated risks 00:12:05 Perils of utility maximisation 00:13:55 Scaling vs airplane speeds 00:17:31 Shift to smaller models? 00:19:08 Commercial LLM ecosystem 00:22:10 Synthetic data 00:24:09 Is AI complexifying our jobs? 00:25:50 Does ChatGPT make us dumber or smarter? 00:26:55 Are AI Agents overhyped? 00:28:12 Simple vs complex baselines 00:30:00 Cost tradeoff in agent design 00:32:30 Model eval vs downstream perf 00:36:49 Shortcuts in metrics 00:40:09 Standardisation of agent evals 00:41:21 Humans in the loop 00:43:54 Levels of agent generality 00:47:25 ARC challenge
7/28/2024 49 minutes, 42 seconds
Episode Artwork

Sara Hooker - Why US AI Act Compute Thresholds Are Misguided

Sara Hooker is VP of Research at Cohere and leader of Cohere for AI. We discuss her recent paper critiquing the use of compute thresholds, measured in FLOPs (floating point operations), as an AI governance strategy. We explore why this approach, recently adopted in both US and EU AI policies, may be problematic and oversimplified. Sara explains the limitations of using raw computational power as a measure of AI capability or risk, and discusses the complex relationship between compute, data, and model architecture. Equally important, we go into Sara's work on "The AI Language Gap." This research highlights the challenges and inequalities in developing AI systems that work across multiple languages. Sara discusses how current AI models, predominantly trained on English and a handful of high-resource languages, fail to serve the linguistic diversity of our global population. We explore the technical, ethical, and societal implications of this gap, and discuss potential solutions for creating more inclusive and representative AI systems. We broadly discuss the relationship between language, culture, and AI capabilities, as well as the ethical considerations in AI development and deployment. YT Version: https://youtu.be/dBZp47999Ko TOC: [00:00:00] Intro [00:02:12] FLOPS paper [00:26:42] Hardware lottery [00:30:22] The Language gap [00:33:25] Safety [00:38:31] Emergent [00:41:23] Creativity [00:43:40] Long tail [00:44:26] LLMs and society [00:45:36] Model bias [00:48:51] Language and capabilities [00:52:27] Ethical frameworks and RLHF Sara Hooker https://www.sarahooker.me/ https://www.linkedin.com/in/sararosehooker/ https://scholar.google.com/citations?user=2xy6h3sAAAAJ&hl=en https://x.com/sarahookr Interviewer: Tim Scarfe Refs The AI Language gap https://cohere.com/research/papers/the-AI-language-gap.pdf On the Limitations of Compute Thresholds as a Governance Strategy. https://arxiv.org/pdf/2407.05694v1 The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm https://arxiv.org/pdf/2406.18682 Cohere Aya https://cohere.com/research/aya RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs https://arxiv.org/pdf/2407.02552 Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs https://arxiv.org/pdf/2402.14740 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ EU AI Act https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf The bitter lesson http://www.incompleteideas.net/IncIdeas/BitterLesson.html Neel Nanda interview https://www.youtube.com/watch?v=_Ygf0GnlwmY Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet https://transformer-circuits.pub/2024/scaling-monosemanticity/ Chollet's ARC challenge https://github.com/fchollet/ARC-AGI Ryan Greenblatt on ARC https://www.youtube.com/watch?v=z9j3wB1RRGA Disclaimer: This is the third video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview.
7/18/2024 1 hour, 5 minutes, 41 seconds
Episode Artwork

Prof. Murray Shanahan - Machines Don't Think Like Us

Murray Shanahan is a professor of Cognitive Robotics at Imperial College London and a senior research scientist at DeepMind. He challenges our assumptions about AI consciousness and urges us to rethink how we talk about machine intelligence. We explore the dangers of anthropomorphizing AI, the limitations of current language in describing AI capabilities, and the fascinating intersection of philosophy and artificial intelligence. Show notes and full references: https://docs.google.com/document/d/1ICtBI574W-xGi8Z2ZtUNeKWiOiGZ_DRsp9EnyYAISws/edit?usp=sharing Prof Murray Shanahan: https://www.doc.ic.ac.uk/~mpsha/ (look at his selected publications) https://scholar.google.co.uk/citations?user=00bnGpAAAAAJ&hl=en https://en.wikipedia.org/wiki/Murray_Shanahan https://x.com/mpshanahan Interviewer: Dr. Tim Scarfe Refs (links in the Google doc linked above): Role play with large language models Waluigi effect "Conscious Exotica" - Paper by Murray Shanahan (2016) "Simulators" - Article by Janis from LessWrong "Embodiment and the Inner Life" - Book by Murray Shanahan (2010) "The Technological Singularity" - Book by Murray Shanahan (2015) "Simulacra as Conscious Exotica" - Paper by Murray Shanahan (newer paper of the original focussed on LLMs) A recent paper by Anthropic on using autoencoders to find features in language models (referring to the "Scaling Monosemanticity" paper) Work by Peter Godfrey-Smith on octopus consciousness "Metaphors We Live By" - Book by George Lakoff (1980s) Work by Aaron Sloman on the concept of "space of possible minds" (1984 article mentioned) Wittgenstein's "Philosophical Investigations" (posthumously published) Daniel Dennett's work on the "intentional stance" Alan Turing's original paper on the Turing Test (1950) Thomas Nagel's paper "What is it like to be a bat?" (1974) John Searle's Chinese Room Argument (mentioned but not detailed) Work by Richard Evans on tackling reasoning problems Claude Shannon's quote on knowledge and control "Are We Bodies or Souls?" - Book by Richard Swinburne Reference to work by Ethan Perez and others at Anthropic on potential deceptive behavior in language models Reference to a paper by Murray Shanahan and Antonia Creswell on the "selection inference framework" Mention of work by Francois Chollet, particularly the ARC (Abstraction and Reasoning Corpus) challenge Reference to Elizabeth Spelke's work on core knowledge in infants Mention of Karl Friston's work on planning as inference (active inference) The film "Ex Machina" - Murray Shanahan was the scientific advisor "The Waluigi Effect" Anthropic's constitutional AI approach Loom system by Lara Reynolds and Kyle McDonald for visualizing conversation trees DeepMind's AlphaGo (mentioned multiple times as an example) Mention of the "Golden Gate Claude" experiment Reference to an interview Tim Scarfe conducted with University of Toronto students about self-attention controllability theorem Mention of an interview with Irina Rish Reference to an interview Tim Scarfe conducted with Daniel Dennett Reference to an interview with Maria Santa Caterina Mention of an interview with Philip Goff Nick Chater and Martin Christianson's book ("The Language Game: How Improvisation Created Language and Changed the World") Peter Singer's work from 1975 on ascribing moral status to conscious beings Demis Hassabis' discussion on the "ladder of creativity" Reference to B.F. Skinner and behaviorism
7/14/2024 2 hours, 15 minutes, 22 seconds
Episode Artwork

David Chalmers - Reality+

In the coming decades, the technology that enables virtual and augmented reality will improve beyond recognition. Within a century, world-renowned philosopher David J. Chalmers predicts, we will have virtual worlds that are impossible to distinguish from non-virtual worlds. But is virtual reality just escapism? In a highly original work of 'technophilosophy', Chalmers argues categorically, no: virtual reality is genuine reality. Virtual worlds are not second-class worlds. We can live a meaningful life in virtual reality - and increasingly, we will. What is reality, anyway? How can we lead a good life? Is there a god? How do we know there's an external world - and how do we know we're not living in a computer simulation? In Reality+, Chalmers conducts a grand tour of philosophy, using cutting-edge technology to provide invigorating new answers to age-old questions. David J. Chalmers is an Australian philosopher and cognitive scientist specializing in the areas of philosophy of mind and philosophy of language. He is Professor of Philosophy and Neural Science at New York University, as well as co-director of NYU's Center for Mind, Brain, and Consciousness. Chalmers is best known for his work on consciousness, including his formulation of the "hard problem of consciousness." Reality+: Virtual Worlds and the Problems of Philosophy https://amzn.to/3RYyGD2 https://consc.net/ https://x.com/davidchalmers42 00:00:00 Reality+ Intro 00:12:02 GPT conscious? 10/10 00:14:19 The consciousness processor thought experiment (11/10) 00:20:34 Intelligence and Consciousness entangled? 10/10 00:22:44 Karl Friston / Meta Problem 10/10 00:29:05 Knowledge argument / subjective experience (6/10) 00:32:34 Emergence 11/10 (best chapter) 00:42:45 Working with Douglas Hofstadter 10/10 00:46:14 Intelligence is analogy making? 10/10 00:50:47 Intelligence explosion 8/10 00:58:44 Hypercomputation 10/10 01:09:44 Who designed the designer? (7/10) 01:13:57 Experience machine (7/10)
7/8/2024 1 hour, 17 minutes, 57 seconds
Episode Artwork

Ryan Greenblatt - Solving ARC with GPT4o

Ryan Greenblatt from Redwood Research recently published "Getting 50% on ARC-AGI with GPT-4o," where he used GPT-4o to reach a state-of-the-art score on the public evaluation set of Francois Chollet's ARC Challenge by generating many Python programs. Sponsor: Sign up to Kalshi here https://kalshi.onelink.me/1r91/mlst -- the first 500 traders who deposit $100 will get a free $20 credit! Important disclaimer - In case it's not obvious - this is basically gambling and a *high risk* activity - only trade what you can afford to lose. We discuss: - Ryan's unique approach to solving the ARC Challenge and achieving impressive results. - The strengths and weaknesses of current AI models. - How AI and humans differ in learning and reasoning. - Combining various techniques to create smarter AI systems. - The potential risks and future advancements in AI, including the idea of agentic AI. https://x.com/RyanPGreenblatt https://www.redwoodresearch.org/ Refs: Getting 50% (SoTA) on ARC-AGI with GPT-4o [Ryan Greenblatt] https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt On the Measure of Intelligence [Chollet] https://arxiv.org/abs/1911.01547 Connectionism and Cognitive Architecture: A Critical Analysis [Jerry A. Fodor and Zenon W. Pylyshyn] https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf Software 2.0 [Andrej Karpathy] https://karpathy.medium.com/software-2-0-a64152b37c35 Why Greatness Cannot Be Planned: The Myth of the Objective [Kenneth Stanley] https://amzn.to/3Wfy2E0 Biographical account of Terence Tao's mathematical development. [M.A.(KEN) CLEMENTS] https://gwern.net/doc/iq/high/smpy/1984-clements.pdf Model Evaluation and Threat Research (METR) https://metr.org/ Why Tool AIs Want to Be Agent AIs https://gwern.net/tool-ai Simulators - Janus https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators AI Control: Improving Safety Despite Intentional Subversion https://www.lesswrong.com/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion https://arxiv.org/abs/2312.06942 What a Compute-Centric Framework Says About Takeoff Speeds https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/ Global GDP over the long run https://ourworldindata.org/grapher/global-gdp-over-the-long-run?yScale=log Safety Cases: How to Justify the Safety of Advanced AI Systems https://arxiv.org/abs/2403.10462 The Danger of a "Safety Case" http://sunnyday.mit.edu/The-Danger-of-a-Safety-Case.pdf The Future Of Work Looks Like A UPS Truck (~02:15:50) https://www.npr.org/sections/money/2014/05/02/308640135/episode-536-the-future-of-work-looks-like-a-ups-truck SWE-bench https://www.swebench.com/ Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model https://arxiv.org/pdf/2201.11990 Algorithmic Progress in Language Models https://epochai.org/blog/algorithmic-progress-in-language-models
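The flavour of the approach, sampling many candidate programs and keeping those consistent with the training pairs, can be sketched in a few lines. This is a hedged illustration rather than Ryan's actual pipeline: ask_llm_for_program is a hypothetical stand-in for a call to a code-writing model such as GPT-4o, and the task shown is a toy.

```python
# Sketch of "generate many programs, keep the ones that fit": a placeholder LLM
# proposes candidate transform() functions, and we keep one that reproduces the
# training pairs before applying it to the test input.
def ask_llm_for_program(train_pairs, n_candidates=3):
    # Placeholder: a real system would prompt GPT-4o with the training pairs and
    # sample thousands of candidate programs.
    return [
        "def transform(grid): return grid",
        "def transform(grid): return [row[::-1] for row in grid]",
        "def transform(grid): return grid[::-1]",
    ][:n_candidates]

def solve_task(train_pairs, test_input, n_candidates=3):
    """Keep candidate programs that reproduce every training pair, then apply one."""
    for source in ask_llm_for_program(train_pairs, n_candidates):
        namespace = {}
        exec(source, namespace)                     # compile the candidate program
        transform = namespace["transform"]
        if all(transform(x) == y for x, y in train_pairs):
            return transform(test_input)            # first program consistent with training
    return None

# Toy task: the hidden rule is "mirror each row".
train_pairs = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(solve_task(train_pairs, [[5, 6], [7, 8]]))    # -> [[6, 5], [8, 7]]
```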
7/6/2024 2 hours, 18 minutes, 1 second
Episode Artwork

Aidan Gomez - CEO of Cohere (AI's 'Inner Monologue' – Crucial for Reasoning)

Aidan Gomez, CEO of Cohere, reveals how they're tackling AI hallucinations and improving reasoning abilities. He also explains why Cohere doesn't use any output from GPT-4 for training their models. Aidan shares his personal insights into the world of AI and LLMs and Cohere's unique approach to solving real-world business problems, and how their models are set apart from the competition. Aidan reveals how they are making major strides in AI technology, discussing everything from last mile customer engineering to the robustness of prompts and future architectures. He also touches on the broader implications of AI for society, including potential risks and the role of regulation. He discusses Cohere's guiding principles and the health of the startup scene. With a particular focus on enterprise applications, Aidan provides a rare look into the internal workings of Cohere and their vision for driving productivity and innovation. https://cohere.com/ https://x.com/aidangomez Check out Cohere's amazing new Command R* models here https://cohere.com/command Disclaimer: This is the second video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview.
6/29/2024 1 hour, 22 seconds
Episode Artwork

New "50%" ARC result and current winners interviewed

The ARC Challenge, created by Francois Chollet, tests how well AI systems can generalize from a few examples in a grid-based intelligence test. We interview the current winners of the ARC Challenge—Jack Cole, Mohamed Osman and their collaborator Michael Hodel. They discuss how they tackled ARC (Abstraction and Reasoning Corpus) using language models. We also discuss the new "50%" public set approach announced today from Redwood Research (Ryan Greenblatt). Jack and Mohamed explain their winning approach, which involves fine-tuning a language model on a large, specifically-generated dataset and then doing additional fine-tuning at test-time, a technique known in this context as "active inference". They use various strategies to represent the data for the language model and believe that with further improvements, the accuracy could reach above 50%. Michael talks about his work on generating new ARC-like tasks to help train the models. They also debate whether their methods stay true to the "spirit" of Chollet's measure of intelligence. Despite some concerns, they agree that their solutions are promising and adaptable for other similar problems. Note: Jack's team is still the current official winner at 33% on the private set. Ryan's entry is not on the private leaderboard or eligible. Chollet invented ARC in 2019 (not 2017 as stated). "Ryan's entry is not a new state of the art. We don't know exactly how well it does since it was only evaluated on 100 tasks from the evaluation set and does 50% on those, reportedly. Meanwhile Jack's team's (i.e. MindsAI's) solution does 54% on the entire eval set and it is seemingly possible to do 60-70% with an ensemble." Jack Cole: https://x.com/Jcole75Cole https://lab42.global/community-interview-jack-cole/ Mohamed Osman: Mohamed is looking to do a PhD in AI/ML, can you help him? Email: [email protected] https://www.linkedin.com/in/mohamedosman1905/ Michael Hodel: https://arxiv.org/pdf/2404.07353v1 https://www.linkedin.com/in/michael-hodel/ https://x.com/bayesilicon https://github.com/michaelhodel Getting 50% (SoTA) on ARC-AGI with GPT-4o - Ryan Greenblatt https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt Neural networks for abstraction and reasoning: Towards broad generalization in machines [Mikel Bober-Irizar, Soumya Banerjee] https://arxiv.org/pdf/2402.03507 Measure of intelligence: https://arxiv.org/abs/1911.01547 YT version: https://youtu.be/jSAT_RuJ_Cg
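Here is a hedged sketch of the test-time fine-tuning idea, i.e. running a few extra gradient steps on a task's demonstration pairs before predicting the test output. This is not the MindsAI code; the tiny linear model and random tensors stand in for a fine-tuned language model operating on serialized ARC grids.

```python
# Test-time fine-tuning sketch: start from pretrained weights, adapt briefly to
# one task's demonstration pairs, then predict that task's test output.
import torch
import torch.nn as nn

def test_time_finetune(model, demo_x, demo_y, steps=50, lr=1e-2):
    """Adapt a copy of the model to a single task's demonstration pairs."""
    adapted = nn.Linear(model.in_features, model.out_features)
    adapted.load_state_dict(model.state_dict())      # start from pretrained weights
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(adapted(demo_x), demo_y)
        loss.backward()
        opt.step()
    return adapted

pretrained = nn.Linear(4, 4)                          # stand-in for the base model
demo_x, demo_y = torch.randn(3, 4), torch.randn(3, 4) # a task's few demonstration pairs
adapted = test_time_finetune(pretrained, demo_x, demo_y)
test_input = torch.randn(1, 4)
print(adapted(test_input))                            # prediction from the adapted model
```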
6/18/2024 2 hours, 14 minutes, 17 seconds
Episode Artwork

Cohere co-founder Nick Frosst on building LLM apps for business

Nick Frosst, co-founder of Cohere, on the future of LLMs and AGI. Learn how Cohere is solving real problems for business with their new AI models. This is the first podcast from our new Cohere partnership! Nick talks about his journey at Google Brain, working with AI legends like Geoff Hinton, and the amazing things his company, Cohere, is doing. From creating the most useful language models for businesses to making tools for developers, Nick shares a lot of interesting insights. He even talks about his band, Good Kid! Nick said that RAG is one of the best features of Cohere's new Command R* models. We are about to release a deep-dive on RAG with Patrick Lewis from Cohere, keep an eye out for that - he explains why their models are specifically optimised for RAG use cases. Learn more about Cohere Command R* models here: https://cohere.com/command https://github.com/cohere-ai/cohere-toolkit Nick's band Good Kid: https://goodkidofficial.com/ Nick on Twitter: https://x.com/nickfrosst Disclaimer: We are in a partnership with Cohere to release content for them. We were not told what to say in the interview, and didn't edit anything out from the interview. We are currently planning to release 2 shows per month under the partnership about their AI platform, research and strategy.
6/16/2024 41 minutes, 25 seconds
Episode Artwork

What’s the Magic Word? A Control Theory of LLM Prompting.

These two scientists have mapped out the insides, or "reachable space", of a language model using control theory, and what they discovered was extremely surprising. Please support us on Patreon to get access to the private Discord server, bi-weekly calls, early access and ad-free listening. https://patreon.com/mlst YT version: https://youtu.be/Bpgloy1dDn0 Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto join us to discuss their groundbreaking paper, "What's the Magic Word? A Control Theory of LLM Prompting" (the main theorem on self-attention controllability was developed in collaboration with Dr. Shi-Zhuo Looi from Caltech). They frame LLM systems as discrete stochastic dynamical systems. This means they look at LLMs in a structured way, similar to how we analyze control systems in engineering. They explore the "reachable set" of outputs for an LLM. Essentially, this is the range of possible outputs the model can generate from a given starting point when influenced by different prompts. The research highlights that prompt engineering, or optimizing the input tokens, can significantly influence LLM outputs. They show that even short prompts can drastically alter the likelihood of specific outputs. Aman and Cameron's work might be a boon for understanding and improving LLMs. They suggest that a deeper exploration of control theory concepts could lead to more reliable and capable language models. We dropped an additional, more technical video on the research on our Twitter account here: https://x.com/MLStreetTalk/status/1795093759471890606 Additional 20 minutes of unreleased footage on our Patreon here: https://www.patreon.com/posts/whats-magic-word-104922629 What's the Magic Word? A Control Theory of LLM Prompting (Aman Bhargava, Cameron Witkowski, Manav Shah, Matt Thomson) https://arxiv.org/abs/2310.04444 LLM Control Theory Seminar (April 2024) https://www.youtube.com/watch?v=9QtS9sVBFM0 Society for the pursuit of AGI (Cameron founded it) https://agisociety.mydurable.com/ Roger Federer demo http://conway.languagegame.io/inference Neural Cellular Automata, Active Inference, and the Mystery of Biological Computation (Aman) https://aman-bhargava.com/ai/neuro/neuromorphic/2024/03/25/nca-do-active-inference.html Aman and Cameron also want to thank Dr. Shi-Zhuo Looi and Prof. Matt Thomson from Caltech for help and advice on their research. (https://thomsonlab.caltech.edu/ and https://pma.caltech.edu/people/looi-shi-zhuo) https://x.com/ABhargava2000 https://x.com/witkowski_cam
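To make the "reachable set" idea concrete, here is a toy sketch (our own illustration, not the paper's code): we enumerate short control prompts and record which ones steer a stand-in model's next-token prediction to a chosen target. The association-table "model" and tiny vocabulary are placeholders for a real LLM's next-token distribution.

```python
# Toy reachability probe: which short control prompts, prepended to an imposed
# state sequence, steer the model's next-token prediction to a chosen target?
from itertools import product

PROMPT_VOCAB = ["cat", "dog", "loud", "soft"]
ASSOC = {  # toy association scores, a stand-in for an LLM's logits
    "dog": {"barks": 2.0, "meows": 0.1},
    "cat": {"meows": 3.0, "barks": 0.2},
    "loud": {"barks": 1.0},
    "soft": {"meows": 1.0},
}

def next_token(tokens):
    """Toy 'LLM': score continuations by summing associations over all input tokens."""
    scores = {}
    for t in tokens:
        for cont, s in ASSOC.get(t, {}).items():
            scores[cont] = scores.get(cont, 0.0) + s
    return max(scores, key=scores.get) if scores else "the"

def reachable_prompts(target, state_tokens, max_len=2):
    """Enumerate control prompts (up to max_len tokens) that make the model emit
    `target` after state_tokens, i.e. one slice of the reachable set."""
    hits = []
    for length in range(1, max_len + 1):
        for prompt in product(PROMPT_VOCAB, repeat=length):
            if next_token(list(prompt) + state_tokens) == target:
                hits.append(" ".join(prompt))
    return hits

state = ["the", "dog"]                     # on its own, the model predicts "barks"
print(next_token(state))                   # -> 'barks'
print(reachable_prompts("meows", state))   # control prompts that flip it to 'meows'
```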
6/5/2024 1 hour, 17 minutes, 7 seconds
Episode Artwork

CAN MACHINES REPLACE US? (AI vs Humanity) - Maria Santacaterina

Maria Santacaterina, with her background in the humanities, brings a critical perspective on the current state and future implications of AI technology, its impact on society, and the nature of human intelligence and creativity. She emphasizes that despite technological advancements, AI lacks fundamental human traits such as consciousness, empathy, intuition, and the ability to engage in genuine creative processes. Maria argues that AI, at its core, processes data but does not have the capability to understand or generate new, intrinsic meaning or ideas as humans do. Throughout the conversation, Maria highlights her concern about the overreliance on AI in critical sectors such as healthcare, the justice system, and business. She stresses that while AI can serve as a tool, it should not replace human judgment and decision-making. Maria points out that AI systems often operate on past data, which may lead to outdated or incorrect decisions if not carefully managed. The discussion also touches upon the concept of "adaptive resilience", which Maria describes in her book. She explains adaptive resilience as the capacity for individuals and enterprises to evolve and thrive amidst challenges by leveraging technology responsibly, without undermining human values and capabilities. A significant portion of the conversation focussed on ethical considerations surrounding AI. Tim and Maria agree that there's a pressing need for strong governance and ethical frameworks to guide AI development and deployment. They discuss how AI, without proper ethical considerations, risks exacerbating issues like privacy invasion, misinformation, and unintended discrimination. Maria is skeptical about claims of achieving Artificial General Intelligence (AGI) or a technological singularity where machines surpass human intelligence in all aspects. She argues that such scenarios neglect the complex, dynamic nature of human intelligence and consciousness, which cannot be fully replicated or replaced by machines. Tim and Maria discuss the importance of keeping human agency and creativity at the forefront of technology development. Maria asserts that efforts to automate or standardize complex human actions and decisions are misguided and could lead to dehumanizing outcomes. They both advocate for using AI as an aid to enhance human capabilities rather than a substitute. In closing, Maria encourages a balanced approach to AI adoption, urging stakeholders to prioritize human well-being, ethical standards, and societal benefit above mere technological advancement. The conversation ends with Maria pointing people to her book for more in-depth analysis and thoughts on the future interaction between humans and technology. Buy Maria's book here: https://amzn.to/4avF6kq https://www.linkedin.com/in/mariasantacaterina TOC 00:00:00 - Intro to Book 00:03:23 - What Life Is 00:10:10 - Agency 00:18:04 - Tech and Society 00:21:51 - System 1 and 2 00:22:59 - We Are Being Pigeonholed 00:30:22 - Agency vs Autonomy 00:36:37 - Explanations 00:40:24 - AI Reductionism 00:49:50 - How Are Humans Intelligent 01:00:22 - Semantics 01:01:53 - Emotive AI and Pavlovian Dogs 01:04:05 - Technology, Social Media and Organisation 01:18:34 - Systems Are Not That Automated 01:19:33 - Hiring 01:22:34 - Subjectivity in Orgs 01:32:28 - The AGI Delusion 01:45:37 - GPT-laziness Syndrome 01:54:58 - Diversity Preservation 01:58:24 - Ethics 02:11:43 - Moral Realism 02:16:17 - Utopia 02:18:02 - Reciprocity 02:20:52 - Tyranny of Categorisation
5/6/2024 2 hours, 31 minutes, 33 seconds
Episode Artwork

Dr. Thomas Parr - Active Inference Book

Thomas Parr and his collaborators wrote a book titled "Active Inference: The Free Energy Principle in Mind, Brain and Behavior" which introduces Active Inference from both a high-level conceptual perspective and a low-level mechanistic, mathematical perspective. Active inference, developed by the legendary neuroscientist Prof. Karl Friston, is a unifying mathematical framework which frames living systems as agents which minimize surprise and free energy in order to resist entropy and persist over time. It unifies various perspectives from physics, biology, statistics, and psychology, and allows us to explore deep questions about agency, biology, causality, modelling, and consciousness. Buy Active Inference: The Free Energy Principle in Mind, Brain, and Behavior https://amzn.to/4dj0iMj YT version: https://youtu.be/lbb-Si5wa_o Please support us on Patreon to get access to the private Discord server, bi-weekly calls, early access and ad-free listening. https://patreon.com/mlst Chapters should be embedded in the mp3, let me know if there are any issues
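For reference, one standard way of writing the variational free energy that active inference agents are described as minimizing is sketched below; the notation follows common presentations of the framework and may differ from the book's own conventions.

```latex
% q(s): the agent's approximate posterior over hidden states s
% p(o, s): the agent's generative model of observations o and hidden states s
F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     = D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] - \ln p(o)
```

Because the KL term is non-negative, F upper-bounds surprise (the negative log evidence), which is why minimizing free energy is described as resisting surprise.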
5/1/2024 1 hour, 37 minutes, 9 seconds
Episode Artwork

Connor Leahy - e/acc, AGI and the future.

Connor is the CEO of Conjecture and one of the most famous names in the AI alignment movement. This is the "behind the scenes footage" and bonus Patreon interviews from the day of the Beff Jezos debate, including an interview with Daniel Clothiaux. It's a great insight into Connor's philosophy. At the end there is an unreleased additional interview with Beff. Support MLST: Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supports get private discord access, biweekly calls, very early-access + exclusive content and lots more. https://patreon.com/mlst Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA If you would like to sponsor us, so we can tell your story - reach out on mlstreettalk at gmail Topics: Externalized cognition and the role of society and culture in human intelligence The potential for AI systems to develop agency and autonomy The future of AGI as a complex mixture of various components The concept of agency and its relationship to power The importance of coherence in AI systems The balance between coherence and variance in exploring potential upsides The role of dynamic, competent, and incorruptible institutions in handling risks and developing technology Concerns about AI widening the gap between the haves and have-nots The concept of equal access to opportunity and maintaining dynamism in the system Leahy's perspective on life as a process that "rides entropy" The importance of distinguishing between epistemological, decision-theoretic, and aesthetic aspects of morality (inc ref to Hume's Guillotine) The concept of continuous agency and the idea that the first AGI will be a messy admixture of various components The potential for AI systems to become more physically embedded in the future The challenges of aligning AI systems and the societal impacts of AI technologies like ChatGPT and Bing The importance of humility in the face of complexity when considering the future of AI and its societal implications Disclaimer: this video is not an endorsement of e/acc or AGI agential existential risk from us - the hosts of MLST consider both of these views to be quite extreme. We seek diverse views on the channel. 00:00:00 Intro 00:00:56 Connor's Philosophy 00:03:53 Office Skit 00:05:08 Connor on e/acc and Beff 00:07:28 Intro to Daniel's Philosophy 00:08:35 Connor on Entropy, Life, and Morality 00:19:10 Connor on London 00:20:21 Connor Office Interview 00:20:46 Friston Patreon Preview 00:21:48 Why Are We So Dumb? 00:23:52 The Voice of the People, the Voice of God / Populism 00:26:35 Mimetics 00:30:03 Governance 00:33:19 Agency 00:40:25 Daniel Interview - Externalised Cognition, Bing GPT, AGI 00:56:29 Beff + Connor Bonus Patreons Interview
4/21/2024 1 hour, 19 minutes, 34 seconds
Episode Artwork

Prof. Chris Bishop's NEW Deep Learning Textbook!

Professor Chris Bishop is a Technical Fellow and Director at Microsoft Research AI4Science, in Cambridge. He is also Honorary Professor of Computer Science at the University of Edinburgh, and a Fellow of Darwin College, Cambridge. In 2004, he was elected Fellow of the Royal Academy of Engineering, in 2007 he was elected Fellow of the Royal Society of Edinburgh, and in 2017 he was elected Fellow of the Royal Society. Chris was a founding member of the UK AI Council, and in 2019 he was appointed to the Prime Minister’s Council for Science and Technology. At Microsoft Research, Chris oversees a global portfolio of industrial research and development, with a strong focus on machine learning and the natural sciences. Chris obtained a BA in Physics from Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory. Chris's contributions to the field of machine learning have been truly remarkable. He has authored (what is arguably) the original textbook in the field - 'Pattern Recognition and Machine Learning' (PRML) which has served as an essential reference for countless students and researchers around the world, and that was his second textbook after his highly acclaimed first textbook Neural Networks for Pattern Recognition. Recently, Chris has co-authored a new book with his son, Hugh, titled 'Deep Learning: Foundations and Concepts.' This book aims to provide a comprehensive understanding of the key ideas and techniques underpinning the rapidly evolving field of deep learning. It covers both the foundational concepts and the latest advances, making it an invaluable resource for newcomers and experienced practitioners alike. Buy Chris' textbook here: https://amzn.to/3vvLcCh More about Prof. Chris Bishop: https://en.wikipedia.org/wiki/Christopher_Bishop https://www.microsoft.com/en-us/research/people/cmbishop/ Support MLST: Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supports get private discord access, biweekly calls, early-access + exclusive content and lots more. https://patreon.com/mlst Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA If you would like to sponsor us, so we can tell your story - reach out on mlstreettalk at gmail TOC: 00:00:00 - Intro to Chris 00:06:54 - Changing Landscape of AI 00:08:16 - Symbolism 00:09:32 - PRML 00:11:02 - Bayesian Approach 00:14:49 - Are NNs One Model or Many, Special vs General 00:20:04 - Can Language Models Be Creative 00:22:35 - Sparks of AGI 00:25:52 - Creativity Gap in LLMs 00:35:40 - New Deep Learning Book 00:39:01 - Favourite Chapters 00:44:11 - Probability Theory 00:45:42 - AI4Science 00:48:31 - Inductive Priors 00:58:52 - Drug Discovery 01:05:19 - Foundational Bias Models 01:07:46 - How Fundamental Is Our Physics Knowledge? 01:12:05 - Transformers 01:12:59 - Why Does Deep Learning Work? 01:16:59 - Inscrutability of NNs 01:18:01 - Example of Simulator 01:21:09 - Control
4/10/2024 1 hour, 22 minutes, 59 seconds
Episode Artwork

Philip Ball - How Life Works

Dr. Philip Ball is a freelance science writer. He just wrote a book called "How Life Works", discussing how the science of biology has advanced in the last 20 years. We focus on the concept of Agency in particular. He trained as a chemist at the University of Oxford, and as a physicist at the University of Bristol. He worked previously at Nature for over 20 years, first as an editor for physical sciences and then as a consultant editor. His writings on science for the popular press have covered topical issues ranging from cosmology to the future of molecular biology. YT: https://www.youtube.com/watch?v=n6nxUiqiz9I Transcript link on YT description Philip is the author of many popular books on science, including H2O: A Biography of Water, Bright Earth: The Invention of Colour, The Music Instinct and Curiosity: How Science Became Interested in Everything. His book Critical Mass won the 2005 Aventis Prize for Science Books, while Serving the Reich was shortlisted for the Royal Society Winton Science Book Prize in 2014. This is one of Tim's personal favourite MLST shows, so we have designated it a special edition. Enjoy! Buy Philip's book "How Life Works" here: https://amzn.to/3vSmNqp Support MLST: Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private discord access, biweekly calls, early-access + exclusive content and lots more. https://patreon.com/mlst Donate: https://www.paypal.com/donate/?hosted... If you would like to sponsor us, so we can tell your story - reach out on mlstreettalk at gmail
4/7/2024 · 2 hours, 9 minutes, 17 seconds

Dr. Paul Lessard - Categorical/Structured Deep Learning

Dr. Paul Lessard and his collaborators have written a paper on "Categorical Deep Learning and Algebraic Theory of Architectures". They aim to make neural networks more interpretable, composable and amenable to formal reasoning. The key is mathematical abstraction, as exemplified by category theory - using monads to develop a more principled, algebraic approach to structuring neural networks. We also discussed the limitations of current neural network architectures in terms of their ability to generalise and reason in a human-like way - in particular, their inability to do unbounded computation equivalent to a Turing machine. Paul expressed optimism that this is not a fundamental limitation, but an artefact of current architectures and training procedures. We talked about the power of abstraction - allowing us to focus on the essential structure while ignoring extraneous details - which can make certain problems more tractable to reason about. Paul sees category theory as providing a powerful "Lego set" for productively thinking about many practical problems. Towards the end, Paul gave an accessible introduction to some core concepts in category theory like categories, morphisms, functors, monads etc. We explained how these abstract constructs can capture essential patterns that arise across different domains of mathematics. Paul is optimistic about the potential of category theory and related mathematical abstractions to put AI and neural networks on a more robust conceptual foundation to enable interpretability and reasoning. However, significant theoretical and engineering challenges remain in realising this vision. Please support us on Patreon. We are entirely funded from Patreon donations right now. https://patreon.com/mlst If you would like to sponsor us so we can tell your story, reach out on mlstreettalk at gmail Links: Categorical Deep Learning: An Algebraic Theory of Architectures Bruno Gavranović, Paul Lessard, Andrew Dudzik, Tamara von Glehn, João G. M. Araújo, Petar Veličković Paper: https://categoricaldeeplearning.com/ Symbolica: https://twitter.com/symbolica https://www.symbolica.ai/ Dr. Paul Lessard (Principal Scientist - Symbolica) https://www.linkedin.com/in/paul-roy-lessard/ Interviewer: Dr. Tim Scarfe TOC: 00:00:00 - Intro 00:05:07 - What is the category paper all about 00:07:19 - Composition 00:10:42 - Abstract Algebra 00:23:01 - DSLs for machine learning 00:24:10 - Inscrutability 00:29:04 - Limitations with current NNs 00:30:41 - Generative code / NNs don't recurse 00:34:34 - NNs are not Turing machines (special edition) 00:53:09 - Abstraction 00:55:11 - Category theory objects 00:58:06 - Cat theory vs number theory 00:59:43 - Data and Code are one and the same 01:08:05 - Syntax and semantics 01:14:32 - Category DL elevator pitch 01:17:05 - Abstraction again 01:20:25 - Lego set for the universe 01:23:04 - Reasoning 01:28:05 - Category theory 101 01:37:42 - Monads 01:45:59 - Where to learn more cat theory
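To make the compositional picture above a little more concrete, here is a minimal illustrative sketch (ours, not from the paper - the names and shapes are made up): layers are treated as typed morphisms, composition is only defined when the types line up, and composition is associative by construction.

# Minimal sketch (not from the paper; names are illustrative): layers as morphisms
# with a domain and codomain, composed only when the "types" agree.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Morphism:
    dom: int                      # input dimension (domain object)
    cod: int                      # output dimension (codomain object)
    fn: Callable[[np.ndarray], np.ndarray]

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return self.fn(x)

def compose(g: Morphism, f: Morphism) -> Morphism:
    """g after f, defined only when cod(f) == dom(g)."""
    assert f.cod == g.dom, "shapes must agree for composition"
    return Morphism(f.dom, g.cod, lambda x: g(f(x)))

def linear_relu(n_in: int, n_out: int, rng) -> Morphism:
    W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)
    return Morphism(n_in, n_out, lambda x: np.maximum(W @ x, 0.0))

rng = np.random.default_rng(0)
f, g, h = linear_relu(4, 8, rng), linear_relu(8, 8, rng), linear_relu(8, 2, rng)
x = rng.normal(size=4)
# Associativity: (h after g) after f equals h after (g after f) on any input.
assert np.allclose(compose(h, compose(g, f))(x), compose(compose(h, g), f)(x))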
4/1/2024 · 1 hour, 49 minutes, 10 seconds

Can we build a generalist agent? Dr. Minqi Jiang and Dr. Marc Rigter

Dr. Minqi Jiang and Dr. Marc Rigter explain an innovative new method to make the intelligence of agents more general-purpose by training them to learn many worlds before their usual goal-directed training (known as "reinforcement learning"). Their new paper is called "Reward-free curricula for training robust world models" https://arxiv.org/pdf/2306.09205.pdf https://twitter.com/MinqiJiang https://twitter.com/MarcRigter Interviewer: Dr. Tim Scarfe Please support us on Patreon - Tim is now doing MLST full-time and taking a massive financial hit. If you love MLST and want this to continue, please show your support! In return you get access to shows very early, plus the private Discord and networking. https://patreon.com/mlst We are also looking for show sponsors - please get in touch if interested: mlstreettalk at gmail. MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778
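As a toy illustration of the general idea (ours, not the authors' method or code - the tabular worlds and random policy are made-up stand-ins): collect reward-free experience from many randomly generated worlds, fit a single next-state model, and then check how well it predicts in a world it has never seen.

# Toy sketch (not the paper's algorithm): reward-free exploration across many
# random "worlds", fitting one world model, evaluated on a held-out world.
import numpy as np

rng = np.random.default_rng(0)
S, A = 6, 3  # small tabular state and action spaces

def random_world():
    """A world is just a random transition tensor P[s, a, s']."""
    P = rng.random((S, A, S))
    return P / P.sum(axis=-1, keepdims=True)

def rollout(P, steps=200):
    """Reward-free exploration: random actions, record (s, a, s') transitions."""
    s, data = rng.integers(S), []
    for _ in range(steps):
        a = rng.integers(A)
        s_next = rng.choice(S, p=P[s, a])
        data.append((s, a, s_next))
        s = s_next
    return data

# Train one world model (transition counts) across a curriculum of worlds.
counts = np.ones((S, A, S))  # Laplace smoothing
for _ in range(20):
    for s, a, s_next in rollout(random_world()):
        counts[s, a, s_next] += 1
model = counts / counts.sum(axis=-1, keepdims=True)

# Evaluate predictive log-likelihood on a world never seen during training.
held_out = rollout(random_world())
ll = np.mean([np.log(model[s, a, s_next]) for s, a, s_next in held_out])
print(f"mean next-state log-likelihood on unseen world: {ll:.3f}")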
3/20/2024 · 1 hour, 57 minutes, 11 seconds

Prof. Nick Chater - The Language Game (Part 1)

Nick Chater is Professor of Behavioural Science at Warwick Business School, where he works on rationality and language using a range of theoretical and experimental approaches. We discuss his books The Mind is Flat and The Language Game. Please support me on Patreon (this is now my main job!) - https://patreon.com/mlst - Access the private Discord, networking, and early access to content. MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778 https://twitter.com/MLStreetTalk Buy The Language Game: https://amzn.to/3SRHjPm Buy The Mind is Flat: https://amzn.to/3P3BUUC YT version: https://youtu.be/5cBS6COzLN4 https://www.wbs.ac.uk/about/person/nick-chater/ https://twitter.com/nickjchater?lang=en
3/1/2024 · 1 hour, 43 minutes, 46 seconds

Kenneth Stanley created a new social network based on serendipity and divergence

See what Sam Altman advised Kenneth when he left OpenAI! Professor Kenneth Stanley has just launched a brand new type of social network, which he calls a "Serendipity network". The idea is that you follow interests, NOT people. It's a social network without the popularity contest. We discuss the philosophy and technology behind the venture in great detail - the main ideas of which came from Kenneth's famous book "Why greatness cannot be planned". YT version: https://www.youtube.com/watch?v=pWIrXN-yy8g Chapters should be baked into the MP3 file now MLST public Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778 Please support our work on Patreon - get access to interviews months early, private Patreon, networking, exclusive content and regular calls with Tim and Keith. https://patreon.com/mlst Get Maven here: https://www.heymaven.com/ Kenneth: https://twitter.com/kenneth0stanley https://www.kenstanley.net/home Host - Tim Scarfe: https://www.linkedin.com/in/ecsquizor/ https://www.mlst.ai/ Original MLST show with Kenneth: https://www.youtube.com/watch?v=lhYGXYeMq_E Tim explains the book more here: https://www.youtube.com/watch?v=wNhaz81OOqw
2/28/2024 · 3 hours, 15 minutes, 27 seconds

Dr. Brandon Rohrer - Robotics, Creativity and Intelligence

Brandon Rohrer, who obtained his Ph.D. from MIT, is driven by understanding algorithms ALL the way down to their nuts and bolts, so he can make them accessible to everyone by first explaining them in the way HE himself would have wanted to learn! Please support us on Patreon for loads of exclusive content and private Discord: https://patreon.com/mlst (public discord) https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk Brandon Rohrer is a seasoned data science leader and educator with a rich background in creating robust, efficient machine learning algorithms and tools. With a Ph.D. in Mechanical Engineering from MIT, his expertise encompasses a broad spectrum of AI applications — from computer vision and natural language processing to reinforcement learning and robotics. Brandon's career has seen him in Principal-level roles at Microsoft and Facebook. An educator at heart, he also shares his knowledge through detailed tutorials, courses, and his forthcoming book, "How to Train Your Robot." YT version: https://www.youtube.com/watch?v=4Ps7ahonRCY Brandon's links: https://github.com/brohrer https://www.youtube.com/channel/UCsBKTrp45lTfHa_p49I2AEQ https://www.linkedin.com/in/brohrer/ How transformers work: https://e2eml.school/transformers Brandon's End-to-End Machine Learning school courses, posts, and tutorials https://e2eml.school Free course: https://end-to-end-machine-learning.teachable.com/p/complete-course-library-full-end-to-end-machine-learning-catalog Blog: https://e2eml.school/blog.html Ziptie: Learning Useful Features [Brandon Rohrer] https://www.brandonrohrer.com/ziptie TOC should be baked into the MP3 file now 00:00:00 - Intro to Brandon 00:00:36 - RLHF 00:01:09 - Limitations of transformers 00:07:23 - Agency - we are all GPTs 00:09:07 - BPE / representation bias 00:12:00 - LLM true believers 00:16:42 - Brandon's style of teaching 00:19:50 - ML vs real world = Robotics 00:29:59 - Reward shaping 00:37:08 - No true Scotsman - when do we accept capabilities as real 00:38:50 - Externalism 00:43:03 - Building flexible robots 00:45:37 - Is reward enough 00:54:30 - Optimization curse 00:58:15 - Collective intelligence 01:01:51 - Intelligence + creativity 01:13:35 - ChatGPT + Creativity 01:25:19 - Transformers Tutorial
2/13/2024 · 1 hour, 31 minutes, 42 seconds

Showdown Between e/acc Leader And Doomer - Connor Leahy + Beff Jezos

The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, the founder of the e/acc movement, to debate technology, AI policy, and human values. As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash on their opposing perspectives on how we steer humanity towards a more optimal path. Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week! https://patreon.com/mlst (public discord) https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions. Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism. Beff Jezos (Guillaume Verdon): https://twitter.com/BasedBeffJezos https://twitter.com/GillVerd Connor Leahy: https://twitter.com/npcollapse YT: https://www.youtube.com/watch?v=0zxi0xSBOaQ TOC: 00:00:00 - Intro 00:03:05 - Society library reference 00:03:35 - Debate starts 00:05:08 - Should any tech be banned? 00:20:39 - Leaded Gasoline 00:28:57 - False vacuum collapse method? 00:34:56 - What if there are dangerous aliens? 00:36:56 - Risk tolerances 00:39:26 - Optimizing for growth vs value 00:52:38 - Is vs ought 01:02:29 - AI discussion 01:07:38 - War / global competition 01:11:02 - Open source F16 designs 01:20:37 - Offense vs defense 01:28:49 - Morality / value 01:43:34 - What would Connor do 01:50:36 - Institutions/regulation 02:26:41 - Competition vs. Regulation Dilemma 02:32:50 - Existential Risks and Future Planning 02:41:46 - Conclusion and Reflection Note from Tim: I baked the chapter metadata into the mp3 file this time - does that help the chapters show up in your app? Let me know. Also, I accidentally exported a few minutes of dead audio at the end of the file - sorry about that, just skip on when the episode finishes.
2/3/2024 · 3 hours, 18 seconds

Mahault Albarracin - Cognitive Science

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon: https://patreon.com/mlst (public discord) https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk YT version: https://youtu.be/n8G50ynU0Vg In this interview on MLST, Dr. Tim Scarfe interviews Mahault Albarracin, who is the director of product for R&D at VERSES and also a PhD student in cognitive computing at the University of Quebec in Montreal. They discuss a range of topics related to consciousness, cognition, and machine learning. Throughout the conversation, they touch upon various philosophical and computational concepts such as panpsychism, computationalism, and materiality. They consider the "hard problem" of consciousness, which is the question of how and why we have subjective experiences. Albarracin shares her views on the controversial Integrated Information Theory and the open letter of opposition it received from the scientific community. She reflects on the nature of scientific critique and rivalry, advising caution in declaring entire fields of study as pseudoscientific. A substantial part of the discussion is dedicated to the topic of science itself, where Albarracin talks about thresholds between legitimate science and pseudoscience, the role of evidence, and the importance of validating scientific methods and claims. They touch upon language models, discussing whether they can be considered as having a "theory of mind" and the implications of assigning such properties to AI systems. Albarracin challenges the idea that there is a pure form of intelligence independent of material constraints and emphasizes the role of sociality in the development of our cognitive abilities. Albarracin offers her thoughts on scientific endeavors, the predictability of systems, the nature of intelligence, and the processes of learning and adaptation. She gives insights into the concept of using degeneracy as a way to increase resilience within systems and the role of maintaining a degree of redundancy or extra capacity as a buffer against unforeseen events. The conversation concludes with her discussing the potential benefits of collective intelligence, likening the adaptability and resilience of interconnected agent systems to those found in natural ecosystems. https://www.linkedin.com/in/mahault-albarracin-1742bb153/ 00:00:00 - Intro / IIT scandal 00:05:54 - Gaydar paper / What makes good science 00:10:51 - Language 00:18:16 - Intelligence 00:29:06 - X-risk 00:40:49 - Self modelling 00:43:56 - Anthropomorphisation 00:46:41 - Mediation and subjectivity 00:51:03 - Understanding 00:56:33 - Resiliency Technical topics: 1. Integrated Information Theory (IIT) - Giulio Tononi 2. The "hard problem" of consciousness - David Chalmers 3. Panpsychism and Computationalism in philosophy of mind 4. Active Inference Framework - Karl Friston 5. Theory of Mind and its computation in AI systems 6. Noam Chomsky's views on language models and linguistics 7. Daniel Dennett's Intentional Stance theory 8. Collective intelligence and system resilience 9. Redundancy and degeneracy in complex systems 10. Michael Levin's research on bioelectricity and pattern formation 11. The role of phenomenology in cognitive science
1/14/2024 · 1 hour, 7 minutes, 7 seconds

$450M AI Startup In 3 Years | Chai AI

Chai AI is the leading platform for conversational chat artificial intelligence. Note: this is a sponsored episode of MLST. William Beauchamp is the founder of two $100M+ companies - Chai Research, an AI startup, and Seamless Capital, a hedge fund based in Cambridge, UK. Chaiverse is the Chai AI developer platform, where developers can train, submit and evaluate on millions of real users to win their share of $1,000,000. https://www.chai-research.com https://www.chaiverse.com https://twitter.com/chai_research https://facebook.com/chairesearch/ https://www.instagram.com/chairesearch/ Download the app on iOS and Android (https://onelink.to/kqzhy9 ) #chai #chai_ai #chai_research #chaiverse #generative_ai #LLMs
1/9/2024 · 29 minutes, 47 seconds

DOES AI HAVE AGENCY? With Professor Karl Friston and Riddhi J. Pitliya

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon: https://patreon.com/mlst (public discord) https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk DOES AI HAVE AGENCY? With Professor Karl Friston and Riddhi J. Pitliya Agency in the context of cognitive science, particularly when considering the free energy principle, extends beyond just human decision-making and autonomy. It encompasses a broader understanding of how all living systems, including non-human entities, interact with their environment to maintain their existence by minimising sensory surprise. According to the free energy principle, living organisms strive to minimize the difference between their predicted states and the actual sensory inputs they receive. This principle suggests that agency arises as a natural consequence of this process, particularly when organisms appear to plan many steps ahead into the future. Riddhi J. Pitliya is doing her Ph.D. in the computational psychopathology lab at the University of Oxford and works with Professor Karl Friston at VERSES. https://twitter.com/RiddhiJP References: THE FREE ENERGY PRINCIPLE—A PRECIS [Ramstead] https://www.dialecticalsystems.eu/contributions/the-free-energy-principle-a-precis/ Active Inference: The Free Energy Principle in Mind, Brain, and Behavior [Thomas Parr, Giovanni Pezzulo, Karl J. Friston] https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind The beauty of collective intelligence, explained by a developmental biologist | Michael Levin https://www.youtube.com/watch?v=U93x9AWeuOA Growing Neural Cellular Automata https://distill.pub/2020/growing-ca Carcinisation https://en.wikipedia.org/wiki/Carcinisation Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned https://www.youtube.com/watch?v=lhYGXYeMq_E On Defining Artificial Intelligence [Pei Wang] https://sciendo.com/article/10.2478/jagi-2019-0002 Why? The Purpose of the Universe [Goff] https://amzn.to/4aEqpfm Umwelt https://en.wikipedia.org/wiki/Umwelt An Immense World: How Animal Senses Reveal the Hidden Realms [Yong] https://amzn.to/3tzzTb7 What Is It Like to Be a Bat? [Nagel] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION) https://www.youtube.com/watch?v=axJtywd9Tbo We live in the infosphere [FLORIDI] https://www.youtube.com/watch?v=YLNGvvgq3eg Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398 https://www.youtube.com/watch?v=MVYrJJNdrEg Black Mirror: Rachel, Jack and Ashley Too | Official Trailer | Netflix https://www.youtube.com/watch?v=-qIlCo9yqpY
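For reference, the quantity being minimised here is the variational free energy; in its standard textbook form (the general definition, not anything specific to this conversation) it is

F[q] = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)] = D_{\mathrm{KL}}[q(s) \,\|\, p(s \mid o)] - \ln p(o) \;\ge\; -\ln p(o)

where o are sensory observations, s are hidden states, p is the organism's generative model and q is its approximate posterior. Because the KL term is non-negative, minimising F both pulls q(s) towards the true posterior p(s | o) and tightens an upper bound on surprise, -ln p(o) - which is the precise sense in which perception and action "minimise sensory surprise" above.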
1/7/2024 · 1 hour, 2 minutes, 39 seconds

Understanding Deep Learning - Prof. SIMON PRINCE [STAFF FAVOURITE]

Watch behind the scenes, get early access and join private Discord by supporting us on Patreon: https://patreon.com/mlst https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk In this comprehensive exploration of the field of deep learning with Professor Simon Prince, who has just authored an entire textbook on deep learning, we investigate the technical underpinnings that contribute to the field's unexpected success and confront the enduring conundrums that still perplex AI researchers. Key points discussed include the surprising efficiency of deep learning models, where high-dimensional loss functions are optimized in ways that defy traditional statistical expectations. Professor Prince provides an exposition on the choice of activation functions, architecture design considerations, and overparameterization. We scrutinize the generalization capabilities of neural networks, addressing the seeming paradox of well-performing overparameterized models. Professor Prince challenges popular misconceptions, shedding light on the manifold hypothesis and the role of data geometry in informing the training process. Professor Prince speaks about how layers within neural networks collaborate, recursively reconfiguring instance representations that contribute to both the stability of learning and the emergence of hierarchical feature representations. In addition to the primary discussion on technical elements and learning dynamics, the conversation briefly diverts to the ethical implications of AI advancements. Follow Prof. Prince: https://twitter.com/SimonPrinceAI https://www.linkedin.com/in/simon-prince-615bb9165/ Get the book now! https://mitpress.mit.edu/9780262048644/understanding-deep-learning/ https://udlbook.github.io/udlbook/ Panel: Dr. Tim Scarfe - https://www.linkedin.com/in/ecsquizor/ https://twitter.com/ecsquendor TOC: [00:00:00] Introduction [00:11:03] General Book Discussion [00:15:30] The Neural Metaphor [00:17:56] Back to Book Discussion [00:18:33] Emergence and the Mind [00:29:10] Computation in Transformers [00:31:12] Studio Interview with Prof. Simon Prince [00:31:46] Why Deep Neural Networks Work: Spline Theory [00:40:29] Overparameterization in Deep Learning [00:43:42] Inductive Priors and the Manifold Hypothesis [00:49:31] Universal Function Approximation and Deep Networks [00:59:25] Training vs Inference: Model Bias [01:03:43] Model Generalization Challenges [01:11:47] Purple Segment: Unknown Topic [01:12:45] Visualizations in Deep Learning [01:18:03] Deep Learning Theories Overview [01:24:29] Tricks in Neural Networks [01:30:37] Critiques of ChatGPT [01:42:45] Ethical Considerations in AI References on YT version VD: https://youtu.be/sJXn4Cl4oww
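As a tiny illustration of the "spline" view of ReLU networks mentioned in the episode (our sketch, not from the book): a random one-hidden-layer ReLU network of a scalar input is piecewise linear, with at most one kink per hidden unit, so counting the distinct on/off patterns of the hidden units along a line counts the linear pieces.

# Toy illustration (not from the book) of the spline view of ReLU networks.
import numpy as np

rng = np.random.default_rng(1)
H = 8                                     # hidden ReLU units => at most H kinks in 1-D
w1, b1 = rng.normal(size=H), rng.normal(size=H)
w2, b2 = rng.normal(size=H), rng.normal()

x = np.linspace(-3.0, 3.0, 2001)
pre = np.outer(x, w1) + b1                # hidden pre-activations, shape (N, H)
y = np.maximum(pre, 0.0) @ w2 + b2        # network output: piecewise linear in x

# Each distinct on/off pattern of the hidden units gives one linear piece of y.
pieces = len(np.unique(pre > 0, axis=0))
print(f"output has {pieces} linear pieces over this range (at most {H + 1} in 1-D)")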
12/26/2023 · 2 hours, 6 minutes, 38 seconds

Prof. BERT DE VRIES - ON ACTIVE INFERENCE

Watch behind the scenes with Bert on Patreon: https://www.patreon.com/posts/bert-de-vries-93230722 https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk Note, there is some mild background music on chapters 1 (Least Action), 3 (Friston) and 5 (Variational Methods) - please skip ahead if annoying. It's a tiny fraction of the overall podcast. YT version: https://youtu.be/2wnJ6E6rQsU Bert de Vries is Professor in the Signal Processing Systems group at Eindhoven University. His research focuses on the development of intelligent autonomous agents that learn from in-situ interactions with their environment. His research draws inspiration from diverse fields including computational neuroscience, Bayesian machine learning, Active Inference and signal processing. Bert believes that the development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposefully from situated environmental interactions. Bert received his M.Sc. (1986) and Ph.D. (1991) degrees in Electrical Engineering from Eindhoven University of Technology (TU/e) and the University of Florida, respectively. From 1992 to 1999, he worked as a research scientist at Sarnoff Research Center in Princeton (NJ, USA). Since 1999, he has been employed in the hearing aids industry, both in engineering and managerial positions. De Vries was appointed part-time professor in the Signal Processing Systems Group at TU/e in 2012. Contact: https://twitter.com/bertdv0 https://www.tue.nl/en/research/researchers/bert-de-vries https://www.verses.ai/about-us Panel: Dr. Tim Scarfe / Dr. Keith Duggar TOC: [00:00:00] Principle of Least Action [00:05:10] Patreon Teaser [00:05:46] On Friston [00:07:34] Capm Peterson (VERSES) [00:08:20] Variational Methods [00:16:13] Dan Mapes (VERSES) [00:17:12] Engineering with Active Inference [00:20:23] Jason Fox (VERSES) [00:20:51] Riddhi Jain Pitliya [00:21:49] Hearing Aids as Adaptive Agents [00:33:38] Steven Swanson (VERSES) [00:35:46] Main Interview Kick Off, Engineering and Active Inference [00:43:35] Actor / Streaming / Message Passing [00:56:21] Do Agents Lose Flexibility with Maturity? [01:00:50] Language Compression [01:04:37] Marginalisation to Abstraction [01:12:45] Online Structural Learning [01:18:40] Efficiency in Active Inference [01:26:25] SEs become Neuroscientists [01:35:11] Building an Automated Engineer [01:38:58] Robustness and Design vs Grow [01:42:38] RXInfer [01:51:12] Resistance to Active Inference? [01:57:39] Diffusion of Responsibility in a System [02:10:33] Chauvinism in "Understanding" [02:20:08] On Becoming a Bayesian Refs: RXInfer https://biaslab.github.io/rxinfer-website/ Prof. Ariel Caticha https://www.albany.edu/physics/faculty/ariel-caticha Pattern recognition and machine learning (Bishop) https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf Data Analysis: A Bayesian Tutorial (Sivia) https://www.amazon.co.uk/Data-Analysis-Bayesian-Devinderjit-Sivia/dp/0198568320 Probability Theory: The Logic of Science (E. T. Jaynes) https://www.amazon.co.uk/Probability-Theory-Principles-Elementary-Applications/dp/0521592712/ #activeinference #artificialintelligence
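For a flavour of the kind of Bayesian updating such adaptive agents run online (a minimal sketch in Python; RXInfer itself is a Julia package and this is not its API): fusing a Gaussian prior with a Gaussian observation simply adds precisions and precision-weights the means - exactly the sort of local message an agent can pass at every time step.

# Minimal sketch (ours, not RXInfer): online estimation of a constant signal by
# repeated Gaussian belief updates.
import numpy as np

def gaussian_update(mu, prec, obs, obs_prec):
    """Posterior mean and precision after observing `obs` with precision `obs_prec`."""
    post_prec = prec + obs_prec
    post_mu = (prec * mu + obs_prec * obs) / post_prec
    return post_mu, post_prec

rng = np.random.default_rng(0)
true_signal, noise_std = 2.5, 1.0
mu, prec = 0.0, 1e-2                      # vague prior belief about the signal
for obs in true_signal + noise_std * rng.normal(size=50):
    mu, prec = gaussian_update(mu, prec, obs, 1.0 / noise_std**2)
print(f"posterior mean {mu:.2f} +/- {prec**-0.5:.2f} (true value {true_signal})")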
11/20/2023 · 2 hours, 27 minutes, 39 seconds

MULTI AGENT LEARNING - LANCELOT DA COSTA

Please support us https://www.patreon.com/mlst https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk Lance Da Costa aims to advance our understanding of intelligent systems by modelling cognitive systems and improving artificial systems. He's a PhD candidate with Greg Pavliotis and Karl Friston jointly at Imperial College London and UCL, and a student in the Mathematics of Random Systems CDT run by Imperial College London and the University of Oxford. He completed an MRes in Brain Sciences at UCL with Karl Friston and Biswa Sengupta, an MASt in Pure Mathematics at the University of Cambridge with Oscar Randal-Williams, and a BSc in Mathematics at EPFL and the University of Toronto. Summary: Lance did pure math originally but became interested in the brain and AI. He started working with Karl Friston on the free energy principle, which claims all intelligent agents minimize free energy for perception, action, and decision-making. Lance has worked to provide mathematical foundations and proofs for why the free energy principle is true, starting from basic assumptions about agents interacting with their environment. This aims to justify the principle from first principles in physics. Dr. Scarfe and Da Costa discuss different approaches to AI - the free energy/active inference approach focused on mimicking human intelligence vs approaches focused on maximizing capability like deep reinforcement learning. Lance argues active inference provides advantages for explainability and safety compared to black box AI systems. It provides a simple, sparse description of intelligence based on a generative model and free energy minimization. They discuss the need for structured learning and acquiring core knowledge to achieve more human-like intelligence. Lance highlights work from Josh Tenenbaum's lab that shows similar learning trajectories to humans in a simple Atari-like environment. Incorporating core knowledge constrains the space of possible generative models the agent can use to represent the world, making learning more sample-efficient. Lance argues active inference agents with core knowledge can match human learning capabilities. They discuss how to make generative models interpretable, such as through factor graphs. The goal is to be able to understand the representations and message passing in the model that leads to decisions. In summary, Lance argues active inference provides a principled approach to AI with advantages for explainability, safety, and human-like learning. Combining it with core knowledge and structural learning aims to achieve more human-like artificial intelligence. https://www.lancelotdacosta.com/ https://twitter.com/lancelotdacosta Interviewer: Dr. Tim Scarfe TOC 00:00:00 - Start 00:09:27 - Intelligence 00:12:37 - Priors / structure learning 00:17:21 - Core knowledge 00:29:05 - Intelligence is specialised 00:33:21 - The magic of agents 00:39:30 - Intelligibility of structure learning #artificialintelligence #activeinference
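To make the "simple, sparse description" a bit more concrete, here is a toy sketch of discrete active inference (one common simplified textbook form - expected free energy as risk plus ambiguity - and not Lance's models; all the numbers are made up): the agent scores each candidate action by its expected free energy and picks the lowest.

# Toy sketch (a simplified textbook form, not from the episode): action selection
# by expected free energy = risk (divergence from preferences) + ambiguity.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def entropy(p):
    return float(-np.sum(p * np.log(p)))

# Likelihood p(o|s): rows are hidden states, columns are outcomes.
A = np.array([[0.9, 0.1],     # state 0 mostly yields outcome 0
              [0.2, 0.8]])    # state 1 mostly yields outcome 1
# Predicted hidden-state distribution under each action, q(s|a).
B = {"stay": np.array([0.8, 0.2]), "move": np.array([0.3, 0.7])}
# Preferred outcome distribution (the agent "wants" outcome 1).
C = np.array([0.1, 0.9])

def expected_free_energy(qs):
    qo = qs @ A                                   # predicted outcomes q(o|a)
    risk = kl(qo, C)                              # divergence from preferences
    ambiguity = float(qs @ [entropy(row) for row in A])
    return risk + ambiguity

G = {a: expected_free_energy(qs) for a, qs in B.items()}
print(G, "-> chosen action:", min(G, key=G.get))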
11/5/2023 · 49 minutes, 56 seconds

THE HARD PROBLEM OF OBSERVERS - WOLFRAM & FRISTON [SPECIAL EDITION]

Please support us! https://www.patreon.com/mlst https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk YT version (with intro not found here) https://youtu.be/6iaT-0Dvhnc This is the epic special edition show you have been waiting for! With two of the most brilliant scientists alive today. Atoms, things, agents, ... observers. What even defines an "observer" and what properties must all observers share? How do objects persist in our universe given that their material composition changes over time? What does it mean for a thing to be a thing? And do things supervene on our lower-level physical reality? What does it mean for a thing to have agency? What's the difference between a complex dynamical system with and without agency? Could a rock or an AI catflap have agency? Can the universe be factorised into distinct agents, or is agency diffused? Have you ever pondered these deep questions about reality? Prof. Friston and Dr. Wolfram have spent their entire careers, some 40+ years each, thinking long and hard about these very questions and have developed significant frameworks of reference on their respective journeys (the Wolfram Physics project and the Free Energy principle). Panel: MIT Ph.D Keith Duggar Production: Dr. Tim Scarfe Refs: TED Talk with Stephen: https://www.ted.com/talks/stephen_wolfram_how_to_think_computationally_about_ai_the_universe_and_everything https://writings.stephenwolfram.com/2023/10/how-to-think-computationally-about-ai-the-universe-and-everything/ TOC 00:00:00 - Show kickoff 00:02:38 - Wolfram gets to grips with FEP 00:27:08 - How much control does an agent/observer have 00:34:52 - Observer persistence, what universe seems like to us 00:40:31 - Black holes 00:45:07 - Inside vs outside 00:52:20 - Moving away from the predictable path 00:55:26 - What can observers do 01:06:50 - Self modelling gives agency 01:11:26 - How do you know a thing has agency? 01:22:48 - Deep link between dynamics, ruliad and AI 01:25:52 - Does agency entail free will? Defining Agency 01:32:57 - Where do I probe for agency? 01:39:13 - Why is the universe the way we see it? 01:42:50 - Alien intelligence 01:43:40 - The hard problem of Observers 01:46:20 - Summary thoughts from Wolfram 01:49:35 - Factorisability of FEP 01:57:05 - Patreon interview teaser
10/29/2023 · 1 hour, 59 minutes, 29 seconds

DR. JEFF BECK - THE BAYESIAN BRAIN

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT version: https://www.youtube.com/watch?v=c4praCiy9qU Dr. Jeff Beck is a computational neuroscientist studying probabilistic reasoning (decision making under uncertainty) in humans and animals, with an emphasis on neural representations of uncertainty and cortical implementations of probabilistic inference and learning. His line of research incorporates information-theoretic and hierarchical statistical analysis of neural and behavioural data as well as reinforcement learning and active inference. https://www.linkedin.com/in/jeff-beck... https://scholar.google.com/citations?... Interviewer: Dr. Tim Scarfe TOC 00:00:00 Intro 00:00:51 Bayesian / Knowledge 00:14:57 Active inference 00:18:58 Mediation 00:23:44 Philosophy of mind / science 00:29:25 Optimisation 00:42:54 Emergence 00:56:38 Steering emergent systems 01:04:31 Work plan 01:06:06 Representations/Core knowledge #activeinference
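A minimal illustration of decision making under uncertainty in this Bayesian spirit (our example, not Dr. Beck's): Bayes' rule turns a noisy cue into a posterior over hidden states, and the action is then chosen by expected utility under that posterior rather than by trusting the single most probable state.

# Small sketch (ours): Bayesian posterior + expected-utility action selection.
import numpy as np

prior = np.array([0.7, 0.3])                 # P(world state): [safe, threat]
likelihood = np.array([[0.6, 0.4],           # P(cue | safe)
                       [0.1, 0.9]])          # P(cue | threat)
cue = 1                                      # we observed the "alarming" cue

posterior = prior * likelihood[:, cue]
posterior /= posterior.sum()                 # P(state | cue)

# Utilities: rows are actions (ignore, flee), columns are states (safe, threat).
utility = np.array([[ 1.0, -10.0],
                    [-1.0,   2.0]])
expected = utility @ posterior
actions = ["ignore", "flee"]
# Even though "safe" is slightly more probable, fleeing wins on expected utility.
print(dict(zip(actions, expected.round(2))), "->", actions[int(np.argmax(expected))])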
10/16/2023 · 1 hour, 10 minutes, 6 seconds

Prof. Melanie Mitchell 2.0 - AI Benchmarks are Broken!

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Prof. Melanie Mitchell argues that the concept of "understanding" in AI is ill-defined and multidimensional - we can't simply say an AI system does or doesn't understand. She advocates for rigorously testing AI systems' capabilities using proper experimental methods from cognitive science. Popular benchmarks for intelligence often rely on the assumption that if a human can perform a task, an AI that performs the task must have human-like general intelligence. But benchmarks should evolve as capabilities improve. Large language models show surprising skill on many human tasks but lack common sense and fail at simple things young children can do. Their knowledge comes from statistical relationships in text, not grounded concepts about the world. We don't know if their internal representations actually align with human-like concepts. More granular testing focused on generalization is needed. There are open questions around whether large models' abilities constitute a fundamentally different non-human form of intelligence based on vast statistical correlations across text. Mitchell argues intelligence is situated, domain-specific and grounded in physical experience and evolution. The brain computes but in a specialized way honed by evolution for controlling the body. Extracting "pure" intelligence may not work. Other key points: - Need more focus on proper experimental method in AI research. Developmental psychology offers examples for rigorous testing of cognition. - Reporting instance-level failures rather than just aggregate accuracy can provide insights. - Scaling laws and complex systems science are an interesting area of complexity theory, with applications to understanding cities. - Concepts like "understanding" and "intelligence" in AI force refinement of fuzzy definitions. - Human intelligence may be more collective and social than we realize. AI forces us to rethink concepts we apply anthropomorphically. The overall emphasis is on rigorously building the science of machine cognition through proper experimentation and benchmarking as we assess emerging capabilities. TOC: [00:00:00] Introduction and Munk AI Risk Debate Highlights [00:05:00] Douglas Hofstadter on AI Risk [00:06:56] The Complexity of Defining Intelligence [00:11:20] Examining Understanding in AI Models [00:16:48] Melanie's Insights on AI Understanding Debate [00:22:23] Unveiling the Concept Arc [00:27:57] AI Goals: A Human vs Machine Perspective [00:31:10] Addressing the Extrapolation Challenge in AI [00:36:05] Brain Computation: The Human-AI Parallel [00:38:20] The Arc Challenge: Implications and Insights [00:43:20] The Need for Detailed AI Performance Reporting [00:44:31] Exploring Scaling in Complexity Theory Errata: Note: Tim said around 39 mins that a recent Stanford/DM paper modelling ARC "on GPT-4 got around 60%". This is not correct and he misremembered. It was actually davinci3, and around 10%, which is still extremely good for a blank slate approach with an LLM and no ARC specific knowledge. Folks on our forum couldn't reproduce the result. See paper linked below.
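As a small sketch of the instance-level reporting Melanie advocates (illustrative records only, not real benchmark data): keep per-instance results alongside the aggregate number, so that failure patterns stay visible instead of being averaged away.

# Small sketch (ours): per-instance records reveal what a single accuracy figure hides.
from collections import Counter

results = [  # (task_id, category, correct) -- illustrative records, not real data
    ("q1", "negation", False), ("q2", "negation", False), ("q3", "counting", True),
    ("q4", "counting", True), ("q5", "spatial", True), ("q6", "negation", False),
]
accuracy = sum(ok for *_, ok in results) / len(results)
failures_by_category = Counter(cat for _, cat, ok in results if not ok)
print(f"aggregate accuracy: {accuracy:.0%}")
print("failures by category:", dict(failures_by_category))  # here, all in 'negation'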
Books (MUST READ): Artificial Intelligence: A Guide for Thinking Humans (Melanie Mitchell) https://www.amazon.co.uk/Artificial-Intelligence-Guide-Thinking-Humans/dp/B07YBHNM1C/?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=44ccac78973f47e59d745e94967c0f30&camp=1634&creative=6738 Complexity: A Guided Tour (Melanie Mitchell) https://www.amazon.co.uk/Audible-Complexity-A-Guided-Tour?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=3f8bd505d86865c50c02dd7f10b27c05&camp=1634&creative=6738 Show notes (transcript, full references etc) https://atlantic-papyrus-d68.notion.site/Melanie-Mitchell-2-0-15e212560e8e445d8b0131712bad3000?pvs=25 YT version: https://youtu.be/29gkDpR2orc
9/10/2023 · 1 hour, 1 minute, 47 seconds

Autopoietic Enactivism and the Free Energy Principle - Prof. Friston, Prof. Buckley, Dr. Ramstead

We explore connections between FEP and enactivism, including tensions raised in a paper critiquing FEP from an enactivist perspective. Dr. Maxwell Ramstead provides background on enactivism emerging from autopoiesis, with a focus on embodied cognition and rejecting information processing/computational views of mind. Chris shares his journey from robotics into FEP, starting as a skeptic but becoming convinced it's the right framework. He notes there are both "high road" and "low road" versions, ranging from embodied to more radically anti-representational stances. He doesn't see a definitive fork between dynamical systems and information theory as the source of conflict. Rather, the notion of operational closure in enactivism seems to be the main sticking point. The group explores definitional issues around structure/organization, boundaries, and operational closure. Maxwell argues the generative model in FEP captures organizational dependencies akin to operational closure. The Markov blanket formalism models structural interfaces. We discuss the concept of goals in cognitive systems - Chris advocates an intentional stance perspective - using notions of goals/intentions if they help explain system dynamics. Goals emerge from beliefs about dynamical trajectories. Prof. Friston provides an elegant explanation of how goal-directed behavior naturally falls out of the FEP mathematics in a particular "goldilocks" regime of system scale/dynamics. The conversation explores the idea that many systems simply act "as if" they have goals or models, without necessarily possessing explicit representations. This helps resolve tensions between enactivist and computational perspectives. Throughout the dialogue, Maxwell presses philosophical points about the FEP abolishing what he perceives as false dichotomies in cognitive science such as internalism/externalism. He is critical of enactivists' commitment to bright line divides between subject areas. Prof. Karl Friston - Inventor of the free energy principle https://scholar.google.com/citations?user=q_4u0aoAAAAJ Prof. Chris Buckley - Professor of Neural Computation at Sussex University https://scholar.google.co.uk/citations?user=nWuZ0XcAAAAJ&hl=en Dr. Maxwell Ramstead - Director of Research at VERSES https://scholar.google.ca/citations?user=ILpGOMkAAAAJ&hl=fr We address critique in this paper: Laying down a forking path: Tensions between enaction and the free energy principle (Ezequiel A. Di Paolo, Evan Thompson, Randall D. Beer) https://philosophymindscience.org/index.php/phimisci/article/download/9187/8975 Other refs: Multiscale integration: beyond internalism and externalism (Maxwell J D Ramstead) https://pubmed.ncbi.nlm.nih.gov/33627890/ MLST panel: Dr. Tim Scarfe and Dr. Keith Duggar TOC (auto generated): 0:00 - Introduction 0:41 - Defining enactivism and its variants 6:58 - The source of the conflict between dynamical systems and information theory 8:56 - Operational closure in enactivism 10:03 - Goals and intentions 12:35 - The link between dynamical systems and information theory 15:02 - Path integrals and non-equilibrium dynamics 18:38 - Operational closure defined 21:52 - Structure vs. organization in enactivism 24:24 - Markov blankets as interfaces 28:48 - Operational closure in FEP 30:28 - Structure and organization again 31:08 - Dynamics vs. information theory
33:55 - Goals and intentions emerge in the FEP mathematics 36:58 - The Good Regulator Theorem 49:30 - enactivism and its relation to ecological psychology 52:00 - Goals, intentions and beliefs 55:21 - Boundaries and meaning 58:55 - Enactivism's rejection of information theory 1:02:08 - Beliefs vs goals 1:05:06 - Ecological psychology and FEP 1:08:41 - The Good Regulator Theorem 1:18:38 - How goal-directed behavior emerges 1:23:13 - Ontological vs metaphysical boundaries 1:25:20 - Boundaries as maps 1:31:08 - Connections to the maximum entropy principle 1:33:45 - Relations to quantum and relational physics
9/5/2023 · 1 hour, 34 minutes, 46 seconds

The Lottery Ticket Hypothesis with Jonathan Frankle

In this episode of Machine Learning Street Talk, we chat with Jonathan Frankle, author of The Lottery Ticket Hypothesis. Frankle has continued researching Sparse Neural Networks, Pruning, and Lottery Tickets, leading to some really exciting follow-on papers! This chat discusses some of these papers, such as Linear Mode Connectivity, Comparing Rewinding and Fine-tuning in Neural Network Pruning, and more (full list of papers linked below). We also chat about how Jonathan got into Deep Learning research, his Information Diet, and work on developing Technology Policy for Artificial Intelligence!  This was a really fun chat - I hope you enjoy listening to it and learn something from it! Thanks for watching and please subscribe! Huge thanks to everyone on r/MachineLearning who asked questions! Paper Links discussed in the chat: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: https://arxiv.org/abs/1803.03635 Linear Mode Connectivity and the Lottery Ticket Hypothesis: https://arxiv.org/abs/1912.05671 Dissecting Pruned Neural Networks: https://arxiv.org/abs/1907.00262 Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs: https://arxiv.org/abs/2003.00152 What is the State of Neural Network Pruning? https://arxiv.org/abs/2003.03033 The Early Phase of Neural Network Training: https://arxiv.org/abs/2002.10365 Comparing Rewinding and Fine-tuning in Neural Network Pruning: https://arxiv.org/abs/2003.02389 (Also Mentioned) Block-Sparse GPU Kernels: https://openai.com/blog/block-sparse-gpu-kernels/ Balanced Sparsity for Efficient DNN Inference on GPU: https://arxiv.org/pdf/1811.00206.pdf Playing the Lottery with Rewards and Multiple Languages: Lottery Tickets in RL and NLP: https://arxiv.org/pdf/1906.02768.pdf r/MachineLearning question list: https://www.reddit.com/r/MachineLearning/comments/g9jqe0/d_lottery_ticket_hypothesis_ask_the_author_a/ #machinelearning #deeplearning
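For listeners new to the idea, here is a toy sketch of iterative magnitude pruning with rewinding (ours, not Frankle's code, on a tiny logistic-regression "network"): train, prune the smallest surviving weights, rewind the survivors to their initial values, and retrain under the mask.

# Toy sketch (not Frankle's code) of iterative magnitude pruning with rewinding.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
true_w = np.where(rng.random(20) < 0.25, rng.normal(size=20) * 3, 0.0)  # sparse truth
y = (X @ true_w + 0.1 * rng.normal(size=500) > 0).astype(float)

def train(w, mask, steps=300, lr=0.5):
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ (w * mask))))
        w -= lr * mask * (X.T @ (p - y)) / len(y)     # masked gradient step
    return w

w_init = 0.1 * rng.normal(size=20)
mask = np.ones(20)
for round_ in range(3):                               # iterative magnitude pruning
    w = train(w_init.copy(), mask)                    # "rewind": restart from w_init
    acc = np.mean(((X @ (w * mask)) > 0) == y)
    print(f"round {round_}: {int(mask.sum())} weights kept, accuracy {acc:.2f}")
    threshold = np.quantile(np.abs(w[mask == 1]), 0.5)  # prune 50% of survivors
    mask = mask * (np.abs(w) > threshold)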
5/19/2020 · 1 hour, 26 minutes, 43 seconds