This series hosts episodes created by the Department of Computer Science, University of Oxford, one of the longest-established Computer Science departments in the country. The series reflects the department's world-class research and teaching through talks on topics such as computational biology, quantum computing, computational linguistics, information systems, software verification, and software engineering.
Strachey Lecture: The Computer in the Sky
The talk will emphasize the diversity of mathematical tools necessary for understanding blockchain protocols and their applications (e.g., distributed computing, game theory, mechanism design, and continuous-time stochastic processes) and the immediate practical impact that mathematical work on this topic has had (e.g., Ethereum's EIP-1559 and LVR for automated market makers).
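As a concrete taste of the practical side mentioned above, the sketch below approximates EIP-1559's base-fee update rule in Python (the constant names follow the published EIP, but this is an illustrative simplification, not consensus code):

# Simplified sketch of Ethereum's EIP-1559 base-fee update rule.
# Constants follow the EIP; this is an illustration, not consensus code.

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # at most ~12.5% change per block
ELASTICITY_MULTIPLIER = 2            # blocks may use up to 2x the gas target


def next_base_fee(parent_base_fee: int, parent_gas_used: int, parent_gas_limit: int) -> int:
    """Compute the base fee for the next block from the parent block's usage."""
    gas_target = parent_gas_limit // ELASTICITY_MULTIPLIER

    if parent_gas_used == gas_target:
        return parent_base_fee

    # The base fee rises when blocks are fuller than the target and falls when emptier.
    delta = parent_gas_used - gas_target
    change = parent_base_fee * abs(delta) // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR

    if delta > 0:
        return parent_base_fee + max(change, 1)
    return max(parent_base_fee - change, 0)


# Example: a block at 1.5x the gas target pushes the fee up by roughly 6%.
print(next_base_fee(parent_base_fee=100, parent_gas_used=22_500_000, parent_gas_limit=30_000_000))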
5/16/2024 • 1 hour, 2 minutes, 9 seconds
Strachey Lecture: From classical to non-classical stochastic shortest path problems
Professor Christel Baier delivers the Hilary Term 2024 Strachey Lecture. Abstract: The classical stochastic shortest path (SSP) problem asks to find a policy for traversing a weighted stochastic graph until reaching a distinguished goal state that minimizes the expected accumulated weight. SSP problems have numerous applications in, e.g., operations research, artificial intelligence, robotics and verification of probabilistic systems. The underlying graph model is a finite-state Markov decision process (MDP) with integer weights for its state-action pairs. Prominent results are the existence of optimal memoryless deterministic policies, together with linear programming techniques and value and policy iteration to compute such policies and their values. These results rely on the assumption that the minimum under all proper policies, i.e., policies that reach the goal state almost surely, exists. Early work on SSP problems goes back to the 1960s-1990s and makes additional assumptions. Complete algorithms that only require the existence of proper policies combine these techniques with a pre-analysis of end components, an elegant graph-theoretic concept for MDPs that was introduced by de Alfaro in the late 1990s. The talk will start with a summary of these results. The second part of the talk presents more recent results for variants of the classical SSP. The conditional and partial SSP drop the assumption that the goal state must be reached almost surely: the conditional SSP asks to minimize the expected accumulated weight under the condition that the goal will be reached, while the partial SSP assigns value 0 to all paths that do not reach the goal state. Other variants take into account aspects of risk-awareness, e.g., by studying the conditional value-at-risk or the variance-penalized expected accumulated weight. While the classical SSP problem is solvable in polynomial time, such non-classical SSP problems are computationally much harder. For the general case, the decidability status of such non-classical SSP problems is unknown, but they have been shown to be at least as hard as the Skolem problem (and even as the positivity problem). However, for non-positive weights, the conditional, partial and variance-penalized SSP problems are solvable in exponential time, with a PSPACE lower bound for acyclic MDPs.
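For a concrete handle on the classical problem, here is a minimal value-iteration sketch for an SSP on a tiny made-up MDP (the states, actions, weights and transition probabilities are invented for illustration, and every policy in this toy example is proper):

# Minimal value-iteration sketch for a classical SSP on a tiny, made-up MDP.
# Each entry maps (state, action) -> (weight, {successor: probability}).
mdp = {
    ("s0", "a"): (2, {"s1": 0.5, "goal": 0.5}),
    ("s0", "b"): (5, {"goal": 1.0}),
    ("s1", "a"): (1, {"s0": 0.3, "goal": 0.7}),
}
states = ["s0", "s1"]

# V[s] approximates the minimal expected accumulated weight until reaching "goal".
V = {s: 0.0 for s in states}
V["goal"] = 0.0

for _ in range(1000):  # iterate the Bellman operator until (approximate) convergence
    V_new = dict(V)
    for s in states:
        V_new[s] = min(
            w + sum(p * V[t] for t, p in succ.items())
            for (state, _), (w, succ) in mdp.items()
            if state == s
        )
    V = V_new

print(V)  # minimal expected accumulated weight from each state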
Speaker bio:
Christel Baier has been a full professor and head of the chair for Algebraic and Logic Foundations of Computer Science at the Faculty of Computer Science of the Technische Universität Dresden since 2006. Since 2022 she has held an honorary doctorate (Dr. rer. nat. h.c.) from RWTH Aachen. From the University of Mannheim she received her Diploma in Mathematics in 1990, her Ph.D. in Computer Science in 1994 and her Habilitation in 1999. She was an associate professor for Theoretical Computer Science at the University of Bonn from 1999 to 2006. She was a member of the DFG (German Research Foundation) review board for computer science from 2012 to 2019, and has been a member of the DFG senate commission on Research Training Groups since 2020. Since 2011 she has been a member of Academia Europaea. Her research focuses on the modeling, specification and verification of reactive systems, the quantitative analysis of stochastic systems, probabilistic model checking, temporal and modal logics, and automata theory.
2/6/2024 • 57 minutes, 9 seconds
Strachey Lecture: How Can Algorithms Help to Protect our Privacy
In this term's Strachey lecture, Professor Monika Henzinger gives an introduction to differential privacy with an emphasis on differentially private algorithms that can handle changing input data. Decisions are increasingly automated using rules that were learnt from personal data. Thus, it is important to guarantee that the privacy of the data is protected during the learning process. To formalize the notion of an algorithm that protects the privacy of its data, differential privacy was introduced. It is a rigorous mathematical definition to analyze the privacy properties of an algorithm – or the lack thereof. In this talk I will give an introduction to differential privacy with an emphasis on differentially private algorithms that can handle changing input data.
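As a small illustration of the core idea (a sketch, not material from the lecture itself): the classic Laplace mechanism releases a numeric statistic after adding noise calibrated to the query's sensitivity and the privacy parameter epsilon.

import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise, giving epsilon-differential privacy
    for a query whose output changes by at most `sensitivity` when one
    individual's record is added or removed."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: a counting query (sensitivity 1) over a private dataset.
private_count = 42
print(laplace_mechanism(private_count, sensitivity=1.0, epsilon=0.5))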
Monika Henzinger is a professor of Computer Science at the Institute of Science and Technology Austria (ISTA). She holds a PhD in computer science from Princeton University (New Jersey, USA), and has been the head of research at Google and a professor of computer science at EPFL and the University of Vienna.
Monika Henzinger is an ACM and EATCS Fellow and a member of the Austrian Academy of Sciences and the German National Academy of Sciences Leopoldina. She has received several awards, including an honorary doctorate from TU Dortmund University, two ERC Advanced Grants, the Leopoldina Carus Medal, and the Wittgensteinpreis, Austria's highest science award.
The Strachey Lectures are generously supported by OxFORD Asset Management
11/13/2023 • 54 minutes, 48 seconds
Strachey Lecture: Use or Be Used - Regaining Control of AI
It’s said that Henry Ford’s customers wanted “a faster horse”. If Henry Ford was selling us artificial intelligence today, what would the customer call for, “a smarter human”? That’s certainly the picture of machine intelligence we find in science fiction narratives, but the reality of what we’ve developed is far more mundane.
Car engines produce prodigious power from petrol. Machine intelligences deliver decisions derived from data. In both cases the scale of consumption enables a speed of operation that is far beyond the capabilities of their natural counterparts. Unfettered energy consumption has consequences in the form of climate change. Does unbridled data consumption also have consequences for us?
If we devolve decision making to machines, we depend on those machines to accommodate our needs. If we don’t understand how those machines operate, we lose control over our destiny. Much of the debate around AI makes the mistake of seeing machine intelligence as a reflection of our intelligence. In this talk we argue that to control the machine we need to understand the machine, but to understand the machine we first need to understand ourselves.
Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge, where he leads the University’s flagship mission on AI, AI@Cam. He has been working on machine learning models for over 20 years. He recently returned to academia after three years as Director of Machine Learning at Amazon. His main interest is the interaction of machine learning with the physical world. This interest was triggered by deploying machine learning in the African context, where ‘end-to-end’ solutions are normally required. This has inspired new research directions at the interface of machine learning and systems research; this work is funded by a Senior AI Fellowship from the Alan Turing Institute. He is interim chair of the advisory board of the UK’s Centre for Data Ethics and Innovation and a member of the UK’s AI Council. Neil is also a visiting Professor at the University of Sheffield and the co-host of Talking Machines.
The Strachey Lectures are generously supported by OxFORD Asset Management.
9/4/2023 • 50 minutes, 26 seconds
Strachey Lecture: Symmetry and Similarity
An introduction to algorithmic aspects of symmetry and similarity, ranging from the fundamental complexity-theoretic "Graph Isomorphism Problem" to applications in optimisation and machine learning. Symmetry is a fundamental concept in mathematics, science and engineering, and beyond. Understanding symmetries is often crucial for understanding structures. In computer science, we are mainly interested in the symmetries of combinatorial structures. Computing the symmetries of such a structure is essentially the same as deciding whether two structures are the same ("isomorphic"). Algorithmically, this is a difficult task that has received a lot of attention since the early days of computing. It is a major open problem in theoretical computer science to determine the precise computational complexity of this "Graph Isomorphism Problem".
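To make the problem concrete, here is a brute-force isomorphism check on tiny graphs (purely illustrative; it tries all vertex relabellings and therefore scales factorially, which is exactly why the complexity of the general problem is so interesting):

from itertools import permutations

def are_isomorphic(edges1, edges2, n):
    """Brute-force check whether two simple graphs on vertices 0..n-1 are isomorphic."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    if len(e1) != len(e2):
        return False
    for perm in permutations(range(n)):  # n! candidate vertex relabellings
        if {frozenset((perm[u], perm[v])) for u, v in e1} == e2:
            return True
    return False

# A 4-cycle and a relabelled 4-cycle are isomorphic; a path on 4 vertices is not.
cycle      = [(0, 1), (1, 2), (2, 3), (3, 0)]
relabelled = [(2, 0), (0, 3), (3, 1), (1, 2)]
path       = [(0, 1), (1, 2), (2, 3)]
print(are_isomorphic(cycle, relabelled, 4))  # True
print(are_isomorphic(cycle, path, 4))        # False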
2/16/2023 • 1 hour, 34 seconds
Strachey Lecture: Integrating Logic, Probability and Neuro-Symbolic Reasoning using Probabilistic Soft Logic
An overview of work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains and a description of recent work which integrates neural and symbolic (NeSy) reasoning. Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. From information integration to scientific discovery to computational social science, we need machine learning methods that are able to exploit both the inherent uncertainty and the innate structure in a domain. Statistical relational learning (SRL) is a subfield that builds on principles from probability theory and statistics to address uncertainty while incorporating tools from knowledge representation and logic to represent structure. In this talk, I’ll overview our work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains. I’ll also describe recent work which integrates neural and symbolic (NeSy) reasoning. I’ll close by highlighting emerging opportunities (and challenges!) in realizing the effectiveness of data and structure for knowledge discovery.
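To give a flavour of the formalism (a hedged sketch in the spirit of PSL rather than code from the PSL system itself): rules operate on soft truth values in [0, 1], and a weighted rule's distance to satisfaction under the Lukasiewicz relaxation can be computed as follows; the rule and numbers are invented for illustration.

# Hedged sketch of Lukasiewicz-style soft logic, in the spirit of PSL.
# Truth values live in [0, 1]; the rule and its weight are made up for illustration.

def soft_and(a: float, b: float) -> float:
    """Lukasiewicz t-norm: conjunction of soft truth values."""
    return max(0.0, a + b - 1.0)

def distance_to_satisfaction(body: float, head: float) -> float:
    """How far the implication body -> head is from being fully satisfied."""
    return max(0.0, body - head)

# Example rule: Friends(A, B) AND Votes(A, P) -> Votes(B, P), with weight 2.0
weight = 2.0
friends_ab, votes_ap, votes_bp = 0.9, 0.8, 0.4

body = soft_and(friends_ab, votes_ap)
penalty = weight * distance_to_satisfaction(body, votes_bp)
print(penalty)  # contribution of this ground rule to the hinge-loss objective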
Bio:
Lise Getoor is a Distinguished Professor in the Computer Science & Engineering Department at UC Santa Cruz, where she holds the Jack Baskin Endowed Chair in Computer Engineering. She is founding Director of the UC Santa Cruz Data Science Research Center and is a Fellow of ACM, AAAI, and IEEE. Her research areas include machine learning and reasoning under uncertainty, and she has extensive experience with machine learning and probabilistic modeling methods for graph and network data. She has over 250 publications, including 13 best paper awards. She has served as an elected board member of the International Machine Learning Society, on the Computing Research Association (CRA) Board, as a Machine Learning Journal Action Editor, as an Associate Editor for the ACM Transactions on Knowledge Discovery from Data, as a JAIR Associate Editor, and on the AAAI Executive Council. She is a Distinguished Alumna of the UC Santa Barbara Computer Science Department and received the UC Santa Cruz Women in Science & Engineering (WISE) award. She received her PhD from Stanford University in 2001, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a professor at the University of Maryland, College Park from 2001 to 2013.
The Strachey Lectures are generously supported by OxFORD Asset Management.
10/27/2022 • 1 hour, 3 minutes, 39 seconds
Strachey Lecture: The Continuing Evolution of C++
Stroustrup discusses the development and evolution of C++, one of the most widely used programming languages ever. The development of C++ started in 1979. Since then, it has grown to be one of the most widely used programming languages ever, with an emphasis on demanding industrial uses. It was released commercially in 1985 and evolved through one informal standard (“the ARM”) and several ISO standards: C++98, C++11, C++14, and C++17. How could an underfinanced language without a corporate owner succeed like that? What are the key ideas and design principles? How did the original ideas survive almost 40 years of development and 30 years of attention from a 100+ member standards committee? What is the current state of C++ and what is likely to happen over the next few years? What are the problems we are trying to address through language evolution?
12/12/2017 • 58 minutes, 52 seconds
Strachey Lecture: The Once and Future Turing
Professor Andrew Hodges, author of 'Alan Turing: The Enigma', talks about Turing's work and ideas, from the definition of computability and the universal machine to the prospect of Artificial Intelligence. In 1951, Christopher Strachey began his career in computing. He did so as a colleague of Alan Turing, who had inspired him with a 'Utopian' prospectus for programming. By that time, Turing had already made far-reaching and futuristic innovations, from the definition of computability and the universal machine to the prospect of Artificial Intelligence. This talk will describe the origins and impacts of these ideas, and how wartime codebreaking allowed theory to turn into practice. After 1951, Turing was no less innovative, applying computational techniques to mathematical biology. His sudden death in 1954 meant the loss of most of this work, and its rediscovery in modern times has only added to Turing's iconic status as a scientific visionary seeing far beyond his short life.
Andrew Hodges is the author of Alan Turing: The Enigma (1983), which inspired the 2014 film The Imitation Game.
The Strachey Lectures are generously supported by OxFORD Asset Management.
11/2/2016 • 1 hour, 7 minutes, 22 seconds
Strachey Lecture: Quantum Supremacy
Dr Scott Aaronson (MIT, UT Austin) gives the 2016 Strachey lecture. In the near future, it will likely become possible to perform special-purpose quantum computations that, while not immediately useful for anything, are plausibly hard to simulate using a classical computer. These "quantum supremacy experiments" would be a scientific milestone: decisively answering quantum computing skeptics, while casting doubt on one of the foundational tenets of computer science, the Extended Church-Turing Thesis. At the same time, these experiments also raise fascinating questions for computational complexity theorists: for example, on what grounds should we believe that a given quantum system really is hard to simulate classically?
Does classical simulation become easier as a quantum system becomes noisier? And how do we verify the results of such an experiment? In this lecture, I'll discuss recent results and open problems about these questions, using three proposed "quantum supremacy experiments" as examples: BosonSampling, IQP / commuting Hamiltonians, and random quantum circuits.
Based partly on joint work with Alex Arkhipov and with Lijie Chen.
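As a rough illustration of why classical simulation becomes expensive (a toy sketch, unrelated to the specific experimental proposals discussed in the talk): a brute-force state-vector simulator must track 2^n complex amplitudes for n qubits, so time and memory grow exponentially.

import numpy as np

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray, qubit: int, n: int) -> np.ndarray:
    """Apply a 2x2 gate to one qubit of an n-qubit state vector (length 2**n)."""
    # Reshape so the target qubit becomes its own axis, apply the gate, restore shape.
    tensor = state.reshape([2] * n)
    tensor = np.moveaxis(tensor, qubit, 0)
    tensor = np.tensordot(gate, tensor, axes=([1], [0]))
    tensor = np.moveaxis(tensor, 0, qubit)
    return tensor.reshape(2 ** n)

n = 3  # even a few dozen qubits would already exceed any classical machine's memory
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # |000>

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, hadamard, q, n)

probs = np.abs(state) ** 2
print(probs)  # uniform distribution over all 2**n basis states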
The Strachey Lectures are generously supported by OxFORD Asset Management.
6/14/2016 • 1 hour, 12 minutes, 1 second
Strachey Lecture: Artificial Intelligence and the Future
In this talk Demis Hassabis discusses what is happening at the cutting edge of AI research, its future impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind. Strachey Lecture 2016, generously supported by Oxford Asset Management. Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world’s leading General Artificial Intelligence (AI) company, which was acquired by Google in 2014 in their largest ever European acquisition. Demis will draw on his eclectic experiences as an AI researcher, neuroscientist and videogames designer to discuss what is happening at the cutting edge of AI research, its future impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind.
2/26/2016 • 55 minutes, 8 seconds
Bidirectional Computation is Effectful
A reconstruction (slides and voiceover) of a talk given at the Summit on Advances in Programming Languages (snapl.org/2015) in May 2015. Bidirectional transformations inherently involve state effects. Modelling them that way allows the incorporation of other effects too, such as I/O, non-determinism, and exceptions. We briefly outline the construction.
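For readers unfamiliar with bidirectional transformations, here is a minimal sketch of a get/put pair (a 'lens') in Python; the record type and field names are invented, and the comment about state is one informal reading of the talk's point rather than its formal construction.

from dataclasses import dataclass, replace

# A minimal lens: 'get' extracts a view from a source, 'put' writes an updated
# view back into the source. The put direction needs the original source as
# state, which is one informal sense in which bidirectional computation is effectful.

@dataclass(frozen=True)
class Person:
    name: str
    age: int

def get_name(source: Person) -> str:
    return source.name

def put_name(source: Person, new_view: str) -> Person:
    return replace(source, name=new_view)

p = Person("Ada", 36)
v = get_name(p)                   # "Ada"
p2 = put_name(p, "Ada Lovelace")  # Person(name='Ada Lovelace', age=36)

# Round-trip ("lens") laws, checked on this example:
assert get_name(put_name(p, "Grace")) == "Grace"   # PutGet
assert put_name(p, get_name(p)) == p               # GetPut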
11/17/2015 • 5 minutes, 16 seconds