SORS: Energy-Efficient Machine Intelligence
Abstract
Ever since the beginning of digital computing, scientists have been fascinated by the concept of artificial intelligence (AI), a form of computation that mimics human-level reasoning and decision making. What was a mere vision in 1950, when Alan Turing proposed the imitation game to assess whether a computer program is intelligent, affects all our lives today: there is hardly an area of society that is not enhanced by AI algorithms, with applications ranging from marketing and advertising, e-commerce, gaming, and communication to medicine and transportation.
However, we are starting to reach an inflection point in AI research where predictive accuracy is no longer the key success criterion; instead, the amount of data, compute and, ultimately, energy becomes the limiting factor for future AI algorithms. This change has profound implications for (1) the system-level aspects of machine learning: which digital technologies and hardware are best suited to trade off predictive accuracy and energy consumption; (2) the method-level aspects of machine learning: how can we achieve human-level data efficiency, where algorithms learn from a handful of examples and episodes rather than the thousands of training examples needed today; and (3) the theory-level aspects of machine learning: how do we merge the physical notion of energy with the notions of information and learning in one unifying theory. In this talk, I will discuss these three aspects and share research problems in each of these three areas.
Short Bio
Ralf Herbrich has led the group on Artificial Intelligence and Sustainability at the Hasso-Plattner Institute in Potsdam since May 2022. He is also on the Supervisory Board of SAP. Previously, he served as Senior Vice President, Builder Platform & Artificial Intelligence at Zalando (2020 – 2022), and was Director of Machine Learning at Amazon in Berlin and Managing Director of the Amazon Development Center in Germany (2013 – 2020). Prior to these roles at Amazon, he led Facebook's Unified Ranking and Allocation team in 2011. From 2000 to 2011, he served as Director of Microsoft's Future Social Experiences (FUSE) Lab UK and worked for nine years at the Microsoft Research Lab in Cambridge, UK.
Speakers
Speaker: Ralf Herbrich, Hasso-Plattner Institute in Potsdam
Host: Ricardo Baeza-Yates, BSC AI Institute Director, BSC