SOCML 2020 Schedule

Sessions

Monday, November 30, 2020

AI, Social Impact and Public Goods

Moderator: Yoshua Bengio, Full Professor, Université de Montréal and Head of Montreal Institute for Learning Algorithms
15:30-17:00 UTC

AI systems are becoming more and more powerful, leading both to increasing opportunities for socially beneficial applications and to opportunities for misuse that can harm individuals and society. What kind of governance and fiscal policies can help steer corporations towards the good side and away from the bad one? This is an alignment problem similar to the one requiring us to design AI to match human values. What is the role of legal constraints vs. fiscal incentives, thinking of corporations as learning agents? For incentives, we need metrics: are the indicators associated with the UN Sustainable Development Goals (SDGs) a good starting point? How do we avoid corporations overfitting the metrics? How do we make sure that the metrics reflect the future benefit or harm to society, including the current uncertainty about future values? How do we handle public goods, like knowledge and transparency, which are difficult to quantify?

Scaling laws and algorithmic efficiency gains

Moderator: Tom Brown, Member of Technical Staff, OpenAI
21:00-22:30 UTC

Machine learning has become a resource-intensive technology. Scaling laws predicting machine learning performance as a function of model size, dataset size, and the amount of computation used for training hold over several orders of magnitude. At the same time, the computational efficiency of many important AI algorithms has increased faster than Moore's law over the past several years. Come to this session to discuss ideas about how to measure more of these trends, their implications, and the likely future of AI development.
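
As a toy illustration of the first point, a power law L(N) = a·N^(−b) is linear in log-log space, so its exponent can be recovered with an ordinary least-squares fit. The sketch below uses synthetic data; the constants a and b are made up for illustration and are not taken from any published scaling-law study:

```python
import numpy as np

# Hypothetical (model size, validation loss) pairs following an exact
# power law L(N) = a * N**(-b); the constants are illustrative only.
sizes = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
losses = 10.0 * sizes ** -0.076  # synthetic: a = 10, b = 0.076

# A power law is linear in log-log space, so an ordinary least-squares
# fit on the logarithms recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
b_hat = -slope
a_hat = np.exp(intercept)
print(f"fitted exponent b ~ {b_hat:.3f}, prefactor a ~ {a_hat:.2f}")
```

The same log-log fit is how scaling-law exponents are typically estimated from real training runs, though real measurements are noisy and span fewer clean orders of magnitude than this synthetic example.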

Tuesday, December 1, 2020

Generative adversarial networks

Moderator: Ming-Yu Liu, Distinguished Research Scientist and Manager, NVIDIA Research
00:00-01:30 UTC

Generative adversarial networks are a class of generative models that use competition between two networks to learn to generate samples resembling the training data. They have become especially popular for modeling large, photorealistic images. GANs are also useful for unsupervised domain translation: for example, by training on daytime photos and nighttime photos, it becomes possible to turn a daytime photo into a nighttime photo or vice versa, without ever needing supervised pairs of photos showing the same scene in daytime and nighttime versions. GANs also pose many research challenges in terms of developing stable learning algorithms and meaningful performance metrics.
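
As a minimal sketch of the two-player objective (not any particular GAN implementation), the code below computes the standard discriminator loss and the commonly used non-saturating generator loss from hypothetical discriminator logits; in a real GAN these logits would come from a neural network evaluated on real and generated samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical discriminator logits for a batch of real and generated
# samples; placeholders standing in for a network's outputs.
real_logits = rng.normal(loc=1.0, size=8)   # D leans toward "real"
fake_logits = rng.normal(loc=-1.0, size=8)  # D leans toward "fake"

# Discriminator loss: binary cross-entropy with real samples labeled 1
# and generated samples labeled 0.
d_loss = (-np.mean(np.log(sigmoid(real_logits)))
          - np.mean(np.log(1.0 - sigmoid(fake_logits))))

# Non-saturating generator loss: maximize log D(G(z)), i.e. minimize
# -log(sigmoid(fake_logits)); this gives stronger gradients early in
# training than directly minimizing log(1 - D(G(z))).
g_loss = -np.mean(np.log(sigmoid(fake_logits)))

print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```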

Deep Reinforcement Learning

Moderator: Laura Graesser, Software Engineer, lead author of Foundations of Deep Reinforcement Learning
17:00-18:30 UTC

Deep reinforcement learning combines deep learning with reinforcement learning. Since 2013, this approach has outperformed top human professionals in several games, such as Go, StarCraft II, and DotA 2, and has made major advances in robotics. Come discuss this fast-moving research area.

Good practices for evaluation of machine learning systems

Moderator: Luciana Ferrer, Associate Researcher, University of Buenos Aires - CONICET
17:30-19:00 UTC

In this session we will discuss issues related to evaluating the performance of machine learning systems. A good evaluation protocol is a defining part of the development of a machine learning system; poor evaluation practices can lead to wrong conclusions about the methods, misguiding research and development decisions. This can result in papers that cannot be replicated by other research teams and systems that, in practice, perform worse than expected. Important aspects of the evaluation protocol include:

  1. choice of data,
  2. methods for splitting the data into training/development/evaluation sets,
  3. choice of metrics, including the issue of calibration,
  4. assessment of significance, and
  5. decisions on what method or system should be used as a fair baseline for comparison.

I discussed some of these issues in this talk and slides. I propose that those interested in attending the session take a look at the video or slides beforehand so that we can use them as a starting point for the discussion.
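
On the data-splitting point, a minimal sketch (with hypothetical data) of a fixed shuffled train/development/evaluation split is shown below; real protocols must additionally respect groups, so that, for example, the same speaker or patient never appears in both training and evaluation data:

```python
import numpy as np

rng = np.random.default_rng(42)

# 100 hypothetical examples; indices stand in for (input, label) pairs.
indices = rng.permutation(100)

# A fixed 70/15/15 split into training / development / evaluation sets.
# Shuffling first avoids ordering artifacts (e.g. data sorted by class
# or by collection date leaking structure into the split).
train, dev, test = np.split(indices, [70, 85])

# The three sets must be disjoint, or evaluation results are meaningless.
assert not (set(train) & set(dev)) and not (set(dev) & set(test))
print(len(train), len(dev), len(test))
```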

Formal verification of AI systems

Moderator: Aditi Raghunathan, PhD Student, Stanford University
21:00-22:30 UTC

Machine learning is a useful tool that often produces great results, but it also makes mistakes at rates that are unusually high by software engineering standards. So far this has prevented us from obtaining the benefits of machine learning in contexts where such error rates would compromise safety or security. How can we design algorithms that provide guaranteed limits on the rate, type, or severity of these errors?
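
One concrete technique from this literature (a sketch, not necessarily what the session will cover) is interval bound propagation: pushing an input interval through a network layer by layer yields guaranteed, if loose, bounds on the outputs. The tiny two-layer ReLU network below is entirely hypothetical; real verifiers use tighter relaxations, but the core idea is the same:

```python
import numpy as np

# A hypothetical two-layer ReLU network with hand-picked weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.2])

def linear_bounds(lo, hi, W, b):
    # Split weights into positive and negative parts so each output
    # bound pairs with the correct end of the input interval.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Certify behavior over the input box [-0.1, 0.1] x [-0.1, 0.1].
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = linear_bounds(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
lo, hi = linear_bounds(lo, hi, W2, b2)
print("certified output interval:", lo, hi)  # every input in the box
                                             # maps inside [lo, hi]
```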

Wednesday, December 2, 2020

Generative models

Moderator: Mihaela Rosca, Staff Research Engineer, DeepMind
16:00-17:30 UTC

Generative models can learn to estimate the density of or draw samples from complicated high dimensional probability distributions. They are used to model images, audio such as speech, video, and a variety of other types of data. Come to this session to discuss anything ranging from research to applications or even challenges in measuring the performance of generative models.

Natural language understanding

Moderator: Jason D. Williams, Senior Manager, Apple
16:30-18:00 UTC

How can we build useful agents that understand natural language and are able to take actions that achieve what users want? Challenges in this area include the diversity of natural language and the complexity of understanding concepts throughout multi-turn conversations. These challenges are further compounded when the conversation is spoken out loud, requiring speech recognition as well. For this session, come prepared to do a 1-2 minute lightning talk on what you think the next big open problem(s) are for NLU and dialog systems.

Thursday, December 3, 2020

Machine learning and privacy

Moderator: Úlfar Erlingsson, Research Scientist, Apple
13:00-14:30 UTC

Privacy stands out as the quintessential success story in the quest to design robust machine learning. So far it has not been possible to design machine learning algorithms that always generalize or always perform well despite distribution shifts, but we do have machine learning algorithms that provide provable bounds on memorization of private information from the training set, thanks to differential privacy. Come to this session to discuss this and other privacy-related topics, such as multi-party computation, homomorphic encryption, model theft, and real-world applications of privacy preserving technology.
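
A minimal sketch of the differential-privacy idea mentioned above is the Laplace mechanism for a counting query; the dataset and epsilon below are hypothetical, and real deployments (such as DP-SGD for model training) involve considerably more machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism. A counting query has sensitivity 1 (adding or
    removing one person changes the count by at most 1), so Laplace
    noise with scale 1/epsilon suffices."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical query: how many records in a dataset satisfy a predicate.
true_count = 42
released = laplace_count(true_count, epsilon=0.5, rng=rng)
print(f"true={true_count}  released={released:.1f}")
```

Smaller epsilon means stronger privacy but noisier answers; the released value is unbiased, so repeated queries would average toward the truth, which is exactly why privacy budgets must be tracked across queries.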

Imitation learning

Co-moderators:
   Tapani Raiko, Principal Research Scientist, Apple
   Tobias Gindele, Machine Learning Engineer, Apple
17:00-18:30 UTC

Imitation learning resembles both generative modeling and reinforcement learning. As with generative modeling, the goal is to learn a model that can reproduce complicated patterns (in this case, patterns of behavior) from training data. As with reinforcement learning, the goal is to learn an agent that can successfully carry out prolonged interactions with its environment. Imitation learning is an especially active area of research today, with both open-loop approaches that learn purely from offline data and closed-loop approaches such as GAIL and SQIL that actively involve the environment in their learning process.
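
The simplest open-loop approach is behavioral cloning: treat imitation as supervised learning from states to the expert's actions, never querying the environment. The sketch below uses a synthetic linear expert policy purely for illustration; real demonstrations and policies are of course far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert demonstrations: states and the actions the expert
# took in them. Here the (unknown to the learner) expert is linear.
states = rng.normal(size=(200, 4))
expert_w = np.array([0.5, -1.0, 2.0, 0.1])
actions = states @ expert_w + 0.01 * rng.normal(size=200)

# Behavioral cloning: ordinary least-squares regression from states to
# actions recovers the expert's policy parameters from the data alone.
w_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

print("recovered policy weights:", np.round(w_hat, 2))
```

A well-known limitation of this open-loop recipe is compounding error: small mistakes drift the agent into states absent from the demonstrations, which is one motivation for the closed-loop methods named above.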

Beyond Modeling: Challenges to Practical Applications of ML to Healthcare

Moderator: Olivia Koshy, ML Operations Engineer, Nines
18:00-20:00 UTC

Without a doubt, the field of machine learning has made tremendous progress across a wide variety of tasks and fields. We have seen this especially in the context of integrating advances from academic research into industry settings. Yet rarely have we seen the same level of success when specifically applied to the healthcare field. Why is that?

This session takes a deep dive into the obstacles to practical ML applications in real-world healthcare settings. We'll reflect on the failure cases of the past and obstacles for the future with the following starting points:

  1. Our inputs - how can we assemble datasets that are useful for training clinically viable models? And how does this compare to the status quo (e.g., academic datasets)?
  2. The model development cycle - how can we decide what models would provide clinical value and then build them?
  3. Our outputs - how do we deploy and productionize, integrating our results with the clinical workflow and evaluating impact on standards of care?

Doing 'cognitive neuroscience' on models - will it help us understand generalization?

Moderator: Catherine Olsson, Senior Program Associate, Open Philanthropy Project
20:30-22:00 UTC

We would like to trust that machine learning systems will generalize safely to new environments. Unfortunately, training environments are “underspecified”: many different underlying strategies for solving a task can have equally good training performance, and only reveal themselves to be problematic when deployed in the real world.

If we had better tools for understanding how a neural net "thinks", could we audit models' "reasoning" in advance of deployment?

Different methods in the literature could be described as doing "cognitive science" on models, treating the neural nets themselves as the object of study. These include "neuroscience"-like approaches that open the black box of the neural net (including probing methods in NLP, and distill.pub-style work studying individual neurons' activations) and "cognitive"-like approaches that carefully craft novel inputs to test hypotheses (such as the texture-biased ImageNet images used by Geirhos et al., or the modified robust and non-robust datasets used in "Adversarial Examples Are Not Bugs, They Are Features").

Are these "cognitive neuroscience" approaches potentially useful for auditing models in advance of deployment to avoid unintended generalization? Or is this a dead end?

Friday, December 4, 2020

AI for Mental Health

Moderator: Danielle Belgrave, Principal Research Manager, Microsoft Research
16:00-17:30 UTC

An estimated 25% of people are affected by a mental health condition at some point in their lives. How can AI help to improve the reach and effectiveness of mental health services? Come to this session to discuss topics like using machine learning to ensure earlier intervention, modeling patterns of engagement, and personalization of digital services.

Speech Translation (and Data Efficiency)

Moderator: Matthias Paulik, Sr. Manager, Apple
16:30-18:00 UTC

Recently, end-to-end trainable neural approaches have fueled hope for addressing many of the long-standing challenges in speech translation (ST) in a more principled manner. Despite these hopes, the empirical evidence indicates that the success of such efforts has so far been mixed. In this session, we would like to first reflect on the state of the art and issues faced in ST, then discuss potential solutions and future directions of research. Solutions that address data scarcity in ST (and in machine translation in general) might be of particular interest in our upcoming discussion.

Game theory in AI

Moderator: Paulina Grnarova, PhD Student, ETH Zurich
17:00-18:30 UTC

Many important results in AI, ranging from Arthur Samuel's checkers-playing agent in the 1950s to modern deep RL agents that excel at Go, Atari, StarCraft II, and DotA 2, have essentially been agents for playing minimax games. Many other AI algorithms are explicitly intended to play game-theoretic games, such as GANs and adversarial training against adversarial examples. Game theory is more challenging than optimization in many ways: many iterative algorithms that converge for optimization problems do not converge, or converge very slowly, for games, and even measuring the performance of an agent is complicated because each agent's performance depends on other agents' strategies. How can we design algorithms that perform efficiently in these scenarios? How can we leverage ideas from game theory, such as fictitious play, exploitability, and the duality gap, in a machine learning context?
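
As a small illustration of fictitious play, one of the ideas named above, the sketch below runs it on matching pennies, a classic two-player zero-sum game; the initial beliefs are arbitrary. For zero-sum games the empirical action frequencies are known to converge to a Nash equilibrium, here the uniform mixture over both actions:

```python
import numpy as np

# Matching pennies: the row player wins (+1) on a match, loses (-1)
# otherwise; the column player receives the negation (zero-sum).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Fictitious play: each round, each player best-responds to the
# opponent's empirical mixture of past actions.
row_counts = np.array([1.0, 0.0])  # arbitrary initial beliefs
col_counts = np.array([0.0, 1.0])

for _ in range(10000):
    row_action = np.argmax(A @ (col_counts / col_counts.sum()))
    col_action = np.argmin((row_counts / row_counts.sum()) @ A)
    row_counts[row_action] += 1
    col_counts[col_action] += 1

# Both empirical mixtures approach the equilibrium (0.5, 0.5).
print("row mixture:", np.round(row_counts / row_counts.sum(), 2))
print("col mixture:", np.round(col_counts / col_counts.sum(), 2))
```

Note that the players' actual play cycles rather than settling down; only the time-averaged frequencies converge, which is one example of why measuring performance in games is subtler than in optimization.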

Machine learning systems

Moderator: Shreya Rajpal, Machine Learning Engineer, Apple
18:00-19:30 UTC

MLSys@SOCML aims to discuss recent trends at the intersection of ML and Systems, including efficient ML architectures for training and inference, ML deployment at scale, evaluation of real world ML models beyond test sets, and ML systems in industry.