December 7–8, 2020

9 am to 2 pm PT (Noon to 5 pm ET) Daily

Watch Workshop Videos: C3.ai DTI YouTube Channel (From our homepage, scroll down to Workshops)

Advances in machine learning have accelerated the introduction of autonomy in our everyday lives. However, ensuring that these autonomous systems act as intended is an immense challenge. Today, when self-driving vehicles or collaborative robots operate in real-world uncertain environments, it is impossible to guarantee safety at all times. A key challenge stems from the uncertainty of the environment itself, and the inability to predict all possible situations and interactions that could confront the system. Machine learning, and its potential ability to generalize, may provide a solution. For example, a learning-based perception system for a self-driving vehicle must be able to generalize beyond the scenes that it has observed in training. Similarly, learned dynamical driving policies must successfully execute agile safety maneuvers in previously unexperienced scenarios. And yet today, these learning algorithms are producing solutions that are not easy to understand and may be brittle to faults and possible cyber-attacks. In addition, machine learning-based autonomy is largely being designed in isolation from the people who would use it, rather than being built from the ground up for interaction and collaboration.

In this workshop, we explore the scope of safe autonomy, identify and present the challenges, and explore current research developments that help us move toward a solution. It includes talks from researchers and practitioners in academia, industry, and government, spanning areas such as control and robotics, AI and machine learning, formal methods, and human-robot interaction, with applications to ground, air, and space vehicles as well as medical robotics.

ORGANIZERS
Geir Dullerud (University of Illinois at Urbana-Champaign), Claire Tomlin (University of California, Berkeley)

SPEAKERS

Pieter Abbeel (University of California, Berkeley), Lars Blackmore (SpaceX), J-P Clarke (University of Texas at Austin), Anca Dragan (University of California, Berkeley), Katie Driggs-Campbell (University of Illinois at Urbana-Champaign), Hadas Kress-Gazit (Cornell University), Sayan Mitra (University of Illinois at Urbana-Champaign), Sandeep Neema (Defense Advanced Research Projects Agency), George Pappas (University of Pennsylvania), Daniela Rus (Massachusetts Institute of Technology), Dawn Tilbury (National Science Foundation, University of Michigan), Keenan Wyrobek (Zipline)

Day 1: Monday, December 7th

8:55 am – 9:00 am
Opening Remarks, Shankar Sastry (C3.ai DTI Co-Director, University of California, Berkeley) and R. Srikant (C3.ai DTI Co-Director, University of Illinois at Urbana-Champaign)
9:00 am – 9:30 am
Understanding Risk and Social Behavior Improves Decision Making for Autonomous Vehicles

ABSTRACT

Deployment of autonomous vehicles on public roads promises increases in efficiency and safety, and requires evaluating risk, understanding the intent of human drivers, and adapting to different driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. This talk describes how to integrate risk and behavior analysis in the control loop of an autonomous car. I will describe how Social Value Orientation (SVO), which captures how an agent’s social preferences and cooperation affect their interactions with others by quantifying the degree of selfishness or altruism, can be integrated into decision making, and provide recent examples of developing and deploying self-driving vehicles with adaptation capabilities.
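SVO is commonly parameterized as an angle that blends an agent’s own reward with another agent’s reward. The sketch below is a minimal illustration of that weighting in Python; the function name and example values are illustrative, not taken from the speaker’s implementation.

```python
import math

def svo_utility(own_reward: float, other_reward: float, svo_angle: float) -> float:
    """Blend an agent's own reward with another agent's reward.

    svo_angle (radians), in the common SVO parameterization:
      0      -> purely selfish (egoistic)
      pi/4   -> prosocial (weighs both rewards equally)
      pi/2   -> purely altruistic
    """
    return math.cos(svo_angle) * own_reward + math.sin(svo_angle) * other_reward

# A selfish driver ignores the other agent's reward entirely:
print(svo_utility(1.0, 5.0, 0.0))          # -> 1.0
# A prosocial driver weighs both rewards equally:
print(svo_utility(1.0, 1.0, math.pi / 4))  # -> sqrt(2)
```

A planner would then score candidate trajectories with this blended utility, so a single angle per agent captures how selfishly or cooperatively it negotiates interactions such as merges.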

9:30 am – 10:00 am
Specifications and Feedback for Safe Autonomy

ABSTRACT

What is “safe” when we talk about autonomy? How is it defined and by whom? What happens when we cannot guarantee “safety”? In this talk, I will discuss specifications, synthesis, and feedback mechanisms that require us to be explicit about the definition of safety but also enable us to provide explanations when things go wrong.

10:00 am – 10:15 am
Break
10:15 am – 10:45 am
Autonomous Precision Landing of Reusable Rockets

ABSTRACT

The SpaceX reusable rocket program aims to reduce the cost of space travel by making rockets that can land, refuel, and refly instead of being thrown away after every flight. Autonomous precision landing of a rocket is a unique problem, which has been likened to balancing a rubber broomstick on your hand in a windstorm. Rockets do not have wings (unlike airplanes) and they cannot rely on a high ballistic coefficient to fly in a straight line (unlike missiles). The SpaceX Falcon 9 booster has now landed more than 50 times and has been reused more than 35 times, making reusability a normal part of the launch business. This talk will discuss the challenges involved, how these challenges were overcome, and the new challenges involved in landing Starship, which is designed to carry hundreds of tons of payload to the Moon, Mars, and beyond for a fraction of the cost of previous rockets.

SPEAKER

Lars Blackmore is the Senior Principal Mars Landing Engineer at Space Exploration Technologies. Lars is responsible for entry and landing of the Starship rocket. Lars’ team developed entry and landing for Falcon 9, which has landed over 50 times, sometimes on land and sometimes on a floating platform. Prior to his time at SpaceX, Lars wrote algorithms for space missions at the NASA Jet Propulsion Laboratory. He co-invented the G-FOLD algorithm for precision landing on Mars, and his control algorithms are currently flying on the SMAP climate-observing spacecraft. Lars previously worked with the McLaren Formula One racing team, and has a PhD from the Massachusetts Institute of Technology.

10:45 am – 11:15 am
System Safety and Policy Implications of Autonomy

ABSTRACT

Certification is a barrier to increased autonomy in civil aviation and other safety-critical domains, perhaps most of all because it is difficult to quantify or even bound the performance of non-deterministic machines using schemes that explicitly enumerate their input-output characteristics. However, non-deterministic humans are certified (granted licenses) to perform various functions based on an a priori assessment of their decision-making abilities, which suggests that machines could be certified in a similar way. During this presentation, I will discuss the system safety and policy implications of increasing machine autonomy in safety-critical systems, from systems where humans and machines work in partnership to systems where no humans are involved in decision-making and operation. Further, I will leverage analyses of autonomy in purely human systems to enumerate several attributes and requirements that must be satisfied by autonomous machines to provide guarantees equivalent to those of humans.

11:15 am – 11:30 am
Break
11:30 am – 12:00 pm
A Trust Management Framework for Calibrating Driver Trust in Semi-automated Vehicles

ABSTRACT

Although automated vehicles are expected to become ubiquitous in the future, it will be important for people to trust them appropriately. If drivers overtrust the AV’s capabilities, the risks of system failures or accidents increase. On the other hand, if drivers undertrust the AV, they will not fully leverage the benefits of the AV’s functionalities. Therefore, both types of trust miscalibration (under- and overtrust) are undesirable. We consider the problem of maintaining drivers’ trust in the AV at a calibrated level, in real time, while they operate the AV. To do this, we estimate the driver’s trust in the AV, compare the trust estimate with the trust “reference” that represents the AV’s capabilities in context, and finally influence the driver’s trust to either increase or decrease it. A model for driver trust is developed, a Kalman filter is used to update the estimate in real time, and experimental results are presented that validate the trust management framework.
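The estimate-compare-influence loop described above can be sketched with a scalar Kalman filter. The noise values, random-walk trust dynamics, and intervention labels below are illustrative assumptions for exposition, not the authors’ actual model.

```python
def kalman_update(trust_est: float, variance: float, measurement: float,
                  process_var: float = 0.01, meas_var: float = 0.1):
    """One scalar Kalman step: predict (random-walk trust model), then
    correct with a noisy behavioral measurement of driver trust."""
    variance += process_var                          # predict
    gain = variance / (variance + meas_var)          # Kalman gain
    trust_est += gain * (measurement - trust_est)    # correct
    variance *= (1.0 - gain)
    return trust_est, variance

def calibrate(trust_est: float, trust_ref: float, tol: float = 0.05) -> str:
    """Compare the estimate to the capability reference and pick an action."""
    if trust_est > trust_ref + tol:
        return "dampen"    # overtrust: e.g., surface the AV's limitations
    if trust_est < trust_ref - tol:
        return "bolster"   # undertrust: e.g., explain the AV's decisions
    return "hold"          # calibrated: no intervention needed
```

For example, if behavioral measurements place the driver’s trust near 0.9 while the reference for the current context is 0.5, the loop would recommend dampening trust.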

SPEAKER

Dawn M. Tilbury has been a professor of Mechanical Engineering at the University of Michigan since 1995. Her research interests lie broadly in the area of control systems, including applications to robotics and manufacturing systems. Since 2017, she has been the Assistant Director for Engineering at the National Science Foundation, where she oversees a federal budget of nearly $1 billion annually, while maintaining her position at the University of Michigan. She has published more than 200 articles in refereed journals and conference proceedings. She is a Fellow of both IEEE and ASME, and a Life Member of SWE.

12:00 pm – 12:30 pm
Assured Autonomy

ABSTRACT

The DARPA Assured Autonomy program aims to advance how computing systems can learn and evolve with machine learning to better manage variations in the environment and enhance the predictability of autonomous systems like driverless vehicles. In this talk, I will provide an overview of this DARPA program along with key results. Specifically, the talk will discuss rigorous design and analysis technologies for continual assurance of learning-enabled autonomous systems in order to guarantee safety properties in all phases of the system lifecycle.

SPEAKER

Sandeep Neema joined DARPA in July 2016 and again in September 2020. His research interests include cyber-physical systems, model-based design methodologies, distributed real-time systems, and mobile software technologies. Prior to joining DARPA, Dr. Neema was a Professor of Electrical Engineering and Computer Science at Vanderbilt University. Dr. Neema participated in numerous DARPA initiatives throughout his career, including the Transformative Apps, Adaptive Vehicle Make, and Model-based Integration of Embedded Systems programs. Dr. Neema has authored and co-authored more than 100 peer-reviewed conference and journal publications and book chapters. Dr. Neema holds a doctorate in electrical engineering and computer science from Vanderbilt University, and a master’s in electrical engineering from Utah State University. He earned a bachelor of technology degree in electrical engineering from the Indian Institute of Technology, New Delhi, India.

12:30 pm – 12:45 pm
Break
12:45 pm – 2:00 pm
Panel

Day 2: Tuesday, December 8th

9:00 am – 9:30 am
Towards Safe and Stable Reinforcement Learning

ABSTRACT

Deep reinforcement learning has seen many successes, ranging from the classical game of Go and video games to robots learning a wide range of skills from their own trial and error. However, when such trial-and-error learning needs to happen in the real world, it can be costly, and learning can destabilize. In this talk I will discuss new approaches to satisfying (safety) constraints during learning and ensuring stable learning progress.
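One common pattern for enforcing constraints during learning is a safety shield that overrides the learner’s action whenever it would violate a known constraint. The toy sketch below is purely illustrative of that pattern and is not the speaker’s method.

```python
def safe_step(state, action, is_safe, fallback):
    """Safety-shield pattern: pass the learner's action through only if it
    satisfies the constraint; otherwise substitute a known-safe fallback."""
    return action if is_safe(state, action) else fallback(state)

# Toy 1-D example: the agent's position must stay within [-1, 1].
def is_safe(pos: float, action: float) -> bool:
    return -1.0 <= pos + action <= 1.0

def fallback(pos: float) -> float:
    """Conservative action that drifts back toward the origin."""
    return -0.1 * pos

print(safe_step(0.0, 0.5, is_safe, fallback))   # safe: action passes through
print(safe_step(0.95, 0.2, is_safe, fallback))  # unsafe: fallback overrides
```

The learner explores freely inside the safe set, while every boundary-crossing action is replaced before it reaches the environment, so constraint violations never occur during training.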

SPEAKER

Pieter Abbeel is a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He strives to build ever more intelligent systems. His lab pushes the frontiers of deep reinforcement learning, deep imitation learning, deep unsupervised learning, transfer learning, meta-learning, and learning to learn, and also studies the influence of AI on society. Abbeel has received many awards and honors, including the Presidential Early Career Award for Scientists and Engineers (PECASE), NSF-CAREER, Office of Naval Research-Young Investigator Program (ONR-YIP), DARPA-YFA, and TR35. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and Tech Review.

9:30 am – 10:00 am
Towards Verified Robot Code

ABSTRACT

Distributed robotics is poised to transform transportation, agriculture, delivery, and exploration. Following the trends in cloud, mobile, and machine learning applications, finding the right programming abstractions is key to unlocking this potential. A robot’s code needs to sense the environment, control the hardware, and communicate with other robots. Current programming languages do not provide the necessary hardware platform-independent abstractions, and, therefore, developing robot applications requires detailed knowledge of control, path planning, network protocols, and various platform-specific details. Porting applications across hardware platforms is tedious. In this talk, I will present our recent explorations in finding good abstractions for robot code. The end result is a new language called Koord which abstracts platform-specific functions for sensing, communication, and low-level control and makes platform-independent control and coordination code portable and modularly verifiable.
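The kind of platform-independent abstraction the abstract describes can be illustrated with an interface that separates coordination logic from hardware specifics. The sketch below is in Python rather than Koord, and all class and method names are hypothetical, chosen only to show the idea.

```python
from abc import ABC, abstractmethod

class RobotPlatform(ABC):
    """Platform-independent interface: coordination code is written against
    these methods; each hardware backend supplies the platform specifics."""

    @abstractmethod
    def sense(self) -> dict: ...
    @abstractmethod
    def goto(self, waypoint) -> None: ...
    @abstractmethod
    def broadcast(self, msg) -> None: ...

class SimPlatform(RobotPlatform):
    """Trivial simulator backend used here in place of real hardware."""
    def __init__(self):
        self.pos = (0.0, 0.0)
        self.outbox = []
    def sense(self) -> dict:
        return {"pos": self.pos}
    def goto(self, waypoint) -> None:
        self.pos = waypoint          # simulator: move directly to waypoint
    def broadcast(self, msg) -> None:
        self.outbox.append(msg)      # simulator: record outgoing messages

def patrol(bot: RobotPlatform, waypoints) -> None:
    """Coordination logic that ports across platforms unchanged."""
    for wp in waypoints:
        bot.goto(wp)
        bot.broadcast({"reached": wp})
```

Because `patrol` only touches the abstract interface, the same coordination code runs on a simulator, a quadrotor, or a ground vehicle, and the verification burden can be split between the portable logic and each platform’s implementation of the primitives.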

SPEAKER

Sayan Mitra is a Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research is in algorithmic analysis of autonomous systems like self-driving cars and spacecraft. Several algorithms and tools developed by his lab have been commercialized. His new book Verifying Cyber-Physical Systems: A Path to Safe Autonomy will be published by MIT Press in 2021. Sayan holds a PhD from MIT and held postdoctoral and visiting positions at Caltech, Oxford, and TU Vienna. His work has been recognized by the NSF CAREER Award, AFOSR Young Investigator Award, IEEE-HKN Teaching Award, a Siebel Fellowship, and several best paper awards.

10:00 am – 10:15 am
Break
11:30 am – 12:00 pm
Fantastic Failures and Where to Find Them: Designing Trustworthy Autonomy

ABSTRACT

Autonomous robots are becoming tangible technologies that will soon impact the human experience. However, the desirable impacts of autonomy are only achievable if the underlying algorithms are robust to real-world conditions and are effective in (near) failure modes. This is often challenging in practice, as the scenarios in which general robots fail are often difficult to identify and characterize. In this talk, we’ll discuss how to learn from failures to design robust interactive systems and how we can exploit structure in different applications to efficiently find and classify failures. We’ll showcase both our failures and successes on autonomous vehicles and agricultural robots in real-world settings.

SPEAKER

Katie Driggs-Campbell is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Prior to that, she was a Postdoctoral Research Scholar at the Stanford Intelligent Systems Laboratory in the Aeronautics and Astronautics Department. She received a B.S.E. with honors from Arizona State University in 2012, an M.S. from the University of California, Berkeley in 2015, and a PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2017. Her lab works on human-centered autonomy, focusing on the integration of autonomy into human-dominated fields, merging ideas in robotics, learning, human factors, and control.

12:00 pm – 12:30 pm
Autonomy at Zipline, Behind the Scenes

ABSTRACT

Zipline is the largest operator of autonomous drones in the world. This talk will go behind the scenes sharing nuts and bolts details of what it takes to do this at scale.

SPEAKER

Keenan Wyrobek is co-founder and head of product and engineering at Zipline, the world’s first drone delivery service, whose focus is delivering life-saving medicine, even to the most difficult-to-reach places on Earth. Prior to Zipline, Keenan was a co-founder of the Robot Operating System (ROS) and led the development of the PR2, a personal robot platform for R&D. Keenan has spent his career delivering high-tech products to market across a range of fields, including medical robotics. You can find Keenan on Twitter @keenanwyrobek.

12:30 pm – 12:45 pm
Break
12:45 pm – 2:00 pm
Break