The Colloquium on Digital Transformation is a series of weekly online talks featuring top scientists from academia and industry on how artificial intelligence, machine learning, and big data can lead to scientific breakthroughs with large-scale societal benefit.

Register for the fall Zoom webinar series here.

See videos of all C3.ai DTI talks at YouTube.com/C3DigitalTransformationInstitute.


January 14, 2021, 1 pm PT/4 pm ET

A Bayesian Hierarchical Network for Combining Heterogeneous Data Sources in Medical Diagnoses – With Applications to COVID-19

Claire Donnat, Assistant Professor, University of Chicago

The increasingly widespread use of affordable, yet often less reliable, medical data and diagnostic tools poses a new challenge for the field of Computer-Aided Diagnosis: how can we combine multiple sources of information with varying levels of precision and uncertainty to provide an informative diagnosis estimate with confidence bounds? Motivated by a concrete application in lateral flow antibody testing, we devise a Stochastic Expectation-Maximization algorithm that allows the principled integration of heterogeneous and potentially unreliable data types. Our Bayesian formalism is essential in (a) flexibly combining these heterogeneous data sources and their corresponding levels of uncertainty, (b) quantifying the degree of confidence associated with a given diagnosis, and (c) dealing with the missing values that typically plague medical data. We quantify the potential of this approach on simulated data, and showcase its practicality by deploying it on a real COVID-19 immunity study.
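As a much-simplified sketch of the combination problem (not the speaker's Stochastic EM algorithm), Bayes' rule already shows how two tests of different reliability can be fused into a single posterior; every sensitivity, specificity, and prior below is a hypothetical value:

```python
# Toy sketch (not the speaker's algorithm): combining two imperfect
# diagnostic tests via Bayes' rule, assuming conditional independence.
# Sensitivities, specificities, and the prior are hypothetical values.

def posterior_infected(prior, results, sens, spec):
    """Posterior P(infected | test results) for independent binary tests.

    results[i] is True for a positive result on test i;
    sens[i]/spec[i] are that test's sensitivity/specificity.
    """
    odds = prior / (1 - prior)
    for r, se, sp in zip(results, sens, spec):
        # Likelihood ratio of a positive (or negative) result.
        lr = se / (1 - sp) if r else (1 - se) / sp
        odds *= lr
    return odds / (1 + odds)

# A cheap lateral-flow test (lower sensitivity) and a lab assay (higher).
p = posterior_infected(prior=0.05,
                       results=[True, True],
                       sens=[0.70, 0.95],
                       spec=[0.97, 0.99])
print(f"P(infected | both positive) = {p:.3f}")
```

The real method additionally handles missing values and learns the reliability parameters from data; this sketch shows only the fusion step.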

Claire Donnat is an Assistant Professor in the Department of Statistics at the University of Chicago. Her work focuses on high-dimensional and Bayesian statistics and their applications to biomedical data. Before joining the University of Chicago, she completed her PhD in Statistics at Stanford, where she was advised by Professor Susan Holmes.

Watch the recording on YouTube


January 28, 2021, 1 pm PT/4 pm ET

Modeling and Managing the Spread of COVID-19

Subhonmesh Bose, Assistant Professor, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

Testing and lockdown provide two important control levers to combat the spread of an infectious disease. Testing is a targeted instrument that permits the isolation of infectious individuals. Lockdown, on the other hand, is blunt and restricts the mobility of all people. In the first part of the talk, I will present a compartmental epidemic model that accounts for asymptomatic disease transmission and the impact of lockdown and different kinds of testing, motivated by the nature of the ongoing COVID-19 outbreak. In the large-population regime, I characterize static mobility levels and testing requirements that can asymptotically mitigate the spread of the disease. Then, I present interesting properties of an optimal dynamic lockdown and testing strategy that minimizes a detailed cost of the epidemic. In the second part of the talk, I adapt the model to small populations, such as that of an educational institution, and use data from the UIUC SHIELD program’s rapid saliva-based testing strategy to estimate model parameters. Reopening strategies for educational institutions are evaluated via agent-based simulations using these parameter estimates. This talk is based on joint work with U. Mukherjee, S. Seshadri, S. Souyris, A. Ivanov, Y. Xu, and R. Watkins.
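A minimal compartmental sketch of the two control levers, with hypothetical parameters rather than the talk's calibrated model:

```python
# Minimal compartmental sketch (hypothetical parameters, not the
# speaker's model): SIR-style dynamics with an asymptomatic compartment,
# where `lockdown` scales the contact rate and `test_rate` moves
# detected infectious individuals into isolation.

def simulate(days=160, dt=0.1, beta=0.3, sigma=0.5, gamma=0.1,
             test_rate=0.05, lockdown=0.6):
    s, ia, isym, q, r = 0.99, 0.01, 0.0, 0.0, 0.0  # population fractions
    for _ in range(int(days / dt)):
        force = lockdown * beta * (ia + isym)      # lockdown throttles contacts
        new_inf = force * s
        s    += dt * (-new_inf)
        ia   += dt * (new_inf - sigma * ia - test_rate * ia)
        isym += dt * (sigma * ia - gamma * isym - test_rate * isym)
        q    += dt * (test_rate * (ia + isym) - gamma * q)  # isolated via testing
        r    += dt * (gamma * (isym + q))
    return s, r

s_end, r_end = simulate()
print(f"never infected: {s_end:.2f}, recovered: {r_end:.2f}")
```

Tightening either lever (smaller `lockdown`, larger `test_rate`) pushes the effective reproduction number below one, which is the qualitative effect the talk's model makes precise.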

Subhonmesh Bose is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research is in the area of power and energy systems and is geared towards enabling the integration of renewable and distributed energy resources in the modern power grid. He is interested in developing rigorous analytical frameworks, fast algorithmic architectures, and efficient market designs to help enable that integration.

Watch the recording on YouTube


February 4, 2021, 1 pm PT/4 pm ET

Triaging of COVID-19 Patients from Audio-Visual Cues

Narendra Ahuja, Research Professor of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

The COVID-19 pandemic has placed unprecedented stress on hospital capacity. Increased emergency department (ED) patient volumes and admission rates have led to a scarcity of beds. Bed-sparing protocols that identify COVID-19 patients stable for discharge from the ED or early hospital discharge have proven elusive, given this population’s propensity to deteriorate rapidly up to one week after illness onset. Consequently, a significant number of stable patients are unnecessarily admitted to the hospital, while some discharged patients decompensate at home and subsequently require emergency transport to the ED. To conserve hospital beds, there is an urgent need for improved methods of assessing the clinical stability of COVID-19 patients. In this talk, we will describe our project’s immediate goal: developing audiovisual tools that reproduce common physical exam findings. These will subsequently be used to predict clinical decompensation from patient videos captured with consumer-grade smartphones, and will be tested on COVID-19 and other pulmonary patient populations. We will start collecting patient data at UIC and UC hospitals in January 2021 and are developing explainable artificial intelligence and machine learning algorithms that predict impending deterioration from health-relevant audiovisual features and provide explanations in terms of the clinical details within the electronic health record. Once validated on our patient data, the tools will provide clinical assessments of COVID-19 patients both at the bedside and across telemedicine platforms during virtual follow-ups. The techniques and algorithms developed in this project are likely to be applicable to other high-risk patient populations and emerging platforms, such as telemedicine.

Narendra Ahuja is a Research Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research spans the artificial intelligence fields of computer vision, pattern recognition, machine learning, and image processing and their applications, including problems in developing societies. He has co-authored more than 400 papers in journals and conferences and supervised the research of more than 50 PhD students, 15 MS students, 100 undergraduates, and 10 postdoctoral scholars. He received his PhD from the University of Maryland, College Park, in 1979. He is a fellow of the Institute of Electrical and Electronics Engineers, the American Association for Artificial Intelligence, the International Association for Pattern Recognition, the Association for Computing Machinery, the American Association for the Advancement of Science, and the International Society for Optical Engineering.

Watch the recording on YouTube


February 11, 2021, 1 pm PT/4 pm ET

Scoring Drugs: Small Molecule Drug Discovery for COVID-19 Using Physics-Inspired Machine Learning

Teresa Head-Gordon, Chancellor’s Professor, Department of Chemistry, Chemical and Biomolecular Engineering, and Bioengineering, University of California, Berkeley

The rapid spread of SARS-CoV-2 has spurred the scientific world into action for therapeutics to help minimize fatalities from COVID-19. Molecular modeling is combating the current global pandemic through the traditional process of drug discovery, but the slow turnaround time for identifying leads for antiviral drugs, analyzing structural effects of genetic variation in the evolving virus, and targeting relevant virus-host protein interactions is still a great limitation during an acute crisis. The first component of drug discovery – the structure of potential drugs and the target proteins – has driven functional insight into biology ever since Watson, Crick, Franklin, and Wilkins solved the structure of DNA. What could we do with structural models of host and virus proteins and small molecule therapeutics? We can further enrich structure with dynamics for discovery of new surface sites exposed by fluctuations to bind drugs and peptide therapeutics not revealed by a static structural model. These “cryptic” binding sites offer new leads in drug discovery but will only yield fruit if they can be assessed rapidly for binding affinity for new small molecule drugs. We offer physics-inspired data-driven models to: 1) extend the chemical space of new drugs beyond those available; 2) create reliable scoring functions to evaluate drug binding affinities to cryptic binding sites of COVID-19 targets; 3) accelerate computation of binding affinities by training machine learning models; and 4) close the loop of design and evaluation, biasing the distribution of new drug candidates toward desired metrics, enabled by the C3 AI Suite.
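The "closing the loop" idea of point 4 can be caricatured in a few lines; the scoring function and search procedure below are hypothetical stand-ins, not the project's C3 AI Suite pipeline:

```python
import random

# Crude sketch of a design-evaluate loop (hypothetical scoring function,
# not the project's pipeline): propose candidates, score them with a
# cheap surrogate, and bias the next round of proposals toward the best
# scorers.

random.seed(0)

def surrogate_score(x):
    # Stand-in for a learned binding-affinity model, peaked at x = 0.7.
    return -(x - 0.7) ** 2

def design_loop(rounds=20, pop=50, noise=0.3):
    center = 0.0                                     # current design region
    for _ in range(rounds):
        candidates = [center + random.gauss(0, noise) for _ in range(pop)]
        best = max(candidates, key=surrogate_score)  # evaluate and select
        center = best                                # bias the next round
        noise *= 0.9                                 # narrow the search
    return center

print(f"best candidate found near {design_loop():.2f}")
```

In the real setting each "candidate" is a molecule, the surrogate is a trained scoring function, and the loop is what steers generation toward desired metrics.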

The simultaneous revolutions in energy, molecular biology, nanotechnology, and advanced scientific computing are giving rise to new interdisciplinary research opportunities in theoretical and computational chemistry. The research interests of the Teresa Head-Gordon lab embrace this large scope of science drivers through the development of general computational models and methodologies applied to molecular liquids, macromolecular assemblies, protein biophysics, and homogeneous and heterogeneous catalysis and biocatalysis. She has a continued and abiding interest in the development and application of complex chemistry models, accelerated sampling methods, coarse-graining and multiscale techniques, analytical and semi-analytical solutions to the Poisson-Boltzmann equation, and advanced self-consistent field (SCF) solvers and SCF-less methods for many-body physics. The methods and models developed in her lab are widely disseminated through many community software codes that scale on high-performance computing platforms.


February 18, 2021, 1 pm PT/4 pm ET

Why Do ML Models Fail?

Aleksander Madry, Professor of Computer Science, Massachusetts Institute of Technology

Our current machine learning (ML) models achieve impressive performance on many benchmark tasks. Yet, these models remain remarkably brittle, susceptible to manipulation and, more broadly, often behave in ways that are unpredictable to users. Why is this the case? In this talk, we identify human-ML misalignment as a chief cause of this behavior. We then take an end-to-end look at the current ML training paradigm and pinpoint some of the roots of this misalignment. We discuss how current pipelines for dataset creation, model training, and system evaluation give rise to unintuitive behavior and widespread vulnerability. Finally, we conclude by outlining possible approaches towards alleviating these deficiencies.

Aleksander Madry is a Professor of Computer Science at MIT and leads the MIT Center for Deployable Machine Learning. His research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness and deployability perspective. Aleksander’s work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the Presburger Award. He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, spent time at Microsoft Research New England and on the faculty of EPFL.

Watch recording on YouTube


February 25, 2021, 1 pm PT/4 pm ET

Mad Max: Affine Spline Insights into Deep Learning

Richard Baraniuk, Victor E. Cameron Professor of Electrical and Computer Engineering, Rice University

We build a rigorous bridge between deep networks (DNs) and approximation theory via spline functions and operators. Our key result is that a large class of DNs can be written as a composition of max-affine spline operators (MASOs), which provide a powerful portal through which to view and analyze their inner workings. For instance, conditioned on the input signal, the output of a MASO DN can be written as a simple affine transformation of the input. This implies that a DN constructs a set of signal-dependent, class-specific templates against which the signal is compared via a simple inner product; we explore the links to the classical theory of optimal classification via matched filters and the effects of data memorization. The spline partition of the input signal space that is implicitly induced by a MASO directly links DNs to the theory of vector quantization (VQ) and K-means clustering, which opens up a new geometric avenue to study how DNs organize signals in a hierarchical and multiscale fashion.
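The key claim, that conditioned on the input the output of such a network is a simple affine transformation, can be checked numerically on a toy ReLU network (an illustrative sketch, not the paper's code):

```python
import numpy as np

# Numerical check of the MASO view on a toy ReLU network: conditioned
# on an input's activation pattern, the network output is exactly
# A @ x + b for a pattern-dependent affine map (A, b).

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.normal(size=3)
mask = (W1 @ x + b1 > 0).astype(float)   # activation pattern at x

# The signal-dependent "template": the affine map this pattern selects.
A = W2 @ (mask[:, None] * W1)
b = W2 @ (mask * b1) + b2

assert np.allclose(net(x), A @ x + b)    # exact, not approximate
print("affine template A:", np.round(A, 3))
```

Each region of the spline partition (each activation pattern) selects its own `(A, b)`, which is the "template" the abstract refers to.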

Richard G. Baraniuk is the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University and the Founding Director of OpenStax. His research interests lie in new theory, algorithms, and hardware for sensing, signal processing, and machine learning. He is a Fellow of the American Academy of Arts and Sciences, National Academy of Inventors, American Association for the Advancement of Science, and IEEE. He has received the DOD Vannevar Bush Faculty Fellow Award (National Security Science and Engineering Faculty Fellow), the IEEE Signal Processing Society Technical Achievement Award, and the IEEE James H. Mulligan, Jr. Education Medal, among others.

Watch recording on YouTube


March 4, 2021, 1 pm PT/4 pm ET

Beyond Open Loop Thinking: A Prelude to Learning-Based Intelligent Systems

Lillian Ratliff, Assistant Professor, Department of Electrical and Computer Engineering, Adjunct Professor, Allen School of Computer Science and Engineering, University of Washington

Learning algorithms are increasingly being deployed in a variety of real-world systems. A central tenet of present-day machine learning is that when it is arduous to model a phenomenon, observations thereof are representative samples from some, perhaps unknown, static or otherwise independent distribution. In the context of systems such as civil infrastructure and the services that depend on its use (e.g., online marketplaces), two central challenges call the integrity of this tenet into question. First, (supervised) algorithms tend to be trained on past data without considering that the output of the algorithm may change the environment, and hence the data distribution. Second, data used either to train algorithms offline or as input to online decision-making algorithms may be generated by strategic data sources such as human users. Indeed, such data depend on how the algorithm impacts a user’s individual objectives or (perceived) quality of service, which makes the underlying data distribution dependent on the output of the algorithm. This raises the question of how learning algorithms can and should be designed to take into account this closed-loop interaction with the environment in which they will be deployed. This talk will provide one perspective on designing and analyzing such algorithms: modeling the underlying learning task in the language of game theory and control, and using tools from these domains to provide performance guarantees. I will also highlight recent, promising results in this direction.
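The closed-loop concern can be illustrated with a one-dimensional caricature of repeated retraining, in which the deployed model shifts the distribution it will next be trained on (the dynamics below are hypothetical, not the speaker's framework):

```python
# Toy closed-loop illustration (hypothetical dynamics): the deployed
# predictor shifts the data it will next be trained on, so "train on
# yesterday's data" becomes a fixed-point iteration, not a one-shot fit.

def retrain(theta, base_mean=1.0, shift=0.5, rounds=30):
    """Repeated risk minimization: the squared-loss-optimal theta is the
    current data mean, but deploying theta moves that mean."""
    for _ in range(rounds):
        data_mean = base_mean + shift * theta  # strategic response to theta
        theta = data_mean                      # retrain: fit the new mean
    return theta

theta_star = retrain(0.0)
# The fixed point solves theta = 1.0 + 0.5 * theta, i.e. theta = 2.0.
print(f"converged deployment: {theta_star:.4f}")
```

Here the open-loop answer (fit the base mean, 1.0) is never what the system settles on; analyzing the fixed point and its stability is exactly where game-theoretic and control tools enter.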

Lillian Ratliff earned her PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2015. Prior to that, she obtained an MS in Electrical Engineering (2010) and BS degrees in Mathematics and Electrical Engineering (2008), all from the University of Nevada, Las Vegas. Her research interests lie at the intersection of learning, optimization, and game theory. She is the recipient of a National Science Foundation Graduate Research Fellowship (2009), a CISE Research Initiation Initiative Award (2017), and a CAREER Award (2019). She is also a recipient of the 2020 Office of Naval Research Young Investigator Award and the Dhanani Endowed Faculty Fellowship (2020).


March 11, 2021, 1 pm PT/4 pm ET

Using Data Science to Understand the Heterogeneity of SARS-CoV-2 Transmission & COVID-19 Clinical Presentation in Mexico

Stefano Bertozzi, MD, Professor, School of Public Health, University of California, Berkeley

Juan Pablo Gutierrez, Professor at the Center for Policy, Population & Health Research, National Autonomous University of Mexico

In 2020, Mexico confirmed 1.5M cases of COVID-19, with 128,000 deaths — an 8.8 percent fatality rate that is among the highest worldwide. The positivity rate for those tested is 42 percent (WHO target = 5 percent). The pandemic is likely to become the main cause of death in 2020, and in 2021, even with the vaccine, mortality is expected to rise. Almost half of the Mexican population receives its medical care from the Mexican Social Security Institute (IMSS). Our team from UCB, IMSS, and UNAM aims to harness the massive patient-level clinical and socio-demographic data from the IMSS to better predict susceptibility to infection and serious complications among those who are infected. The advantages of working with the IMSS are clear; the disadvantage is that it has taken many months to get approval from the relevant human subjects and research committees. The IMSS comprises many poorly integrated data systems, so there is significant work involved in relating the disparate databases to each other. We now have 2.5 years of utilization data (outpatient visits [>300M], hospitalizations, prescriptions [almost 500M], and COVID tests). We will study variability by employer, by state and neighborhood, by household structure, by clinic, by provider (and provider behavior), by current and prior health conditions, by degree of control of chronic health conditions, and by any drugs that have been prescribed, as well as by the usual demographic and socioeconomic characteristics. The priority will be to identify modifiable factors that the IMSS can use to reduce population risk.

Stefano M. Bertozzi is dean emeritus and professor of health policy and management at the UC Berkeley School of Public Health. He served as the interim director of Alianza UCMX, which integrates all UC systemwide programs with Mexico. Previously, he directed the HIV/TB programs at the Bill and Melinda Gates Foundation. At the Mexican National Institute of Public Health, he served as director of its Center for Evaluation Research and Surveys. He was the last director of the WHO Global Programme on AIDS and has also held positions with UNAIDS, the World Bank, and the government of the Democratic Republic of Congo. He is the founding editor-in-chief for Rapid Review: COVID-19, an overlay journal that reviews COVID-19 research, published by MIT Press. He holds a bachelor’s degree in biology and a PhD in health policy and management from MIT. He earned his medical degree at UCSD, and trained in internal medicine at UCSF.

Juan Pablo Gutierrez is a Professor at the Center for Policy, Population & Health Research, National Autonomous University of Mexico (UNAM), Chair of the Technical Committee of the Morelos Commission on Evaluation of Social Development, and a member of the GAVI Evaluation Advisory Committee. His research focuses on the comprehensive evaluation of social programs and policies, universal health coverage and effective access, and social inequalities in health. He has been responsible for evaluations of social and health programs in Mexico, Ecuador, Guatemala, the Dominican Republic, Honduras, and India, as well as several population-based health surveys in both households and facilities. He is a member of the National Observatory on Health Inequalities in Mexico and has authored or co-authored more than 60 papers in peer-reviewed journals.


March 18, 2021, 1 pm PT/4 pm ET

Building Structure Into Deep Learning

Zico Kolter, Associate Professor, Department of Computer Science, Carnegie Mellon University

Despite their wide applicability, deep learning systems often fail to exactly capture simple “known” features of many problem domains, such as those governed by physical laws or those that incorporate decision-making procedures. In this talk, I will present methods for incorporating these types of structural constraints — such as those associated with decision making, optimization problems, or physical simulation — directly into the predictions of a deep network. Our tool for achieving this will be the use of so-called “implicit layers” in deep models: layers that are defined implicitly in terms of conditions we would like them to satisfy, rather than via explicit computation graphs. I will discuss how we can use these layers to embed (exact) physical constraints, robust control criteria, and task-based objectives, all within modern deep learning models. I will also highlight several applications of this work in reinforcement learning, control, energy systems, and other settings, and discuss generalizations and directions for future work in the area.
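A minimal sketch of the implicit-layer idea, with the layer's output defined by a fixed-point condition rather than an explicit computation graph (illustrative only; the talk's layers embed much richer constraints):

```python
import numpy as np

# Minimal implicit-layer sketch: the layer's output z is *defined* by
# the condition z = tanh(W z + x), and we solve for it numerically
# instead of computing it through an explicit graph.

rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(5, 5)) / np.sqrt(5)   # small norm => contraction

def implicit_layer(x, iters=200):
    """Solve z = tanh(W z + x) by fixed-point iteration
    (a contraction whenever ||W|| < 1)."""
    z = np.zeros_like(x)
    for _ in range(iters):
        z = np.tanh(W @ z + x)
    return z

x = rng.normal(size=5)
z = implicit_layer(x)
residual = np.linalg.norm(z - np.tanh(W @ z + x))
print(f"fixed-point residual: {residual:.2e}")
```

Gradients through such a layer are obtained via the implicit function theorem rather than by backpropagating through the solver iterations, which is what makes the construction practical inside deep models.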

Zico Kolter is an Associate Professor in the Computer Science Department at Carnegie Mellon University, and also serves as Chief Scientist of AI Research for the Bosch Center for Artificial Intelligence. His work spans the intersection of machine learning and optimization, with a focus on developing more robust and rigorous methods in deep learning. In addition, he has worked in a number of application areas, highlighted by work on sustainability and smart energy systems. He is a recipient of the DARPA Young Faculty Award, a Sloan Fellowship, and Best Paper awards at NeurIPS, ICML (honorable mention), IJCAI, KDD, and PESGM.


April 1, 2021, 1 pm PT/4 pm ET

Agent-based Modeling to Understand Social Determinants of Health as Drivers of COVID-19 Epidemics and Test Interventions to Reduce Health Inequities

Anna Hotton, Research Assistant Professor, Department of Medicine, University of Chicago
Jonathan Ozik, Computational Scientist, Argonne National Laboratory

In Chicago and elsewhere across the U.S., Latinx and Black communities have experienced disproportionate morbidity and mortality from COVID-19, highlighting drastic health inequities. Testing and vaccination efforts need to be scaled up within communities disproportionately affected by economic vulnerability, housing instability, limited healthcare access, and incarceration. Agent-based models (ABMs) can be used to investigate the complex processes by which social determinants of health influence population-level COVID-19 transmission and mortality, and to conduct computational experiments to evaluate the effects of candidate policies or interventions. Through partnerships between the University of Chicago, Argonne National Laboratory, the Chicago Department of Public Health, and the Illinois COVID-19 Modeling Task Force, we combined multiple data sources to develop a locally informed, realistic, and statistically representative synthetic agent population, with attributes and processes that reflect real-world social and biomedical aspects of transmission. We built a stochastic ABM (CityCOVID) capable of modeling millions of agents representing the behaviors and social interactions, geographic locations, and hourly activities of the population of Chicago and surrounding areas. Transitions between disease states depend on agent attributes and exposure to infected individuals through co-location, place-based risks, and protective behaviors. The model provides a platform for evaluating how social determinants of health impact COVID-19 transmission, testing, and vaccine uptake, and for testing optimal approaches to intervention deployment. We discuss implications for public health interventions and policies to address health inequities.
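A drastically simplified agent-based sketch of the co-location mechanism (hypothetical parameters and places; CityCOVID itself models millions of agents with hourly activity schedules and richer disease states):

```python
import random

# Toy agent-based sketch of co-location-driven transmission: each step,
# agents visit a random place, and susceptible agents sharing a place
# with an infectious agent may become infected. All parameters are
# hypothetical, chosen only to make the mechanism visible.

random.seed(1)

N, PLACES, P_INFECT, P_RECOVER, STEPS = 500, 50, 0.10, 0.05, 120
state = ["S"] * N
for seed_agent in random.sample(range(N), 5):
    state[seed_agent] = "I"

for _ in range(STEPS):
    where = [random.randrange(PLACES) for _ in range(N)]   # location this step
    hot = {where[i] for i in range(N) if state[i] == "I"}  # places with infectious agents
    for i in range(N):
        if state[i] == "S" and where[i] in hot and random.random() < P_INFECT:
            state[i] = "I"
        elif state[i] == "I" and random.random() < P_RECOVER:
            state[i] = "R"

print({s: state.count(s) for s in "SIR"})
```

In the real model, `where` comes from data-driven activity schedules and the transition probabilities depend on agent attributes, which is precisely how social determinants enter the dynamics.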

Anna Hotton is a Research Assistant Professor in the Section of Infectious Diseases and Global Health at the University of Chicago Department of Medicine. She earned her B.S. degree at Cornell University and her MPH and Ph.D. at the School of Public Health at the University of Illinois at Urbana-Champaign. As a staff scientist at the Chicago Center for HIV Elimination, Hotton studied the relationship between social factors and viral spread. Her C3.ai DTI-funded project aims to adapt that work to COVID-19, using machine learning to identify the data elements that are most important to include in modeling to better simulate various scenarios of disease spread and virtually test how different public health or social policy strategies can help mitigate the disease.

Jonathan Ozik is a Computational Scientist at Argonne National Laboratory and a Senior Scientist in the Consortium for Advanced Science and Engineering at the University of Chicago, where he develops applications of large-scale agent-based models, including models of infectious diseases, healthcare interventions, biological systems, water use and management, critical materials supply chains, and critical infrastructure. He also applies large-scale model exploration across modeling methods, including agent-based modeling, microsimulation, and machine/deep learning. He leads the Repast project for agent-based modeling toolkits and the Extreme-scale Model Exploration with Swift (EMEWS) framework for large-scale model exploration capabilities on high-performance computing resources.


April 8, 2021, 1 pm PT/4 pm ET

Recent Advances in the Analysis of the Implicit Bias of Gradient Descent on Deep Networks

Matus Telgarsky, Assistant Professor of Computer Science, University of Illinois at Urbana-Champaign

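The talk's subject, the implicit bias of gradient descent, can be illustrated with one classic example (not specific to the results presented here): on an underdetermined least-squares problem, gradient descent initialized at zero converges to the minimum-norm interpolant, with no explicit regularization anywhere in sight.

```python
import numpy as np

# Classic implicit-bias illustration: on an underdetermined linear
# system, plain gradient descent from zero converges to the
# minimum-norm interpolating solution (the pseudoinverse solution),
# even though the loss has infinitely many global minima.

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 20))       # 5 equations, 20 unknowns
y = rng.normal(size=5)

w = np.zeros(20)
lr = 0.01
for _ in range(20000):
    w -= lr * A.T @ (A @ w - y)    # gradient of ||A w - y||^2 / 2

w_min_norm = np.linalg.pinv(A) @ y
print("matches min-norm solution:", np.allclose(w, w_min_norm, atol=1e-6))
```

The mechanism is that the iterates never leave the row space of `A`, so among all interpolants, gradient descent can only reach the one of minimal Euclidean norm; the talk concerns far subtler analogues of this phenomenon for deep networks.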

Matus Telgarsky is an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign, specializing in deep learning theory. He received his PhD at the University of California, San Diego under Sanjoy Dasgupta. He co-founded the Midwest ML Symposium in 2017 with Po-Ling Loh and organized a summer 2019 Simons Institute program on deep learning with Samy Bengio, Aleksander Madry, and Elchanan Mossel. He received an NSF CAREER Award in 2018.

Watch recording on YouTube

View presentation slides

Read transcript


April 15, 2021, 1 pm PT/4 pm ET

AI-Enabled Deep Mutational Scanning of the Interaction between SARS-CoV-2 Spike Protein S and the Human ACE2 Receptor

Diwakar Shukla, Assistant Professor, Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign

The rapid and escalating spread of SARS-CoV-2 poses an immediate public health emergency. The viral spike protein S binds ACE2 on host cells to initiate molecular events that release the viral genome intracellularly. Soluble ACE2 inhibits entry of both SARS and SARS-2 coronaviruses by acting as a decoy for S binding sites, and is a candidate for therapeutic and prophylactic development. Deep mutational scanning is one approach that could provide such a detailed map of protein-protein interactions. However, the technique suffers from several issues, such as experimental noise, expensive experimental protocols, and the lack of techniques for estimating second- or higher-order mutation effects. In this talk, we describe an approach that employs a recently developed platform, TLmutation, to enable rapid investigation of the sequence-structure-function relationships of proteins. In particular, we employ a transfer-learning approach to generate high-fidelity scans from noisy experimental data and to transfer knowledge from single-point-mutation data, generating higher-order mutational scans from the single-amino-acid-substitution data. Using deep mutagenesis, variants of ACE2 will be identified with increased binding to the receptor binding domain of S at the cell surface. We plan to employ the information from the preliminary mutational landscape to generate higher-order mutations in ACE2 that could enhance binding to the S protein. We also aim to investigate this problem using distributed computing approaches to understand the underlying physics of the spike protein-ACE2 interaction.
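To see what "transferring from single mutations to higher-order scans" means, here is the crude additive baseline that learning-based approaches such as TLmutation improve upon; the residue labels and scores below are hypothetical, not measured values:

```python
# Toy sketch of extrapolating higher-order mutation effects from
# single-mutant data: an additive baseline that ignores epistasis
# (which the talk's transfer-learning method tries to capture).
# All residue labels and scores are hypothetical.

single_effects = {          # hypothetical log-enrichment scores
    ("K31", "W"): -1.2,     # mutation K31W: strongly deleterious
    ("E35", "K"): -0.4,
    ("D38", "H"): +0.3,     # mildly beneficial for S binding
}

def additive_score(mutations):
    """First-order prediction for a combined variant: sum the single
    effects, assuming no interaction between mutations."""
    return sum(single_effects[m] for m in mutations)

double = [("E35", "K"), ("D38", "H")]
print(f"predicted effect of E35K/D38H: {additive_score(double):+.2f}")
```

Real double mutants deviate from this sum (epistasis), and that deviation is exactly the higher-order signal the learned model is meant to predict from single-substitution data.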

Diwakar Shukla is the Blue Waters Assistant Professor in the Department of Chemical and Biomolecular Engineering at the University of Illinois at Urbana-Champaign. His research focuses on understanding complex biological processes using novel physics-based models and techniques. He received his B.Tech and M.Tech degrees from the Indian Institute of Technology Bombay and his MS and PhD degrees from the Massachusetts Institute of Technology. His postdoctoral work was at Stanford University. He has received several awards for his research, including the Peterson Award from the ACS, the Innovation in Biotechnology Award from the AAPS, the COMSEF Graduate Student Award from AIChE, and the Institute Silver Medal and Manudhane Award from IIT Bombay.


April 22, 2021, 1 pm PT/4 pm ET

Is Local Information Enough to Predict an Epidemic?

Christian Borgs, Professor of Computer Science, University of California, Berkeley

While simpler models of epidemics assume homogeneous mixing, it is clear that the structure of our social networks is important for the spread of an infection, with degree inhomogeneities and the related notion of super-spreaders being just the obvious reasons. This raises the question of whether knowledge of the local structure of a network is enough to predict the probability and size of an epidemic. More precisely, one might wonder if, by having access to randomly sampled nodes in the network and their neighborhoods, we can predict the above quantities. It turns out that, in general, the answer to this question is negative, as the example of large, isolated communities shows. However, under a suitable assumption on the global structure of the network, the size and probability of an outbreak can be determined from local graph features. This research is joint work with Yeganeh Alimohammadi and Amin Saberi from Stanford University.
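The branching-process calculation behind this question can be sketched directly: a sample of node degrees determines an offspring distribution, whose extinction probability gives the chance of a large outbreak (illustrative only; the talk characterizes when such local predictions are actually valid):

```python
import numpy as np

# Sketch of predicting an outbreak from *local* data: degrees sampled
# from the network define a branching process, and its extinction
# probability yields the probability of a large epidemic when each
# edge transmits with probability t. Degree samples are hypothetical.

def pgf_excess(x, degrees):
    # PGF of the excess degree of a node reached via a random edge.
    w = degrees / degrees.sum()          # size-biased weights
    return np.sum(w * x ** (degrees - 1))

def pgf_degree(x, degrees):
    return np.mean(x ** degrees)

def outbreak_probability(degree_sample, t=0.6, iters=500):
    """P(large outbreak from a uniformly random seed), via the standard
    branching-process approximation."""
    d = np.asarray(degree_sample, dtype=float)
    u = 0.0                              # extinction probability, fixed point
    for _ in range(iters):
        u = pgf_excess(1 - t + t * u, d)
    return 1 - pgf_degree(1 - t + t * u, d)

# Same mean degree (3), very different local structure.
homogeneous = [3] * 100
heavy_tail = [1] * 90 + [21] * 10        # a few super-spreaders
print(f"P(outbreak), homogeneous: {outbreak_probability(homogeneous):.2f}")
print(f"P(outbreak), heavy tail:  {outbreak_probability(heavy_tail):.2f}")
```

Note the two degree sequences have the same mean but give different answers: with super-spreaders, most random seeds are low-degree and the outbreak probability is lower, even though outbreaks that do take off grow faster. This is exactly the kind of quantity local sampling can estimate when the talk's global assumption holds.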

Christian Borgs is a Professor of Computer Science in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley and a member of the Berkeley Artificial Intelligence Research (BAIR) Lab. He studied physics at the University of Munich and holds a PhD in mathematical physics from the University of Munich and the Max Planck Institute for Physics. In 1997, he joined Microsoft Research, where he co-founded the Theory Group and served as its manager until 2008, when he co-founded Microsoft Research New England in Cambridge, Massachusetts, where he remained until joining UC Berkeley in 2020. A Fellow of both the American Mathematical Society and the American Association for the Advancement of Science, he focuses his research on responsible AI, from differential privacy to questions of bias in automated decision-making.


April 29, 2021, 1 pm PT/4 pm ET

Understanding Deep Learning through Optimization Bias

Nathan Srebro, Professor, Toyota Technological Institute at Chicago

How and why are we succeeding in training huge non-convex deep networks? How can deep neural networks with billions of parameters generalize well, despite having enough capacity to overfit any data? What is the true inductive bias of deep learning? And, does it all just boil down to a big fancy kernel machine? In this talk, I will highlight the central role the optimization geometry and optimization dynamics play in determining the inductive bias of deep learning, showing how specific optimization methods can allow generalization even in underdetermined overparameterized models.
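One classical instance of such optimization bias, sketched here in numpy purely as an illustration (not a result from the talk): gradient descent started at zero on an underdetermined least-squares problem fits the data exactly and, among all interpolating solutions, converges to the minimum-L2-norm one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                             # far fewer samples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent from zero initialization on squared loss.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, ord=2) ** 2   # step size below 2 / lambda_max
for _ in range(20_000):
    w -= lr * X.T @ (X @ w - y)

# Closed-form minimum-L2-norm interpolant for comparison.
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)

print(np.abs(X @ w - y).max(), np.linalg.norm(w - w_min_norm))
```

The bias arises because updates from zero never leave the row space of X; which interpolant you reach is determined by the optimization geometry, not by the loss alone.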

Nathan Srebro is interested in statistical and computational aspects of machine learning, and the interaction between them. He has done theoretical work in statistical learning theory and in algorithms, devised novel learning models and optimization techniques, and has worked on applications in computational biology, text analysis, and collaborative filtering. Before TTIC, Srebro was a postdoctoral fellow at the University of Toronto and a visiting scientist at IBM Research.

Watch recording on YouTube

View presentation slides


May 6, 2021, 1 pm PT/4 pm ET

Bringing Social Distancing to Light: Architectural Interventions for COVID-19 Containment

Stefana Parascho, Assistant Professor of Architecture, Princeton University
Corina Tarnita, Associate Professor of Ecology and Evolutionary Biology, Princeton University

With the spread of COVID-19, social distancing has become an integral part of our everyday lives. Worldwide, efforts are focused on identifying ways to reopen public spaces, restart businesses, and reintroduce physical togetherness. We believe that architecture plays a key role in the return to a healthy public life by providing a means for controlling distances between people. Making use of computational processing power and data accessibility, we investigate how we can promote healthy and efficient movement through public spaces. Our approach is dynamic, to easily accommodate developing requirements and programmatic changes within these spaces.

Stefana Parascho, Assistant Professor of Architecture at Princeton University, is an architect whose teaching and research focus on computational design and robotic fabrication. She completed her doctorate at ETH Zurich and her architectural studies at the University of Stuttgart. Her research interest lies at the intersection of design, structure, and fabrication, with a focus on fabrication-informed design. She explores computational design methods and their potential role for architectural construction, from agent-based models to mathematical optimization. Her goal is to strengthen the connection between design, structure, and fabrication and the interdisciplinary nature of architectural design through the development of accessible computational design tools.

Corina Tarnita is an Associate Professor in Ecology and Evolutionary Biology and the Director of the Program in Environmental Studies at Princeton University. Previously, she was a Junior Fellow at the Harvard Society of Fellows (2010-2012). She obtained her B.A. (2006), M.A. (2008), and PhD (2009) in Mathematics from Harvard University. She is an ESA Early Career Fellow, a Kavli Frontiers of Science Fellow of the National Academy of Sciences, and an Alfred P. Sloan Research Fellow. Her work centers on the emergence of complex behavior out of simple interactions, across spatial and temporal scales.

Watch recording on YouTube, view presentation slides, and read transcript below.


May 13, 2021, 1 pm PT/4 pm ET

Graceful AI: Backward-Compatibility, Positive-Congruent Training, and the Search for Desirable Behavior of Deep Neural Networks

Stefano Soatto, Vice President of Applied Science, Amazon Web Services and Professor of Computer Science, UCLA

As machine learning-based decision systems improve rapidly, we are discovering that it is no longer enough for them to perform well on their own. They should also behave nicely towards their predecessors and peers. More nuanced demands beyond accuracy now drive the learning process, including robustness, explainability, transparency, fairness, and now also compatibility and regression minimization. We call this “Graceful AI,” because in 2021, when we replace an old trained classifier with a new one, we should expect a peaceful transfer of decision powers.

Today, a new model can introduce errors that the old model did not make, despite significantly improving average performance. Such “regression” can break post-processing pipelines or force large amounts of data to be reprocessed. How can we train machine learning models to not only minimize the average error, but also minimize “regression”? Can we design and train new learning-based models in a manner that is compatible with previous ones, so that it is not necessary to re-process any data?
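This notion of “regression” can be made concrete as a negative flip rate: the fraction of samples the old model classified correctly that the new model gets wrong. A minimal sketch, with hypothetical labels and predictions:

```python
import numpy as np

def negative_flip_rate(y_true, old_pred, new_pred):
    """Fraction of samples the old model classified correctly
    but the new model gets wrong (the "regression")."""
    y_true, old_pred, new_pred = map(np.asarray, (y_true, old_pred, new_pred))
    return float(np.mean((old_pred == y_true) & (new_pred != y_true)))

# Hypothetical data: the new model is more accurate overall
# (5/6 vs. 4/6 correct) yet still regresses on one sample.
y   = np.array([0, 1, 1, 0, 1, 0])
old = np.array([0, 1, 0, 0, 1, 1])
new = np.array([0, 1, 1, 1, 1, 0])
print(negative_flip_rate(y, old, new))  # 1 flip out of 6 samples
```

As the example shows, higher average accuracy alone does not rule out regressions, which is why a separate training objective is needed.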

These problems are prototypical of the nascent field of cross-model compatibility in representation learning. I will describe the first approach to Backward-Compatible Training (BCT), introduced at the last Conference on Computer Vision and Pattern Recognition (CVPR), and an initial solution to the problem of Positive-Congruent Training (PC-Training), a first step towards “regression constrained” learning, to appear at the next CVPR. Along the way, I will also introduce methodological innovations that enable full-network fine-tuning by solving a linear-quadratic optimization. Such Linear-Quadratic Fine-Tuning (LQF, also to appear at the next CVPR) achieves performance equivalent to that of non-linear fine-tuning (and superior in the low-data regime), while allowing easy incorporation of convex constraints.

Stefano Soatto is Vice President of Applied Science at Amazon Web Services AI, where he oversees research for AI Applications including vision (Custom Labels, Lookout for Vision), speech (Amazon Transcribe), natural language (Amazon Comprehend, Amazon Lex, Amazon Kendra, Amazon Translate), document understanding (Amazon Textract), time series analysis (Amazon Forecast, Lookout for Metrics, Lookout for Equipment), personalization (Amazon Personalize), and others in the works. He is also a Professor of Computer Science at UCLA and founding director of the UCLA Vision Lab.

Watch recording on YouTube


May 20, 2021, 1 pm PT/4 pm ET

Feedback Control Perspectives on Learning

Jeff Shamma, Professor, Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign

The impact of feedback control is extensive. It is deployed in a wide array of engineering domains, including aerospace, robotics, automotive, communications, manufacturing, and energy applications, with superhuman performance achieved for decades. Many settings in learning involve feedback interconnections, e.g., reinforcement learning has an agent in feedback with its environment, and multi-agent learning has agents in feedback with each other. By explicitly recognizing the presence of a feedback interconnection, one can exploit feedback control perspectives for the analysis and synthesis of such systems, as well as investigate trade-offs and fundamental limitations of achievable performance inherent in all feedback control systems. This talk highlights selected feedback control concepts — in particular, robustness, passivity, tracking, and stabilization — as they relate to specific questions in evolutionary game theory, no-regret learning, and multi-agent learning.
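As one concrete touchpoint, the no-regret learning mentioned above can be sketched with a generic Hedge (multiplicative-weights) example; this is an illustration of the standard algorithm, not a result from the talk, and the payoff sequence is synthetic.

```python
import numpy as np

def hedge_regret(payoffs, eta=0.1):
    """Run Hedge (multiplicative weights) on a (T, k) payoff sequence in [0, 1]
    and return the regret against the best fixed action in hindsight."""
    T, k = payoffs.shape
    w = np.ones(k)
    earned = 0.0
    for g in payoffs:
        p = w / w.sum()
        earned += p @ g               # expected payoff of the randomized play
        w *= np.exp(eta * g)          # reweight actions by observed payoff
        w /= w.sum()                  # normalize to avoid overflow
    return payoffs.sum(axis=0).max() - earned

rng = np.random.default_rng(0)
T = 5_000
payoffs = np.clip(rng.random((T, 3)) + np.array([0.1, 0.0, 0.0]), 0, 1)
r = hedge_regret(payoffs)
print(r / T)                          # per-round regret is small
```

Viewed through a control lens, the learner is a dynamical system in feedback with its payoff environment, which is precisely the structure the talk proposes to analyze.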

Jeff S. Shamma is the Department Head of Industrial and Enterprise Systems Engineering (ISE) and Jerry S. Dobrovolny Chair in ISE at the University of Illinois at Urbana-Champaign. Prior academic appointments include faculty positions at King Abdullah University of Science and Technology (KAUST), as Adjunct Professor of Electrical and Computer Engineering, and Georgia Institute of Technology, where he was the Julian T. Hightower Chair in Systems and Controls. Shamma received a PhD in Systems Science and Engineering from MIT in 1988. He is a Fellow of IEEE and IFAC; recipient of IFAC High Impact Paper Award, AACC Donald P. Eckman Award, and NSF Young Investigator Award; and a past Distinguished Lecturer of the IEEE Control Systems Society. Shamma is currently serving as Editor-in-Chief for IEEE Transactions on Control of Network Systems.

Watch recording on YouTube


May 27, 2021, 1 pm PT/4 pm ET

AI-Assisted COVID-19 Medical Guidance System Using C3 AI Suite

Lui Sha, Donald B. Gillies Chair in Computer Science, University of Illinois at Urbana-Champaign

To provide the best medical guidelines, we are developing a prototype of the COVID-19 best-practice guidelines with computational pathophysiology for ARDS (lung failure) and pediatric cardiopulmonary resuscitation. To ensure software safety, we have been using an executable formal framework, K, to code the medical algorithms for verifiability. We are also developing a physician-friendly syntax for K. To improve care, we integrate this with the C3 AI Suite and use it to recognize the early signs of clinical deterioration and initiate early intervention. As part of this work, we will perform simulation-based clinical evaluations at hospitals. The talk will include a short video demonstration of the prototype.

This is joint work with Grigore Rosu, Zikun Chen, Shuang Song, Priti Jeni, M.D., and Paul M. Jeziorczak, M.D.

Lui Sha graduated with a Ph.D. from CMU in 1985 and joined the UIUC faculty in 1998 as a full professor. He is now the Donald B. Gillies Chair Professor in the Department of Computer Science and Daniel C. Drucker Eminent Faculty in the College of Engineering. A fellow of IEEE and ACM, Sha served on the National Academy of Sciences Committee on Certifiably Dependable Software Systems and the NASA Advisory Council. His work on real-time and safety-critical system integration has contributed to many technology programs, including GPS, Space Station, and Mars Pathfinder. In recent years, his team has been developing medical GPS systems to dramatically reduce preventable medical errors, which claim 250,000 lives per year — the third leading cause of death in the U.S.

Watch recording on YouTube


June 10, 2021, 1 pm PT/4 pm ET

Security of Cyberphysical Systems

P.R. Kumar, Professor of Electrical and Computer Engineering and Industrial and Systems Engineering, Texas A&M University

The coming decades may see large-scale deployment of networked cyber-physical systems to address global needs in areas such as energy, water, health care, and transportation. However, as recent events have shown, such systems are vulnerable to cyber attacks. We begin by revisiting classical linear systems theory, developed in more innocent times, from a security-conscious, even paranoid, viewpoint. Then we present a general technique, called “dynamic watermarking,” for detecting any sort of malicious activity in networked systems of sensors and actuators. We then present field-test demonstrations of this technique on an automobile on a test track and on a process control system, a simulation study of defense against an attack on Automatic Gain Control (AGC) in a synthetic power system, and an emulated attack on a solar-powered home. This is joint work with Bharadwaj Satchidanandan, Jaewon Kim, Woo Hyun Ko, Tong Huang, Lantian Shangguan, Kenny Chour, Jorge Ramos, Prasad Enjeti, Le Xie, and Swaminathan Gopalswamy.
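The idea behind dynamic watermarking can be sketched on a toy scalar system (an illustration only, not the authors' field implementation): the actuator superimposes a private excitation on its input, so an honest sensor's reports must correlate with that excitation, while replayed measurements do not. All system parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
a, T = 0.9, 20_000
e = rng.normal(0, 1.0, T)             # private watermark, known only to the actuator
w = rng.normal(0, 0.1, T)             # process noise

def run(attacked):
    """Simulate x_{t+1} = a*x_t + u_t + w_t with a watermarked input; an
    attacked sensor replays a statistically similar but independent signal."""
    replay = rng.normal(0, 1.0, T)
    x, ys = 0.0, []
    for t in range(T):
        u = -a * x + e[t]             # stabilizing feedback plus watermark
        x = a * x + u + w[t]
        ys.append(replay[t] if attacked else x)
    return np.array(ys)

def watermark_stat(y):
    # The reading taken right after step t must carry the watermark from step t.
    return float(np.mean(y * e))

honest = watermark_stat(run(False))
attacked = watermark_stat(run(True))
print(round(honest, 2), round(attacked, 2))   # near 1.0 vs. near 0.0
```

Because the watermark is private, an attacker cannot reproduce the expected correlation, so a statistic far from its nominal value flags malicious activity.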

P.R. Kumar is a Professor of Electrical and Computer Engineering and Industrial and Systems Engineering at Texas A&M University. Prior to that, he served in the Department of Mathematics at the University of Maryland, Baltimore County (1977-84) and the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign (1985-2011). His current focus includes Machine Learning, Cyber-Physical Systems, security, privacy, UTM, 5G, wireless networks, and power systems. He is a member of the U.S. National Academy of Engineering, the World Academy of Sciences, and Indian National Academy of Engineering. Honors include a Doctor Honoris Causa by ETH, the IEEE Field Award for Control Systems, the Eckman Award of AACC, the Ellersick Prize of IEEE ComSoc, the Outstanding Contribution Award of ACM SIGMOBILE, the Infocom Achievement Award, and the SIGMOBILE Test-of-Time Paper Award. He is a Fellow of IEEE and ACM.


June 17, 2021, 1 pm PT/4 pm ET

Data-Driven Coordination of Distributed Energy Resources

Alejandro D. Dominguez-Garcia, Professor of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

The integration of distributed energy resources (DERs) such as rooftop photovoltaics installations, electric energy storage devices, and flexible loads, is becoming prevalent. This integration poses numerous operational challenges for the lower-voltage systems to which DERs are connected, but also creates new opportunities for the provision of grid services. In the first part of the talk, we discuss one such operational challenge — ensuring proper voltage regulation in the distribution network to which DERs are connected. To address this problem, we propose a Volt/VAR control architecture that relies on the proper coordination of conventional voltage regulation devices (e.g., tap changing under load (TCUL) transformers and switched capacitors) and DERs with reactive power provision capability. In the second part of the talk, we discuss one such opportunity — utilizing DERs to provide regulation services to the bulk power grid. To leverage this opportunity, we propose a scheme for coordinating the response of the DERs so that the power injected into the distribution network (to which the DERs are connected) follows a regulation signal provided by the bulk power system operator. Throughout the talk, we assume limited knowledge of the particular power system models and develop data-driven methods to learn them. We then utilize these models to design appropriate controls for determining the set-points of DERs (and other assets such as TCULs) in an optimal or nearly-optimal fashion.
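A single-bus toy sketch of the data-driven flavor described above (all numbers hypothetical and not from the talk): first fit a voltage-to-reactive-power sensitivity from probe measurements, then choose the DER reactive-power set-point that restores nominal voltage.

```python
import numpy as np

rng = np.random.default_rng(0)
true_sens = 0.04    # hypothetical p.u. voltage change per unit of injected reactive power
v_base = 1.03       # hypothetical overvoltage at the DER's bus, in p.u.

# Step 1: learn the voltage/reactive-power sensitivity from noisy probe data.
q_probe = rng.uniform(-1.0, 1.0, 200)
v_meas = v_base + true_sens * q_probe + rng.normal(0, 0.001, 200)
A = np.column_stack([q_probe, np.ones_like(q_probe)])
sens, offset = np.linalg.lstsq(A, v_meas, rcond=None)[0]

# Step 2: pick the reactive-power set-point that drives the voltage to 1.0 p.u.
q_set = (1.0 - offset) / sens
v_after = v_base + true_sens * q_set
print(round(q_set, 2), round(v_after, 3))
```

In a real feeder the learned model would be a multi-bus sensitivity matrix and the set-points would come from an optimization over many devices, but the learn-then-control pattern is the same.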

Alejandro D. Dominguez-Garcia is a Professor, William L. Everitt Scholar, and Grainger Associate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research aims to develop technologies for providing a reliable and efficient supply of electricity — a key to ensuring societal welfare and sustainable economic growth. He received the NSF CAREER Award in 2010, and the Young Engineer Award from the IEEE Power and Energy Society in 2012. He was selected by the UIUC Provost to receive a Distinguished Promotion Award in 2014, and he received the UIUC College of Engineering Dean’s Award for Excellence in Research in 2015.

Watch recording on YouTube

Read Transcript


June 24, 2021, 1 pm PT/4 pm ET

Closing the Loop on Machine Learning: Data Markets, Domain Expertise, and Human Behavior

Roy Dong, Research Assistant Professor of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

As machine learning and data analytics are increasingly deployed in practice, it becomes more and more pressing to consider the ecosystem created by such methods. In recent years, issues of data provenance, the veracity of available data, vulnerabilities to data manipulation, and human perceptions/behavior have had a growing effect on the overall performance of our intelligent systems. In the first part of this talk, I consider a game-theoretic model for data markets, and demonstrate that whenever multiple data purchasers compete for data sources without exclusivity contracts, there is a fundamental degeneracy in the equilibria, independent of each data purchaser’s learning capabilities. In the second part, I discuss issues of causal inference, which are essential when our learning algorithms are used to make decisions, and analyze how passively observed data can be efficiently combined with actively collected trial data to most efficiently recover causal structures. In the final part, I discuss some of our recent experiments with human participants in the context of intelligent building control, and show that commonly designed mechanisms assuming utility-maximizing behavior may fall short of theoretical performance in practice.

Roy Dong is a Research Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He received a BS Honors in Computer Engineering and a BS Honors in Economics from Michigan State University in 2010 and a PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2017, where he was funded in part by the NSF Graduate Research Fellowship. From 2017 to 2018, he was a postdoctoral researcher in the Berkeley Energy and Climate Institute (BECI) and a visiting lecturer in the Department of Industrial Engineering and Operations Research at UC Berkeley. His research uses tools from control theory, economics, statistics, and optimization to understand the closed-loop effects of machine learning, with applications in cyber-physical systems such as the smart grid, modern transportation networks, and autonomous vehicles.