The Colloquium on Digital Transformation is a series of weekly online talks featuring top scientists from academia and industry on how artificial intelligence, machine learning, and big data can lead to scientific breakthroughs with large-scale societal benefit.

Register for the fall Zoom webinar series here.

See videos of all DTI talks on YouTube.

September 2, 2021, 1 pm PT/4 pm ET

A Business Model for Load Control Aggregation to Firm up Renewable Capacity

Shmuel Oren, Professor of Industrial Engineering and Operations Research, University of California, Berkeley

The idea of actively managing load through economic incentives so as to reduce cost and improve reliability of the electric power system dates back to the pioneering work of Schweppe et al. on “Homeostatic Utility Control.” Over the years, numerous studies and various technological approaches were introduced in this area but not adopted on a large scale. More recently, however, the rapid proliferation of intermittent renewable resources, which highlights the need for load flexibility, together with advances in smart grid technologies, has given new life to the pursuit of active demand-side participation, called demand response (DR), in electric power infrastructure. While the metering, control, and communication technology for achieving this goal has become economically affordable, the critical elements for large-scale DR adoption continue to be the institutional and regulatory framework and a viable end-to-end business model. Such a model must incentivize customer participation at the retail level and convert that participation into valuable wholesale electricity market products that system operators can use. In this talk I will describe a DR business model for an aggregator that offers quality-differentiated electricity service at the retail level by means of a “fuse control” paradigm. Under this paradigm, the aggregator buys from retail customers options to control their fuse size, limiting total supply to their meters, while the customers are responsible for meeting the prescribed limits by managing household energy use (and production) behind the meter. Compensation is based on curtailment conditions and frequency as specified in the contract. Aggregators can bundle contracted curtailment options into wholesale DR offers or use such options to firm up intermittent resources, which are offered to wholesale energy markets.
We formulate the household fuse-limited energy management problem as a stochastic optimization, which yields as a byproduct a demand function for fuse increments. This demand function, which is the private information of the household, can be elicited by the aggregator by invoking the “revelation principle” through a mechanism design satisfying incentive compatibility and individual rationality conditions. We then formulate the overall aggregator optimization problem, which simultaneously determines the menu of contracts offered to retail customers, the curtailment policy, the total nameplate wind capacity matched to a given capacity of retail demand, and the quantity of energy offered by the aggregator in the day-ahead wholesale market as a function of the wholesale price.
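As a rough illustration of the first step, a household's demand for fuse increments can be read off a scenario-based stochastic program. In the sketch below, all numbers and the gamma load model are invented assumptions, not details from the talk: a fuse size is valued by the expected load it can serve, so the marginal value of each extra kW is the household's willingness to pay for a fuse increment.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical desired-load scenarios for one household (kW); the gamma
# distribution is a stand-in for behind-the-meter usage patterns.
scenarios = rng.gamma(shape=4.0, scale=1.5, size=10_000)

VALUE_PER_KWH = 0.5  # assumed utility of served load ($/kWh)

def expected_utility(fuse_kw):
    # Load above the fuse limit must be curtailed behind the meter.
    served = np.minimum(scenarios, fuse_kw)
    return VALUE_PER_KWH * served.mean()

# Demand function for fuse increments: the marginal value of one more
# kW of fuse capacity on a grid of fuse sizes. This is the private
# information the aggregator elicits through the contract menu.
fuse_grid = np.arange(1, 16)
values = [expected_utility(f) for f in fuse_grid]
marginal_value = np.diff(values)
```

The diminishing marginal value (expected utility is concave in fuse size) is what makes a menu of quality-differentiated contracts workable in the first place.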

Shmuel S. Oren is Professor of the Graduate School and the Earl J. Isaac Chair Professor in the Department of Industrial Engineering and Operations Research at UC Berkeley. He is a co-founder and the Berkeley site director of PSERC, a multi-university Power Systems Engineering Research Center. He has also been a member of the California ISO Market Surveillance Committee and a consultant to many private and public entities in the U.S. and abroad. His research has focused on nonlinear optimization, mechanism design, energy systems, and the design and analysis of electricity markets. He holds a B.S. and M.S. in Mechanical Engineering from the Technion, Israel, and an M.S. and Ph.D. in Engineering Economic Systems from Stanford University. He is a member of the U.S. National Academy of Engineering, a Life Fellow of the IEEE, and a Fellow of INFORMS.

Watch recording on YouTube

September 9, 2021, 1 pm PT/4 pm ET

Reinforcement Learning, Bit by Bit

Benjamin Van Roy, Professor of Electrical Engineering and Management Science and Engineering, Stanford University

Leveraging creative algorithmic concepts and advances in computation, the achievements of reinforcement learning in simulated systems have captivated the world’s imagination. The path to an artificial general intelligence that can learn from real experience depends on major advances in data efficiency. Should we try to carefully quantify the information generated through, and extracted from, experience so that we can design agents that uncover and retain information as efficiently as possible? I will share some thoughts on this direction, drawing motivation from the fields of communication and information theory.

Benjamin Van Roy is a professor at Stanford University, where he has served on the faculty since 1998. His research focuses on the design, analysis, and application of reinforcement learning algorithms. Beyond academia, he leads a DeepMind Research team in Mountain View, and has also led research programs at Unica (acquired by IBM), Enuvis (acquired by SiRF), and Morgan Stanley. He is a Fellow of INFORMS and IEEE. He received the SB in Computer Science and Engineering and the SM and PhD in Electrical Engineering and Computer Science, all from MIT. He has received the MIT George C. Newton Undergraduate Laboratory Project Award, the MIT Morris J. Levin Memorial Master’s Thesis Award, the MIT George M. Sprowls Doctoral Dissertation Award, the National Science Foundation CAREER Award, the Stanford Tau Beta Pi Award for Excellence in Undergraduate Teaching, and the Management Science and Engineering Department’s Graduate Teaching Award.

Watch recording on YouTube

September 16, 2021, 1 pm PT/4 pm ET

Causal Tensor Estimation

Devavrat Shah, Professor of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

In this talk, we present a framework for causal inference in the “panel” or “longitudinal” setting through the lens of tensor estimation. Traditionally, such panel or longitudinal settings are considered in the econometrics literature for program or policy evaluation. Tensor estimation has been studied in machine learning, where tantalizing statistical and computational tradeoffs have emerged for random observation models. We introduce a causal variant of tensor estimation that provides a unified view of prior work in econometrics and opens new avenues to explore. We discuss a method for estimating such a causal variant of the tensor and various exciting directions for future research, including offline reinforcement learning. This is based on joint work with Alberto Abadie (MIT), Anish Agarwal (MIT), and Dennis Shen (MIT/UC Berkeley).
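To make the panel setting concrete, here is a minimal toy example in the synthetic-control spirit (my own illustration, not the method of the talk): outcomes form a low-rank units-by-time matrix, one unit receives treatment partway through, and its missing untreated trajectory is imputed from the untreated "donor" units.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy panel: 8 units x 12 periods with rank-1 untreated outcomes.
u = rng.uniform(1, 2, size=(8, 1))
v = rng.uniform(1, 2, size=(1, 12))
Y0 = u @ v                               # untreated ("counterfactual") matrix
Y = Y0 + rng.normal(0, 0.01, Y0.shape)   # observed outcomes, with noise
TRUE_EFFECT = 0.5
Y[0, 6:] += TRUE_EFFECT                  # unit 0 is treated from t = 6 on

# Impute unit 0's untreated trajectory from the donor units by fitting
# weights on the pre-treatment periods, exploiting the low-rank structure.
donors = Y[1:, :]
w, *_ = np.linalg.lstsq(donors[:, :6].T, Y[0, :6], rcond=None)
counterfactual = w @ donors
estimated_effect = float(np.mean(Y[0, 6:] - counterfactual[6:]))
```

With rank-1 structure and small noise, the pre-period fit extrapolates to the post-period and recovers the treatment effect; the causal tensor view generalizes this to richer observation patterns and interventions.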

Devavrat Shah is a professor in the Department of Electrical Engineering and Computer Science at MIT. His current research interests are at the interface of statistical inference and social data processing. His work has been recognized through prize paper awards in machine learning, operations research, and computer science, as well as career prizes including the 2010 Erlang prize from the INFORMS Applied Probability Society and the 2008 ACM Sigmetrics Rising Star Award. He is a distinguished alumnus of his alma mater IIT Bombay. He co-founded Celect, Inc. (now part of Nike) in 2013 to help retailers decide what to put where by accurately predicting demand using omni-channel data.

Watch recording on YouTube

September 30, 2021, 1 pm PT/4 pm ET

Challenges and Opportunities in Cloud Operations Research

Ishai Menache, Senior Principal Research Manager, Microsoft Research

Cloud computing is a multi-billion-dollar business that has revolutionized the way computing resources are consumed. The emergence of cloud computing is attributed to lowering risks for end users (e.g., scaling resource usage out based on demand) while allowing providers to reduce their costs by managing compute resources at scale. In this talk, we will provide an overview of some of the challenges providers face across different dimensions of cloud operations. In particular, we will highlight the role of Operations Research and algorithms in increasing the efficiency and return on investment of cloud infrastructure. As a concrete example, we zoom in on the Virtual Machine (VM) allocation problem, one of the fundamental problems in the area. VMs need to be assigned to physical machines in a way that reduces fragmentation and efficiently utilizes the available machines. Motivated by advances in Machine Learning that provide good estimates of workload characteristics, we consider the effect of having extra information in the form of VM lifetimes and future demand. We show that even basic information about demand (e.g., its average) leads to algorithms with significantly better guarantees.
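The core packing problem can be caricatured as bin packing with side information. The sketch below is an illustrative first-fit heuristic, not the algorithm from the talk; the machine capacity and the idea of grouping VMs by predicted lifetime are my assumptions.

```python
MACHINE_CAPACITY = 16  # cores per physical machine (assumed)

def allocate(vms, lifetime_aware=False):
    """First-fit VM placement; vms is a list of (cores, lifetime) tuples.

    With lifetime_aware=True, VMs are first grouped by predicted
    lifetime so that co-located VMs tend to depart together, which
    reduces fragmentation over time (that temporal benefit is not
    captured by the machine count returned here).
    """
    if lifetime_aware:
        vms = sorted(vms, key=lambda v: v[1])  # group similar lifetimes
    machines = []  # remaining free capacity per machine
    for cores, _ in vms:
        for i, free in enumerate(machines):
            if free >= cores:                  # first machine that fits
                machines[i] -= cores
                break
        else:
            machines.append(MACHINE_CAPACITY - cores)  # open a new machine
    return len(machines)
```

Even this toy version shows where predictions enter: lifetime estimates change the packing order, and demand forecasts would determine how many machines to keep provisioned.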

Ishai Menache received his PhD in Electrical Engineering from the Technion, Israel Institute of Technology. He was a Postdoctoral Associate at the Laboratory for Information and Decision Systems (LIDS) at MIT. Ishai has been with Microsoft Research since 2011, where he is the founder and manager of the Cloud Operations Research (CORE) group. His research focuses on developing large-scale optimization frameworks for cloud systems and applications. More broadly, his areas of interest include systems and networking, optimization, and machine learning.

Watch recording on YouTube

October 7, 2021, 1 pm PT/4 pm ET

Hierarchical Control for Cyber-Physical Systems and Applications to Traffic Management

Murat Arcak, Professor of Electrical Engineering and Computer Sciences, University of California, Berkeley

Control of cyber-physical systems such as vehicle traffic management is invariably organized in a hierarchical structure consisting of multiple layers of feedback, such as network, road-link, and vehicle control. Using traffic management as a running example, this talk will present an integrated approach to designing these layers, enabling a rigorous framework that provides system-level guarantees for the whole control stack. An example of this approach is symbolic control, which generates supervisory control actions to fulfill complex requirements expressed in temporal logic. In traffic management, symbolic control enables us to depart from steady-state signal timing plans and to develop reactive signaling schemes by first expressing finite-horizon goals, such as dissipating queues and avoiding saturation, in temporal logic. Moving up to the network layer, we will then present a game-theoretic analysis for routing, which takes into account the nonequilibrium dynamics resulting from drivers’ continual revisions of their routes. We will introduce tools to deal with such dynamics and illustrate them on mixed-autonomy traffic.
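The finite-horizon goals mentioned above can be read as bounded temporal-logic formulas over traces. A minimal sketch of checking two such specifications (the queue trace and the saturation bound are invented for illustration):

```python
def eventually(pred, trace):
    # Bounded "F pred": pred holds at some step of the finite trace.
    return any(pred(x) for x in trace)

def always(pred, trace):
    # Bounded "G pred": pred holds at every step of the finite trace.
    return all(pred(x) for x in trace)

# Queue length at an intersection over one signal cycle (invented data).
queue = [7, 5, 3, 0, 1, 0]

dissipates = eventually(lambda q: q == 0, queue)   # F (queue == 0)
unsaturated = always(lambda q: q < 10, queue)      # G (queue < 10)
```

A symbolic controller works the other way around: rather than checking one trace after the fact, it synthesizes signal timings so that every closed-loop trace satisfies such formulas by construction.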

Murat Arcak is a professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received his PhD degree from the University of California, Santa Barbara in 2000. He received a CAREER Award from the National Science Foundation in 2003, the Donald P. Eckman Award from the American Automatic Control Council in 2006, the Control and Systems Theory Prize from the Society for Industrial and Applied Mathematics (SIAM) in 2007, and the Antonio Ruberti Young Researcher Prize from the IEEE Control Systems Society in 2014. He is a Fellow of IEEE and the International Federation of Automatic Control (IFAC).

View presentation slides

Watch recording on YouTube

October 14, 2021, 1 pm PT/4 pm ET

Universal Laws and Architectures and Their Fragilities

John Doyle, Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and BioEngineering, California Institute of Technology

The past year unfortunately highlighted intrinsic and systemic unsustainability and fragilities in our society and technologies. While the detailed mechanisms underlying “systemic fragilities” in immune, medical, computing, social, legal, energy, and transportation systems are incredibly diverse, all are enabled by shared universal features of their architectures, which are largely ad hoc historical artifacts. AI has many well-known fragilities but, outside social media, has not so far contributed substantially to the catastrophes unfolding in these systems. This is poised to change dramatically. We need to more systematically design architectures that produce more robust and sustainable systems, including allowing higher-layer learning and lower-layer efficiencies to contribute. I’ll sketch the basic concepts of laws, layers, levels, speed-efficiency-accuracy-flexibility tradeoffs (SEAFTs), and diversity-enabled sweet spots (DeSS); how crucial hardware-layer constraints on sparsity, locality, and delay limit system-layer functionality; and how proper layering can mitigate this via DeSS. Examples include all our tech nets, layered brains (e.g., throwing and hitting 100 mph fastballs), layered immunity augmented by medicine and policy (and insights into the current pandemic), systemic legal fragilities and the 14th Amendment, cascading failures in energy, climate change, language and its hijacking in social media, encouraging animal models for social architectures, and wildfire ecosystems.

John Doyle is the Jean-Lou Chameau Professor at the California Institute of Technology. He earned his BS and MS degrees in electrical engineering from MIT and received his doctorate in mathematics from the University of California, Berkeley. His research interests are in integrated theory foundations and architectures for complex networks that enable efficiency and robustness, with applications to tech, bio, neuro, and social systems, and with an emphasis on the impact on control performance of delays, sparsity, locality, and saturations in sensors, actuators, communications, and computing components, and how these arise in and challenge bio, neuro, tech, and social system design. He has few academic awards but when younger had many regional, national, and world records and championships in various sports, and he is known for fantastic students and colleagues.

Watch recording on YouTube

October 21, 2021, 1 pm PT/4 pm ET

Resource Allocation through Machine Learning in Emerging Wireless Networks: 5G and Beyond to 6G

Sanjay Shakkottai, Professor of Electrical and Computer Engineering, University of Texas at Austin

In this talk, we discuss learning-inspired algorithms for resource allocation in emerging wireless networks (5G and beyond to 6G). We begin with an overview of opportunities for wireless and ML at various time-scales in network resource allocation. We then present two specific instances to make the case that learning-assisted resource allocation algorithms can significantly improve performance in real wireless deployments. First, we study co-scheduling of ultra-low-latency traffic (URLLC) and broadband traffic (eMBB) in a 5G system, where we need to meet the dual objectives of maximizing utility for eMBB traffic while immediately satisfying URLLC demands. We study iterative online algorithms based on stochastic approximation to achieve these objectives. Next, we study online learning (through a bandit framework) of wireless capacity regions to assist in downlink scheduling, where these capacity regions are “maps” from each channel-state to the corresponding set of feasible transmission rates. In practice, these maps are hand-tuned by operators based on experiments, and these static maps are chosen such that they are good across several base-station deployment scenarios. Instead, we propose an epoch-greedy bandit algorithm for learning scenario-specific maps. We derive regret guarantees, and also empirically validate our approach on a high-fidelity 5G New Radio (NR) wireless simulator developed within AT&T Labs. This is based on joint work with Gustavo de Veciana, Arjun Anand, Isfar Tariq, Rajat Sen, Thomas Novlan, Salam Akoum, and Milap Majmundar.
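The bandit component can be caricatured as follows. The candidate rates, success probabilities, and the simple explore/exploit schedule below are all invented for illustration; the talk's epoch-greedy algorithm and its regret analysis are considerably more refined.

```python
import random

random.seed(0)

RATES = [1.0, 2.0, 4.0]  # candidate transmission rates (assumed units)
# Invented ground truth: decoding success probability per (state, rate).
SUCCESS_P = {("good", 1.0): 0.99, ("good", 2.0): 0.95, ("good", 4.0): 0.70,
             ("bad", 1.0): 0.90, ("bad", 2.0): 0.50, ("bad", 4.0): 0.10}

counts = {k: 0 for k in SUCCESS_P}
wins = {k: 0 for k in SUCCESS_P}

def schedule_rate(state, t):
    # Explore on a sparse schedule; otherwise exploit the rate with the
    # best empirical throughput for this channel state.
    if t % 10 == 0:
        rate = random.choice(RATES)                        # explore
    else:
        rate = max(RATES, key=lambda r:                    # exploit
                   r * wins[(state, r)] / max(counts[(state, r)], 1))
    success = random.random() < SUCCESS_P[(state, rate)]
    counts[(state, rate)] += 1
    wins[(state, rate)] += success
    return rate if success else 0.0        # realized throughput this slot

throughput = sum(schedule_rate(random.choice(["good", "bad"]), t)
                 for t in range(2000))
```

The per-state empirical estimates play the role of the learned capacity "map": each channel state ends up associated with the rate that maximizes expected throughput, replacing a hand-tuned static table.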

Sanjay Shakkottai received his PhD from the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign in 2002. He is with the University of Texas at Austin, where he is the Temple Foundation Endowed Professor No. 4, and a Professor in the Department of Electrical and Computer Engineering. He received the NSF CAREER award in 2004, was elected an IEEE Fellow in 2014, and was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021. His research interests lie at the intersection of algorithms for resource allocation, statistical learning, and networks, with applications to wireless communication networks and online platforms.

Watch recording on YouTube

October 28, 2021, 1 pm PT/4 pm ET

Deep Learning to Replace, Improve, or Aid CFD Analysis in Built Environment Applications

Wei Liu, Assistant Professor of Civil and Architectural Engineering, KTH Royal Institute of Technology

Fast and accurate airflow simulations in the built environment are critical to providing acceptable thermal comfort and air quality to occupants. Computational Fluid Dynamics (CFD) offers detailed analysis of airflow motion, heat transfer, and contaminant transport in indoor environments, as well as of wind flow and pollution dispersion around buildings in urban environments. However, CFD still faces many challenges, mainly in terms of computational expense and accuracy. With the increasing availability of large amounts of data, data-driven models are starting to be investigated to either replace, improve, or aid CFD simulations. More specifically, the abilities of deep learning and Artificial Neural Networks (ANNs) as universal non-linear approximators, their handling of high-dimensional fields, and their lower computational expense are very appealing. In built environment research, deep learning applications to airflow simulation have mostly used the ANN as a surrogate, a replacement for expensive CFD analysis. Surrogate modeling enables fast or even real-time predictions, but usually at the cost of degraded accuracy. This talk presents deep learning interactions with fluid mechanics simulations in general and proposes techniques other than surrogate modeling for built environment applications. These are promising methods largely yet to be explored in the built environment field.
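As a minimal illustration of the surrogate role, the sketch below trains a tiny one-hidden-layer network on synthetic data standing in for expensive CFD samples. The dataset, architecture, and target function are all invented; real surrogates map full boundary-condition fields to flow fields.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in "CFD" dataset: one inlet parameter -> one flow quantity.
# In practice, each training sample would come from a full CFD run.
X = rng.uniform(0.5, 3.0, size=(200, 1))
y = 0.4 * X + 0.2 * np.sin(2 * X)        # synthetic target quantity

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = 0.5 * rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    err = h @ W2 + b2 - y                # prediction error
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)

# Millisecond-scale surrogate evaluation vs. a minutes-to-hours CFD run.
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The speed/accuracy trade-off is visible even here: evaluation is essentially free, but fidelity is bounded by how well the training samples cover the flow regimes of interest, which is why the talk argues for roles beyond pure surrogacy.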

Wei Liu is an assistant professor at the Division of Sustainable Buildings, Department of Civil and Architectural Engineering, KTH Royal Institute of Technology. Liu’s current research topics include indoor air quality and air distribution, inverse design and control of indoor environments, and data-driven/AI-based smart buildings. He has published 47 journal papers and 30 conference papers. Liu is an Outstanding Winner and recipient of the INFORMS Award from the Mathematical Contest in Modeling in 2019, the Best Paper Award from ROOMVENT 2018, the Bilsland Dissertation Fellowship from Purdue University in 2016, and First Prize in the RP-1493 Shootout Contest from ASHRAE in 2012.

View presentation slides

Watch recording on YouTube

November 18, 2021, 1 pm PT/3 pm CT

Towards the Next Era of Traffic Control: From Theory to Applications

Maria Laura Delle Monache, Assistant Professor of Civil and Environmental Engineering, University of California, Berkeley

In this talk, we present new models and control techniques for transportation on large-scale networks. First, we introduce a new two-dimensional traffic model based on partial differential equations (PDEs). We show the validation of the model on synthetic and real data. Then, we propose an innovative control design, based on the 2D model, that considerably simplifies control design for traffic systems evolving on large-scale networks. The idea is to project the flow evolution into a new space where the control problem can be decomposed into a finite number of one-dimensional problems. Lastly, we show how emergent behaviors in transportation, described with systems and control theory, can be complemented with big data and machine learning algorithms to address more complex societal implications.
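For intuition, a generic two-dimensional conservation-law traffic model (this particular form is my assumption, not necessarily the one presented in the talk) tracks a vehicle density \(\rho(x, y, t)\) over the plane:

```latex
% Density evolves under a 2D conservation law: vehicles are neither
% created nor destroyed, and flow follows the network geometry.
\partial_t \rho + \nabla \cdot \big( \rho \, v(\rho) \, \mathbf{d}(x, y) \big) = 0
```

Here \(v(\rho)\) is a speed-density relation and \(\mathbf{d}(x, y)\) a direction field encoding the road network. Projecting the dynamics along such a direction field is one way a 2D control problem can reduce to a family of one-dimensional ones.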

Maria Laura Delle Monache is an assistant professor in the Department of Civil and Environmental Engineering at the University of California, Berkeley. Prior to joining the faculty at UC Berkeley, she was a research scientist at Inria in Grenoble, France, and a postdoctoral fellow at Rutgers University. She received the B.Sc. degree from the University of L’Aquila (Italy), a joint M.Sc. degree from the University of L’Aquila (Italy) and the University of Hamburg (Germany), and the Ph.D. degree in applied mathematics from the University of Nice-Sophia Antipolis, France. Delle Monache’s research lies at the intersection of engineering and mathematics and is aimed at designing more sustainable communities by leveraging new technologies for transportation.

Watch recording on YouTube

December 2, 2021, 1 pm PT/3 pm CT

Quantifying Carbon Credit over U.S. Midwestern Cropland Using AI-Based Data-Model Fusion 

Kaiyu Guan, Blue Waters Associate Professor of Natural Resources and Environmental Sciences, University of Illinois at Urbana-Champaign

In this talk, we will first provide an overview of the background on agricultural carbon credits, and then focus on the quantification of field-level carbon credit, including the issues with conventional methods and our proposed “System-of-Systems” solution, which leverages various sources of sensing data, process-based modeling, and AI-based Model-Data Fusion methods to achieve field-level, accurate, scalable, and cost-effective quantification.

Kaiyu Guan is the Blue Waters Associate Professor at the University of Illinois at Urbana-Champaign. He received his PhD from Princeton University and was a postdoctoral scholar at Stanford University. His research group uses satellite data, computational models, fieldwork, and AI to address how climate and human practices affect crop productivity, water resource availability, and ecosystem functioning. He has published over 100 papers in leading scientific journals and leads over 15 federal grants from NASA, NSF, DOE, and USDA. He is the recipient of an NSF CAREER Award, the NASA New Investigator Award, and the AGU Early Career Award in Global Environmental Change, and was a finalist for the Blavatnik National Award for Young Scientists.

Watch recording on YouTube