October 10–12, 2022
Watch all sessions on the C3.ai DTI YouTube channel workshop playlist.
Online markets that match buyers and sellers are ubiquitous. Well-known examples include Airbnb, Uber, Lyft, and eBay. All of these platforms rely on machine learning algorithms that learn from past data to maximize profit and/or the platform's utility to its users. In addition, new markets are emerging for data itself: one example is research on the design of privacy-preserving mechanisms through which individuals may release their private data for the greater good of society.
This workshop explores theoretical aspects of the interplay between data science, machine learning, game theory, and economics for the design of markets using data and for the design of data markets. The program includes plenary research talks and poster presentations.
ORGANIZERS
Rasoul Etesami (University of Illinois at Urbana-Champaign), Eric Mazumdar (California Institute of Technology), R. Srikant (University of Illinois at Urbana-Champaign)
SPEAKERS
Subhonmesh Bose (University of Illinois at Urbana-Champaign), Ozan Candogan (University of Chicago), Rasoul Etesami (University of Illinois at Urbana-Champaign), Raul Castro Fernandez (University of Chicago), Nikhil Garg (Cornell University), Ashish Goel (Stanford University), Negin Golrezaei (Massachusetts Institute of Technology), Thibaut Horel (Massachusetts Institute of Technology), Nicole Immorlica (Microsoft Research), Yash Kanoria (Columbia University), Eric Mazumdar (California Institute of Technology), Kamesh Munagala (Duke University), Rad Niazadeh (University of Chicago), Amin Saberi (Stanford University), Vijay Vazirani (University of California, Irvine)
PROGRAM
Day 1 (Monday, October 10)
ABSTRACT
The classic Hylland-Zeckhauser mechanism (1979) for implementing a one-sided matching market was recently shown to be intractable. We introduced Nash-bargaining-based matching market models, which are tractable. Motivated by recent insights on the online bipartite matching problem, we attempted to extend its optimal algorithm to a budget-oblivious algorithm for the adwords problem. We will show how far this went and what remains. Finally, the lattice of stable matchings has been extensively studied ever since Gale and Shapley (1962) introduced the stable matching problem. Two recent papers ([1] and [2]) studied the problem of understanding structural relationships between the stable matching lattices of two “nearby” instances. A “killer app” is yet to be found for these ideas.
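For readers unfamiliar with the Nash-bargaining-based formulation, the following Python sketch (using cvxpy; the utility matrix and problem size are illustrative choices, not from the talk) conveys the basic idea for a one-sided market: maximize the product of agents' utilities, i.e., the sum of their logarithms, over fractional allocations, which is a tractable convex program.

import numpy as np
import cvxpy as cp

# Illustrative instance: n agents, n goods, random utilities (assumption, not from the talk).
rng = np.random.default_rng(0)
n = 5
U = rng.uniform(size=(n, n))                   # U[i, j]: agent i's utility for good j

X = cp.Variable((n, n), nonneg=True)           # fractional allocation
utilities = cp.sum(cp.multiply(U, X), axis=1)  # each agent's total utility
problem = cp.Problem(
    cp.Maximize(cp.sum(cp.log(utilities))),    # Nash bargaining: maximize product of utilities
    [cp.sum(X, axis=1) <= 1,                   # each agent receives at most one unit
     cp.sum(X, axis=0) <= 1])                  # each good is allocated at most once
problem.solve()
print(np.round(X.value, 3))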
SPEAKER
Vijay Vazirani got his undergraduate degree from MIT in 1979 and his PhD from the University of California, Berkeley in 1983. A Distinguished Professor at the University of California, Irvine, Vazirani has made fundamental contributions to several areas of the theory of algorithms, including algorithmic matching theory, approximation algorithms and algorithmic game theory, and also to complexity theory, in which he established, with Les Valiant, the hardness of unique solution instances of NP-complete problems. Over the last four years, Vazirani has been working on algorithms for matching markets; his co-edited book on this topic will be published by Cambridge University Press in February 2023. Vazirani is an ACM Fellow, a Guggenheim Fellow and the recipient of the 2022 INFORMS John von Neumann Theory Prize.
ABSTRACT
Motivated by a variety of online matching markets, we consider demand and supply that arise i.i.d. in [0,1]^d, and need to be matched with each other while minimizing the expected average distance between matched pairs (the “cost”). We characterize the achievable cost scaling in three models as a function of the dimension d and the amount of excess supply (M or m): (i) Static matching of N demand units with N+M supply units. (ii) A semi-dynamic model where N+M supply units are present beforehand and N demand units arrive sequentially and must be matched immediately. (iii) A fully dynamic model where there are always m supply units present in the system, one supply and one demand unit arrive in each period, and the demand unit must be matched immediately. We show that, despite the matching constraint and uncertainty about the future, cost nearly as small as the distance to the nearest neighbor is achievable in all cases except models (i) and (ii) for d=1 and M = o(N). Moreover, the latter is the only case in models (i) and (ii) where excess supply significantly reduces the achievable cost. Work with Akshit Kumar and Yilun Chen introduces a practically motivated variant of model (ii) in which the distributions of supply and demand can be different, and generates new algorithmic and scaling insights.
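As a rough illustration of the static model (i), the simulation sketch below (parameters are mine, purely for illustration) computes the average matched distance of an optimal assignment of N demand units to N + M supply units drawn uniformly in [0,1]^d.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def avg_match_cost(N, M, d, seed=0):
    rng = np.random.default_rng(seed)
    demand = rng.uniform(size=(N, d))
    supply = rng.uniform(size=(N + M, d))
    dist = cdist(demand, supply)               # N x (N + M) pairwise distances
    rows, cols = linear_sum_assignment(dist)   # min-cost matching of all demand units
    return dist[rows, cols].mean()

for d in (1, 2, 3):
    print(d, avg_match_cost(N=500, M=50, d=d))  # average cost grows with the dimension d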
Watch recorded presentation on YouTube
SPEAKER
Yash Kanoria, the Sidney Taurel Associate Professor of Business in the Decision, Risk, and Operations division at Columbia Business School, is working on the design and operations of marketplaces, especially matching markets. Previously, he obtained a BTech in Electrical Engineering from IIT Bombay in 2007, a PhD from Stanford in 2012, and spent a year at Microsoft Research New England during 2012-13 as a Schramm postdoctoral fellow. He received a National Science Foundation CAREER Award in 2017.
ABSTRACT
In many online platforms, customers’ decisions are substantially influenced by product rankings, as most customers only examine a few top-ranked products. Concurrently, such platforms also use the same data corresponding to customers’ actions to learn how these products must be ranked or ordered. These interactions in the underlying learning process, however, may incentivize sellers to artificially inflate their position by employing fake users, as exemplified by the emergence of click farms. Motivated by such fraudulent behavior, we study the ranking problem of a platform that faces a mixture of real and fake users who are indistinguishable from one another. We first show that existing learning algorithms — that are optimal in the absence of fake users — may converge to highly sub-optimal rankings under manipulation by fake users. To overcome this deficiency, we develop efficient learning algorithms under two informational environments: in the first setting, the platform is aware of the number of fake users, and in the second setting, it is agnostic to the number of fake users. For both these environments, we prove that our algorithms converge to the optimal ranking, while being robust to the aforementioned fraudulent behavior; we also present worst-case performance guarantees for our methods, and show that they significantly outperform existing algorithms. At a high level, our work employs several novel approaches to guarantee robustness such as: 1) constructing product-ordering graphs that encode the pairwise relationships between products inferred from the customers’ actions; and 2) implementing multiple levels of learning with a judicious amount of bi-directional cross-learning between levels.
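The toy sketch below is my own illustration of the product-ordering-graph idea, not the authors' algorithm: it adds an edge from product i to product j only when i's observed click advantage over j exceeds both a statistical confidence width and the maximum distortion that F fake users could have introduced.

import math
from collections import defaultdict

def ordering_graph(wins, n_products, n_rounds, F, delta=0.05):
    # wins[(i, j)]: number of rounds in which product i was preferred to product j
    conf = math.sqrt(0.5 * math.log(2 / delta) / n_rounds)   # Hoeffding-style confidence width
    graph = defaultdict(set)
    for i in range(n_products):
        for j in range(n_products):
            if i == j:
                continue
            margin = (wins.get((i, j), 0) - wins.get((j, i), 0)) / n_rounds
            if margin > 2 * conf + 2 * F / n_rounds:         # margin robust to F fake users
                graph[i].add(j)                              # i robustly precedes j
    return dict(graph)

print(ordering_graph({(0, 1): 900, (1, 0): 100}, n_products=2, n_rounds=1000, F=50))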
Watch recorded presentation on YouTube
SPEAKER
Negin Golrezaei is the KDD Career Development Professor in Communications and Technology and an Assistant Professor of Operations Management at the MIT Sloan School of Management. Her current research interests are in the area of machine learning, statistical learning theory, mechanism design, and optimization algorithms with applications to revenue management, pricing, and online markets. Before joining MIT, Negin spent a year as a postdoctoral researcher at Google Research in New York where she worked with the Market Algorithm team to develop, design, and test new mechanisms and algorithms for online marketplaces. Negin is the recipient of a 2018 Google Faculty Research Award; 2017 George B. Dantzig Dissertation Award; 2017 INFORMS Revenue Management and Pricing Section Dissertation Prize; 2018 Elwood S. Buffa Doctoral Dissertation Award from the Decision Sciences Institute; University of Southern California (USC) Ph.D. Achievement Award (2017); USC CAMS Graduate Student Prize, for excellence in research with a substantial mathematical component (2017); Honorable mention in the 2016 MSOM Student Paper Competition; and USC Provost’s Ph.D. Fellowship (2011). Negin received her BSc (2007) and MSc (2009) degrees in electrical engineering from the Sharif University of Technology, Iran, and a PhD (2017) in operations research from USC.
ABSTRACT
We study the classical Bayesian auction setting with a twist: Between the revenue maximizing seller and buyers lies an intermediary that is better informed about the buyer values. This intermediary segments the market by selectively releasing information to the seller, who still controls the auction. This process, called signaling, allows sellers to price discriminate. Though one would expect signaling to always help the seller, in the setting with one buyer, Bergemann, Brooks, and Morris [AER 2015] showed a remarkable result: Signaling can maximally help the buyer without really helping the seller. Specifically, there exists a signaling scheme where the seller’s revenue does not increase but the item always sells, thereby maximizing the consumer (buyer) surplus. In this talk, we will explore whether such a result is possible in more general settings: First, when the type space of the buyer is “inter-dimensional” with a private budget or deadline in addition to a private value, and second, when there are multiple buyers in the auction. On the positive side, we show exact and approximation results via new signaling schemes, while on the negative side, we show impossibility results that capture the limits to which information intermediaries can help the buyers. This is joint work with Reza Alijani, Sid Banerjee, Shao-Heng Ko, and Kangning Wang, and combines two papers that appeared in ACM EC 2022.
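To make the single-buyer Bergemann-Brooks-Morris phenomenon concrete, here is a toy numerical check (the three-point value distribution and the segmentation are my own illustrative choices): the segmentation always sells the item, yet total revenue stays at the unsegmented monopoly level, so the entire gain goes to the buyer.

import numpy as np

values = np.array([1.0, 2.0, 3.0])
prior = np.array([1/3, 1/3, 1/3])                        # buyer's value distribution

def revenue(price, dist):
    return price * dist[values >= price].sum()

monopoly_rev = max(revenue(p, prior) for p in values)    # 4/3, attained at price 2

# Hypothetical segmentation: (weight, posterior over values, posted price).
segments = [(2/3, np.array([1/2, 1/4, 1/4]), 1.0),
            (1/3, np.array([0.0, 1/2, 1/2]), 2.0)]

assert np.allclose(sum(w * q for w, q, _ in segments), prior)  # segments average back to the prior
for _, q, p in segments:                                       # posted price is seller-optimal in its segment
    assert revenue(p, q) >= max(revenue(x, q) for x in values) - 1e-12
total_rev = sum(w * revenue(p, q) for w, q, p in segments)
print(total_rev, monopoly_rev)   # both 4/3: the item always sells, but the seller gains nothing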
Watch recorded presentation on YouTube
SPEAKER
Kamesh Munagala is a Professor and Associate Chair of Computer Science at Duke University. His research is in the general area of theoretical computer science, particularly the areas of Approximation Algorithms, Online Algorithms, and Computational Economics. He works on developing models, algorithms, and markets for resource allocation, decision-making, and provisioning problems. These problems arise in a variety of applications — designing a data network, facility location and clustering, data center scheduling, allocating ad slots, scheduling ride-shares, and civic budgeting. In these contexts, his work has addressed several research challenges: (1) Intractability: Exactly optimal solutions are often inefficient to compute due to the discrete, combinatorial nature of the problems, and we seek efficient algorithms that compute provably approximate optimal solutions; (2) Uncertainty: When future demand is uncertain, we seek algorithms that either work with specified models of uncertainty, or are robust to any possible future; (3) Pricing and Incentives: In settings where participants may lie about the utility of resources to them, we seek to design appropriate pricing schemes and incentives; and (4) Fairness: In settings where resources are shared and the allocation has to be fair to the participants, we seek models for fairness and devise efficient algorithms for these models. He is part of the Algorithms group and the Computational Economics group.
ABSTRACT
Motivated by online decision-making in time-varying combinatorial environments, we study the problem of transforming offline algorithms into their online counterparts. We focus on offline combinatorial problems that are amenable to a constant-factor approximation by a greedy algorithm that is robust to local errors. For such problems, we provide a general framework that efficiently transforms offline robust greedy algorithms into online ones using Blackwell approachability. We show that the resulting online algorithms have O(T^{1/2}) (approximate) regret in the full-information setting. We further introduce a bandit extension of Blackwell approachability and leverage it to transform robust offline greedy algorithms into online algorithms with O(T^{2/3}) (approximate) regret in the bandit setting. Demonstrating the flexibility of our framework, we apply our offline-to-online transformation to several problems at the intersection of revenue management, market design, and online optimization, including product ranking optimization in online platforms, reserve price optimization in auctions, and submodular maximization. We show that our transformation, when applied to these applications, yields new regret bounds or improves the currently known bounds. If time permits, I will also elaborate on some open problems and future directions stemming from this work. The talk is based on the paper “Online Learning via Offline Greedy Algorithms: Applications in Market Design and Optimization” (with Negin Golrezaei, Joshua Wang, Fransisca Susan, and Ashwinkumar Badanidiyuru); a conference version appeared in ACM EC’21 and a journal version is to appear in Management Science.
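As a concrete example of the kind of offline subroutine the framework assumes, the sketch below (illustrative, not code from the paper) implements the standard greedy for monotone submodular maximization under a cardinality constraint, an offline (1 - 1/e)-approximation that tolerates small per-step errors in the marginal values.

def greedy_submodular(f, ground_set, k):
    """Greedily pick up to k elements to maximize the set function f."""
    chosen = set()
    for _ in range(k):
        gains = {e: f(chosen | {e}) - f(chosen) for e in ground_set if e not in chosen}
        if not gains or max(gains.values()) <= 0:
            break
        chosen.add(max(gains, key=gains.get))
    return chosen

# Toy coverage function (submodular): each element covers a set of items.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(greedy_submodular(f, cover.keys(), k=2))   # covers 3 items; the optimum {'b', 'd'} covers 4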
Watch recorded presentation on YouTube
SPEAKER
Rad Niazadeh is an assistant professor of Operations Management at the University of Chicago Booth School of Business. Prior to joining Chicago Booth, he was a visiting researcher at Google Research NYC and a Motwani postdoctoral fellow at Stanford University, Computer Science Department. He obtained his PhD in Computer Science from Cornell University. He studies the interplay between algorithms, incentives, and learning, advancing the theoretical methodologies and foundations of market design and operations in dynamic and complex environments. On the practical side, his research aims to build generic frameworks to design faster and more economically efficient online marketplaces, and also to help with addressing humanitarian needs (such as equity, fairness, and non-discrimination) in operations of governmental agencies and non-profit organizations. Niazadeh has received several research awards, including the INFORMS Auctions and Market Design 2021 Rothkopf Junior Researcher Paper Award (first place), the INFORMS Revenue Management and Pricing Dissertation Award (honorable mention), the Google PhD Fellowship in Market Algorithms, the Stanford Motwani fellowship, and the Cornell Jacobs fellowship.
ABSTRACT
Machine learning algorithms are increasingly being deployed into environments in which they must contend with other strategic agents whose objectives are misaligned with their own. The presence of these other agents breaks many of the underlying assumptions on which machine learning algorithms are built and can cause non-stationarity in the environment, giving rise to surprising dynamics and behaviors. In this talk, we explore the challenges and opportunities available to algorithm designers in such scenarios and show how to take advantage of the game-theoretic interactions in the environment. We focus on two sets of problems: 1) strategic classification, where a learner attempts to fit a machine learning model while the data is being strategically manipulated, and 2) matching markets, where individual agents attempt to learn their most preferred match while competing with other agents over firms. In strategic classification, we show that the presence of strategic agents turns the learning rate, or speed of learning, into a design choice that can be used to select among equilibria. In matching markets, we design a family of algorithms that allow agents to optimally learn in structured matching markets while competing with other agents, even without full observation of the market. Both of these results suggest that dealing with game-theoretic interactions requires re-evaluating long-standing assumptions underlying machine learning, but also that, by leveraging the underlying game-theoretic structure, one can still give strong performance guarantees.
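For intuition about the strategic classification setting, here is a toy best-response sketch (the linear classifier, quadratic manipulation cost, and numbers are my own assumptions): an agent misreports its features just enough to cross the decision boundary whenever the manipulation cost is worth the gain.

import numpy as np

def best_response(x, w, b, cost=1.0, gain=1.0):
    """Agent with true features x reports manipulated features against the classifier sign(w.x + b)."""
    margin = x @ w + b
    if margin >= 0:
        return x                                      # already classified positive
    dist = -margin / np.linalg.norm(w)                # distance to the decision boundary
    if cost * dist**2 > gain:
        return x                                      # manipulation is too expensive
    return x + (dist + 1e-6) * w / np.linalg.norm(w)  # move just past the boundary

w, b = np.array([1.0, 1.0]), -1.0
x = np.array([0.2, 0.3])
print(best_response(x, w, b))                         # reported features, approximately [0.45, 0.55]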
Watch recorded presentation on YouTube
SPEAKER
Eric Mazumdar is an Assistant Professor in Computing and Mathematical Sciences and Economics at Caltech. He obtained his PhD in Electrical Engineering and Computer Science at UC Berkeley, co-advised by Michael Jordan and Shankar Sastry. His research interests lie at the intersection of machine learning and economics, and he is broadly interested in developing the tools and understanding necessary to confidently deploy machine learning algorithms into societal systems. This requires understanding the theoretical underpinnings of learning algorithms in uncertain, dynamic environments where they interact with other strategic agents, humans, and algorithms. Practically, he applies his work to problems in intelligent infrastructure, online markets, e-commerce, and the delivery of healthcare, and uses tools and ideas from statistical machine learning, optimization, stochastic control, dynamical systems, and game theory. Some of the topics addressed by his recent work include strategic classification, learning behavioral models of human decision-making from data, min-max optimization, learning in games, multi-agent reinforcement learning, distributionally robust learning, and learning for control.
Day 2 (Tuesday, October 11)
ABSTRACT
Modern city governance relies heavily on crowdsourcing to identify problems such as downed trees and power lines. Two major concerns are that (1) residents do not report problems at the same rates and (2) agencies respond to reports differentially, leading to an inefficient and inequitable allocation of government resources. However, measuring such under-reporting and differential responses is a challenging statistical task: ground-truth incident and risk distributions may differ by area, and, almost by definition, we do not observe incidents that are not reported. First, we develop a method to identify (heterogeneous) reporting rates without using external (proxy) ground-truth data. We apply our method to over 100,000 resident reports made to the New York City Department of Parks and Recreation, finding substantial spatial and socio-economic disparities in reporting rates, even after controlling for incident characteristics. Second, in ongoing work, we develop a method to audit differential response rates, even when incident occurrence varies spatio-temporally and the agency faces capacity constraints. Finally, I’ll discuss an agenda for analyzing and addressing the downstream consequences of data and learning limitations in societal systems, drawing on work in both government service allocation and recommender systems.
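One simple way to get at heterogeneous reporting rates, sketched below under strong assumptions of my own (duplicate reports about an already-known incident arrive as a Poisson process whose rate is log-linear in covariates), is a Poisson regression of duplicate-report counts with the incident's open time as the exposure; this is only a caricature, not the paper's method.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
income = rng.normal(size=n)                    # hypothetical area covariate
severity = rng.normal(size=n)                  # hypothetical incident covariate
exposure = rng.uniform(1, 30, size=n)          # days the incident was open
rate = np.exp(-1.0 + 0.5 * income + 0.8 * severity)
duplicates = rng.poisson(rate * exposure)      # duplicate reports observed per incident

X = sm.add_constant(np.column_stack([income, severity]))
fit = sm.GLM(duplicates, X, family=sm.families.Poisson(),
             offset=np.log(exposure)).fit()
print(fit.params)                              # approximately recovers (-1.0, 0.5, 0.8)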
Watch recorded presentation on YouTube
SPEAKER
Nikhil Garg is an Assistant Professor of Operations Research and Information Engineering at Cornell Tech as part of the Jacobs Technion-Cornell Institute. His research interest is the application of algorithms, data science, and mechanism design to the study of democracy, markets, and societal systems at large. He received his PhD from Stanford University and has spent considerable time in industry — most recently, he was the Principal Data Scientist at PredictWise, which provides election analytics for political campaigns. Nikhil received the INFORMS George Dantzig Dissertation Award and an honorable mention for the ACM SIGecom dissertation award.
ABSTRACT
The theory of matching, with its roots in the work of mathematical giants like Euler and Kirchhoff, has played a central and catalytic role in combinatorial optimization for decades. More recently, the growth of online marketplaces for allocating advertisements, rides, or other goods and services has led to new interest and progress in this area. I will start the talk by giving examples from various industries and survey a few models, algorithms, and open problems in the context of ride-sharing.
Watch recorded presentation on YouTube
SPEAKER
Amin Saberi is a Professor of Management Science and Engineering at Stanford University. He received his B.Sc. from Sharif University of Technology and his Ph.D. from Georgia Institute of Technology in Computer Science. His research interests include algorithms, design and analysis of social networks, and applications. He is a recipient of the Terman Fellowship, Alfred Sloan Fellowship, and several best paper awards.
ABSTRACT
We will discuss the design of exchanges that allow arbitrary pairs of assets to be traded against each other without the presence of a reserve currency. We show that if only sell orders are allowed, a simple convex program can be used to compute the clearing prices and the set of trades that can be accomplished. We present practical approaches to make these exchanges tractable on top of a distributed consensus platform (e.g. a blockchain). We also show (axiomatically) how liquidity pools, represented by Constant Function Market Makers, can be incorporated into a multi-asset exchange as a trading agent. This is joint work with Mohak Goyal, David Mazieres, and Geoffrey Ramseyer.
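To illustrate how a liquidity pool can act as a trading agent in such an exchange, the sketch below (a constant-product pool; a simplification I chose, not necessarily the paper's formulation) computes the pool's net trade at a candidate clearing price: it trades exactly enough to move its marginal price to that price while preserving its invariant.

import math

def cpmm_net_trade(x, y, p):
    """Constant-product pool with reserves (x, y); p is the clearing price of X in units of Y.
    Returns (dx, dy): positive means the pool buys that asset, negative means it sells."""
    k = x * y
    x_new = math.sqrt(k / p)      # post-trade reserves at which the marginal price y/x equals p
    y_new = math.sqrt(k * p)
    return x_new - x, y_new - y

# Pool holding 100 X and 200 Y (marginal price 2.0) facing a clearing price of 2.5:
print(cpmm_net_trade(100.0, 200.0, 2.5))   # sells about 10.6 X, buys about 23.6 Y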
Watch recorded presentation on YouTube
SPEAKER
Ashish Goel is a professor of Management Science and Engineering and (by courtesy) Computer Science at Stanford University, and a member of Stanford’s Institute for Computational and Mathematical Engineering. He received his PhD in Computer Science from Stanford in 1999, and was an assistant professor of Computer Science at the University of Southern California from 1999 to 2002. His research interests lie in the design, analysis, and applications of algorithms. Current application areas of interest include social networks, participatory democracy, Internet commerce, and large-scale data processing. Goel is a recipient of an Alfred P. Sloan faculty fellowship (2004-06), a Terman faculty fellowship from Stanford, an NSF Career Award (2002-07), and a Rajeev Motwani mentorship award (2010). He co-authored the paper that won Best Paper at WWW 2009, was a 2014 Edelman Laureate and a co-winner of the SigEcom Test of Time Award in 2018. Goel was a research fellow and technical advisor at Twitter, Inc. from July 2009 to Aug 2014. He currently serves as a technical advisor to Coinbase, Inc.
ABSTRACT
Most economic models assume that agents hold beliefs in the form of priors, or probability distributions over the state of the world, which guide their behavior. In this talk, I consider a model in which agents do not have priors. Instead, beliefs are built from data, or anecdotes, drawn from a distribution parameterized by the state of the world. I ask how this impacts learning and outcomes. I first discuss a model where the agent has limited memory and must strategically choose which anecdotes to commit to memory. In this context, I characterize the amount of memory required for the agent to learn as well as an agent with perfect memory would. I next discuss a model where agents communicate by sharing anecdotes. This mode of communication results in higher noise and bias when agents have differing preferences, giving rise to informational homophily and polarization. Finally, I discuss a model where a principal selectively discloses anecdotes to facilitate social learning. Here I will show that an appropriate information structure, chosen ex ante, can incentivize exploration and thus avoid the herding problems common in such social learning settings.
Watch recorded presentation on YouTube
SPEAKER
Nicole Immorlica is a Senior Principal Researcher at Microsoft Research New England (MSR NE), where she leads the economics and computation group. She is also chair of SIGecom, the ACM Special Interest Group on Economics and Computation, which fosters world-class research in this interdisciplinary field through conferences, awards, and mentorship programs. She received her BS in 2000, MEng in 2001, and PhD in 2005 in theoretical computer science from MIT in Cambridge, MA. She joined MSR NE in 2012 after completing postdocs at Microsoft in Redmond, WA, and Centrum Wiskunde & Informatica (CWI) in Amsterdam, Netherlands, and a professorship in computer science at Northwestern University. Nicole’s research interest is in the design and operation of sociotechnical systems. Using tools and modeling concepts from both theoretical computer science and economics, Nicole hopes to explain, predict, and shape behavioral patterns in various online and offline systems, markets, and games. She is known for her work on social networks, matching markets, and mechanism design. She is the recipient of a number of fellowships and awards, including the Sloan Fellowship, the Microsoft Faculty Fellowship, and the NSF CAREER Award. She has served on several boards, including SIGACT, the Game Theory Society, and OneChronos, is an associate editor of Operations Research and Transactions on Economics and Computation, and has been a program committee member and chair for several ACM, IEEE, and INFORMS conferences in her area.
ABSTRACT
In this talk, we explore information design settings where the designer controls information about a state and multiple agents interacting in a game are privately informed about their types. Each agent’s utility depends on all agents’ types and actions, as well as (linearly) on the state. We characterize the structure of the optimal mechanisms and show that there always exists an optimal mechanism that is laminar partitional. We illustrate applications of the framework to two-sided markets and social networks. We also highlight the value of screening: without screening, the best achievable payoff could be as low as a 1/(number of types) fraction of the optimal payoff. Along the way, we shed light on solutions of optimization problems over distributions subject to a mean-preserving contraction constraint and additional side constraints, which may be of independent interest.
Watch recorded presentation on YouTube
SPEAKER
Ozan Candogan is a professor of Operations Management at Chicago Booth. He studies the impact of networks on operational decisions and develops new methodologies for the study of complex social and economic systems. His work has applications to the operations of online social networks, ride-sharing platforms, delivery platforms, two-sided marketplaces, supply chains, and online advertising platforms. Candogan was a finalist for the 2013 George Nicholson Student Paper Competition and the 2021 M&SOM Service Management SIG Prize. He received the 2009 Siebel Scholarship and the 2012 Microsoft Research PhD Fellowship.
ABSTRACT
We consider competitive online markets that exhibit externalities. Externalities capture the effect that agents may have on each other under different resource allocation strategies, and they can substantially affect allocation strategies and outcomes. Motivated by applications such as data placement in peer-to-peer networks and online assortment in digital advertising, in this talk we consider two online platforms and study the complexity and approximability of their optimal allocation strategies under various settings of homogeneous and heterogeneous buyers/resources. We show that these markets admit constant-competitive algorithms when either the resources or the buyers are homogeneous, but fail to admit any constant-competitive algorithm in the presence of resource/buyer heterogeneity. We also consider these markets in an offline setting and develop distributed learning dynamics to obtain their equilibrium points.
Watch recorded presentation on YouTube
SPEAKER
Rasoul Etesami is an Assistant Professor in the Department of Industrial and Systems Engineering at the University of Illinois at Urbana-Champaign, where he is also affiliated with the Coordinated Science Laboratory. Previously, he was a postdoctoral research fellow in the Department of Electrical Engineering at Princeton University and WINLAB. He received his PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, during which he was a research intern at Alcatel-Lucent Bell Labs. Etesami received the Best CSL PhD Thesis Award at UIUC in 2016, the Springer Outstanding PhD Thesis Award in 2017, an NSF CAREER Award in 2020, and multiple awards for excellence in teaching. His research interests include the analysis of complex social, communication, and decision-making systems using tools from control and game theory, optimization theory, and learning theory.
Day 3 (Wednesday, October 12)
ABSTRACT
The design of data markets has gained importance as firms increasingly use externally acquired data to inform their business decisions. When the firms are in competition, the acquisition of data by one firm exerts a negative externality on its competitors, an aspect that has received limited consideration so far. We formulate the problem facing a monopolistic data seller in such an environment as a mechanism design problem. The firms’ preferences for data, as well as the negative externalities, can be modeled either as intrinsic to the firms or as arising from the downstream competition. In both cases, we derive the welfare- and revenue-optimal mechanisms, showing that the presence of externalities couples the firms’ optimal allocations, despite the inherent free replicability of data, and increases the profitability of selling information exclusively to one of the firms.
Watch recorded presentation on YouTube
SPEAKER
Thibaut Horel is a Postdoctoral Associate in the Institute for Data, Systems and Society at the Massachusetts Institute of Technology working under the supervision of Professor Munther Dahleh. His research interests lie broadly in optimization, game theory, and theoretical computer science. He received his Ph.D. in Computer Science from Harvard University in 2019, advised by Professors Edoardo Airoldi and David Parkes, and an M.Sc. from École Normale Supérieure in 2012.
ABSTRACT
The growing adoption of data analytics platforms and machine learning-based solutions for decision-makers creates a significant demand for datasets, which explains the appearance of data markets. In a well-functioning data market, sellers share data in exchange for money, and buyers pay for datasets that help them solve problems. The market raises sufficient money to compensate sellers and incentivize them to keep sharing datasets, and this low-friction matching of sellers and buyers distributes the value of data among participants. But designing online data markets is challenging because they must account for the strategic behavior of participants. In this talk, I motivate one type of data market and then introduce techniques to protect it from strategic participants, combining these protection techniques into a pricing algorithm designed for trading data. The talk emphasizes the use of empirical methods to motivate and validate analytical designs, and it includes descriptions of user studies and extensive simulations. I conclude by summarizing some questions and opportunities at the intersection of empirical methods and data market design.
Watch recorded presentation on YouTube
SPEAKER
Raul Castro Fernandez is an Assistant Professor of Computer Science at the University of Chicago. He is interested in the economics and value of data. The goal of his research is to understand how to make the best use of data possible. For that, he often builds systems to share, discover, prepare, integrate, and process data and often uses techniques from data management, statistics, and machine learning. He is part of ChiData, the data systems research group at the University of Chicago.
ABSTRACT
The electric power grid is becoming increasingly reliant on data. In this talk, we first explore risk-sensitive electricity market design with data-driven market clearing that explicitly models power-delivery risks and the risk of system constraint violations. We then discuss the data-driven distributed solar hosting capacity problem, which accounts for these risks in electricity infrastructure planning. Risks in this talk are measured via the conditional value at risk (CVaR) measure, whose properties enable efficient, often customized, algorithm design for optimization and facilitate the definition of meaningful prices in market design contexts. We end with a discussion of various other problems in grid operation where data plays a pivotal role, ranging from learning customer demand preferences to stability analysis.
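For reference, conditional value at risk admits the Rockafellar-Uryasev representation CVaR_a(Z) = min over t of t + E[(Z - t)+]/(1 - a), which is what makes it easy to embed in convex optimization and market-clearing problems. A minimal sample-based sketch (the loss samples are illustrative):

import numpy as np

def cvar(losses, alpha=0.95):
    t = np.quantile(losses, alpha)                           # value at risk (VaR) at level alpha
    return t + np.mean(np.maximum(losses - t, 0)) / (1 - alpha)

losses = np.random.default_rng(0).normal(size=100_000)       # illustrative loss samples
print(cvar(losses, alpha=0.95))                              # about 2.06 for a standard normal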
Watch recorded presentation on YouTube
SPEAKER
Subhonmesh Bose is an Assistant Professor and Stanley Helm Fellow in the Department of Electrical and Computer Engineering at UIUC. His research focuses on facilitating the integration of renewable and distributed energy resources into the grid, leveraging tools from optimization, control, and game theory. Before joining UIUC, Bose was a postdoctoral fellow at the Atkinson Center for Sustainability at Cornell University. Prior to that, he received his MS and PhD degrees from the California Institute of Technology in 2012 and 2014, respectively. He received the NSF CAREER Award in 2021. His research projects have been supported by grants from NSF, PSERC, Siebel Energy Institute, and C3.ai DTI, among others.
VENUE
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign
Urbana, Illinois
LOCAL ACCOMMODATIONS
Hampton Inn Champaign/Urbana, 1200 West University Avenue, Urbana, Illinois 61801
TownePlace Suites Champaign Urbana/Campustown, 603 South Sixth Street, Champaign, Illinois 61820