I’m an Assistant Professor of Computer Science at Johns Hopkins, and a part-time Visiting Scientist at Abridge.
My research lies at the intersection of causality and machine learning, with the goal of enabling reliable decision-making and prediction in high-risk domains like healthcare. For more details, see the research overview below; you can also view my CV.
Previously, I was a postdoc at Carnegie Mellon University, working with Zack Lipton, and completed my PhD in Computer Science at MIT, working with David Sontag.
Research Overview
When can we rely on machine learning in high-risk domains like healthcare? In the long-term, we want machine learning systems to be as reliable as any FDA-approved medication or diagnostic test.
This goal is complicated by the need for causal reasoning and robust performance. To support decision-making, we want to draw causal conclusions about the impact of model recommendations (e.g., will recommending a particular drug lead to better patient outcomes?). Moreover, we want our models to perform well across different hospitals and patient populations, including those that differ from the hospitals and populations seen during model development.
These objectives complicate the development and validation of reliable models, as they run into limitations of what our data can tell us without further assumptions. For instance, we only observe outcomes for the treatments that were actually prescribed to patients, not all possible treatments. Similarly, we do not observe performance on every conceivable hospital where a model might be deployed, but only on the (typically much more limited) data we have access to.
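To make the first limitation concrete, here is a minimal sketch of inverse propensity weighting (IPW), a standard off-policy estimator (illustrative only, not code from any of the papers below; all data and variable names are synthetic). It evaluates a hypothetical "treat everyone" policy from logged observational data, and is only valid under strong assumptions such as no unmeasured confounding and overlap:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: one confounder x, a binary treatment a,
# and an outcome y. Sicker patients (higher x) are treated more often.
n = 10_000
x = rng.normal(size=n)
propensity = 1.0 / (1.0 + np.exp(-x))   # P(treated | x), known here
a = rng.binomial(1, propensity)         # treatment actually prescribed
y = x + a + rng.normal(size=n)          # true treatment effect is +1

# We only observe y under the prescribed treatment. To evaluate a new
# policy ("treat everyone"), reweight observed outcomes by the inverse
# probability of receiving the treatment the policy would have chosen.
treated = a == 1
ipw_value = np.mean(treated / propensity * y)

print(f"IPW estimate of 'treat everyone' value: {ipw_value:.2f}")       # ~1.0
print(f"Naive mean among the treated:           {y[treated].mean():.2f}")  # biased upward
```

The naive comparison is biased because treated patients differ systematically from untreated ones; the IPW estimate corrects for this, but only because the propensities are known here. In real retrospective data, the validity of such corrections rests on untestable assumptions, which is exactly what motivates the work below.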
I approach these challenges using tools from causality and statistics, often incorporating external knowledge into the process of model validation and design. External knowledge can come from a variety of sources, including human experts (e.g., clinicians) or gold-standard data (e.g., limited data from randomized trials). During my PhD, I pursued research along two complementary themes:
- Reliable causal inference and policy evaluation: Retrospective healthcare data is often used to learn better policies for treating disease when experimentation is infeasible. However, doing so requires strong causal assumptions, and not all policies can be reliably evaluated. This has motivated my work on methods that help domain experts assess the plausibility of causal models [ICML 2019, MS Thesis] and obtain interpretable characterizations of subpopulations where a given policy can be evaluated [AISTATS 2020]. I have also developed methods that incorporate limited clinical trial data [NeurIPS 2022, Preprint] to improve credibility.
- Robust, reliable prediction via (partial) causal knowledge: Causality is a useful lens for reasoning about how plausible changes in distribution will impact future model performance. In linear settings, I have developed methods for learning predictors with provably robust performance across changes in factors that are not directly observed (e.g., differences in the socioeconomic status of patients) [ICML 2021]. In more general settings, I have developed new ways for domain experts to express causal intuition about plausible changes (e.g., changes in clinical practice), evaluate the worst-case performance of models under those changes, and discover changes that result in poor performance [NeurIPS 2022]; see the sketch after this list.
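To give a flavor of the second theme, here is a generic stress test on synthetic data (a simplified illustration, not the parametric robustness sets method of [NeurIPS 2022]): reweight a validation set toward plausible shifted populations and recompute the loss, revealing how performance could degrade under a mean shift in a patient covariate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic validation set: one covariate z (e.g., a standardized patient
# attribute) and a 0/1 loss that is worse for patients with large z.
n = 5_000
z = rng.normal(size=n)
loss = (rng.random(n) < 0.10 + 0.15 * (z > 1.0)).astype(float)

def loss_under_mean_shift(delta):
    """Average loss if z were drawn from N(delta, 1) instead of N(0, 1),
    estimated by importance weighting with the exact density ratio."""
    w = np.exp(delta * z - 0.5 * delta**2)
    return np.average(loss, weights=w)

# Performance looks fine in-distribution but degrades as the population
# shifts toward the region where the model does poorly.
for delta in [0.0, 0.5, 1.0]:
    print(f"mean shift {delta:+.1f}: estimated loss = {loss_under_mean_shift(delta):.3f}")
```

Principled versions of this idea define an explicit set of plausible shifts (informed by domain experts) and search over it for the worst case, rather than checking a few hand-picked shifts as done here.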
The methodological problems I tackle are informed by my applied work with clinical collaborators, such as learning antibiotic treatment policies [Science Trans. Med. 2020] and debugging reinforcement-learning models for sepsis management [AMIA 2021].
Selected Publications (Full List)
Auditing Fairness under Unobserved Confounding
Emily Byun, Dylan Sam, Michael Oberst, Zachary Lipton, Bryan Wilder
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
[paper]
Evaluating Robustness to Dataset Shift via Parametric Robustness Sets
Nikolaj Thams*, Michael Oberst*, David Sontag
Neural Information Processing Systems (NeurIPS), 2022
[paper], [code] *Equal Contribution
Falsification before Extrapolation in Causal Effect Estimation
Zeshan Hussain*, Michael Oberst*, Ming-Chieh Shih*, David Sontag
Neural Information Processing Systems (NeurIPS), 2022
[paper] *Equal Contribution
Regularizing towards Causal Invariance: Linear Models with Proxies
Michael Oberst, Nikolaj Thams, Jonas Peters, David Sontag
International Conference on Machine Learning (ICML), 2021
[paper], [video], [slides], [poster], [code]
A Decision Algorithm to Promote Outpatient Antimicrobial Stewardship for Uncomplicated Urinary Tract Infection
Sanjat Kanjilal, Michael Oberst, Sooraj Boominathan, Helen Zhou, David C. Hooper, David Sontag
Science Translational Medicine, 2020
[article], [code], [dataset]
Selected Talks
Regularizing towards Causal Invariance: Linear Models with Proxies
Online Causal Inference Seminar
Stanford, March 29th, 2022
[video], [slides]
Primer: Learning Treatment Policies from Observational Data
Models, Inference, and Algorithms Seminar
Broad Institute, September 23rd, 2020
[video], [slides]
Reviewing
Conferences: AISTATS 2024, CLeaR 2024, NeurIPS 2023, NeurIPS 2022 (Top Reviewer), ICML 2022 (Top 10% of reviewers), NeurIPS 2021, UAI 2021 (Top 5% of reviewers), AISTATS 2019
Journals: Journal of Machine Learning Research, European Journal of Epidemiology, Journal of the Royal Statistical Society (Series A), Journal of Causal Inference, Statistics & Computing, Bayesian Analysis