I am a Machine Learning Ph.D. student in the School of Computer Science at Carnegie Mellon University, where I am advised by Nina Balcan and Steven Wu, and supported by the NDSEG Fellowship.
During the summer of 2023, I was a research intern with the Economics and Computation team at Microsoft Research - New England, where I was fortunate to work with Alex Slivkins.
I obtained a Master of Science in Machine Learning from Carnegie Mellon in 2022.
Before coming to CMU, I graduated from Penn State University with Bachelor of Science degrees in Computer Science and
I am broadly interested in topics at the intersection of machine learning, algorithmic game theory, and econometrics.
My goal is to develop principled algorithms for modern data-driven decision-making.
To this end, my research has been focused on three (often interleaved) areas:
- Learning and decision-making using panel data: In settings such as e-commerce, content recommendation systems, and clinical trials,
one often observes repeated, noisy measurements from a collection of individuals over a period of time. Such settings are ubiquitous in today's
digital world and are often referred to as panel data settings.
I am interested in leveraging the temporal structure and similarity between individuals typically present in panel data
to design better algorithms for decision-making in real-world settings. The leap from counterfactual inference to decision-making
introduces additional challenges that must be taken into account. To this end, my research has focused on learning from
adaptively collected panel data and on learning and decision-making in the presence of strategic individuals.
- Algorithmic decision-making under incentives: When algorithmic assessment tools are used in high-stakes domains such as
lending, education, or employment, decision subjects have an incentive to modify their input to the algorithm in
order to receive a more desirable outcome. As a result, machine learning systems deployed in these settings must take
such strategic interactions into account in order to make reliable predictions (and decisions).
To this end, I have investigated the effects of partial feedback, incomplete information, causal learning, and repeated interactions in the
domain of algorithmic decision-making under incentives.
- Learning in non-stationary environments: While classical machine learning focuses on learning a single model for a single task from a stationary distribution,
the real world is often non-stationary and requires reasoning about many different-but-related tasks, often with only a small amount of data from each.
For example, in online advertising auctions, an advertiser's value for different keywords shifts with current marketing trends.
In online marketplaces, the price consumers are willing to pay for different products varies with the current economic outlook.
My research in this area has focused on developing meta-learning algorithms for decision-making with provable performance guarantees,
capable of operating under partial feedback and in the presence of other strategic agents.
- January 2024: Talks at the Multi-Agent Learning Seminar, CMU Economics, Columbia, and Google NYC.
- December 2023: One paper accepted to SIGMETRICS.
- October 2023: I gave talks at INFORMS, Drexel, and Penn.
- September 2023: Three papers accepted to NeurIPS.
- September 2023: I gave a talk at the CMU AI Seminar.
- September 2023: I gave talks at the University of Miami's Computer Science and Industrial & Systems Engineering Seminars.
- May 2023: I gave a talk at the Online Causal Inference Seminar.
- May 2023: I gave a talk at the Simons TOC for Fairness Seminar.
- February 2023: I will be interning with the Economics & Computation team at Microsoft Research this summer.
- February 2023: I was recognized as a top reviewer for AISTATS 2023.
- January 2023: One paper accepted to ICLR.
- December 2022: I received my Master's degree in Machine Learning.
- September 2022: One paper accepted to NeurIPS.
- August 2022: I am co-organizing the SIGecom Seminar Series for the fall 2022 semester.
- May 2022: One paper accepted to ICML.
- April 2022: I was awarded the NDSEG Fellowship.
- March 2022: Our work was selected for two oral presentations at the Learning with Strategic Agents Workshop at AAMAS 2022.
- December 2021: I attended NeurIPS (virtually) and gave an oral presentation at the WHMD workshop.
- October 2021: Our work on Bayesian Persuasion for Algorithmic Recourse was selected for an oral presentation at the Workshop on Human and Machine Decisions at NeurIPS 2021.
- September 2021: One paper accepted to NeurIPS!
- August 2021: I wrote a blog post for CMU's Machine Learning Blog.
- July 2021: Our work on Strategic Instrumental Variable Regression was selected for a spotlight presentation at the Algorithmic Recourse Workshop at ICML 2021.
- June 2021: I presented our work on Stateful Strategic Regression at FORC 2021.