I am broadly interested in how we can leverage structural assumptions about data-generating processes to make flexible machine learning models generalize beyond the observed distribution of their training data. To this end, I have worked on using deep learning for causal inference and on designing deep network architectures for permutation-invariant data. Since starting at Mila, I have been focusing on learning representations with identifiability guarantees.
Weakly Supervised Representation Learning with Sparse Perturbations, 2022.
Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning. In International Conference on Learning Representations (joint first author; spotlight presentation), 2022.
Valid Causal Inference with (Some) Invalid Instruments. In Proceedings of the 38th International Conference on Machine Learning, 2021.
Deep IV: A Flexible Approach for Counterfactual Prediction. In Proceedings of the 34th International Conference on Machine Learning, 2017.