About me
I am a PhD candidate at the University of Southern California, advised by Prof. Aram Galstyan and Prof. Greg Ver Steeg. I do both applied and theoretical research on deep learning, often from an information-theoretic perspective. My main research directions are (a) studying the information stored in neural network weights or activations and its connections to generalization, memorization, stability, and learning dynamics; and (b) representation learning aimed at enriching learned representations with useful properties, such as minimality, disentanglement, modularity, and reduced synergy. More broadly, I am interested in generalization under domain shift, unsupervised/self-supervised learning, the generalization phenomenon of deep neural networks, and estimation/approximation of information-theoretic quantities or their alternatives.
News
- [May 16, 2022] Started a summer internship at Google Research, New York. Will be working with Ankit Singh Rawat and Aditya Menon.
- [March 2, 2022] Our work “Failure Modes of Domain Generalization Algorithms” was accepted to CVPR 2022.
- [Sept. 28, 2021] Our work “Information-theoretic generalization bounds for black-box learning algorithms” was accepted to NeurIPS 2021.
- [May 17, 2021] Started a summer internship at the AWS Custom Labels team. Will be working with Alessandro Achille and Avinash Ravichandran.
- [Jan. 12, 2021] Our work “Estimating informativeness of samples with Smooth Unique Information” was accepted to ICLR 2021.
- [Oct. 20, 2020] Received a free NeurIPS 2020 registration for being among the top 10% of high-scoring reviewers.
- [June 3, 2020] Our work “Improving generalization by controlling label-noise information in neural network weights” was accepted to ICML 2020.
- [May 18, 2020] Started a summer internship at the AWS Custom Labels team. Going to work with Alessandro Achille, Avinash Ravichandran, and Orchid Majumder!
Publications and preprints

Failure Modes of Domain Generalization Algorithms
arXiv preprint [arXiv, code 1, code 2, bibTeX]

Information-theoretic generalization bounds for black-box learning algorithms
NeurIPS 2021 [arXiv, code, bibTeX]

Estimating informativeness of samples with smooth unique information
ICLR 2021 [arXiv, code, bibTeX]

Improving generalization by controlling label-noise information in neural network weights
ICML 2020 [arXiv, code, bibTeX]

Fast structure learning with modular regularization
NeurIPS 2019 [arXiv, code, bibTeX]

Efficient Covariance Estimation from Temporal Data
arXiv preprint [arXiv, code, bibTeX]

MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing
ICML 2019 [arXiv, code, bibTeX]

Multitask learning and benchmarking with clinical time series data
Scientific Data, 6(1), 96 [arXiv, code, bibTeX]

Disentangled representations via synergy minimization
Allerton 2017 [arXiv, bibTeX]