# About me

I am a research scientist at Google Research. I obtained my Ph.D. in Computer Science from the University of Southern California, where I was fortunate to be advised by Aram Galstyan and Greg Ver Steeg. Prior to that, I received my M.S. and B.S. degrees in Applied Mathematics and Computer Science from Yerevan State University.

I do both applied and theoretical research on various aspects of deep learning, often from an information-theoretic perspective. My main research direction studies the information stored in neural network weights or activations and its connections to generalization, memorization, stability, and learning dynamics. More broadly, I am interested in learning theory, generalization under domain shift, unsupervised/self-supervised representation learning, and the generalization phenomenon of deep neural networks.

## Updates

- **[July 17, 2023]** Excited to share that I have joined Google Research NYC as a research scientist.
- **[June 16, 2023]** I graduated from USC with a Ph.D. in Computer Science!
- **[Jan 21, 2023]** Our work “Supervision Complexity and its Role in Knowledge Distillation” was accepted to ICLR 2023.
- **[Jan 11, 2023]** I was invited to the Rising Stars in AI Symposium 2023 at KAUST in Saudi Arabia (Feb. 19-21).
- **[Aug 3, 2022]** Our work “Formal limitations of sample-wise information-theoretic generalization bounds” was accepted to the 2022 IEEE Information Theory Workshop.
- **[May 16, 2022]** Started a summer internship at Google Research, New York, working with Ankit Singh Rawat and Aditya Menon.
- **[March 2, 2022]** Our work “Failure Modes of Domain Generalization Algorithms” was accepted to CVPR 2022.
- **[Sept. 28, 2021]** Our work “Information-theoretic generalization bounds for black-box learning algorithms” was accepted to NeurIPS 2021.

## Publications and preprints

*Hrayr Harutyunyan*, Ankit Singh Rawat, Aditya Krishna Menon, Seungyeon Kim, Sanjiv Kumar

**Supervision Complexity and its Role in Knowledge Distillation**

ICLR 2023 [paper, bibTeX]

*Hrayr Harutyunyan*, Greg Ver Steeg, Aram Galstyan

**Formal limitations of sample-wise information-theoretic generalization bounds**

IEEE Information Theory Workshop 2022 [arXiv, bibTeX]

Some of the tightest information-theoretic generalization bounds depend on the average information between the learned hypothesis and a *single* training example. However, these sample-wise bounds were derived only for the *expected* generalization gap. We show that even for the expected *squared* generalization gap, no such sample-wise information-theoretic bounds exist. The same is true for PAC-Bayes and single-draw bounds. Remarkably, PAC-Bayes, single-draw, and expected squared generalization gap bounds that depend on information in pairs of examples do exist.
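For context, a representative sample-wise bound of the kind referred to above (a sketch in the style of Bu, Zou & Veeravalli, 2020; the notation here is an assumption, not taken from the paper) controls the expected generalization gap of a hypothesis \(W\) trained on examples \(Z_1,\dots,Z_n\) with a \(\sigma\)-subgaussian loss via per-example mutual information:

```latex
% Expected generalization gap bounded by per-sample mutual information
% (sketch; sigma-subgaussian loss assumed)
\left| \mathbb{E}\,\mathrm{gen}(W, S) \right|
  \le \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^2 \, I(W; Z_i)}
```

The paper's negative result says that no bound of this sample-wise form can control the expected *squared* generalization gap, whereas bounds that depend on information in pairs of examples can.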

*Hrayr Harutyunyan*, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan

**Failure Modes of Domain Generalization Algorithms**

CVPR 2022 [arXiv, code 1, code 2, bibTeX]

*Hrayr Harutyunyan*, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan

**Information-theoretic generalization bounds for black-box learning algorithms**

NeurIPS 2021 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

**Estimating informativeness of samples with smooth unique information**

ICLR 2021 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Kyle Reing, Greg Ver Steeg, Aram Galstyan

**Improving generalization by controlling label-noise information in neural network weights**

ICML 2020 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Daniel Moyer, Aram Galstyan

**Fast structure learning with modular regularization**

NeurIPS 2019 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Daniel Moyer, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan

**Efficient Covariance Estimation from Temporal Data**

arXiv preprint [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Nazanin Alipourfard, Kristina Lerman, Greg Ver Steeg, Aram Galstyan

**MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing**

ICML 2019 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Hrant Khachatrian, David Kale, Greg Ver Steeg, Aram Galstyan

**Multitask learning and benchmarking with clinical time series data**

Scientific Data 6(1): 96 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Aram Galstyan

**Disentangled representations via synergy minimization**

Allerton 2017 [arXiv, bibTeX]