# About me

I am a PhD candidate at the University of Southern California, advised by Prof. Aram Galstyan and Prof. Greg Ver Steeg. I do both applied and theoretical research on deep learning, often from an information-theoretic perspective. My main research directions are (a) studying the information stored in neural network weights or activations and its connections to generalization, memorization, stability, and learning dynamics; and (b) representation learning, with the goal of enriching learned representations with useful properties such as minimality, disentanglement, modularity, and reduced synergy. More broadly, I am interested in generalization under domain shift, unsupervised/self-supervised learning, the generalization phenomenon in deep neural networks, and the estimation/approximation of information-theoretic quantities or their alternatives.

## News

- **[Aug 3, 2022]** Our work “Formal limitations of sample-wise information-theoretic generalization bounds” was accepted to the 2022 IEEE Information Theory Workshop.
- **[May 16, 2022]** Started a summer internship at Google Research, New York. I will be working with Ankit Singh Rawat and Aditya Menon.
- **[March 2, 2022]** Our work “Failure Modes of Domain Generalization Algorithms” was accepted to CVPR 2022.
- **[Sept. 28, 2021]** Our work “Information-theoretic generalization bounds for black-box learning algorithms” was accepted to NeurIPS 2021.
- **[May 17, 2021]** Started a summer internship on the AWS Custom Labels team. I will be working with Alessandro Achille and Avinash Ravichandran.
- **[Jan. 12, 2021]** Our work “Estimating informativeness of samples with Smooth Unique Information” was accepted to ICLR 2021.

## Publications and preprints

*Hrayr Harutyunyan*, Greg Ver Steeg, Aram Galstyan

**Formal limitations of sample-wise information-theoretic generalization bounds**

IEEE Information Theory Workshop 2022 [arXiv, bibTeX]

…a *single* training example. However, these sample-wise bounds were derived only for the *expected* generalization gap. We show that even for the expected *squared* generalization gap, no such sample-wise information-theoretic bounds exist. The same is true for PAC-Bayes and single-draw bounds. Remarkably, PAC-Bayes, single-draw, and expected squared generalization gap bounds that depend on information in pairs of examples do exist.
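For readers unfamiliar with this line of work, a representative sample-wise bound of the kind the abstract refers to has the following shape (a sketch in illustrative notation, not a formula taken from the paper itself):

```latex
% Sketch: a typical sample-wise information-theoretic bound.
% For a training set S = (Z_1, ..., Z_n), learned weights W, and a
% sigma-subgaussian loss, the *expected* generalization gap is
% controlled by per-example mutual information terms I(W; Z_i):
\[
\left| \mathbb{E}\!\left[ L(W) - L_S(W) \right] \right|
  \;\le\; \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^{2}\, I(W; Z_i)}.
\]
% The result above says no analogue of this per-example form can hold
% for the expected squared gap; bounds of that kind must instead
% depend on information in pairs of examples (Z_i, Z_j).
```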

*Hrayr Harutyunyan*, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan

**Failure Modes of Domain Generalization Algorithms**

CVPR 2022 [arXiv, code 1 2, bibTeX]

*Hrayr Harutyunyan*, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan

**Information-theoretic generalization bounds for black-box learning algorithms**

NeurIPS 2021 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

**Estimating informativeness of samples with smooth unique information**

ICLR 2021 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Kyle Reing, Greg Ver Steeg, Aram Galstyan

**Improving generalization by controlling label-noise information in neural network weights**

ICML 2020 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Daniel Moyer, Aram Galstyan

**Fast structure learning with modular regularization**

NeurIPS 2019 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Daniel Moyer, Hrant Khachatrian, Greg Ver Steeg, Aram Galstyan

**Efficient Covariance Estimation from Temporal Data**

arXiv preprint [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Nazanin Alipourfard, Kristina Lerman, Greg Ver Steeg, Aram Galstyan

**Mixhop: Higher-order graph convolution architectures via sparsified neighborhood mixing**

ICML 2019 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Hrant Khachatrian, David Kale, Greg Ver Steeg, Aram Galstyan

**Multitask learning and benchmarking with clinical time series data**

Scientific Data, 6(1), 96 [arXiv, code, bibTeX]

*Hrayr Harutyunyan*, Aram Galstyan

**Disentangled representations via synergy minimization**

Allerton 2017 [arXiv, bibTeX]