About Me
I am an incoming Ph.D. student at Princeton University. Before joining Princeton, I studied at Peking University (PKU), where I majored in applied mathematics and pursued a double major in computer science. I was fortunate to be advised by Professor Liwei Wang on the theory of machine learning. My research interests lie in theory that can inspire better algorithms. In the summer of 2019, I spent a wonderful time at MIT as a research intern supervised by Professor Sasha Rakhlin. I am also fortunate to work with Professor Jason D. Lee.
News
- Two papers accepted to NeurIPS 2020!
- Graduated from PKU.
Publications
(NeurIPS 2020) Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Jingtong Su*, Yihang Chen*, Tianle Cai*, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee
Highlight: We sanity-check several existing pruning methods and find that the performance of a large group of methods relies only on the per-layer pruning ratios. This finding inspires us to design an efficient, data-independent, training-free pruning method as a byproduct (an illustrative sketch follows this entry).
[Code]
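To make the idea concrete, here is a minimal, hedged sketch (not the paper's released implementation) of what data-independent, per-layer random pruning can look like in PyTorch: each layer is pruned by randomly zeroing a fraction of its weights, so only the per-layer pruning ratio matters. The model, layer names, and ratios below are hypothetical.

```python
# Illustrative sketch only: prune each layer by randomly zeroing a fraction of
# its weights, using nothing but the per-layer pruning ratio -- no data and no
# training are required.
import torch
import torch.nn as nn


def random_prune_(model: nn.Module, ratios: dict):
    """In-place random pruning. `ratios` maps layer name -> fraction to remove."""
    with torch.no_grad():
        for name, module in model.named_modules():
            if name in ratios and hasattr(module, "weight"):
                w = module.weight
                num_prune = int(ratios[name] * w.numel())
                # Pick a random subset of weight entries and zero them out.
                idx = torch.randperm(w.numel())[:num_prune]
                w.view(-1)[idx] = 0.0


# Hypothetical example: a small MLP with made-up per-layer pruning ratios.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
random_prune_(model, {"0": 0.9, "2": 0.5})
print((model[0].weight == 0).float().mean())  # roughly 0.9 of the first layer is zeroed
```

In practice one would also keep the binary mask and re-apply it after each optimizer step so that pruned weights stay zero during subsequent training.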
(Preprint) GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training
Tianle Cai*, Shengjie Luo*, Keyulu Xu, Di He, Tie-Yan Liu, Liwei Wang
Highlight: A principled normalization scheme designed specifically for graph neural networks. It achieves state-of-the-art results on several graph classification benchmarks.
[Code], [Third-party implementation in Microsoft's ptgnn library (thanks to MS for the very quick reaction and implementation!)]
(NeurIPS 2020) Locally Differentially Private (Contextual) Bandits Learning
Kai Zheng, Tianle Cai, Weiran Huang, Zhenguo Li, Liwei Wang
Highlight: A simple black-box reduction framework that improves the regret bounds for locally differentially private bandits.
[Code]
(NeurIPS 2019 Spotlight, 2.4% acceptance rate) Convergence of Adversarial Training in Overparametrized Networks
Ruiqi Gao*, Tianle Cai*, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee
[Slides]
Highlight: For overparameterized neural networks, we prove that adversarial training converges to a global minimum (with loss 0).
(NeurIPS 2019 Beyond First Order Method in ML Workshop) Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
Tianle Cai*, Ruiqi Gao*, Jikai Hou*, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Liwei Wang
Highlight: A provable second-order optimization method for overparameterized networks on regression problems! Each iteration is as cheap as SGD, yet it converges much faster than SGD in real-world applications.
(Preprint) Adversarially Robust Generalization Just Requires More Unlabeled Data
Runtian Zhai*, Tianle Cai*, Di He, Chen Dan, Kun He, John Hopcroft, Liwei Wang
Highlight: Although robust generalization needs more data, we show through both theory and experiments that additional unlabeled data alone is enough!
Talks
Towards Understanding Optimization of Deep Learning at IJTCS [slides] [video]
A Gram-Gauss-Newton Method Learning Overparameterized Deep Neural Networks for Regression Problems at the PKU Machine Learning Workshop [slides]
Experience
- Visiting Research Student at Simons Institute, UC Berkeley
- Program: Foundations of Deep Learning
- June, 2019 - July, 2019
- Visiting Research Intern at MIT
- Advisor: Professor Sasha Rakhlin
- June, 2019 - Sept., 2019
- Visiting Research Student at Princeton
- Host: Professor Jason D. Lee
- Sept., 2019 - Oct., 2019