Thesis Information
Title: Mathematics of Deep Learning
Adviser: Amitabh Basu
Institution: Johns Hopkins University
Graduation Date: January 2019
Candidate Bio:
I did my undergraduate studies in physics at the Chennai Mathematical Institute (CMI), India, and my master's in theoretical physics at the Tata Institute of Fundamental Research (TIFR), India. After a few years of research in Quantum Field Theory, I found my true calling in mathematics. I am now completing my Ph.D. in applied mathematics at Johns Hopkins University (JHU) under the guidance of Prof. Amitabh Basu. During my Ph.D. I became intrigued by the ongoing rise of deep learning and went on to prove a number of theorems about the neural function space and popular deep-learning algorithms.
Paper 1:
Title: Understanding Deep Neural Networks with Rectified Linear Units
Authors: Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee
Venue: International Conference on Learning Representations (ICLR)
Date: 2018
Paper 2:
Title: Sparse Coding and Autoencoders
Authors: Akshay Rangamani, Anirbit Mukherjee, Amitabh Basu, Ashish Arora, Tejaswini Ganapathi, Sang (Peter) Chin, Trac D. Tran
Venue: IEEE International Symposium on Information Theory (ISIT)
Date: 2018
Paper 3:
Title: Improving PAC-Bayes bounds for neural networks using geometric properties of the training method
Authors: Anirbit Mukherjee, Pushpendre Rastogi, Daniel M. Roy, Jun Yang
Venue: ICML Workshop on Understanding and Improving Generalization in Deep Learning
Date: 2019
Keywords: Deep Learning, Neural Networks, Learning Theory, High-Dimensional Probability, Sparse Representations