Talks

Invited Talk: “A Function Space Characterization of the Learning Dynamics of Gradient Descent.” EE Seminar Series, SEAS Dept., Harvard University. November 15, 2019. Video.

Invited Talk: “Deep Learning with Domain-Specific Knowledge.” 2018 Chinese-American Kavli Frontiers of Science Symposium, US and Chinese National Academies of Sciences. Nanjing, China, October 19-21, 2018.

Invited Talk: “Deep Convnets from First Principles: Generative Models, Dynamic Programming and EM.” Center for Theoretical Biological Physics, Rice University, Houston, TX, February 27, 2018.

Invited Talk: “A Probabilistic Framework for Deep Learning: Understanding Convnets and Moving Beyond.” Stats 385 course taught by David Donoho, Stanford University. October 2017. Talk Highlights.

“A Probabilistic Framework for Deep Learning: Understanding Convnets and Moving Beyond.” Google Cloud AI Group & Amazon Research, October 2017.

Invited Talk: “A Probabilistic Framework for Deep Learning: Understanding Convnets and Moving Beyond.” Center for Theoretical Neuroscience, Columbia University. New York, NY, February 17, 2017.

“A Probabilistic Framework for Deep Learning: Understanding Convnets and Moving Beyond.” Seminar Series, Simons Institute. New York, NY, February 16, 2017.

Invited Talk: “Beyond Convnets: The Next-Generation of Highly Scalable Architectures and Unsupervised Learning Algorithms.” Google Seminar Series on Deep Learning. Mountain View, California, August 18, 2016.

“A Probabilistic Theory of Deep Learning: How and Why Deep Convnets Work.” Information Theory & Applications (ITA) Workshop. San Diego, California, February 5, 2016.

“A Probabilistic Theory of Deep Learning: Or How I Learned to Love Neural Nets.” NIPS Workshop on Multi-scale Learning. Montreal, Canada, December 2015. (Due to illness, the talk was given by Richard G. Baraniuk.)

Invited Talk: “A Probabilistic Theory of Deep Learning: Applications to Computational Neuroscience.” CBCL Seminar, Tomaso Poggio Lab, Brain and Cognitive Sciences Dept., MIT. October 2015.

“How and Why Deep Learning Works: Applications to Computational Neuroscience.” Jim DiCarlo Lab, Brain and Cognitive Sciences Dept., MIT. October 2015.

Invited Talk: “How and Why Deep Learning Works.” ISS Seminar Series, SEAS Dept., Harvard University. October 2015.

Invited Talk: “A Tutorial on Deep Learning: Why Does it Work?” International Conference on Computational Photography. Held at Rice University. April 25, 2015.

Workshops & Institutes

Frank Noé, Ankit B. Patel, Alán Aspuru-Guzik, Katya Scheinberg, Ruth Urner. “From Passive to Active: Generative and Reinforcement Learning with Physics.” Workshop, part of the long program “Machine Learning for Physics and the Physics of Learning” at the Institute for Pure and Applied Mathematics (IPAM), UCLA.

Richard G. Baraniuk, Ankit B. Patel, Anima Anandkumar, Stéphane Mallat, Nhật Hồ (2018). “Integration of Deep Learning Theories.” NIPS Workshop, Conference on Neural Information Processing Systems, 2018.

Accepted to: “Physics of Hearing: From Neurobiology to Information Theory and Back.” Kavli Institute for Theoretical Physics (KITP), 2018.