Projects

Probabilistic Framework for Deep Learning. The recent success of deep learning is impressive: such systems now routinely achieve near- or super-human performance on pattern recognition tasks. Yet a fundamental question remains: Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model, a generative probabilistic model that explicitly captures variation due to latent nuisance variables. The graphical structure of the model enables it to be learned from data using the classical EM algorithm. Furthermore, by relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems: deep convolutional networks (DCNs) and random decision forests (RDFs). Using this framework, we develop insights into their successes and shortcomings, as well as a principled route to their improvement. Please check out our paper for more details.
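To make the EM connection concrete, the following is a minimal NumPy sketch of hard EM for a toy shallow rendering model, where the latent nuisance g is a circular shift of a class template. It is illustrative only, not the implementation from our paper; render, hard_em, and the shift-based nuisance are assumptions made for the example.

```python
import numpy as np

def render(template, shift):
    """Render a class template under a nuisance transformation (here: a circular shift)."""
    return np.roll(template, shift)

def hard_em(X, n_classes, shifts, n_iters=20, rng=np.random.default_rng(0)):
    """Toy hard EM for a shallow rendering model:
    x = shift(mu_c, g) + noise, with latent class c and latent nuisance g."""
    D = X.shape[1]
    mu = rng.normal(size=(n_classes, D))  # class templates
    for _ in range(n_iters):
        # E-step: jointly pick the best-explaining class c and nuisance g
        # for each sample (max-marginalization over the latent variables).
        assign = []
        for x in X:
            errs = [(np.sum((x - render(mu[c], g)) ** 2), c, g)
                    for c in range(n_classes) for g in shifts]
            _, c, g = min(errs)
            assign.append((c, g))
        # M-step: update each template from its aligned (un-shifted) samples.
        for c in range(n_classes):
            aligned = [np.roll(x, -g) for x, (cc, g) in zip(X, assign) if cc == c]
            if aligned:
                mu[c] = np.mean(aligned, axis=0)
    return mu, assign

# Illustrative run: recover templates from synthetic shifted, noisy renderings
# (with circular shifts, templates are identifiable only up to a global shift).
rng = np.random.default_rng(1)
true_mu = np.stack([np.sin(np.linspace(0, 2 * np.pi, 32)),
                    np.sign(np.sin(np.linspace(0, 4 * np.pi, 32)))])
X = np.stack([render(true_mu[rng.integers(2)], rng.integers(32)) +
              0.1 * rng.normal(size=32) for _ in range(200)])
mu, _ = hard_em(X, n_classes=2, shifts=range(32))
```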

Cortically-Inspired Network Architectures for Vision & Reverse-Engineering Neural Plasticity Rules. Working with neuroscientists, we are reverse-engineering coarse-grained cortical architectural motifs and the myriad neural plasticity rules that have substantial empirical support. We plan to use these architectures and modules in a deep network in order to learn to solve hard perceptual tasks, such as action recognition from video.
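As a concrete example of the kind of plasticity rule we mean, here is a minimal NumPy sketch of Oja's rule, a classical Hebbian learning rule with built-in weight normalization. It is purely illustrative and not one of the specific cortical rules under study.

```python
import numpy as np

def oja_update(w, x, lr=1e-2):
    """One step of Oja's rule: dw = lr * y * (x - y * w),
    a Hebbian update with implicit weight normalization."""
    y = w @ x                                    # postsynaptic activity (linear neuron)
    return w + lr * y * (x - y * w)

# Illustrative run: under Oja's rule, the weight vector converges toward
# the top principal component of the input distribution.
rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0], [1.0, 1.0]])           # input covariance
L = np.linalg.cholesky(C)
w = rng.normal(size=2)
for _ in range(5000):
    x = L @ rng.normal(size=2)                   # zero-mean correlated input
    w = oja_update(w, x)
print(w / np.linalg.norm(w))                     # ~ leading eigenvector of C
```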

Artificial Neuroscience on Large-scale Trained Neural Nets. Using state-of-the-art trained networks for object recognition and character-level language modeling, we are systematically probing these architectures to elucidate the low-level mechanisms by which they accomplish their tasks. A key element of this approach is the strong interaction between theory and in silico experiments.
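One basic probing technique is to record per-layer activations of a trained network in response to controlled stimuli. Below is a minimal sketch using PyTorch forward hooks on a pretrained object-recognition net (assuming torchvision >= 0.13 for the weights API); the choice of model, probed layers, and statistics are illustrative assumptions, not our full methodology.

```python
import torch
import torchvision.models as models

# Load a pretrained object-recognition net and record per-layer activations
# with forward hooks, so individual units can be probed and analyzed.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in net.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(save_activation(name))

with torch.no_grad():
    net(torch.randn(1, 3, 224, 224))             # a probe stimulus

for name, act in activations.items():
    print(name, tuple(act.shape), float(act.mean()))
```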

Event-driven Representations for RNNs. Inspired by the retina and cortex, we are developing a new class of representations and RNNs that learn events, defined as meaningful changes in the input. Event-driven representations offer many computational and representational benefits, including higher throughput, lower latency, and sparsity.
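To illustrate the idea, here is a minimal sketch of an event-driven encoder that emits an event only when the input changes by more than a threshold since the last emitted event. The threshold and the (t, delta) event format are illustrative assumptions, not our actual representation.

```python
import numpy as np

def to_events(signal, threshold=0.1):
    """Convert a dense signal into sparse events: emit (t, delta) only when
    the input has changed meaningfully since the last emitted event."""
    events, ref = [], signal[0]
    for t, x in enumerate(signal[1:], start=1):
        delta = x - ref
        if abs(delta) >= threshold:              # a "meaningful change"
            events.append((t, delta))
            ref = x                              # update the reference level
    return events

# A slowly varying signal yields few events: high sparsity, and each event
# can be processed as soon as it occurs (low latency).
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 2 * t)
events = to_events(signal, threshold=0.1)
print(f"{len(events)} events from {len(signal)} samples")
```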
