Experiment source code and LaTeX source for "Avoiding Pathologies in Very Deep Networks" (http://arxiv.org/pdf/1402.5836.pdf)
Choosing appropriate architectures and regularization strategies for deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.
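As a quick illustration of the composition the abstract describes, here is a minimal, hypothetical Python sketch (not the experiment code in this repository) that samples a one-dimensional deep GP layer by layer with a squared-exponential kernel. The helper name `sample_gp`, the kernel parameters, and the depth of 5 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gp(inputs, lengthscale=1.0, variance=1.0, jitter=1e-6):
    """Jointly sample a GP with a squared-exponential kernel at 1-D inputs."""
    d = inputs[:, None] - inputs[None, :]
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2)
    K += jitter * np.eye(len(inputs))  # jitter keeps the Cholesky stable
    return np.linalg.cholesky(K) @ rng.standard_normal(len(inputs))

x = np.linspace(-5.0, 5.0, 300)
h = x.copy()
for layer in range(5):
    # Each layer warps the current representation through a fresh GP draw,
    # giving one sample from the composition f_5(f_4(...f_1(x))).
    h = sample_gp(h)

# With standard architectures, deep draws like h tend to be flat almost
# everywhere, changing in only a few places -- the loss of degrees of
# freedom with depth that the paper analyzes.
```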
This paper appeared at the 2014 International Conference on Artificial Intelligence and Statistics (AISTATS), held in Reykjavik, Iceland.
Authors: David Duvenaud, Oren Rippel, Ryan P. Adams, and Zoubin Ghahramani
Feel free to email me with any questions at dduvenaud@seas.harvard.edu.