Peter Latham
Understanding Connectivity in Cerebellar-like Networks Using the Inductive Bias
A fundamental learning problem is generalisation: using past experience to inform current choices. We study neural network models, in which many aspects of the learner, such as the learning rule and the nonlinearities, affect generalisation. In this work we focus on initial connectivity, and how it shapes the data on which the learner generalises well.
We use two tools to extract, from the architecture, the data on which the learner performs well. The first is a recently developed approach from machine learning theory that provides a complete description, but only for a very constrained class of learners. The second is a meta-learning tool we have developed that works, in principle, on any differentiable supervised learner, but comes with fewer guarantees.
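The abstract does not name the first tool; one widely used theory of this kind is the neural tangent kernel (NTK), which, for a constrained class of networks (very wide, trained by gradient descent), fully characterises which target functions are learned easily. As a minimal sketch under that assumed reading (the network, its size, and all parameters below are illustrative, not the authors' model), one can compute the empirical NTK at the initial connectivity and read off the easy-to-learn functions as its top kernel eigenvectors:

import jax
import jax.numpy as jnp

def init_params(key, n_in=10, n_hidden=256, n_out=1):
    k1, k2 = jax.random.split(key)
    # Initial connectivity: the object whose inductive bias we probe.
    return {"W1": jax.random.normal(k1, (n_hidden, n_in)) / jnp.sqrt(n_in),
            "W2": jax.random.normal(k2, (n_out, n_hidden)) / jnp.sqrt(n_hidden)}

def f(params, x):
    return params["W2"] @ jnp.tanh(params["W1"] @ x)

def empirical_ntk(params, X):
    # Theta[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>, evaluated at initialisation.
    def jac_flat(x):
        j = jax.jacobian(f)(params, x)  # pytree of per-parameter Jacobians
        return jnp.concatenate([leaf.reshape(-1)
                                for leaf in jax.tree_util.tree_leaves(j)])
    J = jax.vmap(jac_flat)(X)           # (n_samples, n_params)
    return J @ J.T

key = jax.random.PRNGKey(0)
params = init_params(key)
X = jax.random.normal(key, (100, 10))   # sample of inputs
Theta = empirical_ntk(params, X)
# Top eigenvectors of Theta are the target functions this initial
# connectivity makes easiest to learn under gradient descent.
eigvals, eigvecs = jnp.linalg.eigh(Theta)

Changing the initialisation (e.g. its sparsity or correlation structure) changes Theta, and hence which functions sit in the top eigenspace; this is one concrete sense in which initial connectivity determines what generalises well.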
Using these two tools, we examine the effect of connectivity in cerebellar-like networks, in particular the cerebellum and the mushroom body. Recent electron-microscopy studies have found hard-wired connectivity patterns that appear to be established independently of activity. We use our tools to give a normative interpretation of these connectomic patterns: roughly, they make some tasks easier to learn than others. The link between these easier-to-learn tasks and naturally pertinent tasks or data distributions would be interesting to explore in the future. We hope this illustrates how these tools can be used to understand detailed biological data, such as connectomics, in normative terms.
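As a hedged illustration of the kind of analysis this suggests (the wiring rule and all numbers below are assumptions made for the sketch, loosely inspired by the cerebellar granule-cell layer, where each granule cell receives roughly four mossy-fibre inputs), one can fix a sparse, activity-independent expansion layer and ask which tasks a linear readout from it finds easy:

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
n_mf, n_gc, K = 50, 2000, 4   # mossy fibres, granule cells, in-degree (illustrative)

# Hard-wired, activity-independent expansion: each granule cell samples
# K mossy fibres at random, with fixed weights.
k1, k2, k3 = jax.random.split(key, 3)
pre = jax.vmap(lambda k: jax.random.choice(k, n_mf, (K,), replace=False))(
    jax.random.split(k1, n_gc))           # (n_gc, K) presynaptic indices
w = jax.random.normal(k2, (n_gc, K))      # fixed synaptic weights

def granule_layer(x, theta=1.0):
    # Rectified sparse expansion; theta sets the coding level (sparseness).
    return jnp.maximum(jnp.sum(w * x[pre], axis=1) - theta, 0.0)

X = jax.random.normal(k3, (200, n_mf))    # sample of input patterns
H = jax.vmap(granule_layer)(X)            # (200, n_gc) granule responses

# Kernel induced by the fixed wiring: a linear readout (the Purkinje-cell
# analogue) learns most easily the tasks aligned with its top eigenvectors.
Gram = H @ H.T / n_gc
spectrum = jnp.linalg.eigvalsh(Gram)[::-1]

Comparing the kernel spectrum under different wiring rules (e.g. random versus structured presynaptic sampling) is one way to make "this connectivity makes some tasks easier to learn than others" quantitative.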