CS379C: Computational Models of the Neocortex

Spring 2019

Table of Contents

    1  Project Suggestions
        1.1  Strange Loop Memories
        1.2  On Hippocampal Replay
        1.3  Differentiable Programs
        1.4  Two Streams Hypothesis
        1.5  Cognition in Cerebellum
        1.6  On Cortical Connections

    2  Neuroscience Resources

    3  Simulated Environments


1  Project Suggestions

1.1  Strange Loop Memories

In his book entitled I Am a Strange Loop, Douglas Hofstadter characterizes conscious awareness as a recursive process out of which emerges our sense of self. Think about how you would model the interplay between conscious awareness, as described by Dehaene [9] and Graziano [18], and episodic memory, as characterized in the class notes and implemented in the Differentiable Neural Computer [17]. Keep it simple. This exercise is first and foremost about taking these overloaded terms and reducing them to the simplest instantiation possible. As you work on the design, think about what emergent properties you should be able to observe in the behavior of such a system. For inspiration, watch Michael Graziano's lecture from last year's class and check out Will Schoder's clever retelling of Hofstadter's book, entitled You Are A Strange Loop, which achieves surprising weight and clarity in a scant twenty minutes.
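
As a concrete starting point, note that episodic recall in the DNC reduces to content-based addressing over a memory matrix. Below is a minimal sketch of that read operation in Python/NumPy; the slot count, embedding size, and sharpness parameter beta are illustrative assumptions rather than values taken from [17]:

    import numpy as np

    def cosine_similarity(M, k):
        """Similarity of a cue vector k to every row (memory slot) of M."""
        num = M @ k
        den = np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8
        return num / den

    def content_read(M, k, beta=5.0):
        """DNC-style content-based read: soft attention over memory rows."""
        scores = beta * cosine_similarity(M, k)
        w = np.exp(scores - scores.max())
        w /= w.sum()                     # read weighting over slots
        return w @ M                     # blended recollection

    # Toy episodic store: each row is the embedding of one "event".
    memory = np.random.randn(128, 32)
    cue = memory[7] + 0.1 * np.random.randn(32)   # noisy partial cue
    recalled = content_read(memory, cue)          # close to memory[7]

A "strange loop" enters when the recalled vector is fed back in as part of the next cue, so the system's reads are conditioned on its own prior reads; observing what stabilizes under that recursion is one candidate emergent property to look for.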


1.2  On Hippocampal Replay

One of the speakers in a recent event at Stanford — MBCT Symposium: Navigating the neuronal code for space and time — mentioned that "hippocampal differentiation is a mammalian invention" and that "its unique laminar structure is highly conserved". These comments only hint at what research on the hippocampus has revealed about this essential part of our brains in the last few years [13, 7, 33]. Much of the recent attention has focused on understanding the role of place cells1, grid cells2 and, more recently, time cells. The role of the human hippocampus in general cognition and episodic memory [12, 27] has been overshadowed by work on rodent models, which consists primarily of studying animals in contrived settings solving simple maze problems3. In this class, we focus on how these same mechanisms can support rich declarative and procedural recall and complex problem solving4. Projects include developing a model of episodic memory together with a simulated environment, and performing experiments designed to investigate how a simple agent might explore that environment, using its episodic memory to shape its understanding of the environment in pursuit of its goals.
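
To make the exploration project concrete, here is a minimal sketch of a tabular agent that records whole trajectories and learns from them offline by sweeping backwards through stored episodes, loosely analogous to reverse hippocampal replay. The class name EpisodicAgent, the learning rate, and the discount factor are hypothetical choices for illustration, not a prescribed design:

    import random
    from collections import defaultdict

    class EpisodicAgent:
        """Tabular agent that records whole episodes and replays them
        offline, loosely analogous to hippocampal reverse replay."""

        def __init__(self, actions, alpha=0.1, gamma=0.95):
            self.Q = defaultdict(float)   # (state, action) -> value
            self.episodes = []            # [[(state, action, reward), ...], ...]
            self.actions, self.alpha, self.gamma = actions, alpha, gamma

        def act(self, state, epsilon=0.1):
            if random.random() < epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.Q[(state, a)])

        def record(self, trajectory):
            self.episodes.append(trajectory)

        def replay(self, n=10):
            """Sweep backwards through sampled episodes (reverse replay)."""
            for traj in random.sample(self.episodes, min(n, len(self.episodes))):
                g = 0.0
                for state, action, reward in reversed(traj):
                    g = reward + self.gamma * g
                    key = (state, action)
                    self.Q[key] += self.alpha * (g - self.Q[key])

An experiment in this framing might compare how quickly the agent's behavior improves with and without calls to replay between episodes, mirroring the consolidation role proposed for replay in the papers listed in footnote 3.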


1.3  Differentiable Programs

In the first lecture, we introduced the idea of differentiable program emulation. Structured programs consisting of multiple procedures are represented using a differentiable neural computer (DNC) memory model such as a Neural Turing Machine [16, 38] that is partitioned to encode static programs, in the form of abstract syntax trees, and a dynamic run-time call stack to support program execution. Writing and debugging programs would be conducted using a variant of imagination-based planning, building on the work of Battaglia, Hamrick, Pascanu, Vinyals, Weber and their colleagues at DeepMind [19, 29, 37]. A project that encompassed a fully functional automated programming pipeline is well beyond what might be accomplished in a single quarter; however, it should be possible to carve up the problem into more tractable components. In particular, a system that learns to emulate simple recursive procedures would still be a significant achievement, one possible within the scope of a quarter by leveraging existing projects written in TensorFlow [1].
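
To give a flavor of what "differentiable execution" might mean at the smallest scale, here is a NumPy sketch in which the program counter is a distribution over instruction slots rather than an integer index, so that control flow remains differentiable. The flat memory layout and the advance-by-one shift operator are illustrative assumptions that greatly simplify the NTM/DNC machinery of [16, 38]:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Toy "program memory": each row embeds one instruction; in a fuller
    # design these rows would encode an abstract syntax tree.
    n_lines, d = 6, 16
    program = np.random.randn(n_lines, d)

    # A soft program counter is a distribution over lines rather than an
    # integer index, so "which instruction runs next" stays differentiable.
    pc = softmax(np.zeros(n_lines))              # start uniform
    shift = np.roll(np.eye(n_lines), 1, axis=0)  # advance-by-one operator

    for step in range(4):
        instr = pc @ program   # expected (blended) instruction embedding
        # ... a learned controller would consume instr and emit writes
        # to a run-time call stack here ...
        pc = shift @ pc        # differentiable "increment" of the counter

Because every operation above is a matrix product, gradients flow from a loss on the program's output back through the sequence of soft fetches, which is the property that makes learning to emulate simple recursive procedures conceivable.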


1.4  Two Streams Hypothesis

The two-streams hypothesis is a model of the neural processing of vision as well as hearing. The hypothesis argues that humans possess two distinct visual systems. Recently there seems to be evidence of two distinct auditory systems as well. As visual information exits the occipital lobe, and as sound leaves the phonological network, it follows two main pathways, or "streams". The ventral stream ("what pathway") is involved with object and visual identification and recognition. The dorsal stream ("where pathway") is involved with processing the object's spatial location relative to the viewer and with speech repetition — Excerpt from SOURCE. See David Poeppel — What Language Processing in the Brain Tells Us About the Structure of the Mind, Johns Hopkins University, February 5, 2018 [VIDEO] and Greg Hickok — The Dual Stream Model: Clarifications and Recent Progress, University of California, Irvine, January 19, 2017 [VIDEO]. Papers include Hickok and Poeppel [23], Rizzolatti and Rozzi [30], Hickok and Small [22] and Fernyhough and McCarthy-Jones [15, 14].
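
For modeling purposes, the hypothesis suggests an architecture with a shared early trunk that branches into separate "what" and "where" heads trained on different objectives. Here is a toy sketch using TensorFlow's Keras API; the layer sizes, the ten-way identity head, and the two-dimensional location head are arbitrary stand-ins, and nothing here is meant to capture the underlying anatomy beyond the branching topology:

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(64, 64, 3))            # "retinal" image
    trunk = tf.keras.layers.Conv2D(16, 5, activation="relu")(inputs)
    trunk = tf.keras.layers.MaxPooling2D()(trunk)         # shared early trunk

    # Ventral "what" branch: object identity.
    ventral = tf.keras.layers.Conv2D(32, 3, activation="relu")(trunk)
    ventral = tf.keras.layers.GlobalAveragePooling2D()(ventral)
    what = tf.keras.layers.Dense(10, activation="softmax", name="what")(ventral)

    # Dorsal "where" branch: location relative to the viewer.
    dorsal = tf.keras.layers.Conv2D(32, 3, activation="relu")(trunk)
    dorsal = tf.keras.layers.GlobalAveragePooling2D()(dorsal)
    where = tf.keras.layers.Dense(2, name="where")(dorsal)   # (x, y)

    model = tf.keras.Model(inputs, [what, where])
    model.compile(optimizer="adam",
                  loss={"what": "categorical_crossentropy", "where": "mse"})

A natural project question is whether training the two heads jointly induces stream-like specialization in the branches, and whether lesioning one branch produces deficits analogous to the dissociations reported in the patient literature.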


1.5  Cognition in Cerebellum

The intricate neuronal circuitry of the cerebellum is thought to encode internal models that reproduce the dynamic properties of body parts. These models are essential for controlling the movement of these body parts: they allow the brain to precisely control the movement without the need for sensory feedback. It is thought that the cerebellum might also encode internal models that reproduce the essential properties of mental representations in the cerebral cortex. This hypothesis suggests a possible mechanism by which intuition and implicit thought might function and explains some of the symptoms that are exhibited by psychiatric patients. This article examines the conceptual bases and experimental evidence for this hypothesis — Excerpt from [24]. See Beckinghausen and Sillitoe [4], Ito [25, 24] and Schmahmann [32] on the cerebellum's role in cognition and its relationship to action selection and episodic memory in the cerebral cortex.
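
The notion of an internal model can be made concrete as a forward model: a learned mapping from the current state and motor command to the predicted next state, which lets a controller act without waiting for sensory feedback. Here is a minimal NumPy sketch; the linear plant and least-squares fit are illustrative simplifications of whatever the cerebellar circuitry actually computes:

    import numpy as np

    np.random.seed(0)

    # Unknown plant dynamics: next_state = A s + B u (the "limb" to control).
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])

    # Fit a forward model W mapping [state; command] -> predicted next state.
    X, Y = [], []
    for _ in range(500):
        s, u = np.random.randn(2), np.random.randn(1)
        X.append(np.concatenate([s, u]))
        Y.append(A @ s + B @ u)
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

    # Predict the consequence of a motor command without sensory feedback.
    s, u = np.random.randn(2), np.random.randn(1)
    s_pred = np.concatenate([s, u]) @ W     # close to A @ s + B @ u

Ito's conjecture amounts to replacing the limb with a cortical representation: the same predict-without-feedback trick, applied to thoughts rather than joints.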


1.6  On Cortical Connections

Rodney Douglas and Kevan Martin [11, 10] and Thomson and Bannister [35] discuss the distribution of cortical connections forward and backward, why this matters, and how pervasive this pattern is across regions. There are many theories for why this organization appears in cortex: see Dean et al. [8] for commentary on several of the most controversial issues and Hawkins and his colleagues [21, 20] for one of many attempts to explain this organization. Still, there is much we don't know about the detailed cytoarchitecture of the cortex and the network characteristics of the cortical connectome. One potentially interesting project is to compare artificial neural network architectures with what is known about their biological counterparts and conduct synthetic ablation studies to assess induced deficits, perhaps comparing the results with deficits observed in patients suffering from congenital defects.
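
As a sketch of the synthetic-ablation idea, the following NumPy fragment silences a random subset of hidden units in a toy two-layer network and measures how much the output changes. In a real project the network would be trained on a task, the ablations targeted at specific "areas" or connection classes, and the deficit measured behaviorally; the sizes and the twenty-percent lesion fraction below are arbitrary:

    import numpy as np

    np.random.seed(1)

    def forward(x, W1, W2, mask=None):
        h = np.maximum(0.0, x @ W1)      # hidden "cortical" layer (ReLU)
        if mask is not None:
            h = h * mask                 # lesion: silence selected units
        return h @ W2

    W1, W2 = np.random.randn(20, 50), np.random.randn(50, 5)
    x = np.random.randn(100, 20)
    baseline = forward(x, W1, W2)

    # Ablate a random 20% of hidden units and measure the induced deficit.
    mask = (np.random.rand(50) > 0.2).astype(float)
    lesioned = forward(x, W1, W2, mask)
    deficit = np.mean((baseline - lesioned) ** 2)
    print("mean squared output change after ablation:", round(deficit, 3))
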


2  Neuroscience Resources

You can find a useful interactive 3-D model of the human brain here, an introduction to neuroanatomy for medical students here, and an overview aimed at the general public here. The amount of information available about the brain, and the cerebral cortex in particular, is overwhelming. Students have told me that they found the chapters in Kathleen Rockland's Axons and Brain Architecture [31] useful as a general reference. Each chapter is written by an expert in the field. For example, Chapter 6 [2] covers interareal connections of the macaque cortex and Chapter 9 [28] neuronal cell types in the neocortex. The material in this book should be available electronically through Stanford's online library resources, and the two chapters mentioned above should be open access. If you haven't taken an introductory course in neuroscience, the introductory text by Mark Bear, Barry Connors and Michael Paradiso [3] is worth the investment. Keep in mind that the research agendas followed by the likes of DeepMind, OpenAI, Numenta, Vicarious, and the Allen Institute for AI are often inspired by our understanding of the brain, but they vary substantially in just how much attention they pay to new discoveries in neuroscience. Multidisciplinary teams of engineers and scientists are the norm, not the exception, in this growing area of research and development.


3  Simulated Environments

There is a perception that all of our expectations for accelerating the evolution of AI systems are predicated on the availability of labeled data for supervised learning. While it is true that high-quality labeled data makes it easier to train artificial neural networks for some applications, simulated environments and other sources of less arduously acquired unsupervised and semi-supervised data are quickly becoming the best alternative for engineering systems that deal with complex, real-world problems. To compensate for the less focused nature of environmental data, training strategies like curriculum learning [5] and meta-reinforcement learning frameworks [36] that mimic aspects of human cognitive development provide better ways to bootstrap the learning of complex behaviors of the sort we expect to encounter as we tackle increasingly challenging problems. Here are some representative simulated environments for conducting experiments and generating training data for class projects; a minimal interaction sketch follows the list:

  1. DeepMind Control Suite described in Tassa et al [34] [GitHub] and [arXiv]

  2. OpenAI Gym described in Brockman et al [6] [GitHub] and [arXiv]

  3. Unity Machine Learning Agents Toolkit described in Juliani et al [26] [GitHub] and [arXiv]
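
For reference, here is the canonical interaction loop for OpenAI Gym [6] under the classic API current when this was written (reset returns an observation; step returns observation, reward, done, info); the environment name and the random policy are just placeholders:

    import gym

    # One episode with random actions; any registered environment works.
    env = gym.make("CartPole-v1")
    observation = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()               # random policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    env.close()
    print("episode return:", total_reward)

The DeepMind Control Suite and the Unity toolkit expose analogous observe-act-reward loops, so an agent written against this pattern ports across all three with modest glue code.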


References

[1]   Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. TensorFlow technical report, 2015.

[2]   J.C. Anderson and K.A.C. Martin. Chapter 6 - Interareal Connections of the Macaque Cortex: How Neocortex Talks to Itself. In Kathleen S. Rockland, editor, Axons and Brain Architecture, pages 117--134. Academic Press, San Diego, 2016.

[3]   Mark F. Bear, Barry Connors, and Michael Paradiso. Neuroscience: Exploring the Brain (Third Edition). Lippincott Williams & Wilkins, Baltimore, Maryland, 2006.

[4]   Jaclyn Beckinghausen and Roy V. Sillitoe. Insights into cerebellar development and connectivity. Neuroscience Letters, 688:2--13, 2019.

[5]   Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. CoRR, arXiv:1506.03099, 2015.

[6]   Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. CoRR, arXiv:1606.01540, 2016.

[7]   György Buzsáki and David Tingley. Space and time: The hippocampus as a sequence generator. Trends in Cognitive Sciences, 22(10):853--869, 2018.

[8]   Thomas Dean, Greg S. Corrado, and Jonathon Shlens. Three controversial hypotheses concerning computation in the primate cortex. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.

[9]   Stanislas Dehaene. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking Press, 2014.

[10]   Rodney J. Douglas and Kevan A.C. Martin. Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27(1):419--451, 2004.

[11]   Rodney J. Douglas and Kevan A.C. Martin. Recurrent neuronal circuits in the neocortex. Current Biology, 17:R496--R500, 2007.

[12]   Howard Eichenbaum. Hippocampus: Cognitive processes and neural representations that underlie declarative memory. Neuron, 44(1):109--120, 2004.

[13]   Howard Eichenbaum. Time cells in the hippocampus: A new dimension for mapping memories. Nature Reviews Neuroscience, 15:732--744, 2014.

[14]   Charles Fernyhough. The Voices Within: The History and Science of How We Talk to Ourselves. Basic Books, 2016.

[15]   Charles Fernyhough and Simon McCarthy-Jones. Thinking aloud about mental voices. In Fiona Macpherson and Dimitris Platchias, editors, Hallucination: Philosophy and Psychology. The MIT Press, 2013.

[16]   Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, arXiv:1410.5401, 2014.

[17]   Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdoménech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 538:471--476, 2016.

[18]   Michael Graziano. Consciousness and the Social Brain. Oxford University Press, New York, NY, 2013.

[19]   Jessica B. Hamrick, Andrew J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W. Battaglia. Metacontrol for adaptive imagination-based optimization. CoRR, arXiv:1705.02670, 2017.

[20]   Jeff Hawkins and Subutai Ahmad. Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in Neural Circuits, 10, 2016.

[21]   Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy, and Subutai Ahmad. A framework for intelligence and cortical function based on grid cells in the neocortex. Frontiers in Neural Circuits, 12:1--14, 2019.

[22]   G. Hickok and S.L. Small. Neurobiology of Language. Elsevier, 2015.

[23]   Gregory Hickok and David Poeppel. The cortical organization of speech processing. Nature Reviews Neuroscience, 8:393--402, 2007.

[24]   Masao Ito. Control of mental activities by internal models in the cerebellum. Nature Reviews Neuroscience, 9:304--313, 2008.

[25]   Masao Ito. The Cerebellum: Brain for an Implicit Self. Financial Times Press, 2012.

[26]   Arthur Juliani, Vincent-Pierre Berges, Esh Vckay, Yuan Gao, Hunter Henry, Marwan Mattar, and Danny Lange. Unity: A general platform for intelligent agents. CoRR, arXiv:1809.02627, 2018.

[27]   Dharshan Kumaran and Eleanor A. Maguire. The human hippocampus: Cognitive maps or relational memory? Journal of Neuroscience, 25(31):7254--7259, 2005.

[28]   Rajeevan T. Narayanan, Robert Egger, Christiaan P.J. de Kock, and Marcel Oberlaender. Chapter 9 - Neuronal Cell Types in the Neocortex. In Kathleen S. Rockland, editor, Axons and Brain Architecture, pages 183--202. Academic Press, San Diego, 2016.

[29]   Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sébastien Racanière, David P. Reichert, Theophane Weber, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch. CoRR, arXiv:1707.06170, 2017.

[30]   Giacomo Rizzolatti and Stefano Rozzi. Motor cortex and mirror system in monkeys and humans. In Gregory Hickok and Steven L. Small, editors, Neurobiology of Language, pages 59--72. Academic Press, San Diego, 2016.

[31]   Kathleen S. Rockland. Axons and Brain Architecture. Academic Press, San Diego, 2016.

[32]   Jeremy D. Schmahmann. The cerebellum and cognition. Neuroscience Letters, 688:62--75, 2019.

[33]   H. Supèr and H.B.M. Uylings. The early differentiation of the neocortex: A hypothesis on neocortical evolution. Cerebral Cortex, 11(12):1101--1109, 2001.

[34]   Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. DeepMind Control Suite. CoRR, arXiv:1801.00690, 2018.

[35]   Alex M. Thomson and A. Peter Bannister. Interlaminar connections in the neocortex. Cerebral Cortex, 13:5--14, 2003.

[36]   Jane X. Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Demis Hassabis, and Matthew Botvinick. Prefrontal cortex as a meta-reinforcement learning system. bioRxiv, 2018.

[37]   Theophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, and Daan Wierstra. Imagination-augmented agents for deep reinforcement learning. CoRR, arXiv:1707.06203, 2017.

[38]   Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, arXiv:1505.00521, 2015.


1 A place cell is a kind of pyramidal neuron within the hippocampus that becomes active when an animal enters a particular place in its environment; this place is known as the place field. A given place cell will have only one, or a few, place fields in a typical small laboratory environment, but more in a larger region. There is no apparent topography to the pattern of place fields, unlike other brain areas such as visual cortex—neighboring place cells are as likely to have nearby fields as distant ones. In a different environment, typically about half the place cells will still have place fields, but these will be in new places unrelated to their former locations. Place cells are thought, collectively, to act as a cognitive representation of a specific location in space, known as a cognitive map. (SOURCE)

2 Place cells mainly rely on set distal cues rather than cues in the immediate proximal environment. Movement can also be an important spatial cue. The ability of place cells to incorporate new movement information is called path integration, and it is important for keeping track of self-location during movement. Path integration is largely aided by grid cells, which are a type of neuron in the entorhinal cortex that relay information to place cells in the hippocampus. Grid cells establish a grid representation of a location, so that during movement place cells can fire according to their new location while orienting according to the reference grid of their external environment. (SOURCE)

3 Here are a few papers related to the function of the hippocampus in rodent models and the application of these ideas in designing agents:

@article{OlafsdottirCURRENT-BIOLOGY-18,
        title = {The Role of Hippocampal Replay in Memory and Planning},
       author = {H. Freyja \'{O}lafsd\'{o}ttir and Daniel Bush and Caswell Barry},
      journal = {Current Biology},
       volume = {28},
       number = {1},
        pages = {R37-R50},
         year = {2018},
     abstract = {The mammalian hippocampus is important for normal memory function, particularly memory for places and events. Place cells, neurons within the hippocampus that have spatial receptive fields, represent information about an animal's position. During periods of rest, but also during active task engagement, place cells spontaneously recapitulate past trajectories. Such 'replay' has been proposed as a mechanism necessary for a range of neurobiological functions, including systems memory consolidation, recall and spatial working memory, navigational planning, and reinforcement learning. Focusing mainly, but not exclusively, on work conducted in rodents, we describe the methodologies used to analyse replay and review evidence for its putative roles. We identify outstanding questions as well as apparent inconsistencies in existing data, making suggestions as to how these might be resolved. In particular, we find support for the involvement of replay in disparate processes, including the maintenance of hippocampal memories and decision making. We propose that the function of replay changes dynamically according to task demands placed on an organism and its current level of arousal.}
}
@article{PoulteretalCURRENT-BIOLOGY-18,
        title = {The Neurobiology of Mammalian Navigation},
       author = {Steven Poulter and Tom Hartley and Colin Lever},
      journal = {Current Biology},
       volume = {28},
       number = {17},
        pages = {R1023-R1042},
         year = {2018},
     abstract = {Mammals have evolved specialized brain systems to support efficient navigation within diverse habitats and over varied distances, but while navigational strategies and sensory mechanisms vary across species, core spatial components appear to be widely shared. This review presents common elements found in mammalian spatial mapping systems, focusing on the cells in the hippocampal formation representing orientational and locational spatial information, and 'core' mammalian hippocampal circuitry. Mammalian spatial mapping systems make use of both allothetic cues (space-defining cues in the external environment) and idiothetic cues (cues derived from self-motion). As examples of each cue type, we discuss: environmental boundaries, which control both orientational and locational neuronal activity and behaviour; and 'path integration', a process that allows the estimation of linear translation from velocity signals, thought to depend upon grid cells in the entorhinal cortex. Building cognitive maps entails sampling environments: we consider how the mapping system controls exploration to acquire spatial information, and how exploratory strategies may integrate idiothetic with allothetic information. We discuss how 'replay' may act to consolidate spatial maps, and simulate trajectories to aid navigational planning. Finally, we discuss grid cell models of vector navigation.}
}
@article{BaninoetalNATURE-18,
       author = {Banino, Andrea and Barry, Caswell and Uria, Benigno and Blundell, Charles and Lillicrap, Timothy and Mirowski, Piotr and Pritzel, Alexander and Chadwick, Martin J. and Degris, Thomas and Modayil, Joseph and Wayne, Greg and Soyer, Hubert and Viola, Fabio and Zhang, Brian and Goroshin, Ross and Rabinowitz, Neil and Pascanu, Razvan and Beattie, Charlie and Petersen, Stig and Sadik, Amir and Gaffney, Stephen and King, Helen and Kavukcuoglu, Koray and Hassabis, Demis and Hadsell, Raia and Kumaran, Dharshan},
        title = {Vector-based navigation using grid-like representations in artificial agents},
      journal = {Nature},
         year = {2018},
     abstract = {Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space, and is critical for integrating self-motion (path integration), and planning direct trajectories to goals (vector-based navigation). Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation, demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.},
}
@inproceedings{VinyalsetalNIPS-16,
       author = {Oriol Vinyals and Charles Blundell and Timothy P. Lillicrap and Koray Kavukcuoglu and Daan Wierstra},
        title = {Matching networks for one shot learning},
    booktitle = {Advances in Neural Information Processing Systems},
    publisher = {Curran Associates, Inc.},
          year = {2016},
        pages = {3630-3638},
     abstract = {Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6\% to 93.2\% and from 88.0\% to 93.8\% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.}
}
@article{WayneetalCoRR-18,
       author = {Greg Wayne and Chia-Chun Hung and David Amos and Mehdi Mirza and Arun Ahuja and Agnieszka Grabska-Barwinska and Jack Rae and Piotr Mirowski and Joel Z. Leibo and Adam Santoro and Mevlana Gemici and Malcolm Reynolds and Tim Harley and Josh Abramson and Shakir Mohamed and Danilo Rezende and David Saxton and Adam Cain and Chloe Hillier and David Silver and Koray Kavukcuoglu and Matt Botvinick and Demis Hassabis and Timothy Lillicrap},
        title = {Unsupervised Predictive Memory in a Goal-Directed Agent},
      journal = {CoRR},
       volume = {arXiv:1803.10760},
         year = {2018},
     abstract = {Animals execute goal-directed behaviours despite the limited range and scope of their sensors. To cope, they explore environments and store memories maintaining estimates of important information that is not presently available. Recently, progress has been made with artificial intelligence (AI) agents that learn to perform tasks from sensory input, even at a human level, by merging reinforcement learning (RL) algorithms with deep neural networks, and the excitement surrounding these results has led to the pursuit of related ideas as explanations of non-human animal learning. However, we demonstrate that contemporary RL algorithms struggle to solve simple tasks when enough information is concealed from the sensors of the agent, a property called "partial observability". An obvious requirement for handling partially observed tasks is access to extensive memory, but we show memory is not enough; it is critical that the right information be stored in the right format. We develop a model, the Memory, RL, and Inference Network (MERLIN), in which memory formation is guided by a process of predictive modeling. MERLIN facilitates the solution of tasks in 3D virtual reality environments for which partial observability is severe and memories must be maintained over long durations. Our model demonstrates a single learning agent architecture that can solve canonical behavioural tasks in psychology and neurobiology without strong simplifying assumptions about the dimensionality of sensory input or the duration of experiences.},
}
@article{KumaranandMaguireJoN-05,
       author = {Kumaran, Dharshan and Maguire, Eleanor A.},
        title = {The Human Hippocampus: Cognitive Maps or Relational Memory?},
      journal = {Journal of Neuroscience},
    publisher = {Society for Neuroscience},
       volume = {25},
       number = {31},
         year = {2005},
        pages = {7254-7259},
     abstract = {The hippocampus is widely accepted to play a pivotal role in memory. Two influential theories offer competing accounts of its fundamental operating mechanism. The cognitive map theory posits a special role in mapping large-scale space, whereas the relational theory argues it supports amodal relational processing. Here, we pit the two theories against each other using a novel paradigm in which the relational processing involved in navigating in a city was matched with similar navigational and relational processing demands in a nonspatial (social) domain. During functional magnetic resonance imaging, participants determined the optimal route either between friends{\textquoteright} homes or between the friends themselves using social connections. Separate brain networks were engaged preferentially during the two tasks, with hippocampal activation driven only by spatial relational processing. We conclude that the human hippocampus appears to have a bias toward the processing of spatial relationships, in accordance with the cognitive map theory. Our results both advance our understanding of the nature of the hippocampal contribution to memory and provide insights into how social networks are instantiated at the neural level.},
}
@article{EichenbaumNEURON-04,
        title = {Hippocampus: Cognitive Processes and Neural Representations that Underlie Declarative Memory},
       author = {Howard Eichenbaum},
      journal = {Neuron},
       volume = {44},
       number = {1},
        pages = {109-120},
         year = {2004},
     abstract = {The hippocampus serves a critical role in declarative memory—our capacity to recall everyday facts and events. Recent studies using functional brain imaging in humans and neuropsychological analyses of humans and animals with hippocampal damage have revealed some of the elemental cognitive processes mediated by the hippocampus. In addition, recent characterizations of neuronal firing patterns in behaving animals and humans have suggested how neural representations in the hippocampus underlie those elemental cognitive processes in the service of declarative memory.}
}

4 I've listed a few of the people I talked with at the MBCT Symposium. Magee and Giocomo were particularly interesting, both for their research and as contacts for subsequent collaboration. Lisa said that she would be interested in working with my class, and Jeff mentioned that right before the symposium he was in the Bahamas participating in a DeepMind offsite focused on the neural correlates of memory and on what DM could learn from cellular and molecular neuroscientists. He said they were supremely confident and refreshingly open to and curious about new ideas. Jeff's presentation at Stanford and his comments about interactions with DM gave me more confidence that we're on the right track, given our ongoing focus on hippocampal and basal ganglia systems for episodic memory and action selection respectively. Here are some of the speakers whose talks I found particularly relevant:

  • Jeffrey Magee HHMI Janelia — His presentation focused on a novel form of plasticity that produces predictive place fields; it appears to be a non-autonomous form of one-trial learning that allows experience to shape the CA1 representation. SOURCE

  • Jill Leutgeb — Her laboratory at UCSD combines high-density electrophysiology with behavioral testing, theoretical modeling, and pharmacological and molecular manipulations as a multidisciplinary approach to understanding the neural basis of cognition. SOURCE

  • Lisa Giocomo — Assistant Professor, Neurobiology at Stanford University with a focus on the hippocampus and functionally related nuclei including the entorhinal cortex. She received her Ph.D. in Neurophysiology from the Leibniz Institute for Neurobiology, Germany, and was a postdoctoral fellow at the Kavli Institute for Systems Neuroscience, Norway, in the lab of Nobel laureates May-Britt and Edvard Moser. SOURCE

  • Edvard Moser — Norwegian University of Science and Technology in Trondheim — Moser shared the Nobel Prize in Physiology or Medicine in 2014 with his then-wife May-Britt Moser and their mentor John O'Keefe for their discoveries of the cells that make up the brain's positioning system: place cells in O'Keefe's case and grid cells in the Mosers'. SOURCE

Here is a question I sent to Lisa and Jeff following up on a brief conversation we had at the symposium:

What would it mean for the EHC to reconstruct the original stimulus in the cortex? Presumably, the original stimulus is a pattern of activity corresponding to one or more component (sub)patterns in regions spread throughout the cortex. The EHC receives input from these regions via pyramidal neurons in the form of a compressed representation that is used to encode an index for subsequent retrieval in CA3; feedback from the EHC is then used to train the CA3-to-CA1 connections to reconstruct the original stimulus.

It seems plausible that the reciprocal connections from the EHC to the original sources of activity in the cortex could be trained via LTP to reproduce these cortical patterns. However, by the time the relevant activity has propagated from the cortex, stimulated the EHC, undergone pattern separation in the DG and pattern completion in CA3, and then cycled back to the cortex through the EHC's reciprocal connections, the cortex has presumably been innervated by additional sensory input.

This means that any attempt to reconstruct the original stimulus will have to accommodate the resulting changes in the sensory and motor cortex, and so the resulting reconstruction could — or perhaps should — reflect the differences in the ambient sensory soup, hopefully in a behaviorally coherent and potentially beneficial manner. This suggests a more creative reconstructive process that could be employed to imagine possible futures, engage in counterfactual reasoning, and so on.
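
The pattern-completion step in CA3 mentioned above is often modeled as an autoassociative network. The following NumPy sketch stores a few binary patterns Hopfield-style and recovers one of them from a degraded cue; the pattern count, network size, and thirty-percent corruption level are arbitrary illustrative choices, not claims about hippocampal parameters:

    import numpy as np

    np.random.seed(2)

    # Store a few binary (+1/-1) patterns in a Hopfield-style network,
    # a standard stand-in for CA3 autoassociative pattern completion.
    n = 100
    patterns = np.random.choice([-1.0, 1.0], size=(3, n))
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)

    # Cue with a degraded copy of pattern 0 (30% of bits flipped) ...
    cue = patterns[0].copy()
    flipped = np.random.choice(n, size=30, replace=False)
    cue[flipped] *= -1

    # ... and let the recurrent dynamics complete the pattern.
    s = cue
    for _ in range(10):
        s = np.sign(W @ s)
    overlap = (s @ patterns[0]) / n    # close to 1.0 when recovered
    print("overlap with stored pattern:", overlap)

In the terms of the question above, the interesting case is when the cue has drifted because new sensory input arrived during the loop: the completed pattern then blends what was stored with what is current, which is one way to read the "creative reconstruction" suggestion.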