CS379C: Computational Models of the Neocortex

Spring 2018

Proposal Components:


Your proposal should address each of the following elements:

  • What is the problem? Are you trying to confirm an existing theory by testing a computational model or running a set of experiments? Are you trying to replicate some previously published results? Do you have a new model or hypothesis to test? Are you demonstrating how a cognitively inspired model performs on a given dataset? Perhaps you are going to make a careful review of the literature on a given subject and propose a new computational model or a synthesis of existing models. Or perhaps you’re going to take some existing implementation and run it on a new or altered dataset in an effort to explore its failure modes. What sort of behavior do you expect and why is the new or altered dataset likely to reveal interesting behavior? Be explicit about the problem you want to address. You may not know exactly how to solve the problem yet, but you should be able to state the problem clearly.

  • How will you address the problem? Will you write code and run it on a set of existing benchmarks, comparing your performance with published results? Will you compare two or more algorithms on a dataset meant to contrast the algorithms relative to their cognitive plausibility? If you plan to do a literature survey, who are the relevant researchers and what journals and other reference materials will you employ? If you are not sufficiently familiar with the area, do you have access to a mentor or colleague who is? Where appropriate, tell me how you will obtain the necessary resources: existing code and datasets, access to relevant expert consultants, etc. Convince me that you’ll be able to acquire these resources in a timely manner. If this is a team project, tell me how you will divide the effort.

  • How does your project address computational and cognitive issues? In short, why does this project make sense as a project for a class on neural network architectures inspired by research in cognitive and systems neuroscience? Is your computational model cognitively plausible? Or, conversely, can you instantiate your cognitively plausible theory in a working program? Are you making a claim about what humans are capable of (or not), or about how humans perform particular tasks? How will you substantiate such claims? Why does your project make sense in the context of the papers we’ve read — or are on the suggested reading list — and the material we’ve covered in class?

  • How much time do you expect this project will take you? Be realistic. If you’ll be building on someone else’s code base, check it out and make sure that you’re not getting in over your head. If you are planning to write new code, then give me some idea of how much new code you think will be required. Wherever possible, borrow code and data from others; don’t underestimate how difficult it is to collect data or convert existing data into a form that will work with your code. If you’re planning on a theoretical exercise, provide a first approximation of the bibliography that you’re going to work your way through in surveying the relevant literature.

Limit your proposal to a maximum of two pages; if you write much less than one page — 11 point type, one inch margins — then you probably haven’t provided enough detail for me to evaluate. Send your proposal to me at tld [at] google [dot] com by the end of the day on Monday, May 7, 2018. You are encouraged to run your idea by me before then: make an appointment with me, send me a quick sketch by email, or catch me after class.

Example Project Proposal

Here’s a very rough sketch of a reasonable project for someone who has read the papers by Pinto et al [7] and Lee et al [5] on hierarchical models and isn’t afraid to grab Honglak Lee’s Matlab code and experiment with it:

  • Problem description: The unsupervised training step in the Pinto et al work used a technique due to Földiák [3] to exploit video in the learning of invariants. The Lee et al work achieves translation invariance by sharing the weights of convolution kernels. We propose to adapt the Lee et al objective function used for unsupervised learning of convolution kernels in a single layer to operate on two or more frames from a video sequence. Include a sketch of how you propose to modify the objective function here. Also summarize what you expect to achieve by making these changes. For this requirement you can reconstitute the arguments made in the Pinto et al paper or provide your own rationale.
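To make the kind of objective-function modification concrete, here is a toy NumPy sketch of the general idea: a per-frame unsupervised term plus a temporal-coherence penalty that encourages feature maps of consecutive video frames to change slowly, in the spirit of Földiák’s trace rule. Everything here is illustrative — the function names are invented, and the simple sparsity term stands in for the actual Lee et al objective, which you would need to work out from their paper and code.

```python
import numpy as np

def feature_maps(frame, kernels):
    """Toy 'valid' convolution of one grayscale frame with a bank of kernels.
    (An illustrative stand-in for the real unsupervised feature extractor.)"""
    kh, kw = kernels.shape[1:]
    H, W = frame.shape
    out = np.empty((len(kernels), H - kh + 1, W - kw + 1))
    for k, kern in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(frame[i:i + kh, j:j + kw] * kern)
    return out

def temporal_objective(frames, kernels, lam=0.1):
    """Per-frame unsupervised loss plus a temporal-coherence penalty.
    Identical consecutive frames incur zero coherence penalty."""
    maps = [feature_maps(f, kernels) for f in frames]
    # Stand-in unsupervised term: mean activation magnitude (sparsity-like).
    sparsity = sum(np.abs(m).mean() for m in maps) / len(maps)
    # Temporal coherence: penalize frame-to-frame change in the feature maps.
    coherence = sum(np.mean((a - b) ** 2)
                    for a, b in zip(maps, maps[1:])) / max(len(maps) - 1, 1)
    return sparsity + lam * coherence

# Toy usage: four random "frames" and four random 5x5 kernels.
rng = np.random.default_rng(0)
frames = [rng.standard_normal((16, 16)) for _ in range(4)]
kernels = rng.standard_normal((4, 5, 5))
print(temporal_objective(frames, kernels))
```

The point of the sketch is only the shape of the objective: minimizing it with respect to the kernels trades off whatever per-frame criterion you adopt against slowness of the learned features over time, with lam controlling the trade-off.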

  • Methods and Results: In addition to obtaining Honglak’s code you’ll need datasets for training — video data — and testing — labeled still images. Selecting datasets that have a good chance of demonstrating the phenomenon or tradeoff that you are interested in is an art. I’ll bet that David Cox would share his training videos and you could obtain his test datasets from the web sites listed in the PLoS paper [7]. Alternatively, you might get some useful suggestions from Andrew Ng’s students regarding the data that they use in their experiments, which would have the added benefit that you wouldn’t have to bother David. But don’t be shy about asking David or other researchers for data or code; like Newton, you too can stand on the shoulders of giants. 1

  • Scientific Rationale: Read Földiák [3] or one of the more recent papers on slow feature analysis [1, 4, 8, 9] and summarize the basic biological rationale. You might also take a quick look at any of the listed readings that address the question of invariance in biological vision such as the papers of Bruno Olshausen, e.g., [6] or the recent paper by DiCarlo and Cox [2].

  • Estimated Effort: Break it down into pieces. How much time to familiarize yourself with Honglak’s code? How much time required to get an appropriate set of training videos and your benchmark image-classification test dataset? How about the time required to experiment running the code, tuning parameters, etc., in order to have some chance of getting some decent results? Finally, you will need some time to document what you’ve done in a short write-up that expands on your problem description and includes your results and conclusions.
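For those unfamiliar with the slow-feature-analysis idea mentioned under Scientific Rationale, the core computation in the linear case is small enough to sketch: whiten the signal, then find the directions whose projections change most slowly over time (the smallest eigenvectors of the covariance of the temporal differences). This toy NumPy illustration is not Wiskott and Sejnowski’s full algorithm [9], which also expands the input nonlinearly:

```python
import numpy as np

def slow_feature_analysis(x, n_features):
    """Linear SFA: find the directions of slowest variation in a time series.

    x: array of shape (T, D) -- T time steps, D input dimensions.
    Returns projections of shape (T, n_features), slowest first.
    """
    # 1. Center the signal.
    x = x - x.mean(axis=0)
    # 2. Whiten: decorrelate and scale each direction to unit variance.
    cov = x.T @ x / len(x)
    eigval, eigvec = np.linalg.eigh(cov)
    z = x @ (eigvec / np.sqrt(eigval))
    # 3. Temporal derivative, approximated by finite differences.
    z_dot = np.diff(z, axis=0)
    # 4. Slowest directions = eigenvectors of the derivative covariance
    #    with the smallest eigenvalues (eigh returns ascending order).
    dcov = z_dot.T @ z_dot / len(z_dot)
    _, dvec = np.linalg.eigh(dcov)
    return z @ dvec[:, :n_features]

# Toy demo: recover a slow sinusoid linearly mixed with fast noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 1000)
slow = np.sin(t)
fast = rng.standard_normal((1000, 3))
x = np.column_stack([slow, fast]) @ rng.standard_normal((4, 4))
y = slow_feature_analysis(x, 1)
# The recovered feature should correlate strongly (up to sign) with slow.
print(abs(np.corrcoef(y[:, 0], slow)[0, 1]))
```

The biological rationale the bullet asks you to summarize is exactly what this demo exercises: features that vary slowly under the transformations present in natural image sequences tend to be the behaviorally meaningful ones (object identity rather than pixel values).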

References

[1]   Pietro Berkes and Laurenz Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6):579–602, 2005.

[2]   James J. DiCarlo and David D. Cox. Untangling invariant object recognition. Trends in Cognitive Sciences, 11(8):333–341, 2007.

[3]   P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3:194–200, 1991.

[4]   Aapo Hyvärinen, Jarmo Hurri, and Jaakko Väyrynen. Bubbles: a unifying framework for low-level statistical properties of natural image sequences. Journal of the Optical Society of America, 20(7):1237–1252, 2003.

[5]   Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML ’09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616, New York, NY, 2009. ACM.

[6]   B. A. Olshausen and D. J. Field. Natural image statistics and efficient coding. Network: Computation in Neural Systems, 7(2):333–339, 1996.

[7]   Nicolas Pinto, David Doukhan, James DiCarlo, and David Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, November 2009.

[8]   Laurenz Wiskott. How does our visual system achieve shift and size invariance? In J. L. van Hemmen and T. J. Sejnowski, editors, Problems in Systems Neuroscience. Oxford University Press, 2003.

[9]   Laurenz Wiskott and Terrence Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.


1 In a letter to Robert Hooke in 1676, Isaac Newton wrote “If I have seen further, it is only by standing on the shoulders of giants.” Some historians have interpreted this as a clever slight to Hooke by the prickly and easily threatened Newton.