Vision Lunch 2012

From VISTA LAB WIKI


2012 Schedule

December

THURSDAY 13 Justin Gardner, RIKEN Brain Science Institute, Japan

Cortical mechanisms in humans which improve perception with prior information

Prior information can improve human perception, yet little is known about the cortical mechanisms that make this possible. For example, ambiguous sensory information can be disambiguated by biasing perceptual estimates towards ones that are more likely. When trying to estimate the speed of a moving object at low contrast, a situation that occurs when it is foggy and hard to see, humans are known to bias their estimates of speed toward slow. This phenomenon has been interpreted as a prior for slow movement. While this perceptual behavior is well known, its cortical basis is not. We have conducted fMRI decoding experiments in humans and have found that these prior biases toward slow are represented in the same early sensory areas where sensory evidence is encoded. Priors can also improve performance by selecting out relevant sensory information. We have examined this by systematically changing the prior probability that a location contains task-relevant information. Using computational modeling that links cortical measurements made with fMRI and psychophysical measurements of improved behavioral sensitivity, we have concluded that spatial prior probability is not directly represented in early sensory cortex. Instead, we have found that a mechanism that pools sensory signals weighted by their magnitude can account for the tracking of spatial probability by behavior. Thus we have found that various forms of prior information can affect sensory processing for perception in early visual cortex. These cortical processes result in biasing of perceptual estimates of quantities like speed or in the selection of relevant sensory information.
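The slow-speed bias described in the abstract is commonly formalized as Bayesian estimation: at low contrast the sensory likelihood is broad, so a prior centered on slow speeds pulls the estimate down. A minimal illustrative sketch of that idea follows; it is not the speaker's analysis code, and the Gaussian forms and all parameter values are assumptions for illustration.

```python
import numpy as np

def map_speed_estimate(true_speed, likelihood_width, prior_mean=0.0, prior_width=2.0):
    """MAP speed estimate under a Gaussian likelihood (centered on the true
    speed) and a Gaussian prior favoring slow speeds (mean near zero).
    For Gaussians the posterior mode has a closed form: a precision-weighted
    average of the sensory measurement and the prior mean."""
    w_like = 1.0 / likelihood_width**2   # precision of the sensory measurement
    w_prior = 1.0 / prior_width**2       # precision of the slow-speed prior
    return (w_like * true_speed + w_prior * prior_mean) / (w_like + w_prior)

# High contrast: narrow likelihood, so the estimate stays near the true speed.
high_contrast = map_speed_estimate(10.0, likelihood_width=0.5)
# Low contrast (fog): broad likelihood, so the prior biases the estimate slow.
low_contrast = map_speed_estimate(10.0, likelihood_width=4.0)
```

Both estimates fall below the true speed of 10, but the low-contrast estimate is pulled much further toward zero, which is the qualitative signature of the slow-speed prior.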

November

14 Uri Hasson, Department of Psychology, Princeton University, Inter-subject functional connectivity: a new tool for exploring the mechanisms of dyadic social interactions
28 Yan Karklin, Center for Neural Science, New York University

Efficient coding as a theory of retinal computation

Efficient coding provides a powerful principle for explaining early sensory processing. Most attempts to test this principle have been limited to linear, noiseless models, and when applied to natural images, they yield localized oriented filters (e.g., Bell and Sejnowski, 1995). This is generally consistent with cortical representations, but fails to account for the most basic properties of early visual processing (the center-surround receptive fields of retinal ganglion cells, their tiling of visual space, and their nonlinear response properties). Assuming that the retina efficiently transmits information to the rest of the brain, how do we reconcile these results? What computational principles are necessary to explain the retinal code?

I will show that an efficient coding model that incorporates ingredients critical to biological computation -- input and output noise, nonlinear response functions, and a metabolic cost on the firing rate -- can predict the basic properties of retinal processing. Specifically, we develop numerical methods for simultaneously optimizing linear filters and response nonlinearities of a population of model neurons so as to maximize information transmission in the presence of noise and metabolic costs. We place no restrictions on the form of the linear filters, and assume only that the nonlinearities are monotonically increasing. In the case of vanishing noise, our method reduces to a generalized version of independent component analysis; training on natural image patches produces localized oriented filters and smooth nonlinearities. When the model includes biologically realistic levels of noise, the predicted filters are center-surround and the nonlinearities are rectifying, consistent with properties of retinal ganglion cells. The model yields two populations of neurons, with On- and Off-center responses, which independently tile the visual space, and even predicts an asymmetry observed in the primate retina: Off-center neurons are more numerous and have filters with smaller spatial extent.

(joint work with Eero Simoncelli)
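The trade-off sketched in the abstract above (maximize information transmitted by a noisy, rectifying neuron while paying a metabolic cost on firing) can be illustrated with a toy single-neuron search over the nonlinearity's threshold. This is only a conceptual sketch, not the authors' optimization method: the Laplace input model, the Gaussian-channel information proxy, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.laplace(size=50_000)                       # heavy-tailed, natural-signal-like input
x_noisy = x + 0.2 * rng.standard_normal(x.size)    # additive input noise
SIGMA_OUT = 0.3                                    # assumed output (spiking) noise std

def net_info(threshold, cost=0.5):
    """Gaussian-channel proxy for transmitted information, minus a metabolic
    cost on the mean firing rate, for a rectifying model neuron
    r = max(x_noisy - threshold, 0) observed through output noise."""
    r = np.maximum(x_noisy - threshold, 0.0)
    info_bits = 0.5 * np.log2(1.0 + r.var() / SIGMA_OUT**2)  # Gaussian approximation
    return info_bits - cost * r.mean()

# Crude grid search over the rectification threshold.
thresholds = np.linspace(-3.0, 3.0, 61)
scores = [net_info(t) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
```

A very low threshold transmits more of the input distribution but is penalized by a high mean rate, while a very high threshold is cheap but nearly silent, so the objective peaks at an intermediate, rectifying threshold. The paper's actual method jointly optimizes unconstrained linear filters and monotonic nonlinearities over a population; this sketch only conveys the shape of the cost function for one parameter.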

October

3 Aviv Mezer (Wandell Lab) and Moqian Tian (Grill-Spector Lab) Practice talks for SfN.

September

5 Moqian Tian, Grill-Spector Lab, Stanford University, will present the following article: Fischer E, Bülthoff HH, Logothetis NK, Bartels A. Human Areas V3A and V6 Compensate for Self-Induced Planar Visual Motion. Neuron, 22 March 2012 (Vol. 73, Issue 6, pp. 1228-1240).

12 Kelly Hennigan, Stanford Psychology, Distinct Dopamine pathways in the human brain

19 Netta Levin, Neurology Dept. Hadassah Hospital, Jerusalem

26 Moqian Tian, Grill-Spector Lab, Stanford University. Learning view-invariant object recognition: Exposure to multiple 2D views is sufficient for rapid learning.

August

1 Michal Ben-Shachar, English Department, Linguistics Division, and Gonda Brain Research Center, Bar-Ilan University
Diffusion imaging of dorsal and ventral language pathways in adults who stutter.

15 Russell Poldrack University of Texas at Austin
TBA

July

11 Stéfan van der Walt, Postdoctoral Fellow, Helen Wills Neuroscience Institute, UC Berkeley
Using spherical harmonic kernels to model fiber orientation distributions in diffusion MRI

18 Susana Chung
Professor, UC Berkeley, Optometry

25 Shlomo Bentin CANCELLED Professor of Psychology, Hebrew University, Jerusalem
Inter-hemispheric transfer of categorical visual information: When and Why

Tribute to Shlomo Bentin: Electrophysiological Studies of Face Perception in Humans. Bentin S, Allison T, Puce A, Perez E, McCarthy G. J Cogn Neurosci. 1996 Nov;8(6):551-565.

June

6 Mandel-Palanker
Stanford Visual Prosthetics

13 Scott Kolbe Postdoctoral Fellow
Department of Anatomy and Neuroscience, University of Melbourne. Recent paper.

20 Kalanit Grill-Spector
HBM Rehash

27 Cynthia Henderson

May

21 VSS Rehash

Informal presentation and discussions around the science presented at VSS.

2 VSS Practice day 1

Jon Winawer (Mini-symposium 25 mins, Wandell) "The fourth visual area: A question of human and macaque homology"

Ariel Rokem (Mini-symposium 25 mins, Wandell) "Cholinergic enhancement of perceptual learning in the human visual system"

Hiroshi Horiguchi (Poster, Wandell)


9 VSS Practice day 2

Nick Davidenko (Poster, Grill-Spector) "Parametric face-to-hand transformations reveal shape-tuned representations in human high-level visual cortex."

Nathan Witthoft (Grill-Spector)

Cătălin Iordan (15 mins, Fei-Fei Li) "Neural representations of object categories at multiple taxonomic levels."

Chris Baldassano (15 mins, Fei-Fei Li) "Neural Representation of Human-Object Interactions."

23 VSS rehash

April

18 Henrik Ehrsson, Karolinska Institutet, Sweden - Hosted by A. Wagner Example of his work: Body size and perception

25 Emily Cooper, University of California at Berkeley - Hosted by Joyce Farrell The Perceptual Basis of Common Photographic Practice

March

28 Jon Winawer
Paper discussion: Local Visual Energy Mechanisms Revealed by Detection of Global Patterns (Morgenstern et al., J Neurosci)

21 Karl Zilles & Katrin Amunts
Title: Is cyto- and receptor architecture important for understanding the brain?

7 Yanxi Liu, PhD, Penn State University
Computational Symmetry

February

22 Josef Parvizi, MD, PhD, Stanford University
The number form area

15 No vision lunch

8 No vision lunch

1 Nicolas Davidenko, PhD, Stanford University
Face perception: from stimulus spaces to mental spaces

January

18 Kevin Weiner, PhD, Stanford University
Leading the discussion on: Saygin ZM et al., (2011) Anatomical connectivity patterns predict face selectivity in the fusiform gyrus. Nature Neuroscience. http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.3001.html

4 Ken Nakayama, PhD, Harvard University
Subjective Contours: proving ground for perceptual and neural theory
