Vision Lunch 2011





2011 Schedule

November

2 Golijeh Golarai, PhD, Stanford
SfN practice talk: Distributed responses in the fusiform gyrus are modulated by the age of the face stimuli

30 Justin Ales, PhD, Stanford
Presents the following article: Burge, J. and Geisler, W.S., Optimal defocus estimation in individual natural images. PNAS 2011

October

5 Summer Sheremata, PhD, UC Berkeley
Motion Selectivity and Attentional Modulation in Visuotopic Parietal Cortex

12 Thomas Wiegand, PhD, Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut
Towards a Direct Measure of Video Quality Perception using EEG

19 Kalanit Grill-Spector, PhD, Stanford University

26 Kendrick Kay and Jason Yeatman, Stanford University
SfN practice talks.
K.K.: Compressive spatial summation improves models of extrastriate responses

J.Y.: Identifying biological signatures of occipital white matter pathways with quantitative MR methods.

September

28 Jason Fischer, UC Berkeley
The role of attention in cortical and subcortical spatial mapping.

August

3 Stefano Baldassi, PhD, Visiting Scholar, Stanford University
Spatio-temporal remapping of peri-saccadic perceptive fields

10 Hiromasa Takemura, University of Tokyo
Neural correlates of induced motion perception: an fMRI study


24 Nolan Nichols, University of Washington Integrated Brain Imaging Center
Leading a discussion on the Scalable Neuroimaging Initiative (SNI) to facilitate neuroimaging data access and sharing.

31 Ariel Rokem, PhD, Wandell Lab, Stanford University
Nitime: time-series analysis for neuroscience data with Python

July

6 Rainer Goebel: Extended discussion, led by Rainer.

13 OPEN (Wandell away)

20 Dora Hermes, Parvizi Lab
The relation between neuronal population electrophysiology and the BOLD response: a comparison of ECoG and fMRI in motor, language and visual areas.

27 Amit Etkin, Psychiatry department (tentative): TMS-fMRI

June

1 Anna Ma-Wyatt: 12:15-1:15pm

Anna is from the University of Adelaide. She studies how visual position information is used over time to plan and execute eye and hand movements.
Coordinating eye and hand movements
People use goal-directed movements to pick up objects or make a cup of tea and perform other tasks essential to daily living. When a target appears in the peripheral visual field, a saccade is usually deployed just before a rapid hand movement, and eye and hand landing locations are temporally and spatially correlated. What benefit does this coordination of eye and hand confer on rapid reaching? We investigated how visual information about the target location is used to drive eye-hand coordination in normally sighted observers and patients with central visual field loss. Our results indicate that normally sighted observers use a visual position signal as well as an eye position signal to coordinate eye and hand. In patients without a fovea, the target has to be localised with peripheral retina and a recalibration must occur between eye and hand. Saccade and reach errors occur frequently when this recalibration is incomplete.

22 Discussion: Gamma power in visual cortex

1: Ray S, Maunsell JH.
Different origins of gamma rhythm and high-gamma activity in macaque visual cortex.
PLoS Biol. 2011 Apr;9(4):e1000610. Epub 2011 Apr 12.
PubMed PMID: 21532743; PubMed Central PMCID: PMC3075230.

2: Ray S, Maunsell JH.
Differences in gamma frequencies across visual cortex restrict their possible use in computation.
Neuron. 2010 Sep 9;67(5):885-96.
PubMed PMID: 20826318; PubMed Central PMCID: PMC3001273.


May

18 Devyani Nanduri

Devyani is from USC. She has worked with Ione Fine and colleagues at Second Sight.
The visual experience of epiretinal prosthesis users
Over the last ten years more than 30 subjects have been implanted chronically with epiretinal (Humayun et al., 2003) and semi-chronically with subretinal implants (Zrenner et al., 2010). Visual performance in these prosthesis users varies widely (Yanai et al., 2007). Some patients have difficulties with even very simple orientation tasks, while others can discriminate complex forms such as letters (Zrenner et al., 2010, Caspi et al., 2009). While our group has developed models that can quantitatively predict both perceptual thresholds and apparent brightness of electrically elicited phosphenes (de Balthasar et al., 2008, Greenwald et al., 2009, Horsager et al.), to date there has been little quantitative description of the shape and location of these phosphenes. Here we measured the percepts of electrically elicited ‘phosphenes’ in four prosthesis subjects and found that the perceptual experiences elicited by electrical stimulation could be predicted by a simple model based on retinal anatomy. Furthermore, manipulating pulse train frequency and amplitude has different effects on the size and brightness of phosphene appearance. Experimental findings could be explained using a simple computational model based on previous psychophysical work and the expected spatial spread of current from a disk electrode.

25 Kevin Weiner: Dissertation Defense

April

6th Felipe Pegado

Literacy breaks the symmetry of alphabetic visual objects

All primates, including humans, recognise images in a left-right invariant way. This mirror invariance is useful for recognising objects from both left and right perspectives, but this very competency has to be 'unlearned' during reading acquisition in order to correctly identify letters (e.g. to distinguish a 'b' from a 'd'). In a first study, we presented pairs of visual stimuli (faces, houses, tools, strings and falsefonts), whose left-right orientation was manipulated, to adult literates and illiterates. The task was to judge whether the pairs were 'same' or 'different', regardless of orientation (identity task). The subjects were explicitly instructed to respond 'same' for mirror-inverted pairs. The results showed a substantial behavioural cost for responding 'same' in mirror trials, proportional to literacy level, but only for strings and falsefonts. A strong bias to respond 'different' for mirrored strings was also observed in good readers but not in illiterates. In a second study, we used an fMRI priming paradigm to probe the neural discrimination of mirror-inverted pairs of stimuli in skilled readers. We showed that the left occipito-temporal cortex, namely the Visual Word Form Area (VWFA), distinguishes the left-right orientation of single letters, yet exhibits mirror invariance for simple matched pictures. These results clarify how letter shapes, after reading acquisition, escape the process of mirror invariance, a basic property of the ventral visual shape recognition pathway.

13th Justin Ales

Studying models of motion perception using the Steady-State Visual Evoked Potential.

This discussion will describe a technique used extensively in the Norcia lab: the Steady-State Visual Evoked Potential (SSVEP). The SSVEP is widely misunderstood and underappreciated. In this presentation I will explain how the Steady-State VEP paradigm can be exploited to study models of motion adaptation and integration.

20th Franco Pestilli

Attentional enhancement via selection and pooling of early sensory responses in human visual cortex.

To characterize the computational processes by which attention improves behavioral performance, we measured activity in visual cortex with functional magnetic resonance imaging as humans performed a contrast-discrimination task with focal and distributed attention. Focal attention yielded robust improvements in behavioral performance that were accompanied by increases in cortical responses. Using a quantitative analysis, we determined that if performance were limited only by the sensitivity of the measured sensory signals, the improvements in behavioral performance would have corresponded to an unrealistically large (approximately 400%) reduction in response variability. Instead, behavioral performance was well characterized by a pooling and selection process for which the largest sensory responses, those most strongly modulated by attention, dominated the perceptual decision. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.

27th Winrich Freiwald

March

2nd - Kendrick Kay will give an informal presentation of results on pRF models, GLM analysis, changes in spatial summation across visual areas, and the consequences for object position and size tolerance.

9th - Kamil Ugurbil
Neuroimaging with ever increasing magnetic fields: from cortical columns to whole brain function, connectivity and morphology
Dr. Kamil Ugurbil is a Professor in the Departments of Radiology, Neurosciences, and Medicine, and holds the McKnight Presidential Endowed Chair of Radiology at the University of Minnesota. He is also the Director of the Center for Magnetic Resonance Research. Dr. Ugurbil was educated at Robert Academy, Istanbul (high school) and Columbia University, New York, New York where he received A.B. and Ph.D. degrees in physics, and chemical physics, respectively. He worked at AT&T Bell Laboratories after receiving his Ph.D. in 1977, and subsequently returned to Columbia University in 1979 as an Assistant Professor. In 1982, he moved to the University of Minnesota where he started the in vivo magnetic resonance imaging and spectroscopy research effort, which ultimately led to the creation of the Center for Magnetic Resonance Research (CMRR).

16th - Discussion (Davie Yoon): A Ventral Visual Stream Reading Center Independent of Visual Experience
DOI 10.1016/j.cub.2011.01.040
The visual word form area (VWFA) is a ventral stream visual area that develops expertise for visual reading [1–3]. It is activated across writing systems and scripts [4, 5] and encodes letter strings irrespective of case, font, or location in the visual field [1] with striking anatomical reproducibility across individuals [6]. In the blind, comparable reading expertise can be achieved using Braille. This study investigated which area plays the role of the VWFA in the blind. One would expect this area to be at either parietal or bilateral occipital cortex, reflecting the tactile nature of the task and crossmodal plasticity, respectively [7, 8]. However, according to the metamodal theory [9], which suggests that brain areas are responsive to a specific representation or computation regardless of their input sensory modality, we predicted recruitment of the left-hemispheric VWFA, identically to the sighted. Using functional magnetic resonance imaging, we show that activation during Braille reading in blind individuals peaks in the VWFA, with striking anatomical consistency within and between blind and sighted. Furthermore, the VWFA is reading selective when contrasted to high-level language and low-level sensory controls. Thus, we propose that the VWFA is a metamodal reading area that develops specialization for reading regardless of visual experience.

February

Arno Klein, Assistant Professor of Clinical Neurobiology, Columbia University

January

5 Discussion (Nathan Witthoft): V1 size and visual illusions

12 David Ress, PhD, University of Texas at Austin
A tomography approach to pRF estimation

19 Damon Clark, Clandinin Lab
Defining the computational structure of the motion detector in Drosophila
Summary: Two pathways in visual motion processing lead to complementary computations that underlie the Hassenstein-Reichardt correlator.

26 Anna Roe, Vanderbilt University (hosted by Bill Newsome)
Note special time: 10:00 am - 12:00 pm (usual room: Jordan Hall, 419)
Imaging attention in the brain: optical imaging studies of area V4 in awake, behaving macaque monkeys
