Vision Lunch 2006


For the current year's Vision Lunch schedule see: Vision Lunch

Other years: Previous Vision Lunches

2006 Vision Lunches

January

11 Greg Corrado, Leo Sugrue, and Julian Brown, Newsome Lab: Some monkey fMRI results

18 Rory Sayres: discussion of Neural Information Processing Systems (NIPS) conference

25 Satoshi Nakadomori, Wandell Lab: Visual field referred measurements of scotoma

This is a clinical application with some similarities to the Tong and Haynes work on interpreting cortical signals with respect to the presented stimulus.


February

1 Discussion: Sawamura et al 2005

8 Sing-Hang Cheung, Wandell lab

15 Discussion: Tsao et al 2006

22 Killian Pohl, MIT/UCSC: Automatic segmentation of medical images

Many neuroscientists analyze medical images to locate brain structures affected by disease. These anatomical structures often have weakly visible boundaries, so standard image analysis tools perform poorly at outlining them. In this talk, we present a robust segmentation algorithm based on a probabilistic model.

The probabilistic model incorporates anatomical prior information in order to simplify the detection process. Throughout this talk, we discuss different types of prior information such as spatial priors, shape models, and trees describing hierarchical anatomical relationships. We pose a maximum a posteriori estimation problem to find the optimal solution within our model. From the estimation problem we derive an instance of the Expectation Maximization algorithm, which uses an initial imperfect estimate to converge to a good approximation. The resulting algorithm is tested on a variety of studies. The studies range from the segmentation of the brain into the three major brain tissue classes, to the parcellation of anatomical structures with weakly visible boundaries (such as the thalamus or superior temporal gyrus). In general, our new method performs significantly better than other standard automatic segmentation techniques.
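A minimal sketch of the EM scheme just described, assuming one Gaussian intensity model per tissue class weighted by a fixed per-voxel spatial prior (the arrays, initialization, and class model are illustrative assumptions, not the speaker's implementation):

import numpy as np

def em_segment(intensities, spatial_prior, n_iter=50):
    """Toy EM tissue segmentation: one Gaussian intensity model per class,
    weighted by a fixed per-voxel spatial prior (n_voxels x n_classes)."""
    n_cls = spatial_prior.shape[1]
    mu = np.linspace(intensities.min(), intensities.max(), n_cls)  # initial class means
    var = np.full(n_cls, intensities.var())                        # initial class variances
    for _ in range(n_iter):
        # E-step: posterior class memberships under current parameters and prior
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = lik * spatial_prior
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate means and variances from the soft assignments
        w = post.sum(axis=0)
        mu = (post * intensities[:, None]).sum(axis=0) / w
        var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w
        var = np.maximum(var, 1e-6)  # guard against degenerate classes
    return post.argmax(axis=1)       # hard labels: the per-voxel MAP choice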


March

1 Steven Dakin, UCL Institute of Ophthalmology: "Image reconstruction guided by natural scene statistics predicts many aspects of lightness perception"

The perceived lightness of objects remains consistent under huge variations in illumination. This requires that the visual system take one effect - the amount of light landing on the retina - and disentangle the contribution of two causes: the reflectance of surfaces and the intensity of their illumination. To solve this under-constrained problem the visual brain must make assumptions about the world. Specifically, we pose lightness perception as a reconstruction problem: the visual system infers the image most likely to have elicited a particular set of filter responses under the assumption of scale invariance (a.k.a. 1/f statistics). This model can both adequately reconstruct natural scenes and provide a quantitative account of a wide range of lightness illusions.
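To make the scale-invariance assumption concrete, here is a toy manipulation (not Dakin's model): keep an image's phase spectrum, force its amplitude spectrum to the 1/f form, and invert the transform.

import numpy as np

def impose_one_over_f(img):
    """Swap an image's amplitude spectrum for 1/f while keeping its phase,
    illustrating the scale-invariance (1/f) prior."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0  # avoid dividing by zero at the DC term
    phase = np.angle(F)
    return np.real(np.fft.ifft2((1.0 / f) * np.exp(1j * phase)))

That natural images survive this substitution largely intact is one way of seeing why 1/f statistics are a reasonable prior for reconstruction.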

8 Lenny Kontsevich, Smith-Kettlewell Eye Research Institute: "On the Virtues of Focal Attention"

Is serial focal attention an integral part of the matching process in the visual system? This possibility will be illustrated with a novel Content-Based Image Retrieval system that implements such a scheme. The inner workings of this mechanism will be examined with examples from a localization task, visual search, and the salience of complex features.

15 Hyejean Suh, Grill-Spector Lab: "Object recognition near perceptual threshold"

In this talk, I'd like to explore several key questions about how the human visual system performs object recognition near perceptual threshold. Some of the questions are: How much does the human visual system rely on local feature information versus global information for object recognition? Is there a processing bottleneck that detects local features prior to higher-level processing? What are the critical features for object recognition, and how can they be determined?

I will present some psychophysical results on face recognition tasks. The results imply that there are ranges in which the human visual system relies more on local feature information than on global information, and vice versa. It is also observed that some features are more useful than others, and that the extent to which those features are used depends on the specific recognition task.

22 Discussion: Polyn et al., 2005 (Category-specific cortical activity precedes retrieval during memory search.)

30 Rory Sayres, Grill-Spector Lab: some new high-res results (Slides)


April

5 Mark Schira, Smith-Kettlewell Eye Research Institute: "An Average Map of Human V1 and V2."

...briefly, it is about a method to generate a V1 and V2 map that is averaged, rather than the result of imaging these areas in individuals. I will then present some results from this average map and discuss interesting findings. I will also test current models, such as the log-polar mapping proposed by Eric Schwartz (1980), and present an improved model. Finally, I will start a discussion about measuring distances along the cortical manifold using Dijkstra's algorithm in an over-connected gray graph (the method currently built into the mrVistaTools).
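For reference, the distance computation reduces to Dijkstra's algorithm over a weighted gray-matter graph; a generic sketch (the node and edge structures are hypothetical stand-ins, and mrVistaTools itself is MATLAB rather than Python):

import heapq

def cortical_distances(adj, src):
    """Shortest path lengths (mm) from src over a weighted gray graph.
    adj: dict mapping node -> list of (neighbor, edge_length_mm)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

Which edges the gray graph contains fixes the metric these path lengths approximate, which is why over- or under-connecting the graph matters for distance estimates along the manifold.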

12 John-Dylan Haynes, Max Planck Institute of Cognitive and Brain Sciences, Leipzig: "Decoding conscious and unconscious perception from brain activity in humans."

It has recently emerged that the sensitivity of fMRI can be dramatically increased if the full information present in large ensembles of voxels is appropriately taken into account. For example, supervised learning can be used to train a pattern classifier to distinguish between several orientation stimuli viewed by a subject based on the characteristic distributed brain responses they evoke in visual cortex. This holds even though the relevant features are represented at a finer spatial scale than the nominal resolution of single voxels. Here several studies will be presented that apply such supervised learning to the study of conscious and unconscious perception in humans. In one study the information about a stimulus that is available to a subject for a perceptual decision is compared to the information that can be decoded from early visual areas. This reveals that V1 has information about stimulus features even when they are rendered completely invisible due to masking, suggesting that V1 can have information about visual stimuli that is not available for conscious access. A second study demonstrates that pattern classification can be used to accurately predict, on a second-by-second basis, participants' conscious perception while it undergoes many spontaneous changes during binocular rivalry. Importantly, this reveals that the source of predictive information differs between visual areas, being more eye-based in V1 and more percept-based in V3. Taken together, these studies provide valuable information about the nature of perceptual coding in these areas.
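In outline, such decoding amounts to training a linear classifier on multi-voxel patterns and testing generalization by cross-validation. A hedged sketch using scikit-learn on synthetic data (not the authors' pipeline):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: n_trials x n_voxels ROI patterns and a
# stimulus label (e.g. one of two orientations) for each trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))
y = rng.integers(0, 2, 120)

# Cross-validated accuracy estimates how well the distributed pattern
# predicts the stimulus on held-out trials; above-chance accuracy is
# the evidence that the voxel ensemble carries stimulus information.
scores = cross_val_score(LinearSVC(), X, y, cv=5)
print("decoding accuracy: %.2f" % scores.mean())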


19 Daphne Koller, Stanford University Computer Science Department: "Learning Shape Models for Object Recognition and Localization."

Recognizing and localizing objects in 2D images and 3D scenes is an important task for many applications, including scene understanding and robotic object manipulation. While many current approaches focus on the appearance of object patches, we focus instead on the fundamental object shape.

We present a probabilistic approach to modeling object shape in two and three dimensions, together with tools for localizing objects in scenes. We begin with a high-resolution model of the human form that captures mesh deformation and articulation of parts. We show how to learn such a model from dense range scans, and how to "embed" a model in a scene for object localization.

We then consider how to model the shape of a variety of object classes in 2D. Analogously to our 3D approach, we use a landmark-based piecewise-linear contour model to probabilistically describe the object outline. We show how to learn such models from cartoon images, and how to outline instances of the object in real images.
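A toy version of a probabilistic landmark shape model in the spirit of the contour models described, here just a Gaussian over stacked landmark coordinates (the representation and names are illustrative assumptions, not the authors' method):

import numpy as np

def fit_shape_model(shapes):
    """shapes: n_examples x (2 * n_landmarks) aligned (x, y) coordinates.
    Returns the mean outline and the landmark covariance."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes, rowvar=False) + 1e-6 * np.eye(shapes.shape[1])
    return mean, cov

def sample_outline(mean, cov, rng=np.random.default_rng()):
    # Draw a plausible object outline from the learned distribution;
    # the same Gaussian can also score how well a candidate outline fits.
    return rng.multivariate_normal(mean, cov)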

We conclude with a description of our current research trajectories, specifically discovering object classes in 3D and embedding 3D models in 2D scenes.

26 Serge Dumoulin, Wandell Lab: "Visual field reconstruction: cortical receptive field estimates"

This is an informal meeting in which Serge will present some of his current work in visual field reconstruction, with an eye toward feedback from the group.

May

2-3 Vision Sciences Society practice talks


10 NO MEETING: VSS in progress

A recommended alternative for those who are around: go to the Developmental Brown Bag Series instead:

May 10: Bruce McCandliss, Sackler Institute for Developmental Psychobiology. Title TBA (his research involves DTI + fMRI and reading).

11 (Thursday, time: Noon, place: Room 102) Kaoru Amano, NTT Japan

Personal Webpage

This will be an informal talk about the following work:

Kaoru Amano, Derek Arnold, Alan Johnston, Tsunehiro Takeda: "Watching the brain oscillating: A neural correlate of illusory jitter"

Moving borders defined by small luminance changes (or by colour changes), placed in close proximity to moving borders defined by large luminance changes, can appear to jitter at a characteristic frequency (Arnold & Johnston, 2003). To reveal the neurophysiological substrates of this illusion we measured brain activity using magnetoencephalography (MEG). In conditions 1-3, vertical green bars, superimposed upon larger red squares, moved across a black background. The green bars were either (1) darker, (2) isoluminant with, or (3) brighter than the red squares. In condition 4, vertical green bars moved across an isoluminant red background. In condition 5, physical jitter was added to dark green bars centered in a moving red square to mimic illusory jitter. In conditions 1-4, subjects indicated if the green bar appeared to jitter. If illusory jitter was reported, subjects then matched the illusory jitter rate to the frequency of an adjacent physical jitter. The matched frequency for each subject was used in condition 5. Illusory jitter was only perceived in condition 2 and its frequency was ~10 Hz. We also found that neural oscillations around 10 Hz were significantly enhanced in condition 2 relative to all other conditions. As these oscillations were enhanced relative to isoluminant motion (condition 4) and physical 10 Hz jitter (condition 5), we believe that the enhanced activity is related to illusory jitter generation rather than to jitter perception or to isoluminant motion per se, supporting our hypothesis that MISC (motion-induced spatial conflict) is generated within cortex by a dynamic cortical feedback circuit.
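The ~10 Hz comparison boils down to band-limited power estimates per condition, along these lines (the arrays are hypothetical; this is not the authors' analysis code):

import numpy as np

def alpha_band_power(trials, fs, lo=8.0, hi=12.0):
    """Mean 8-12 Hz power across trials. trials: n_trials x n_samples
    of MEG sensor or source data; fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return power[:, band].mean()

# Comparing this quantity for the illusory-jitter condition against the
# control conditions is what supports the enhanced ~10 Hz activity claim.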

16 David Eagleman visits

David Eagleman will speak in room 102 at noon. Title: Time and the brain

Abstract: While walking through the forest you hear a twig crack. Did the sound occur when your foot fell, or just before? If it happened just before, the sound may alert you to a nearby predator. If the sound happened coincident with your step, then it was a normal occurrence consistent with the sensory feedback expected during walking. Survival and learning depend on correctly judging the order of motor action and sensory input. However, the ability to correctly judge temporal order is confounded by the fact that delays in sensory pathways can change (e.g. due to lighting conditions, limb growth, etc). This suggests that nervous systems constantly readjust cross-modal timing estimates in order to reliably report when events sensed by the different modalities occurred. We present a series of new experiments that reveal the existence of neural mechanisms by which animals can use self-generated actions to calibrate the timing between signals from different modalities. By manipulating the timing of sensory events, we demonstrate a set of novel temporal illusions in which perceived order of action and event are reversed. Sensory events appearing at a consistent delay after motor actions seem to be interpreted as consequences of those actions, and the brain recalibrates timing judgments to make them consistent with a prior expectation that sensory feedback should immediately follow motor acts. fMRI BOLD signals suggest the existence of multiple timing representations that have different timescales of plasticity. Computational modeling demonstrates how event times can be encoded in patterns of activity in neural populations. These results are leveraged into new predictions and experiments which ask whether subjective time can move in slow motion, how the brain calibrates its timing of predictive models, and whether nervous systems process time continuously or in batches. We attempt to show how the study of time cuts across several aspects of neuroscience from a new angle, exposing novel views of neural coding, binding and perception.

Biography:

David Eagleman earned his undergraduate degree in British and American Literature at Rice University and Oxford University, then obtained a Ph.D. in Neuroscience at Baylor College of Medicine, working with Read Montague. He moved on to a postdoctoral fellowship at The Salk Institute, working with Terrence Sejnowski and Francis Crick. He is currently faculty in the Department of Neurobiology and Anatomy at the University of Texas in Houston, with joint appointments at UT Austin (Institute for Neuroscience) and Rice University (Psychology). Dr. Eagleman is publishing two books this year, Ten Unsolved Problems of Neuroscience (MIT Press) and Dethronement: the hidden hegemony of the unconscious brain (Oxford University Press).


17 VSS wrap-up

For this week's vision lunch, we will be discussing the high- and low-lights of VSS.

If you attended, please come ready to present a couple of your favorite presentations.


24 Jan Brascamp, Ryota Kanai & Thomas Knapen, Utrecht University: "The determinants of perceptual suppression during bistable perception."

Bistable perception occurs when the visual system is confronted with input that supports different perceptual interpretations. During bistable perception the different possible interpretations are not fused into a single percept, but are suppressed in alternation. Using monocular and binocular rivalry stimuli we investigated the determining factors for the occurrence of suppression. We varied the difference between the two perceptual interpretations using three different cues: eye-of-origin information, color, and disparity-defined depth. We find that the total amount of exclusive percept subjects report depends strongly on the difference in the cues that segregate the two interpretations. These findings point to some interesting conclusions. First, suppression need not be bound to a single feature (e.g. orthogonal orientations) but may build up gradually over several processing stages. Second, binocular rivalry may have to be treated not as a separate phenomenon but as part of the same class of processes as monocular rivalry, with eye-of-origin information as the segregating cue.


31 Discussion: Mapping papers: beyond the ventral pathway; attendotopy, Xenatopy, Buffytopy

Hagler and Sereno 2006 (Spatial maps in frontal and prefrontal cortex.)

Silver et al 2005 (Topographic maps of visual spatial attention in human parietal cortex.)

Schluppeck et al 2005 (Topographic organization for delayed saccades in human posterior parietal cortex.)

June

7 Discussion: Reward timing in V1

Shuler and Bear 2006 (Reward Timing in the Primary Visual Cortex.)


14: Canceled


21: Kwabena Boahen, Stanford Department of Bioengineering: "Neurogrid: Emulating a Million Neurons in the Cortex"

I will present a proposal for Neurogrid, a specialized hardware platform that will perform cortex-scale emulations while offering software-like flexibility. Recent breakthroughs in brain mapping present an unprecedented opportunity to understand how the brain works, with profound implications for society. To interpret these richly growing observations, we have to build models—the only way to test our understanding—since building a real brain out of biological parts is currently infeasible. Neurogrid will emulate (simulate in real-time) one million neurons connected by six billion synapses with Analog VLSI techniques, matching the performance of a one-megawatt, 500-teraflop supercomputer while consuming less than one watt. Neurogrid will provide the programmability required to implement various models, replicate experimental manipulations (and controls), and elucidate mechanisms by augmenting Analog VLSI with Digital VLSI, a mixed-mode approach that combines the best of both worlds. Realizing programmability without sacrificing scale or real-time operation will make it possible to replicate tasks laboratory animals perform in biologically realistic models for the first time, which my lab plans to pursue in close collaboration with neurophysiologists.
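For intuition about what emulating a neuron costs computationally, here is a toy software leaky integrate-and-fire update (purely illustrative; it says nothing about Neurogrid's analog circuits):

import numpy as np

def lif_step(v, i_syn, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """One Euler step for a population of leaky integrate-and-fire
    neurons. v: membrane potentials; i_syn: synaptic input per neuron."""
    v = v + dt * (-v / tau + i_syn)  # leak toward rest plus input drive
    spiked = v >= v_th               # threshold crossing emits a spike
    v[spiked] = v_reset              # reset the spiking neurons
    return v, spiked

Stepping a million such units, plus billions of synaptic updates, every fraction of a millisecond is what makes the supercomputer comparison apt.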


28: Austin Roorda, UC Berkeley School of Optometry "Applications of adaptive optics scanning laser ophthalmoscopy"

En-face scanning laser imaging systems, or SLOs, are an attractive modality for the integration of adaptive optics because of the wide range of potential applications. AOSLO systems produce high-resolution, high-contrast, video-rate images of the living human retina through direct scattering, confocal imaging or other detection schemes including fluorescence and coherence gating. The scanning modality further allows for precisely controlled delivery and localization of aberration-corrected patterned stimuli directly to the retina. I will discuss current and future applications for AOSLO technology, mainly concentrating on efforts in our laboratory.

We have used the AOSLO to visualize capillaries, photoreceptors, nerve fibers, lamina cribrosa and RPE cells in healthy and diseased human eyes. The real-time imaging feature has facilitated measurements of blood flow velocity in capillaries near the fovea and has also been used to record eye movements with extreme speed and precision.

But the most fruitful applications of AOSLO might arise from its effectiveness for functional imaging. By employing simultaneous imaging and stimulus delivery to the retina, AOSLO offers a new way to study the retinal and optical limits to vision via visual acuity, Vernier acuity, and fixation tracking.



July

5: Katherine Armstrong, Moore Lab - Department of Neurobiology: "Changes in visual receptive fields with microstimulation of frontal cortex."

The influence of attention on visual cortical neurons has been described in terms of its effect on the structure of receptive fields (RFs), where multiple stimuli compete to drive neural responses and ultimately behavior. We stimulated the frontal eye field (FEF) of passively fixating monkeys and produced changes in V4 responses similar to known effects of voluntary attention. Subthreshold FEF stimulation enhanced visual responses at particular locations within the RF and altered the interaction between pairs of RF stimuli to favor those aligned with the activated FEF site. Thus, we could influence which stimulus drove the responses of individual V4 neurons. These results suggest that spatial signals involved in saccade preparation are used to covertly select among multiple stimuli appearing within the RFs of visual cortical neurons.


12: Alex Wade & Greg Appelbaum, Smith-Kettlewell Eye Research Institute / Duke University: "Using source-localized high-density EEG to study low and mid-level visual processing"

Recent advances in EEG source localization mean that it is possible to image electrical currents in cortex with millisecond resolution. We describe a data processing pipeline that can be used to measure rapid changes in activity in visual areas defined by fMRI. This pipeline can be combined with a frequency-tagging technique to study visual computations. We will describe two such applications: measuring neural correlates of a) contrast normalization and b) scene segmentation.
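Frequency tagging in outline: each stimulus component flickers at its own frequency, so the response attributable to that component can be read out at its tag frequency in the source-localized time course. A sketch with hypothetical inputs:

import numpy as np

def tagged_amplitude(signal, fs, f_tag):
    """Amplitude of the steady-state response at the tag frequency.
    signal: 1-D source time course; fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / signal.size
    return spectrum[np.argmin(np.abs(freqs - f_tag))]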


19: NO MEETING


26: NO MEETING


August

2: Carol Whitney (U Maryland): What can Visual Word Recognition Tell us about Visual Object Recognition?

The SERIOL model addresses the problem of how the brain encodes the sequence of letters in a written word. It provides a neurobiologically plausible account of how the initial retinotopic representation of a string is progressively converted into an abstract, location-invariant encoding of letter order, and has led to new accounts of visual half-field asymmetries in lexical decision, which have been experimentally confirmed. In the model, location-invariance is achieved by mapping space into time. That is, the retinotopic encoding is converted into a temporal encoding. Relative timing of firing of letter pairs is then used to activate bigram representations. If indeed the brain uses such mechanisms in visual word recognition, they would have to be derived from the mechanisms normally used in visual object recognition. I will discuss how this approach could be extended to object recognition in general, and touch upon some data from the literature that are consistent with this proposal.
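The bigram stage can be made concrete: once letters fire in serial order, ordered letter pairs within a small separation activate bigram units. A toy version of such open-bigram coding (the separation limit of two intervening letters is an assumption for illustration):

def open_bigrams(word, max_gap=2):
    """Ordered letter pairs with at most max_gap intervening letters.
    E.g. open_bigrams("cart") -> {'ca', 'cr', 'ct', 'ar', 'at', 'rt'}."""
    pairs = set()
    for i, a in enumerate(word):
        for b in word[i + 1 : i + 2 + max_gap]:
            pairs.add(a + b)
    return pairs

Because only relative order matters, the same bigram set is produced wherever the word falls in the visual field, once the retinotopic-to-temporal conversion has occurred.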


9: Discussion: Validity of ROI Analyses: Friston / Kanwisher Neuroimage debate

Friston et al 2006 (A critique of functional localisers.)

Saxe et al 2006 (Divide and conquer: A defense of functional localizers.)


16: Discussion: Functional Diffusion MRI

Le Bihan et al 2006 (Direct and fast detection of neuronal activation in the human brain with diffusion MRI.)


23: Discussion: Negative BOLD in V1

Shmuel et al 2006 (Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1.)

30: Discussion: Microstimulation of monkey face-selective cells

Afraz et al., 2006 (Microstimulation of inferotemporal cortex influences face categorization)

September

6 POSTPONED due to Matteo Carandini talk

13 Discussion: Monkey Butts: worth their weight in juice.

Deaner et al 2005 (Monkeys pay per view: adaptive valuation of social images by rhesus macaques.)

20 Early Project Discussion: Functional connectivity of the FFA (Davie)

28 Leo Sugrue and Greg Corrado, Newsome lab: (Preliminary) fMRI correlates of value based decision making in the awake behaving monkey

The emphasis will definitely be on the preliminary - we are seeking input on what analyses we should do given what we think we're seeing.


October

5 Discussion: Attention and Learning in the MTL

Yi et al., 2005 (Attentional modulation of learning-related repetition attenuation effects in human parahippocampal cortex.)

12 SFN 06 practice talks

19 SFN wrap-up

26 Davie - project proposals

November

2 Mike Greicius, Department of Neurology: "Resting-State Functional Connectivity MRI: Principles and Clinical Applications"

It appears that a finite set of 5-10 canonical brain networks can be detected in humans with resting-state functional connectivity MRI. This talk will review the brief history of fcMRI, focusing on the methods used to identify these networks, the putative functions ascribed to them, and the potential clinical utility of quantifying activity in one such network. Both region-of-interest and independent component analysis methods will be considered. A number of canonical resting-state networks will be examined, with particular emphasis on the "default mode" network, first described by Marc Raichle, and its dysfunction in Alzheimer's disease.
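The region-of-interest (seed-based) approach mentioned above reduces to correlating a seed region's time course with every voxel. A bare-bones sketch (the arrays are hypothetical):

import numpy as np

def seed_connectivity(data, seed_ts):
    """Pearson correlation between a seed time course and each voxel.
    data: n_voxels x n_timepoints resting series; seed_ts: n_timepoints."""
    d = data - data.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    num = d @ s
    den = np.sqrt((d ** 2).sum(axis=1) * (s ** 2).sum())
    return num / den

Thresholding the resulting correlation map yields the network containing the seed; independent component analysis instead decomposes the whole data set into spatially independent networks without choosing a seed.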

3 SPECIAL MEETING: FRIDAY NOV. 3, 10:00 AM: Bruno Rossion, Université catholique de Louvain

Bruno works with prosopagnosic patients and will discuss some interesting cases. He will describe recent experiments on an acquired prosopagnosic with bilateral lesions in LOC and an intact rFFA.

9 CANCELED

16 Discussion: Neitz et al 2002 (Color perception is mediated by a plastic neural mechanism that is adjustable in adults.)

23 NO MEETING: Thanksgiving Day

30 Angela Kessell, Tversky lab: Proposed behavioral and neural studies of the effects of categorization level on scene and object-in-scene perception.

I plan to review the evidence on the timecourse and phenomenology of scene perception and the neural systems involved in processing visual scenes. I am particularly interested in the relationship between task (experimental design and recognition level) and activity in parahippocampal cortex (esp. the "PPA") and retrosplenial cortex.

December

14 Nick Davidenko, Ramscar / Grill-Spector labs: "The role of distinctiveness in face processing"

abstract TBA
