Vision Lunch 2005

From VISTA LAB WIKI


For the current year's Vision Lunch schedule see: Vision Lunch

Other years: Previous Vision Lunches


2005 Vision Lunches

January

25 Discussion: Amedi et al., 2003 (Early 'visual' cortex activation correlates with superior verbal memory performance in the blind.)

February

2 Discussion: Fiser et al., 2004 (Small modulation of ongoing cortical dynamics by sensory input during natural vision.)

9 Bob Dougherty, Wandell lab

16 Greg Goodrich and Jennifer Wood, Western Blind Rehabilitation Center

23 Steven Baccus


March

2 David Remus, Grill-Spector Lab

This talk was a recap of work performed in Frank Tong's lab at Princeton (now Vanderbilt) - primarily focusing on imaging studies of visual awareness in retinotopic cortex (rivalry, imagery, visual phantoms, neural decoding, etc.)

9 Paul van Valken: "fMRI retinotopic mapping of visual cortex, taken to a new level of precision: visualized in Matlab 3-D flythrough models and on a real physical unfolded wireframe model of the calcarine sulcus, with a sample of hundreds of migraine visual aura (cortical spreading depression) progressions plotted on them."

16 Fumiko Maeda, Gabrieli / DeCharms / Shimojo Labs: "Metaphor of 'high' and 'low' in pitch revisited: Visual motion capture by changing pitch"

23 David Andresen, Grill-Spector Lab

This talk was about view invariance and view dependence as revealed by the neural activity of object-selective visual cortex during object recognition, and about relating this neural activity to components of subject performance.


April

6 Junjie Liu, Wandell Lab: "Functional organization in human visual cortex V1/V2"

13 Gal Chechik, Koller Lab: "Changes in representations along the ascending auditory pathway."

Information processing by a sensory system is reflected in the changes in stimulus representation along its successive processing stages. Here we study principles of information processing by comparing representations and coding of complex and natural stimuli in three stations of the auditory pathway, inferior colliculus (IC), auditory thalamus, and primary auditory cortex (A1).

Information about the spectro-temporal content of short stimulus segments was ten-fold smaller in A1 than in IC, but the information about stimulus identity was almost the same in A1 as in IC. Sets of A1 neurons thus code the identity of complex stimuli well without explicitly coding their "physical" aspects.

Furthermore, IC neurons were also substantially more redundant than A1 neurons. Redundancy reduction may be a generic organization principle of neural systems, allowing for easier readout of the identity of complex stimuli in A1 relative to IC.
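For readers unfamiliar with the information measures in this abstract: the IC-versus-A1 comparison rests on estimating mutual information between stimuli and neural responses. A minimal sketch of the standard plug-in estimator for discrete samples (function name and toy data are illustrative, not the speaker's code):

```python
import numpy as np

def mutual_information(stimulus_ids, responses):
    """Plug-in estimate of I(S; R) in bits from paired discrete samples."""
    stimulus_ids = np.asarray(stimulus_ids)
    responses = np.asarray(responses)
    mi = 0.0
    for s in np.unique(stimulus_ids):
        p_s = np.mean(stimulus_ids == s)
        for r in np.unique(responses):
            p_r = np.mean(responses == r)
            p_sr = np.mean((stimulus_ids == s) & (responses == r))
            if p_sr > 0:
                mi += p_sr * np.log2(p_sr / (p_s * p_r))
    return mi

# A response that copies the stimulus carries full information:
stims = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(stims, stims))  # 1.0 bit for two equiprobable stimuli
```

With realistic spike data one would first discretize responses (e.g. bin spike counts) and correct for limited-sampling bias; this sketch shows only the core quantity being compared across IC, thalamus, and A1.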

20 Jon Winawer, Boroditsky Lab: "Some thoughts and experiments on visually guided eye growth"

Why are so many people near-sighted? According to some old views, myopia was a manifestation of a biological defect, probably genetic in origin. However, decades of epidemiological research and animal experiments have suggested a different view. According to this view there is a homeostatic mechanism, well-conserved across vertebrates, in which the eye actively uses visual feedback during development in order to grow into focus. This raises the possibility that refractive errors such as myopia may be due more to properties of the visual input than to any defect of the eye. In this talk I will review some of what has been learned about visually guided eye growth from lens-rearing experiments in animals, including some of my own work on (1) whether these guidance mechanisms can discern the sign of defocus and (2) how periods of defocus, which vary in sign and magnitude, are summed by the eye's growth control mechanisms.

27 Eran Borenstein, Berkeley Mathematical Sciences Research Institute / Weizmann Institute of Science: "Top-down figure-ground segmentation"

In this talk I will describe a top-down scheme for figure-ground segmentation, how it is learned automatically from training images, and how it can be effectively combined with bottom-up processing. Our segmentation approach uses stored fragment-based class representations (such as the eyes, mouth, etc. in the class of faces) to detect and recognize object parts in a top-down manner. The fragments are then used to cover the entire object in a hierarchical manner to achieve segmentation. We combine this top-down processing with a bottom-up approach that is guided by regions that are homogeneous in terms of image-based criteria such as color and texture.

Computational experiments with several object classes show that this method leads to markedly improved results that cannot be achieved by either approach alone, and can deal with significant variation in shape and background.

If time permits, we can discuss biological data relevant to our model.
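The top-down stage described above depends on matching stored class fragments against an image. As a toy illustration of that matching step only, via normalized cross-correlation (names and data are hypothetical, not the authors' implementation):

```python
import numpy as np

def fragment_scores(image, fragment):
    """Slide a stored class fragment over a grayscale image and return
    normalized cross-correlation scores for every placement (top-down cue)."""
    ih, iw = image.shape
    fh, fw = fragment.shape
    f = fragment - fragment.mean()
    f_norm = np.sqrt((f ** 2).sum())
    scores = np.zeros((ih - fh + 1, iw - fw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            window = image[y:y + fh, x:x + fw]
            wc = window - window.mean()
            denom = np.sqrt((wc ** 2).sum()) * f_norm
            scores[y, x] = (wc * f).sum() / denom if denom > 0 else 0.0
    return scores

# Plant the fragment in an otherwise blank image; the best placement (2, 1)
# recovers it with a perfect score of 1.0.
img = np.zeros((5, 5))
frag = np.array([[1.0, 0.0], [0.0, 1.0]])
img[2:4, 1:3] = frag
s = fragment_scores(img, frag)
```

The full scheme additionally learns which fragments are informative for a class and combines their covers hierarchically with bottom-up region homogeneity; this sketch only shows how a single fragment votes for object parts.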


May

25 Sherry Xian, Moore Lab: "Chromatic induction from grouping and where in the brain it occurs"

There are two parts to this talk. First, I will talk briefly about chromatic induction influenced by perceptual grouping: a shift in appearance due to chromatic induction in one part of the visual field also occurs in a separate region that belongs to the same group. Chromatic appearance also depends on local chromatic induction, and I will demonstrate how grouping and local induction independently affect color perception. These results are part of my Ph.D. thesis, obtained using psychophysical methods.

In order to explore where in the brain we can find a neural representation of this phenomenon, I came to Stanford after graduation to do some single-unit recordings in visual area V4. There are several ongoing projects. First, the color selectivity of V4 neurons is measured using cone-isolating stimuli. The orientation selectivity of these cells is also measured to see how color and form information is processed. Finally, contextual influence from stimuli outside the classical receptive field is tested. Some preliminary data will be reported.

28 VSS 2005 wrap-up

June

22 Emanuel Donchin, University of South Florida: "Inferring what from when and how much: Electrophysiological neuroimaging"

15 Nathan Witthoft, Boroditsky Lab: Face Adaptation Improves Recognition of Famous Faces.

We morphed 4 famous faces with an unknown face, and 4 different famous faces with another unknown face. Subjects adapted to one of the two unknown faces and were then asked if they could recognize subsequently presented morphs. The test stimuli were 40, 50, 60, or 67% of the famous face. The result is that subjects need a lower percentage of the famous face in order to recognize it when it is morphed with the adapting stimulus, and are worse than baseline performance (no adaptation) when the test is morphed with the nonadapting stimulus. The idea is that the result can't be explained by response bias, since the subjects have no idea who the famous faces are until they recognize them. It's actually the first part of a series of experiments designed to look at mental imagery, but the result is pretty robust.

Mostly, I'm hoping to get feedback about whether or not the result is worth publishing, and ideas about the kinds of experiments that might be interesting to do with the method. (Especially given that many of the people who show up are more expert about adaptation than I am). If people get a chance they can check out the attached Leopold paper which shows something similar, but using a different approach. I suspect everyone has seen this paper already and I'll probably go over it some in the talk.

8 Kalanit Grill-Spector: Fine scale functional organization of the Fusiform Face Area revealed by high resolution fMRI.

1 Discussion: Two adaptation papers Tolias et al 2005 (Neurons in macaque area V4 acquire directional tuning after adaptation to motion stimuli.) Kohn & Movshon 2004 (Adaptation changes the direction tuning of macaque MT neurons.)


July

27 Discussion: Muckli et al 2005 (Primary Visual Cortex Activity along the Apparent-Motion Trace Reflects Illusory Perception.)

20 And Turken, Gabrieli lab: Relating regional properties of white matter to psychological variables and brain function

I will start by presenting some data and results from voxel-based morphometric analysis of diffusion-weighted (using voxel-wise fractional anisotropy values) and standard MR images. Specifically, I have been looking into the relation between regional FA values and psychological parameters such as cognitive processing speed, performance in cognitive control (Stroop) and choice reaction time tasks, a measure of fluid intelligence (Raven Progressive Matrices test), and age-related changes in young adulthood. I will discuss the interpretation of results from these analyses in terms of functional neuroanatomy. I want to focus on some puzzling observations that arose from comparisons of different analyses, and on methodological issues concerning the VBM method. I will proceed with an overview of some possible strategies that I have considered for using these results as an initial step for tractography-based analyses of structure-function relationships. If time permits, I will present some preliminary attempts to relate the morphometric analysis to functional imaging and EEG data.
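The voxel-wise FA analysis described above amounts to correlating, at each voxel, fractional anisotropy with a behavioral score across subjects. A hypothetical sketch of that core step (array shapes and names are assumptions, not the speaker's pipeline; real analyses add spatial normalization, smoothing, and multiple-comparison correction):

```python
import numpy as np

def voxelwise_fa_correlation(fa_maps, scores):
    """Correlate FA with a behavioral score at each voxel across subjects.

    fa_maps: (n_subjects, n_voxels) fractional anisotropy values
    scores:  (n_subjects,) behavioral measure (e.g. processing speed)
    Returns an (n_voxels,) array of Pearson r values; assumes every
    voxel and the score vary across subjects (nonzero variance)."""
    fa = np.asarray(fa_maps, dtype=float)
    s = np.asarray(scores, dtype=float)
    fa_c = fa - fa.mean(axis=0)   # center each voxel across subjects
    s_c = s - s.mean()
    num = fa_c.T @ s_c
    denom = np.sqrt((fa_c ** 2).sum(axis=0) * (s_c ** 2).sum())
    return num / denom

# Toy example: voxel 0 tracks the score perfectly, voxel 1 inversely.
scores = np.array([1.0, 2.0, 3.0, 4.0])
fa_maps = np.stack([scores, -scores], axis=1)
print(voxelwise_fa_correlation(fa_maps, scores))  # voxel-wise r: +1 and -1
```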

August

31 Discussion: Quiroga et al., 2005 (Invariant visual representation by single neurons in the human brain.)

17 Marlene Cohen, Newsome Lab: Context-dependent changes in noise correlation in MT

An animal can flexibly change its behavior in response to a particular sensory stimulus; thus the functional connectivity between sensory neurons and neurons that control a perceptual decision must be flexible as well. Changes in the correlated firing of a pair of neurons may provide a metric of changes in functional circuitry within the nervous system during ongoing behavior. Positive changes in noise correlation, for example, can reflect the activation of a common input to the two neurons or the activation of a functional connection from one neuron to the other.

We sought to detect dynamic changes in functional circuitry by analyzing the noise correlations of simultaneously recorded direction-selective middle temporal area (MT) neurons in two behavioral contexts: one that promotes cooperative interactions between the two neurons and another that promotes competitive interactions. We found that when the monkey viewed an identical visual stimulus under the two task conditions, the noise correlation between two MT neurons changed based on the behavioral context.

This result suggests that MT neurons receive inputs of central origin whose strength changes with the task structure. The changes in noise correlation appear to reflect differences in how MT neurons are pooled for the purpose of discrimination in the two task structures, and may derive from higher-level cognitive processes such as feature-based attention.
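Noise correlation, as used in this abstract, is conventionally the Pearson correlation of trial-to-trial spike counts after removing each neuron's mean response within each stimulus/task condition. A minimal sketch under that standard definition (names and simulated data are illustrative, not the authors' analysis code):

```python
import numpy as np

def noise_correlation(counts_a, counts_b, condition_ids):
    """Pearson correlation of residual spike counts for two simultaneously
    recorded neurons, after subtracting each neuron's mean count within
    every stimulus/task condition."""
    counts_a = np.asarray(counts_a, dtype=float)
    counts_b = np.asarray(counts_b, dtype=float)
    condition_ids = np.asarray(condition_ids)
    resid_a = np.empty_like(counts_a)
    resid_b = np.empty_like(counts_b)
    for c in np.unique(condition_ids):
        mask = condition_ids == c
        resid_a[mask] = counts_a[mask] - counts_a[mask].mean()
        resid_b[mask] = counts_b[mask] - counts_b[mask].mean()
    return np.corrcoef(resid_a, resid_b)[0, 1]

# Simulated pair sharing a common input: noise correlation comes out positive.
rng = np.random.default_rng(0)
common = rng.normal(size=400)
a = 20 + common + rng.normal(size=400)
b = 20 + common + rng.normal(size=400)
print(noise_correlation(a, b, np.zeros(400, dtype=int)))  # ~0.5
```

Subtracting the per-condition mean is what distinguishes noise correlation from signal correlation: any correlation left in the residuals reflects shared trial-to-trial variability rather than shared stimulus tuning.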

10 Discussion: Brincat and Connor 2004 (Underlying principles of visual shape selectivity in posterior inferotemporal cortex.)

3 Kevin Brooks, University of New South Wales: Cues to Suprathreshold Stereomotion Perception; or, How to Catch a Bullet with your Teeth.

When objects recede from, or approach, an observer, they present several visual cues to their motion. Besides the monocular cues of image expansion, changes in contrast, blur, etc., several binocular cues to motion-in-depth (or "stereomotion") are also available. For many everyday activities such as driving a car, playing sports, and performing death-defying circus tricks, the mere detection of stereomotion will not help greatly. In addition, accurate processing of the suprathreshold details (speed and trajectory) of motion information is crucial. In this talk, I will discuss several experiments aimed at elucidating the role of these binocular cues in human perception of motion-in-depth. As the characteristics of suprathreshold stereomotion perception emerge, so does some helpful advice about how to avoid certain disaster while attempting to catch a bullet with your teeth.


September

21 Discussion: Ohki et al 2005 (Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex.)

14 Junjie Liu, Wandell Lab: "Reconning Pfiles and other technical issues in high-resolution fMRI"

7 Discussion: Donoho, Mumford, and Olshausen 2005 (Theory/Experiment position paper)


October

26 Philippe Schyns, University of Glasgow

19 Davie Yoon, Grill-Spector Lab: Socially guided information processing in infants

12 Discussion: Loffler et al 2005 (fMRI evidence for the neural representation of faces.)

5 Hans Op de Beeck, MIT: Object recognition and learning in human and monkey

Humans and other primates recognize objects almost effortlessly in a fraction of a second, and this ability relies critically on learning. I'll review some previous single-unit data that reveal how object shape is represented in macaque inferior temporal cortex, and to what extent this representation can be changed during long-term training and exposure. I recently moved to functional imaging to investigate the stability and plasticity of shape representations across human and monkey visual cortex. In human cortex, we find that training on discrimination between exemplars of one object class increases the strength and reliability of the response in visual cortex to these objects. In addition, training changes the profile of response across object-selective cortex. For example, some regions with very little pre-training response to the to-be-trained objects showed a strong post-training preference for these trained objects. In contrast, preliminary data from monkeys in a similar experiment indicate that preferences of object-selective regions for an object class are much more stable across extensive training periods in object discrimination. We are currently investigating effects of task and retinotopic object position in order to find the cause for the inter-species difference.


November

30 SFN 2006 Review

2 Antonio Rangel, Stanford University Department of Economics: How Does the Brain Make Simple Economic Choices?

December

7 Susana Chung, University of Houston: psychophysics of crowding, reading and letter recognition.
