Vision Lunch


Weekly meeting of people at and around Stanford who are interested in topics related to vision research.

We meet in Room 419 at Jordan Hall, Stanford University.

Wednesdays at 11:30 AM.

Organizers: Kevin Weiner and Peter Kohler.

We maintain an email list for meeting announcements and related topics. If you'd like to be added to or removed from this list, please visit https://mailman.stanford.edu/mailman/listinfo/vis-lunch-announce.

There is a separate page for the Wandell Lab Meetings.

Archived: Vision classics reading group


Presenting at Vision Lunch

We encourage anyone in the Stanford community (or outside it) who is working on vision-related research to come and present. In particular, Vision Lunches are intended to let people in vision labs at Stanford hear about each other's work early on, when feedback can be most useful and the results are new and exciting.

Please email Peter and Kevin if you're interested in presenting. In general, it should be no problem to bump paper discussions to a later week, so if you see a date you're interested in, chances are we can accommodate you.

Below is a list of Vision Lunches for the current year.

Have an interesting paper that we should review? Post it in the VL Article Repository.

Current Schedule

2017 Schedule

August

2 TBA

Stefan Uddenberg, Yale Perception & Cognition Lab

May

31 Journal Club

Kevin Weiner presents "Natural speech reveals the semantic maps that tile human cerebral cortex", published by Huth and colleagues in Nature (2016)

March

29 Musical literacy shifts asymmetries in the ventral visual cortex

Florence Bouhali, CRI, Paris

The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI scanner, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.
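
Laterality analyses of this kind are conventionally summarized with a laterality index computed from homologous left- and right-hemisphere activations. A minimal Python sketch; that this particular index was the one used in the study is an assumption, and the variable names are illustrative:

 import numpy as np
 
 def laterality_index(left_activation, right_activation):
     """Standard laterality index: +1 = fully left-lateralized,
     -1 = fully right-lateralized, 0 = perfectly bilateral."""
     L = float(np.sum(left_activation))
     R = float(np.sum(right_activation))
     return (L - R) / (L + R)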

15 Framework for shape analysis of white matter tracts

Tanya Glozman, Electrical Engineering, Stanford University

Most studies on white matter variability focus on understanding the relationship between tissue properties (e.g. fractional anisotropy) and different pathological conditions. My work focuses on analyzing the shape variability of white matter tracts across populations. In this talk I will describe a framework we developed for shape analysis of fiber bundles. This framework allows: (1) describing the tract shape using a concise geometric model; (2) spatial correspondence mapping; (3) registration of fiber bundles across subjects; (4) bundle deformation estimation. I will demonstrate an application of this framework for modeling the shape evolution of white matter tracts in normally developing children.

February

22 TBA

Steve Engel, Department of Psychology, University of Minnesota

15 TBA

Ben Backus, College of Optometry, SUNY

January

25 A tale of three special populations: What can we learn about brain organization from people born blind, deaf, or without hands?

Ella Striem-Amit, Postdoctoral Fellow, Department of Psychology, Harvard University

How does being born blind, deaf, or without arms affect brain structure and function? Does our brain depend on sensory information during critical developmental periods for its formation? I present studies conducted on a group of congenitally blind people using sensory substitution (a transformation of images into sounds) and on a different cohort of people born without hands, which show that no single sensory or motor system is critically required for visual cortex organization. Instead these data suggest that cortex organization is highly dependent on innate constraints and computation preferences. Hints of similar principles can be found in the auditory cortex in people born deaf. These findings will be discussed with reference to the model of amodal brain structure and its limitations regarding critical periods and plasticity.

2016 Schedule

April

6 Discussion of "On the interpretation of weight vectors of linear models in multivariate neuroimaging" by Haufe et al.


March

30 Swaroop Guntupalli, Dartmouth College


16 Roland Nadler, Stanford University


9 Diogo Peixoto, Stanford University

February

24 Discussion of "Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition" by Norman-Haignere, Kanwisher, and McDermott


17 Discussion of “Color-biased Regions of the Ventral Visual Pathway Lie between Face- and Place-Selective Regions in Humans, as in Macaques” by Lafer-Sousa, R., Conway, B., & Kanwisher, N.


10 Discussion of “Vascular Supply of the Cerebral Cortex is Specialized for Cell Layers but Not Columns” by Daniel L. Adams, Valentina Piserchia, John R. Economides, and Jonathan C. Horton

January

27 Discussion of “Transformation from a Retinal to a Cyclopean Representation in Human Visual Cortex” by Barendregt, Harvey, Rokers, and Dumoulin


13 Lane McIntosh, Stanford University

Deep Convolutional Neural Network Models of the Retina

2015 Schedule

December

16 Steve Kennerley (Joint meeting with MAD lunch in room 050 in Jordan Hall)

Title Forthcoming


2 Zygmunt Pizlo, Purdue University (http://www1.psych.purdue.edu/~zpizlo/)

The role of Symmetry in 3D vision: psychophysics and computational modeling

Almost all animals are mirror-symmetrical. This includes birds, fish, cats, elephants, humans, horses, pigs and insects. Animal bodies are symmetrical because of the way the animals move. Plants are symmetrical because of the way they grow. Man-made objects are symmetrical because of the function they serve. Completely asymmetrical objects are dysfunctional. The ubiquity of symmetry in our natural environment makes symmetry a perfect candidate for serving as a particularly effective prior in visual perception. Is symmetry such a prior? How much does the computational recovery of 3D shape benefit from symmetry? A lot. Symmetry actually provides the basis for the accurate recovery of 3D shapes from 2D images, arguably the most difficult problem in vision. Does our visual system use symmetry as its most important prior? Is the recovered 3D percept veridical? Does the 3D percept deteriorate when the object’s symmetry is removed? Is there another way to produce veridical 3D percepts? These questions and answers will be discussed within a context provided by the results obtained in psychophysical experiments and by computational modeling. We will also ask whether human subjects update their symmetry prior by using repeated experiences with visual stimuli. Should they? Can a symmetry prior, active with a single eye, be rendered ineffective when the observer views 3D shapes with both eyes? Psychophysical results on monocular and binocular 3D shape recovery will be presented and then compared with a Bayesian model in which the visual data is combined with a number of priors. In this model, the priors supplement, rather than conflict with, the visual information contained in the visual stimulus. I will conclude this talk by showing how the symmetry inherent in 3D objects can be used to solve the "Figure-Ground Organization" problem, namely detecting 3D objects in a 3D scene and detecting them in a 2D retinal or camera image.

November

11 Peter Tass

Counteracting abnormal neuronal synchrony with coordinated reset stimulation - possible applications to migraine

Several brain diseases are characterized by abnormal neuronal synchronization. To specifically counteract neuronal synchronization we have developed Coordinated Reset (CR) stimulation, a spatiotemporally patterned desynchronizing stimulation technique. According to computational studies CR stimulation induces a reduction of the rate of coincidences and, mediated by synaptic plasticity, an unlearning of abnormal synaptic connectivity. A sustained desynchronization is achieved by shifting the neuronal system from a pathological to a physiological attractor. Computationally it was shown that CR effectively works no matter whether it is delivered directly to the neurons’ somata or indirectly via excitatory or inhibitory synapses. Accordingly, CR stimulation can be realized by means of different invasive as well as non-invasive stimulation modalities. In accordance with theoretical predictions, electrical deep brain CR stimulation has pronounced therapeutic after-effects in Parkinsonian monkeys as well as cumulative and lasting therapeutic and desynchronizing after-effects in Parkinsonian patients. In tinnitus patients acoustic CR stimulation leads to a significant clinical improvement as well as a decrease of pathological neuronal synchrony in a tinnitus-related network of auditory and non-auditory brain areas along with a normalization of the tinnitus-characteristic abnormal interactions between different brain areas. After reviewing the principles of CR stimulation and the clinical findings obtained so far, I will explain how visual CR stimulation might be applicable to migraine patients.

October

28 Gunnar Schaefer, Brian Wandell, and Michael Perry

The latest developments of SciTran (formerly known as NIMS)

September

23 Adam Jones, Leopold Lab, NIH

Norm-Based Coding of Faces in the Anterior Fundus Face Patch

Face perception, a fundamental component of primate social behavior, is supported by a network of specialized visual regions recently identified within the ventral visual stream of humans and macaques. Discrete regions, or “patches”, within this network respond preferentially to face images over non-face object images, with the majority of visually responsive neurons within these regions firing selectively to faces. In recent years, the functional specialization of neurons within particular fMRI-defined face patches has been studied intensively. In this study, we have investigated the selectivity of neurons in one such patch (AF) located in the anterior fundus of the superior temporal sulcus. Using sets of morphed monkey faces and morphed human faces, we found that a large population of neurons responded to the distinctiveness of briefly presented faces, with the average face yielding the smallest response over the majority of the population. The finding was present for both human and monkey faces, though monkey faces generally gave larger responses across the population. Such norm-based tuning closely resembles previous results in ventral IT cortex (TEav, Leopold et al 2006) and is in accord with psychophysical models for face perception holding a special role of the average face for extracting individual identity. The use of longitudinal electrophysiological recording, allowing us to monitor individual neurons for weeks at a time, provided us with sensitivity to potential within- and between-session adaptation effects, for which no evidence was found during the entire 75 days of recording. These results contribute to an emerging understanding of functional compartmentalization in the macaque face-processing system.


June

10 Dirk Walther, University of Toronto


Contour junctions are causally involved in eliciting neural representations of scene categories in human visual cortex

People can categorize complex real-world scenes accurately and rapidly. What are the mechanisms and features underlying this astonishing feat? Here we provide conclusive evidence from computational analysis, behavioral testing, and decoding from neural activity that junctions of contours are essential for human scene categorization. We trained computational models for scene categorization using structural properties of contours (orientation, length, and curvature) and contour junctions (types and angles). Of these properties, orientation contained the most information about scene category that can be exploited computationally. We found, however, that junction properties generated prediction errors most similar to errors made by humans in a six-alternative forced-choice scene categorization task. Furthermore, disrupting the distribution of contour junctions led to a significant decrease in behavioral categorization performance. Accuracy of decoding scene categories from patterns of fMRI activity in the parahippocampal place area (PPA) and other scene-selective brain areas also decreased significantly when junctions were perturbed. Disruption of orientation statistics, on the other hand, did not affect decoding accuracy in the PPA. A searchlight analysis of error pattern similarity between neural decoding and computational models of scene categorization further elucidates the critical role of junction statistics for the representation of real-world scene categories in foveal regions of early visual cortex and all scene-selective high-level regions, including the PPA. We conclude that contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene, are essential for scene categorization.

May

6 - VSS Prep Part One

1. Cătălin Iordan: Talk Practice

TITLE: Category Boundaries and Typicality Warp the Neural Representation Space of Real-World Object Categories

ABSTRACT: Categories create cognitively useful generalizations by leveraging the correlational structure of the world. Although classic cognitive studies have shown that object categories have both intrinsic hierarchical structure (entry-level effects, Rosch et al., 1976), as well as graded typicality structure (Rosch, 1973), relatively little is known about the neural underpinnings of these processes. In this study, we leverage representational similarity analysis to understand how behaviorally relevant category structure emerges in the human visual system. We performed an fMRI experiment in which participants were shown color photographs of 15 subordinate-level categories from each of two basic-level categories (dogs and cars). Typicality for each subordinate within its basic-level category was also assessed behaviorally. We computed the neural correlation distance between all pairs of exemplars in early visual areas (V1, V2, V3v, hV4) and object-selective cortex (LOC). We found that as we move from low-level visual areas to object-selective regions, neural distances are compressed within object categories, and simultaneously expanded between object categories. This effect arises gradually as we move up the ventral visual stream through V1, V2, V3, V4, with a marked increase between hV4 and LOC. Furthermore, within each basic category in LOC, subordinate typicality influences the organization of the neural distance space: highly typical items are brought closer together, while distance between atypical exemplars grows. Again, this effect arises between hV4 and LOC, suggesting that a significant qualitative jump in the differentiation of object categories from one another as independent structures, as well as in their internal organization, occurs in object-selective areas. Our results show that as we move up the ventral visual stream, distances between neural representations of real-world objects warp to facilitate categorical distinctions. Moreover, the nature of this warping may provide evidence for a prototype-based representation that clusters highly typical subordinates together in object-selective cortex.
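
The core computation described here, correlation distances that compress within categories and expand between them, can be sketched compactly. This is an illustrative reconstruction, not the authors' code; the inputs are assumed response-pattern arrays and category labels:

 import numpy as np
 
 def correlation_distances(patterns):
     """Pairwise neural distance = 1 - Pearson correlation between
     the response patterns (rows = exemplars, columns = voxels)."""
     return 1 - np.corrcoef(patterns)
 
 def cohesion_and_distinctiveness(patterns, labels):
     """Mean within-category vs. between-category neural distance."""
     D = correlation_distances(patterns)
     labels = np.asarray(labels)
     same = labels[:, None] == labels[None, :]
     off_diagonal = ~np.eye(len(labels), dtype=bool)
     return D[same & off_diagonal].mean(), D[~same].mean()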


2. Haomiao Jiang: Talk practice

13 - VSS Prep Part Two

Peter Kohler: Talk Prep; Title: Parametric responses to rotation symmetry in mid-level visual cortex

Vaidehi Natu: Poster Prep; Title: Neural discriminability for face identity improves from childhood to adulthood

Lior Bugatus: Poster Prep; Title: Task differentially modulates the spatial extent of category-selective regions across anatomical locations

20: No Vision Lunch due to VSS

27: VSS Rehash

April

April 15

Jesse will lead the discussion of the following paper: Structural Connectivity Fingerprints Predict Cortical Selectivity for Multiple Visual Categories across Cortex. Osher DE, Saxe RR, Koldewyn K, Gabrieli JD, Kanwisher N, Saygin ZM. Cereb Cortex. 2015 Jan 26.

Abstract: A fundamental and largely unanswered question in neuroscience is whether extrinsic connectivity and function are closely related at a fine spatial grain across the human brain. Using a novel approach, we found that the anatomical connectivity of individual gray-matter voxels (determined via diffusion-weighted imaging) alone can predict functional magnetic resonance imaging (fMRI) responses to 4 visual categories (faces, objects, scenes, and bodies) in individual subjects, thus accounting for both functional differentiation across the cortex and individual variation therein. Furthermore, this approach identified the particular anatomical links between voxels that most strongly predict, and therefore plausibly define, the neural networks underlying specific functions. These results provide the strongest evidence to date for a precise and fine-grained relationship between connectivity and function in the human brain, raise the possibility that early-developing connectivity patterns may determine later functional organization, and offer a method for predicting fine-grained functional organization in populations who cannot be functionally scanned.
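
The paper's central move, predicting a voxel's functional response from its anatomical connectivity fingerprint, amounts to a regression problem trained on some subjects and evaluated on a held-out subject. A sketch with placeholder data; the ridge regularizer here is a stand-in, not necessarily the regression the authors used:

 import numpy as np
 from sklearn.linear_model import Ridge
 
 rng = np.random.default_rng(0)
 # Placeholder data: one row per gray-matter voxel; columns are
 # connection probabilities to a set of anatomical target regions.
 X_train = rng.random((5000, 80))   # voxels pooled from training subjects
 y_train = rng.normal(size=5000)    # their fMRI contrast values (e.g., faces > objects)
 X_test = rng.random((1000, 80))    # voxels from a held-out subject
 
 # Fit one linear model per functional contrast on N-1 subjects, then
 # predict the held-out subject's functional responses from anatomy alone.
 model = Ridge(alpha=1.0).fit(X_train, y_train)
 predicted = model.predict(X_test)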


April 22

Steeve will lead the discussion of the following paper: Functional connectivity of visual cortex in the blind follows retinotopic organization principles. Brain. 2015 Apr 13. pii: awv083. [Epub ahead of print] Striem-Amit E, Ovadia-Caro S, Caramazza A, Margulies DS, Villringer A, Amedi A.

Abstract: Is visual input during critical periods of development crucial for the emergence of the fundamental topographical mapping of the visual cortex? And would this structure be retained throughout life-long blindness or would it fade as a result of plastic, use-based reorganization? We used functional connectivity magnetic resonance imaging based on intrinsic blood oxygen level-dependent fluctuations to investigate whether significant traces of topographical mapping of the visual scene in the form of retinotopic organization could be found in congenitally blind adults. A group of 11 fully and congenitally blind subjects and 18 sighted controls were studied. The blind demonstrated an intact functional connectivity network structural organization of the three main retinotopic mapping axes: eccentricity (centre-periphery), laterality (left-right), and elevation (upper-lower) throughout the retinotopic cortex extending to high-level ventral and dorsal streams, including characteristic eccentricity biases in face- and house-selective areas. Functional connectivity-based topographic organization in the visual cortex was indistinguishable from the normally sighted retinotopic functional connectivity structure as indicated by clustering analysis, and was found even in participants who did not have a typical retinal development in utero (microphthalmics). While the internal structural organization of the visual cortex was strikingly similar, the blind exhibited profound differences in functional connectivity to other (non-visual) brain regions as compared to the sighted, which were specific to portions of V1. Central V1 was more connected to language areas but peripheral V1 to spatial attention and control networks. These findings suggest that current accounts of critical periods and experience-dependent development should be revisited even for primary sensory areas, in that the connectivity basis for visual cortex large-scale topographical organization can develop without any visual experience and be retained through life-long experience-dependent plasticity. Furthermore, retinotopic divisions of labour, such as that between the visual cortex regions normally representing the fovea and periphery, also form the basis for topographically-unique plastic changes in the blind.

April 29

Rosemary will lead the discussion of the following paper: The fine-scale functional correlation of striate cortex in sighted and blind people. Butt OH, Benson NC, Datta R, Aguirre GK.

J Neurosci. 2013 Oct 9;33(41):16209-19. doi: 10.1523/JNEUROSCI.0363-13.2013.

Abstract: To what extent are spontaneous neural signals within striate cortex organized by vision? We examined the fine-scale pattern of striate cortex correlations within and between hemispheres in rest-state BOLD fMRI data from sighted and blind people. In the sighted, we find that corticocortical correlation is well modeled as a Gaussian point-spread function across millimeters of striate cortical surface, rather than degrees of visual angle. Blindness produces a subtle change in the pattern of fine-scale striate correlations between hemispheres. Across participants blind before the age of 18, the degree of pattern alteration covaries with the strength of long-range correlation between left striate cortex and Broca's area. This suggests that early blindness exchanges local, vision-driven pattern synchrony of the striate cortices for long-range functional correlations potentially related to cross-modal representation.
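
The Gaussian point-spread model of correlation versus cortical distance is straightforward to fit. A sketch on synthetic stand-in data; the fitting procedure in the paper may differ:

 import numpy as np
 from scipy.optimize import curve_fit
 
 def gaussian_psf(distance_mm, amplitude, sigma_mm):
     """Correlation modeled as a Gaussian point-spread function of
     distance along the cortical surface."""
     return amplitude * np.exp(-distance_mm**2 / (2 * sigma_mm**2))
 
 # Synthetic stand-ins: pairwise surface distances and BOLD correlations.
 rng = np.random.default_rng(1)
 d = np.linspace(0, 30, 60)                          # mm of cortical surface
 r = gaussian_psf(d, 0.8, 5.0) + rng.normal(0, 0.02, d.size)
 (amp_hat, sigma_hat), _ = curve_fit(gaussian_psf, d, r, p0=(1.0, 5.0))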

March

4 Preeti Verghese (Smith-Kettlewell) Neural mechanisms of selective attention

Prior studies suggest that visual attention selects objects of interest by biasing the competition in favor of attended items. However, neurophysiological studies of selective attention to one of two objects typically report aggregate responses to individual objects as well as to their interaction. To separate these two response components, we directly measured the interaction between stimuli by using steady-state visual evoked potential (SSVEP). These responses were measured with high-density EEG in neurotypicals and electrocorticographically (ECoG) from patients implanted with cortical surface electrodes for the treatment of intractable seizures due to epilepsy. The SSVEP allows us to measure attentional modulation of individual stimuli (self-terms) as well as their nonlinear interaction (intermodulation terms). Participants were tested with a pair of adjacent wedge-shaped gratings flickering at two different frequencies. By asking participants to attend to both stimuli or to one of them in separate conditions, we determined the attentional modulations of self-terms and intermodulation terms compared to a condition when attention was directed away from the flickering wedges. Our data show that selective attention differentially modulates self-terms as well as intermodulation terms. Consistent with previous single-cell studies, the self-terms have the greatest amplitude when attention is directed to one of the two stimuli. In contrast, the intermodulation term has the greatest amplitude when patients attend to both stimuli, is smaller when they attend to a single stimulus, and insignificant when attention is directed away. This suggests that the intermodulation term can serve as an index of attentional selection. Our study advances the understanding of processes involved in selective attention by separately tracking response components resulting from individual stimuli and from their nonlinear interaction.
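
In a frequency-tagged design like this one, self-terms appear at the two stimulus frequencies and intermodulation terms at their sums and differences. A minimal sketch of extracting both from one recorded channel; which harmonics were actually analyzed in the study is an assumption:

 import numpy as np
 
 def ssvep_terms(signal, fs, f1, f2):
     """Amplitudes at the self-term frequencies (f1, f2) and the lowest
     intermodulation frequencies (f1 + f2 and |f1 - f2|)."""
     amps = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
     freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
     amp = lambda f: amps[np.argmin(np.abs(freqs - f))]
     self_terms = {f1: amp(f1), f2: amp(f2)}
     im_terms = {f1 + f2: amp(f1 + f2), abs(f1 - f2): amp(abs(f1 - f2))}
     return self_terms, im_terms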

11 Charlie Gross, Princeton University

17 (Tuesday, 1 PM, Jordan 102) Shinsuke Shimojo, Caltech

Postdiction and Perceptual Awareness

There are a few known postdictive perceptual phenomena, in which a stimulus presented later seems causally to affect the percept of another stimulus presented earlier (= the operational definition of “postdictive phenomenon”). While backward masking and apparent motion provide classical examples, the flash lag effect and its variations have stimulated theorists. The TMS-triggered scotoma together with “backward filling-in” of it offer a unique neurophysiological case. Findings suggest that various visual attributes - not just spatial locations, but shape, temporal sequence, motion, and even color - are vigorously reorganized in a postdictive fashion to be consistent with each other, or to be consistent in a causality framework. It is highly related to the ideas of “object updating” (by J. Enns and his colleagues) and “backward referral” (B. Libet), but with different emphases and implications. In terms of the underlying neural mechanisms, four prototypical models have been considered: the “catch up,” the “reentry,” the “different pathway” and the “memory revision” models. It may also be argued that “perceptual awareness” can be understood as a postdictive construct. If so (together with the operational definition of “postdictive phenomenon” above), one may expect structurally similar phenomena of backward reconstruction across a wide variety of time scales, not just limited to the few hundred milliseconds relevant for perception. By extending the list of postdictive phenomena to memory, sensory-motor and higher-level cognition, one may indeed note that such postdictive reconstruction may be a general principle of neural computation, ranging from milliseconds to months in time scale, and from local neuronal interactions to long-range connectivity, in the complex brain. The mechanisms and functions of such postdictive processes may be an intriguing “unsolved” target of research for the next several decades of perceptual/neural sciences.


24 Peter Kohler (Norcia Lab) Does SNR of visually evoked BOLD responses change with rapid multiplexed fMRI?

Multiplexed fMRI allows for sub-second acquisitions of whole-brain images (Feinberg, 2010), much faster than standard fMRI protocols. What happens to the signal-to-noise ratio (SNR) of fMRI BOLD responses as acquisitions get faster? We address this question by showing participants a flickering grating undergoing periodic contrast modulation, while acquiring multiplexed fMRI. This protocol yields a direct measure of a stimulus-evoked BOLD response (unlike resting-state-based measures of data quality, e.g. Smith et al. 2013). Stimulus frequency and acquisition rate were varied independently. SNR was quantified using spectral analysis as the ratio of the response amplitude at the stimulus frequency to the non-stimulus-related background at neighboring frequencies. Our first experiment used a stimulus period of 24s, combined with four different multiplexing factors that yielded TRs of 2000ms, 1200ms, 800ms and 400ms. SNR did not change with different TRs, and there was no interaction between TR and region-of-interest in retinotopic visual cortex, although there was a main effect of region-of-interest. In our second experiment, we looked for an interaction between stimulus frequency and TR. We used TRs of 2000ms and 400ms to sample the response to the stimulus modulating over 12s, 8s and 6s periods. We again found little effect of TR on SNR, and no interaction, although we did find main effects of both region-of-interest and stimulus frequency. These results demonstrate that, at least for visually evoked BOLD responses, SNR does not decrease when acquiring multiplexed sub-second fMRI.
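
The SNR measure described here follows directly from its definition: response amplitude at the stimulus frequency divided by the background amplitude at neighboring frequency bins. A sketch; the size of the neighbor window is an assumption:

 import numpy as np
 
 def spectral_snr(timecourse, tr_s, stim_period_s, n_neighbors=10):
     """SNR = amplitude at the stimulus frequency divided by the mean
     amplitude at neighboring, non-stimulus frequency bins."""
     amps = np.abs(np.fft.rfft(timecourse))
     freqs = np.fft.rfftfreq(len(timecourse), d=tr_s)
     stim_bin = int(np.argmin(np.abs(freqs - 1.0 / stim_period_s)))
     lo = max(stim_bin - n_neighbors, 1)            # skip the DC bin
     neighbors = np.concatenate([amps[lo:stim_bin],
                                 amps[stim_bin + 1:stim_bin + 1 + n_neighbors]])
     return amps[stim_bin] / neighbors.mean()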

February

18 Kevin Weiner leads the discussion on

Hung et al. (2015) J Neurosci.

Functional mapping of face-selective regions in the extrastriate visual cortex of the marmoset.

http://www.ncbi.nlm.nih.gov/pubmed/25609630

25 Anthony Stigliani, Journal Club

De Martino et al. (2014) Cerebral Cortex

High-Resolution Mapping of Myeloarchitecture In Vivo: Localization of Auditory Areas in the Human Brain.

http://www.ncbi.nlm.nih.gov/pubmed/24994817


January

7 Michelle R. Greene, Stanford (Fei-Fei Li Lab) More than the sum of their parts: Understanding and reconstructing real world visual representations

14 Hiromasa Takemura (in collaboration with Maiko Uesaki and Hiroshi Ashida at Kyoto University) Human white matter pathway communicating cortical regions selective for optic flow

21 Donatas Jonikaitis, Stanford (Moore Lab) Action selection and inhibition in cognitive control

To be impulsive is to act without forethought. However, most prevalent models of impulse control, or response inhibition, focus on ‘stopping’ an initiated action in response to an external signal rather than on ‘preventing’ an undesirable action from being initiated. By modifying a delayed eye movement task that allowed us to contrast foreknowledge of ‘where to look’ versus ‘where not to look’, we studied the processes underlying action selection and inhibition. We found that advance preparation to avoid a saccade resulted in the formation of spatially specific inhibitory biases. Further, spatial and temporal properties of the preparatory processes involved in action selection and inhibition were characteristically different from each other. Lastly, we show that the oculomotor system modulates sensory processing differently in scenarios of action selection and inhibition. Together, these findings outline an essential element of cognitive control that enables us to prevent specific undesirable actions.


28 Cătălin Iordan (Fei-Fei Li Lab) Basic Level Category Structure Emerges Gradually Across Human Ventral Visual Cortex

Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad (“natural object”) to very distinct (“Mr. Woof”), with a mid-level of generality (basic level: “dog”) often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multi-voxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipito-temporal cortex. We found that although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipito-temporal cortex (LOC). This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.

2014 Schedule

December

1 (Monday) Christof Koch (Allen Institute)

3 Elise Piazza (UC Berkeley, Silver Lab) Resolving Ambiguity in the Visual World

When visual input is consistent with multiple perceptual interpretations (e.g., the Necker cube), these interpretations compete for conscious awareness. The process of determining which interpretation will be dominant at a given time is known as perceptual selection. We study this process using binocular rivalry, a bistable phenomenon in which incompatible images presented separately to the two eyes result in perceptual alternation between the two images over time.

In one study, we showed that a well-established asymmetry in spatial frequency processing between the brain’s two hemispheres applies to perceptual selection. Specifically, a lower spatial frequency grating was more likely to be selected when it was presented in the left visual field (right hemisphere) than in the right visual field (left hemisphere), while a higher spatial frequency grating showed the opposite pattern of results. Surprisingly, this asymmetry persisted for the entire stimulus duration (30 seconds), which is the first demonstration that hemispheric differences in spatial frequency processing continue long after stimulus onset.

In another study, we found that very recently formed audio-visual associations influence perceptual selection. Here, we used a brief (8-minute) crossmodal statistical learning paradigm to expose subjects to arbitrary, consistent pairings of images and sounds. In a subsequent binocular rivalry test, we found that a given image was more likely to be perceived when it was presented with a sound that had been consistently paired with it during exposure than when presented with previously unpaired sounds. Our results indicate that the audio-visual associations formed during the brief exposure period influenced visual competition, and that this effect of learning was largely implicit, or unconscious.

10 Brian Wandell will present on NIMS

17 Joy L. Taylor (Department of Psychiatry and Behavioral Sciences)

Influences of experience and contrast sensitivity on pilots’ landing decisions: Simulations of foggy weather

To land or not to land in fog? The answer to this question likely involves multiple large-scale corticostriatal circuits, informed by the visual system’s computations. I will present our lab’s latest research on real-world decision making in the presence of risk and uncertainty. We have created aircraft-simulator and fMRI-laptop versions of a flight task in which pilots are asked to “fly” a series of instrument-landing-system approaches, each time deciding whether or not to land in varying foggy conditions. The density of the fog is manipulated to affect visibility of the runway environment, making runway cues range from barely visible (and legal to land) to not visible (not legal, and risky to land). Pilots ranging in age from 19 to 77 years, with two different levels of flight training/expertise, have participated in three experiments to date (two aircraft-simulator and one fMRI experiment). I will present a few key findings on how pilot training, age, contrast sensitivity, and task experience affect simulator landing decisions. I also want to leave time for audience participation regarding future directions and increased use of vision-science methods and computational models.

November

5 Steeve Laquitaine (Gardner Lab) Humans approximate Bayesian inference with a switching heuristic during motion direction estimation

12 Moqian Tian (Grill-Spector Lab) How do people learn viewpoint-invariant object recognition during unsupervised learning?

19 No Vision Lunch (SfN meeting)

26 No Vision Lunch (Thanksgiving)

October

15 Hiromasa Takemura (Wandell Lab) Parameter sweep tractography improves connectome accuracy


29 Peter Kohler (Norcia Lab) Neural response for visual symmetry

September

17 Qiyong Gong, West China Hospital

Translational MR Imaging for Mental Disorders

24 Arash Afraz, MIT

The causal role of face-selective neurons in face perception

Many neurons in the inferior temporal cortex (IT) of primates respond more strongly to images of faces than to images of non-face objects. Such so-called “face neurons” are thought to be involved in face recognition behaviors such as face detection and face discrimination. While this view implies a causal role for face neurons in such behaviors, almost all evidence to support it is only correlational. Here, I bring together evidence from electrical microstimulation, optogenetic and pharmacological intervention to bridge the gap between the neural spiking of IT face selective neurons and face perception.

August

20 Maheen Adamson, Palo Alto Veterans Administration

Integration of Clinical and Research Neuroimaging to understand TBI in the Veteran Population

Traumatic Brain Injury (TBI) is frequently referred to as the “signature wound” of the Iraq and Afghanistan wars. Previous studies estimate that 10–20% of U.S. Veterans who served in these conflicts experienced mild to moderate TBI, mainly related to blast exposure (Jorge et al., 2012). Research studies conducted in the last decade reveal complex relationships between deployment-related factors and overlapping physical (e.g., injury) and psychiatric (e.g., mental health) outcomes (Vanderploeg et al., 2012). To understand and treat these conditions, it is necessary to apply an integrated, physical and mental health care approach to post-deployment care. The War Related Illness and Injury Study Center (WRIISC) is therefore at a unique juncture where clinical research can inform and aid clinical practice by evaluating the complex problems faced by returning Veterans. WRIISC CA has been able to incorporate a state-of-the-art, clinical research Diffusion Tensor Imaging (DTI) sequence into the evaluations of all Veterans who visit the clinic. In this talk, we report on our method of integrating measures obtained from clinical and research neuroimaging using primarily the Vista Lab (Stanford University) processing stream. Our mission is to understand the long-term effects of TBI and its relation to other post-deployment health problems, particularly PTSD and cognitive impairments.

27 Journal Club

Kolster, Janssens, Orban and Vanduffel (2014) The Retinotopic Organization of Macaque Occipitotemporal Cortex Anterior to V4 and Caudoventral to the Middle Temporal (MT) Cluster.

Kevin and Hiromasa will lead discussion.

July

16 TBA

23 Michael Silver, Berkeley School of Optometry

Attention, perception, and endogenous brain activity

Visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. We used fMRI to characterize attentional modulation of visual responses across the visual field in a large number of topographically-organized cortical areas and found that different cortical areas exhibit distinct patterns of attentional modulation as a function of eccentricity. These patterns may reflect separate roles of attention in form and object recognition and in planning motor responses to attended locations. In another project, we measured the effects of spatial attention on stimulus-evoked responses and on slow endogenous fluctuations in fMRI signals. Attention increased the amplitude of evoked responses and suppressed endogenous fluctuations in a large number of topographic cortical areas. Surprisingly, subjects’ visual detection performance was predicted by the magnitude of attentional suppression of endogenous fluctuations but not by the amount of enhancement of the brain’s response to the attended stimulus. These results raise questions regarding the functional consequences of signal enhancement by attention and emphasize the importance of attentional regulation of endogenous patterns of brain activity.


30 Dan Yamins, MIT

Using computational models to predict neural responses in higher visual cortex

The ventral visual stream underlies key human visual object recognition abilities. However, neural encoding in the higher areas of the ventral stream remains poorly understood. Here, we describe a modeling approach that yields a quantitatively accurate model of inferior temporal (IT) cortex, the highest ventral cortical area. Our key idea is to leverage recent advances in high-performance computing to optimize neural networks for object recognition performance, and then use these high-performing networks as the basis of neural models. We found that, across a wide class of Hierarchical Convolutional Neural Networks (HCNNs), there is a strong correlation between a model’s categorization performance and its ability to predict IT neural response data. Pursuing this idea further, we then identified an HCNN that matches human performance on a range of recognition tasks. Critically, even though we did not constrain this model to match neural data, its top output layer turns out to be highly predictive of IT spiking responses to complex naturalistic images at both the single site and population levels. Moreover, the model’s intermediate layers are highly predictive of neural responses in the V4 cortex, a midlevel visual area that provides the dominant cortical input to IT. These results show that performance optimization — applied in a biologically appropriate model class — can be used to build quantitative predictive models of neural processing.
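
The readout step described here, a linear mapping from a network's top-layer features to a recorded IT site evaluated on held-out images, can be sketched as follows. The data are random placeholders, and the cross-validated ridge readout is an assumption about the exact regression used:

 import numpy as np
 from sklearn.linear_model import RidgeCV
 from sklearn.model_selection import cross_val_predict
 
 rng = np.random.default_rng(0)
 # Placeholder stand-ins: top-layer HCNN features for 500 images and the
 # recorded responses of a single IT site to the same images.
 features = rng.normal(size=(500, 1024))
 it_response = rng.normal(size=500)
 
 # Cross-validated linear readout from model features to the neural site;
 # predictivity is the correlation between held-out predictions and data.
 pred = cross_val_predict(RidgeCV(), features, it_response, cv=5)
 predictivity = np.corrcoef(pred, it_response)[0, 1]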

June

18 Adaptation through second-order homeostasis

Zachary Westrick, New York University

Pattern adaptation leads to suppressed neural responses among neurons that respond to an adapting stimulus. For orientation-selective neurons in V1, suppression is accompanied by a repulsive shift in preferred orientation tuning. This shift arises in neurons tuned near the adapter due to asymmetrically larger suppression on the tuning-curve flank facing the adapted orientation, and has been characterized as a stimulus-specific (as opposed to neuron-specific) adaptation effect. We develop a computational model for stimulus-specific adaptation based on the recent observation that neural response covariance structure is preserved even in a stimulus environment for which a single orientation is biased to occur much more often than any other (Benucci et al., Nat. Neurosci., 2013). The model consisted of a population of neurons with linear, orientation-selective receptive fields, divisive normalization, and Hebbian learning of normalization weights. For each stimulus presentation, the divisive normalization pools are updated as follows: 1) Measure the products of neural responses for each pair of orientation-tuned neurons. 2) Increase the contribution of neuron i to the divisive normalization pool of neuron j in proportion to this product for pair ij, minus its long-term expected value (fixed for each neuron pair, determined by their relative tuning). When simulated on a biased ensemble of rapidly flashed gratings, the model produces a combination of tuning-curve suppression and preferred-orientation shifts that are in close agreement with those observed in the neural population. We discuss this model, as well as several alternatives, and describe possible implications for the functional role of adaptation in sensory processing.
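
The update rule in this abstract is explicit enough to sketch directly. The learning rate, the non-negativity clip, and the variable names below are our assumptions; the rest follows the two numbered steps above:

 import numpy as np
 
 def update_normalization_pool(W, r, expected_products, lr=0.01):
     """One Hebbian update of the divisive-normalization weights:
     W[j, i] grows in proportion to the response product r[i] * r[j]
     minus its long-term expected value for that pair."""
     W = W + lr * (np.outer(r, r) - expected_products)
     return np.clip(W, 0.0, None)   # non-negativity is an added assumption
 
 def normalized_response(drive, W, sigma=1.0):
     """Divisive normalization: each neuron's drive divided by its pool."""
     return drive / (sigma + W @ drive)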

11 Regularity-driven Texture Perception

Yanxi Liu, Penn State University

http://www.cse.psu.edu/~yanxi/

Texture has been a classic research topic across many different academic and application fields, from computer vision and computer graphics to human vision. Static or dynamic textures are traditionally formulated as a visual (or tactile) phenomenon associated with some type of statistical distribution stability. Motivated by the well-defined 2D crystallographic groups (wallpaper groups), contrary to common practice, we establish a continuous texture regularity spectrum originating from its most regular form – wallpaper patterns. By capturing the statistical departures from the origin, we demonstrate effective machine perception (analysis, synthesis and manipulation) of real world textures, near-regular textures (NRT) in particular. In this talk, I will report our methodology, findings, partly validated by crowdsourcing, and applications in dynamic texture tracking and urban scene understanding.

May

7 VSS Practice Day 1

Chris Baldassano (Fei-Fei Li Lab, poster) Supervoxel parcellation of visual cortex connectivity

Michelle Greene (Fei-Fei Li Lab, talk, 15 min) Human estimates of object frequency are frequently over-estimated

Marius Cătălin Iordan (Fei-Fei Li lab, talk, 15 min) Locally-Optimized Inter-Subject Prediction of Functional Cortical Regions

14 VSS Practice Day 2

Ariel Rokem (Wandell Lab, talk, 20 min) Measuring and modeling diffusion and white matter tracts

Emily Cooper (Norcia Lab, talk, 15 min) Perceived depth in natural images reflects encoding of low-level depth statistics

Jacek Dmochowski (Norcia Lab, talk, 15 min) Neural dynamics of fine direction-of-motion discrimination

21 No Vision Lunch (VSS)

27 VSS Rehash

April

2 Dora Hermes, Stanford University

Stimulus dependence of gamma oscillations in human visual cortex

9 Walter Schneider, University of Pittsburgh

http://schneiderlab.lrdc.pitt.edu/ | http://HDFT.info

16 Swaroop Guntupalli, Dartmouth College

Inter-subject hyperalignment of local representational spaces

We have previously developed a functional alignment method called 'hyperalignment' that exploits the rich stimulation provided by a movie to align the representational space of a brain region (ventral temporal cortex) across subjects in a way that generalizes across studies. We extend this framework to the whole brain to derive a single matrix that transforms a subject's data into a common space with local neural representational spaces of multiple cortical systems aligned across subjects. We validated the generalization of these aligned common spaces in three different experiments at three levels of information representation: 1. early visual maps (V1, V2, V3), 2. category-selective regions (faces, places, objects, bodies), 3. multivariate patterns representing complex naturalistic information like movie scenes and animal species. We further validated hyperalignment in the auditory domain using naturalistic auditory stimulation (music), and present evidence that a stimulus reconstruction model built in a hyperalignment-derived common model space performs as well as a reconstruction model built for each subject separately. These findings suggest that whole brain hyperalignment can facilitate mapping cortical responses from different individuals to a common template while preserving fine-scale information, and provides an explicit, computational account for their topographic variability across individual brains. Such a common template populated by response patterns pertaining to different perceptual and cognitive states aggregated across different subjects from different studies has the potential to serve as a functional brain atlas.
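
The core step of hyperalignment is an orthogonal Procrustes problem: find the rotation that best maps one subject's time-by-voxel responses onto a reference space. A minimal sketch; the full procedure described above iterates this across subjects and, for whole-brain alignment, across local neighborhoods:

 import numpy as np
 from scipy.linalg import orthogonal_procrustes
 
 def align_to_reference(subject_data, reference):
     """Orthogonal transform mapping one subject's (time x voxel) movie
     responses onto a reference space; returns the aligned data and the
     transformation matrix."""
     R, _ = orthogonal_procrustes(subject_data, reference)
     return subject_data @ R, R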

23 Jochen Braun, Professor of Cognitive Biology, Magdeburg, Germany

Dynamics of visual perception and collective neural activity

Visual perception has all the hallmarks of an ongoing, cooperative-competitive process: probabilistic outcome, self-organization, order-disorder transitions, multi-stability, and hysteresis. Accordingly, it is tempting to speculate that the underlying collective neural activity performs exploratory attractor dynamics (spontaneous transitions between distinct steady-states), perhaps at multiple spatial and temporal scales. Here I summarize our recent investigations of this dynamical hypothesis. In each case, a careful empirical study of perceptual dynamics fully constrains an idealized model of the stochastic dynamics of collective neural activity: The path-dependence of motion grouping (e.g., when motion coherence follows a random walk) reveals the effective energy landscape and relaxation time of grouping percepts, experimentally confirming the simultaneous presence of distinct attractor states. The scalar property of perceptual dominance times is readily explained by stochastic accumulation of activity across multiple independent nodes (idealized cortical columns), but not by other kinds of stochastic processes (e.g., diffusion-to-bound). The paradoxical input dependence of perceptual dominance in multi-stable phenomena (‘Levelt’s propositions’) constrains concurrent processing at two levels: stochastic accumulation of evidence by independent lower-level nodes, and cooperative-competitive interactions between tightly coupled upper-level nodes. This experimentally derived architecture maps naturally onto the well-known ‘saliency map’ and ‘predictive coding’ schemes of visual processing. I conclude that the dynamical hypothesis outlined above permits a particularly close and direct back-and-forth between perceptual experiment and computational theory and thus has the potential to dramatically accelerate our progress in understanding visual function.
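
The attractor picture described here is often illustrated with Langevin dynamics in a double-well energy landscape, where noise-driven transitions between wells stand in for perceptual switches. A minimal simulation sketch (the specific potential and noise level are illustrative, not taken from the talk):

 import numpy as np
 
 def double_well_langevin(n_steps=200000, dt=0.01, noise=0.5, seed=0):
     """Stochastic dynamics in a double-well energy landscape
     E(x) = x**4 / 4 - x**2 / 2; spontaneous transitions between the
     two wells mimic switches in a bistable display."""
     rng = np.random.default_rng(seed)
     x = np.empty(n_steps)
     x[0] = 1.0
     for t in range(1, n_steps):
         drift = x[t - 1] - x[t - 1] ** 3          # -dE/dx
         x[t] = x[t - 1] + dt * drift + np.sqrt(dt) * noise * rng.normal()
     return x
 
 # Dominance durations correspond to intervals between sign changes of x.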

30 Jason Yeatman, PhD defense, Stanford University

March

5 Nanjie (Nathan) Gong, Hong Kong University

Probing brain microstructural changes using non-Gaussian diffusion MRI

Diffusion tensor imaging (DTI) has already been extensively used to probe microstructural alterations in white matter tracts, but scarcely in deep gray matter. Diffusional kurtosis imaging (DKI) is a mathematical extension of DTI, which is suggested to more comprehensively mirror tissue microstructure, particularly in isotropic tissues such as gray matter. We utilized the DKI method and a white-matter model that provided metrics with explicit neurobiological interpretations in healthy participants (58 in total, ages 25 to 84 years). Results suggested that diffusional kurtosis can provide measurements in a new dimension that were complementary to diffusivity metrics. Kurtosis together with diffusivity can more comprehensively characterize microstructural compositions and age-related changes than diffusivity alone. Combined with appropriate models, DKI has the promise to elucidate neurobiological alterations underlying a variety of neurological diseases.
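
DKI extends the DTI signal model with a quadratic kurtosis term, S(b) = S0 * exp(-b*D + b^2 * D^2 * K / 6). A sketch of fitting this standard representation to one voxel's multi-b-value data; whether the study fit exactly this form is an assumption:

 import numpy as np
 from scipy.optimize import curve_fit
 
 def dki_signal(b, s0, D, K):
     """Diffusional kurtosis signal representation:
     S(b) = S0 * exp(-b*D + b**2 * D**2 * K / 6)."""
     return s0 * np.exp(-b * D + (b ** 2) * (D ** 2) * K / 6.0)
 
 b = np.array([0., 500., 1000., 1500., 2000., 2500.])   # s/mm^2
 signal = dki_signal(b, 1.0, 1.0e-3, 0.9)               # synthetic voxel
 (s0, D, K), _ = curve_fit(dki_signal, b, signal, p0=(1.0, 1e-3, 1.0))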

19 Emily Cooper, Stanford University

Humans use luminance cues to judge depth when viewing natural scenes

Seeing in 3D is typically understood as relying on a patchwork of visual depth cues. Accessing these cues requires computations that have challenged computer-vision algorithms and could only feasibly be performed by late-stage neural integration mechanisms. However, statistical analyses of natural scenes have revealed low-level luminance patterns that are predictive of distances, and that could be accessed with a low computational cost. For example, darker points tend to be farther away than brighter points in natural scenes, and this pattern is reflected in V1 cell tunings. In the current work, we are (1) investigating how this luminance cue might be used by the visual system and (2) testing whether perceptual depth judgments are affected by this cue.
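
The natural-scene statistic at issue is simple to compute given a co-registered image and range map. A sketch; the inputs are assumed placeholder arrays, not data from the study:

 import numpy as np
 
 def luminance_depth_correlation(luminance, depth):
     """Correlation between pixel luminance and pixel distance in a
     co-registered image + range map; the statistic described above
     predicts a negative value (darker points tend to be farther away)."""
     return np.corrcoef(luminance.ravel(), depth.ravel())[0, 1]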

26 Kirsten Dalrymple, University of Minnesota, http://www.faceblind.org/social_perception/Kirsten.html

Abnormal face perception as a tool for understanding the function and development of the human face processing system

I will begin my talk by providing a brief overview of the face processing system and a review of neuropsychological studies that I have conducted to investigate its structure and function. I will then discuss my primary area of research, which focuses on impaired face recognition in children and adults with developmental prosopagnosia (DP). DP is defined by severe face recognition difficulties due to the failure to develop the necessary mechanisms for processing faces. I will demonstrate how studying behavioural dissociations in DP allows us to make inferences about 1) functional dissociations within the face processing system, 2) the normal and abnormal development of the system, and 3) the developmental trajectory of DP itself.

February

5 E.J. Chichilnisky, Stanford University

The elementary visual signal in primate retina I

Computations in the nervous system are commonly understood in terms of how a neuron aggregates inputs from a discrete collection of other neurons. However, it has been difficult to examine such computations directly, by stimulating the inputs to a neuron individually and in combinations, and examining the resulting output. We recently developed methods to map the visual receptive fields of populations of ganglion cells in the primate retina at the resolution of the cone photoreceptor lattice. I will describe here how we have extended this approach to examine how the signal from an individual cone is represented in the retinal output, and to dissect how ganglion cells combine inputs from different cones across space.

12 E.J. Chichilnisky, Stanford University

The elementary visual signal in primate retina II

19 Christoph Leuze, Stanford Radiology

Layer-Specific Intracortical Connectivity Revealed with Diffusion MRI

In this work, we observed that the tangential diffusion component is orientationally coherent at the human cortical surface. Using diffusion magnetic resonance imaging (dMRI), we have succeeded in tracking intracortical fiber pathways running tangentially within the cortex. In contrast with histological methods, which reveal little regarding 3-dimensional organization in the human brain, dMRI delivers additional understanding of the layer dependence of the fiber orientation. A postmortem brain block was measured at very high angular and spatial resolution. The dMRI data had adequate resolution to allow analysis of the fiber orientation within 4 notional cortical laminae. We distinguished a lamina at the cortical surface where diffusion was tangential along the surface, a lamina below the surface where diffusion was mainly radial, an internal lamina covering the Stria of Gennari, where both strong radial and tangential diffusion could be observed, and a deep lamina near the white matter, which also showed mainly radial diffusion with a few tangential compartments. The measurement of the organization of the tangential diffusion component revealed a strong orientational coherence at the cortical surface.

January

22 Journal Club: Lafer-Sousa R, Conway BR

Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex

Visual-object processing culminates in inferior temporal cortex (IT). To assess the organization of IT, we measured functional magnetic resonance imaging responses in alert monkeys to achromatic images (faces, fruit, bodies and places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations, color-biased regions spanned mid-peripheral representations and place-biased regions overlapped peripheral representations. These results show that IT comprises parallel, multi-stage processing networks subject to one organizing principle.

29 Silvio Savarese, Stanford University

Perceiving the 3D world from Images

When we look at an environment such as a coffee shop, we don't just recognize the objects in isolation, but rather perceive a rich scenery of the 3D space, its objects and all the relations among them. This allows us to effortlessly navigate through the environment, or to interact and manipulate objects in the scene with amazing precision. The past several decades of computer vision research have, on the other hand, addressed the problems of 2D object recognition and 3D space reconstruction as two independent ones. Tremendous progress has been made in both areas. However, while methods for object recognition attempt to describe the scene as a list of class labels, they often make mistakes due to the lack of a coherent understanding of the 3D spatial structure. Similarly, methods for scene 3D modeling can produce accurate metric reconstructions but cannot put the reconstructed scene into a semantically useful form. A major line of work from my group in recent years has been to design intelligent visual models that understand the 3D world by integrating 2D and 3D cues, inspired by what humans do. In this talk I will introduce a novel paradigm whereby objects and 3D space are modeled in a joint fashion to achieve a coherent and rich interpretation of the environment. I will start by giving an overview of our research for detecting objects and determining their geometric properties such as 3D location, pose or shape. Then, I will demonstrate that these detection methods play a critical role for modeling the interplay between objects and space which, in turn, enables simultaneous semantic reasoning and 3D scene reconstruction.

Past schedule (Archives)

[edit] 2013 Schedule

[edit] July

8 Rainer Goebel, Unraveling Brain Mechanisms of Columnar and Laminar Organization Using High-Resolution Functional Brain Imaging at Ultra-High Magnetic Fields

Please note that this talk will be on a MONDAY


17 John A. Perrone, School of Psychology, The University of Waikato, Hamilton, New Zealand

Extracting 3-D depth information from 2-D movies using the properties of neurons along the primate visual motion pathway.

Mobile robots and autonomous vehicle systems currently rely on a multitude of sensors to extract information about the environment in front of them. It would be advantageous to be able to use a single video camera to capture this information. 3-D depth information can be determined from a movie if the 2-D image motion can be measured and if one has an estimate of the camera’s own motion (heading direction and rotation). However, estimating the camera’s heading and rotation from its video output is a difficult problem with a long history, and it is also difficult to obtain accurate measurements of the image motion occurring in the video. Despite these obstacles, many animals, including humans, are able to solve this ‘depth from motion’ problem and can safely navigate through complex environments using just the moving 2-D images projected onto the back of a single eye. It remains an open question how this is achieved. Over the years we have developed a model of the V1-Middle Temporal (MT/V5)-Medial Superior Temporal (MST) motion pathway, which is considered to be the locus of many of the mechanisms responsible for extracting depth from motion. We have recently completed successful initial tests of a system that uses the properties of V1, MT and MST neurons to extract depth from 2-D video sequences. This scheme represents a possible neural mechanism by which our visual system is able to generate depth maps that correspond to our perception of a 3-dimensional world.
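
The underlying geometry is the standard instantaneous motion model of Longuet-Higgins and Prazdny: image flow is the sum of a depth-independent rotational field and a translational field that scales with inverse depth. Below is a minimal sketch of recovering depth from a single flow vector once camera motion is known; it illustrates the geometry only, not Perrone's neural (V1/MT/MST) implementation, and all variable names are illustrative.

  import numpy as np

  def depth_from_flow(x, y, u, v, T, omega):
      """Recover depth at normalized image point (x, y) from measured
      flow (u, v), given known camera translation T and rotation omega
      (instantaneous motion model, focal length 1, static scene)."""
      Tx, Ty, Tz = T
      wx, wy, wz = omega
      # Rotational flow component is depth-independent; subtract it.
      u_rot = x * y * wx - (1 + x**2) * wy + y * wz
      v_rot = (1 + y**2) * wx - x * y * wy - x * wz
      ut, vt = u - u_rot, v - v_rot
      # Translational flow scales with 1/Z; solve by least squares.
      a, b = x * Tz - Tx, y * Tz - Ty
      inv_z = (a * ut + b * vt) / (a**2 + b**2)
      return 1.0 / inv_z

  # Sanity check: synthesize flow for a point at depth 4, recover it.
  x, y, Z = 0.2, -0.1, 4.0
  T, omega = (0.1, 0.0, 1.0), (0.01, 0.02, 0.0)
  u = (x * T[2] - T[0]) / Z + x * y * omega[0] - (1 + x**2) * omega[1] + y * omega[2]
  v = (y * T[2] - T[1]) / Z + (1 + y**2) * omega[0] - x * y * omega[1] - x * omega[2]
  print(depth_from_flow(x, y, u, v, T, omega))  # -> 4.0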


22 (Monday) Patrick Cavanagh, Université Paris Descartes

Perceived location

How do we know where things are? The standard explanation for perceived position has always been that each neuron responds only to a particular location on the retina, so that, after correcting for movements of our eyes and head, there should really be no problem. However, perceived location can deviate dramatically from retinal location, showing that this simple explanation cannot be true. These deviations arise when the visual system predicts where targets should be; in such cases we see the predicted location, not the retinal one. We have found behavioral evidence of attention benefits at these predicted locations, and we now show that when targets are moving, they are seen ahead of their actual retinal location because they are seen at their predicted next location. These results suggest that a core function of visual attention is to provide the position code for attended targets, and the errors of prediction then allow us to use position perception as a new tool for studying attention. Evidence suggests that underlying both the attention and position representations are saccade maps acting as the “master map of locations” for eye movements, for attention, and for perception. Interestingly, if the saccade system specifies perceived location, it reverses the usual assumption that action is guided by perception and suggests instead that perception is determined by action.

Please note that this talk will take place on Monday, July 22nd at 11 AM, not on a Wednesday.

[edit] June

5 Kendrick Kay, Stanford

GLMdenoise: A fast, automated technique for denoising task-based fMRI data


12 Jeremy Freeman, NYU

A functional and perceptual signature of the second visual area

The functions of different cortical areas are determined by the unique response properties of their neurons. There is no generally accepted account of the second visual cortical area (V2), partly because no simple response properties robustly distinguish V2 neurons from those in primary visual cortex (V1). I will describe an inter-disciplinary attack on V2, in which we use computational principles to build synthetic stimuli with naturalistic structure, and then measure neuronal responses to those stimuli in areas V1 and V2 of both humans and non-human primates. Responses to these stimuli reliably and robustly differentiate V2 from V1. The responses in V2, but not V1, predict perceptual sensitivity to the same stimuli as measured in human observers. These findings situate V2 along a cascade of cortical computations that support the representation of naturally occurring patterns and objects.
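
For intuition about what “naturalistic structure” means here: the key comparison in this line of work is between textures carrying natural higher-order statistics and noise images matched to them in power spectrum. Below is a minimal numpy sketch of phase scrambling, which preserves an image's power spectrum (roughly equating second-order, V1-like responses) while destroying higher-order structure. This illustrates the general idea only; it is not the published synthesis procedure, which matches joint wavelet statistics.

  import numpy as np

  def phase_scramble(img, seed=0):
      """Randomize an image's Fourier phases while keeping its power
      spectrum. Borrowing the phase spectrum of a real white-noise
      image keeps the phases conjugate-symmetric, so the inverse FFT
      is (numerically) real-valued."""
      rng = np.random.default_rng(seed)
      noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
      amplitude = np.abs(np.fft.fft2(img))
      return np.fft.ifft2(amplitude * np.exp(1j * noise_phase)).real

  img = np.random.rand(128, 128)   # stand-in for a texture photograph
  noise = phase_scramble(img)
  # Power spectra match even though the image structure is gone:
  print(np.allclose(np.abs(np.fft.fft2(img)), np.abs(np.fft.fft2(noise))))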

26 Ativ Levi, Berkeley

The neural basis of visual perception in patients with depression

Major depressive disorder (MDD) is a syndrome involving impairment of cognitive functions such as memory, attention, and plasticity. We found that the perceptual level, in tasks such as filling-in, is affected as well. The abnormal performance may be due to impaired neuronal excitation-inhibition balance at low levels of the visual areas, or to cognitive impairments such as shifts in decision criterion or attention. The aim of this study was to explore whether the neural networks underlying perceptual processing are impaired in patients with depression, and more specifically to determine whether there is a perceptual loss, in addition to the well-known cognitive loss, in patients with MDD. We used several well-defined methods to study neural interactions in patients and outpatients with MDD. We studied lateral interactions using a Yes/No paradigm with limited presentation time (transient), as well as a static lateral masking paradigm (contrast detection/discrimination task). We measured internal noise levels; used repetitive transcranial magnetic stimulation (rTMS) to test whether decision-making alterations (changes in internal criterion) in the filling-in phenomenon stem from low- or high-level brain areas; and explored whether medications that increase inhibition may influence the balance between neuronal excitation and inhibition in low-level perceptual areas. Altogether, this comprehensive data set enabled us to better understand the origin of the perceptual deficit in MDD.

[edit] May

1 Evelina Fedorenko, MIT

A novel framework for a neural architecture of language

What are the cognitive and neural mechanisms underlying the uniquely and universally human capacity for language? Since Broca's and Wernicke's seminal discoveries in the 19th century, a broad array of brain regions has been implicated in linguistic comprehension, production and learning, spanning the frontal, temporal and parietal lobes, both hemispheres, and subcortical and cerebellar structures. However, characterizing the precise contributions of these different structures to language has proven challenging. Furthermore, although evidence from investigations of patients with brain damage has long suggested some degree of independence between language and other high-level cognitive functions, many neuroimaging studies have argued that brain regions implicated in language are also engaged in many non-linguistic processes. In this talk I will argue that language is supported by the joint engagement of two functionally and computationally distinct brain systems. The first comprises the classic “language regions” on the lateral surfaces of the left frontal and temporal lobes. Using individual-subject analysis methods that surpass traditional neuroimaging methods in sensitivity and functional resolution (Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, 2012; Saxe et al., 2006), I have shown that these brain regions are specifically engaged in language processing (Fedorenko et al., 2011; see also Monti et al., 2012). The second is the fronto-parietal “multiple demand” network, a set of regions that are engaged across a wide range of cognitively demanding tasks (e.g., Duncan, 2001, 2010). Most past neuroimaging work on language processing has not explicitly distinguished between these two systems, especially in the frontal lobes, where subsets of each system reside side by side within the region referred to as “Broca’s area” (Fedorenko et al., 2012). Using a variety of research methods, I am now beginning to characterize the important roles of both domain-specific and domain-general mechanisms in language.
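
For readers unfamiliar with the individual-subject approach: the core move is to define functional regions of interest (fROIs) separately in each subject, by selecting the voxels most responsive to a localizer contrast within a group-level constraint, and then to quantify responses in independent data. Below is a schematic numpy sketch of that selection logic under simplifying assumptions; the variable names are hypothetical, and the real method (Fedorenko et al., 2010) uses parcels derived from probabilistic overlap maps and cross-validation across runs.

  import numpy as np

  def froi_response(localizer_t, test_betas, parcel_mask, top_frac=0.10):
      """Subject-specific fROI sketch: within a group-level parcel,
      pick this subject's most localizer-responsive voxels, then
      average responses from independent (held-out) data there.

      localizer_t : (n_voxels,) localizer contrast t-values
                    (e.g., sentences > nonwords) from half the runs.
      test_betas  : (n_conditions, n_voxels) estimates from the other
                    runs, keeping selection and measurement independent.
      parcel_mask : (n_voxels,) boolean group-level parcel.
      """
      idx = np.flatnonzero(parcel_mask)
      n_top = max(1, int(top_frac * idx.size))
      froi = idx[np.argsort(localizer_t[idx])[-n_top:]]
      return test_betas[:, froi].mean(axis=1)

  # Toy demo with simulated numbers (shapes only, not real fMRI data).
  rng = np.random.default_rng(1)
  t_vals = rng.standard_normal(5000)
  betas = rng.standard_normal((3, 5000))   # 3 conditions
  parcel = np.zeros(5000, dtype=bool)
  parcel[:400] = True
  print(froi_response(t_vals, betas, parcel))  # per-condition response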

15 No Vision Lunch (VSS)

22 VSS stories

29 Why microglia matter (TINS review) / MRI fingerprinting: http://www.nature.com/nature/journal/v495/n7440/full/nature11971.html

Led by Jason and Brian and Aviv

[edit] 2012 Schedule

[edit] 2011 Schedule

[edit] 2010 Schedule

[edit] 2009 Schedule

[edit] 2008 Schedule

[edit] 2007 Schedule

[edit] 2006 Schedule

[edit] 2005 Schedule
