Papers Classifiers



In this section you will find various papers on the use of classifiers in fMRI. Articles are posted in no particular order. If you're aware of a paper that relates to the ones listed here, please take a moment to add it.

Classifiers in fMRI

Walther, D. B., Caddigan, E., Fei-Fei, L., & Beck, D. M.
The Journal of Neuroscience 2009
University of Illinois
Posted by: AMR 1.5.2010
    Human subjects are extremely efficient at categorizing natural scenes, despite the fact that different classes of natural scenes often share similar image statistics. Thus far, however, it is unknown where and how complex natural scene categories are encoded and discriminated in the brain. We used functional magnetic resonance imaging (fMRI) and distributed pattern analysis to ask which regions of the brain can differentiate natural scene categories (such as forests vs. mountains vs. beaches). Using completely different exemplars of six natural scene categories for training and testing ensured that the classification algorithm was learning patterns associated with the category in general and not with specific exemplars. We found that area V1, the parahippocampal place area (PPA), retrosplenial cortex (RSC), and the lateral occipital complex (LOC) all contain information that distinguishes among natural scene categories. More importantly, correlations with human behavioral experiments suggest that the information present in the PPA, RSC, and LOC is likely to contribute to natural scene categorization by humans. Specifically, error patterns of predictions based on fMRI signals in these areas were significantly correlated with the behavioral errors of the subjects. Furthermore, both behavioral categorization performance and predictions from the PPA exhibited a significant decrease in accuracy when scenes were presented upside down. Together these results suggest that a network of regions, including the PPA, RSC, and LOC, contributes to the human ability to categorize natural scenes.
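The key methodological point above — training and testing on completely different exemplars — amounts to a leave-one-exemplar-out cross-validation. Here is a minimal sketch of that scheme using scikit-learn on synthetic "voxel patterns"; it is not the authors' code, and all names, shapes, and noise levels are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_categories, n_exemplars, n_voxels = 6, 4, 50

# Each category gets a distinct mean pattern; exemplars are noisy variants of it.
means = rng.normal(0, 1, (n_categories, n_voxels))
X, y, exemplar = [], [], []
for cat in range(n_categories):
    for ex in range(n_exemplars):
        X.append(means[cat] + rng.normal(0, 0.5, n_voxels))
        y.append(cat)
        exemplar.append(ex)  # exemplar index: train and test folds never share it
X, y, exemplar = np.array(X), np.array(y), np.array(exemplar)

# LeaveOneGroupOut keeps each exemplar entirely in either train or test, so
# the classifier must generalize across exemplars rather than memorize them.
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=exemplar):
    clf = LinearSVC().fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))
print(np.mean(scores))  # well above the 1/6 chance level on this synthetic data
```

Above-chance accuracy under this split is what licenses the paper's claim that the decoded information concerns the category in general, not specific images.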

Schwarzlose, Rebecca F; Swisher, Jascha D; Dang, Sabin; Kanwisher, Nancy
PNAS 2008, doi:10.1073/pnas.0800431105
Posted by: AMR 1.5.2010
    Since Ungerleider and Mishkin [Ungerleider LG, Mishkin M (1982) Two cortical visual systems. Analysis of Visual Behavior, eds Ingle DJ, Goodale MA, Mansfield RJW (MIT Press, Cambridge, MA), pp 549-586] proposed separate visual pathways for processing object shape and location, steady progress has been made in characterizing the organization of the two kinds of information in extrastriate visual cortex in humans. However, to date, there has been no broad-based survey of category and location information across all major functionally defined object-selective regions. In this study, we used an fMRI region-of-interest (ROI) approach to identify eight regions characterized by their strong selectivity for particular object categories (faces, scenes, bodies, and objects). Participants viewed four types of stimuli (faces, scenes, bodies, and cars) appearing in each of three different spatial locations (above, below, or at fixation). Analyses based on the mean response and voxelwise patterns of response in each ROI reveal location information in almost all of the known object-selective regions. Furthermore, category and location information can be read out independently of one another, such that most regions contain both position-invariant category information and category-invariant position information. Finally, we find substantially more location information in ROIs on the lateral surface of the brain than in those on the ventral surface, even though these regions carry equal amounts of category information. Although the presence of both location and category information in most object-selective regions argues against a strict physical separation of processing streams for object shape and location, the ability to extract position-invariant category information and category-invariant position information from the same neural population indicates that form and location information nonetheless remain functionally independent.
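"Position-invariant category information" is typically tested by training a classifier on patterns evoked at one stimulus location and testing it at another. The sketch below illustrates that logic with scikit-learn on synthetic data in which each pattern is a sum of a category component, a location component, and noise; everything here is an illustrative assumption, not the paper's analysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 100
categories = ["face", "scene", "body", "car"]
cat_patterns = rng.normal(0, 1.0, (4, n_voxels))  # category-specific component
loc_patterns = rng.normal(0, 0.5, (3, n_voxels))  # location-specific component

def trials(loc, n=50):
    """Simulated trials at one location: category + location + noise."""
    y = rng.integers(0, 4, n)
    X = cat_patterns[y] + loc_patterns[loc] + rng.normal(0, 0.3, (n, n_voxels))
    return X, y

X_above, y_above = trials(loc=0)
X_below, y_below = trials(loc=1)

# Train at one location, test at the other: if category information survives
# the change of location, accuracy beats the 25% chance level.
clf = LogisticRegression(max_iter=1000).fit(X_above, y_above)
acc = clf.score(X_below, y_below)
print(acc)  # above 0.25 when the category signal generalizes across location
```

The symmetric test (train on location labels, test across categories) probes category-invariant position information in the same way.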

Kenneth A. Norman, Sean M. Polyn, Greg J. Detre and James V. Haxby
Trends in Cognitive Sciences 2006
Princeton University
Posted by: AMR 1.5.2010
    A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
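The canonical MVPA pipeline the review describes — extract multi-voxel patterns, normalize each voxel, train a classifier, and estimate decoding accuracy by cross-validation — can be sketched in a few lines with scikit-learn. This is a generic illustration on synthetic data, not any specific study's analysis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 40
y = rng.integers(0, 2, n_trials)                   # two mental states to decode
signal = np.where(y[:, None] == 1, 0.8, -0.8)      # weak state-dependent signal
X = signal + rng.normal(0, 1, (n_trials, n_voxels))  # noisy voxel patterns

# z-score voxels, fit a linear classifier, cross-validate decoding accuracy.
pipe = make_pipeline(StandardScaler(), LinearSVC())
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(acc)  # above the 50% chance level: the pattern carries state information
```

The point the review stresses is that this multivariate readout can detect distributed information that voxel-by-voxel univariate contrasts miss.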

John-Dylan Haynes and Geraint Rees
Nat Rev Neurosci. 2006 Jul;7(7):523-34.
Max Planck Institute for Cognitive and Brain Sciences
Posted by: AMR 1.5.2010
    Recent advances in human neuroimaging have shown that it is possible to accurately decode a person's conscious experience based only on non-invasive measurements of their brain activity. Such 'brain reading' has mostly been studied in the domain of visual perception, where it helps reveal the way in which individual experiences are encoded in the human brain. The same approach can also be extended to other types of mental state, such as covert attitudes and lie detection. Such applications raise important ethical issues concerning the privacy of personal thought.

Haxby, J V, Gobbini, M I, Furey, M L, Ishai, A, Schouten, J L, Pietrini, P
Science 2001
Posted by: AMR 1.5.2010
    The functional architecture of the object vision pathway in the human brain was investigated using functional magnetic resonance imaging to measure patterns of response in ventral temporal cortex while subjects viewed faces, cats, five categories of man-made objects, and nonsense pictures. A distinct pattern of response was found for each stimulus category. The distinctiveness of the response to a given category was not due simply to the regions that responded maximally to that category, because the category being viewed also could be identified on the basis of the pattern of response when those regions were excluded from the analysis. Patterns of response that discriminated among all categories were found even within cortical regions that responded maximally to only one category. These results indicate that the representations of faces and objects in ventral temporal cortex are widely distributed and overlapping.
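The pattern analysis described above was correlation-based: split the data in half, average each category's pattern within each half, and label a test pattern with the training category whose mean pattern it correlates with best. A minimal sketch on synthetic data (not the authors' code; all sizes and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_categories, n_reps, n_voxels = 4, 10, 30
means = rng.normal(0, 1, (n_categories, n_voxels))
# trials[c, r] is one noisy pattern for category c
trials = means[:, None, :] + rng.normal(0, 0.7, (n_categories, n_reps, n_voxels))

half1 = trials[:, : n_reps // 2].mean(axis=1)  # training means, one per category
half2 = trials[:, n_reps // 2 :].mean(axis=1)  # test means, one per category

def classify(pattern, templates):
    """Pick the template with the highest Pearson correlation."""
    r = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(r))

predictions = [classify(half2[c], half1) for c in range(n_categories)]
print(predictions)  # [0, 1, 2, 3]: each held-out mean matches its own category
```

The paper's distinctive move was to repeat this test after excluding each category's maximally responsive voxels, showing that the remaining, weakly responsive voxels still carried category information.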

Hasson, Uri; Levy, Ifat; Behrmann, Marlene; Hendler, Talma; Malach, Rafael
Neuron 2002
Weizmann Institute of Science
Posted by: AMR 1.5.2010
    We have recently proposed a center-periphery organization based on resolution needs, in which objects engaging in recognition processes requiring central-vision (e.g., face-related) are associated with center-biased representations, while objects requiring large-scale feature integration (e.g., buildings) are associated with periphery-biased representations. Here we tested this hypothesis by comparing the center-periphery organization with activations to five object categories: faces, buildings, tools, letter strings, and words. We found that faces, letter strings, and words were mapped preferentially within the center-biased representation. Faces showed a hemispheric lateralization opposite to that of letter strings and words. In contrast, buildings were mapped mainly to the periphery-biased representation, while tools activated both central and peripheral representations. The results are compatible with the notion that center-periphery organization allows the optimal allocation of cortical magnification to the specific requirements of various recognition processes.

Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan
Neuroinformatics 2009
University of Magdeburg, Magdeburg, Germany
Posted by: AMR 1.5.2010
    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
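PyMVPA organizes fMRI data as a samples-by-features dataset with per-sample "target" (condition) and "chunk" (e.g. scanner run) attributes, then cross-validates a classifier so that folds never mix samples from the same chunk. The sketch below shows that same workflow with scikit-learn on synthetic data; PyMVPA's own API is deliberately not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, GroupKFold
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_runs, trials_per_run, n_voxels = 6, 10, 40
y = rng.integers(0, 2, n_runs * trials_per_run)       # targets (conditions)
runs = np.repeat(np.arange(n_runs), trials_per_run)   # chunk attribute (run)
X = np.where(y[:, None] == 1, 0.7, -0.7) + rng.normal(0, 1, (len(y), n_voxels))

# Leave-one-run-out cross-validation: each fold holds out one whole run, so
# within-run temporal dependencies cannot inflate the accuracy estimate.
acc = cross_val_score(LinearSVC(), X, y, groups=runs,
                      cv=GroupKFold(n_splits=n_runs)).mean()
print(acc)  # above the 50% chance level on this synthetic data
```

Partitioning by run rather than by trial is the standard safeguard in fMRI decoding, and it is the pattern PyMVPA's cross-validation machinery is built around.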

Gijs Joost Brouwer and David J. Heeger
The Journal of Neuroscience 2009
New York University
Posted by: AMR 1.5.2010
    How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.
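The "forward model" approach mentioned above treats each voxel's response as a weighted sum of a small set of idealized color channels: fit the weights on training data, then invert the model to reconstruct channel responses, and hence a hue, from new data. Below is a NumPy sketch of that idea on synthetic data; the channel count, tuning-curve shape, and noise levels are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
n_channels, n_voxels, n_train = 6, 50, 120
centers = np.arange(n_channels) * 2 * np.pi / n_channels  # channel preferred hues

def channel_responses(hues):
    """Half-rectified, sharpened cosine tuning curves (n_channels x n_stimuli)."""
    return np.maximum(0, np.cos(hues[None, :] - centers[:, None])) ** 5

W_true = rng.uniform(0, 1, (n_voxels, n_channels))  # each voxel's channel weights
train_hues = rng.uniform(0, 2 * np.pi, n_train)
C_train = channel_responses(train_hues)
B_train = W_true @ C_train + rng.normal(0, 0.1, (n_voxels, n_train))

# Fit voxel weights by least squares: B ~ W @ C.
W_hat = np.linalg.lstsq(C_train.T, B_train.T, rcond=None)[0].T

def decode_hue(b):
    """Invert the fitted model, then read out hue as a population vector."""
    c_hat = np.linalg.lstsq(W_hat, b, rcond=None)[0]
    return np.angle(np.sum(c_hat * np.exp(1j * centers))) % (2 * np.pi)

true_hue = 1.0
b_test = (W_true @ channel_responses(np.array([true_hue]))[:, 0]
          + rng.normal(0, 0.1, n_voxels))
print(decode_hue(b_test))  # close to the true hue of 1.0 radians
```

Because the reconstructed channel profile varies smoothly with hue, this model can generalize to stimulus colors never seen in training, which is exactly the advantage over a conventional classifier that the abstract highlights.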
