Functional Primer


This functional neuroimaging primer provides a conceptual overview of the steps involved in analyzing neuroimaging timeseries data (functional MRI or fMRI).

There is a separate diffusion imaging primer, but many of the steps for analyzing fMRI and diffusion MR (dMRI) data are the same, so we cover both fMRI and dMRI here, noting important differences as they arise.


Create Time Series from k-space data

For stock vendor sequences (such as EPI), image reconstruction is fully automatic and the resulting images appear in the vendor's image database. You can retrieve these images from that database as DICOM files at the scanner.

For non-vendor sequences, you might need to run a separate 'recon' program to reconstruct the images from the raw k-space data that the scanner collects. At Stanford's Lucas Center, we often use Gary Glover's spiral sequence, which by default runs Gary's 'grecons' program to convert the raw k-space data (GE's 'P-file') and produce:

  1. PNNNNN.7.mag: raw magnitude image data
  2. PNNNNN.7.hdr: dump of raw GE header info
  3. E...PNNNNN.7: human-readable header info

These data files do not go into the scanner's image database; instead they are transferred to the analysis computer by file transfer protocols. At the Lucas center, we transfer these files directly from the scanner's 'raw' data directory.

Reformat raw images

See: mrLoad in mrVista2 and mrInitRet

The functional MRI program mrInit reads E and P.mag files and generates a set of Matlab (.mat) files. We intend to move to the more standard NIFTI file format; the fMRI timeseries for each scan will then be stored in one 4-d NIFTI file. The code to manage different file types has been written (Rory) and is fairly advanced. We plan to integrate this function, mrLoad, from mrVista2.

The dMRI analysis pipeline (mrDiffusion) expects a 4-d NIFTI file of the raw diffusion-weighted (DW) images, including a correct scanner-space transform encoded in the header. To process dMRI data, you also need to know the DW strength (specified as the 'b-value') and the list of DW directions that correspond to the raw DW images. mrDiffusion expects these to be stored in b-vals and b-vecs files, the same format that FSL's fdt uses.
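A sketch of the b-vals/b-vecs layout may help: FSL's fdt convention is one text file with a single row of b-values and one with three rows of gradient-direction components, one column per volume. The function name and toy values below are hypothetical, for illustration only.

```python
import io
import numpy as np

def load_bvals_bvecs(bvals_txt, bvecs_txt):
    """Parse FSL-style b-value/b-vector text: one row of b-values and
    three rows of gradient components (x, y, z), one column per volume."""
    bvals = np.loadtxt(io.StringIO(bvals_txt), ndmin=1)
    bvecs = np.loadtxt(io.StringIO(bvecs_txt), ndmin=2)
    assert bvecs.shape == (3, bvals.size), "bvecs must be 3 x nVolumes"
    return bvals, bvecs

# Toy example: one b=0 volume plus two diffusion-weighted directions.
bvals, bvecs = load_bvals_bvecs(
    "0 800 800",
    "0 1 0\n"   # x components
    "0 0 1\n"   # y components
    "0 0 0",    # z components
)
```

In practice you would read the two files from disk rather than strings; the point is only the row/column layout mrDiffusion expects.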

Slice timing correction

See: AdjustSliceTiming([scans], [typeName]) and mrSliceTiming(ts,scan,slice,method)

Slice time correction corrects for differences in the slice-specific acquisition time.

Slices are generally acquired in one of three orderings:

  • ascending: [1 2 3 4 5 6 7 8 9 10]
  • descending: [10 9 8 7 6 5 4 3 2 1]
  • interleaved: [1 3 5 7 9 2 4 6 8 10]

If you acquire a volume of data every 1.5 seconds (TR = 1.5 s), the delay between the first and last acquired slice is nearly the full TR. The delay between spatially adjacent slices is either TR/nSlices (ascending or descending order) or roughly TR/2 (interleaved).
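The slice-by-slice acquisition times for the three orderings can be sketched as follows (a minimal illustration, assuming slices are evenly spaced across one TR; the function name is our own, not mrVista's):

```python
import numpy as np

def slice_times(n_slices, tr, order="ascending"):
    """Acquisition time of each slice, indexed by spatial position,
    assuming n_slices acquisitions evenly spaced across one TR."""
    if order == "ascending":
        seq = np.arange(n_slices)
    elif order == "descending":
        seq = np.arange(n_slices)[::-1]
    elif order == "interleaved":  # odd-numbered slices first, then even
        seq = np.concatenate([np.arange(0, n_slices, 2),
                              np.arange(1, n_slices, 2)])
    else:
        raise ValueError(order)
    times = np.empty(n_slices)
    times[seq] = np.arange(n_slices) * (tr / n_slices)
    return times

# With TR = 1.5 s and 10 slices, spatially adjacent slices are acquired
# TR/10 = 0.15 s apart when ascending, but about TR/2 apart when interleaved.
t = slice_times(10, 1.5, "interleaved")
```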

These timing differences are modest compared to the time scale of the hemodynamic response function (4-6 sec). For many purposes, correcting the slice timing is not essential.

It is possible to approach slice timing correction in two ways.

First, you can interpolate the raw data to correct for these timing differences. The advantage of this approach is that later preprocessing steps, such as smoothing or interpolating across slices (e.g., due to motion correction or spatial normalization), are not affected by slice-time differences. The disadvantage is that the data must be interpolated, and extrapolated at the edges of the time series; the interpolation also slightly smooths the data.

You will need to choose an interpolation method. At the Lucas Center (a) the slice order is ascending and (b) the slice acquisition times are equally spaced in time. These assumptions are not necessarily valid for other sequences. You can check the slice timing sequence for a data set by XXX???XXX. This will be incorporated properly at some point - ask BW/Bob

Second, you can correct for slice timing differences in the final analysis. In a GLM analysis, the design matrix can be adjusted for each slice; in a phase-encoded design, the phase of the best-fitting sinusoid can be adjusted according to each slice's acquisition time. The advantage is that no interpolation of the raw data is required. The disadvantage is that this correction comes late in the processing pipeline, so the slice timing may already have been altered by smoothing and spatial resampling of the data (e.g., due to motion correction or spatial normalization).

When we use multiple shots, the ordering is [1a 2a 3a ... 1b 2b 3b ...], where a and b are two shots. The effective spacing between slice acquisition times is then TR / (number of slices × number of shots). If you have participants who are likely to move (e.g., children), you are probably better off using a single shot. However, you may be working with data from other labs that use multiple shots, or you may have subjects who are unlikely to move, for whom multiple shots would result in nicer-looking data (e.g., Logothetis' 8-shot scans of anaesthetized monkeys).
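The effective-spacing formula above can be checked numerically: each slice is acquired once per shot, and its effective time is the mean of those per-shot acquisition times, so adjacent slices end up TR/(nSlices × nShots) apart. A small sketch (function name hypothetical):

```python
import numpy as np

def multishot_slice_times(n_slices, n_shots, tr):
    """Effective (mean) acquisition time per slice for the ordering
    [1a 2a ... 1b 2b ...]: slice s is acquired at index j*n_slices + s
    in shot j, and its effective time averages over the shots."""
    dt = tr / (n_slices * n_shots)            # time per k-space segment
    shots = np.arange(n_shots)[:, None]
    slices = np.arange(n_slices)[None, :]
    times = (shots * n_slices + slices) * dt  # acquisition index * dt
    return times.mean(axis=0)

# With 10 slices, 2 shots, TR = 1.5 s, effective slice times are spaced
# 1.5 / (10 * 2) = 0.075 s apart.
t = multishot_slice_times(10, 2, 1.5)
```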

Smoothing

To smooth or not to smooth...

Spatial smoothing

Temporal smoothing

Time series motion correction

Many labs do this routinely on every dataset. In the VISTA lab, we usually don't do this unless needed. There are several methods implemented in mrVista.

For dMRI, you will probably need to do eddy-current correction, which might also include motion correction. In fact, the same algorithms can be used for fMRI motion correction and dMRI eddy/motion correction. The only differences are that you might want to limit fMRI motion correction to a rigid-body (6-parameter) transform, while eddy-current distortions require more degrees of freedom (e.g., an affine (12-parameter) transform or specialized constrained non-linear transforms). Also, you must use a robust error estimator, such as mutual information, with dMRI, since the image contrast varies with the different diffusion weighting strengths and directions. In mrDiffusion, we use our implementation of the Rohde et al. (2004, MRM) 14-parameter transform to do motion and eddy-current correction (see DTI_data_pre-processing).
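To see why mutual information is robust to contrast changes, here is a minimal histogram-based estimate of it between two images. This is a textbook sketch, not mrDiffusion's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information between two images:
    high when one image predicts the other, even if their contrasts
    differ, which is why it suits aligning diffusion-weighted volumes."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# An image shares far more information with a contrast-inverted copy of
# itself than with unrelated noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_inverted = mutual_information(img, 1.0 - img)
mi_noise = mutual_information(img, rng.random((64, 64)))
```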

The basic steps for motion correction are:

  • Estimate motion
  • Resample images to correct the motion
  • Find and remove bad images?

fMRI slow-drift trend removal

For most fMRI sequences, the MR image intensity drifts slowly over time. This is often attributed to thermal drift as the gradients heat up, but it may have other causes as well. (Drift in dMRI sequences?)

One approach to dealing with such global signal drift is to simply add a slow-drift term to the model that you fit to the data (see below). In mrVista we often estimate the drift separately from the model fit and explicitly remove it from the time series.
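The explicit-removal approach amounts to fitting a slowly varying function (here a low-order polynomial, one common choice) and subtracting it while keeping the mean. A minimal stand-in for mrVista's detrending options, with a hypothetical function name:

```python
import numpy as np

def remove_slow_drift(ts, tr, poly_order=2):
    """Fit a low-order polynomial drift to one voxel's time series,
    subtract it, and restore the original mean signal level."""
    t = np.arange(ts.size) * tr
    drift = np.polyval(np.polyfit(t, ts, poly_order), t)
    return ts - drift + ts.mean()

# A purely linear drift is removed almost exactly, leaving a flat series.
tr = 1.5
ts = 100.0 + 0.05 * np.arange(200) * tr
flat = remove_slow_drift(ts, tr)
```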

Compute within-subject alignment

Figure: An EPI image (middle) roughly aligned with a T1-weighted high-res anatomy of the same brain (top), based on scanner header information. The alignment is then refined by a mutual information algorithm; in the bottom row, the EPI is shown as a red overlay on the T1-weighted image.

The high-resolution anatomical scan (usually a T1-weighted SPGR or MP-RAGE) forms a common reference space for all the data from an individual subject. By using the reference space, we can combine data across different fMRI scan sessions and between fMRI and dMRI datasets.

Compute alignment to a standard space

In the VISTA lab, we routinely ac-pc align each subject's high-res anatomical. This provides a rough alignment across subjects, useful for finding similar brain regions. However, we usually combine data across subjects using ROI analysis methods.

  • ROI analysis
  • alignment to a template (e.g., MNI space)

Fit a model to the data

To interpret the data, you will need to fit a model.

For fMRI data, many possibilities exist. For example, you might fit a simple sinusoidal model for classic retinotopy data and simple block designs. This model produces three parameters per voxel: the sinusoid amplitude, its phase, and the coherence, which is the amplitude at the stimulus frequency normalized by the amplitudes of the other frequency components and is comparable to a correlation coefficient.
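The three parameters fall out of a Fourier transform of the voxel time series. A minimal sketch (assuming the stimulus completes a whole number of cycles per scan; the function name is ours, and the exact normalization in mrVista may differ in detail):

```python
import numpy as np

def sinusoid_fit(ts, stim_cycles):
    """Amplitude, phase, and coherence at the stimulus frequency;
    stim_cycles is the number of stimulus cycles per scan."""
    ft = np.fft.rfft(ts - ts.mean())
    amps = np.abs(ft)
    amplitude = amps[stim_cycles]
    phase = np.angle(ft[stim_cycles])
    # Coherence: stimulus-frequency amplitude normalized by the energy
    # at all frequencies; behaves like a correlation coefficient (0..1).
    coherence = amplitude / np.sqrt((amps ** 2).sum())
    return amplitude, phase, coherence

# A pure 8-cycles-per-scan response yields coherence near 1 and phase 0.
n = 96
ts = np.cos(2 * np.pi * 8 * np.arange(n) / n)
amp, ph, coh = sinusoid_fit(ts, 8)
```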

For more complicated block-design paradigms and event-related designs, you may build a general linear model (GLM) that is very similar to that used in SPM.
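At its core, the GLM is an ordinary least-squares fit of a design matrix to each voxel's time series. The sketch below omits HRF convolution, noise modeling, and nuisance regressors that a real SPM-style analysis would include; the names and toy design are illustrative:

```python
import numpy as np

def fit_glm(ts, design):
    """Ordinary least-squares fit of a design matrix
    (nTimePoints x nRegressors) to one voxel's time series."""
    betas, *_ = np.linalg.lstsq(design, ts, rcond=None)
    residuals = ts - design @ betas
    return betas, residuals

# Toy block design: a constant baseline plus an on/off boxcar regressor.
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 4)
design = np.column_stack([np.ones(boxcar.size), boxcar])
ts = 100.0 + 3.0 * boxcar            # noiseless data: baseline 100, effect 3
betas, resid = fit_glm(ts, design)
```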

Finally, Serge Dumoulin has developed a more sophisticated model for retinotopy analysis: the population receptive field ('pRF') model.

For diffusion data (dMRI), the most common model is the diffusion tensor (DTI). mrDiffusion currently fits the diffusion tensor using a simple least-squares approach; we are exploring more robust tensor-fitting methods.
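The simple least-squares tensor fit is usually done in the log domain: log(S) = log(S0) - b·g'Dg is linear in the six unique tensor elements plus log(S0), so all seven can be solved per voxel with one least-squares call. A sketch under those assumptions (function name ours, not mrDiffusion's):

```python
import numpy as np

def fit_tensor_lls(signal, bvals, bvecs):
    """Log-linear least-squares diffusion tensor fit. signal: (nVolumes,)
    raw DW intensities; bvals: (nVolumes,); bvecs: (3, nVolumes).
    Returns the symmetric 3x3 tensor D and the estimated S0."""
    g = bvecs.T                                      # (nVolumes, 3)
    X = -bvals[:, None] * np.column_stack([
        g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,    # Dxx, Dyy, Dzz
        2 * g[:, 0] * g[:, 1],                       # Dxy
        2 * g[:, 0] * g[:, 2],                       # Dxz
        2 * g[:, 1] * g[:, 2],                       # Dyz
    ])
    X = np.column_stack([X, np.ones(len(bvals))])    # log(S0) column
    coef, *_ = np.linalg.lstsq(X, np.log(signal), rcond=None)
    dxx, dyy, dzz, dxy, dxz, dyz, log_s0 = coef
    D = np.array([[dxx, dxy, dxz],
                  [dxy, dyy, dyz],
                  [dxz, dyz, dzz]])
    return D, np.exp(log_s0)

# Recover a known isotropic tensor from noiseless synthetic signals
# (six classic DW directions plus one b=0 volume).
s = 1 / np.sqrt(2)
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [s, s, 0], [s, 0, s], [0, s, s], [0, 0, 0]]).T
bvals = np.array([1000.0] * 6 + [0.0])
D_true = 0.001 * np.eye(3)
g = bvecs.T
signal = np.exp(-bvals * np.einsum('ij,jk,ik->i', g, D_true, g))
D_hat, s0_hat = fit_tensor_lls(signal, bvals, bvecs)
```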

Infer something interesting

Finally, you'll look at your data and the resulting model parameters, combine data across some subjects, test hypotheses, make a discovery, and publish ground-breaking papers.

Lab Manual Overview

While this page provides a conceptual overview of the general analysis steps in MR imaging, this lab manual contains "how-to" descriptions of performing particular tasks with mrVista.

  • The Software page describes practical steps in setting up the mrVista software, and describes some software development tools like CVS as they are used in the VISTA lab.
  • The Anatomical page describes steps used in processing T1-weighted anatomical images. This includes the within-subjects alignment and standardized space processing described above, as well as segmenting cortical white and gray matter.
  • The Alignment page describes methods used to align anatomies from two different sessions -- generally between the "Inplane" images for a functional session, and a reference anatomy that has been produced from the Anatomical stages.
  • The Functional page describes initialization of functional MRI sessions into mrVista, and traveling wave and event-related (or block design) analyses on the time series.
  • The DTI page describes analysis of dMRI and diffusion tensor imaging data.
  • The Visualization page describes tools used for visualizing the gray-matter surface of a subject, using a 3D tool (mrMesh) or a flattening tool (mrFlatMesh).
  • The Stimulus page covers the various tools we use to generate stimuli for psychophysics and during scanning.

There are also some tutorials for using the software for particular analyses:
