MrVista TBSS

From VISTA LAB WIKI


This page describes the steps needed to convert mrDiffusion data to a format suitable for tract-based spatial statistics (TBSS) in the FSL package.

The techniques described here are useful for examining whether there are differences between two groups, or for examining whether there is a correlation between some behavioral measure and, say, fractional anisotropy.

There is a mailing list with Q&A about FSL software. Searching there for TBSS turns up some helpful suggestions. We could find almost none of the points we needed in the main web page or manual, but we could find them in the discussion list.

There is a separate page to describe the SPM-VBM approach.

Return to main DTI page.


Introduction

TBSS is a DTI analysis program created by Steven Smith and the FSL group. It finds and aligns white matter tracts across subjects using FA images. It is likely an improvement over VBM-style analyses.

Description

For more detail, see Smith, et al. 2006.

  1. Start with FA images from a bunch of subjects.
  2. Find a “target brain” to which to align all other brains. The target is chosen from among the subjects.
    • 1. Each brain is aligned to every other brain in turn. Alignment is done using FNIRT, FMRIB's Nonlinear Registration Tool (changed from IRTK in version 1.2). (tbss_2_reg)
    • 2. The “most typical” brain is chosen as the target: each aligned image has a score indicating how much it was warped from its original form. The brain with the lowest summed score is the target.
    • NOTE: A target brain is chosen instead of a blurry average brain because the TBSS creators found that the alignment method does better when aligning to real FA images. (tbss_3_postreg)
  3. Warp all subjects to the target (I assume this means that the X-to-target images are selected from step 2). (tbss_3_postreg)
  4. Compute the average FA map from all warped brains. (tbss_3_postreg)
  5. The warped brains and the average FA map are affine-transformed into 1x1x1 mm MNI152 space. At this point the data is fairly smooth, both because of the resampling to smaller voxels and, in the case of the average map, because of the averaging. (tbss_3_postreg)
  6. Compute the mean FA skeleton.
    • 1. The tract perpendicular direction is found for each voxel. This is done by finding the “center of gravity” of the surrounding 3x3x3 cluster. The path from the voxel to the center of gravity of the cluster is the tract perpendicular direction. After this is done for the whole brain, the perpendicular direction data is essentially smoothed by replacing each direction data point with the modal direction of the 3x3x3 surrounding area.
    • 2. A voxel is labeled as “skeleton” if its FA is higher than the FA of the two adjacent voxels in the tract perpendicular direction. (tbss_3_postreg)
  7. Find the FA skeleton in each subject: Start at the mean skeleton. Using the already-defined tract perpendicular directions, look for the highest-FA voxel perpendicular to each point on the mean skeleton, and assign that value to the skeleton voxel. To limit the search space, the program searches perpendicularly only until it is halfway between the current tract and another tract (as defined by a “distance map” giving the distance from any voxel to the nearest point on the skeleton). In addition, the FA values are multiplicatively weighted by a Gaussian function (FWHM 20mm) when searching for the highest FA. This avoids sucking in high-noise voxels far from the tract center. (tbss_4_prestats)
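The seven steps above are implemented by four FSL scripts, run in order from a directory containing one skull-stripped FA map per subject. These are the same commands covered step by step in the sections below; the sketch is guarded so it is a no-op when FSL is not on the PATH:

```shell
# Outline of the whole TBSS pipeline described above.
if command -v tbss_1_preproc >/dev/null 2>&1; then
    tbss_1_preproc *.nii.gz   # step 1: erode FA maps, zero end slices
    tbss_2_reg -n             # step 2: register every subject to every other
    tbss_3_postreg -S         # steps 3-6: pick target, warp to MNI152, mean FA skeleton
    tbss_4_prestats 0.2       # step 7: threshold skeleton, project per-subject FA
    status="ran"
else
    status="skipped (FSL not found)"
fi
echo "TBSS pipeline: $status" | tee tbss_status.txt
```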

TBSS Processing

This analysis uses FSL, which runs on Linux. To run TBSS you need FSL version 4.1.4_64 or later; this was installed on our Linux boxes as of August 2009.

Begin by identifying the directories with the dti6 data files from your study.

The timing numbers we quote here are based on running on the multiple processor azure computer.

Creating FA maps

First, create FA maps for each of your subjects. These FA maps should include only voxels that are within the brain - the skulls should be stripped off. Note that mrDiffusion generally strips the skull only at load time: the images on the screen don't include the skull, but the raw files on disk still do.

To create the FA maps you therefore need to run both the loading and the skull-stripping parts of the process. One way to load the data and strip the skulls is described below. This is the approach JMT and ER took. Someone want to improve it?

Creating skull-stripped FA

Save FA maps from all of your subjects into a directory. The code snippet below shows how to convert a single subject's data; run that code looping over all of your subjects. We suggest putting the FA maps in a directory called something like TBSS_yourProject. The FA maps will be saved into (subject).nii.gz files (compressed NIFTI).

% Save a fractional anisotropy (fa) map
% Note that tensors.nii.gz is in the dtiNN/bin folder, where NN is the # of directions. 
[dt6, xformToAcpc, mmPerVox] = dtiLoadTensorsFromNifti('path/to/tensors.nii.gz');
fa = dtiComputeFA(dt6);
% Save fa in NIFTI format:
dtiWriteNiftiWrapper(fa, xformToAcpc, 'fa.nii.gz', 1, 'FA');

For related code, say to take raw data into this form, click here.

If you have data with the skulls, as we mostly do in mrDiffusion, you must now strip them off. The code snippet below, from Elena, will work.

The code removes the NaNs in the FA image and the brain mask. It then applies a brain mask that should be saved in the subject's bin directory. Normally this mask is created during the mrDiffusion processing.

dir="Your directory"   # e.g. "/biac3/wandell4/data/reading_longitude/dti_y4"
for file in *fa.nii.gz
do
 fslmaths $dir/${file%_fa.nii.gz}/dti06/bin/brainMask.nii.gz -nan ${file%_fa.nii.gz}brainMask
 fslmaths $file -nan ${file%_fa.nii.gz}nonan
 fslmaths ${file%_fa.nii.gz}nonan -mas ${file%_fa.nii.gz}brainMask ${file%_fa.nii.gz}_fa_brain.nii.gz
done
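The ${file%_fa.nii.gz} construction in the loop is plain bash parameter expansion: it strips the trailing "_fa.nii.gz" from the filename, leaving the subject prefix. A self-contained illustration (sub01 is a made-up subject name):

```shell
# ${var%pattern} removes the shortest match of pattern from the end of var.
file="sub01_fa.nii.gz"
prefix=${file%_fa.nii.gz}
{
  echo "$prefix"                      # the bare subject prefix
  echo "${prefix}nonan"               # NaN-free FA image name
  echo "${prefix}_fa_brain.nii.gz"    # skull-stripped output name
} > names.txt
cat names.txt
```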

You should now have four files per subject in the TBSS directory: the original (non-skull-stripped) FA map, a copy of the brain mask with the NaNs removed, the original FA with the NaNs removed, and the skull-stripped FA. The only one that matters for TBSS is the skull-stripped FA.

FSL assumes the only image files in your TBSS directory are your skull-stripped FA maps, so move the other three files somewhere else. Hawaii would be nice.
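One way to tidy the directory, sketched with stand-in filenames that follow the naming from the loop above (sub01 is a made-up subject; "extras" is an arbitrary destination):

```shell
# Stand-ins for the four per-subject files left by the skull-stripping loop.
touch sub01_fa.nii.gz sub01nonan.nii.gz sub01brainMask.nii.gz sub01_fa_brain.nii.gz

# Move everything except the skull-stripped *_fa_brain.nii.gz maps aside.
mkdir -p extras
for f in *nonan.nii.gz *brainMask.nii.gz *_fa.nii.gz; do
    [ -e "$f" ] && mv "$f" extras/
done
ls *.nii.gz    # only the *_fa_brain.nii.gz maps should remain
```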

tbss_1_preproc

The TBSS command for the first step is simply:

tbss_1_preproc *.nii.gz

According to the TBSS website, this step "will erode your FA images slightly and zero the end slices".

This takes about 3 seconds per file. Your original data will be saved in a sub-directory called origdata.

The output of the command is two files per subject: a preprocessed image (original name with '_FA' appended) and a new brain mask (original name with '_FA_mask' appended). This output will be in a sub-directory called FA.

FSL also creates a web-page entry in the FA directory that allows you to view the slices individually and examine them for obvious problems. The web-page is a montage of all of the slices from the subjects. We suggest you change directory (cd) into the FA directory, examine the web-page, and see that the transformed FA images look OK. You can use the FSL program fslview to view one of the FA images.

Note: Going forward, FSL overwrites the files it produces. Thus, when you want to check a file, do it at that step. The file might be gone sooner than you think.
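Since later steps overwrite earlier outputs, you can snapshot a directory before moving on. A minimal sketch (the FA directory name matches the pipeline; the touch line is just a stand-in for real FSL output):

```shell
# Snapshot the FA directory before the next TBSS step overwrites its contents.
mkdir -p FA
touch FA/sub01_FA.nii.gz                 # stand-in for a real output file
backup="FA_backup_$(date +%Y%m%d)"
mkdir -p "$backup"
cp -p FA/*.nii.gz "$backup"/
echo "backed up to $backup"
```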

tbss_2_reg

The next step nonlinearly registers the preprocessed FA data either (1) from each subject to every other subject (flag -n), or (2) from each subject to a particular target: either a subject you identify (flag -t) or a template such as FSL's FMRIB58_FA (flag -T).

We used the first option for our kid data. FSL alerts us that this takes a long time: about 5 minutes per registration on a single processor, and there are N x N registrations. Use Kendrick's SGE system to run on many processors at once and cut the time substantially.

In the TBSS directory we run

tbss_2_reg -n

The output: each subject now has an additional N NIFTIs, where N is the number of subjects. Each NIFTI represents a warp to another subject (including itself). There is also a .log, .mat, and .msf file for each of these NIFTIs. The output is added to the FA directory.
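As a sanity check that registration finished, you can count the warp images: with N subjects there should be N x N of them. A toy illustration of the bookkeeping, using made-up filenames rather than the exact names tbss_2_reg produces:

```shell
# Simulate the N x N warp outputs for three hypothetical subjects.
mkdir -p FA_demo
subjects="sub01 sub02 sub03"
N=0
for s in $subjects; do N=$((N+1)); done
# One warp image per (source, target) pair, including self-to-self.
for a in $subjects; do
    for b in $subjects; do
        touch "FA_demo/${a}_FA_to_${b}_FA.nii.gz"
    done
done
count=$(ls FA_demo/*_FA_to_*_FA.nii.gz | wc -l | tr -d ' ')
echo "expected $((N * N)), found $count" | tee warp_count.txt
```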

Note: When JMT first ran this, there were FA maps that included odd pieces of brain at the front, apparently disconnected from everything else. Visualizing this was important and made us go back and check the original data from mrDiffusion. The oddness was there. We fixed it by using FSL's BET (brain extraction tool) to mask the brains rather than the mrDiffusion tool.

(See the FSL web page to read more about the -T option, which maps your data to the FSL template and does not take nearly as long. This option returns a file target.img. It is not recommended for children's data, though we are not sure whether that matters in our case.)

tbss_3_postreg

The third step is

tbss_3_postreg -S

If you chose the -n flag in the previous step, then in this step a representative image is chosen - the one requiring the least warping to fit every other subject. All other subjects are warped to this 'target' image, and then everything is transformed into MNI152 space. The output here is target.nii.gz, target_mask.nii.gz, target_to_MNI152.nii.gz, and, for each subject, a new NIFTI called subjectFAimg_to_target.nii.gz representing that subject's FA image warped to the target.

These standard-space FA images are merged into a 4D image called all_FA.nii.gz, and their mean is computed in an image called mean_FA.nii.gz. Finally, the mean image is 'skeletonised' into an image named mean_FA_skeleton.nii.gz. These are placed in a new directory called stats, inside the TBSS directory.

This step took about 30 minutes on azure.

tbss_4_prestats

The fourth step is

tbss_4_prestats 0.2

This step thresholds the mean FA skeleton image and finds the skeleton in each individual subject.

From the TBSS website: This step thresholds the mean FA skeleton image at the chosen threshold - a common value that works well is 0.2. The resulting binary skeleton mask defines the set of voxels used in all subsequent processing. Next a "distance map" is created from the skeleton mask. This is used in the projection of FA onto the skeleton (see the TBSS paper for more detail). Finally, the script takes the 4D all_FA image (containing all subjects' aligned FA data) and, for each "timepoint" (i.e., subject ID), projects the FA data onto the mean FA skeleton. This results in a 4D image file containing the (projected) skeletonised FA data. It is this file that you will feed into voxelwise statistics.

This step took 20 minutes on azure. The output is all_FA_skeletonised.nii.gz, mean_FA_skeleton_mask.nii.gz, mean_FA_skeleton_mask_dst.nii.gz, and threshold.txt, all in the stats directory.

Statistical processing in FSL

We describe how to use FSL's randomise program with a design matrix and a contrast matrix. (It is possible, of course, to analyze the data using our own methods. More on that later.)

randomise --help

shows you the list of options for randomise.

In this example, we describe how JMT arranged her data to test for a correlation between FA and mental arithmetic scores. She had previously processed the FA data into the TBSS skeletons as above. These files were in the directory stats. The 4D file containing all of the subjects' skeletonised tracts was a NIFTI file all_FA_skeletonised.nii.gz.

Design matrix and contrast

She next created a design matrix and a contrast matrix using the FSL program Glm. (We couldn't find a manual for that program, so we started a little wiki of our own.) The design matrix contains the linear model terms whose weighted sums are supposed to correlate with the TBSS fractional anisotropy; the basic calculation finds which voxels' FA values correlate with those terms. In this case the two terms JMT used were the subjects' ages and their mental arithmetic scores. Here are the file contents:

white/bw[59]:  more agecorrectedapprox.mat 
/NumWaves       2
/NumPoints      28
/PPheights      5.040000e+00    3.541670e-01

/Matrix
1.813929e+00    6.070396e-02
2.053929e+00    -2.262904e-02
 ... 28 rows and 2 columns ...

Notice that

  • The number of rows is the number of subjects
  • The number of columns is the number of linear variables
  • The PPheights are the peak-to-peak heights (max minus min) of the columns
  • The means of the columns have been subtracted (first column is age, second is score)
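Demeaning the covariate columns is done by hand before building the .mat file. A sketch with awk, assuming your raw covariates sit in a plain two-column text file (covariates.txt and its values are a made-up example):

```shell
# Made-up raw covariates: column 1 = age, column 2 = score.
cat > covariates.txt <<'EOF'
10 1.0
20 2.0
30 3.0
EOF

# Two passes over the file: first accumulate column sums, then subtract means.
awk 'NR==FNR { s1 += $1; s2 += $2; n++; next }
     { printf "%e\t%e\n", $1 - s1/n, $2 - s2/n }' \
    covariates.txt covariates.txt > demeaned.txt
cat demeaned.txt
```

After this, each column of demeaned.txt sums to zero, as the design matrix above requires.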

The design contrast specifies which types of correlation are being evaluated.

white/bw[61]:  more agecorrectedapprox.con
/ContrastName1  positivecorr
/ContrastName2  negativecorr
/NumWaves       2
/NumContrasts   2
/PPheights              3.277047e-01    3.277047e-01
/RequiredEffect         3.960   3.960

/Matrix
0.000000e+00 1.000000e+00 
0.000000e+00 -1.000000e+00 

The contrast matrix says (first row): find a positive correlation with the second variable, controlling for the first variable; (second row): find a negative correlation with the second variable, controlling for the first variable. If you are in the stats directory of this example, and the design and contrast matrices are there as well, you can run randomise as follows.

Running randomise

randomise -i all_FA_skeletonised.nii.gz -o faMath -d design.mat -t design.con -n 1000 --T2 -D

The flags are

  • -D flag demeans the FA values.
  • --T2 flag sets Threshold-Free Cluster Enhancement with 2D optimisation (e.g. for TBSS data); H=2, E=1, C=26

These are the recommended flags for randomise (version 2.5, build 414) and the TBSS distributed in August 2009. Demeaning is clear. The --T2 flag asks randomise to use the TFCE method to account for cluster sizes. TFCE adjusts the statistic in each voxel to reflect its "area of support", i.e., the values of the voxels around it, then runs the stats on the adjusted values. See Smith & Nichols (2009), NeuroImage.

The process took over an hour on green for JMT, with 1000 permutations.

There are three classes of output files per contrast (N is the contrast number):

  1. faMath_tstatN.nii.gz
  2. faMath_tfce_p_tstatN.nii.gz (and possibly faMath_tfce_p_fstatN.nii.gz)
  3. faMath_tfce_corrp_tstatN.nii.gz (and possibly faMath_tfce_corrp_fstatN.nii.gz)

We don't have the fstat files in this example.

More randomise tips

See this intuitive description of how randomise works with TFCE cluster correction (TFCE is the method recommended for TBSS).

See the FSL FEAT page on how to set up a design matrix for a 2-sample unpaired t-test using the GLM gui.

JMT found the following threads in the FSL archives useful in setting up randomise analyses for correlations and two-sample t-tests:

  • For a one-sample correlation analysis: "demean" your covariates manually, and demean the brain data by using the -D option.
  • For a two-sample t-test, "demean" your covariates manually. If you construct your design matrix such that all the columns sum to 0, you should demean your brain data using the -D option. If one or more of your columns does not sum to zero, then likely you are modeling the mean in your design matrix already, and you do not need to use the -D option.
  • A reminder about how to include covariates in your design matrix.
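A quick way to check whether your design matrix columns sum to zero (and hence whether -D is appropriate) is to sum them with awk, assuming the numeric rows of the .mat file have been saved to a plain text file (design_rows.txt is a made-up name; the values are an illustrative demeaned matrix):

```shell
# Made-up demeaned design rows: each column sums to zero.
cat > design_rows.txt <<'EOF'
1.813929e+00  6.070396e-02
2.053929e+00  -2.262904e-02
-3.867858e+00 -3.807492e-02
EOF

# Sum each column; sums near zero mean the column is demeaned.
awk '{ for (i = 1; i <= NF; i++) s[i] += $i }
     END { for (i = 1; i <= NF; i++) printf "column %d sum: %g\n", i, s[i] }' design_rows.txt
```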

Visualizing the statistical results

You can use fslview to visualize the output statistics.

fslview $FSLDIR/data/standard/MNI152_T1_1mm mean_FA_skeleton -l Green -b 0.2,0.7 faMath_tfce_corrp_tstat1 -l Red-Yellow -b 0.95,1

The flags appear to be interpreted in groups. The first three apply to the background skeleton image: the color is set to green and the FA display range to 0.2-0.7. The second three refer to the t-test statistics: they are overlaid in red-yellow, and the 0.95-1 display range shows corrected p < 0.05.

You can use your own template by invoking

fslview <Your Template> all_FA_skeletonised -l Green -b 0.2,0.7 faMath_tfce_corrp_tstat1 -l Red-Yellow -b 0.95,1

mean_FA is the typical name of the template you generate (see above).

We should add a screenshot here. We tried, but failed. Try again.

Transforming results back to native space

Run from inside the stats dir.

fslmaths approx_tfce_corrp_tstat1 -thr 0.95 grot95
tbss_deproject grot95 2

The first command creates a NIFTI called grot95 that contains just the voxels of approx_tfce_corrp_tstat1 with value > 0.95.

The second command transforms the image back to native space in two steps. The argument 2 specifies that both steps should be done; an argument of 1 specifies just the first step.

  • Step 1 projects the skeleton space back to the all_FA space. Output: a NIFTI called grot95_to_all_FA. For JMT, this took 10 minutes on azure.
  • Step 2 projects this back to native space for each subject. Output: one NIFTI per subject, located in the stats directory. For JMT, this took 1 hour for 28 subjects on azure.

Odd programming notes about FSL

Jessica discovered that on her 64-bit machine fslview didn't run right away. It produced this error:

/home/jmtsang/software/fsl-4.1.4_64/bin/fslview_bin: error while loading shared libraries: libexpat.so.0: cannot open shared object file: No such file or directory

To solve this you must have the powers of sudo. If you do, change into /lib64 and run

sudo ln -s libexpat.so.1.5.2  libexpat.so.0

Then, change into /usr/lib64 and run

sudo ln -s /lib64/libexpat.so.1.5.2 libexpat.so.0