DTI Preprocessing User Manual


Much of this content was migrated/updated from the VPNL lab wiki page on this topic.


How to find the parameters needed for preprocessing

There are several parameters that must be specified correctly for dtiRawPreprocess to work. We recommend using a robust preprocessing script (e.g., dti_FFA_preprocessScript and associated functions) that will discover these parameters for you. However, you can also pull this information directly from the raw data files, as described below.

It's very important to get these parameters right: getting them wrong means repreprocessing, which costs many hours of lost time.

Dicom file header info

Some information can be extracted from the header of any of the dicom files acquired as raw DTI data using the following MATLAB commands:

  info = dicominfo('I0002.dcm');
  info.Private_0019_10b2               % gradient file code (pcode/dwepi): a unique identifier,
                                       % entered in the user CVs if you scan at Lucas
                                       % (this is what dtiRawPreprocess needs)
  info.Private_0019_10b0               % b-value
  info.InPlanePhaseEncodingDirection   % phase encoding direction, which is the OPPOSITE of the
                                       % frequency encoding direction:
                                       % 'COL' = A/P (freqDir L/R), 'ROW' = L/R (freqDir A/P)

Number of slices

I don't think this number needs to be specified for dtiRawPreprocess, but it is important to report in your methods section. The slice count can also be dumped with the following commands (NOT MATLAB: type these at the unix command line):

  1. Change to the directory with raw dicom files 
  2. Type dicomDumpHeader I002.dcm (make sure it is the SECOND dicom    
     file, otherwise the header info will be off!) 
  3. This will print the number of slices, b-value, gradfilecode (dwepi), etc. to the screen

Verifying the number of slices can be a bit complicated. Here's Bob's explanation:

  DICOM is inherently a 2d protocol, so there isn't an official 'number of
  slices' field. That private field is probably fine, as most vendors
  stored the number of slices somewhere. Most dicom-to-nifti functions
  (including my niftiFromDicom, which calls my dicomLoadAllSeries) guess
  the number of slices by loading all of the files and then counting the
  slices. For 4d sequences like DTI and fMRI, where you have a set of
  volumes acquired many times, you also need to look at the DICOM
  SliceLocation field, which tells you the slice's position in the volume
  (in physical space units). The number of unique slice locations is the
  'official' DICOM indication of the number of slices. If you are curious,
  have a look at dicomLoadAllSeries or dicomNumSlices. Both will take a really
  really really long time to run. dicomNumSlices is slightly faster because
  it is less flexible (about an hour or less).
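
A minimal MATLAB sketch (not the vistasoft functions themselves) of the unique-SliceLocation approach Bob describes:

   d = dir('I*.dcm');                    % assumes the raw dicoms are named I####.dcm in the current directory
   locs = zeros(1, numel(d));
   for ii = 1:numel(d)
       info = dicominfo(d(ii).name);
       locs(ii) = info.SliceLocation;    % slice position in physical-space units
   end
   nSlices = numel(unique(locs))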

NOTE: The number of directions is defined by the gradient file code; for grad file code 865, the number of directions is 30. This is the total number of non-collinear directions you used, not including the b0s. If Bob defined the protocol, the number of directions can be guessed from the protocol name stored in the header (info.SeriesDescription). MORE IMPORTANTLY -- we should never have to specify this; the preprocessing script should derive this information from the grad file.

Software setup

To run scripts properly, we recommend:

a. Matlab R2008a 
b. The VISTASOFT tools in your matlab path, particularly mrDiffusion
c. Spm5 in your matlab path
d. SSH application (like cygwin)
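
A minimal path-setup sketch in MATLAB (the vistasoft location is an assumption; the spm5 path matches the one mentioned in the bugs section below):

   addpath(genpath('/usr/local/vistasoft'));              % VISTASOFT, including mrDiffusion (assumed install location)
   addpath('/usr/local/matlab/toolbox/mri/spm5_r2008');   % spm5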

Directory structure

This is the recommended structure for running a preprocessing script like dti_FFA_preprocessScript.

Example of the subject's top-level directory

 /biac3/kgs4/mri/mmddyy_ss_dti

A softlink to this directory should be created in the Kids/dti directory

 /biac2/kgs/projects/Kids/dti/projectName/ss_#yo_mmddyy_FreqDirXX

Within this directory, create the following subdirectories

 SubjectDir/dti: 	put the raw dicoms from the dti scan here; "ls -l | wc" should yield 7800 images
 SubjectDir/t1/t1.nii.gz:    put a link to t1.nii.gz file, if it already exists; can usually be found in 
                                   biac1/kgs/3Danat/subject/Raw; if it doesn't exist, follow directions below

Is there already a t1.nii.gz?

It is crucial to CHECK FOR EXISTING 3DANATOMY AND USE IT IF IT EXISTS. If you create duplicate t1s, vAnatomy.dats, etc., there will be confusion about whose data is aligned to which anatomy. Because multiple people work on any given subject's data (both fMRI and DTI), this kind of confusion is no good.

YES: t1.nii.gz exists

Some subjects already have a 3DAnatomy directory with a t1.nii.gz. This is the case if the subject has been scanned repeatedly, had their brain segmented, had fMRI data analyzed on the volume window, etc. If the subject already has a t1.nii.gz, complete the following three steps.

 1. Create a link to the 3DAnatomy directory (biac1/kgs/3Danat/subject/)
 2. Create a link to the t1.nii.gz (biac1/kgs/3Danat/subject/Raw/)
 3. Proceed to the next section

NO: I need to create a new t1.nii.gz, but vAnatomy/segmentation/mesh exist

This situation will most likely arise with an adult subject who is added to the DTI project but has already been segmented and used in other Wandell or KGS lab experiments. Check /biac1/kgs/3Danat or /biac1/wandell/data/anatomy for the subject's directory. If the subject has been scanned recently, they may already have a t1.nii.gz, in which case follow the directions above. However, a person who was segmented many moons ago will have a vAnatomy, a segmentation (class/gray files), and meshes (e.g., Left/3DMeshes), but no t1.nii.gz. This situation may also arise if there is confusion about the match between the t1.nii.gz and the vAnatomy: the user knows which is the 'correct' vAnatomy, but not the correct t1, and wishes to simply create a new t1 from the known-correct vAnatomy.

Make a nifti from the existing vAnatomy with the following MATLAB code (many thanks to Rory for this):

   mr = mrLoad('vAnatomy.dat');
   mr.data = mrAnatRotateVAnat(mr.data);  % rotates to NIFTI orientation
   mrSave(mr, 't1.nii.gz');

The problem is that this new t1.nii.gz does not inherit the ac-pc transform. The following steps are one way to restore this information.

Find the coordinates of the AC. One easy way to do this is to act as if you were averaging/aligning the t1.nii.gz for the first time: fire up the function below and click on the AC and PC (they should be easy to find, with the same x and z coordinates and only different y coordinates). Write down the x, y, z coordinates.

   mrAnatAverageAcpcNifti('t1.nii.gz','newt1.nii.gz')

Check out the xform fields of the new nifti. It is tempting to just use this new nifti instead of fixing the xform fields of the original t1.nii.gz, but doing this may result in misalignment to the segmentation -- so don't do it.

   ni = readFileNifti('newt1.nii.gz');
   xformXyz = ni.qto_xyz

You should get a 4x4 matrix with the x, y, z coordinates of the AC in the fourth column (with a negative sign, and off by -1 from the coordinates you wrote down in the previous step). You can also just create this matrix by hand if you have found the AC in image coordinates with some other program. So if your AC coordinates were 90, 126, and 72, you would fill in -91, -127, and -73 in the fourth column, first three rows.

   1   0   0   (-x-1)
   0   1   0   (-y-1)
   0   0   1   (-z-1)   
   0   0   0   1
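
For reference, a minimal MATLAB sketch (using the example AC coordinates above) of building this matrix by hand:

   ac = [90 126 72];              % AC in image coordinates (example values from above)
   xformXyz = eye(4);
   xformXyz(1:3,4) = -(ac + 1)';  % fourth column becomes [-91; -127; -73]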

Now load up the original t1.nii.gz, change the transform (qto) fields and save these changes.

   ni = readFileNifti('t1.nii.gz');
   ni = niftiSetQto(ni, xformXyz, true);
   writeFileNifti(ni)

NO: I need to change the voxel resolution of an existing t1.nii.gz

This can happen if you want to compute the segmentation on a t1.nii.gz with smaller voxels. According to Bob, this can make the segmentation process easier, and individual decisions about ambiguous voxels less critical. You will lose your segmentation, but you can re-use the same acpc definition.

To check the resolution of your original anatomical dicoms, try:

  info=dicominfo('I0001.dcm');
  info.SliceThickness 

Usually we have isotropic voxels, so if the value of this field is 0.9, the original resolution is probably 0.9x0.9x0.9. The other dimensions equal the FOV in that direction divided by the matrix size in that direction, but I'm not sure how the FOV is stored in the header.
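
If it helps, the in-plane voxel size can usually be read directly from the standard PixelSpacing DICOM field, without going through the FOV (worth verifying against your own headers):

   info = dicominfo('I0001.dcm');
   inplane = double(info.PixelSpacing)   % in-plane voxel size in mm, e.g., [0.9375; 0.9375]
   thickness = info.SliceThickness       % slice thickness in mm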

Following Bob's recommendation, we create a t1.nii.gz with 0.8x0.8x0.8 voxel size using the command below. In addition to the first two regular arguments (list of spgrs to average, name of output file), we also provide the original acpc t1.nii.gz again as the third argument (so that you do not need to re-define the acpc), and the desired voxel size as the fourth (default = 1x1x1).

  mrAnatAverageAcpcNifti({'oldt1.nii.gz'},'newt1.nii.gz',...
                          'oldt1.nii.gz',[0.8 0.8 0.8]);

NO: I need to create a new t1.nii.gz from raw spgrs

The end goal of these steps is a high-resolution volume anatomy (resliced at 1x1x1 mm; usually scanned at about 1.2 x 0.9375 x 0.9375 mm), which will be used for overlaying DTI data and fibers. This anatomy also serves to register functional and DTI data, and to present functional anatomy in the volume. Our lab also segments the gray/white matter using this anatomy and renders the gray as a 3D representation.

Make sure that the spgrs are acquired as separate series during the scan (i.e., copy and paste the spgr protocol). We typically acquire about 4 spgr scans, in axial and sagittal planes.

 1. Place spgr images in subdirectories 3DAnatomy/raw/spgr1, .../spgr2, etc.
 2. Create a nifti for each spgr directory: niftiFromDicom('spgr1')
    NOTE: If all dicoms are in a single directory, this will still create niftis; I have not tried this yet.
    NOTE TO SELF: Experiment with using mrAnatMakeNiftiFromIfiles, which lets you name the files (instead of having to rename them later from the default)
 3. Align/average the spgrs to create the t1:
    mrAnatAverageAcpcNifti({'spgr1.nii.gz','spgr2.nii.gz'},'t1.nii.gz')
    * even if there is only one spgr, DO NOT SKIP THIS FUNCTION, since it aligns the data
    * if there are multiple spgrs, run the function on a linux box, since it's speedier
    * when finished, view the resulting nifti file in MRICRO(N) or another
      nifti viewer to ensure that it looks okay before proceeding.
 4. For the purpose of overlaying functional activation and segmentation, you
    will need to create a vAnatomy.dat file: createVolAnat
    * select your nifti file, which is t1.nii.gz in this case. In the GUI, you are
      asked to delimit the distribution to optimize the contrast. The
      vAnatomy.dat will then be saved at the same path as your t1 nifti file.

If a t1 already exists but looks really crooked (not well ac-pc aligned), you will need to re-run steps 3 and 4. Make sure to talk to anyone who might be working downstream (fiber-tracking, segmentation, etc.), since all of this will need to be re-done. One hint that someone has started segmentation is if the 3DAnatomy directory for that subject has left and right meshes (.class and .gray files).

Make a nifti from the dti raw data

 niftiFromDicom('raw/dti_g###_b###')

This will create a dti_g###_b###.nii.gz file in the raw directory. View the resulting nifti file in MRICRO(N) or another nifti viewer to ensure that it looks okay before proceeding. This step takes about 1.5h to complete on teal, and 45 min on davie2.

If the dti raw data were acquired in two separate scans (e.g., two 7-min scans instead of one 14-min scan, i.e., two folders with 3900 dicoms each instead of one folder with 7800), place each resulting raw directory inside another directory and give that directory as input to niftiFromDicom, which will recursively search the directory tree and try to use all the files it finds, correctly ordering them based on dicom header info (filenames are ignored). Another strategy is to put all the dicoms in a single directory and renumber them so that they go from 1-7800. If, despite your efforts, two niftis are created, they can be merged with the code below.

ni1 = readFileNifti('first.nii.gz');
ni2 = readFileNifti('second.nii.gz');
ni1.data = cat(4, ni1.data, ni2.data);  % concatenate the two series along the 4th (volume) dimension
ni1.fname = 'merged.nii.gz';
writeFileNifti(ni1);

niftiFromDicom vs. dinifti: in 2008, Bob's recommendation was NOT to substitute dinifti for niftiFromDicom, because dinifti initially caused strange and mysterious bugs in preprocessing, even though it handled non-axial prescriptions better. As of 2010, this recommendation has changed. The reason is that niftiFromDicom can error, or worse, fail to error but imprecisely calculate the transform matrices, when the slice Rx is far from axial at the scanner (oblique, closer to coronal). Bob has since updated dicomLoadAllSeries.m to incorporate dinifti code, so it should now behave like dinifti and handle oblique, non-axial prescriptions more accurately. However, this has not been extensively tested.

Run the dti preprocessing pipeline

This function will run for a very long time (3-4 hr/subject), so it's highly recommended to run it in the background on a powerful linux workstation (like sechel).

The whole pipeline is a Matlab function, dtiRawPreprocess.m, that needs the following arguments:

  • Full name of the nifti that contains the dti data
  • Full name of the nifti that contains the volume anatomy
  • The b-value used, divided by 1000 (for 800 put in 0.8, for 850 put in 0.85, etc.; we put .9)
  • The pcode of the dti sequence (865)

In a subject directory,
For Frequency Direction A/P (Phase Encoding Direction L/R)
>> dtiRawPreprocess('raw/dti_g865_b900.nii.gz', 't1/t1.nii.gz',.9,865,[],[],1) 

It is important to pass a '1' as the 7th optional input argument, because with this type of acquisition the gradient matrix needs to be flipped in a particular way, and the '1' makes sure this happens. The help for dtiRawPreprocess claims it will ask you for the phase encode direction, but it actually won't. Also, it labels the 7th input argument somewhat misleadingly as "Assetflag": we acquire all our data with ASSET, but we only pass a '1' for the "Assetflag" if the data were acquired with frequency encode direction A/P (phase encode L/R).

For Frequency Direction L/R (Phase Encoding Direction A/P)
>> dtiRawPreprocess('raw/dti_g865_b900.nii.gz', 't1/t1.nii.gz',.9,865,[],[]) 

Take note of any error messages. In particular, if the t1.nii.gz file was not placed in subjDir/t1, you will get a small error message ("empty t1"); the dt6 will still be created, but it will not be aligned to the t1. We don't want that to happen.

Fix negative eigenvalues

We will also clip negative eigenvalues to 0, because negative eigenvalues mess up FA values (FA is a function of the eigenvalues). Negative eigenvalues arise for the reason described in the dtiFixTensors.m comments:

  % Simple linear tensor fits sometimes produce non-positive-definite
  % matrices- tensors that have negative eigenvalues due to noise in the raw
  % diffusion data. Some of these voxels with negative eigenvalues are in the
  % white matter, usually in regions of very high anisotropy (like the
  % corpus callosum). This function will fix all voxels with negative
  % eigenvalues by clipping all eigenvalues to 0 and writing the new tensors
  % to the tensors NIFTI file.

For now, we will fix this by running a script that will create a new tensors nifti file with all negative eigenvalues clipped to 0. For more information, see the following functions.

  dti_FFA_preprocessScript 
  dtiFixTensorsAndDT6
  dtiFixTensors

To visualize where in your data the bad voxels occur, and how many there are, you can run the following code. This code will show you all bad voxels that occur in the white matter (setting all voxels outside white matter to 0), and count them.

 dt = dtiLoadDt6('dt6.mat');
 [vec,val] = dtiEig(dt.dt6);
 badData = any(val<0, 4); 
 wmProb=dtiFindWhiteMatter(dt.dt6,dt.b0,dt.xformToAcpc);
 badData(wmProb<0.8)=0;
 numberVoxelsWNegEig=sum(badData(:));  
 showMontage(badData);

More information on this topic can be found on the vista wiki here.

Robust tensor fitting

The default least-squares tensor fitting procedure is highly sensitive to outliers, because the goal is to fit a line that minimizes mean squared error: a single outlier point can shift the fitted line by a lot, because not shifting would leave a very large MSE. We therefore want a procedure that is "robust" to outlier effects. In a RANSAC-style bootstrapping method, we would fit many tensors on many different subsets of the data, check that most of the fits converge while a few are really off, and use a selection process for choosing the most likely fit. This gives at least a broad idea of how RESTORE might be working.
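
A toy MATLAB illustration (not the RESTORE code itself) of the underlying idea -- one gross outlier skews an ordinary least-squares line fit, while a simple iteratively reweighted fit resists it:

   x = (1:20)';
   y = 2*x + randn(20,1);
   y(10) = 200;                                   % inject one gross outlier
   A = [x ones(20,1)];
   bLS = A \ y;                                   % ordinary least squares: slope gets pulled off
   w = ones(20,1);
   for it = 1:10                                  % iteratively reweighted least squares
       sw = sqrt(w);
       bRobust = (A .* repmat(sw,1,2)) \ (sw .* y);
       r = abs(y - A*bRobust);
       w = 1 ./ max(r, 1e-6);                     % down-weight points with large residuals
   end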

We have code implemented to do "robust tensor fitting" using the RESTORE procedure first proposed by Chang et al. (see the RESTORE PDF). This is basically an automatic procedure for throwing out outlier measurements (e.g., from movement), as an alternative to the manual visual inspection procedure described in the section below: Find and exclude bad volumes.

This will take about 2-3 hours per person to run on a standard lab computer (e.g., sechel, davie2).

Notes from Bob on using RESTORE

Note that it creates a goodness-of-fit image (bin/gof.nii.gz) that represents the mean-squared residual of the tensor fit. It also outputs an image called nOutliers that indicates the number of data points that were considered outliers and thus ignored in the tensor fit (as per the RESTORE algorithm). When you load the dt6 file in mrDiffusion, you can add this image to find voxels that had many outliers. I was hoping that you could tell me if the robust fitting is helping. In some of the subjects I've looked at (1.5T data), RT had no effect in most of the white matter -- it just fixed some voxels in CSF and around the brain edge. But I didn't look hard or deliberately try to find a subject who had serious motion artifact. What is your sense of RT?

As for conTrack: Tony and I thought of ways to use RT along with bootstrap variance estimates, but nothing is implemented yet. So you should use the standard (non-RT) tensors for conTrack. Even if we do add RT to the bootstrap some day, I don't think it will affect conTrack much at all -- it is pretty robust to noise in the data.

Reprocessing

When and how to CLOBBER

The function dtiRawPreprocess contains a clobberFlag input argument that can be set to three values:

  • ask: a UI box pops up and will not proceed until the user indicates whether or not to clobber a particular file -- this is almost never the desired option because it requires an annoying level of process monitoring
  • always: the code automatically recalculates all files, overwriting existing ones -- appropriate if something went wrong from the very beginning (e.g., wrong phase encode direction param), or early enough in the process that you don't mind being conservative and just recomputing everything
  • never: appropriate if you want to re-run a later processing step (e.g., try a different tensor fitting method), but do not want to recompute time-consuming earlier steps like the dti_g865_b900_ecXform.mat file created by dtiRawRohdeEstimateEddyMotion

If you run with clobberFlag = false, make sure to set it to the actual boolean value false rather than 0, or dtiRawPreprocess will not interpret the flag as 'never'. Finally, you will most likely want to first get rid of files you no longer want, as they will not be clobbered (i.e., replaced). We recommend using the following pair of commands: first ls, to verify that only the files you want deleted will match, and then rm, to remove them all. Issue the commands at the project directory level so that you do not have to remove the files for each subject individually.

  ls */raw/*aligned.*
  rm */raw/*aligned.*

Historical need for repreprocessing: ec correction bug fix

At a certain point coinciding with the rollout of the SVN repository (switched from CVS), there were some serious bugs with the preprocessing scripts. This included a bug in the way the eddy current correction was applied, and a brief period in which the eddy current correction flag was incorrectly set to false.

To be safe, repreprocess all subjects whose dt6 files are timestamped prior to around 8/27/2008. I would actually redo everyone prior to the date in the comments for dti_FFA_preprocessScript_fixEddyRT, and use that script to fix everyone.

New T1 nifti requires realignment / repreprocessing

Erase the acpc xform, which is stored here:

   subjectDir/raw/dti_g###_b###_acpcxform.mat

Rerun dtiRawPreprocess with the clobber argument set to false:

   dtiRawPreprocess('raw/dti_g865_b900.nii.gz', 't1/t1.nii.gz',.9,865,false,[])

Rename/move/delete the old dti30 directory

  subjectDir/dti30

This is encapsulated in a new script: dti_FFA_preprocessScript_realign. However, the renaming/moving of the dti30 directory is not scripted, so don't forget to do it before running the script; otherwise a dialog box will pop up asking, and the script will not loop through all subjects.

What you should end up with

If the dti_FFA_preprocessScript pipeline ran smoothly, you should find the following directories:

  subjectDir/raw

This should have a directory with all the dicoms in it (dti_g865_b900) and all the products of preprocessing with the same prefix, including:

  • _.nii.gz: the raw dti nifti file
  • _ecXform.mat: the motion/eddycurrent transform file
  • _b0.nii.gz: the average b0 nifti reference file
  • _.bvals & _.bvecs: record of bval/bvec scanning acquisition params
  subjectDir/dti30
  subjectDir/dti30_trilin
  subjectDir/dti30_trilinrt

These are the directories that contain your computed tensor data. The dti30 directory contains data fit prior to 2010 with a bspline interpolation method. Later subjects will not have this directory. The dti30_trilin directory contains the data fit after 2010 with a trilinear interpolation method standard to the Wandell lab pipeline. The dti30_trilinrt directory contains the same data from dti30_trilin, but with RESTORE outliers excluded and RESTORE outlier analysis results saved.

  subjectDir/dtiDir/bin/
  subjectDir/dtiDir/dt6.mat

Each of these dti30 directories will contain at least a bin directory and dt6.mat file at the top level.

In the dti30_trilinrt/bin directory, we expect two additional niftis, which summarize the RESTORE outlier data per volume per voxel (outliers.nii.gz) and per voxel summed across all volumes (outlier_sum_image.nii.gz).

  subjectDir/dti30_trilinrt/bin/outliers.nii.gz
  subjectDir/dti30_trilinrt/bin/outlier_sum_image.nii.gz
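
To eyeball where the outliers concentrate, a small sketch using functions shown elsewhere on this page:

   ni = readFileNifti('dti30_trilinrt/bin/outlier_sum_image.nii.gz');
   showMontage(double(ni.data));   % bright voxels had many outlier volumes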

Quality Checking Pipeline

Below we describe the set of quality-checking steps agreed on as "best practices" by Davie, Kalanit, and Bob as of February 2010. For information on practices prior to this date, see the Legacy Checks section below.

These steps are particularly important if the goal is to compare datasets from different groups (e.g., kids vs adults). We do not yet have a set of concrete criteria from which to make definitive inclusion/exclusion decisions. The Wandell lab hasn't developed their own criteria as their kid data is used longitudinally rather than cross-sectionally.

Check motion parameter estimates

Code for creating figures summarizing subject-related translation/rotation/total motion parameters is currently under development in the /biac2/kgs/projects/Kids/dti/alldata directory. More details soon.

Check RESTORE outliers

Code for creating figures summarizing RESTORE outliers per subject is currently under development in the /biac2/kgs/projects/Kids/dti/alldata directory. More details soon.

Visual inspection of data images

This is often the best approach, as you will get a sense of where/when your data is likely to be more/less reliable. It's good to look at your data as much as possible before measuring things. Visual inspection, particularly of the raw data, allows you to detect even things you didn't expect, whereas targeted checks only detect things you were already looking for.

Viewing dt6 in dtiFiberUI

  • RGB view: M<-->L should be red, A<-->P should be green, S<-->I should be blue. If these colors are off, it's probably because the preprocessing function did not adjust to the particular frequency-encoding direction of your data
  • FA view: there should not be a confetti halo around the brain (where the skull is); if there is, the mask is not being applied correctly, which could indicate that you are working with old/bad code that needs to be updated, or that something went wrong with the alignment
  • T1 background not already loaded: this is a sign that dtiRawPreprocess did NOT find your t1.nii.gz, and your DTI data is not aligned to the t1 -- a bad sign. If, however, the t1.nii.gz is not loaded for a benign reason, you can simply fix the dt6.mat file to reflect the correct T1 location as described here
  • B0 & FA view: should be comparably smooth (not blocky/lego-looking) relative to all your other datasets; if you look at enough datasets, you will get a feel for how this should look; it would be helpful to get some example good/bad screenshots up here one day

Manual editing of wmProb image

DTI data from the brain stem and cerebellum tend to be of low quality and are, for our purposes, irrelevant areas of study. Before assessing DTI image quality to ensure comparable images across age groups, these areas must be removed so that they do not affect our quality-check comparisons.

In addition, when conducting fiber-tracking with conTrack, we often get fibers that impossibly dip from cortex straight into the cerebellum through occipital/ventral-temporal cortex, or curve crazily through the brainstem. It can therefore be useful to create a manually edited version of the wmProb.nii.gz file that excludes the cerebellum and brainstem. Because this file is treated as a mask by conTrack, conTrack will not consider fibers that leave this mask. Therefore, we will not have to deal with cerebellar/brainstem fibers if we exclude them from the wmProb nifti.

For the reasons above, we choose to manually edit the wmProb image to exclude the cerebellum and brainstem. A description of how to do this is maintained here.

Legacy Checks

This section contains advice and instructions for quality checking your data -- mostly relevant to analyses prior to 2008. For current "best practices", please see the Quality Checking Pipeline section above.

PDD

Ask Bob whether it's also worth visually inspecting all pddDisp.nii.gz images for banding, a side-effect of moving from a bspline to a trilinear interpolation method.

  ni = readFileNifti('pddDispersion.nii.gz');
  showMontage(ni.data);
  % Right-click to adjust the colorbar; view and make sure there isn't banding

FSLview of raw dti nifti

The goal is to view the subject's raw dti nifti (dti_g865_b900.nii.gz) in fslview and scroll around any suspect volume (as estimated from the motion parameters) to see what it looks like. You may have to adjust the max pixel intensity value (e.g., 3000, in the upper panel) to get the image bright enough to inspect.

Generally, look for volumes that look like legos on the coronal/sagittal views (black stripes in the axial plane); then view that axial slice and see what it looks like. Sometimes it's not bad. Other times, it appears that there is no data for that slice, or there are weird black specks or barber-pole stripes on the brain. This is probably not good. Look for:

  • speckles of high intensity pixels (super black or super white pixels)
  • low image intensity for whole slice (entire slice is darker than it should be)
  • large stripes (abnormal coloration forming wide stripes on axial slice)

Things that look bad but are expected, given known acquisition imperfections:

  • Regional drop-out in the middle of the brain (white matter). This occurs frequently in areas where the fibers run perpendicular to the water path. This is fine; only flag the slice if the dark spot is in the outer regions of the brain (gray matter), which is unlikely.
  • Brightness near the ear canals. This is due to the sudden tissue difference between the ear canal and brain tissue, which causes an accumulation of signal in that area, making it extremely bright. Only flag the slice if brightness is spotted in regions not near the ears.

Step-by-step instructions:

  1. cd into the subject's raw directory
  2. Open fslview by typing the command "fslview" into the terminal
  3. Go to FILE >> OPEN >> and open the dti_g865_b900.nii.gz file
  4. Change the max pixel value so that you can see the whole brain clearly (~between 2000 and 5000; this will vary with each brain)
  5. It has been noted that major problems can occur in brains that do not look stripy at all, so make sure to scroll through every volume by putting your cursor at the top of the brain and pressing the "-" button next to the z coordinate. This will take you through a "movie" of every axial slice. Check for abnormalities.

Screenshots of poor quality slices in raw data acquisition

Evaluate data reliability

We want to know whether the quality of the data is reliable across subjects, and that there are no between-group differences. For example, perhaps children move more during DTI scans than adults, so tensor estimates for children would be less reliable than those for adults.

To evaluate data reliability, we can take advantage of the maps created by dtiRawFitTensorMex, and further described on the vistawiki. The default bootstrap algorithm now uses a residual bootstrap method (see Chung et al., NeuroImage 2006). But, you can also do a traditional repetition-based bootstrap if you have enough repeated measurements.

  • faStd.nii.gz: FA standard deviation map
  • mdStd.nii.gz: Mean Diffusivity standard deviation map
  • pddDispersion.nii.gz: Principal diffusion direction dispersion maps (where dispersion is the direction variance parameter in the Watson distribution). These dispersion maps are in units of degrees, with 54 degrees representing maximal dispersion on the sphere.
To view these maps within a white-matter ROI in dtiFiberUI:
  1. File --> Add Nifti: choose the white matter probability map (wmMask.nii.gz)
  2. ROI --> New Other --> ROI from image mask: choose 0.9 through infinity, no smoothing
  3. File --> Add Nifti: choose a map (e.g., mdStd.nii.gz)
  4. Make sure this new variance map is the current background
  5. Analyze --> ROI --> ROI stats (verbose)

View histogram of stdev.

A script and functions now exist to do this automatically:
dti_FFA_calculateTensorReliabilityScript
dtiCalculateTensorReliability

Evaluate subject image motion

During eddy-current correction, motion parameters are calculated (not total motion, but motion that was detected and corrected). This includes not JUST the subject's motion, but also eddy currents in the data (independent of subject motion). Motion is calculated in units of voxels. Bob's recommendation is to view this without necessarily excluding anyone, since slow drift over time is not going to be bad for tensor reliability, and fast motion will show up as a bad image (which can be seen by viewing the raw series movie described here).

In the subject's raw directory, load the eddy correction parameter file.

   load dti_g865_b900_ecXform.mat

You will see a struct xform in your workspace, with the field ecParams. Each row has 14 columns: the first three are translations, the next three rotations, and the rest nonlinear transformation parameters.

   [ xt, yt, zt, xr, yr, zr, ... nonlinear transformation parameters]

To plot the motion for a subject, you can use the following commands (which are displayed in units of voxels):

   mot = vertcat(xform(:).ecParams);
   plot(mot(:,1));        % x translation
   hold on
   plot(mot(:,2),'r');    % y translation
   plot(mot(:,3),'g');    % z translation
   % ...and similarly for columns 4-6 (rotations)
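
To summarize overall movement with a single trace, a small sketch (using the column meanings described above) that plots total translation per volume:

   transMag = sqrt(sum(mot(:,1:3).^2, 2));   % overall translation (in voxels) per volume
   plot(transMag); xlabel('volume'); ylabel('translation (voxels)');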


Find and exclude bad volumes

What we have found is that there are some slices where the data is bad for one reason or another (e.g., scanner hiccups, the subject moved during that acquisition timepoint). You can view the estimated motion to get a sense for where these bad slices are likely to occur.

Prior to 2010, when we added RESTORE processing to our standard pipeline, we found and excluded bad volumes manually. Although we no longer do this, we leave a description of the procedure below for the record.

Our procedure for finding and excluding bad volumes is as follows: someone (e.g., Alina, Lucy) scrolls through all the slices in all the volumes to try to detect axial slices that "jump out" and don't look like their neighbors. Common bad slices are either slices where the overall image intensity is much dimmer than the rest of the brain, or slices with large stripes or dark "ant tracks" in parts of the brain where there shouldn't be any. An extra clue is if that axial slice shows up as a thin black line or stripe on the sagittal/coronal views, or if, when viewing the subject's estimated motion parameters, there was a large spike in motion at that volume. The procedure for this visualization using fslview is described on Lucy's wiki page.

In theory, we might not remove an entire volume if there is just one bad slice and it is not in a region we care about. Also, we should try not to exclude more than ten volumes: excluding two or three bad volumes should increase the reliability of the tensor fitting without throwing away so much data that the fit becomes unreliable for lack of data. Excluding ten or more volumes, however, gets into the range where the trade-off might work the other way. If this is the case, talk to Bob and see if there is a way to exclude only the bad slices, and not the entire volume.

We use the following script to get rid of bad volumes. It takes about 5-10 minutes per subject. It also clips negative eigenvalues and logs a record to a text file (ExcludeBadVolumes.txt):
dti_FFA_excludeBadVolumes

Bob's take on removing bad volumes: keep a record of all the volumes that are removed for each subject. Look at the dt6's side by side and see if they look okay. Measures of increase/decrease in faStd probably won't change too much from before and after volume removal, but we do expect them to go up if indeed horribly bad slices are removed. Maybe in the end we will be trying to equate the number of volumes removed from each dataset. If we really want to equate a variance measure, we should do this on a defined ROI, rather than on the whole brain as the script dtiCalculateTensorReliability currently does. Nothing is better than just looking at the data.

View movie of raw images

We can view our raw data, just as we could view the raw timeseries, using dtiRawMovie. It can output an animated gif to file, and takes as input the original raw dicom nifti, the eddy-current correction transform file, and a slice number (30 is a good, middle-of-the-volume slice):

   dtiRawMovie('dti_g865_b900.nii.gz','dti_g865_b900_ecXform.mat',30)

If there is a subject with high variance (e.g., in MD), or a subject whose data is otherwise iffy (e.g., they moved several voxels during the scan), then you will want to carefully scrutinize the movie to see if there are any unusable images in the series. Such a blowout image would be one that does not depict a brain, where the shape of the brain is very far from its normal shape, etc. It is usually the result of the subject making a quick movement during image acquisition.

You may not need to throw out this subject's data, but only to get rid of some of the frames (ask Bob).

General checks

Try to look at the results of each stage before moving on. You do not want to look at your dt6 after running dtiRawPreprocess all night, only to discover one of the initial niftis was corrupted.

  • View all niftis created at each stage before moving on (e.g., with MRICRON). You can also view niftis in MATLAB with the following commands:
   ni = readFileNifti('file.nii.gz');
   showMontage(ni.data);
  • Take note of any error messages. For example, "empty t1" is a bad error, since it means that the dt6 is not aligned to the T1. (This is fixed in the newest version of dtiRawPreprocess, however.)

Look at the dt6 in dtiFiberUI

  • RGB view: M<-->L should be red, A<-->P should be green, S<-->I should be blue. If these colors are off, it's probably because the preprocessing function did not adjust to the particular frequency-encoding direction of your data; make sure that the asset flag matched your frequency-encode direction
  • FA view: there should not be a confetti halo around the brain (where the skull is); if there is, the mask is not being applied correctly, which could indicate that you are working with old/bad code that needs to be updated, or that something went wrong with the alignment
  • T1 background not already loaded: this is a sign that dtiRawPreprocess did NOT find your t1.nii.gz, and your DTI data is not aligned to the t1 -- a bad sign. If, however, the t1.nii.gz is not loaded for a benign reason, you can simply fix the dt6.mat file to reflect the correct T1 location as described here

For more information

Frequency vs Phase Encode Direction

It is the frequency encode direction, not the phase encode direction, that is entered at the scanner, so we try to refer to the frequency direction as much as possible to avoid confusion about what should be entered into the scanning protocol.

 FrequencyDir   PhaseDir   Phase FOV   DicomHdrField   Distortion Dir   Distortion Description
 A/P            L/R        0.75        'ROW'           L/R              Wave-y looking temporal lobes (axial/coronal), but shorter scan
 L/R            A/P        1           'COL'           A/P              Front and back tips of brain squashed in, but longer scan (phase FOV not shortened)


About dt6.mat files

You can load this file with the following command

  dt6 = load('dt6.mat')

Doing so will give you a variable dt6 with the fields adcUnits, params, and files. No actual data is stored in the dt6.mat file; instead, it points to the other files where the data is located. The files field looks like this:

          b0: 'dti30/bin/b0.nii.gz'
   brainMask: 'dti30/bin/brainMask.nii.gz'
      wmMask: 'dti30/bin/wmMask.nii.gz'
     tensors: 'dti30/bin/tensors.nii.gz'
       faStd: 'dti30/bin/faStd.nii.gz'
       mdStd: 'dti30/bin/mdStd.nii.gz'
     pddDisp: 'dti30/bin/pddDispersion.nii.gz'
          t1: 't1.nii.gz'

These paths and filenames are set relative to the subject's top-level directory during preprocessing. Sometimes you will move a file (e.g., t1.nii.gz) after preprocessing, but would still like dtiFiberUI to find and automatically load these files. You can change the path names with the following commands. The '-struct' flag is important in the save command in order for load('dt6.mat') to work correctly.

  dt6.t1='t1/t1.nii.gz';
  save('dt6.mat','-struct','dt6');

dtiLoadDt6

For most scripting purposes, you will want to use the following command

   dt = dtiLoadDt6('dt6.mat')

This function will read the dt6 struct to get the tensor data and other information. Fields include

   dt.dataFile: path to dt6.mat
   dt.subName: string with just subject directory name
   dt.dt6: tensor data
   dt.xformToAcpc: transforms tensor image coordinates to ac-pc coordinates
   dt.xformVAnatToAcpc: used to transform mrVista vAnatomy coordinates to 
                   mrDiffusion ac-pc space. You should only need this one when 
                   dealing with mrVista ROIs.

Xforms are often called for in functions that are looking at ROI or fiber coordinate fields to get corresponding tensor data for those coordinates. If it's unclear which xform you should use, or if you need to invert the xform, try looking at example scripts, or reading the help comments for the function.

If the information is not clear, a rule of thumb is to usually provide

   inv(dt.xformToAcpc)

ROI and fiber coordinates are usually stored as ac-pc coordinates, whereas tensor data is in a different coordinate space. Therefore, if you are starting with fiber coordinates, and would like to extract FA values (from tensor data) for corresponding voxels, then you would pass into the function the inverted xform, since you are transforming from acpc to tensor space. This is used in functions like dtiGetValFromFibers.

If, however, you are tracking fibers, then you are starting in tensor space, and want to end up with fibers in acpc space. So you would pass in the dt.xformToAcpc without inverting it, because you want to go from tensor space to ac-pc coordinates as for functions like dtiTrackFibers.
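
As a concrete sketch (the coordinate values here are hypothetical; this assumes mrAnatXformCoords(xform, coords), the vistasoft helper that applies an affine xform to a list of coordinates):

   dt = dtiLoadDt6('dt6.mat');
   acpcCoord = [-40 -50 -10];                                     % hypothetical ac-pc coordinate (mm)
   imgCoord  = mrAnatXformCoords(inv(dt.xformToAcpc), acpcCoord); % ac-pc -> tensor image space
   acpcAgain = mrAnatXformCoords(dt.xformToAcpc, imgCoord);       % tensor image space -> ac-pc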

Preprocessing scripts

There are some example code snippets at the end of dtiRawPreprocess, which can be viewed by typing:

  edit dtiRawPreprocess

Davie has also written a preprocessing script that goes through the three different age directories in the Kids project DTI directory. It only preprocesses subjects that do not yet have dt6s but do have dti and t1 nifti files in the correct locations. It also creates a text file log documenting the preprocessing history.

  VISTASOFT/mrDiffusion/analysisScripts/dti_FFA_preprocessScript
  VISTASOFT/mrDiffusion/analysisScripts/checkForNiftiDicomT1files
  VISTASOFT/mrDiffusion/analysisScripts/dti_FFA_createDTIniftis


The top-level script, dti_FFA_preprocessScript will call two other functions.


The first function, dti_FFA_createDTIniftis, operates over all subject directories in the /Kids/dti/ directory that have a dti directory at the top level. It moves/renames this directory and creates the initial raw dti_g865_b900.nii.gz file. If the subject does not also have a t1.nii.gz (as would be the case if you are calling dti_FFA_preprocessScript right at the scanner), just this initial nifti will be created; it can then be evaluated for initial quality. See here for more details.


The second function, checkForNiftiDicomT1files, expects all of the conditions below to be met. If so, it will call dtiRawPreprocess, and the result will be a dti30 directory with a dt6.mat file inside:

  • Only one nifti file in subject/raw/ directory
  • A directory with the same name as the nifti file above, also in the subject/raw/ directory
  • An I0001.dcm file at the top level within the directory above
  • A t1.nii.gz in the subject/t1 directory
  • No already existing dti30 directory or dt6.mat file


This script can be run with the actual dtiRawPreprocess command commented out, to let the user make sure that the script is catching and skipping all the correct subjects.

More scripting notes

Everything in the dti30/bin directory is in the same space, same voxel size, etc (so no need to compute transforms or anything like this for scripting purposes).

Xforms: nifti files have two xforms, qto_ijk and qto_xyz (see dticreateFirstFromNifti for example usage). qto_xyz goes from image space to acpc space, and qto_ijk from acpc space back to image space. If you are coding and the resulting coordinates are off, try substituting the other xform.
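
A small round-trip sketch (the image coordinate is made up) showing that the two xforms are inverses of each other:

   ni = readFileNifti('t1.nii.gz');
   imgCoord  = [91; 127; 73; 1];         % homogeneous image coordinate (example)
   acpcCoord = ni.qto_xyz * imgCoord;    % image space -> ac-pc
   imgAgain  = ni.qto_ijk * acpcCoord;   % ac-pc -> image space (recovers imgCoord)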

The Wandell lab has smoothed their tensor data. Other labs do this too, and previous data was smoothed (by accident, as a result of resampling multiple times). For STT, we may want to use smoothed data: try dtiTensorSmoothing, 2 iterations, providing dtig865b900_ALIGNED.nii.gz. Bob discourages use of conTrack on smoothed data, but can't remember why. This smoothing algorithm is smart -- it smooths using the tensor itself as a kernel (smoothing more in the direction of the tensor), and only resamples at the very end.

dtiRawPreprocess bugs

The most efficient way to deal with these bugs is to go in with a good sense for how long things should take, and if things are taking several orders of magnitude longer than they should, find out what is going on. Usually, dtiRawPreprocess will tell you what it is doing and how long each step should take. Overall, a good rule of thumb is that if you set a subject going at the beginning of the day, and they are still not finished by the end of the day (or start them overnight and they aren't finished by morning), something is wrong/frozen.

The exception to this heuristic is if there is a network processing speed problem due to disks being full. Even if your specific partition is not full (e.g., /biac3/kgs5/), if a neighboring partition is full (e.g., /biac3/wandell5/) this could cause problems. To check for this, navigate in a terminal window to the directory you suspect is causing a problem, then type:

  df -h .   # prints how much space is used/available

Once you have decided that something is definitely wrong/stuck/frozen, take these steps:

  • Try the process on multiple machines (e.g., teal, sechel), and make sure you are using the right Matlab and spm (e.g., Matlab R2008a with /usr/local/matlab/toolbox/mri/spm5_r2008), or at least know which combination you did use. This info will help zero in on what could be going wrong, and whoever helps might need to know.
  • Check the trac repository and see if there were any recent changes that could explain why dtiRawPreprocess.m is broken now, yet wasn't before.
  • Ctrl+c on the frozen process and note down what the error is; this is a clue to what is breaking.

Random

  • Diffusion is expressed as the change in the surface area of a "sphere" (where the sphere is the probability cloud of movement) over time; that's why the units are micrometers squared per millisecond

KGS lab questions, Bob answers

  1. How is the dtiRawResample done? What is the order of the transformations -- first aligning and then eddy correction, or the opposite? Does the order of transformations matter? Answer: align then eddy-correct; no, it doesn't matter.
  2. What is the reference to which the motion correction is done during the eddy current correction? Answer: the mean of the motion-corrected b0 images (raw/b0.nii.gz). This is also used to align to the T1.
  3. What are the advantages and disadvantages of using b-splines vs. trilinear interpolation during dtiRawResample? How does that relate to banding in the variance estimates? Answer: bspline is a better interpolation method that preserves edges, but it can amplify noise, which is why we no longer recommend it. Trilin does produce banding in the variance estimates, but that has no impact on tracking and very little (maybe none?) impact on FA/MD/AD/RD estimates. My observation suggests that b-spline eliminated the banding, but Rohde et al. claim that it should still be there. So, given all this, we suggest sticking with trilin and not worrying about variance banding.
     DY addition: PDD doesn't make a huge difference in conTrack tracking, in Tony's investigations; inspection of the trilinear-fit PDD doesn't show any banding; banding is apparent when viewing the faStd image using showMontage(ni.data).
     DY, more thoughts from a Tony/Bob conversation (1/22/2010): the switch to trilinear was motivated by FA map quality -- FA is a highly noise-sensitive measure, since it is a stdev. You can see that the FA map for a trilinear fit is much smoother than a bspline-fit FA map, which has sharper edges but also more heterogeneity. Given our subject population (likely to move) and likeliest best resolution, it makes sense (at least DY was temporarily convinced) to go with the trilinear interp method. As a SEPARATE issue, we can ask whether the fitting method matters for the PDDDisp image conTrack uses. The answer: (1) there isn't visible noise banding in the PDD disp image; (2) even if there were, conTrack imposes a minimum non-zero dispersion, ensuring some level of exploration no matter what, even if the disp were artificially underestimated; (3) if the disp were artificially overestimated in some regions, that's OK -- it will just make the algorithm take longer to run and explore more options; (4) points 1-3 are somewhat moot in that the shape of the tensor matters much more for determining how conTrack samples, and varying levels of noise hardly budge the PDD disp estimates -- PDD disp is much less sensitive to noise than FA in principle. THE ANSWER: USE TRILINEAR FOR TENSOR FITS; DON'T WORRY ABOUT PDD METHOD.
  4. The dtiRawFitTensor step fits the tensor model to the resampled dMRI data using the bvals and the reoriented bvecs. On what data is the least-squares fit done, and is it a linear least-squares fit? Answer: dtiRawFitTensor can do linear least squares (set fitMethod='ls'), but that's been mexified in dtiRawFitTensorMex, which is much faster. If you use dtiRawFitTensor with fitMethod='rt', then you are doing a robust tensor fit with non-linear least squares and outlier rejection, as described in the Chang paper. As for 'what data': it fits whatever data you give it, typically the resampled dMRI data. (I'm not sure why you would be asking this. Maybe you mean something else?)
  5. To discuss with Kalanit and Brian: conTrack tracking on RESTORE data -- use mismatched bspline-interp PDD and RESTORE tensors? Track on matching non-RESTORE trilinear tensors and non-RESTORE trilinear PDD? Or figure out how to do the bootstrap on the RESTORE data and feed in the RESTORE trilin PDD? For the STT tracking, run it twice, once on the ls and once on the rt tensors, and see if it makes any difference in detecting the Mori fiber tracts. Try to use RESTORE tensors with conTrack, if convinced that RESTORE is good. The issue is that we don't currently have a way to create the PDD disp from the RESTORE dataset. So the choice is to create a way to do this, or not worry about it, since the PDD disp map doesn't really matter so much anyhow (see above). THE ANSWER: FOR CONTRACK, TRY THE TRILINEAR RESTORE TENSORS WITH NON-RESTORE TRILINEAR PDD DISP.
  6. When we are getting the motion params, we get one number per time point. What is that number? The mean motion across the entire brain? The mean motion of the center of mass?