Tensor Summary Measures


Revision as of 23:44, 20 July 2015 by Rjpatruno

After we estimate the fibers between two ROIs, we often want to summarize the fiber properties. Several scripts illustrate how to do this, including ctrCompStatsORBundles.m, orDiffusivity.m, and orVolume.m. These were developed to study MM, and they live in the analysis section of the mrDiffusion directory. Here we list some of the issues we considered when writing these scripts.

An alternative to combining data along the path is to compute tensor properties (FA/MD/AD/RD) along the pathway trajectory in a stepwise fashion, one node at a time. This can be done in the GUI (walk-through); for an example of a batch script, see dtiComputeDiffusionPropertiesAlongFG_group.m.
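The node-by-node idea can be sketched as follows. This is not the actual dtiComputeDiffusionPropertiesAlongFG_group.m implementation; the function name, nearest-neighbour lookup, and toy data are all illustrative assumptions:

```python
import numpy as np

def properties_along_fiber(scalar_map, fiber_nodes):
    """Sample a scalar map (e.g. an FA volume) at each node of a fiber.

    scalar_map  : 3-D array of voxelwise values, assumed to share the
                  fiber's voxel coordinate space (an assumption here)
    fiber_nodes : (N, 3) array of node coordinates along one fiber
    """
    # Nearest-neighbour lookup: round each node to its containing voxel.
    idx = np.round(fiber_nodes).astype(int)
    return scalar_map[idx[:, 0], idx[:, 1], idx[:, 2]]

# Toy example: a 4x4x4 FA volume and a 3-node fiber trajectory.
fa = np.zeros((4, 4, 4))
fa[1, 1, 1] = 0.5
fa[2, 2, 2] = 0.7
nodes = np.array([[1.1, 0.9, 1.0], [2.0, 2.0, 2.0], [0.0, 0.0, 0.0]])
print(properties_along_fiber(fa, nodes))  # -> [0.5 0.7 0. ]
```

A real implementation would typically interpolate rather than round, but the stepwise structure is the same: one value per node along the trajectory.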

Combining data along the path

First, we considered two methods that we might use to combine data along the path. One method includes each DTI voxel once and only once: even if multiple fibers pass through the same voxel, we count the data from that voxel a single time. The alternative is to include data from a voxel for every fiber that passes through it. This effectively weights some voxels much more heavily than others; the weighting corresponds to the fiber-tracking estimate of the likelihood that a particular voxel is actually in the path.

For diffusivity estimates, we decided that we prefer to use the weighting implicit in the fiber tracking rather than to include each voxel once and only once.

We prefer the second method because (a) voxels at the edge of the fiber paths are less certain, and (b) those voxels are likely to suffer partial-volume effects. For both reasons, they should not contribute as much to the estimate as voxels in the middle of the path that are confidently identified.

In the scripts performing these diffusion analyses, a corresponding flag, set to 'uniqueeig' or 'alleig', selects between the two computations. They produce different results, so this decision matters. The 'uniqueeig' setting counts each voxel only once. The 'alleig' setting uses each voxel every time a fiber tract passes through it, and thus uses the weighting implicit in the fiber tracts to compute the summary statistic. In general, we prefer 'alleig' over 'uniqueeig'. Be sure to check the setting of this flag if you run the code, and check that the reasoning we applied here holds for your situation. ('Eig' refers to the eigenvalues of the tensor.)
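The difference between the two flag settings can be shown with a small sketch. This is not the mrDiffusion code; the function name, visit-counting scheme, and toy data are assumptions made for illustration:

```python
import numpy as np
from collections import Counter

def summarize(scalar_map, fibers, mode="alleig"):
    """Summarize a scalar map (e.g. MD) over a fiber bundle.

    mode='uniqueeig' counts each voxel once, no matter how many fibers
    pass through it; mode='alleig' counts a voxel once per visiting
    fiber, i.e. weights voxels by their visit count.
    """
    visits = Counter()
    for fiber in fibers:                       # fiber: (N, 3) node coords
        # Count each voxel at most once per fiber.
        for vox in {tuple(v) for v in np.round(fiber).astype(int)}:
            visits[vox] += 1
    if mode == "uniqueeig":
        return float(np.mean([scalar_map[v] for v in visits]))
    # 'alleig': visit counts act as weights on the per-voxel values.
    total = sum(scalar_map[v] * n for v, n in visits.items())
    return float(total / sum(visits.values()))

# Toy example: two fibers share voxel (1,1,1); one also crosses (0,0,0).
md = np.zeros((3, 3, 3))
md[1, 1, 1] = 1.0
md[0, 0, 0] = 0.5
fibers = [np.array([[1.0, 1.0, 1.0]]),
          np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])]
print(summarize(md, fibers, "uniqueeig"))  # 0.75
print(summarize(md, fibers, "alleig"))     # ~0.833, shared voxel counted twice
```

The shared voxel pulls the 'alleig' mean toward its value, which is exactly the implicit weighting the text describes.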

Second, we considered the related question of whether to include or exclude voxels based on some property of the voxel itself. In early publications and analyses, for example, TS made summaries that included only voxels exceeding a minimum linearity threshold. He did this because he was concerned that the fiber tracts would pass through voxels containing crossing fibers, or voxels at the margin. He didn't want to include crossing fibers in the estimate, and he felt that a linearity requirement would restrict the summary to the pure fibers of the tract itself. The problem with this strategy is that there is no secure way to set the linearity threshold, and the estimated values differ significantly as the threshold changes. So although we did this once in a publication, we no longer think it is a great idea.
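For concreteness, one common linearity index is the Westin cl measure, cl = (l1 - l2) / (l1 + l2 + l3) for sorted tensor eigenvalues l1 >= l2 >= l3; we are assuming here that this (or a close variant) is the kind of measure the threshold was applied to, and the threshold value below is arbitrary:

```python
import numpy as np

def westin_linearity(evals):
    """Westin linearity index cl = (l1 - l2) / (l1 + l2 + l3),
    with eigenvalues sorted so that l1 >= l2 >= l3."""
    l1, l2, l3 = np.sort(evals)[::-1]
    return (l1 - l2) / (l1 + l2 + l3)

# A prolate ("linear") tensor vs. an oblate ("planar") one -- the kind
# of crossing-fiber voxel a linearity threshold was meant to exclude.
prolate = [1.5e-3, 0.3e-3, 0.3e-3]   # hypothetical eigenvalues, mm^2/s
oblate  = [1.0e-3, 1.0e-3, 0.2e-3]
threshold = 0.3  # arbitrary; as the text notes, there is no secure choice
print(westin_linearity(prolate) > threshold)  # True  -> kept
print(westin_linearity(oblate)  > threshold)  # False -> excluded
```

The sensitivity problem is visible even here: nudging the threshold moves borderline voxels in or out of the summary, which is why the estimates shift with it.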

The one true path

The principle of retrieving estimates from the most secure voxels on the path is a good idea. The way we are doing it is by the implicit weighting from the fiber tracts. We will continue to think about better ways.

Fiber counts and densities

Using fiber counts is a natural way to obtain the relative weights of different voxels. Another way would be to build a model (say, a generalized cylinder) of the hull of the fibers and then apply an exponential or linear fall-off function for the weights, declining from the center of the cylinder to the edge. This type of approach is taken in dtiFiberGroupPropertyWeightedAverage, for example.
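An exponential fall-off of this kind can be sketched as below. This is not the dtiFiberGroupPropertyWeightedAverage code; the function name, the straight-line axis, and the scale parameter are simplifying assumptions (a real bundle core is a curve, not a line):

```python
import numpy as np

def falloff_weights(points, axis_point, axis_dir, scale=2.0):
    """Exponential fall-off weights based on radial distance from an
    idealized straight 'generalized cylinder' axis.

    points     : (N, 3) voxel-center coordinates
    axis_point : a point on the cylinder axis
    axis_dir   : unit vector along the axis
    scale      : e-folding distance (hypothetical parameter)
    """
    d = points - axis_point
    # Remove the along-axis component; what remains is the radial offset.
    radial = d - np.outer(d @ axis_dir, axis_dir)
    r = np.linalg.norm(radial, axis=1)
    return np.exp(-r / scale)

pts = np.array([[0.0, 0.0, 0.0],   # on the axis
                [0.0, 2.0, 0.0],   # 2 units off axis
                [5.0, 0.0, 0.0]])  # displaced along the axis only
w = falloff_weights(pts, np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(np.round(w, 3))  # [1.    0.368 1.   ]
```

Only radial distance matters: moving along the axis leaves the weight at 1, while moving off-axis decays it exponentially, which is the "center to edge" decline described above.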

The pattern of weights can be visualized using code that shows fiber density maps; we can do this in mrDiffusion. If you believe that Quench or other tools can visualize this, please say how here. We think that seeing these weights should be made easy for users.

NOTE: There was an issue about whether there was a difference in the absolute location identified by mrDiffusion (STT calculation) and by ConTrack. We need to check whether we are off by one pixel or not. Netta is testing this right now ... and it looks like there is no error. When she is convinced, then she will delete this paragraph.
