Stopping criteria

Algorithms which generate streamlines based on maximum likelihood fiber directions (the principal eigenvector from a diffusion tensor fit) have tended to require harsh streamline stopping criteria based on fractional anisotropy and local curvature (the angle between successive steps). Fractional anisotropy thresholds have tended to lie in the range 0.2-0.4 (e.g. [7]), and curvature thresholds have been as strict as requiring successive steps to be within $45^\circ$ of each other (e.g. [5]). These criteria are in place to reduce the sensitivity of the streamlining to noise in the image, partial volume effects, and other related problems. The aim is to reduce the possibility of false positives in the results by progressing only when there is high confidence in fiber direction, and when that direction is anatomically plausible. The downside of these constraints is the limitation they impose on which fiber tracts may be reconstructed and where in the brain they may occur. For example, deep gray matter structures, despite displaying a high degree of order in their principal diffusion directions, tend to have low anisotropy (often below the threshold used by streamlining algorithms). Streamlines will also tend to terminate well before cortex, as anisotropy reduces and uncertainty in fiber direction increases there.
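
As a concrete illustration, the sketch below shows how such criteria are typically applied at each step of a deterministic streamline. The function name, the 0.2 anisotropy threshold and the $45^\circ$ curvature limit are illustrative values drawn from the ranges quoted above, not any particular implementation; the step directions are assumed to be unit vectors.

    import numpy as np

    def should_stop(fa_value, prev_dir, new_dir,
                    fa_threshold=0.2, max_angle_deg=45.0):
        # Stop if local anisotropy is too low to trust the tensor fit.
        if fa_value < fa_threshold:
            return True
        # Stop if the turn between successive (unit-vector) steps is
        # implausibly sharp.
        cos_angle = np.clip(np.dot(prev_dir, new_dir), -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle)) > max_angle_deg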

In such circumstances a probabilistic algorithm has significant advantages. First, in regions where fiber direction is uncertain (these often coincide with regions of low anisotropy), the algorithm has available to it a direct representation of this uncertainty. Hence, even though it cannot progress along a single direction with high confidence, it can progress in many directions. The uncertainty in this area will be represented by voxels further along the path having lower probabilities associated with them; however, a high probability of connectivity to the seed voxel may still be associated with the region into which the paths progress. A second advantage of a probabilistic algorithm is robustness to noise. It can be difficult to track beyond a noisy voxel using a non-probabilistic algorithm, as the noise may initiate a meaningless change in path. With a probabilistic algorithm, however, paths which have taken errant routes tend to disperse quickly, so that voxels along these paths are classified with low probability. In contrast, "true" paths tend to group together, giving a much higher probability of connection for voxels on these paths.
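
The way in which dispersal of errant paths translates into low connection probabilities can be seen in a minimal Monte Carlo sketch such as the one below. Here sample_direction is a hypothetical stand-in for drawing a unit vector from the local distribution on fiber direction (the representation of uncertainty referred to above); the probability of connection to the seed is approximated by the fraction of sampled streamlines passing through each voxel. The names and parameter values are assumptions for illustration only.

    import numpy as np

    def connectivity_map(seed, sample_direction, n_samples=5000,
                         n_steps=2000, step_size=0.5):
        # sample_direction(pos) is assumed to return a unit vector drawn
        # from the local distribution on fiber direction, or None to stop.
        visit_counts = {}
        for _ in range(n_samples):
            pos = np.asarray(seed, dtype=float)
            voxels_on_path = set()
            for _ in range(n_steps):
                direction = sample_direction(pos)
                if direction is None:
                    break
                pos = pos + step_size * np.asarray(direction)
                voxels_on_path.add(tuple(np.floor(pos).astype(int)))
            # Count each voxel at most once per sampled path.
            for voxel in voxels_on_path:
                visit_counts[voxel] = visit_counts.get(voxel, 0) + 1
        # Errant paths disperse, so their voxels accumulate few counts;
        # coherent paths overlap, so their voxels accumulate many.
        return {v: c / n_samples for v, c in visit_counts.items()}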

These advantages significantly reduce the need for anisotropy and curvature stopping criteria. The results presented here are generated with no anisotropy threshold, and with a local curvature threshold of $\pm 80^\circ$ for each sample. This curvature threshold is required because, without it, the sampled streamlines may track back along a path similar to one already visited, artificially increasing the probability along that path. To reduce this effect further, we check at every step whether the path is entering an area it has already visited, and terminate it if it is.
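
A minimal sketch of these per-step checks might look as follows; the function name and the voxel-based bookkeeping are assumptions for illustration, not the authors' code. The loop-prevention test ignores the voxel the path currently occupies, so that several small steps within one voxel do not trigger termination.

    import numpy as np

    def accept_step(prev_dir, new_dir, prev_pos, new_pos, visited,
                    max_angle_deg=80.0):
        # Curvature check: reject steps that turn by more than +/-80 degrees
        # relative to the previous (unit-vector) step.
        cos_angle = np.clip(np.dot(prev_dir, new_dir), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > max_angle_deg:
            return False
        # Loop prevention: terminate if the path enters a voxel it has
        # already visited (excluding the voxel it is currently in).
        old_voxel = tuple(np.floor(prev_pos).astype(int))
        new_voxel = tuple(np.floor(new_pos).astype(int))
        if new_voxel != old_voxel and new_voxel in visited:
            return False
        visited.add(new_voxel)
        return True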

