Local Parameter Estimation: Theory

The Diffusion Tensor Model. The diffusion tensor has often been used to model local diffusion within a voxel (e.g. [10,15,16]). The assumption made is that local diffusion may be characterized by a three-dimensional Gaussian distribution ([10]), whose covariance matrix is proportional to the diffusion tensor, $ \vec{D}$. The resulting diffusion-weighted signal, $ \mu_i$, along a gradient direction $ \vec{r}_i$ with $ b$-value $ b_i$ is modeled as:

$\displaystyle \mu_i=S_0\exp{(-b_i\vec{r}_i^T\vec{D}\vec{r}_i)},$ (5)

where $ S_0$ is the signal with no diffusion gradients applied, and the diffusion tensor $ \vec{D}$ is:

$\displaystyle \vec{D}=\left[ \begin{array}{ccc} D_{xx}&D_{xy}&D_{xz}\\ D_{xy}&D_{yy}&D_{yz}\\ D_{xz}&D_{yz}&D_{zz} \end{array}\right]$ (6)
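As a minimal sketch of the signal model in Eq. (5), the following computes the predicted signal for one gradient direction. The function and variable names are ours, not from the paper.

```python
import numpy as np

def predicted_signal(S0, D, b, r):
    """Predicted diffusion-weighted signal S0 * exp(-b r^T D r), Eq. (5).

    S0: signal with no diffusion gradients; D: 3x3 diffusion tensor;
    b: b-value (s/mm^2); r: unit gradient direction (length 3).
    """
    r = np.asarray(r, dtype=float)
    return S0 * np.exp(-b * (r @ D @ r))

# An isotropic tensor with diffusivity d attenuates every direction
# equally, so the signal reduces to S0 * exp(-b * d).
d = 1.0e-3                 # a typical brain diffusivity, mm^2/s
D = d * np.eye(3)
mu = predicted_signal(100.0, D, 1000.0, [1.0, 0.0, 0.0])
```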

When performing point estimation of the parameters in the diffusion tensor model, it has been convenient to choose the free parameters to be the 6 independent elements of the tensor, $ D_{xx}-D_{zz}$, and the signal strength when no diffusion gradients are applied, $ S_0$. This parametrization allows estimation to take the form of a simple least squares fit to the log data. When sampling, however, our choice of parametrization is far less constrained by the estimation technique. The parameters of real interest in the tensor are the three eigenvalues, and the three angles defining the shape and orientation of the tensor. By choosing these as the free parameters in the model, we not only gain immediate access to the posterior pdfs on the parameters of real interest, but also the freedom to apply constraints or add information exactly where we would like to. As a simple example, as will be seen later, a sensible choice of prior distribution on the eigenvalues makes it easy to constrain them to be positive. The diffusion tensor is therefore parametrized as follows:

$\displaystyle \vec{D}= \vec{V}\vec{\Lambda}\vec{V^T},$ (7)

where

$\displaystyle \vec{\Lambda}=\left[ \begin{array}{ccc} \lambda_1&0&0\\ 0&\lambda_2&0\\ 0&0&\lambda_3 \end{array} \right]$ (8)

and $ \vec{V}$ rotates $ \vec{\Lambda}$ to ( $ \theta,\phi,\psi$), such that the tensor is first rotated so that its principal eigenvector aligns with ( $ \theta,\phi$) in spherical polar coordinates, and then rotated by $ \psi$ around its principal eigenvector.
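The parametrization of Eqs. (7)-(8) can be sketched concretely as follows. The paper does not give explicit rotation matrices, so the convention here (rotate the z-axis to the ( $ \theta,\phi$) direction, after spinning by $ \psi$ about it) is our assumption, as are all names.

```python
import numpy as np

def Rz(a):
    """Rotation by angle a about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):
    """Rotation by angle a about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def tensor_from_eigen(theta, phi, psi, lams):
    """Build D = V Lambda V^T (Eq. 7) from three angles and three eigenvalues.

    Assumed convention: Rz(phi) @ Ry(theta) takes the z-axis to the
    (theta, phi) direction in spherical polars; Rz(psi) first spins the
    transverse eigenvectors about what becomes the principal axis.
    """
    R = Rz(phi) @ Ry(theta) @ Rz(psi)
    # Reorder columns so the principal eigenvector (paired with lams[0])
    # is the first column of V, matching the ordering of Lambda in Eq. (8).
    V = R[:, [2, 0, 1]]
    return V @ np.diag(lams) @ V.T
```

Because $ \vec{V}$ is orthogonal, the eigenvalues of the resulting $ \vec{D}$ are exactly ( $ \lambda_1,\lambda_2,\lambda_3$), and positivity constraints on them translate directly into positive-definiteness of the tensor.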

The noise is modeled separately for each voxel as independently identically distributed (iid) Gaussian, with a mean of zero and standard deviation across acquisitions of $ \sigma$. The probability of seeing the data at each voxel, $ \vec{Y}$, given the model, $ M$, and any realization of the parameter set, $ \omega=(\theta,\phi,\psi,\lambda_1,\lambda_2,\lambda_3, S_0,\sigma)$, may now be written as:


$\displaystyle \mathcal{P}(\vec{Y}\vert\omega,M) = \prod_{i=1}^n \mathcal{P}(y_i\vert\omega,M), \qquad \mathcal{P}(y_i\vert\omega,M) \sim \mathcal{N}(\mu_i,\sigma),$ (9)

where $ n$ is the number of acquisitions, and $ y_i$ and $ \mu_i$ are the measured and predicted values of the $ i^\textrm{th}$ acquisition respectively. (Note that throughout this paper, $ i$ will be used to index acquisition number).

$\displaystyle \mu_i = S_0\exp{(-b_i\vec{r}_i^T\vec{D}\vec{r}_i)}.$ (10)
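The log of the likelihood in Eq. (9) can be sketched as below, assuming iid Gaussian noise; the names are ours.

```python
import numpy as np

def log_likelihood(y, mu, sigma):
    """Log of Eq. (9): the sum over acquisitions of log N(y_i; mu_i, sigma^2).

    y: measured signals; mu: model-predicted signals (Eq. 10); sigma: noise
    standard deviation.
    """
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    n = y.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.sum((y - mu)**2) / sigma**2)
```

Working in log space avoids numerical underflow of the product over acquisitions; a sampler only ever needs this quantity added to the log prior.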

Thus, the model at each voxel has 8 free parameters, each of which is subject to a prior distribution. Priors are chosen to be non-informative, with the exception of ensuring positivity where sensible.

$\displaystyle \mathcal{P}(\theta,\phi,\psi)\propto\sin(\theta)$      
$\displaystyle \mathcal{P}(S_0)\sim\mathcal{U}(0,\infty)$      
$\displaystyle \mathcal{P}(\lambda_1)=\mathcal{P}(\lambda_2)=\mathcal{P}(\lambda_3)\sim\Gamma(a_\lambda,b_\lambda)$      
$\displaystyle \mathcal{P}(\frac{1}{\sigma^2})\sim\Gamma(a_\sigma,b_\sigma)$     (11)

The parameters $ a$ and $ b$ in the Gamma distributions are chosen to give these priors a suitably high variance, such that they have little effect on the posterior distributions except where we ensure positivity. Note that the non-informative prior in angle space is proportional to $ \sin(\theta)$, ensuring that every elemental area on the surface of the sphere, $ \delta A=\sin(\theta)\delta\theta\delta\phi$, has the same prior probability.
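The priors in Eq. (11) can be collected into a single log-prior function, sketched below. The hyperparameter defaults ($ a=b=1$) are placeholders for the paper's unspecified high-variance choices, and all names are ours.

```python
import math

def log_gamma_pdf(x, a, b):
    """Log of the Gamma(a, b) density (shape a, rate b); -inf off support."""
    if x <= 0.0:
        return -math.inf
    return a * math.log(b) - math.lgamma(a) + (a - 1.0) * math.log(x) - b * x

def log_prior(theta, phi, psi, lams, S0, sigma,
              a_lam=1.0, b_lam=1.0, a_sig=1.0, b_sig=1.0):
    """Log of Eq. (11), up to additive constants from the improper priors.

    phi and psi carry flat priors, so their (constant) contribution is dropped;
    the improper uniform on S0 contributes only a positivity check.
    """
    if S0 <= 0.0 or sigma <= 0.0 or not (0.0 < theta < math.pi):
        return -math.inf
    lp = math.log(math.sin(theta))                            # uniform on the sphere
    lp += sum(log_gamma_pdf(l, a_lam, b_lam) for l in lams)   # positive eigenvalues
    lp += log_gamma_pdf(1.0 / sigma**2, a_sig, b_sig)         # prior on the precision
    return lp
```

Returning $ -\infty$ outside the support is what enforces the positivity constraints: any proposed sample with a non-positive eigenvalue, $ S_0$, or $ \sigma$ is rejected with probability one.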



Tim Behrens 2004-01-22