
The model for single-subject analysis

Let $y_t$ be the time series observed at a given voxel for $t=1\cdots T$; it is represented as a vector in $I\!\!R^T$, i.e. $y$ is a $T\times 1$ vector. The relationship between the observed time series and the paradigm of the experiment (together with any other covariates) is expressed as:

\begin{displaymath}y=X\beta+\epsilon\end{displaymath}

where $X$ is a $T\times q$ matrix representing the design and the covariates, $\beta$ is a $q\times 1$ vector of parameters (expressing the linear relationship) and $\epsilon$ is a $T\times 1$ vector of random errors (departures from the model, or residuals) assumed identically distributed (independent or not) with mean 0 and common variance $\sigma^2$. The vector $\epsilon$ therefore has mean 0 and variance-covariance matrix $\sigma^2I\!d_T$ if the errors are independent, or $\sigma^2V$ otherwise, $V$ being the autocorrelation matrix (assumed known in this section). One usual way to estimate $\beta$ is by least squares on the ``residuals'':
\begin{displaymath}\widehat{\beta}=arg\min_\beta [^t(y-X\beta)(y-X\beta)]\end{displaymath} (14)

This problem can be solved by taking partial derivatives; this leads to the so-called normal equations $^tXX\beta={\;}^tXy$ and to a solution using the inverse or g-inverse of $^tXX$:
\begin{displaymath}\widehat{\beta}=(^tXX)^-{\;}^tXy\end{displaymath} (15)
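Indeed, setting the partial derivatives of the criterion (14) to zero yields
\begin{displaymath}
\frac{\partial}{\partial\beta}\left[{}^t(y-X\beta)(y-X\beta)\right]=-2\,{}^tX(y-X\beta)=0
\quad\Longleftrightarrow\quad {}^tXX\,\beta={\;}^tXy .
\end{displaymath}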

The estimate (15) is called the ordinary least squares (OLS) estimate and, if the errors are independent ($var(\epsilon)=\sigma^2I\!d_T$), the Gauss-Markov theorem [11][3] states that it is the best linear unbiased estimate (BLUE) of $\beta$, which means that its mean (expectation) is $\beta$ and that it has minimum variance among the linear unbiased estimates:
\begin{displaymath}var(\; \hat{\beta} \;)=\sigma^2(^tXX)^-\end{displaymath} (16)
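As an illustration, a minimal numerical sketch of (15) and (16) in Python/numpy, assuming an arbitrary boxcar paradigm, a constant term and a known $\sigma$ (all choices below are purely illustrative):
\begin{verbatim}
import numpy as np

T = 100
box = np.tile(np.r_[np.zeros(10), np.ones(10)], T // 20)  # boxcar paradigm (illustrative)
X = np.column_stack([box, np.ones(T)])                    # T x q design: paradigm + constant

sigma = 1.5                                               # sigma assumed known here
rng = np.random.default_rng(0)
y = X @ np.array([2.0, 5.0]) + rng.normal(0, sigma, T)    # y = X beta + eps, independent errors

XtX_ginv = np.linalg.pinv(X.T @ X)        # (^tXX)^- ; pinv also covers the g-inverse case
beta_hat = XtX_ginv @ X.T @ y             # equation (15)
var_beta_hat = sigma**2 * XtX_ginv        # equation (16)
\end{verbatim}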

If $var(\epsilon)=\sigma^2V$, one can factorise $V=K{\;}^tK$ (if $V$ is positive definite, $K^{-1}$ exists); applying $K^{-1}$ to the GLM gives the model $K^{-1}y=K^{-1}X\beta+K^{-1}\epsilon$, which satisfies the Gauss-Markov conditions. The OLS estimate of $\beta$ for this transformed model is then written:
\begin{displaymath}
\hat{\beta}=(^t(K^{-1}X)K^{-1}X)^-{\;}^t(K^{-1}X)K^{-1}y=(^tXV^{-1}X)^-{\;}^tXV^{-1}y\end{displaymath} (17)

and rewriting the optimisation problem shows that this is the solution of $\min_\beta [^t(y-X\beta)V^{-1}(y-X\beta)]$, called the Generalised Least Squares (GLS) estimate. One has also:
\begin{displaymath}var(\hat{\beta}_{GLS})=\sigma^2(^tXV^{-1}X)^-\end{displaymath} (18)
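A sketch of the whitening route behind equation (17), assuming an AR(1) form for $V$ (purely illustrative) and taking $K$ as the Cholesky factor of $V$, in the same numpy setting as above:
\begin{verbatim}
import numpy as np

T = 100
rho = 0.3
V = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))  # AR(1): V_ts = rho^|t-s|

box = np.tile(np.r_[np.zeros(10), np.ones(10)], T // 20)
X = np.column_stack([box, np.ones(T)])
K = np.linalg.cholesky(V)                          # V = K ^tK, so K^{-1} whitens the errors
rng = np.random.default_rng(0)
y = X @ np.array([2.0, 5.0]) + K @ rng.normal(0, 1.5, T)    # correlated errors, var = sigma^2 V

Kinv = np.linalg.inv(K)
Xw, yw = Kinv @ X, Kinv @ y                        # whitened model K^{-1}y = K^{-1}X beta + K^{-1}eps
beta_whitened = np.linalg.pinv(Xw.T @ Xw) @ Xw.T @ yw

Vinv = np.linalg.inv(V)
beta_gls = np.linalg.pinv(X.T @ Vinv @ X) @ X.T @ Vinv @ y  # right-hand side of (17)
# beta_whitened and beta_gls agree up to numerical precision
\end{verbatim}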

Remark: when $var(\epsilon)=\sigma^2V$, $var(\hat{\beta}_{OLS})=\sigma^2(^tXX)^-{\;}^tXVX(^tXX)^-$; the OLS estimate is then BLUE if and only if $\mathcal{I}m(VX)\subset\mathcal{I}m(X)$ (the space generated by the columns of $VX$ is included in the one generated by the columns of $X$), in which case OLS$\equiv$GLS [12].
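For example (a classical case, sketched here for illustration): if $X$ contains a constant column and the errors are equicorrelated, $V=(1-\rho)I\!d_T+\rho J_T$ where $J_T$ is the $T\times T$ matrix of ones, then each column of $VX$ is a linear combination of the corresponding column of $X$ and the constant column, so $\mathcal{I}m(VX)\subset\mathcal{I}m(X)$ and OLS$\equiv$GLS. A quick numerical check with numpy:
\begin{verbatim}
import numpy as np

T = 100
rho = 0.4
V = (1 - rho) * np.eye(T) + rho * np.ones((T, T))  # equicorrelated errors

box = np.tile(np.r_[np.zeros(10), np.ones(10)], T // 20)
X = np.column_stack([box, np.ones(T)])             # design contains the constant column
rng = np.random.default_rng(1)
y = X @ np.array([2.0, 5.0]) + np.linalg.cholesky(V) @ rng.normal(0, 1.0, T)

P = X @ np.linalg.pinv(X)                          # projector onto Im(X)
print(np.allclose(P @ (V @ X), V @ X))             # True: columns of VX lie in Im(X)

Vinv = np.linalg.inv(V)
beta_ols = np.linalg.pinv(X.T @ X) @ X.T @ y
beta_gls = np.linalg.pinv(X.T @ Vinv @ X) @ X.T @ Vinv @ y
print(np.allclose(beta_ols, beta_gls))             # True: OLS and GLS coincide here
\end{verbatim}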