
Conjunction analysis

Faced with the dilemma of a population analysis that is invalid (the fixed-effects approach) and one that is valid (the random-effects approach) but usually difficult to apply because of overestimation of the variances with small samples, some alternatives have been investigated. One is ``conjunction analysis'' (Friston et al. [5]), which uses all the subjects' maps to localise where all the subjects activated at a chosen single-subject level, $i.e.$ it is a thresholded map of the $\min$ map over the subjects (see the single-subject analysis). Using simple probability theory, one can then relate the ``level of activation'' (a p-value calculated, for example, from Gaussian Random Field Theory; see Worsley (1999, submitted)) to the proportion of the population showing this activation at the previously defined single-subject level. Let $t$ and $a$ denote, respectively, the tested status of activation in the experiment and the true status of activation, and let $+$ and $-$ denote activated and not activated; simple probability calculus then gives:
\begin{eqnarray*}
\alpha_c = P(\mathrm{all}\; t_+) & = & [P(t_+)]^n \\
 & = & [P(t_+/a_-)P(a_-)+P(t_+/a_+)P(a_+)]^n \\
 & = & [\alpha(1-\gamma)+\beta\gamma]^n \qquad\qquad (10)
\end{eqnarray*}

where $\alpha$ is the chosen single-subject level of activation (the p-value level used to threshold each map) and $\beta$ is the power or sensitivity of the experiment ($1-\alpha$ being the specificity). The power $\beta$ is not known, but it can be set to $1$ to obtain a lower bound on the proportion $\gamma$ of the population showing the effect: inverting equation (10) gives $\gamma=(\alpha_c^{1/n}-\alpha)/(\beta-\alpha)$, which only increases as $\beta$ decreases below $1$. Setting $\beta=1$ gives

\begin{displaymath}\gamma \geq
\gamma_1=\frac{\alpha_c^{1/n}-\alpha}{1-\alpha}\end{displaymath}

Thus the conclusion about the population is qualitative: $e.g.$ with a certainty of 0.95 ($1-\alpha_c$), we can say that at least 80% ($\gamma_1$) of the population would activate at level 0.001 ($\alpha$). The result above can also be used to decide on a sample size [6]: $e.g.$ for the above conclusion one would need at least $14$ subjects.
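As a numerical check on these figures, here is a minimal Python sketch (not part of the original analysis; the function names are illustrative) that inverts equation (10) with $\beta=1$ to obtain $\gamma_1$ and the smallest sample size guaranteeing a target proportion:

\begin{verbatim}
import math

def gamma_lower_bound(alpha_c, alpha, n):
    """Lower bound gamma_1 on the population proportion showing the
    effect, from equation (10) with beta set to 1."""
    return (alpha_c**(1.0 / n) - alpha) / (1.0 - alpha)

def min_subjects(alpha_c, alpha, gamma_target):
    """Smallest n such that a conjunction over all n subjects,
    significant at level alpha_c, implies gamma_1 >= gamma_target."""
    # alpha_c^(1/n) >= gamma_target*(1 - alpha) + alpha
    # <=>  n >= log(alpha_c) / log(gamma_target*(1 - alpha) + alpha)
    p = gamma_target * (1.0 - alpha) + alpha
    return math.ceil(math.log(alpha_c) / math.log(p))

print(gamma_lower_bound(0.05, 0.001, 14))  # about 0.807, i.e. >= 80%
print(min_subjects(0.05, 0.001, 0.8))      # 14 subjects
\end{verbatim}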
Remarks: $\alpha_c$ could be calculated using a permutation testing procedure instead of random field theory. The conjunction could also be defined as a given proportion of subjects rather than all of them; this could cope better with problems such as poor localisation due to registration errors, since conjunction analysis is certainly very sensitive to subject outliers in terms of the locations of the activations. If one decides that $m\leq n$ subjects must activate to define a conjunction, then for a given $m$:

\begin{eqnarray*}
\alpha_{cm} & = & P(m\; t_+) \\
 & = & \left( \begin{array}{c} n \\ m \end{array} \right) P(t_+)^m (1-P(t_+))^{n-m} \qquad\qquad (11)\\
 & = & \alpha_c \left( \begin{array}{c} n \\ m \end{array} \right) \left( \frac{1-P(t_+)}{P(t_+)} \right)^{n-m}
\end{eqnarray*}

with $P(t_+)=\alpha(1-\gamma)+\beta\gamma$ as given above. Note that the amount of conjunction $m/n$ must be at least the expected $\gamma$; otherwise the multiplicative factor introduced in the above equation is not monotonic in $m$. This problem is also linked to the values of $\gamma$ and the sample size $n$: roughly speaking, when $\gamma$ is close to $1$ one would need $m$ very close to $n$, and therefore a large $n$, to gain anything from performing a conjunction of only $m$ subjects.
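The following sketch (again illustrative only, not from the original text) implements equation (11) and exhibits this non-monotonicity when $m/n$ falls below $\gamma$:

\begin{verbatim}
from math import comb

def p_activated(alpha, beta, gamma):
    """P(t+): false positives on non-activated voxels plus true
    positives on activated voxels, as in equation (10)."""
    return alpha * (1.0 - gamma) + beta * gamma

def alpha_cm(n, m, alpha, beta, gamma):
    """Probability that exactly m of the n subjects pass the
    single-subject threshold, equation (11)."""
    p = p_activated(alpha, beta, gamma)
    return comb(n, m) * p**m * (1.0 - p)**(n - m)

# With n = 12, gamma = 0.9 and beta = 1, n*gamma is about 10.8:
# alpha_cm rises with m up to m = 11 and only then falls, so it is
# not monotonic in m once m/n drops below gamma.
for m in range(8, 13):
    print(m, alpha_cm(12, m, 0.001, 1.0, 0.9))
\end{verbatim}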