
Probabilistic Forms

The likelihood is:

$\displaystyle p(Y \mid T,S,\theta) = p\left( \epsilon = Y - G(T(S)) \right) = \left( \frac{\beta}{2 \pi} \right)^{N/2} \exp\left( -\frac{\beta \,\Vert Y - G(T(S))\Vert^2}{2} \right)$
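As a sanity check of this form, the log-likelihood can be sketched in a few lines of Python. The names here are illustrative, not from the text: `prediction` stands in for the precomputed $G(T(S))$, and `beta` is the noise precision $1/\sigma^2$.

```python
import math

# Sketch of the Gaussian log-likelihood
#   log p(Y | T, S, theta) = (N/2) log(beta / (2*pi)) - (beta/2) ||Y - G(T(S))||^2
# `prediction` stands in for G(T(S)) (hypothetical variable name).
def log_likelihood(Y, prediction, beta):
    residuals = [y - g for y, g in zip(Y, prediction)]   # epsilon = Y - G(T(S))
    N = len(residuals)
    sq_norm = sum(r * r for r in residuals)
    return 0.5 * N * math.log(beta / (2.0 * math.pi)) - 0.5 * beta * sq_norm
```

At a fixed precision, a prediction with smaller residuals scores a higher log-likelihood, which is what the similarity-function derivation that follows exploits.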

Priors:

$\displaystyle p(\beta) = \frac{1}{\beta}$

For $ \alpha $, two alternative priors are useful.

  1. Flat prior:

    $\displaystyle p(\alpha_j) = \begin{cases} C_1 & \mathrm{for~} 0 \le \alpha_j \le C_1^{-1} \\ 0 & \mathrm{otherwise} \end{cases}$

    where the range of $ \alpha_j$ is restricted to $ [0,C_1^{-1}]$, so that the prior is proper (it integrates to one).
  2. Or a prior encoding prior knowledge:

    $\displaystyle p(\alpha \mid \lambda, Q) = \vert\det(Q)\vert^{1/2} \left(\frac{\lambda}{\pi}\right)^{D/2} \exp\left( - \lambda\, \alpha^{\mathsf{T}} Q \alpha \right)$

    $\displaystyle p(\lambda) = C_0$

    where $ Q$ encodes the prior knowledge about the expected intensities in the image formation; $ \lambda$ expresses the unknown scaling between the learnt prior distribution of the $ \alpha$ parameters and the intensities in a new image; and $ C_0$ is a constant representing the (improper) flat prior on $ \lambda$.
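The knowledge-encoding prior is a zero-mean Gaussian in $\alpha$, which a short Python sketch makes concrete. The values of `Q`, `lam`, and `alpha` below are illustrative assumptions for a $D=2$ case, not taken from the text.

```python
import math

# Sketch of the log-density of p(alpha | lambda, Q):
#   log p = (1/2) log|det Q| + (D/2) log(lambda/pi) - lambda * alpha^T Q alpha
# Restricted to D=2 so the determinant can be written inline.
def log_prior(alpha, lam, Q):
    D = len(alpha)
    det_Q = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]   # 2x2 determinant only
    quad = sum(alpha[i] * Q[i][j] * alpha[j]
               for i in range(D) for j in range(D))  # alpha^T Q alpha
    return 0.5 * math.log(abs(det_Q)) + 0.5 * D * math.log(lam / math.pi) - lam * quad

# Illustrative positive-definite Q: values of alpha near zero get
# higher prior density than values far from zero.
Q = [[2.0, 0.5], [0.5, 1.0]]
```

Because the density decays with the quadratic form $\alpha^{\mathsf{T}} Q \alpha$, this prior pulls the intensity parameters toward the learnt expected values, with $\lambda$ controlling how strongly.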

Parameters:

$\displaystyle \theta = \{ \beta, \alpha, \lambda \}$

where $ \beta = \frac{1}{\sigma^2}$ is the precision parameter.

