MRF-MAP Estimation

We seek a labelling $\mathbf{\hat x}$ of an image, an estimate of the true labelling $\mathbf x^*$, according to the MAP criterion,

 \begin{displaymath}
{\mathbf{\hat x}} = \arg\max_{\mathbf x \in \mathcal X}\{P(\mathbf y\vert\mathbf x)P(\mathbf x)\}.
\end{displaymath} (14)

From (14), we need to compute the prior probability of the class labelling and the likelihood of the observation. Since $\mathbf x$ is modelled as a realization of an MRF, its prior probability can be derived from

 \begin{displaymath}
P({\mathbf x})=\frac{1}{Z}\exp(-U(\mathbf x)).
\end{displaymath} (15)
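The form of the energy $U(\mathbf x)$ depends on the chosen clique potentials. As a hedged illustration only (the paper's own potentials are defined in the preceding section), assuming a homogeneous 4-neighbourhood Potts model in which each pair of unlike neighbouring labels contributes an interaction parameter $\beta$, the prior energy of a labelling could be evaluated as:

```python
import numpy as np

def prior_energy(x, beta=1.0):
    """Prior energy U(x) under an assumed 4-neighbourhood Potts model:
    every pair of neighbouring pixels with unlike labels adds beta."""
    x = np.asarray(x)
    # count label disagreements between horizontal and vertical neighbours
    horiz = np.sum(x[:, 1:] != x[:, :-1])
    vert = np.sum(x[1:, :] != x[:-1, :])
    return beta * (horiz + vert)
```

A uniform labelling then has zero prior energy, while a checkerboard maximizes it, so smooth labellings receive higher prior probability $P(\mathbf x)=\frac{1}{Z}\exp(-U(\mathbf x))$.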

It is also assumed that, given the class label $x_i=\ell$, the pixel intensity $y_i$ follows a Gaussian distribution with parameters $\theta_\ell=\{\mu_\ell, \sigma_\ell\}$,

\begin{displaymath}
p(y_i\vert x_i)=g(y_i;\theta_\ell)=\frac{1}{\sqrt{2\pi\sigma_\ell^2}}
\exp\Big(-\frac{(y_i-\mu_\ell)^2}{2\sigma_\ell^2}\Big).
\end{displaymath} (16)
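As a minimal numerical sketch of (16) (the function name is illustrative), the class-conditional density for a pixel intensity given class parameters $\mu_\ell, \sigma_\ell$ is:

```python
import numpy as np

def gaussian_likelihood(y, mu, sigma):
    """Class-conditional density g(y_i; theta_ell) of (16):
    a Gaussian in the pixel intensity y with class mean mu and
    class standard deviation sigma."""
    return np.exp(-(y - mu) ** 2 / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma ** 2)
```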

Based on the conditional independence assumption of y (8), we have the joint likelihood probability

\begin{eqnarray*}
P(\mathbf y\vert\mathbf x)&=&\prod_{i\in \mathcal S}p(y_i\vert x_i)\\
&=&\frac{1}{(2\pi)^{\frac{N}{2}}}\exp\bigg[\sum_{i\in \mathcal S}\Big(-\frac{(y_i-\mu_{x_i})^2}{2\sigma_{x_i}^2}
-\log(\sigma_{x_i})\Big)\bigg]
\end{eqnarray*}


which can be written as

 \begin{displaymath}
P(\mathbf y\vert\mathbf x)=\frac{1}{Z'} \exp(-U(\mathbf y\vert\mathbf x)),
\end{displaymath} (17)

with the likelihood energy

 \begin{displaymath}
U(\mathbf y\vert\mathbf x)=\sum_{i\in \mathcal S}U(y_i\vert x_i)
=\sum_{i\in \mathcal S}\Big[\frac{(y_i-\mu_{x_i})^2}{2\sigma_{x_i}^2}
+\log(\sigma_{x_i})\Big],
\end{displaymath} (18)

and the constant normalization term $Z'=(2\pi)^{\frac{N}{2}}$. It is easy to show that

 \begin{displaymath}
\log P(\mathbf x\vert\mathbf y) \propto -U(\mathbf x\vert\mathbf y),
\end{displaymath} (19)

where

 
\begin{displaymath}
U(\mathbf x\vert\mathbf y) = U(\mathbf y\vert\mathbf x)+U(\mathbf x)+\mathrm{const}
\end{displaymath} (20)

is the posterior energy. The MAP estimation is equivalent to minimizing the posterior energy function

 \begin{displaymath}
{\mathbf{\hat x}} = \arg\min_{\mathbf x \in \mathcal X}
\{U(\mathbf y\vert\mathbf x)+U(\mathbf x)\}.
\end{displaymath} (21)
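The objective of (21) can be sketched numerically. In the sketch below the likelihood energy follows (18) directly, while the prior term uses a 4-neighbourhood Potts interaction as an illustrative stand-in for $U(\mathbf x)$; the function names and the prior choice are assumptions, not the paper's implementation:

```python
import numpy as np

def likelihood_energy(y, x, mu, sigma):
    """U(y|x) of (18): sum over pixels of
    (y_i - mu_{x_i})^2 / (2 sigma_{x_i}^2) + log(sigma_{x_i})."""
    m, s = mu[x], sigma[x]  # per-pixel class parameters via the label map
    return np.sum((y - m) ** 2 / (2.0 * s ** 2) + np.log(s))

def posterior_energy(y, x, mu, sigma, beta=1.0):
    """U(x|y) = U(y|x) + U(x) up to a constant, with an assumed
    4-neighbourhood Potts prior standing in for U(x)."""
    potts = np.sum(x[:, 1:] != x[:, :-1]) + np.sum(x[1:, :] != x[:-1, :])
    return likelihood_energy(y, x, mu, sigma) + beta * potts
```

The MAP labelling of (21) is the minimizer of `posterior_energy` over all label maps `x`, which motivates the iterative scheme discussed next.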

Although mathematically simple, this MAP estimation is computationally infeasible to solve by exhaustive search over the label space $\mathcal X$. Therefore, solutions are usually computed using iterative optimization (minimization) techniques. In this paper, we adopt the iterated conditional modes (ICM) algorithm proposed by Besag [2], which uses a ``greedy'' strategy of iterative local minimization; convergence to a local minimum of the posterior energy is guaranteed, typically after only a few iterations. Given the data $\mathbf y$ and the other labels $x_{{\mathcal S} -\{i\}}^{(k)}$, the algorithm sequentially updates each $x_i^{(k)}$ into $x_i^{(k+1)}$ by minimizing $U(x_i\vert{\mathbf y}, x_{{{\mathcal S} - \{i\}}})$, the conditional posterior energy, with respect to $x_i$.
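The ICM sweep can be sketched as follows. This is a hedged illustration under the same assumed Potts prior with parameter $\beta$ (not the paper's exact implementation); each pixel is greedily re-labelled to the class minimizing its conditional posterior energy given the data and the current labels of its 4-neighbourhood:

```python
import numpy as np

def icm(y, x0, mu, sigma, beta=1.0, n_iter=10):
    """Iterated conditional modes (Besag): sequentially re-label each
    pixel to the class minimizing its conditional posterior energy,
    assuming Gaussian likelihoods (16) and a 4-neighbourhood Potts prior."""
    x = x0.copy()
    n_labels = len(mu)
    rows, cols = x.shape
    for _ in range(n_iter):
        changed = False
        for i in range(rows):
            for j in range(cols):
                # current labels of the 4-neighbourhood of pixel (i, j)
                nbrs = []
                if i > 0: nbrs.append(x[i - 1, j])
                if i < rows - 1: nbrs.append(x[i + 1, j])
                if j > 0: nbrs.append(x[i, j - 1])
                if j < cols - 1: nbrs.append(x[i, j + 1])
                nbrs = np.array(nbrs)
                # conditional posterior energy of each candidate label:
                # likelihood term of (18) plus the local Potts interaction
                energies = [(y[i, j] - mu[l]) ** 2 / (2.0 * sigma[l] ** 2)
                            + np.log(sigma[l]) + beta * np.sum(nbrs != l)
                            for l in range(n_labels)]
                best = int(np.argmin(energies))
                if best != x[i, j]:
                    x[i, j] = best
                    changed = True
        if not changed:  # no pixel changed: a local minimum is reached
            break
    return x
```

Because each update can only decrease the posterior energy and the label space is finite, the sweep terminates at a local minimum, consistent with the greedy behaviour described above.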
Yongyue Zhang
2000-05-11