Identifying anatomical structures of interest in medical images is of major importance in many areas, especially neuroimaging (e.g. [4,2]). Due to the poor contrast-to-noise ratio in such images, it is advantageous to include prior information about structures as seen in the population, rather than relying purely on the image intensities. However, a similarity measure is needed to combine prior information about structures, and their variation, with the image data.

Existing applications of model-based segmentation rely on explicit or implicit image similarity metrics, prior information and regularisation terms (often expressed as forces in a deformable model), which typically require empirically set parameters to weight these terms against each other (e.g. [4,2]). A fully Bayesian approach naturally combines probabilistic prior shape information with a model of the image formation process in a single probabilistic framework, without the need for additional, ad hoc parameters to control the relative strength of the prior and the data-driven information.
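The absence of ad hoc weighting parameters follows directly from Bayes' rule: the prior and the likelihood enter the posterior as probability terms, so their relative influence is fixed by the models themselves. As a sketch (the notation here is illustrative, not taken from this report), with shape parameters $s$ and image data $Y$:

```latex
% Posterior over shape parameters s, given image data Y:
% the likelihood comes from the image formation model,
% the prior from the population shape model.
p(s \mid Y) \;=\; \frac{p(Y \mid s)\, p(s)}{p(Y)} \;\propto\; p(Y \mid s)\, p(s)
```

No extra scalar weights appear: any trade-off between prior and data is encoded in the variances of the two probability models.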

Previous Bayesian derivations of similarity functions exist for registration applications (see [5,7]). The present formulation differs in that it is based on segmentation via the fitting of shape models, rather than on matching two arbitrary images without any model of the image content. In addition to incorporating anatomical knowledge (via shape priors), this approach models the effects of partial volume, bias fields and changes in field of view. A somewhat similar approach was recently taken in [1], but for voxel-based tissue-type classification without the use of shape models.

The similarity function derived in this report is similar to the Correlation Ratio [6], but uses the residuals from an image formation model, together with terms that normalise for the amount of partial volume and for the field of view of the image acquisition. Its performance is compared to that of several existing similarity functions to demonstrate the advantages of the new formulation.
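For reference, the classic Correlation Ratio [6] that the new function builds on measures the fraction of intensity variance explained by a labelling: $\eta^2 = \mathrm{Var}(E[Y \mid X]) / \mathrm{Var}(Y)$, equivalently one minus the pooled within-class variance over the total variance. A minimal sketch of this standard metric (not the modified function derived in this report; the function name is ours):

```python
import numpy as np

def correlation_ratio(labels, intensities):
    """Classic Correlation Ratio eta^2 = Var(E[Y|X]) / Var(Y):
    the fraction of intensity variance explained by the label map,
    computed as 1 - E[Var(Y|X)] / Var(Y)."""
    labels = np.asarray(labels).ravel()
    y = np.asarray(intensities, dtype=float).ravel()
    total_var = y.var()
    if total_var == 0.0:
        return 0.0  # constant image: labels explain nothing
    # Pooled within-class variance, weighted by class frequency.
    within = sum(y[labels == c].var() * np.mean(labels == c)
                 for c in np.unique(labels))
    return 1.0 - within / total_var
```

A labelling that perfectly predicts intensity gives a value of 1; one carrying no intensity information gives 0. The report's formulation replaces the raw per-class variances with residuals from an image formation model, so that partial volume and bias fields are accounted for rather than penalised.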