Optimisation

The theoretical registration problem, as posed above, is fully specified by a transformation space, an interpolation method and a cost function. In practice, however, an optimisation method is also required to find the transformation that minimises the cost function. This is therefore an implementation issue rather than a purely theoretical one. Furthermore, the method must be a global optimisation method rather than one of the more common local methods (for example, Powell's Method or Gradient Descent).

Although general global optimisation methods do exist (Simulated Annealing being a notable example), the No Free Lunch theorem [Wolpert and Macready, 1996] shows that no single method is superior for all problems when averaged across them. Therefore the method used should ideally be tuned to the particular problem at hand -- in this case, volumetric registration.

The performance of the optimisation method is important since evaluating the cost function requires a large amount of computation. This, together with the combinatorial explosion that occurs in higher dimensional spaces (for example, ${\mathcal R}^{12}$), makes any simple search algorithm impractical. In addition, statistical optimisation methods such as Simulated Annealing (even in the fast version [Ingber, 1989]) are difficult to run to convergence within reasonable amounts of time. Consequently, a compromise is made by shortening the annealing schedule in order to achieve a solution within the time available, at the expense of any guarantee of finding the global minimum.

To overcome these problems, one common tactic is to adopt a multi-resolution approach in conjunction with a local optimisation method [Woods et al., 1993,Studholme et al., 1996]. The multi-resolution approach involves computing the cost function for various sub-samplings of the volumes. For instance, let $I^r_n$ denote the reference volume sampled with cubic voxels of side length n mm, and let $I^f_n$ denote the floating volume sampled likewise. This creates a set of scales at which the problem can be solved. That is,

\begin{displaymath}T^*_n = \arg \min_{T \in S_T} C(I^r_n,I^f_n \circ T).
\end{displaymath} (3)
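The sub-sampling and the per-scale cost evaluation can be sketched as follows. This is only an illustration: the striding stand-in for resampling to n mm voxels, and the mean-squared-difference cost, are assumptions here, since the text leaves both the resampling scheme and the particular cost function C open.

```python
import numpy as np

def subsample(volume, n):
    """I_n: keep every n-th voxel along each axis (a crude stand-in for
    resampling to cubic voxels of side length n mm)."""
    return volume[::n, ::n, ::n]

def cost(ref_n, flo_n):
    """Illustrative cost C: mean squared intensity difference between the
    two sub-sampled volumes (the text does not fix a particular C)."""
    return float(np.mean((ref_n - flo_n) ** 2))
```

At scale n the minimisation in equation (3) would then be carried out over costs computed from these sub-sampled volumes.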

Then, an initial solution is found for large n, and progressively refined by lowering n since

\begin{displaymath}T^* = \lim_{n \rightarrow 0} T^*_n.
\end{displaymath} (4)

Furthermore, denote the local optimisation as a function that takes an initial transformation and returns a new transformation, which is an estimate of a nearby local minimum. That is, $\ensuremath{{\mathrm{Op}}} _n : S_T \rightarrow S_T$ so that

\begin{displaymath}T_{\mathrm{new}} = \ensuremath{{\mathrm{Op}}} _n(T_{\mathrm{old}}).
\end{displaymath} (5)
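One way to realise such an $\mathrm{Op}_n$ is sketched below as a simple coordinate-descent search over the transformation parameters. This is an assumed stand-in, not the method used in the text, which does not commit to a particular local optimiser; the `cost` argument here takes the sub-sampled volumes and a parameter vector.

```python
import numpy as np

def make_local_op(ref_n, flo_n, cost, step=0.5, iters=100):
    """Build Op_n : S_T -> S_T as a coordinate-descent local search over
    the transformation parameters (illustrative stand-in only)."""
    def op(T_old):
        T = np.asarray(T_old, dtype=float).copy()
        best = cost(ref_n, flo_n, T)
        for _ in range(iters):
            improved = False
            for i in range(T.size):
                for delta in (step, -step):
                    trial = T.copy()
                    trial[i] += delta
                    c = cost(ref_n, flo_n, trial)
                    if c < best:  # keep the move only if the cost drops
                        best, T, improved = c, trial, True
            if not improved:  # no single-parameter move helps: local minimum
                break
        return T
    return op
```

Like any local method, this returns only a nearby local minimum, which is precisely why the multi-resolution scheme below is needed to supply it with a good starting estimate.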

Note that the optimisation function also depends on the sub-sampling, n, via the cost function, since the cost is based on the sub-sampled images and so changes with n. Therefore, the multi-resolution approach begins with an initial transformation estimate, $T_0$, usually taken to be the identity transformation, and proceeds iteratively as:
\begin{displaymath}T_1 = \ensuremath{{\mathrm{Op}}} _{n_0}(T_0), \quad T_2 = \ensuremath{{\mathrm{Op}}} _{n_1}(T_1), \quad \ldots, \quad T_P = \ensuremath{{\mathrm{Op}}} _{n_{P-1}}(T_{P-1})
\end{displaymath} (6)

where a typical set of values would be P = 4 with $n_0 = 8, \, n_1 = 4, \, n_2 = 2, \, n_3 = 1$ in mm.
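The iteration in equation (6), with the typical schedule above, can be sketched as a short loop. The striding sub-sampling, the zero parameter vector as the identity transformation, and the `local_op` signature are all assumptions made for the sake of a self-contained illustration.

```python
import numpy as np

def multires_register(ref, flo, local_op, schedule=(8, 4, 2, 1)):
    """Multi-resolution scheme: T_{k+1} = Op_{n_k}(T_k) over the schedule.

    `local_op(ref_n, flo_n, T)` is an assumed local optimiser; striding is
    a crude stand-in for resampling to cubic voxels of side n mm."""
    T = np.zeros(12)  # T_0: assumed parameter encoding of the identity
    for n in schedule:
        ref_n = ref[::n, ::n, ::n]
        flo_n = flo[::n, ::n, ::n]
        T = local_op(ref_n, flo_n, T)  # refine the estimate at scale n
    return T
```

Each pass hands its result to the next, finer scale, so the costly full-resolution optimisation only ever starts from an already good estimate.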

The advantage of this multi-resolution approach is that the initial optimisation, at large n, has a dramatically reduced computational load, since the number of sample points is substantially smaller. In addition, at large sub-samplings the gross features of the image should dominate, so the overall alignment should be easier to find. This belief is equivalent to saying that there are fewer local minima for the optimisation method to get caught in although, as shown in the next section, this is not necessarily true.

Mark Jenkinson
2000-05-10