

Determining Reference Priors

Here we show how we determine the reference prior for a vector of parameters $ \vec{\theta}$ for a model with likelihood $ p(y\vert\vec{\theta})$. This derivation follows section 5.4.5 of (2).

The Fisher information matrix, $ H(\vec{\theta})$, is given by:

$\displaystyle H(\vec{\theta}) = -E_{y\vert\theta} \left\{ \frac{\partial^2}{\partial \theta_i \partial \theta_j} \log p(y\vert\theta) \right\}$     (32)
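As a hypothetical illustration (not one of the models in this paper), consider a single observation $ y \sim N(\mu,\sigma^2)$ with $ \vec{\theta} = (\mu,\sigma)$. Differentiating $ \log p(y\vert\mu,\sigma)$ twice and taking the expectation over $ y$ gives the standard result

$\displaystyle H(\mu,\sigma) = \left[ \begin{array}{cc} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{array} \right]$

which is already diagonal, so it fits the block-diagonal form used below.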

For the models in this paper, the Fisher information matrix, $ H(\vec{\theta})$, is block diagonal:
$\displaystyle H(\vec{\theta}) = \left[ \begin{array}{cccc}
h_{11}(\vec{\theta}) & 0 & \cdots & 0 \\
0 & h_{22}(\vec{\theta}) & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & h_{mm}(\vec{\theta})
\end{array} \right]$     (33)

and each block $ h_{jj}(\vec{\theta})$ can be factorised as the product:
$\displaystyle \{h_{jj}(\vec{\theta})\}^{1/2}=f_j(\theta_j)\,g_j(\vec{\theta_{-j}})$     (34)

where $ f_j(\theta_j)$ is a function depending only on $ \theta_j$ and $ g_j(\vec{\theta_{-j}})$ does not depend on $ \theta_j$. The Berger-Bernardo reference prior is then given by:
$\displaystyle \pi(\vec{\theta})\propto \prod_{j=1}^{m} f_{j}(\theta_j)$     (35)
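Continuing the hypothetical normal example, $ \{h_{11}\}^{1/2} = 1/\sigma$ and $ \{h_{22}\}^{1/2} = \sqrt{2}/\sigma$, so one may take $ f_1(\mu)=1$, $ g_1(\sigma)=1/\sigma$, $ f_2(\sigma)=1/\sigma$ and $ g_2(\mu)=\sqrt{2}$, giving

$\displaystyle \pi(\mu,\sigma) \propto f_1(\mu)\,f_2(\sigma) = \frac{1}{\sigma}$

the familiar reference prior for the normal location-scale problem.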

Note that this approach yields the Jeffreys prior in one-dimensional problems.
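For readers who wish to check such calculations symbolically, the following sketch (it assumes SymPy is available; the normal model and all variable names are illustrative, not taken from this paper) evaluates Equation 32 for the hypothetical normal example and recovers the diagonal Fisher information given above:

import sympy as sp

# Hypothetical illustration, not a model from the paper:
# a single observation y ~ N(mu, sigma^2), with theta = (mu, sigma).
y, mu = sp.symbols('y mu', real=True)
sigma = sp.symbols('sigma', positive=True)

# Log-likelihood of one observation
logp = -sp.log(sigma) - sp.log(2*sp.pi)/2 - (y - mu)**2 / (2*sigma**2)

theta = [mu, sigma]
H = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        d2 = sp.expand(sp.diff(logp, theta[i], theta[j]))
        # Expectation over y | theta: E[y] = mu, E[y^2] = mu^2 + sigma^2
        expected = d2.subs(y**2, mu**2 + sigma**2).subs(y, mu)
        H[i, j] = sp.simplify(-expected)

print(H)  # expected: diagonal matrix with entries 1/sigma**2 and 2/sigma**2

The square roots of the diagonal entries, $ 1/\sigma$ and $ \sqrt{2}/\sigma$, factorise as in Equation 34 and lead to the prior $ \pi(\mu,\sigma)\propto 1/\sigma$ obtained above.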

