
Point estimate of $\sigma_g^2$

We get a point estimate of $\sigma_g^2$ by finding the maximum a posteriori (MAP) estimate from the marginal posterior distribution $p(\sigma_g^2,\vec{\tau_K}\vert Y)$. Marginalising out $\beta_g$ gives the marginal posterior:

$\displaystyle p(\sigma_g^2,\vec{\tau_K}\vert Y) = \vert U^{-1}\vert^{1/2}\vert X_{G}^TU^{-1}X_{G}\vert^{-1/2} \exp\left\{-\frac{1}{2}\left(\vec{\mu_{\beta_K}}^TU^{-1}\vec{\mu_{\beta_K}} - \tilde{\beta}_g^TX_{G}^T U^{-1}X_{G}\tilde{\beta}_g \right)\right\}\frac{1}{\sigma_g^2}$ (43)

where
$\displaystyle \tilde{\beta}_g = (X_{G}^T U^{-1} X_{G})^{-1}X_{G}^TU^{-1}\vec{\mu_{\beta_K}}$ (44)
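
As a concrete illustration, the sketch below evaluates the log of the marginal posterior in Equation (43) for a given $\sigma_g^2$ and $\vec{\tau_K}$. It is a minimal sketch, not the original implementation: the quadratic form in the exponent is taken to be the usual Gaussian completed-square term, and make_U is a hypothetical callable standing in for however $U$ is constructed from $\sigma_g^2$ and $\vec{\tau_K}$ elsewhere in the document.

    import numpy as np

    def log_marginal_posterior(sigma_g2, tau_K, mu_beta, X_G, make_U):
        # Log of the marginal posterior in Equation (43), up to an additive constant.
        # make_U(sigma_g2, tau_K) is a placeholder for however U is built elsewhere.
        U = make_U(sigma_g2, tau_K)
        U_inv = np.linalg.inv(U)
        A = X_G.T @ U_inv @ X_G                                   # X_G^T U^{-1} X_G
        beta_tilde = np.linalg.solve(A, X_G.T @ U_inv @ mu_beta)  # Equation (44)
        quad = mu_beta @ U_inv @ mu_beta - beta_tilde @ A @ beta_tilde
        _, logdet_U_inv = np.linalg.slogdet(U_inv)
        _, logdet_A = np.linalg.slogdet(A)
        return 0.5 * logdet_U_inv - 0.5 * logdet_A - 0.5 * quad - np.log(sigma_g2)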

We then assume $\tau_k=1$ for all $k$ and look to find the MAP for $\sigma_g^2$. However, there is a question of parameterisation: the mode we obtain depends on the parameterisation we use. For example, we could maximise with respect to $\sigma_g^2$, $\sigma_g$, $\log(\sigma_g^2)$, or $\phi_g = 1/\sigma_g^2$, each of which gives a different MAP. Note that as we reparameterise, the reference prior may change but the reference posterior always stays the same; see (2). Hence, a natural approach is to reparameterise so that the parameter we work with has a uniform reference prior.

The parameterisation which gives a uniform reference prior is $\theta=\log(\sigma_g^2)$: the $1/\sigma_g^2$ reference prior on $\sigma_g^2$ transforms as $p(\theta) \propto (1/\sigma_g^2)\,\vert\mathrm{d}\sigma_g^2/\mathrm{d}\theta\vert = (1/\sigma_g^2)\,\sigma_g^2 = 1$, which is constant in $\theta$. Hence, we need to solve:

$\displaystyle \widehat{\theta} = \arg \max_{\theta} p(\sigma_g^2\vert Y,\vec{\tau_K}=\vec{1})$ (45)

where $p(\sigma_g^2\vert Y,\vec{\tau_K}=\vec{1})$ is the marginal in Equation (43) with $\vec{\tau_K}=\vec{1}$ and $\sigma_g^2 = e^{\theta}$. We solve for $\widehat{\theta}$ using Brent's algorithm (3), and then easily convert from $\widehat{\theta}$ to $\widehat{\sigma_g^2}$ via $\widehat{\sigma_g^2} = \exp(\widehat{\theta})$.
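
A minimal sketch of this one-dimensional optimisation is given below, using Brent's method as implemented in scipy.optimize.minimize_scalar together with the log_marginal_posterior sketch above. On this reading, working in $\theta$ adds the log-Jacobian $+\theta$ to the log posterior, which cancels the $-\log\sigma_g^2$ prior term (i.e. the uniform reference prior in $\theta$); the function name map_sigma_g2, the bracketing points and make_U are illustrative assumptions, not part of the original method.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def map_sigma_g2(mu_beta, X_G, make_U, bracket=(-10.0, 10.0)):
        # MAP of theta = log(sigma_g^2) with every tau_k fixed at 1, via Brent's method.
        # `bracket` holds starting points for scipy's bracket search; problem-specific in practice.
        tau_K = np.ones(len(mu_beta))

        def neg_log_post(theta):
            sigma_g2 = np.exp(theta)
            # +theta is the log-Jacobian of sigma_g^2 = e^theta; it cancels the
            # -log(sigma_g2) term in Equation (43), so the prior is uniform in theta.
            return -(log_marginal_posterior(sigma_g2, tau_K, mu_beta, X_G, make_U) + theta)

        res = minimize_scalar(neg_log_post, bracket=bracket, method='brent')
        theta_hat = res.x
        return np.exp(theta_hat)  # convert theta-hat back to sigma_g^2-hat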

