Instead of presetting the autocorrelation to what is considered to be a reasonable value,
we instead apply some constraints. The first assumption is that the
autocorrelation is non-negative, $\rho(\tau) \ge 0$ for all lags $\tau$.
The second assumption is that the autocorrelation is monotonically decreasing.
This means that low-frequency components
are favoured, which are widely accepted in the literature as being
the most important to account for.
The autocorrelation is estimated
using a standard unbiased estimator (equation 11), and then
the best least-squares fit that satisfies the constraint of monotonicity
can be obtained using techniques from the literature on isotonic
regression. The particular algorithm that we use is the Pool Adjacent
Violators Algorithm (PAVA) (Robertson et al., 1988), which provides a
unique least-squares fit under the constraint. Before using the
algorithm, we set the estimate $\hat{\rho}(\tau) = 0$ for lags above a
cutoff $T$, and retain the raw estimate for $\tau \le T$.
This is done partly because it reduces the amount of data the
algorithm iterates over, and also because the raw autocorrelation
estimate is very noisy at high lags (there is less data
available to compute autocorrelations there), and we do not
expect significant autocorrelation at such high lags. Furthermore, the value
of zero will propagate, eventually stopping at the first lag at which
the monotonic fit reaches zero.
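Equation 11 is not reproduced here; the code below assumes the standard form of the unbiased estimator, in which the sum of lagged products at lag $\tau$ is divided by the $N - \tau$ terms actually available, and the result is normalised by the lag-0 value:

```python
import numpy as np

def autocorr_unbiased(x, max_lag):
    """Unbiased sample autocorrelation of x at lags 0..max_lag.

    At lag tau only N - tau products are available, so the sum is
    divided by N - tau rather than N (hence 'unbiased'), and the
    result is normalised by the lag-0 value.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[: n - tau], x[tau:]) / (n - tau)
                  for tau in range(max_lag + 1)])
    return r / r[0]
```

Dividing by $N - \tau$ removes the downward bias of the naive $1/N$ estimator, at the cost of higher variance at large lags, which is exactly why the high-lag estimates are zeroed before fitting.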
For the purposes of the algorithm it is also necessary to define a
weighting function $w(\tau)$ for each lag $\tau \le T$. The
algorithm then proceeds as follows:
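A weighted PAVA pass for a non-increasing fit can be sketched as follows. This is a minimal illustration rather than the paper's exact listing, and the variable names are ours: `y` holds the raw autocorrelation estimates and `w` the weights $w(\tau)$.

```python
import numpy as np

def pava_decreasing(y, w):
    """Weighted least-squares fit of a non-increasing sequence to y.

    Pool Adjacent Violators: whenever two adjacent blocks violate the
    non-increasing constraint, replace them with their weighted mean,
    and repeat until no violators remain.
    """
    # Each block holds [weighted mean, total weight, number of points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # Merge while the newest block exceeds the one before it
        # (a violation of the non-increasing constraint).
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    # Expand the blocks back into a full-length fitted sequence.
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return np.array(fit)
```

Because each merge replaces a violating pair with its weighted average, the trailing zeros inserted above lag $T$ pull down any small positive estimates just below the cutoff, which is the propagation behaviour described above.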
The algorithm was tested on artificial data consisting of white noise
of length $N$ that had been low-pass filtered with a
Gaussian of varying standard deviation. This highlighted
a slight bias for white-noise data, which was easily remedied by setting
the fitted autocorrelation to zero wherever it fell below a small
threshold.
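The test setup described above can be reproduced in a few lines. The data length and filter widths here are illustrative choices, not the paper's values; heavier Gaussian smoothing removes more high-frequency content, so the lag-1 autocorrelation of the filtered noise should rise with the standard deviation:

```python
import numpy as np

def gaussian_lowpass(x, sigma):
    """Convolve x with a truncated, normalised Gaussian kernel."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode="valid")

def lag1(x):
    """Sample lag-1 autocorrelation, normalised by the lag-0 energy."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
noise = rng.standard_normal(8192)

# Unfiltered white noise has essentially no autocorrelation, while
# smoothing with progressively wider Gaussians increases it.
rhos = [lag1(gaussian_lowpass(noise, s)) for s in (1.0, 2.0, 4.0)]
```

For white noise the true autocorrelation is zero at every positive lag, yet the monotone fit of noisy estimates tends to come out slightly positive, which is the bias the thresholding step corrects.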