The set of equations 3-5 can alternatively be expressed as simple matrix products, e.g. the set of equations 5 can be expressed as
\[
\mathbf{X}_{IK\times J} = (\mathbf{A}\odot\mathbf{C})\,\mathbf{B}^{\top} + \mathbf{E},
\]
where $\odot$ denotes the column-wise Kronecker (Khatri-Rao) product, $\mathbf{A}$ ($I\times R$) contains the temporal factors, $\mathbf{C}$ ($K\times R$) the subject factors and $\mathbf{B}$ ($J\times R$) the spatial factors.
From equation 8, the matrix of spatial factors, $\mathbf{B}$, has least-squares estimates of
\[
\widehat{\mathbf{B}}^{\top} = \big((\mathbf{A}\odot\mathbf{C})^{\top}(\mathbf{A}\odot\mathbf{C})\big)^{-1}(\mathbf{A}\odot\mathbf{C})^{\top}\mathbf{X}_{IK\times J}.
\]
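As a hedged numerical sketch of this least-squares step (the sizes, variable names and random test data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 20, 50, 4, 3          # time points, voxels, subjects, factors (toy sizes)

A = rng.standard_normal((I, R))    # temporal factors
C = rng.standard_normal((K, R))    # subject factors
B = rng.standard_normal((J, R))    # spatial factors (ground truth)

# Khatri-Rao (column-wise Kronecker) mixing matrix, shape (I*K, R)
M = np.column_stack([np.kron(C[:, r], A[:, r]) for r in range(R)])

# noise-free three-way data, matricised to (I*K, J)
X = M @ B.T

# least-squares estimate of the spatial factors given the mixing matrix
B_hat = np.linalg.lstsq(M, X, rcond=None)[0].T
print(np.allclose(B_hat, B))       # True in this noise-free toy case
```

With noise added to `X` the same call returns the least-squares (rather than exact) solution.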
The PARAFAC model and the ALS algorithm for
estimation treat all
three domains equally and do not utilise any
domain-specific information. Section 4
demonstrates how this can lead to PARAFAC results which are
difficult to interpret, mainly due to significant cross-talk between
estimated spatial maps.
In order to address this, we formulate a tensor-PICA model which incorporates the assumption of maximally non-Gaussian distributions of the estimated spatial maps $\mathbf{B}$: equation 8 is identical to a standard (2-D) factor analysis or noisy ICA model [Beckmann and Smith, 2004], where the matrix $\mathbf{A}\odot\mathbf{C}$ denotes the 'mixing' matrix and $\mathbf{B}^{\top}$ contains the set of spatial maps as row vectors. Unlike the single-subject (2-D) PICA model, however, the mixing matrix now has a special block structure which can be used to identify the factor matrices $\mathbf{A}$ and $\mathbf{C}$.
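The block structure of such a Khatri-Rao mixing matrix can be illustrated with a small numerical sketch (the dimensions and variable names are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
I, K, R = 6, 4, 3                  # toy sizes: time points, subjects, factors
A = rng.standard_normal((I, R))    # temporal factors
C = rng.standard_normal((K, R))    # subject factors

# r-th column of the mixing matrix: K scaled repetitions of A[:, r]
M = np.column_stack([np.kron(C[:, r], A[:, r]) for r in range(R)])

# reshaping one column into an I-by-K matrix gives a rank-1 outer product
m0 = M[:, 0].reshape(K, I).T       # equals np.outer(A[:, 0], C[:, 0])
print(np.linalg.matrix_rank(m0))   # 1
```

Each column of `M` is thus fully determined by one column of `A` and one column of `C`, which is the structure exploited below.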
Given the first matrix factor in equation 8, it is easy to recover the two underlying matrices $\mathbf{A}$ and $\mathbf{C}$: each of the $R$ columns in $\mathbf{A}\odot\mathbf{C}$ is formed by $K$ scaled repetitions of a single column from $\mathbf{A}$, i.e. when reshaped into an $I\times K$ matrix it is of rank 1. Thus, we can transform each column into an $I\times K$ matrix, calculate its (single) non-zero left Eigenvector of length $I$ together with a set of $K$ factor loadings (projections of the matrix onto the left Eigenvector) using a Singular Value Decomposition (SVD), and use these to re-constitute a column of the underlying factor matrices $\mathbf{A}$ and $\mathbf{C}$.
This needs to be repeated for each of the $R$ columns separately; the matrices $\mathbf{A}$ and $\mathbf{C}$ are then proportional to the $R$ different left Eigenvectors and factor loadings respectively, i.e. the values obtained by projecting each reshaped matrix of time courses onto the left Eigenvector of its SVD.
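A minimal sketch of this SVD-based recovery for a single column (toy data; variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
I, K = 30, 5
a = rng.standard_normal(I)     # one column of A (time course)
c = rng.standard_normal(K)     # one column of C (subject loadings)

m = np.kron(c, a)              # corresponding column of the mixing matrix

# reshape into an I-by-K matrix and take its SVD
Mr = m.reshape(K, I).T         # equals np.outer(a, c)
U, s, Vt = np.linalg.svd(Mr, full_matrices=False)

a_hat = U[:, 0] * s[0]         # left Eigenvector, scaled by the singular value
c_hat = Vt[0]                  # factor loadings (right Eigenvector)

# the column is recovered up to an arbitrary sign/scale split between a_hat and c_hat
print(np.allclose(np.outer(a_hat, c_hat), Mr))  # True
```

Note the inherent scale/sign ambiguity: only the outer product $\mathbf{a}\mathbf{c}^{\top}$ is identified, which is why the text says the recovered matrices are *proportional* to the Eigenvectors and loadings.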
This gives the following algorithm for a rank-1 tensor-PICA decomposition of three-way data $\mathbf{X}$:
Note that, like PARAFAC, the rank-1 tensor-PICA decomposition estimates factor matrices for the generative model of equation 1. The estimated matrices, however, provide a different structural representation of the three-way data $\mathbf{X}$. Note also that the singular value decomposition of each reshaped $I\times K$ matrix not only provides the left and right Eigenvectors which form the relevant columns in $\mathbf{A}$ and $\mathbf{C}$, but also gives a set of Eigenvalues. The ratio of the largest Eigenvalue to the sum of all Eigenvalues can be used to assess the quality of the rank-1 approximation: if the matrix is not well approximated by the outer product of the left and right Eigenvectors, the corresponding ratio will be low, i.e. the rank-1 approximation only represents a small amount of the variability in the matrix.
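A small sketch of this quality measure, using the singular values returned by the SVD (the helper name and toy data are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
I, K = 30, 5

# an exactly rank-1 matrix versus a heavily noise-corrupted one
clean = np.outer(rng.standard_normal(I), rng.standard_normal(K))
noisy = clean + 2.0 * rng.standard_normal((I, K))

def rank1_quality(M):
    """Largest singular value divided by the sum of all singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] / s.sum()

print(rank1_quality(clean))   # close to 1 for an exactly rank-1 matrix
print(rank1_quality(noisy))   # noticeably lower once noise dominates
```

Low ratios flag columns of the mixing matrix that do not factor cleanly into a single time course and set of loadings.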