Abstract
In model-based clustering and classification, the cluster-weighted model is a convenient approach when the random vector of interest comprises a response variable \(Y\) and a vector \({\varvec{X}}\) of \(p\) covariates. However, its applicability may be limited when \(p\) is large. To overcome this problem, this paper assumes a latent factor structure for \({\varvec{X}}\) in each mixture component, under Gaussian assumptions. This leads to the cluster-weighted factor analyzers (CWFA) model. By imposing constraints on the variance of \(Y\) and the covariance matrix of \({\varvec{X}}\), a novel family of sixteen CWFA models is introduced for model-based clustering and classification. The alternating expectation-conditional maximization (AECM) algorithm, for maximum likelihood estimation of the parameters of all models in the family, is described; to initialize the algorithm, a 5-step hierarchical procedure is proposed, which exploits the nested structure of the models within the family and thus guarantees the natural ranking among the sixteen likelihoods. Analyses of artificial and real data show that these models have very good clustering and classification performance and that the algorithm is able to recover the parameters very well.
References
Airoldi J, Hoffmann R (1984) Age variation in voles (Microtus californicus, M. ochrogaster) and its significance for systematic studies, Occasional papers of the Museum of Natural History, vol 111. University of Kansas, Lawrence
Aitken AC (1926) On Bernoulli’s numerical solution of algebraic equations. Proc Royal Soc Edinburgh 46:289–305
Andrews JL, McNicholas PD, Subedi S (2011) Model-based classification via mixtures of multivariate t-distributions. Comput Stat Data Anal 55(1):520–529
Baek J, McLachlan GJ, Flack LK (2010) Mixtures of factor analyzers with common factor loadings: applications to the clustering and visualization of high-dimensional data. IEEE Trans Pattern Anal Mach Intell 32:1298–1309
Banfield JD, Raftery AE (1993) Model-based Gaussian and non-Gaussian clustering. Biometrics 49(3):803–821
Bartholomew DJ, Knott M (1999) Latent variable models and factor analysis. In: Kendall’s library of statistics, vol 7, 2nd edn. Edward Arnold, London
Bartlett M (1953) Factor analysis in psychology as a statistician sees it. In: Uppsala symposium on psychological factor analysis, Number 3 in Nordisk Psykologi’s Monograph Series, Uppsala, Sweden, pp 23–34. Almquist and Wiksell, Uppsala
Biernacki C, Celeux G, Govaert G (2000) Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans Pattern Anal Mach Intell 22(7):719–725
Böhning D, Dietz E, Schaub R, Schlattmann P, Lindsay B (1994) The distribution of the likelihood ratio for mixtures of densities from the one-parameter exponential family. Ann Inst Stat Math 46(2):373–388
Bouveyron C, Girard S, Schmid C (2007) High-dimensional data clustering. Comput Stat Data Anal 52(1):502–519
Browne RP, McNicholas PD (2012) Model-based clustering, classification, and discriminant analysis of data with mixed type. J Stat Plann Infer 142(11):2976–2984
Browne RP, McNicholas PD, Sparling MD (2012) Model-based learning using a mixture of mixtures of Gaussian and uniform distributions. IEEE Trans Pattern Anal Mach Intell 34(4):814–817
Carvalho C, Chang J, Lucas J, Nevins J, Wang Q, West M (2008) High-dimensional sparse factor modeling: applications in gene expression genomics. J Am Stat Assoc 103(484):1438–1456
Celeux G, Govaert G (1995) Gaussian parsimonious clustering models. Pattern Recogn 28(5):781–793
Dean N, Murphy TB, Downey G (2006) Using unlabelled data to update classification rules with applications in food authenticity studies. J Royal Stat Soc Ser C 55(1):1–14
Dempster A, Laird N, Rubin D (1977) Maximum likelihood from incomplete data via the EM algorithm. J Royal Stat Soc Ser B 39(1):1–38
DeSarbo W, Cron W (1988) A maximum likelihood methodology for clusterwise linear regression. J Classif 5(2):249–282
Flury B (1997) A first course in multivariate statistics. Springer, New York
Fraley C, Raftery AE (2002) Model-based clustering, discriminant analysis, and density estimation. J Am Stat Assoc 97(458):611–631
Fraley C, Raftery AE, Murphy TB, Scrucca L (2012) mclust version 4 for R: normal mixture modeling for model-based clustering, classification, and density estimation. Technical report 597, Department of Statistics, University of Washington, Seattle, Washington, USA
Frühwirth-Schnatter S (2006) Finite mixture and Markov switching models. Springer, New York
Gershenfeld N (1997) Nonlinear inference and cluster-weighted modeling. Ann New York Acad Sci 808(1):18–24
Ghahramani Z, Hinton G (1997) The EM algorithm for factor analyzers. Technical report CRG-TR-96-1, University of Toronto, Toronto
Grün B, Leisch F (2008) Flexmix version 2: finite mixtures with concomitant variables and varying and constant parameters. J Stat Softw 28(4):1–35
Hennig C (2000) Identifiability of models for clusterwise linear regression. J Classif 17(2):273–296
Hosmer D Jr (1973) A comparison of iterative maximum likelihood estimates of the parameters of a mixture of two normal distributions under three different types of sample. Biometrics 29(4):761–770
Hubert L, Arabie P (1985) Comparing partitions. J Classif 2(1):193–218
Ingrassia S, Minotti SC, Punzo A (2013) Model-based clustering via linear cluster-weighted models. Comput Stat Data Anal. doi:10.1016/j.csda.2013.02.012
Ingrassia S, Minotti SC, Punzo A, Vittadini G (2012a) Generalized linear cluster-weighted models. eprint arXiv:1211.1171, http://arxiv.org/abs/1211.1171
Ingrassia S, Minotti SC, Vittadini G (2012b) Local statistical modeling via the cluster-weighted approach with elliptical distributions. J Classif 29(3):363–401
Karlis D, Santourian A (2009) Model-based clustering with non-elliptically contoured distributions. Stat Comput 19(1):73–83
Leisch F (2004) Flexmix: a general framework for finite mixture models and latent class regression in R. J Stat Softw 11(8):1–18
Lin T-I (2010) Robust mixture modeling using multivariate skew t distributions. Stat Comput 20:343–356
Lindsay BG (1995) Mixture models: theory, geometry and applications. In: NSF-CBMS regional conference series in probability and statistics, vol. 5. Institute of Mathematical Statistics, Hayward
McLachlan GJ, Basford KE (1988) Mixture models: inference and applications to clustering. Marcel Dekker, New York
McLachlan GJ, Peel D (2000a) Finite mixture models. Wiley, New York
McLachlan GJ, Peel D (2000b) Mixtures of factor analyzers. In: Proceedings of the seventeenth international conference on machine learning, pp 599–606. Morgan Kaufmann, San Francisco
McNicholas PD (2010) Model-based classification using latent Gaussian mixture models. J Stat Plann Infer 140(5):1175–1181
McNicholas PD, Jampani KR, McDaid AF, Murphy TB, Banks L (2011) PGMM: Parsimonious Gaussian Mixture Models. R package version 1.0.
McNicholas PD, Murphy TB (2008) Parsimonious Gaussian mixture models. Stat Comput 18(3):285–296
McNicholas PD, Murphy TB (2010a) Model-based clustering of longitudinal data. Can J Stat 38(1):153–168
McNicholas PD, Murphy TB (2010b) Model-based clustering of microarray expression data via latent Gaussian mixture models. Bioinformatics 26(21):2705–2712
McNicholas PD, Murphy TB, McDaid AF, Frost D (2010) Serial and parallel implementations of model-based clustering via parsimonious Gaussian mixture models. Comput Stat Data Anal 54(3):711–723
McNicholas PD, Subedi S (2012) Clustering gene expression time course data using mixtures of multivariate t-distributions. J Stat Plann Infer 142(5):1114–1127
Meng XL, Rubin DB (1993) Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika 80(2):267–278
Meng XL, van Dyk D (1997) The EM algorithm: an old folk-song sung to a fast new tune. J Royal Stat Soc Ser B (Stat Methodol) 59(3):511–567
Montanari A, Viroli C (2010) Heteroscedastic factor mixture analysis. Stat Modell 10(4):441–460
Montanari A, Viroli C (2011) Dimensionally reduced mixtures of regression models. J Stat Plann Infer 141(5):1744–1752
Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes in C: the art of scientific computing, 2nd edn. Cambridge University Press, Cambridge
Punzo A (2012) Flexible mixture modeling with the polynomial Gaussian cluster-weighted model. eprint arXiv:1207.0939, http://arxiv.org/abs/1207.0939
Rand W (1971) Objective criteria for the evaluation of clustering methods. J Am Stat Assoc 66(336):846–850
Sakamoto Y, Ishiguro M, Kitagawa G (1983) Akaike information criterion statistics. Reidel, Boston
Schöner B (2000) Probabilistic characterization and synthesis of complex data driven systems. Ph. D. thesis, MIT
Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6(2):461–464
Scrucca L (2010) Dimension reduction for model-based clustering. Stat Comput 20(4):471–484
R Development Core Team (2012) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
Spearman C (1904) The proof and measurement of association between two things. Am J Psychol 15(1):72–101
Tipping ME, Bishop CM (1999) Mixtures of probabilistic principal component analysers. Neural Comput 11(2):443–482
Titterington DM, Smith AFM, Makov UE (1985) Statistical analysis of finite mixture distributions. Wiley, New York
Wang Q, Carvalho C, Lucas J, West M (2007) BFRM: Bayesian factor regression modelling. Bull Int Soc Bayesian Anal 14(2):4–5
West M (2003) Bayesian factor regression models in the “large $p$, small $n$” paradigm. In: Bernardo J, Bayarri M, Berger J, Dawid A, Heckerman D, Smith A, West M (eds) Bayesian statistics, vol 7. Oxford University Press, Oxford, pp 723–732
Wolfe JH (1963) Object cluster analysis of social areas. Master’s thesis, University of California, Berkeley
Wolfe JH (1970) Pattern clustering by multivariate mixture analysis. Multivariate Behav Res 5(3):329–350
Woodbury MA (1950) Inverting modified matrices. Statistical Research Group, Memo. Rep. no. 42. Princeton University, Princeton
Acknowledgments
The authors sincerely thank the Associate Editor and the referees for helpful comments and valuable suggestions that have contributed to improving the quality of the manuscript. The work of Subedi and McNicholas was partly supported by an Early Researcher Award from the Ontario Ministry of Research and Innovation.
Appendices
Appendix A: The conditional distribution of \(Y|{\varvec{x}},{\varvec{u}}\)
To compute the distribution of \(Y|{\varvec{x}},{\varvec{u}}\), we begin by recalling that if \({\varvec{Z}}\sim N_q\left( {\varvec{m}},{{\varvec{\Gamma }}}\right) \) is a random vector with values in \(\mathbb{R }^q\) and if \({\varvec{Z}}\) is partitioned as \({\varvec{Z}}=\left( {\varvec{Z}}^{\prime }_1,{\varvec{Z}}^{\prime }_2\right) ^{\prime }\), where \({\varvec{Z}}_1\) takes values in \(\mathbb{R }^{q_1}\) and \({\varvec{Z}}_2\) in \(\mathbb{R }^{q_2}=\mathbb{R }^{q-q_1}\), then we can write
$$\begin{aligned} {\varvec{m}}=\begin{pmatrix}{\varvec{m}}_1\\ {\varvec{m}}_2\end{pmatrix},\qquad {{\varvec{\Gamma }}}=\begin{pmatrix}{{\varvec{\Gamma }}}_{11}&{{\varvec{\Gamma }}}_{12}\\ {{\varvec{\Gamma }}}_{21}&{{\varvec{\Gamma }}}_{22}\end{pmatrix}. \end{aligned}$$
Now, because \({\varvec{Z}}\) has a multivariate normal distribution, \({\varvec{Z}}_1|{\varvec{Z}}_2={\varvec{z}}_2\) and \({\varvec{Z}}_2\) are statistically independent with \({\varvec{Z}}_1|{\varvec{Z}}_2={\varvec{z}}_2 \sim N_{q_1}\left( {\varvec{m}}_{1|2}, {{\varvec{\Gamma }}}_{1|2}\right) \) and \({\varvec{Z}}_2 \sim N_{q_2}\left( {\varvec{m}}_2, {{\varvec{\Gamma }}}_{22}\right) \), where
$$\begin{aligned} {\varvec{m}}_{1|2}&={\varvec{m}}_1+{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}\left( {\varvec{z}}_2-{\varvec{m}}_2\right) \\ {{\varvec{\Gamma }}}_{1|2}&={{\varvec{\Gamma }}}_{11}-{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}{{\varvec{\Gamma }}}_{21}. \end{aligned}$$
Therefore, setting \({\varvec{Z}}=\left( {\varvec{Z}}^{\prime }_1,{\varvec{Z}}^{\prime }_2\right) ^{\prime }\), where \({\varvec{Z}}^{\prime }_1=Y\) and \({\varvec{Z}}_2=\left( {\varvec{X}}^{\prime }, {\varvec{U}}^{\prime }\right) ^{\prime }\), gives \({\varvec{m}}_1= \beta _0 + {{\varvec{\beta }}}^{\prime }_1 {{\varvec{\mu }}}\) and \({\varvec{m}}_2 = \left( {{\varvec{\mu }}}^{\prime },{\varvec{0}}^{\prime }\right) ^{\prime }\), with the elements in \({{\varvec{\Gamma }}}\) given by
$$\begin{aligned} {{\varvec{\Gamma }}}_{11}&={{\varvec{\beta }}}_1^{\prime }{{\varvec{\Sigma }}}{{\varvec{\beta }}}_1+\sigma ^2\\ {{\varvec{\Gamma }}}_{12}&={{\varvec{\Gamma }}}_{21}^{\prime }=\left( {{\varvec{\beta }}}_1^{\prime }{{\varvec{\Sigma }}},\ {{\varvec{\beta }}}_1^{\prime }{{\varvec{\Lambda }}}\right) \\ {{\varvec{\Gamma }}}_{22}&=\begin{pmatrix}{{\varvec{\Sigma }}}&{{\varvec{\Lambda }}}\\ {{\varvec{\Lambda }}}^{\prime }&{\varvec{I}}_q\end{pmatrix}. \end{aligned}$$
It follows that \(Y|{\varvec{x}},{\varvec{u}}\) is Gaussian with mean \({\varvec{m}}_{y|{\varvec{x}},{\varvec{u}}}= \mathbb E \left( Y|{\varvec{x}},{\varvec{u}}\right) \) and variance \(\sigma ^2_{y|{\varvec{x}},{\varvec{u}}}=\text{ Var}\left( Y|{\varvec{x}},{\varvec{u}}\right) \), in accordance with the formulae in (15). Because the inverse matrix of \({{\varvec{\Gamma }}}_{22}\) is required in (15), the following formula for the inverse of a partitioned matrix is utilized:
$$\begin{aligned} \begin{pmatrix}{\varvec{A}}&{\varvec{B}}\\ {\varvec{C}}&{\varvec{D}}\end{pmatrix}^{-1}=\begin{pmatrix}{\varvec{E}}^{-1}&-{\varvec{E}}^{-1}{\varvec{B}}{\varvec{D}}^{-1}\\ -{\varvec{D}}^{-1}{\varvec{C}}{\varvec{E}}^{-1}&{\varvec{D}}^{-1}+{\varvec{D}}^{-1}{\varvec{C}}{\varvec{E}}^{-1}{\varvec{B}}{\varvec{D}}^{-1}\end{pmatrix},\qquad {\varvec{E}}={\varvec{A}}-{\varvec{B}}{\varvec{D}}^{-1}{\varvec{C}}. \end{aligned}$$
Again, writing \({{\varvec{\Sigma }}}={{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime } + {{\varvec{\Psi }}}\), so that the Schur complement \({{\varvec{\Sigma }}}-{{\varvec{\Lambda }}}{\varvec{I}}_q{{\varvec{\Lambda }}}^{\prime }\) reduces to \({{\varvec{\Psi }}}\), we have
$$\begin{aligned} {{\varvec{\Gamma }}}_{22}^{-1}=\begin{pmatrix}{{\varvec{\Sigma }}}&{{\varvec{\Lambda }}}\\ {{\varvec{\Lambda }}}^{\prime }&{\varvec{I}}_q\end{pmatrix}^{-1}=\begin{pmatrix}{{\varvec{\Psi }}}^{-1}&-{{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\\ -{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}&{\varvec{I}}_q+{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\end{pmatrix}. \end{aligned}$$
Moreover, according to the Woodbury identity (Woodbury 1950),
$$\begin{aligned} {{\varvec{\Sigma }}}^{-1}=\left( {{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime }+{{\varvec{\Psi }}}\right) ^{-1}={{\varvec{\Psi }}}^{-1}-{{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\left( {\varvec{I}}_q+{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\right) ^{-1}{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}. \end{aligned}$$
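As a quick numerical check of this identity (not part of the original derivation), the following sketch compares the Woodbury expression for \({{\varvec{\Sigma }}}^{-1}\) with a direct inversion; the dimensions, seed, and variable names are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 6, 2  # illustrative dimensions
Lam = rng.normal(size=(p, q))            # loading matrix Lambda (p x q)
Psi = np.diag(rng.uniform(0.5, 2.0, p))  # diagonal noise covariance Psi

Sigma = Lam @ Lam.T + Psi                # Sigma = Lambda Lambda' + Psi
Psi_inv = np.diag(1.0 / np.diag(Psi))

# Woodbury: Sigma^{-1} = Psi^{-1} - Psi^{-1} Lam (I_q + Lam' Psi^{-1} Lam)^{-1} Lam' Psi^{-1}
woodbury = Psi_inv - Psi_inv @ Lam @ np.linalg.inv(
    np.eye(q) + Lam.T @ Psi_inv @ Lam) @ Lam.T @ Psi_inv

assert np.allclose(woodbury, np.linalg.inv(Sigma))  # the identity holds
```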
Now,
$$\begin{aligned} {{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}&=\left( {{\varvec{\beta }}}_1^{\prime }{{\varvec{\Sigma }}},\ {{\varvec{\beta }}}_1^{\prime }{{\varvec{\Lambda }}}\right) \begin{pmatrix}{{\varvec{\Psi }}}^{-1}&-{{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\\ -{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}&{\varvec{I}}_q+{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\end{pmatrix}\\&=\left( {{\varvec{\beta }}}_1^{\prime }\left( {{\varvec{\Sigma }}}-{{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime }\right) {{\varvec{\Psi }}}^{-1},\ {{\varvec{\beta }}}_1^{\prime }\left[ {{\varvec{\Lambda }}}-\left( {{\varvec{\Sigma }}}-{{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime }\right) {{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}\right] \right) =\left( {{\varvec{\beta }}}_1^{\prime },\ {\varvec{0}}^{\prime }\right) , \end{aligned}$$
because \({{\varvec{\Sigma }}}-{{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime }={{\varvec{\Psi }}}\).
Finally, according to (15), we have
$$\begin{aligned} {\varvec{m}}_{y|{\varvec{x}},{\varvec{u}}}&={\varvec{m}}_1+{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}\left[ \left( {\varvec{x}}^{\prime },{\varvec{u}}^{\prime }\right) ^{\prime }-{\varvec{m}}_2\right] =\beta _0+{{\varvec{\beta }}}_1^{\prime }{{\varvec{\mu }}}+{{\varvec{\beta }}}_1^{\prime }\left( {\varvec{x}}-{{\varvec{\mu }}}\right) =\beta _0+{{\varvec{\beta }}}_1^{\prime }{\varvec{x}}\\ \sigma ^2_{y|{\varvec{x}},{\varvec{u}}}&={{\varvec{\Gamma }}}_{11}-{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}{{\varvec{\Gamma }}}_{21}={{\varvec{\beta }}}_1^{\prime }{{\varvec{\Sigma }}}{{\varvec{\beta }}}_1+\sigma ^2-{{\varvec{\beta }}}_1^{\prime }{{\varvec{\Sigma }}}{{\varvec{\beta }}}_1=\sigma ^2; \end{aligned}$$
that is, \(Y|{\varvec{x}},{\varvec{u}}\sim N\left( \beta _0+{{\varvec{\beta }}}_1^{\prime }{\varvec{x}},\sigma ^2\right) \), which does not depend on \({\varvec{u}}\).
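A minimal numerical illustration of this conclusion, assuming the component model \({\varvec{X}}={{\varvec{\mu }}}+{{\varvec{\Lambda }}}{\varvec{U}}+{\varvec{e}}\), \(Y=\beta _0+{{\varvec{\beta }}}_1^{\prime }{\varvec{X}}+\varepsilon \): the joint covariance of \(\left( Y,{\varvec{X}},{\varvec{U}}\right) \) is assembled and the regression of \(Y\) on \(\left( {\varvec{x}},{\varvec{u}}\right) \) is computed from the conditional formulas. All dimensions and parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 5, 2
Lam = rng.normal(size=(p, q))
Psi = np.diag(rng.uniform(0.5, 2.0, p))
Sigma = Lam @ Lam.T + Psi
beta1 = rng.normal(size=p)
sigma2 = 0.7

# Joint covariance blocks of (Y, X, U) under the component model
G11 = beta1 @ Sigma @ beta1 + sigma2                # Var(Y)
G12 = np.concatenate([beta1 @ Sigma, beta1 @ Lam])  # Cov(Y, (X, U))
G22 = np.block([[Sigma, Lam], [Lam.T, np.eye(q)]])  # Cov((X, U))

coef = G12 @ np.linalg.inv(G22)              # regression coefficients of Y on (x, u)
assert np.allclose(coef[:p], beta1)          # coefficient on x is beta_1
assert np.allclose(coef[p:], 0.0)            # coefficient on u vanishes
assert np.isclose(G11 - coef @ G12, sigma2)  # conditional variance is sigma^2
```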
Appendix B: Details on the AECM algorithm for the parsimonious models
This appendix details the AECM algorithm for the models summarized in Table 1.
B.1 Constraint on the \(Y\) variable
In all models whose identifier starts with ‘C’, that is, the models in which the error variance terms \(\sigma _g^2\) (of the response variable \(Y\)) are constrained to be equal across groups, i.e., \(\sigma _g^2 =\sigma ^2\) for \(g=1,\ldots ,G\), the common variance \(\sigma ^2\) at the \(\left( k+1\right) \)th iteration of the algorithm is computed as
$$\begin{aligned} \sigma ^{2\left( k+1\right) }=\frac{1}{n}\sum _{g=1}^G\sum _{i=1}^n z_{ig}^{\left( k+1\right) }\left( y_i-\beta _{0g}^{\left( k+1\right) }-{{\varvec{\beta }}}_{1g}^{\prime \left( k+1\right) }{\varvec{x}}_i\right) ^2, \end{aligned}$$
where \(z_{ig}^{\left( k+1\right) }\) denotes the posterior probability, computed at the E-step, that observation \(i\) belongs to group \(g\).
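The following sketch shows how this pooled update could be coded. It is not the authors' implementation; the function name, argument layout, and the symbol z for the posterior probabilities are assumptions.

```python
import numpy as np

def common_sigma2(y, X, z, beta0, beta1):
    """Pooled update of the common variance sigma^2 (hedged sketch).

    y: (n,) responses; X: (n, p) covariates; z: (n, G) posterior
    probabilities z_ig from the preceding E-step; beta0: (G,) and
    beta1: (G, p) current regression parameters.
    """
    resid = y[:, None] - beta0[None, :] - X @ beta1.T  # (n, G) residuals
    return float(np.sum(z * resid**2) / len(y))        # pooled over all groups
```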
B.2 Constraints on the \({\varvec{X}}\) variable
With respect to the \({\varvec{X}}\) variable, as explained in Sect. 3.2, we considered the following constraints on \({{\varvec{\Sigma }}}_g={{\varvec{\Lambda }}}_g{{\varvec{\Lambda }}}_g^{\prime }+{{\varvec{\Psi }}}_g\): (i) equal loading matrices \({{\varvec{\Lambda }}}_g = {{\varvec{\Lambda }}}\), (ii) equal error variance \({{\varvec{\Psi }}}_g = {{\varvec{\Psi }}}\), and (iii) isotropic assumption: \({{\varvec{\Psi }}}_g = \psi _g {\varvec{I}}_p\). In such cases, the \(g\)th term of the expected complete-data log-likelihood \(Q_2\left( {{\varvec{\theta }}}_2; {{\varvec{\theta }}}^{(k+1/2)}\right) \), and then the estimates (12) and (13) in Sect. 4.3, are computed as follows.
B.2.1 Isotropic assumption: \({{\varvec{\Psi }}}_g=\psi _g{\varvec{I}}_p\)
In this case, Eq. (9) becomes
$$\begin{aligned} Q_2\left( {{\varvec{\Lambda }}}_g,\psi _g;{{\varvec{\theta }}}^{(k+1/2)}\right) =C-\frac{n_g^{\left( k+1\right) }}{2}\left[ p\ln \psi _g+\psi _g^{-1}\left( \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }\right\} -2\,\text{ tr}\left\{ {{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }\right\} +\text{ tr}\left\{ {{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} \right) \right] , \end{aligned}$$
where \(C\) collects the terms not depending on \(\left( {{\varvec{\Lambda }}}_g,\psi _g\right) \), yielding
$$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}}_g,\psi _g;{{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial \psi _g^{-1}}=\frac{n_g^{\left( k+1\right) }}{2}\left[ p\psi _g-\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }\right\} +2\,\text{ tr}\left\{ {{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }\right\} -\text{ tr}\left\{ {{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} \right] . \end{aligned}$$
Then the estimated \(\psi _g\) is attained for \(\hat{\psi }_g\), satisfying
$$\begin{aligned} \hat{\psi }_g=\frac{1}{p}\left[ \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }\right\} -2\,\text{ tr}\left\{ {{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }\right\} +\text{ tr}\left\{ {{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} \right] . \end{aligned}$$
Thus, according to (12), for \({{\varvec{\Lambda }}}_g=\hat{{{\varvec{\Lambda }}}}_g = {\varvec{S}}^{\left( k+1\right) }_g {{\varvec{\gamma }}}^{\prime \left( k\right) }_g\left( {{\varvec{\Theta }}}^{\left( k\right) }_g\right) ^{-1}\) we get \(\text{ tr}\left\{ {{\varvec{\Lambda }}}_g {{\varvec{\Theta }}}^{\left( k\right) }_g {{\varvec{\Lambda }}}_g^{\prime } \right\} = \text{ tr} \left\{ {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Lambda }}}_g \right\} \) and, finally, \(\hat{\psi }_g =\frac{1}{p}\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } -\hat{{{\varvec{\Lambda }}}}_g{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} .\) Thus,
$$\begin{aligned} \psi _g^+&=\frac{1}{p}\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-{{\varvec{\Lambda }}}_g^+{{\varvec{\gamma }}}_g{\varvec{S}}_g^{\left( k+1\right) }\right\} \\ {{\varvec{\gamma }}}_g^+&={{\varvec{\Lambda }}}_g^{\prime +}\left( {{\varvec{\Lambda }}}_g^+{{\varvec{\Lambda }}}_g^{\prime +}+\psi _g^+{\varvec{I}}_p\right) ^{-1}, \end{aligned}$$(16)
with \({{\varvec{\Theta }}}_g^+\) computed according to (14).
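A sketch of this CM-step under the assumed notation of this appendix; the function name and argument layout are hypothetical, and \({\varvec{S}}_g\) is taken as given from the preceding step.

```python
import numpy as np

def update_isotropic(S_g, Lam_g, gamma_g, p):
    """CM-step sketch for Psi_g = psi_g I_p:
    psi_g = tr{S_g - Lam_g gamma_g S_g} / p, then the gamma update."""
    psi_g = np.trace(S_g - Lam_g @ gamma_g @ S_g) / p
    gamma_plus = Lam_g.T @ np.linalg.inv(Lam_g @ Lam_g.T + psi_g * np.eye(p))
    return psi_g, gamma_plus
```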
B.2.2 Equal error variance: \({{\varvec{\Psi }}}_g ={{\varvec{\Psi }}}\)
In this case, from Eq. (9), we have
$$\begin{aligned} Q_2\left( {{\varvec{\Lambda }}}_g,{{\varvec{\Psi }}};{{\varvec{\theta }}}^{(k+1/2)}\right) =C-\sum _{g=1}^G\frac{n_g^{\left( k+1\right) }}{2}\left[ \ln \left| {{\varvec{\Psi }}}\right| +\text{ tr}\left\{ {{\varvec{\Psi }}}^{-1}{\varvec{S}}_g^{\left( k+1\right) }\right\} -2\,\text{ tr}\left\{ {{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }\right\} +\text{ tr}\left\{ {{\varvec{\Psi }}}^{-1}{{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} \right] , \end{aligned}$$
yielding
$$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}}_g,{{\varvec{\Psi }}};{{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Psi }}}^{-1}}=\sum _{g=1}^G\frac{n_g^{\left( k+1\right) }}{2}\left[ {{\varvec{\Psi }}}-\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2{{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }+{{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} \right] . \end{aligned}$$
Then the estimated \(\hat{{{\varvec{\Psi }}}}\) is obtained by satisfying
$$\begin{aligned} \sum _{g=1}^G n_g^{\left( k+1\right) }\left[ \hat{{{\varvec{\Psi }}}}-\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2{{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }+{{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} \right] ={\varvec{0}}, \end{aligned}$$
that is
$$\begin{aligned} n\hat{{{\varvec{\Psi }}}}=\sum _{g=1}^G n_g^{\left( k+1\right) }\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2{{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }+{{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} , \end{aligned}$$
which can be simplified as
$$\begin{aligned} \hat{{{\varvec{\Psi }}}}=\frac{1}{n}\sum _{g=1}^G n_g^{\left( k+1\right) }\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2{{\varvec{\Lambda }}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }+{{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\right\} , \end{aligned}$$
with \(\displaystyle \sum _{g=1}^G n^{\left( k+1\right) }_g =n\). Again, according to (12), for \({{\varvec{\Lambda }}}_g=\hat{{{\varvec{\Lambda }}}}_g = {\varvec{S}}^{\left( k+1\right) }_g {{\varvec{\gamma }}}^{\prime \left( k\right) }_g\left( {{\varvec{\Theta }}}^{\left( k\right) }_g\right) ^{-1}\) we get \(\hat{{{\varvec{\Lambda }}}}_g{{\varvec{\Theta }}}^{\left( k\right) }_g \hat{{{\varvec{\Lambda }}}}_g^{\prime } =\hat{{{\varvec{\Lambda }}}}_g {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } \) and, afterwards,
$$\begin{aligned} \hat{{{\varvec{\Psi }}}}=\frac{1}{n}\sum _{g=1}^G n_g^{\left( k+1\right) }\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-\hat{{{\varvec{\Lambda }}}}_g{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }\right\} . \end{aligned}$$
Thus,
$$\begin{aligned} {{\varvec{\Psi }}}^+&=\sum _{g=1}^G\pi _g^{\left( k+1\right) }\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-{{\varvec{\Lambda }}}_g^+{{\varvec{\gamma }}}_g{\varvec{S}}_g^{\left( k+1\right) }\right\} \\ {{\varvec{\gamma }}}_g^+&={{\varvec{\Lambda }}}_g^{\prime +}\left( {{\varvec{\Lambda }}}_g^+{{\varvec{\Lambda }}}_g^{\prime +}+{{\varvec{\Psi }}}^+\right) ^{-1}, \end{aligned}$$(18)
where \({{\varvec{\Theta }}}_g^+\) is computed according to (14).
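An analogous sketch for the common-\({{\varvec{\Psi }}}\) update; the names S_list, Lam_list, gamma_list, and n_g holding the group-wise quantities are hypothetical.

```python
import numpy as np

def update_common_Psi(S_list, Lam_list, gamma_list, n_g):
    """CM-step sketch for Psi_g = Psi (common diagonal noise):
    Psi = sum_g (n_g / n) diag{S_g - Lam_g gamma_g S_g}."""
    n = float(np.sum(n_g))
    p = S_list[0].shape[0]
    psi_diag = np.zeros(p)
    for S, Lam, gam, ng in zip(S_list, Lam_list, gamma_list, n_g):
        psi_diag += ng / n * np.diag(S - Lam @ gam @ S)
    return np.diag(psi_diag)
```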
B.2.3 Equal loading matrices: \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\)
In this case, Eq. (9) can be written as
$$\begin{aligned} Q_2\left( {{\varvec{\Lambda }}},{{\varvec{\Psi }}}_g;{{\varvec{\theta }}}^{(k+1/2)}\right) =C-\sum _{g=1}^G\frac{n_g^{\left( k+1\right) }}{2}\left[ \ln \left| {{\varvec{\Psi }}}_g\right| +\text{ tr}\left\{ {{\varvec{\Psi }}}_g^{-1}{\varvec{S}}_g^{\left( k+1\right) }\right\} -2\,\text{ tr}\left\{ {{\varvec{\Psi }}}_g^{-1}{{\varvec{\Lambda }}}{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) }\right\} +\text{ tr}\left\{ {{\varvec{\Psi }}}_g^{-1}{{\varvec{\Lambda }}}{{\varvec{\Theta }}}_g^{\left( k\right) }{{\varvec{\Lambda }}}^{\prime }\right\} \right] , \end{aligned}$$
yielding
$$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}},{{\varvec{\Psi }}}_g;{{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}}=\sum _{g=1}^G n_g^{\left( k+1\right) }{{\varvec{\Psi }}}_g^{-1}\left[ {\varvec{S}}_g^{\left( k+1\right) }{{\varvec{\gamma }}}_g^{\prime \left( k\right) }-{{\varvec{\Lambda }}}{{\varvec{\Theta }}}_g^{\left( k\right) }\right] . \end{aligned}$$(19)
Then the estimated \(\hat{{{\varvec{\Lambda }}}}\) is obtained by solving
$$\begin{aligned} \sum _{g=1}^G n_g^{\left( k+1\right) }{{\varvec{\Psi }}}_g^{-1}\left[ {\varvec{S}}_g^{\left( k+1\right) }{{\varvec{\gamma }}}_g^{\prime \left( k\right) }-\hat{{{\varvec{\Lambda }}}}{{\varvec{\Theta }}}_g^{\left( k\right) }\right] ={\varvec{0}}, \end{aligned}$$
with \({{\varvec{\gamma }}}^{\left( k\right) }_g ={{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }+{{\varvec{\Psi }}}^{\left( k\right) }_g\right) ^{-1}\). In this case, the equation cannot be solved for \({{\varvec{\Lambda }}}\) directly; it must be solved in a row-by-row manner, as suggested by McNicholas and Murphy (2008). Therefore,
$$\begin{aligned} \lambda _i^+={\varvec{r}}_i\left( \sum _{g=1}^G n_g^{\left( k+1\right) }\psi _{g\left( i\right) }^{-1}{{\varvec{\Theta }}}_g^{\left( k\right) }\right) ^{-1}, \end{aligned}$$(20)
where \(\lambda ^+_i\) is the \(i\)th row of the matrix \({{\varvec{\Lambda }}}^+\), \(\psi _{g\left( i\right) }\) is the \(i\)th diagonal element of \({{\varvec{\Psi }}}_g\), and \(\mathbf r _i\) represents the \(i\)th row of the matrix \(\displaystyle \sum \nolimits _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Psi }}}_g^{-1} {\varvec{S}}_g^{\left( k+1\right) }{{\varvec{\gamma }}}_g^{\prime \left( k\right) }\). The corresponding updates are
$$\begin{aligned} {{\varvec{\gamma }}}_g^+={{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +}+{{\varvec{\Psi }}}_g^+\right) ^{-1} \end{aligned}$$(21)
and
$$\begin{aligned} {{\varvec{\Theta }}}_g^+={\varvec{I}}_q-{{\varvec{\gamma }}}_g^+{{\varvec{\Lambda }}}^++{{\varvec{\gamma }}}_g^+{\varvec{S}}_g^{\left( k+1\right) }{{\varvec{\gamma }}}_g^{\prime +}. \end{aligned}$$(22)
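A sketch of this row-by-row update, with hypothetical names; Psi_list holds the diagonals \(\psi _g\) of the matrices \({{\varvec{\Psi }}}_g\).

```python
import numpy as np

def update_common_Lambda_rowwise(S_list, gamma_list, Theta_list, Psi_list, n_g):
    """Row-by-row update of the common loading matrix Lambda (CUU-type
    models), following the scheme of McNicholas and Murphy (2008);
    a sketch under assumed notation, not the authors' code."""
    p = S_list[0].shape[0]
    q = Theta_list[0].shape[0]
    # R has i-th row r_i of sum_g n_g Psi_g^{-1} S_g gamma_g'
    R = sum(ng * (S @ gam.T) / psi[:, None]
            for S, gam, psi, ng in zip(S_list, gamma_list, Psi_list, n_g))
    Lam = np.zeros((p, q))
    for i in range(p):
        # q x q system specific to row i: sum_g n_g psi_{g(i)}^{-1} Theta_g
        A = sum(ng / psi[i] * Th
                for Th, psi, ng in zip(Theta_list, Psi_list, n_g))
        Lam[i] = np.linalg.solve(A.T, R[i])  # solves lambda_i A = r_i
    return Lam
```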
1.2.4 B.2.4 Further details
A further schematization is given here, without considering the constraint on the \(Y\) variable; thus, with reference to the model identifiers, we only refer to the last three letters.
- Models ending in UUU: no constraint is assumed.
- Models ending in UUC: \({{\varvec{\Psi }}}_g =\psi _g{\varvec{I}}_p\), where the parameter \(\psi _g\) is updated according to (16).
- Models ending in UCU: \({{\varvec{\Psi }}}_g ={{\varvec{\Psi }}}\), where the matrix \({{\varvec{\Psi }}}\) is updated according to (18).
- Models ending in UCC: \({{\varvec{\Psi }}}_g =\psi {\varvec{I}}_p\). By combining (16) and (18), we obtain
$$\begin{aligned} \hat{\psi }&= \frac{1}{p}\sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{n}\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } -\hat{{{\varvec{\Lambda }}}}_g{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} \\&= \frac{1}{p}\sum _{g=1}^G \hat{\pi }_g^{\left( k+1\right) } \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } -\hat{{{\varvec{\Lambda }}}}_g{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} . \end{aligned}$$(23)
Thus, \(\psi ^+ = ({1}/{p})\sum _{g=1}^G \pi _g^{\left( k+1\right) } \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } - {{\varvec{\Lambda }}}^+_g {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) }\right\} \) and \({{\varvec{\gamma }}}^+_g = {{\varvec{\Lambda }}}^{\prime +}_g \left( {{\varvec{\Lambda }}}^+_g {{\varvec{\Lambda }}}^{\prime +}_g+\psi ^+ {\varvec{I}}_p\right) ^{-1}\), with \({{\varvec{\Theta }}}_g^+\) computed according to (14).
- Models ending in CUU: \({{\varvec{\Lambda }}}_g ={{\varvec{\Lambda }}}\), where the matrix \({{\varvec{\Lambda }}}\) is updated according to (20). In this case, \({{\varvec{\Psi }}}_g\) is estimated directly from (11), and thus \({{\varvec{\Psi }}}^+_g =\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2 {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) } +{{\varvec{\Lambda }}}^+ {{\varvec{\Theta }}}_g {{\varvec{\Lambda }}}^{\prime +} \right\} \), with \({{\varvec{\gamma }}}_g^+\) and \({{\varvec{\Theta }}}^+_g\) computed according to (21) and (22), respectively.
- Models ending in CUC: \({{\varvec{\Lambda }}}_g ={{\varvec{\Lambda }}}\) and \({{\varvec{\Psi }}}_g =\psi _g{\varvec{I}}_p\). In this case, Eq. (19), for \({{\varvec{\Psi }}}_g =\psi _g{\varvec{I}}_p\), yields
$$\begin{aligned} \sum _{g=1}^G \frac{\partial Q_2 \left( {{\varvec{\Lambda }}}, \psi _g; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}} = \sum _{g=1}^G n_g^{\left( k+1\right) } \psi _g^{-1}{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) } - \sum _{g=1}^G n_g^{\left( k+1\right) } \psi _g^{-1} {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g = {\varvec{0}}, \end{aligned}$$
and afterwards
$$\begin{aligned} \hat{{{\varvec{\Lambda }}}} = \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) } \right) \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {{\varvec{\Theta }}}^{\left( k\right) }_g\right) ^{-1}, \end{aligned}$$
with \({{\varvec{\gamma }}}^{\left( k\right) }_g ={{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }+\psi ^{\left( k\right) }_g {\varvec{I}}_p\right) ^{-1}\). Moreover, from
$$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}}, \psi _g; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial \psi _g^{-1}} = \frac{n_g^{\left( k+1\right) }p}{2} \psi _g -\frac{n_g^{\left( k+1\right) }}{2} \left[ \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }\right\} -2 \text{ tr}\left\{ {{\varvec{\Lambda }}}{{\varvec{\gamma }}}^{\left( k\right) }_g {\varvec{S}}_g^{\left( k+1\right) }\right\} + \text{ tr}\left\{ {{\varvec{\Lambda }}}{{\varvec{\Theta }}}_g^{\left( k\right) } {{\varvec{\Lambda }}}^{\prime }\right\} \right] = 0, \end{aligned}$$
we get \(\hat{\psi }_g =({1}/{p})\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2\hat{{{\varvec{\Lambda }}}} {{\varvec{\gamma }}}^{\left( k\right) }_g {\varvec{S}}_g^{\left( k+1\right) }+\hat{{{\varvec{\Lambda }}}} {{\varvec{\Theta }}}_g^{\left( k\right) }\hat{{{\varvec{\Lambda }}}}^{\prime }\right\} \). Thus,
$$\begin{aligned} {{\varvec{\Lambda }}}^+&= \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{\prime } \right) \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {{\varvec{\Theta }}}_g\right) ^{-1}\\ \psi ^+_g&= \frac{1}{p}\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2 {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) }+ {{\varvec{\Lambda }}}^+ {{\varvec{\Theta }}}_g{{\varvec{\Lambda }}}^{\prime +} \right\} \\ {{\varvec{\gamma }}}^+_g&= {{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +} +\psi ^+_g {\varvec{I}}_p\right) ^{-1}, \end{aligned}$$
with \({{\varvec{\Theta }}}_g^+\) computed according to (22).
- Models ending in CCU: \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\) and \({{\varvec{\Psi }}}_g ={{\varvec{\Psi }}}\), so that \({{\varvec{\gamma }}}^{\left( k\right) } ={{\varvec{\Lambda }}}^{\prime \left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{\prime \left( k\right) }+{{\varvec{\Psi }}}^{\left( k\right) }\right) ^{-1}\). Setting \({{\varvec{\Psi }}}_g={{\varvec{\Psi }}}\) in (19), we get
$$\begin{aligned} \sum _{g=1}^G \frac{\partial Q_2\left( {{\varvec{\Lambda }}}, {{\varvec{\Psi }}}; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}}&= \sum _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Psi }}}^{-1} \left[ {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) } - {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g \right] \\&= {{\varvec{\Psi }}}^{-1} \left[ \left( \sum _{g=1}^G n_g^{\left( k+1\right) } {\varvec{S}}_g^{\left( k+1\right) }\right) {{\varvec{\gamma }}}^{\prime \left( k\right) } - {{\varvec{\Lambda }}}\sum _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Theta }}}^{\left( k\right) }_g \right] \\&= n{{\varvec{\Psi }}}^{-1} \left[ {\varvec{S}}^{\left( k+1\right) }{{\varvec{\gamma }}}^{\prime \left( k\right) } - {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) } \right] = {\varvec{0}}, \end{aligned}$$
where \({\varvec{S}}^{\left( k+1\right) } = \sum _{g=1}^G \pi _g^{\left( k+1\right) } {\varvec{S}}_g^{\left( k+1\right) } \) and \({{\varvec{\Theta }}}^{\left( k\right) } =\sum _{g=1}^G \pi _g^{\left( k+1\right) } {{\varvec{\Theta }}}^{\left( k\right) }_g = {\varvec{I}}_q-{{\varvec{\gamma }}}^{\left( k\right) } {{\varvec{\Lambda }}}^{\left( k\right) }+ {{\varvec{\gamma }}}^{\left( k\right) } {\varvec{S}}^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) }\). Thus,
$$\begin{aligned} \hat{{{\varvec{\Lambda }}}} ={\varvec{S}}^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) } \left( {{\varvec{\Theta }}}^{\left( k\right) }\right) ^{-1}. \end{aligned}$$(24)
Moreover, setting \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\) in (17), we get \(\hat{{{\varvec{\Psi }}}} = \text{ diag}\left\{ {\varvec{S}}^{\left( k+1\right) }-\hat{{{\varvec{\Lambda }}}}{{\varvec{\gamma }}}^{\left( k\right) } {\varvec{S}}^{\left( k+1\right) }\right\} \). Hence,
$$\begin{aligned} {{\varvec{\Lambda }}}^+&= {\varvec{S}}^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime } {{\varvec{\Theta }}}^{-1}\\ {{\varvec{\Psi }}}^+&= \text{ diag}\left\{ {\varvec{S}}^{\left( k+1\right) }- {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}{\varvec{S}}^{\left( k+1\right) }\right\} \\ {{\varvec{\gamma }}}^+&= {{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +} +{{\varvec{\Psi }}}^+\right) ^{-1}, \end{aligned}$$(25)
with \({{\varvec{\Theta }}}_g^+\) computed according to (22).
- Models ending in CCC: \({{\varvec{\Lambda }}}_g ={{\varvec{\Lambda }}}\) and \({{\varvec{\Psi }}}_g =\psi {\varvec{I}}_p\), so that \({{\varvec{\gamma }}}^{\left( k\right) } ={{\varvec{\Lambda }}}^{\prime \left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{\prime \left( k\right) }+\psi ^{\left( k\right) }{\varvec{I}}_p\right) ^{-1}\). Here, the estimated loading matrix is again (24), while the isotropic term obtained from (23) for \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\) is \(\hat{\psi } = ({1}/{p})\text{ tr}\left\{ {\varvec{S}}^{\left( k+1\right) }- \hat{{{\varvec{\Lambda }}}}{{\varvec{\gamma }}}^{\left( k\right) }{\varvec{S}}^{\left( k+1\right) }\right\} \). Hence, \(\psi ^+ = ({1}/{p}) \text{ tr}\left\{ {\varvec{S}}^{\left( k+1\right) }- {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}{\varvec{S}}^{\left( k+1\right) }\right\} \) and \({{\varvec{\gamma }}}^+ = {{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +} + \psi ^+ {\varvec{I}}_p\right) ^{-1}\), with \({{\varvec{\Lambda }}}^+\) and \({{\varvec{\Theta }}}_g^+\) computed according to (25) and (22), respectively; a numerical sketch of this update is given below.
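The following sketch illustrates one CM-step under the CCC constraint, combining the pooled \({\varvec{S}}^{\left( k+1\right) }\), the common \({{\varvec{\gamma }}}\) and \({{\varvec{\Theta }}}\), the loading update (24), and the pooled isotropic term; the function name and argument conventions are assumptions, not the authors' code.

```python
import numpy as np

def update_ccc(S_list, Lam, psi, pi_g):
    """One CM-step sketch for the CCC constraint (Lambda_g = Lambda,
    Psi_g = psi I_p), under assumed notation."""
    p, q = Lam.shape
    S = sum(pg * Sg for Sg, pg in zip(S_list, pi_g))     # pooled S^{(k+1)}
    gamma = Lam.T @ np.linalg.inv(Lam @ Lam.T + psi * np.eye(p))
    Theta = np.eye(q) - gamma @ Lam + gamma @ S @ gamma.T
    Lam_new = S @ gamma.T @ np.linalg.inv(Theta)         # cf. (24)
    psi_new = np.trace(S - Lam_new @ gamma @ S) / p      # cf. (23) with common Lambda
    return Lam_new, psi_new
```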
Appendix C: True and estimated covariance matrices of Sect. 6.1
Because the loading matrices are not unique, for the simulated data of Examples 1 and 2 we restrict attention to comparing, for each \(g=1,\ldots ,G\), the true and estimated covariance matrices.
C.1 Example 1
C.2 Example 2
Cite this article
Subedi, S., Punzo, A., Ingrassia, S. et al. Clustering and classification via cluster-weighted factor analyzers. Adv Data Anal Classif 7, 5–40 (2013). https://doi.org/10.1007/s11634-013-0124-8