Clustering and classification via cluster-weighted factor analyzers


Abstract

In model-based clustering and classification, the cluster-weighted model is a convenient approach when the random vector of interest comprises a response variable \(Y\) and a vector \({\varvec{X}}\) of \(p\) covariates. However, its applicability may be limited when \(p\) is high. To overcome this problem, this paper assumes a latent factor structure for \({\varvec{X}}\) in each mixture component, under Gaussian assumptions. This leads to the cluster-weighted factor analyzers (CWFA) model. By imposing constraints on the variance of \(Y\) and the covariance matrix of \({\varvec{X}}\), a novel family of sixteen CWFA models is introduced for model-based clustering and classification. The alternating expectation-conditional maximization algorithm, for maximum likelihood estimation of the parameters of all models in the family, is described; to initialize the algorithm, a 5-step hierarchical procedure is proposed, which uses the nested structures of the models within the family and thus guarantees the natural ranking among the sixteen likelihoods. Artificial and real data show that these models have very good clustering and classification performance and that the algorithm recovers the parameters very well.


References

  • Airoldi J, Hoffmann R (1984) Age variation in voles (Microtus californicus, M. ochrogaster) and its significance for systematic studies. Occasional papers of the Museum of Natural History, vol 111. University of Kansas, Lawrence

  • Aitken AC (1926) On Bernoulli’s numerical solution of algebraic equations. Proc Royal Soc Edinburgh 46:289–305


  • Andrews JL, McNicholas PD, Subedi S (2011) Model-based classification via mixtures of multivariate t-distributions. Comput Stat Data Anal 55(1):520–529


  • Baek J, McLachlan GJ, Flack LK (2010) Mixtures of factor analyzers with common factor loadings: applications to the clustering and visualization of high-dimensional data. IEEE Trans Pattern Anal Mach Intell 32:1298–1309


  • Banfield JD, Raftery AE (1993) Model-based Gaussian and non-Gaussian clustering. Biometrics 49(3):803–821


  • Bartholomew DJ, Knott M (1999) Latent variable models and factor analysis. In: Kendall’s library of statistics, vol 7, 2nd edn. Edward Arnold, London

  • Bartlett M (1953) Factor analysis in psychology as a statistician sees it. In: Uppsala symposium on psychological factor analysis, Number 3 in Nordisk Psykologi’s Monograph Series, Uppsala, Sweden, pp 23–34. Almquist and Wiksell, Uppsala

  • Biernacki C, Celeux G, Govaert G (2000) Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans Pattern Anal Mach Intell 22(7):719–725


  • Böhning D, Dietz E, Schaub R, Schlattmann P, Lindsay B (1994) The distribution of the likelihood ratio for mixtures of densities from the one-parameter exponential family. Annals Inst Stat Math 46(2):373–388


  • Bouveyron C, Girard S, Schmid C (2007) High-dimensional data clustering. Comput Stat Data Anal 52(1):502–519


  • Browne RP, McNicholas PD (2012) Model-based clustering, classification, and discriminant analysis of data with mixed type. J Stat Plann Infer 142(11):2976–2984


  • Browne RP, McNicholas PD, Sparling MD (2012) Model-based learning using a mixture of mixtures of Gaussian and uniform distributions. IEEE Trans Pattern Anal Mach Intell 34(4):814–817


  • Carvalho C, Chang J, Lucas J, Nevins J, Wang Q, West M (2008) High-dimensional sparse factor modeling: applications in gene expression genomics. J Am Stat Assoc 103(484):1438–1456


  • Celeux G, Govaert G (1995) Gaussian parsimonious clustering models. Pattern Recogn 28(5):781–793


  • Dean N, Murphy TB, Downey G (2006) Using unlabelled data to update classification rules with applications in food authenticity studies. J Royal Stat Soc Ser C 55(1):1–14


  • Dempster A, Laird N, Rubin D (1977) Maximum likelihood from incomplete data via the EM algorithm. J Royal Stat Soc Ser B 39(1):1–38


  • DeSarbo W, Cron W (1988) A maximum likelihood methodology for clusterwise linear regression. J Classif 5(2):249–282


  • Flury B (1997) A first course in multivariate statistics. Springer, New York


  • Fraley C, Raftery AE (2002) Model-based clustering, discriminant analysis, and density estimation. J Am Stat Assoc 97(458):611–631


  • Fraley C, Raftery AE, Murphy TB, Scrucca L (2012) mclust version 4 for R: normal mixture modeling for model-based clustering, classification, and density estimation. Technical report 597, Department of Statistics, University of Washington, Seattle, Washington, USA

  • Frühwirth-Schnatter S (2006) Finite mixture and Markov switching models. Springer, New York


  • Gershenfeld N (1997) Nonlinear inference and cluster-weighted modeling. Ann New York Acad Sci 808(1):18–24


  • Ghahramani Z, Hinton G (1997) The EM algorithm for factor analyzers. Technical report CRG-TR-96-1, University of Toronto, Toronto

  • Grün B, Leisch F (2008) Flexmix version 2: finite mixtures with concomitant variables and varying and constant parameters. J Stat Softw 28(4):1–35


  • Hennig C (2000) Identifiability of models for clusterwise linear regression. J Classif 17(2):273–296


  • Hosmer D Jr (1973) A comparison of iterative maximum likelihood estimates of the parameters of a mixture of two normal distributions under three different types of sample. Biometrics 29(4):761–770


  • Hubert L, Arabie P (1985) Comparing partitions. J Classif 2(1):193–218


  • Ingrassia S, Minotti SC, Punzo A (2013) Model-based clustering via linear cluster-weighted models. Comput Stat Data Anal. doi:10.1016/j.csda.2013.02.012

  • Ingrassia S, Minotti SC, Punzo A, Vittadini G (2012a) Generalized linear cluster-weighted models. arXiv:1211.1171, http://arxiv.org/abs/1211.1171

  • Ingrassia S, Minotti SC, Vittadini G (2012b) Local statistical modeling via the cluster-weighted approach with elliptical distributions. J Classif 29(3):363–401


  • Karlis D, Santourian A (2009) Model-based clustering with non-elliptically contoured distributions. Stat Comput 19(1):73–83


  • Leisch F (2004) Flexmix: a general framework for finite mixture models and latent class regression in R. J Stat Softw 11(8):1–18


  • Lin T-I (2010) Robust mixture modeling using multivariate skew t distributions. Stat Comput 20:343–356


  • Lindsay BG (1995) Mixture models: theory, geometry and applications. In: NSF-CBMS regional conference series in probability and statistics, vol. 5. Institute of Mathematical Statistics, Hayward

  • McLachlan GJ, Basford KE (1988) Mixture models: inference and applications to clustering. Marcel Dekker, New York


  • McLachlan GJ, Peel D (2000a) Finite mixture models. Wiley, New York


  • McLachlan GJ, Peel D (2000b) Mixtures of factor analyzers. In: Proceedings of the seventeenth international conference on machine learning. Morgan Kaufmann, San Francisco, pp 599–606

  • McNicholas PD (2010) Model-based classification using latent Gaussian mixture models. J Stat Plann Infer 140(5):1175–1181


  • McNicholas PD, Jampani KR, McDaid AF, Murphy TB, Banks L (2011) PGMM: Parsimonious Gaussian Mixture Models. R package version 1.0.

  • McNicholas PD, Murphy TB (2008) Parsimonious Gaussian mixture models. Stat Comput 18(3):285–296


  • McNicholas PD, Murphy TB (2010a) Model-based clustering of longitudinal data. Can J Stat 38(1):153–168


  • McNicholas PD, Murphy TB (2010b) Model-based clustering of microarray expression data via latent Gaussian mixture models. Bioinformatics 26(21):2705–2712


  • McNicholas PD, Murphy TB, McDaid AF, Frost D (2010) Serial and parallel implementations of model-based clustering via parsimonious Gaussian mixture models. Comput Stat Data Anal 54(3):711–723


  • McNicholas PD, Subedi S (2012) Clustering gene expression time course data using mixtures of multivariate t-distributions. J Stat Plann Infer 142(5):1114–1127


  • Meng XL, Rubin DB (1993) Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika 80(2):267–278


  • Meng XL, van Dyk D (1997) The EM algorithm: an old folk-song sung to a fast new tune. J Royal Stat Soc Ser B (Stat Methodol) 59(3):511–567


  • Montanari A, Viroli C (2010) Heteroscedastic factor mixture analysis. Stat Modell 10(4):441–460


  • Montanari A, Viroli C (2011) Dimensionally reduced mixtures of regression models. J Stat Plann Infer 141(5):1744–1752


  • Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes in C: the art of scientific computation, 2nd edn. Cambridge University Press, Cambridge


  • Punzo A (2012) Flexible mixture modeling with the polynomial Gaussian cluster-weighted model. arXiv:1207.0939, http://arxiv.org/abs/1207.0939

  • Rand W (1971) Objective criteria for the evaluation of clustering methods. J Am Stat Assoc 66(336):846–850


  • Sakamoto Y, Ishiguro M, Kitagawa G (1983) Akaike information criterion statistics. Reidel, Boston


  • Schöner B (2000) Probabilistic characterization and synthesis of complex data driven systems. Ph. D. thesis, MIT

  • Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6(2):461–464


  • Scrucca L (2010) Dimension reduction for model-based clustering. Stat Comput 20(4):471–484


  • R Development Core Team (2012) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna

  • Spearman C (1904) The proof and measurement of association between two things. Am J Psychol 15(1):72–101


  • Tipping ME, Bishop CM (1999) Mixtures of probabilistic principal component analysers. Neural Comput 11(2):443–482


  • Titterington DM, Smith AFM, Makov UE (1985) Statistical analysis of finite mixture distributions. Wiley, New York


  • Wang Q, Carvalho C, Lucas J, West M (2007) BFRM: Bayesian factor regression modelling. Bull Int Soc Bayesian Anal 14(2):4–5


  • West M (2003) Bayesian factor regression models in the “large $p$, small $n$” paradigm. In: Bernardo J, Bayarri M, Berger J, Dawid A, Heckerman D, Smith A, West M (eds) Bayesian statistics, vol 7. Oxford University Press, Oxford, pp 723–732


  • Wolfe JH (1963) Object cluster analysis of social areas. Master’s thesis, University of California, Berkeley

  • Wolfe JH (1970) Pattern clustering by multivariate mixture analysis. Multivariate Behav Res 5(3):329–350


  • Woodbury MA (1950) Inverting modified matrices. Statistical Research Group, Memo. Rep. no. 42. Princeton University, Princeton


Acknowledgments

The authors sincerely thank the Associate Editor and the referees for helpful comments and valuable suggestions that have contributed to improving the quality of the manuscript. The work of Subedi and McNicholas was partly supported by an Early Researcher Award from the Ontario Ministry of Research and Innovation.


Corresponding author

Correspondence to Salvatore Ingrassia.

Appendices

Appendix A: The conditional distribution of \(Y|{\varvec{x}},{\varvec{u}}\)

To compute the distribution of \(Y|{\varvec{x}},{\varvec{u}}\), we begin by recalling that if \({\varvec{Z}}\sim N_q\left( {\varvec{m}},{{\varvec{\Gamma }}}\right) \) is a random vector with values in \(\mathbb{R }^q\) and if \({\varvec{Z}}\) is partitioned as \({\varvec{Z}}=\left( {\varvec{Z}}^{\prime }_1,{\varvec{Z}}^{\prime }_2\right) ^{\prime }\), where \({\varvec{Z}}_1\) takes values in \(\mathbb{R }^{q_1}\) and \({\varvec{Z}}_2\) in \(\mathbb{R }^{q_2}=\mathbb{R }^{q-q_1}\), then we can write

$$\begin{aligned} {\varvec{m}}= \begin{bmatrix} {\varvec{m}}_1 \\ {\varvec{m}}_2 \end{bmatrix} \quad \text{ and} \quad {{\varvec{\Gamma }}}= \begin{bmatrix} {{\varvec{\Gamma }}}_{11}&{{\varvec{\Gamma }}}_{12} \\ {{\varvec{\Gamma }}}_{21}&{{\varvec{\Gamma }}}_{22} \end{bmatrix}. \end{aligned}$$

Now, because \({\varvec{Z}}\) has a multivariate normal distribution, \({\varvec{Z}}_1|{\varvec{Z}}_2={\varvec{z}}_2\) and \({\varvec{Z}}_2\) are statistically independent with \({\varvec{Z}}_1|{\varvec{Z}}_2={\varvec{z}}_2 \sim N_{q_1}\left( {\varvec{m}}_{1|2}, {{\varvec{\Gamma }}}_{1|2}\right) \) and \({\varvec{Z}}_2 \sim N_{q_2}\left( {\varvec{m}}_2, {{\varvec{\Gamma }}}_{22}\right) \), where

$$\begin{aligned} {\varvec{m}}_{1|2} = {\varvec{m}}_1+ {{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}({\varvec{z}}_2-{\varvec{m}}_2) \quad \text{ and} \quad {{\varvec{\Gamma }}}_{1|2} ={{\varvec{\Gamma }}}_{11}-{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}{{\varvec{\Gamma }}}_{21}. \end{aligned}$$
(15)

Therefore, setting \({\varvec{Z}}=\left( {\varvec{Z}}^{\prime }_1,{\varvec{Z}}^{\prime }_2\right) ^{\prime }\), where \({\varvec{Z}}^{\prime }_1=Y\) and \({\varvec{Z}}_2=\left( {\varvec{X}}^{\prime }, {\varvec{U}}^{\prime }\right) ^{\prime }\), gives \({\varvec{m}}_1= \beta _0 + {{\varvec{\beta }}}^{\prime }_1 {{\varvec{\mu }}}\) and \({\varvec{m}}_2 = \left( {{\varvec{\mu }}}^{\prime },{\varvec{0}}^{\prime }\right) ^{\prime }\), with the elements in \({{\varvec{\Gamma }}}\) given by

$$\begin{aligned} {{\varvec{\Gamma }}}_{11} = {{\varvec{\beta }}}^{\prime }_1 {{\varvec{\Sigma }}}{{\varvec{\beta }}}_1 + \sigma ^2, \quad {{\varvec{\Gamma }}}_{22}&= \begin{bmatrix} {{\varvec{\Sigma }}}&{{\varvec{\Lambda }}}\\ {{\varvec{\Lambda }}}^{\prime }&{\varvec{I}}_q \\ \end{bmatrix}, \quad \text{ and} \quad {{\varvec{\Gamma }}}_{12} = \begin{bmatrix} {{\varvec{\beta }}}^{\prime }_1 {{\varvec{\Sigma }}}&{{\varvec{\beta }}}^{\prime }_1{{\varvec{\Lambda }}}\end{bmatrix}. \end{aligned}$$

It follows that \(Y|{\varvec{x}},{\varvec{u}}\) is Gaussian with mean \({\varvec{m}}_{y|{\varvec{x}},{\varvec{u}}}= \mathbb E \left( Y|{\varvec{x}},{\varvec{u}}\right) \) and variance \(\sigma ^2_{y|{\varvec{x}},{\varvec{u}}}=\text{ Var}\left( Y|{\varvec{x}},{\varvec{u}}\right) \), in accordance with the formulae in (15). Because the inverse matrix of \({{\varvec{\Gamma }}}_{22}\) is required in (15), the following formula for the inverse of a partitioned matrix is utilized:

$$\begin{aligned} \begin{bmatrix} {\varvec{A}}&{\varvec{B}}\\ {\varvec{C}}&{\varvec{D}}\end{bmatrix}^{-1} = \begin{bmatrix} \left( {\varvec{A}}- {\varvec{B}}{\varvec{D}}^{-1} {\varvec{C}}\right) ^{-1}&-{\varvec{A}}^{-1} {\varvec{B}}\left( {\varvec{D}}-{\varvec{C}}{\varvec{A}}^{-1} {\varvec{B}}\right) ^{-1} \\ -{\varvec{D}}^{-1} {\varvec{C}}\left( {\varvec{A}}- {\varvec{B}}{\varvec{D}}^{-1}{\varvec{C}}\right) ^{-1}&\left( {\varvec{D}}-{\varvec{C}}{\varvec{A}}^{-1} {\varvec{B}}\right) ^{-1} \end{bmatrix}. \end{aligned}$$

Again, writing \({{\varvec{\Sigma }}}={{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime } + {{\varvec{\Psi }}}\), we have

$$\begin{aligned} {{\varvec{\Gamma }}}_{22}^{-1} = \begin{bmatrix} {{\varvec{\Sigma }}}&{{\varvec{\Lambda }}}\\ {{\varvec{\Lambda }}}^{\prime }&{\varvec{I}}_q \\ \end{bmatrix}^{-1} = \begin{bmatrix} {{\varvec{\Psi }}}^{-1}&-{{\varvec{\Sigma }}}^{-1} {{\varvec{\Lambda }}}\left( {\varvec{I}}_q -{{\varvec{\Lambda }}}^{\prime } {{\varvec{\Sigma }}}^{-1} {{\varvec{\Lambda }}}\right) ^{-1} \\ -{{\varvec{\Lambda }}}^{\prime } {{\varvec{\Psi }}}^{-1}&\left( {\varvec{I}}_q -{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Sigma }}}^{-1} {{\varvec{\Lambda }}}\right) ^{-1} \end{bmatrix}. \end{aligned}$$

Moreover, according to the Woodbury identity (Woodbury 1950),

$$\begin{aligned} {{\varvec{\Sigma }}}^{-1}= ({{\varvec{\Lambda }}}{{\varvec{\Lambda }}}^{\prime } + {{\varvec{\Psi }}})^{-1} = {{\varvec{\Psi }}}^{-1} - {{\varvec{\Psi }}}^{-1} {{\varvec{\Lambda }}}({\varvec{I}}_q + \Lambda ^{\prime } {{\varvec{\Psi }}}^{-1} {{\varvec{\Lambda }}})^{-1} {{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}. \end{aligned}$$

Now,

$$\begin{aligned} {{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1} = \begin{bmatrix} {{\varvec{\beta }}}^{\prime }_1{{\varvec{\Sigma }}}&{{\varvec{\beta }}}^{\prime }_1{{\varvec{\Lambda }}}\end{bmatrix} \begin{bmatrix} {{\varvec{\Psi }}}^{-1}&-{{\varvec{\Sigma }}}^{-1} {{\varvec{\Lambda }}}\left( {\varvec{I}}_q -{{\varvec{\Lambda }}}^{\prime } {{\varvec{\Sigma }}}^{-1} {{\varvec{\Lambda }}}\right) ^{-1} \\ -{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}^{-1}&({\varvec{I}}_q -{{\varvec{\Lambda }}}^{\prime }{{\varvec{\Sigma }}}^{-1} {{\varvec{\Lambda }}})^{-1} \end{bmatrix} = \begin{bmatrix} {{\varvec{\beta }}}^{\prime }_1&0 \end{bmatrix}. \end{aligned}$$

Finally, according to (15), we have

$$\begin{aligned} {\varvec{m}}_{y|{\varvec{x}},{\varvec{u}}}&= {\varvec{m}}_1+{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1} \begin{bmatrix} {\varvec{z}}_2-{\varvec{m}}_2 \end{bmatrix} = \left( \beta _0 + {{\varvec{\beta }}}^{\prime }_1 {{\varvec{\mu }}}\right) + \begin{bmatrix} {{\varvec{\beta }}}^{\prime }_1&0 \end{bmatrix} \begin{bmatrix} {\varvec{x}}-{{\varvec{\mu }}}\\ {\varvec{u}}-{\varvec{0}}\end{bmatrix} = \beta _0 + {{\varvec{\beta }}}^{\prime }_1 {\varvec{x}}, \\ \sigma ^2_{y|{\varvec{x}},{\varvec{u}}}&= {{\varvec{\Gamma }}}_{11}-{{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}{{\varvec{\Gamma }}}_{21} ={{\varvec{\beta }}}^{\prime }_1{{\varvec{\Sigma }}}{{\varvec{\beta }}}_1+\sigma ^2- \begin{bmatrix} {{\varvec{\beta }}}^{\prime }_1&0 \end{bmatrix} \begin{bmatrix} {{\varvec{\Sigma }}}{{\varvec{\beta }}}_1 \\ {{\varvec{\Lambda }}}^{\prime }{{\varvec{\beta }}}_1 \end{bmatrix} = \sigma ^2. \end{aligned}$$
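
The result can also be checked numerically. The following R sketch uses purely illustrative values of \(p\), \(q\), \({{\varvec{\Lambda }}}\), \({{\varvec{\Psi }}}\), \({{\varvec{\beta }}}_1\) and \(\sigma ^2\) (not taken from the paper) to build the joint covariance matrix \({{\varvec{\Gamma }}}\) of \(\left( Y,{\varvec{X}}^{\prime },{\varvec{U}}^{\prime }\right) ^{\prime }\) and verifies that \({{\varvec{\Gamma }}}_{12}{{\varvec{\Gamma }}}_{22}^{-1}\) equals \(\begin{bmatrix} {{\varvec{\beta }}}^{\prime }_1&{\varvec{0}}^{\prime } \end{bmatrix}\) and that the conditional variance reduces to \(\sigma ^2\).

```r
## Numerical check of Appendix A (illustrative values only)
set.seed(1)
p <- 4; q <- 2
Lambda <- matrix(rnorm(p * q), p, q)     # factor loadings
Psi    <- diag(runif(p, 0.5, 1.5))       # diagonal error covariance of X
Sigma  <- Lambda %*% t(Lambda) + Psi     # Cov(X)
beta1  <- rnorm(p); sigma2 <- 0.7        # slope vector and error variance of Y

## Joint covariance of Z = (Y, X', U')'
Gamma11 <- drop(t(beta1) %*% Sigma %*% beta1) + sigma2
Gamma12 <- cbind(t(beta1) %*% Sigma, t(beta1) %*% Lambda)
Gamma22 <- rbind(cbind(Sigma, Lambda), cbind(t(Lambda), diag(q)))

coef <- Gamma12 %*% solve(Gamma22)                            # should be (beta1', 0')
cvar <- Gamma11 - Gamma12 %*% solve(Gamma22) %*% t(Gamma12)   # should be sigma2
all.equal(drop(coef), c(beta1, rep(0, q)))   # TRUE
all.equal(drop(cvar), sigma2)                # TRUE
```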

Appendix B: Details on the AECM algorithm for the parsimonious models

This appendix details the AECM algorithm for the models summarized in Table 1.

B.1 Constraint on the \(Y\) variable

In all of the models whose identifier starts with ‘C’, that is, the models in which the error variances \(\sigma _g^2\) of the response variable \(Y\) are constrained to be equal across groups, i.e., \(\sigma _g^2 =\sigma ^2\) for \(g=1,\ldots ,G\), the common variance \(\sigma ^2\) at the \(\left( k+1\right) \)th iteration of the algorithm is computed as

$$\begin{aligned} \sigma ^{2\left( k+1\right) } = \frac{1}{n} \sum _{i=1}^n \sum _{g=1}^G z_{ig}^{\left( k+1\right) } \left\{ y_i-\left( \beta _{0g}^{\left( k+1\right) } + {{\varvec{\beta }}}_{1g}^{^{\prime }\left( k+1\right) }{\varvec{x}}_i\right) \right\} ^2. \end{aligned}$$
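
As an illustration, a minimal R sketch of this pooled update follows; the object names (y, X, z, beta0, Beta1) are hypothetical placeholders for the data, the posterior probabilities \(z_{ig}^{\left( k+1\right) }\) and the current regression parameters produced by the preceding steps of the AECM algorithm.

```r
## Pooled update of sigma^2 for the 'C' models (hypothetical object names)
## y: n-vector of responses;  X: n x p matrix of covariates
## z: n x G matrix of posterior probabilities z_{ig}^{(k+1)}
## beta0: G-vector of intercepts;  Beta1: p x G matrix of slope vectors
update_sigma2 <- function(y, X, z, beta0, Beta1) {
  G <- ncol(z)
  resid2 <- sapply(seq_len(G), function(g) (y - beta0[g] - X %*% Beta1[, g])^2)
  sum(z * resid2) / length(y)   # common sigma^2 at iteration (k+1)
}
```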

B.2 Constraints on the \({\varvec{X}}\) variable

With respect to the \({\varvec{X}}\) variable, as explained in Sect. 3.2, we considered the following constraints on \({{\varvec{\Sigma }}}_g={{\varvec{\Lambda }}}_g{{\varvec{\Lambda }}}_g^{\prime }+{{\varvec{\Psi }}}_g\): (i) equal loading matrices \({{\varvec{\Lambda }}}_g = {{\varvec{\Lambda }}}\), (ii) equal error variance \({{\varvec{\Psi }}}_g = {{\varvec{\Psi }}}\), and (iii) isotropic assumption: \({{\varvec{\Psi }}}_g = \psi _g {\varvec{I}}_p\). In such cases, the \(g\)th term of the expected complete-data log-likelihood \(Q_2\left( {{\varvec{\theta }}}_2; {{\varvec{\theta }}}^{(k+1/2)}\right) \), and then the estimates (12) and (13) in Sect. 4.3, are computed as follows.

B.2.1 Isotropic assumption: \({{\varvec{\Psi }}}_g=\psi _g{\varvec{I}}_p\)

In this case, Eq. (9) becomes

$$\begin{aligned} Q_2\left( {{\varvec{\Lambda }}}_g,\psi _g;{{\varvec{\theta }}}^{(k+1/2)}\right)&= \text{ C}\left( {{\varvec{\theta }}}_1^{\left( k+1\right) }\right) +\frac{1}{2} n_g^{\left( k+1\right) } \ln |\psi ^{-1}_g{\varvec{I}}_p | - \frac{1}{2} n_g^{\left( k+1\right) } \psi _g^{-1}\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } \right\} \\&\quad + n_g^{\left( k+1\right) } \psi ^{-1}_g \text{ tr}\left\{ {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Lambda }}}_g\right\} -\frac{1}{2} n_g^{\left( k+1\right) }\psi _g^{-1}\text{ tr}\left\{ {{\varvec{\Lambda }}}_g {{\varvec{\Theta }}}^{\left( k\right) }_g {{\varvec{\Lambda }}}_g^{\prime } \right\} , \end{aligned}$$

yielding

$$\begin{aligned} \frac{\partial Q_2}{\partial \psi ^{-1}_g} = \frac{1}{2} n_g^{\left( k+1\right) } \left[ p\psi _g- \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } \right\} + 2\text{ tr}\left\{ {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Lambda }}}_g\right\} - \text{ tr}\left\{ {{\varvec{\Lambda }}}_g {{\varvec{\Theta }}}^{\left( k\right) }_g {{\varvec{\Lambda }}}_g^{\prime } \right\} \right] . \end{aligned}$$

Then the estimate \(\hat{\psi }_g\) satisfies

$$\begin{aligned} \frac{\partial Q_2}{\partial \psi ^{-1}_g} = 0 \,\Rightarrow \,p \psi _g- \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } \right\} + 2\text{ tr}\left\{ {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Lambda }}}_g\right\} - \text{ tr}\left\{ {{\varvec{\Lambda }}}_g {{\varvec{\Theta }}}^{\left( k\right) }_g {{\varvec{\Lambda }}}_g^{\prime } \right\} = 0. \end{aligned}$$

Thus, according to (12), for \({{\varvec{\Lambda }}}_g=\hat{{{\varvec{\Lambda }}}}_g = {\varvec{S}}^{\left( k+1\right) }_g {{\varvec{\gamma }}}^{\prime \left( k\right) }_g{{\varvec{\Theta }}}_g^{-1}\) we get \(\text{ tr}\left\{ {{\varvec{\Lambda }}}_g {{\varvec{\Theta }}}^{\left( k\right) }_g {{\varvec{\Lambda }}}_g^{\prime } \right\} = \text{ tr} \left\{ {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Lambda }}}_g \right\} \) and, finally, \(\hat{\psi }_g =\frac{1}{p}\text{ tr}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } -\hat{{{\varvec{\Lambda }}}}_g}{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} .\) Thus,

$$\begin{aligned} \psi ^+_g&= \frac{1}{p}\text{ tr}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } - {{\varvec{\Lambda }}}_g}{{\varvec{\gamma }}}_g^+ {\varvec{S}}_g^{\left( k+1\right) }\right\} \\ {{\varvec{\gamma }}}_g^+&= {{\varvec{\Lambda }}}^{\prime }_g\left( {{\varvec{\Lambda }}}_g{{\varvec{\Lambda }}}^{\prime }_g+\psi _g^+ {\varvec{I}}_p\right) ^{-1}, \nonumber \end{aligned}$$
(16)

with \({{\varvec{\Theta }}}_g^+\) computed according to (14).
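
A minimal R sketch of the isotropic update in (16) is given below; the arguments are hypothetical objects holding \({\varvec{S}}_g^{\left( k+1\right) }\), the updated loading matrix and the current \({{\varvec{\gamma }}}_g\).

```r
## Isotropic update for models with psi_g I_p (hypothetical inputs):
## S_g: p x p matrix S_g^{(k+1)};  Lambda_g: updated p x q loadings;
## gamma_g: q x p matrix gamma_g
update_psi_g <- function(S_g, Lambda_g, gamma_g) {
  p <- nrow(S_g)
  sum(diag(S_g - Lambda_g %*% gamma_g %*% S_g)) / p      # psi_g^+ as in (16)
}
update_gamma_g <- function(Lambda_g, psi_g) {            # gamma_g^+ as in (16)
  p <- nrow(Lambda_g)
  t(Lambda_g) %*% solve(Lambda_g %*% t(Lambda_g) + psi_g * diag(p))
}
```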

B.2.2 Equal error variance: \({{\varvec{\Psi }}}_g ={{\varvec{\Psi }}}\)

In this case, from Eq. (9), we have

$$\begin{aligned} Q_2\left( {{\varvec{\Lambda }}}_g, {{\varvec{\Psi }}}; {{\varvec{\theta }}}^{(k+1/2)}\right)&= \text{ C}({{\varvec{\theta }}}_1^{\left( k+1\right) }) - \frac{1}{2} n_g^{\left( k+1\right) }\ln | {{\varvec{\Psi }}}| - \frac{1}{2} n_g^{\left( k+1\right) } \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Psi }}}^{-1} \right\} \\&\quad + n_g^{\left( k+1\right) }\text{ tr}\left\{ {{\varvec{\Lambda }}}_g {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Psi }}}^{-1} \right\} -\frac{1}{2} n_g^{\left( k+1\right) }\text{ tr}\left\{ {{\varvec{\Lambda }}}_g^{\prime }{{\varvec{\Psi }}}^{-1} {{\varvec{\Lambda }}}_g {{\varvec{\Theta }}}^{\left( k\right) }_g\right\} , \end{aligned}$$

yielding

$$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}}_g, {{\varvec{\Psi }}}; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Psi }}}^{-1}}&= \frac{1}{2}n^{\left( k+1\right) }_g {{\varvec{\Psi }}}- \frac{1}{2}n^{\left( k+1\right) }_g {\varvec{S}}_g^{\left( k+1\right) }+ n^{\left( k+1\right) }_g {\varvec{S}}_g^{^{\prime }\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }\\&\quad - \frac{1}{2}n^{\left( k+1\right) }_g {{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}^{\left( k\right) }_g{{\varvec{\Lambda }}}_g^{\prime }. \end{aligned}$$

Then the estimate \(\hat{{{\varvec{\Psi }}}}\) is obtained by solving

$$\begin{aligned} \sum _{g=1}^G \frac{\partial Q_2\left( {{\varvec{\Lambda }}}_g, {{\varvec{\Psi }}}; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Psi }}}^{-1}} = {\varvec{0}}, \end{aligned}$$

that is

$$\begin{aligned} \frac{n}{2}{{\varvec{\Psi }}}- \frac{1}{2} \sum _{g=1}^G n^{\left( k+1\right) }_g {\varvec{S}}_g^{\left( k+1\right) }+ \sum _{g=1}^G n^{\left( k+1\right) }_g {\varvec{S}}_g^{^{\prime }\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }- \frac{1}{2} \sum _{g=1}^G n^{\left( k+1\right) }_g {{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}^{\left( k\right) }_g{{\varvec{\Lambda }}}_g^{\prime } = {\varvec{0}}, \end{aligned}$$

which can be simplified as

$$\begin{aligned} \frac{n}{2}{{\varvec{\Psi }}}- \frac{1}{2} \sum _{g=1}^G n^{\left( k+1\right) }_g \left[ {\varvec{S}}_g^{\left( k+1\right) }- 2 {\varvec{S}}_g^{^{\prime }\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) }{{\varvec{\Lambda }}}_g^{\prime }+ {{\varvec{\Lambda }}}_g{{\varvec{\Theta }}}^{\left( k\right) }_g{{\varvec{\Lambda }}}_g^{\prime } \right] = {\varvec{0}}, \end{aligned}$$

with \(\displaystyle \sum _{g=1}^G n^{\left( k+1\right) }_g =n\). Again, according to (12), for \({{\varvec{\Lambda }}}_g=\hat{{{\varvec{\Lambda }}}}_g = {\varvec{S}}^{\left( k+1\right) }_g {{\varvec{\gamma }}}^{\prime \left( k\right) }_g{{\varvec{\Theta }}}_g^{-1}\) we get \(\hat{{{\varvec{\Lambda }}}}_g{{\varvec{\Theta }}}^{\left( k\right) }_g \hat{{{\varvec{\Lambda }}}}_g^{\prime } =\hat{{{\varvec{\Lambda }}}}_g {{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } \) and, afterwards,

$$\begin{aligned} \hat{{{\varvec{\Psi }}}}&= \sum _{g=1}^G\frac{n_g}{n}\text{ diag} \left\{ {{\varvec{S}}_g^{\left( k+1\right) }-\hat{{{\varvec{\Lambda }}}}_g} {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} \nonumber \\&= \sum _{g=1}^G\pi _g^{\left( k+1\right) } \text{ diag}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } - \hat{{{\varvec{\Lambda }}}}_g}{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} . \end{aligned}$$
(17)

Thus,

$$\begin{aligned} {{\varvec{\Psi }}}^+&= \sum _{g=1}^G \pi _g^{\left( k+1\right) } \text{ diag}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } - {{\varvec{\Lambda }}}^+_g} {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) }\right\} , \\ {{\varvec{\gamma }}}_g^+&= {{\varvec{\Lambda }}}^{\prime }_g\left( {{\varvec{\Lambda }}}^+_g {{\varvec{\Lambda }}}^{\prime +}_g + {{\varvec{\Psi }}}^+\right) ^{-1}, \nonumber \end{aligned}$$
(18)

where \({{\varvec{\Theta }}}_g^+\) is computed according to (14).
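
The common error-variance update in (18) can be sketched in R as follows; the per-group lists and the vector of mixing proportions are hypothetical placeholders for the quantities computed in the preceding steps.

```r
## Common error-variance update Psi^+ as in (18) (hypothetical inputs):
## S_list, Lambda_list, gamma_list: per-group lists of S_g^{(k+1)}, Lambda_g^+, gamma_g
## pi_g: vector of mixing proportions pi_g^{(k+1)}
update_Psi <- function(S_list, Lambda_list, gamma_list, pi_g) {
  terms <- Map(function(S, L, gam, w) w * diag(diag(S - L %*% gam %*% S)),
               S_list, Lambda_list, gamma_list, pi_g)
  Reduce(`+`, terms)    # diagonal p x p matrix Psi^+
}
```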

B.2.3 Equal loading matrices: \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\)

In this case, Eq. (9) can be written as

$$\begin{aligned} Q_2\left( {{\varvec{\Lambda }}}, {{\varvec{\Psi }}}_g; {{\varvec{\theta }}}^{(k+1/2)}\right)&= \text{ C}({{\varvec{\theta }}}_1^{\left( k+1\right) }) +\frac{1}{2} n_g^{\left( k+1\right) }\ln | {{\varvec{\Psi }}}^{-1}_g| - \frac{1}{2} n_g^{\left( k+1\right) } \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Psi }}}_g^{-1} \right\} \\&\quad + n_g^{\left( k+1\right) }\text{ tr}\left\{ {{\varvec{\Lambda }}}{{\varvec{\gamma }}}_g^{\left( k\right) }{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\Psi }}}^{-1}_g \right\} -\frac{1}{2} n_g^{\left( k+1\right) }\text{ tr}\left\{ {{\varvec{\Lambda }}}^{\prime }{{\varvec{\Psi }}}_g^{-1} {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g\right\} , \end{aligned}$$

yielding

$$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}}, {{\varvec{\Psi }}}_g; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}}&= n_g^{\left( k+1\right) } {{\varvec{\Psi }}}_g^{-1} {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) }_g - n_g^{\left( k+1\right) } {{\varvec{\Psi }}}_g^{-1} {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g = {\varvec{0}}. \end{aligned}$$

Then the estimate \(\hat{{{\varvec{\Lambda }}}}\) is obtained by solving

$$\begin{aligned} \sum _{g=1}^G \frac{\partial Q_2\left( {{\varvec{\Lambda }}}, {{\varvec{\Psi }}}_g; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}} = \sum _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Psi }}}_g^{-1} \left[ {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) } - {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g \right] = {\varvec{0}}, \end{aligned}$$
(19)

with \({{\varvec{\gamma }}}^{\left( k\right) }_g ={{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }+{{\varvec{\Psi }}}^{\left( k\right) }_g\right) ^{-1}\). In this case, (19) cannot be solved for the loading matrix in closed form all at once; it must be solved in a row-by-row manner, as suggested by McNicholas and Murphy (2008). Therefore,

$$\begin{aligned} \lambda ^+_i&= \mathbf r _i\left( \sum _{g=1}^G\frac{n_g}{\psi _{g\left( i\right) }} {{\varvec{\Theta }}}_g \right) ^{-1}\end{aligned}$$
(20)
$$\begin{aligned} {{\varvec{\gamma }}}_g^+&= {{\varvec{\Lambda }}}^{\prime }\left( {{\varvec{\Lambda }}}^+ {{\varvec{\Lambda }}}^{\prime +} + {{\varvec{\Psi }}}^+_g\right) ^{-1}\end{aligned}$$
(21)
$$\begin{aligned} {{\varvec{\Theta }}}^+_g&= {\varvec{I}}_q-{{\varvec{\gamma }}}^+_g {{\varvec{\Lambda }}}^+ + {{\varvec{\gamma }}}^+_g {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime +}_g , \end{aligned}$$
(22)

where \(\lambda ^+_i\) is the \(i\)th row of the matrix \({{\varvec{\Lambda }}}^+\), \(\psi _{g\left( i\right) }\) is the \(i\)th diagonal element of \({{\varvec{\Psi }}}_g\), and \(\mathbf r _i\) represents the \(i\)th row of the matrix \(\displaystyle \sum \nolimits _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Psi }}}_g^{-1} {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) }\).
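
The row-by-row update (20) can be sketched in R as follows; all inputs are hypothetical placeholders for the per-group quantities \({\varvec{S}}_g^{\left( k+1\right) }\), \({{\varvec{\gamma }}}_g\), \({{\varvec{\Theta }}}_g\), \({{\varvec{\Psi }}}_g\) and the expected group sizes \(n_g^{\left( k+1\right) }\).

```r
## Row-by-row update of the common loading matrix, eq. (20) (hypothetical inputs):
## S_list (p x p), gamma_list (q x p), Theta_list (q x q), Psi_list (p x p diagonal)
## are per-group lists; n_g is the vector of expected group sizes n_g^{(k+1)}
update_Lambda_common <- function(S_list, gamma_list, Theta_list, Psi_list, n_g) {
  p <- nrow(S_list[[1]]); q <- nrow(Theta_list[[1]])
  ## R = sum_g n_g Psi_g^{-1} S_g gamma_g'; its i-th row is r_i
  R <- Reduce(`+`, Map(function(S, gam, Psi, n) n * solve(Psi) %*% S %*% t(gam),
                       S_list, gamma_list, Psi_list, n_g))
  Lambda_new <- matrix(0, p, q)
  for (i in seq_len(p)) {
    ## A = sum_g (n_g / psi_{g(i)}) Theta_g
    A <- Reduce(`+`, Map(function(Th, Psi, n) (n / Psi[i, i]) * Th,
                         Theta_list, Psi_list, n_g))
    Lambda_new[i, ] <- R[i, ] %*% solve(A)
  }
  Lambda_new
}
```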

B.2.4 Further details

A further schematization is given here without considering the constraint on the \(Y\) variable; thus, with reference to the model identifiers, only the last three letters are used.

  • Models ending in UUU: no constraints are imposed.

  • Models ending in UUC: \({{\varvec{\Psi }}}_g =\psi _g{\varvec{I}}_p\), where the parameter \(\psi _g\) is updated according to (16).

  • Models ending in UCU: \({{\varvec{\Psi }}}_g ={{\varvec{\Psi }}}\), where the matrix \({{\varvec{\Psi }}}\) is updated according to (18).

  • Models ending in UCC: \({{\varvec{\Psi }}}_g =\psi {\varvec{I}}_p\). By combining (16) and (18), we obtain

    $$\begin{aligned} \hat{\psi }&= \frac{1}{p}\sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{n}\text{ tr}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } -\hat{{{\varvec{\Lambda }}}}_g}{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} \nonumber \\&= \frac{1}{p}\sum _{g=1}^G \hat{\pi }_g^{\left( k+1\right) } \text{ tr}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } -\hat{{{\varvec{\Lambda }}}}_g}{{\varvec{\gamma }}}_g^{\left( k\right) } {\varvec{S}}_g^{\left( k+1\right) }\right\} . \end{aligned}$$
    (23)

    Thus, \(\psi ^+ = ({1}/{p})\sum _{g=1}^G \pi _g^{\left( k+1\right) } \text{ tr}\left\{ {{\varvec{S}}_g^{\left( k+1\right) } - {{\varvec{\Lambda }}}^+_g} {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) }\right\} \) and \({{\varvec{\gamma }}}^+_g = {{\varvec{\Lambda }}}^{\prime +}_g \left( {{\varvec{\Lambda }}}^+_g {{\varvec{\Lambda }}}^{\prime +}_g+\psi ^+ {\varvec{I}}_p\right) ^{-1}\), with \({{\varvec{\Theta }}}_g^+\) computed according to (14).

  • Models ending in CUU: \({{\varvec{\Lambda }}}_g ={{\varvec{\Lambda }}}\), where the matrix \({{\varvec{\Lambda }}}\) is updated according to (20). In this case, \({{\varvec{\Psi }}}_g\) is estimated directly from (11) and thus \({{\varvec{\Psi }}}^+_g =\text{ diag}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2 {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) } +{{\varvec{\Lambda }}}^+ {{\varvec{\Theta }}}_g {{\varvec{\Lambda }}}^{\prime +} \right\} \), with \({{\varvec{\gamma }}}_g^+\) and \({{\varvec{\Theta }}}^+_g\) computed according to (21) and (22), respectively.

  • Models ending in CUC: \({{\varvec{\Lambda }}}_g ={{\varvec{\Lambda }}}\) and \({{\varvec{\Psi }}}_g =\psi _g{\varvec{I}}_p\). In this case, Eq. (19), for \({{\varvec{\Psi }}}_g =\psi _g{\varvec{I}}_p\), yields

    $$\begin{aligned} \sum _{g=1}^G \frac{\partial Q_2 \left( {{\varvec{\Lambda }}}, \psi _g; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}}&= \sum _{g=1}^G n_g^{\left( k+1\right) } \psi _g^{-1}{\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) } \\&\quad - \sum _{g=1}^G n_g^{\left( k+1\right) } \psi _g^{-1} {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g = {\varvec{0}}, \end{aligned}$$

    and afterwards

    $$\begin{aligned} \hat{{{\varvec{\Lambda }}}} = \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{^{\prime }\left( k\right) } \right) \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {{\varvec{\Theta }}}^{\left( k\right) }_g\right) ^{-1}, \end{aligned}$$

    with \({{\varvec{\gamma }}}^{\left( k\right) }_g ={{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{^{\prime }\left( k\right) }+\psi ^{\left( k\right) }_g {\varvec{I}}_p\right) ^{-1}\). Moreover, from

    $$\begin{aligned} \frac{\partial Q_2\left( {{\varvec{\Lambda }}}, \psi _g; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial \psi _g^{-1}}&= \frac{p}{2} n_g^{\left( k+1\right) } \psi _g -\frac{n_g^{\left( k+1\right) }}{2} \left[ \text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }\right\} -2 \text{ tr}\left\{ {\varvec{S}}_g^{^{\prime }\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) }_g {{\varvec{\Lambda }}}^{\prime }\right\} \right. \\&\quad \left. + \text{ tr}\left\{ {{\varvec{\Lambda }}}{{\varvec{\Theta }}}_g^{\left( k\right) } {{\varvec{\Lambda }}}^{\prime }\right\} \right] = 0 \end{aligned}$$

    we get \(\hat{\psi }_g =({1}/{p})\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2\hat{{{\varvec{\Lambda }}}} {{\varvec{\gamma }}}^{\left( k\right) }_g {\varvec{S}}_g^{\left( k+1\right) }+\hat{{{\varvec{\Lambda }}}} {{\varvec{\Theta }}}^{\left( k\right) }_g\hat{{{\varvec{\Lambda }}}}^{\prime }\right\} \). Thus,

    $$\begin{aligned} {{\varvec{\Lambda }}}^+&= \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}_g^{\prime } \right) \left( \sum _{g=1}^G \frac{n_g^{\left( k+1\right) }}{\psi _g} {{\varvec{\Theta }}}_g\right) ^{-1}\\ \psi ^+_g&= \frac{1}{p}\text{ tr}\left\{ {\varvec{S}}_g^{\left( k+1\right) }-2 {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}_g {\varvec{S}}_g^{\left( k+1\right) }+ {{\varvec{\Lambda }}}^+ {{\varvec{\Theta }}}_g{{\varvec{\Lambda }}}^{\prime +} \right\} \\ {{\varvec{\gamma }}}^+_g&= {{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +} +\psi ^+_g {\varvec{I}}_p\right) ^{-1}, \end{aligned}$$

    with \({{\varvec{\Theta }}}_g^+\) computed according to (22).

  • Models ending in CCU: \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\) and \({{\varvec{\Psi }}}_g ={{\varvec{\Psi }}}\), so that \({{\varvec{\gamma }}}^{\left( k\right) } ={{\varvec{\Lambda }}}^{\prime \left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{\prime \left( k\right) }+{{\varvec{\Psi }}}^{\left( k\right) }\right) ^{-1}\). Setting \({{\varvec{\Psi }}}_g={{\varvec{\Psi }}}\) in (19), we get

    $$\begin{aligned} \sum _{g=1}^G \frac{\partial Q_2\left( {{\varvec{\Lambda }}}, {{\varvec{\Psi }}}; {{\varvec{\theta }}}^{(k+1/2)}\right) }{\partial {{\varvec{\Lambda }}}}&= \sum _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Psi }}}^{-1} \left[ {\varvec{S}}_g^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) } - {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) }_g \right] \\&= {{\varvec{\Psi }}}^{-1} \left[ {{\varvec{\gamma }}}^{\prime \left( k\right) } \sum _{g=1}^G n_g^{\left( k+1\right) } {\varvec{S}}_g^{\left( k+1\right) } - {{\varvec{\Lambda }}}\sum _{g=1}^G n_g^{\left( k+1\right) } {{\varvec{\Theta }}}^{\left( k\right) }_g \right] \\&= {{\varvec{\Psi }}}^{-1} \left[ {{\varvec{\gamma }}}^{\prime \left( k\right) } {\varvec{S}}^{\left( k+1\right) } - {{\varvec{\Lambda }}}{{\varvec{\Theta }}}^{\left( k\right) } \right] = {\varvec{0}}, \end{aligned}$$

    where \({\varvec{S}}^{\left( k+1\right) } = \sum _{g=1}^G \pi _g^{\left( k+1\right) } {\varvec{S}}_g^{\left( k+1\right) } \) and \({{\varvec{\Theta }}}^{\left( k\right) } =\sum _{g=1}^G \pi _g^{\left( k+1\right) } {{\varvec{\Theta }}}^{\left( k\right) }_g = {\varvec{I}}_q-{{\varvec{\gamma }}}^{\left( k\right) } {{\varvec{\Lambda }}}^{\left( k\right) }+ {{\varvec{\gamma }}}^{\left( k\right) } {\varvec{S}}^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) }\). Thus,

    $$\begin{aligned} \hat{{{\varvec{\Lambda }}}} ={\varvec{S}}^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime \left( k\right) } \left( {{\varvec{\Theta }}}^{\left( k\right) }\right) ^{-1}. \end{aligned}$$
    (24)

    Moreover, setting \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\) in (17), we get \(\hat{{{\varvec{\Psi }}}} = \text{ diag}\left\{ {{\varvec{S}}^{\left( k+1\right) }-\hat{{{\varvec{\Lambda }}}}}{{\varvec{\gamma }}}^{\left( k\right) } {\varvec{S}}^{\left( k+1\right) }\right\} \). Hence,

    $$\begin{aligned} {{\varvec{\Lambda }}}^+&= {\varvec{S}}^{\left( k+1\right) } {{\varvec{\gamma }}}^{\prime } {{\varvec{\Theta }}}^{-1}\\ {{\varvec{\Psi }}}^+&= \text{ diag}\left\{ {\varvec{S}}^{\left( k+1\right) }- {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}{\varvec{S}}^{\left( k+1\right) }\right\} \nonumber \\ {{\varvec{\gamma }}}^+_g&= {{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +} +{{\varvec{\Psi }}}^+\right) ^{-1}, \nonumber \end{aligned}$$
    (25)

    with \({{\varvec{\Theta }}}_g^+\) computed according to (22).

  • Models ending in CCC: \({{\varvec{\Lambda }}}_g ={{\varvec{\Lambda }}}\) and \({{\varvec{\Psi }}}_g =\psi {\varvec{I}}_p\), so that \({{\varvec{\gamma }}}^{\left( k\right) } ={{\varvec{\Lambda }}}^{\prime \left( k\right) }\left( {{\varvec{\Lambda }}}^{\left( k\right) }{{\varvec{\Lambda }}}^{\prime \left( k\right) }+\psi ^{\left( k\right) } {\varvec{I}}_p\right) ^{-1}\). Here, the estimated loading matrix is again (24), while the isotropic term obtained from (23) for \({{\varvec{\Lambda }}}_g={{\varvec{\Lambda }}}\) is \(\hat{\psi } = ({1}/{p})\text{ tr}\big \{ {\varvec{S}}^{\left( k+1\right) }- \hat{{{\varvec{\Lambda }}}}{{\varvec{\gamma }}}^{\left( k\right) }{\varvec{S}}^{\left( k+1\right) }\big \} \), with \({{\varvec{\gamma }}}^{\left( k\right) }\) as defined above. Hence, \(\psi ^+ = ({1}/{p}) \text{ tr}\left\{ {\varvec{S}}^{\left( k+1\right) }- {{\varvec{\Lambda }}}^+ {{\varvec{\gamma }}}{\varvec{S}}^{\left( k+1\right) }\right\} \) and \({{\varvec{\gamma }}}^+ = {{\varvec{\Lambda }}}^{\prime +}\left( {{\varvec{\Lambda }}}^+{{\varvec{\Lambda }}}^{\prime +} + \psi ^+ {\varvec{I}}_p\right) ^{-1}\), with \({{\varvec{\Lambda }}}^+\) and \({{\varvec{\Theta }}}_g^+\) computed according to (25) and (22), respectively. A minimal sketch of these pooled updates is given after this list.
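
For the simplest case, the following R sketch implements the pooled CCC updates of \({{\varvec{\Lambda }}}\), \(\psi \) and \({{\varvec{\gamma }}}\) described in the last item above; all object names are hypothetical, and the matrices \({\varvec{S}}_g^{\left( k+1\right) }\), the mixing proportions and the current \({{\varvec{\gamma }}}\) and \({{\varvec{\Lambda }}}\) are assumed to be available from the preceding steps.

```r
## Pooled CCC updates (Lambda_g = Lambda, Psi_g = psi I_p); hypothetical inputs:
## S_list: list of S_g^{(k+1)};  pi_g: mixing proportions pi_g^{(k+1)}
## gamma, Lambda_old: current q x p and p x q matrices gamma^{(k)}, Lambda^{(k)}
update_CCC <- function(S_list, pi_g, gamma, Lambda_old) {
  p <- nrow(S_list[[1]]); q <- nrow(gamma)
  S     <- Reduce(`+`, Map(`*`, pi_g, S_list))                       # pooled S^{(k+1)}
  Theta <- diag(q) - gamma %*% Lambda_old + gamma %*% S %*% t(gamma) # pooled Theta^{(k)}
  Lambda_new <- S %*% t(gamma) %*% solve(Theta)                      # eq. (24)
  psi_new    <- sum(diag(S - Lambda_new %*% gamma %*% S)) / p
  gamma_new  <- t(Lambda_new) %*% solve(Lambda_new %*% t(Lambda_new) + psi_new * diag(p))
  list(Lambda = Lambda_new, psi = psi_new, gamma = gamma_new)
}
```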

Appendix C: True and estimated covariance matrices of Sect. 6.1

Because the loading matrices are not unique, for the simulated data of Examples 1 and 2 we restrict attention to a comparison, for each \(g=1,\ldots ,G\), of true and estimated covariance matrices.

C.1 Example 1

$$\begin{aligned} \varvec{\Sigma }_1&= \left[ \begin{array}{lllll} 103.36 & 103.07 & 101.37 & 79.41 & 105.66\\ 103.08 & 119.39 & 110.23 & 85.97 & 115.47\\ 101.37 & 110.23 & 129.77 & 106.08 & 118.50\\ 79.41 & 85.97 & 106.08 & 101.46 & 95.21\\ 105.66 & 115.47 & 118.50 & 95.21 & 121.63\\ \end{array}\right] \\ \hat{\varvec{\Sigma }}_1&= \left[ \begin{array}{lllll} 107.59 & 114.55 & 110.42 & 87.29 & 114.43\\ 114.55 & 139.40 & 127.06 & 100.09 & 132.06\\ 110.42 & 127.06 & 146.31 & 122.92 & 134.12\\ 87.29 & 100.09 & 122.92 & 117.97 & 110.09\\ 114.43 & 132.06 & 134.12 & 110.09 & 135.66\\ \end{array}\right] , \end{aligned}$$

and

$$\begin{aligned} \varvec{\Sigma }_2&= \left[ \begin{array}{lllll} 34.25 & 15.16 & 17.81 & 22.39 & 14.62\\ 15.16 & 17.01 & 11.42 & 13.98 & 8.95\\ 17.81 & 11.42 & 17.62 & 16.12 & 10.45\\ 22.39 & 13.98 & 16.12 & 28.11 & 13.11\\ 14.62 & 8.95 & 10.45 & 13.11 & 10.19\\ \end{array}\right] \\ \hat{\varvec{\Sigma }}_2&= \left[ \begin{array}{lllll} 22.16 & 7.44 & 13.71 & 12.89 & 10.12\\ 7.44 & 11.25 & 7.59 & 8.05 & 5.48\\ 13.71 & 7.59 & 18.83 & 13.53 & 10.13\\ 12.89 & 8.05 & 13.53 & 22.00 & 9.41\\ 10.12 & 5.48 & 10.13 & 9.41 & 8.63\\ \end{array}\right] . \end{aligned}$$

C.2 Example 2

$$\begin{aligned} \varvec{\Sigma }_1&= \left[ \begin{array}{lllll} 10.41 & 3.61 & 4.07 & 4.48 & 5.71\\ 3.61 & 7.83 & 2.88 & 3.18 & 4.03\\ 4.07 & 2.88 & 8.67 & 3.81 & 4.64\\ 4.48 & 3.18 & 3.81 & 9.61 & 5.17\\ 5.71 & 4.04 & 4.64 & 5.17 & 11.73\\ \end{array}\right] \\ \hat{\varvec{\Sigma }}_1&= \left[ \begin{array}{lllll} 8.86 & 3.89 & 5.06 & 3.84 & 5.72\\ 3.89 & 7.23 & 3.59 & 1.79 & 4.04\\ 5.06 & 3.59 & 8.44 & 3.85 & 5.50\\ 3.84 & 1.79 & 3.85 & 7.74 & 4.38\\ 5.72 & 4.04 & 5.50 & 4.38 & 9.81\\ \end{array}\right] , \end{aligned}$$
$$\begin{aligned} \varvec{\Sigma }_2&= \left[ \begin{array}{lllll} 103.36 & 103.07 & 101.37 & 79.41 & 105.66\\ 103.08 & 122.1 & 110.23 & 85.97 & 115.47\\ 101.37 & 110.23 & 134.33 & 106.08 & 118.50\\ 79.41 & 85.97 & 106.08 & 102.73 & 95.21\\ 105.66 & 115.47 & 118.50 & 95.21 & 129.21\\ \end{array}\right] \\ \hat{\varvec{\Sigma }}_2&= \left[ \begin{array}{lllll} 106.17 & 100.46 & 93.18 & 73.81 & 105.01\\ 100.46 & 113.71 & 92.97 & 72.22 & 107.88\\ 93.18 & 92.97 & 108.25 & 83.08 & 102.36\\ 73.81 & 72.22 & 83.08 & 80.09 & 81.85\\ 105.01 & 107.88 & 102.36 & 81.85 & 122.59\\ \end{array}\right] . \end{aligned}$$

and

$$\begin{aligned} \varvec{\Sigma }_3&= \left[ \begin{array}{lllll} 25.19 & 15.16 & 17.81 & 22.39 & 14.62\\ 15.16 & 10.67 & 11.42 & 13.98 & 8.95\\ 17.81 & 11.42 & 13.12 & 16.12 & 10.45\\ 22.39 & 13.98 & 16.12 & 20.31 & 13.11\\ 14.62 & 8.95 & 10.45 & 13.11 & 8.70 \end{array}\right] \\ \hat{\varvec{\Sigma }}_3&= \left[ \begin{array}{lllll} 32.47 & 19.91 & 23.06 & 28.78 & 18.80\\ 19.91 & 14.10 & 14.96 & 18.25 & 11.66\\ 23.06 & 14.96 & 16.95 & 20.77 & 13.45\\ 28.78 & 18.25 & 20.77 & 25.95 & 16.77\\ 18.80 & 11.66 & 13.45 & 16.77 & 11.10 \end{array}\right] . \end{aligned}$$


Cite this article

Subedi, S., Punzo, A., Ingrassia, S. et al. Clustering and classification via cluster-weighted factor analyzers. Adv Data Anal Classif 7, 5–40 (2013). https://doi.org/10.1007/s11634-013-0124-8
