
Data fusion using factor analysis and low-rank matrix completion

  • Published in: Statistics and Computing

Abstract

Data fusion involves the integration of multiple related datasets. The statistical file-matching problem is a canonical data fusion problem in multivariate analysis, where the objective is to characterise the joint distribution of a set of variables when only strict subsets of marginal distributions have been observed. Estimation of the covariance matrix of the full set of variables is challenging given the missing-data pattern. Factor analysis models use lower-dimensional latent variables in the data-generating process, and this introduces low-rank components in the complete-data matrix and the population covariance matrix. The low-rank structure of the factor analysis model can be exploited to estimate the full covariance matrix from incomplete data via low-rank matrix completion. We prove the identifiability of the factor analysis model in the statistical file-matching problem under conditions on the number of factors and the number of shared variables over the observed marginal subsets. Additionally, we provide an EM algorithm for parameter estimation. On several real datasets, the factor model gives smaller reconstruction errors in file-matching problems than the common approaches for low-rank matrix completion.
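The missing-data pattern of the statistical file-matching problem can be sketched with simulated data. The following is an illustrative sketch only (not the authors' code); the number of factors, block dimensions, and sample sizes are hypothetical. Variables X are shared across both files, Y is observed only in file A, and Z only in file B, so the (Y, Z) covariance block is never directly observed.

```python
# Illustrative sketch of the file-matching pattern under a factor model.
# All dimensions are hypothetical choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)
q, pX, pY, pZ = 2, 4, 3, 3                  # factors and block sizes (hypothetical)
p = pX + pY + pZ
Lam = rng.normal(size=(p, q))               # loadings for (X, Y, Z)
Psi = np.diag(rng.uniform(0.5, 1.0, p))     # diagonal uniquenesses
Sigma = Lam @ Lam.T + Psi                   # full population covariance

nA, nB = 500, 500
full_A = rng.multivariate_normal(np.zeros(p), Sigma, size=nA)
full_B = rng.multivariate_normal(np.zeros(p), Sigma, size=nB)
file_A = full_A[:, :pX + pY]                # file A records (X, Y) only
file_B = full_B[:, list(range(pX)) + list(range(pX + pY, p))]  # file B records (X, Z)

# The (Y, Z) cross-covariance block is unobserved in both files.
print(file_A.shape, file_B.shape)
```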


References

  • Abdelaal, T., Höllt, T., van Unen, V., Lelieveldt, B.P.F., Koning, F., Reinders, M.J.T., Mahfouz, A.: CyTOFmerge: integrating mass cytometry data across multiple panels. Bioinformatics 35(20), 4063–4071 (2019)

  • Ahfock, D., Pyne, S., Lee, S.X., McLachlan, G.J.: Partial identification in the statistical matching problem. Comput. Stat. Data Anal. 104, 79–90 (2016)

  • Anderson, T.W., Rubin, H.: Statistical inference in factor analysis. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, pp. 238–246 (1956)

  • Barry, J.T.: An investigation of statistical matching. J. Appl. Stat. 15(3), 275–283 (1988)

  • Bekker, P.A., ten Berge, J.M.: Generic global identification in factor analysis. Linear Algebra Appl. 264, 255–263 (1997)

  • Bishop, W.E., Byron, M.Y.: Deterministic symmetric positive semidefinite matrix completion. In: Advances in Neural Information Processing Systems, pp. 2762–2770 (2014)

  • Browne, M.W.: Asymptotically distribution-free methods for the analysis of covariance structures. Br. J. Math. Stat. Psychol. 37(1), 62–83 (1984)

  • Candes, E.J., Plan, Y.: Matrix completion with noise. Proc. IEEE 98(6), 925–936 (2010)

  • Conti, P.L., Marella, D., Scanu, M.: Uncertainty analysis in statistical matching. J. Off. Stat. 28(1), 69–88 (2012)

  • Conti, P.L., Marella, D., Scanu, M.: Statistical matching analysis for complex survey data with applications. J. Am. Stat. Assoc. 111(516), 1715–1725 (2016)

  • Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. Royal Stat. Soc. B 39, 1–38 (1977)

  • D’Orazio, M.: Statistical learning in official statistics: the case of statistical matching. Stat. J. IAOS 35(3), 435–441 (2019)

  • D’Orazio, M., Di Zio, M., Scanu, M.: Statistical Matching: Theory and Practice. Wiley, New York (2006a)

  • D’Orazio, M., Di Zio, M., Scanu, M.: Statistical matching for categorical data: displaying uncertainty and using logical constraints. J. Off. Stat. 22(1), 137 (2006b)

  • Gustafson, P.: Bayesian Inference for Partially Identified Models: Exploring the Limits of Limited Data. CRC Press, Boca Raton (2015)

  • Hastie, T., Mazumder, R.: softImpute: Matrix Completion via Iterative Soft-Thresholded SVD (2021). R package version 1.4-1

  • Ibrahim, J.G., Zhu, H., Tang, N.: Model selection criteria for missing-data problems using the EM algorithm. J. Am. Stat. Assoc. 103(484), 1648–1658 (2008)

  • Kadane, J.B.: Some statistical problems in merging data files. J. Off. Stat. 17(3), 423 (2001)

  • Kamakura, W.A., Wedel, M.: Factor analysis and missing data. J. Market. Res. 37(4), 490–498 (2000)

  • Koltchinskii, V., Lounici, K., Tsybakov, A.B.: Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Stat. 39(5), 2302–2329 (2011)

  • Ledermann, W.: On the rank of the reduced correlational matrix in multiple-factor analysis. Psychometrika 2(2), 85–93 (1937)

  • Lee, G., Finn, W., Scott, C.: Statistical file matching of flow cytometry data. J. Biomed. Inform. 44(4), 663–676 (2011)

  • Li, G., Jung, S.: Incorporating covariates into integrated factor analysis of multi-view data. Biometrics 73(4), 1433–1442 (2017)

  • Little, R.J.: Missing-data adjustments in large surveys. J. Bus. Econ. Stat. 6(3), 287–296 (1988)

  • Little, R.J.A., Rubin, D.B.: Statistical Analysis with Missing Data, 2nd edn. Wiley, Hoboken (2002)

  • Mazumder, R., Hastie, T., Tibshirani, R.: Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res. 11, 2287–2322 (2010)

  • Moriarity, C., Scheuren, F.: Statistical matching: a paradigm for assessing the uncertainty in the procedure. J. Off. Stat. 17(3), 407 (2001)

  • O’Connell, M.J., Lock, E.F.: Linked matrix factorization. Biometrics 75(2), 582–592 (2019)

  • O’Neill, K., Aghaeepour, N., Parker, J., Hogge, D., Karsan, A., Dalal, B., Brinkman, R.R.: Deep profiling of multitube flow cytometry data. Bioinformatics 31(10), 1623–1631 (2015)

  • Park, J.Y., Lock, E.F.: Integrative factorization of bidimensionally linked matrices. Biometrics 76(1), 61–74 (2020)

  • Pedreira, C.E., Costa, E.S., Barrena, S., Lecrevisse, Q., Almeida, J., van Dongen, J.J.M., Orfao, A.: Generation of flow cytometry data files with a potentially infinite number of dimensions. Cytom. Part A 73(9), 834–846 (2008)

  • Preacher, K.J., Zhang, G., Kim, C., Mels, G.: Choosing the optimal number of factors in exploratory factor analysis: a model selection perspective. Multivar. Behav. Res. 48(1), 28–56 (2013)

  • Rässler, S.: Statistical Matching: A Frequentist Theory, Practical Applications, and Alternative Bayesian Approaches. Springer-Verlag, New York (2002)

  • Rodgers, W.L.: An evaluation of statistical matching. J. Bus. Econ. Stat. 2(1), 91 (1984)

  • Rubin, D.B., Thayer, D.T.: EM algorithms for ML factor analysis. Psychometrika 47(1), 69–76 (1982)

  • Sachs, K., Itani, S., Carlisle, J., Nolan, G.P., Pe’er, D., Lauffenburger, D.A.: Learning signaling network structures with sparsely distributed data. J. Comput. Biol. 16(2), 201–212 (2009)

  • Schönemann, P.H.: A generalized solution of the orthogonal Procrustes problem. Psychometrika 31(1), 1–10 (1966)

  • Schwarz, G.: Estimating the dimension of a model. Ann. Stat. 6, 461–464 (1978)

  • Shapiro, A.: Identifiability of factor analysis: some results and open problems. Linear Algebra Appl. 70, 1–7 (1985)

  • Troyanskaya, O., Cantor, M., Sherlock, G., Brown, P., Hastie, T., Tibshirani, R., Botstein, D., Altman, R.B.: Missing value estimation methods for DNA microarrays. Bioinformatics 17(6), 520–525 (2001)

  • Van Buuren, S.: Flexible Imputation of Missing Data. CRC Press, Boca Raton (2018)

  • You, K.: filling: Matrix Completion, Imputation, and Inpainting Methods (2020). R package version 0.2.1


Acknowledgements

We would like to thank the reviewers for thoughtful suggestions that have helped to shape and clarify the manuscript.

Author information


Corresponding author

Correspondence to Daniel Ahfock.

Additional information


This research was partially funded by the Australian Government through the Australian Research Council (Project Number DP180101192).

Appendix

1.1 Proof of Lemma 1

Due to the rotational invariance of the factor model, we have that

$$\begin{aligned}&\begin{bmatrix} \varvec{\varLambda }_{X}^{A} \\ \varvec{\varLambda }_{Y}^{A} \\ \varvec{\varLambda }_{Z}^{A} \end{bmatrix} = \begin{bmatrix} \varvec{\varLambda }_{X} \\ \varvec{\varLambda }_{Y} \\ \varvec{\varLambda }_{Z} \end{bmatrix}\varvec{R}_{1}, \quad \begin{bmatrix} \varvec{\varLambda }_{X}^{B} \\ \varvec{\varLambda }_{Y}^{B} \\ \varvec{\varLambda }_{Z}^{B} \end{bmatrix} = \begin{bmatrix} \varvec{\varLambda }_{X} \\ \varvec{\varLambda }_{Y} \\ \varvec{\varLambda }_{Z} \end{bmatrix}\varvec{R}_{2},\nonumber \\&\begin{bmatrix} \varvec{\varLambda }_{X}^{B} \\ \varvec{\varLambda }_{Y}^{B} \\ \varvec{\varLambda }_{Z}^{B} \end{bmatrix} = \begin{bmatrix} \varvec{\varLambda }_{X}^{A} \\ \varvec{\varLambda }_{Y}^{A} \\ \varvec{\varLambda }_{Z}^{A} \end{bmatrix}\varvec{R}_{3} , \end{aligned}$$
(17)

for orthogonal matrices \(\varvec{R}_{1}, \varvec{R}_{2}\), and \(\varvec{R}_{3}\). The alignment of \(\varvec{\varLambda }_{X}^{A}\) and \(\varvec{\varLambda }_{X}^{B}\) is an orthogonal Procrustes problem. Let \(\varvec{R}\) be the solution to the optimisation problem

$$\begin{aligned} \varvec{R}&= {{\,\mathrm{argmin}\,}}\ \Vert \varvec{\varLambda }_{X}^{A} - \varvec{\varLambda }_{X}^{B}\varvec{R} \Vert _{F}, \quad \text { subject to } \varvec{R}^{\mathsf {T}}\varvec{R} = \varvec{I}. \end{aligned}$$
(18)

Assuming that \(\varvec{\varLambda }_{X}^{A}\) and \(\varvec{\varLambda }_{X}^{B}\) are of full column rank, Schönemann (1966) showed that there is a unique solution to (18). As \(\text {rank}(\varvec{\varLambda }_{X}^{A})=\text {rank}(\varvec{\varLambda }_{X}^{B})=\text {rank}(\varvec{\varLambda }_{X})\), both \(\varvec{\varLambda }_{X}^{A}\) and \(\varvec{\varLambda }_{X}^{B}\) are of rank q under Assumption 1. Define \(\varvec{M} = (\varvec{\varLambda }_{X}^{B})^{\mathsf {T}}{\varvec{\varLambda }_{X}^{A}}\) and let the singular value decomposition of \(\varvec{M}\) be given by \(\varvec{M} =\varvec{W}\varvec{D}\varvec{Q}^{\mathsf {T}} \). Then, using the result from Schönemann (1966), the unique solution to (18) is given by \(\varvec{R} = \varvec{W}\varvec{Q}^{\mathsf {T}}\). The uniqueness of the solution implies that \(\varvec{R}=\varvec{R}_{3}^{\mathsf {T}}\), as \(\varvec{\varLambda }_{X}^{B}\varvec{R}_{3}^{\mathsf {T}} = \varvec{\varLambda }_{X}^{A}\varvec{R}_{3}\varvec{R}_{3}^{\mathsf {T}} = \varvec{\varLambda }_{X}^{A}\) from (17). Then \(\varvec{\varLambda }_{Z}^{B}\varvec{R} = \varvec{\varLambda }_{Z}^{B}\varvec{R}_{3}^{\mathsf {T}} =\varvec{\varLambda }_{Z}^{A}\varvec{R}_{3}\varvec{R}_{3}^{\mathsf {T}} = \varvec{\varLambda }_{Z}^{A}\), again using (17). Finally, \(\varvec{\varLambda }_{Y}^{A}(\varvec{\varLambda }_{Z}^{B}\varvec{R})^{\mathsf {T}} = \varvec{\varLambda }_{Y}^{A}(\varvec{\varLambda }_{Z}^{A})^{\mathsf {T}} = \varvec{\varLambda }_{Y}\varvec{R}_{1}\varvec{R}_{1}^{\mathsf {T}}\varvec{\varLambda }_{Z}^{\mathsf {T}} = \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Z}^{\mathsf {T}}\).
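The Procrustes alignment step can be checked numerically. Below is a minimal numpy sketch (not the authors' code; the loading matrices are hypothetical random examples) of the Schönemann (1966) closed-form solution \(\varvec{R} = \varvec{W}\varvec{Q}^{\mathsf {T}}\) from the SVD of \(\varvec{M} = (\varvec{\varLambda }_{X}^{B})^{\mathsf {T}}\varvec{\varLambda }_{X}^{A}\).

```python
# Sketch of the orthogonal Procrustes step in the proof of Lemma 1.
# Inputs are hypothetical: Lam is a random loading matrix, R3 a random rotation.
import numpy as np

def procrustes_rotation(Lam_A, Lam_B):
    """Orthogonal R minimising ||Lam_A - Lam_B @ R||_F (Schönemann 1966)."""
    M = Lam_B.T @ Lam_A          # M = W D Q^T
    W, _, Qt = np.linalg.svd(M)
    return W @ Qt                # R = W Q^T

rng = np.random.default_rng(1)
Lam = rng.normal(size=(5, 2))                   # shared loading block, rank 2
R3, _ = np.linalg.qr(rng.normal(size=(2, 2)))   # arbitrary orthogonal R3
Lam_A, Lam_B = Lam, Lam @ R3                    # rotated copies, as in (17)

R = procrustes_rotation(Lam_A, Lam_B)
# R equals R3^T, so Lam_B @ R recovers Lam_A up to floating-point error.
print(np.allclose(Lam_B @ R, Lam_A))
```

The uniqueness of the minimiser is what lets the proof identify the alignment with \(\varvec{R}_{3}^{\mathsf {T}}\); numerically it holds whenever the shared block has full column rank.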

1.2 Proof of Theorem 1

Using Theorem 5.1 in Anderson and Rubin (1956), Assumption 2 guarantees that if

$$\begin{aligned}&\begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} + \varvec{\varPsi }_{X} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Y}^{\mathsf {T}} \\ \varvec{\varLambda }_{Y}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Y}^{\mathsf {T}} + \varvec{\varPsi }_{Y} \end{pmatrix} \\&\quad = \begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} + \varvec{\varPsi }_{X}^{*} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} + \varvec{\varPsi }_{Y}^{*} \end{pmatrix}, \\&\begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} + \varvec{\varPsi }_{X} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Z}^{\mathsf {T}} \\ \varvec{\varLambda }_{Z}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Z}\varvec{\varLambda }_{Z}^{\mathsf {T}} + \varvec{\varPsi }_{Z} \end{pmatrix} \\&\quad = \begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} + \varvec{\varPsi }_{X}^{*} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} + \varvec{\varPsi }_{Z}^{*} \end{pmatrix}, \end{aligned}$$

then the uniquenesses are equal, \(\varvec{\varPsi }_{X}=\varvec{\varPsi }_{X}^{*}\), \(\varvec{\varPsi }_{Y}=\varvec{\varPsi }_{Y}^{*}\), and \(\varvec{\varPsi }_{Z}=\varvec{\varPsi }_{Z}^{*}\), implying

$$\begin{aligned} \begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Y}^{\mathsf {T}} \\ \varvec{\varLambda }_{Y}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Y}^{\mathsf {T}} \end{pmatrix}&= \begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} \end{pmatrix}, \end{aligned}$$
(19)
$$\begin{aligned} \begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Z}^{\mathsf {T}} \\ \varvec{\varLambda }_{Z}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Z}\varvec{\varLambda }_{Z}^{\mathsf {T}} \end{pmatrix}&= \begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \end{pmatrix}. \end{aligned}$$
(20)

Using Lemma 1, \(\varvec{\varLambda }_{Y}\varvec{\varLambda }_{Z}^{\mathsf {T}}\) can be uniquely recovered given the matrices on the left-hand side of (19) and (20). Likewise, \(\varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}}\) can be uniquely recovered given the matrices on the right-hand side of (19) and (20). It remains to show that \(\varvec{\varLambda }_{Y}\varvec{\varLambda }_{Z}^{\mathsf {T}} = \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}}\). To do so, define the eigendecompositions

$$\begin{aligned} \varvec{V}_{A}\varvec{D}_{A}\varvec{V}_{A}^{\mathsf {T}}&= \begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Y}^{\mathsf {T}} \\ \varvec{\varLambda }_{Y}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Y}^{\mathsf {T}} \end{pmatrix}=\begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} \end{pmatrix} , \\ \varvec{V}_{B}\varvec{D}_{B}\varvec{V}_{B}^{\mathsf {T}}&= \begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Z}^{\mathsf {T}} \\ \varvec{\varLambda }_{Z}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Z}\varvec{\varLambda }_{Z}^{\mathsf {T}} \end{pmatrix}= \begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \end{pmatrix}, \end{aligned}$$

and the rotated and scaled eigenvectors

$$\begin{aligned} \begin{pmatrix}\varvec{\varGamma }_{X}^{A} \\ \varvec{\varGamma }_{Y}^{A} \end{pmatrix}&= \varvec{V}_{A}\varvec{D}_{A}^{1/2}, \quad \begin{pmatrix} \varvec{\varGamma }_{X}^{B} \\ \varvec{\varGamma }_{Z}^{B}\end{pmatrix} = \varvec{V}_{B}\varvec{D}_{B}^{1/2}. \end{aligned}$$

Using Assumption 1 and Lemma 1, the equality

$$\begin{aligned} \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Z}^{\mathsf {T}}&= \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}}= \varvec{\varGamma }_{Y}^{A}\left( \varvec{\varGamma }_{Z}^{B}\varvec{W}\varvec{Q}^{\mathsf {T}}\right) ^{\mathsf {T}}, \end{aligned}$$
(21)

must hold, where \(\varvec{W}\) and \(\varvec{Q}\) are the left and right singular vectors of the matrix \(\varvec{M} = (\varvec{\varGamma }_{X}^{B})^{\mathsf {T}}{\varvec{\varGamma }_{X}^{A}} = \varvec{W}\varvec{D}\varvec{Q}^{\mathsf {T}}\). Combining the equalities in (19), (20) and (21) gives the main result

$$\begin{aligned} \begin{pmatrix} \varvec{\varLambda }_{X}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Y}^{\mathsf {T}} &{} \varvec{\varLambda }_{X}\varvec{\varLambda }_{Z}^{\mathsf {T}} \\ \varvec{\varLambda }_{Y}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Y}^{\mathsf {T}} &{} \varvec{\varLambda }_{Y}\varvec{\varLambda }_{Z}^{\mathsf {T}} \\ \varvec{\varLambda }_{Z}\varvec{\varLambda }_{X}^{\mathsf {T}} &{} \varvec{\varLambda }_{Z}\varvec{\varLambda }_{Y}^{\mathsf {T}} &{} \varvec{\varLambda }_{Z}\varvec{\varLambda }_{Z}^{\mathsf {T}} \end{pmatrix}&= \begin{pmatrix} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} &{} \varvec{\varLambda }_{X}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Y}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \\ \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{X}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{Y}^{*\mathsf {T}} &{} \varvec{\varLambda }_{Z}^{*}\varvec{\varLambda }_{Z}^{*\mathsf {T}} \end{pmatrix}. \end{aligned}$$
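The recovery argument in the proof can be verified numerically. The following numpy sketch (hypothetical dimensions; not the authors' code) builds the scaled eigenvectors \(\varvec{\varGamma } = \varvec{V}\varvec{D}^{1/2}\) from the top-q eigenpairs of the two observed low-rank blocks, aligns the shared X rows through the SVD of \(\varvec{M}\), and recovers the unobserved cross-product \(\varvec{\varLambda }_{Y}\varvec{\varLambda }_{Z}^{\mathsf {T}}\) as in (21).

```python
# Numerical sketch of the recovery in Theorem 1 (hypothetical setup).
import numpy as np

rng = np.random.default_rng(2)
q, pX, pY, pZ = 2, 4, 3, 3
LX, LY, LZ = (rng.normal(size=(p, q)) for p in (pX, pY, pZ))

# The two observed rank-q blocks, as in (19) and (20).
S_A = np.block([[LX @ LX.T, LX @ LY.T], [LY @ LX.T, LY @ LY.T]])
S_B = np.block([[LX @ LX.T, LX @ LZ.T], [LZ @ LX.T, LZ @ LZ.T]])

def top_gamma(S, q):
    """Scaled eigenvectors V D^{1/2} for the top-q eigenpairs of S."""
    d, V = np.linalg.eigh(S)           # eigenvalues in ascending order
    return V[:, -q:] * np.sqrt(d[-q:])

G_A, G_B = top_gamma(S_A, q), top_gamma(S_B, q)
GX_A, GY_A = G_A[:pX], G_A[pX:]        # partition rows into X and Y blocks
GX_B, GZ_B = G_B[:pX], G_B[pX:]        # partition rows into X and Z blocks

W, _, Qt = np.linalg.svd(GX_B.T @ GX_A)       # M = W D Q^T
recovered = GY_A @ (GZ_B @ W @ Qt).T          # Gamma_Y^A (Gamma_Z^B W Q^T)^T

# Matches the unobserved cross-product Lam_Y Lam_Z^T up to float error.
print(np.allclose(recovered, LY @ LZ.T))
```

The scaled eigenvectors equal the true loadings only up to an orthogonal rotation, which is exactly the ambiguity the Procrustes alignment removes.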


About this article


Cite this article

Ahfock, D., Pyne, S. & McLachlan, G.J. Data fusion using factor analysis and low-rank matrix completion. Stat Comput 31, 58 (2021). https://doi.org/10.1007/s11222-021-10033-7
