Non-Parametric Semi-Supervised Learning in Many-Body Hilbert Space with Rescaled Logarithmic Fidelity
Figure 1.
<p>The <math display="inline"><semantics> <mi>β</mi> </semantics></math> dependence of the testing accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> for the ten classes in the MNIST dataset, where <math display="inline"><semantics> <mi>γ</mi> </semantics></math> is evaluated by randomly taking N=10 training samples from each class. Note that the RLF becomes the standard fidelity for <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>. We average <math display="inline"><semantics> <mi>γ</mi> </semantics></math> over 20 independent simulations; the variances are indicated by the shaded areas. The insets show the t-SNE visualizations of 2000 effective vectors <math display="inline"><semantics> <mrow> <mo>{</mo> <mover accent="true"> <mi mathvariant="bold">y</mi> <mo stretchy="false">˜</mo> </mover> <mo>}</mo> </mrow> </semantics></math> (Equation (<a href="#FD5-mathematics-10-00940" class="html-disp-formula">5</a>)) randomly taken from the testing samples.</p>
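As a concrete illustration of the rescaled logarithmic fidelity (RLF) scanned in this figure, the sketch below encodes images into product states with the common qubit feature map x ↦ [cos(πx/2), sin(πx/2)] and rescales the base-10 logarithmic fidelity by β, so that β = 10 recovers the standard fidelity as noted in the caption. This is a hedged reconstruction, not the authors' code: the function names and the exact normalization are our assumptions.

```python
import numpy as np

def feature_map(x):
    """Encode pixels x in [0, 1] as two-component 'qubit' vectors
    (a standard encoding in quantum-inspired ML; assumed here)."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

def rescaled_log_fidelity(x, y, beta=1.08):
    """Illustrative RLF between two flattened images x and y.

    The fidelity f of two product states factorizes into per-pixel
    overlaps; we rescale its base-10 logarithm by beta, so beta = 10
    gives back f itself, while beta close to 1 lifts the exponentially
    small many-body fidelities into a usable range.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    overlaps = np.sum(feature_map(x) * feature_map(y), axis=-1)  # <phi_i|psi_i> per pixel
    log10_f = np.sum(np.log10(np.clip(np.abs(overlaps), 1e-300, None)))
    return beta ** log10_f  # beta = 10 recovers the standard fidelity
```

For identical inputs the RLF equals 1 for any β; tuning β controls how strongly distant samples are suppressed, which is the dependence plotted above.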
Figure 2.
<p>Testing accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> of non-parametric supervised learning using the rescaled logarithmic fidelity (RLF-NSL) on the MNIST dataset with different numbers of labeled samples <span class="html-italic">N</span> in each class.</p>
Figure 3.
<p>Testing accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> of non-parametric semi-supervised and supervised learning using the rescaled logarithmic fidelity (RLF-NSSL and RLF-NSL, respectively) on the MNIST dataset. Our results are compared with <span class="html-italic">k</span>-nearest neighbors with <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and 10, naive Bayesian classifiers, and a baseline model obtained by simply replacing the RLF with the Euclidean distance. For more results of the KNN with different values of <span class="html-italic">k</span> and of the p-norm distance with different values of <span class="html-italic">p</span>, please refer to <a href="#app1-mathematics-10-00940" class="html-app">Appendix A</a>.</p>
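The Euclidean baseline in this comparison can be sketched as follows. This is an illustrative reading of "replacing the RLF with the Euclidean distance" (assign each test sample to the class whose labeled samples are nearest on average), not necessarily the authors' exact decision rule:

```python
import numpy as np

def euclidean_baseline_predict(train_x, train_y, test_x):
    """Assign each test sample to the class with the smallest mean
    Euclidean distance to that class's labeled samples (illustrative)."""
    classes = np.unique(train_y)
    # Pairwise distance matrix of shape (n_test, n_train).
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=-1)
    # Mean distance from each test sample to each class.
    class_mean = np.stack([d[:, train_y == c].mean(axis=1) for c in classes], axis=1)
    return classes[np.argmin(class_mean, axis=1)]
```

Swapping the distance here for the RLF similarity (with argmin replaced by argmax) gives the non-parametric scheme the baseline is compared against.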
Figure 4.
<p>Testing accuracy of the RLF-NSL on the IMDb dataset, comparing different kernels (Euclidean, Gaussian, and RLF) and classification strategies (KNN and NSL). The <span class="html-italic">x</span>-axis shows the number of labeled samples in each class. For more results of the KNN with different values of <span class="html-italic">k</span> and of the Gaussian kernel with different values of <math display="inline"><semantics> <mi>σ</mi> </semantics></math>, please refer to <a href="#app1-mathematics-10-00940" class="html-app">Appendix A</a>.</p>
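The Gaussian kernel compared here is the standard RBF similarity. Below is a minimal sketch of using it in an NSL-style decision, where each class is scored by the mean similarity to its labeled samples; this simplified scoring rule and the function names are our assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF similarity; sigma is the width scanned in Appendix A."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def kernel_nsl_predict(train_x, train_y, test_x, kernel=gaussian_kernel):
    """Assign each test sample to the class with the largest mean
    kernel similarity to its labeled samples (illustrative)."""
    classes = np.unique(train_y)
    preds = []
    for t in test_x:
        scores = [np.mean([kernel(t, s) for s in train_x[train_y == c]])
                  for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Replacing `gaussian_kernel` by an RLF or Euclidean similarity reproduces the other kernels in the comparison.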
Figure 5.
<p>For the RLF-NSSL on the MNIST dataset with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> labeled samples in each class (few-shot case), (<b>a</b>,<b>b</b>) show the confidence <math display="inline"><semantics> <mi>η</mi> </semantics></math> and the classification accuracy <math display="inline"><semantics> <msub> <mi>γ</mi> <mi>c</mi> </msub> </semantics></math>, respectively, of the samples in the clusters at different epochs. (<b>c</b>) shows the classification accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> on the testing set. The insets of (<b>c</b>) show the t-SNE visualizations of the testing samples in the low-dimensional space (Equation (<a href="#FD5-mathematics-10-00940" class="html-disp-formula">5</a>)). See the main text for details.</p>
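The cluster growth tracked in panels (a,b) comes from a pseudo-label loop, which can be illustrated generically as below. The confidence measure and the number of samples promoted per epoch are simplified assumptions, and `predict_with_scores` is a hypothetical stand-in for the RLF classifier:

```python
import numpy as np

def self_train(train_x, train_y, unlabeled_x, predict_with_scores,
               n_epochs=5, k=100):
    """Generic pseudo-label loop: each epoch, classify the unlabeled pool,
    move the k most confident samples into the labeled set, and repeat."""
    lx, ly = train_x.copy(), train_y.copy()
    pool = unlabeled_x.copy()
    for _ in range(n_epochs):
        if len(pool) == 0:
            break
        labels, confidence = predict_with_scores(lx, ly, pool)
        top = np.argsort(confidence)[-k:]  # indices of the most confident samples
        lx = np.concatenate([lx, pool[top]])
        ly = np.concatenate([ly, labels[top]])
        pool = np.delete(pool, top, axis=0)
    return lx, ly
```

The caller supplies `predict_with_scores(labeled_x, labeled_y, pool) -> (labels, confidence)`; in the paper's setting that role is played by the RLF-based classifier, whose pseudo-labeled clusters then refine subsequent predictions.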
Figure A1.
<p>The classification accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> on the MNIST dataset for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>600</mn> </mrow> </semantics></math>. The solid lines with symbols show the results obtained by using the p-norm as the kernel in the NSL algorithm. The horizontal dashed lines show the accuracies of the RLF-NSL with <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>1.08</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mn>1.35</mn> </mrow> </semantics></math>, respectively. The shaded areas show the standard deviations.</p>
Figure A2.
<p>The classification accuracies <math display="inline"><semantics> <mi>γ</mi> </semantics></math> on the IMDb dataset for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1200</mn> </mrow> </semantics></math>, obtained by the NSL with the p-norm for different values of <span class="html-italic">p</span> (solid lines with symbols) and with the RLF as the kernel (horizontal dashed lines). The shaded areas show the standard deviations.</p>
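The p-norm kernel scanned in these two appendix figures is the Minkowski distance; for reference (with p = 2 giving the Euclidean case):

```python
import numpy as np

def p_norm_distance(x, y, p=2.0):
    """Minkowski (p-norm) distance between two feature vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)
```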
Abstract
1. Introduction
2. Hilbert Space and Rescaled Logarithmic Fidelity
3. Rescaled Logarithmic Fidelity and Classification Scheme
4. Non-Parametric Semi-Supervised Learning with Pseudo-Labels
5. Discussion from the Perspective of Rate Reduction
6. Summary
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Supplemental Results on the MNIST and IMDb Datasets
Appendix A.1. MNIST Dataset
| | 1 | 2 | 3 | 4 | 6 | 8 | 10 | 12 | RLF-NSL |
|---|---|---|---|---|---|---|---|---|---|
| γ (%) | | | | | | | | | |
| std. | | | | | | | | | |
| | 1 | 2 | 3 | 4 | 6 | 8 | 10 | 12 | RLF-NSL |
|---|---|---|---|---|---|---|---|---|---|
| γ (%) | | | | | | | | | |
| std. | | | | | | | | | |
Appendix A.2. IMDb Dataset
| | 1 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | RLF-NSL |
|---|---|---|---|---|---|---|---|---|---|---|
| γ (%) | | | | | | | | | | |
| std. | | | | | | | | | | |
| | 1 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | RLF-NSL |
|---|---|---|---|---|---|---|---|---|---|---|
| γ (%) | | | | | | | | | | |
| std. | | | | | | | | | | |
| | k-Means | Spectral Clustering | RLF-NSSL () |
|---|---|---|---|
| γ (%) | 56.21 | 65.46 | 72.64 |
| Std. | 1.83 | 0.1 | 5.43 |
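For the unsupervised baselines in this table (k-means, spectral clustering), accuracy is conventionally computed after matching cluster indices to the true labels. A minimal sketch with brute-force matching is below; the helper name is ours, and for many classes one would use the Hungarian algorithm instead:

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(true_labels, cluster_ids):
    """Best accuracy over all one-to-one matchings of cluster ids to true
    labels (brute force; fine for a handful of classes)."""
    clusters = np.unique(cluster_ids)
    labels = np.unique(true_labels)
    best = 0.0
    for perm in permutations(labels):
        mapping = dict(zip(clusters, perm))
        mapped = np.array([mapping[c] for c in cluster_ids])
        best = max(best, float(np.mean(mapped == true_labels)))
    return best
```

For ten MNIST classes the 10! matchings make brute force impractical; `scipy.optimize.linear_sum_assignment` solves the same matching in polynomial time.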
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, W.-M.; Ran, S.-J. Non-Parametric Semi-Supervised Learning in Many-Body Hilbert Space with Rescaled Logarithmic Fidelity. Mathematics 2022, 10, 940. https://doi.org/10.3390/math10060940