DOI: 10.1145/3517343.3517368 · NICE Conference Proceedings · Research Article · Public Access
Integer Factorization with Compositional Distributed Representations

Published: 03 May 2022

Abstract

In this paper, we present an approach to integer factorization using distributed representations formed with Vector Symbolic Architectures. The approach formulates integer factorization in a manner such that it can be solved using neural networks and potentially implemented on parallel neuromorphic hardware. We introduce a method for encoding numbers in distributed vector spaces and explain how the resonator network can solve the integer factorization problem. We evaluate the approach on factorization of semiprimes by measuring the factorization accuracy versus the scale of the problem. We also demonstrate how the proposed approach generalizes beyond the factorization of semiprimes; in principle, it can be used for factorization of any composite number. This work demonstrates how a well-known combinatorial search problem may be formulated and solved within the framework of Vector Symbolic Architectures, and it opens the door to solving similarly difficult problems in other domains.
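The pipeline the abstract describes, encoding a composite number as a high-dimensional vector and letting a resonator network search codebooks of candidate factors, can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the logarithmic fractional power encoding with complex phasors, the dimensionality `D`, the bandwidth `BETA`, and the small prime codebook are all choices made for this example. The key property is that encoding log(x) in the phase makes componentwise binding multiplicative: encode(p) * encode(q) == encode(p * q), so a resonator network can recover the factors of a semiprime by alternately unbinding one estimate and projecting the residual onto the codebook.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048           # hypervector dimensionality (illustrative choice)
BETA = 40.0        # encoding bandwidth (illustrative choice)
theta = rng.uniform(-np.pi, np.pi, D)  # random base phases shared by all encodings

def encode(x):
    # Fractional power encoding of log(x) as componentwise complex phasors.
    # Exponents add under componentwise multiplication, so
    # encode(p) * encode(q) == encode(p * q).
    return np.exp(1j * theta * BETA * np.log(x))

def phasor(v):
    # Project each component back onto the unit circle.
    return v / np.abs(v)

primes = np.array([2, 3, 5, 7, 11, 13])
A = np.stack([encode(p) for p in primes], axis=1)  # codebook matrix, D x K

def factor_semiprime(m, iters=50):
    s = encode(m)
    # Start both factor estimates at the superposition of all candidates.
    x_a = phasor(A.sum(axis=1))
    x_b = phasor(A.sum(axis=1))
    for _ in range(iters):
        # Resonator update: unbind the other estimate from s,
        # then project the residual onto the span of the codebook.
        x_a = phasor(A @ (A.conj().T @ (s * x_b.conj())))
        x_b = phasor(A @ (A.conj().T @ (s * x_a.conj())))
    # Cleanup: report the nearest codebook entry for each estimate.
    p = primes[np.argmax(np.abs(A.conj().T @ x_a))]
    q = primes[np.argmax(np.abs(A.conj().T @ x_b))]
    return int(p), int(q)

print(factor_semiprime(35))
```

At this toy scale the resonator reliably settles on a factor pair whose product matches the input; as the abstract notes, the paper's evaluation measures how factorization accuracy degrades as the scale of the problem grows.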


Cited By

  • (2023) A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. ACM Computing Surveys 55, 9 (2023), 1–52. DOI: 10.1145/3558000. Online publication date: 16-Jan-2023.
  • (2023) In-memory factorization of holographic perceptual representations. Nature Nanotechnology 18, 5 (2023), 479–485. DOI: 10.1038/s41565-023-01357-8. Online publication date: 30-Mar-2023.
  • (2022) Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences. In 2022 International Joint Conference on Neural Networks (IJCNN), 1–8. DOI: 10.1109/IJCNN55064.2022.9892462. Online publication date: 18-Jul-2022.


Published In

cover image ACM Other conferences
NICE '22: Proceedings of the 2022 Annual Neuro-Inspired Computational Elements Conference
March 2022
122 pages
ISBN:9781450395595
DOI:10.1145/3517343
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Collective-State Computing
  2. Fractional Power Encoding
  3. Hyperdimensional Computing
  4. Integer Factorization
  5. Resonator Network
  6. Vector Symbolic Architectures

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • NIH
  • AFOSR
  • Horizon 2020 MSCA

Conference

NICE 2022
NICE 2022: Neuro-Inspired Computational Elements Conference
March 28 - April 1, 2022
Virtual Event, USA

Acceptance Rates

Overall Acceptance Rate 25 of 40 submissions, 63%

Article Metrics

  • Downloads (Last 12 months): 126
  • Downloads (Last 6 weeks): 19

Reflects downloads up to 03 Oct 2024

