Abstract
The Boolean \(\Sigma\Pi\)-neuron is a biologically inspired formal model of logical information processing. It adequately reflects how information is processed in the cerebral cortex and in the dendritic trees of neurons. The advantages of the Boolean \(\Sigma\Pi\)-neuron model are that it can represent any Boolean function exactly and that it admits constructive learning (direct construction) in a single pass over the training sample. It also allows the direct construction of an ensemble of Boolean \(\Sigma\Pi\)-neurons that functions correctly on the training sample. This article presents a new algorithm for constructing an ensemble of Boolean \(\Sigma\Pi\)-neurons in parameterized form. This form can also be represented directly as a single Boolean \(\Sigma\Pi\)-network with a hidden layer of linear and threshold-linear units. In some cases this makes retraining on new inputs easier, since it amounts to setting appropriate values of the control parameters.
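The following is a minimal Python sketch, not the construction algorithm described in the paper: it only illustrates the sum-of-products form of a Boolean \(\Sigma\Pi\)-neuron, which weights products (conjunctions) of selected inputs and thresholds the sum, together with the threshold-linear unit \((s)_{+}=\max\{s,0\}\) mentioned in the note below. The function names, weights, and the XOR example are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (assumed names/weights): a Boolean Sigma-Pi neuron
# computes a weighted sum of products (conjunctions) of selected inputs.

from itertools import product


def sigma_pi_neuron(x, terms):
    """Evaluate a Boolean Sigma-Pi neuron.

    x     : tuple of 0/1 inputs
    terms : list of (weight, indices) pairs; each product term is the
            conjunction of the inputs at the given indices.
    """
    s = sum(w * all(x[i] for i in idx) for w, idx in terms)
    return 1 if s > 0 else 0  # threshold the sum to a Boolean output


def threshold_linear(s):
    """Threshold-linear unit (s)_+ = max{s, 0}, as defined in note 1."""
    return max(s, 0)


# Toy example: XOR written exactly as a sum of products,
# x1 XOR x2 = x1 + x2 - 2*x1*x2.
xor_terms = [(1, (0,)), (1, (1,)), (-2, (0, 1))]

for x in product((0, 1), repeat=2):
    assert sigma_pi_neuron(x, xor_terms) == (x[0] ^ x[1])
```

This toy check only demonstrates the exact-representation property for a single Boolean function; how the paper's algorithm builds such terms (and the parameterized ensemble) is described in the full text.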
Notes
1. \((s)_{+}=\max \{s,0\}\).