Abstract
The neural solids are novel neural networks devised for solving optimization problems. They are dual to Hopfield networks, but have a quartic energy function. These solids are open architectures, in the sense that different choices of the basic elements and of their interfacing solve different optimization problems. The basic element is the neural resonator (a triangle in the three-dimensional case), composed of resonant neurons governed by self-organizing learning. This module solves elementary optimization problems, such as finding the nearest orthonormal matrix to a given matrix. An example of a more complex solid, the neural decomposer, whose architecture is composed of neural resonators and their mutual connections, is then given. This solid solves more complex optimization problems, such as the decomposition of the essential matrix, an important technique in computer vision.
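For reference, the elementary problem mentioned above (the nearest orthonormal matrix to a given matrix, i.e. the orthogonal Procrustes problem) has a classical closed-form solution via the singular value decomposition: if A = UΣVᵀ, then Q = UVᵀ minimizes ‖A − Q‖_F over orthogonal Q. The sketch below shows this standard baseline in NumPy; it is not the paper's neural algorithm, only the closed-form solution the neural resonator approximates.

```python
import numpy as np

def nearest_orthonormal(A):
    """Closed-form nearest orthonormal matrix to A (Frobenius norm).

    Uses the polar-decomposition result: if A = U @ diag(s) @ Vt,
    then Q = U @ Vt is the orthogonal matrix closest to A.
    """
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

# Example: project a slightly non-orthogonal matrix back onto O(3).
A = np.array([[1.00, 0.05, 0.00],
              [0.00, 1.00, 0.02],
              [0.01, 0.00, 1.00]])
Q = nearest_orthonormal(A)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: Q is orthogonal
```

Note that Q is optimal among all orthogonal matrices, so in particular it is at least as close to A as the identity is.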
Cirrincione, G. and Cirrincione, M.: The neural solids for optimization problems, Neural Processing Letters 13 (2001), 1–15. https://doi.org/10.1023/A:1009660910503