
Application of the “Winner Takes All” Principle in Wang’s Recurrent Neural Network for the Assignment Problem

Conference paper
Advances in Neural Networks – ISNN 2005 (ISNN 2005)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3496)


Abstract

A technique that combines Wang’s recurrent neural network with the “Winner Takes All” principle is presented to solve the assignment problem. With proper choices of the recurrent neural network’s parameters, the technique proves efficient for solving the assignment problem in real time. When a problem has multiple optimal solutions, or optimal solutions whose costs are very close, Wang’s neural network does not converge; the proposed technique handles these cases. Comparisons between some traditional ways of adjusting the RNN’s parameters are made, and parameter settings based on dispersion measures of the coefficients of the problem’s cost matrix are proposed.
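The abstract only summarizes the method, so the sketch below is illustrative rather than a reproduction of the authors’ procedure. In the form usually quoted for Wang’s assignment network, the 0-1 assignment variables are relaxed to continuous activations that evolve under a penalty on the row- and column-sum constraints plus a cost term that decays over time; a winner-takes-all pass then rounds the relaxed activations to a permutation, which is how ties between multiple or near-equal optimal solutions get broken. The parameter names eta, lam, tau, dt and the greedy tie-breaking rule below are assumptions for illustration, and the paper’s own parameter choices (based on dispersion measures of the cost coefficients) are not reproduced here.

```python
import numpy as np

def wang_rnn_assignment(C, eta=1.0, lam=1.0, tau=10.0, dt=0.01, steps=5000):
    """Euler-integrate a Wang-style recurrent network for the n x n
    assignment problem with cost matrix C, then decode the final
    activations with a greedy winner-takes-all pass.

    Dynamics (the form commonly quoted for Wang's assignment network;
    parameter values here are illustrative only):
        du_ij/dt = -eta * (sum_k x_ik + sum_k x_kj - 2) - lam * C_ij * exp(-t/tau),
    with x = sigmoid(u).
    """
    n = C.shape[0]
    u = np.zeros((n, n))
    t = 0.0
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-u))             # sigmoid activations
        row_sum = x.sum(axis=1, keepdims=True)   # row-constraint terms, shape (n, 1)
        col_sum = x.sum(axis=0, keepdims=True)   # column-constraint terms, shape (1, n)
        du = -eta * (row_sum + col_sum - 2.0) - lam * C * np.exp(-t / tau)
        u += dt * du
        t += dt
    return winner_takes_all(1.0 / (1.0 + np.exp(-u)))

def winner_takes_all(x):
    """Greedy WTA decoding: repeatedly pick the largest remaining
    activation and fix that (row, column) pair, so ties between
    near-optimal assignments are broken deterministically."""
    n = x.shape[0]
    x = x.copy()
    assign = np.zeros((n, n), dtype=int)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(x), x.shape)
        assign[i, j] = 1
        x[i, :] = -np.inf   # row i is now taken
        x[:, j] = -np.inf   # column j is now taken
    return assign

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = rng.random((5, 5))        # random cost matrix for a small example
    P = wang_rnn_assignment(C)
    print(P)
    print("cost:", float((C * P).sum()))
```

The WTA step is what makes the decoded output a valid permutation even when the relaxed activations alone would oscillate between several equally good assignments.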




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Siqueira, P.H., Scheer, S., Steiner, M.T.A. (2005). Application of the “Winner Takes All” Principle in Wang’s Recurrent Neural Network for the Assignment Problem. In: Wang, J., Liao, X., Yi, Z. (eds) Advances in Neural Networks – ISNN 2005. ISNN 2005. Lecture Notes in Computer Science, vol 3496. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11427391_117


  • DOI: https://doi.org/10.1007/11427391_117

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25912-1

  • Online ISBN: 978-3-540-32065-4

  • eBook Packages: Computer Science (R0)
