Decentralized Learning in Wireless Sensor Networks

Conference paper: Adaptive and Learning Agents (ALA 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5924)

Abstract

In this work we present a reinforcement learning algorithm that aims to increase the autonomous lifetime of a Wireless Sensor Network (WSN) and decrease its latency in a decentralized manner. WSNs are collections of sensor nodes that gather environmental data; their main challenges are the limited power supply of the nodes and the need for decentralized control. To address these challenges, we let each sensor node run an algorithm that optimizes the efficiency of a small group of surrounding nodes, so that the performance of the whole system improves. We compare our approach to conventional ad-hoc networks of different sizes and show that, when using our reinforcement learning algorithm, nodes in a WSN develop an energy-saving behaviour on their own and significantly reduce network latency.
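
The full text is not reproduced on this page, so the exact learning rule is not shown here. As a rough, hypothetical illustration of the kind of decentralized per-node learning the abstract describes, the Python sketch below lets each node learn a value for a few duty-cycle choices from a purely local reward that trades off its own residual energy against the latency reported by its neighbours. All names, constants, and the toy energy model are assumptions made for illustration and are not taken from the paper.

# Hypothetical sketch of decentralized per-node learning: every node keeps a
# value estimate for a handful of duty-cycle choices and updates it from a
# local reward. Node, DUTY_CYCLES, local_reward and the toy energy/latency
# model are illustrative assumptions, not the paper's actual algorithm.
import random

DUTY_CYCLES = [0.1, 0.25, 0.5, 1.0]   # fraction of time the radio stays awake
EPSILON = 0.1                         # exploration rate
ALPHA = 0.2                           # learning rate

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.energy = 1.0                              # normalised residual energy
        self.values = {d: 0.0 for d in DUTY_CYCLES}    # value estimate per duty cycle

    def choose_duty_cycle(self):
        # Epsilon-greedy choice over the learned values.
        if random.random() < EPSILON:
            return random.choice(DUTY_CYCLES)
        return max(self.values, key=self.values.get)

    def local_reward(self, duty_cycle, neighbour_latencies):
        # Reward trades off the node's own energy drain against the
        # average latency reported by its surrounding nodes.
        self.energy -= 0.01 * duty_cycle
        avg_latency = sum(neighbour_latencies) / len(neighbour_latencies)
        return self.energy - avg_latency

    def update(self, duty_cycle, reward):
        # Move the value of the chosen duty cycle towards the observed reward.
        self.values[duty_cycle] += ALPHA * (reward - self.values[duty_cycle])

# One learning step for a single node with made-up neighbour latencies.
node = Node(node_id=0)
d = node.choose_duty_cycle()
r = node.local_reward(d, neighbour_latencies=[0.3, 0.5, 0.2])
node.update(d, r)
print(f"node {node.node_id}: duty cycle {d}, values {node.values}")

In a full simulation each node would repeat this step independently, using only information its neighbours already broadcast, which is what keeps such a scheme decentralized.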

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mihaylov, M., Tuyls, K., Nowé, A. (2010). Decentralized Learning in Wireless Sensor Networks. In: Taylor, M.E., Tuyls, K. (eds) Adaptive and Learning Agents. ALA 2009. Lecture Notes in Computer Science, vol 5924. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11814-2_4

  • DOI: https://doi.org/10.1007/978-3-642-11814-2_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-11813-5

  • Online ISBN: 978-3-642-11814-2

  • eBook Packages: Computer Science (R0)
