HWM: a hybrid workload migration mechanism of metadata server cluster in data center

  • Research Article
  • Published in: Frontiers of Computer Science

Abstract

In data centers, big data analytics applications pose a major challenge to massive storage systems. Achieving high availability, high performance, and high scalability is essential for PB-scale or EB-scale storage systems. A metadata server (MDS) cluster architecture is one of the most effective solutions for meeting the requirements of data center applications, and workload migration can provide load balance and energy saving in such cluster systems. In this paper, a hybrid workload migration mechanism for MDS clusters, named HWM, is proposed. In HWM, the workload of an MDS is classified into two categories, metadata service and state service, which can be migrated rapidly from a source MDS to a target MDS in different ways. Firstly, in metadata service migration, all the dirty metadata of one sub file system is flushed to a shared storage pool by the source MDS and then loaded by the target MDS. Secondly, in state service migration, all the states of that sub file system are migrated from the source MDS to the target MDS through the network at file granularity, and all the related structures of these states are then reconstructed in the target MDS. Thirdly, during workload migration, instead of blocking client requests, the source MDS decides which MDS will respond to each request according to the operation type and the migration stage. The proposed mechanism is implemented in the BlueWhale MDS cluster. Performance measurements show that HWM efficiently migrates the workload of an MDS cluster system and provides low-latency access to metadata and states.
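
To make the three phases concrete, the following Python sketch mirrors the flow described above. It is only an illustrative assumption, not the BlueWhale MDS implementation: the class names (SourceMDS, TargetMDS, SubFileSystem), the single "metadata_read" routing rule, and the dictionary standing in for the shared storage pool are all hypothetical.

```python
# A minimal, hypothetical sketch of the HWM flow described in the abstract.
# All class and method names are illustrative assumptions, not the actual
# BlueWhale MDS interfaces.

from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    IDLE = auto()
    METADATA_MIGRATION = auto()   # dirty metadata flushed to shared storage
    STATE_MIGRATION = auto()      # per-file states sent over the network
    DONE = auto()


@dataclass
class SubFileSystem:
    name: str
    dirty_metadata: dict = field(default_factory=dict)   # inode -> attributes
    file_states: dict = field(default_factory=dict)      # path -> open-file state


class TargetMDS:
    def __init__(self):
        self.metadata = {}
        self.states = {}

    def load_metadata(self, shared_pool, fs_name):
        # Load the flushed metadata of the sub file system from the shared pool.
        self.metadata.update(shared_pool[fs_name])

    def receive_state(self, path, state):
        # Reconstruct the in-memory state structures for one file.
        self.states[path] = state


class SourceMDS:
    def __init__(self, shared_pool, target):
        self.shared_pool = shared_pool      # stands in for the shared storage pool
        self.target = target
        self.stage = Stage.IDLE

    def migrate(self, sub_fs: SubFileSystem):
        # 1) Metadata service migration via the shared storage pool.
        self.stage = Stage.METADATA_MIGRATION
        self.shared_pool[sub_fs.name] = dict(sub_fs.dirty_metadata)
        self.target.load_metadata(self.shared_pool, sub_fs.name)

        # 2) State service migration over the network, at file granularity.
        self.stage = Stage.STATE_MIGRATION
        for path, state in sub_fs.file_states.items():
            self.target.receive_state(path, state)   # network transfer elided

        self.stage = Stage.DONE

    def route(self, op_type: str):
        # 3) Requests are not blocked: the source decides who responds,
        #    based on operation type and migration stage.
        if self.stage is Stage.DONE:
            return self.target
        if self.stage is Stage.STATE_MIGRATION and op_type == "metadata_read":
            return self.target   # metadata is already visible on the target
        return self              # source still serves the remaining services
```

The point the sketch mirrors is that client requests are never blocked during migration: the source MDS keeps deciding, per operation type and stage, whether it or the target should answer.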


Acknowledgements

This work was partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA06010401), and the Tianjin Science and Technology Program (15ZXDSGX00020).

Author information

Corresponding author

Correspondence to Lu Xu.

Additional information

Jian Liu received his MS in computer application from Yanshan University, China in 2008. He is currently a PhD candidate at the University of Chinese Academy of Sciences, China, and works in the Data Storage and Management Technology Research Center of the Institute of Computing Technology, Chinese Academy of Sciences. His research interests include load balancing and high availability of cluster file systems.

Huanqing Dong received his PhD in computer science and technology from Northwestern Polytechnical University, China in 2005. He is currently a senior engineer at the Data Storage and Management Technology Research Center of the Institute of Computing Technology, Chinese Academy of Sciences, China. His research interests include cluster file systems and network storage.

Junwei Zhang received his PhD in computer architecture from the Graduate University of Chinese Academy of Sciences, China in 2010. He is currently an associate professor at the Data Storage and Management Technology Research Center of the Institute of Computing Technology, Chinese Academy of Sciences. His research interests include cluster file systems and network storage.

Zhenjun Liu received his PhD in computer architecture from the Graduate University of Chinese Academy of Sciences, China in 2006. He is currently an associate professor at the Data Storage and Management Technology Research Center of the Institute of Computing Technology, Chinese Academy of Sciences. His research interests include WAN storage, distributed file systems, and RAID management.

Lu Xu received his PhD in computer systems software from Purdue University, USA in 1995. He is currently a professor and the director of the Data Storage and Management Technology Research Center of the Institute of Computing Technology, Chinese Academy of Sciences, China. He is an associate director of the Division of Information Storage Technology, China Computer Federation. His research interests include computer architecture, high performance network storage, and computer system software.

Cite this article

Liu, J., Dong, H., Zhang, J. et al. HWM: a hybrid workload migration mechanism of metadata server cluster in data center. Front. Comput. Sci. 11, 75–87 (2017). https://doi.org/10.1007/s11704-016-6036-y

