DOI: 10.1145/2987550.2987579
research-article
Public Access

Towards Weakly Consistent Local Storage Systems

Published: 05 October 2016

Abstract

Heterogeneity is a fact of life for modern storage servers. For example, a server may spread terabytes of data across many different storage media, ranging from magnetic disks and DRAM to NAND-based solid-state drives (SSDs) and hybrid drives that package various combinations of these technologies. It follows that access latencies can vary hugely depending on which medium the data resides on. At the same time, modern storage systems naturally retain older versions of data due to the prevalence of log-structured designs and caches in software and hardware layers. In a sense, a contemporary storage system is very similar to a small-scale distributed system, opening the door to consistency/performance trade-offs. In this paper, we propose a class of local storage systems called StaleStores that support relaxed consistency, returning stale data for better performance. We describe several examples of StaleStores, and show via emulations that serving stale data can improve access latency by 35% to 20X. We describe a particular StaleStore called Yogurt, a weakly consistent local block storage system. Depending on the application's consistency requirements (e.g. bounded staleness, monotonic reads, read-my-writes, etc.), Yogurt queries the access costs for different versions of data within tolerable staleness bounds and returns the fastest version. We show that a distributed key-value store running on top of Yogurt obtains a 6X speed-up in access latency by trading off consistency and performance within individual storage servers.
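The version-selection idea the abstract describes can be sketched as follows. This is a minimal illustration of the general technique, not the paper's actual API: the `Version` record, the `pick_version` function, and the latency numbers are all invented for the example.

```python
# Hypothetical sketch (not Yogurt's real interface): among the retained
# versions of a block, pick the cheapest one whose staleness stays within
# the application's bound, as the abstract describes Yogurt doing.

from dataclasses import dataclass

@dataclass
class Version:
    seq: int        # monotonically increasing version number
    medium: str     # where this copy lives, e.g. "dram", "ssd", "disk"
    cost_us: float  # estimated access latency in microseconds

def pick_version(versions, latest_seq, staleness_bound):
    """Return the cheapest version whose staleness (latest_seq - seq)
    is within the bound; fall back to the latest version otherwise."""
    ok = [v for v in versions if latest_seq - v.seq <= staleness_bound]
    if not ok:  # nothing tolerably stale: must read the latest copy
        ok = [v for v in versions if v.seq == latest_seq]
    return min(ok, key=lambda v: v.cost_us)

versions = [
    Version(seq=7, medium="disk", cost_us=8000.0),  # latest, but slow
    Version(seq=6, medium="ssd",  cost_us=90.0),
    Version(seq=5, medium="dram", cost_us=0.2),
]

print(pick_version(versions, latest_seq=7, staleness_bound=2).medium)  # dram
print(pick_version(versions, latest_seq=7, staleness_bound=1).medium)  # ssd
print(pick_version(versions, latest_seq=7, staleness_bound=0).medium)  # disk
```

A tighter staleness bound forces reads toward the (slower) latest copy; a looser bound lets the system exploit a fast but stale cached version, which is the consistency/performance trade-off the paper quantifies.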


Published In

SoCC '16: Proceedings of the Seventh ACM Symposium on Cloud Computing
October 2016
534 pages
ISBN:9781450345255
DOI:10.1145/2987550
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. weak consistency
  2. local storage

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SoCC '16: ACM Symposium on Cloud Computing
October 5-7, 2016
Santa Clara, CA, USA

Acceptance Rates

SoCC '16 Paper Acceptance Rate: 38 of 151 submissions, 25%
Overall Acceptance Rate: 169 of 722 submissions, 23%
