Abstract
Resource sharing, multi-tenant interference, and bursty workloads in cloud computing lead to high tail latency, which severely degrades user quality of experience (QoE), for which response latency is a critical factor. Considerable research effort has been devoted to reducing tail latency and improving user QoE, for example through software-defined cloud computing (SDC). However, traditional availability analysis of cloud computing captures only failure-repair behavior and ignores user QoE. In this paper, we propose a conceptual framework, experience availability, to assess the effectiveness of SDC by taking availability and response latency into account simultaneously. We review related work on availability models and methods for cloud systems, and discuss open problems in evaluating experience availability in SDC. We also present preliminary results to demonstrate the feasibility of our ideas.
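As a rough, illustrative sketch only (the symbols A_E, R, and the latency budget \tau are our own notation, not the paper's definition), a latency-aware availability measure in the spirit of experience availability could combine the classical up/down view with a tail-latency requirement:

\[
A_E(\tau) \;=\; \lim_{t \to \infty} \Pr\bigl\{\, \text{the service is up at time } t \ \wedge\ R(t) \le \tau \,\bigr\},
\]

so that the service counts as "available" only when it is both operational and responding within the budget \tau (e.g., a 99th-percentile latency target); classical steady-state availability is recovered as \tau \to \infty.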
Cite this article
Cai, BL., Zhang, RQ., Zhou, XB. et al. Experience Availability: Tail-Latency Oriented Availability in Software-Defined Cloud Computing. J. Comput. Sci. Technol. 32, 250–257 (2017). https://doi.org/10.1007/s11390-017-1719-x