DOI: 10.5555/3291291.3291304

Evaluating efficiency, effectiveness and satisfaction of AWS and Azure from the perspective of cloud beginners

Published: 29 October 2018

Abstract

Quality has long been regarded as an important driver of cloud adoption. In particular, quality in use (QiU) of cloud platforms may drive cloud beginners to the platform that offers the best cloud experience. Cloud beginners are critical to the cloud market because they currently represent nearly a third of cloud users. We carried out three experiments to measure the QiU (dependent variable) of public cloud platforms (independent variable) regarding efficiency, effectiveness, and satisfaction. AWS EC2 and Azure Virtual Machines are the two cloud services used as representative proxies for evaluating the cloud platforms (treatments). Eleven undergraduate students with limited cloud knowledge (participants) manually created 152 VMs (task) using the web interface of each cloud platform (instrument), following seven different configurations (trials) per platform. Whereas AWS performed significantly better than Azure for efficiency (p-value not exceeding 0.001, A-statistic = 0.68), we could not find a significant difference between the platforms for effectiveness (p-value exceeding 0.05), although the effect size was relevant (odds ratio = 0.41). Regarding satisfaction, most of our participants perceived AWS as (i) having the GUI that best supports user interaction, (ii) being the easiest platform to use, and (iii) being their preferred platform for creating VMs. If confirmed by independent replications, our results suggest that AWS outperforms Azure regarding QiU. Therefore, cloud beginners might have a better cloud experience by starting their cloud projects on AWS rather than Azure. In addition, our results may help to explain AWS's cloud market leadership.
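The effect sizes reported above (A-statistic = 0.68 for efficiency, odds ratio = 0.41 for effectiveness) can be computed directly from raw trial data. Below is a minimal illustrative sketch in Python; the function names and sample values are ours, not taken from the paper.

```python
def vargha_delaney_a(x, y):
    """Vargha-Delaney A effect size: the probability that a value drawn
    from sample x exceeds a value drawn from sample y (ties count half).
    A = 0.5 means no effect; larger values favor x."""
    wins = sum(1.0 if xi > yi else 0.5 if xi == yi else 0.0
               for xi in x for yi in y)
    return wins / (len(x) * len(y))

def odds_ratio(success_a, failure_a, success_b, failure_b):
    """Odds ratio for a 2x2 success/failure table, e.g. trials in which
    a VM was created correctly on each platform."""
    return (success_a / failure_a) / (success_b / failure_b)

# Identical samples give A = 0.5 (no difference between groups);
# an A of 0.68, as reported for efficiency, indicates a clear
# advantage for the first group.
print(vargha_delaney_a([1, 2, 3], [1, 2, 3]))  # 0.5
```

An A-statistic of 0.68 means that, for a randomly chosen trial from each platform, AWS was faster about 68% of the time.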



Published In

CASCON '18: Proceedings of the 28th Annual International Conference on Computer Science and Software Engineering
October 2018
439 pages

Publisher

IBM Corp.

United States

Author Tags

  1. cloud platforms
  2. experimentation
  3. quality in use

Qualifiers

  • Research-article

Acceptance Rates

Overall Acceptance Rate 24 of 90 submissions, 27%
