A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach
<p>Figure 1. Respondent experience in the IT sector.</p>
<p>Figure 2. Response to questions.</p>
<p>Figure 3. Responses for each survey by question, illustrating the distribution of answers based on job role. (<b>a</b>) <b>Q3</b> Lack of skilled resources prevents automated testing from being used; (<b>b</b>) <b>Q4</b> Individuals not having enough time prevents the use of test automation; (<b>c</b>) <b>Q5</b> Difficulties in preparing test data and environments prevent their use; (<b>d</b>) <b>Q6</b> Not having the right automation tools and frameworks prevents use; (<b>e</b>) <b>Q7</b> Difficulty integrating different automation tools/frameworks prevents their use; (<b>f</b>) <b>Q8</b> Requirements change too often in software projects, making automated tests too time-consuming to adapt quickly; (<b>g</b>) <b>Q9</b> Not realising and understanding the benefits of test automation prevents their use; (<b>h</b>) <b>Q10</b> A lack of support from senior management prevents their use; (<b>i</b>) <b>Q11</b> Commercial tools are too expensive, which prevents their use; (<b>j</b>) <b>Q12</b> Open-source tools are not easy to use; (<b>k</b>) <b>Q13</b> Test automation tools require a high level of expertise, which is often not available; (<b>l</b>) <b>Q14</b> Automated testing requires strong programming skills; (<b>m</b>) <b>Q15</b> Automated testing techniques are time-consuming to learn; (<b>n</b>) <b>Q16</b> Automated testing tools and techniques lack the necessary functionality; (<b>o</b>) <b>Q17</b> They are not reliable enough to be suitable for use; (<b>p</b>) <b>Q18</b> They lack support for testing non-functional requirements (usability, safety, security, etc.); (<b>q</b>) <b>Q19</b> Generating test cases/test scripts is expensive; (<b>r</b>) <b>Q20</b> They require high maintenance costs for test cases, test scripts and test data; (<b>s</b>) <b>Q21</b> Automated testing tools and techniques change too often, introducing problems that need fixing; (<b>t</b>) <b>Q22</b> Difficult to reuse test scripts and data across stages of testing.</p>
<p>Figure 4. Responses for each survey by question, illustrating the distribution of answers based on number of years of experience. Panels (<b>a</b>)–(<b>t</b>) correspond to <b>Q3</b>–<b>Q22</b> as listed for Figure 3.</p>
Abstract
1. Introduction
2. Related Work
2.1. Requirement for Automated Software Testing
2.2. Current Limitations of Automated Testing
2.3. Survey-Based Research
3. Materials and Methods
3.1. Questions and Process
3.2. Participants
3.3. Results: Stage 1
3.3.1. Time
3.3.2. Cost
3.3.3. Tools and Techniques
3.3.4. Utilisation
3.3.5. Organisation and Capabilities
3.4. Results: Stage 2
4. Discussion and Findings
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Response | Summary Points |
---|---|
2 | |
3 | |
5 | |
15 | |
20 | |
24 | |
33 | |
35 | |
39 | |
43 | |
49 | |
51 | |
55 | |
56 | |
64 | |
66 | |
75 | |
77 | |
81 | |
References
- Charette, R. Why software fails. IEEE Spectr. 2005, 42, 42–49. [Google Scholar] [CrossRef]
- Ammann, P.; Offutt, J. Introduction to Software Testing; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
- Dustin, E.; Rashka, J.; Paul, J. Automated Software Testing: Introduction, Management, and Performance; Addison-Wesley Professional: Boston, MA, USA, 1999. [Google Scholar]
- Elghondakly, R.; Moussa, S.; Badr, N. Waterfall and agile requirements-based model for automated test cases generation. In Proceedings of the 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 12–14 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 607–612. [Google Scholar]
- Al-Saqqa, S.; Sawalha, S.; AbdelNabi, H. Agile software development: Methodologies and trends. Int. J. Interact. Mob. Technol. 2020, 14, 246–270. [Google Scholar] [CrossRef]
- Rafi, D.M.; Moses, K.R.K.; Petersen, K.; Mäntylä, M.V. Benefits and limitations of automated software testing: Systematic literature review and practitioner survey. In Proceedings of the 2012 7th International Workshop on Automation of Software Test (AST), Zurich, Switzerland, 2–3 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 36–42. [Google Scholar]
- Asfaw, D. Benefits of Automated Testing Over Manual Testing. Int. J. Innov. Res. Inf. Secur. 2015, 2, 5–13. [Google Scholar]
- Collins, E.F.; De Lucena, V.F. Software test automation practices in agile development environment: An industry experience report. In Proceedings of the 2012 7th International Workshop on Automation of Software Test (AST), Zurich, Switzerland, 2–3 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 57–63. [Google Scholar]
- Wiklund, K.; Eldh, S.; Sundmark, D.; Lundqvist, K. Impediments for software test automation: A systematic literature review. Softw. Test. Verif. Reliab. 2017, 27, e1639. [Google Scholar] [CrossRef]
- SmartBear. State of Testing; Technical Report; SmartBear: Somerville, MA, USA, 2018. [Google Scholar]
- Taipale, O.; Kasurinen, J.; Karhu, K.; Smolander, K. Trade-off between automated and manual software testing. Int. J. Syst. Assur. Eng. Manag. 2011, 2, 114–125. [Google Scholar] [CrossRef]
- Nass, M.; Alégroth, E.; Feldt, R. Why many challenges with GUI test automation (will) remain. Inf. Softw. Technol. 2021, 138, 106625. [Google Scholar] [CrossRef]
- Khan, A.Z.; Iftikhar, S.; Bokhari, R.H.; Khan, Z.I. Issues/challenges of automated software testing: A case study. Pak. J. Comput. Inf. Syst. 2018, 3, 61–75. [Google Scholar]
- Evans, I.; Porter, C.; Micallef, M. Scared, frustrated and quietly proud: Testers’ lived experience of tools and automation. In Proceedings of the 32nd European Conference on Cognitive Ergonomics, Siena, Italy, 26–29 April 2021; pp. 1–7. [Google Scholar]
- Li, B.; Zhao, Q.; Jiao, S.; Liu, X. DroidPerf: Profiling Memory Objects on Android Devices. In Proceedings of the 29th Annual International Conference on Mobile Computing and Networking, Madrid, Spain, 2–6 October 2023; pp. 1–15. [Google Scholar]
- Li, B.; Xu, H.; Zhao, Q.; Su, P.; Chabbi, M.; Jiao, S.; Liu, X. OJXPerf: Featherlight object replica detection for Java programs. In Proceedings of the 44th International Conference on Software Engineering, Pittsburgh, PA, USA, 21–29 May 2022; pp. 1558–1570. [Google Scholar]
- Hynninen, T.; Kasurinen, J.; Knutas, A.; Taipale, O. Guidelines for software testing education objectives from industry practices with a constructive alignment approach. In Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, Larnaca, Cyprus, 2–4 July 2018; pp. 278–283. [Google Scholar]
- Felderer, M.; Büchler, M.; Johns, M.; Brucker, A.D.; Breu, R.; Pretschner, A. Security testing: A survey. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2016; Volume 101, pp. 1–51. [Google Scholar]
- Larusdottir, M.K.; Bjarnadottir, E.R.; Gulliksen, J. The focus on usability in testing practices in industry. In Human-Computer Interaction, Proceedings of the Second IFIP TC 13 Symposium, HCIS 2010, Held as Part of WCC 2010, Brisbane, Australia, 20–23 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 98–109. [Google Scholar]
- Hourani, H.; Hammad, A.; Lafi, M. The impact of artificial intelligence on software testing. In Proceedings of the 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), Amman, Jordan, 9–11 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 565–570. [Google Scholar]
- Vos, T.E.; Marin, B.; Escalona, M.J.; Marchetto, A. A methodological framework for evaluating software testing techniques and tools. In Proceedings of the 2012 12th International Conference on Quality Software, Xi’an, China, 27–29 August 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 230–239. [Google Scholar]
- Eldh, S.; Hansson, H.; Punnekkat, S.; Pettersson, A.; Sundmark, D. A framework for comparing efficiency, effectiveness and applicability of software testing techniques. In Proceedings of the Testing: Academic & Industrial Conference-Practice And Research Techniques (TAIC PART’06), Windsor, UK, 29–31 August 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 159–170. [Google Scholar]
- Infosys. Infosys Test Automation Accelerator. 2019. Available online: https://www.infosys.com/IT-services/validation-solutions/Documents/infosys-test-automation-accelerator.pdf (accessed on 20 November 2023).
- Kumar, D.; Mishra, K. The Impacts of Test Automation on Software’s Cost, Quality and Time to Market. Procedia Comput. Sci. 2016, 79, 8–15. [Google Scholar] [CrossRef]
- Mittal, V.; Garg, N. Test Automation using Selenium Webdriver 3.0 with C#; AdactIn Group Pty Limited: Parramatta, Australia, 2018. [Google Scholar]
- Vogel-Heuser, B.; Fay, A.; Schaefer, I.; Tichy, M. Evolution of software in automated production systems: Challenges and research directions. J. Syst. Softw. 2015, 110, 54–84. [Google Scholar] [CrossRef]
- Zhou, Z.Q.; Sinaga, A.; Susilo, W.; Zhao, L.; Cai, K.Y. A cost-effective software testing strategy employing online feedback information. Inf. Sci. 2018, 422, 318–335. [Google Scholar] [CrossRef]
- Panichella, S.; Di Sorbo, A.; Guzman, E.; Visaggio, C.A.; Canfora, G.; Gall, H.C. How can I improve my app? Classifying user reviews for software maintenance and evolution. In Proceedings of the 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME), Bremen, Germany, 29 September–1 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 281–290. [Google Scholar]
- Tracey, N.; Clark, J.; Mander, K.; McDermid, J. An automated framework for structural test-data generation. In Proceedings of the 13th IEEE International Conference on Automated Software Engineering (Cat. No. 98EX239), Honolulu, HI, USA, 13–16 October 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 285–288. [Google Scholar]
- Fewster, M.; Graham, D. Software Test Automation: Effective Use of Test Execution Tools; ACM Press: New York, NY, USA; Addison-Wesley Publishing Co.: Boston, MA, USA, 1999. [Google Scholar]
- Graham, D.; Fewster, M. Experiences of Test Automation: Case Studies of Software Test Automation; Addison-Wesley Professional: Boston, MA, USA, 2012. [Google Scholar]
- Böhme, M.; Paul, S. A probabilistic analysis of the efficiency of automated software testing. IEEE Trans. Softw. Eng. 2015, 42, 345–360. [Google Scholar] [CrossRef]
- Rahman, A.A.; Hasim, N. Defect Management Life Cycle Process for Software Quality Improvement. In Proceedings of the 2015 3rd International Conference on Artificial Intelligence, Modelling and Simulation (AIMS), Kota Kinabalu, Malaysia, 2–4 December 2015; pp. 241–244. [Google Scholar] [CrossRef]
- Garrett, T. Useful Automated Software Testing Metrics. Softw. Test. Geek 2011. [Google Scholar]
- Black, R. Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing; Rex Black Inc.: Dallas, TX, USA, 2002. [Google Scholar]
- Berner, S.; Weber, R.; Keller, R.K. Observations and lessons learned from automated testing. In Proceedings of the 27th International Conference on Software Engineering, St. Louis, MO, USA, 15–21 May 2005; ACM: New York, NY, USA, 2005; pp. 571–579. [Google Scholar]
- Jansing, D.; Novillo, J.; Cavallo, R.; Spetka, S. Enhancing the Effectiveness of Software Test Automation. Ph.D. Thesis, State University of New York Polytechnic Institute, Utica, NY, USA, 2015. [Google Scholar]
- Dustin, E.; Garrett, T.; Gauf, B. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality; Pearson Education: London, UK, 2009. [Google Scholar]
- Melton, J.R. The Hidden Benefits of Automated Testing. In Proceedings of the 2015 Aerospace Testing Seminar, CVENTS, Los Angeles, CA, USA, 27–29 October 2015. [Google Scholar]
- Kasurinen, J.; Taipale, O.; Smolander, K. Software test automation in practice: Empirical observations. Adv. Softw. Eng. 2010, 2010, 620836. [Google Scholar] [CrossRef]
- Leitner, A.; Ciupa, I.; Meyer, B.; Howard, M. Reconciling manual and automated testing: The autotest experience. In Proceedings of the 2007 40th Annual Hawaii International Conference on System Sciences (HICSS’07), Big Island, HI, USA, 3–6 January 2007; IEEE: Piscataway, NJ, USA, 2007; p. 261a. [Google Scholar]
- Monier, M.; El-mahdy, M.M. Evaluation of automated web testing tools. Int. J. Comput. Appl. Technol. Res. 2015, 4, 405–408. [Google Scholar] [CrossRef]
- Garousi, V.; Felderer, M. Worlds Apart: Industrial and Academic Focus Areas in Software Testing. IEEE Softw. 2017, 34, 38–45. [Google Scholar] [CrossRef]
- Zou, W.; Lo, D.; Chen, Z.; Xia, X.; Feng, Y.; Xu, B. How practitioners perceive automated bug report management techniques. IEEE Trans. Softw. Eng. 2018, 46, 836–862. [Google Scholar] [CrossRef]
- Lo, D.; Nagappan, N.; Zimmermann, T. How practitioners perceive the relevance of software engineering research. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, Bergamo, Italy, 30 August 2015; pp. 415–425. [Google Scholar]
- Meyer, A.N.; Fritz, T.; Murphy, G.C.; Zimmermann, T. Software developers’ perceptions of productivity. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, Hong Kong, China, 16–21 November 2014; pp. 19–29. [Google Scholar]
- Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
- Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2016, 374, 20150202. [Google Scholar] [CrossRef]
- Boone, H.N.; Boone, D.A. Analyzing likert data. J. Ext. 2012, 50, 48. [Google Scholar] [CrossRef]
- Etikan, I.; Musa, S.A.; Alkassim, R.S. Comparison of convenience sampling and purposive sampling. Am. J. Theor. Appl. Stat. 2016, 5, 1–4. [Google Scholar] [CrossRef]
- Faraj, S.; Sproull, L. Coordinating expertise in software development teams. Manag. Sci. 2000, 46, 1554–1568. [Google Scholar] [CrossRef]
- Gaskin, C.J.; Happell, B. On exploratory factor analysis: A review of recent evidence, an assessment of current practice, and recommendations for future use. Int. J. Nurs. Stud. 2014, 51, 511–521. [Google Scholar] [CrossRef] [PubMed]
- Nunnally, J.C.; Ira, H.B. Psychometric Theory; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
- Ferketich, S. Focus on psychometrics. Aspects of item analysis. Res. Nurs. Health 1991, 14, 165–168. [Google Scholar] [CrossRef] [PubMed]
- Cortina, J.M. What is coefficient alpha? An examination of theory and applications. J. Appl. Psychol. 1993, 78, 98. [Google Scholar] [CrossRef]
- Streiner, D.L.; Norman, G.R.; Cairney, J. Health Measurement Scales: A Practical Guide to Their Development and Use; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
- Ferguson, E.; Cox, T. Exploratory factor analysis: A users’ guide. Int. J. Sel. Assess. 1993, 1, 84–94. [Google Scholar] [CrossRef]
- Tobias, S.; Carlson, J.E. Brief report: Bartlett’s test of sphericity and chance findings in factor analysis. Multivar. Behav. Res. 1969, 4, 375–377. [Google Scholar] [CrossRef]
- Tabachnick, B.G.; Fidell, L.S.; Ullman, J.B. Using Multivariate Statistics; Pearson: Boston, MA, USA, 2007; Volume 5. [Google Scholar]
- Warren, C.R.; Lumsden, C.; O’Dowd, S.; Birnie, R.V. ‘Green on green’: Public perceptions of wind power in Scotland and Ireland. J. Environ. Plan. Manag. 2005, 48, 853–875. [Google Scholar] [CrossRef]
Construct | Questions | Sources |
---|---|---|
Biographic | | [32] |
Time | | [24,36,40] |
Cost | | [22,27,30,31,38] |
Tools and Techniques | | [6,39,44] |
Utilisation | | [34,35,37] |
Organisation and Capability | | [11,28] |
Job Role | # of Participants |
---|---|
CEO | 3 |
Consultant | 3 |
Senior Consultant | 1 |
Manager | 7 |
Student | 1 |
QA | 7 |
Senior QA | 8 |
Tester/Engineer/Analyst/Architect/Automation | 27 |
Senior Tester/Engineer/Analyst/Architect/Automation | 24 |
Total | 81 |
Cronbach’s Alpha | Number of Items |
---|---|
0.860 | 20 |
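The reliability figure above (Cronbach’s α = 0.860 across 20 items) can be reproduced from a raw response matrix. The sketch below is a standard numpy implementation of Cronbach’s alpha, not code from the study, and the toy data are illustrative only.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of scale items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: four respondents answering two partially consistent items
print(round(cronbach_alpha([[1, 2], [2, 1], [3, 3], [4, 4]]), 4))  # 0.8889
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, so 0.860 over 20 items indicates the survey scale holds together well.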
Test Technique | Result |
---|---|
Kaiser–Meyer–Olkin Measure of Sampling Adequacy | 0.778 |
Bartlett’s Test of Sphericity Approx. Chi-Square | 553.333 |
Significance | 0.000 |
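The Bartlett statistic reported above can be computed directly from the inter-item correlation matrix. The following is a generic numpy sketch of Bartlett’s test of sphericity (approximate chi-square statistic and degrees of freedom only), not the statistical package routine used in the study.

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an (n_respondents, n_items) matrix.

    Tests whether the correlation matrix R differs from the identity matrix,
    i.e., whether items are correlated enough to justify factor analysis.
    Returns the approximate chi-square statistic and its degrees of freedom.
    """
    data = np.asarray(data, dtype=float)
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    # chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, chi-square with p(p - 1)/2 df
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) // 2
    return chi_square, dof
```

A significant result (as in the table, p &lt; 0.001) rejects the hypothesis that the items are uncorrelated, which together with the KMO value of 0.778 supports proceeding with PCA.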
Question | Nonsoftware Factors | Software Factors | Communalities |
---|---|---|---|
Q20 | 0.786 | | 0.606 |
Q19 | 0.708 | | 0.568 |
Q13 | 0.688 | | 0.499 |
Q14 | 0.623 | | 0.384 |
Q21 | 0.608 | | 0.375 |
Q8 | 0.572 | | 0.294 |
Q15 | 0.515 | | 0.347 |
Q11 | 0.492 | | 0.217 |
Q18 | 0.483 | | 0.397 |
Q4 | 0.447 | | 0.278 |
Q22 | 0.419 | 0.362 | 0.404 |
Q6 | | 0.741 | 0.496 |
Q3 | | 0.662 | 0.401 |
Q9 | | 0.618 | 0.371 |
Q7 | | 0.617 | 0.452 |
Q10 | | 0.504 | 0.243 |
Q5 | | 0.500 | 0.386 |
Q16 | | 0.487 | 0.419 |
Q17 | | 0.469 | 0.303 |
Q12 | 0.343 | 0.380 | 0.346 |
Eigenvalues | 5.815 | 1.971 | |
Percent variance explained | 29.076 | 9.857 | |
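The eigenvalue and percent-variance rows above follow from the eigendecomposition of the inter-item correlation matrix. The numpy sketch below illustrates that step only; it omits any rotation applied to the reported loadings and is not the authors’ analysis pipeline.

```python
import numpy as np

def pca_eigenvalues(data):
    """Eigenvalues of the inter-item correlation matrix and the percentage of
    total variance each principal component explains."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
    pct_variance = 100.0 * eigenvalues / eigenvalues.sum()  # trace of R equals p
    return eigenvalues, pct_variance
```

Components with eigenvalues above 1 are conventionally retained (the Kaiser criterion), which is consistent with the two components reported above (5.815 and 1.971).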
 | Technical Role | | | Nontechnical Role | | |
Component | % Disagree | % Neutral | % Agree | % Disagree | % Neutral | % Agree |
---|---|---|---|---|---|---|
Nonsoftware | 35 | 25 | 41 | 22 | 23 | 55 |
Software | 50 | 21 | 29 | 31 | 24 | 45 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Murazvu, G.; Parkinson, S.; Khan, S.; Liu, N.; Allen, G. A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach. Software 2024, 3, 1–27. https://doi.org/10.3390/software3010001