Abstract
Background. When delivering an embedded product such as a mobile phone, third-party products, like games, are often bundled with it in the form of Java MIDlets. Verifying compatibility between the runtime platform and the MIDlets is a labour-intensive task if input data must be generated manually for thousands of MIDlets. Aim. To make the verification more efficient, we investigate four automated input generation methods that do not require extensive modeling: random and feedback-based generation, each with and without a constant startup sequence. Method. We evaluate the methods in a factorial design experiment, with manual input generation as a reference, running one original experiment and a partial replication. Result. The startup sequence yields good code coverage for the selected MIDlets. The feedback method gives somewhat better code coverage than the random method, but requires real-time code coverage measurements, which slow down test execution. Conclusion. The random method with a startup sequence is the best trade-off in the current setting.
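To make the four methods concrete, the following minimal Java sketch (not part of the original paper) outlines random and feedback-based key-event generation, each optionally preceded by a constant startup sequence. The KeySink and CoverageProbe interfaces, the key-code table, and the stall-and-restart feedback scheme are assumptions for illustration; the abstract does not fix the paper's exact harness or feedback algorithm.

import java.util.List;
import java.util.Random;

public class InputGenerator {

    /** Hypothetical hook delivering one key event to the MIDlet under test. */
    interface KeySink { void press(int keyCode); }

    /** Hypothetical hook returning cumulative code coverage in [0, 1]. */
    interface CoverageProbe { double coverage(); }

    // MIDP key codes for 0-9, * and # (javax.microedition.lcdui.Canvas).
    private static final int[] KEYS = {
        48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 42, 35
    };

    private final Random rnd = new Random();

    /** Replays a constant, pre-recorded startup sequence (e.g. dismissing
     *  menus and dialogs) before automated generation begins. */
    void runStartup(List<Integer> startup, KeySink sink) {
        for (int key : startup) {
            sink.press(key);
        }
    }

    /** Random method: fire uniformly chosen key events. */
    void runRandom(KeySink sink, int events) {
        for (int i = 0; i < events; i++) {
            sink.press(KEYS[rnd.nextInt(KEYS.length)]);
        }
    }

    /** Feedback method: poll coverage after each event; if coverage stalls
     *  for `patience` events, replay the startup sequence to escape an
     *  unproductive UI state. One plausible scheme only. */
    void runFeedback(KeySink sink, CoverageProbe probe,
                     List<Integer> startup, int events, int patience) {
        double best = probe.coverage();
        int stalled = 0;
        for (int i = 0; i < events; i++) {
            sink.press(KEYS[rnd.nextInt(KEYS.length)]);
            double c = probe.coverage(); // real-time measurement: this polling
            if (c > best) {              // is what slows the feedback method down
                best = c;
                stalled = 0;
            } else if (++stalled >= patience) {
                runStartup(startup, sink);
                stalled = 0;
            }
        }
    }
}

The coverage() call inside the feedback loop illustrates the trade-off reported in the results: each poll costs run time, so the random method executes more events per unit time even though the feedback method covers somewhat more code per event.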
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Runeson, P., Heed, P., Westrup, A. (2011). A Factorial Experimental Evaluation of Automated Test Input Generation. In: Caivano, D., Oivo, M., Baldassarre, M.T., Visaggio, G. (eds) Product-Focused Software Process Improvement. PROFES 2011. Lecture Notes in Computer Science, vol 6759. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21843-9_18
Print ISBN: 978-3-642-21842-2
Online ISBN: 978-3-642-21843-9