research-article

Reinforcement Learning Informed Evolutionary Search for Autonomous Systems Testing

Published: 03 December 2024

Abstract

Evolutionary search (ES)-based techniques are commonly used for testing autonomous robotic systems. However, these approaches often rely on computationally expensive simulator-based models to evaluate test scenarios. To improve the computational efficiency of search-based testing, we propose augmenting the ES with a reinforcement learning (RL) agent trained using surrogate rewards derived from domain knowledge. In our approach, known as RIGAA (Reinforcement learning Informed Genetic Algorithm for Autonomous systems testing), we first train an RL agent to learn useful constraints of the problem and then use it to produce a part of the initial population of the search algorithm. By incorporating an RL agent into the search process, we aim to guide the algorithm towards promising regions of the search space from the start, enabling more efficient exploration of the solution space. We evaluate RIGAA on two case studies: maze generation for an autonomous “Ant” robot and road topology generation for an autonomous vehicle lane-keeping assist system. In both case studies, RIGAA reveals more failures, with a higher level of diversity, than the compared baselines. RIGAA also outperforms state-of-the-art tools for vehicle lane-keeping assist system testing, such as AmbieGen, CRAG, WOGAN, and Frenetic, in terms of the number of failures revealed within a two-hour budget.
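The seeding idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the RIGAA implementation: `rl_agent_scenario` is a hypothetical stand-in for a trained RL policy that emits constraint-satisfying test scenarios, and the scenario encoding (a fixed-length list of turn angles) is invented for the example.

```python
import random

def rl_agent_scenario():
    """Hypothetical stand-in for a trained RL policy: produces a scenario
    biased toward challenging regions (here, sharp turn angles in degrees)."""
    return [random.uniform(60, 90) for _ in range(5)]

def random_scenario():
    """Baseline random generator over the full parameter range."""
    return [random.uniform(0, 90) for _ in range(5)]

def seeded_initial_population(pop_size, rl_fraction=0.4):
    """Build a GA initial population where a fraction of individuals
    comes from the RL agent and the rest are random, then shuffle so
    selection does not depend on insertion order."""
    n_rl = int(pop_size * rl_fraction)
    population = [rl_agent_scenario() for _ in range(n_rl)]
    population += [random_scenario() for _ in range(pop_size - n_rl)]
    random.shuffle(population)
    return population

pop = seeded_initial_population(20)
print(len(pop))  # 20
```

The mixed population is then handed to a standard evolutionary loop; only the initialization step changes, which is what lets the agent-informed individuals steer the early search toward promising regions.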


Cited By

  • (2024) Can search-based testing with pareto optimization effectively cover failure-revealing test inputs? Empirical Software Engineering 30, 1. DOI: 10.1007/s10664-024-10564-3. Online publication date: 16-Nov-2024.


Published In

ACM Transactions on Software Engineering and Methodology, Volume 33, Issue 8
November 2024
975 pages
EISSN: 1557-7392
DOI: 10.1145/3613733

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 03 December 2024
Online AM: 27 July 2024
Accepted: 26 June 2024
Revised: 29 April 2024
Received: 20 August 2023
Published in TOSEM Volume 33, Issue 8


Author Tags

  1. test scenario generation
  2. autonomous systems
  3. virtual road topologies
  4. virtual maze environments
  5. reinforcement learning
  6. evolutionary search

Qualifiers

  • Research-article

Funding Sources

  • Natural Sciences and Engineering Research Council of Canada (NSERC)
  • Canadian Institute for Advanced Research (CIFAR)

