Nothing Special   »   [go: up one dir, main page]

skip to main content
research-article

Generative AI for Self-Adaptive Systems: State of the Art and Research Roadmap

Published: 30 September 2024 Publication History

Abstract

Self-adaptive systems (SASs) are designed to handle changes and uncertainties through a feedback loop with four core functionalities: monitoring, analyzing, planning, and execution. Recently, generative artificial intelligence (GenAI), especially the area of large language models, has shown impressive performance in data comprehension and logical reasoning. These capabilities are highly aligned with the functionalities required in SASs, suggesting a strong potential to employ GenAI to enhance SASs. However, the specific benefits and challenges of employing GenAI in SASs remain unclear. Yet, providing a comprehensive understanding of these benefits and challenges is complex due to several reasons: limited publications in the SAS field, the technological and application diversity within SASs, and the rapid evolution of GenAI technologies. To that end, this article aims to provide researchers and practitioners a comprehensive snapshot that outlines the potential benefits and challenges of employing GenAI’s within SAS. Specifically, we gather, filter, and analyze literature from four distinct research fields and organize them into two main categories to potential benefits: (i) enhancements to the autonomy of SASs centered around the specific functions of the MAPE-K feedback loop, and (ii) improvements in the interaction between humans and SASs within human-on-the-loop settings. From our study, we outline a research roadmap that highlights the challenges of integrating GenAI into SASs. The roadmap starts with outlining key research challenges that need to be tackled to exploit the potential for applying GenAI in the field of SAS. The roadmap concludes with a practical reflection, elaborating on current shortcomings of GenAI and proposing possible mitigation strategies.

References

[1]
Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 4998–5007. DOI:
[2]
Toufique Ahmed and Premkumar Devanbu. 2023. Few-shot training LLMs for project-specific code-summarization. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE ’22). Article 177, 5 pages. DOI:
[3]
Toufique Ahmed, Kunal Suresh Pai, Premkumar Devanbu, and Earl Barr. 2024. Automatic semantic augmentation of language model prompts (for code summarization). In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 220, 13 pages. DOI:
[4]
Human Compatible AI. 2023. overcooked_ai: A Cooperative Multi-Agent Environment Based on the Overcooked Game. Retrieved May 12, 2024 from https://github.com/HumanCompatibleAI/overcooked_ai
[5]
Muideen Ajagbe and Liping Zhao. 2022. Retraining a BERT model for transfer learning in requirements engineering: A Preliminary study. In Proceedings of the IEEE 30th International Requirements Engineering Conference (RE ’22), 309–315. DOI:
[6]
Ahmed Saeed Alsayed, Hoa Khanh Dam, and Chau Nguyen. 2024. MicroRec: Leveraging large language models for microservice recommendation. In Proceedings of the 21st International Conference on Mining Software Repositories, 419–430.
[7]
Jesper Andersson, Rogério de Lemos, Sam Malek, and Danny Weyns. 2009. Modeling Dimensions of Self-Adaptive Software Systems. Springer, Berlin, 27–47. DOI:
[8]
Ian Arawjo, Chelse Swoopes, Priyan Vaithilingam, Martin Wattenberg, and Elena L. Glassman. 2024. ChainForge: A visual toolkit for prompt engineering and LLM hypothesis testing. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24). Article 304, 18 pages. DOI:
[9]
Chetan Arora, Tomas Herda, and Verena Homm. 2024. Generating test scenarios from NL requirements via LLMs: An industrial study. In Proceedings of the 32nd IEEE International Requirements Engineering 2024 Conference.
[10]
Merve Astekin, Max Hort, and Leon Moonen. 2024. An exploratory study on how non-determinism in large language models affects log parsing. In Proceedings of the 2nd International Workshop on Interpretability, Robustness, and Benchmarking in Neural Software Engineering @ ICSE.
[11]
Michael Barnes. 2010. Human-Robot Interactions in Future Military Operations (Human Factors in Defence). CRC Press.
[12]
Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 13. MIT Press.
[13]
Berkay Berabi, Jingxuan He, Veselin Raychev, and Martin Vechev. 2021. TFix: Learning to fix coding errors with a text-to-text transformer. In Proceedings of the 38th International Conference on Machine Learning, Vol. 139 PMLR, 780–791.
[14]
Michael S. Bernstein, Joon Sung Park, Meredith Ringel Morris, Saleema Amershi, Lydia B Chilton, and Mitchell L. Gordon. 2023. Architecting novel interactions with generative AI models. In Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ’23 Adjunct). Article 107, 3 pages. DOI:
[15]
Antonia Bertolino, Pietro Braione, Guglielmo De Angelis, Luca Gazzola, Fitsum Kifetew, Leonardo Mariani, Matteo Orrù, Mauro Pezzè, Roberto Pietrantuono, Stefano Russo, and Paolo Tonella. 2021. A survey of field-based testing techniques. ACM Computing Surveys 54, 5 (May 2021), Article 92, 39 pages. DOI:
[16]
Antonia Bertolino, Guglielmo De Angelis, Sampo Kellomaki, and Andrea Polini. 2012. Enhancing service federation trustworthiness through online testing. Computer 45, 1 (2012), 66–72. DOI:
[17]
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. 2024. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, 17682–17690. DOI:
[18]
Shreya Bhatia, Tarushi Gandhi, Dhruv Kumar, and Pankaj Jalote. 2024. Unit test generation using generative AI: A comparative performance analysis of autogeneration tools. In Proceedings of the 1st International Workshop on Large Language Models for Code.
[19]
Gordon Blair, Nelly Bencomo, and Robert B. France. 2009. Models@ run.time. Computer 42, 10 (2009), 22–27. DOI:
[20]
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 4762–4779. DOI:
[21]
Darko Bozhinoski. 2024. Swarm Intelligence-based Bio-inspired Algorithms. In Proceedings of the 19th Conference on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’24).
[22]
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. 2023. RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv:2307.15818 [cs.RO]
[23]
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv:2005.14165 [cs.CL]
[24]
Salva Rühling Cachay, Bo Zhao, Hailey James, and Rose Yu. 2023. DYffusion: A dynamics-informed diffusion model for spatiotemporal forecasting. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[25]
Jinyu Cai, Jinglue Xu, Jialong Li, Takuto Yamauchi, Hitoshi Iba, and Kenji Tei. 2024b. Exploring the improvement of evolutionary computation via large language models. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’24).
[26]
Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. 2024a. Large language models as tool makers. In Proceedings of the 12th International Conference on Learning Representations.
[27]
Zefan Cai, Baobao Chang, and Wenjuan Han. 2023. Human-in-the-loop through chain-of-thought. arXiv:2306.07932 [cs.CL]
[28]
R. Calinescu, L. Grunske, M. Kwiatkowska, R. Mirandola, and G. Tamburrelli. 2011. Dynamic QoS management and optimization in service-based systems. IEEE Transactions on Software Engineering 37, 3 (2011). DOI:
[29]
R. Calinescu, D. Weyns, S. Gerasimou, U. Iftikhar, I. Habli, and T. Kelly. 2018. Engineering trustworthy self-adaptive software with dynamic assurance cases. IEEE Transactions on Software Engineering 44, 11 (2018), 1039–1069. DOI:
[30]
Cary Campbell, Alin Olteanu, and Kalevi Kull. 2019. Learning and knowing as semiosis: Extending the conceptual apparatus of semiotics. Sign Systems Studies 47, 3/4 (Dec. 2019), 352–381. DOI:
[31]
Defu Cao, Furong Jia, Sercan O. Arik, Tomas Pfister, Yixiang Zheng, Wen Ye, and Yan Liu. 2024. TEMPO: Prompt-based generative pre-trained transformer for time series forecasting. In Proceedings of the 12th International Conference on Learning Representations.
[32]
Haizhou Cao, Zhenhao Huang, Tiechui Yao, Jue Wang, Hui He, and Yangang Wang. 2023. InParformer: Evolutionary decomposition transformers with interactive parallel attention for long-term time series forecasting. Proceedings of the AAAI Conference on Artificial Intelligence 37, 6 (Jun. 2023), 6906–6915. DOI:
[33]
Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, and Pierre-Yves Oudeyer. 2023. Grounding large language models in interactive environments with online reinforcement learning. In Proceedings of the 40th International Conference on Machine Learning (ICML ’23). JMLR.org, Article 150, 38 pages.
[34]
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2024a. ChatEval: Towards better LLM-based evaluators through multi-agent debate. In Proceedings of the 12th International Conference on Learning Representations.
[35]
Kenneth Chan, Sol Zilberman, Nicholas Polanco, Betty H. C. Cheng, and Josh Siegel. 2024b. SafeDriveRL: Combining non-cooperative game theory with reinforcement learning to explore and mitigate human-based uncertainty for autonomous vehicles. In Proceedings of the 19th Conference on Software Engineering for Adaptive and Self-Managing Systems. SEAMS.
[36]
Yevgen Chebotar, Quan Vuong, Karol Hausman, Fei Xia, Yao Lu, Alex Irpan, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Anand Sontakke, Grecia Salazar, Huong T Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath, Jaspiar Singh, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, and Sergey Levine. 2023. Q-Transformer: Scalable offline reinforcement learning via autoregressive Q-functions. In Proceedings of the 7th Annual Conference on Robot Learning.
[37]
Hila Chefer, Shir Gur, and Lior Wolf. 2021. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR ’21), 782–791. DOI:
[38]
Chang Chen, Fei Deng, Kenji Kawaguchi, Caglar Gulcehre, and Sungjin Ahn. 2024b. Simple hierarchical planning with diffusion. In Proceedings of the 12h International Conference on Learning Representations.
[39]
Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, and Jun Zhu. 2023a. Offline reinforcement learning via high-fidelity generative behavior modeling. In Proceedings of the 11th International Conference on Learning Representations.
[40]
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, and Yanghua Xiao. 2023b. Say what you mean! Large language models speak too positively about negative commonsense knowledge. arXiv:2305.05976 [id=cs.CL].
[41]
Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, Wei Shi, Jian Xie, Shuang Li, Ruihan Yang, Tinghui Zhu, Aili Chen, Nianqi Li, Lida Chen, Caiyu Hu, Siye Wu, Scott Ren, Ziquan Fu, and Yanghua Xiao. 2024f. From persona to personalization: A survey on role-playing language agents. arXiv:2404.18231 [cs.CL].
[42]
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. 2021a. Decision transformer: Reinforcement learning via sequence modeling. In Proceedings of the Advances in Neural Information Processing Systems.
[43]
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021b. Evaluating large language models trained on code. arXiv:2107.03374 [cs.LG].
[44]
Peng Chen, Yingying ZHANG, Yunyao Cheng, Yang Shu, Yihang Wang, Qingsong Wen, Bin Yang, and Chenjuan Guo. 2024g. Pathformer: Multi-scale transformers with adaptive pathways for time series forecasting. In Proceedings of the 12th International Conference on Learning Representations.
[45]
Qing Chen, Wei Shuai, Jiyao Zhang, Zhida Sun, and Nan Cao. 2024d. Beyond numbers: Creating analogies to enhance data comprehension and communication with generative AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24). Article 377, 14 pages. DOI:
[46]
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2024e. AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors. In Proceedings of the 12th International Conference on Learning Representations.
[47]
Xiaolei Chen, Jie Shi, Jia Chen, Peng Wang, and Wei Wang. 2024c. High-precision online log parsing with large language models. In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion ’24), 354–355. DOI:
[48]
Yang Chen. 2024. Flakiness repair in the era of large language models. In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion ’24), 441–443. DOI:
[49]
Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas Roy, and Chuchu Fan. 2024a. Scalable multi-robot collaboration with large language models: Centralized or decentralized systems? arXiv:2309.15943 [cs.RO].
[50]
Betty H. C. Cheng, Rogério de Lemos, Holger Giese, Paola Inverardi, Jeff Magee, Jesper Andersson, Basil Becker, Nelly Bencomo, Yuriy Brun, Bojan Cukic, Giovanna Di Marzo Serugendo, Schahram Dustdar, Anthony Finkelstein, Cristina Gacek, Kurt Geihs, Vincenzo Grassi, Gabor Karsai, Holger M. Kienle, Jeff Kramer, Marin Litoiu, Sam Malek, Raffaela Mirandola, Hausi A. Müller, Sooyong Park, Mary Shaw, Matthias Tichy, Massimo Tivoli, Danny Weyns, and Jon Whittle. 2009. Software Engineering for Self-Adaptive Systems: A Research Roadmap. Springer, Berlin, 1–26. DOI:
[51]
Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2023. Deep reinforcement learning from human preferences. arXiv:1706.03741 [stat.ML].
[52]
Simon Chu, Justin Koe, David Garlan, and Eunsuk Kang. 2024. Integrating graceful degradation and recovery through requirement-driven adaptation. In Proceedings of the 19th Conference on Software Engineering for Adaptive and Self-Managing Systems.
[53]
John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Visual sketching of story generation with pretrained language models. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). Article 172, 4 pages. DOI:
[54]
Agnieszka Ciborowska and Kostadin Damevski. 2022. Fast changeset-based bug localization with BERT. In Proceedings of the 44th International Conference on Software Engineering (ICSE ’22), 946–957. DOI:
[55]
Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, Yang Zhou, Kaizhao Liang, Jintai Chen, Juanwu Lu, Zichong Yang, Kuei-Da Liao, Tianren Gao, Erlong Li, Kun Tang, Zhipeng Cao, Tong Zhou, Ao Liu, Xinrui Yan, Shuqi Mei, Jianguo Cao, Ziran Wang, and Chao Zheng. 2023. A survey on multimodal large language models for autonomous driving. arXiv:2311.12320 [cs.AI].
[56]
Javier Cámara, Gabriel Moreno, and David Garlan. 2015. Reasoning about human participation in self-adaptive systems. In Proceedings of the 2015 IEEE/ACM 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 146–156. DOI:
[57]
Carlos Eduardo da Silva and Rogério de Lemos. 2011. Dynamic plans for integration testing of self-adaptive software systems. In Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’11), 148–157. DOI:
[58]
Zhirui Dai, Arash Asgharivaskasi, Thai Duong, Shusen Lin, Maria-Elizabeth Tzes, George Pappas, and Nikolay Atanasov. 2024. Optimal scene graph planning with large language model guidance. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[59]
Murtaza Dalal, Tarun Chiruvolu, Devendra Singh Chaplot, and Ruslan Salakhutdinov. 2024. Plan-seq-learn: Language model guided RL for solving long horizon robotics tasks. In Proceedings of the 12h International Conference on Learning Representations.
[60]
Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. 2024b. A decoder-only foundation model for time-series forecasting. In Forty-first International Conference on Machine Learning (ICML).
[61]
Badhan Chandra Das, M. Hadi Amini, and Yanzhao Wu. 2024a. Security and privacy challenges of large language models: A survey. arXiv:2402.00888 [cs.CL].
[62]
Rogério de Lemos, Holger Giese, Hausi A. Müller, Mary Shaw, Jesper Andersson, Marin Litoiu, Bradley Schmerl, Gabriel Tamura, Norha M. Villegas, Thomas Vogel, Danny Weyns, Luciano Baresi, Basil Becker, Nelly Bencomo, Yuriy Brun, Bojan Cukic, Ron Desmarais, Schahram Dustdar, Gregor Engels, Kurt Geihs, Karl M. Göschka, Alessandra Gorla, Vincenzo Grassi, Paola Inverardi, Gabor Karsai, Jeff Kramer, Antónia Lopes, Jeff Magee, Sam Malek, Serge Mankovskii, Raffaela Mirandola, John Mylopoulos, Oscar Nierstrasz, Mauro Pezzè, Christian Prehofer, Wilhelm Schäfer, Rick Schlichting, Dennis B. Smith, João Pedro Sousa, Ladan Tahvildari, Kenny Wong, and Jochen Wuttke. 2013. Software Engineering for Self-Adaptive Systems: A Second Research Roadmap. Springer, Berlin, 1–32. DOI:
[63]
I. de Zarzà, J. de Curtò, Gemma Roig, and Carlos T. Calafate. 2023. LLM adaptive PID control for B5G truck platooning systems. Sensors 23, 13 (2023). DOI:
[64]
Google Deepmind. 2024. Project Astra. Retrieved May 16, 2024 from https://deepmind.google/technologies/gemini/project-astra/
[65]
Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, and Stefan Rass. 2023. PentestGPT: An LLM-empowered automatic penetration testing tool. arXiv:2308.06782 [cs.SE].
[66]
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. 2022. RLPrompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 3369–3391. DOI:
[67]
Yinlin Deng, Chunqiu Steven Xia, Chenyuan Yang, Shizhuo Dylan Zhang, Shujing Yang, and Lingming Zhang. 2024. Large Language models are edge-case generators: Crafting unusual programs for fuzzing deep learning libraries. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 70, 13 pages. DOI:
[68]
Gouri Deshpande, Behnaz Sheikhi, Saipreetham Chakka, Dylan Lachou Zotegouon, Mohammad Navid Masahati, and Guenther Ruhe. 2021. Is BERT the new silver bullet? - An empirical investigation of requirements dependency classification. In Proceedings of the IEEE 29th International Requirements Engineering Conference Workshops (REW ’21). 136–145. DOI:
[69]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 [cs.CL].
[70]
Bosheng Ding, Chengwei Qin, Linlin Liu, Lidong Bing, Shafiq R. Joty, and Boyang Albert Li. 2022. Is GPT-3 a good data annotator? In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
[71]
Yan Ding, Xiaohan Zhang, Saeid Amiri, Nieqing Cao, Hao Yang, Andy Kaminski, Chad Esselink, and Shiqi Zhang. 2024. Integrating action knowledge and LLMs for task planning and situation handling in open worlds. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[72]
Simon Dobson, Spyros Denazis, Antonio Fernández, Dominique Gaïti, Erol Gelenbe, Fabio Massacci, Paddy Nixon, Fabrice Saffre, Nikita Schmidt, and Franco Zambonelli. 2006. A survey of autonomic communications. ACM Transactions on Autonomous and Adaptive Systems (TAAS) 1, 2 (Dec. 2006), 223–259. DOI:
[73]
Yihan Dong. 2024. The multi-agent system based on LLM for online discussions. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 2731–2733.
[74]
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations.
[75]
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. 2023. PaLM-E: An embodied multimodal language model. In Proceedings of the 40th International Conference on Machine Learning (ICML’23). Article 340, 20 pages.
[76]
Li Du, Xiao Ding, Yue Zhang, Ting Liu, and Bing Qin. 2022. A graph enhanced BERT model for event prediction. In Findings of the Association for Computational Linguistics (ACL ’22). Association for Computational Linguistics, 2628–2638. DOI:
[77]
Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, and Jacob Andreas. 2023. Guiding pretraining in reinforcement learning with large language models. In Proceedings of the 40th International Conference on Machine Learning, Vol. 202. PMLR, 8657–8677.
[78]
Upol Ehsan, Elizabeth A. Watkins, Philipp Wintersberger, Carina Manger, Sunnie S. Y. Kim, Niels Van Berkel, Andreas Riener, and Mark O. Riedl. 2024. Human-centered explainable AI (HCXAI): Reloading explainability in the era of large language models (LLMs). In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24). Article 477, 6 pages. DOI:
[79]
A. Fan, B. Gokkaya, M. Harman, M. Lyubarskiy, S. Sengupta, S. Yoo, and J. M. Zhang. 2023b. Large language models for software engineering: survey and open problems. In Proceedings of the IEEE/ACM International Conference on Software Engineering: Future of Software Engineering (ICSE-FoSE ’23), 31–53. DOI:
[80]
Caoyun Fan, Jindou Chen, Yaohui Jin, and Hao He. 2024a. Can large language models serve as rational players in game theory? A systematic analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, 17960–17967. DOI:
[81]
Xinyao Fan, Yueying Wu, Chang Xu, Yuhao Huang, Weiqing Liu, and Jiang Bian. 2024b. MG-TSD: Multi-granularity time series diffusion models with guided learning process. In Proceedings of the 12th International Conference on Learning Representations.
[82]
Z. Fan, X. Gao, M. Mirchev, A. Roychoudhury, and S. Tan. 2023a. Automated repair of programs from large language models. In Proceedings of the IEEE/ACM 45th International Conference on Software Engineering (ICSE ’23), 1469–1481. DOI:
[83]
Alessandro Fantechi, Stefania Gnesi, Lucia Passaro, and Laura Semini. 2023. Inconsistency detection in natural language requirements using ChatGPT: A preliminary evaluation. In Proceedings of the IEEE 31st International Requirements Engineering Conference (RE ’23), 335–340. DOI:
[84]
Evelina Fedorenko, Steven T. Piantadosi, and Edward A. F. Gibson. 2024. Language is primarily a tool for communication rather than thought. Nature 630, 8017 (2024), 575–586. DOI:
[85]
Nick Feng, Lina Marsso, S. Getir Yaman, Isobel Standen, Yesugen Baatartogtokh, Reem Ayad, Victória Oldemburgo de Mello, Bev Townsend, Hanne Bartels, Ana Cavalcanti, Radu Calinescu, and Marsha Chechik. 2024. Normative requirements operationalization with large language models. In 32nd IEEE International Requirements Engineering (RE).
[86]
Silvan Ferreira, Ivanovitch Silva, and Allan Martins. 2024. Organizing a society of language models: structures and mechanisms for enhanced collective intelligence. arXiv:2405.03825 [cs.AI].
[87]
Antonio Filieri, Henry Hoffmann, and Martina Maggio. 2014. Automated design of self-adaptive software with control-theoretical formal guarantees. In Proceedings of the 36th International Conference on Software Engineering (ICSE ’14), 299–310. DOI:
[88]
Emily First, Markus Rabe, Talia Ringer, and Yuriy Brun. 2023. Baldur: Whole-proof generation and repair with large language models. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’23), 1229–1241. DOI:
[89]
Erik M. Fredericks, Byron DeVries, and Betty H. C. Cheng. 2014. Towards run-time adaptation of test cases for self-adaptive systems in the face of uncertainty. In Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’14), 17–26. DOI:
[90]
Erik M. Fredericks, Andres J. Ramirez, and Betty H. C. Cheng. 2013. Towards run-time testing of dynamic adaptive systems. In Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’13), 169–174. DOI:
[91]
Michael Fu, Chakkrit Tantithamthavorn, Trung Le, Van Nguyen, and Dinh Phung. 2022. VulRepair: A T5-based automated software vulnerability repair. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’22), 935–947. DOI:
[92]
Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. 2022. Generalized decision transformer for offline hindsight information matching. In Proceedings of the International Conference on Learning Representations.
[93]
Matteo Gallici, Mario Martin, and Ivan Masmitja. 2023. TransfQMix: Transformers for leveraging the graph structure of multi-agent reinforcement learning problems. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’23), 1679–1687.
[94]
Jensen Gao, Bidipta Sarkar, Fei Xia, Ted Xiao, Jiajun Wu, Brian Ichter, Anirudha Majumdar, and Dorsa Sadigh. 2024. Physically grounded vision-language models for robotic manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[95]
D. Garlan, S. W. Cheng, A. C. Huang, B. Schmerl, and P. Steenkiste. 2004. Rainbow: Architecture-based self-adaptation with reusable infrastructure. Computer 37, 10 (2004) 46–54.
[96]
Mingyang Geng, Shangwen Wang, Dezun Dong, Haotian Wang, Ge Li, Zhi Jin, Xiaoguang Mao, and Xiangke Liao. 2024. Large language models are few-shot summarizers: Multi-intent comment generation via in-context learning. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 39, 13 pages. DOI:
[97]
Omid Gheibi and Danny Weyns. 2024. Dealing with drift of adaptation spaces in learning-based self-adaptive systems using lifelong self-adaptation. ACM Transactions on Autonomous and Adaptive Systems 19, 1 (2024), 5:1–5:57. DOI:
[98]
Omid Gheibi, Danny Weyns, and Federico Quin. 2021a. Applying machine learning in self-adaptive systems: A systematic literature review. ACM Transactions on Autonomous and Adaptive Systems 15, 3, Article 9 (Aug. 2021), 37 pages.
[99]
Omid Gheibi, Danny Weyns, and Federico Quin. 2021b. On the impact of applying machine learning in the decision-making of self-adaptive systems. In Proceedings of the International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’21), 104–110. DOI:
[100]
Avijit Ghosh and Genoveva Fossas. 2022. Can there be art without an artist? arXiv:2209.07667 [cs.AI].
[101]
Miriam Gil, Manoli Albert, Joan Fons, and Vicente Pelechano. 2019. Designing human-in-the-loop autonomous cyber-physical systems. International Journal of Human-Computer Studies 130 (2019), 21–39. DOI:
[102]
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative adversarial networks. Communications of the ACM 63, 11 (Oct. 2020), 139–144. DOI:
[103]
Moritz A. Graule and Volkan Isler. 2024. GG-LLM: Geometrically grounding large language models for zero-shot human activity forecasting in human-aware task planning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[104]
Alex Graves. 2012. Long Short-Term Memory. Springer, Berlin, 37–45. DOI:
[105]
Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew Gordon Wilson. 2023. Large language models are zero-shot time series forecasters. In Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS ’23).
[106]
Lin Guan, Karthik Valmeekam, Sarath Sreedharan, and Subbarao Kambhampati. 2023. Leveraging pre-trained large language models to construct and utilize world models for model-based task planning. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[107]
Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. 2024c. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. In Proceedings of the 12th International Conference on Learning Representations.
[108]
Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, and Xiangliang Zhang. 2024a. Large language model based multi-agents: A survey of progress and challenges. arXiv:2402.01680 [cs.CL].
[109]
Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L. Griffiths, and Mengdi Wang. 2024b. Embodied LLM agents learn to cooperate in organized teams. arXiv:2403.12482 [cs.AI].
[110]
Xiaoyu Guo, Jianjun Zhao, and Pengzhan Zhao. 2024d. On repairing quantum programs using ChatGPT. In Proceedings of the 5th International Workshop on Quantum Software Engineering (Q-SE ’24).
[111]
Agrim Gupta, Linxi Fan, Surya Ganguli, and Li Fei-Fei. 2022. MetaMorph: Learning universal controllers with transformers. In Proceedings of the International Conference on Learning Representations.
[112]
Priyanshu Gupta, Avishree Khare, Yasharth Bajpai, Saikat Chakraborty, Sumit Gulwani, Aditya Kanade, Arjun Radhakrishna, Gustavo Soares, and Ashish Tiwari. 2023. Grace: Language models meet code edits. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’23), 1483–1495. DOI:
[113]
Seth T. Hamman, Kenneth M. Hopkinson, Ruth L. Markham, Andrew M. Chaplik, and Gabrielle E. Metzler. 2017. Teaching game theory to improve adversarial thinking in cybersecurity students. IEEE Transactions on Education 60, 3 (2017), 205–211. DOI:
[114]
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward Ayers, and Stanislas Polu. 2022. Proof artifact co-training for theorem proving with language models. In Proceedings of the International Conference on Learning Representations.
[115]
Shibo Hao, Bowen Tan, Kaiwen Tang, Bin Ni, Xiyan Shao, Hengzhe Zhang, Eric Xing, and Zhiting Hu. 2023. BertNet: Harvesting knowledge graphs with arbitrary relations from pretrained language models. In Findings of the Association for Computational Linguistics (ACL ’23). Association for Computational Linguistics, 5000–5015. DOI:
[116]
Andreas Happe and Jürgen Cito. 2023. Getting pwn’d by AI: Penetration testing with large language models. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’23), 2082–2086. DOI:
[117]
Mark Harman, S. Afshin Mansouri, and Yuanyuan Zhang. 2012. Search-based software engineering: Trends, techniques and applications. ACM Computing Surveys 45, 1 (Dec. 2012), Article 11, 61 pages. DOI:
[118]
Shabnam Hassani. 2024. Enhancing legal compliance and regulation analysis with large language models. In Proceedings of the 32nd IEEE International Requirements Engineering 2024 Conference (RE ’24).
[119]
Rishi Hazra, Pedro Zuidberg Dos Martires, and Luc De Raedt. 2024. SayCanPay: Heuristic planning with large language models using learnable domain knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, 20123–20133. DOI:
[120]
Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, and Xuelong Li. 2023. Diffusion model is an effective planner and data synthesizer for multi-task reinforcement learning. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 36. Curran Associates, Inc., 64896–64917.
[121]
Maya Hickmann. 2000. Linguistic relativity and linguistic determinism: some new directions. Linguistics 38, 2 (2000), 409–434. DOI:
[122]
Shinichi Honiden Hiroyuki Nakagawa. 2023. MAPE-K loop-based goal model generation using generative AI. In Proceedings of the IEEE 31st International Requirements Engineering Conference Workshop.
[123]
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 6840–6851.
[124]
Jacob Hoffmann and Demian Frister. 2024. Generating software tests for mobile applications using fine-tuned large language models. In Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST ’24), 76–77. DOI:
[125]
Noah Hollmann, Samuel Müller, and Frank Hutter. 2023. Large language models for automated data science: Introducing CAAFE for context-aware automated feature engineering. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[126]
Junyuan Hong, Jiachen T. Wang, Chenhui Zhang, Zhangheng LI, Bo Li, and Zhangyang Wang. 2024a. DP-OPT: Make large language model your privacy-preserving prompt engineer. In Proceedings of the 12th International Conference on Learning Representations.
[127]
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. 2024b. MetaGPT: Meta programming for a multi-agent collaborative framework. In Proceedings of the 12th International Conference on Learning Representations.
[128]
Yuki Hou, Haruki Tamoto, and Homei Miyashita. 2024. “My agent understands me better”: Integrating dynamic human-like memory recall and consolidation in LLM-based agents. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems. Article 7, 7 pages. DOI:
[129]
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers. Association for Computational Linguistics, 328–339.
[130]
Anthony Hu, Lloyd Russell, Hudson Yeo, Zak Murez, George Fedoseev, Alex Kendall, Jamie Shotton, and Gianluca Corrado. 2023b. GAIA-1: A Generative world model for autonomous driving. arXiv:2309.17080 [cs.CV].
[131]
Ronghang Hu and Amanpreet Singh. 2021. UniT: Multimodal multitask learning with a unified transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV ’21), 1439–1449.
[132]
Siyi Hu, Fengda Zhu, Xiaojun Chang, and Xiaodan Liang. 2021. {UPD}eT: Universal multi-agent {RL} via policy decoupling with transformers. In Proceedings of the International Conference on Learning Representations.
[133]
X. Hu, Z. Liu, X. Xia, Z. Liu, T. Xu, and X. Yang. 2023a. Identify and update test cases when production code changes: A transformer-based approach. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE ’23), 1111–1122. DOI:
[134]
Kai Huang, Xiangxin Meng, Jian Zhang, Yang Liu, Wenjie Wang, Shuhao Li, and Yuqing Zhang. 2023b. An empirical study on fine-tuning large language models of code for automated program repair. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE ’23), 1162–1174. DOI:
[135]
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023c. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. arXiv:2311.05232 [cs.CL].
[136]
Tao Huang, Pengfei Chen, Jingrun Zhang, Ruipeng Li, and Rui Wang. 2023a. A transferable time series forecasting service using deep transformer model for online systems. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE ’22). Article 4, 12 pages. DOI:
[137]
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, and brian ichter. 2022. Inner monologue: Embodied reasoning through planning with language models. In Proceedings of the 6th Annual Conference on Robot Learning.
[138]
Yutan Huang, Tanjila Kanij, Anuradha Madugalla, Shruti Mahajan, Chetan Arora, and John Grundy. 2024. Unlocking adaptive user experience with generative AI. arXiv:2404.05442 [cs.HC].
[139]
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2017. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research 18, 1 (Jan. 2018), 6869–6898.
[140]
William Hunt, Toby Godfrey, and Mohammad D. Soorati. 2024. Conversational language models for human-in-the-loop multi-robot coordination. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 2809–2811.
[141]
Muhammad Usman Iftikhar, Gowri Sankar Ramachandran, Pablo Bollansée, Danny Weyns, and Danny Hughes. 2017. DeltaIoT: A self-adaptive internet of things exemplar. In Proceedings of the IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’17), 76–82. DOI:
[142]
M. Usman Iftikhar and Danny Weyns. 2014. ActivFORMS: Active formal models for self-adaptation. In Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’14), 125–134.
[143]
Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, and Sadao Kurohashi. 2023. MultiTool-CoT: GPT-3 can use multiple external tools with chain of thought prompting. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Vol. 2: Short Papers. Association for Computational Linguistics, 1522–1532. DOI:
[144]
Jeevana Priya Inala, Yichen Yang, James Paulos, Yewen Pu, Osbert Bastani, Vijay Kumar, Martin Rinard, and Armando Solar-Lezama. 2020. Neurosymbolic transformers for multi-agent communication. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 13597–13608.
[145]
S. Izquierdo, G. Canal, C. Rizzo, and G. Alenyà. 2024. PlanCollabNL: Leveraging large language models for adaptive plan generation in human-robot collaboration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[146]
Pooyan Jamshidi, Javier Cámara, Bradley Schmerl, Christian Käestner, and David Garlan. 2019. Machine learning meets quantitative planning: Enabling self-adaptation in autonomous robots. In Proceedings of the IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’19), 39–50. DOI:
[147]
Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. 2022. Planning with diffusion for flexible behavior synthesis. In Proceedings of the International Conference on Machine Learning.
[148]
Piyush Jha, Joseph Scott, Jaya Sriram Ganeshna, Mudit Singh, and Vijay Ganesh. 2024. BertRLFuzzer: A BERT and reinforcement learning based fuzzer (student abstract). In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, 23521–23522. DOI:
[149]
Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr MioŚ, Yuhuai Wu, and Mateja Jamnik. 2022. Thor: Wielding hammers to integrate language models and automated theorem provers. In Proceedings of the Advances in Neural Information Processing Systems.
[150]
Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, and Jingyuan Wang. 2023a. PDFormer: Propagation delay-aware dynamic long-range transformer for traffic flow prediction. In Proceedings of the 37th AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and 13th Symposium on Educational Advances in Artificial Intelligence(AAAI ’23/IAAI ’23/EAAI ’23). Article 487, 9 pages. DOI:
[151]
N. Jiang, K. Liu, T. Lutellier, and L. Tan. 2023b. Impact of code language models on automated program repair. In Proceedings of the IEEE/ACM 45th International Conference on Software Engineering (ICSE ’23), 1430–1442. DOI:
[152]
Peiling Jiang, Jude Rayan, Steven P. Dow, and Haijun Xia. 2023c. Graphologue: Exploring large language model responses with interactive diagrams. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ’23). Article 3, 20 pages. DOI:
[153]
Shengbei Jiang, Jiabao Zhang, Wei Chen, Bo Wang, Jianyi Zhou, and Jie M. Zhang. 2024. Evaluating fault localization and program repair capabilities of existing closed-source general-purpose LLMs. In Proceedings of the 1st International Workshop on Large Language Models for Code.
[154]
Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen. 2024. Time-LLM: Time series forecasting by reprogramming large language models. In Proceedings of the 12h International Conference on Learning Representations.
[155]
Peng Jin, Yang Wu, Yanbo Fan, Zhongqian Sun, Yang Wei, and Li Yuan. 2023. Act as you wish: Fine-grained control of motion diffusion model with hierarchical semantic graphs. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[156]
Christoforos Kachris. 2024. A survey on hardware accelerators for large language models. arXiv:2401.09890 [cs.AR].
[157]
Eduard Kamburjan, Riccardo Sieve, Chinmayi Prabhu Baramashetru, Marco Amato, Gianluca Barmina, Eduard Occhipinti, and Einar Broch Johnsen. 2024. GreenhouseDT: An exemplar for digital twins. In Proceedings of the 19th Conference on Software Engineering for Adaptive and Self-Managing Systems.
[158]
Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, and Shuicheng Yan. 2023. Efficient diffusion policies for offline reinforcement learning. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[159]
Qitong Kang, Fuyong Wang, Zhongxin Liu, and Zengqiang Chen. 2024. TIMAT: Temporal information multi-agent transformer. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 2321–2323.
[160]
Jeff Kephart and David Chess. 2003. The vision of autonomic computing. Computer 36, 1 (Jan. 2003), 41–50.
[161]
Junaed Younus Khan and Gias Uddin. 2023. Automatic code documentation generation using GPT-3. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE ’22). Article 174, 6 pages. DOI:
[162]
Dongsun Kim and Sooyong Park. 2009. Reinforcement learning-based dynamic adaptation planning method for architecture-based self-managed software. In Proceedings of the 2009 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems, 76–85. DOI:
[163]
Jayoung Kim, Chaejeong Lee, Yehjin Shin, Sewon Park, Minjung Kim, Noseong Park, and Jihoon Cho. 2022. SOS: Score-based oversampling for tabular data. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’22), 762–772. DOI:
[164]
Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. 2018. Textual explanations for self-driving vehicles. In Proceedings of the European Conference on Computer Vision (ECCV ’18).
[165]
Diederik P. Kingma and Max Welling. 2022. Auto-encoding variational bayes. arXiv:1312.6114 [stat.ML].
[166]
K. Knill and S. Young. 1997. Hidden Markov Models in Speech and Language Processing. Springer Netherlands, Dordrecht, 27–68. DOI:
[167]
Hyung-Kwon Ko, Hyeon Jeon, Gwanmo Park, Dae Hyun Kim, Nam Wook Kim, Juho Kim, and Jinwook Seo. 2024. Natural language dataset generation framework for visualizations powered by large language models. In Proceedings of the Conference on Human Factors in Computing Systems (CHI ’24). Article 843, 22 pages. DOI:
[168]
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems (NeurIPS ’22).
[169]
Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, and Bernie Wang. 2023. Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[170]
Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. 2023. Benchmarking cognitive biases in large language models as evaluators. arXiv:2309.17012 [cs.CL].
[171]
Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. 2023. Reward design with language models. In Proceedings of the the 11th International Conference on Learning Representations.
[172]
Mariam Lahami and Moez Krichen. 2021. A survey on runtime testing of dynamically adaptable and distributed systems. Software Quality Journal 29, 2 (2021), 555–593. DOI:
[173]
Márk Lajkó, Viktor Csuvik, Tibor Gyimothy, and László Vidács. 2024. Automated program repair with the GPT family, including GPT-2, GPT-3 and CodeX. In Proceedings of the IEEE/ACM International Workshop on Automated Program Repair (APR ’24).
[174]
V. Le and H. Zhang. 2021. Log-based anomaly detection without log parsing. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE ’21), 492–504. DOI:
[175]
Van-Hoang Le and Hongyu Zhang. 2023. Log parsing: How far can ChatGPT go?. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE ’23). IEEE, 1699–1704. DOI:
[176]
C. Lee, T. Yang, Z. Chen, Y. Su, and M. R. Lyu. 2023. Maat: Performance metric anomaly anticipation for cloud services with conditional diffusion. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE ’23), 116–128. DOI:
[177]
Namyeong Lee and Jun Moon. 2023. Transformer actor-critic with regularization: automated stock trading using reinforcement learning. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’23), 2815–2817.
[178]
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-Tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 9459–9474.
[179]
Jinyang Li, Binyuan Hui, G. E. Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023b. Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. In Proceedings of the 37th Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
[180]
Jia Li, Shiva Nejati, and Mehrdad Sabetzadeh. 2024a. Using genetic programming to build self-adaptivity into software-defined networks. ACM Transactions on Autonomous and Adaptive Systems 19, 1, Article 2 (Feb. 2024), 35 pages. DOI:
[181]
Jialong Li, Mingyue Zhang, Nianyu Li, Danny Weyns, Zhi Jin, and Kenji Tei. 2024c. Exploring the potential of large language models in self-adaptive systems. In Proceedings of the 19th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’24), 77–83. DOI:
[182]
Jialong Li, Mingyue Zhang, Zhenyu Mao, Haiyan Zhao, Zhi Jin, Shinichi Honiden, and Kenji Tei. 2022b. Goal-oriented knowledge reuse via curriculum evolution for reinforcement learning-based adaptation. In Proceedings of the 29th Asia-Pacific Software Engineering Conference (APSEC ’22), 189–198.
[183]
Lei Li, Yongfeng Zhang, and Li Chen. 2021b. Personalized transformer for explainable recommendation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol. 1 Long Papers. Online, 4947–4957.
[184]
Nianyu Li, Sridhar Adepu, Eunsuk Kang, and David Garlan. 2020a. Explanations for human-on-the-loop: A probabilistic model checking approach. In Proceedings of the IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’20), 181–187.
[185]
Nianyu Li, Javier Cámara, David Garlan, and Bradley Schmerl. 2020b. Reasoning about when to provide explanation for human-involved self-adaptive systems. In Proceedings of the IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS ’20), 195–204. DOI:
[186]
Nianyu Li, Javier Cámara, David Garlan, Bradley Schmerl, and Zhi Jin. 2021a. Hey! Preparing humans to do tasks in self-adaptive systems. In Proceedings of the International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’21), 48–58. DOI:
[187]
Nianyu Li, Mingyue Zhang, Jialong Li, Sridhar Adepu, Eunsuk Kang, and Zhi Jin. 2024b. A game-theoretical self-adaptation framework for securing software-intensive systems. ACM Transactions on Autonomous and Adaptive Systems, 19 2 (Apr. 2024), Article 12, 49 pages. DOI:
[188]
Nianyu Li, Mingyue Zhang, Jialong Li, Eunsuk Kang, and Kenji Tei. 2023e. Preference adaptation: User satisfaction is all you need!. In Proceedings of the IEEE/ACM 18th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’23), 133–144.
[189]
Ruikun Li, Xuliang Li, Shiying Gao, S. T. Boris Choy, and Junbin Gao. 2023c. Graph convolution recurrent denoising diffusion model for multivariate probabilistic temporal forecasting. In Proceedings of the Advanced Data Mining and Applications: 19th International Conference (ADMA ’23). Springer-Verlag, Berlin, 661–676. DOI:
[190]
Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, and Yuke Zhu. 2022a. Pre-trained language models for interactive decision-making. In Proceedings of the Advances in Neural Information Processing Systems.
[191]
Wenhao Li, Xiangfeng Wang, Bo Jin, and Hongyuan Zha. 2023d. Hierarchical diffusion for offline decision making. In Proceedings of the 40th International Conference on Machine Learning, Vol. 202. PMLR, 20035–20064.
[192]
Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023a. Compressing context to enhance inference efficiency of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Singapore, 6342–6353. DOI:
[193]
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, and Xiang Ren. 2023a. SwiftSage: A generative agent with fast and slow thinking for complex interactive tasks. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[194]
Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. 2023b. Magic3D: High-resolution text-to-3D content creation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR ’23), 300–309.
[195]
Jinfeng Lin, Yalin Liu, Qingkai Zeng, Meng Jiang, and Jane Cleland-Huang. 2021. Traceability transformed: Generating more accurate links with pre-trained BERT models. In Proceedings of the IEEE/ACM 43rd International Conference on Software Engineering (ICSE ’21), 324–335. DOI:
[196]
Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, and Tong Zhang. 2024. Mitigating the alignment tax of RLHF. arXiv:2309.06256 [cs.LG].
[197]
Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. arXiv:2305.13711 [cs.CL].
[198]
Jianwei Liu, Maria Stamatopoulou, and Dimitrios Kanoulas. 2024c. DiPPeR: Diffusion-based 2D path planner applied on legged robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[199]
Jijia Liu, Chao Yu, Jiaxuan Gao, Yuqing Xie, Qingmin Liao, Yi Wu, and Yu Wang. 2024f. LLM-powered hierarchical language agent for real-time human-AI coordination. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 1219–1228.
[200]
Mingjie Liu, Teo Ene, Robert Kirby, Chris Cheng, Nathaniel Pinckney, Rongjian Liang, Jonah Alben, Himyanshu Anand, Sanmitra Banerjee, Ismet Bayraktaroglu, Bonita Bhaskaran, Bryan Catanzaro, Arjun Chaudhuri, Sharon Clay, Bill Dally, Laura Dang, Parikshit Deshpande, Siddhanth Dhodhi, Sameer Halepete, Eric Hill, Jiashang Hu, Sumit Jain, Brucek Khailany, Kishor Kunal, Xiaowei Li, Hao Liu, Stuart Oberman, Sujeet Omar, Sreedhar Pratty, Ambar Sarkar, Zhengjiang Shao, Hanfei Sun, Pratik P Suthar, Varun Tej, Kaizhe Xu, and Haoxing Ren. 2023a. ChipNeMo: Domain-adapted LLMs for chip design. arXiv:2311.00176 [cs.CL].
[201]
Shengcai Liu, Caishun Chen, Xinghua Qu, Ke Tang, and Yew-Soon Ong. 2024b. Large language models as evolutionary optimizers. In Proceedings of the 12th International Conference on Learning Representations.
[202]
Tennison Liu, Nicolás Astorga, Nabeel Seedat, and Mihaela van der Schaar. 2024a. Large language models to enhance bayesian optimization. In Proceedings of the 12h International Conference on Learning Representations.
[203]
Xingyu ’Bruce’ Liu, Vladimir Kirilyuk, Xiuxiu Yuan, Peggy Chi, Alex Olwal, Xiang ’Anthony’ Chen, and Ruofei Du. 2023c. Experiencing visual captions: Augmented communication with real-time visuals using large language models. In Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ’23 Adjunct). Article 85, 4 pages. DOI:
[204]
Yilun Liu, Shimin Tao, Weibin Meng, Jingyu Wang, Wenbing Ma, Yuhang Chen, Yanqing Zhao, Hao Yang, and Yanfei Jiang. 2024d. Interpretable online log analysis using large language models with prompt strategies. In Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension (ICPC ’24), 35–46. DOI:
[205]
Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. 2022. Non-stationary transformers: Exploring the stationarity in time series forecasting. In Proceedings of the Advances in Neural Information Processing Systems.
[206]
Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li. 2024e. Trustworthy LLMs: A survey and guideline for evaluating large language models’ alignment. arXiv:2308.05374 [cs.AI].
[207]
Zuxin Liu, Zijian Guo, Yihang Yao, Zhepeng Cen, Wenhao Yu, Tingnan Zhang, and Ding Zhao. 2023b. Constrained decision transformer for offline safe reinforcement learning. In Proceedings of the 40th International Conference on Machine Learning (ICML ’23). JMLR.org, Article 893, 20 pages.
[208]
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV ’21).
[209]
Samuel López-Ruiz, Carlos Ignacio Hernández-Castellanos, and Katya Rodríguez-Vázquez. 2022. Multi-objective framework for quantile forecasting in financial time series using transformers. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’22), 395–403. DOI:
[210]
Xingzhou Lou, Junge Zhang, Ziyan Wang, Kaiqi Huang, and Yali Du. 2024. Safe reinforcement learning with free-form natural language constraints and pre-trained language models. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 1274–1282.
[211]
Jack Lu, Kelvin Wong, Chris Zhang, Simon Suo, and Raquel Urtasun. 2024. SceneControl: Diffusion for controllable traffic scene generation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[212]
Xianchang Luo, Yinxing Xue, Zhenchang Xing, and Jiamou Sun. 2023. PRCBERT: Prompt learning for requirement classification using BERT-based pretrained language models. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE ’22). Article 75, 13 pages. DOI:
[213]
Lipeng Ma, Weidong Yang, Bo Xu, Sihang Jiang, Ben Fei, Jiaqing Liang, Mingjie Zhou, and Yanghua Xiao. 2024c. KnowLog: Knowledge enhanced pre-trained language model for log understanding. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering (ICSE ’24). ACM, 32:1–32:13. DOI:
[214]
Xiao Ma and Wu-Jun Li. 2024. Weighting online decision transformer with episodic memory for offline-to-online reinforcement learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[215]
Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2024b. Eureka: Human-level reward design via coding large language models. In Proceedings of the 12h International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=IEduRUO55F
[216]
Zeyang Ma, An Ran Chen, Dong Jae Kim, Tse-Hsun Chen, and Shaowei Wang. 2024a. LLMParser: An exploratory study on using large language models for log parsing. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 99, 13 pages. DOI:
[217]
Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. 2022. Memory-assisted prompt editing to improve GPT-3 after deployment. In Proceedings of the ACL 2022 Workshop on Commonsense Representation and Reasoning.
[218]
Anuradha Madugalla, Yutan Huang, John Grundy, Min Hee Cho, Lasith Koswatta Gamage, Tristan Leao, and Sam Thiele. 2024. Engineering adaptive information graphics for disabled communities: A case study with public space indoor maps. arXiv:2401.05659 [cs.HC].
[219]
Cláudia Mamede, Eduard Pinconschi, and Rui Abreu. 2023. A transformer-based IDE plugin for vulnerability detection. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE ’22). Article 149, 4 pages. DOI:
[220]
Zhao Mandi, Shreeya Jain, and Shuran Song. 2024. RoCo: Dialectic multi-robot collaboration with large language models. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[221]
Antonio Mastropaolo, Matteo Ciniselli, Luca Pascarella, Rosalia Tufano, Emad Aghajani, and Gabriele Bavota. 2024. Towards summarizing code snippets using pre-trained transformers. In Proceedings of the 32th IEEE/ACM International Conference on Program Comprehension.
[222]
Angelos Mavrogiannis, Christoforos Mavrogiannis, and Yiannis Aloimonos. 2024. Cook2LTL: Translating cooking recipes to LTL formulae using large language models. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[223]
Nicola Mc Donnell, Jim Duggan, and Enda Howley. 2023. A genetic programming-based framework for semi-automated multi-agent systems engineering. ACM Transactions on Autonomous and Adaptive Systems 18, 2, Article 6 (May 2023), 30 pages. DOI:
[224]
Matthew B. A. McDermott, Bret Nestor, Peniel N. Argaw, and Isaac S. Kohane. 2023. Event stream GPT: A data pre-processing and modeling library for generative, pre-trained transformers over continuous-time sequences of complex events. In Proceedings of the 37th Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
[225]
Şevval Mehder and Fatma Başak Aydemir. 2022. Classification of Issue Discussions in Open Source Projects Using Deep Language Models. In Proceedings of the IEEE 30th International Requirements Engineering Conference Workshops (REW ’22), 176–182. DOI:
[226]
Luckeciano C. Melo. 2022. Transformers are meta-reinforcement learners. In Proceedings of the International Conference on Machine Learning (ICML ’22).
[227]
Microsoft. 2024. GraphRAG: Graph Retrieval-Augmented Generation. Retrieved July 22, 2024 from https://github.com/microsoft/graphrag
[228]
Tomás Mikolov, Martin Karafiát, Lukás Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH ’10). ISCA, 1045–1048. DOI:
[229]
Utkarsh Aashu Mishra and Yongxin Chen. 2023. ReorientDiff: Diffusion model based reorientation for object manipulation. In Proceedings of the RSS 2023 Workshop on Learning for Task and Motion Planning.
[230]
Utkarsh Aashu Mishra, Shangjie Xue, Yongxin Chen, and Danfei Xu. 2023. Generative skill chaining: Long-horizon skill planning with diffusion models. In Proceedings of the 7th Annual Conference on Robot Learning.
[231]
Gabriel Moreno, Cody Kinneer, Ashutosh Pandey, and David Garlan. 2019. DARTSim: An exemplar for evaluation and comparison of self-adaptation approaches for smart cyber-physical systems. In Proceedings of the IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’19), 181–187. DOI:
[232]
Gabriel A. Moreno, Javier Camara, David Garlan, and Bradley Schmerl. 2016. Efficient decision-making under uncertainty for proactive self-adaptation. In Proceedings of the IEEE International Conference on Autonomic Computing (ICAC ’16), 147–156. DOI:
[233]
Gabriel A. Moreno, Javier Cámara, David Garlan, and Bradley Schmerl. 2015. Proactive self-adaptation under uncertainty: A probabilistic model checking approach. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, 1–12. DOI:
[234]
Christian Murphy, Gail Kaiser, Ian Vo, and Matt Chu. 2009. Quality assurance of software applications using the in vivo testing approach. In Proceedings of the 2009 International Conference on Software Testing Verification and Validation, 111–120. DOI:
[235]
H. Nakagawa and S. Honiden. 2023. MAPE-K loop-based goal model generation using generative AI. In Proceedings of the IEEE 31st International Requirements Engineering Conference Workshops (REW ’23), 247–251. DOI:
[236]
Daye Nam, Andrew Macvean, Vincent Hellendoorn, Bogdan Vasilescu, and Brad Myers. 2024. Using an LLM to help with code understanding. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 97, 13 pages. DOI:
[237]
Nandu Digital Economy Governance Research Center. 2023. Generative AI Development and Governance Observation Report 2023 (Chinese). Observation Report. Nandu Digital Economy Governance Research Center.
[238]
Nathalia Nascimento, Paulo Alencar, and Donald Cowan. 2023. Self-adaptive large language model (LLM)-based multiagent systems. In Proceedings of the IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C ’23), 104–109. DOI:
[239]
Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, and Zhixuan Liang. 2023. MetaDiffuser: Diffusion model as conditional planner for offline meta-RL. In Proceedings of the 40th International Conference on Machine Learning (ICML’23). JMLR.org, Article 1085, 19 pages.
[240]
Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, and Roy Fox. 2023. Do embodied agents dream of pixelated sheep? embodied decision making using language guided world modelling. In Proceedings of the 40th International Conference on Machine Learning (ICML’23). Article 1096, 15 pages.
[241]
João Paulo Karol Santos Nunes, Shiva Nejati, Mehrdad Sabetzadeh, and Elisa Yumi Nakagawa. 2024. Self-adaptive, requirements-driven autoscaling of microservices. In Proceedings of the 19th Conference on Software Engineering for Adaptive and Self-Managing Systems.
[242]
OpenAI. 2023. Generative Models. Retrieved May 12, 2023 from https://openai.com/index/generative-models/
[243]
OpenAI. 2024. Hello GPT-4o. Retrieved May 14, 2024 from https://openai.com/index/hello-gpt-4o/
[244]
Jiabao Pan, Yan Zhang, Chen Zhang, Zuozhu Liu, Hongwei Wang, and Haizhou Li. 2024. DynaThink: Fast or slow? A dynamic decision-making framework for large language models. arXiv:2407.01009 [cs.CL].
[245]
Ashutosh Pandey, Gabriel A. Moreno, Javier Cámara, and David Garlan. 2016. Hybrid planning for decision making in self-adaptive systems. In Proceedings of the IEEE 10th International Conference on Self-Adaptive and Self-Organizing Systems (SASO), 130–139. DOI:
[246]
Ravi Pandya, Michelle Zhao, Changliu Liu, Reid Simmons, and Henny Admoni. 2024. Multi-agent strategy explanations for human-robot collaboration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[247]
Emilio Parisotto, Francis Song, Jack Rae, Razvan Pascanu, Caglar Gulcehre, Siddhant Jayakumar, Max Jaderberg, Raphaël Lopez Kaufman, Aidan Clark, Seb Noury, Matthew Botvinick, Nicolas Heess, and Raia Hadsell. 2020. Stabilizing Transformers for Reinforcement Learning. In Proceedings of the 37th International Conference on Machine Learning, Vol. 119. PMLR, 7487–7498.
[248]
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv:2304.03442 [cs.HC].
[249]
Juan Parra-Ullauri, Antonio García-Domínguez, Nelly Bencomo, and Luis Garcia-Paucar. 2022. History-aware explanations: towards enabling human-in-the-loop in self-adaptive systems. In Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings (MODELS ’22), 286–295. DOI:
[250]
Tim Pearce, Tabish Rashid, Anssi Kanervisto, Dave Bignell, Mingfei Sun, Raluca Georgescu, Sergio Valcarcel Macua, Shan Zheng Tan, Ida Momennejad, Katja Hofmann, and Sam Devlin. 2023. Imitating human behaviour with diffusion models. In Proceedings of the 11th International Conference on Learning Representations.
[251]
Laura Plein, Wendkûuni C. Ouédraogo, Jacques Klein, and Tegawendé F. Bissyandé. 2024. Automatic generation of test cases based on bug reports: A feasibility study with large language models. In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion ’24), 360–361. DOI:
[252]
Michal Pluhacek, Anezka Kazikova, Tomas Kadavy, Adam Viktorin, and Roman Senkerik. 2023. Leveraging large language models for the generation of novel metaheuristic optimization algorithms. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation (GECCO ’23 Companion), 1812–1820. DOI:
[253]
Nico Potyka, Yuqicheng Zhu, Yunjie He, Evgeny Kharlamov, and Steffen Staab. 2024. Robust knowledge extraction from large language models using social choice theory. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 1593–1601.
[254]
Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. 2023. GrIPS: Gradient-free, edit-based instruction search for prompting large language models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 3845–3864. DOI:
[255]
Anamaria-Roberta Preda, Christoph Mayr-Dorn, Atif Mashkoor, and Alexander Egyed. 2024. Supporting high-level to low-level requirements coverage reviewing with large language models. In Proceedings of the Mining Software Repositories (MSR) Conference.
[256]
Ethan Pronovost, Meghana Reddy Ganesina, Noureldin Hendy, Zeyu Wang, Andres Morales, Kai Wang, and Nicholas Roy. 2023. Scenario diffusion: Controllable driving scenario generation with diffusion. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[257]
Moschoula Pternea, Prerna Singh, Abir Chakraborty, Yagna Oruganti, Mirco Milletari, Sayli Bapat, and Kebei Jiang. 2024. The RL/LLM taxonomy tree: reviewing synergies between reinforcement learning and large language models. arXiv:2402.01874 [cs.CL].
[258]
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, dahai li, Zhiyuan Liu, and Maosong Sun. 2024. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. In Proceedings of the 12th International Conference on Learning Representations.
[259]
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. arXiv:2103.00020 [cs.CV].
[260]
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683 [cs.LG].
[261]
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125 [cs.CV].
[262]
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, and Niko Suenderhauf. 2023. SayPlan: Grounding large language models using 3D scene graphs for scalable robot task planning. In Proceedings of the 7th Annual Conference on Robot Learning.
[263]
Fabian Ranz, Vera Hummel, and Wilfried Sihn. 2017. Capability-based task allocation in human-robot collaboration. Procedia Manufacturing 9 (2017), 182–189. DOI:
[264]
N. Rao, K. Jain, U. Alon, C. Goues, and V. J. Hellendoorn. 2023. CAT-LM training language models on aligned code and tests. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE ’23), 409–420. DOI:
[265]
Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf. 2021. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In Proceedings of the 38th International Conference on Machine Learning, Vol. 139. PMLR, 8857–8868.
[266]
Emily Reif, Crystal Qian, James Wexler, and Minsuk Kahng. 2024. Automatic histograms: Leveraging language models for text dataset exploration. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24). Article 53, 9 pages. DOI:
[267]
D. A. Reynolds and R. C. Rose. 1995. Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Transactions on Speech and Audio Processing 3, 1 (1995), 72–83. DOI:
[268]
Francisco Ribeiro, José Nuno Macedo, and Kanae Tsushima. 2023. Beyond code generation: The need for type-aware language models. In Proceedings of the IEEE/ACM International Workshop on Automated Program Repair (APR ’23), 21–22. DOI:
[269]
Celian Ringwald. 2024. Learning pattern-based extractors from natural language and knowledge graphs: Applying large language models to wikipedia and linked open data. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (IAAI ’24), Vol. 38, Student Abstracts, Undergraduate Consortium and Demonstrations (EAAI ’24), Vol. 38, 23411–23412. DOI:
[270]
Juan Rocamonde, Victoriano Montesinos, Elvis Nava, Ethan Perez, and David Lindner. 2024. Vision-language models are zero-shot reward models for reinforcement learning. In Proceedings of the 12th International Conference on Learning Representations.
[271]
Kevin Roose. 2022. An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy. The New York Times (02 Sept. 2022). Retrieved May 12, 2024 from https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
[272]
Enrico Saccon, Ahmet Tikna, Davide De Martini, Edoardo Lamon, Marco Roveri, and Luigi Palopoli. 2024. When Prolog meets generative models: A new approach for managing knowledge and planning in robotic applications. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[273]
Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. 2024. A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv:2402.07927 [cs.AI].
[274]
Md Sadman Sakib and Yu Sun. 2024. From cooking recipes to robot task trees – Improving planning correctness and task efficiency by leveraging LLMs with a knowledge network. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[275]
Raquel Sanchez, Javier Troya, and Javier Camara. 2024. Automated planning for adaptive cyber-physical systems under uncertainty in temporal availability constraints. In Proceedings of the 19th Conference on Software Engineering for Adaptive and Self-Managing Systems.
[276]
Sofia Santos, João Saraiva, and Francisco Ribeiro. 2024. Large language models in automated repair of haskell type errors. In Proceedings of the IEEE/ACM International Workshop on Automated Program Repair (APR ’24).
[277]
K. Sarda. 2023. Leveraging large language models for auto-remediation in microservices architecture. In Proceedings of the IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C ’23), 16–18. DOI:
[278]
Pete Sawyer, Nelly Bencomo, Jon Whittle, Emmanuel Letier, and Anthony Finkelstein. 2010. Requirements-aware systems: A research agenda for RE for self-adaptive systems. In Proceedings of the 18th IEEE International Requirements Engineering Conference, 95–103. DOI:
[279]
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. arXiv:2302.04761 [cs.CL].
[280]
Andreas Schuller, Doris Janssen, Julian Blumenröther, Theresa Maria Probst, Michael Schmidt, and Chandan Kumar. 2024. Generating personas using LLMs and assessing their viability. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24). Article 179, 7 pages. DOI:
[281]
Rie Sera, Hironori Washizaki, Junyan Chen, Yoshiaki Fukazawa, Masahiro Taga, Kazuyuki Nakagawa, Yusuke Sakai, and Kiyoshi Honda. 2024. Development of data-driven persona including user behavior and pain point through clustering with user log of B2B software. In Proceedings of the 17th International Conference on Cooperative and Human Aspects of Software Engineering (CHASE ’24), 1–6.
[282]
Dhruv Shah, Michael Robert Equi, Bażej Osiński, Fei Xia, Brian Ichter, and Sergey Levine. 2023. Navigation with large language models: Semantic guesswork as a heuristic for planning. In Proceedings of the 7th Conference on Robot Learning, Vol. 229. PMLR, 2683–2699.
[283]
Dhruv Shah, Blazej Osinski, Brian Ichter, and Sergey Levine. 2022. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In Proceedings of the 6th Annual Conference on Robot Learning.
[284]
Hao Shao, Letian Wang, Ruobing Chen, Hongsheng Li, and Yu Liu. 2022. Safety-enhanced autonomous driving using interpretable sensor fusion transformer. In Proceedings of the 6th Annual Conference on Robot Learning.
[285]
Lifeng Shen, Weiyu Chen, and James Kwok. 2024. Multi-resolution diffusion models for time series forecasting. In Proceedings of the 12th International Conference on Learning Representations.
[286]
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in hugging face. arXiv:2303.17580 [cs.CL].
[287]
Stepan Shevtsov, Mihaly Berekmeri, Danny Weyns, and Martina Maggio. 2018. Control-theoretical software adaptation: A systematic literature review. IEEE Transactions on Software Engineering 44, 8 (2018), 784–810. DOI:
[288]
Haochen Shi, Zhiyuan Sun, Xingdi Yuan, Marc-Alexandre Côté, and Bang Liu. 2024b. OPEx: A large language model-powered framework for embodied instruction following. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 2465–2467.
[289]
Jingyu Shi, Rahul Jain, Hyungjun Doh, Ryo Suzuki, and Karthik Ramani. 2024a. An HCI-centric survey and taxonomy of human-generative-AI interactions. arXiv:2310.07127 [cs.HC].
[290]
Xiaoming Shi, Siqiao Xue, Kangrui Wang, Fan Zhou, James Zhang, Jun Zhou, Chenhao Tan, and Hongyuan Mei. 2023. Language models can improve event prediction by few-shot abductive reasoning. In Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS ’23).
[291]
Xiao Shou, Debarun Bhattacharjya, Tian Gao, Dharmashankar Subramanian, Oktie Hassanzadeh, and Kristin P Bennett. 2023. Pairwise causality guided transformers for event sequences. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 36. Curran Associates, Inc., 46520–46533.
[292]
Jaidev Shriram and Sanjayan Pradeep Kumar Sreekala. 2023. ZINify: Transforming research papers into engaging zines with large language models. In Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST ’23 Adjunct). Article 117, 3 pages. DOI:
[293]
Yash Shukla, Wenchang Gao, Vasanth Sarathy, Alvaro Velasquez, Robert Wright, and Jivko Sinapov. 2024. LgTS: Dynamic task sampling using LLM-generated sub-goals for reinforcement learning agents. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’24), 1736–1744.
[294]
Samira Silva, Patrizio Pelliccione, and Antonia Bertolino. 2024. Self-adaptive testing in the field. ACM Transactions on Autonomous and Adaptive Systems 19, 1 (Feb. 2024), Article 4, 37 pages. DOI:
[295]
Vítor E. Silva Souza, Alexei Lapouchnian, William N. Robinson, and John Mylopoulos. 2011. Awareness requirements for adaptive systems. In Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’11), 60–69. DOI:
[296]
Daniel L. Silver, Qiang Yang, and Lianghao Li. 2013. Lifelong machine learning systems: Beyond learning algorithms. In Proceedings of the Lifelong Machine Learning, Papers from the 2013 AAAI Spring Symposium, Vol. SS-13-05. AAAI. Retrieved from http://www.aaai.org/ocs/index.php/SSS/SSS13/paper/view/5802
[297]
F. Santoni De Sio and Giulio Mecacci. 2021. Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology 34, 4 (2021), 1057–1084. DOI:
[298]
D. Sobania, M. Briesch, C. Hanna, and J. Petke. 2023. An analysis of the automatic bug fixing performance of ChatGPT. In Proceedings of the IEEE/ACM International Workshop on Automated Program Repair (APR ’23), 23–30. DOI:
[299]
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, Vol. 37. PMLR, 2256–2265.
[300]
Yang Song and Stefano Ermon. 2019. Generative modeling by estimating gradients of the data distribution. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 32. Curran Associates, Inc.
[301]
Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2021. Score-based generative modeling through stochastic differential equations. In Proceedings of the International Conference on Learning Representations.
[302]
Yuan Sui, Mengyu Zhou, Mingjie Zhou, Shi Han, and Dongmei Zhang. 2024. Table meets LLM: Can large language models understand structured table data? A benchmark and empirical study. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM ’24), 645–654. DOI:
[303]
Hao Sun, Alihan Hüyük, and Mihaela van der Schaar. 2024b. Query-dependent prompt evaluation and optimization with offline inverse RL. In Proceedings of the 12th International Conference on Learning Representations.
[304]
Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, and Chao Zhang. 2023b. AdaPlanner: Adaptive planning from feedback with language models. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[305]
Jiankai Sun, Yiqi Jiang, Jianing Qiu, Parth Nobel, Mykel J. Kochenderfer, and Mac Schwager. 2023a. Conformal prediction for uncertainty-aware planning with diffusion dynamics model. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 36. Curran Associates, Inc., 80324–80337.
[306]
Jingkai Sun, Qiang Zhang, Yiqun Duan, Xiaoyang Jiang, Chong Cheng, and Renjing Xu. 2024d. Prompt, plan, perform: LLM-based humanoid control via quantized imitation learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[307]
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, and Yue Zhao. 2024a. TrustLLM: Trustworthiness in large language models. arXiv:2401.05561 [cs.CL].
[308]
Yuqiang Sun, Daoyuan Wu, Yue Xue, Han Liu, Haijun Wang, Zhengzi Xu, Xiaofei Xie, and Yang Liu. 2024c. GPTScan: Detecting logic vulnerabilities in smart contracts by combining GPT with program analysis. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 166, 13 pages. DOI:
[309]
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70. PMLR, 3319–3328.
[310]
Daniel Sykes, William Heaven, Jeff Magee, and Jeff Kramer. 2008. From goals to components: A combined approach to self-management. In Proceedings of the 2008 International Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’08), 1–8.
[311]
Andrew Szot, Max Schwarzer, Harsh Agrawal, Bogdan Mazoure, Rin Metcalf, Walter Talbott, Natalie Mackraz, R. Devon Hjelm, and Alexander T. Toshev. 2024. Large language models as generalizable policies for embodied tasks. In Proceedings of the 12th International Conference on Learning Representations.
[312]
Shiro Takagi. 2022. On the effect of pre-training for transformer in different modality on offline reinforcement learning. In Proceedings of the Advances in Neural Information Processing Systems.
[313]
Weihao Tan, Wentao Zhang, Shanqi Liu, Longtao Zheng, Xinrun Wang, and Bo An. 2024. True knowledge comes from practice: Aligning large language models with embodied environments via reinforcement learning. In Proceedings of the 12th International Conference on Learning Representations.
[314]
Binh Tang and David S. Matteson. 2021. Probabilistic transformer for time series analysis. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 23592–23608.
[315]
Peiwang Tang and Xianchao Zhang. 2023. Infomaxformer: Maximum entropy transformer for long time-series forecasting problem. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’23), 1670–1678.
[316]
Z. Tang, C. Li, J. Ge, X. Shen, Z. Zhu, and B. Luo. 2021. AST-transformer: Encoding abstract syntax trees efficiently for code summarization. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE ’21), 1193–1195. DOI:
[317]
Daniel Tanneberg, Felix Ocker, Stephan Hasler, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Heiko Wersing, Bernhard Sendhoff, and Michael Gienger. 2024. To help or not to help: LLM-based attentive support for human-robot group interactions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[318]
Zhengwei Tao, Ting-En Lin, Xiancai Chen, Hangyu Li, Yuchuan Wu, Yongbin Li, Zhi Jin, Fei Huang, Dacheng Tao, and Jingren Zhou. 2024. A survey on self-evolution of large language models. arXiv:2404.14387 [cs.CL].
[319]
Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. 2021. CSDI: Conditional score-based diffusion models for probabilistic time series imputation. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 24804–24816.
[320]
Xiaoyu Tian, Liangyu Chen, Na Liu, Yaxuan Liu, Wei Zou, Kaijiang Chen, and Ming Cui. 2023. DUMA: A dual-mind conversational agent with fast and slow thinking. arXiv]2310.18075 [cs.CL]
[321]
Emanuel Todorov, Tom Erez, and Yuval Tassa. 2012. MuJoCo: A physics engine for model-based control. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 5026–5033. DOI:
[322]
Christos Tsigkanos, Pooja Rani, Sebastian Müller, and Timo Kehrer. 2023a. Variable discovery with large language models for metamorphic testing of scientific software. In Proceedings of the International Conference on Computational Science (ICCS ’23). Springer Nature, 321–335.
[323]
Christos Tsigkanos, Pooja Rani, Sebastian Müller, and Timo Kehrer. 2023b. Large language models: The next frontier for variable discovery within metamorphic testing? In Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER ’23), 678–682. DOI:
[324]
Michele Tufano, Dawn Drain, Alexey Svyatkovskiy, and Neel Sundaresan. 2022. Generating accurate assert statements for unit test cases using pretrained transformers. In Proceedings of the 3rd ACM/IEEE International Conference on Automation of Software Test (AST ’22), 54–64. DOI:
[325]
U.S. Department of Defense. 2023. DoD directive 3000.09, autonomy in weapon systems.
[326]
Vasily Varenov and Aydar Gabdrahmanov. 2021. Security requirements classification into groups using NLP transformers. In Proceedings of the IEEE 29th International Requirements Engineering Conference Workshops (REW ’21), 444–450. DOI:
[327]
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc.
[328]
Norha M. Villegas, Gabriel Tamura, Hausi A. Müller, Laurence Duchien, and Rubby Casallas. 2013. DYNAMICO: A Reference Model for Governing Control Objectives and Context Relevance in Self-Adaptive Software Systems. Springer, Berlin, 265–293. DOI:
[329]
Johanna Walker, Elisavet Koutsiana, Michelle Nwachukwu, Albert Meroño Peñuela, and Elena Simperl. 2024. The promise and challenge of large language models for knowledge engineering: Insights from a hackathon. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24). Article 318, 9 pages. DOI:
[330]
Xingchen Wan, Ruoxi Sun, Hanjun Dai, Sercan Arik, and Tomas Pfister. 2023a. Better zero-shot reasoning with self-adaptive prompting. In Findings of the Association for Computational Linguistics (ACL ’23). Association for Computational Linguistics, 3493–3514. DOI:
[331]
Xingchen Wan, Ruoxi Sun, Hootan Nakhost, Hanjun Dai, Julian Eisenschlos, Sercan Arik, and Tomas Pfister. 2023b. Universal self-adaptive prompting. arXiv: 2305.14926. Retrieved from https://arxiv.org/pdf/2305.14926.pdf
[332]
Bryan Wang, Gang Li, and Yang Li. 2023d. Enabling conversational interaction with mobile ui using large language models. In Proceedings of the Conference on Human Factors in Computing Systems (CHI ’23). Article 432, 17 pages. DOI:
[333]
Bailin Wang, Zi Wang, Xuezhi Wang, Yuan Cao, Rif A. Saurous, and Yoon Kim. 2023f. Grammar prompting for domain-specific language generation with large language models. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[334]
Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, and Xing Xie. 2023b. On the robustness of ChatGPT: An adversarial and out-of-distribution perspective. arXiv:2302.12095 [cs.AI].
[335]
Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, and Dongsheng Li. 2022. Bootstrapped transformer for offline reinforcement learning. In Proceedings of the Advances in Neural Information Processing Systems.
[336]
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, and Jirong Wen. 2024b. A survey on large language model based autonomous agents. Frontiers of Computer Science 18, 6 (2024), 186345. DOI:
[337]
Weishi Wang, Yue Wang, Shafiq Joty, and Steven C.H. Hoi. 2023e. RAP-Gen: Retrieval-augmented patch generation with CodeT5 for automatic program repair. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’23), 146–158. DOI:
[338]
Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric Xing, and Zhiting Hu. 2024a. PromptAgent: Strategic planning with language models enables expert-level prompt optimization. In Proceedings of the 12th International Conference on Learning Representations.
[339]
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023g. Self-consistency improves chain of thought reasoning in language models. In Proceedings of the 11th International Conference on Learning Representations (ICLR ’23). OpenReview.net.
[340]
Yidong Wang, Zhuohao Yu, Wenjin Yao, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, and Yue Zhang. 2024c. PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization. In Proceedings of the 12th International Conference on Learning Representations.
[341]
Zihao Wang, Shaofei Cai, Guanzhou Chen, Anji Liu, Xiaojian Ma, and Yitao Liang. 2023a. Describe, explain, plan and select: Interactive planning with LLMs enables open-world multi-task agents. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[342]
Zhendong Wang, Jonathan J. Hunt, and Mingyuan Zhou. 2023c. Diffusion policies as an expressive policy class for offline reinforcement learning. In Proceedings of the 11th International Conference on Learning Representations.
[343]
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023a. Chain-of-thought prompting elicits reasoning in large language models. arXiv:2201.11903 [cs.CL].
[344]
Yuxiang Wei, Chunqiu Steven Xia, and Lingming Zhang. 2023b. Copiloting the copilots: Fusing large language models with completion engines for automated program repair. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’23), 172–184. DOI:
[345]
Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. 2022. NaturalProver: Grounded mathematical proof generation with language models. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 35. Curran Associates, Inc., 4913–4927.
[346]
Haomin Wen, Youfang Lin, Yutong Xia, Huaiyu Wan, Qingsong Wen, Roger Zimmermann, and Yuxuan Liang. 2023a. DiffSTG: Probabilistic spatio-temporal graph forecasting with denoising diffusion models. In Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL ’23). Article 60, 12 pages. DOI:
[347]
Qingsong Wen, Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan, and Liang Sun. 2023b. Transformers in time series: A survey. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI ’23), 6778–6786. DOI: Survey Track.
[348]
Danny Weyns. 2020. An Introduction to Self-adaptive Systems : A Contemporary Software Engineering Perspective. Wiley-IEEE Computer Society Pr.
[349]
Danny Weyns, Ilias Gerostathopoulos, Nadeem Abbas, Jesper Andersson, Stefan Biffl, Premek Brada, Tomas Bures, Amleto Di Salle, Matthias Galster, Patricia Lago, Grace Lewis, Marin Litoiu, Angelika Musil, Juergen Musil, Panos Patros, and Patrizio Pelliccione. 2023. Self-adaptation in industry: A survey. ACM Transactions on Autonomous and Adaptive Systems 18, 2 (2023), 44 pages. 1556–4665
[350]
Danny Weyns and Jesper Andersson. 2023. From self-adaptation to self-evolution leveraging the operational design domain. In Proceedings of the IEEE/ACM 18th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS ’23), 90–96. DOI:
[351]
Danny Weyns, Thomasb Back, Rene Vidal, Xin Yao, and Ahmed Nabile Belbachir. 2022a. The vision of self-evolving computing systems. Journal of Integrated Design and Process Science 26, 3–4 (2022), 351–367. DOI:
[352]
Danny Weyns, Ilias Gerostathopoulos, Barbora Buhnova, Nicolás Cardozo, Emilia Cioroaica, Ivana Dusparic, Lars Grunske, Pooyan Jamshidi, Christine Julien, Judith Michael, Gabriel Moreno, Shiva Nejati, Patrizio Pelliccione, Federico Quin, Genaina Rodrigues, Bradley Schmerl, Marco Vieira, Thomas Vogel, and Rebekka Wohlrab. 2022b. Guidelines for artifacts to support industry-relevant research on self-adaptation. ACM SIGSOFT Software Engineering Notes 47, 4 (Sep. 2022), 18–24. DOI:
[353]
D. Weyns, U. Iftikhar, S. Malek, and J. Andersson. 2012a. Claims and supporting evidence for self-adaptive systems: A literature study. In Proceedings of the 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 89–98. DOI:
[354]
Danny Weyns and Usman M. Iftikhar. 2023. ActivFORMS: A formally founded model-based approach to engineer self-adaptive systems. ACM Transactions on Software Engineering and Methodology 32, 1, Article 12 (Feb. 2023), 48 pages. DOI:
[355]
Danny Weyns, Sam Malek, and Jesper Andersson. 2012b. FORMS: Unifying reference model for formal specification of distributed self-adaptive systems. ACM Transactions on Autonomous and Adaptive Systems (TAAS) 7, 1, Article 8 (may 2012), 61 pages. DOI:
[356]
Danny Weyns, Sam Malek, and Jesper Andersson. 2012b. FORMS: Unifying reference model for formal specification of distributed self-adaptive systems. ACM Transactions on Autonomous and Adaptive Systems 7, 1 (2012), 8:1–8:61.
[357]
Danny Weyns, Bradley Schmerl, Vincenzo Grassi, Sam Malek, Raffaela Mirandola, Christian Prehofer, Jochen Wuttke, Jesper Andersson, Holger Giese, and Karl M. Göschka. 2013. On Patterns for Decentralized Control in Self-Adaptive Systems. Springer, Berlin, 76–107. DOI:
[358]
Jon Whittle, Pete Sawyer, Nelly Bencomo, Betty H. C. Cheng, and Jean-Michel Bruel. 2009. RELAX: Incorporating uncertainty into the specification of self-adaptive systems. In Proceedings of the 2009 17th IEEE International Requirements Engineering Conference, 79–88.
[359]
Nathan Gabriel Wood. 2023. Autonomous weapon systems and responsibility gaps: a taxonomy. Ethics and Information Technology 25, 1 (2023), 16. DOI:
[360]
Haoze Wu, Clark Barrett, and Nina Narodytska. 2023. Lemur: Integrating large language models in automated program verification. In Proceedings of the 3rd Workshop on Mathematical Reasoning and AI at NeurIPS ’23.
[361]
Sifan Wu, Xi Xiao, Qianggang Ding, Peilin Zhao, Ying Wei, and Junzhou Huang. 2020. Adversarial sparse transformer for time series forecasting. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 17105–17115.
[362]
Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022b. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the Conference on Human Factors in Computing Systems (CHI ’22). Article 385, 22 pages.
[363]
Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Norman Rabe, Charles E Staats, Mateja Jamnik, and Christian Szegedy. 2022a. Autoformalization with large language models. In Proceedings of the Advances in Neural Information Processing Systems.
[364]
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, and Tao Gui. 2023. The rise and potential of large language model based agents: A survey. arXiv:2309.07864 [cs.AI].
[365]
C. Xia, Y. Ding, and L. Zhang. 2023a. The plastic surgery hypothesis in the era of large language models. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), 522–534. DOI:
[366]
Chunqiu Steven Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, and Lingming Zhang. 2024. Fuzz4All: Universal fuzzing with large language models. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 126, 13 pages. DOI:
[367]
Chunqiu Steven Xia, Yuxiang Wei, and Lingming Zhang. 2023b. Automated program repair in the era of large pre-trained language models. In Proceedings of the IEEE/ACM 45th International Conference on Software Engineering (ICSE ’23), 1482–1494. DOI:
[368]
Ziyang Xiao, Dongxiang Zhang, Yangjun Wu, Lilin Xu, Yuan Jessica Wang, Xiongwei Han, Xiaojin Fu, Tao Zhong, Jia Zeng, Mingli Song, and Gang Chen. 2024. Chain-of-experts: When LLMs meet complex operations research problems. In Proceedings of the 12th International Conference on Learning Representations.
[369]
Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, and Tommi S. Jaakkola. 2022. Crystal diffusion variational autoencoder for periodic material generation. In Proceedings of the International Conference on Learning Representations.
[370]
Tianbao Xie, Siheng Zhao, Chen Henry Wu, Yitao Liu, Qian Luo, Victor Zhong, Yanchao Yang, and Tao Yu. 2024. Text2Reward: Reward Shaping with Language Models for Reinforcement Learning. In The Twelfth International Conference on Learning Representations.
[371]
Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Wang Yanggang, Haiyu Li, and Zhilin Yang. 2022a. GPS: Genetic prompt search for efficient few-shot learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 8162–8171. DOI:
[372]
Jiehui Xu, Haixu Wu, Jianmin Wang, and Mingsheng Long. 2022b. Anomaly transformer: Time series anomaly detection with association discrepancy. In Proceedings of the International Conference on Learning Representations.
[373]
Mengdi Xu, Yuchen Lu, Yikang Shen, Shun Zhang, Ding Zhao, and Chuang Gan. 2023. Hyper-decision transformer for efficient online policy adaptation. In Proceedings of the 11th International Conference on Learning Representations.
[374]
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, and Tianyi Zhou. 2024a. A survey on knowledge distillation of large language models. arXiv:2402.13116 [cs.CL].
[375]
Zelai Xu, Chao Yu, Fei Fang, Yu Wang, and Yi Wu. 2024b. Language agents with reinforcement learning for strategic play in the Werewolf game. arXiv:2310.18940 [cs.AI].
[376]
Zhiyi Xue, Liangguo Li, Senyue Tian, Xiaohong Chen, Pingping Li, Liangyu Chen, Tingting Jiang, and Min Zhang. 2024. Domain knowledge is all you need: A field deployment of LLM-powered test case generation in FinTech domain. In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion ’24), 314–315. DOI:
[377]
Taku Yamagata, Ahmed Khalil, and Raúl Santos-Rodríguez. 2023. Q-learning decision transformer: Leveraging dynamic programming for conditional sequence modelling in offline RL. In Proceedings of the 40th International Conference on Machine Learning (ICML’ 23). JMLR.org, Article 1625, 19 pages.
[378]
Huan Yan and Yong Li. 2023. A survey of generative AI for intelligent transportation systems. arXiv:2312.08248 [cs.AI].
[379]
Aidan Z. H. Yang, Claire Le Goues, Ruben Martins, and Vincent Hellendoorn. 2024a. Large language models for test-free fault localization. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24). Article 17, 12 pages. DOI:
[380]
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2024c. Large language models as optimizers. In Proceedings of the 12th International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=Bb4VGOWELI
[381]
Fangkai Yang, Wenjie Yin, Lu Wang, Tianci Li, Pu Zhao, Bo Liu, Paul Wang, Bo Qiao, Yudong Liu, Mårten Björkman, Saravan Rajmohan, Qingwei Lin, and Dongmei Zhang. 2023d. Diffusion-based time series data imputation for cloud failure prediction at Microsoft 365. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ’23), 2050–2055. DOI:
[382]
Heng Yang and Ke Li. 2023a. InstOptima: Evolutionary multi-objective instruction optimization via large language model-based instruction operators. In Findings of the Association for Computational Linguistics (EMNLP ’23). Association for Computational Linguistics, 13593–13602. DOI:
[383]
Jingda Yang and Ying Wang. 2024. Toward auto-modeling of formal verification for nextG protocols: A multimodal cross- and self-attention large language model approach. IEEE Access 12 (2024), 27858–27869. DOI:
[384]
Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2023e. Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys 56, 4, Article 105 (Nov. 2023), 39 pages. DOI:
[385]
Yaodong Yang, Guangyong Chen, Weixun Wang, Xiaotian Hao, Jianye HAO, and Pheng-Ann Heng. 2022. Transformer-based working memory for multiagent reinforcement learning with action parsing. In Proceedings of the Advances in Neural Information Processing Systems.
[386]
Zhun Yang, Adam Ishay, and Joohyung Lee. 2023a. Learning to Solve Constraint Satisfaction Problems with Recurrent Transformer. In Proceedings of the 11th International Conference on Learning Representations.
[387]
Zhenjie Yang, Xiaosong Jia, Hongyang Li, and Junchi Yan. 2023b. LLM4Drive: A survey of large language models for autonomous driving. arXiv:2311.01043 [cs.AI].
[388]
Zhutian Yang, Jiayuan Mao, Yilun Du, Jiajun Wu, Joshua B. Tenenbaum, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2023c. Compositional diffusion-based continuous constraint solvers. In Proceedings of the 7th Annual Conference on Robot Learning.
[389]
Ziyi Yang, Shreyas S. Raman, Ankit Shah, and Stefanie Tellex. 2024b. Plug in the safety chip: Enforcing constraints for LLM-driven robot agents. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[390]
Jianan Yao, Ziqiao Zhou, Weiteng Chen, and Weidong Cui. 2023b. Leveraging large language models for automated proof synthesis in rust. arXiv:2311.03739 [cs.FL].
[391]
Shunyu Yao, Howard Chen, John Yang, and Karthik R. Narasimhan. 2022. WebShop: Towards scalable real-world web interaction with grounded language agents. In Proceedings of the Advances in Neural Information Processing Systems.
[392]
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023a. Tree of Thoughts: Deliberate problem solving with large language models. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 36. Curran Associates, Inc., 11809–11822.
[393]
Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. 2024. A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing 4, 2 (Jun. 2024), 100211. DOI:
[394]
Takuma Yoneda, Jiading Fang, Peng Li, Huanyu Zhang, Tianchong Jiang, Shengjie Lin, Ben Picker, David Yunis, Hongyuan Mei, and Matthew R. Walter. 2024. Statler: State-maintaining language models for embodied reasoning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[395]
Chenning Yu, Qingbiao Li, Sicun Gao, and Amanda Prorok. 2023b. Accelerating multi-agent planning using graph transformers with bounded suboptimality. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’23), 3432–3439. DOI:
[396]
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. 2019. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Proceedings of the Conference on Robot Learning (CoRL ’19).
[397]
Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, and Fei Xia. 2023a. Language to rewards for robotic skill synthesis. arXiv:2306.08647 [cs.RO].
[398]
Wei Yuan, Quanjun Zhang, Tieke He, Chunrong Fang, Nguyen Quoc Viet Hung, Xiaodong Hao, and Hongzhi Yin. 2022. CIRCLE: continual repair across programming languages. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA ’22), 678–690. DOI:
[399]
Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu. 2024. 3D diffusion policy: Generalizable visuomotor policy learning via simple 3D representations. In Proceedings of the Workshop on 3D Visual Representations for Robot Manipulation (ICRA ’24).
[400]
Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. 2023a. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, 11121–11128. DOI:
[401]
Fanlong Zeng, Wensheng Gan, Yongheng Wang, Ning Liu, and Philip S. Yu. 2023b. Large language models for robotics: A survey. arXiv:2311.07226 [cs.RO].
[402]
Bin Zhang, Hangyu Mao, Jingqing Ruan, Ying Wen, Yang Li, Shao Zhang, Zhiwei Xu, Dapeng Li, Ziyue Li, Rui Zhao, Lijuan Li, and Guoliang Fan. 2023c. Controlling large language model-based agents for large-scale decision-making: An actor-critic approach. arXiv:2311.13884 [cs.AI].
[403]
Chenyuan Zhang, Hao Liu, Jiutian Zeng, Kejing Yang, Yuhong Li, and Hui Li. 2024d. Prompt-enhanced software vulnerability detection using ChatGPT. In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion ’24), 276–277. DOI:
[404]
Chenrui Zhang, Lin Liu, Chuyuan Wang, Xiao Sun, Hongyu Wang, Jinpeng Wang, and Mingchen Cai. 2024c. PREFER: Prompt ensemble learning via feedback-reflect-refine. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, 19525–19532. DOI:
[405]
Ceyao Zhang, Kaijie Yang, Siyi Hu, Zihao Wang, Guanghe Li, Yihang Sun, Cheng Zhang, Zhaowei Zhang, Anji Liu, Song-Chun Zhu, Xiaojun Chang, Junge Zhang, Feng Yin, Yitao Liang, and Yaodong Yang. 2024e. ProAgent: Building proactive cooperative agents with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, 17591–17599. DOI:
[406]
Hao Zhang, Hao Wang, and Zhen Kan. 2023d. Exploiting transformer in sparse reward reinforcement learning for interpretable temporal logic motion planning. IEEE Robotics and Automation Letters 8, 8 (Aug. 2023), 4831–4838. DOI:
[407]
Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, and Joseph J. Lim. 2023f. Bootstrap your own skills: Learning to solve new tasks with large language model guidance. In Proceedings of the 7th Annual Conference on Robot Learning.
[408]
Lei Zhang, Yuge Zhang, Kan Ren, Dongsheng Li, and Yuqing Yang. 2024f. MLCopilot: Unleashing the power of large language models in solving machine learning tasks. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, Vol. 1: Long Papers. Association for Computational Linguistics, 2931–2959.
[409]
Mingyue Zhang, Jialong Li, Nianyu Li, Eunsuk Kang, and Kenji Tei. 2024b. User-driven adaptation: Tailoring autonomous driving systems with dynamic preferences. In Proceedings of the ACM International Conference on Human Factors in Computing Systems. ACM.
[410]
Mingyue Zhang, Jialong Li, Haiyan Zhao, Kenji Tei, Shinichi Honiden, and Zhi Jin. 2021. A meta reinforcement learning-based approach for self-adaptive system. In Proceedings of the IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS ’21), 1–10.
[411]
Quanjun Zhang, Chunrong Fang, Yang Xie, Yaxin Zhang, Yun Yang, Weisong Sun, Shengcheng Yu, and Zhenyu Chen. 2023a. A survey on large language models for software engineering. arXiv:2312.15223 [cs.SE].
[412]
Shujian Zhang, Chengyue Gong, Lemeng Wu, Xingchao Liu, and Mingyuan Zhou. 2023b. AutoML-GPT: Automatic machine learning with GPT. arXiv:2305.02499 [cs.CL].
[413]
Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E. Gonzalez. 2023e. TEMPERA: Test-time prompt editing via reinforcement learning. In Proceedings of the 11th International Conference on Learning Representations.
[414]
Yunhao Zhang and Junchi Yan. 2023. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In Proceedings of the 11th International Conference on Learning Representations.
[415]
Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao, Zi Gong, Hang Yu, Jianguo Li, and Rui Wang. 2024a. Unifying the perspectives of NLP and software engineering: A survey on language models for code. arXiv:2311.07989 [cs.CL].
[416]
Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. 2024. ExpeL: LLM agents are experiential learners. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI ’24), Proceedings of the 36th Conference on Innovative Applications of Artificial Intelligence (IAAI ’24), Proceedings of the 14th Symposium on Educational Advances in Artificial Intelligence (EAAI ’14). AAAI Press, 19632–19642. DOI:
[417]
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, and Mengnan Du. 2023a. Explainability for large language models: A survey. arXiv:2309.01029 [cs.CL].
[418]
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023b. A survey of large language models. arXiv:2303.18223 [cs.CL].
[419]
Qinqing Zheng, Amy Zhang, and Aditya Grover. 2022. Online decision transformer. In Proceedings of the 39th International Conference on Machine Learning, Vol. 162. PMLR, 27042–27059.
[420]
Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, and Baishakhi Ray. 2023a. Language-guided traffic simulation via scene-level diffusion. arXiv:2306.06344 [cs.RO].
[421]
Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. 2023b. Guided conditional diffusion for controllable traffic simulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’23), 3560–3566. DOI:
[422]
Haotian Zhou, Yunhan Lin, Longwu Yan, Jihong Zhu, and Huasong Min. 2024a. LLM-BT: Performing robotic adaptive tasks based on large language models and behavior trees. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[423]
Jin Peng Zhou, Charles E. Staats, Wenda Li, Christian Szegedy, Kilian Q Weinberger, and Yuhuai Wu. 2024c. Don’t trust: Verify – Grounding LLM quantitative reasoning with autoformalization. In Proceedings of the 12th International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=V5tdi14ple
[424]
Siyuan Zhou, Yilun Du, Shun Zhang, Mengdi Xu, Yikang Shen, Wei Xiao, Dit-Yan Yeung, and Chuang Gan. 2023a. Adaptive online replanning with diffusion models. In Proceedings of the 37th Conference on Neural Information Processing Systems.
[425]
Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. 2022. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the 39th International Conference on Machine Learning, Vol. 162. PMLR, 27268–27286.
[426]
Tian Zhou, Peisong Niu, Xue Wang, Liang Sun, and Rong Jin. 2023b. One fits all: Power general time series analysis by pretrained LM. In Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS ’23).
[427]
Xin Zhou, Ting Zhang, and David Lo. 2024d. Large language model for vulnerability detection: Emerging results and future directions. In Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER ’24), 47–51. DOI:
[428]
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, and Lei Ma. 2024b. ISR-LLM: Iterative self-refined large language model for long-horizon sequential task planning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’24).
[429]
Fangqi Zhu, Jun Gao, Changlong Yu, Wei Wang, Chen Xu, Xin Mu, Min Yang, and Ruifeng Xu. 2023b. A generative approach for script event prediction via contrastive fine-tuning. In Proceedings of the 27th AAAI Conference on Artificial Intelligence and 35h Conference on Innovative Applications of Artificial Intelligence and 13th Symposium on Educational Advances in Artificial Intelligence(AAAI’ 23/IAAI ’23/EAAI ’23). Article 1576, 9 pages. DOI:
[430]
Tianchen Zhu, Yue Qiu, Haoyi Zhou, and Jianxin Li. 2023c. Towards long-delayed sparsity: Learning a better transformer through reward redistribution. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI ’23), 4693–4701. DOI:
[431]
Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, and Jifeng Dai. 2023a. Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. arXiv:2305.17144 [cs.AI].
[432]
Zhengbang Zhu, Hanye Zhao, Haoran He, Yichao Zhong, Shenyu Zhang, Haoquan Guo, Tingting Chen, and Weinan Zhang. 2024. Diffusion models for reinforcement learning: A survey. arXiv:2311.01223 [cs.LG].
[433]
Yuchen Zhuang, Xiang Chen, Tong Yu, Saayan Mitra, Victor Bursztyn, Ryan A. Rossi, Somdeb Sarkhel, and Chao Zhang. 2024. ToolChain*: Efficient action space navigation in large language models with A* search. In Proceedings of the 12th International Conference on Learning Representations.
[434]
Orr Zohar, Shih-Cheng Huang, Kuan-Chieh Wang, and Serena Yeung. 2023. LOVM: Language-only vision model selection. arXiv:2306.08893 [cs.CV].
[435]
Hao Zou, Zae Myung Kim, and Dongyeop Kang. 2023. A survey of diffusion models in natural language processing. arXiv:2305.14671 [cs.CL].
[436]
Łukasz Czajka and Cezary Kaliszyk. 2018. Hammer for Coq: Automation for dependent type theory. Journal of Automated Reasoning 61, 1 (Jun. 2018), 423–453. DOI:

Recommendations

Comments

Please enable JavaScript to view thecomments powered by Disqus.

Information & Contributors

Information

Published In

cover image ACM Transactions on Autonomous and Adaptive Systems
ACM Transactions on Autonomous and Adaptive Systems  Volume 19, Issue 3
September 2024
242 pages
EISSN:1556-4703
DOI:10.1145/3613578
Issue’s Table of Contents

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 30 September 2024
Online AM: 20 August 2024
Accepted: 27 June 2024
Revised: 17 June 2024
Received: 17 June 2024
Published in TAAS Volume 19, Issue 3

Check for updates

Author Tags

  1. Self-Adaptive Systems
  2. MAPE
  3. Generative AI
  4. Large Language Model
  5. diffusion model
  6. survey

Qualifiers

  • Research-article

Funding Sources

  • Grant-in-Aid for Young Scientists (Early Bird) of Waseda Research Institute for Science and Engineering, the Special Research Projects of Waseda University
  • National Natural Science Foundation of China

Contributors

Other Metrics

Bibliometrics & Citations

Bibliometrics

Article Metrics

  • 0
    Total Citations
  • 703
    Total Downloads
  • Downloads (Last 12 months)703
  • Downloads (Last 6 weeks)692
Reflects downloads up to 01 Oct 2024

Other Metrics

Citations

View Options

Get Access

Login options

Full Access

View options

PDF

View or Download as a PDF file.

PDF

eReader

View online with eReader.

eReader

Full Text

View this article in Full Text.

Full Text

Media

Figures

Other

Tables

Share

Share

Share this Publication link

Share on social media