Abstract
Large Language Models (LLMs) have opened new opportunities in modeling in general, and conceptual modeling in particular. With their advanced reasoning capabilities, accessible through natural language interfaces, LLMs enable humans to deepen their understanding of different application domains and enhance their modeling skills. However, the open-ended nature of these interfaces results in diverse interaction behaviors, which may also affect the perceived usefulness of LLM-assisted conceptual modeling. Existing work focuses on various quality metrics of LLM outputs, yet little attention has been paid to how users interact with LLMs in such modeling tasks. To address this gap, we present the design and findings of an empirical study conducted with information systems students. After labeling the interactions according to their intentions (e.g., Create Model, Discuss, or Present) and representing them as an event log, we applied process mining techniques to discover process models. These models capture the interaction behaviors and reveal recurrent patterns. We explored the differences in interacting with two LLMs (GPT 4.0 and Code Llama) for two modeling tasks (use case and domain modeling) across three application domains. Additionally, we analyzed user perceptions regarding the usefulness and ease of use of LLM-assisted conceptual modeling.
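To make the pipeline described in the abstract concrete (labeling interactions with intentions, assembling them into an event log, and discovering a process model), the following is a minimal sketch using the open-source pm4py library. The column names, session identifiers, and timestamps are invented for illustration and are not the study's actual data; the intention labels follow the examples given above.

```python
import pandas as pd
import pm4py

# Hypothetical excerpt of labeled interactions: one row per user-LLM
# exchange, tagged with the intention behind it.
df = pd.DataFrame({
    "session_id": ["s1", "s1", "s1", "s2", "s2"],
    "intention": ["Create Model", "Discuss", "Create Model",
                  "Create Model", "Present"],
    "timestamp": pd.to_datetime([
        "2024-05-01 10:00", "2024-05-01 10:06", "2024-05-01 10:14",
        "2024-05-02 09:30", "2024-05-02 09:41",
    ]),
})

# Map the columns onto pm4py's expected case/activity/timestamp keys,
# turning the labeled interactions into an event log.
log = pm4py.format_dataframe(
    df,
    case_id="session_id",
    activity_key="intention",
    timestamp_key="timestamp",
)

# Discover and render a directly-follows graph; its arcs expose the
# recurrent interaction patterns between intentions.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
pm4py.view_dfg(dfg, start_activities, end_activities)
```

Other pm4py discovery algorithms could be substituted here, e.g., pm4py.discover_petri_net_inductive to obtain a Petri-net model instead of a directly-follows graph.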
Notes
- 1. We have no dedicated sub-question regarding differences related to the modeling tasks, as the participants could work on them in an intertwined manner.
- 4. 1 token ≈ 0.75 words.
- 5. Online supplementary material: https://zenodo.org/records/13513891.
- 8. https://www.vellum.ai/llm-leaderboard, last accessed: 25.05.2024.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ali, S.J., Reinhartz-Berger, I., Bork, D. (2025). How are LLMs Used for Conceptual Modeling? An Exploratory Study on Interaction Behavior and User Perception. In: Maass, W., Han, H., Yasar, H., Multari, N. (eds.) Conceptual Modeling. ER 2024. Lecture Notes in Computer Science, vol. 15238. Springer, Cham. https://doi.org/10.1007/978-3-031-75872-0_14
Print ISBN: 978-3-031-75871-3
Online ISBN: 978-3-031-75872-0
eBook Packages: Computer Science; Computer Science (R0)