DOI: 10.1145/3653644.3653662
Applications and Challenges of Large Language Models in Smart Government – From Technological Advances to Regulated Applications

Published: 20 September 2024

Abstract

This paper explores the applications and challenges of large language models (LLMs) in the context of smart government. It delves into how LLMs can enhance government decision-making, policy interpretation, and public service delivery through intelligent analysis and predictions. It also discusses the role of LLMs in processing vast amounts of government information and in analyzing public opinion. Concurrently, the paper acknowledges the challenges posed by LLMs, including data costs, security and privacy concerns, model robustness, regulatory hurdles, and technical and talent bottlenecks. It proposes recommendations for the regulated application of LLMs, such as developing robust data protection policies, standardizing model research and evaluation, fostering interdisciplinary research, and promoting integrated development across key sectors. The paper concludes with an outlook on the future of LLMs in smart government, emphasizing the need for cautious optimism and responsible innovation.



Published In

FAIML '24: Proceedings of the 2024 3rd International Conference on Frontiers of Artificial Intelligence and Machine Learning
April 2024, 379 pages
ISBN: 9798400709777
DOI: 10.1145/3653644

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

            Author Tags

            1. Large Language Models
            2. Regulated Applications
            3. Smart Government

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

            Conference

            FAIML 2024

Article Metrics

• Total Citations: 0
• Total Downloads: 38
• Downloads (last 12 months): 38
• Downloads (last 6 weeks): 28

Reflects downloads up to 24 Nov 2024
