Artificial Intelligence (AI) has already undergone a series of hype and disillusionment phases (“AI winters”) since its inception in the late 1950s. The current AI hype, driven mainly by deep learning on GPU hardware and the availability of massive datasets, has yielded many promising results in research and industry. AI methods are seen as a key enabler to realize cross-sector energy management in smart grids, to optimize traffic flow in cross-modal mobility systems, to reduce resource consumption in industrial production, and to guarantee sustainable and resilient supply chains in production ecosystems. On closer inspection, however, many AI projects in the classical industries (where the output is not a pure IT product or service) do not go beyond prototype implementations or test deployments. The success rate of AI projects that make it into long-term operational use must be increased significantly for the technology to achieve its full commercial potential. That is also the best protection against the next AI winter.
From the viewpoint of the engineering practice of technical, possibly high-risk systems, the widespread use of AI methods is far from routine. In contrast to former decades, in which a basic skepticism towards AI was combined with limited AI performance, other factors currently limit its broad use: the poor availability of AI experts in enterprises and on the job market, the poor availability and accessibility of data that may serve as training data for machine learning (ML) methods, concerns about violating laws on data protection and data privacy, security concerns about opening up new vulnerabilities, and, last but not least, the lack of a systematic approach for using AI in the design and operation of technical systems in a way that lets engineers keep trust and control.
KI-Engineering – in English, AI Systems Engineering – is the topic of this special issue. It argues that the solution to this situation lies in the creation of a new professional discipline that uses Artificial Intelligence in a Systems Engineering (SE) context for the development of an overall technical system. The following questions exemplify the challenges of bringing AI and Systems Engineering together:
“Can the performance of AI be predicted already during a system design phase?”
“Must a single person understand the entire system, or can it be compartmentalized into development teams?”
“Can we plan the development of the AI solution on a fixed schedule?”
“How can an AI-enabled system pass a formal certification in a regulated industry?”
“Is it possible to automatically detect if an assumption made during development of an AI solution is no longer valid?”
“How often do I need to update and maintain an AI solution once it is deployed?”
This special issue summarizes the first findings of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING), enriched by further articles from research and industry that fall within the thematic spectrum of AI Systems Engineering.
Survey articles
This issue starts with an introductory survey article by the editors on AI Systems Engineering. They discuss the topic in comparison with the historical emergence of new engineering disciplines and outline the long-term challenges to be solved by AI Systems Engineering.
The article “Tools and methods for Edge-AI-Systems” by Schwabe et al. surveys the deployment of AI models at the edge, close to physical processes, and how to cope with resource constraints. The authors further discuss the potential of hardware/software co-design in the context of AI.
The article “PAISE – process model for AI systems engineering” by Hasterok and Stompe proposes a framework to structure the work of AI Systems Engineering initiatives from goal setting to long-term operation. It serves as guidance for project managers and helps coordinate between the different project members and stakeholders.
Methods
In the article “Explainable AI: introducing trust and comprehensibility to AI engineering”, the authors Burkart et al. propose to use Explainable AI (xAI) for model and data set refinement. Model refinement provides insights into the inner workings of a model, whereas data set refinement detects and resolves problems in the training data.
The article “The why and how of trustworthy AI” by Schmitz et al. proposes two approaches to achieve trustworthy AI: the product perspective focusing on the technology in its application environment and the organizational perspective focusing on the organizational processes for the development and operation of AI.
Applications
Sawilla et al. use two industrial use cases to discuss “Industrial challenges for AI systems engineering” in the article of the same name. Owing to the authors’ background in manufacturing, they focus on parts manufacturing as well as the process industry.
The last article “Better insights into wood-based panel production” by Bär et al. shows the use of AI for improved quality and reduced resource consumption at Dieffenbacher. They discuss challenges beyond “textbook AI” and the effort to support long-term deployments at many installations worldwide.
This special issue is a starting point to encourage a broader discussion and discourse on AI Systems Engineering with representatives of the research, industrial, and political communities. To moderate this process, CC-KING offers an interdisciplinary forum as well as technology transfer instruments.
We wish you an inspiring read and, yes, please do contact us – we look forward to your feedback!
About the authors
Jürgen Beyerer has been a full professor for informatics at the Institute for Anthropomatics and Robotics at the Karlsruhe Institute of Technology (KIT) since March 2004 and is director of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB with locations in Ettlingen, Karlsruhe, Ilmenau, Görlitz, Lemgo, Oberkochen and Rostock. His research interests include automated visual inspection, signal and image processing, variable image acquisition and processing, active vision, metrology, information theory, fusion of data and information from heterogeneous sources, system theory, autonomous systems and automation. Jürgen Beyerer is the chair of the scientific board of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING).
Julius Pfrommer is head of a research group on “distributed cyber-physical systems” at Fraunhofer IOSB, focusing on the use of AI to solve “hard problems” in industry. He is furthermore the scientific director of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING) and holds an appointment at the Karlsruhe Institute of Technology for his course on convex optimization theory for machine learning and engineering.
Thomas Usländer holds a degree in Computer Science from the University of Karlsruhe, Germany, and a PhD in Engineering from the Karlsruhe Institute of Technology (KIT), Germany. He is head of the department “Information Management and Production Control” and spokesperson of the business unit “Automation and Digitalization” at Fraunhofer IOSB. His research interests include the analysis and design of open, secure and dependable service architectures for the Industrial Internet of Things and Industrie 4.0. Thomas Usländer is the project manager of the Competence Center Karlsruhe for AI Systems Engineering (CC-KING).
© 2022 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.