Systematic Review

Digital Newsroom Transformation: A Systematic Review of the Impact of Artificial Intelligence on Journalistic Practices, News Narratives, and Ethical Challenges

1 Communication Sciences, Faculty of Social and Political Sciences, Hasanuddin University, Makassar 90245, Indonesia
2 Film Department, School of Design, Bina Nusantara University, Jakarta 11480, Indonesia
3 Communication Sciences, Faculty of Social and Political Sciences, Indonesian Christian University in Molluccas, Kota Ambon 97115, Indonesia
* Author to whom correspondence should be addressed.
Journal. Media 2024, 5(4), 1554-1570; https://doi.org/10.3390/journalmedia5040097
Submission received: 3 September 2024 / Revised: 30 September 2024 / Accepted: 14 October 2024 / Published: 22 October 2024

Abstract:
Artificial Intelligence (AI) fundamentally changes journalism, yet a comprehensive understanding of its impact is limited. This study presents the first systematic review to thoroughly analyze the influence of AI on journalistic practices, news narratives, and emerging ethical challenges. A rigorous analysis of 127 studies selected from 2478 original articles reveals trends in AI adoption in newsrooms, changes in journalists’ roles, innovations in news presentation, and emerging ethical implications. The key findings show a significant increase in the use of AI for news writing automation (73% of news organizations), data analysis (68%), and content personalization (62%). While AI improves efficiency and accuracy, 42% of studies reported concerns about reduced levels of nuance and context in AI-generated news. We also identified the emergence of hybrid “journalist–programmer” roles (52% of studies) and the need for “AI literacy” among journalists (38% of studies). The most prominent ethical challenges include algorithm transparency (82% of studies), data privacy (76%), and accountability relative to AI content (71%). Regional analysis reveals significant gaps in AI adoption, with important implications for global information equity. This review highlights the ongoing transformation in journalism, identifies critical gaps in current research, and offers an agenda for future investigation. Our findings provide valuable insights for media practitioners, policymakers, and researchers seeking to understand and shape the future of journalism in the age of AI.

1. Introduction

The global media landscape is undergoing an unprecedented transformation in the ever-evolving digital age. At the heart of this change is the emergence and rapid adoption of Artificial Intelligence (AI) technologies in the media and journalism industry. Artificial Intelligence, with its ability to process massive amounts of data, automate routine tasks, and even generate content, is revolutionizing how news is produced, distributed, and consumed (Marconi 2020). This phenomenon not only changes traditional journalistic practices but also challenges our understanding of the role of journalists, the quality of information, and the relationships between media outlets and their audiences.
The long history of technological innovation in journalism, from the printing press to the internet, has shown that each technological leap brings significant changes in journalistic practices and ethics (Pavlik 2000). However, AI may represent the most transformative leap since the advent of the internet. It offers the potential to improve newsroom efficiency, expand the reach of news coverage, and create new forms of interactive and personalized news narratives (Diakopoulos and Johnson 2021). At the same time, it raises critical questions about accuracy, bias, transparency, and the future of journalistic work in the AI era.
The development of AI in journalism has undergone a significant transformation, from the pre-Generative AI era to the current post-Generative AI landscape. Pavlik (2023) suggests that Generative AI is ushering in an era of potential transformation in journalism and media content. This transformation encompasses changes in news production processes, content presentation, and journalistic ethics.
In the pre-Generative AI era, Broussard et al. (2019) note that AI application in journalism was largely limited to automation tasks such as data analysis and simple news writing. Kotenidis and Veglis (2021) identified four key areas in which AI had the most impact in journalism: automated content production, data mining, news dissemination, and content optimization. These applications laid the groundwork for more advanced AI integration in newsrooms.
However, the emergence of Generative AI like ChatGPT has opened up much broader possibilities. Calvo Rubio and Rojas Torrijos (2024) observe that chatbots have openly reported the use of AI in the systematic production of news material. They also note that the debate regarding the scope of this technology’s adoption and whether it presents an opportunity or threat to journalism has been reignited.
This shift marks a significant turning point in the integration of AI in journalism, from tools that assist journalists to systems that can generate content autonomously. Dinçer (2024) argues that the transition from pre-Generative AI to post-Generative AI has significant implications for journalism education and practice. He emphasizes that journalists will always need to understand their own feelings and those of others, as this allows them to tell stories in a way that connects with people on an emotional level. This highlights the need for journalism education to balance technical skills with human-centric competencies in the AI era.
Adopting AI in journalism is no longer a futuristic speculation but an ongoing reality. Leading news organizations such as the Associated Press, Reuters, and The Washington Post have integrated AI systems into their operations for various purposes ranging from automated news writing to comment moderation and content personalization (Graefe 2016; Lindén 2017). Meanwhile, AI-powered social media platforms and news aggregators such as Facebook and Google News have become the primary source of information for many consumers, further changing the dynamics of news distribution and consumption (Newman et al. 2021).
However, along with the opportunities offered by AI comes a complex set of challenges and risks. Concerns about the loss of journalistic jobs, the potential reinforcement of existing biases, threats to data privacy, and the risk of sophisticated manipulation of information have been topics of intense debate among academics, media practitioners, and policymakers (Broussard et al. 2019). Furthermore, there are fundamental questions about how AI may affect the core functions of journalism in a democratic society, including its role as a watchdog, in setting the public agenda, and in facilitating civic discourse (Carlson 2015).
Given the scale and speed of these changes, there is an urgent need for a comprehensive and evidence-based understanding of AI’s impact on journalism and the broader media ecosystem. This systematic review addresses that need by analyzing and synthesizing the existing literature on this topic, clearly mapping the current research landscape, identifying key trends and gaps, and offering insights for future research and practice (Surjatmodjo et al. 2024).
Recent studies have further expanded our understanding of AI in journalism, revealing a reliance on AI for content generation and news curation. For instance, a survey indicated that 78% of digital leaders believe that investing in AI technology is crucial for the future of journalism, underscoring the growing acceptance of AI tools like chatbots and automated writing software in newsrooms (Sonni et al. 2024). This sentiment is echoed by Amponsah, who discusses the efficiency and personalization-based benefits that AI brings to journalism, while also addressing ethical concerns and the potential for job displacement (Amponsah and Atianashie 2024). The dual nature of AI’s impact—enhancing productivity while raising ethical dilemmas—has become a central theme in the discourse surrounding AI in journalism.
Before exploring AI’s contemporary impact on journalism, it is essential to understand its historical evolution in the media context. AI’s roots in journalism can be traced back to the beginning of newsroom computerization in the 1960s and 1970s. According to Garrison (2020), this period was characterized by the introduction of digital content management systems and electronic databases into the news production process, which laid the foundation for the integration of more sophisticated technologies in the future.
One early milestone was the development of automated news writing systems, or what is sometimes referred to as “robot journalism”. In 2014, the Associated Press began using AI to generate corporate financial reports, marking the beginning of an era in which machines could produce essential news articles without human intervention (Graefe 2016; Sonni et al. 2024). This was followed by the rapid adoption of similar technologies by other news organizations to produce various types of content, from sports reports to weather predictions and election coverage (Dörr 2016).
In parallel, AI also began to be used in data analysis and visualization, allowing journalists to extract insights from large and complex datasets in ways that were previously impossible. This paved the way for new forms of data journalism and AI-powered investigations (Broussard 2015). For example, ProPublica used machine learning techniques to analyze racial bias in criminal risk-assessment algorithms, resulting in an award-winning investigation (Angwin et al. 2016).
Recent developments in Generative AI, characterized by the emergence of large language models such as GPT-3 and audio–visual deepfake technology, have opened new dimensions of possibilities and challenges. These technologies can produce persuasive text, images, and videos, offering the potential for new and innovative forms of news narrative but also raising serious concerns about disinformation and media manipulation (Vaccari and Chadwick 2020).
Integrating AI into journalism has impacted almost every aspect of journalistic practice, from news gathering to production and distribution. One of the most significant impact areas is data collection and analysis. AI has expanded journalists’ ability to process and analyze large datasets, enabling new forms of investigative journalism and data-driven reporting (Stray 2019).
In content production, AI has enabled the automation of routine tasks such as writing weather reports, sports results, and stock market updates. This has freed up human journalists to focus on more complex and high-value reporting (Graefe 2016). However, it has also raised questions about the quality and credibility of AI-generated content and its implications for journalistic work (Lindén 2017).
AI has also changed the way news is distributed and consumed. AI-powered content-recommendation and personalization algorithms have become standard features on many online news platforms, enabling a more tailored news experience and raising concerns about filter bubbles and polarization (Helberger 2019). In addition, AI-powered news chatbots and virtual assistants offer new ways for consumers to interact with news content (Jones and Jones 2019).
However, the adoption of AI in journalism is subject to controversy. There are growing concerns about the potential for AI to amplify existing biases in news reporting, either through biased training datasets or through non-transparent algorithms (Broussard et al. 2019). Additionally, AI’s ability to generate compelling content has raised concerns about the spread of disinformation and deepfakes (Vaccari and Chadwick 2020).
Integrating AI into journalism has raised a complex set of ethical challenges. One key issue is transparency and accountability in using AI algorithms for journalistic decision-making. How can news organizations ensure AI systems make decisions aligning with journalistic ethical principles? Diakopoulos (2015) argues that a new form of “algorithmic transparency” in journalism is needed, one in which news organizations openly disclose the use and functioning of their AI systems.
Privacy and data protection issues are also a significant concern. The use of AI to analyze user data to personalize news content raises questions about how personal information should be used for such purposes (Eskens et al. 2017). In addition, there is a risk that advanced AI techniques could be used to identify anonymous sources, potentially threatening journalism’s highly valued source-protection principle (Di Salvo 2020).
Another ethical challenge relates to the potential for bias in AI systems. If algorithms used to generate or distribute news are trained on biased datasets, they can reinforce or even exacerbate unfair and stereotypical representations in news coverage (Noble 2018). This raises the question of how to ensure diversity and inclusivity in the era of AI-powered journalism.
The rise of AI-generated content, including deepfakes, raises new ethical dilemmas. How should journalists respond to and report on such content? How can news organizations verify the authenticity of content in an era in which highly sophisticated image and video manipulation has become more accessible? (Westerlund 2019).
Faced with these challenges, there are growing calls for more robust ethical and regulatory frameworks around the use of AI in journalism. Some news organizations and journalism institutions have started to develop their ethical guidelines for using AI (Dörr and Hollnbuchner 2017). Meanwhile, policymakers in various jurisdictions are considering how existing regulatory frameworks can be adapted or expanded to address the unique challenges posed by AI in the media context (Helberger et al. 2018).
The influence of AI in journalism extends far beyond the newsroom. It significantly impacts how audiences interact with news and, ultimately, the media’s function in wider society. One of the most apparent impacts is the personalization of AI-powered news content. While this can increase user engagement by presenting more relevant content, it has also raised concerns about the fragmentation of the public information space and the creation of echo chambers or filter bubbles (Pariser 2012).
Zuiderveen Borgesius et al. (2016) examined the impact of news personalization on exposure to diverse information. They found that while the effect may not be as dramatic as often portrayed, there is still a real risk of reduced exposure to diverse viewpoints. This raises the question of balancing personalization’s benefits with the need to maintain a shared public information space, which is essential for democratic discourse.
The AI algorithms underlying social media platforms and news aggregators have also changed the dynamics of information dissemination, sometimes with unintended consequences. Vosoughi et al. (2018) found that fake news spreads faster and more widely than accurate news on Twitter, partly due to how the platform’s algorithms prioritize content that generates high engagement. This highlights the challenge of designing AI systems that promote engagement without inadvertently amplifying disinformation.
Furthermore, AI’s ability to produce compelling content, including deepfakes, poses new challenges for media literacy. Audiences must learn how to navigate an information landscape where the line between genuine and manipulated content is becoming increasingly blurred. This emphasizes the need for new forms of media literacy education that equip individuals with the skills to critically evaluate information in the age of AI (Livingstone 2018).
On a broader level, the adoption of AI in journalism has essential implications for the role of media in democratic societies. Traditionally, journalism has served as a public watchdog, setting the public agenda and facilitating civil discourse (Schudson 2008). However, with AI increasingly playing a role in determining what content is seen by whom and when, there are questions about how these democratic functions can be maintained and adapted.
For example, using AI to personalize news could challenge the idea of a shared public agenda that has long been central to the deliberative-democracy model (Helberger 2019). On the other hand, AI technologies also offer the potential for new forms of civic engagement and participatory journalism. Projects such as “AI for Good”, initiated by the Google News Initiative, show how AI can be used to facilitate collaboration between journalists and their audiences in addressing complex social issues (Google News Initiative 2021).
One of the most pressing questions raised by the adoption of AI in journalism concerns its implications for the future of journalistic work. On the one hand, there are concerns that AI-driven automation will result in the large-scale loss of journalism jobs. A study by the University of Oxford estimated that around 8% of journalism jobs are at high risk of being automated in the next two decades (Frey and Osborne 2017).
However, a more nuanced view emerges from more recent research. AI is more likely to change the nature of journalistic work than to replace journalists entirely, creating the need for new skills and hybrid “human–AI” roles. For example, while AI may take over tasks such as writing simple factual news, there is increasing demand for journalists who can interpret AI output, apply editorial judgment to automated systems, and perform the in-depth reporting that is still beyond the capabilities of AI.
Broussard et al. (2019) further highlighted the importance of “AI literacy” for future journalists. They argue that journalists need to understand not only how to use AI tools but also the ethical and social implications of these technologies. This includes the ability to understand and explain the algorithms used in data-driven reporting, as well as the skills to recognize and address potential biases in AI systems.
This shift in the skills landscape also has important implications for journalism education. Journalism programs around the world are beginning to incorporate courses on AI and data science into their curricula, recognizing the need to prepare future journalists for an increasingly technology-driven media landscape (Beckett 2019). However, questions remain about how to balance the teaching of these technical skills with traditional journalistic values and practices that remain important.
While much of the discussion on AI in journalism focuses on the Western context, it is essential to consider the global implications of this trend. The adoption of AI in journalism is happening at different speeds and in other ways in various parts of the world, reflecting differences in technological infrastructure, media landscapes, and cultural and regulatory contexts.
In developing countries, for example, the adoption of AI in journalism has often been limited by resource and infrastructure constraints. However, this has also driven innovation in the use of AI to address specific challenges. For example, in India, AI startup Gram Vaani has developed a system that uses natural language processing to convert voice messages in local languages into news reports, enabling more inclusive citizen journalism in areas with low literacy levels.
In China, the adoption of AI in journalism has been rapid, driven by massive investments in AI technologies and a thriving digital media ecosystem. However, this has also raised concerns about the potential use of AI to increase censorship and state control of information (Creemers 2020). This case highlights how the impact of AI on journalism cannot be separated from the broader political and regulatory context.
In Africa, there is growing interest in the potential of AI to address challenges in news production and distribution on the continent. For example, projects such as the African Language Dataset Initiative aim to develop African language datasets to train AI models, which could enable automated news production in underrepresented languages (Orife et al. 2020). However, there are also concerns about “digital colonialism” and the risk that AI technologies developed primarily in the West may be inappropriate or even detrimental to the African context (Birhane 2020).
These global differences in the adoption and impact of AI on journalism highlight the need for comparative and cross-cultural perspectives in research on this topic. The issue also raises the question of how to ensure that the benefits of AI in journalism are fairly distributed globally and how to avoid deepening the existing digital divide.

2. Materials and Methods

This research adopts a systematic review approach to analyze and synthesize the existing literature on the impact of Artificial Intelligence (AI) on journalism (see Supplementary Materials). This methodology was chosen for its ability to provide a comprehensive overview of the state of the art in this field, identify trends and gaps in research, and provide a solid foundation for future research.
The review process follows the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to ensure transparency and reproducibility (Moher et al. 2009). The first step is to clearly define the research question; this review was guided by research questions focusing on the impact of AI on journalistic practices, news quality, the role of journalists, and ethical and regulatory challenges. Initial categories for analysis included AI adoption trends, the impact on journalistic quality, changes in journalist roles, ethical challenges, innovations in news narratives, and global perspectives. These questions and categories formed the framework for our systematic analysis of the existing literature.
We systematically searched several leading academic databases to comprehensively answer these questions. The databases used included Web of Science, Scopus, JSTOR, and IEEE Xplore. This selection ensured broad coverage of the academic literature in the fields of journalism, media studies, and computer science. To capture the relevant grey literature, we also searched repositories such as arXiv and reports from leading industry organizations, like the Reuters Institute for the Study of Journalism.
The search strategy was developed in consultation with expert librarians. It included a combination of keywords related to AI (“artificial intelligence”, “machine learning”, and “automated journalism”) and journalism (“journalism”, “news production”, and “media”). Appropriate search strings were developed and customized for each database to maximize search sensitivity and specificity.
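To illustrate how the keyword groups described above combine, the sketch below assembles a Boolean query in the style used for the database searches. This is a hypothetical reconstruction: the actual per-database strings are documented in the Supplementary Materials, and the `TITLE-ABS-KEY` field tag is a Scopus-style assumption, not necessarily the form used in each database.

```python
# Illustrative assembly of a Boolean search string from the two keyword
# groups named in the text. TITLE-ABS-KEY follows Scopus syntax; other
# databases (Web of Science, JSTOR, IEEE Xplore) use their own field tags.
ai_terms = ["artificial intelligence", "machine learning", "automated journalism"]
journalism_terms = ["journalism", "news production", "media"]

def or_group(terms):
    """Join quoted terms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"TITLE-ABS-KEY({or_group(ai_terms)} AND {or_group(journalism_terms)})"
print(query)
```

Requiring one term from each group (AND between the two OR-groups) keeps the search sensitive to AI-specific journalism studies while excluding papers that mention only one domain.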
Inclusion and exclusion criteria were strictly defined to ensure that the review remained focused and manageable. Studies were included if they: (1) were published between 2010 and 2024, reflecting recent developments in AI; (2) were written in English; (3) focused explicitly on the use of AI in a journalistic context; and (4) provided empirical data or substantial conceptual analysis. We excluded studies that only mentioned AI in passing or that focused on digital technologies in general without specific attention to AI.
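The four inclusion criteria amount to a conjunctive filter over candidate records. The sketch below is purely illustrative: the record fields (`year`, `language`, `ai_focus`, `has_substance`) are assumed names, not fields from the actual screening form.

```python
# Illustrative predicate applying the four inclusion criteria to one
# candidate record. All field names are hypothetical.
def meets_inclusion_criteria(record):
    return (
        2010 <= record["year"] <= 2024   # (1) published between 2010 and 2024
        and record["language"] == "en"   # (2) written in English
        and record["ai_focus"]           # (3) explicit focus on AI in journalism
        and record["has_substance"]      # (4) empirical data or substantial analysis
    )

candidate = {"year": 2021, "language": "en", "ai_focus": True, "has_substance": True}
print(meets_inclusion_criteria(candidate))  # → True
```

A record failing any single criterion (e.g., a 2008 publication, or one that mentions AI only in passing) is excluded, which is what keeps the review focused and manageable.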
The selection process was conducted in two stages. First, two independent researchers initially screened titles and abstracts to identify potentially relevant studies. Then, the full text of the studies that passed the initial screening was checked against the inclusion and exclusion criteria. Any disagreements between researchers were resolved through discussion, involving a third researcher if necessary.
Data were extracted from the selected studies using a pre-designed data extraction form. Information extracted included bibliographic details, research design, methodology, key findings, and reported implications. We noted the sample size, geographical context, and type of media organization studied for the empirical studies.
The quality of the included studies was assessed using appropriate critical appraisal tools. For quantitative analyses, we used the Newcastle–Ottawa Scale (Wells et al. 2013), while for qualitative studies, we used the Critical Appraisal Skills Program (CASP) checklist (CASP 2018). This quality assessment assists in interpreting findings and assessing the strength of evidence for the conclusions drawn.
Given the expected diversity in study types and reported outcomes, data synthesis adopted a narrative approach. We have organized the findings into crucial emerging themes, such as impact on journalistic practice, ethical implications, changes in media–audience relationships, and regulatory challenges. Within each theme, we highlight consensus and contradictions in the literature and identify gaps in current knowledge.
To ensure the reliability of the review process, we conducted inter-rater reliability tests at each stage of the data selection and extraction process. Cohen’s Kappa coefficient was calculated to assess the level of agreement between researchers.
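Cohen's kappa corrects raw percentage agreement for the agreement expected by chance. A minimal sketch of the calculation is shown below; the two screeners' include/exclude decisions are invented for illustration, not taken from the study data.

```python
# Minimal Cohen's kappa for two raters' categorical decisions:
# kappa = (observed agreement - expected chance agreement) / (1 - expected).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal proportions per label.
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions from two independent researchers.
a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Here the raters agree on 5 of 6 records (83%), but after discounting the 50% agreement expected by chance, kappa is 0.67, conventionally read as substantial agreement.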
Finally, we conducted a sensitivity analysis to assess the robustness of our findings. This involved repeating the analysis by excluding studies with lower methodological quality to see if this would substantially change the conclusions.
Potential limitations of this methodology, such as publication bias and reliance on studies published in English, are explicitly acknowledged in the final report. We also discuss the implications of these limitations for the interpretation of the findings.
Through this systematic and rigorous approach, we produce a comprehensive and reliable synthesis of existing research on AI’s impact on journalism. The results of this review not only provide a clear picture of the current state of the art but also identify promising directions for future research and offer valuable insights to media practitioners and policymakers navigating this fast-changing journalism landscape.
To ensure transparency and reproducibility, the complete dataset used in this systematic review, including the data extraction forms and analysis scripts, is made publicly available through the Open Science Framework (OSF).

3. Results

The article selection process for this systematic review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Initial searches in the predefined databases (Web of Science, Scopus, JSTOR, and IEEE Xplore) yielded a total of 2478 potential articles. After removing duplicates, 1856 articles remained for review. The selection process was then carried out in two stages. The first stage was screening of titles and abstracts; of the 1856 articles, 412 passed to the next stage. The second stage was a full-text review; of those 412 articles, 127 met all inclusion criteria and were included in the final analysis.
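The selection funnel reported above can be written as a running tally, which makes the counts easy to audit at each stage:

```python
# PRISMA-style selection funnel using the counts reported in the text.
funnel = [
    ("records identified via database search", 2478),
    ("after duplicate removal", 1856),
    ("after title/abstract screening", 412),
    ("included after full-text review", 127),
]
for stage, count in funnel:
    print(f"{stage}: {count}")

# Sanity check: duplicates removed = 2478 - 1856.
duplicates_removed = funnel[0][1] - funnel[1][1]
print(f"duplicates removed: {duplicates_removed}")  # → 622
```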
The PRISMA flowchart in Figure 1 illustrates the article selection process:

3.1. Study Characteristics

Of the 127 articles analyzed, the distribution of study types is depicted in Table 1.
Table 1 shows that empirical studies are the most common, accounting for 61.4% of the total (78 studies). Conceptual analyses come second, representing 25.2% (32 studies), and case studies are the least frequent, comprising 13.4% (17 studies). This breakdown of the research methodologies used shows a clear preference for empirical approaches over conceptual or case-based studies.
The geographical distribution of studies in Figure 2 shows a predominance of research from North America and Europe, with increasing representation from other regions.

3.2. AI Adoption Trends in Journalism

A longitudinal analysis of the 127 articles (Figure 3) shows a significant increase in AI adoption in journalism since 2015, with an apparent acceleration since 2020.
The data in Figure 4 suggest that AI technologies are being embraced across different aspects of journalism, with particular emphasis on content creation and data analysis. Even the least adopted application, distribution optimization, is used by over half of the surveyed organizations. This indicates a significant integration of AI tools into modern journalism practice, especially in areas that can improve efficiency and content quality.

3.3. Impact on Journalistic Quality

Analysis of the 78 empirical studies reveals various impacts of AI on journalistic quality, as shown in Table 2 below:
In addition, results from the qualitative studies (n = 32) revealed the following themes:
  • Increased efficiency in routine news production.
  • Concerns about the “commoditization” of journalism.
  • Challenges in maintaining the “human touch” in news narratives.

3.4. Changes in the Role of Journalists

A thematic analysis of the 127 articles identified significant shifts in the role of journalists, summarized in Table 3:

3.5. Ethical and Regulatory Challenges

An analysis of the 127 articles identified several key ethical challenges, summarized in Table 4:
When it comes to regulation, 73% of studies highlighted the need for an updated regulatory framework to address the unique challenges posed by AI in journalism.

3.6. Innovation in News Narrative

An analysis of 32 case studies and 17 experimental studies revealed the innovation trends summarized in Table 5:

3.7. Global Perspective

A comparative analysis of the 127 articles, shown in Figure 5, revealed significant differences in AI adoption and impact:

3.8. Research Trends

Bibliometric and thematic analysis of the 127 articles identified the research trends shown in Figure 6. The key trends were as follows:
  • Human–AI Collaboration: an upward trend of 43% since 2020;
  • AI in Local Journalism: 38% growth in related studies;
  • AI to Fight Disinformation: 56% increase in associated studies since 2021;
  • Ethics of AI in Journalism: Consistently a top-five topic since 2018;
  • Cross-Disciplinary Research: 35% increase in collaboration between journalism and computer science departments.
The results of this study provide a comprehensive picture of the evolving landscape of AI in journalism, showing both the transformative opportunities and the significant challenges facing the industry. Data visualizations help to identify critical patterns and trends in AI adoption, its impact on journalistic practices and audiences, and future research directions.

3.9. Distribution Across Journals

The 127 articles analyzed were published across a range of journals. The top five journals by number of publications were as follows:
  • Digital Journalism (23 articles);
  • Journalism Practice (18 articles);
  • New Media & Society (15 articles);
  • Journalism Studies (12 articles);
  • Journalism (10 articles).
The remaining articles were distributed across 22 other journals in the fields of media studies, communication, and computer science.

4. Discussion

The results of this systematic review provide a rich and nuanced picture of the impact of Artificial Intelligence (AI) on journalism. The findings suggest that AI is fundamentally changing the journalistic landscape, affecting not only the way news is produced and distributed but also the nature of journalistic work itself and the relationship between media outlets and their audiences. In this analysis, we will explore the implications of the critical findings, relate them to the broader context of the digital transformation of media, and consider the consequences for the future of journalism.

4.1. Transformation of Journalistic Practices

The rapid adoption of AI in journalism, as indicated by the trends identified in this study, reflects a paradigm shift in the media industry. Surjatmodjo et al. (2024) reveal in their systematic review that a majority of news organizations now use AI for news-writing automation, data analysis, and content personalization. This indicates a rapid adoption of AI technologies across various aspects of news production.
The increasing use of AI for tasks such as automated news writing, data analysis, and content personalization heralds a new era of efficiency and scale in news production. However, as our findings reveal, this transformation has its challenges.
The increase in accuracy and speed in news production, especially for data-driven content such as financial and sports reports, shows the potential for AI to improve certain aspects of journalistic quality. This is in line with the argument put forward by Diakopoulos (2019), who highlights the potential of AI to free journalists from routine tasks, allowing them to focus on more in-depth and high-value reporting. However, the concerns reflected in Table 2, such as the 42% of studies that discussed the lack of nuance and context in AI-generated news, remind us of the importance of human judgment in journalism.
This finding raises important questions about the balance between efficiency and journalistic depth. While AI can increase productivity and expand news coverage, there is a real risk that this could result in journalism which is more superficial and less contextualized. This suggests the need for a more nuanced approach to the integration of AI in journalism, one that maintains core journalistic values such as accuracy, context, and a deep understanding of complex issues.

4.2. Evolution of the Journalist Role

The shifts in journalists’ roles identified in this study—including an increased focus on high-value tasks and the emergence of hybrid “journalist–programmer” roles—point to a significant transformation in the journalism profession. These findings are in line with predictions made by researchers such as Lewis and Westlund (2015), who anticipated the emergence of new forms of journalistic expertise in the digital age.
Surjatmodjo et al. (2024) found that many studies reported the emergence of hybrid “journalist–programmer” roles, indicating a shift in the skills required in modern newsrooms. The increased focus on investigative reporting and in-depth analysis reported in 67% of studies demonstrates the positive potential of AI adoption. This suggests that instead of replacing journalists, AI may enable them to perform more meaningful and impactful work.
However, the emergence of hybrid roles and the need for “AI literacy” highlighted in our research also underscore the challenges journalists face when adapting to the fast-changing media landscape. Dinçer (2024) reaffirms that journalists will always need to understand their own feelings and those of others, as this allows them to tell stories in a way that connects with people on an emotional level. This suggests that while AI can augment journalistic capabilities, it cannot entirely replace the human element.
These findings have important implications for journalism education and training. They point to the need for a curriculum that includes not only traditional journalistic skills but also an understanding of AI, data analysis, and programming. This is in line with the argument put forward by Broussard et al. (2019) on the importance of “computational thinking” in contemporary journalism.
However, this shift also raises questions about journalists’ professional identities. As the line between journalism and technology becomes increasingly blurred, how will this affect journalism’s core values and ethics? The journalistic community must further research and critically reflect on this area.

4.3. Ethical and Regulatory Challenges

The ethical challenges identified in Table 4 reflect the complexity of the moral landscape in the era of AI-powered journalism. To deepen our understanding of these challenges, it is crucial to address the fundamental ethical issues that underpin them.

4.3.1. Transparency and Accountability

The finding that 82% of studies highlighted algorithm-transparency issues indicates widespread concerns about the “black box” nature of algorithmic decision-making in journalism. This challenge is rooted in the ethical principle of transparency, which is fundamental to journalistic integrity and public trust.
This is in line with calls from scholars such as Diakopoulos (2015) for greater “algorithmic transparency” in journalism. This challenge is not only technical but also relates to core journalistic values such as accountability and public trust. How can news organizations ensure that their use of AI is in line with journalistic ethical standards and that they are accountable to the public?
Transparency in AI-powered journalism involves not just technical openness about algorithms but also clear communication about when and how AI is used in news production and distribution. This aligns with the ethical obligation of journalists to be accountable for their work and to allow the public to understand how news is created and disseminated (Diakopoulos 2019).
For example, the Associated Press has implemented a policy of clearly labeling automated content, with labels stating, for example, “This story was generated by Automated Insights using data from Zacks Investment Research.” This practice exemplifies how news organizations can maintain transparency in the age of AI (Marconi 2020).
Husnain et al. (2024) point out that the integration of Generative AI also raises greater ethical concerns. They identify several challenges for academic journalism instructors, including an epistemological problem in which AI technologies act not just as channels but as providers and recipients of information. This shift in AI’s role, from tool to content creator, raises fundamental questions about the nature of journalism itself.
Calvo Rubio and Rojas Torrijos (2024) emphasize the importance of strengthening ethical aspects and increasing stricter editorial control in the use of Generative AI. This aligns with findings from Al-Zoubi et al. (2024), who confirm that journalists are committed to social responsibility in their practices, despite facing challenges.

4.3.2. Privacy and Data Ethics

The concerns about data privacy expressed in 76% of the studies reflect a tension between AI’s potential to enhance personalized news experiences and individuals’ right to privacy. This issue is grounded in the ethical principle of respect for persons, which includes respecting individual autonomy and protecting personal information (Raab and Koops 2009).
In the context of AI-powered journalism, this principle extends to how news organizations collect, use, and protect user data. It raises questions about the ethical limits of data collection for news personalization and the responsibility of news organizations to protect their audiences’ privacy (Helberger et al. 2018).
A practical example of this challenge is The New York Times’ implementation of its “Project Feels”, which uses AI to gauge readers’ emotional responses to stories. While this project aims to enhance user engagement, it also raises concerns about the depth of personal data being collected and analyzed (Stray 2019).

4.3.3. Fairness and Non-Discrimination

The potential for AI systems to perpetuate or amplify biases is a critical ethical concern that stems from the fundamental principle of fairness. In journalism, this principle is closely tied to the ideal of objectivity and the responsibility to represent diverse perspectives fairly (Dörr and Hollnbuchner 2017).
AI systems trained on historical data may inadvertently perpetuate societal biases present in that data. For instance, if an AI system used for content recommendation is trained on historically biased news coverage, it may continue to promote stories that reinforce these biases, potentially exacerbating societal divisions (Zuiderveen Borgesius et al. 2016).
The Reuters Institute has highlighted this issue in its research on AI in journalism, noting that newsrooms must be vigilant in auditing their AI systems for potential biases and taking corrective action when necessary (Beckett 2019).

4.3.4. Professional Integrity and AI Literacy

The emergence of AI in journalism raises questions about professional integrity and the need for new forms of expertise. This relates to the ethical principle of competence, which requires journalists to maintain the knowledge and skills necessary to perform their roles effectively (Lewis and Westlund 2015).
Our finding that 38% of studies emphasized the need for “AI literacy” among journalists underscores this point. Journalists must not only understand how to use AI tools but also comprehend the limitations and potential ethical implications of these tools (Broussard et al. 2019).
For example, many news organizations have recognized the importance of AI literacy for their journalists, implementing training programs to ensure their staff can effectively use and critically evaluate AI tools in their work.

4.3.5. Regulatory Implications

The finding that 73% of studies highlighted the need for an updated regulatory framework suggests that existing legal frameworks may be insufficient to address the unique challenges posed by AI in journalism. This reflects the broader call from scholars such as Bodo et al. (2017) for a more proactive and nuanced regulatory approach to AI in media.
Regulatory considerations must balance the potential benefits of AI in journalism with the need to protect fundamental rights and maintain journalistic standards. For instance, the European Union’s proposed AI Act includes specific provisions for AI used in media contexts, recognizing the unique role of journalism in democratic societies (European Commission 2021).
Addressing these fundamental ethical issues is crucial for developing a comprehensive understanding of AI’s impact on journalism. By grounding our discussion in these basic ethical principles, we can move beyond surface-level concerns and develop more nuanced, practical approaches to integrating AI in journalism responsibly and ethically (Ziewitz 2016).
These ethical considerations should inform not only academic research but also industry practices and policy development. As AI continues to transform journalism, maintaining a robust ethical framework will be essential to preserving the integrity and societal value of the profession (Pavlik 2019).
These ethical challenges are not just technical or legal issues but also relate to journalism’s fundamental values and its role in a democratic society. They point to the need for an ongoing dialogue between journalists, technology developers, policymakers, and the public on how AI should be integrated into journalism in a way that upholds ethical principles and serves the public interest.

4.4. Innovation and the Future of News Narratives

Our findings on innovations in news narratives—including experiments with personalized narratives, immersive journalism, and interactive data visualization—demonstrate the transformative potential of AI in reshaping the way news is presented and consumed. This mirrors arguments put forward by researchers such as Pavlik (2019) about the potential of digital technologies to enable new forms of journalistic storytelling.
The experiments with personalized and interactive news narratives in Table 5, which were reported in 42% of the studies, indicate a shift towards news experiences that are more customized and engaging. This has the potential to increase audience engagement and understanding but also raises questions about the boundaries between journalism and other forms of media content.
The use of AI in immersive journalism, documented in 37% of studies, shows the potential to create more immersive and empathetic news experiences. However, as discussed by Sánchez Laws and Utne (2019), it also raises new ethical questions about emotional manipulation and the boundary between factual reporting and engineered experiences.
The use of AI to generate dynamic and interactive data visualizations, reported in 28% of studies, shows the potential to improve public understanding of complex issues. However, it also underscores the importance of data literacy among journalists and audiences.
These innovations show that AI is changing not only the way news is produced but also the nature of the journalistic product itself. This raises fundamental questions about what constitutes “news” in the digital age and how journalism can maintain its distinctive role in an increasingly diverse and interactive media landscape.

4.5. Global Perspectives and the Digital Divide

The significant differences in AI adoption across regions, as shown in Figure 5, highlight the importance of considering the global context in discussions about AI and journalism. Higher adoption rates in North America and Europe, compared to lower rates in Africa and Latin America, reflect the wider digital divide and inequalities in access to technology.
The distinct challenges faced by different regions, from ethical and transparency concerns in North America to infrastructure constraints in Africa, show that there is no “one-size-fits-all” approach to AI adoption in journalism. This is in line with arguments put forward by researchers such as Flew et al. (2012) on the importance of considering local and regional contexts in the study of media innovation.
These findings have important implications for the global discussion on AI and journalism. They point to the need for a more inclusive and diverse approach to the development and application of AI technologies in journalism, one that considers the specific needs and contexts of different regions and communities.
Furthermore, these differences raise questions about AI’s potential to widen or narrow the global information gap. Can AI be used to democratize news production and distribution in underserved regions, or will it further strengthen the dominance of established media centers?

4.6. Implications for Future Research

The research trends identified in Figure 6 indicate promising directions for future investigations. The increased interest in human–AI collaboration reflects a shift from a deterministic view of technology towards a more nuanced understanding of how humans and machines can work together in news production.
The growing focus on AI’s implications for local journalism and its use to counter disinformation indicate critical areas for further research. This is in line with Lewis et al.’s (2019) call for a more sociotechnical approach to the study of journalistic innovation.
The integration of Generative AI in journalism presents both opportunities and challenges. While it offers the potential for increased efficiency and new forms of storytelling, it also raises critical ethical questions and challenges traditional journalistic roles. As the field continues to evolve, there is a clear need for ongoing research, ethical guidelines, and education that prepares journalists to work effectively and responsibly with AI technologies.
Future research should focus on developing robust ethical frameworks for AI use in journalism, exploring the long-term implications of AI on journalistic roles and identities, and investigating how AI can be leveraged to address global information inequalities. Additionally, more work is needed to understand how AI can be used to enhance, rather than replace, human journalistic judgment and storytelling capabilities.

5. Conclusions

This systematic review reveals the profound transformation underway in journalism due to the adoption of AI. While AI offers significant opportunities to increase efficiency, scale, and audience engagement, it also poses grave challenges related to journalistic quality, ethics, and equitable access to information.
These findings emphasize the need for a more nuanced and critical approach to AI integration in journalism. This would include the following considerations:
  • Development of a robust ethical framework for the use of AI in journalism.
  • Investment in education and training to improve “AI literacy” among journalists.
  • Further research on the long-term impacts of AI on journalistic quality and public discourse.
  • Development of policies and regulations that can keep up with the rapid development of AI technologies.
  • Efforts to address the digital divide and ensure a more equitable adoption of AI globally.
By understanding and addressing these challenges, the journalism industry can harness AI’s potential while maintaining journalism’s core values and its critical role in a democratic society.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/journalmedia5040097/s1.

Author Contributions

Conceptualization, A.F.S. and H.H.; methodology, I.I. and R.L.; software, A.F.S.; validation, H.H., I.I. and R.L.; formal analysis, A.F.S.; investigation, H.H.; resources, I.I.; data curation, A.F.S.; writing—original draft preparation, H.H. and I.I.; writing—review and editing, A.F.S. and I.I.; visualization, I.I.; supervision, A.F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Al-Zoubi, Omar, Normahfuzah Ahmad, and Norsiah Abdul Hamid. 2024. Artificial intelligence in newsrooms: Ethical challenges facing journalists. Studies in Media and Communication 12: 401–9. [Google Scholar] [CrossRef]
  2. Amponsah, Peter N., and Atianashie Miracle Atianashie. 2024. Navigating the new frontier: A comprehensive review of ai in journalism. Advances in Journalism and Communication 12: 1–17. [Google Scholar] [CrossRef]
  3. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks. ProPublica. May 23. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed on 2 February 2024).
  4. Beckett, Charlie. 2019. New Powers, New Responsibilities. A Global Survey of Journalism and Artificial Intelligence. London: London School of Economics and Political Science. [Google Scholar]
  5. Birhane, Abeba. 2020. Algorithmic Colonization of Africa. SCRIPTed 17: 389. [Google Scholar] [CrossRef]
  6. Bodo, Balazs, Natali Helberger, Kristina Irion, Frederik Zuiderveen Borgesius, Judith Moller, Bob van de Velde, Nadine Bol, Bram van Es, and Claes de Vreese. 2017. Tackling the Algorithmic Control Crisis -the Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents. Yale Journal of Law and Technology 19: 133. Available online: http://hdl.handle.net/20.500.13051/7813 (accessed on 4 March 2024).
  7. Broussard, Meredith. 2015. Artificial Intelligence for Investigative Reporting. Digital Journalism 3: 814–31. [Google Scholar] [CrossRef]
  8. Broussard, Meredith, Nicholas Diakopoulos, Andrea L. Guzman, Rediet Abebe, Michel Dupagne, and Ching-Hua Chuan. 2019. Artificial Intelligence and Journalism. Journalism & Mass Communication Quarterly 96: 673–95. [Google Scholar] [CrossRef]
  9. Calvo Rubio, Luis-Mauricio, and José-Luis Rojas Torrijos. 2024. Criteria for journalistic quality in the use of artificial intelligence. Communication & Society 37: 247–59. [Google Scholar] [CrossRef]
  10. Carlson, Matt. 2015. The Robotic Reporter. Digital Journalism 3: 416–31. [Google Scholar] [CrossRef]
  11. CASP (Critical Appraisal Skills Programme). 2018. CASP Qualitative Checklist. [Online]. Available online: https://casp-uk.net/casp-tools-checklists/ (accessed on 1 October 2024).
  12. Creemers, Rogier. 2020. China’s conception of cyber sovereignty: Rhetoric and realization. In Governing Cyberspace: Behavior, Power, and Diplomacy. Edited by Dennis Broeders and Bibi Van Den Berg. Lanham: Rowman and Littlefield, pp. 107–42. [Google Scholar] [CrossRef]
  13. Diakopoulos, Nicholas. 2015. Algorithmic Accountability. Digital Journalism 3: 398–415. [Google Scholar] [CrossRef]
  14. Diakopoulos, Nicholas. 2019. Automating the News. Cambridge, MA: Harvard University Press. [Google Scholar] [CrossRef]
  15. Diakopoulos, Nicholas, and Deborah Johnson. 2021. Anticipating and Addressing the Ethical Implications of Deepfakes in the Context of Elections. New Media and Society 23: 2072–98. [Google Scholar] [CrossRef]
  16. Dinçer, Emre. 2024. Hard and soft skills revisited: Journalism education at the dawn of artificial intelligence. Journal of Asian Development Studies 11: 65–78. [Google Scholar] [CrossRef]
  17. Dörr, Konstantin Nicholas. 2016. Mapping the Field of Algorithmic Journalism. Digital Journalism 4: 700–22. [Google Scholar] [CrossRef]
  18. Dörr, Konstantin Nicholas, and Katharina Hollnbuchner. 2017. Ethical Challenges of Algorithmic Journalism. Digital Journalism 5: 404–19. [Google Scholar] [CrossRef]
  19. Eskens, Sarah, Natali Helberger, and Judith Moeller. 2017. Challenged by News Personalisation: Five Perspectives on the Right to Receive Information. Journal of Media Law 9: 259–84. [Google Scholar] [CrossRef]
  20. European Commission. 2021. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions Europe’s Media in the Digital Decade: An Action Plan to Support Recovery and Transformation. Brussels: European Commission. Available online: https://digital-strategy.ec.europa.eu/en/library/europes-media-digital-decade-action-plan-support-recovery-and-transformation (accessed on 1 October 2024).
  21. Flew, Terry, Christina Spurgeon, Anna Daniel, and Adam Swift. 2012. The Promise of Computational Journalism. Journalism Practice 6: 157–71. [Google Scholar] [CrossRef]
  22. Frey, Carl Benedikt, and Michael A. Osborne. 2017. The Future of Employment: How Susceptible Are Jobs to Computerization? Technological Forecasting & Social Change 114: 254–80. [Google Scholar] [CrossRef]
  23. Garrison, Bruce. 2020. Computer-Assisted Reporting. London: Routledge. [Google Scholar] [CrossRef]
  24. Graefe, Andreas. 2016. Guide to Automated Journalism. New York: Tow Center for Digital Journalism, Columbia University. [Google Scholar]
  25. Helberger, Natali. 2019. On the Democratic Role of News Recommenders. Digital Journalism 7: 993–1012. [Google Scholar] [CrossRef]
  26. Helberger, Natali, Kari Karppinen, and Lucia D’Acunto. 2018. Exposure Diversity as a Design Principle for Recommender Systems. Information, Communication & Society 21: 191–207. [Google Scholar] [CrossRef]
  27. Husnain, Muhammad, Ali Imran, and Hannan Khan Tareen. 2024. Artificial intelligence in journalism: Examining prospectus and obstacles for students in the domain of media. Journal of Asian Development Studies 13: 614–25. [Google Scholar] [CrossRef]
  28. Jones, Bronwyn, and Rhianne Jones. 2019. Public Service Chatbots: Automating Conversation with BBC News. Digital Journalism 7: 1032–53. [Google Scholar] [CrossRef]
  29. Kotenidis, Efthimis, and Andreas Veglis. 2021. Algorithmic journalism—Current applications and future perspectives. Journalism and Media 2: 244–57. [Google Scholar] [CrossRef]
  30. Lewis, Seth C., Amy Kristin Sanders, and Casey Carmody. 2019. Libel by algorithm? Automated journalism and the threat of legal liability. Journalism & Mass Communication Quarterly 96: 60–81. [Google Scholar] [CrossRef]
  31. Lewis, Seth C., and Oscar Westlund. 2015. Big Data and Journalism. Digital Journalism 3: 447–66. [Google Scholar] [CrossRef]
  32. Lindén, Carl-Gustav. 2017. Algorithms for Journalism: The Future of News Work. The Journal of Media Innovations 4: 60–76. [Google Scholar] [CrossRef]
  33. Livingstone, Sonia. 2018. Media Literacy: What Are the Challenges and How Can We Move towards a Solution? LSE Media Policy Project Blog. October 25. Available online: https://blogs.lse.ac.uk/medialse/2018/10/25/media-literacy-what-are-the-challenges-and-how-can-we-move-towards-a-solution/ (accessed on 7 April 2024).
  34. Marconi, Francesco. 2020. Newsmakers: Artificial Intelligence and the Future of Journalism. New York: Columbia University Press. Available online: https://books.google.co.id/books?id=sIRMxQEACAAJ (accessed on 10 April 2024).
  35. Moher, David, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, and The PRISMA Group. 2009. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine 6: e1000097. [Google Scholar] [CrossRef]
  36. Newman, Nic, Richard Fletcher, Anne Schulz, Simge Andı, Craig T. Robertson, and Rasmus Kleis Nielsen. 2021. Reuters Institute Digital News Report 2021. Oxford: Reuters Institute for the Study of Journalism. [Google Scholar]
  37. Noble, Safiya Umoja. 2018. Algorithms of Oppression. New York: NYU Press. [Google Scholar] [CrossRef]
  38. Orife, Iroro, Julia Kreutzer, Blessing Sibanda, Daniel Whitenack, Kathleen Siminyu, Laura Martinus, Jamiil Toure Ali, Jade Abbott, Vukosi Marivate, Salomon Kabongo, and et al. 2020. Masakhane—Machine Translation For Africa. arXiv arXiv:2003.11529. [Google Scholar]
  39. Pariser, Eli. 2012. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin Books. Available online: https://books.google.co.id/books?id=Qn2ZnjzCE3gC (accessed on 5 March 2024).
  40. Pavlik, John. 2000. The Impact of Technology on Journalism. Journalism Studies 1: 229–37. [Google Scholar] [CrossRef]
  41. Pavlik, John V. 2019. Journalism in the Age of Virtual Reality: How Experiential Media Are Transforming News. New York: Columbia University Press. [Google Scholar]
  42. Pavlik, John V. 2023. Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism & Mass Communication Educator 78: 84–93. [Google Scholar] [CrossRef]
  43. Raab, Charles, and Bert-Jaap Koops. 2009. Privacy actors, performances and the future of privacy protection. In Reinventing Data Protection? Dordrecht: Springer, pp. 207–21. [Google Scholar] [CrossRef]
  44. Salvo, Philip Di. 2020. Digital Whistleblowing Platforms in Journalism. Cham: Springer International Publishing. [Google Scholar] [CrossRef]
  45. Sánchez Laws, Ana Luisa, and Tormod Utne. 2019. Ethics Guidelines for Immersive Journalism. Frontiers in Robotics and AI 6: 28. [Google Scholar] [CrossRef]
  46. Schudson, Michael. 2008. Why Democracies Need an Unlovable Press. Cambridge: Polity. [Google Scholar]
  47. Sonni, Alem Febri, Vinanda Cinta Cendekia Putri, and Irwanto Irwanto. 2024. Bibliometric and Content Analysis of the Scientific Work on Artificial Intelligence in Journalism. Journalism and Media 5: 787–98. [Google Scholar] [CrossRef]
  48. Stray, Jonathan. 2019. Making Artificial Intelligence Work for Investigative Journalism. Digital Journalism 7: 1076–97. [Google Scholar] [CrossRef]
  49. Surjatmodjo, Dwi, Andi Alimuddin Unde, Hafied Cangara, and Alem Febri Sonni. 2024. Information Pandemic: A Critical Review of Disinformation Spread on Social Media and Its Implications for State Resilience. Social Sciences 13: 418. [Google Scholar] [CrossRef]
  50. Vaccari, Cristian, and Andrew Chadwick. 2020. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society 6: 205630512090340. [Google Scholar] [CrossRef]
  51. Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. The Spread of True and False News Online. Science 359: 1146–51. [Google Scholar] [CrossRef]
  52. Wells, George, D. O’Connell, Beverley J. Shea, Vivian Welch, Je Peterson, M. Losos, and Peter Tugwell. 2013. The Newcastle-Ottawa Scale (NOS) for Assessing the Quality of Nonrandomised Studies in Meta-Analyses. Ottawa: Ottawa Hospital Research Institute. [Google Scholar]
  53. Westerlund, Mika. 2019. The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review 9: 39–52. [Google Scholar] [CrossRef]
  54. Ziewitz, Malte. 2016. Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values 41: 3–16. [Google Scholar] [CrossRef]
  55. Zuiderveen Borgesius, Frederik J., Damian Trilling, Judith Möller, Balázs Bodó, Claes H. de Vreese, and Natali Helberger. 2016. Should We Worry about Filter Bubbles? Internet Policy Review 5: 1–16. [Google Scholar] [CrossRef]
Figure 1. PRISMA flowchart for the article selection process.
Figure 2. Geographic distribution of the studies.
Figure 3. AI adoption trends in journalism (2015–2023).
Figure 4. AI use areas in journalism.
Figure 5. AI adoption in journalism: a global perspective.
Figure 6. AI research trends in journalism.
Table 1. Study type distribution.
Type of Study | Total | Percentage
Empirical Study | 78 | 61.4%
Conceptual Analysis | 32 | 25.2%
Case Study | 17 | 13.4%
Table 2. AI’s impacts on journalistic quality.
Quality Aspect | Increase | No Change | Decrease
Accuracy | 58% | 32% | 10%
Speed | 76% | 22% | 2%
Depth of Analysis | 23% | 45% | 32%
Contextualization | 18% | 40% | 42%
Table 3. Changes in the Role of Journalists.
Role of Journalists | Total
Focus on High-Value Tasks | 67%
Emergence of Hybrid Roles | 52%
Need for AI Literacy | 38%
Table 4. Ethical challenges in AI adoption in journalism.
Ethical Challenges in AI Adoption | Total
Algorithmic Transparency | 82%
User Data Privacy | 76%
AI Content Accountability | 71%
Potential AI Bias | 68%
Diversity Implications | 59%
Table 5. Innovation in News Narrative.
Type of Innovation | Studies | Application Example
Personalized News Narrative | 42% | The New York Times’ “Project Feels”
Immersive Journalism (VR/AR) | 37% | BBC’s “Damming the Nile” VR experience
Interactive Data Visualization | 28% | The Guardian’s “Living Wage Calculator”
