
Review

Yapay Zekânın Siyasi, Etik ve Toplumsal Açıdan Dezenformasyon Tehdidi
(The Threat of Disinformation from the Political, Ethical and Social Perspective of Artificial Intelligence)

Year 2023, Issue 11 - Theme: Disinformation, pp. 247-266, 16.12.2023
https://doi.org/10.54722/iletisimvediplomasi.1358267

Abstract

The use of machine learning in information processing has led to a rapid increase in AI-generated disinformation content that manipulates the public sphere. The political, ethical, and social consequences of disinformation produced with artificial intelligence techniques have obliged social media providers to protect their users, and states to protect their societies, from disinformation. Problems such as online harassment, threats to press freedom, human rights violations, and ethical concerns have compounded the existing disinformation problem. Individual and state-sponsored disinformation efforts have become increasingly widespread across the social system, drawing on artificial intelligence systems to distort and delegitimize real news, silence critics, and manipulate public opinion. In this context, the research focuses on analysing the dynamics of disinformation and the role artificial intelligence plays in it. The study employs the literature review method: a comprehensive review of the literature on the concepts of disinformation and artificial intelligence was conducted. Building on the current effects of AI-supported disinformation, a general assessment was made, with the aim of identifying the political, ethical, and social consequences of disinformation content produced with artificial intelligence techniques.

Ethics Statement

The research does not require an ethics statement.

Supporting Institution

There is no supporting institution.

Acknowledgements

I would like to thank all my professors who contributed to the journal.

References

  • Akers, L., & Gordon, J. S. (2018). Using Facebook for large-scale online randomized clinical trial recruitment: effective advertising strategies. Journal of Medical Internet Research, 20(11), e290.
  • Alvares, J., & Salzman-Mitchell, P. (2019). The succession myth and the rebellious AI creation: Classical narratives in the 2015 film Ex Machina. Arethusa, 52(2), 181-202.
  • Berger, J., & Milkman, K. L. (2013). Emotion and virality: what makes online content go viral? NIM Marketing Intelligence Review, 5(1), 18.
  • Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy, 3, e32.
  • Boshmaf, Y., Muslukhov, I., Beznosov, K., & Ripeanu, M. (2011). The socialbot network: When bots socialize for fame and money. In Proceedings of the 27th Annual Computer Security Applications Conference (pp. 93-102).
  • Center for Security and Emerging Technology. (2021). AI and the future of disinformation campaigns (CSET Policy Brief).
  • Bukovská, B. (2020). The European Commission’s Code of conduct for countering illegal hate speech online. Algorithms.
  • Candi, M. R. (2018). Social strategy to gain knowledge for innovation. British Journal of Management, 29(4), 731-749.
  • Global Engagement Center. (2020). Pillars of Russia's disinformation and propaganda ecosystem. US Department of State.
  • Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753.
  • Collins, A., & Ebrahimi, T. (2021). Risk governance and the rise of deepfakes.
  • de Lima Salge, C. A., & Berente, N. (2018). Is that social bot behaving unethically? Communications of the ACM, 60(9), 29-31.
  • DeSteno, D., Petty, R. E., Rucker, D. D., Wegener, D. T., & Braverman, J. (2004). Discrete emotions and persuasion: The role of emotion-induced expectancies. Journal of Personality and Social Psychology, 86(1), 43.
  • Dhir, A., Yossatorn, Y., Kaur, P., & Chen, S. (2018). Online social media fatigue and psychological wellbeing—A study of compulsive use, fear of missing out, fatigue, anxiety and depression. International Journal of Information Management, 40, 141-152.
  • Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13-29.
  • Feldstein, S. (2019). The global expansion of AI surveillance. Washington: Carnegie Endowment for International Peace.
  • Gollwitzer, A., Martel, C., Brady, W. J., Pärnamets, P., Freedman, I. G., Knowles, E. D., & Van Bavel, J. J. (2020). Partisan differences in physical distancing are linked to health outcomes during the COVID-19 pandemic. Nature Human Behaviour, 4(11), 1186-1197.
  • Hameleers, M., Humprecht, E., Möller, J., & Lühring, J. (2023). Degrees of deception: The effects of different types of COVID-19 misinformation and the effectiveness of corrective information in crisis times. Information, Communication & Society, 26(9), 1699-1715.
  • Hareli, S., & Hess, U. (2012). The social signal value of emotions. Cognition & Emotion, 26(3), 385-389.
  • Howard, P. N., & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU referendum.
  • Ireton, C., & Posetti, J. (2018). Journalism, fake news & disinformation: Handbook for journalism education and training. Paris: UNESCO Publishing.
  • Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56-59.
  • Ivakhiv, O. (2016). Information state of system estimation. International Journal of Computing, 15(1), 31-39.
  • Jackson, P. C. (2019). Introduction to artificial intelligence. Courier Dover Publications.
  • Jacobs, G., Caraça, J., Fiorini, R., Hoedl, E., Nagan, W. P., Reuter, T., & Zucconi, A. (2018). The future of democracy: Challenges & prospects. Cadmus, 3(4), 7-31.
  • Jang, H., Rempel, E., Roth, D., Carenini, G., & Janjua, N. Z. (2021). Tracking COVID-19 discourse on Twitter in North America: Infodemiology study using topic modeling and aspect-based sentiment analysis. Journal of Medical Internet Research, 23(2), 25-31.
  • Kertysova, K. (2018). Artificial intelligence and disinformation: How AI changes the way disinformation is produced, disseminated, and can be countered. Security and Human Rights, 29(1-4), 55-81.
  • Kopf, R. K., Nimmo, D. G., Ritchie, E. G., & Martin, J. K. (2019). Science communication in a post-truth world. Frontiers in Ecology and the Environment, 17(6), 310-312.
  • Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104-117.
  • Kudugunta, S., & Ferrara, E. (2018). Deep neural networks for bot detection. Information Sciences, 467, 312-322.
  • Malik, D. P., & Dhiman, D. B. (2022). Science communication in India: Current trends and future vision. Journal of Media & Management, 4(5), 1-4.
  • Mantelero, A. (2018). AI and Big Data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754-772.
  • Marsden, C., Meyer, T., & Brown, I. (2020). Platform values and democratic elections: How can the law regulate digital disinformation? Computer Law & Security Review, 36, 105373.
  • Metzler, H., Pellert, M., & Garcia, D. (2022). Using social media data to capture emotions before and during COVID-19.
  • Miranda, S. M., & Yetgin, E. (2016). Are social media emancipatory or hegemonic? Societal effects of mass media digitization in the case of the SOPA discourse. MIS Quarterly, 40(2), 303-330.
  • Mork, A., Hale, J. A., & T., R. (2020). Fake for real: a history of forgery and falsification.
  • Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. American Political Science Review, 115(3), 999-1015.
  • Pierson, A. E., Brady, C. E., Clark, D. B., & Sengupta, P. (2023). Students’ epistemic commitments in a heterogeneity-seeking modeling curriculum. Cognition and Instruction, 41(2), 125-157.
  • Rathje, S., Robertson, C., Brady, W. J., & Van Bavel, J. J. (2022). People think that social media platforms do (but should not) amplify divisive content.
  • Richter, A. (2019). Accountability and media literacy mechanisms as a counteraction to disinformation in Europe. Journal of Digital Media & Policy, 10(3), 311-327.
  • Roy, M., Moreau, N., Rousseau, C., Mercier, A., Wilson, A., & Atlani-Duault, L. (2020). Ebola and localized blame on social media: Analysis of Twitter and Facebook conversations during the 2014–2015 Ebola epidemic. Culture, Medicine, and Psychiatry, 44, 56-79.
  • Russell, S. J., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. London: Pearson Education Limited.
  • Salzman, J., & Ruhl, J. B. (2019). Environmental Law. Currencies and the commodification of environmental law (pp. 3-90).
  • Satter, R. (2019). Social media timeout as French election reaches final stage.
  • Simonite, T. (2019). Are You For Real? Wired, 27(7), 24-25.
  • Storozuk, A., Ashley, M., Delage, V., & Maloney, E. A. (2020). Got bots? Practical recommendations to protect online survey data from bot attacks. The Quantitative Methods for Psychology, 16(5), 472-481.
  • Stupp, C. (2019). Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case. The Wall Street Journal, August 30, 2019.
  • Vincent, V. U. (2021). Integrating intuition and artificial intelligence in organizational decision-making. Business Horizons, 64(4), 425-438.
  • Vizoso, Á., Vaz-Álvarez, M., & López-García, X. (2021). Fighting deepfakes: Media and internet giants’ converging and diverging strategies against Hi-Tech misinformation. Media and Communication, 9(1), 291-300.
  • Walorska, A. M. (2020). The algorithmic society. In Redesigning organizations: Concepts for the connected society (pp. 149-160).
  • Weeks, B. E., & Garrett, R. K. (2019). Emotional characteristics of social media and political misperceptions. Journalism and truth in an age of social media, 236-250.
  • Wischnewski, M., Bernemann, R., Ngo, T., & Krämer, N. (2021). Disagree? You must be a bot! How beliefs shape Twitter profile perceptions. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-11).
  • Young, S. D., Crowley, J. S., & Vermund, S. H. (2021). Artificial intelligence and sexual health in the USA. The Lancet Digital Health, 3(8), 467-468.
  • Zakharov, E., Shysheya, A., Burkov, E., & Lempitsky, V. (2019). Few-shot adversarial learning of realistic neural talking head models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 9459-9468).
  • Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems, 32.


Details

Primary Language: Turkish
Subjects: New Communication Technologies
Section: Reviews
Authors

Kılıç Köçeri (ORCID: 0000-0003-1687-3001)

Early View Date: 16 December 2023
Publication Date: 16 December 2023
Submission Date: 11 September 2023
Published in Issue: Year 2023, Issue 11 - Theme: Disinformation

How to Cite

APA: Köçeri, K. (2023). Yapay Zekânın Siyasi, Etik ve Toplumsal Açıdan Dezenformasyon Tehdidi. İletişim ve Diplomasi, (11), 247-266. https://doi.org/10.54722/iletisimvediplomasi.1358267