Abstract
As language technologies have become more sophisticated and prevalent, there have been increasing concerns about bias in natural language processing (NLP). Such work often focuses on the effects of bias rather than its sources. In contrast, this paper discusses how normative language assumptions and ideologies influence a range of automated language tools. These underlying assumptions can inform (a) grammar and tone suggestions provided by commercial products, (b) language varieties (e.g., dialects and other norms) taught by language learning technologies, and (c) language patterns used by chatbots and similar applications to interact with users. These tools demonstrate considerable technological advancement but are rarely interrogated with regard to the language ideologies they intentionally or implicitly reinforce. We consider prior research on language ideologies and how they may impact, at scale, the large language models (LLMs) that underlie many automated language technologies. Specifically, this paper draws on established theoretical frameworks for understanding how humans typically perceive or judge language varieties and patterns that differ from their own or from a perceived standard. We then discuss how language ideologies can perpetuate social hierarchies and stereotypes, even within seemingly impartial automation. In doing so, we contribute to the emerging literature on how the risks of language ideologies and assumptions can be better understood and mitigated in the design, testing, and implementation of automated language technologies.
Acknowledgments
This work was funded in part by a grant from the Gates Foundation (INV-006213), by AERDF/EF+Math grant "Making learning visible: scalable, multisystem detection of self-regulation related to EF", and by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A180261 to Arizona State University. Opinions, findings, conclusions, or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of the funding sources.
Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Goldshtein, M., Ocumpaugh, J., Potter, A., Roscoe, R.D. (2024). The Social Consequences of Language Technologies and Their Underlying Language Ideologies. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. HCII 2024. Lecture Notes in Computer Science, vol 14696. Springer, Cham. https://doi.org/10.1007/978-3-031-60875-9_18
DOI: https://doi.org/10.1007/978-3-031-60875-9_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-60874-2
Online ISBN: 978-3-031-60875-9
eBook Packages: Computer Science, Computer Science (R0)