DOI: 10.1145/3677525.3678694
Research article · Free access

MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions

Published: 04 September 2024

Abstract

Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues. Controversial topics, including vaccination, abortion, racism, and sexual orientation, often elicit opinions and attitudes that are not solely based on evidence but rather reflect moral worldviews. Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content. Building on the Moral Foundations Theory (MFT), this paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse. We describe a framework for both aggregated and domain-adversarial training on multiple heterogeneous MFT human-annotated datasets sourced from Twitter (now X), Reddit, and Facebook, which broaden textual content diversity in terms of social media audience interests, content presentation and style, and spreading patterns. We show that the proposed framework achieves an average F1 score that is between 11% and 32% higher than lexicon-based approaches, Word2Vec embeddings, and zero-shot classification with large language models such as GPT-4 for in-domain inference. Domain-adversarial training yields better out-of-domain predictions than aggregated training while achieving comparable performance to zero-shot learning. Our approach contributes to annotation-free and effective morality learning, and provides useful insights towards a more comprehensive understanding of moral narratives in controversial social debates using NLP.
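The domain-adversarial training mentioned above is typically implemented with a gradient reversal layer (Ganin and Lempitsky, 2015): the layer is the identity in the forward pass but negates gradients in the backward pass, so the shared text encoder is pushed towards features that a domain classifier cannot separate while a moral-label classifier still uses them. The NumPy sketch below only illustrates that sign-flip mechanic; the function names and the `lam` scaling factor are illustrative assumptions, not code from the MoralBERT release.

```python
import numpy as np

# Illustrative sketch of a gradient reversal layer, the core trick behind
# domain-adversarial training. All names here are assumptions for
# illustration, not taken from the MoralBERT implementation.

def grad_reverse_forward(x):
    # Forward pass: identity, so features reach the domain classifier unchanged.
    return x

def grad_reverse_backward(grad_output, lam=1.0):
    # Backward pass: flip the sign (scaled by lam) of the gradient flowing
    # from the domain classifier back into the shared encoder, which nudges
    # the encoder towards domain-invariant representations.
    return -lam * np.asarray(grad_output)

features = np.array([0.5, -1.2, 3.0])
upstream = np.array([0.1, 0.2, -0.3])   # gradient arriving from the domain head
reversed_grad = grad_reverse_backward(upstream, lam=0.5)
```

In a full model, the moral-label head receives ordinary gradients while the domain head sees the encoder only through this layer, so a single backward pass optimises both objectives in opposite directions with respect to domain information.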


Published In

GoodIT '24: Proceedings of the 2024 International Conference on Information Technology for Social Good
September 2024
481 pages
ISBN: 9798400710940
DOI: 10.1145/3677525
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Language use
  2. Moral values
  3. Social Media

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

GoodIT '24
