Korin Richmond
Person information
- affiliation: University of Edinburgh, Scotland, UK
2020 – today
- 2025
  - [j13] Aidan Pine, Erica Cooper, David Guzmán, Eric Joanis, Anna Kazantseva, Ross Krekoski, Roland Kuhn, Samuel Larkin, Patrick Littell, Delaney Lothian, Akwiratékha' Martin, Korin Richmond, Marc Tessier, Cassia Valentini-Botinhao, Dan Wells, Junichi Yamagishi: Speech Generation for Indigenous Language Education. Comput. Speech Lang. 90: 101723 (2025)
- 2024
  - [j12] Cheng Gong, Xin Wang, Erica Cooper, Dan Wells, Longbiao Wang, Jianwu Dang, Korin Richmond, Junichi Yamagishi: ZMM-TTS: Zero-Shot Multilingual and Multispeaker Speech Synthesis Conditioned on Self-Supervised Discrete Speech Representations. IEEE ACM Trans. Audio Speech Lang. Process. 32: 4036-4051 (2024)
  - [i20] Cheng Gong, Erica Cooper, Xin Wang, Chunyu Qiang, Mengzhe Geng, Dan Wells, Longbiao Wang, Jianwu Dang, Marc Tessier, Aidan Pine, Korin Richmond, Junichi Yamagishi: An Initial Investigation of Language Adaptation for TTS Systems under Low-resource Scenarios. CoRR abs/2406.08911 (2024)
  - [i19] Jinzuomu Zhong, Korin Richmond, Zhiba Su, Siqi Sun: AccentBox: Towards High-Fidelity Zero-Shot Accent Generation. CoRR abs/2409.09098 (2024)
  - [i18] Siqi Sun, Korin Richmond: Acquiring Pronunciation Knowledge from Transcribed Speech Audio via Multi-task Learning. CoRR abs/2409.09891 (2024)
  - [i17] Zhichen Han, Tianqi Geng, Hui Feng, Jiahong Yuan, Korin Richmond, Yuanchao Li: Cross-lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models. CoRR abs/2409.16920 (2024)
  - [i16] Yujia Sun, Zeyu Zhao, Korin Richmond, Yuanchao Li: Revisiting Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations. CoRR abs/2409.17899 (2024)
- 2023
  - [j11] Siqi Sun, Korin Richmond, Hao Tang: Improving Seq2Seq TTS Frontends With Transcribed Speech Audio. IEEE ACM Trans. Audio Speech Lang. Process. 31: 1940-1952 (2023)
  - [c73] Nicholas Sanders, Korin Richmond: Invert-Classify: Recovering Discrete Prosody Inputs for Text-To-Speech. ASRU 2023: 1-7
  - [c72] Rachel Beeson, Korin Richmond: Silent Speech Recognition with Articulator Positions Estimated from Tongue Ultrasound and Lip Video. INTERSPEECH 2023: 1149-1153
  - [c71] Dan Wells, Korin Richmond, William Lamb: A Low-Resource Pipeline for Text-to-Speech from Found Data With Application to Scottish Gaelic. INTERSPEECH 2023: 4324-4328
  - [c70] Nicholas Sanders, Korin Richmond: Recovering Discrete Prosody Inputs via Invert-Classify. SSW 2023: 244-245
  - [i15] Cheng Gong, Xin Wang, Erica Cooper, Dan Wells, Longbiao Wang, Jianwu Dang, Korin Richmond, Junichi Yamagishi: ZMM-TTS: Zero-shot Multilingual and Multispeaker Speech Synthesis Conditioned on Self-supervised Discrete Speech Representations. CoRR abs/2312.14398 (2023)
- 2022
  - [c69] Aidan Pine, Dan Wells, Nathan Thanyehténhas Brinklow, Patrick Littell, Korin Richmond: Requirements and Motivations of Low-Resource Speech Synthesis for Language Revitalization. ACL (1) 2022: 7346-7359
  - [c68] Cassia Valentini-Botinhao, Manuel Sam Ribeiro, Oliver Watts, Korin Richmond, Gustav Eje Henter: Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks. INTERSPEECH 2022: 471-475
  - [c67] Dan Wells, Hao Tang, Korin Richmond: Phonetic Analysis of Self-supervised Representations of English Speech. INTERSPEECH 2022: 3583-3587
  - [c66] Emelie Van De Vreken, Korin Richmond, Catherine Lai: Voice Puppetry with FastPitch. INTERSPEECH 2022: 5219-5220
  - [i14] Cassia Valentini-Botinhao, Manuel Sam Ribeiro, Oliver Watts, Korin Richmond, Gustav Eje Henter: Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks. CoRR abs/2209.11003 (2022)
- 2021
  - [j10] Manuel Sam Ribeiro, Joanne Cleland, Aciel Eshky, Korin Richmond, Steve Renals: Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors. Speech Commun. 128: 24-34 (2021)
  - [j9] Aciel Eshky, Joanne Cleland, Manuel Sam Ribeiro, Eleanor Sugden, Korin Richmond, Steve Renals: Automatic audiovisual synchronisation for ultrasound tongue imaging. Speech Commun. 132: 83-95 (2021)
  - [c65] Jing-Xuan Zhang, Korin Richmond, Zhen-Hua Ling, Lirong Dai: TaLNet: Voice Reconstruction from Tongue and Lip Articulation with Transfer Learning from Text-to-Speech Synthesis. AAAI 2021: 14402-14410
  - [c64] Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals: Silent versus Modal Multi-Speaker Speech Recognition from Ultrasound and Video. Interspeech 2021: 641-645
  - [c63] Jason Taylor, Korin Richmond: Confidence Intervals for ASR-Based TTS Evaluation. Interspeech 2021: 2791-2795
  - [c62] Manuel Sam Ribeiro, Jennifer Sanger, Jing-Xuan Zhang, Aciel Eshky, Alan Wrench, Korin Richmond, Steve Renals: TaL: A Synchronised Multi-Speaker Corpus of Ultrasound Tongue Imaging, Audio, and Lip Videos. SLT 2021: 1109-1116
  - [c61] Dan Wells, Korin Richmond: Cross-lingual Transfer of Phonological Features for Low-resource Speech Synthesis. SSW 2021: 160-165
  - [c60] Jason Taylor, Sébastien Le Maguer, Korin Richmond: Liaison and Pronunciation Learning in End-to-End Text-to-Speech in French. SSW 2021: 195-199
  - [i13] Manuel Sam Ribeiro, Joanne Cleland, Aciel Eshky, Korin Richmond, Steve Renals: Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors. CoRR abs/2103.00324 (2021)
  - [i12] Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals: Silent versus modal multi-speaker speech recognition from ultrasound and video. CoRR abs/2103.00333 (2021)
  - [i11] Aciel Eshky, Joanne Cleland, Manuel Sam Ribeiro, Eleanor Sugden, Korin Richmond, Steve Renals: Automatic audiovisual synchronisation for ultrasound tongue imaging. CoRR abs/2105.15162 (2021)
- 2020
  - [c59] Jason Taylor, Korin Richmond: Enhancing Sequence-to-Sequence Text-to-Speech with Morphology. INTERSPEECH 2020: 1738-1742
  - [c58] Kouichi Katsurada, Korin Richmond: Speaker-Independent Mel-Cepstrum Estimation from Articulator Movements Using D-Vector Input. INTERSPEECH 2020: 3176-3180
  - [i10] Manuel Sam Ribeiro, Jennifer Sanger, Jing-Xuan Zhang, Aciel Eshky, Alan Wrench, Korin Richmond, Steve Renals: TaL: a synchronised multi-speaker corpus of ultrasound tongue imaging, audio, and lip videos. CoRR abs/2011.09804 (2020)
2010 – 2019
- 2019
  - [c57] Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals: Speaker-independent Classification of Phonetic Segments from Raw Ultrasound in Child Speech. ICASSP 2019: 1328-1332
  - [c56] Cheng-I Lai, Alberto Abad, Korin Richmond, Junichi Yamagishi, Najim Dehak, Simon King: Attentive Filtering Networks for Audio Replay Attack Detection. ICASSP 2019: 6316-6320
  - [c55] Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals: Ultrasound Tongue Imaging for Diarization and Alignment of Child Speech Therapy Sessions. INTERSPEECH 2019: 16-20
  - [c54] Jason Taylor, Korin Richmond: Analysis of Pronunciation Learning in End-to-End Speech Synthesis. INTERSPEECH 2019: 2070-2074
  - [c53] Aciel Eshky, Manuel Sam Ribeiro, Korin Richmond, Steve Renals: Synchronising Audio and Ultrasound by Learning Cross-Modal Embeddings. INTERSPEECH 2019: 4100-4104
  - [c52] Jason Fong, Jason Taylor, Korin Richmond, Simon King: A Comparison of Letters and Phones as Input to Sequence-to-Sequence Models for Speech Synthesis. SSW 2019: 223-227
  - [i9] Aciel Eshky, Manuel Sam Ribeiro, Korin Richmond, Steve Renals: Synchronising audio and ultrasound by learning cross-modal embeddings. CoRR abs/1907.00758 (2019)
  - [i8] Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals: Ultrasound tongue imaging for diarization and alignment of child speech therapy sessions. CoRR abs/1907.00818 (2019)
  - [i7] Aciel Eshky, Manuel Sam Ribeiro, Joanne Cleland, Korin Richmond, Zoe Roxburgh, James M. Scobbie, Alan Wrench: UltraSuite: A Repository of Ultrasound and Acoustic Data from Child Speech Therapy Sessions. CoRR abs/1907.00835 (2019)
  - [i6] Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals: Speaker-independent classification of phonetic segments from raw ultrasound in child speech. CoRR abs/1907.01413 (2019)
- 2018
  - [j8] Alexander Hewer, Stefanie Wuhrer, Ingmar Steiner, Korin Richmond: A multilinear tongue model derived from speech related MRI data of the human vocal tract. Comput. Speech Lang. 51: 68-92 (2018)
  - [c51] Jie Zhang, Korin Richmond, Robert B. Fisher: Dual-modality Talking-metrics: 3D Visual-Audio Integrated Behaviometric Cues from Speakers. ICPR 2018: 3144-3149
  - [c50] Aciel Eshky, Manuel Sam Ribeiro, Joanne Cleland, Korin Richmond, Zoe Roxburgh, James M. Scobbie, Alan Wrench: UltraSuite: A Repository of Ultrasound and Acoustic Data from Child Speech Therapy Sessions. INTERSPEECH 2018: 1888-1892
  - [i5] Cheng-I Lai, Alberto Abad, Korin Richmond, Junichi Yamagishi, Najim Dehak, Simon King: Attentive Filtering Networks for Audio Replay Attack Detection. CoRR abs/1810.13048 (2018)
- 2016
  - [c49] Korin Richmond, Simon King: Smooth talking: Articulatory join costs for unit selection. ICASSP 2016: 5150-5154
  - [c48] Rasmus Dall, Sandrine Brognaux, Korin Richmond, Cassia Valentini-Botinhao, Gustav Eje Henter, Julia Hirschberg, Junichi Yamagishi, Simon King: Testing the consistency assumption: Pronunciation variant forced alignment in read and spontaneous speech synthesis. ICASSP 2016: 5155-5159
  - [c47] Qiong Hu, Junichi Yamagishi, Korin Richmond, Kartick Subramanian, Yannis Stylianou: Initial investigation of speech synthesis based on complex-valued neural networks. ICASSP 2016: 5630-5634
  - [p1] Alexander Hewer, Stefanie Wuhrer, Ingmar Steiner, Korin Richmond: Tongue Mesh Extraction from 3D MRI Data of the Human Vocal Tract. Perspectives in Shape Analysis 2016: 345-365
  - [i4] Alexander Hewer, Ingmar Steiner, Timo Bolkart, Stefanie Wuhrer, Korin Richmond: A statistical shape space model of the palate surface trained on 3D MRI scans of the vocal tract. CoRR abs/1602.07679 (2016)
  - [i3] Alexander Hewer, Stefanie Wuhrer, Ingmar Steiner, Korin Richmond: A Multilinear Tongue Model Derived from Speech Related MRI Data of the Human Vocal Tract. CoRR abs/1612.05005 (2016)
- 2015
  - [c46] Qiong Hu, Yannis Stylianou, Ranniery Maia, Korin Richmond, Junichi Yamagishi: Methods for applying dynamic sinusoidal models to statistical parametric speech synthesis. ICASSP 2015: 4889-4893
  - [c45] Alexander Hewer, Ingmar Steiner, Timo Bolkart, Stefanie Wuhrer, Korin Richmond: A statistical shape space model of the palate surface trained on 3D MRI scans of the vocal tract. ICPhS 2015
  - [c44] Qiong Hu, Zhizheng Wu, Korin Richmond, Junichi Yamagishi, Yannis Stylianou, Ranniery Maia: Fusion of multiple parameterisations for DNN-based sinusoidal speech synthesis with multi-task learning. INTERSPEECH 2015: 854-858
- 2014
  - [j7] João P. Cabral, Korin Richmond, Junichi Yamagishi, Steve Renals: Glottal Spectral Separation for Speech Synthesis. IEEE J. Sel. Top. Signal Process. 8(2): 195-208 (2014)
  - [c43] Qiong Hu, Yannis Stylianou, Korin Richmond, Ranniery Maia, Junichi Yamagishi, Javier Latorre: A fixed dimension and perceptually based dynamic sinusoidal model of speech. ICASSP 2014: 6270-6274
  - [c42] Qiong Hu, Yannis Stylianou, Ranniery Maia, Korin Richmond, Junichi Yamagishi, Javier Latorre: An investigation of the application of dynamic sinusoidal models to statistical parametric speech synthesis. INTERSPEECH 2014: 780-784
- 2013
  - [j6] Christian Geng, Alice Turk, James M. Scobbie, Cedric Macmartin, Philip Hoole, Korin Richmond, Alan Wrench, Marianne Pouplier, Ellen Gurman Bard, Ziggy Campbell, Catherine Dickie, Eddie Dubourg, William J. Hardcastle, Evia Kainada, Simon King, Robin J. Lickley, Satsuki Nakai, Steve Renals, Kevin White, Ronny Wiegand: Recording speech articulation in dialogue: Evaluating a synchronized double electromagnetic articulography setup. J. Phonetics 41(6): 421-431 (2013)
  - [j5] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi: Articulatory Control of HMM-Based Parametric Speech Synthesis Using Feature-Space-Switched Multiple Regression. IEEE Trans. Speech Audio Process. 21(1): 205-217 (2013)
  - [c41] Ingmar Steiner, Korin Richmond, Slim Ouni: Speech animation using electromagnetic articulography as motion capture data. AVSP 2013: 55-60
  - [c40] James M. Scobbie, Alice Turk, Christian Geng, Simon King, Robin J. Lickley, Korin Richmond: The Edinburgh Speech Production Facility DoubleTalk corpus. INTERSPEECH 2013: 764-766
  - [c39] Korin Richmond, Zhen-Hua Ling, Junichi Yamagishi, Benigno Uria: On the evaluation of inversion mapping performance in the acoustic domain. INTERSPEECH 2013: 1012-1016
  - [c38] Qiong Hu, Korin Richmond, Junichi Yamagishi, Javier Latorre: An experimental comparison of multiple vocoder types. SSW 2013: 135-140
  - [c37] Maria Astrinaki, Alexis Moinet, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Thierry Dutoit: Mage - reactive articulatory feature control of HMM-based parametric speech synthesis. SSW 2013: 207-211
  - [c36] Maria Astrinaki, Alexis Moinet, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Thierry Dutoit: Mage - HMM-based speech synthesis reactively controlled by the articulators. SSW 2013: 243
  - [i2] Ingmar Steiner, Korin Richmond, Slim Ouni: Speech animation using electromagnetic articulography as motion capture data. CoRR abs/1310.8585 (2013)
- 2012
  - [c35] Ingmar Steiner, Korin Richmond, Slim Ouni: Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis. FAA 2012: 2:1
  - [c34] Korin Richmond, Steve Renals: Ultrax: An Animated Midsagittal Vocal Tract Display for Speech Therapy. INTERSPEECH 2012: 74-77
  - [c33] Benigno Uria, Iain Murray, Steve Renals, Korin Richmond: Deep Architectures for Articulatory Inversion. INTERSPEECH 2012: 867-870
  - [c32] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi: Vowel Creation by Articulatory Control in HMM-based Parametric Speech Synthesis. INTERSPEECH 2012: 991-994
  - [i1] Ingmar Steiner, Korin Richmond, Slim Ouni: Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis. CoRR abs/1209.4982 (2012)
- 2011
  - [c31] João P. Cabral, Steve Renals, Junichi Yamagishi, Korin Richmond: HMM-based speech synthesiser using the LF-model of the glottal source. ICASSP 2011: 4704-4707
  - [c30] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi: Feature-Space Transform Tying in Unified Acoustic-Articulatory Modelling for Articulatory Control of HMM-Based Speech Synthesis. INTERSPEECH 2011: 117-120
  - [c29] Korin Richmond, Phil Hoole, Simon King: Announcing the Electromagnetic Articulography (Day 1) Subset of the mngu0 Articulatory Corpus. INTERSPEECH 2011: 1505-1508
  - [c28] Ming Lei, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Li-Rong Dai: Formant-Controlled HMM-Based Speech Synthesis. INTERSPEECH 2011: 2777-2780
- 2010
  - [j4] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi: An Analysis of HMM-based prediction of articulatory movements. Speech Commun. 52(10): 834-846 (2010)
  - [c27] Gregor Hofer, Korin Richmond: Comparison of HMM and TMDN methods for lip synchronisation. INTERSPEECH 2010: 454-457
  - [c26] Korin Richmond, Robert A. J. Clark, Susan Fitt: On generating Combilex pronunciations via morphological analysis. INTERSPEECH 2010: 1974-1977
  - [c25] Daniel Felps, Christian Geng, Michael Berger, Korin Richmond, Ricardo Gutierrez-Osuna: Relying on critical articulators to estimate vocal tract spectra in an articulatory-acoustic database. INTERSPEECH 2010: 1990-1993
  - [c24] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi: HMM-based text-to-articulatory-movement prediction and analysis of critical articulators. INTERSPEECH 2010: 2194-2197
  - [c23] Gregor Hofer, Korin Richmond, Michael Berger: Lip synchronization by acoustic inversion. SIGGRAPH Posters 2010: 11:1
  - [c22] João P. Cabral, Steve Renals, Korin Richmond, Junichi Yamagishi: An HMM-based speech synthesiser using glottal post-filtering. SSW 2010: 365-370
2000 – 2009
- 2009
  - [j3] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi, Ren-Hua Wang: Integrating Articulatory Features Into HMM-Based Parametric Speech Synthesis. IEEE Trans. Speech Audio Process. 17(6): 1171-1185 (2009)
  - [c21] Korin Richmond, Robert A. J. Clark, Susan Fitt: Robust LTS rules with the Combilex speech technology lexicon. INTERSPEECH 2009: 1295-1298
  - [c20] Ingmar Steiner, Korin Richmond: Towards unsupervised articulatory resynthesis of German utterances using EMA data. INTERSPEECH 2009: 2055-2058
  - [c19] Korin Richmond: Preliminary inversion mapping results with a new EMA corpus. INTERSPEECH 2009: 2835-2838
- 2008
  - [c18] Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi, Ren-Hua Wang: Articulatory control of HMM-based parametric speech synthesis driven by phonetic knowledge. INTERSPEECH 2008: 573-576
  - [c17] João P. Cabral, Steve Renals, Korin Richmond, Junichi Yamagishi: Glottal spectral separation for parametric speech synthesis. INTERSPEECH 2008: 1829-1832
  - [c16] Chao Qin, Miguel Á. Carreira-Perpiñán, Korin Richmond, Alan Wrench, Steve Renals: Predicting tongue shapes from a few landmark locations. INTERSPEECH 2008: 2306-2309
- 2007
  - [j2] Robert A. J. Clark, Korin Richmond, Simon King: Multisyn: Open-domain unit selection for the Festival speech synthesis system. Speech Commun. 49(4): 317-330 (2007)
  - [c15] Korin Richmond, Volker Strom, Robert A. J. Clark, Junichi Yamagishi, Susan Fitt: Festival Multisyn voices for the 2007 Blizzard Challenge. Blizzard Challenge 2007
  - [c14] Korin Richmond: A multitask learning perspective on acoustic-articulatory inversion. INTERSPEECH 2007: 2465-2468
  - [c13] Korin Richmond: Trajectory Mixture Density Networks with Multiple Mixtures for Acoustic-Articulatory Inversion. NOLISP 2007: 263-272
  - [c12] João P. Cabral, Steve Renals, Korin Richmond, Junichi Yamagishi: Towards an improved modeling of the glottal source in statistical parametric speech synthesis. SSW 2007: 113-118
- 2006
  - [c11] Robert A. J. Clark, Korin Richmond, Volker Strom, Simon King: Multisyn Voice for the Blizzard Challenge 2006. Blizzard Challenge 2006
  - [c10] Susan Fitt, Korin Richmond: Redundancy and productivity in the speech technology lexicon - can we do better? INTERSPEECH 2006
  - [c9] Korin Richmond: A trajectory mixture density network for the acoustic-articulatory inversion mapping. INTERSPEECH 2006
- 2005
  - [c8] Robert A. J. Clark, Korin Richmond, Simon King: Multisyn voices from ARCTIC data for the Blizzard Challenge. INTERSPEECH 2005: 101-104
  - [c7] Gregor Hofer, Korin Richmond, Robert A. J. Clark: Informed blending of databases for emotional speech synthesis. INTERSPEECH 2005: 501-504
- 2004
  - [c6] Dave Toney, David Feinberg, Korin Richmond: Acoustic Features for Profiling Mobile Users of Conversational Interfaces. Mobile HCI 2004: 394-398
  - [c5] Robert A. J. Clark, Korin Richmond, Simon King: Festival 2 - build your own general purpose unit selection speech synthesiser. SSW 2004: 173-178
- 2003
  - [j1] Korin Richmond, Simon King, Paul Taylor: Modelling the uncertainty in recovering articulation from acoustics. Comput. Speech Lang. 17(2-3): 153-172 (2003)
- 2000
  - [c4] Alan Wrench, Korin Richmond: Continuous speech recognition using articulatory data. INTERSPEECH 2000: 145-148
  - [c3] Joe Frankel, Korin Richmond, Simon King, Paul Taylor: An automatic speech recognition system using neural networks and linear dynamic models to recover and model articulatory traces. INTERSPEECH 2000: 254-257
1990 – 1999
- 1999
  - [c2] Korin Richmond: Estimating velum height from acoustics during continuous speech. EUROSPEECH 1999: 149-152
- 1997
  - [c1] Korin Richmond, Andrew Smith, Einat Amitay: Detecting Subject Boundaries Within Text: A Language Independent Statistical Approach. EMNLP 1997
last updated on 2024-11-07 20:32 CET by the dblp team
all metadata released as open data under CC0 1.0 license