Chanwoo Kim 0001
Person information
- affiliation: Samsung Research, Seoul, South Korea
Other persons with the same name
- Chanwoo Kim — disambiguation page
- Chanwoo Kim 0002 — University of Washington, Seattle, WA, USA
2020 – today
- 2024
  - [j4] Joshua Tian Jin Tee, Kang Zhang, Hee Suk Yoon, Dhananjaya N. Gowda, Chanwoo Kim, Chang D. Yoo: Physics Informed Distillation for Diffusion Models. Trans. Mach. Learn. Res. 2024 (2024)
  - [c59] Abhinav Garg, Jiyeon Kim, Sushil Khyalia, Chanwoo Kim, Dhananjaya Gowda: Data Driven Grapheme-to-Phoneme Representations for a Lexicon-Free Text-to-Speech. ICASSP 2024: 11091-11095
  - [c58] Jae-Sung Bae, Joun Yeop Lee, Ji-Hyun Lee, Seongkyu Mun, Taehwa Kang, Hoon-Young Cho, Chanwoo Kim: Latent Filling: Latent Space Data Augmentation for Zero-Shot Speech Synthesis. ICASSP 2024: 11166-11170
  - [c57] Heejin Choi, Jae-Sung Bae, Joun Yeop Lee, Seongkyu Mun, Jihwan Lee, Hoon-Young Cho, Chanwoo Kim: Mels-TTS: Multi-Emotion Multi-Lingual Multi-Speaker Text-To-Speech System Via Disentangled Style Tokens. ICASSP 2024: 12682-12686
  - [c56] SooHwan Eom, Eunseop Yoon, Hee Suk Yoon, Chanwoo Kim, Mark Hasegawa-Johnson, Chang D. Yoo: AdaMER-CTC: Connectionist Temporal Classification with Adaptive Maximum Entropy Regularization for Automatic Speech Recognition. ICASSP 2024: 12707-12711
  - [i20] Abhinav Garg, Jiyeon Kim, Sushil Khyalia, Chanwoo Kim, Dhananjaya Gowda: Data-driven grapheme-to-phoneme representations for a lexicon-free text-to-speech. CoRR abs/2401.10465 (2024)
- 2023
  - [c55] Mehul Kumar, Jiyeon Kim, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim: Self-Supervised Accent Learning for Under-Resourced Accents Using Native Language Data. ICASSP 2023: 1-5
  - [c54] Sunjae Yoon, Ji Woo Hong, SooHwan Eom, Hee Suk Yoon, Eunseop Yoon, Daehyeok Kim, Junyeong Kim, Chanwoo Kim, Chang D. Yoo: Counterfactual Two-Stage Debiasing For Video Corpus Moment Retrieval. ICASSP 2023: 1-5
  - [c53] Eunseop Yoon, Hee Suk Yoon, Dhananjaya Gowda, SooHwan Eom, Daehyeok Kim, John B. Harvill, Heting Gao, Mark Hasegawa-Johnson, Chanwoo Kim, Chang D. Yoo: Mitigating the Exposure Bias in Sentence-Level Grapheme-to-Phoneme (G2P) Transduction. INTERSPEECH 2023: 2028-2032
  - [c52] Joun Yeop Lee, Jae-Sung Bae, Seongkyu Mun, Jihwan Lee, Ji-Hyun Lee, Hoon-Young Cho, Chanwoo Kim: Hierarchical Timbre-Cadence Speaker Encoder for Zero-shot Speech Synthesis. INTERSPEECH 2023: 4334-4338
  - [i19] Eunseop Yoon, Hee Suk Yoon, Dhananjaya Gowda, SooHwan Eom, Daehyeok Kim, John B. Harvill, Heting Gao, Mark Hasegawa-Johnson, Chanwoo Kim, Chang D. Yoo: Mitigating the Exposure Bias in Sentence-Level Grapheme-to-Phoneme (G2P) Transduction. CoRR abs/2308.08442 (2023)
  - [i18] Nagaraj Adiga, Jinhwan Park, Chintigari Shiva Kumar, Shatrughan Singh, Kyungmin Lee, Chanwoo Kim, Dhananjaya Gowda: On the compression of shallow non-causal ASR models using knowledge distillation and tied-and-reduced decoder for low-latency on-device speech recognition. CoRR abs/2312.09842 (2023)
- 2022
  - [c51] Mohd Abbas Zaidi, Beomseok Lee, Sangha Kim, Chanwoo Kim: Cross-Modal Decision Regularization for Simultaneous Speech Translation. INTERSPEECH 2022: 116-120
  - [c50] Seongkyu Mun, Dhananjaya Gowda, Jihwan Lee, Changwoo Han, Dokyun Lee, Chanwoo Kim: Prototypical speaker-interference loss for target voice separation using non-parallel audio samples. INTERSPEECH 2022: 276-280
  - [c49] Jinhwan Park, Sichen Jin, Junmo Park, Sungsoo Kim, Dhairya Sandhyana, Changheon Lee, Myoungji Han, Jungin Lee, Seokyeong Jung, Changwoo Han, Chanwoo Kim: Conformer-Based on-Device Streaming Speech Recognition with KD Compression and Two-Pass Architecture. SLT 2022: 92-99
  - [c48] Chanwoo Kim, Sathish Indurti, Jinhwan Park, Wonyong Sung: Macro-Block Dropout for Improved Regularization in Training End-to-End Speech Recognition Models. SLT 2022: 331-338
  - [i17] Nauman Dawalatabad, Tushar Vatsal, Ashutosh Gupta, Sungsoo Kim, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim: Two-Pass End-to-End ASR Model Compression. CoRR abs/2201.02741 (2022)
  - [i16] Jihwan Lee, Joun Yeop Lee, Heejin Choi, Seongkyu Mun, Sangjun Park, Chanwoo Kim: Into-TTS: Intonation Template based Prosody Control System. CoRR abs/2204.01271 (2022)
  - [i15] Jihwan Lee, Jae-Sung Bae, Seongkyu Mun, Heejin Choi, Joun Yeop Lee, Hoon-Young Cho, Chanwoo Kim: An Empirical Study on L2 Accents of Cross-lingual Text-to-Speech Systems via Vowel Space. CoRR abs/2211.03078 (2022)
  - [i14] Chanwoo Kim, Sathish Indurti, Jinhwan Park, Wonyong Sung: Macro-block dropout for improved regularization in training end-to-end speech recognition models. CoRR abs/2212.14149 (2022)
- 2021
  - [c47] Sachin Singh, Ashutosh Gupta, Aman Maghan, Dhananjaya Gowda, Shatrughan Singh, Chanwoo Kim: Comparative Study of Different Tokenization Strategies for Streaming End-to-End ASR. ASRU 2021: 388-394
  - [c46] Dhananjaya Gowda, Abhinav Garg, Jiyeon Kim, Mehul Kumar, Sachin Singh, Ashutosh Gupta, Ankur Kumar, Nauman Dawalatabad, Aman Maghan, Shatrughan Singh, Chanwoo Kim: HiTNet: Byte-to-BPE Hierarchical Transcription Network for End-to-End Speech Recognition. ASRU 2021: 395-402
  - [c45] Nauman Dawalatabad, Tushar Vatsal, Ashutosh Gupta, Sungsoo Kim, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim: Two-Pass End-to-End ASR Model Compression. ASRU 2021: 403-410
  - [c44] Ashutosh Gupta, Aditya Jayasimha, Aman Maghan, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim: Voice to Action: Spoken Language Understanding for Memory-Constrained Systems. ASRU 2021: 473-479
  - [c43] Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim: Semi-Supervised Transfer Learning for Language Expansion of End-to-End Speech Recognition Models to Low-Resource Languages. ASRU 2021: 984-988
  - [c42] Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim: A Comparison of Streaming Models and Data Augmentation Methods for Robust Speech Recognition. ASRU 2021: 989-995
  - [c41] Ashutosh Gupta, Ankur Kumar, Dhananjaya Gowda, Kwangyoun Kim, Sachin Singh, Shatrughan Singh, Chanwoo Kim: Neural Utterance Confidence Measure for RNN-Transducers and Two Pass Models. ICASSP 2021: 6398-6402
  - [c40] Chanwoo Kim, Abhinav Garg, Dhananjaya Gowda, Seongkyu Mun, Changwoo Han: Streaming End-to-End Speech Recognition with Jointly Trained Neural Feature Enhancement. ICASSP 2021: 6773-6777
  - [c39] Sathish Reddy Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Hyojung Han, Seokchan Ahn, Sangha Kim, Chanwoo Kim, Inchul Hwang: Task Aware Multi-Task Learning for Speech to Text Tasks. ICASSP 2021: 7723-7727
  - [c38] Jinhwan Park, Chanwoo Kim, Wonyong Sung: Convolution-Based Attention Model With Positional Encoding For Streaming Speech Recognition On Embedded Devices. SLT 2021: 30-37
  - [i13] Chanwoo Kim, Abhinav Garg, Dhananjaya Gowda, Seongkyu Mun, Changwoo Han: Streaming end-to-end speech recognition with jointly trained neural feature enhancement. CoRR abs/2105.01254 (2021)
  - [i12] Mohd Abbas Zaidi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim, Chanwoo Kim: Decision Attentive Regularization to Improve Simultaneous Speech Translation Systems. CoRR abs/2110.15729 (2021)
  - [i11] Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim: A comparison of streaming models and data augmentation methods for robust speech recognition. CoRR abs/2111.10043 (2021)
  - [i10] Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim: Semi-supervised transfer learning for language expansion of end-to-end speech recognition models to low-resource languages. CoRR abs/2111.10047 (2021)
- 2020
  - [c37] Chanwoo Kim, Dhananjaya Gowda, Dongsoo Lee, Jiyeon Kim, Ankur Kumar, Sungsoo Kim, Abhinav Garg, Changwoo Han: A Review of On-Device Fully Neural End-to-End Automatic Speech Recognition Algorithms. ACSSC 2020: 277-283
  - [c36] Chanwoo Kim, Kwangyoun Kim, Sathish Reddy Indurthi: Small Energy Masking for Improved Neural Network Training for End-To-End Speech Recognition. ICASSP 2020: 7684-7688
  - [c35] Sathish Reddy Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim: End-end Speech-to-Text Translation with Modality Agnostic Meta-Learning. ICASSP 2020: 7904-7908
  - [c34] Abhinav Garg, Ashutosh Gupta, Dhananjaya Gowda, Shatrughan Singh, Chanwoo Kim: Hierarchical Multi-Stage Word-to-Grapheme Named Entity Corrector for Automatic Speech Recognition. INTERSPEECH 2020: 1793-1797
  - [c33] Dhananjaya Gowda, Ankur Kumar, Kwangyoun Kim, Hejung Yang, Abhinav Garg, Sachin Singh, Jiyeon Kim, Mehul Kumar, Sichen Jin, Shatrughan Singh, Chanwoo Kim: Utterance Invariant Training for Hybrid Two-Pass End-to-End Speech Recognition. INTERSPEECH 2020: 2827-2831
  - [c32] Abhinav Garg, Gowtham P. Vadisetti, Dhananjaya Gowda, Sichen Jin, Aditya Jayasimha, Youngho Han, Jiyeon Kim, Junmo Park, Kwangyoun Kim, Sooyeon Kim, Young-Yoon Lee, Kyungbo Min, Chanwoo Kim: Streaming On-Device End-to-End ASR System for Privacy-Sensitive Voice-Typing. INTERSPEECH 2020: 3371-3375
  - [c31] Ankur Kumar, Sachin Singh, Dhananjaya Gowda, Abhinav Garg, Shatrughan Singh, Chanwoo Kim: Utterance Confidence Measure for End-to-End Speech Recognition with Applications to Distributed Speech Recognition Scenarios. INTERSPEECH 2020: 4357-4361
  - [i9] Kwangyoun Kim, Kyungmin Lee, Dhananjaya Gowda, Junmo Park, Sungsoo Kim, Sichen Jin, Young-Yoon Lee, Jinsu Yeo, Daehyun Kim, Seokyeong Jung, Jungin Lee, Myoungji Han, Chanwoo Kim: Attention based on-device streaming speech recognition with large speech corpus. CoRR abs/2001.00577 (2020)
  - [i8] Chanwoo Kim, Kwangyoun Kim, Sathish Reddy Indurthi: Small energy masking for improved neural network training for end-to-end speech recognition. CoRR abs/2002.06312 (2020)
  - [i7] Chanwoo Kim, Dhananjaya Gowda, Dongsoo Lee, Jiyeon Kim, Ankur Kumar, Sungsoo Kim, Abhinav Garg, Changwoo Han: A review of on-device fully neural end-to-end automatic speech recognition algorithms. CoRR abs/2012.07974 (2020)
  - [i6] Hyojung Han, Sathish Reddy Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim, Chanwoo Kim, Inchul Hwang: Faster Re-translation Using Non-Autoregressive Model For Simultaneous Neural Machine Translation. CoRR abs/2012.14681 (2020)
2010 – 2019
- 2019
  - [c30] Abhinav Garg, Dhananjaya Gowda, Ankur Kumar, Kwangyoun Kim, Mehul Kumar, Chanwoo Kim: Improved Multi-Stage Training of Online Attention-Based Encoder-Decoder Models. ASRU 2019: 70-77
  - [c29] Chanwoo Kim, Minkyoo Shin, Shatrughan Singh, Larry Heck, Dhananjaya Gowda, Sungsoo Kim, Kwangyoun Kim, Mehul Kumar, Jiyeon Kim, Kyungmin Lee, Changwoo Han, Abhinav Garg, Eunhyang Kim: End-to-End Training of a Large Vocabulary End-to-End Speech Recognition System. ASRU 2019: 562-569
  - [c28] Kwangyoun Kim, Seokyeong Jung, Jungin Lee, Myoungji Han, Chanwoo Kim, Kyungmin Lee, Dhananjaya Gowda, Junmo Park, Sungsoo Kim, Sichen Jin, Young-Yoon Lee, Jinsu Yeo, Daehyun Kim: Attention Based On-Device Streaming Speech Recognition with Large Speech Corpus. ASRU 2019: 956-963
  - [c27] Chanwoo Kim, Mehul Kumar, Kwangyoun Kim, Dhananjaya Gowda: Power-Law Nonlinearity with Maximally Uniform Distribution Criterion for Improved Neural Network Training in Automatic Speech Recognition. ASRU 2019: 988-995
  - [c26] Anjali Menon, Chanwoo Kim, Richard M. Stern: Robust Recognition of Reverberant and Noisy Speech Using Coherence-based Processing. ICASSP 2019: 6775-6779
  - [c25] Chanwoo Kim, Minkyu Shin, Abhinav Garg, Dhananjaya Gowda: Improved Vocal Tract Length Perturbation for a State-of-the-Art End-to-End Speech Recognition System. INTERSPEECH 2019: 739-743
  - [c24] Dhananjaya Gowda, Abhinav Garg, Kwangyoun Kim, Mehul Kumar, Chanwoo Kim: Multi-Task Multi-Resolution Char-to-BPE Cross-Attention Decoder for End-to-End Speech Recognition. INTERSPEECH 2019: 2783-2787
  - [i5] Sathish Reddy Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim: Data Efficient Direct Speech-to-Text Translation with Modality Agnostic Meta-Learning. CoRR abs/1911.04283 (2019)
  - [i4] Chanwoo Kim, Sungsoo Kim, Kwangyoun Kim, Mehul Kumar, Jiyeon Kim, Kyungmin Lee, Changwoo Han, Abhinav Garg, Eunhyang Kim, Minkyoo Shin, Shatrughan Singh, Larry Heck, Dhananjaya Gowda: End-to-end training of a large vocabulary end-to-end speech recognition system. CoRR abs/1912.11040 (2019)
  - [i3] Chanwoo Kim, Mehul Kumar, Kwangyoun Kim, Dhananjaya Gowda: Power-law nonlinearity with maximally uniform distribution criterion for improved neural network training in automatic speech recognition. CoRR abs/1912.11041 (2019)
  - [i2] Abhinav Garg, Dhananjaya Gowda, Ankur Kumar, Kwangyoun Kim, Mehul Kumar, Chanwoo Kim: Improved Multi-Stage Training of Online Attention-based Encoder-Decoder Models. CoRR abs/1912.12384 (2019)
- 2018
  - [c23] Chanwoo Kim, Anjali Menon, Michiel Bacchiani, Richard M. Stern: Sound Source Separation Using Phase Difference and Reliable Mask Selection Selection. ICASSP 2018: 5559-5563
  - [c22] Chanwoo Kim, Tara N. Sainath, Arun Narayanan, Ananya Misra, Rajeev C. Nongpiur, Michiel Bacchiani: Spectral Distortion Model for Training Phase-Sensitive Deep-Neural Networks for Far-Field Speech Recognition. ICASSP 2018: 5729-5733
  - [c21] Chanwoo Kim, Ehsan Variani, Arun Narayanan, Michiel Bacchiani: Efficient Implementation of the Room Simulator for Training Deep Neural Network Acoustic Models. INTERSPEECH 2018: 3028-3032
  - [c20] Xinhui Zhou, Chiman Kwan, Bulent Ayhan, Chanwoo Kim, Kshitiz Kumar, Richard M. Stern: A Comparative Study of Spatial Speech Separation Techniques to Improve Speech Recognition. ISNN 2018: 494-502
- 2017
  - [j3] Tara N. Sainath, Ron J. Weiss, Kevin W. Wilson, Bo Li, Arun Narayanan, Ehsan Variani, Michiel Bacchiani, Izhak Shafran, Andrew W. Senior, Kean K. Chin, Ananya Misra, Chanwoo Kim: Multichannel Signal Processing With Deep Neural Networks for Automatic Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 25(5): 965-979 (2017)
  - [c19] Anjali Menon, Chanwoo Kim, Umpei Kurokawa, Richard M. Stern: Binaural processing for robust recognition of degraded speech. ASRU 2017: 24-31
  - [c18] Chanwoo Kim, Ananya Misra, Kean K. Chin, Thad Hughes, Arun Narayanan, Tara N. Sainath, Michiel Bacchiani: Generation of Large-Scale Simulated Utterances in Virtual Rooms to Train Deep-Neural Networks for Far-Field Speech Recognition in Google Home. INTERSPEECH 2017: 379-383
  - [c17] Bo Li, Tara N. Sainath, Arun Narayanan, Joe Caroselli, Michiel Bacchiani, Ananya Misra, Izhak Shafran, Hasim Sak, Golan Pundak, Kean K. Chin, Khe Chai Sim, Ron J. Weiss, Kevin W. Wilson, Ehsan Variani, Chanwoo Kim, Olivier Siohan, Mitchel Weintraub, Erik McDermott, Richard Rose, Matt Shannon: Acoustic Modeling for Google Home. INTERSPEECH 2017: 399-403
  - [c16] Anjali Menon, Chanwoo Kim, Richard M. Stern: Robust Speech Recognition Based on Binaural Auditory Processing. INTERSPEECH 2017: 3872-3876
  - [p1] Tara N. Sainath, Ron J. Weiss, Kevin W. Wilson, Arun Narayanan, Michiel Bacchiani, Bo Li, Ehsan Variani, Izhak Shafran, Andrew W. Senior, Kean K. Chin, Ananya Misra, Chanwoo Kim: Raw Multichannel Processing Using Deep Neural Networks. New Era for Robust Speech Recognition, Exploiting Deep Learning 2017: 105-133
  - [i1] Chanwoo Kim, Ehsan Variani, Arun Narayanan, Michiel Bacchiani: Efficient Implementation of the Room Simulator for Training Deep Neural Network Acoustic Models. CoRR abs/1712.03439 (2017)
- 2016
  - [j2] Byung Joon Cho, Haeyong Kwon, Ji-Won Cho, Chanwoo Kim, Richard M. Stern, Hyung-Min Park: A Subband-Based Stationary-Component Suppression Method Using Harmonics and Power Ratio for Reverberant Speech Recognition. IEEE Signal Process. Lett. 23(6): 780-784 (2016)
  - [j1] Chanwoo Kim, Richard M. Stern: Power-Normalized Cepstral Coefficients (PNCC) for Robust Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 24(7): 1315-1329 (2016)
- 2014
  - [c15] Hyung-Min Park, Matthew Maciejewski, Chanwoo Kim, Richard M. Stern: Robust speech recognition in reverberant environments using subband-based steady-state monaural and binaural suppression. INTERSPEECH 2014: 2715-2718
  - [c14] Chanwoo Kim, Kean K. Chin, Michiel Bacchiani, Richard M. Stern: Robust speech recognition using temporal masking and thresholding algorithm. INTERSPEECH 2014: 2734-2738
- 2012
  - [c13] Chanwoo Kim, Richard M. Stern: Power-Normalized Cepstral Coefficients (PNCC) for robust speech recognition. ICASSP 2012: 4101-4104
  - [c12] Chanwoo Kim, Charbel El Khawand, Richard M. Stern: Two-microphone source separation algorithm based on statistical modeling of angle distributions. ICASSP 2012: 4629-4632
- 2011
  - [c11] Kshitiz Kumar, Chanwoo Kim, Richard M. Stern: Delta-spectral cepstral coefficients for robust speech recognition. ICASSP 2011: 4784-4787
  - [c10] Chanwoo Kim, Kshitiz Kumar, Richard M. Stern: Binaural sound source separation motivated by auditory processing. ICASSP 2011: 5072-5075
- 2010
  - [c9] Chanwoo Kim, Richard M. Stern: Feature extraction for robust speech recognition based on maximizing the sharpness of the power distribution and on power flooring. ICASSP 2010: 4574-4577
  - [c8] Chanwoo Kim, Richard M. Stern, Kiwan Eom, Jaewon Lee: Automatic selection of thresholds for signal separation algorithms based on interaural delay. INTERSPEECH 2010: 729-732
  - [c7] Chanwoo Kim, Richard M. Stern: Nonlinear enhancement of onset for robust speech recognition. INTERSPEECH 2010: 2058-2061
2000 – 2009
- 2009
  - [c6] Chanwoo Kim, Richard M. Stern: Power function-based power distribution normalization algorithm for robust speech recognition. ASRU 2009: 188-193
  - [c5] Chanwoo Kim, Kshitiz Kumar, Richard M. Stern: Robust speech recognition using a Small Power Boosting algorithm. ASRU 2009: 243-248
  - [c4] Chanwoo Kim, Richard M. Stern: Feature extraction for robust speech recognition using a power-law nonlinearity and power-bias subtraction. INTERSPEECH 2009: 28-31
  - [c3] Chanwoo Kim, Kshitiz Kumar, Bhiksha Raj, Richard M. Stern: Signal separation for robust speech recognition based on phase difference information obtained in the frequency domain. INTERSPEECH 2009: 2495-2498
- 2008
  - [c2] Chanwoo Kim, Richard M. Stern: Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis. INTERSPEECH 2008: 2598-2601
- 2006
  - [c1] Chanwoo Kim, Yu-Hsiang Bosco Chiu, Richard M. Stern: Physiologically-motivated synchrony-based processing for robust automatic speech recognition. INTERSPEECH 2006