Matteo Pirotta
2020 – today
2024
- [c50] Matteo Pirotta, Andrea Tirinzoni, Ahmed Touati, Alessandro Lazaric, Yann Ollivier: Fast Imitation via Behavior Foundation Models. ICLR 2024
- [c49] Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati: Simple Ingredients for Offline Reinforcement Learning. ICML 2024
- [i42] Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati: Simple Ingredients for Offline Reinforcement Learning. CoRR abs/2403.13097 (2024)

2023
- [j9] Harsh Satija, Alessandro Lazaric, Matteo Pirotta, Joelle Pineau: Group Fairness in Reinforcement Learning. Trans. Mach. Learn. Res. 2023 (2023)
- [c48] Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: On the Complexity of Representation Learning in Contextual Linear Bandits. AISTATS 2023: 7871-7896
- [c47] Liyu Chen, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path. ALT 2023: 310-357
- [c46] Virginie Do, Elvis Dohmatob, Matteo Pirotta, Alessandro Lazaric, Nicolas Usunier: Contextual bandits with concave rewards, and an application to fair ranking. ICLR 2023
- [c45] Liyu Chen, Andrea Tirinzoni, Alessandro Lazaric, Matteo Pirotta: Layered State Discovery for Incremental Autonomous Exploration. ICML 2023: 4953-5001
- [i41] Liyu Chen, Andrea Tirinzoni, Alessandro Lazaric, Matteo Pirotta: Layered State Discovery for Incremental Autonomous Exploration. CoRR abs/2302.03789 (2023)

2022
- [j8] Matteo Papini, Matteo Pirotta, Marcello Restelli: Smoothing policies and safe policy gradients. Mach. Learn. 111(11): 4081-4137 (2022)
- [c44] Evrard Garcelon, Matteo Pirotta, Vianney Perchet: Encrypted Linear Contextual Bandit. AISTATS 2022: 2519-2551
- [c43] Evrard Garcelon, Vashist Avadhanula, Alessandro Lazaric, Matteo Pirotta: Top K Ranking for Multi-Armed Bandit with Noisy Evaluations. AISTATS 2022: 6242-6269
- [c42] Jean Tarbouriech, Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Adaptive Multi-Goal Exploration. AISTATS 2022: 7349-7383
- [c41] Evrard Garcelon, Kamalika Chaudhuri, Vianney Perchet, Matteo Pirotta: Privacy Amplification via Shuffling for Linear Contextual Bandits. ALT 2022: 381-407
- [c40] Yunchang Yang, Tianhao Wu, Han Zhong, Evrard Garcelon, Matteo Pirotta, Alessandro Lazaric, Liwei Wang, Simon Shaolei Du: A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning. ICLR 2022
- [c39] Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta: Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees. NeurIPS 2022
- [i40] Liyu Chen, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path. CoRR abs/2210.04946 (2022)
- [i39] Virginie Do, Elvis Dohmatob, Matteo Pirotta, Alessandro Lazaric, Nicolas Usunier: Contextual bandits with concave rewards, and an application to fair ranking. CoRR abs/2210.09957 (2022)
- [i38] Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta: Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees. CoRR abs/2210.13083 (2022)
- [i37] Yifang Chen, Karthik Abinav Sankararaman, Alessandro Lazaric, Matteo Pirotta, Dmytro Karamshuk, Qifan Wang, Karishma Mandyam, Sinong Wang, Han Fang: Improved Adaptive Algorithm for Scalable Active Learning with Weak Labeler. CoRR abs/2211.02233 (2022)
- [i36] Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: On the Complexity of Representation Learning in Contextual Linear Bandits. CoRR abs/2212.09429 (2022)

2021
- [j7] Alberto Maria Metelli, Matteo Pirotta, Daniele Calandriello, Marcello Restelli: Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach. J. Mach. Learn. Res. 22: 97:1-97:83 (2021)
- [j6] Carlo D'Eramo, Andrea Cini, Alessandro Nuara, Matteo Pirotta, Cesare Alippi, Jan Peters, Marcello Restelli: Gaussian Approximation for Bias Reduction in Q-Learning. J. Mach. Learn. Res. 22: 277:1-277:51 (2021)
- [c38] Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Emilie Kaufmann, Michal Valko: A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces. AISTATS 2021: 3538-3546
- [c37] Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Sample Complexity Bounds for Stochastic Shortest Path with a Generative Model. ALT 2021: 1157-1178
- [c36] Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Emilie Kaufmann, Michal Valko: Kernel-Based Reinforcement Learning: A Finite-Time Analysis. ICML 2021: 2783-2792
- [c35] Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Leveraging Good Representations in Linear Contextual Bandits. ICML 2021: 8371-8380
- [c34] Jean Tarbouriech, Runlong Zhou, Simon S. Du, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret. NeurIPS 2021: 6843-6855
- [c33] Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric: A Provably Efficient Sample Collection Strategy for Reinforcement Learning. NeurIPS 2021: 7611-7624
- [c32] Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta: Local Differential Privacy for Regret Minimization in Reinforcement Learning. NeurIPS 2021: 10561-10573
- [c31] Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. NeurIPS 2021: 16371-16383
- [i35] Evrard Garcelon, Vianney Perchet, Matteo Pirotta: Homomorphically Encrypted Linear Contextual Bandit. CoRR abs/2103.09927 (2021)
- [i34] Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Leveraging Good Representations in Linear Contextual Bandits. CoRR abs/2104.03781 (2021)
- [i33] Jean Tarbouriech, Runlong Zhou, Simon S. Du, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret. CoRR abs/2104.11186 (2021)
- [i32] Yunchang Yang, Tianhao Wu, Han Zhong, Evrard Garcelon, Matteo Pirotta, Alessandro Lazaric, Liwei Wang, Simon S. Du: A Unified Framework for Conservative Exploration. CoRR abs/2106.11692 (2021)
- [i31] Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs. CoRR abs/2106.13013 (2021)
- [i30] Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. CoRR abs/2110.14798 (2021)
- [i29] Jean Tarbouriech, Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Adaptive Multi-Goal Exploration. CoRR abs/2111.12045 (2021)
- [i28] Paul Luyo, Evrard Garcelon, Alessandro Lazaric, Matteo Pirotta: Differentially Private Exploration in Reinforcement Learning with Linear Representation. CoRR abs/2112.01585 (2021)
- [i27] Evrard Garcelon, Kamalika Chaudhuri, Vianney Perchet, Matteo Pirotta: Privacy Amplification via Shuffling for Linear Contextual Bandits. CoRR abs/2112.06008 (2021)
- [i26] Evrard Garcelon, Vashist Avadhanula, Alessandro Lazaric, Matteo Pirotta: Top K Ranking for Multi-Armed Bandit with Noisy Evaluations. CoRR abs/2112.06517 (2021)

2020
- [j5] Alberto Maria Metelli, Matteo Pirotta, Marcello Restelli: On the use of the policy gradient and Hessian in inverse reinforcement learning. Intelligenza Artificiale 14(1): 117-150 (2020)
- [c30] Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric, Matteo Pirotta: Improved Algorithms for Conservative Exploration in Bandits. AAAI 2020: 3962-3969
- [c29] Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric, Matteo Pirotta: Conservative Exploration in Reinforcement Learning. AISTATS 2020: 1431-1441
- [c28] Andrea Zanette, David Brandfonbrener, Emma Brunskill, Matteo Pirotta, Alessandro Lazaric: Frequentist Regret Bounds for Randomized Least-Squares Value Iteration. AISTATS 2020: 1954-1964
- [c27] Jean Tarbouriech, Evrard Garcelon, Michal Valko, Matteo Pirotta, Alessandro Lazaric: No-Regret Exploration in Goal-Oriented Reinforcement Learning. ICML 2020: 9428-9437
- [c26] Evrard Garcelon, Baptiste Rozière, Laurent Meunier, Jean Tarbouriech, Olivier Teytaud, Alessandro Lazaric, Matteo Pirotta: Adversarial Attacks on Linear Contextual Bandits. NeurIPS 2020
- [c25] Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Improved Sample Complexity for Incremental Autonomous Exploration in MDPs. NeurIPS 2020
- [c24] Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric: An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits. NeurIPS 2020
- [c23] Jean Tarbouriech, Shubhanshu Shekhar, Matteo Pirotta, Mohammad Ghavamzadeh, Alessandro Lazaric: Active Model Estimation in Markov Decision Processes. UAI 2020: 1019-1028
- [i25] Michiel van der Meer, Matteo Pirotta, Elia Bruni: Exploiting Language Instructions for Interpretable and Compositional Reinforcement Learning. CoRR abs/2001.04418 (2020)
- [i24] Jian Qian, Ronan Fruit, Matteo Pirotta, Alessandro Lazaric: Concentration Inequalities for Multinoulli Random Variables. CoRR abs/2001.11595 (2020)
- [i23] Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric, Matteo Pirotta: Conservative Exploration in Reinforcement Learning. CoRR abs/2002.03218 (2020)
- [i22] Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric, Matteo Pirotta: Improved Algorithms for Conservative Exploration in Bandits. CoRR abs/2002.03221 (2020)
- [i21] Evrard Garcelon, Baptiste Rozière, Laurent Meunier, Jean Tarbouriech, Olivier Teytaud, Alessandro Lazaric, Matteo Pirotta: Adversarial Attacks on Linear Contextual Bandits. CoRR abs/2002.03839 (2020)
- [i20] Yonathan Efroni, Shie Mannor, Matteo Pirotta: Exploration-Exploitation in Constrained MDPs. CoRR abs/2003.02189 (2020)
- [i19] Jean Tarbouriech, Shubhanshu Shekhar, Matteo Pirotta, Mohammad Ghavamzadeh, Alessandro Lazaric: Active Model Estimation in Markov Decision Processes. CoRR abs/2003.03297 (2020)
- [i18] Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Emilie Kaufmann, Michal Valko: Regret Bounds for Kernel-Based Reinforcement Learning. CoRR abs/2004.05599 (2020)
- [i17] Pierre-Alexandre Kamienny, Matteo Pirotta, Alessandro Lazaric, Thibault Lavril, Nicolas Usunier, Ludovic Denoyer: Learning Adaptive Exploration Strategies in Dynamic Environments Through Informed Policy Regularization. CoRR abs/2005.02934 (2020)
- [i16] Omar Darwiche Domingues, Pierre Ménard, Matteo Pirotta, Emilie Kaufmann, Michal Valko: A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces. CoRR abs/2007.05078 (2020)
- [i15] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric: Improved Analysis of UCRL2 with Empirical Bernstein Inequality. CoRR abs/2007.05456 (2020)
- [i14] Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric: A Provably Efficient Sample Collection Strategy for Reinforcement Learning. CoRR abs/2007.06437 (2020)
- [i13] Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta: Local Differentially Private Regret Minimization in Reinforcement Learning. CoRR abs/2010.07778 (2020)
- [i12] Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric: An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits. CoRR abs/2010.12247 (2020)
- [i11] Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric: Improved Sample Complexity for Incremental Autonomous Exploration in MDPs. CoRR abs/2012.14755 (2020)
2010 – 2019
2019
- [c22] Jian Qian, Ronan Fruit, Matteo Pirotta, Alessandro Lazaric: Exploration Bonus for Regret Minimization in Discrete and Continuous Average Reward MDPs. NeurIPS 2019: 4891-4900
- [c21] Ronald Ortner, Matteo Pirotta, Alessandro Lazaric, Ronan Fruit, Odalric-Ambrym Maillard: Regret Bounds for Learning State Representations in Reinforcement Learning. NeurIPS 2019: 12717-12727
- [i10] Matteo Papini, Matteo Pirotta, Marcello Restelli: Smoothing Policies and Safe Policy Gradients. CoRR abs/1905.03231 (2019)
- [i9] Andrea Zanette, David Brandfonbrener, Matteo Pirotta, Alessandro Lazaric: Frequentist Regret Bounds for Randomized Least-Squares Value Iteration. CoRR abs/1911.00567 (2019)
- [i8] Jean Tarbouriech, Evrard Garcelon, Michal Valko, Matteo Pirotta, Alessandro Lazaric: No-Regret Exploration in Goal-Oriented Reinforcement Learning. CoRR abs/1912.03517 (2019)

2018
- [c20] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, Ronald Ortner: Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning. ICML 2018: 1573-1581
- [c19] Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, Marcello Restelli: Stochastic Variance-Reduced Policy Gradient. ICML 2018: 4023-4032
- [c18] Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, Marcello Restelli: Importance Weighted Transfer of Samples in Reinforcement Learning. ICML 2018: 4943-4952
- [c17] Davide Di Febbo, Emilia Ambrosini, Matteo Pirotta, Eric Rojas, Marcello Restelli, Alessandra Laura Giulia Pedrocchi, Simona Ferrante: Does Reinforcement Learning outperform PID in the control of FES-induced elbow flex-extension? MeMeA 2018: 1-6
- [c16] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric: Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes. NeurIPS 2018: 2998-3008
- [i7] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, Ronald Ortner: Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning. CoRR abs/1802.04020 (2018)
- [i6] Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, Marcello Restelli: Importance Weighted Transfer of Samples in Reinforcement Learning. CoRR abs/1805.10886 (2018)
- [i5] Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, Marcello Restelli: Stochastic Variance-Reduced Policy Gradient. CoRR abs/1806.05618 (2018)
- [i4] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric: Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes. CoRR abs/1807.02373 (2018)
- [i3] Jian Qian, Ronan Fruit, Matteo Pirotta, Alessandro Lazaric: Exploration Bonus for Regret Minimization in Undiscounted Discrete and Continuous Markov Decision Processes. CoRR abs/1812.04363 (2018)

2017
- [j4] Simone Parisi, Matteo Pirotta, Jan Peters: Manifold-based multi-objective policy search with sample reuse. Neurocomputing 263: 3-14 (2017)
- [c15] Carlo D'Eramo, Alessandro Nuara, Matteo Pirotta, Marcello Restelli: Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems. AAAI 2017: 1840-1846
- [c14] Samuele Tosatto, Matteo Pirotta, Carlo D'Eramo, Marcello Restelli: Boosted Fitted Q-Iteration. ICML 2017: 3434-3443
- [c13] Alberto Maria Metelli, Matteo Pirotta, Marcello Restelli: Compatible Reward Inverse Reinforcement Learning. NIPS 2017: 2050-2059
- [c12] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, Emma Brunskill: Regret Minimization in MDPs with Options without Prior Knowledge. NIPS 2017: 3166-3176
- [c11] Matteo Papini, Matteo Pirotta, Marcello Restelli: Adaptive Batch Size for Safe Policy Gradients. NIPS 2017: 3591-3600
- [c10] Davide Tateo, Matteo Pirotta, Marcello Restelli, Andrea Bonarini: Gradient-based minimization for multi-expert Inverse Reinforcement Learning. SSCI 2017: 1-8
- [i2] Matteo Pirotta, Marcello Restelli: Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent. CoRR abs/1712.03428 (2017)

2016
- [b1] Matteo Pirotta: Reinforcement learning: from theory to algorithms. Polytechnic University of Milan, Italy, 2016
- [j3] Simone Parisi, Matteo Pirotta, Marcello Restelli: Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation. J. Artif. Intell. Res. 57: 187-227 (2016)
- [j2] Giorgio Manganini, Matteo Pirotta, Marcello Restelli, Luigi Piroddi, Maria Prandini: Policy Search for the Optimal Control of Markov Decision Processes: A Novel Particle-Based Iterative Scheme. IEEE Trans. Cybern. 46(11): 2643-2655 (2016)
- [c9] Matteo Pirotta, Marcello Restelli: Inverse Reinforcement Learning through Policy Gradient Minimization. AAAI 2016: 1993-1999

2015
- [j1] Matteo Pirotta, Marcello Restelli, Luca Bascetta: Policy gradient in Lipschitz Markov Decision Processes. Mach. Learn. 100(2-3): 255-283 (2015)
- [c8] Matteo Pirotta, Simone Parisi, Marcello Restelli: Multi-Objective Reinforcement Learning with Continuous Pareto Frontier Approximation. AAAI 2015: 2928-2934
- [c7] Danilo Caporale, Luca Deori, Roberto Mura, Alessandro Falsone, Riccardo Vignali, Luca Giulioni, Matteo Pirotta, Giorgio Manganini: Optimal control to reduce emissions in gasoline engines: an iterative learning control approach for ECU calibration maps improvement. ECC 2015: 1420-1425
- [c6] Giorgio Manganini, Matteo Pirotta, Marcello Restelli, Luca Bascetta: Following Newton direction in Policy Gradient with parameter exploration. IJCNN 2015: 1-8

2014
- [c5] Simone Parisi, Matteo Pirotta, Nicola Smacchia, Luca Bascetta, Marcello Restelli: Policy gradient approaches for multi-objective sequential decision making: A comparison. ADPRL 2014: 1-8
- [c4] Simone Parisi, Matteo Pirotta, Nicola Smacchia, Luca Bascetta, Marcello Restelli: Policy gradient approaches for multi-objective sequential decision making. IJCNN 2014: 2323-2330
- [i1] Matteo Pirotta, Simone Parisi, Marcello Restelli: Multi-objective Reinforcement Learning with Continuous Pareto Frontier Approximation. CoRR abs/1406.3497 (2014)

2013
- [c3] Matteo Pirotta, Marcello Restelli, Alessio Pecorino, Daniele Calandriello: Safe Policy Iteration. ICML (3) 2013: 307-315
- [c2] Matteo Pirotta, Marcello Restelli, Luca Bascetta: Adaptive Step-Size for Policy Gradient Methods. NIPS 2013: 1394-1402

2011
- [c1] Martino Migliavacca, Alessio Pecorino, Matteo Pirotta, Marcello Restelli, Andrea Bonarini: Fitted policy search. ADPRL 2011: 287-294