Dylan Hadfield-Menell
Person information
- affiliation: University of California, Berkeley, USA
2020 – today
- 2024
- [j4] Jonathan Stray, Alon Y. Halevy, Parisa Assar, Dylan Hadfield-Menell, Craig Boutilier, Amar Ashar, Chloé Bakalar, Lex Beattie, Michael D. Ekstrand, Claire Leibowicz, Connie Moon Sehat, Sara Johansen, Lianne Kerlin, David Vickrey, Spandana Singh, Sanne Vrijenhoek, Amy Xian Zhang, McKane Andrus, Natali Helberger, Polina Proutskova, Tanushree Mitra, Nina Vasan: Building Human Values into Recommender Systems: An Interdisciplinary Synthesis. Trans. Recomm. Syst. 2(3): 20:1-20:57 (2024)
- [c39] Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas A. Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell: Black-Box Access is Insufficient for Rigorous AI Audits. FAccT 2024: 2254-2272
- [c38] Anand Siththaranjan, Cassidy Laidlaw, Dylan Hadfield-Menell: Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF. ICLR 2024
- [i46] Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Alexander Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell: Black-Box Access is Insufficient for Rigorous AI Audits. CoRR abs/2401.14446 (2024)
- [i45] Aengus Lynch, Phillip Guo, Aidan Ewart, Stephen Casper, Dylan Hadfield-Menell: Eight Methods to Evaluate Robust Unlearning in LLMs. CoRR abs/2402.16835 (2024)
- [i44] Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell: Defending Against Unforeseen Failure Modes with Latent Adversarial Training. CoRR abs/2403.05030 (2024)
- [i43] Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, Angus Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, Dylan Hadfield-Menell: The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability. CoRR abs/2404.02949 (2024)
- [i42] Abhay Sheshadri, Aidan Ewart, Phillip Guo, Aengus Lynch, Cindy Wu, Vivek Hebbar, Henry Sleight, Asa Cooper Stickland, Ethan Perez, Dylan Hadfield-Menell, Stephen Casper: Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs. CoRR abs/2407.15549 (2024)
- 2023
- [j3] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca D. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. Trans. Mach. Learn. Res. 2023 (2023)
- [c37] Stephen Casper, Dylan Hadfield-Menell, Gabriel Kreiman: White-Box Adversarial Policies in Deep Reinforcement Learning. SafeAI@AAAI 2023
- [c36] Phillip J. K. Christoffersen, Andreas A. Haupt, Dylan Hadfield-Menell: Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL. AAMAS 2023: 448-456
- [c35] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? EMNLP 2023: 4791-4797
- [c34] Stephen Casper, Tong Bu, Yuxiao Li, Jiawei Li, Kevin Zhang, Kaivalya Hariharan, Dylan Hadfield-Menell: Red Teaming Deep Neural Networks with Feature Synthesis Tools. NeurIPS 2023
- [c33] Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. SaTML 2023: 464-483
- [i41] Andreas A. Haupt, Dylan Hadfield-Menell, Chara Podimata: Recommending to Strategic Users. CoRR abs/2302.06559 (2023)
- [i40] Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Dylan Hadfield-Menell: Benchmarking Interpretability Tools for Deep Neural Networks. CoRR abs/2302.10894 (2023)
- [i39] Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell: Explore, Establish, Exploit: Red Teaming Language Models from Scratch. CoRR abs/2306.09442 (2023)
- [i38] Stephen Casper, Zifan Guo, Shreya Mogulothu, Zachary Marinov, Chinmay Deshpande, Rui-Jie Yew, Zheng Dai, Dylan Hadfield-Menell: Measuring the Success of Diffusion Models at Imitating Human Artists. CoRR abs/2307.04028 (2023)
- [i37] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca D. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. CoRR abs/2307.15217 (2023)
- [i36] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? CoRR abs/2312.03729 (2023)
- [i35] Anand Siththaranjan, Cassidy Laidlaw, Dylan Hadfield-Menell: Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF. CoRR abs/2312.08358 (2023)
- 2022
- [c32] Rui-Jie Yew, Dylan Hadfield-Menell: A Penalty Default Approach to Preemptive Harm Disclosure and Mitigation for AI Systems. AIES 2022: 823-830
- [c31] Micah D. Carroll, Anca D. Dragan, Stuart Russell, Dylan Hadfield-Menell: Estimating and Penalizing Induced Preference Shifts in Recommender Systems. ICML 2022: 2686-2708
- [c30] Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman: Robust Feature-Level Adversaries are Interpretability Tools. NeurIPS 2022
- [c29] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Tom Griffiths, Dylan Hadfield-Menell: How to talk so AI will learn: Instructions, descriptions, and autonomy. NeurIPS 2022
- [c28] Mihaela Curmei, Andreas A. Haupt, Benjamin Recht, Dylan Hadfield-Menell: Towards Psychologically-Grounded Dynamic Preference Models. RecSys 2022: 35-48
- [i34] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell: Linguistic communication as (inverse) reward design. CoRR abs/2204.05091 (2022)
- [i33] Micah Carroll, Dylan Hadfield-Menell, Stuart Russell, Anca D. Dragan: Estimating and Penalizing Induced Preference Shifts in Recommender Systems. CoRR abs/2204.11966 (2022)
- [i32] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell: How to talk so your robot will learn: Instructions, descriptions, and pragmatics. CoRR abs/2206.07870 (2022)
- [i31] Jonathan Stray, Alon Y. Halevy, Parisa Assar, Dylan Hadfield-Menell, Craig Boutilier, Amar Ashar, Lex Beattie, Michael D. Ekstrand, Claire Leibowicz, Connie Moon Sehat, Sara Johansen, Lianne Kerlin, David Vickrey, Spandana Singh, Sanne Vrijenhoek, Amy X. Zhang, McKane Andrus, Natali Helberger, Polina Proutskova, Tanushree Mitra, Nina Vasan: Building Human Values into Recommender Systems: An Interdisciplinary Synthesis. CoRR abs/2207.10192 (2022)
- [i30] Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. CoRR abs/2207.13243 (2022)
- [i29] Mihaela Curmei, Andreas A. Haupt, Dylan Hadfield-Menell, Benjamin Recht: Towards Psychologically-Grounded Dynamic Preference Models. CoRR abs/2208.01534 (2022)
- [i28] Phillip J. K. Christoffersen, Andreas A. Haupt, Dylan Hadfield-Menell: Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL. CoRR abs/2208.10469 (2022)
- [i27] Stephen Casper, Dylan Hadfield-Menell, Gabriel Kreiman: White-Box Adversarial Policies in Deep Reinforcement Learning. CoRR abs/2209.02167 (2022)
- [i26] Stephen Casper, Kaivalya Hariharan, Dylan Hadfield-Menell: Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. CoRR abs/2211.10024 (2022)
- 2021
- [j2] Liu Leqi, Dylan Hadfield-Menell, Zachary C. Lipton: When curation becomes creation. Commun. ACM 64(12): 44-47 (2021)
- [j1] Liu Leqi, Dylan Hadfield-Menell, Zachary C. Lipton: When Curation Becomes Creation: Algorithms, microcontent, and the vanishing distinction between platforms and creators. ACM Queue 19(3): 11-15 (2021)
- [c27] Michael James McDonald, Dylan Hadfield-Menell: Guided Imitation of Task and Motion Planning. CoRL 2021: 630-640
- [c26] Micah Carroll, Dylan Hadfield-Menell, Stuart Russell, Anca D. Dragan: Estimating and Penalizing Preference Shift in Recommender Systems. RecSys 2021: 661-667
- [i25] Simon Zhuang, Dylan Hadfield-Menell: Consequences of Misaligned AI. CoRR abs/2102.03896 (2021)
- [i24] Liu Leqi, Dylan Hadfield-Menell, Zachary C. Lipton: When Curation Becomes Creation: Algorithms, Microcontent, and the Vanishing Distinction between Platforms and Creators. CoRR abs/2107.00441 (2021)
- [i23] Jonathan Stray, Ivan Vendrov, Jeremy Nixon, Steven Adler, Dylan Hadfield-Menell: What are you optimizing for? Aligning Recommender Systems with Human Values. CoRR abs/2107.10939 (2021)
- [i22] Michael James McDonald, Dylan Hadfield-Menell: Guided Imitation of Task and Motion Planning. CoRR abs/2112.03386 (2021)
- 2020
- [c25] Alexander Matt Turner, Dylan Hadfield-Menell, Prasad Tadepalli: Conservative Agency via Attainable Utility Preservation. AIES 2020: 385-391
- [c24] Raphael Koster, Dylan Hadfield-Menell, Gillian K. Hadfield, Joel Z. Leibo: Silly Rules Improve the Capacity of Agents to Learn Stable Enforcement and Compliance Behaviors. AAMAS 2020: 1887-1888
- [c23] Simon Zhuang, Dylan Hadfield-Menell: Consequences of Misaligned AI. NeurIPS 2020
- [i21] Raphael Köster, Dylan Hadfield-Menell, Gillian K. Hadfield, Joel Z. Leibo: Silly rules improve the capacity of agents to learn stable enforcement and compliance behaviors. CoRR abs/2001.09318 (2020)
- [i20] Arnaud Fickinger, Simon Zhuang, Dylan Hadfield-Menell, Stuart Russell: Multi-Principal Assistance Games. CoRR abs/2007.09540 (2020)
- [i19] Arnaud Fickinger, Simon Zhuang, Andrew Critch, Dylan Hadfield-Menell, Stuart Russell: Multi-Principal Assistance Games: Definition and Collegial Mechanisms. CoRR abs/2012.14536 (2020)
2010 – 2019
- 2019
- [c22] Dylan Hadfield-Menell, McKane Andrus, Gillian K. Hadfield: Legible Normativity for AI Alignment: The Value of Silly Rules. AIES 2019: 115-121
- [c21] Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, Anca D. Dragan: Human-AI Learning Performance in Multi-Armed Bandits. AIES 2019: 369-375
- [c20] Dylan Hadfield-Menell, Gillian K. Hadfield: Incomplete Contracting and AI Alignment. AIES 2019: 417-422
- [c19] Rohan Choudhury, Gokul Swamy, Dylan Hadfield-Menell, Anca D. Dragan: On the Utility of Model Learning in HRI. HRI 2019: 317-325
- [c18] Lawrence Chan, Dylan Hadfield-Menell, Siddhartha S. Srinivasa, Anca D. Dragan: The Assistive Multi-Armed Bandit. HRI 2019: 354-363
- [c17] Alexander Matt Turner, Dylan Hadfield-Menell, Prasad Tadepalli: Conservative Agency. AISafety@IJCAI 2019
- [i18] Rohan Choudhury, Gokul Swamy, Dylan Hadfield-Menell, Anca D. Dragan: On the Utility of Model Learning in HRI. CoRR abs/1901.01291 (2019)
- [i17] Lawrence Chan, Dylan Hadfield-Menell, Siddhartha S. Srinivasa, Anca D. Dragan: The Assistive Multi-Armed Bandit. CoRR abs/1901.08654 (2019)
- [i16] Alexander Matt Turner, Dylan Hadfield-Menell, Prasad Tadepalli: Conservative Agency via Attainable Utility Preservation. CoRR abs/1902.09725 (2019)
- [i15] Marc Khoury, Dylan Hadfield-Menell: Adversarial Training with Voronoi Constraints. CoRR abs/1905.01019 (2019)
- [i14] Matthew Rahtz, James Fang, Anca D. Dragan, Dylan Hadfield-Menell: An Extensible Interactive Interface for Agent Design. CoRR abs/1906.02641 (2019)
- 2018
- [c16] Dhruv Malik, Malayandi Palaniappan, Jaime F. Fisac, Dylan Hadfield-Menell, Stuart Russell, Anca D. Dragan: An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning. ICML 2018: 3391-3399
- [c15] Ellis Ratner, Dylan Hadfield-Menell, Anca D. Dragan: Simplifying Reward Design through Divide-and-Conquer. Robotics: Science and Systems 2018
- [i13] Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, Anca D. Dragan: Expressive Robot Motion Timing. CoRR abs/1802.01536 (2018)
- [i12] Dylan Hadfield-Menell, Gillian K. Hadfield: Incomplete Contracting and AI Alignment. CoRR abs/1804.04268 (2018)
- [i11] Ellis Ratner, Dylan Hadfield-Menell, Anca D. Dragan: Simplifying Reward Design through Divide-and-Conquer. CoRR abs/1806.02501 (2018)
- [i10] Dhruv Malik, Malayandi Palaniappan, Jaime F. Fisac, Dylan Hadfield-Menell, Stuart Russell, Anca D. Dragan: An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning. CoRR abs/1806.03820 (2018)
- [i9] Sören Mindermann, Rohin Shah, Adam Gleave, Dylan Hadfield-Menell: Active Inverse Reward Design. CoRR abs/1809.03060 (2018)
- [i8] Marc Khoury, Dylan Hadfield-Menell: On the Geometry of Adversarial Examples. CoRR abs/1811.00525 (2018)
- [i7] Dylan Hadfield-Menell, McKane Andrus, Gillian K. Hadfield: Legible Normativity for AI Alignment: The Value of Silly Rules. CoRR abs/1811.01267 (2018)
- [i6] Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, Anca D. Dragan: Human-AI Learning Performance in Multi-Armed Bandits. CoRR abs/1812.09376 (2018)
- 2017
- [c14] Dylan Hadfield-Menell, Anca D. Dragan, Pieter Abbeel, Stuart Russell: The Off-Switch Game. AAAI Workshops 2017
- [c13] Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, Anca D. Dragan: Expressive Robot Motion Timing. HRI 2017: 22-31
- [c12] Dylan Hadfield-Menell, Anca D. Dragan, Pieter Abbeel, Stuart Russell: The Off-Switch Game. IJCAI 2017: 220-227
- [c11] Smitha Milli, Dylan Hadfield-Menell, Anca D. Dragan, Stuart Russell: Should Robots be Obedient? IJCAI 2017: 4754-4760
- [c10] Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan: Pragmatic-Pedagogic Value Alignment. ISRR 2017: 49-57
- [c9] Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart J. Russell, Anca D. Dragan: Inverse Reward Design. NIPS 2017: 6765-6774
- [i5] Smitha Milli, Dylan Hadfield-Menell, Anca D. Dragan, Stuart Russell: Should Robots be Obedient? CoRR abs/1705.09990 (2017)
- [i4] Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan: Pragmatic-Pedagogic Value Alignment. CoRR abs/1707.06354 (2017)
- [i3] Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart Russell, Anca D. Dragan: Inverse Reward Design. CoRR abs/1711.02827 (2017)
- 2016
- [c8] Rohan Chitnis, Dylan Hadfield-Menell, Abhishek Gupta, Siddharth Srivastava, Edward Groshev, Christopher Lin, Pieter Abbeel: Guided search for task and motion plans using learned heuristics. ICRA 2016: 447-454
- [c7] Dylan Hadfield-Menell, Christopher Lin, Rohan Chitnis, Stuart Russell, Pieter Abbeel: Sequential quadratic programming for task plan optimization. IROS 2016: 5040-5047
- [c6] Dylan Hadfield-Menell, Stuart Russell, Pieter Abbeel, Anca D. Dragan: Cooperative Inverse Reinforcement Learning. NIPS 2016: 3909-3917
- [i2] Dylan Hadfield-Menell, Anca D. Dragan, Pieter Abbeel, Stuart Russell: Cooperative Inverse Reinforcement Learning. CoRR abs/1606.03137 (2016)
- [i1] Dylan Hadfield-Menell, Anca D. Dragan, Pieter Abbeel, Stuart Russell: The Off-Switch Game. CoRR abs/1611.08219 (2016)
- 2015
- [c5] Dylan Hadfield-Menell, Alex X. Lee, Chelsea Finn, Eric Tzeng, Sandy H. Huang, Pieter Abbeel: Beyond lowest-warping cost action selection in trajectory transfer. ICRA 2015: 3231-3238
- [c4] Dylan Hadfield-Menell, Edward Groshev, Rohan Chitnis, Pieter Abbeel: Modular task and motion planning in belief space. IROS 2015: 4991-4998
- [c3] Dylan Hadfield-Menell, Stuart Russell: Multitasking: Optimal Planning for Bandit Superprocesses. UAI 2015: 345-354
- 2014
- [c2] Alex X. Lee, Sandy H. Huang, Dylan Hadfield-Menell, Eric Tzeng, Pieter Abbeel: Unifying scene registration and trajectory optimization for learning from demonstrations with application to manipulation of deformable objects. IROS 2014: 4402-4407
- 2013
- [c1] Dylan Hadfield-Menell, Leslie Pack Kaelbling, Tomás Lozano-Pérez: Optimization in the now: Dynamic peephole optimization for hierarchical planning. ICRA 2013: 4560-4567