
Dan Hendrycks (born 1994 or 1995[1]) is an American machine learning researcher. He serves as the director of the Center for AI Safety.

Dan Hendrycks
Born: 1994 or 1995 (age 29–30)
Education: University of Chicago (B.S., 2018); University of California, Berkeley (Ph.D., 2022)
Institutions: UC Berkeley; Center for AI Safety

Early life and education


Hendrycks was raised in a Christian evangelical household in Marshfield, Missouri.[2][3] He received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley, in 2022.[4]

Career and research


Hendrycks' research focuses on topics that include machine learning safety, machine ethics, and robustness.

He credits the 80,000 Hours program, which is linked to the effective altruism (EA) movement, with directing his career focus toward AI safety, though he has denied being an advocate for EA.[2]

In February 2022, Hendrycks co-authored recommendations for the US National Institute of Standards and Technology (NIST) to inform the management of risks from artificial intelligence.[5][6]

In September 2022, Hendrycks wrote a paper providing a framework for analyzing the impact of AI research on societal risks.[7][8] He later published a paper in March 2023 examining how natural selection and competitive pressures could shape the goals of artificial agents.[9][10][11] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risks: malicious use, AI race dynamics, organizational risks, and rogue AI agents.[12][13]

Hendrycks is the safety adviser at xAI, an AI startup founded by Elon Musk in 2023. To avoid any potential conflict of interest, he receives a symbolic one-dollar salary and holds no company equity.[1][14] As of November 2024, he is also an advisor at Scale AI.[15]

In 2024, Hendrycks published a 568-page book, "Introduction to AI Safety, Ethics, and Society", based on courseware he had previously developed.[16]

Selected publications

  • Hendrycks, Dan; Gimpel, Kevin (2020-07-08). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG].
  • Hendrycks, Dan; Gimpel, Kevin (2018-10-03). "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks". International Conference on Learning Representations 2017. arXiv:1610.02136.
  • Hendrycks, Dan; Mazeika, Mantas; Dietterich, Thomas (2019-01-28). "Deep Anomaly Detection with Outlier Exposure". International Conference on Learning Representations 2019. arXiv:1812.04606.
  • Hendrycks, Dan; Mazeika, Mantas; Zou, Andy (2021-10-25). "What Would Jiminy Cricket Do? Towards Agents That Behave Morally". Conference on Neural Information Processing Systems 2021. arXiv:2110.13136.

References

  1. Henshall, Will (September 7, 2023). "Time 100 AI: Dan Hendrycks". Time.
  2. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed". The Boston Globe. Archived from the original on July 8, 2023.
  3. Castaldo, Joe (June 23, 2023). "'I hope I'm wrong': Why some experts see doom in AI". The Globe and Mail.
  4. "Dan Hendrycks". people.eecs.berkeley.edu. Retrieved 2023-04-14.
  5. "Nvidia moves into A.I. services and ChatGPT can now use your credit card". Fortune. Retrieved 2023-04-13.
  6. "Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan: Responses" (PDF). National Artificial Intelligence Initiative. March 2022.
  7. Hendrycks, Dan; Mazeika, Mantas (2022-06-13). "X-Risk Analysis for AI Research". arXiv:2206.05862v7 [cs.CY].
  8. Gendron, Will. "An AI safety expert outlined a range of speculative doomsday scenarios, from weaponization to power-seeking behavior". Business Insider. Retrieved 2023-05-07.
  9. Hendrycks, Dan (2023-03-28). "Natural Selection Favors AIs over Humans". arXiv:2303.16200 [cs.CY].
  10. Colton, Emma (2023-04-03). "AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns". Fox News. Retrieved 2023-04-14.
  11. Klein, Ezra (2023-04-07). "Why A.I. Might Not Take Your Job or Supercharge the Economy". The New York Times. Retrieved 2023-04-14.
  12. Hendrycks, Dan; Mazeika, Mantas; Woodside, Thomas (2023). "An Overview of Catastrophic AI Risks". arXiv:2306.12001 [cs.CY].
  13. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed". The Boston Globe. Retrieved July 10, 2023.
  14. Lovely, Garrison (January 22, 2024). "Can Humanity Survive AI?". Jacobin.
  15. Goldman, Sharon (2024-11-14). "Elon Musk's xAI safety whisperer just became an advisor to Scale AI". Fortune. Retrieved 2024-11-14.
  16. "AI Safety, Ethics, and Society Textbook". www.aisafetybook.com. Retrieved 9 May 2024.