
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

From Lex Fridman Podcast

Length: 20 minutes
Released: Jun 2, 2024
Format: Podcast episode

Description

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get a free product tour
- LMNT: https://drinkLMNT.com/lex to get a free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

EPISODE LINKS:
Roman's X: https://twitter.com/romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above; it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that point.
(00:00) - Introduction
(09:12) - Existential risk of AGI
(15:25) - Ikigai risk
(23:37) - Suffering risk
(27:12) - Timeline to AGI
(31:44) - AGI Turing test
(37:06) - Yann LeCun and open source AI
(49:58) - AI control
(52:26) - Social engineering
(54:59) - Fearmongering
(1:04:49) - AI deception
(1:11:23) - Verification
(1:18:22) - Self-improving AI
(1:30:34) - Pausing AI development
(1:36:51) - AI safety
(1:46:35) - Current AI
(1:51:58) - Simulation
(1:59:16) - Aliens
(2:00:50) - Human mind
(2:07:10) - Neuralink
(2:16:15) - Hope for the future
(2:20:11) - Meaning of life

Conversations at MIT and beyond about the nature of intelligence with some of the most interesting people in the world thinking about AI from the perspective of deep learning, robotics, AGI, neuroscience, philosophy, psychology, cognitive science, economics, physics, mathematics, and more.