Hundred Year Podcast

Stanford PhD explains why AI isn’t going to destroy humanity, but why we need to make it safer

July 16, 2024 · Irreverent Labs · Season 1, Episode 48

Show Notes

Duncan Eddy has spent years working in the realm of space satellite communications, and now he’s directing his talents toward AI as the Executive Director of the Stanford Center for AI Safety. In this episode, Duncan speaks with Adario Strange about why the commercialization of space will continue to fuel our exploration of the Moon and Mars, and how AI-powered robots may become the primary means of deep space exploration in the future. The discussion then turns to AI safety and the algorithm the Stanford group developed to help guide the technology in the right direction. Finally, the topic of AI superintelligence comes up, and you may be surprised at what Duncan has to say about it given his role as an AI safety advocate.

You can find out more about the Stanford Center for AI Safety here:
https://aisafety.stanford.edu

#AI #artificialintelligence #software #siliconvalley #elonmusk #spacex #jeffbezos #blueorigin #space #spacetravel #robots #agi #superintelligence #aisafety #sciencefiction #scifi

Subscribe to our newsletter!
Hundred Year Lens | Adario Strange | Substack

Visit our Podcast site!
Hundred Year Podcast

Intro/Outro Music by Karl Casey @ White Bat Audio