
Content provided by London Futurists. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by London Futurists or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hi.player.fm/legal

Catastrophe and consent

32:29

In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?
The previous episode looked at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to consider both the very bad and the very good outcomes that could follow from the emergence of AI superintelligence.
Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) A single technical fault caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov applying his human judgement to an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Universal Public Domain Dedication


