
Content provided by London Futurists. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by London Futurists or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hi.player.fm/legal

The 4 Cs of Superintelligence

Duration: 32:39
Manage episode 366223699 series 3390521

The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.
Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth
*) A concern for short-term risk is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending
The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


81 episodes

