
Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://hi.player.fm/legal

LW - Yoshua Bengio: Reasoning through arguments against taking AI safety seriously by Judd Rosenblatt

1:39
 
Manage episode 428518881 series 3337129
Link to original article
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Yoshua Bengio: Reasoning through arguments against taking AI safety seriously, published by Judd Rosenblatt on July 12, 2024 on LessWrong.

He starts by emphasizing:

The issue is so hotly debated because the stakes are major: According to some estimates, quadrillions of dollars of net present value are up for grabs, not to mention political power great enough to significantly disrupt the current world order. [...] The most important thing to realize, through all the noise of discussions and debates, is a very simple and indisputable fact: while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans.

He then goes on to do a pretty great job addressing:

- those who think AGI and ASI are impossible or are centuries in the future
- those who think AGI is possible, but only in many decades
- those who think that we may reach AGI but not ASI
- those who think that AGI and ASI will be kind to us
- those who think that corporations will only design well-behaving AIs and existing laws are sufficient
- those who think that we should accelerate AI capabilities research and not delay benefits of AGI
- those who think that talking about catastrophic risks will hurt efforts to mitigate short-term human-rights issues with AI
- those concerned with the US-China cold war
- those who think that international treaties will not work
- those who think the genie is out of the bottle and we should just let go and avoid regulation
- those who think that open-source AGI code and weights are the solution
- those who think worrying about AGI is falling for Pascal's wager

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

1801 episodes


