Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hi.player.fm/legal.
LW - Shane Legg's necessary properties for every AGI Safety plan by jacquesthibs
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shane Legg's necessary properties for every AGI Safety plan, published by jacquesthibs on May 1, 2024 on LessWrong.

I've been going through the FAR AI videos from the alignment workshop in December 2023. I'd like people to discuss their thoughts on Shane Legg's "necessary properties" that every AGI safety plan needs to satisfy. The talk is only 5 minutes; give it a listen. Otherwise, here are some of the details.

All AGI safety plans must solve these problems (necessary properties to meet at the human level or beyond):

1. A good world model
2. Good reasoning
3. A specification of the values and ethics to follow

All of these require good capabilities, meaning capabilities and alignment are intertwined. Shane thinks future foundation models will solve conditions 1 and 2 at the human level. That leaves condition 3, which he sees as solvable if you want fairly normal human values and ethics.

Shane basically thinks that if the above necessary properties are satisfied at a competent human level, then we can construct an agent that will consistently choose the most value-aligned actions, and you can do this via a cognitive loop that scaffolds the agent. Shane says at the end of his talk: "If you think this is a terrible idea, I want to hear from you. Come talk to me afterwards and tell me what's wrong with this idea."

Since many of us weren't at the workshop, I figured I'd share the talk here to discuss it on LW.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
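The "cognitive loop" idea can be sketched very roughly: the foundation model supplies candidate actions (drawing on its world model and reasoning, conditions 1 and 2), and a separate value specification (condition 3) scores each candidate before the agent commits. This is a minimal, hypothetical illustration, not an implementation from the talk; all names and the toy scoring function are assumptions.

```python
# Hypothetical sketch of a value-scaffolded cognitive loop.
# propose() stands in for the capabilities side (world model + reasoning);
# value_score() stands in for the value/ethics specification.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    action: str
    predicted_outcome: str  # what the world model expects to happen

def cognitive_loop(
    propose: Callable[[str], List[Candidate]],
    value_score: Callable[[Candidate], float],
    situation: str,
) -> Candidate:
    """Generate candidate actions, then pick the most value-aligned one."""
    candidates = propose(situation)
    return max(candidates, key=value_score)

# Toy stand-ins for the two components:
def toy_propose(situation: str) -> List[Candidate]:
    return [
        Candidate("deceive user", "short-term gain, trust destroyed"),
        Candidate("answer honestly", "accurate information, trust preserved"),
    ]

def toy_value_score(c: Candidate) -> float:
    # A real value specification would be vastly richer; here honesty wins.
    return 1.0 if "honest" in c.action else 0.0

chosen = cognitive_loop(toy_propose, toy_value_score, "user asks a question")
print(chosen.action)  # "answer honestly"
```

The point of the scaffold is that action selection is gated by the value specification at every step, rather than being left to the base model's unconstrained output.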
1655 episodes