LW - The Best Lay Argument is not a Simple English Yud Essay by J Bostock
Epistemic status: these are my own opinions on AI risk communication, based primarily on my own instincts on the subject and discussions with people less involved with rationality than myself. Communication is highly subjective and I have not rigorously A/B tested messaging. I am even less confident in the quality of my responses than in the correctness of my critique.
If they turn out to be true, these thoughts can probably be applied to all sorts of communication beyond AI risk.
Lots of work has gone into trying to explain AI risk to laypersons. Overall, I think it's been great, but there's a particular trap that I've seen people fall into a few times. I'd summarize it as simplifying and shortening the text of an argument without enough thought for the information content. It comes in three forms.
One is forgetting to adapt concepts for someone with a far inferential distance; another is forgetting to filter for the important information; the third is rewording an argument so much you fail to sound like a human being at all.
I'm going to critique three examples which I think typify these:
Failure to Adapt Concepts
I got this from the summaries of AI risk arguments written by Katja Grace and Nathan Young here. I'm assuming these summaries are meant to be accessible to laypersons, since most of them read that way. This one stands out as not having been optimized at the concept level. This argument was below-average in effectiveness when tested.
I expect most people's reaction to point 2 would be "I understand all those words individually, but not together". It's a huge dump of conceptual information all at once which successfully points to the concept in the mind of someone who already understands it, but is unlikely to introduce that concept to someone's mind.
Here's an attempt to do better:
1. So far, humans have mostly developed technology by understanding the systems which the technology depends on.
2. AI systems developed today are instead created by machine learning. This means that the computer learns to produce certain desired outputs, but humans do not tell the system how it should produce the outputs. We often have no idea how or why an AI behaves in the way that it does.
3. Since we don't understand how or why an AI works a certain way, it could easily behave in unpredictable and unwanted ways.
4. If the AI is powerful, then the consequences of unwanted behaviour could be catastrophic.
And here's Claude's just for fun:
1. Up until now, humans have created new technologies by understanding how they work.
2. The AI systems made in 2024 are different. Instead of being carefully built piece by piece, they're created by repeatedly tweaking random systems until they do what we want. This means the people who make these AIs don't fully understand how they work on the inside.
3. When we use systems that we don't fully understand, we're more likely to run into unexpected problems or side effects.
4. If these not-fully-understood AI systems become very powerful, any unexpected problems could potentially be really big and harmful.
I think it gets points 1 and 3 better than mine, but 2 and 4 worse. Either way, I think we can improve upon the summary.
Failure to Filter Information
When you condense an argument down, you make it shorter. This is obvious. What is not always as obvious is that this means you have to throw out information to make the core point clearer. Sometimes the information that gets kept is distracting. Here's an example from a poster a friend of mine made for Pause AI:
When I showed this to my partner, they said "This is very confusing, it makes it look like an AGI is an AI which makes a chess AI". Making more AI...