Audio versions of posts at https://www.planned-obsolescence.org/, read by an AI trained on the author's voice. Please excuse any stiltedness -- it's learning!
The AI Show is a groundbreaking series hosted by Emily Barrett, AI Lead & HPC System Architect for Lenovo, that explores the profound impact of artificial intelligence across diverse sectors. Through captivating conversations with industry pioneers, academic luminaries, and visionary thinkers, this compelling show unravels the infinite possibilities and complexities of AI technology. Emily engages with renowned experts, uncovering untapped career prospects, the critical role of training, and ...
Dangerous capability tests should be harder (8:43)
We should be spending less time proving today’s AIs are safe and more time figuring out how to tell if tomorrow’s AIs are dangerous: planned-obsolescence.org/dangerous-capability-tests-should-be-harder
By Ajeya Cotra, Kelsey Piper
AI Meets Psychology, Decoding Human Behaviour Like Never Before (25:19)
In this exciting episode of "The AI Show," we delve into the fascinating intersection of artificial intelligence and psychology. Join us as we explore how AI is revolutionising our understanding of human behaviour, providing unprecedented insights into the mind. Our guest, John Scott, shares his expertise on how AI technologies are being used at Cu…
The AI Show - Episode 1 - Clare Walsh - Audio Only (24:38)
Join us on an enlightening journey into the future of education and ethics in AI with our first episode of the AI Show (audio only). Host, Emily Barrett, sits down with the remarkable Clare Walsh for an intimate conversation that bridges the gap between humanity and technology. Discover the untapped career prospects flourishing within AI, understan…
This startlingly fast progress in LLMs was driven both by scaling up LLMs and by doing the schlep needed to make usable systems out of them. We think scale and schlep will both improve rapidly: planned-obsolescence.org/scale-schlep-and-systems
By Ajeya Cotra, Kelsey Piper
Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025: https://planned-obsolescence.org/language-models-surprised-us
By Ajeya Cotra, Kelsey Piper
Most new technologies don’t accelerate the pace of economic growth. But advanced AI might do this by massively increasing the research effort going into developing new technologies.
By Ajeya Cotra, Kelsey Piper
Both AI fears and AI hopes rest on the belief that it may be possible to build alien minds that can do everything we can do and much more. AI-driven technological progress could save countless lives and make everyone massively healthier and wealthier: https://planned-obsolescence.org/the-costs-of-caution…
Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year: https://planned-obsolescence.org/continuous-doesnt-mean-slow
By Ajeya Cotra, Kelsey Piper
Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress: https://planned-obsolescence.org/ais-accelerating-ai-research
By Ajeya Cotra, Kelsey Piper
The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more: https://www.planned-obsolescence.org/is-it-time-for-a-pause/…
If we’ve decided we’re collectively fine with unleashing millions of spam bots, then the least we can do is actually study what they can – and can’t – do: https://www.planned-obsolescence.org/ethics-of-red-teaming/
By Ajeya Cotra
We’re trying to think ahead to a possible future in which AI is making all the most important decisions: https://www.planned-obsolescence.org/what-were-doing-here/
By Ajeya Cotra
"Aligned" shouldn't be a synonym for "good" (6:14)
Perfect alignment just means that AI systems won’t want to deliberately disregard their designers' intent; it's not enough to ensure AI is good for the world: https://www.planned-obsolescence.org/aligned-vs-good/
By Ajeya Cotra
AI systems that have a precise understanding of how they’ll be evaluated and what behavior we want them to display will earn more reward than AI systems that don’t: https://www.planned-obsolescence.org/situational-awareness/
By Ajeya Cotra
We're creating incentives for AI systems to make their behavior look as desirable as possible, while intentionally disregarding human intent when that conflicts with maximizing reward: https://www.planned-obsolescence.org/the-training-game/
By Ajeya Cotra
If we can accurately recognize good performance on alignment, we could elicit lots of useful alignment work from our models, even if they're playing the training game: https://www.planned-obsolescence.org/training-ais-to-help-us-align-ais/
By Ajeya Cotra
Many fellow alignment researchers may be operating under radically different assumptions from you: https://www.planned-obsolescence.org/disagreement-in-alignment/
By Ajeya Cotra