INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more?
If you're interested in connecting with Igor, head on over to his website, or check out the placeholder for his thesis (it isn't published yet).
Because the full show notes have a whopping 115 additional links, I'll highlight some that I think are particularly worthwhile here:
- The best article you'll ever read on Open Source AI
- The best article you'll ever read on emergence in ML
- Kate Crawford's Atlas of AI (Wikipedia)
- On the Measure of Intelligence
- Thomas Piketty's Capital in the Twenty-First Century (Wikipedia)
- Yurii Nesterov's Introductory Lectures on Convex Optimization
Chapters
- (02:32) - Introducing Igor
- (10:11) - Aside on EY, LW, EA, etc., a.k.a. lettersoup
- (18:30) - Igor on AI alignment
- (33:06) - "Open Source" in AI
- (41:20) - The story of infinite riches and suffering
- (59:11) - On AI threat models
- (01:09:25) - Representation in AI
- (01:15:00) - Hazard fishing
- (01:18:52) - Intelligence and eugenics
- (01:34:38) - Emergence
- (01:48:19) - Considering externalities
- (01:53:33) - The shape of an argument
- (02:01:39) - More eugenics
- (02:06:09) - I'm convinced, what now?
- (02:18:03) - AIxBio (round ??)
- (02:29:09) - On open release of models
- (02:40:28) - Data and copyright
- (02:44:09) - Scientific accessibility and bullshit
- (02:53:04) - Igor's point of view
- (02:57:20) - Outro
Links
Links to all articles and papers mentioned throughout the episode can be found below, in order of their appearance. All references, including those mentioned only in the extended version of this episode, are included.
- Suspicious Machines Methodology, referred to as the "Rotterdam Lighthouse Report" in the episode
- LIONS Lab at EPFL
- The meme that Igor references
- On the Hardness of Learning Under Symmetries
- Course on the concept of equivariant deep learning
- Aside on EY/EA/etc.
- Sources on Eliezer Yudkowsky
- Scholarly Community Encyclopedia
- TIME100 AI
- Yudkowsky's personal website
- EY Wikipedia
- A Very Literary Wiki
- TIME article: Pausing AI Developments Isn't Enough. We Need to Shut It All Down, documenting EY's ruminations on bombing datacenters; this comes up later in the episode but is included here because it is about EY.
- LessWrong
- MIRI
- Coverage on Nick Bostrom (being a racist)
- The Guardian article: ‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute
- The Guardian article: Oxford shuts down institute run by Elon Musk-backed philosopher
- Investigative piece on Émile Torres
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
- NY Times article: We Teach A.I. Systems Everything, Including Our Biases
- NY Times article: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
- Timnit Gebru's Wikipedia
- The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence
- Sources on the environmental impact of LLMs
- Filling Gaps in Trustworthy Development of AI (Igor is an author on this one)
- A Computational Turn in Policy Process Studies: Coevolving Network Dynamics of Policy Change
- The Smoothed Possibility of Social Choice, an intro in social choice theory and how it overlaps with ML
- Relating to Dan Hendrycks
- Natural Selection Favors AIs over Humans
- "One easy-to-digest source to highlight what he gets wrong [is] Social and Biopolitical Dimensions of Evolutionary Thinking" -Igor
- Introduction to AI Safety, Ethics, and Society, recently published textbook
- "Source to the section [of this paper] that makes Dan one of my favs from that crowd." -Igor
- Twitter post referenced in the episode