
Content provided by information labs and Information labs. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by information labs and Information labs or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://hi.player.fm/legal

AI lab TL;DR | Jacob Mchangama - Are AI Chatbot Restrictions Threatening Free Speech?

16:10
Manage episode 448458563 series 3480798

🔍 In this TL;DR episode, Jacob Mchangama (The Future of Free Speech & Vanderbilt University) discusses the high rate of AI chatbot refusals to generate content for controversial prompts, examining how this may conflict with the principles of free speech and access to diverse information.

📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:51] Q1 - How does the high rate of refusal by chatbots to generate content conflict with the principles of free speech and access to information?

⏲️[06:53] Q2 - Could AI chatbot self-censorship conflict with the systemic risk provisions of the Digital Services Act (DSA)?

⏲️[10:20] Q3 - What changes would you recommend to better align chatbot moderation policies with free speech protections?

⏲️[15:18] Wrap-up & Outro

💭 Q1 - How does the high rate of refusal by chatbots to generate content conflict with the principles of free speech and access to information?

🗣️ "This is the first time in human history that new communications technology does not solely depend on human input, like the printing press or radio."

🗣️ "Limiting or restricting the output and even the ability to make prompts will necessarily affect the underlying capability to reinforce free speech, and especially access to information."

🗣️ "If I interact with an AI chatbot, it's me and the AI system, so it seems counterintuitive that the restrictions on AI chatbots are more wide-ranging than those on social media."

🗣️ "Would it be acceptable to ordinary users to say, you're writing a document on blasphemy, and then Word says, 'I can't complete that sentence because it violates our policies'?"

🗣️ "The boundary between freedom of speech being in danger and freedom of thought being affected is a very narrow one."

🗣️ "Under international human rights law, freedom of thought is absolute, but algorithmic restrictions risk subtly interfering with that freedom. (...) These restrictions risk being tentacles into freedom of thought, subtly guiding us in ways we might not even notice."

💭 Q2 - Could AI chatbot self-censorship conflict with the systemic risk provisions of the Digital Services Act (DSA)?

🗣️ "The AI Act includes an obligation to assess and mitigate systemic risk, which could be relevant here regarding generative AI’s impact on free expression."

🗣️ "The AI Act defines systemic risk as a risk that is specific to the high-impact capabilities of general-purpose AI models that could affect public health, safety, or fundamental rights."

🗣️ "The question is whether the interpretation under the AI Act would lean more in a speech-protective or a speech-restrictive manner."

🗣️ "Overly broad restrictions could undermine freedom of expression in the Charter of Fundamental Rights, which is part of EU law."

🗣️ "My instinct is that the AI Act would likely lean in a more speech-restrictive way, but it's too early to say for certain."

💭 Q3 - What changes would you recommend to better align chatbot moderation policies with free speech protections?

🗣️ "Let’s use international human rights law as a benchmark—something most major social media platforms commit to on paper but don’t live up to in practice."

🗣️ "We showed that major social media platforms' hate speech policies have undergone extensive scope creep over the past decade, which does not align with international human rights standards."

🗣️ "It's conceptually more difficult to apply international human rights standards to an AI chatbot because my interaction is private, unlike public speech."

🗣️ "We should avoid adopting a 'harm-oriented' principle to AI chatbots, especially when dealing with disinformation and misinformation, which is often protected under freedom of expression."

🗣️ "It's important to maintain an iterative process with AI systems, where humans remain responsible for how we use and share information, rather than placing all the responsibility on the chatbot."

📌 About Our Guest

🎙️ Jacob Mchangama | The Future of Free Speech & Vanderbilt University

𝕏 https://x.com/@JMchangama

🌐 Article | AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

https://theconversation.com/ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-226596

🌐 The Future of Free Speech

https://futurefreespeech.org

🌐 Jacob Mchangama

http://jacobmchangama.com

Jacob Mchangama is the Executive Director of The Future of Free Speech and a Research Professor at Vanderbilt University. He is also a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE) and author of “Free Speech: A History From Socrates to Social Media”.


24 episodes
