#17 How to build a clinically safe Large Language Model - Hippocratic AI, Llama3, Biollama
How do we reach the holy grail of a clinically safe LLM for healthcare? Dev and Doc are back to discuss the news around Meta's Llama 3 model and the potential of healthcare LLMs fine-tuned on top of it, such as BioLlama. We discuss the key steps in building a clinically safe LLM for healthcare and how Hippocratic AI pursued them in its latest model, Polaris.

👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

The podcast 🎙️
🔊 Spotify: https://podcasters.spotify.com/pod/show/devanddoc
📙 Substack: https://aiforhealthcare.substack.com/

Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

References
Hippocratic AI LLM - https://arxiv.org/pdf/2403.13313
BioLLM tweet - https://twitter.com/aadityaura/status/1783662626901528803
Foresight Lancet paper - https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00025-6/fulltext
Language Processing Units (LPUs) - https://wow.groq.com/lpu-inference-engine/

Timestamps
00:00 Start
01:10 Intro - Llama 3, a ChatGPT-level model in our hands
06:53 Language Processing Units (LPUs) to run LLMs
09:42 BioLLM for medical question answering
11:13 Quality and size of the dataset; using YouTube transcripts
12:41 Question-answer pairs do not reflect the real world - the holy grail of a healthcare LLM
18:43 Dev has beef with Hippocratic AI
20:25 Step 1: Training a clinical foundation model from scratch
22:43 Step 2: Instruction tuning with multi-turn simulated conversations (see the sketch below)
24:15 Step 3: Training the model to guide the conversation back from tangents
27:42 Focusing on the hospital back office and specialist nurse phone calls
33:02 Evaluating Polaris - LLM clinical safety, bedside manner, medical safety advice
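If you want to poke at "Step 2" yourself, here is a minimal sketch of preparing one simulated multi-turn patient/AI-nurse conversation for supervised instruction tuning. This is not Hippocratic AI's actual pipeline: the model name, the dialogue content, and the use of Hugging Face's chat-template API are illustrative assumptions only.

```python
# Hedged sketch: format a simulated multi-turn conversation for supervised
# instruction tuning. Model name and dialogue are illustrative assumptions,
# not Hippocratic AI's data or pipeline.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# One simulated multi-turn conversation in chat-message format.
conversation = [
    {"role": "system", "content": "You are a post-discharge follow-up nurse. Escalate anything urgent to a clinician."},
    {"role": "user", "content": "I've had some swelling in my left leg since the operation."},
    {"role": "assistant", "content": "Thanks for letting me know. Is the leg also painful, red, or warm to the touch?"},
    {"role": "user", "content": "It's quite tender, yes."},
    {"role": "assistant", "content": "Tender one-sided swelling after surgery can signal a blood clot, so please contact your surgical team or urgent care today."},
]

# Render the turns with the model's chat template. The resulting token IDs are
# what a supervised fine-tuning loop (e.g. trl's SFTTrainer) would train on,
# typically with the loss restricted to the assistant turns.
input_ids = tokenizer.apply_chat_template(conversation, tokenize=True, return_tensors="pt")
print(input_ids.shape)
```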