Inside Nano Banana 🍌 and the Future of Vision-Language Models with Oliver Wang - #748
Today, we're joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image, better known by its code name, "Nano Banana." We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini's world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day "think" and "reason" in images.
The complete show notes for this episode can be found at https://twimlai.com/go/748.
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
777 episodes