Google is expanding its AI capabilities with Gemini Live, a new voice assistant designed for natural, human-like conversation. The company plans to integrate the assistant into its latest Pixel smartphones and to offer it to other Android users through a subscription.
Rick Osterloh, who oversees Android, Chrome, and hardware at Google, emphasized in an exclusive interview that Gemini Live was designed to respond to user queries in near real time and to communicate like a human. "Our focus was on improving system performance to enable quick and smooth interactions," said Osterloh.
Gemini Live sets itself apart from other voice assistants, such as Apple's Siri or Amazon's Alexa, by reducing contextual misunderstandings and handling interruptions gracefully. Users can, for example, correct the assistant in the middle of a response, which makes interactions more intuitive.
Despite these advances, Gemini Live still has limitations. Many basic functions, such as setting timers or alarms, are not yet available because the assistant runs entirely in the cloud rather than locally on the device. Google says it is working to integrate these features soon.
The integration of Gemini Live into Google's new Pixel devices, along with its availability for a monthly fee, shows how heavily Google is investing in the adoption of AI-based tools. As AI increasingly becomes an integral part of smartphones, Google faces a market that is not yet fully prepared to embrace these technologies. Osterloh acknowledges that there will be a period of adjustment, similar to the transition from handwritten letters to email.
With Gemini Live, Google sets another milestone in AI development while further extending its lead over competitors like Apple and Amazon. However, the true challenge may lie in convincing users of the benefits of these new technologies without losing the human connection.