Google announced that subscribers to its AI chatbot, Gemini Advanced, will be able to use its latest model, Gemini 1.5 Pro, and launched a new feature dubbed "Gemini Live."
With the new model, Gemini Advanced can process and analyze large volumes of data, including up to 1,500 pages of documents, 100 emails, an hour of video content, and codebases with more than 30,000 lines.
Gemini Live's main features include real-time voice conversations and the ability to adapt to users' speech patterns, among other things. It can respond to users' surroundings through images or videos captured by a smartphone camera, and it can also analyze images, interpret computer code, locate items, and offer coaching on topics such as job interviews and public speaking.
The update also adds capabilities such as making plans (including trip itineraries), creating Gems (customized versions of Gemini), and deeper integration with Google apps.
Analyst QuickTake: Gemini Live's capabilities are similar to those of Meta's Ray-Ban smart glasses and OpenAI's recently revamped ChatGPT. Google emphasizes that Gemini constitutes a significant step toward building a universal AI agent capable of understanding and promptly responding to everyday human contexts.