Google unveiled its latest AI model, Gemini, trained on its own custom silicon, the Tensor Processing Unit (TPU).
The model is available in three versions: Gemini Ultra (the most capable version, for complex text and image analysis), Gemini Nano (for on-device smartphone features), and Gemini Pro (which powers Bard, Google's generative AI chatbot).
According to Google, running on TPUs allows Gemini to operate "significantly faster" than its earlier, smaller models.
Google claims Gemini surpasses OpenAI's GPT-4 in language understanding and code generation.