Google DeepMind has introduced Gemini 1.5, a mid-size multimodal model built on a Mixture-of-Experts (MoE) architecture.
The model can handle a context of around one million tokens; it matches Google's Gemini Ultra and surpasses Gemini 1.0 Pro on most benchmark tests, while the MoE approach keeps its compute requirements down.
The model can process vast amounts of information in a single prompt, including one hour of video, 11 hours of audio, codebases with over 30,000 lines of code, or documents of over 700,000 words.
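To put the one-million-token figure in perspective, here is a minimal sketch that estimates whether a piece of text fits in such a context window. The ~4-characters-per-token ratio is a common rough heuristic for English text, not an official Gemini tokenizer rule.

```python
# Rough sketch: estimating whether an input fits in a one-million-token
# context window. The characters-per-token ratio is an assumption.

CONTEXT_WINDOW = 1_000_000  # tokens, per Google's stated Gemini 1.5 limit
CHARS_PER_TOKEN = 4         # rough heuristic for English text


def estimate_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN


def fits_in_context(text: str) -> bool:
    """Check the estimate against the one-million-token window."""
    return estimate_tokens(text) <= CONTEXT_WINDOW


# Example: 700,000 words at roughly 6 characters each (including spaces)
# is about 4.2 million characters, on the order of a million tokens.
sample_chars = 700_000 * 6
print(sample_chars // CHARS_PER_TOKEN)
```

This back-of-the-envelope arithmetic shows why the reported capacity (700,000 words, 30,000-line codebases) lines up with a one-million-token window.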
Google also announced that it is making Gemini models available to developers through its cloud platform, Vertex AI.
The model is currently limited to a select group of developers and early testers.