Google used its annual Google I/O developer conference to unveil a series of additions to its Gemma family of models. The event's highlight was the announcement of Gemma 2, the next-generation iteration of its open-weight models, with a 27-billion-parameter model slated for release in June.
PaliGemma, a variant of Gemma capable of image captioning, image labeling, and visual Q&A, is already available and is the first vision-language model in the Gemma family. The new 27-billion-parameter Gemma 2 represents a significant leap from the earlier Gemma models, which came in only 2-billion- and 7-billion-parameter sizes.