Google introduced Gemma, a new family of lightweight open-weight models available in 2B and 7B parameter sizes, which can be used for both research and commercial purposes.
The Gemma models follow a dense decoder-only architecture. Developers can get started with ready-to-use Colab and Kaggle notebooks, as well as integrations with platforms such as Hugging Face, MaxText, and NVIDIA's NeMo. The models are released in pre-trained and instruction-tuned variants and can run across a wide range of environments.
Gemma models are optimized for Google Cloud and can run across several device types, including laptops, desktops, IoT devices, mobile, and cloud. Google has also partnered with NVIDIA to optimize Gemma for NVIDIA GPUs. Additionally, Google is rolling out a new Responsible Generative AI Toolkit and a model debugging tool.
Analyst QuickTake: This news comes a week after Google launched its latest Gemini 1.5 model with multimodal capabilities. Gemma, developed by Google DeepMind together with several other Google teams, is built from the same research and technology used to create the Gemini models. Google claims these open models give developers and researchers broad access to customize and fine-tune them to their requirements, although they are not categorized as open-source.