Hugging Face, a platform for machine learning (ML) and natural language processing, has entered a strategic partnership with Google Cloud that lets developers on the Hugging Face platform leverage Google Cloud's infrastructure.
Developers will be able to use Google Cloud's AI-optimized infrastructure, including GPUs and TPUs, to train and serve open models and build new generative AI (GenAI) applications. Additionally, developers can train and deploy Hugging Face models on Google Cloud's Vertex AI directly from the Hugging Face platform.
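As a rough illustration of what deploying a Hugging Face model on Vertex AI can look like, the sketch below uses the google-cloud-aiplatform Python SDK. The project ID, region, serving-container image URI, machine type, and accelerator choice are placeholder assumptions, not details from the announcement; the HF_MODEL_ID/HF_TASK environment variables follow the convention used by Hugging Face inference containers.

```python
# Minimal sketch: serving a Hugging Face model on a Vertex AI endpoint
# via the google-cloud-aiplatform SDK. Project, region, and container
# image URI are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Register the model with Vertex AI, pointing it at a serving container
# that wraps the Hugging Face model (image URI is a placeholder).
model = aiplatform.Model.upload(
    display_name="distilbert-sentiment",
    serving_container_image_uri="us-docker.pkg.dev/my-gcp-project/serving/hf-inference:latest",
    serving_container_environment_variables={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
)

# Deploy to a GPU-backed endpoint; machine and accelerator types are
# illustrative choices only.
endpoint = model.deploy(
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
)

# Send a test request to the deployed endpoint.
prediction = endpoint.predict(instances=[{"inputs": "Vertex AI makes serving easy."}])
print(prediction.predictions)
```

In practice, the partnership's value is that steps like these can be initiated directly from the Hugging Face platform rather than scripted by hand.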
In addition, developers can use the Google Cloud Marketplace to manage and bill for the Hugging Face managed platform, including Inference Endpoints and AutoTrain.