All Updates

  • Product updates: Hugging Face launches inference as a service for AI deployment (Generative AI Infrastructure, Jul 29, 2024)
This week:
  • Product updates: Hexagon unveils Advanced Compensation for metal 3D printing (Additive Manufacturing, yesterday)
  • Funding: Eden AI raises EUR 3 million in seed funding to accelerate product development (Generative AI Infrastructure, Nov 21, 2024)
  • M&A: Wiz acquires Dazz to expand cloud security remediation capabilities (Next-gen Cybersecurity, Nov 21, 2024)
  • Partnerships: Immutable partners with Altura to enhance Web3 game development and marketplace solutions (Web3 Ecosystem, Nov 21, 2024)
  • Funding: OneCell Diagnostics raises USD 16 million in Series A funding to enhance cancer diagnostics (Precision Medicine, Nov 21, 2024)
  • Partnerships: BioLineRx and Ayrmid partner to license and commercialize APHEXDA across multiple indications (Precision Medicine, Nov 21, 2024)
  • Product updates: SOPHiA GENETICS announces global launch of MSK-IMPACT powered with SOPHiA DDM (Precision Medicine, Nov 21, 2024)
  • Product updates: Biofidelity launches Aspyre Clinical Test for lung cancer detection (Precision Medicine, Nov 21, 2024)
  • Partnerships: Spendesk partners with Adyen to enhance SMB spend management with banking-as-a-service solution (Business Expense Management, Nov 21, 2024)
  • M&A: Mews acquires Swedish RMS provider Atomize to enhance Hospitality Cloud platform (Travel Tech, Nov 21, 2024)
Generative AI Infrastructure
Jul 29, 2024

Hugging Face launches inference as a service for AI deployment

Product updates
  • Hugging Face has launched an inference-as-a-service product for AI deployment on NVIDIA's DGX Cloud. This service leverages NVIDIA NIM microservices to enhance token efficiency and provide access to popular AI models for developers.

  • The service delivers up to five times better token efficiency, gives developers immediate access to NVIDIA NIM microservices, and supports leading AI models such as Llama 3 and Mistral AI. Developers can prototype and deploy open-source models from the Hugging Face Hub with serverless inference, gaining flexibility and optimized performance on NVIDIA DGX Cloud with minimal infrastructure overhead.

  • Analyst QuickTake: Hugging Face offers integrated MLOps solutions through a platform akin to GitHub for AI code repositories, models, and datasets. Launching inference as a service for foundation models such as Llama 3 and Mistral AI expands its offering into LLMOps.
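For readers unfamiliar with the serverless model, the sketch below shows what calling Hugging Face's hosted Inference API looks like from a developer's side: a single authenticated HTTP request, with no GPU or infrastructure provisioned by the caller. This is a minimal stdlib-only illustration, not the NIM-accelerated DGX Cloud product described above; the endpoint URL follows Hugging Face's public Inference API convention, and the model name and token are placeholders.

```python
import json
import urllib.request

# Public serverless Inference API endpoint pattern (model name is illustrative)
API_URL = "https://api-inference.huggingface.co/models/{model}"

def build_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Construct (but do not send) an authenticated inference request."""
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL.format(model=model),
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # placeholder token; a real HF token is required
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("meta-llama/Meta-Llama-3-8B-Instruct", "Hello", "hf_xxx")
    # With a real token, sending the request returns generated text as JSON:
    # urllib.request.urlopen(req)
    print(req.full_url)
```

The point of the sketch is the shape of the workflow: the caller supplies only a model identifier and a prompt, and all provisioning, scaling, and optimization happen on the provider's side.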
