All Updates
Product updates: OctoML launches self-optimizing compute service for AI (Machine Learning Infrastructure, Jun 14, 2023)

This week:

M&A: N-able acquires Adlumin for USD 266 million to strengthen cybersecurity offerings (Next-gen Cybersecurity, Today)
M&A: Bitsight acquires Cybersixgill for USD 115 million to enhance threat intelligence capabilities (Cyber Insurance, Today)
M&A: Snowflake acquires Datavolo to enhance data integration capabilities for undisclosed sum (Generative AI Infrastructure; Data Infrastructure & Analytics, Today)
Product updates: Microsoft launches Copilot Actions for workplace automation (Foundation Models, Yesterday)
M&A: Almanac acquires Gro Intelligence's IP assets for undisclosed sum (Smart Farming, Yesterday)
Partnerships: Aduro Clean Technologies partners with Zeton to build hydrochemolytic pilot plant (Waste Recovery & Management Tech, Yesterday)
Funding: Oishii raises USD 16 million in Series B funding from Resilience Reserve (Vertical Farming, Yesterday)
Management news: GrowUp Farms appoints Mike Hedges as CEO (Vertical Farming, Yesterday)
M&A: Rise Up acquires Yunoo and expands LMS monetization capabilities (EdTech: Corporate Learning, Yesterday)
Machine Learning Infrastructure

Jun 14, 2023

OctoML launches self-optimizing compute service for AI

Product updates

  • OctoML, an ML model optimization and deployment platform, has launched the latest iteration of its services, OctoAI. This self-optimizing infrastructure service is designed to help companies build and deploy AI applications, with a particular emphasis on generative AI.

  • OctoAI is a managed compute service that lets businesses take pre-existing open-source models, refine them on their own data, and host the resulting personalized models. Users state what they want to prioritize, such as latency or cost, and OctoAI automatically determines the appropriate hardware for their needs (see the illustrative sketch after these bullets).

  • Moreover, the service automatically optimizes these models, resulting in additional cost savings and performance improvements. It also determines the most suitable platform for running them, whether NVIDIA's GPUs or AWS' Inferentia chips.

  • The new platform also provides access to a library of popular open-source models, including large language models (LLMs) such as Dolly v2, LLaMA 65B, FLAN-UL2, and Vicuna, alongside Stable Diffusion 2.1 and Whisper, which developers can use to build their AI applications.
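
The announcement does not document OctoAI's programming interface, so the sketch below is purely illustrative: a minimal Python example of the "state a latency or cost priority and let the service pick the hardware" idea described above. Every name (DeploymentPreferences, choose_hardware, HARDWARE_CATALOG) and every latency and cost figure is hypothetical and is not OctoAI's actual API or pricing.

# Hypothetical sketch only; not OctoAI's SDK. All numbers are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeploymentPreferences:
    priority: str                             # "latency" or "cost"
    max_latency_ms: Optional[float] = None    # optional latency budget
    max_cost_per_1k: Optional[float] = None   # optional cost budget per 1,000 requests

# Toy catalog of hardware targets with made-up latency/cost trade-offs.
HARDWARE_CATALOG = [
    {"name": "nvidia-a10g",     "latency_ms": 45.0, "cost_per_1k": 0.80},
    {"name": "nvidia-a100",     "latency_ms": 20.0, "cost_per_1k": 2.40},
    {"name": "aws-inferentia2", "latency_ms": 60.0, "cost_per_1k": 0.35},
]

def choose_hardware(prefs: DeploymentPreferences) -> dict:
    """Pick the fastest target within an optional cost budget, or the cheapest
    target within an optional latency budget, mirroring the latency-vs-cost
    prioritization described in the bullets above."""
    if prefs.priority == "latency":
        candidates = [h for h in HARDWARE_CATALOG
                      if prefs.max_cost_per_1k is None or h["cost_per_1k"] <= prefs.max_cost_per_1k]
        return min(candidates, key=lambda h: h["latency_ms"])
    candidates = [h for h in HARDWARE_CATALOG
                  if prefs.max_latency_ms is None or h["latency_ms"] <= prefs.max_latency_ms]
    return min(candidates, key=lambda h: h["cost_per_1k"])

if __name__ == "__main__":
    # Prioritizing cost in this toy catalog selects the Inferentia target.
    print(choose_hardware(DeploymentPreferences(priority="cost")))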
