All Updates
  • [Product updates] Anthropic introduces prompt caching for Claude API (Foundation Models, Aug 14, 2024)

This week:

  • [Partnerships] T-Mobile partners with OpenAI to develop AI-powered customer service platform (Generative AI Applications, Yesterday)
  • [Partnerships] Runway partners with Lionsgate to develop AI video tools using studio's movie catalog (Generative AI Applications, Sep 18, 2024)
  • [Funding] QMill raises EUR 4 million in seed funding to provide quantum computing industrial applications (Quantum Computing, Sep 18, 2024)
  • [Product updates] QuiX Quantum launches 'Bia' quantum cloud computing service for quantum solutions (Quantum Computing, Sep 18, 2024)
  • [Partnerships] Oxford Ionics and Infineon Technologies partner to build portable quantum computer for Cyberagentur (Quantum Computing, Sep 18, 2024)
  • [Partnerships, Product updates] Tencent AI Lab launches EzAudio AI for text-to-audio generation with Johns Hopkins University (Foundation Models, Sep 18, 2024)
  • [Funding] TON secures USD 30 million in investment from Bitget and Foresight Ventures (Web3 Ecosystem, Sep 18, 2024)
  • [Funding] Hemi Labs raises USD 15 million in funding to launch blockchain network (Web3 Ecosystem, Sep 18, 2024)
  • [Product updates] Fivetran launches Hybrid Deployment for data pipeline management (Machine Learning Infrastructure; Data Infrastructure & Analytics, Sep 18, 2024)
Foundation Models | Product updates | Aug 14, 2024

Anthropic introduces prompt caching for Claude API

  • Anthropic, a company specializing in developing AI models and apps, has launched prompt caching for its Claude API in a public beta.

  • The feature lets developers cache frequently used context between API calls for the Claude 3.5 Sonnet and Claude 3 Haiku models, with support for Claude 3 Opus coming soon. Pricing for cached prompts is based on the number of input tokens cached and how frequently the cached content is used.

  • Prompt caching enables developers to provide Claude with more background knowledge and example outputs without resending them on every call. Anthropic claims the feature is effective for conversational agents, coding assistants, large-document processing, detailed instruction sets, agentic search and tool use, and interacting with long-form content.
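To make the mechanics concrete, the sketch below builds a Messages API request that marks a large, reusable context block for caching. It reflects Anthropic's public-beta conventions at the time of the announcement (the `anthropic-beta: prompt-caching-2024-07-31` header and a `cache_control` field on content blocks); exact header and field names are beta details and may change, and the model name, API key, and document text are placeholders.

```python
import json

# Large, reusable context (e.g. a long document or detailed instruction set).
# Marking it with "cache_control" lets subsequent calls reuse the cached
# prefix instead of reprocessing it from scratch.
LARGE_CONTEXT = "...full text of a long document or instruction set..."

headers = {
    "x-api-key": "YOUR_API_KEY",                      # placeholder
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "prompt-caching-2024-07-31",    # enables the beta feature
    "content-type": "application/json",
}

payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    # The system prompt is split into blocks; only the block carrying
    # "cache_control" is written to / read from the prompt cache.
    "system": [
        {
            "type": "text",
            "text": "You answer questions about the document below.",
        },
        {
            "type": "text",
            "text": LARGE_CONTEXT,
            "cache_control": {"type": "ephemeral"},
        },
    ],
    "messages": [
        {"role": "user", "content": "Summarize the key points of the document."}
    ],
}

# This JSON body would be POSTed to https://api.anthropic.com/v1/messages;
# repeated requests sharing the same cached prefix hit the cache.
body = json.dumps(payload)
```

Only the marked prefix is cached; the trailing user message can vary between calls, which is what makes the feature useful for chat agents and repeated queries over the same document.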
