All Updates

  • Product updates | Kinara Inc launches Ara-2 Edge AI processor to deliver high performance for LLM models | Generative AI Infrastructure | Dec 12, 2023

This week:

  • Partnerships | Microsoft and BlackRock partner to launch USD 30 billion AI data center investment fund | Machine Learning Infrastructure | Yesterday
  • Funding | Limitless Labs raises USD 3 million in pre-seed funding to develop prediction market | Web3 Ecosystem | Yesterday
  • Product updates | Google Cloud launches Blockchain RPC service for Web3 developers | Web3 Ecosystem | Yesterday
  • Product updates | Kore.ai launches GALE platform for enterprise GenAI adoption | Machine Learning Infrastructure, Generative AI Infrastructure | Yesterday
  • Partnerships | Climeworks partners with Terraset to enable philanthropic support for carbon removal | Carbon Capture, Utilization & Storage (CCUS) | Sep 17, 2024
  • Funding | 8 Rivers secures investment from JX Nippon to commercialize DAC technology | Carbon Capture, Utilization & Storage (CCUS) | Sep 17, 2024
  • Product updates | ProAmpac launches enhanced online pouch configurator MAKR by DASL for custom flexible packaging prototypes | Smart Packaging Tech | Sep 17, 2024
  • Funding, M&A | Majority stake in Bollegraaf Group acquired by Summa Equity for EUR 800 million | Waste Recovery & Management Tech | Sep 17, 2024
  • Partnerships | NASA awards Intuitive Machines contract for near-space network services | Space Travel and Exploration Tech | Sep 17, 2024
Generative AI Infrastructure | Product updates | Dec 12, 2023

Kinara Inc launches Ara-2 Edge AI processor to deliver high performance for LLM models
  • Kinara, Inc. has introduced the Ara-2 Edge AI processor, designed for edge servers and laptops and tailored for running applications such as video analytics, LLMs, and other GenAI models.

  • Kinara claims a 5–8x performance increase over its predecessor, the Ara-1. The processor aims to provide real-time responsiveness, high throughput, and low latency when executing large AI models. Notably, it supports traditional AI models as well as state-of-the-art models with transformer-based architectures.

  • Key features include on-chip memories, high off-chip bandwidth, and support for the tens of billions of parameters used by GenAI models (see the back-of-envelope sketch below for what parameter counts of this size imply for edge inference).
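Parameter counts of that size translate directly into memory and bandwidth requirements, which is why on-chip memory and off-chip bandwidth are the headline items for an edge LLM processor. The sketch below is a generic back-of-envelope calculation, not based on published Ara-2 specifications; the 60 GB/s bandwidth figure and the model sizes are illustrative assumptions only.

# Back-of-envelope sketch of edge LLM memory and bandwidth requirements.
# All figures are illustrative assumptions, not Ara-2 specifications.

def weight_footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    """GB needed to hold the model weights at a given quantization level."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def decode_ceiling_tokens_per_s(footprint_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on autoregressive decode rate when every generated token
    requires streaming the full weight set from off-chip memory."""
    return bandwidth_gb_s / footprint_gb

ASSUMED_BANDWIDTH_GB_S = 60  # hypothetical off-chip bandwidth, for illustration only

for params_b, bits in [(7, 4), (13, 4), (30, 4)]:
    fp = weight_footprint_gb(params_b, bits)
    rate = decode_ceiling_tokens_per_s(fp, ASSUMED_BANDWIDTH_GB_S)
    print(f"{params_b}B params @ {bits}-bit: ~{fp:.1f} GB of weights, "
          f"~{rate:.0f} tokens/s bandwidth-bound ceiling")

Under these assumptions, a 4-bit 30B-parameter model already needs roughly 15 GB just for weights, so large on-chip memories and high off-chip bandwidth, rather than raw compute alone, tend to determine how responsive LLM inference feels on edge hardware.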
