All Updates

Product updates
Partnerships
Cerebras Systems unveils new 'Wafer Scale Engine 3' AI chip; announces partnership with Qualcomm
Generative AI Infrastructure
Mar 13, 2024
This week:
M&A
N-able acquires Adlumin for USD 266 million to strengthen cybersecurity offerings
Next-gen Cybersecurity
Today
M&A
Bitsight acquires Cybersixgill for USD 115 million to enhance threat intelligence capabilities
Cyber Insurance
Today
M&A
Snowflake acquires Datavolo to enhance data integration capabilities for undisclosed sum
Generative AI Infrastructure
Data Infrastructure & Analytics
Today
Product updates
Microsoft launches Copilot Actions for workplace automation
Foundation Models
Yesterday
M&A
Almanac acquires Gro Intelligence's IP assets for undisclosed sum
Smart Farming
Yesterday
Partnerships
Aduro Clean Technologies partners with Zeton to build hydrochemolytic pilot plant
Waste Recovery & Management Tech
Yesterday
Funding
Oishii raises USD 16 million in Series B funding from Resilience Reserve
Vertical Farming
Yesterday
Management news
GrowUp Farms appoints Mike Hedges as CEO
Vertical Farming
Yesterday
M&A
Rise Up acquires Yunoo and expands LMS monetization capabilities
EdTech: Corporate Learning
Yesterday
Generative AI Infrastructure

Mar 13, 2024

Cerebras Systems unveils new 'Wafer Scale Engine 3' AI chip; announces partnership with Qualcomm

Product updates
Partnerships

  • Cerebras Systems, a developer of computing chips and systems dedicated to accelerating AI workloads, has unveiled the Wafer Scale Engine 3 (WSE-3), the third generation of its semiconductor chip, which the company claims is the world's largest. The new chip is designed for optimizing AI models and delivers double the performance of its predecessor at an unchanged price.

  • The WSE-3 spans an entire 12-inch wafer. Key upgrades include a transistor count boosted to 4 trillion, a process-node shrink from seven to five nanometers, on-chip SRAM increased from 40 GB to 44 GB, and compute cores increased from 850,000 to 900,000. These enhancements were achieved without greatly altering the ratio of logic transistors to memory circuits.

  • The company claims the new chip is easier to program than GPUs, reducing the code required to train advanced AI models.

  • Furthermore, Cerebras Systems has partnered with Qualcomm to run AI models on Qualcomm's AI processors for real-time predictions (inference). The partnership is expected to make running AI models more efficient and cost-effective.
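The generational spec changes quoted in the bullets above lend themselves to a quick arithmetic sketch (plain Python, using only the figures stated in the article; no other specs are assumed):

```python
# Spec changes quoted in the article (WSE-2 -> WSE-3).
specs = {
    "compute cores":     (850_000, 900_000),
    "on-chip SRAM (GB)": (40, 44),
    "process node (nm)": (7, 5),
}

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

for name, (old, new) in specs.items():
    print(f"{name}: {old} -> {new} ({pct_change(old, new):+.1f}%)")
```

Core count and on-chip SRAM grow only modestly under this arithmetic, consistent with the article's note that the logic-to-memory ratio was left largely unchanged.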
