Edge Computing

Aug 15, 2023

Kneron releases new KL730 AI chip to support LLMs at the edge

Product updates

  • Kneron, an on-device edge AI chip manufacturer, launched the KL730, an automotive-grade neural processing unit (NPU) chip. It will feature an integrated Image Signal Processor (ISP), as well as a peripheral interface that handles digital signals such as image, video, audio, and millimeter-wave data, enabling users to develop AI applications across multiple industries.

  • The company claims that the KL730 will offer a 3x–4x increase in energy efficiency compared to its previous models. Notably, the chip will also support lightweight GPT large language models (LLMs), such as nanoGPT, and will be capable of 0.35–4 effective tera operations per second (TOPS); a rough throughput sketch follows this list.

  • Furthermore, the chip will leverage Kneo, the company’s edge AI network, to enable users to run LLMs either partially or fully offline on their edge devices.
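
To put the 0.35–4 effective TOPS figure in context, the sketch below estimates a theoretical token-generation ceiling for a nanoGPT-scale model on a chip in that range. The model size, the FLOPs-per-token rule of thumb, and the sustained-utilization factor are illustrative assumptions, not figures published by Kneron.

```python
# Rough throughput ceiling for a small decoder-only LLM on an edge NPU.
# Assumptions (illustrative, not from Kneron): a ~10M-parameter nanoGPT-scale
# model, ~2 FLOPs per parameter per generated token, and 30% sustained
# utilization of the NPU's peak effective TOPS.

PARAMS = 10_000_000            # assumed model size
OPS_PER_TOKEN = 2 * PARAMS     # rule-of-thumb forward-pass cost per token
UTILIZATION = 0.30             # assumed fraction of peak throughput sustained

for tops in (0.35, 4.0):       # the KL730's stated effective-TOPS range
    sustained_ops_per_s = tops * 1e12 * UTILIZATION
    tokens_per_s = sustained_ops_per_s / OPS_PER_TOKEN
    print(f"{tops:>4} TOPS -> ~{tokens_per_s:,.0f} tokens/s (compute-bound ceiling)")
```

In practice, on-device LLM token rates are typically bounded by memory bandwidth rather than raw compute, so these figures are upper bounds; the point is simply that well under 1 effective TOPS is ample compute for a nanoGPT-class model running fully offline.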
