NovuMind specializes in efficient inferencing hardware for AI applications. The company's technology is built on a tensor processing architecture that uses tensors as the primitive data type, and has been protected by patents since 2018. NovuMind's AI chips focus exclusively on deep learning acceleration and are designed around small 3 x 3 convolution filters for neural networks. The NovuTensor IP is scalable from a few TOPS to more than a PetaOPS, giving customers the flexibility to optimize for their requirements across different foundry process nodes and libraries.
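To make the 3 x 3 convolution primitive concrete, the sketch below shows the basic operation such hardware accelerates. This is a plain illustrative implementation, not NovuMind's design: the function name, the valid-padding choice, and the example values are all assumptions for demonstration.

```python
# Illustrative only -- a direct 3x3 convolution, the core operation
# that convolution-centric accelerators like NovuTensor are built around.
# Hardware implements this as massively parallel multiply-accumulate units;
# this loop version just shows the arithmetic.
def conv3x3(image, kernel):
    """Apply a single 3x3 filter to a 2D input (valid padding, stride 1)."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0
            # Multiply-accumulate over the 3x3 window.
            for ki in range(3):
                for kj in range(3):
                    acc += image[i + ki][j + kj] * kernel[ki][kj]
            row.append(acc)
        out.append(row)
    return out

# Example: a 4x4 input with a center-pass-through kernel.
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
kernel = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
print(conv3x3(image, kernel))  # -> [[6, 7], [10, 11]]
```

Larger filter sizes can be expressed by stacking 3 x 3 layers, which is why fixing the filter size in silicon still covers most modern network designs.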
The company designed its first prototype AI chip to deliver 15 teraflops of performance while consuming under 5 W, with support for TensorFlow, Caffe, and Torch models. In February 2018, it announced plans for a second chip designed to operate under 1 W.
In October 2018, the company announced plans to release a family of products based on NovuTensor, including PCI Express AI accelerator cards and a developer kit. However, no update on their launch status had been provided as of November 2024.
The technology aims to enable AI processing at the edge, allowing internet-connected devices to perform recognition tasks without requiring constant communication with data centers.
Key customers and partnerships
In July 2018, NovuMind partnered with the Hewlett Packard Enterprise AI Innovation Center to use NovuMind's AI software for training state-of-the-art AI models. In the same month, NovuMind collaborated with the city of Chengdu on the Shennong Supercomputer Project, providing software for training deep learning models on its NovuStar distributed AI training platform.