Analog Inference specializes in AI inference accelerators built on its proprietary analog in-memory compute technology, which offers improved performance per watt over competing approaches. The company's core offering is an AI system-on-chip (SoC) processor designed to run single-modal or multimodal GenAI applications in data centers and computer vision applications at the edge.
Analog Inference employs in-memory computing with non-volatile memory cells that perform computations in place, eliminating the need for static random access memory (SRAM), dynamic random access memory (DRAM), multipliers, and adders. Because the weights never leave the memory array, this approach sidesteps the von Neumann bottleneck (the overhead of shuttling data between separate memory and compute units), enabling parallel processing and significantly improving power efficiency and performance.
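To make the concept concrete, here is a minimal Python sketch of an analog in-memory matrix-vector multiply in the standard crossbar formulation (weights mapped to cell conductances, inputs applied as voltages, outputs read as summed currents). The array shape, noise model, and all variable names are illustrative assumptions, not details of Analog Inference's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Trained weights and an activation vector (logical, noise-free values).
weights = rng.standard_normal((4, 8))
inputs = rng.standard_normal(8)

# In a crossbar, each weight is stored as the conductance of a non-volatile
# memory cell. Programming is imprecise, so small Gaussian noise is added
# here as a crude stand-in for real analog non-idealities.
conductances = weights + rng.normal(scale=0.01, size=weights.shape)
voltages = inputs  # inputs drive the array as voltages

# Ohm's law gives each cell's current (conductance * voltage), and
# Kirchhoff's current law sums the currents along each output line, so the
# entire matrix-vector multiply happens in one parallel step inside the
# array itself, with no data movement to separate multipliers or adders.
currents = conductances @ voltages

print("analog result :", np.round(currents, 3))
print("digital result:", np.round(weights @ inputs, 3))
```

In a real device, the summed currents would then be digitized at the array periphery; the small gap between the two printed results illustrates why analog designs must manage accuracy through calibration and careful device engineering.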
As of September 2023, Analog Inference had completed the digital and analog designs for its test chips and the tapeout of its AI4 SoC processor, with a demonstration and release planned to follow. On the software side, it had released alpha versions of its estimator, optimizer, and compiler, with production release of the full stack slated for 2024.
Key customers and partnerships
Analog Inference has built a strong ecosystem with original equipment manufacturers (OEMs), original design manufacturers (ODMs), system integrators, and AI analytics vendors, targeting markets such as edge servers, smart retail, security, and smart cities. It also collaborates with national labs, Big Data firms, and hardware manufacturers.