GenAI hardware accelerator company RaiderChip has launched GenAI v1, a new solution for LLM inference on a range of low-cost FPGA devices.
GenAI v1 uses 32-bit floating-point arithmetic, a level of precision that allows original LLM weights to be used directly, without modification or quantization. Key features include preserving the intelligence and reasoning capabilities of the raw LLM models, compatibility with several FPGA devices, and a "plug-and-play" design that requires only minimal AXI interfaces.
The company claims that GenAI v1 lets customers run unquantized LLM models at full interactive speed even on limited memory bandwidth, and that competing solutions perform 20% slower.
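For context on why memory bandwidth is the limiting factor here, the sketch below gives a rough, bandwidth-bound throughput estimate: generating each token typically requires streaming all model weights from memory, so weight size divided by memory bandwidth caps tokens per second. The figures used (a 7B-parameter model at 32-bit precision and a 50 GB/s memory system) are illustrative assumptions, not RaiderChip specifications.

```python
# Rough, bandwidth-bound throughput estimate for unquantized (FP32) LLM inference.
# Illustrative only; the parameters below are assumptions, not RaiderChip figures.

def bandwidth_bound_tokens_per_sec(n_params: float, bytes_per_weight: int, mem_bw_gbps: float) -> float:
    """Upper bound on tokens/sec when every generated token streams all weights from memory."""
    model_bytes = n_params * bytes_per_weight
    return (mem_bw_gbps * 1e9) / model_bytes

# Example: a Llama-2 7B-class model kept in full 32-bit precision (~28 GB of weights)
# on a hypothetical 50 GB/s memory system.
print(bandwidth_bound_tokens_per_sec(7e9, 4, 50.0))  # ~1.8 tokens/sec ceiling
```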
Analyst QuickTake: The launch follows RaiderChip raising EUR 1 million (~USD 1.1 million) in seed funding to expand operations and market its AI accelerator engine. Additionally, GenAI v1 supports a variety of language models, from the Microsoft Phi-2 small language model (SLM) for targeted solutions to the Meta Llama-2 and Llama-3 LLMs for more complex applications.