Launched from stealth in July 2024, EdgeRunner AI focuses on developing GenAI solutions for edge computing. Its core technology revolves around ultra-efficient language models (UELMs), which are small, task-specific, open models optimized to operate on any device or hardware without requiring internet access. These models prioritize data privacy, security, and compliance, enabling enterprises and organizations to adopt AI responsibly without compromising performance or security.
The company's platform, powered by transparent models, is designed to solve real-world business challenges by deploying multiple small, task-specific open models that work together to create swarm intelligence. This approach aims to address the specific needs of enterprises and governments, rather than relying on large, generalized models such as GPT-5 or pursuing artificial general intelligence (AGI).
EdgeRunner AI's approach involves converting small language models (SLMs) into UELMs by optimizing them for specific tasks and various hardware platforms. This enables local execution of AI models on any device or hardware, resulting in improved performance, increased data privacy, near-zero latency, and reduced power consumption.
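To illustrate the general pattern of running a small, task-specific model entirely on-device, the sketch below uses the open-source llama-cpp-python library with a quantized GGUF model. This is an assumption-based example of local, offline inference in general, not EdgeRunner AI's actual tooling or API; the model file path and prompt are hypothetical.

```python
# Minimal sketch of on-device inference with a small quantized language model.
# Assumes llama-cpp-python is installed and a quantized GGUF model file is
# available locally (path is hypothetical); no network access is required.
from llama_cpp import Llama

# Load the model from local storage; context size and thread count are
# illustrative knobs for fitting the workload to modest edge hardware.
llm = Llama(
    model_path="./models/small-task-model-q4.gguf",  # hypothetical local model file
    n_ctx=2048,
    n_threads=4,
)

# Run a task-specific prompt fully on the device.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this maintenance log in two sentences: ..."}],
    max_tokens=128,
)

print(result["choices"][0]["message"]["content"])
```

Because the model weights and the inference loop both live on the device, no data leaves the machine, which is the property that underpins the privacy, latency, and offline claims described above.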
Key customers and partnerships
As of August 2024, EdgeRunner AI's partners included Intel, Dell Technologies, and Deloitte.