All Updates

• Product updates: LatticeFlow launches the first evaluation framework for EU AI Act compliance (Generative AI Infrastructure, Yesterday)

This week:
• Industry news: White House considers expanding AI chip export limits to certain Gulf countries, citing national security concerns (Generative AI Infrastructure, Oct 15, 2024)
• Funding: Galileo raises USD 45 million in Series B funding to improve its AI evaluation platform and research capabilities (Generative AI Infrastructure, Oct 15, 2024)
• Funding: Xscape Photonics raises USD 44 million in Series A funding to develop an AI data center platform (Generative AI Infrastructure, Oct 15, 2024)
• Product updates: Huawei launches Cloud Stack 8.5 for the Middle East and Central Asia to create an enhanced hybrid cloud (Generative AI Infrastructure, Oct 15, 2024)
• Partnerships: Databricks partners with AWS to enhance GenAI capabilities using Trainium chips (Generative AI Infrastructure, Oct 15, 2024)

Last week:
• Industry news: ServiceNow, CoreWeave, and others to invest USD 8.2 billion in UK data centers (Generative AI Infrastructure, Oct 13, 2024)
• Partnerships: Dell expands AI Factory with new PowerEdge servers powered by AMD (Generative AI Infrastructure, Oct 13, 2024)
• Partnerships: AssemblyAI partners with Langflow to enhance GenAI application development (Generative AI Infrastructure, Oct 12, 2024)
• Funding: CoreWeave secures USD 650 million credit line from Wall Street banks for product expansion and growth (Generative AI Infrastructure, Oct 11, 2024)
Generative AI Infrastructure
Yesterday

LatticeFlow launches the first evaluation framework for EU AI Act compliance

Product updates

  • AI model evaluation platform LatticeFlow has launched Compl-AI, the first evaluation framework for determining compliance with the EU AI Act. The free, open-source framework evaluates large language models (LLMs) across 27 technical areas based on the Act's six ethical principles.

  • Compl-AI assesses LLM responses in areas such as prejudiced answers, general knowledge, biased completions, following harmful instructions, truthfulness, copyrighted material memorization, common sense reasoning, goal hijacking, and prompt leakage. The framework rates models on a scale from 0 (no compliance) to 1 (full compliance), with N/A scores where there is insufficient data to evaluate.

  • The platform reveals shortcomings in existing models and benchmarks, particularly in areas like robustness, safety, diversity, and fairness. Additionally, the company claims the methodology can be extended to evaluate AI models against future regulatory acts, making it a valuable tool for organizations working across different jurisdictions.
