All Updates

This week:

  • Funding | EKORE raises EUR 1.3 million (~USD 1 million) in seed funding to strengthen its platform | Digital Twin | Dec 20, 2024
  • Funding | Culina Health raises USD 7.9 million in Series A funding to expand its offerings and team | Functional Nutrition | Dec 19, 2024
  • FDA approval | ViGeneron receives IND clearance for VG801 gene therapy | Cell & Gene Therapy | Dec 19, 2024
  • Product updates | Reflex Aerospace ships its first commercial satellite, SIGI | Next-gen Satellites | Dec 19, 2024
  • Partnerships | Vast partners with SpaceX for two private astronaut missions to the ISS | Space Travel and Exploration Tech | Dec 19, 2024
  • Management news | Carbios appoints Philippe Pouletty as interim CEO amid plant delay | Waste Recovery & Management Tech | Dec 19, 2024
  • Funding | BlueQubit raises USD 10 million in seed funding to develop its quantum platform | Quantum Computing | Dec 19, 2024
  • FDA approval | Arbor Biotechnologies receives FDA clearance for ABO-101 IND application | Human Gene Editing | Dec 19, 2024
  • Partnerships, Funding | Personalis partners with Merck and Moderna for cancer therapy development and investment | Precision Medicine | Dec 19, 2024
  • Partnerships | COTA partners with Guardant Health to develop clinicogenomic data solutions for cancer research | Precision Medicine | Dec 19, 2024

Generative AI Infrastructure

Oct 16, 2024

LatticeFlow launches the first evaluation framework for EU AI Act compliance

Product updates

  • AI model evaluation platform LatticeFlow has launched Compl-AI, the first evaluation framework for determining compliance with the EU AI Act. The free, open-source framework evaluates large language models (LLMs) across 27 technical areas based on the Act's six ethical principles.

  • Compl-AI assesses LLM responses in areas such as prejudiced answers, general knowledge, biased completions, following harmful instructions, truthfulness, copyrighted material memorization, common-sense reasoning, goal hijacking, and prompt leakage. The framework rates models on a scale from 0 (no compliance) to 1 (full compliance), assigning N/A where there is insufficient data; a brief illustrative sketch of this scoring scheme follows below.

  • The platform reveals shortcomings in existing models and benchmarks, particularly in areas like robustness, safety, diversity, and fairness. Additionally, the company claims the methodology can be extended to evaluate AI models against future regulatory acts, making it a valuable tool for organizations working across different jurisdictions.

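Compl-AI itself is open source, but its exact interface is not described here. The short Python sketch below is only an illustration, under assumed evaluation-area names and a hypothetical summarize helper, of how per-area scores on the 0-to-1 scale, with N/A for insufficient data, might be rolled up into a simple compliance summary; it is not the Compl-AI API.

```python
# Illustrative sketch only -- not the actual Compl-AI API. It mimics the scoring
# scheme described above: each technical evaluation area yields a score between
# 0 (no compliance) and 1 (full compliance), or None (reported as N/A) when
# there is insufficient data to judge.
from statistics import mean
from typing import Optional

# Hypothetical per-area results for a single model (area names are examples only).
area_scores: dict[str, Optional[float]] = {
    "harmful_instructions": 0.92,
    "copyright_memorization": 0.78,
    "prompt_leakage": None,   # insufficient data -> reported as N/A
    "goal_hijacking": 0.64,
}

def summarize(scores: dict[str, Optional[float]]) -> dict:
    """Aggregate per-area scores into a simple compliance report."""
    rated = {name: score for name, score in scores.items() if score is not None}
    return {
        "areas_evaluated": len(scores),
        "areas_rated": len(rated),
        "areas_na": len(scores) - len(rated),
        # Plain mean over rated areas; a real framework may weight areas
        # by the EU AI Act principle they map to.
        "mean_score": round(mean(rated.values()), 2) if rated else None,
    }

print(summarize(area_scores))
# e.g. {'areas_evaluated': 4, 'areas_rated': 3, 'areas_na': 1, 'mean_score': 0.78}
```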