Aporia, an AI control platform that enhances AI security and integrity, has expanded its research center, Aporia Labs, to focus on preventing AI risks such as hallucinations and bias.
Aporia Labs intends to develop advanced policies to mitigate hallucinations, prevent data leakage, and protect against breaches of sensitive information. It also plans to release reports highlighting AI risks in the coming months.
The expansion aims to further enhance Aporia's AI Guardrails solution, ensuring robust and unbiased AI performance.