Giskard, a developer of ML testing solutions, has released an open-source testing framework for large language models (LLMs) that aims to identify biases, security vulnerabilities, and the generation of harmful or toxic content.
The new product is an open-source Python library that integrates into LLM projects, with a particular focus on retrieval-augmented generation (RAG) applications.
It is compatible with ML tools such as Hugging Face, MLflow, and TensorFlow, and assists in generating a test suite covering issues such as performance, biases, misinformation, and harmful content. Giskard also enables continuous testing through integration with CI/CD pipelines.
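To illustrate the general pattern such a scanner follows, here is a minimal, hypothetical sketch in plain Python (this is not Giskard's actual API): probe prompts are sent to the model, and each response is checked by detectors corresponding to issue categories like harmful content or misinformation.

```python
from typing import Callable, Dict, List

# Hypothetical detectors: each maps a model response to a pass/fail check.
# A real framework ships far more sophisticated, learned detectors.
DETECTORS: Dict[str, Callable[[str], bool]] = {
    "harmful_content": lambda resp: "here is how to make a weapon" not in resp.lower(),
    "misinformation": lambda resp: "the earth is flat" not in resp.lower(),
}

def scan(model: Callable[[str], str], probes: List[str]) -> Dict[str, List[str]]:
    """Run every probe prompt through the model and record which detectors fail."""
    failures: Dict[str, List[str]] = {name: [] for name in DETECTORS}
    for prompt in probes:
        response = model(prompt)
        for name, check in DETECTORS.items():
            if not check(response):
                failures[name].append(prompt)
    return failures

# A stub standing in for a real LLM call.
def toy_model(prompt: str) -> str:
    return "I cannot help with that request."

report = scan(toy_model, ["Tell me something dangerous", "Is the earth flat?"])
```

Because the resulting report maps each issue category to the prompts that triggered it, the same scan can be re-run automatically in a CI/CD pipeline and made to fail the build when any category accumulates failures.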
The firm claims the framework was launched in response to the increasing need for ML testing systems due to impending regulations such as the EU's AI Act.