GenAI security platform DeepKeep has introduced its GenAI Risk Assessment tool, which provides a thorough examination and risk evaluation of LLMs and computer vision models.
Key features of the tool include penetration testing, probing an AI model's tendency to produce false outputs (hallucinations), identifying the risk of data privacy breaches, and checking for harmful, biased, or unethical language. The tool's AI firewall adds a further layer of protection against attacks on AI applications.
DeepKeep is an AI-native security and trustworthiness platform that identifies vulnerabilities in GenAI models and LLMs throughout their lifecycle. It provides automated security and trust remedies for data curation, model training, and model inference in pre- and post-production environments. DeepKeep protects the expanding AI attack surface beyond the model's learned space and the AI application's comprehension, detecting validated threats across multi-modal AI systems spanning LLMs, vision, and tabular data.