Privacera, a startup that operates a data privacy infrastructure platform, announced the launch of Privacera AI Governance (PAIG), a unified tool that allows organizations to leverage AI and large language models (LLMs) while reducing the exposure of sensitive corporate data and personally identifiable information (PII) and maintaining compliance with regulations such as GDPR, CCPA, and HIPAA.
PAIG blends Privacera’s data security technology with dedicated AI and LLMs to dynamically manage security, privacy, and data access controls. The solution automatically scans and classifies training data; monitors AI models, user prompts, and outputs, redacting or de-identifying sensitive data where required; and enforces data access controls in line with policy.
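The announcement does not detail PAIG's APIs, so the sketch below is a minimal, hypothetical illustration of the general pattern such a governance layer implements: detect and redact sensitive spans in a prompt and enforce a role-based access check before the text reaches an LLM. All function names, regex patterns, and roles here are illustrative assumptions, not Privacera's implementation.

```python
import re

# Illustrative only: simple regex patterns standing in for a real PII classifier.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

def enforce_policy(user_role: str, prompt: str, allowed_roles: set) -> str:
    """Apply a role-based access check, then redact, before the prompt is sent to a model."""
    if user_role not in allowed_roles:
        raise PermissionError(f"Role '{user_role}' is not permitted to query this model.")
    return redact(prompt)

if __name__ == "__main__":
    prompt = "Email jane.doe@example.com a summary of account 123-45-6789."
    safe_prompt = enforce_policy("analyst", prompt, allowed_roles={"analyst", "admin"})
    print(safe_prompt)
    # -> "Email [EMAIL_REDACTED] a summary of account [SSN_REDACTED]."
```

In a production governance tool, the regex stand-ins would be replaced by trained classifiers and the policy check would draw on centrally managed access rules; the flow of classify, redact, and authorize before model access is the part the sketch is meant to convey.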
Analyst QuickTake: Several firms in the digital privacy space have recently rolled out new tools—broadly varying in functionality—in response to the proliferation of generative AI. Some firms aim to tackle privacy risks brought about by LLMs, with Private AI launching “PrivateGPT,” a privacy layer for ChatGPT, and Mithril Security and Skyflow launching secure processing environments for LLMs. Other tools leverage generative AI to automate or simplify workflows, including BigID’s “BigAI,” a specialized LLM for data discovery tasks, and Informatica’s ClaireGPT AI assistant.