EDGE Insights

EDGE Insights

icon
Filter
This week:
Last week:
Foundation Models

Foundation Models

The evolving rulebook of AI governance: Balancing innovation with accountability

The GenAI ecosystem is fast evolving. Big Tech players are locked in a race to develop the next generation of AI models, while startups proliferate, churning out an assortment of AI-powered applications. Meanwhile, ML infrastructure companies continue to build the chips, hardware, and support systems needed to scale AI models and applications. Right now, AI governance is the new hydra-headed problem, as it attempts to address concerns around AI safety and responsible use at each stage of the AI/ML pipeline, from data sourcing to testing to deployment.
Current approaches to AI governance broadly include hard laws like the EU AI Act and self-regulation in line with AI ethics scholarship and best practices. The GenAI ecosystem is working to respond to this growing push for regulation, with foundation model builders leading the response, albeit imperfectly. Meanwhile, ML infrastructure  companies are slowly working to embed AI governance functionality into their platforms. Generative AI (GenAI) applications currently trail behind, taking their cues from other players in the broad ecosystem.
This Insight explores the need for AI governance and navigates current approaches from hard law to self-regulation, emerging responses from players in the broader AI/ML ecosystem, and the impacts of governance on AI systems development.

What is AI governance?

AI governance deals with the development and use of AI systems in alignment with business, regulatory, and ethical requirements. Through consensus-building with diverse stakeholders, the objective of AI governance is to ensure that AI systems are developed and used in a responsible, ethical, fair, and transparent way.
AI governance may be conceived as a three-layer structure, with AI systems as the core governed entity. At the broad environmental level, AI governance takes the form of legal frameworks (hard law or binding regulation), principles and guidelines from AI ethics scholarship and practice (self-regulation), and stakeholder pressures governing AI development. 
At the organizational level, AI governance refers to directing, managing, and monitoring the AI activities of an enterprise. It involves the development of AI in alignment with an organization’s strategy and values as well as its broader contextual environment. The AI systems level is the final, critical operational level of AI governance, and it covers the development, use, and management of AI systems. 
three layers of ai governance
Source: Compiled by SPEEDA Edge based on the AIGA AI Governance Framework
At a minimum, the governance of AI systems calls for the documentation of details including the origins of data, the techniques that train each model, the parameters used, and the metrics from testing phases. Through such documentation, users have increased visibility into model behavior throughout the AI lifecycle, the data used in its development, and potential risks. 
At a more granular level, AI governance concerns itself with the end-to-end AI lifecycle, including: 1) AI systems design, 2) algorithm design, 3) data operations, 4) development operations, 5) risks and impacts, 6) accountability and ownership, 7) transparency, explainability, and contestability, and 8) compliance.

The need for AI governance

While AI alignment is already a critical objective in the development of foundation models, the evolving threat landscape and risks of misuse continue to fuel the push for systemic AI governance. Present risk-based approaches to AI governance, like the EU AI Act, are primarily concerned with preventing AI systems from manipulating individuals through subliminal techniques, exploiting vulnerable groups, or attacking critical infrastructure. Data privacy concerns and copyright issues are also among the areas that current forms of AI governance seek to address. 
However, LLM vulnerabilities and the potential weaponization of GenAI present added concerns that may need addressing through regulation.  

LLM vulnerabilities

In 2023, the Open Worldwide Application Security Project (OWASP) released a draft list of ten vulnerability types for AI applications built on LLMs. Each vulnerability lends itself to a range of potential attack scenarios. 
owasp top 10
Source: Open Worldwide Application Security Project’s (OWASP) Top 10 for LLM 2023
Prompt injections top the list as the most prevalent form of attack. These vulnerabilities occur when attackers manipulate an LLM using crafted input prompts, with outcomes ranging from the exposure of sensitive information to influencing decisions. In OWASP’s attack scenarios, prompt injections can be used to manipulate an LLM’s interactions with plugins, triggering unauthorized purchases, inappropriate social media posts, and more.
The LLM supply chain is also vulnerable to attack, undermining the integrity of training data, ML models, and deployment platforms, leading to breaches, biased outcomes, and potential system failures. While traditional vulnerabilities focused on software components alone, impacts are extended due to the re-use of pre-trained models, crowdsourced data, and extension plugins in AI systems. In OWASP’s attack scenarios, supply chain attacks can include the exploitation of a vulnerable package to compromise a system. The OpenAI data breach, which stemmed from a vulnerability in the Redis open-source library used by ChatGPT, is a case in point.
The vulnerabilities associated with LLMs and similar large AI models could impact the entire AI/ML pipeline. As such, AI governance necessarily has a broad remit, encompassing end-to-end MLOps.

Contact us

Gain access to all industry hubs, market maps, research tools, and more
Get a demo
arrow
menuarrow

By using this site, you agree to allow SPEEDA Edge and our partners to use cookies for analytics and personalization. Visit our privacy policy for more information about our data collection practices.