The EU has agreed on pioneering AI regulations, focusing on governing technologies like GenAI models and biometric identification tools.
Companies developing these technologies will face stringent requirements, including technical documentation, compliance with EU copyright law, and transparent disclosure of training content. Advanced models deemed to pose systemic risks will face additional scrutiny, including risk assessments, incident reporting, cybersecurity measures, and energy-efficiency reporting.
Notably, Germany, France, and Italy favor self-regulation by companies over direct government control of GenAI models, arguing that heavier oversight could stifle innovation and weaken Europe's competitiveness against global tech leaders.
Analyst QuickTake: The EU AI Act categorizes AI systems into different risk levels and caps years of effort dating back to a 2021 proposal to regulate AI technology. GenAI's emergence prompted a reevaluation of the framework amid concerns about job displacement, discriminatory language, and privacy. The European Parliament must still vote on the act, a step anticipated to be a formality, according to Brando Benifei, an Italian lawmaker involved in the negotiations.