President Joe Biden signed an executive order on AI that establishes standards for security and privacy protections and requires developers to safety-test new models.
The order will affect companies developing AI tools, such as Microsoft, Amazon, and Google, which will need to submit safety-test results on their new models to the US government before releasing them to the public. The order also calls for the development of standards for watermarking AI-generated content.
The executive order aims to govern the use of AI technology by leveraging the US government's position as a top customer of Big Tech companies to vet technology with potential risks. It will also involve multiple federal agencies in ensuring AI systems meet safety and security requirements.
Analyst QuickTake: As the European Union moves closer to passing a comprehensive law to regulate AI's negative impacts and with the US Congress still at the initial stages of discussing safeguards, the Biden Administration appears to be taking action within its own domain of control. The executive order also aligns with the voluntary pledges made by tech companies and constitutes just one aspect of a larger strategy.