China is reportedly planning stricter controls on the use of GenAI services in the country to address security concerns and has issued draft guidance on the security of training data and the implementation of large language models (LLMs).
The draft guidance suggests that authorized data labelers and reviewers should process AI training data and that developers should base their LLMs on foundational models filed with authorities. It also proposes a blacklist system to block training data with illegal or harmful content.
Analyst QuickTake: The stricter controls are intended to ensure that GenAI services in China do not infringe copyright, breach personal data security, or violate cybersecurity laws, striking a balance between harnessing the benefits of AI and mitigating its risks. In July 2023, the Cyberspace Administration of China (CAC), in collaboration with six other regulatory bodies, introduced new rules governing GenAI, which came into effect on August 15.