China to require licenses before AI tools can be released

The Chinese government is considering new rules for the development of artificial intelligence (AI) that emphasize content control and licensing.

According to a July 11 Financial Times report, the Cyberspace Administration of China (CAC) wants to impose a system that would require local companies to obtain a license before launching generative AI systems.

The move marks a tightening of an initial draft regulation released in April, which allowed companies to register with authorities within 10 business days of introducing a product.

According to sources consulted by the FT, the new licensing system is expected to be included in the next regulation, which is scheduled to be published by the end of this month.

Mandatory security reviews of AI-generated content were also included in the April draft of the rule.

In its draft, the government said all content must "embody basic socialist values" and must not "subvert state authority, overthrow the socialist system, incite secession, or undermine national unity."

Cointelegraph reached out to the CAC for comment on the matter, but did not receive a response prior to publication.

Chinese tech and e-commerce giants Baidu and Alibaba both launched AI tools this year to rival the popular AI chatbot ChatGPT.

According to the FT's sources, both companies have been in touch with regulators in recent months to bring their products into line with the new rules.

The draft also states that Chinese authorities will hold tech companies that build AI models fully responsible for any content created with their products.


Regulators around the world have called for moderation of AI-generated content. In the United States, Senator Michael Bennet recently wrote a letter to companies developing AI tools, urging them to label AI-generated content.

The European Commission's vice-president for values and transparency, Vera Jourova, recently told the media that she believes generative AI tools "have the potential to generate disinformation" and that such content should be labeled to counter it.

Clarification: The information and/or opinions expressed in this article do not necessarily represent the views or editorial line of Cointelegraph. The information provided herein should not be construed as financial advice or an investment recommendation. All investment and trading operations involve risk, and the decision to invest is the responsibility of each individual.


