Governments race to regulate AI tools

Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI’s ChatGPT are complicating governments’ efforts to agree on laws governing the use of the technology.

Here are the latest steps national and international governing bodies are taking to regulate AI tools:


Australia

* Planning rules

Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI, and the production of deepfake versions of the same material.


Britain

* Planning rules

Leading AI developers agreed at the first global AI Safety Summit in Britain on Nov. 2 to work with governments to test new frontier models before they are released, to help manage the risks of the developing technology.

More than 25 countries present at the summit, including the U.S. and China, as well as the EU, signed the "Bletchley Declaration" on Nov. 1, agreeing to work together and establish a common approach on oversight.

Britain said at the summit it would triple its funding for the "AI Research Resource" to 300 million pounds ($364 million); the resource comprises two supercomputers that will support research into making advanced AI models safer. A week earlier, Prime Minister Rishi Sunak said Britain would set up the world's first AI safety institute.

Britain's data watchdog said in October it had issued Snap Inc.'s (SNAP.N) Snapchat a preliminary enforcement notice over a possible failure to properly assess the privacy risks of its generative AI chatbot to users, particularly children.


China

* Implemented temporary rules

Wu Zhaohui, China's vice minister of science and technology, told the opening session of the AI Safety Summit in Britain on Nov. 1 that Beijing was ready to increase cooperation on AI safety to help build an international "governance framework".

China published proposed security requirements in October for companies offering services powered by generative AI, including a blacklist of sources that cannot be used to train AI models.

The country issued a set of temporary measures in August, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.

European Union

* Planning rules

EU lawmakers and governments reached a provisional deal on Dec. 8 on landmark rules governing the use of AI, including governments' use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT.

The agreement requires foundation models and general-purpose AI systems to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries of the content used for training.


France

* Investigating potential violations

France's privacy watchdog said in April it was investigating complaints about ChatGPT.


G7

* Seeking input on regulations

The Group of Seven countries agreed on Oct. 30 to an 11-point code of conduct for firms developing advanced AI systems, which "aims to promote safe, secure and trustworthy AI worldwide".


Italy

* Investigating potential violations

Italy's data protection authority plans to review AI platforms and hire experts in the field, a top official said in May. ChatGPT was temporarily banned in Italy in March but was made available again in April.


Japan

* Planning rules

Japan hopes to introduce regulations by the end of 2023 that are likely to be closer to the U.S. approach than the stricter rules planned in the EU, an official close to the discussions said in July.

The country's privacy watchdog has warned OpenAI not to collect sensitive data without people's permission.


Poland

* Investigating potential violations

Poland's Personal Data Protection Office said in September it was investigating OpenAI over complaints that ChatGPT breaks EU data protection laws.


Spain

* Investigating potential violations

Spain’s data protection agency launched a preliminary investigation into possible data breaches by ChatGPT in April.

United Nations

* Planning rules

U.N. Secretary-General Antonio Guterres announced on Oct. 26 the creation of a 39-member advisory body, composed of tech company executives, government officials and academics, to address issues in the international governance of AI.

United States

* Seeking input on regulations

The U.S., Britain and more than a dozen other countries unveiled on Nov. 27 a 20-page non-binding agreement carrying general recommendations on AI, such as monitoring systems for abuse, protecting data from tampering and vetting software suppliers.

The U.S. will launch an AI safety institute to evaluate known and emerging risks of so-called "frontier" AI models, Commerce Secretary Gina Raimondo said on Nov. 1 during the AI Safety Summit in Britain.

President Joe Biden issued an executive order on Oct. 30 requiring developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the government.

The U.S. Federal Trade Commission opened an investigation into OpenAI in July over claims that it violated consumer protection laws.
