Newswire

For Further Information Contact:

korea@transatlanticlaw.com

Korea Update: Legislative Framework and Practical Implications of “Law on Nurturing the AI Industry and Establishing a Trust Basis”

With the emergence of ChatGPT, the importance of artificial intelligence (AI) has become even more prominent. On February 14, 2023, the Science, ICT, Broadcasting and Communications Committee of the National Assembly of South Korea reached an agreement on the “Law on Nurturing the AI Industry and Establishing a Trust Basis.” This can be considered a remarkable piece of legislation, not only in the history of South Korea but also worldwide.

This law, which can be regarded as the basic law on artificial intelligence, has drawn a great deal of attention: as many as 12 bills bearing the name “Artificial Intelligence” have been proposed since lawmaker Lee Sang-Min introduced the bill on “the Law on Research and Development, Industrial Promotion, and Ethical Responsibility of Artificial Intelligence” on July 13, 2020, after the 21st National Assembly was inaugurated. To facilitate the deliberation process, a public hearing was held on February 24, at which lawmakers from both the ruling and opposition parties agreed on the need to pass the bill, emphasizing the importance of building the AI industry on reliability, transparency, and safety. It was also noted that nurturing the industry through corporate self-regulation is more important than government regulation. At a subcommittee meeting of the standing committee held in December 2022, the scope of discussion was expanded to include the need for risk management measures in high-risk areas that could be linked to human rights violations, such as biometrics or employment, as well as the legal definition of AI ethics principles and algorithms. On February 14, 2023, the bill was finally passed as a committee alternative that consolidates the AI support system and ethical principles into a single bill.

This law establishes itself as a basic law that takes precedence over all other laws regarding artificial intelligence and also has the nature of a special law. It further provides the first legal definition of artificial intelligence, defining it as the implementation of human intellectual abilities, such as learning, reasoning, perception, judgment, and language comprehension, through electronic methods. The law also sets out basic principles to ensure the safety and reliability of AI and to enable its development in a direction that enhances the lives of citizens. These principles are also reflected in the EU’s GDPR (2018), the U.S. 10 Principles for Trusted Artificial Intelligence (2020), and the UK’s Explainable AI Guidelines (2020). Under the law, the government can now establish and announce the principles that AI developers and users must follow in the development and utilization process, including (i) ensuring the safety and reliability of AI in development and utilization, (ii) making products and services that utilize AI accessible to all, and (iii) contributing to the prosperity and well-being of humanity. In line with the government’s establishment and announcement of these principles, private companies, research institutions, and organizations may now establish voluntary private AI ethics committees under the new law. These committees can autonomously investigate and correct safety issues and human rights violations, and establish detailed ethical guidelines for each individual field.

Under this law, the government will be able to develop more comprehensive policies by establishing a basic plan for AI every three years. An AI committee under the Prime Minister’s office is expected to be established to ensure the implementation of the law and the trustworthiness of the AI society, and a National AI Center will be established under the Korea Institute for Advancement of Technology to foster the AI industry and enhance competitiveness. The committee and the center are expected to support tasks such as standardizing AI technologies, building learning data, securing specialized personnel, and creating AI clusters.

In addition, an important aspect addressed in this law is how to ensure safety in high-risk areas. High-risk areas are those that can significantly affect the protection of the safety, health, and basic rights of citizens, including AI used for the judgment and evaluation of biometric information in matters relating to energy, the healthcare industry, medical devices, nuclear facilities, and criminal investigations or arrests; AI that has a significant impact on individual rights and obligations, such as employment and loan qualification reviews; AI used for the operation of transportation systems and facilities, including autonomous driving; and AI used by national or local governments and public institutions to make decisions in areas that could affect citizens. Those who provide products or services in these high-risk areas may first request confirmation from the Minister of Science and ICT as to whether their products or services fall within this category. Upon receiving such a request from a business operator, the Minister must verify whether the product or service falls into the high-risk category and may seek advice from a committee of experts in making that determination. Business operators who handle products and services in high-risk areas must notify users in advance that their products and services are operated on the basis of AI in high-risk areas, and they are obligated to take measures to ensure reliability and safety. The Minister of Science and ICT may, following the committee’s deliberation and resolution, establish and announce specific measures to ensure reliability and recommend that business operators comply with them. These reliability measures cover 1) how risks are managed, 2) whether documentation exists to verify reliability, 3) the final output or key criteria of the AI within the technically feasible range, 4) user protection, and 5) management and supervision of persons operating AI in high-risk areas. However, the provisions on a user’s right to refuse, to request an explanation, and to object, which are reflected in the EU’s GDPR, do not appear to have been incorporated into this legislation.

This law applies to all businesses that operate AI in any field; in particular, businesses that operate AI in high-risk areas are obligated to establish ethical principles and to ensure reliability and safety. Because these obligations bear on a business’s product liability and possible exemptions, companies need to be especially cautious.

By Yulchon, Korea, a Transatlantic Law International Affiliated Firm. 

For further information or for any assistance please contact korea@transatlanticlaw.com 

Disclaimer: Transatlantic Law International Limited is a UK registered limited liability company providing international business and legal solutions through its own resources and the expertise of over 105 affiliated independent law firms in over 95 countries worldwide. This article is for background information only and provided in the context of the applicable law when published and does not constitute legal advice and cannot be relied on as such for any matter. Legal advice may be provided subject to the retention of Transatlantic Law International Limited’s services and its governing terms and conditions of service. Transatlantic Law International Limited, based at 42 Brook Street, London W1K 5DB, United Kingdom, is registered with Companies House, Reg Nr. 361484, with its registered address at 83 Cambridge Street, London SW1V 4PS, United Kingdom.