
NB: Administrative Law and Administration Practice
Reference:

Public Law Aspects of Technical Regulation of Artificial Intelligence in Russia and the World.

Atabekov Atabek Rustamovich

PhD in Economics

Associate Professor of the Department of Administrative and Financial Law, Legal Institute, Peoples' Friendship University of Russia

117198, Russia, Moscow, Miklukho-Maklaya str., 6

atabekoff1@mail.ru

DOI: 10.7256/2306-9945.2023.2.39938

EDN: EFVTPO

Received: 04-03-2023

Published: 11-03-2023


Abstract: This article presents a comparative analysis of existing approaches to the technical regulation of artificial intelligence in the public law of Russia and foreign countries. The comparative analysis identifies the basic problems of proper technical and public regulation of artificial intelligence in world practice, examines the practice of technical regulation in Russia separately, and proposes possible compensatory legal measures to ensure a transparent and proper practice of technical regulation of artificial intelligence in the sphere of public administration in Russia. The subject of the study is the legal relations of public authorities in the field of technical regulation of artificial intelligence. The object of the study is the regulatory documents, recommendations and other documents governing the autonomy of artificial intelligence for the purposes of public legal relations in Russia and foreign countries, as well as academic publications and analytical reports on the issues under study. The research methodology integrates a set of modern philosophical, general scientific and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutic, comparative legal and formal legal (dogmatic) methods. Particular emphasis is placed on a comparative legal study of the problems of ensuring the proper quality of technical regulation of artificial intelligence through the prism of the actions of standardization bodies and other public authorities. The typical situations considered, the current practice of technical regulation of artificial intelligence, and the measures proposed by the author can be reflected in the legislative and law enforcement practice of the relevant authorities responsible both for technical regulation and for integrating this technology into the sphere of public legal relations.


Keywords:

artificial intelligence, electronic person, comparative legal research of AI, technical regulation of AI, AI safety, public law, administrative law, information law, law enforcement practice, standardization

This article is an automatic translation of the original Russian text.

The development of the technological potential of artificial intelligence is one of the significant tasks for both the Russian Federation [1-2] and the international community [3].

At the same time, one cannot ignore the emerging issues of positioning artificial intelligence (AI) within technical regulation, and the positions taken in this area both by national standardization institutions and by public authorities.

Basic, foundational work on developing unified approaches to AI standardization is carried out at the level of the international standardization bodies ISO and IEC, within their Joint Technical Committee 1, Subcommittee 42 (ISO/IEC JTC 1/SC 42) and its various working groups. A broad list of AI standards has already been published within this technical committee [4-6].

At present, ISO recommends that the use of AI for the purposes of public authorities be guided by ISO/IEC 38507:2022, which sets out the basic principles of AI governance as well as the potential negative consequences of using AI:

1. The risk of market monopolization due to the industry specialization of AI.

2. Bias in judgments when data is processed by AI without thorough verification.

3. The risk of loss of human control, given the speed of AI learning.

4. Discrimination and potential harm to the fundamental rights of employees who may be replaced by AI.

5. Reputational costs in the absence of control over AI.

At the level of individual states, specialized authorities are being created and relevant strategic documents are being developed [7].

Thus, the US National Strategic Plan identifies the need to increase the availability of AI test beds and to carry out further work on improving indicators related to standards development and the involvement of the professional community [8]. In subsequent years, the American national standards institute has been represented, through the secretariat, in Joint Technical Committee 1/Subcommittee 42 formed by ISO and IEC [9]. In addition, recognizing the military potential of AI, the US National Security Commission on Artificial Intelligence (NSCAI) drew the attention of the national authorities involved in standards development to the need to systematically improve the reliability of the results and functioning of AI models [10].

In China, the relevant work is organized and coordinated by the Ministry of Industry and Information Technology, which published a White Paper on AI Standardization in 2018 [11]. At the same time, China's initiatives to align standardization approaches for cloud computing and industrial software [12-13] do not override the principles of sovereign AI regulation. The national standardization strategy "China Standards 2035" records that national standards will differ from international ones in order to develop domestic industry [14].

At the same time, standardization activities in this area are distributed among five bodies [15]:

1. The National Technical Committee for Standardization of Information Technologies (SAC/TC 28) is the basic body dealing with AI terminology and with related areas of human-AI interaction.

2. The China National Technical Committee for Standardization of Automation and Integration Systems specializes in the standardization of industrial robots.

3. The National Technical Committee for Standardization of Audio, Video and Multimedia specializes in standards for smart homes.

4. The National Technical Committee for Standardization of Information Security specializes in security standards for smart cities and industrial production.

5. The National Technical Committee on Intelligent Transport Systems specializes in standards in the field of intelligent transport.

China's White Paper on AI Standardization also highlights fundamental problems in the standardization and certification of AI:

1. The heterogeneity of AI technical development across countries gives rise to different interpretations of AI at the national and supranational levels.

2. The multi-component nature and cross-border universality of the technology can generate an excessive or duplicative set of standards.

3. Standardization can serve as a way of lobbying for the interests of international corporations and associations.

4. The unlimited number of product users complicates coordination of the standardization process.

5. Security and ethics issues are the most difficult category in standardization and slow down work on the main block of AI standardization.

In the European Union, the fundamental standardization bodies are the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI). In 2019, CEN and CENELEC created a joint AI focus group [16], after which a roadmap for AI standardization was published in 2020 [17]. ETSI has also created various groups focused on artificial intelligence [18].

It is also impossible not to mention the EU legislative initiative on artificial intelligence, which takes a risk-based approach and lays down fundamental foundations, including for the purposes of standardization and regulation by public authorities (hereinafter, the Regulation) [19].

The Regulation defines a horizontal approach to AI (unlike other EU product safety legislation), which indicates a comprehensive treatment of the issue of AI regulation, as well as the minimization of risks arising from the cross-border movement of voluntary national and international standards.

At the same time, AI products themselves are graded by the Regulation according to the degree of risk they pose to society, with corresponding restrictions and obligations (Articles 5, 8, 52 and 69 of the Regulation) in the field of ensuring the safety and reliability of the technology.
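For illustration, the minimal sketch below maps the risk grading described above to the articles of the Regulation cited in the text; the tier labels are the conventional shorthand used in commentary on the proposal rather than wording taken from the articles themselves.

```python
# Minimal sketch of the risk-based grading of AI products under the draft EU AI Act,
# keyed to the articles cited in the text. The tier labels are conventional shorthand
# given for illustration, not terms taken from the Regulation itself.

RISK_TIERS = {
    "unacceptable risk": {"article": 5, "consequence": "prohibited AI practices"},
    "high risk": {"article": 8, "consequence": "mandatory requirements and conformity obligations"},
    "limited risk": {"article": 52, "consequence": "transparency obligations"},
    "minimal risk": {"article": 69, "consequence": "voluntary codes of conduct"},
}

for tier, info in RISK_TIERS.items():
    print(f"Article {info['article']:>2}: {tier} -> {info['consequence']}")
```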

At the same time, Decision No. 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products [20] contains provisions under which the specification of local standards in the field of AI is the target task of the standardization institutes (CENELEC, ETSI, etc.). At the same time, conformity assessment of products is solely the manufacturer's duty, which is also reflected in relation to AI (Articles 16, 17, 51, 60 and 86 of the Regulation). In addition, Article 71 of the Regulation fixes turnover-based administrative fines of 2%, 4% or 6% of the violator's annual turnover.
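To make the scale of such turnover-based fines concrete, the following minimal sketch computes the 2%, 4% and 6% ceilings for a hypothetical annual turnover; the fixed euro amounts that the proposal couples with each percentage (whichever is higher) are omitted here for simplicity.

```python
# Illustrative sketch: how the turnover-based administrative fines mentioned above
# (Article 71 of the draft Regulation) scale with a company's annual turnover.
# The turnover figure is hypothetical; the fixed euro caps accompanying each
# percentage in the proposal are deliberately omitted.

FINE_RATES = (0.02, 0.04, 0.06)  # the 2%, 4% and 6% tiers named in the text

def fine_ceiling(annual_turnover_eur: float, rate: float) -> float:
    """Turnover-based ceiling of the administrative fine, in euros."""
    return annual_turnover_eur * rate

if __name__ == "__main__":
    turnover = 500_000_000.0  # hypothetical annual turnover, EUR
    for rate in FINE_RATES:
        print(f"{rate:.0%} tier: up to {fine_ceiling(turnover, rate):,.0f} EUR")
```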

The risk-based approach proposed by the European Commission in relation to AI is interesting from the standpoint of standardization; however, it leaves open the issue that AI is verified by the product manufacturer rather than by third parties, and that the standardization institutions involved are not public entities but non-governmental organizations.

In Russia, standardization activities are entrusted to Rosstandart, which, together with the Ministry of Economic Development, has prepared a standardization program for the priority area "Artificial Intelligence" for 2021-2024 [21]. The program contains more than 70 industry metrological standards.

At the same time, an interesting phenomenon is both the discrepancy between GOST R 43.0.8-2017 and GOST R 59277-2020 regarding the definition of AI, and the practical implementation of AI standardization under the above program. The program plans, by 2024, 44 intersectoral AI standards (11 in 2022, 21 in 2023, 12 in 2024), 58 industry metrological standards (21 in 2022, 19 in 2023, 18 in 2024), and 91 supported international documents (standards) on AI (20 in 2021, 22 in 2022, 24 in 2023, 25 in 2024). The industry specialization of these documents covers almost all spheres of human activity.
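As a quick arithmetic check, the sketch below (using only the program figures quoted above) confirms that each per-year breakdown sums to the stated total.

```python
# Arithmetic check of the planned document counts quoted above from the 2021-2024
# "Artificial Intelligence" standardization program [21].
planned = {
    "intersectoral AI standards": (44, [11, 21, 12]),             # 2022-2024
    "industry metrological standards": (58, [21, 19, 18]),        # 2022-2024
    "supported international documents": (91, [20, 22, 24, 25]),  # 2021-2024
}

for name, (total, per_year) in planned.items():
    assert sum(per_year) == total, f"{name}: {per_year} does not sum to {total}"
    print(f"{name}: {' + '.join(map(str, per_year))} = {total}")
```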

Rosstandart, guided by Article 9 of Federal Law No. 162-FZ of 29 June 2015 "On Standardization in the Russian Federation" (the Law), implements state policy in the field of standardization, determines the procedure for reaching a unified position of the various parties in the development of national standards and the procedure for their expert examination, organizes the development of standards, and so on. At the same time, under paragraph 25 of Article 9 of the Law, it may decide on the creation and liquidation of technical committees for standardization, which implies that interested legal entities, as well as public authorities, participate on a voluntary basis in the development and expert examination of standardization documents in the areas assigned to such a committee (Article 11 of the Law).

In the field of AI, TC 164 "Artificial Intelligence" acts as such a committee [22]. At the same time, it should be noted that standards are developed both by scientific organizations (GOST R 59898-2021, GOST R 58776-2019, etc.) and by commercial companies with a high degree of technical involvement (GOST R 59391-2021), as well as by companies whose competence raises potential questions (GOST R 70249-2022, GOST R 59385-2021).

In the author's view, the basic general problem of standardization is that the field of technical regulation of AI is influenced, directly or indirectly, by non-governmental organizations with the participation of interested companies, which embed their competitive advantages into standards and thereby weaken state control, including over a technology such as AI, which carries a whole spectrum of unpredictable consequences for society [23-27].

The second problem is the potential duplication and divergence of positions on AI regulation, given its multi-component base, self-learning capability and the possibility of its deployment on any data sample (military, medical, banking, government, etc.).

The third fundamental problem is the issue of competence in developing AI standards for the needs of the state segment, which remains in a vulnerable position when standards are issued by private-law organizations.

In the author's view, this set of problems can be addressed by introducing a number of initiatives at the level of public authorities: forming administrative barriers so that standards for the needs of public bodies are developed exclusively by state scientific institutions, and advancing a legislative initiative for additional verification of the standards proposed by TC 164 by the authority responsible for the development and regulation of AI, both to minimize the cross-border risks of excessive AI regulation and to avoid duplication of the regulatory provisions set out in the standards.

References
1. The President took part in the main discussion of the international conference on artificial intelligence: [website]. — URL: http://kremlin.ru/events/president/news/69927 (accessed: 27.02.2023).
2. Meeting of the Board of the Ministry of Defense: [website]. — URL: http://kremlin.ru/events/president/news/70159 (accessed: 27.02.2023).
3. UNESCO section dedicated to the actual problems of artificial intelligence: [website]. — URL: https://en.unesco.org/artificial-intelligence (accessed: 27.02.2023).
4. ISO/IEC TR 24029-1:2021 Artificial Intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview: [website]. — URL:https://www.iso.org/standard/77609.html (accessed: 27.02.2023).
5. ISO/IEC 22989:2022 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology: [website]. — URL: https://www.iso.org/standard/74296.html. (accessed: 27.02.2023).
6. ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations: [website]. — URL: https://www.iso.org/standard/56641.html (accessed: 27.02.2023).
7. OECD AI Observatory: [website]. — URL: https://oecd.ai/ (accessed: 27.02.2023).
8. Networking and Information Technology Research and Development Subcommittee. The National Artificial Intelligence Research and Development Strategic Plan. – 2016.
9. ISO/IEC JTC 1/SC 42 Artificial intelligence: [website]. — URL:https://www.iso.org/ru/committee/6794475.html (accessed: 27.02.2023).
10. Final Report National Security Commission on Artificial Intelligence: [website]. — URL: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf (accessed: 27.02.2023).
11. Excerpts from China’s ‘White Paper on Artificial Intelligence Standardization’: [website]. — URL: http://www.sgic.gov.cn/upload/f1ca3511-05f2-43a0-8235-eeb0934db8c7/20180122/5371516606048992.pdf (accessed: 27.02.2023).
12. Memorandum of Understanding Between SGCC and IEEE: [website]. — URL: https://www.ieee-pes.org/2018-mou-sgcc-ieee (accessed: 27.02.2023).
13. Cooperation Agreement between CEN and CENELEC and the Standardization Administration of China: [website]. — URL: https://www.cencenelec.eu/media/CEN-CENELEC/European%20Standardization/Documents/IC/Cooperation%20Agreements/cen-clc-sac_cooperationagreement_en.pdf (accessed: 27.02.2023).
14. De La Bruyère E., Picarsic N. China Standards 2035: Beijing’s Platform Geopolitics and “Standardization Work in 2020.” //Horizon Advisory China Standards Series. – 2020.
15. Artificial Intelligence Standardization White Paper (2018 Edition): [website]. — URL: https://cset.georgetown.edu/publication/artificial-intelligence-standardization-white-paper/ (accessed: 27.02.2023).
16. CEN and CENELEC launch new AI focus group : [website]. — URL:https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/ (accessed: 01.03.2022).
17. CEN-CENELEC Focus Group Report: Road Map on Artificial Intelligence: [website]. — URL:https://www.standict.eu/sites/default/files/2021-03/CEN-CLC_FGR_RoadMapAI.pdf (accessed: 01.03.2022).
18. Industry specification group (ISG) securing artificial intelligence (SAI): [website]. — URL:https://www.etsi.org/committee/1640-sai (accessed: 01.03.2022).
19. COM(2021) 206 final 2021/0106(COD) Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS: [website]. — URL:https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN (accessed: 01.03.2022).
20. EU Decision No. 768/2008/EC – UNECE: [website]. — URL: https://unece.org/fileadmin/DAM/trade/wp6/SectoralInitiatives/MARS/Slovakia_Oct10/Decision_768_2008_EC_ru.pdf (accessed: 01.03.2022).
21. PERSPECTIVE STANDARDIZATION PROGRAM in the priority area "Artificial Intelligence" for the period 2021-2024 (approved by the Ministry of Economic Development of Russia and Rosstandart of Russia): [website]. — URL: https://www.economy.gov.ru/material/file/28a4b183b4aee34051e85ddb3da87625/20201222.pdf (accessed: 01.03.2022).
22. Official page of TC 164: [website]. — URL: http://tc164.ru/page28499750.html (accessed: 01.03.2022).
23. Vinuesa R. et al. The role of artificial intelligence in achieving the Sustainable Development Goals // Nature Communications. – 2020. – Vol. 11. – No. 1. – P. 233.
24. Mihalis K. Ten technologies to fight coronavirus // Eur. Parliam. Res. Serv. – 2020. – Pp. 1-20.
25. Raso F. A. et al. Artificial intelligence & human rights: Opportunities & risks // Berkman Klein Center Research Publication. – 2018. – No. 2018-6.
26. Gusev A.V. et al. Legal regulation of artificial intelligence software in healthcare in the Russian Federation // Medical Technologies. Assessment and Choice. – 2021. – No. 1 (43). – Pp. 36–45. (In Russ.)
27. Kharitonova Y. S., Savina V. S. Tekhnologiya iskusstvennogo intellekta i pravo: vyzovy sovremennosti [Artificial Intelligence Technology and Law: Challenges of Our Time] // Vestnik Permskogo universiteta. Juridicheskie nauki – Perm University Herald. Juridical Sciences. – 2020. – Issue 49. – Pp. 524–549. (In Russ.)

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

A REVIEW of the article "Public Law Aspects of Technical Regulation of Artificial Intelligence in Russia and the World".

The subject of the study. The article proposed for review is devoted to public law aspects of "... technical regulation of artificial intelligence in Russia and the world". The author has chosen a special subject of research: the proposed issues are examined from the standpoint of international, administrative and constitutional law of Russia and other states, while the author notes that "Specialized authorities are being created at the level of various states, relevant strategic documents are being developed." Normative legal acts and AI standards of various states and international bodies relevant to the purpose of the study are examined. A certain body of Russian and foreign scientific literature on the stated issues is also studied and summarized; analysis and discussion with these opposing authors are present. At the same time, the author notes: "Decision No. 768/2008/EC of the European Parliament and of the Council of July 9, 2008 ... contains provisions that the specification of local standards in the field of AI is the target task of institutes in the field of standardization (CENELEC, ETSI, etc.). At the same time, product conformity assessment is the responsibility of the manufacturer only, which is also reflected in relation to AI (Articles 16, 17, 51, 60 and 86 of the Regulations). In addition, article 71 of the Regulations establishes the amount of a turnover-based administrative fine in the amount of 2%, 4% or 6% of the violator's annual turnover."

Research methodology. The purpose of the study is determined by the title and content of the work: "In Russia, the standardization of activities is entrusted to Rosstandart, which prepared together with the Ministry of Economic Development a standardization program in the priority area of Artificial Intelligence for the period 2021-2024"; "The risk-oriented approach proposed by the EC in relation to AI has an interesting approach in the field of standardization, but retains questions in the field of verification of AI by the manufacturer of products, and not by third parties, and these institutions are not public entities, but NGOs." The objectives can be designated as the consideration and resolution of certain problematic aspects related to the above-mentioned issues and the use of relevant experience. Based on the set goals and objectives, the author has chosen a certain methodological basis for the study. The author uses a set of private scientific and special legal methods of cognition. In particular, the methods of analysis and synthesis made it possible to generalize approaches to the proposed topic and influenced the author's conclusions. The most important role was played by special legal methods. In particular, the author applied the formal legal and comparative legal methods, which made it possible to analyze and interpret the norms of acts of international, Russian and foreign legislation and to compare various documents. In particular, the following conclusion is drawn: "At the same time, an interesting phenomenon is both the discrepancy at the level of GOST R 43.0.8-2017 and GOST R 59277-2020 documents regarding the definition of AI, and in terms of the practical implementation of standardization in the field of AI according to the above program," etc. Thus, the methodology chosen by the author is fully adequate to the purpose of the article and allows many aspects of the topic to be studied.

The relevance of the stated issues is beyond doubt. This topic is important in the world and in Russia; from a legal point of view, the work proposed by the author can be considered relevant, namely, the author notes that "... it is impossible not to note the emerging issues of positioning artificial intelligence (AI) in the context of technical regulation and the position of both national institutes in the field of standardization and public authorities in this area". An analysis of the opponents' work should follow here, and it does follow, and the author shows the ability to master the material. Thus, scientific research in the proposed field is only to be welcomed.

Scientific novelty. The scientific novelty of the proposed article is beyond doubt. It is expressed in the specific scientific conclusions of the author, for example: "... the problem is the potential duplication and divergence of positions regarding the regulation of AI, taking into account its multicomponent base, self-learning system and the possibility of its deployment on any data sample (military, medical, banking, government, etc.)." As can be seen, these and other "theoretical" conclusions can be used in further research. Thus, the materials of the article as presented may be of interest to the scientific community.

Style, structure, content. The subject of the article corresponds to the specialization of the journal "Administrative Law and Practice of Administration", as it is devoted to public law aspects of "... technical regulation of artificial intelligence in Russia and the world". The article contains an analysis of the opponents' scientific works; the author notes that a question close to this topic has already been raised, uses the opponents' materials, and engages in discussion with them. The content of the article corresponds to the title, since the author considered the stated problems and achieved the goal of the research. The quality of the presentation of the study and its results should be recognized as improved. The subject, objectives, methodology, research results and scientific novelty follow directly from the text of the article. The design of the work meets the requirements for this kind of work. No significant violations of these requirements were found, except for wordings such as "The specified regulation defines a horizontal approach" and "not to mention the initiative of the EU legislative initiative", etc.

The bibliography is quite complete and contains the publications and normative legal acts to which the author refers. This allows the author to correctly identify problems and put them up for discussion. The quality of the literature presented and used should be highly appreciated. The scientific literature used demonstrates the validity of the author's conclusions and influenced them. The works of these authors correspond to the research topic, are sufficient, and contribute to the disclosure of many aspects of the topic.

Appeal to opponents. The author conducted a serious analysis of the current state of the problem under study. The author describes the opponents' different points of view on the problem, argues for the position he considers more correct, drawing on the work of opponents, and offers solutions to the problems identified.

Conclusions, the interest of the readership. The conclusions are logical and specific: "The specified contour of problems, according to the author, can be solved ... by forming administrative barriers in order to ensure conditions for the development of standards for the needs of public authorities exclusively by state scientific institutions, as well as the formation of a legislative initiative for additional verification of the standards proposed by TC 164 by the relevant authority," etc. The article in this form may be of interest to the readership in terms of the author's systematic positions on the issues stated in the article. Based on the above, summing up all the positive and negative aspects of the article, I recommend publishing it, taking into account the comments.