On March 13, 2024, the European Parliament formally passed the Artificial Intelligence Act (hereinafter "the Act"). As the world's first comprehensive regulation in the field of artificial intelligence, the Act aims to govern the application of AI, focusing on rules concerning data transparency, oversight, and accountability, and to protect citizens' fundamental rights. Its passage, and its eventual entry into force, can be called a landmark event in AI governance. Recently, Carme Artigas, co-chair of the United Nations High-level Advisory Body on Artificial Intelligence; Philipp Hacker, Professor of Law and Ethics of the Digital Society at European University Viadrina in Frankfurt (Oder), Germany; Professor Shen Weixing; and Zhang Linghan, professor at the Institute of Data Law at China University of Political Science and Law, discussed the Act's scope of application, open-source governance, implementation difficulties, and related issues.
The Act's scope of application
Artigas: Typically, the EU issues directives or acts to provide general guiding rules for member states, which the member states then transpose into domestic law according to their own circumstances. Such a regulatory approach is relatively decentralized, however, and it has weakened Europe's competitiveness in the digital field: in areas such as service quality, internet infrastructure, and digital skills, EU member states each have their own rules. Regulation itself is not the problem; the problem is regulatory fragmentation. To address this, the EU launched the "Path to the Digital Decade" programme and regulations such as the Digital Services Act, providing unified legal guidance to all member states with a view to improving the cohesion and competitiveness of the European digital market.
Hacker: Because artificial intelligence itself covers a very wide range, and in order to accommodate enormous future developments in AI technology, the Act defines artificial intelligence broadly. Its definition is based on, and adjusts, the OECD's definition of artificial intelligence. It distinguishes AI from traditional software through the concept of inference, while excluding basic human-designed data-processing techniques (such as the automatic sum function in Excel), because such techniques merely execute instructions set by humans and have no capacity to learn or evolve on their own. Even so, the Act's coverage remains broad.
Shen Weixing: We can consider the Act's scope of application on two levels. First, in terms of the definition of artificial intelligence: the EU's definition is relatively broad, which inevitably gives the Act wide coverage. A notable feature of the Act is tiered management. AI systems are assessed and divided into four levels — minimal risk, limited risk, high risk, and unacceptable risk — with a differentiated regulatory approach for each level, which avoids a one-size-fits-all problem. It should also be noted, however, that if the definition of AI itself is construed too broadly, the result may still be over-regulation, which is not conducive to the development of the AI industry.
Second, the Act has extraterritorial effect. Given the EU's economic scale, companies in both China and the United States will need to comply with it. The vitality of long-arm jurisdiction depends not only on national strength or on legislating first; more important is whether it is consistent with the requirements of economic development, technological development, and human development. How much influence the Act can have globally depends on whether it can satisfy countries' shared values and their needs in industrial development, social life, and technological progress. If it can meet these needs, then there is no so-called "Brussels effect" at work; rather, a greatest-common-denominator consensus has been found.
In addition, the Act does not apply to AI models that are at the R&D stage, have not been placed on the market, or are used solely for scientific research. On this point the legislative positions of China and the EU are consistent.
Artigas: The Act's definition of "artificial intelligence" has gained broad consensus. A large number of advanced technologies and tools do fall within the Act's jurisdiction, but most belong to low-risk categories of application and will therefore not be much affected.
Balancing regulation and innovation
Artigas: The key to open source is transparency: models and code must be opened so that errors in the software can be detected and corrected in time. The Act assigns different responsibilities to participants across the entire AI value chain, including developers/providers of open-source models and those who train on and use the data. Although the open-source community encourages disclosure of various parameters, in practice the consistency and transparency of such disclosure remain insufficient. Enhancing transparency in the AI field and improving the associated accountability mechanisms is very important. AI's impact on the economy and society is plain to see; the relevant management systems should therefore be improved, rather than letting development proceed without restriction.
Hacker: Open-source AI models are a double-edged sword. On the one hand, they help promote competition and scientific progress; on the other, if maliciously abused by bad actors, they can cause enormous damage. Although many existing models and technologies are not as advanced as commonly thought, they still have certain capabilities, and if such capabilities fall into the hands of terrorists or others like them, the negative effects could be severe. As the capabilities of AI models increase, it may become necessary to prevent the public release of specific models and to mandate access controls. Striking a balance between regulation and encouraging innovation is quite challenging, but guarding against potential threats such as terrorism is quite necessary. At present, the Act's exemptions for open-source models are very limited; future regulation should pay more attention to higher-performing open-source models.
Shen Weixing: Compared with the General Data Protection Regulation (GDPR), the Act takes a relatively lenient position, mainly reflected in two points. The first is that the Act clearly provides that open source can be exempted from certain obligations, which is correct. In the world of tangible things, property rights have already undergone a degree of socialization; in the world of intangibles, the definition of rights should be even less absolute. All rights should serve social progress rather than mere exclusivity, and freely disclosing one's source code and models helps scientific research and technological innovation. Of course, exemption depends on the specific circumstances. Under the Act's standards, in application scenarios rated "high risk" and above, even a free open-source model still bears corresponding responsibility once damage is caused. China's Artificial Intelligence Law (Scholars' Draft) likewise provides that, absent intent or gross negligence, the responsibility of ordinary free open-source models may be appropriately reduced.
The Act will continue to be refined
Artigas: The Act, together with the General Data Protection Regulation and the Regulation on harmonised rules on fair access to and use of data (the Data Act), belongs to a broader legal framework. Within this framework each act has its own focus; the most important role the Act plays is regulating high-risk AI systems. Under the Act, EU member states must determine the risk level of an AI system and handle it accordingly. Systems at the "unacceptable risk" level are directly prohibited. "High risk" systems — those that may affect safety or fundamental rights in fields such as education, employment, justice, and public health — must undergo conformity assessment both before and after entering the market. "Limited risk" systems need only meet light registration and transparency requirements, and "minimal risk" systems face no specific requirements. It can therefore be said that the core of the Act is to regulate technological applications and avert risk. The Act bans AI systems at the "unacceptable risk" level and tightly restricts "high risk" systems, while attaching law-enforcement exemption clauses. I believe that through these provisions the EU conveys an important signal: even where something is technically feasible, these two categories of risk must not be allowed to materialize — using biometric technology to categorize people is one example. At the end of 2022, the release of ChatGPT challenged the drafting of the Act: how can regulation remain confined to applications of a technology without restricting the technology itself? ChatGPT is built on a large language model; the Act does not restrict large-language-model technology as such, but requires it to comply with transparency requirements.
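The four-tier logic described above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names and obligation summaries are paraphrased from the discussion, not quoted from the Act's legal text.

```python
# Illustrative sketch of the Act's four-tier, risk-based logic as described
# in the discussion above. Tier names and obligations are paraphrased,
# not quoted from the Regulation itself.
OBLIGATIONS = {
    "unacceptable": "prohibited (narrow law-enforcement exemptions aside)",
    "high": "conformity assessment before and after market entry",
    "limited": "light registration and transparency requirements",
    "minimal": "no specific requirements",
}

def obligations_for(tier: str) -> str:
    """Return the regulatory response for a given risk tier."""
    try:
        return OBLIGATIONS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(obligations_for("high"))
# -> conformity assessment before and after market entry
```

The point of the structure is that obligations attach to the tier, not to the technology: reclassifying a use case changes its duties without any change to the underlying model.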
The version of the Act finally passed by the European Parliament scales back many of the earlier drafts' requirements on general-purpose AI models, because some situations had not previously been considered. A general-purpose AI model does not itself belong to the high-risk tier; it does not directly endanger justice, governance, or human health and safety. But when the number of people using such a model is large, it may pose a threat. The Act must strike a balance among many parties' needs, facing pressure from the market, governments, social groups, and the public all at once. How to reconcile protecting citizens' fundamental rights and interests, promoting innovation and free development, and avoiding improper regulation is a question legislators will need to keep considering.
With little prior experience to draw on, the Act is not perfect, but it is a future-oriented piece of legislation that uses an innovative legislative technique: the normative elements are defined in the main text, while the specific details of the high-risk fields are listed in the annexes. Just as risk levels are not fixed, these details can be adjusted as time and circumstances change, allowing the Act to adapt to future developments. The Act aims to remain practical and dynamic while controlling the various high-risk AI systems as far as possible. Over the next two years, more details are expected, more practical factors will be taken into account, and the Act will be further improved.
Hacker: The Act is currently weakest in its treatment of general-purpose AI models and cannot achieve the goals it sets there. The Act adopts a tiered governance approach: minimum requirements apply to basic general-purpose AI models, while additional requirements — model evaluation, risk assessment, and so on — are imposed on models that are more important, harder to control, have greater social impact, and carry systemic risk. The division is drawn by computing power: if the cumulative amount of computation used to train a general-purpose AI model exceeds 10^25 floating-point operations, it is deemed to present systemic risk. I see two problems with this approach. First, the threshold is high, and AI models are trending toward smaller but stronger designs; some models fall below the threshold yet still carry systemic risk, which will challenge those enforcing AI governance. Second, the additional requirements imposed on systemic-risk models may not be effective. The current additional obligations cover cybersecurity, incident reporting, risk assessment, and the like, but they lack content oversight, which is very important. By comparison, China's AI governance has not neglected keeping model outputs consistent with social mores and values. The EU should also consider adding content oversight to reduce hate speech, defamation, and fake news.
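As a rough back-of-the-envelope illustration of the 10^25 FLOP threshold discussed here, total training compute for a dense transformer is often approximated as about 6 x parameters x training tokens — a common community heuristic, not anything prescribed by the Act — and the result can be compared against the threshold. The model sizes below are hypothetical.

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP threshold for
# systemic-risk classification of general-purpose AI models.
# The 6 * params * tokens formula is a widely used heuristic for dense
# transformer training compute, not a rule from the Act; the example
# model sizes are hypothetical.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's threshold."""
    return training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 7-billion-parameter model trained on 2 trillion tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOPs -- well under the threshold.
print(presumed_systemic_risk(7e9, 2e12))    # False
# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
# 6 * 1e12 * 10e12 = 6e25 FLOPs -- over the threshold.
print(presumed_systemic_risk(1e12, 10e12))  # True
```

The arithmetic makes Hacker's first objection concrete: most models trainable outside a handful of large labs sit orders of magnitude below 10^25 FLOPs, so compute alone is a coarse proxy for systemic risk.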
Artigas: In response to Professor Hacker: I think that although the tiered governance approach for general-purpose AI models uses 10^25 floating-point operations as its indicator, it can nonetheless distinguish super-large models such as the one behind ChatGPT from other models. The indicator's main focus is energy consumption; we do not want competition in the AI field to keep consuming vast amounts of energy. Technology giants can easily bear all the costs and obligations — the difficulty is how to help small and medium-sized enterprises meet the compliance requirements.
Hacker: The threshold in the tiered governance approach for general-purpose AI models relates to computing power, and computing power relates to energy consumption. I am glad to see that the Act requires more information to be provided on this, but the problem now is that the threshold is too high. Climate change is a global problem, and the world must jointly respond to the growing demands of general-purpose AI models for energy and water resources and to the toxic substances they involve. "Privacy by design" is a concept worth borrowing: enterprises can attempt more sustainable design and effectively mitigate environmental risks.
The Act may see further adjustments in the future — for example, revising the list of high-risk fields to exempt some secondary applications with less impact on fundamental rights, health, or security. But to prevent enterprises from exploiting legal loopholes, such exemptions come with a reverse mechanism: if an AI system is used to collect data and profile people, or collects data on large numbers of people for complex inference, it cannot obtain an exemption. I see no problem with such provisions; the key and the difficulty of the next step lie in enforcement, which requires coordination at the global level and among industry peers.
The Act faces multiple challenges
Artigas: Currently, implementation of the Act draws on experience implementing the GDPR. The Act sets out several key implementation phases: prohibited AI systems must be withdrawn within 6 months of publication; general-purpose AI models must meet transparency requirements within 12 months; operators of the high-risk AI systems listed in Annex III must fulfil the corresponding obligations within 24 months, while for the high-risk systems covered by Annex I the requirement is 36 months. The Act establishes two kinds of regulatory bodies. National-level regulators are responsible for conformity assessment and for handling problems with high-risk systems, and each must create at least one AI regulatory sandbox. The AI Office is responsible for horizontal supervision of general-purpose AI models, defining thresholds, and the codes of practice; it is supported by an AI Board composed of member-state representatives, a scientific panel of scholars, and an external advisory forum. Enterprises should begin complying with the Act now: large enterprises must fully discharge their responsibilities within two years, and national regulators should help SMEs adapt to the Act through the regulatory sandboxes.
Hacker: To implement the Act on time and effectively, the relevant talent must be hired and detailed implementation guidance refined in good time. Coordination and cooperation between AI regulators and other existing regulators must also be considered, to avoid conflicts between regulatory bodies in different fields and at different levels. The Act mentions integrating unified compliance mechanisms, but not in sufficient detail; a more comprehensive coordination framework needs to be established. Most of the systems the Act covers were not previously subject to dedicated sectoral regulation, and enterprises can certify compliance through self-assessment.
Zhang Linghan: The other experts have mentioned the framework of tiered, classified governance. Since this risk-based governance framework was put forward by the GDPR, it has been widely accepted globally. China has likewise proposed building a classified and graded management system, for example in the Data Security Law of the People's Republic of China and the Global AI Governance Initiative. The approach has many advantages, including moving the point of governance forward and reducing regulatory and compliance costs. But I personally think that the risk governance of data, such as personal-information protection, differs from the risk governance of artificial intelligence. Since the rise of AI governance we have observed a trend: AI governance has gradually absorbed data governance and algorithm governance. If an AI risk-governance framework is to cover data and algorithm governance as well, it may face certain challenges.
One challenge is that risk-oriented governance presupposes effective risk assessment. Article 3 of the Act defines risk as the combination of the probability of harm occurring and the severity of that harm. I think this definition is very precise: a risk is meaningful only when weighed against the damage it causes and the cost of governing it. But whereas the GDPR was introduced against a backdrop of many and varied applications involving personal information and data, AI's period of development and application has been far shorter than that of personal-information processing, so when risk assessments are conducted, the quantity and quality of the available data are insufficient. Moreover, the share of technical factors in AI risk assessment is shrinking, while the influence of ethics, policy, and culture is gradually growing. The UN High-level Advisory Body on AI divides risks into risks to individuals, to groups, and to society. Different countries and institutions each have their own risk frameworks; such frameworks appear complete, but may in fact leave gaps.
Another challenge is how to distinguish degrees of risk. The Act's risk classification mainly focuses on application scenarios involving critical information infrastructure and basic human rights — for example, whether human health and safety are endangered, and whether there are serious adverse effects on fundamental rights, the environment, democracy, or the rule of law. These standards, however, remain quite elastic and fuzzy: rather than gauging how risky an AI system is, they gauge how important it is. In other words, an AI system may be very important without its degree of risk being very high. Some potentially high-risk systems may see limited application, yet it is precisely as applications increase that risk may fall. Employment assessment and driverless cars, for example, concern people's important rights and their life and health, and therefore become high-risk systems. But as such systems become ever more widely used and data grows ever more sufficient, the probability and severity of the harm they may cause will drop sharply. If a system is labelled high-risk from the outset and subjected to many restrictions, it can hardly gain the chance to reduce its risk through large-scale application. This is a contradiction that must be reconciled in the future development of AI and the governance of its risks.
The recently released China Artificial Intelligence Law (Scholars' Draft) does not adopt expressions such as "high-risk AI system"; instead it chooses expressions such as "critical artificial intelligence" and "artificial intelligence in special application scenarios," avoiding advance judgment or negative evaluation of a system while introducing dynamic assessment into its governance.
From the perspective of general-purpose AI, "general-purpose" refers to the possible scope of risk, while "high-risk" refers to the degree of risk. Many debates arose during the drafting of the Artificial Intelligence Law (Scholars' Draft): What is general-purpose AI? Does AI applied in two or three fields or scenarios count as general-purpose, or is it simply more powerful AI? The definition of general-purpose AI is itself quite vague. European scholars have also pointed out that a small model focused on a specific field may be more capable, and that the more widely a model is applied, the more its function in any one area will inevitably suffer. Therefore, whether general-purpose AI systems should be defined as high-impact systems, and whether that impact should be judged by its intensity or its breadth, also requires further research and discussion.
Shen Weixing: The Act has triggered competition in AI legislation among countries around the world. Many countries are weighing the best timing for AI legislation — when, why, and how to regulate AI. Usually, control is one of the main legislative motives, aimed at dispelling people's distrust of AI and creating trustworthy, responsible AI systems. To achieve this goal, legislation must strike a balance between promotion-oriented law and control-oriented law: promotion-oriented law struggles to produce significant effects, while control-oriented approaches may stifle industrial development and put great pressure on industry. Against this background, the Act will face a series of challenges in actual enforcement. First, its legal liability is set high; heavy penalties may chill the market and unbalance the relationship between industrial innovation and safety. Second, the greatest difficulty of AI legislation is the contradiction between legal stability and technological development; AI technology advances by the day and iterates continuously, so enforcement of the Act is bound to run into related problems. Finally, the dividing standards and boundaries of the risk classification are unclear; they need to be clarified, because they directly determine which obligations each actor bears. In short, every country faces the same question: can people's concerns about AI really be eliminated through legislation? I personally take a negative view. The right strategy is still "the one who tied the bell must untie it": similar to the "privacy by design" Professor Hacker mentioned, legal values should be embedded in advance in the design of AI products, achieving the peaceful coexistence and common development of humans and AI, rather than fixing problems after the fact through heavy-penalty regulation.
Hacker: I would add that the Act is not as bad as some claim; it has suffered exaggerated attacks. Its main content is to prohibit applications of AI that harm the public, and with or without the Act, responsible actors would not apply AI in areas harmful to humans. We therefore need to communicate it well; persuading people to comply with the Act is not difficult. As the Act comes into force, Europe also needs to increase investment in developing and deploying AI systems.
(Compiled and edited by Lian Zhixian)
All rights reserved by China Social Sciences Magazine; no reproduction or use without permission.