Artificial intelligence regulation should be forward-looking
October 25, 2023, 10:03 | Source: Chinese Social Sciences Today, October 25, 2023, Issue 2758 | Author: staff reporter Fang Ling

AI research company OpenAI recently announced on its official website that it will hold its first developer conference in San Francisco on November 6. Nearly a year has passed since the release of ChatGPT at the end of November 2022. The "ChatGPT fever" that once made it one of the world's hottest topics has cooled, but the "cold reflection" it prompted is far from over. Where do the necessity and importance of regulating AI tools such as ChatGPT lie? What regulatory ideas and strategies urgently need to be explored and clarified?


James Hendler, professor at Rensselaer Polytechnic Institute in the United States and founding director of its Future of Computing Institute, is one of the creators of the Semantic Web and of knowledge graphs. In an interview with this reporter, he emphasized that it is not only ChatGPT that needs to be regulated; algorithms more broadly, as exemplified by AI algorithms, also need supervision. According to Hendler, the Technology Policy Council of the Association for Computing Machinery (ACM), which he chairs, issued a statement at the end of October 2022 titled "Statement on Principles for Responsible Algorithmic Systems," summarizing a series of regulatory opinions and recommendations.

An algorithm is a self-contained sequence of operations executed step by step, used to perform calculation, data processing, and automated reasoning tasks. Many AI algorithms are based on statistical models: through machine learning, they use data sets to "learn" or "train," while others are driven by analysis, that is, by discovering, explaining, and disseminating meaningful patterns. The underlying mechanisms by which AI and machine-learning systems reach specific decisions may be opaque. The causes include informational factors (the data used to train models and build analyses were collected without the data subjects' knowledge or informed consent), technical factors (the algorithm itself does not lend itself to simple explanation), economic factors (the cost of transparency is too high), competitive factors (transparency may conflict with trade-secret protection and could make manipulation of decision boundaries possible), and social factors (disclosing information may violate privacy expectations). This makes such systems difficult to understand, and makes it even harder to judge whether their outputs are biased or erroneous.

Even a well-designed algorithmic system can hardly avoid unclear or erroneous results, for example when the training data do not match the application's purpose, or when the conditions under which the algorithm operates have changed and the assumptions on which the system was designed no longer hold. Even a broadly representative data set cannot guarantee that a system is free of bias. Data-processing methods, user feedback loops, and the way a system is deployed may all introduce problems. In today's world, AI-based algorithmic systems are increasingly used to make and recommend decisions that profoundly affect individuals, organizations, and society. Many decisions in employment, credit, access to education, and other areas now involve machines, yet lack sufficiently thorough substantive review by humans. Although algorithmic systems embody the hope for a fairer, more inclusive, more efficient, and better society, that vision will not be realized automatically along with automation. Like decisions made by humans, decisions made by machines may be unsatisfactory and cause negative social effects such as discrimination. For these reasons, ensuring that algorithmic systems fully comply with established legal, ethical, and scientific norms, and that risk management is commensurate with the risks of the application so that problems get solved, becomes a matter of course.

The internal categorization of artificial intelligence matters, and different types of AI should not be conflated. Philipp Hacker, professor of law and ethics of the digital society at the European New School of Digital Studies (ENS) in Germany, and Andreas Engel, a researcher at Heidelberg University in Germany, likewise hold that large generative AI models (LGAIMs) are already used by millions of individual and professional users to generate human-level text, images, audio, and video, and are profoundly changing how people communicate, work, create, and conceive new content. Large generative AI models such as ChatGPT and Stable Diffusion are expected to affect every field of society, from commerce to medicine, from education to scientific research to art. They allow professionals to free up time for substantive work, and can help decisions be made more effectively and resources be allocated more fairly. At the same time, their enormous potential is accompanied by significant risks: they may, for example, produce fake news and harmful speech. Large generative AI models run at high speed, and the text they generate is grammatically accurate, enabling the mass production of seemingly rigorous but seriously misleading false information. Strengthening content moderation is therefore a very urgent requirement.

  Trust and expectations should not be too high

Mark Coeckelbergh, professor of philosophy of media and technology in the Department of Philosophy at the University of Vienna, Austria, told this reporter that AI tools such as ChatGPT raise a series of ethical and moral issues. They are still not reliable enough and produce many errors and misleading outputs. Yet some people place too much trust in these tools and hold excessive expectations of them. In addition, some worry that once AI tools develop to a very high level, they may take away employment opportunities. What does this mean for writers, journalists, copywriters, and others who write for a living? How should universities and primary and secondary schools deal with students using AI tools to write papers? Academic management in schools also needs to be strengthened. The education sector needs to be better prepared for the potential harm caused by AI technology. This does not mean AI tools should be banned, because they can also effectively support education and research. Policy support from governments and organizations is essential; governments, the public sector, and the private sector all need to act. The private sector can also take the initiative, but it must accept the supervision and management of governments at all levels. Enterprises, especially large ones, have a social responsibility to respond to the related challenges. In addition, voices on AI ethics in public opinion can help resolve these problems properly.

In Coeckelbergh's view, users' expectations of AI and the blind trust users place in AI tools deserve attention in related academic research, especially with regard to technology design, and analysis of AI's impact on the labor market is likewise an indispensable research theme. To formulate appropriate policies and regulations, it is not advisable to focus solely on ChatGPT; attention should be paid to all kinds of AI tools and to how they are changing our society. Coeckelbergh concluded that panic is unnecessary and that humans can act, because ChatGPT is not "evil" and AI can also play a positive role in social development. What people need to do is ensure that AI is developed and applied ethically, and put effective and transparent regulatory measures in place to promote its sound development and application. Hendler said at the end of the interview that there is no "magic" behind AI; it is simply one of many technologies that human society needs to regulate. The international community has, at least to some extent, responded through legislation to challenges such as climate change and the use of the oceans; people now need to actively explore mechanisms for the supervision and management of artificial intelligence, social media, and similar technologies in order to solve these problems.

  Regulatory measures should not lag behind

In the interview, Hendler elaborated on the reasons for viewing and addressing the issue of AI regulation from an international perspective. Asked whether existing AI regulatory measures can be applied well to supervising emerging tools such as ChatGPT, he admitted that, in his view, many regulatory measures for AI and social media so far lag behind, differ greatly between countries, and are not very efficient. Different laws and policies around the world need to be connected, and this makes supervision quite difficult. If the United Nations and other international organizations could play a greater role on specific issues such as algorithmic safety and privacy protection, that would be a good thing. "The challenge for OpenAI and other companies facing so many different rules is how to maintain innovation and openness while complying with them. TikTok is one of the most popular applications in the world, but it is caught in a dilemma between US and European law. The challenge it faces explains well why we need more international agreements," Hendler explained.

Coeckelbergh also noted the adverse effects that lag can have on AI regulation. He told this reporter that when regulatory rules were being drafted, for example when he himself participated in related work as a member of the European Commission's High-Level Expert Group on Artificial Intelligence, emerging tools such as ChatGPT had not yet been released. This points to a series of questions, for instance: does ChatGPT constitute a high risk? Such examples illustrate that the approaches to AI regulation currently adopted by the European Commission and other bodies may be incomplete. Coeckelbergh believes a very important point is that when tools such as ChatGPT are designed and developed, their limitations should be explained to users, including making clear that the tool is "a machine" rather than "a real person." In this way, expectations of the technology can be better managed. In addition, it is very important for governments to anticipate the impact of AI-based automation on labor, for example, how to cope with potential unemployment caused by AI tools.

Hacker, Engel, and their colleagues argue that AI regulation in the EU and many other regions mainly targets traditional AI models rather than large generative AI models. In other words, existing regulatory measures are insufficiently prepared for the rise of the new generation of AI models and may be unable to properly address the risks posed by their pronounced versatility and wide scope of application. The case of large generative AI models highlights the limitations of regulation aimed at specific technologies: it easily creates regulatory gaps, something legislators need to pay special attention to. The research of Hacker, Engel, and others also indicates that technology-neutral regulatory rules may work better, because technology-specific rules may become outdated even before, or soon after, they are officially promulgated. To enhance the trustworthiness of large generative AI models and ensure that they better serve the overall interests of society, they recommend formulating regulatory strategies in the following five respects.

First, distinguish among the different actors at the level of discourse: the developers who pre-train the model, the deployers who adapt the model to specific use cases, the users, the recipients of the output content, and the consumers of advertising or products generated with AI. This makes it possible to tailor more detailed regulatory obligations to the different participants along the AI value chain. Second, the rules of direct regulation should be adapted to the characteristics of large generative AI models. Regulatory laws should focus on specific high-risk applications rather than treating the pre-trained model as a whole. Expecting the developers of ChatGPT to foresee and resolve every problem in every possible high-risk scenario in which the tool might be used is unrealistic. For EU countries, the focus can be placed on concrete high-risk scenarios (such as scoring resumes in hiring decisions), so that those who deploy and use the tools comply with the transparency and risk-management obligations for high-risk systems under the EU's Artificial Intelligence Act (AI Act). Third, ensure that developers abide by the principle of non-discrimination and prevent biased output at the source; this applies especially when developers collect and curate training data from the Internet. Fourth, develop detailed transparency obligations. For developers and deployers, these concern the performance indicators of AI tools and the risks of harmful speech discovered during model pre-training; for users, they concern disclosing that content was generated with AI tools. Fifth, EU countries should extend the content-moderation rules of regulations such as the Digital Services Act, including notice-and-action mechanisms and comprehensive audits, so that they better cover large generative AI models. In short, content moderation should take place at the content-generation stage rather than as an after-the-fact remedy; otherwise, the negative effects of AI in spreading harmful speech and fake news may be difficult to curb.

Hendler told this reporter that the biggest challenge in studying AI tools such as ChatGPT is that the research object is a "moving target": algorithms are protected by many laws because of their status as industrial property, and they are in constant flux. This requires researchers to find a forward-looking method by exploring ethical problems that may arise in the future, so as to "walk ahead" rather than merely trying to fix problems that have already appeared. Cases from the field of bioethics show this is feasible: ethical dilemmas such as "human cloning" and "designer babies" were addressed by academia through solidarity and cooperation before they became technically achievable. Researchers need to communicate more with AI experts and anticipate possibilities that may have major social impact in the future, rather than trying to "catch up" with existing systems. For example, although the construction and deployment of quantum machines is a highly competitive field, research on applying quantum machines in ethically sound ways remains relatively scarce. "It is time to think about this problem, rather than waiting until they are commercially feasible to consider it," Hendler said.

Editor in charge: Zhang Jing
All rights reserved by China Social Sciences Magazine; no reproduction or use without permission.
