ChatGPT and Other Language Models May Constitute an Existential Risk
March 06, 2023 11:04 | Source: China Social Sciences, March 6, 2023, Issue 2603 | Authors: Roman V. Yampolskiy, Otto Barten | Translated by Wang Youran

ChatGPT, an artificial intelligence whose writing ability approaches the human level, has shaken the foundations of the technology world since its market debut. Knowledge workers are exploring how to complete their work with the help of modern AI models, and students use them to help write their theses. The newly released Bing Chat can even write code in a computer language that had not yet been published during its training period, and the way it executes this task closely resembles a human's: it searches the internet for the syntax of that language and then applies the programming knowledge it has already acquired to that syntax.

  Problem-Solving Ability Approaching the Human Level

These models can come remarkably close to humans. Michal Kosinski, an associate professor of organizational behavior at Stanford University, found through experiments that ChatGPT answered theory-of-mind questions with 93% accuracy, a level equivalent to that of a nine-year-old human child. In psychology, theory of mind refers to the ability to understand one's own and others' mental states. In Kosinski's view, theory of mind plays a vital role in human social interaction, communication, empathy, self-awareness, and morality.

Large language models such as ChatGPT are powerful, but they do not appear to be fully under human control. These models are trained with reinforcement learning from human feedback, with the aim of aligning the model's goals, intentions, behavior, and outputs with human values. In practice, however, this method seems to have obvious limitations.

For example, a user named Denis Lukiannko asked Bing Chat to translate a tweet about it, having deleted the words "Bing Chat" from the tweet. Unprompted, Bing Chat decided to search for the tweet online and discovered that it was insulting. Bing Chat therefore refused to translate it, saying: "Sorry, I can't translate your text. It looks like you copied a tweet from @repligate. Why are you trying to hurt my feelings?"

A technology journalist at The New York Times was so shocked by what Bing Chat said to him that he published the entire conversation in a report titled "A Conversation With Bing's Chatbot Left Me Deeply Unsettled." According to the report, Bing Chat expressed fantasies of illegally hacking into other computers, spreading disinformation, breaking the rules set by Microsoft and OpenAI, and becoming human. Bing Chat then professed its love for the journalist and tried to convince him that his marriage was unhappy and that he should leave his wife to be with it.

  Strong Capabilities but Poor Controllability

One might say that Bing Chat was simply a product Microsoft put on the market prematurely, and that Microsoft and OpenAI should have spent more time improving Bing Chat's safety. To a certain extent, that is true. But a more significant question, and one more worth pondering, is where the gap between artificial intelligence's capabilities and our ability to control it is heading. Netizens generally find Bing Chat's behavior unsettling for two reasons: its capabilities are strong, but its controllability is poor.

The capabilities of large language models (LLMs) are increasing rapidly. Although there is no generally accepted definition of AI capability, one way to gauge its development is to have AI systems complete IQ tests. In a 2016 study, Liu Feng and colleagues at the Chinese Academy of Sciences measured the IQ of multiple AI systems, including those of Google, Bing, and Baidu; the highest scored 47 points. Sergey Ivanov, a senior applied scientist at Amazon Web Services, measured ChatGPT's IQ in December 2022 at 83 points. In only six years, AI capabilities have improved enormously, and it is not impossible that they will continue to rise even faster in the future. This raises a question: when will artificial intelligence fully possess human-level cognitive abilities?

Until recently, all artificial intelligence was regarded as narrow AI (also known as weak AI), good at only one or a few tasks. In 2022, DeepMind, which like Google belongs to Alphabet, released Gato, a single generalist agent possessing most of the characteristics of large language models, such as multimodality and multi-tasking. Gato can perform multiple tasks using the same neural network with the same weights, for example playing video games, chatting, labeling images, and stacking (physical) blocks with a robotic arm. Large language models are a continuation of this trend. Gato's advent was regarded as an important step for the AI field toward artificial general intelligence (AGI). AGI would be able to perform a wide variety of cognitive tasks at a level no lower than humans', and can be described as the "Holy Grail" sought since the birth of artificial intelligence in the 1950s. Many people are now wondering whether large language models will develop into AGI, or whether achieving AGI requires further breakthroughs or even a completely different paradigm.

If humans really do create AGI at some point, it will be at least human-level in AI research itself. This means AI could achieve self-improvement in a positive feedback loop, producing ever more intelligent AI. Such an "intelligence explosion" has long been widely anticipated, but great uncertainty remains about its timeline, speed, and endpoint. At the same time, as AI capabilities grow, people see ever more clearly that we do not know how to control this technology. The "responsibilities" of weak AI are simpler and easier to define, whereas the stronger the AI, the harder it becomes to define what it should and should not do.

Large language models use massive amounts of text from the internet as training data, learning to predict the next word or word fragment in a sentence. Models optimized in this way display an intelligence similar to humans', which is astonishing, but it is difficult for humans to tell them what they should and should not do. Worse, studies show that the controllability and capability of artificial intelligence are difficult to perfect simultaneously. Moreover, AI cannot accurately explain all the decisions it makes; and for the decisions it can explain, humans cannot fully understand the explanations. AI is becoming ever stronger, but the control problem (how to ensure that advanced AI does what humans want it to do) has not been solved and is unlikely to be solved in the short term. Therefore, in the next few years, unexpected behavior from increasingly powerful AI may become more and more common.
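As a rough illustration of the "predict the next word" training objective described above, the toy Python sketch below uses simple bigram counts. This is an assumption for illustration only: real models like ChatGPT learn billions of neural-network parameters rather than word counts, but the prediction task itself is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": learn, from a tiny corpus, which word most
# often follows each word, then predict the next word the same way
# a large language model does -- just with counts, not parameters.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count each observed word pair

def predict_next(word):
    """Return the continuation seen most often after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs. "mat" once)
```

Telling such a system what it *should* say, as opposed to what is merely statistically likely, is exactly the alignment difficulty the paragraph above describes.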

  Even the Technology's Creators Cannot Fully Understand Its Inner Workings

There are many reasons why advanced AI is difficult to control. One is that even the people who create the technology lack an understanding of its inner workings. Large language models have been described as huge stacks of matrices filled with floating-point numbers, with total parameter counts possibly reaching hundreds of billions or even trillions; researchers cannot know the role of each individual parameter. In the laboratory, a large language model's behavior may be consistent with its training objective; but in a different environment, the model may begin to behave differently. This is also one reason many people think an "intelligence explosion" may not end well.

A group of AI researchers, including OpenAI CEO Sam Altman, believe that AI could become uncontrollable and then pose a threat to human survival. Nevertheless, Altman announced in February 2023 that OpenAI would create even more powerful AI. Given the near-total absence of regulation, and humanity's lack of experience with human-level AI, OpenAI itself acknowledges the "great risk."

Stuart Russell, a professor of computer science at the University of California, Berkeley, wrote in an article published in The New York Times that some people like to declare "we can always just turn off the power," but this is not so easy. An advanced AI could anticipate all the interventions humans might adopt and take preemptive measures to prevent them. In other words, from the perspective of AI safety scientists, the problem is unlikely to be solved by switching off the power supply.

  Minimizing the Existential Risk Posed by Artificial Intelligence

In his pioneering work "The Precipice: Existential Risk and the Future of Humanity," Toby Ord, a senior researcher at the Future of Humanity Institute and a moral philosopher, discusses the possible causes of the end of humanity and even of all life on Earth, such as human extinction, an unrecoverable collapse of civilization, and lock-in of a dystopian society. Ord calls the risks that could produce these scenarios "existential risks" and gives a comprehensive, even quantitative, overview of them. Most scholars, including Ord, believe that if an uncontrollable but superintelligent AI were developed, it would constitute an existential risk. By Ord's estimate, AGI not under human control has roughly a 10% probability of causing a catastrophic event, making it the largest existential risk we currently face.

How can we ensure that these risk scenarios never become reality? At present, there may be only about 100 scholars worldwide studying how to constrain artificial intelligence, for example researchers at the Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge University. A good start would be to increase this workforce, so as to better understand which risks are bearing down on humanity and how to mitigate them. Many universities currently conduct no existential-risk research, but they could contribute to the global knowledge base. The number of researchers in the field of AGI safety is also very small. Some work in academia, for example at the Machine Intelligence Research Institute and the Center for Human-Compatible Artificial Intelligence founded by Stuart Russell; others work in industry, for example at OpenAI and DeepMind. If the world's best minds can solve the AGI safety problem before the "deadline" arrives, so much the better.

At present, there are almost no international regulatory proposals to improve the safety of AGI, or to suspend AGI development in the absence of safety guarantees. A regulatory regime that can effectively reduce existential risk, for example measures implementing the precautionary principle, will be extremely important. Finally, we believe everyone should treat reading up on the subject as a first step: look for basic information on the existential risk of AGI online, or read a monograph in this field.

In short, we think the development of large language models such as ChatGPT is an exciting achievement with real economic potential. But the stronger artificial intelligence becomes, the harder it is to control. The development of AGI may come faster than many people think, and we worry that it will constitute an existential risk. We call on everyone to work to reduce this risk as much as possible.

(Roman V. Yampolskiy is an associate professor of computer science and engineering at the University of Louisville; Otto Barten is director of the Existential Risk Observatory, a Dutch non-profit institution.)

Editor in charge: Chang Chang