Harnessing and Using ChatGPT
March 6, 2023, 06:28 | Source: "China Social Sciences", March 6, 2023, Issue 2603 | Author: Reporter Wang Youran

ChatGPT has taken the world by storm. From software engineering, data analysis, finance, insurance, and consulting to marketing, media, law, medicine, and scientific research, everyone is talking about ChatGPT. Its diverse and powerful capabilities show how fast and how far artificial intelligence has developed, but people should also recognize that ChatGPT, like other cutting-edge technical tools, has limitations and potential risks.


Chen Zheyou, a professor at the School of Public Administration of the University of Nebraska Omaha, and Michael J. Ahn, an associate professor in the Department of Public Policy and Public Affairs at the University of Massachusetts, said that ChatGPT's strength lies in summarizing and synthesizing; it is not good at offering opinions or suggestions about new phenomena for which data are lacking.

Although ChatGPT's potential is enormous, heavy reliance on it may weaken people's memory of specific types of facts and their capacity for critical thinking. People can use ChatGPT to understand complex policies and obtain personalized public services more efficiently, but if they rely on it too heavily, important policy information outside its database will be overlooked.

ChatGPT can easily summarize the plots and famous lines of the ten most influential works of English literature and analyze their meanings, but reading such summaries is not the same as reading the originals. If these simplified, homogenized, popularized "abridged editions" become readers' only choice, the social consequences will be manifold, affecting how information and knowledge are communicated and understood. In education and research, ChatGPT may exacerbate plagiarism and stifle originality. Whether to permit or prohibit the use of ChatGPT in examinations and thesis writing, and how to incorporate ChatGPT into education and research policy, are questions that teachers and scholars must explore.

According to Alex C. Engler, a fellow at the Center for Technology Innovation of the Brookings Institution, an American think tank, because generative AI such as ChatGPT is so capable, many companies hope to put such technology to commercial use, for example in programming, video game environment design, and speech recognition and analysis. One key problem in commercializing generative AI is that the developer of a final product may not have sufficient understanding of, or control over, the technology R&D institution and its product. Upstream developers may not know how the original model will be used after it is modified or integrated into a larger system, while downstream developers often overestimate the capabilities of the original model; as a result, the probability of errors and unexpected outcomes in such collaboration rises.

When errors are not serious, for instance in product recommendation algorithms, or when human review is in place, the risk is acceptable. But when commercial AI applications involving multiple institutions expand into socio-economic decisions with far-reaching consequences (educational opportunities, recruitment, financial services, medical care, and so on), policymakers must examine the risks and weigh the interests. At the same time, when generative AI developers cannot determine the risks themselves, they should state this clearly in their terms of service and restrict questionable applications. If such cooperation is approved by regulators, upstream R&D institutions and downstream developers should share information such as operating procedures and test results to ensure that the original model is used properly.

Another type of risk posed by generative AI is malicious use, for example producing abusive speech, spreading disinformation, and mounting cyberattacks. Such risks are not new to the digital ecosystem, but the popularity of generative AI may worsen the problem of artificial intelligence being used maliciously.

  Preventing ChatGPT from perpetuating bias and inequality

Collin Bjork, an associate professor of science communication at Massey University in New Zealand, believes that ChatGPT and other generative AI tools will change how people write, but will not bring breakthroughs in language and content. For now, the writing such tools produce is homogeneous and dull, and it may also exacerbate prejudice and inequality. For example, according to a report by the US news website Business Insider, a high school teacher in New York City had ChatGPT create a lesson plan, but students disliked the resulting learning materials, calling them "biased and extremely boring."

For a long time, white men writing standard English have dominated journalism, law, politics, medicine, computer science, academic research, and other fields, producing far more text than anyone else. Although OpenAI has not disclosed its training data sources, the "standard" English works of white men are likely the main training material for large language models such as ChatGPT. Of course, ChatGPT can handle multiple languages, but the issue is not what it can do, but what its default settings are. ChatGPT comes "out of the box" set to a particular writing paradigm; to get it to generate non-standard text, one must give it specific instructions. The same problem appears in ChatGPT's "sister product" DALL·E 2, an AI image generation tool also developed by OpenAI. When asked to draw a "close-up of hands on a keyboard," DALL·E 2 generated pictures of several white men's hands; only after further prompting did it generate hands of other skin tones.

Some argue that ChatGPT and other automatic text generation tools can help people avoid missing academic and professional opportunities because their writing does not conform to the standard. In Bjork's view, people should not surrender to existing unfairness; in fact, writing itself can exacerbate unfairness. Alice Te Punga Somerville, a professor of English language and literatures at the University of British Columbia, has said that the dilemma of writing is that it cannot escape its historical and still ongoing violence. But what she advocates is not abandoning writing, but using it critically and creatively to resist oppression. Bjork recommends that people embrace linguistic diversity and the rich rhetorical possibilities it brings, and use new tools such as ChatGPT to write a fairer future.

Debora Nozza, an assistant professor of computer science at Bocconi University in Italy, said, "Our past research found that when a natural language processing model is asked to complete a neutral sentence whose subject is a woman, the model often uses harmful words; when the subject is a sexual minority, harmful words appear in as many as 87% of cases. ChatGPT has improved on previous generations in this respect, but if people ask the 'right' questions, it will still generate discriminatory content. We must find a way to solve this problem at its root."

 ChatGPT is not an authority on knowledge

Blayne Haggart, an associate professor of political science at Brock University in Canada, argues that, as a way of obtaining information, ChatGPT raises an important issue beyond making it more "convenient" to plagiarize papers or cheat on homework and exams: the truthfulness and reliability of the information it generates. It is worth asking why certain information sources or certain types of knowledge are considered more credible. The credibility of journalists, scholars, and industry experts comes from their investigating facts, providing evidence, and possessing professional knowledge; even though such people sometimes make mistakes, their professions remain authoritative. Opinion articles need not include extensive references the way scientific papers do, but a responsible author will still indicate the sources of information and viewpoints, and readers can verify those sources.

The content ChatGPT produces is sometimes very similar to that of human authors, making it hard for people to tell the two apart, so it is understandable that some treat it as a reliable source of information. In fact, the two work in very different ways. ChatGPT and similar language models learn from the context of massive training data; at the most basic level, they model the probabilistic relationships among word sequences, that is, they predict the probability distribution of possible continuations given the input text. For example, ChatGPT puts "grass" after "cows eat" and "rice" after "people eat" not because it has observed these phenomena, but because "cows eat grass" and "people eat rice" are the most probable combinations.
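To make this statistical principle concrete, the following is a minimal illustrative sketch: a toy bigram counter written for this article, not OpenAI's actual model, which is a neural network trained on vastly more data. It continues a prompt with whichever word most often followed the previous word in its tiny "training corpus," without observing anything about cows or people.

```python
# Toy bigram "language model" (illustration only, not ChatGPT's architecture):
# it predicts the next word purely from how often words followed one another
# in its training text, not from any knowledge of the world.
from collections import Counter, defaultdict

corpus = "cows eat grass . people eat rice . cows eat grass . people eat rice .".split()

# Count how often each word follows each preceding word.
bigram_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word: str) -> tuple[str, float]:
    """Return the most frequent next word and its estimated probability."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("cows"))  # ('eat', 1.0): "eat" always followed "cows" here
print(predict_next("eat"))   # ('grass', 0.5): chosen by frequency, not observation
```

The sketch only captures the principle quoted above: the model's notion of "truth" is whatever continuation was statistically most common in its data.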

"For the language model,Words are just words。ChatGPT and other newer models of model output is high,Makes humans believe that they understand what they are writing,But the fact is that they just generate the most likely sentences learned from training data。"Dirk Hovy, deputy professor at the University of Pokini, said。Hagart emphasize,For tools such as ChatGPT,"Real" is the real of correlation,People cannot really verify the source,Because their source is statistical facts。

Heather Yang, an assistant professor in the Department of Management and Technology at Bocconi University, notes that people sometimes treat ChatGPT as if it were one of their own kind, and in a sense this mindset is natural. Humans are social animals, which is one reason for humanity's success; interacting socially is a human instinct, even when the other party is a machine. Psychological research shows that people judge whether what a conversation partner says is credible by how confident it sounds and how fluent its reasoning is. Because ChatGPT presents a confident attitude and fluent expression, people mistakenly assume that what it generates needs no verification.

Haggart said that from the perspective of the political economy of knowledge production, placing excessive trust in ChatGPT poses a threat to the "edifice of science" and to society's entire information ecosystem. No matter how coherent, fluent, and seemingly logical the content ChatGPT produces, it should not be treated as equivalent to knowledge that humans have verified with scientific methods. Scholars and journalists must not incorporate ChatGPT-generated text into their own work without explanation, because readers are likely to be misled, confusing such output with verified understanding.

 Improving regulation is better than blind worry

Chen Zheyou and Michael J. Ahn also note that ChatGPT may unsettle the labor market, because producing the same amount of information will require less manpower. Not only are workers in highly repetitive, predictable, programmable jobs (such as administration and customer service) easily replaced; in the long run, occupations that demand higher levels of education and human intelligence, such as writing, editing, journalism, translation, legal services, and scientific research, may also be affected. Even in the computer industry, ChatGPT can write code in Python, C++, JavaScript, and other commonly used languages and identify errors in existing code, which forces people to question the future of software developers and programmers. Although it is unlikely to replace humans entirely in these positions, in the future only a small number of people may be needed to review, modify, and edit the code written by ChatGPT or similar AI tools, and demand for such jobs will fall significantly.

In the view of Steven Pinker, a professor in the Department of Psychology at Harvard University and a popular science writer, "Will humans be replaced by artificial intelligence?" is not a well-posed question, because there is no single-dimensional measure of intelligence that covers all intellectual activities. What people will see is a variety of AI systems suited to specific goals and specific scenarios, not an omnipotent "magic algorithm."

Gary Marcus, a professor of psychology and neural science at New York University, and Ernest Davis, a professor of computer science there, observed in a set of experiments that the content ChatGPT generates may contain prejudice and discrimination, may fabricate things out of thin air or sound plausible while being wrong, is unable to connect human thought processes to the characters in a story, and cannot determine the order of events in a story. "ChatGPT is a probabilistic program. If you run this set of experiments again, you may get the same wrong answers, different wrong answers, or correct answers," the two scholars said.

Pinker notes that people have rich imaginings about superintelligence, but existing AI uses algorithms that solve specific types of problems in specific scenarios. This means it is stronger than humans in some respects and weaker in others, and this will probably remain the case in the future. In addition, human beings have a strong demand for authenticity in intellectual products (such as literary works and news commentary); the relationship between the audience and a real human author is what gives these works their acceptability and standing.

"People's fear of new technology is always driven by the worst prediction scenario,No countermeasures that may be generated in the real world。"Ping Ke said。For large language models such as ChatGPT,People may form a stronger critical consciousness,Develop relevant ethics and occupational codes,R & D can identify new technologies that can automatically generate content。Artificial intelligence is a simulation of human intelligence,But its operation method、advantages and weaknesses are the same as human intelligence,This contrast may deepen our understanding of the essence of human intelligence。

Engler said that ChatGPT and other generative AI bring new challenges, and the best policy response is not yet clear. If R&D institutions disclosed more information about their development processes and explained how they manage risk, that could contribute to policy discussions; strengthening oversight of the developers of large AI models, for example by requiring them to bear information-sharing obligations and establish risk management systems, would also help prevent and reduce harm. In addition, the development of generative AI itself creates opportunities for more effective intervention, although the relevant research has only just begun. Engler pointed out that no intervention is universally applicable, but institutions that develop and commercialize AI should be required to bring more positive impacts to society.

Chen Zheyou and Michael J. Ahn believe that, overall, ChatGPT is a powerful tool that may transform how people handle information, communicate, work, and live. Providing contextualized information, understanding the intent behind users' questions, and meeting users' needs in a targeted way are its key advantages over traditional search engines, and an important breakthrough in AI technology. OpenAI is continuing to improve ChatGPT and upgrade the technology behind it, and other AI research and development institutions are building similar tools; at the same time, people must pay attention to the social impact of this new technology and guard against its risks.

Sam Altman, a co-founder of OpenAI, the company behind ChatGPT, has offered this comment on his "work of genius": "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. Right now, it is a mistake to rely on it for anything important. It is a preview of progress; we still have a lot of work to do on robustness and truthfulness." This comment may serve as a reminder of how ChatGPT should properly be treated at the present stage.

Editor in charge: Changchang