Examining the "Epistemic Injustice" Brought by Artificial Intelligence
November 1, 2022 Source: China Social Science Journal, November 1, 2022, Issue 2520

At present, AI-driven technologies are being applied ever more deeply to our production and daily life, broadly affecting every aspect of human society and producing a series of real consequences. Attending to these consequences has always been an intrinsic part of ethical reflection on artificial intelligence: for example, the critical examination of data acquisition and data management in AI applications, of the misuse or abuse of the technology, and of the social power dynamics at work in its design, development, and use. We want to promote AI as a "technology for good" and to avoid serious harms to society and individuals. Most research, however, has overlooked the epistemic harms that AI inflicts on human beings in their capacities as interpreters of experience, as knowers with dignity, and as sources of evidence. Drawing on Miranda Fricker's concept of "epistemic injustice" and her classification of its forms, this article shows how artificial intelligence brings us hermeneutical injustice and testimonial injustice in the epistemic domain.

  What Is Epistemic Injustice

As a relatively new concept in epistemology, "epistemic injustice" was first proposed by Fricker in her book Epistemic Injustice: Power and the Ethics of Knowing. According to Fricker's definition, epistemic injustice is "a wrong done to someone specifically in their capacity as a knower." The term reveals a distinctive phenomenon: owing to social prejudice or stereotypes, a person's standing as a knower is unjustly diminished. More broadly, it refers to the unjust treatment, by society or by others, of the rational capacities of a certain group or individual. Epistemic injustice is a discriminatory injustice: because of discriminatory factors, a person is crowded out of epistemic activity or placed at an epistemic disadvantage. Discriminatory injustice differs from distributive injustice; although the latter is also important, Fricker holds that it is "not distinctively epistemic," because the fact that the goods being distributed are epistemic is largely incidental. Thus the injustice that is genuinely epistemic in character is discriminatory injustice.

Fricker clearly distinguishes two types: hermeneutical injustice and testimonial injustice. Hermeneutical injustice occurs when "a gap in collective interpretive resources puts someone at an unfair disadvantage when it comes to making sense of their social experiences." When collective interpretive resources are absent, that is, when the typical linguistic and conceptual resources for talking about and understanding certain experiences are lacking, those who undergo such experiences are deprived of the opportunity to articulate and understand them. Testimonial injustice refers to prejudice causing a hearer to deflate the credibility of a speaker's testimony, so that the speaker is no longer regarded as a reliable source of evidence or testimony. These prejudices are often systematic, pervading the whole of society, and testimonial injustice is their reflection in the epistemic domain. These theoretical resources provide a useful starting point for revealing the problem of epistemic injustice brought by artificial intelligence.

  Hermeneutical Injustice Brought by Artificial Intelligence

The epistemic opacity of artificial intelligence gives rise to hermeneutical injustice. Epistemic opacity is a concept proposed by Paul Humphreys to characterize the internal processes and properties of certain computational systems. Although Humphreys focused on computational models and simulations, the concept also applies to artificial intelligence. Because it employs a large number of complex programs and methods (such as machine learning, deep neural networks, and big data analytics), artificial intelligence exhibits marked epistemic opacity.

The epistemic opacity of artificial intelligence derives mainly from two sources. On the one hand, some of the technology is itself opaque. For example, deep learning and big data techniques are typically only partially intelligible and not fully accessible, making their results difficult to trace, or explicable only in part and only after the fact. On the other hand, social factors also produce epistemic opacity. For example, the algorithms and data behind AI are mostly held, or even monopolized, by large technology companies; to maintain market competitiveness and technical advantage, these companies usually treat their algorithms and data as trade secrets and withhold them from disclosure.

Artificial intelligence can thus be regarded as a complex opaque system: understanding its internal processes and outputs far exceeds the cognitive resources and abilities of human individuals. Moreover, people lack epistemic control over the validity of their experiences with it. This leaves people, at least those who are not technical experts, facing many difficulties in questioning its results or understanding its operation. Especially when AI produces negative effects, the affected groups conspicuously lack the appropriate conceptual resources to express, summarize, and understand the harms they have experienced, and hermeneutical injustice arises. When AI is applied in decision-making contexts, the hermeneutical injustice it brings becomes even more pronounced.

Artificial intelligence is profoundly changing people's decision-making processes and environments; the decisions of many commercial companies and governments now involve AI. For example, some technology companies use AI-assisted recruitment evaluation systems to screen candidates' resumes. Because of AI's epistemic opacity, applicants may find it difficult to understand the company's evaluation method or the precise reasons they were rejected. AI is also widely used in commercial lending, where credit assessments of applicants determine whether to lend and how much. Borrowers often cannot know which specific indicators were used to evaluate their creditworthiness and reach the decision. Thus, people affected by AI-assisted decisions may lack sufficient resources to make sense of the experiences they have undergone, and thereby lose the right to rebut, to appeal, or even to hold anyone accountable. In addition, AI-assisted decisions may contain or conceal political purposes or economic interests; because of their "black box" operation, users know very little about them, and sometimes even experts cannot gain access. Users are then clearly placed in an epistemically disadvantaged position, and may even suffer harms beyond the epistemic domain. This situation deserves attention and concern.

  Testimonial Injustice Brought by Artificial Intelligence

Although Fricker takes testimonial injustice to occur between a human hearer and a human speaker, artificial intelligence may, by degrading human testimony, become the source of a new epistemic injustice. Because it aggregates vast data on human identity and behavior, AI may bring about a new inequality of epistemic power: AI is now often thought to know us better than we know ourselves, which weakens support for our own credibility. For example, between behavior predictions based on AI and big data analytics and behavior predictions based on human testimony, many people may be inclined to regard the former as more reliable. In such cases, testimonial injustice is established by society as a whole.

Confronted with artificial intelligence, humans may unjustly downgrade their own rational capacities as knowers, believing the credibility of their testimony to be markedly lower than that of AI's judgments. Although AI based on data-mining techniques is usually thought to eliminate certain human prejudices, it may also replicate or reinforce many social prejudices, "naturalizing" them. Thus the "testimony" of AI is not necessarily more reliable than human testimony; we need to remain alert to its hidden prejudices and to eliminate the unjust treatment of human testimony. The testimonial injustice brought by AI adversely affects humans in the following three ways.

First, in certain important epistemic activities (such as knowledge production, the provision of evidence, theory verification, and decision and judgment), humans' capacity and standing to participate will be weakened, or humans will be excluded from the epistemic activity altogether. Human beings may no longer be regarded as appropriate epistemic participants. Moreover, this practice of privileging AI decisions while ignoring human testimony may cause us to miss opportunities to learn from human beings.

Second, because AI decisions are given more weight than human testimony, humans may gradually lose their confidence as epistemic subjects and their dignity as providers of knowledge. Human beings still possess epistemic capacities; yet if their testimony is questioned and ignored for long enough, they are likely to lose the motivation to communicate and to make sense of their own experiences, and finally to choose silence. Humans are rational subjects, and creating knowledge and spreading truth is not only a manifestation of human rational capacity but also a direct expression of human dignity and value. Unjustly weakening humans' standing as epistemic subjects runs counter to the "people-centered" concept of AI development.

Third, gaps in AI's data will exacerbate testimonial injustice against the groups concerned. The absence of data on particular social events or phenomena does not mean that such events or phenomena do not actually exist. AI makes predictive judgments about the future on the basis of existing data from the past, and so easily ignores unprecedented factors or reinforces existing biases. The testimony of some disadvantaged social groups may be excluded from AI algorithmic systems, and their interests and positions may be ignored as a result. Thus, without manual review of training data and moral-sensitivity assessment of output judgments, AI may become an irresponsible innovation.

In short, to examine the "epistemic injustice" brought by artificial intelligence is to call on more scholars and AI experts to attend to this problem and to actively explore ways of solving it. Although the direct harms discussed here occur in the epistemic domain, they also indirectly produce moral and social consequences. Ethical reflection on AI enriched by this epistemological perspective will therefore become fuller and more comprehensive, and will in turn promote the better development of artificial intelligence.

  (This article is an outcome of a 2022 Renmin University of China program for the cultivation of innovative talent)

(Author's affiliation: School of Philosophy, Renmin University of China)

Responsible editor: Zhang Jing
