Category: political seriousness
Artificial intelligence (AI) affects our human rights both positively and negatively.
24-01-2019
I focus in particular on the complex digital systems (Artificial Intelligence, or AI) that have significant consequences for our lives and impact our human rights. The report 'Artificial Intelligence and Human Rights: Opportunities and risks' ( link ) from Harvard focuses precisely on

those technologies that are being used to make decisions with real-world consequences for the simple reason that these are the technologies that are most likely to have discernible human rights impacts. (Harvard report, p. 11)
I will gradually comment on and elaborate on each quote.

The report deals with the use of AI in the criminal justice system, the financial sector, healthcare, social media, the education system, and in hiring.

Introduction

Criminal Justice (risk assessments)
Finance (credit scores)
Healthcare (diagnostics)
Content Moderation (standards enforcement)
Human Resources (recruitment and hiring)
Education (essay scoring)
(Harvard report, p. 17)
With respect to human rights, AI can have both positive and negative consequences. So far they have mostly been negative.
Privacy is the single right that is most impacted by current implementations of AI. Other rights that are also significantly impacted by current AI implementations include the rights to equality, free expression, association, assembly, and work. Regrettably, the impact of AI on these rights has been more negative than positive to date. (Harvard report, p. 4)
Complex digital programs (AI) can process enormous amounts of data. And by cross-referencing seemingly harmless data from our lives, these programs can uncover our deepest secrets.
Foremost among these is that AI systems depend on the generation, collection, storage, analysis, and use of vast quantities of data with corresponding impacts on the right to privacy. AI techniques can be used to discover some of our most intimate secrets by drawing profound correlations out of seemingly innocuous bits of data. (Harvard report, p. 7)
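To make this concrete, here is a minimal sketch of the mechanism, with every feature and effect size invented by me for the demonstration: twenty signals that are each almost pure noise about a sensitive attribute combine into a clearly predictive score.

```python
# Sketch (my toy numbers): individually innocuous signals combine
# into a revealing score for a hidden sensitive attribute.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 20
sensitive = rng.binomial(1, 0.1, n)    # hidden attribute, e.g. a health condition

# Twenty "harmless" behavioural signals, each only weakly shifted by it.
X = rng.normal(0.3 * sensitive[:, None], 1.0, size=(n, k))

single = np.corrcoef(X[:, 0], sensitive)[0, 1]
combined = np.corrcoef(X.sum(axis=1), sensitive)[0, 1]
print(f"one signal alone:    r = {single:.2f}")    # near noise, about 0.09
print(f"all twenty combined: r = {combined:.2f}")  # clearly predictive, about 0.37
```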
People who use AI blindly trust the result it produces, as if it were objective knowledge. But the data AI is built on can be full of human prejudice (bias), and, remarkably, AI can itself develop biases that are worse than the ones we humans have.
AI can easily perpetuate existing patterns of bias and discrimination, since the most common way to deploy these systems is to 'train' them to replicate the outcomes achieved by human decision-makers. What is worse, the 'veneer of objectivity' around high-tech systems in general can obscure the fact that they produce results that are no better, and sometimes much worse, than those hewn from the 'crooked timber of humanity.' (Harvard report, p. 7)
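A minimal sketch of that mechanism, in a toy setting I invented: historical human decisions penalize a minority group, the group label is withheld from the model, and a harmless-looking proxy (a synthetic 'zipcode') lets the trained model reproduce the penalty anyway.

```python
# Sketch (my construction, not the report's): a model trained to
# replicate biased human decisions inherits their bias via a proxy,
# even though the group label is never given to it.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.binomial(1, 0.5, n)            # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)                # what *should* drive the decision
zipcode = rng.normal(group, 0.5)           # innocuous proxy correlated with group

# Historical human decisions: same skill threshold, plus a minority penalty.
human_yes = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Fit a logistic model on (skill, zipcode) only, by plain gradient descent.
X = np.column_stack([skill, zipcode, np.ones(n)])
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - human_yes) / n

pred = (X @ w) > 0
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
# The zipcode proxy lets the model reconstruct and perpetuate the penalty.
```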
Complex digital programs (AI) apply statistics and probability theory to an incredible amount of data, and it is astonishing what AI can find that we would never have seen ourselves, and that may well hold in aggregate; but for the individual person it does not work. https://isn.page.link/dVUg

While these systems are impressive in their aggregate capacities, they are probabilistic and can thus be unreliable at the individual level. (Harvard report, p. 11)
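A worked example of that caveat, with numbers I chose for illustration: a model that is 95% accurate on both cases and non-cases still produces mostly false alarms at the individual level when the predicted condition is rare.

```python
# Worked example (my numbers): impressive aggregate accuracy,
# unreliable for the flagged individual.
sensitivity = 0.95    # P(flagged | true case)
specificity = 0.95    # P(not flagged | not a case)
base_rate = 0.02      # only 2% of people are actual cases

flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / flagged
print(f"P(actually a case | flagged) = {ppv:.2f}")  # about 0.28
```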

Companies that develop complex digital programs (AI) boast (rightly?) about image and facial recognition, yet there are several examples of an AI system getting it badly wrong and believing a turtle is a gun. https://isn.page.link/dVUg
Deep learning computer vision systems can classify an image almost as accurately as a human; however, they will occasionally make mistakes that no human would make such as mistaking a photo of a turtle for a gun. (Harvard report, p. 11)

They are also susceptible to being misled by 'adversarial examples,' which are inputs that are tampered with in a way that leads an algorithm to output an incorrect answer with high confidence. (Harvard report, p. 11)
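This can be shown in miniature. Below is a fast-gradient-sign style attack on a toy linear classifier of my own construction (no real vision model involved): a perturbation capped at 0.25 per input coordinate flips the answer to the wrong class with near-total confidence.

```python
# Sketch (toy model, my construction): an adversarial perturbation
# flips a linear classifier's answer with high confidence.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(0, 1, 100)                  # weights of a "trained" linear model

def p_gun(x):
    return 1 / (1 + np.exp(-(w @ x)))      # model's P(class "gun")

x = rng.normal(0, 1, 100)
x -= w * (w @ x) / (w @ w)                 # place x near the decision boundary
x -= 0.05 * w / np.linalg.norm(w)          # nudge it to the "turtle" side
print(f"original:  P(gun) = {p_gun(x):.3f}")        # model says turtle

# Fast-gradient-sign step: move each coordinate a tiny amount in the
# direction that most increases the "gun" score.
eps = 0.25
x_adv = x + eps * np.sign(w)
print(f"perturbed: P(gun) = {p_gun(x_adv):.3f}")    # near 1.0: confidently wrong
print(f"largest change to any coordinate: {np.abs(x_adv - x).max():.2f}")
```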
In the US, AI is used in the criminal justice system, among other things, to assess the risk that a convicted person will commit new crimes if released. On the positive side, this means many people stay out of prison because their risk of reoffending is small. Except for minority groups: they are assessed incorrectly by AI. https://isn.page.link/dVUg
The use of automated [risk assessments] in the criminal justice system may reduce the number of individuals from the majority group who are needlessly incarcerated, at the very same time that flaws in the system serve to increase the rate of mistaken incarcerations for those belonging to marginalized groups. (Harvard report, p. 17)
Designers must be very careful when they develop the new digital programs (AI), and once these systems are in use they must be continuously evaluated for how they affect different groups in the population. Otherwise they create inequality.
Unless AI systems are consciously designed and consistently evaluated for their differential impacts on different populations, they have the very real potential to hinder rather than help progress towards greater equity. (Harvard report, p. 18)
A court case in Canada concerned the digital programs (AI) that predict the risk that a convicted person will reoffend if released. The Supreme Court held that when these systems are built and validated on data from majority groups, they may be unable to make valid predictions for minority groups.

Use of complex digital programs (AI) in the criminal justice system.

[As] the Supreme Court of Canada recently noted in Ewert v. Canada, risk assessment tools that are developed and validated based on data from majority groups may lack validity in predicting the same traits in minority groups. (Harvard report, p. 22)
Not only do courts lack the institutional capacity to review the operation of such tools (AI), but the objective veneer that coats the outputs of these tools obscures the subjective determinations that are baked into them. (Harvard report, p. 22)

these tools raise fundamental questions as to whether it is fair to treat a particular individual more harshly simply because they share characteristics with others who have reoffended. (Harvard report, p. 22)

it may be well-nigh impossible to design algorithms that treat individuals belonging to different groups equally fairly across multiple different dimensions of fairness (Harvard report, p. 23)
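A numeric sketch of why those dimensions collide, using toy numbers of my own (the formal impossibility results are due to Kleinberg et al. and Chouldechova, not this demo): give two groups different reoffending base rates, score everyone with a well-calibrated risk estimate, apply one shared threshold, and the error rates come apart.

```python
# Toy demo: a calibrated risk score with one shared threshold yields
# very different error rates for groups with different base rates.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
for name, base_rate in [("group A", 0.5), ("group B", 0.2)]:
    reoffends = rng.binomial(1, base_rate, n).astype(bool)
    signal = rng.normal(reoffends.astype(float), 1.0)  # same signal quality

    # Calibrated risk: Bayes posterior P(reoffend | signal) for this group.
    prior_odds = base_rate / (1 - base_rate)
    lr = np.exp(signal - 0.5)              # N(1,1)-vs-N(0,1) likelihood ratio
    risk = prior_odds * lr / (1 + prior_odds * lr)

    detained = risk > 0.5                  # the same threshold for everyone
    fpr = detained[~reoffends].mean()      # harmless people wrongly detained
    fnr = (~detained)[reoffends].mean()    # reoffenders wrongly released
    print(f"{name}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

Equalizing the error rates would require different thresholds for different groups, trading one notion of fairness for another: the criteria cannot all hold at once.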

Use of complex digital programs (AI) in the financial sector.

ZestFinance, one of the leading companies in this field in the US, considers over 3,000 variables in deciding whether to offer someone credit, including whether the applicant tends to type in all-caps, which apparently is correlated with a higher risk of default. (Harvard report, p. 29)
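As a toy illustration of what scoring on such a variable might look like: the all-caps signal is the report's example, but the features, weights, and scoring function below are entirely my invention (ZestFinance's actual model is not public).

```python
# Hypothetical credit scorer (invented weights, for illustration only).
import numpy as np

def credit_features(application_text: str, income: float, past_defaults: int):
    letters = [c for c in application_text if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    return np.array([caps_ratio, income / 100_000, past_defaults, 1.0])

w = np.array([-2.0, 1.5, -1.0, 0.5])       # all-caps typing counts against you

def approval_score(features):
    return 1 / (1 + np.exp(-(w @ features)))   # probability-like score

print(approval_score(credit_features("PLEASE APPROVE MY LOAN", 40_000, 0)))  # ~0.29
print(approval_score(credit_features("Please approve my loan", 40_000, 0)))  # ~0.73
```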

This AI-based approach has the potential to help members of historically marginalized groups, such as women and ethnic minorities, gain access to credit in the developed and developing world alike, thereby fostering financial inclusion and advancing the right to equality. (Harvard report, p. 29)

The use of AI in financial decision-making may even burden individuals' freedom of opinion, expression, and association by chilling individuals from engaging in activities that they believe will negatively affect their credit score. (Harvard report, p. 30)

Another lender in the U.S. reduced the credit limit of its customers who had incurred expenses at "marriage counselors, tire retreading and repair shops, bars and nightclubs, pool halls, pawn shops, massage parlors, and others". (Harvard report, p. 30)

Compared to the status quo credit scoring algorithms, the introduction of AI into the lending process is likely to have an overall positive impact on the ability of objectively low-risk borrowers to access credit. (Harvard report, p. 31)

Use of complex digital programs (AI) in healthcare.

AI-based diagnostic systems, especially the latest generation of systems that leverage artificial intelligence, are very likely to positively impact the right each of us enjoys to the highest attainable standard of health. Not only do AI-based diagnostic systems appear to meet or exceed the performance of human experts in diagnosing disease, they have the potential to be much more accessible than specialized human experts, who require years of training and experience to rival the accuracy of an AI. (Harvard report, p. 34)

Indeed, there is already evidence that the impressive performance of AI-based diagnostic systems is leading medical students to shy away from entering certain specialty fields, such as radiology, where AI systems routinely outperform humans. (Harvard report, p. 35)

the recent case from California of a 1970s-era serial killer who was identified based on the statistical analysis of DNA samples that his distant relatives submitted to a family ancestry website. (Harvard report, p. 35)

Use of complex digital programs (AI) in online media.

some commentators have noted that the largest online platforms, such as Facebook and Google, exercise more power over our right to free expression than any court, king, or president ever has (Harvard report, p. 38)

In view of the massive volume of content that the leading Internet platforms host, these companies now each employ thousands if not tens of thousands of individuals whose sole job is to determine the fate of content that has been flagged (Harvard report, p. 39)

Facebook's broader policy against the display of nudity on its platform drew controversy when it removed images of breast-feeding women and the infamous 'napalm girl' photograph from the Vietnam War from its platform. Facebook ultimately relented in the face of public pressure in both incidents, but that too raises further questions about the consistency of its application of policies that burden the right to free expression. (Harvard report, p. 39)

For example, Facebook permitted a U.S. Congressman to state his view that all radicalized Muslims should be 'hunted' or 'killed,' whereas it banned activists associated with the Black Lives Matter movement from stating that 'all white people are racist.' (Harvard report, p. 39)

These technologies are still in their infancy, and most simply work to identify potentially problematic content for a human reviewer to evaluate. (Harvard report, p. 40)

This has led skeptics, including Facebook CEO Mark Zuckerberg, to conclude that AI systems are not yet sophisticated enough to replace human reviewers (Harvard report, p. 40)

This, however, does not render AI technologies useless. The speed at which they can sift through content makes them a powerful tool to assist, rather than to replace, human reviewers by identifying content that appears to be suspect (Harvard report, p. 40)
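The assist-rather-than-replace pattern the report describes amounts to a routing rule. A minimal sketch, with thresholds and categories I made up:

```python
# Sketch of AI-assisted moderation triage (thresholds are invented).
def route(model_score: float) -> str:
    """model_score: classifier's estimate that the content violates policy."""
    if model_score >= 0.98:
        return "auto-remove"           # reserved for near-certain violations
    if model_score >= 0.60:
        return "human review queue"    # suspect content: a person decides
    return "leave up"                  # the vast majority of content

for score in (0.99, 0.75, 0.10):
    print(f"score {score:.2f} -> {route(score)}")
```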

these individuals are exposed to the very worst of humanity day in and day out - from child pornography to gruesome acts of violence. Content reviewers are disproportionately female, but reviewers of all genders suffer from depression, burnout, anxiety, sleep difficulties, and even from post-traumatic stress disorder at extraordinary rates. Using AI to lessen the psychological burden associated with this work could well have positive human rights impacts on a group of individuals who are often forgotten in conversations about how best to respond to problematic content online. (Harvard report, p. 41)

Use of complex digital programs (AI) in company hiring.

the veneer of objectivity that technology provides can be dangerous, because it obscures how AI often replicates human biases at scale. This is particularly worrying when AI is used to devise predictors of success that will determine hiring and advancement opportunities for future applicants and employees. (Harvard report, p. 45)

AI-based hiring systems may have a greater negative impact on the freedoms of association and expression than the current human-based system. (Harvard report, p. 46)

Use of complex digital programs (AI) in the education system.

The use of automated grading systems has the potential to positively impact the right to education in a number of different ways. (Harvard report, p. 49)

Relatedly, automating certain aspects of the grading of writing might free educators to spend more time focusing on higher-order teaching tasks, such as engaging with students' ideas and arguments. (Harvard report, p. 49)

On the flipside, there are serious concerns relating to the fact that these systems cannot understand what is written in the same way as human readers. Some systems might well be able to detect offensive content, but at least for the foreseeable future, artificial intelligence systems will not realistically possess the general intelligence that humans do which enables them to evaluate the validity of written material. (Harvard report, p. 49)

Consider, for example, that a famous essay by the renowned MIT linguist Noam Chomsky received a grade of 'fair' when it was fed into an automated grading system. (Harvard report, p. 50)

If students respond to the growing prevalence of automated grading systems by focusing on form and length to the detriment of style and substance, these technologies may be doing them a disservice. (Harvard report, p. 50)
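To see how that gaming would work, consider a deliberately naive grader in the spirit the report criticizes: it scores surface form (length, word variety, connectives) and never reads for meaning. The formula is entirely my own toy construction, not any real grading product.

```python
# Toy form-over-substance grader (my invention, for illustration).
def grade(essay: str) -> float:
    words = essay.lower().split()
    if not words:
        return 0.0
    length_score = min(len(words) / 300, 1.0)       # rewards sheer length
    variety_score = len(set(words)) / len(words)    # rewards varied vocabulary
    connectives = {"however", "therefore", "moreover", "furthermore"}
    style_score = min(sum(w in connectives for w in words) / 5, 1.0)
    return round(10 * (0.5 * length_score + 0.3 * variety_score
                       + 0.2 * style_score), 1)

filler = "moreover the aforementioned considerations furthermore demonstrate "
print(grade("AI affects human rights."))   # short and substantive: ~3.1
print(grade(filler * 50))                  # long, connective-laden, empty: ~7.1
```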

On balance, the rise of automated grading systems is likely to have a positive impact on the right to education, as these systems can potentially increase global access to at least some feedback on people's writing. In much of the world today, educational systems are simply too overburdened to provide the kind of individualized evaluation (Harvard report, p. 50)

It is heartening that many of the biggest players in developing AI have risk management systems in place that trigger human rights due diligence processes at all appropriate stages in the lifecycle of a technology. (Harvard report, p. 53)

The problem is particularly acute in AI systems which utilize machine or deep learning, such that the AI developer herself may not be able to predict or understand the system's output (Harvard report, p. 54)

In Europe, this challenge has been framed in part by the provisions of the General Data Protection Regulation, which requires some human involvement in automated decision-making and encourages the development of 'a right to an explanation.' (Harvard report, p. 54)

the relationship between artificial intelligence and human rights is complex. A single AI application can impact a panoply of civil, political, economic, social, and cultural rights, with simultaneous positive and negative impacts on the same right for different people. (Harvard report, p. 58)

We are heartened by the growing attention that human rights-based approaches to assessing and addressing the social impacts of AI have begun to receive. We view it as a promising sign that so many of the private enterprises at the forefront of the AI revolution are recognizing their responsibility to act in a rights-respecting manner. But the private sector cannot do it alone, nor should it: governments have a crucial role to play, both in their capacities as developers and deployers of this technology, but also as the guarantors of human rights under international law. (Harvard report, p. 58)