Category: human and computer
Automating Society 2020. Report from AlgorithmWatch.
06-11-2020

Here are quotes from the AlgorithmWatch report 'Automating Society 2020'.

This blog post is not meant to be an abstract of the comprehensive report; I simply quote some of the views I find interesting.

Dangers
Is it appropriate to use ADM systems (automated decision-making systems, i.e. algorithms and AI systems) to profile people or predict their lives?

predicting individual behavior is likely to be impossible (p. 258)


A study tested 160 AI systems on data from several thousand families:

The results, published in April 2020, are humbling. Not only could not a single team predict an outcome with any accuracy, but the ones who used artificial intelligence performed no better than teams who used only a few variables with basic statistical models. (p. 258)


Laws must be interpreted in light of each individual's specific situation. But AI systems and algorithms cannot do that:

“the legal text about financial aid gave social workers a great deal of room to manoeuvre, since the law was saying that you couldn’t generalise. When this law is converted to code, it becomes clear that social work has changed. By converting law to software, the nature of financial aid changes, as you can’t maintain the same individual assessments as before.”


“As far as we know, there aren’t yet any algorithms that take individual cases into account sufficiently to follow the law. Not when it comes to children with special needs, or any other kind of individual case.” (p. 244)


“any form of citizen scoring can lead to the loss of [the citizen’s] autonomy and endanger the principle of non-discrimination”, and “therefore should only be used if there is a clear justification, under proportionate and fair measures”. (p. 22)


There are problems with systems that are meant to assist humans:

This shows that, even if ADM systems are only being used to offer suggestions to humans, they greatly influence the final decision. (p. 186)


And there are problems with automated systems:

a distinction between completely and partially automated processes suggests that, after a certain point, increasing the level of decision automation is not simply a shift from less to more automation, but it brings about a qualitative shift to a different kind of process that has new implications for compliance with legal requirements. (p. 89)


In the end, caseworkers complained that their workload increased, as they must redo by hand many of the automated processes (p. 105)


But a 2014 study found that in 95% of the cases, they stuck to the automatic outcome.


many social workers feel it is a threat to their profession (p. 244)


It is an open question whether ADM systems should be deployed at all.

And while both institutions recognize the technical flaws that affected the performance of the ranking algorithm, they are also clear that the actual flaw is even more fundamental, as it involves the very rationale behind its deployment — according to the TAR, even contradicting the Italian Constitution and the European Convention on Human Rights. (p. 152)


The Buona Scuola algorithm failure, now enshrined in multiple rulings, should stand as a warning: automation can lead to very painful consequences when rushed and ill-conceived (p. 155)


Many countries and councils have abandoned ADM systems.

The dangers of surveillance and face recognition:

“These systems could deter people from exercising their rights and could lead them to modify their behavior,” he wrote. “This is a form of anticipatory obedience. Being aware of the possibility of getting (unjustly) caught by these algorithms, people may tend to increase conformity with perceived societal norms. Self-expression and alternative lifestyles could be suppressed.” (p. 258)



Pilots or experiments
If you search for the word 'pilot' in the AlgorithmWatch report, you get many results.

One of the main takeaways from the report is that, while neither AI nor ADM are used very broadly in any of the Nordic countries, local municipalities are running several different pilot studies. (p. 249)


Failures
But many of these pilots were put on hold or abandoned. The same goes for systems that had been in use for a long time.

In August 2020, the UK government’s Home Office abandoned an ADM system to determine the risk represented by visa applicants. (p. 278)


The Guardian reported that several councils had abandoned RBV systems after reviewing their performance (Marsh, 2019). (p. 279)


An even more important consequence is that, after the Ofqual algorithm fiasco, councils started to “quietly” scrap “the use of computer algorithms in helping to make decisions on benefit claims and other welfare issues”, the Guardian revealed. Some 20 of them “stopped using an algorithm to flag claims as ‘high risk’ for potential welfare fraud”, the newspaper wrote. (p. 280)


the ministry of education put the tool on hold (p. 55)


Shortly after that, however, the service was discontinued. (p. 91)


– a modular off-the-shelf IBM product that can be tailored for specific needs – which has been criticized for functioning inaccurately in Canada (p. 116)


As a result, the commission recommends major revisions or a complete shut-down (Lasarzik, 2019) of the software. (p. 116)


The project came to an end in August 2019 (p. 132)


The algorithm has since been discontinued (p. 152)


investigative journalist Milena Gabanelli concluded that errors might concern as many as 50% of taxpayers. (p. 153)


However, use of the algorithm was stopped about a week after it was introduced. (p. 55)


In Gothenburg, one case illustrates the failure of the automated system as the algorithm only considered the straight line distance to a school, and not the geography, or the actual time it takes to get to the school building by foot or car (p. 249)
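To make clear what "straight line distance" means here: it is the distance between two coordinates as the crow flies, which says nothing about roads, water, or actual travel time. Below is a minimal sketch in Java of how such a figure is typically computed (the haversine formula), using made-up coordinates; the Gothenburg system's code is not public, so this is only an illustration of the concept.

```java
// Illustration only: great-circle ("straight line") distance between two points,
// computed with the haversine formula. The coordinates are invented; this is not
// the Gothenburg system's code. Note that the result ignores roads, rivers, and
// the actual time it takes to travel between the two points.
public class StraightLineDistance {

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        final double earthRadiusKm = 6371.0;
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * earthRadiusKm * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // A home and a school may be close as the crow flies but far apart by road.
        double km = haversineKm(57.70, 11.97, 57.71, 12.00);
        System.out.printf("Straight-line distance: %.2f km%n", km);
    }
}
```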


As a result, the government decided to end its experiment with profiling the unemployed, and the system was finally scrapped in December 2019. (p. 186)


after testing it for nine months the municipality decided not to go on using it. (p. 232)


But burglaries fell in every Swiss canton, and the three that use Precobs are nowhere near the best performers. (p. 257)


A 2019 report by the University of Hamburg could not find any evidence of the efficacy of predictive policing solutions, including Precobs. No public documents detail how much Swiss authorities have spent on the system, but Munich paid 100,000 euros to install Precobs (operating costs not included). (p. 257)


another study looked at the false positives, individuals labeled dangerous who were in fact harmless, and found that six out of ten people flagged by the software should have been labeled harmless. (p. 257)


only a quarter of the prisoners in category C committed further crimes upon being released (a false-positive rate of 75%) and that only one in five of those who committed further crimes were in category C (a false-negative rate of 80%) (p. 258)
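To see how such figures are arrived at, here is a small worked example in Java. The counts are hypothetical, chosen only so that they reproduce the percentages in the quote; the report does not publish the underlying numbers.

```java
// Hypothetical counts chosen only to reproduce the quoted percentages;
// the report does not give the real numbers behind them.
public class RiskCategoryRates {
    public static void main(String[] args) {
        int inCategoryC = 100;             // prisoners flagged as high risk (category C)
        int inCategoryCWhoReoffended = 25; // "only a quarter" of category C reoffended
        int totalReoffenders = 125;        // so that category C contains "one in five" of all reoffenders

        // Share of flagged prisoners who did NOT reoffend (the quote's "false-positive rate")
        double falsePositiveRate =
                (inCategoryC - inCategoryCWhoReoffended) / (double) inCategoryC;

        // Share of reoffenders who were NOT flagged (the quote's "false-negative rate")
        double falseNegativeRate =
                (totalReoffenders - inCategoryCWhoReoffended) / (double) totalReoffenders;

        System.out.printf("False-positive rate: %.0f%%%n", falsePositiveRate * 100); // 75%
        System.out.printf("False-negative rate: %.0f%%%n", falseNegativeRate * 100); // 80%
    }
}
```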


the pressure forced the company to switch off the system and declare that it had no further plans to use any form of face recognition technology in the future. (p. 281)


Around the world there are examples of failing algorithms. I will mention:
Australia's 'Robodebt', Canada's 'SAMS', and the USA's 'MIDAS'.

It is a mystery that the EU Commission, in its 2020 'White Paper on Artificial Intelligence', writes:

It is essential that public administrations, hospitals, utility and transport services, financial supervisors, and other areas of public interest rapidly begin to deploy products and services that rely on AI in their activities. (p. 9)


AlgorithmWatch points out:

Throughout the whole document [the white paper], risks associated with AI-based technologies are more generally labeled as “potential”, while the benefits are portrayed as very real and immediate. (p. 18)


Finally: I am not a technophobe. I have used several self-developed Java programs to write this blog post.
When such a program does not work as it is supposed to, the error appears clearly during the execution of the program. I can then find the error (the bug) in the code and fix it.
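A minimal Java illustration of what I mean (not one of my actual programs): when a conventional program fails, it fails visibly, and the stack trace points at the line that has to be fixed.

```java
// Minimal illustration, not one of the blog's actual programs:
// a bug in a conventional program shows itself at runtime, and the
// stack trace names the class, method, and line where it happened.
public class DivideDemo {

    static int average(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum / values.length;   // bug: division by zero for an empty array
    }

    public static void main(String[] args) {
        // Calling with an empty array throws java.lang.ArithmeticException,
        // and the printed stack trace points straight to the faulty line,
        // so the bug can be found and fixed in the source code.
        System.out.println(average(new int[0]));
    }
}
```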

ADM and AI systems are different. Many of the errors in their output cannot be seen. And even if you must admit that there is an error, you cannot debug the code and fix the fault. In a way, you must have blind trust in AI systems.

I have a degree in theology. I have only blind trust in God. And that trust tells me to care about all humans, especially the weak and vulnerable.

