Category: human and computer
Articles with links about the dangers of facial recognition.
24-10-2019
Why is it not only fake people who spread fake news? The media do it too. Am I the only one with the feeling that Ritzau is the worst offender, possibly because their news items have to be so short that they have no time to investigate whether the news is true? It is strange, though, that other media are not more critical and only tell half the truth.

Here are articles with links about the dangers of facial recognition. This post was prompted by the news that the Copenhagen Police want access to facial recognition.

Københavns Politi vil have adgang til ansigtsgenkendelse - politiken.dk


Venstre and DF support the police's wish for facial recognition, while the government rejects that it is necessary.
In the future it should be possible for the police to use facial recognition when they monitor citizens via video cameras.
The government and its supporting parties will, however, not accommodate the Copenhagen Police's wish to be able to use facial recognition.
»I can well understand that the police want to use facial recognition, but as things stand we have no plans to allow facial recognition,« says legal affairs spokesperson Jeppe Bruus (S) to Berlingske.
Information mentions that facial recognition is used in England, but does not mention that several tests have shown the technology to be highly inaccurate and unfit for use.

Københavns Politi vil have adgang til ansigtsgenkendelse | Information


Facial recognition is becoming a more and more widespread tool among states all over the world - also in Europe.
In Britain, the police use facial recognition at football stadiums to find troublemakers during matches.


Face recognition police tools 'staggeringly inaccurate' - BBC News


The Metropolitan Police used facial recognition at London's Notting Hill carnival in 2016 and 2017 and at a Remembrance Sunday event.
Its system incorrectly flagged 102 people as potential suspects and led to no arrests.

In figures given to Big Brother Watch, South Wales Police said its technology had made 2,685 "matches" between May 2017 and March 2018 - but 2,451 were false alarms.
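
For readers who want to check the arithmetic: the quoted figures translate directly into a false discovery rate, i.e. the share of flagged "matches" that were wrong. A minimal Python sketch, using only the numbers quoted in this post (the function name and structure are mine, for illustration):

```python
# False discovery rate: the share of system "matches" that were wrong.
# All figures are the ones quoted above from BBC News / Big Brother Watch.

def false_discovery_rate(matches: int, false_alarms: int) -> float:
    """Fraction of flagged 'matches' that turned out to be false."""
    return false_alarms / matches

# South Wales Police, May 2017 - March 2018: 2,685 matches, 2,451 false.
swp = false_discovery_rate(matches=2685, false_alarms=2451)
print(f"South Wales Police: {swp:.1%} false")   # -> 91.3% false

# Metropolitan Police (Notting Hill / Remembrance Sunday): 102 flags,
# no arrests, i.e. every flag was a false alarm.
met = false_discovery_rate(matches=102, false_alarms=102)
print(f"Metropolitan Police: {met:.0%} false")  # -> 100% false
```

The same calculation applies to the figures quoted further down: 2,755 false positives out of 2,900 flags in the Cardiff trial is 95% false, and 8 accurate out of 42 flags in the Essex study means 81% wrong.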

Leicestershire Police tested facial recognition in 2015, but is no longer using it at events.

Because of the poor quality, it was identifying people wrongly. They weren't able to get the detail from the picture.


UK police use of facial recognition technology a failure, says report


The Met used facial recognition at the 2017 Notting Hill carnival, where the system was wrong 98% of the time, falsely telling officers on 102 occasions it had spotted a suspect.


Facial recognition is not just useless. In police hands, it is dangerous


We were invited to witness the Met’s trial of the technology at Notting Hill carnival last summer, where we saw a young woman being matched with a balding man on the police database.

It’s not hard to imagine the chilling effect its unrestricted use will have. Constant surveillance leads to people self-censoring lawful behaviour. Stealthily, these measures curb our right to protest, speak freely and dissent. They shape our behaviours in ways that corrode the heart of our democratic freedoms.


As face-recognition technology spreads, so do ideas for subverting it - Fooling Big Brother


A study by researchers at the University of Essex, published in July, found that although one police trial in London flagged up 42 potential matches, only eight proved accurate.


Admittedly from 2017, but no less relevant: because the facial recognition system (wrongly) decided that Steve Talley resembled the man who had committed a bank robbery, he lost his job, was divorced and became homeless. After that the media lost interest in him, and I cannot google how things have gone for him since.

10 Times Facial Recognition Technology Got It Really Wrong


This is probably one of the scariest incidents of a facial recognition system getting it all wrong. Steve Talley, a financial advisor from Denver, was falsely accused twice of holding up two banks. The FBI’s facial recognition system found similarities between Steve Talley and the man who robbed the banks. However, the charges were later dropped after the facial examiner failed to identify a mole on Talley’s right cheek, and a height analysis showed Talley was three inches taller. The arrests, however, cost Talley everything. He ended up losing his job and family, sustained injuries during the arrests, and is now homeless. He has filed a lawsuit for the damage and is seeking $10 million.


Facial recognition software is not ready for use by law enforcement – TechCrunch


And that software is only as smart as the information it’s fed; if that’s predominantly images of, for example, African Americans that are “suspect,” it could quickly learn to simply classify the black man as a categorized threat.

In the hands of government surveillance programs and law enforcement agencies, there’s simply no way that face recognition software will not be used to harm citizens.


Halt the use of facial-recognition technology until it is regulated


Scholars have been pointing to the technical and social risks of facial recognition for years. Greater accuracy is not the point. We need strong legal safeguards that guarantee civil rights, fairness and accountability. Otherwise, this technology will make all of us less free.
Even companies that sell cameras to the police do not want them used for facial recognition. The company Axon: the technology is not to be trusted and raises many ethical questions.

Opinion | A Major Police Body Cam Company Just Banned Facial Recognition - The New York Times


Axon, the company that supplies 47 out of the 69 largest police agencies in the United States with body cameras and software, announced Thursday that it will ban the use of facial recognition systems on its devices.
“Face recognition technology is not currently reliable enough to ethically justify its use,” the company’s independent ethics board concluded.


What is facial recognition - and how sinister is it?


Police trials have highlighted further shortcomings of facial recognition. A Cardiff University review of the South Wales trials found that the force’s NEC NeoFace system froze, lagged and crashed when the screen was full of people and performed worse on gloomy days and in the late afternoon because the cameras ramped up their light sensitivity, making footage more “noisy”.

During 55 hours of deployment the system flagged up 2,900 potential matches of which 2,755 were false positives. The police made 18 arrests using the system, but the Cardiff report does not state whether any of the individuals were charged.

The Welsh trial highlighted another challenge for facial recognition: lambs. [people who look completely ordinary, i.e. share many features with others]

While scanning the crowds at Welsh rugby matches, the NeoFace system spotted one woman on the South Wales police watch list 10 times. None of them were her.

A 2016 report from Georgetown Law’s Center on Privacy and Technology found that half of all Americans are in police facial recognition databases, meaning that algorithms pick suspects from virtual line-ups of 117 million mostly law-abiding citizens.
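
The size of that database matters more than it may seem. Even at a false match rate of one in 10,000 (the evaluation threshold cited in the WIRED article further down), a single search against 117 million faces can be expected to throw up thousands of innocent candidates. A back-of-the-envelope sketch, assuming every database entry is compared at that rate:

```python
# Expected false matches when one probe image is searched against a
# large database at a fixed false match rate (FMR).
# 117 million is the database size from the Georgetown Law report quoted
# above; 1-in-10,000 is the evaluation threshold cited in the WIRED
# article below.

database_size = 117_000_000
fmr = 1 / 10_000

expected_false_matches = database_size * fmr
print(f"~{expected_false_matches:,.0f} false candidates per search")  # ~11,700
```

In other words, even a "good" algorithm hands investigators a virtual line-up of more than eleven thousand wrong faces for every search.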

The campaign group Liberty has called for a complete ban on live facial recognition systems in public spaces, arguing that it destroys privacy and forces people to change their behaviour.


The Best Algorithms Still Struggle to Recognize Black Faces | WIRED


At sensitivity settings where Idemia’s algorithms falsely matched different white women’s faces at a rate of one in 10,000, it falsely matched black women’s faces about once in 1,000—10 times more frequently.
A one in 10,000 false match rate is often used to evaluate facial recognition systems.

It [NIST] also has consistently found that they perform less well for women than men, an effect believed to be driven at least in part by the use of makeup.

“White males ... is the demographic that usually gives the lowest FMR,” or false match rate.

“Black females ... is the demographic that usually gives the highest FMR.”
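
To make the tenfold gap concrete: at the threshold described above, the same surveillance workload produces ten times as many false matches for black women as for white women. A hedged sketch, where the two rates are the ones quoted above and the daily number of comparisons is a hypothetical figure of my own:

```python
# What a tenfold FMR gap means in practice. The two rates are the ones
# quoted above (Idemia, as tested and reported in the WIRED article);
# comparisons_per_day is a made-up illustrative workload, not a figure
# from the article.

fmr_by_group = {
    "white women": 1 / 10_000,
    "black women": 1 / 1_000,
}
comparisons_per_day = 100_000  # hypothetical

for group, fmr in fmr_by_group.items():
    print(f"{group}: ~{comparisons_per_day * fmr:.0f} false matches per day")
# white women: ~10 false matches per day
# black women: ~100 false matches per day
```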


In fact, several systems or algorithms would be needed, e.g. one for black people and one for white people.

As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias - The New York Times


a single algorithm cannot be applied to both groups [black and white people] with equal accuracy.

Others were more concerned with a provision that would allow the police to go beyond identifying violent crime suspects with facial recognition and allow officers to try to identify anyone for whom a “reasonable suspicion” exists that they could provide information relevant to an active criminal investigation.


But several systems would not solve the problem.


A dual-threshold system would not necessarily solve the problem, he added. That would require law enforcement authorities to make a judgment about each individual’s race and apply the appropriately tweaked facial recognition software — which would in turn introduce human bias.
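
The objection is easy to see in code: a dual-threshold system only works if someone first decides which group each face belongs to, and that classification is itself a human judgment. A minimal sketch, where all names and threshold values are hypothetical:

```python
# Hypothetical dual-threshold matcher. The threshold values and names
# are made up; the point is the `perceived_group` argument, which a
# human operator would have to supply - and that judgment call is
# where bias re-enters the system.

THRESHOLDS = {
    "group_a": 0.80,  # looser cutoff for the lower-FMR group
    "group_b": 0.90,  # stricter cutoff to compensate for a higher FMR
}

def is_match(similarity: float, perceived_group: str) -> bool:
    """True if the similarity score clears the group-specific cutoff."""
    return similarity >= THRESHOLDS[perceived_group]

# The very same face pair flips between "match" and "no match"
# depending on how a human labels the person being scanned.
print(is_match(0.85, "group_a"))  # True
print(is_match(0.85, "group_b"))  # False
```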


New York’s mass face recognition trial on drivers has been a spectacular failure - MIT Technology Review


the “initial period for the proof of concept testing at the RFK for facial recognition has been completed and failed with no faces (0%) being detected within acceptable parameters.” That’s no faces accurately identified. Oops. Despite the failure, more cameras are going to be positioned on other bridges and tunnels, according to a spokesperson.


The growing backlash against facial recognition tech


A teenager is suing Apple for $1 billion. The lawsuit, filed Monday, hinges on the alleged use of an increasingly popular — and controversial — technology: facial recognition.
The plaintiff, 18-year-old college student Ousmane Bah, claims the company’s facial recognition tool led to him being arrested for Apple Store thefts he didn’t commit - part of an escalating backlash against facial recognition.

But leading AI researchers recently argued in an open letter that the tech is deeply flawed

how the Metropolitan Transportation Authority is trying to use facial recognition to detect criminals and terrorists driving across a New York bridge, even though it failed spectacularly in tests


Making face recognition less biased doesn’t make it less scary - MIT Technology Review


Even the fairest and most accurate systems can still be used to infringe on people’s civil liberties


Politikerne må vågne op og sætte klare grænser for den digitale overvågning - politiken.dk


Danish politicians have, almost in unison, failed to engage with the obvious problems. But foot-dragging passivity and mumbling pragmatism are no longer an option. Not drawing boundaries is to choose surveillance and to hand over ever more control to the algorithms.