2021-08-15
The power of automated judgement. What could possibly go wrong?
US government tests find that even top-performing facial recognition systems misidentify Black people at rates five to ten times higher than they do white people.
Trying a horrible experiment… Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?
Turns out @zoom_us has a crappy face-detection algorithm that erases Black faces… and decides that a nice pale globe in the background must be a better face than the actual one in the frame.
The biggest-ever study of real people’s mortgage data shows that predictive tools are not simply biased against minority and low-income groups, but less accurate for them too.
ML for credit card fraud detection is one of those fields where most of the published research is unfortunately not reproducible. Real-world transaction data cannot be shared for confidentiality reasons, but we also believe authors do not put enough effort into providing their code and making their results reproducible.
Xsolla, a company that provides payment processing options for the game industry, has laid off roughly one-third of its workforce after an algorithm decided those 150 individuals were “unengaged and unproductive employees”.
Stephen Normandin spent almost four years racing around Phoenix delivering packages as a contract driver for Amazon.com Inc. Then one day, he received an automated email. The algorithms tracking him had decided he wasn’t doing his job properly.
IndiGo (a leading private airline in India) drew a lot of bad publicity last year for its Twitter reply to a customer. A disgruntled customer had tweeted a sarcastic thank-you about his misplaced baggage, and IndiGo’s reply, missing the sarcasm entirely, was to thank the customer in return.
An algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against Black people, a sweeping analysis has found.
As Americans look for greater government efficiencies, we increasingly turn to automated systems that use algorithms to determine who is eligible for access to housing, welfare benefits, intervention from child protective services, and more.
In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.
A group of Harrisburg University professors and a Ph.D. student have developed automated facial recognition software that, they claim, can predict whether someone is likely to become a criminal.
Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data needed to train an AI system. Such systems are flawed, as shown by a RAND Corporation study of Chicago’s program: the predictive policing did not reduce crime, but it did increase harassment of people in “hotspot” areas.