Artificial Fascism


The power of automated judgement. What could possibly go wrong?

Face detection

The Best Algorithms Struggle to Recognize Black Faces Equally

US government tests find even top-performing facial recognition systems misidentify blacks at rates five to 10 times higher than they do whites.

Wired, 2019-07-22

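The "five to 10 times higher" figure compounds with scale. A back-of-the-envelope sketch of what a 10x false-match gap means across many searches; the base rate and search volume below are hypothetical numbers invented for illustration, not taken from the NIST tests:

```python
# Illustrative only: how a group-dependent false-match rate scales with the
# number of one-to-many searches. Both constants below are assumptions.

def expected_false_matches(n_searches: int, false_match_rate: float) -> float:
    """Expected number of wrongful matches over n independent searches."""
    return n_searches * false_match_rate

BASE_RATE = 1e-4     # hypothetical false-match rate for the majority group
SEARCHES = 100_000   # hypothetical number of searches run per year

for group, multiplier in [("baseline group", 1), ("10x-rate group", 10)]:
    rate = BASE_RATE * multiplier
    print(f"{group}: ~{expected_false_matches(SEARCHES, rate):.0f} false matches")
```

The same relative gap that looks small in a lab benchmark becomes tens of additional wrongful matches per year once the system is deployed at population scale.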

Trying a horrible experiment…

Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?

Tony Arcieri (@bascule), 2020-09-19


Turns out @zoom_us has a crappy face-detection algorithm that erases black faces…and determines that a nice pale globe in the background must be a better face than what should be obvious.

Colin Madland (@colinmadland), 2020-09-19

Credit score

Bias isn’t the only problem with credit scores - and no, AI can’t help

The biggest ever study of real people’s mortgage data shows that predictive tools are not simply biased for minority and low income groups, but less accurate too.

Technology Review, 2021-06-17

Fraud detection

Machine Learning for Credit Card Fraud Detection


ML for credit card fraud detection is one of those fields where most of the published research is unfortunately not reproducible. Real-world transaction data cannot be shared for confidentiality reasons, but we also believe authors do not make enough efforts to provide their code and make their results reproducible.

Towards Data Science, 2021-05-26

Performance assessment

Xsolla lays off 150 after an algorithm ruled staff ‘unengaged and unproductive’

Xsolla, a company that provides payment processing options for the game industry, has laid off roughly one-third of its workforce after an algorithm employed by the company decided those 150 individuals were “unengaged and unproductive employees”.

Gamasutra, 2021-08-10

Fired by Bot at Amazon: ‘It’s You Against the Machine’

Stephen Normandin spent almost four years racing around Phoenix delivering packages as a contract driver for Amazon.com Inc. Then one day, he received an automated email. The algorithms tracking him had decided he wasn’t doing his job properly.

Bloomberg, 2021-06-28

Sentiment analysis

Indigo Airline’s Twitter fiasco — Sentiment classification gone wrong

IndiGo (a leading private airline in India) gained a lot of bad publicity last year for its reply to a customer’s tweet. A disgruntled customer had tweeted about misplaced baggage, sarcastically thanking the airline; IndiGo’s reply took the tweet at face value and thanked the customer in return.

Practical Data Science And Engineering, 2018-02-20

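The failure mode above is easy to reproduce: a bag-of-words sentiment model scores individual words and has no notion of sarcasm. A minimal sketch, with a lexicon and tweet text invented for illustration (this is not the classifier IndiGo actually used):

```python
# Minimal sketch of why keyword-based sentiment analysis fails on sarcasm.
# The word lists and example tweet are hypothetical.

POSITIVE = {"thank", "thanks", "great", "love", "awesome"}
NEGATIVE = {"lost", "worst", "awful", "delayed", "misplaced"}

def naive_sentiment(text: str) -> str:
    """Score a text by counting positive vs. negative lexicon hits."""
    words = text.lower().replace("!", "").replace(".", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweet = "Thanks for the great service, you only lost my baggage once!"
print(naive_sentiment(tweet))  # prints "positive": the sarcasm reads as praise
```

Two polite words outvote one complaint word, so the sarcastic tweet classifies as positive and triggers an inappropriately cheerful automated reply.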

Public services

Millions of black people affected by racial bias in health-care algorithms

An algorithm widely used in US hospitals to allocate health care to patients has been systematically discriminating against black people, a sweeping analysis has found.

Nature, 2019-10-24

How Algorithms Can Punish the Poor

As Americans look for greater government efficiencies, we increasingly turn to automated systems that use algorithms to determine who is eligible for access to housing, welfare benefits, intervention from child protective services, and more.

Slate, 2018-03-29

Law enforcement

Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

The New York Times, 2020-08-03

HU facial recognition software predicts criminality

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

Harrisburg University, 2020-05-06

Artificial intelligence is ripe for abuse, tech researcher warns: ‘a fascist’s dream’

Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas.

The Guardian, 2017-03-13