Shalini Kantayya takes aim at the very human limits of Artificial Intelligence (AI) in her fact-packed, well-informed, and strongly feminist film.
Backed by Women Make Movies and supported by, among others, the Sundance Institute, Coded Bias relies upon a cast of mainly female characters, mostly authors and academics, who are all experts in the dangerous, anti-democratic biases of both covert and commercial AI applications.
If you are seeking reasons why you should oppose the creeping totalitarianism of the Internet of everything, then this is a great starting point, with a bibliography that could keep you offline for many months to come.
The film’s key heroine is Joy Buolamwini, a young black PhD candidate at the Massachusetts Institute of Technology (MIT) Media Lab.
Bright and confident, Buolamwini – who combines precocious intellect with a talent for writing affecting rap poetry – stumbles into understanding how thoroughly compromised AI is when a research project, which relies upon facial recognition to let users of an app give themselves various online masks, unmasks a disturbing fact: the facial recognition apps of the giant tech companies fail to detect black faces, particularly female black faces, in a staggeringly high percentage of cases.
What was initially an irritating technical hitch in her quest for a mindlessly entertaining little app soon becomes a serious investigation into the hidden biases of a technology that is already ubiquitous in most countries.
As Buolamwini begins to seek answers she meets women who have already begun to understand the insidious nature of AI, including Cathy O’Neil, a former Wall Street advisor (who claims she was hired with no knowledge of the financial services industry, but simply because she was good at solving math puzzles).
O’Neil, an engaging woman with a great line in sky-blue designer haircuts, is a self-confessed geek who had an epiphany as a 14-year-old at summer camp, when she realized she could solve the Rubik’s Cube in a snap of her fingers.
Buolamwini meets her in Cambridge, Mass., at a book signing of Weapons of Math Destruction – How Big Data Increases Inequality and Threatens Democracy, and is inspired to look further into why AI poses such a threat (not least because, as it is capable of unobserved learning, it is effectively a «black box»).
She is already aware that «the past dwells within our algorithms» – seeing how the unconscious bias of the founders of AI (white, male, middle class) at Dartmouth College in the 1950s has seeded the unconscious bias in the data sets used to train AI ever since.
O’Neil is characteristically assertive in her view of AI: «An algorithm uses historical information to make predictions about the future; the problem is, that it is all about who owns the fucking code; the people that own it then deploy it on other people….There is no symmetry, no way for people who did not get credit cards to use it against the credit card system; there is no accountability.»
So much for the commercial uses of AI – which can do, and have done, much damage to ordinary people without ever being held accountable: as many as 4 million people lost their homes during the subprime crisis in the US in 2008, as algorithms that predicted future chances of default were used to issue foreclosure notices.
It was, one expert in the film notes, «the largest wipeout of black wealth in the history of the US,» adding that the «tyranny of these types of practices of discrimination have become opaque.»
With an occasional reference to HAL 9000, the all-seeing computer in Stanley Kubrick’s 2001: A Space Odyssey – and Debussy’s haunting Clair de Lune, which Kubrick used to such great effect – the film moves on to more sinister misuses and abuses of AI.
Big Brother Watch
In London, the Metropolitan Police have been – without any regulatory oversight – trialing random street-level facial recognition technology, using unmarked vans (other than logos proudly, and apparently without a trace of irony, stating «Metropolitan Police. Total Policing»).
Here civil liberties group Big Brother Watch is out and about to counter the covert surveillance, telling passersby that they are being filmed by a facial recognition system without their permission. One man who hides his face as he walks by the van is challenged by the police and issued with an on-the-spot fine (presumably for anti-social behaviour, one of the few easy-to-issue fines the police can deploy, thanks to Britain’s ruling Tory government, the most right-wing in the country’s history).
Big Brother Watch conducted a freedom of information campaign and found that 98% of the matches for criminal suspects involved «matching an innocent person incorrectly as a wanted person.»
«To have your biometric face on a police database is like having your fingerprint or DNA and we have specific laws around that,» a Big Brother Watch campaigner says. «Police can’t just take your fingerprints or DNA… But in this weird system we have, they can take anyone’s biometric photo and keep that on a police database. The police have started using facial recognition technology in the UK in the complete absence of a legal basis.»
The Chinese experience
The film dips into the Chinese experience – where facial recognition is a ubiquitous tool of Communist Party social control, and many people have already become domesticated to it – and draws the comparison with the US, observing that, at least in China, the government is open about what it is doing; in America, most AI and facial recognition still goes on under the radar.
It is not all bad news. Information is power, and Buolamwini’s Algorithmic Justice League has had some successes: on June 10, 2020, Amazon announced a one-year pause on police use of its facial recognition technology, and later that same month US lawmakers introduced legislation to ban federal use of facial recognition.
However, there is still no US federal regulation on algorithms and no political action in the UK against the Met’s use of the insidious anti-democratic technology.