RexLegendi reviewed Weapons of Math Destruction by Cathy O'Neil
Against proxies, predictive modelling, and automated decision-making
3 stars
5 years after my first read, I am discovering new aspects of Cathy O’Neil’s Weapons of Math Destruction (2016). Some of her examples have lost their urgency – by now, most people understand that they are the ‘product’ rather than the consumer, and in Europe, the GDPR has addressed some of the excesses of automated decision-making and profiling – but O’Neil’s widely cited work remains highly relevant. I read it consecutively with Meredith Broussard’s Artificial Unintelligence; the books complement each other perfectly.
Toxic proxies
Although O’Neil doesn’t explicitly define it, a ‘weapon of math destruction’ is an algorithm that is opaque (‘black box’), damaging (harmful to individuals or society), and scalable (applied broadly). Closely associated are automated decision-making, predictive modelling, profiling, and the use of proxies. The last of these struck me most this time. Since the truth is often too difficult to quantify, models rely on measurable aspects instead. This makes outcomes unreliable, susceptible to manipulation, and often toxic. University rankings are a good example. Because ‘educational excellence’ is too complex to measure, rankers use whatever data approximates it. As a result, universities focus on improving those metrics rather than the quality of education itself.
We are ranked, categorized, and scored in hundreds of models, on the basis of our revealed preferences and patterns. This establishes a powerful basis for legitimate ad campaigns, but it also fuels their predatory cousins: ads that pinpoint people in great need and sell them false or overpriced promises. They find inequality and feast on it. The result is that they perpetuate our existing social stratification, with all of its injustices. The greatest divide is between the winners in our system, like our venture capitalist, and the people his models prey upon.
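The ranking example above is essentially Goodhart’s law in action: once a proxy becomes the target, it stops tracking the thing it stood for. A minimal Python sketch (my own toy illustration, not from the book; all names and numbers are invented) of how optimizing the measured proxies leaves the unmeasured quality untouched:

```python
import random

random.seed(0)

# Hypothetical ranking: scores universities on measurable proxies
# (selectivity, spending) because 'teaching quality' is never measured.
def proxy_score(u):
    return 0.5 * u["selectivity"] + 0.5 * u["spending"]

universities = [
    {"name": f"U{i}",
     "selectivity": random.random(),
     "spending": random.random(),
     "teaching_quality": random.random()}  # ground truth, invisible to rankers
    for i in range(5)
]

# Each year, every university boosts exactly the metrics the ranking
# rewards; the unmeasured teaching quality is left unchanged.
for year in range(10):
    for u in universities:
        u["selectivity"] = min(1.0, u["selectivity"] + 0.05)
        u["spending"] = min(1.0, u["spending"] + 0.05)

# The proxy scores saturate and converge, telling us ever less about
# the quality the ranking was supposed to capture.
for u in sorted(universities, key=proxy_score, reverse=True):
    print(u["name"], round(proxy_score(u), 2), round(u["teaching_quality"], 2))
```

The point of the sketch is only that the optimized quantity (the proxy score) and the quantity of interest (quality) can drift apart completely when incentives attach to the proxy.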
Predictive modelling
By contrast, O’Neil mentions Michael Lewis’ Moneyball as an example of safe predictive modelling: the algorithms are transparent, regularly updated, and based on actual game performance rather than proxies. Stakeholders understand the process and share the objective (winning the league). WMDs, however, work differently. Fairness and public values are not part of the equation. In practice, they harm people with disadvantaged backgrounds, notably poor people. In segregated countries, geography serves as a ‘highly effective proxy’ for race. Imagine the police prioritising surveillance in a wealthy area, O’Neil suggests. This would inevitably lead to an increase in recorded crime, as affluent individuals currently evade scrutiny. The new data would then reinforce the perception of the area as a high-risk zone, justifying even more surveillance. Another example of a WMD is predictive marketing – such as marginal universities targeting people with a lower income, persuading them to pay fees for worthless certificates.
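The policing feedback loop O’Neil describes can be made concrete with a toy simulation (again my own illustration, with invented numbers): two areas with an identical underlying crime rate, where patrols are allocated in proportion to past recorded crime, so the area with more historical records keeps accumulating more.

```python
import random

random.seed(1)

# Hypothetical setup: identical true crime rates, but biased history.
true_rate = {"poor": 0.1, "wealthy": 0.1}   # same ground truth everywhere
recorded = {"poor": 5, "wealthy": 1}        # historical bias in the data

for year in range(20):
    total = recorded["poor"] + recorded["wealthy"]
    for area in recorded:
        # Patrols follow each area's share of recorded crime, and more
        # patrols record more incidents -- the self-reinforcing loop.
        patrol_share = recorded[area] / total
        p = min(1.0, true_rate[area] * patrol_share * 10)
        incidents = sum(random.random() < p for _ in range(100))
        recorded[area] += incidents

# Despite identical true rates, the initially over-policed area
# dominates the record, 'justifying' even more patrols there.
print(recorded)
```

Nothing in the loop ever consults the true rates after initialization; the disparity in the output is produced entirely by the biased starting data feeding back into patrol allocation.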
Automated vs. human decision-making
Towards the end, the book becomes somewhat sluggish, with less engaging examples. Compared to my first reading, it lost a ★. Overall, I’m glad to have revisited it. O’Neil’s conclusion that WMDs disproportionately punish the poor, who are often processed by machines, while the rich can rely on human decision-making, remains all too relevant, as does her remark that proxies should be limited to positive feedback loops. I will keep those in mind as I continue with Madhumita Murgia’s Code Dependent.
Phrenology was a model that relied on pseudoscientific nonsense to make authoritative pronouncements, and for decades it went untested. Big Data can fall into the same trap.