Weapons of Math Destruction is a 2016 American book about the societal impact of algorithms, written by Cathy O'Neil. It explores how some big data algorithms are increasingly used in ways that reinforce preexisting inequality. It was longlisted for the 2016 National Book Award for Nonfiction but did not make the shortlist. It has been widely reviewed, and it won the Euler Book Prize.
Review of 'Weapons of Math Destruction' on 'Storygraph'
5 stars
Excellent discussion of how the use of algorithms affects our education system, how likely we are to be hired, and how much we pay for insurance and mortgages. These models have become black boxes: nobody knows exactly how they work, yet they are considered reliable. What few people realize is that these algorithms reinforce discrimination and have biases built into them. So, instead of a fair, objective system for evaluating anything (loan approvals, credit scores, job candidates, schoolteachers' performance, etc.), we have opaque models being applied everywhere that cannot be disputed or even understood. It's scary to think that our future life decisions will rely on algorithms.
Review of 'Weapons of Math Destruction' on 'Goodreads'
4 stars
I, too, was a mathematician once, but I lost my faith. Ms. O'Neil still seems to have much of hers, for an Occupier. I kept thinking she was naive to believe this stuff is fixable, but I may just be naively paranoid.
That algorithms can be biased was not a surprise to me. Logic itself can be biased because it is dependent on language, which, like many WMDs, is a black box. It is full of proxies. Take the term "criminal," which (like "terrorist") brands the one so called as an evildoer. And it's measurable by determining whether one has been convicted of anything. You can read Michelle Alexander's The New Jim Crow: Mass Incarceration in the Age of Colorblindness and discover that going to jail is part of systemic racism, but when you hear the word "criminal" or "convict," you usually don't think much further.
Or consider how we think about patents. They're supposed to protect inventors, and most people think that's what they do (and perhaps much of the time it is), but they have become a weapon big companies use to keep small ones from entering a field.
This is all before mathematics enters into it.
When reasoning is done by a computer, it's faster, bigger, and less reflective. You might think the marketplace can fix this: a business that does less error-prone reasoning will make more money and prevail. Ms. O'Neil tries to explain why it doesn't work this way, but I don't think she does that good a job of it. At least she tries.
The problem is that even with a sloppy algorithm, you can make up the difference on volume. In the world of actual weapons of mass destruction, you could nuke entire countries and win the war; most of the people you killed would be "innocent," except in the sense of having been born into the wrong nation. Businesses using "approximate" algorithms will succeed, and success is self-validating. On an individual level we say "life is unfair," but on a corporate level the winners selected by the marketplace are seen as deserving their success if they haven't broken any laws that anyone noticed. (And if someone noticed? Google "war on whistleblowers.")
Ms. O'Neil thinks that this is just the birth throes of a new technology and that in the future we'll look back at it the way we look back at sweatshops and child labor: excesses that we have managed to overcome. I am less confident of our rosy future. I'm more like Chris Hedges (one of whose books I read right before this one), who sees morality not as a dimension of society that progresses like technology and science, but as one that remains at pretty much a constant level. I see the technology as multiplying our moral failings in a way that will be difficult to correct. (I am reminded of those who believe that people who fear global warming underestimate man's ability to solve problems, that some as-yet-unforeseen discovery will come along when we need it, and that we can just ignore the problem for now.)
Currently, outside of dystopian science fiction, people see apps as a universal good and trust our cyber-overlords. Then some may find themselves among the collateral damage and, if sufficiently hurt, will discover that no one wants to listen to them. Isn't it an axiom of capitalism that, with some exceptions we are safe to ignore, the poor deserve to be poor?