Open your phone. Scroll through the web. Apply for a job. In the Information Age, we rely more and more on the internet and the world of numbers to handle our requests. The responses we get back, from Google’s search engine or Instagram’s algorithm or LinkedIn’s job postings, are all produced by complex predictive models working behind the scenes. But as Cathy O’Neil points out in her bestselling book, Weapons of Math Destruction, the damage these predictive models can do may be greater than we think.
A Weapon of Math Destruction, or WMD, as O’Neil defines it, is a particular kind of mathematical model that makes predictions from past data. But unlike benign predictive models, like the ones used to rank baseball players and decide which team they play for, WMDs are dangerous: they combine opacity and pernicious feedback loops with a frighteningly wide scale. WMDs, O’Neil argues, are now everywhere.
Just think of the fallout from 2001’s No Child Left Behind Act. It unleashed a slew of predictive models on public school teachers across the country, using student grades and standardized test scores to predict student performance and measure deficits attributed to “poor teacher performance.” In her book, O’Neil explains how these models, which scored teachers on plausible-sounding but deeply flawed metrics, led to good teachers getting fired and “bad” teachers getting by through cheating.
The destructive scale of these teacher-scoring models is obvious: they held power over the livelihoods of teachers across entire school districts. But perhaps the more important thing to note is that this WMD was opaque; no teacher could find out exactly what got them fired. The models also couldn’t accept feedback. If they got a good teacher fired, they would never know.
Weapons of Math Destruction goes on to describe dozens of WMDs at work in our everyday lives. Hedge funds betting against the poor by predicting they’ll default on their loans, college rankings that reward prestigious schools with grand facilities despite their high tuitions, and recidivism predictors in the criminal justice system all work to increase efficiency in their sectors. Hedge funds make more money, U.S. News & World Report gains readers for its college rankings, and criminal courts spend less time deciding whether to grant bail. But none of these systems takes feedback. When they do, they only confirm their own results: someone commits petty theft and is denied bail, loses their job, slides deeper into poverty, and commits theft again. As O’Neil describes, the model is validated because it becomes “destiny.”
Across her examples, O’Neil illustrates how these models tend to discriminate against people who are poor, immigrants, or people of color. WMDs make flawed assumptions, but with no negative feedback loop in place to correct them, a positive feedback loop takes precedence, and the model’s destructiveness compounds. A young, motivated job applicant suffering from anxiety is rejected again and again because a personality-test-based predictive model flags high levels of “neuroticism.” The applicant goes unhired, and their next potential employer wonders why they’ve been unemployed for so long.
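To make that feedback loop concrete, here is a minimal Python sketch. It is my own illustration, not code or data from the book, and every number in it is an invented assumption; it simply shows how a model whose output (a denial) changes the very data it reads next (another flag on a person’s record) ends up confirming its own predictions.

```python
# A toy simulation (not from the book) of a self-confirming feedback loop.
# Assumption: a "risk score" built only from past flags, in a world where
# being denied by the model itself makes future flags more likely.

import random

random.seed(42)

def risk_score(past_flags: int) -> float:
    """Toy model: predicted risk grows with the number of past flags."""
    return min(1.0, 0.1 + 0.3 * past_flags)

def simulate(years: int = 5) -> None:
    flags = 1  # one petty offense already on record
    for year in range(1, years + 1):
        score = risk_score(flags)
        denied = score > 0.3  # the model denies bail or a job above a threshold
        # A denial raises the real-world chance of another flag (lost job,
        # deeper poverty), so the model's output feeds its own next input.
        p_new_flag = 0.6 if denied else 0.2
        if random.random() < p_new_flag:
            flags += 1
        print(f"year {year}: score={score:.2f} denied={denied} flags={flags}")

simulate()
```

Because the simulation never tracks what would have happened without the denial, the model looks more and more accurate even as it drives the outcome, which is exactly the self-fulfilling dynamic O’Neil calls “destiny.”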
These models are so insidious precisely because they are everywhere. And unlike human error, which is contained by default, algorithmic error spreads. And so comes the question O’Neil presents: what do we do about WMDs? Do we get rid of them entirely, despite the benefits they bring? Or do we try to reform them, and see if they can be used for good?
O’Neil published this book in 2016 and later added an afterword discussing that year’s presidential election, but almost a decade later her findings remain relevant. As simple algorithmic models are replaced by AI models, they only grow more opaque. How does one, after all, decode the millions of numbers that make up a deep neural network’s predictions? It is as difficult as looking inside a human brain and trying to decipher how it works: we can stimulate a neuron here and there and find that it correlates with a craving for chocolate cake, but we have no way of knowing exactly what produces that craving. Unlike a human brain, though, a WMD can cause systemic problems all over the country, far faster than a human can eat a single slice of dessert.
So as AI is folded into seemingly every predictive model, from search results to resume filters, Weapons of Math Destruction urges caution. As long as these models exist to help people, whether by providing resources to struggling teachers or identifying where a system could improve, they can be beneficial. Too often, though, they focus only on making money and improving the bottom line.
Was it a great read? Certainly. Concise at just under 250 pages, this little book can teach a lot, and it is worth reading for anyone, interested in data science or not, for the educational value it offers. Chapter 3 in particular, “Arms Race: Going to College,” may interest many DHS students.
Overall, however, it raises more questions than it answers. O’Neil does suggest some simple guidelines for reforming WMDs, such as protesting, alerting the press, and advocating for government regulation and legal reform, but how to accomplish these goals remains an open question. For now, as the data and tech landscape continues to shift day by day, perhaps we all just need to start paying attention. Weapons of math destruction lurk in plain sight.