#Book #AI #AI/Ethics #2021/9

*September 18, 2021*

![[weapons-of-math-destruction.png]]

[Amazon](https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815)

### Summary

*Weapons of Math Destruction* examines the "Dark Side of Big Data." Statistical models are being deployed broadly across the economy and government to optimize efficiency. Algorithms can help squeeze extra profit out of niches too small for humans to scrutinize, or they can direct resources to the areas where they are needed by predicting in advance where those resources would be best used. While this technology is usually implemented with good intentions, careless implementation produces what O'Neil calls a "Weapon of Math Destruction" (WMD). Three features characterize WMDs:

1. **Opacity** - WMDs are inscrutable. No one knows why the algorithm makes the decisions it does. This is a byproduct of the superhuman number of parameters contemporary models use: by definition, these models interpret data too complex for humans to parse, yet relatively little effort has gone into making their decisions explainable.
2. **Scale** - A model that makes decisions among only a few stakeholders is not going to wreak havoc at a societal scale, but WMDs are software entities; they can scale.
    - For example, when an algorithm is used to analyze data about people at a state or national scale, the model's weaknesses are magnified: it no longer makes false predictions about only a small number of edge cases. A model that makes predictions about 1 million customers with a failure rate of 0.5% will make the wrong decision about 5,000 people.
    This is not a big deal when an algorithm is simply deciding which advertisements to show you, but if the algorithm is deciding what sort of insurance you qualify for, or whether you are eligible for a job you want, then these impersonal mathematical errors have life-altering consequences.
3. **Damage** - A WMD's decisions can perpetuate conditions that already exist in the world. For example, if everyone suddenly decided that apples are not tasty, sales would plummet and apples would stop being grown: a self-reinforcing [[Feedback Loops|Feedback Loop]] that amplifies decline. Alternatively, if everyone decided that kale is the best food ever, kale would keep being grown, and the former apple fields might be repurposed for growing it: a self-reinforcing loop that amplifies growth. (In systems terms, both are reinforcing loops; "positive" and "negative" feedback describe whether a loop amplifies or dampens change, not whether the outcome is good or bad.)
    - The benign example above can help us understand how algorithms perpetuate harm in the real world. Imagine an algorithm used to help bankers decide whether people deserve a loan to buy a house. The algorithm estimates the probability that each applicant will repay the loan or default. To make decisions as comprehensive as possible, the algorithm's designers gather data not only on people's credit scores, payment histories, and educational backgrounds, but also on their ZIP codes and the people who live near them. The algorithm might then predict that someone will not repay a loan on the basis of the town they live in. That feature is not necessarily reflective of the person's character, but the person is judged by where they live regardless. These decisions prevent people from a certain neighborhood from getting loans, which prevents them from stabilizing their neighborhood and raising its status. The neighborhood is condemned to continue with whatever problems it has been having, because the algorithm codifies the past instead of considering whether the future might be different.
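The loan example can be sketched as a toy simulation. Everything here is invented for illustration (the scoring rule, the function names, and all the numbers are assumptions, not anything from the book): the model penalizes applicants for their neighborhood's historical default rate, its denials starve the neighborhood of credit, and the worsening statistics then confirm the model's prediction.

```python
# Toy "pernicious feedback loop": denying loans to a neighborhood
# worsens the very statistic the model uses to deny loans.
# All numbers and the scoring rule are hypothetical.

def approval_probability(credit_score: float, neighborhood_rate: float) -> float:
    """Hypothetical score: individual creditworthiness minus a
    penalty for the neighborhood's historical default rate."""
    return max(0.0, min(1.0, credit_score - neighborhood_rate))

def simulate(years: int = 5, credit_score: float = 0.7,
             neighborhood_rate: float = 0.3) -> list[float]:
    """Return the approval probability an identical applicant
    would see in each successive year."""
    history = []
    for _ in range(years):
        p = approval_probability(credit_score, neighborhood_rate)
        # The more often the model denies credit, the worse the
        # neighborhood's recorded statistics look the next year.
        neighborhood_rate = min(1.0, neighborhood_rate + 0.1 * (1.0 - p))
        history.append(round(p, 2))
    return history

print(simulate())  # approval odds fall every year, even though the
                   # applicant's own credit score never changes
```

The applicant's creditworthiness is held constant throughout, so the declining approval odds come entirely from the loop: the model is reacting to the consequences of its own past decisions.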
O'Neil characterizes this sort of feedback loop as a *pernicious feedback loop*.

As algorithms become widespread, we can see examples of WMDs in the real world. It would be wise to work to prevent them, since the societal impact of unethical AI includes [increasing wealth inequality](https://hbr.org/2020/10/algorithms-are-making-economic-inequality-worse), [perpetuating bias against minorities](https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities/), and [fragmenting society into pieces](https://www.technologyreview.com/2020/12/04/1013038/the-fragmentation-of-everything/).

I'm generally a huge fan of [[Artificial Intelligence|AI]]. I think it's the most incredible technology we've yet invented, but every new technology is a sharp, double-edged sword. I recommend reading *Weapons of Math Destruction* if you would like a good overview of what happens when that sword cuts with the wrong edge.