Being perfectly rational is not an evolutionarily viable form of reasoning. It’s slow and requires a lot of information that may not always be available.
Think about the problems we evolved to solve: which plants are safe enough to eat, how to survive a contagion, how to protect resources from outsiders, how to figure out who to trust, how to find a mate. It’s impossible to answer these questions through rational means alone; it would take too long or require more information than we possess.
What helps is reasoning through bias. A bias is a cognitive shortcut, a form of reasoning that is quick, doesn’t require perfect information, gets the job done, and reduces the kind of errors that have existential consequences. Our brains developed biases as a means to survive challenges in order to reproduce and then protect our progeny.
There is a particularly fascinating body of research on these evolutionary biases. Men, for instance, tend to overperceive sexual interest from women because it’s evolutionarily advantageous to do so. Women tend toward the opposite: they’re looking for viable partners in child rearing and can’t afford to get it wrong.
Which brings us to what we’ll be talking about in this episode: algorithms have biases too. They help navigate imperfect information quickly while minimising certain kinds of errors. But over time, these biases have started reinforcing themselves, further disenfranchising the already disenfranchised. So, what do we do about it?
We talk to Osonde Osoba, a senior information scientist, co-director of the Center for Scalable Computing and Analysis, and professor at the Pardee RAND Graduate School, about how we can make algorithms fairer. We also talk to Benjamin Boudreaux, a professor at the Pardee RAND Graduate School, about what it means for an algorithm to be fair.