Human Autocomplete is Biased
The Observer is the Observed.
I don’t know anything about soccer. But every time I watch a game, I am struck by how two people watching the same play can reach completely different conclusions.
(Was that a penalty? Argentina and France fans likely disagree.)
Part of the problem is our hardware. Our brains can consciously process somewhere between 10 and 50 bits per second. So a key design principle in the human architecture is human autocomplete (in place well before Google and AI researchers coined the term). We fill in the blanks based on our ex-ante opinions, beliefs, and values (i.e., our predictions are biased by our pre-training data).
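To make the analogy concrete, here is a minimal sketch, with hypothetical fan "corpora" and a deliberately toy bigram model (nothing like a real brain): two observers fill in the same blank differently because their training data differ.

```python
from collections import Counter

def train(corpus):
    """Build a bigram model: for each word, count what tends to follow it."""
    words = corpus.lower().split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, Counter())[nxt] += 1
    return model

def autocomplete(model, prompt):
    """Fill in the blank with the most frequent continuation seen in training."""
    last = prompt.lower().split()[-1]
    followers = model.get(last)
    return followers.most_common(1)[0][0] if followers else "?"

# Two observers watch the same play; only their "pre-training data" differs.
argentina_fan = train("that was a dive that was a dive that was acting")
france_fan = train("that was a foul that was a foul that was a penalty")

print(autocomplete(argentina_fan, "that was a"))  # -> dive
print(autocomplete(france_fan, "that was a"))     # -> foul
```

Same prompt, same algorithm, opposite completions. The disagreement lives entirely in the priors.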
There is nothing wrong with this architecture. It’s actually very computationally efficient. Imagine a soccer player trying to reason from first principles every time they kick the ball. They would be paralyzed! Or imagine a prehistoric hunter-gatherer who took the time to analyze every growl; they would quite likely have become lunch for the local lion. Human autocomplete is a good, efficient system design, bred into us by evolution.
But sometimes efficiency is the enemy. For a soccer game, maybe the stakes are not so high. Fans may be predisposed to support their team, and even if the referees make the wrong call, the teams will go on to play another day. But sometimes the stakes are higher.
Take the trolley problem. If a self-driving car has to choose between damaging a building and killing a human, the programming is clear: damage the building and save the human. But what if the alternatives are less clear, say killing a cat versus killing a dog? If you surveyed cat fans, they might be predisposed to saving their feline friends, and dog fans vice versa. Strip away all the dramatics, and the trolley problem is a simple question: what do you value more? And why?
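As a sketch of why the first case is easy and the second is not (the cost numbers below are purely illustrative assumptions, not any real autonomous-vehicle policy), notice that the decision rule itself is trivial; everything contested lives in the value table:

```python
# Purely illustrative cost table: the numbers are the contested part,
# not the comparison logic.
HARM_COST = {"building": 1, "cat": 50, "dog": 50, "human": 1_000_000}

def choose(option_a, option_b, costs=HARM_COST):
    """Return the option with the lower assigned cost, or flag a tie."""
    a, b = costs[option_a], costs[option_b]
    if a == b:
        return "tie: the algorithm cannot decide; only the value table can"
    return option_a if a < b else option_b

print(choose("building", "human"))  # clear call: damage the building
print(choose("cat", "dog"))         # tie: hinges entirely on the weights
```

A cat fan and a dog fan would fill in different numbers, and the same code would return different answers. The disagreement is in the values, not the machinery.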
Our values are derived from a mixture of our beliefs, lived experience, and the authorities we subscribe to, among other things. Some value judgments are easy and universal, such as the notion that a human life is more valuable than a building. But others are more subjective, biased by where we come from, who we are, and the authority figures we follow (e.g., a church, a sports team, or another ideological affiliation).
Maybe this is all very obvious. But in this world of polarized responses to bits of (mis)information, it’s worth revisiting the mechanistic aspects of Human Autocomplete. Competing judgments we hear on the same set of facts are not a sign that there is no truth (as some have claimed). They are more a symptom of the underlying machinery. The observer is the observed.


