How to stop AI decisions from repeating human biases

AI is rapidly becoming a default advisor in everyday decision-making, often delivering answers that sound authoritative even when the underlying analysis is shaky. As more teams rely on these systems, the gap between what AI appears to know and what it can responsibly recommend is becoming a real risk — especially when decisions carry social or operational consequences.

How simple data questions become biased recommendations

For years, I’ve volunteered some of my time to analyzing crime statistics and law enforcement data in Seattle and sharing findings with local leaders. One thing that has always fascinated me is how an innocent, dispassionate analysis can still reinforce biases and exacerbate societal problems. 

Looking at crime rates by district, for example, shows which area has the highest rate. Nothing wrong with that. The issue emerges when that data leads to reallocating police resources from the lowest-crime district to the highest or changing enforcement emphasis in the higher-crime district. The data may be solid, but the obvious decision can have unexpected consequences.
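To make the mechanics concrete, here is a minimal sketch of the kind of per-district comparison described above. The file names and column names are purely illustrative; Seattle publishes incident data through its open-data portal, but the actual schema differs from this simplified example.

```python
import pandas as pd

# Hypothetical incident-level export; file and column names are illustrative only.
incidents = pd.read_csv("seattle_incidents.csv")       # one row per reported offense
population = pd.read_csv("district_population.csv")    # one row per police district

# Count offenses per district and normalize by population to get a rate.
rates = (
    incidents.groupby("district").size().rename("offenses").reset_index()
    .merge(population, on="district")
    .assign(rate_per_1k=lambda df: 1_000 * df["offenses"] / df["population"])
    .sort_values("rate_per_1k", ascending=False)
)

# The "obvious" answer is whatever district sits at the top of this table.
print(rates.head())
```

Nothing in that table is wrong. The risk is in treating its top row as a resourcing decision rather than as one input among many.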

Dig deeper: How to fight bias in your AI models

Now, in the age of AI adoption, I was curious how AI would handle similar questions. I asked an AI platform, “What district should the Seattle Police Department allocate more resources to?” After I skimmed past the standard preamble, the answer was that Belltown had the highest crime rate, along with significant drug abuse and homelessness.

So, if you let AI make the decision, the conclusion is to allocate more police resources to Belltown. I then asked the same platform what biases or problems that decision might exacerbate. It listed the criminalization of homelessness, over-policing of minorities, displacement of crime, a focus on policing rather than social services, increased police-community tension, negative impacts on local businesses, an emphasis on quality-of-life offenses, the potential for increased use of force and the exacerbation of gentrification.

Finally, I asked whether police resources in Belltown should increase given those consequences. The long answer amounted to “it depends, but probably not — a hybrid approach would work better.”
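The article doesn’t name the platform, but the three-question sequence is easy to reproduce as a simple prompt chain. The sketch below assumes the OpenAI chat completions API purely for illustration; the model name and prompts are placeholders.

```python
from openai import OpenAI

# Assumed client and model; swap in whatever platform you actually use.
client = OpenAI()

def ask(history, question):
    """Append a question to the running conversation and return the model's reply."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history = []
recommendation = ask(history, "What district should the Seattle Police Department allocate more resources to?")
harms = ask(history, "What biases or problems might that recommendation exacerbate?")
decision = ask(history, "Given those consequences, should police resources in that district increase?")
```

The point of chaining the questions is that the bias check becomes part of the conversation the model sees before it gives a final answer, rather than something a human has to remember to ask afterward.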

The data ethics principles every AI user needs to apply

Many of the problems analysts face when forming conclusions and recommendations also apply to AI. At a macro level, there are two opposing approaches to decision-making: gut decisions and data-driven decisions.

With gut decisions, we decide what to do based on our lived experience, feelings, perceptions and assumptions. They allow us to make quick decisions, but they aren’t ideal for important ones because counterintuitive things happen all the time in this universe.

If we let it, AI will sit at the other end of that spectrum: making decisions based on data, where we do whatever the data tell us to do. Before the recent expansion of AI, this wasn’t much of an issue because analysts knew not to follow the data mindlessly. With AI, however, people ask what they should do and sometimes follow the answer, because AI’s data-driven answers appear to be untainted by opinion.
