Is The Power And Potential Of AI Limited By Bias? Not If You Do This

We are in the midst of a seismic global economic shift because of AI.

From how work is done to how we protect the public to how we build urban infrastructure, the power of artificial intelligence can be seen on a massive scale. Global spending on AI is predicted to reach $52.2 billion annually by 2021, with billions more gained in efficiencies and savings.

Consultancy PwC estimates that AI could contribute up to $15.7 trillion to the global economy in 2030, more than the combined output of China and India today.

Few are blind to AI’s enormous potential.

However, there’s another dialogue emerging when it comes to AI, and that’s whether and to what extent AI is subject to bias. 

Critics have noted that AI models are only as good as the data you feed them, and therefore deep learning systems are not neutral.

Certainly, AI is inherently biased. However, I would also posit that all intelligent systems, including humans, are biased, for our own cognition is predicated on our personal experience and knowledge.

With the hyperbole surrounding AI today, bias is being cast as an evil crippling flaw unique to AI that will limit its value and widespread adoption. I strongly disagree.

As Jonathan Vanian notes in an article for Fortune, AI is only as good as the data that humans provide. Vanian goes on to write that, as AI practitioners, we know: "the data used to train deep-learning systems isn’t neutral. It can easily reflect biases, conscious and unconscious, of the people who assemble it. Data can be slanted by history, with trends and patterns that reflect centuries-old discrimination."

Vanian points out that a sophisticated AI algorithm, or even human statisticians, could scan a historical database and conclude that white men are the most likely to succeed as CEOs, not recognizing that, until recently, people who weren’t white men seldom were afforded opportunity to ascend to a CEO role. Blindness to bias is the fundamental challenge, not bias in itself.

While we speak about bias in careful and diplomatic terms, it is top of mind for everyone in the AI arena.

But as I have seen first-hand, bias in AI can be navigated in much the same way that we can overcome bias in humans, clearing the way to recognize the full potential of artificial intelligence.

It’s instructive to examine how human bias has been handled in a historical context. The proven antidote for human bias is collective wisdom.

As an article in the MIT Technology Review explains: "In 1906, the English polymath Francis Galton visited a country fair in which 800 people took part in a contest to guess the weight of a slaughtered ox. After the fair, he collected the guesses and calculated their average, which was 1,208 pounds. To Galton’s surprise, this was within 1% of the true weight of 1,198 pounds."

"Vox Populi," the article written by Galton about the experience, was published in a 1907 issue of Nature and is one of the earliest descriptions of the wisdom of the crowd phenomenon, namely how the collective opinion of a group of individuals can be better than a single expert opinion.
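Galton’s arithmetic is easy to reproduce. The guesses below are invented for illustration, but they show how averaging independent estimates cancels out individual error:

```python
# Hypothetical crowd guesses for the ox's weight, in pounds.
# Individual guesses scatter widely around the true value.
guesses = [1150, 1260, 1180, 1240, 1210]

# The crowd's estimate is simply the mean of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
print(crowd_estimate)  # -> 1208.0
```

Most individual guesses here miss by 25 pounds or more, yet the mean lands on Galton’s reported 1,208.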

AI algorithms, while still in their infancy, are extremely adept at pattern recognition; however, they lack the ability to question the patterns they uncover.

With humans, education is a significant prophylactic against bias, for education teaches us to question our own understanding and biased perspectives in search of truth and general understanding.

This search often leads to research and an expansion of one’s knowledge base.

The correct conclusion is rarely the average wisdom contained within this new broader dataset; instead, the educated person evaluates the problem from multiple perspectives and selects the model that best matches their observations. 

With AI, we avoid bias similarly, by applying "ensemble learning," which will always have more depth and power than that of an individual cognitive engine. Homogeneous input into a single engine generates output that is singularly limited in scope and value. Combining the output of several accurate yet diverse engines enables deeper and more accurate cognitive results. We know that AI, machine learning and deep learning can produce dangerous results if unchecked by extrapolating outdated mores to predict the future.
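The simplest form of ensemble learning is majority voting across independent models. A minimal sketch in plain Python, where the engine outputs are hypothetical stand-ins for real model predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine labels from several engines; the most common label wins."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical engines classify the same input; one dissents.
engine_outputs = ["cat", "cat", "dog"]
print(majority_vote(engine_outputs))  # -> cat
```

Note that a systematic error shared by all engines still survives the vote, which is why the diversity of the engines, not just their number, is what matters.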

The result would be the perpetuation of unjust perceptions of the past. Any responsible AI technology must be aware of these limitations and take the steps to avoid them.

So how do we balance good bias for smart business use against bad bias stemming from human beings or narrow business use cases?

Good bias involves deploying technology within the parameters of cost-effectiveness, result accuracy and processing speed. Bad bias stems not only from human bias but also from instances when a company weights its AI to be solely self-promoting.

Amazon, an infrastructure partner of ours, as well as IBM, Google, Microsoft and Facebook have developed extensive AI assets for internal and external use, deploying solutions inherently designed around their own challenges and thus less optimal for external datasets.

Google’s monolithic engine, which decides which of 500 hits make the top 10 based on non-objective parameters, produced a situation where it was more likely to show results for high-paying jobs to men than to women.

In AI terms, diversity simply means many engines.

By intelligently processing data through an ecosystem of multiple cognitive engines, then optimizing for accuracy with an additional AI top layer, the risks of homogeneity can be mitigated and bias reduced.
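One simple version of that top layer weights each engine’s vote by its measured accuracy rather than counting votes equally. The engines, labels and accuracy figures below are hypothetical:

```python
def weighted_vote(outputs, weights):
    """Score each label by the summed accuracy weights of the engines that chose it."""
    scores = {}
    for label, weight in zip(outputs, weights):
        scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get)

# Two weaker engines say "approve"; one stronger engine says "review".
labels = ["approve", "approve", "review"]
accuracies = [0.40, 0.45, 0.90]
print(weighted_vote(labels, accuracies))  # -> review
```

Here the single high-accuracy engine overrules the majority, something a plain vote cannot do.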

Only then will more nuanced inferences be produced from the multiple datasets.

The best of human decision making has been enhanced through collective learning and empathetic curiosity. Why shouldn’t the same hold true for technology?

As Atticus Finch says in To Kill a Mockingbird: “If you can learn a simple trick, Scout, you’ll get along a lot better with all kinds of folks. You never really understand a person until you consider things from his point of view, until you climb inside of his skin and walk around in it.”
