
‘De-Blackboxing’ AI: Bias in Learning Machines 


We humans like to think of ourselves as rational creatures who make decisions with clear, objective and unbiased minds. Many decision-making processes have been predicated on this assumption.

But beginning in the late 1960s, two psychologists, Daniel Kahneman and Amos Tversky, showed through decades of research that we are not what we think we are. We have both known and unknown biases that significantly impair our ability to make rational, objective decisions.

By extension, so do our computer algorithms designed to make decisions on our behalf. Why? Because our biased minds created the heuristics that computers use. It’s impossible for humans to consider every variable and all of the available information necessary to make truly unbiased and thorough decisions, so we’ve developed shortcuts based upon experience and intuition. These shortcuts are riddled with bias.

Hidden Rationale

This “hidden rationale” problem exists in the analog world today. Until 2009, consumers were not given the rationale behind their own FICO scores, the number used not only to determine creditworthiness and set credit limits and interest rates, but often also to assess employability. Even now, we are not shown the exact algorithms that govern our scores, and there are plenty of opportunities for, and accusations of, bias in how the numbers are arrived at.

Now we live in a time when the prospect of learning machines is a practical reality. If learning machines can consume all of the available data and variables, can they produce truly unbiased, thorough results? Maybe not.

Impartial Decision Making

Apart from the familiar doomsday scenario of our Artificial Intelligence (AI) servants becoming our overlords, a more practical and near-term problem is the relatively black-box nature of machine learning-based AI. The concern can be summed up in one question: if we cannot understand why an AI made a decision, how will we know whether the decision is right or wrong?

Two major culprits behind this concern are errors in how models are trained: “sample bias” and “omitted-variable bias.” These are not machine learning errors but human errors in how the model is trained, and they are not the typical cognitive biases; they are of the statistical variety.

Sample bias results from not feeding the machine a representative data set, which causes the model to improperly emphasize some features over others. Consider a computer vision model trained to identify horses. If the training set contains pictures of both horses and non-horses, but far more non-horse pictures than pictures of different kinds of horses, then real pictures of horses that resemble the non-horse samples may be misidentified as non-horses.
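To make the sample-bias idea concrete, here is a minimal Python sketch that checks how skewed a labeled image set is before training. The class names and counts are invented for illustration; in a real pipeline the labels would come from the dataset’s annotation files.

```python
from collections import Counter

# Hypothetical training labels for the horse/non-horse example above.
labels = (["horse"] * 200) + (["dog"] * 900) + (["cow"] * 700) + (["deer"] * 600)

counts = Counter(labels)
total = sum(counts.values())

# Print the class distribution so the skew is visible before training starts.
for cls, n in counts.most_common():
    print(f"{cls:>6}: {n:5d} ({n / total:.1%})")

# A heavily skewed split like this (horses under 10% of samples) is a warning
# sign that the model may learn "non-horse" as the safe default answer.
horse_share = counts["horse"] / total
if horse_share < 0.25:
    print(f"Warning: horses are only {horse_share:.1%} of the sample set")
```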

Omitted-variable bias occurs when a model leaves out important factors, causing it to over- or under-compensate using the remaining factors. If we have pictures of horses and non-horses, but the input set contains significantly more images of horses with chestnut coats, then other horses (such as Pintos) may be incorrectly classified as something visually similar, perhaps a cow, or not classified at all.
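A similar check can be run on attributes within a single class. The sketch below, again using hypothetical per-image metadata (the “coat” field is assumed to have been recorded at labeling time), tallies coat colors among the horse images to surface the chestnut-heavy skew described above.

```python
from collections import Counter

# Hypothetical per-image metadata for the horse class.
horse_images = (
    [{"coat": "chestnut"}] * 850
    + [{"coat": "pinto"}] * 40
    + [{"coat": "grey"}] * 60
    + [{"coat": "black"}] * 50
)

coat_counts = Counter(img["coat"] for img in horse_images)
total = sum(coat_counts.values())

# Show how coat colors are distributed within the horse class.
for coat, n in coat_counts.most_common():
    print(f"{coat:>9}: {n:4d} ({n / total:.1%})")

# If one coat color dominates, the model may treat "chestnut" as a proxy
# for "horse" and push pintos toward visually similar classes such as cows.
```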

Ultimately, either bias can produce results that are far from ideal.

Evidence of these biases makes for attention-grabbing and worrying headlines, as illustrated by the coverage of Tesla’s fatal Autopilot car crashes, Google’s 2015 image classification failure, Northpointe’s problems with racial bias in its crime predictor and Microsoft’s prejudiced chatbot.

Another typical problem is that, even with statistically valid sample input, the model still might not produce correct results. Understanding why the system arrived at a particular answer, and adjusting it, is often difficult, requiring someone experienced with the underlying machine learning algorithms, and that expertise is not in abundance.

Valid AI

So what is the answer to these hidden problems? How do we know when answers provided by AI systems are valid?

The reality is that there are ways to build in “instrumentation,” or at least provide guidance, on the key factors a system used to make a decision. In the horse classification example above, the system could also report the specific visual elements it used to make its determination. In credit scoring, it could highlight the data used to arrive at the final decision. With this additional information, we can better understand why the system provided one answer over another and then determine whether using that answer is appropriate.
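As a rough illustration of what such instrumentation might look like, the following Python sketch uses an invented linear, credit-style model and reports the per-feature contributions alongside the decision. The feature names and weights are assumptions made for the example, not any real scoring formula.

```python
import numpy as np

# Invented feature names and weights for a toy linear credit-style model.
features = ["payment_history", "utilization", "account_age", "recent_inquiries"]
weights = np.array([0.9, -1.4, 0.5, -0.7])
bias = 0.2

applicant = np.array([0.8, 0.65, 0.3, 0.5])  # normalized inputs for one applicant

# Per-feature contributions make the decision inspectable.
contributions = weights * applicant
score = contributions.sum() + bias
decision = "approve" if score > 0 else "decline"

print(f"score = {score:+.2f} -> {decision}")
# Report the factors that pushed the decision hardest, alongside the answer.
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>17}: {c:+.2f}")
```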

De-blackboxing the Black Box

Another way to gain insight into the machine is to publish statistics on how the model’s input sample set was constructed, along with the underlying rationale. Knowing the range of information provided and its relative weighting helps us spot potential errors or biases in the input design. For instance, if we keep getting pictures of horses mislabeled as dairy cows, understanding the input sample provides clues as to why the errors occur and, perhaps, how to correct them by adding more samples of the target image subject.
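One lightweight way to do this is to ship a small “datasheet” describing the training set’s composition along with the model. The sketch below, using hypothetical class names and counts, emits such a summary as JSON.

```python
import json
from collections import Counter

# Hypothetical training labels; in practice these come from the real dataset.
training_labels = ["horse"] * 1200 + ["dairy_cow"] * 2400 + ["deer"] * 900

counts = Counter(training_labels)
total = sum(counts.values())

# Build a publishable summary of the input sample set's composition.
datasheet = {
    "total_samples": total,
    "classes": {
        cls: {"count": n, "share": round(n / total, 3)}
        for cls, n in counts.most_common()
    },
}

print(json.dumps(datasheet, indent=2))
# If horses keep getting labeled as dairy cows, a report like this shows
# immediately that cow images outnumber horse images two to one.
```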

Decision Making at Its Core

AI-based software can be regarded as a black box, but we humans are uncomfortable ceding decision-making authority unless we can peer into the inner workings of these machines. Ultimately, machine learning systems that automate decisions or other actions need greater transparency to satisfy concerns about bias and reliability.

Greg Council is vice president of product management at Parascript.
