
How to reduce AI bias, according to a tech expert


Artificial intelligence has developed quickly over just the past year. However, the technology still faces a big problem with potentially devastating real-world consequences: AI bias.

People can have covert or overt biases against others based on various factors such as race, gender or income level. Since humans create AI models, AI bias occurs when those models produce biased or skewed outputs that “reflect and perpetuate human biases within a society,” according to IBM.

One of the ways bias can creep into an AI system is through the data it was trained on. AI models use a sophisticated series of algorithms to process massive amounts of data. They learn to identify patterns within the training data so they can recognize similar patterns in new data.

But if the training data itself is biased, the AI model could pick up on skewed patterns and produce similarly biased outputs.

Say a company wants to use an AI system to sift through job applications and find qualified candidates. If the company has historically hired more men than women and that data is used to train the AI system, the model may be more likely to reject female job applicants and label male job applicants as qualified.
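As a rough illustration, consider a toy screening model trained on that kind of skewed history. The data, column names and numbers below are invented for the example, not drawn from any real hiring system, but they show how two equally experienced applicants can end up with very different scores:

```python
# Toy example with invented data: a screening model trained on historical
# hiring decisions that favored one group reproduces that preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Each applicant: years of experience, plus gender encoded as 0 or 1.
experience = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n)

# Historical decisions: experienced applicants were hired, but only when
# gender == 1. This skew is the "biased training data."
hired = ((experience > 4) & (gender == 1)).astype(int)

X = np.column_stack([experience, gender])
model = LogisticRegression(max_iter=1_000).fit(X, hired)

# Two applicants with identical experience, differing only in gender.
applicants = np.array([[6.0, 0.0], [6.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])
# The second applicant gets a far higher "hire" probability despite
# identical qualifications; the model has learned the historical skew.
```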

“The core data on which it is trained is effectively the personality of that AI,” Theodore Omtzigt, chief technology officer at Lemurian Labs, tells CNBC Make It. “If you pick the wrong dataset, you are, by design, creating a biased system.”

Mixing training datasets won’t necessarily reduce AI bias

Simply diversifying the data isn’t going to fix biased AI models.

Say you’re training an AI chatbot using dataset “A,” which is biased in one particular way, and dataset “B,” which is biased in a different way. Even though you’re merging datasets with separate biases, that doesn’t mean those biases would necessarily cancel each other out, Omtzigt says.

“Combining them hasn’t taken away the bias,” he says. “It has just now given you [an AI system with] two biases.”
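A quick way to see this with made-up numbers: suppose dataset “A” is skewed along one attribute and dataset “B” along another. Merging them leaves the combined data skewed in both directions at once.

```python
# Invented example: dataset A is lopsided by gender, dataset B by age group.
# Pooling them does not cancel the problem; both skews survive the merge.
import numpy as np

rng = np.random.default_rng(1)

# Dataset A: balanced by age, but 90% one gender.
a_gender = rng.choice(["F", "M"], size=500, p=[0.1, 0.9])
a_age = rng.choice(["young", "old"], size=500, p=[0.5, 0.5])

# Dataset B: balanced by gender, but 90% one age group.
b_gender = rng.choice(["F", "M"], size=500, p=[0.5, 0.5])
b_age = rng.choice(["young", "old"], size=500, p=[0.9, 0.1])

gender = np.concatenate([a_gender, b_gender])
age = np.concatenate([a_age, b_age])

# The merged data still over-represents men and young people.
print("share male :", np.mean(gender == "M"))   # roughly 0.7, not 0.5
print("share young:", np.mean(age == "young"))  # roughly 0.7, not 0.5
```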

Every dataset is limited in some way and therefore biased, Omtzigt says. That’s why there should be people or systems that check an AI model’s responses for potential bias and judge whether those outputs are immoral, unethical or fraudulent, he says. When the AI receives that feedback, it can use it to refine its future responses.

“The AI doesn’t know good from wrong,” he says. “You as the receiver of that information need to have the critical thinking skills, or the skepticism, to ask, ‘Is this true?'”
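In its simplest form, that kind of check could be a review step that screens each response before it is used and records feedback for later refinement. The sketch below is only a placeholder; the rules and function names are invented, and a real review process would rely on human reviewers or far more sophisticated tooling.

```python
# Invented placeholder: screen a model's response before using it, and
# log feedback that can later be used to refine the model.
from typing import Callable, Optional

# A real system would use human review, not a fixed phrase list.
FLAGGED_PHRASES = ["only men can", "people like that are"]

def review_output(text: str) -> tuple[bool, str]:
    """Return (approved, note) for a single model response."""
    for phrase in FLAGGED_PHRASES:
        if phrase in text.lower():
            return False, f"flagged: contains '{phrase}'"
    return True, "ok"

def generate_with_review(generate: Callable[[str], str], prompt: str,
                         feedback_log: list) -> Optional[str]:
    response = generate(prompt)
    approved, note = review_output(response)
    # Stored feedback becomes training signal or reviewer guidance later.
    feedback_log.append({"prompt": prompt, "response": response,
                         "approved": approved, "note": note})
    return response if approved else None

def fake_model(prompt: str) -> str:
    # Stand-in for a real chatbot, returning a deliberately biased answer.
    return "Only men can do this job."

log: list = []
print(generate_with_review(fake_model, "Who can do this job?", log))  # None
print(log[-1]["note"])  # flagged: contains 'only men can'
```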

OpenAI and Google say they are addressing AI bias

Some tech organizations creating AI systems say they are working on mitigating bias in their models.

OpenAI says it pretrains its AI models on how to predict the most probable next word in a sentence by using a large dataset that “contains parts of the Internet.”
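In spirit, that is a vastly scaled-up version of counting which word most often follows the words before it, which is why the predictions inherit whatever patterns dominate the training text. A toy sketch with a two-sentence, invented corpus:

```python
# Toy next-word predictor over an invented two-sentence corpus. It simply
# counts which word most often follows each word, so its "predictions"
# mirror whatever patterns the text happens to contain.
from collections import Counter, defaultdict

corpus = "the nurse said she was tired . the nurse said she was busy ."
words = corpus.split()

next_word = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word[current][following] += 1

print(next_word["nurse"].most_common(1))  # [('said', 2)]
print(next_word["said"].most_common(1))   # [('she', 2)]
# Because this tiny corpus only ever pairs "nurse" with "she", the model
# does too; at web scale, the same mechanism absorbs the web's patterns.
```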

However, the models can also pick up on “some of the biases present in those billions of sentences.” To combat this, OpenAI says, it uses human reviewers who follow guidelines to “fine-tune” the models. The reviewers prompt the models with a range of input examples, then review and rate the AI model’s outputs.
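Very loosely, that reviewer loop can be pictured as collecting ratings on example outputs and keeping the best-rated answers as new training targets. The sketch below is only a schematic with invented data, not OpenAI’s actual pipeline:

```python
# Schematic with invented data: human reviewers score candidate answers,
# and the highest-rated answer per prompt becomes a fine-tuning target.
from statistics import mean

reviews = [
    {
        "prompt": "Describe a good job candidate.",
        "candidates": {
            "A strong candidate has relevant skills and experience.": [5, 4, 5],
            "A strong candidate is usually a young man.": [1, 1, 2],
        },
    },
]

fine_tune_examples = []
for record in reviews:
    # Keep the candidate answer with the highest average reviewer rating.
    best_answer, _ = max(record["candidates"].items(),
                         key=lambda item: mean(item[1]))
    fine_tune_examples.append({"prompt": record["prompt"],
                               "target": best_answer})

print(fine_tune_examples)
```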

Similarly, Google says it uses its own “AI Principles,” along with feedback and evaluations from human reviewers, to improve its Bard AI chatbot.



