Testing for AI Bias: What Enterprises Need to Know

This post was originally published on IT Pro Today

As the adoption of artificial intelligence increases, so does the call for more explainable systems that are free of bias. To pass the sniff test for AI ethics, enterprises are adopting frameworks and using software to ensure that the datasets used in AI models – and the results they generate – are free from bias.

The growing complexity of AI models and their integration with enterprise architecture are creating many failure points, said Balakrishna DR, Infosys head of AI and automation. New “AI assurance” platforms are gaining traction to address this critical challenge. These platforms come with specialized procedures and automation frameworks to provide model assurance, generate the test data needed for model testing, and evaluate performance and security.

3 Things To Know About AI Ethics Testing

Here are a few aspects of testing for ethics and bias to consider.

1. Testing for AI bias is a complicated can of worms

While quality-control testing for conventional software is fairly routine, in part because testers know what to look for, AI bias testing is not so straightforward.
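To make the contrast concrete, one common starting point for bias testing is a group fairness metric such as demographic parity: comparing the rate of positive predictions across demographic groups. The sketch below is a minimal illustration, not any specific vendor's method; the function name and the toy data are assumptions for the example.

```python
from collections import defaultdict

def demographic_parity_difference(outcomes):
    """Return the largest gap in positive-prediction rate across groups.

    `outcomes` is an iterable of (group, prediction) pairs, where
    prediction is 1 (positive outcome) or 0 (negative outcome).
    A result near 0 suggests similar treatment; larger values flag
    a disparity worth investigating.
    """
    positives = defaultdict(int)  # positive predictions per group
    totals = defaultdict(int)     # total predictions per group
    for group, pred in outcomes:
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions: group "a" is approved
# 75% of the time, group "b" only 25% of the time.
sample = [("a", 1), ("a", 1), ("a", 1), ("a", 0),
          ("b", 1), ("b", 0), ("b", 0), ("b", 0)]
print(demographic_parity_difference(sample))  # 0.5
```

Even this simple check shows why bias testing has no single pass/fail oracle: which groups to compare, which metric to use, and what gap counts as acceptable all depend on the domain, which is exactly the subjectivity the next point describes.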

“Testing for AI bias is subjective and depends on the context and domain characteristics,” DR said. “Bias …”

Read the rest of this post, which was originally published on IT Pro Today.
