As the power of generative AI has become more widely available and more popular, concerns about its responsible use have also grown.

AI relies on data, and all data should be considered dangerous. This means that AI models need to be interpretable, inspected, and continually monitored for bias. We cannot blindly apply AI to data and assume it is safe to use for important operations.

When developing and building new models, businesses should assume that all data is biased, dangerous and a liability. This perspective requires deep inspection of the models being developed, in particular through interpretable machine learning, which allows the business to understand what a model has learned, judge whether it is a valid tool, and only then apply it.
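
As an illustrative sketch only (the feature names and data below are invented for demonstration and are not drawn from any FICO system), an interpretable model such as a logistic regression lets a reviewer read off which inputs drive its decisions and challenge them:

```python
# Illustrative only: synthetic data and assumed feature names, not a FICO model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "utilisation", "missed_payments"]  # assumed features
X = rng.normal(size=(1000, 3))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# In a linear model each coefficient shows the direction and strength of a
# feature's influence, so a reviewer can see, and challenge, what was learned.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>16}: {coef:+.3f}")
```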

Businesses cannot hide behind the black box

It is vital that machine learning models are not built naively on data: you must assume all data contains a variety of biases that a machine learning model could learn. If such models are deployed, they will systematically reapply those biases in the decisions they make. Organisations need to understand and take responsibility for this, which means deploying machine learning development processes that are interpretable and keep humans in the loop.

Businesses cannot hide behind the black box, but must instead use transparent technologies that let them demonstrate concretely that their models are not causing a disparate impact on, or discrimination against, one group versus another.
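
To make that concrete, a minimal sketch of one widely used check is the disparate impact ratio, which compares favourable outcome rates between groups. The decisions, group labels and 0.8 threshold below are assumptions for illustration, not FICO's methodology:

```python
# Illustrative sketch of a disparate impact ("four-fifths") check; the
# decisions, group labels and 0.8 threshold are assumptions for demonstration.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])   # 1 = favourable outcome
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())

print("Favourable outcome rate by group:", rates)
print(f"Disparate impact ratio: {ratio:.2f}",
      "- flag for review" if ratio < 0.8 else "- within the common 0.8 threshold")
```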

A recent FICO survey carried out with Corinium showed that just 8% of organisations have even codified AI development standards. In the future, consumers will need to be able to ask whether organisations using AI have defined model development standards, in the same way that they currently have expectations around how their data is used and protected. Consumers and businesses alike also need to understand that all AI makes mistakes. Governance of AI use therefore includes the ability to challenge the model and to use auditability to challenge the key data used to make decisions about a consumer. Just as consumers give consent to share their data for specific purposes, they should also have some knowledge of the AI techniques a financial institution is using and of how a model can be challenged, and this requires built-in transparency.

ML: a tool rather than a magic box

If you think about machine learning as a tool rather than a magic box, you will have a very different mentality, one based on understanding how the tool works and how differences in data inputs affect it. This leads us to choose technologies that are transparent. It will take time, but the more conversations we have about interpretable machine learning technologies, the more organisations can start to demonstrate that they meet the necessary model transparency and governance principles, and the more customer confidence will improve. Fundamental to this is ensuring that models are built properly and safely, without creating bias. This is what will start to establish trust.
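
As one hedged illustration of probing how differences in inputs affect a fitted model, permutation importance shuffles each input in turn and measures how much the model's accuracy drops. The synthetic data below is an assumption for demonstration only:

```python
# Hedged sketch: permutation importance measures how much model accuracy drops
# when each input is shuffled. The data here is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                 # three inputs; only two matter
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier(random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for i, drop in enumerate(result.importances_mean):
    print(f"input_{i}: mean accuracy drop when shuffled = {drop:.3f}")
```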

Dr Scott Zoldi is Chief Analytics Officer at FICO