Ethics and Trust in AI

At the IERP® Global Conference, October 2022

“Defining anything in the AI spectrum is problematic,” said Dr Janet Bastiman, Chief Data Scientist with compliance technology specialists Napier. “Even the phrase AI itself can mean different things to different people.” There was disagreement about the nuances of AI even among practitioners, she added, which contributed to the lack of clarity. Coupling AI with ethics, the moral principles intended to avoid harm or wrongdoing, complicates matters further. Can organisations be sure that they are using AI ethically? More importantly, why should they care? Why think about ethics at all? Can ethics be considered a risk problem?

Most organisations already have their hands full just maintaining themselves as going concerns, and trying to apply AI ethically may muddy already murky waters. Explaining that the focus of AI ethics today was on doing no harm, she said that beyond the legal risks, there were practices which, while not illegal, could still cause reputational harm. Organisations therefore needed to think of these issues from a mitigation perspective. Badly implemented AI tends to linger and compound itself if not checked early enough, and can escalate problems even after the systems involved have been phased out. She cited the example of AI implementation in tracking and monitoring through CCTV feeds.

It started out as a way of tracking and monitoring, but was opened up to the extent that members of the general public were able to take photos of ordinary people walking on the street and access a great deal of information about them almost instantly, which posed a confidentiality and security risk. “In business, you need to consider AI alongside all other business risks,” she advised. “The business needs to take it seriously or it won’t work.” The nature of ethics is such that it is not confined to just one department of the business; it has to be championed from the top down, and the organisation must make time for it, or efforts to apply it effectively will fail.

She said the ethical issues in AI could be broadly divided into bias, privacy, the way the system is used and its intent, and industry-specific issues. Bias could be in the data, models, team or design of the solution. Privacy issues may arise when people gather data which they don’t necessarily have the right to use. “They may have the data, but not the permission to use it in a certain way,” she said, drawing a clear distinction. People using the system also need to understand how it works, and the system itself should be designed in a way that allows ethical use, so that it is less likely to be misused. Specific industry issues are a factor as well.

The business may be fighting financial crime, working in medicine, or operating in any other industry with specific issues that need to be considered. The different types of AI also have to be noted, as each has challenges peculiar to its application. For instance, AI used for image labelling will have different issues from AI applied to a self-driving car or to robotics. Stressing that there was no one-size-fits-all solution, she said that it was not exclusively a technology problem, and thus could not be left to engineers to solve. “If you leave it to the engineers to solve, you will get the wrong answers,” she said. “This is where we see the failures and the big ethical problems.”

Engineers are not ethicists; they do not understand all the different things that require consideration. “It all starts with the definition of issues, before you start development,” she pointed out. “If you try to build it in after you have built the solution, it will be expensive and you will probably not cover everything.” Just as with any other technical project, the ethics of AI need to be considered up front, together with the potential risks of the system. “This is all risk mitigation,” she stated, adding that she was often asked how to turn the ‘do no harm – just do good’ philosophy of ethics into something tangible.

She advised approaching it in the same way as any technology: by thinking about what the system should and shouldn’t do, and what could happen if it were given the wrong input. Articulating and documenting this makes the advantages and disadvantages more obvious, and supports the formation of appropriate ethical values which can then be applied to the company or project. “You get a list of very specific problems – a list of what you want to do, are happy to do, and where the red lines are. And you can mitigate these risks,” she said. “It sounds really simple but a lot of companies really struggle to turn that intangible ‘do no harm’ philosophy into something they can document, audit and mitigate.”
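As a minimal sketch of what such documentation might look like once it is treated as data rather than philosophy, the register below records what a system may do, what needs mitigation, and where the red lines are. All names and entries here are hypothetical illustrations, not anything described in the talk:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    ACCEPTABLE = "happy to do"
    CAUTION = "needs mitigation"
    RED_LINE = "must never happen"

@dataclass
class EthicsEntry:
    """One auditable line in a project's ethics register."""
    behaviour: str        # what the system may do, or must never do
    severity: Severity
    mitigation: str = ""  # how the risk is reduced or monitored

@dataclass
class EthicsRegister:
    project: str
    entries: list = field(default_factory=list)

    def red_lines(self):
        """The behaviours the project has ruled out entirely."""
        return [e for e in self.entries if e.severity is Severity.RED_LINE]

# Hypothetical entries for a transaction-screening project
register = EthicsRegister(project="transaction-screening")
register.entries += [
    EthicsEntry("Flag suspicious transactions for human review",
                Severity.ACCEPTABLE),
    EthicsEntry("Use country of residence as a model feature",
                Severity.CAUTION, mitigation="quarterly bias review of flag rates"),
    EthicsEntry("Block an account with no human sign-off",
                Severity.RED_LINE, mitigation="pipeline requires analyst approval"),
]
print([e.behaviour for e in register.red_lines()])
```

Once the red lines exist as concrete entries, each one can be reviewed, audited and mitigated like any other project risk.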

Companies often assume that their users will not understand the results, or that there will be no benefit because the results are presumed inaccurate. Perceptions like these set the value placed on AI; it must therefore be accessible, and users must be informed. “AI must have explanations,” she said. “It needs to be visible and intuitive, and gather feedback from end users. Find out what the risks are, what our values, measures and actions are.” This enables it to turn from something fluffy and insubstantial into something tangible. She also explained the challenges confronting auto-updating AI, which retrains itself automatically.

“If you are making automatic decisions, even if you find users who are potential financial criminals, you won’t be able to say why when the issue is escalated to the regulator,” she said. “It must be clearly auditable, and while we can train and update in the background, we don’t want to deploy anything that hasn’t been tested. We don’t want decisions to be made that are not supported.” Model testing and an audit trail are necessary before deployment, so that there is something concrete to report on.
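As an illustration of that point, one way to make automated decisions auditable is to log every decision together with the model version that made it, the inputs it saw, and the model’s test status. The function below is a hypothetical sketch under those assumptions, not Napier’s implementation:

```python
import json
import time
import uuid

def log_decision(model_version: str, model_tested: bool,
                 features: dict, score: float, outcome: str,
                 path: str = "decisions_audit.log") -> str:
    """Append one audit record per automated decision, refusing to act
    for any model version that has not passed pre-deployment tests."""
    if not model_tested:
        raise RuntimeError(f"model {model_version} is untested; refusing to decide")
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # which model made the call
        "features": features,            # the inputs it saw
        "score": score,                  # the raw model output
        "outcome": outcome,              # the action taken, e.g. "escalate"
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# A reviewer or regulator can later replay exactly what was decided and why.
log_decision("v1.4.2", True, {"txn_count": 37, "avg_value": 912.5},
             0.87, "escalate")
```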

Stressing that everyone has biases based on personal experience, knowledge and assumptions, she said this affects data sets and how data is gathered to build solutions. “We need to be aware of this and build it into ethical programmes,” she said. “There have been an awful lot of big embarrassing ethical mistakes because people have not considered bias.” To illustrate, she cited the example of the AI-driven hand dryer that could only recognise white skin and would not turn on for anyone else. “Bias is generally solved by taking more data or changing the data, but sometimes simply throwing more data at the problem doesn’t help,” she said, recounting the example of the AI recruiting tool, built on many years’ worth of historical data, that turned out to be completely sexist, discriminating against women’s CVs.

“The developers tried to fix it by adding more data to the system but it didn’t solve the problem,” she said. “Eventually, because they couldn’t see a way around the problem, it had to be turned off, and hiring went back to traditional methods.” Advocating extensive testing, she explained that while test results on their own will not eliminate bias, they are important because they can confirm that the right thing is being done.
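One simple test of the kind she advocates is comparing outcome rates across groups before trusting a model. The sketch below uses made-up data and a heuristic threshold of the author’s choosing; it flags any group whose rate falls well below the best-treated group’s:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Positive-outcome rate per group from (group, selected) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def flag_bias(rates, tolerance=0.8):
    """Flag groups below `tolerance` x the highest rate
    (a four-fifths-style heuristic, used here purely for illustration)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < tolerance * best]

# Hypothetical shortlisting outcomes: (group, shortlisted)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)             # approx. {'A': 0.67, 'B': 0.33}
print(flag_bias(rates))  # ['B']
```

A passing check does not prove the model is fair, just as she notes, but a failing one is concrete, reportable evidence that something needs fixing.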

On the issue of explainability as a means of building trust, she said that it was sometimes difficult to explain things to end users, particularly if the explanation was pitched at the wrong level. “A lot of people think that you don’t need to do it, but if it’s not explained the right way, you will never gain trust,” she stated. “Many avoid explaining because it takes too much effort to explain AI.” Most people understand probability well enough, but may not be able to connect the probabilities with their impacts. Noting that technology changes very quickly while the law does not, she said that recommendations were hardening into laws worldwide. In the EU, for example, recommendations have become guidelines which in turn have become law. “Every jurisdiction in the world is coming up with regulations,” she said. “We must be aware of these developments.”

There are also many laws dealing with fairness and deception which extend to AI, as well as industry-specific regulations, and she cautioned against designing programmes without first thinking thoroughly about business processes. “You need to make ethics a priority,” she said. “It’s not just a technology problem. You need an internal review board, and to get explainability at the correct level, and then test those assumptions. Things change; new technologies come out, and what may have started out as innocent initially may become an ethical problem over time.”
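Her point about explainability at the correct level can be made concrete: most users can read a probability, but it helps to pair the number with what it means for them. The bands and wording below are invented for illustration, not taken from the talk:

```python
def explain_score(p: float) -> str:
    """Translate a model probability into a message that pairs the
    number with its practical consequence for the end user."""
    if p >= 0.9:
        return f"Score {p:.0%}: strong indicators found; the case is escalated for review."
    if p >= 0.5:
        return f"Score {p:.0%}: some indicators found; an analyst will take a second look."
    return f"Score {p:.0%}: no significant indicators; no action is needed."

print(explain_score(0.93))  # explains the impact, not just the number
```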
