Successful AI Implementation – Myths and Reality

At the IERP® Global Conference, October 2022

Many companies approach an AI project the way they would any other standard technical implementation, without realising that AI brings its own distinct set of considerations. How should they prepare in order to get the best out of their projects? Dr Janet Bastiman, Chief Data Scientist with compliance technology specialists Napier AI, delivered this session on AI implementation and shared advice that could improve the outcomes of AI projects. She cited the example of a hospital’s AI project – supposedly patient-centric, personalised cancer treatment plans – which was announced with great fanfare, then quietly scrapped about six years later at a cost of more than US$62 million.

“It was not an isolated case,” she said. “There are thousands of (similar) AI projects every year…but it doesn’t have to be that way. There are some sensible actions you can take to make sure that your projects are successful or, at the very least, beneficial even if the outcome is not what was originally intended.” Explaining that what the IBM Watson hospital project team was trying to do was genuinely hard, she said, “Ingesting medical papers and summarising them in context…is not straightforward. It can be done, but it takes a lot of time and effort. Recommending personalised treatment, even with all the variability of human medical data, isn’t really a problem on its own.”

What was really hard, however, was trying to combine the two to come up with an effective answer. “The problem was that they started off by jumping straight into trying to solve the world with AI,” she said. “They also did not do a proper risk analysis. They didn’t consider the ethical risks, the regulatory risks, or potential changes in the data they would get and how they would be able to use it.” Nor did the team interact with users as much as they should have. “AI is very good at problems that are structured and need automating,” she explained. “You also need to consider how you can measure whether your solutions are working or not.”

Not every problem requires machine learning, she said, urging users to understand the tools they have. Some problems can be solved in different, easier and much faster ways, and may not require AI at all. “Some of the most successful companies started with really simple statistical analysis that took them a long way,” she said. Companies also need to do their own due diligence first, before determining what will really solve their problems; they should carefully consider ethics, intent, malicious users and industry regulations. Risk assessment is necessary because it relates directly to how the AI is being used, and how it could be used.
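As a rough illustration of that “start simple” point, the sketch below is not from the talk: the data, field names and thresholds are invented for illustration. The idea is to agree a success metric first, then check whether a plain statistical rule already meets it before any machine learning is considered.

# Minimal, hypothetical sketch: define the success metric up front, then test a
# simple statistical rule before reaching for machine learning. The synthetic
# "transaction" data and the percentile thresholds are illustrative assumptions.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
amounts = rng.lognormal(mean=3.0, sigma=1.0, size=1_000)      # stand-in transaction values
labels = (amounts > np.quantile(amounts, 0.95)).astype(int)   # synthetic "suspicious" flags

# Baseline rule: flag anything above the 90th percentile -- no learning involved.
threshold = np.quantile(amounts, 0.90)
flagged = (amounts > threshold).astype(int)

print("precision:", precision_score(labels, flagged))
print("recall:   ", recall_score(labels, flagged))
# Only if this simple rule falls short of the agreed success metric is there a
# case for investing in a machine-learning model.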

“Just because you can do something, you still have to think about whether you should actually do it,” she said. “There may be good intentions but no thought as to how it will be used.” Industry regulations abound, and should be strictly adhered to. One of the most common reasons for project failure is the lack of a proper team. Sometimes the required talent is not available, or the company may not be able to afford it. But beyond all this is the need for data. “Start with data,” she stressed. “There’s no point having the team if you don’t have the data. When you do have the data, you will need to understand what it’s actually going to tell you, and what you need to do.”

The variability of data is a major consideration; it needs to be appropriately managed, preferably in collaboration with subject matter experts. “When you do get the technology, you need an efficient pipeline to handle the data,” she advised. “Don’t dump data on users. AI needs to make things more efficient and effective.” Also, if ethical risk planning has been done correctly, test results and explainability will be available to share with users, who should be encouraged to ask questions and voice their concerns. Additionally, the transition from one system to another may cause a certain amount of misgiving; much of the system’s success hinges on user comfort.
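A minimal sketch of what such a pipeline step might look like follows, assuming an illustrative transaction feed; the field names, validation rules and reference data are assumptions for illustration, not anything described in the session. The point it shows is the one above: validate variable incoming data against rules agreed with subject matter experts, and give users a short summary rather than the raw dump.

# Hypothetical pipeline step: validate incoming records and summarise the
# outcome for users. All field names and rules below are illustrative.
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount: float
    currency: str
    country: str

VALID_CURRENCIES = {"GBP", "USD", "EUR"}   # illustrative reference data

def validation_issues(tx: Transaction) -> list[str]:
    """Return violations of rules agreed with subject matter experts."""
    issues = []
    if tx.amount <= 0:
        issues.append(f"non-positive amount: {tx.amount}")
    if tx.currency not in VALID_CURRENCIES:
        issues.append(f"unknown currency: {tx.currency}")
    return issues

feed = [
    Transaction("t1", 120.0, "GBP", "GB"),
    Transaction("t2", -40.0, "USD", "US"),
    Transaction("t3", 75.0, "XXX", "FR"),
]

clean = [t for t in feed if not validation_issues(t)]
rejected = {t.tx_id: validation_issues(t) for t in feed if validation_issues(t)}

# Users see a short, actionable summary rather than the full data dump.
print(f"{len(clean)} of {len(feed)} records passed validation")
for tx_id, issues in rejected.items():
    print(tx_id, "->", "; ".join(issues))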

Advising care, documentation and mindful thinking at every stage, from conceptualisation to implementation, she acknowledged that wrong decisions could be made at any time. “What we tend to do is hold AI to much higher account than humans. It is important to separate a good decision from the best possible outcome. If you can do that, you probably have the best possible chance of a successful implementation. Implementation is only a small part of the whole process. There is so much you need to do before you get round to writing your AI and deploying it,” she said, adding that if the Watson team had thought things through, they would not have wasted US$62 million.

Quality control is imperative once the system has gone live. Adjustments have to be made over time as the data and the regulations change. Many companies don’t know what to do with their data, and find it difficult to cope with regulations, she said, but “if you have done the prep for the project properly, you will be able to say, very early on, you know what you’re doing…(and be able to decide) whether to pivot the project, stop the project or continue.” To a question from the floor on how to tell whether a company’s AI was good, she said that there were two definitions of ‘good’ – ‘good’ as in successful, and ‘good’ as in not ethically bad.
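One common way to put that ongoing quality control into practice is to watch for drift between the data the system was validated on and the data it now sees. The sketch below is a generic illustration rather than a method described in the talk; the Population Stability Index and the 0.2 alert threshold are conventional rules of thumb, used here as assumptions.

# Rough sketch of post-go-live monitoring: compare live data against the
# distribution the model was validated on and flag significant drift.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between validation-time and live data."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0] -= 1e9   # widen outer edges so out-of-range live values still land in a bin
    cuts[-1] += 1e9
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
training_scores = rng.normal(0.0, 1.0, 5_000)   # data the model was validated on
live_scores = rng.normal(0.5, 1.2, 5_000)       # what production now looks like

psi = population_stability_index(training_scores, live_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:   # conventional rule-of-thumb alert level, assumed here
    print("Significant drift: review the model, the data feed and any regulatory changes.")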

Deployed AI is not ethically bad if it is not doing something that could be misused or abused, or that could cause harm. Whether the AI is successful depends on what the user wants to achieve, so what constitutes measurable success needs to be defined for the project. She stressed that as long as users did as much of the right thing as possible from the beginning, from testing to explainability, and did everything possible to mitigate risks as if the AI were a human, then it would be very difficult to go badly wrong.
