The Institute of Enterprise Risk Practitioners (IERP®) is the world’s first and leading certification institute for Enterprise Risk Management (ERM).


What should risk managers consider when embarking on the Artificial Intelligence journey?

Whether we like it or not, the age of Artificial Intelligence (AI) is here. It has been here for quite some time, but only lately has technological development pushed it concertedly into the spotlight. The Covid-19 pandemic has spurred more businesses and individuals to go online, leading to higher connectivity and the blurring of boundaries. The digital economy is booming, and major ICT opportunities abound. This is one area that is thriving amid the disruption and uncertainty brought on by the virus.

The opportunities that come with this are staggering, but so are the pitfalls. Never before in human history has so much data been available – and this is both a blessing and a curse. “More data is being created because of the increase in online transactions,” said Dr Ong Hong Hoe, Bank Negara’s Head of Data Science and Analytics.

“By 2025, there will be 175 zettabytes in use. AI will be needed to manage data, to determine what is important to the customer.”

AI is also expected to help businesses generate more than US$31.2 billion in revenue by 2025. But what exactly is it? AI can be classified into three categories: narrow, general and super intelligence. Of the three, narrow intelligence is the most widely used, mainly in machines. Alphabet’s driverless car, Amazon’s anticipatory shipping app, neural voice cloning, voice-based chatbots and face detection or recognition software are examples of narrow intelligence technology currently deployed. Some of these are highly accurate; at least 50 countries have already seen fit to put AI policies in place, and tighter regulation is likely.

AI systems are dynamic and can evolve as technology progresses. They are able to “make” decisions based on logic and rules, and can be programmed for destructive ends. Far from being the stuff of science fiction, AI takes its biases from humans, “learning” racist or sexist behaviour and incorporating human prejudices into its algorithms through data annotation. Besides these inherent weaknesses, several AI-related risks have emerged in recent years, warned Dr Ong. Companies deploying AI in their vehicles may face increased liability risk: in the case of driverless cars, for example, it may not be easy to determine who is at fault in an accident.

The ability of AI to handle large amounts of data may lead to data privacy risk. AI used to analyse data could be programmed to select which information to use, giving rise to privacy issues and putting confidentiality at risk. In recent years, the issue of “Deep Fakes” – AI-manipulated images – has emerged, and the growing sophistication of the technology has made it increasingly difficult to distinguish a manipulated image from an authentic one. Coupled with social media, this makes it easier to spread disinformation that is not merely misleading but potentially dangerous. Compounding the problem, such manipulation may not even be illegal in jurisdictions that have no laws in place to check it.

The risk of losing jobs to AI becomes real not when tedious jobs are automated, but when staff cannot be retained or retrained. Many companies are unable or unwilling to redeploy staff affected by such changes because of their internal environment. Retraining or upskilling may be necessary, but firms may not be able to afford it, particularly when large numbers of employees are involved. Implementing AI is never an easy task, and it is almost always an expensive one, even for large organisations. Smaller firms may find it impossible, and therein lies a conundrum: small firms could benefit extensively from AI but may not be able to afford it.

This could lead to what Dr Ong described as “monopolies of sovereignty risk”, where smaller companies lose out to big ones because they simply cannot afford the tools that would amplify and sustain their competitiveness. AI may be the solution to many productivity issues, but it may present more complications than it can resolve. What can organisations, large or small, do to make it work for them? As with many things that need managing, the first step is to develop a thorough understanding of the issues, and of the major challenges the organisation may face. Only then can decisions be made about how much to invest in R&D to bring the organisation up to the level of AI it actually needs.

Despite its “artificial” character, attention must be paid to the human aspect of AI, to better develop the processes required to manage it appropriately. This is particularly important from the documentation and compliance perspectives. Dr Ong urged the audience to “plan for failure,” stressing that AI will not happen organically. A future which includes AI could be painful. “Job loss is real, but people can upskill,” he stated.

“AI takes away jobs which are tedious and repetitive.” Risk Managers, he said, will have to look into a multitude of dimensions to identify where or how their respective organisations will be at risk from AI as it is almost always customised to the organisation.

The current low level of AI use across industries may be a good thing. Most sectors prioritise automation, which sits at the narrow intelligence end of AI. With the concerted move to online commerce brought on by the pandemic, the environment may therefore be conducive to stepping up AI implementation. Considering the direction businesses will have to take post-pandemic, more concerted application of AI may well be a necessity, not an option. The risk will lie in not implementing it fast enough to allow the organisation to recover, grow and maintain its competitiveness in the New Normal.
