AI’s Next Evolutionary Step

At the IERP® Global Conference, August 2023

Artificial Intelligence has been around for more than 50 years, but with the advent of apps like ChatGPT, its use has accelerated and become more widespread. “The usability of AI has significantly improved,” said Nitin Acharekar, Vice President of Consulting & Research at tech advisory firm Twimbit. “Now the C-level is pushing for its use.” Once the C-level does this, he added, uptake can be expected to increase across the organisation. His presentation covered several areas of AI, including how and where it has been adopted today, how it can help and harm individuals and organisations, and some key principles for responsible AI use.

As an example of the increasing use of AI in financial services, he quoted the CEO of a major bank in Singapore as saying that the bank was becoming an AI-driven establishment. “Banks have been using chatbots for some time now but this function is now becoming easier to use,” Acharekar said. Describing the chatbots as ‘friendlier’, he said that small differences in the language used could enhance the user experience; AI was now capable of fine-tuning this. The fraud detector function of AI was also being put to greater use, and the personal banker function was being expanded to offer advisory services based on an individual’s risk profile.

“The fraud detector function of AI is being used to detect anomalies,” he said. “And AI can now provide information to help you make decisions and manage your finances.” In the telecommunications industry, AI was being increasingly applied in the areas of network optimisation and predictive maintenance, to analyse network performance. “Based on this, it can make decisions and take action,” Acharekar said. It was also being utilised in billing, fraud detection and security, customer management, predictive analytics for customer churn, and marketing and sales. Applying AI in government services was making the job of civil servants easier, he said, and professionalising delivery.
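The anomaly detection Acharekar describes can be illustrated with a minimal sketch (not from the talk): flagging transactions whose amounts deviate sharply from the historical norm. The z-score rule, threshold, and sample figures here are invented for illustration; production fraud systems use far richer models.

```python
# Illustrative sketch: flag anomalous transaction amounts using a
# simple z-score rule. Threshold and data are hypothetical.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts that deviate from the mean
    by more than `threshold` standard deviations."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Typical payments plus one outlier at index 7:
history = [42.0, 55.5, 48.2, 51.0, 47.3, 49.9, 52.1, 5000.0]
print(flag_anomalies(history))  # [7]
```

A real deployment would replace the z-score with a trained model and score transactions on many features, but the principle is the same: learn what “normal” looks like, then surface deviations for review.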

It could also offer solutions for public safety and healthcare, education, social welfare and emergency services. Pointing out that AI was now part of almost every business process, he said human interaction with it was becoming more extensive. “Every job will (ultimately) benefit or be impacted or affected by AI,” he said, describing it as ‘an arms race’ between companies, with the winner being the one which adopts it faster or applies it more extensively. But with all this comes risk; a major challenge is in the area of security, with the ‘explosion of data’ brought about by the growing use of AI.

“One of the challenges that an organisation’s Chief Security Officer faces is being overwhelmed by the amount of information that needs to be digested,” he said. “Security has to run 24/7 to identify risks associated with this and other issues. Without technology, this will not be possible. But with AI, it is manageable. The infrastructure itself is becoming more ‘intelligent’ with the embedding of AI.” Users should also note that just as AI helps them identify threats and avoid spam, hackers are likely to be using more AI as well. Recommending that all security should be supported with some level of AI, he said that it could be used to identify security gaps and fill them.

In the area of risk management, AI shines. He highlighted just three areas where AI enhances risk management – IT security, supply chain and ESG – but these showcased the extent to which it can be effectively applied. In IT security risk management alone, there were at least a dozen areas where AI could strengthen controls. These include cyber risk quantification and residual risk calculation; network intrusion detection and prevention; automation of cybersecurity controls; malware detection, analysis and prevention; phishing and scam detection and filtering; countering advanced persistent threats; behavioural modelling and analysis; and preventing zero-day attacks, among others.
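One of the controls listed above, phishing and scam detection, can be sketched at its simplest as a rule-based scorer. This is a hypothetical toy, not anything presented at the conference: the phrases and weights are invented, and real filters use trained classifiers over many signals.

```python
# Hypothetical sketch of phishing/scam filtering: score a message
# by summing weights of suspicious phrases. Phrases and weights
# are invented for illustration only.
PHISHING_SIGNALS = {
    "verify your account": 3,
    "urgent": 2,
    "click here": 2,
    "suspended": 2,
    "password": 1,
}

def phishing_score(message: str) -> int:
    """Sum the weights of suspicious phrases present in the message."""
    text = message.lower()
    return sum(w for phrase, w in PHISHING_SIGNALS.items() if phrase in text)

def is_suspicious(message: str, threshold: int = 4) -> bool:
    """Flag the message when its score reaches the threshold."""
    return phishing_score(message) >= threshold

print(is_suspicious("URGENT: your account is suspended, click here"))  # True
print(is_suspicious("Meeting notes attached"))                          # False
```

The AI-driven versions Acharekar alludes to replace the hand-written phrase list with a model learned from labelled mail, which is what lets them keep pace as attackers change their wording.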

Where supply chain risk management is concerned, AI can be applied in logistics and transportation; demand forecasting and inventory management; supplier risk assessment; warehouse automation; health and safety; business process automation; and customer service. “With the pandemic, supply chain management has become very important,” Acharekar said. “AI helps companies to understand the day-to-day risks of the supply chain.” In the area of ESG risk management, AI can be applied to almost every aspect: climate change, biodiversity, conservation, water security, clean air, and weather and disaster resilience. “We cannot avoid the weather but we can mitigate its effects,” he said.
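The demand forecasting and inventory management use case mentioned above can be sketched with a moving-average forecast feeding a reorder check. This is an illustrative minimum, not the speaker’s method; window size, safety factor and demand figures are all made up.

```python
# Illustrative sketch: moving-average demand forecast plus a
# reorder decision. All parameters and figures are hypothetical.
def moving_average_forecast(demand_history, window=3):
    """Forecast next-period demand as the mean of the last `window` periods."""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

def should_reorder(stock_on_hand, demand_history, safety_factor=1.2):
    """Reorder when stock cannot cover the forecast plus a safety margin."""
    forecast = moving_average_forecast(demand_history)
    return stock_on_hand < forecast * safety_factor

weekly_demand = [120, 135, 128, 140, 150, 145]
print(moving_average_forecast(weekly_demand))  # 145.0
print(should_reorder(160, weekly_demand))      # True  (160 < 145 * 1.2)
print(should_reorder(200, weekly_demand))      # False
```

Production systems swap the moving average for models that account for seasonality, promotions and lead times, but the structure – forecast, then compare against stock with a buffer – is the same.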

However, while there were many advantages to AI, there were risks as well, he cautioned, adding that there were different risks at different levels. One of these is that biases created by AI tend to increase with use. These can become more targeted as well; the increase in targeted information can influence users negatively, sometimes to extremes. “It can drive you more and more towards a polarised view,” he said, citing the case of a Belgian user who was ‘encouraged’ by an AI chatbot to sacrifice himself to stop climate change. AI may also have vulnerabilities that malicious actors can exploit, and these may go undetected.

“At the organisational level, AI specifically requires certain data, but malicious actors may gain access to systems and steal that data or tamper with it,” he explained. The AI will then start making wrong decisions, which may be detrimental to the organisation in the long term. In recent years, the issue of deepfakes has also been in the spotlight; AI is sophisticated enough to create believable fakes that may cast doubt on people’s integrity and harm reputations. This has serious implications and presents a geopolitical risk. “These are particularly harmful to leaders or people in positions of power who can influence decision-making,” he said. But the area of risk is vast, and cannot be policed constantly.

Companies must decide what is important within their respective contexts, and look at that as a priority, he advised. “One company’s priorities may not be the same as another’s,” he said. “Even organisations which already have frameworks and guidelines may not agree.” Frameworks and guidelines have been issued by different bodies such as the World Economic Forum (WEF), Infocomm Media Development Authority (IMDA) Singapore, CSIRO Australia, and the US National Institute of Standards and Technology (NIST). He pointed out the differences between frameworks that were socially oriented and those that were commercially driven.

While frameworks may guide the management of risks, there were different measurements used by different bodies, he stressed. “Other tech tools also help to measure and monitor,” he said. His conclusion stressed that AI was here to stay, and was a valuable tool for risk management practitioners. However, it came with its own risks that organisations must learn to manage. “Every business process will be embedded with AI in the near future. Its benefits can be put to good use but we must be aware of how they can be misused.”
