Machine Learning (ML) is becoming an integral part of everything we do; there is no getting away from it. When we travel, we now reach for Google Maps and Waze instead of working out our routes ourselves, one example of how deeply ML has become embedded in everyday activities. ML systems “understand” our preferences and “remember” what we have bought, to the point where they can recommend what to purchase next. While this is a boon to retailers, who can target their marketing more precisely, the rise of ML and Artificial Intelligence (AI) brings with it a rise in related risks.
At the IERP’s Tea Talk on Machine Learning, Artificial Intelligence and Risk Management, Chairman Ramesh Pillai pointed out that the technological changes bringing AI and ML into mainstream business were evolving rapidly and driving change. This change, in turn, was creating risk. “As the pace of change accelerates, risk accelerates as well,” he said. Because there was always the possibility of tech tools being subverted for sinister purposes, there was an urgent need for risk professionals to understand what AI and ML are – systems that assist businesses in achieving their objectives – and to apply them appropriately.
AI is the concept of creating intelligent machines that can simulate human thinking and behaviour, whereas ML is a subset of AI that allows machines to learn from data without being explicitly programmed to do so. Using algorithms that operate on their own, AI systems can mimic human intelligence; examples already in use include Apple’s Siri, Google’s AlphaGo, and chess-playing engines. AI can be classified at three levels – weak AI, general AI and strong AI – and most AI applied today operates at the weak (narrow) level. Ramesh cautioned that the danger lay in becoming too reliant on AI without setting appropriate checks and balances.
ML is about learning from data, mainly historical data and past experience. This allows machines to “learn” without being explicitly programmed; historical, structured or semi-structured data can yield accurate results or predictions, but only within specific domains. There are currently three types of ML in application: supervised learning, unsupervised learning and reinforcement learning. The recommendation system used by Netflix is an example of ML, as are the spam filters used in e-mail. With supervised learning, the user decides what labelled data to feed the machine to improve its performance in a guided, controlled process, as in the sketch below.
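As a rough illustration of that supervised pattern – not an example from the talk, and assuming the scikit-learn library and a stock dataset purely for demonstration – the sketch below fits a classifier to labelled historical examples and then checks its predictions on data it has not seen.

```python
# A minimal supervised-learning sketch: the user "decides what to feed
# the machine" by supplying labelled historical examples (X, y).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labelled historical data: features (X) and the known answers (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# The model "learns" the mapping from features to labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predictions on unseen data are only as good as the data supplied.
print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```

In a supervised setting like this, the quality of the labels the user supplies directly bounds the quality of the predictions, which is why data integrity features so heavily in the risks discussed below.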
In unsupervised learning, on the other hand, the system is given unlabelled data and left to find patterns and groupings on its own (a brief clustering sketch follows below), while in reinforcement learning the system learns continually from observations of its environment, receiving positive reinforcement when it makes the right decision – for example, when the computer beats a human at a game. Ramesh then detailed some common risks in ML that could affect its application. One of these, he said, was a lack of strategy and know-how: with any new technology there is a learning curve, and where the user’s experience is lacking, optimum understanding of the system is hard to reach.
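To make the unsupervised pattern mentioned above concrete – again a minimal sketch rather than anything presented at the talk, with scikit-learn and a synthetic dataset assumed for illustration – the snippet below hands the system unlabelled points and lets it find the groupings itself.

```python
# A minimal unsupervised-learning sketch: no labels are supplied, so the
# algorithm groups the data purely from the structure it finds.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Unlabelled data: three natural groupings, but the system is not told so.
X, _ = make_blobs(n_samples=300, centers=3, random_state=1)

# KMeans assigns each observation to a cluster it discovers on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
print("Clusters assigned to the first five points:", clusters[:5])
```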
The organisation may also lack a clear AI strategy, or the talent with the right skill sets to operationalise such systems; both are barriers to wider adoption of the technology. Another risk is poor or unreliable data. ML relies on human-supplied data; if there is none, it cannot learn. Errors in the data will also affect it, as will meaningless or unstructured data – “noise” that cannot be correctly interpreted. Data integrity and governance are thus imperative. There is also the possibility of over-fitting – tuning a model so closely to its training data that it captures noise rather than general patterns, leaving it with a myopic view that performs poorly on new data.
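The over-fitting risk is easy to demonstrate in miniature. In the hedged sketch below (scikit-learn and synthetic data are assumptions for illustration, not material from the talk), an unconstrained decision tree memorises its noisy training data and scores far better on it than on unseen data, while a constrained tree typically generalises better.

```python
# A minimal sketch of the over-fitting risk: an unconstrained model
# memorises its training data (including noise) and generalises poorly.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, noisy "historical" data stands in for real business data.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No depth limit: the tree fits the training set almost perfectly...
overfit = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...but the gap between the two scores below exposes the myopic view.
print("train accuracy:", overfit.score(X_train, y_train))
print("test accuracy: ", overfit.score(X_test, y_test))

# A constrained tree fits the training data less tightly but typically
# holds up better on unseen data.
constrained = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("constrained test accuracy:", constrained.score(X_test, y_test))
```

The widening gap between training and test scores is exactly the “myopic view” risk managers should watch for when validating models.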
Besides these, human biases may be inadvertently programmed into ML systems. A few instances have already appeared, much to the embarrassment of the companies involved; some have even led to reputational damage, with customers alleging racial discrimination, gender insensitivity or occupational bias. This points to an obvious ethical risk with ML, but the core of the problem lies in human involvement. As it turns out, humans are the biggest AI risk to manage. Misuse and abuse of AI systems are rising, as evidenced by the many realistic but fake audio and video recordings – “deepfakes” – making the rounds.
ML is a neutral tool; how it is applied depends on the intentions of the user. A major challenge, therefore, is to ensure that machines are used ethically. This can be done by following the standards and guidelines issued by authorities or standards bodies. Guidelines and governance are likely to increase as the technological environment evolves and as the uptake of AI and ML across industries grows. Risk professionals will have to keep up. “Risk professionals must become comfortable with the technology,” advised Ramesh. “Don’t be scared, or you will not be able to understand it.” He added that it was important to learn how to assess and manage it.
Reading the right journals, asking the right questions and doing due diligence will go a long way towards helping risk managers understand the risks of AI and ML. It is essential that risk professionals undertake this, as properly selected tools, knowledge and technology can help organisations create their own ML models. With the rapid advancement of ML, risk professionals need to advance in parallel to keep biases, lack of skill and unethical behaviour from negatively affecting projects. They will need to keep themselves constantly updated in order to give proper guidance on the assessment and management of new technologies.