@ the IERP® Global Conference, August 2024
The views and opinions expressed in this article are solely those of the featured speakers and do not necessarily reflect the official view or stance of the IERP®. The content is provided for informational purposes only.
Speaker Tanvinder Singh cited some startling figures from the very start of his presentation. Drawing on numbers from a long-running global survey of CIOs, CISOs, and decision-makers in the Asia-Pacific region, he said that 51% of respondents were concerned about cloud-delivered attacks, making the cloud one of the biggest threats in today’s business environment. “The cloud cannot operate on its own,” PwC SEA’s Director for Cybersecurity said. “It has to be connected (to other devices). The cost of fixing an issue is going up. In 2023, 31% of respondents reported breaches costing over US$1 million.”
This increased in 2024. In the Asia-Pacific region, where cloud adoption has been most rapid, 20% of respondents experienced breaches. The region has also seen tech giants like Google and Microsoft launch several data centres. Breaches have happened across sectors, including banking, finance, telecommunications, media, and industrial manufacturing. “All these are strong in Malaysia,” he said. “(But) if you adopt cloud, this is the result. The message here is that irrespective of the investments we make in security, we cannot guarantee being breach-proof.”
Being breach-proof is the wrong target; instead, organisations should consider how to make themselves harder to breach, and how to recover fast. “Breaches are going to be the new normal,” he said, urging businesses to look instead at how to reduce the cost of fixing a breach, fix it quickly, contain the blast radius, and minimise the impact on the business. “If cybercrime were a country, it would be the third-largest economy, after the US and China,” he continued. “There is a lot of money to be made.”
This dark economy is rich in resources, with a multitude of parties involved in running it, making profits, causing damage or harm, and making breaches their business. Hacking services can be hired; ransomware bought and installed; ransoms accepted on another’s behalf; cryptocurrency converted; and money laundered on demand into any legitimate currency. “As long as someone is able to give the right information, you can hire these services in the market,” he said. These malicious actors are also constantly looking for new ways to improve, such as adopting Gen AI.
The global report quoted 69% of respondents as saying they used Gen AI for cyberdefence, and 47% as already using it in risk detection and mitigation. “Typically, organisations have a security operations centre where they collect and analyse logs from all devices in their ecosystem,” Tanvinder explained. “They analyse logs for abnormal behaviour, then try to figure out if there is a rogue element or abnormal activity happening in the ecosystem. It’s a manual, labour-intensive job…but Gen AI is doing very well here, identifying anomalies in the network or system.”
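As an illustration of the kind of automated log triage described above, consider the minimal Python sketch below. It is not the speaker’s or PwC’s tooling; the feature names, sample values, and the use of scikit-learn’s IsolationForest are assumptions made purely for the example.

    # Minimal sketch of SOC-style log anomaly detection (illustrative only;
    # features and thresholds are assumptions, not the speaker's setup).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-host features derived from logs:
    # [requests_per_min, failed_logins, bytes_out_mb]
    baseline = np.array([
        [120, 1, 5.2],
        [110, 0, 4.8],
        [130, 2, 5.5],
        [125, 1, 5.0],
    ])

    # Fit on known-normal activity, then score new windows of log data.
    model = IsolationForest(contamination=0.05, random_state=42)
    model.fit(baseline)

    new_windows = np.array([
        [118, 1, 5.1],    # looks normal
        [400, 35, 90.0],  # failed-login burst plus large data egress
    ])
    for features, label in zip(new_windows, model.predict(new_windows)):
        if label == -1:  # IsolationForest returns -1 for anomalies
            print("Flag for analyst review:", features)

In a real security operations centre, these features would be engineered from raw log streams, and flagged windows would feed an analyst queue rather than a print statement.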
Fewer people are needed, but this does not mean people lose their jobs; instead, they are moved into other roles, as there has always been a shortage of staff in cybersecurity, he added. Business expectations of Gen AI are high: organisations developing new lines of business say they trust Gen AI because it helps the business and boosts productivity. Opining that Gen AI-driven processes in organisations will increase, he said that it was accessible, and that most people were looking forward to it and already using it in some shape or form.
Many tech companies are developing more uses for it, and it is becoming more pervasive in day-to-day operations. In cybersecurity, it is already being applied to anomaly detection. Tanvinder said that his organisation uses AI to constantly monitor logs from the applications and servers it uses, instead of doing so manually. There are also processes and procedures to follow if anomalies are detected, and decisions on what action to take are made with the support of sophisticated analysis tools that underpin high levels of cybersecurity.
“Some aspects can be automated, such as cyberfraud detection, or identity misuse,” he said. “Instead of using human intelligence, artificial intelligence can be used and consistently propagated. It is easier than training humans.” But he cautioned that there were contradictions amid the euphoria over Gen AI. The global report indicated that many users were comfortable deploying Gen AI without internal policies; they were not concerned about its security aspects. “We are looking at productivity and ease of use but ignoring the loopholes and vulnerabilities,” he said.
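To make the automation point concrete, one widely used identity-misuse heuristic is the “impossible travel” check: flagging consecutive logins whose implied travel speed is physically implausible. The Python sketch below is illustrative only; the speed threshold, data shapes, and function names are assumptions, not the speaker’s method.

    # Illustrative "impossible travel" check for identity misuse (a sketch;
    # the rule and threshold are assumptions, not any vendor's product).
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometres.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))

    MAX_SPEED_KMH = 900  # roughly airliner speed; faster implies misuse

    def impossible_travel(prev_login, new_login):
        # Flag consecutive logins whose implied speed exceeds the threshold.
        hours = (new_login["ts"] - prev_login["ts"]) / 3600
        if hours <= 0:
            return True  # simultaneous logins are treated as suspicious
        dist = haversine_km(prev_login["lat"], prev_login["lon"],
                            new_login["lat"], new_login["lon"])
        return dist / hours > MAX_SPEED_KMH

    # Example: login in Kuala Lumpur, then London two hours later -> flagged.
    kl = {"ts": 0, "lat": 3.14, "lon": 101.69}
    london = {"ts": 2 * 3600, "lat": 51.51, "lon": -0.13}
    print(impossible_travel(kl, london))  # True

A production system would combine deterministic rules like this with learned models, since rules alone are easy for attackers to probe and evade.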
This applies to the use of any solution, not just AI. The risk is that malicious actors can also use it – without ethical or security constraints. “This is the risk we are not looking at,” he warned. “They are using deepfakes…on social media to generate hatred and animosity. We are using AI to defend ourselves, but the dark side is creating AI-enabled malware that can figure out what we will do next. We are using it to defend ourselves; they are using it to attack us. They have to be successful only once; we have to be successful 100% of the time. So the advantage is theirs.”
Because of this conundrum, proper guardrails and security are imperative; if AI is brought into an ecosystem, it has to be done securely. There are also ethical issues to face, Tanvinder added. Using AI involves vast amounts of data; bona fide users are scrupulous about guarding clients’ privacy, but malicious actors are not. “How do I maintain the balance between collecting private data and public data? That’s the concern we have,” he said. Additionally, AI systems are trained on sample data, which inevitably contains biases that need to be detected and removed for balanced outcomes.
Similarly, when using tools like ChatGPT, users do not know where the data goes. “It’s a black box for us; there is no metadata, nothing we know about it,” he said. “If you don’t understand how it’s operating, you don’t know what risk it is exposing you to.” Another area of concern is regulatory compliance, and the complications that arise when data is used, stored, and transferred across borders. Data sovereignty is an issue, but bringing in more regulations and laws will likely increase the cost of compliance. Governance is the key, he emphasised.
“If you are in a position to question the adoption of AI tools, please do so from a governance aspect,” he advised. “What risk is it exposing us to? Are we compliant with local laws? Has somebody done a risk assessment in terms of where the data is going? How are we complying with privacy? These are the main aspects we need to look at. The survey also said that people look to their leaders for guidance. If you are in a leadership position, you can influence the outcome and people will follow you…Start questioning and identifying the risks associated with Gen AI.”