On 31 October 2025, the Institute of Enterprise Risk Practitioners (IERP®) hosted a Tea Talk that delved into one of the most critical governance topics of our time: the intersection of artificial intelligence (AI), ethics, and accountability. Presented by Chari TVT, Board Member and Chairman of the Risk and Governance Committee at UEM Sunrise Berhad, the session examined how organisations can navigate the growing ethical and operational challenges of AI and, crucially, who should be held responsible when technology fails.
In his opening remarks, Chari emphasised that while AI can transform industries, enhance productivity, and enable precise decision-making, it also introduces unprecedented risks—especially in accountability, data integrity, and governance. “AI doesn’t eliminate human responsibility,” he stated. “It changes how responsibility must be exercised.” He explained that as AI systems evolve from simple automation tools to self-learning mechanisms, the need for ethical oversight and clear accountability becomes even more urgent.
Chari pointed out that while the debate around AI ethics is not new, its implications have grown significantly as AI becomes increasingly embedded in business operations and daily life. What were once theoretical questions about fairness, transparency, and bias have now become pressing corporate governance issues. “When an algorithm makes a decision that affects people’s lives, the question isn’t just whether it was accurate,” he said. “The real question is: who is accountable when it goes wrong?”
He elaborated that AI systems, by design, rely on vast amounts of data and therefore inherit the biases and limitations present in that data. This creates ethical blind spots that may not be apparent until harm has already occurred. He cited examples of automated decision-making systems in finance, healthcare, and recruitment that have unintentionally discriminated due to flawed data sets or poorly defined objectives. “We are delegating judgement to machines without fully understanding how those judgements are made,” he warned.
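Chari's warning about inherited bias can be made concrete with a simple first-pass fairness check. The sketch below is purely illustrative and was not presented at the session: it applies the common "four-fifths" disparate-impact heuristic to a fabricated shortlisting dataset, flagging cases where one group's selection rate falls well below another's.

```python
# Illustrative first-pass fairness check on a fabricated recruitment
# dataset. The "four-fifths rule" flags a selection-rate ratio below
# 0.8 as a potential adverse-impact signal worth human review.

from collections import defaultdict

# Each record: (applicant_group, was_shortlisted) -- fabricated data
applications = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in applications:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f}"
      + (" -- review for potential bias" if ratio < 0.8 else ""))
```

A check this simple will not prove or disprove discrimination, but it illustrates the speaker's point: the bias is already in the data, and only deliberate measurement surfaces it before harm occurs.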
According to Chari, the issue is not whether organisations should adopt AI, but how they should govern its use. He urged companies to develop AI governance frameworks that align with corporate values and ethical principles. “Technology should serve humanity, not the other way around,” he said. “We must ensure that AI decisions are explainable, auditable, and consistent with the standards of fairness and integrity expected of human decision-makers.”
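One way to operationalise "explainable and auditable" is to record every automated decision together with the inputs, model version, rationale, and accountable reviewer behind it. The following is a minimal sketch; the field names and values are illustrative assumptions, not a standard schema:

```python
# Minimal sketch of an auditable decision record. Field names are
# illustrative, not a standard schema; the point is that every
# AI-driven outcome carries enough context for later human review.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str              # which model produced the decision
    inputs: dict                    # the data the model actually saw
    outcome: str                    # what was decided
    rationale: str                  # human-readable explanation
    reviewer: Optional[str] = None  # who is accountable for sign-off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    decision_id="loan-2025-0042",
    model_version="credit-scorer-v3.1",
    inputs={"income": 54_000, "tenure_months": 18},
    outcome="declined",
    rationale="Debt-to-income ratio above policy threshold",
    reviewer="credit.committee@example.com",
)

# One JSON line per decision makes an append-only audit trail.
print(json.dumps(asdict(record)))
```

The design choice worth noting is the append-only trail: decisions that cannot be reconstructed after the fact cannot meaningfully be audited, whoever made them.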
He explained that accountability must remain human-centric. While AI can process vast amounts of data faster and more accurately than humans, it cannot make ethical judgements or assume responsibility for outcomes. “When something goes wrong, you can’t hold the algorithm accountable; you hold the board, management, and developers accountable,” Chari said. This, he added, makes it imperative for governance structures to clearly define who owns the risks associated with AI deployment.
He also highlighted that boards and senior management must broaden their understanding of technology-related risks. Many board members, he observed, still treat AI as a technical issue rather than a strategic one. “AI is not an IT issue; it is a business and governance issue,” he asserted. “Boards must ask the right questions, not about how AI works, but about how it aligns with organisational purpose, culture, and accountability.” He recommended that boards give AI-related decisions the same level of scrutiny as financial, operational, and reputational matters.
Chari noted that one of the most pressing challenges lies in regulatory ambiguity. As AI technology advances faster than laws can be written, organisations find themselves navigating uncharted territory. However, he cautioned against waiting for regulators to define standards. “Good governance doesn’t start with compliance; it starts with values,” he said. “If you wait for regulation to tell you what is right, you are already behind.” He encouraged organisations to adopt self-regulatory measures and internal policies that promote transparency, ethical data use, and responsible innovation.
He also discussed the importance of trust as a critical component of successful AI adoption. “Trust is the new currency of the digital economy,” he remarked. “If people don’t trust your systems, your decisions, or your data, they won’t trust your brand.” To build and maintain this trust, he advised organisations to establish clear principles on how data is collected, used, and protected, and to ensure that AI-driven outcomes are explainable to all stakeholders.
Chari warned that over-reliance on AI could create a false sense of security, leading to complacency in decision-making. “AI can provide information, but not wisdom,” he said. “We must retain the ability to question, interpret, and make ethical judgements beyond what data alone can tell us.” He stressed that human oversight remains non-negotiable in all AI applications, especially in areas involving safety, financial integrity, and personal rights.
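That insistence on oversight can be expressed as a simple routing rule: recommendations in high-stakes domains, or below a confidence floor, never execute without a human. A hedged sketch follows; the domain list, threshold, and function name are assumptions for illustration, not a prescribed policy:

```python
# Illustrative human-in-the-loop gate: the model may recommend, but
# high-stakes or low-confidence cases are routed to a person. The
# threshold and category list are assumptions, not prescriptions.

HIGH_STAKES = {"medical", "credit", "employment"}  # assumed examples
CONFIDENCE_FLOOR = 0.90

def route(prediction: str, confidence: float, domain: str) -> str:
    """Decide whether an AI recommendation may auto-execute."""
    if domain in HIGH_STAKES:
        return f"ESCALATE to human reviewer: {prediction} (high-stakes domain)"
    if confidence < CONFIDENCE_FLOOR:
        return f"ESCALATE to human reviewer: {prediction} (confidence {confidence:.2f})"
    return f"AUTO-APPROVE: {prediction}"

print(route("approve claim", 0.97, "insurance"))   # auto path
print(route("decline loan", 0.99, "credit"))       # always human
print(route("flag invoice", 0.62, "procurement"))  # low confidence
```

Note that the high-stakes branch fires regardless of confidence, reflecting the speaker's point that in matters of safety, financial integrity, and personal rights, no accuracy figure substitutes for human judgement.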
He suggested that organisations integrate ethical risk assessments into their enterprise risk management frameworks. These assessments should evaluate how AI systems could unintentionally cause harm, whether through data misuse, privacy breaches, or social inequities. “Ethical risk is still risk,” he said. “It may not appear on your balance sheet, but it can destroy your reputation and stakeholder confidence overnight.”
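Folding ethical risk into an existing enterprise risk register can be as straightforward as scoring it on the same likelihood-and-impact scale as any other entry. A minimal sketch with fabricated entries, assuming a conventional 5×5 scoring grid and invented banding thresholds:

```python
# Minimal sketch: ethical risks scored on the same 5x5
# likelihood-by-impact scale as other entries in an ERM register.
# All entries, owners, and thresholds below are fabricated.

risks = [
    # (description, likelihood 1-5, impact 1-5, owner)
    ("Training data misuse / privacy breach",    3, 5, "CDO"),
    ("Discriminatory outcomes in hiring model",  2, 5, "CHRO"),
    ("Opaque decisions erode customer trust",    4, 3, "CRO"),
]

for description, likelihood, impact, owner in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    score = likelihood * impact
    band = "HIGH" if score >= 15 else "MEDIUM" if score >= 8 else "LOW"
    print(f"[{band:6}] {score:2}  {description}  (owner: {owner})")
```

Putting ethical exposures through the same scoring and ownership discipline as financial ones is one practical reading of "ethical risk is still risk."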
As part of proactive governance, Chari recommended forming cross-functional ethics committees to review AI-related initiatives. These committees should include not only technology experts but also representatives from risk, compliance, human resources, and sustainability. “AI risk cannot be managed in silos,” he said. “It requires collective ownership and continuous oversight.” He also highlighted the need for AI literacy at all levels of the organisation so that employees understand both the capabilities and the limitations of the tools they use.
In concluding his remarks, Chari reflected on the broader societal impact of AI. He urged leaders to consider the long-term consequences of automation on employment, privacy, and human dignity. “The purpose of innovation is not just to make things faster or cheaper,” he said. “It’s to make life better and more meaningful. If AI doesn’t serve that purpose, then we need to rethink how we are using it.” He summarised the session with a reminder that accountability cannot be automated. No matter how advanced technology becomes, organisations must ensure that ethical principles remain at the core of decision-making. “The future belongs to those who innovate responsibly,” he concluded. “AI can be a force for good, but only if we, as humans, take ownership of the risks it creates.”
The session underscored that AI’s transformative potential must be balanced by thoughtful governance and ethical vigilance. As Chari emphasised, the real question is not whether AI can make better decisions than humans, but whether humans can ensure that AI is used wisely, transparently, and responsibly in the service of society as a whole.