Mid-year considerations: AI and the value of self-regulation

New research predicts that AI integration will pose further risks over the coming months as accelerated digitisation continues to outpace regulatory guardrails.
Preparing for risks has become a more elusive task for companies as uncertainty looms over the economy, geopolitics, and supply chains. While technological progress is expected to alleviate many of these risks, it poses challenges of its own, new research finds.

The report, Chief Risk Officers Outlook, from the World Economic Forum (WEF), explores the key risks facing organisations throughout the rest of 2023. The research is based on 24 survey responses and discussions with chief risk officers in June.

“The rapid development of [AI] technologies has triggered a wave of investment … [but] the ease of use of these technologies makes them accessible to a wide range of users, including those with malicious intent,” the report said.

Respondents ranked economic headwinds (86%), supply chain disruptions (55%), armed/political conflict (50%), and regulatory changes (50%) as the biggest external risks facing companies this year, the report said, in line with S&P’s research conducted earlier this year.

Even though the evolution of digitisation and automation is often necessary for companies to get real-time insight into the risks ahead, the integration of artificial intelligence (AI) carries its own risks that companies will need to examine and prepare for, the report said. Among respondents surveyed, three-quarters expect technology-related volatility, with over half expecting global upheaval.

“Seventy-five per cent of chief risk officers agree that the use of AI technologies is posing reputational risks to their organisation,” the report said. “They note the importance of following responsible AI principles and flag risks related to the inadvertent sharing of personal data as well as bias in algorithmic decision-making.”

Moreover, respondents believe that regulatory bodies are being outpaced by the rapid development and deployment of AI technologies, with much of the resulting ambiguity stemming from a lack of insight into how to manage these risks, the report notes. In response, 43% of respondents agree with the idea of slowing or pausing the development of AI technologies until the associated risks are better understood.

Only 55% of chief risk officers surveyed say they understand how existing and upcoming regulation relating to AI will impact their organisation.

Regulation is slow, but companies can act now, the report acknowledges. “More than half of chief risk officers surveyed indicated that their organisation plans to conduct an AI audit within the next six months to ensure the safety, legality, and ethical soundness of the algorithms being used.”

Self-regulation is important, the report says. In the face of the growing disruptive power of AI technologies, it will be increasingly important for organisational leaders to demonstrate that their use of these technologies clearly aligns with societal values and interests.

— To comment on this article or to suggest an idea for another article, contact Steph Brown at Stephanie.Brown@aicpa-cima.com.
