What CFOs need to know about gen AI risk

As businesses increasingly use generative AI, CFOs need to consider data security, financial, intellectual property, reputational, and other risks.
AI-GENERATED IMAGE BY YUSAFADI/ADOBE STOCK

Generative AI offers an impressive range of capabilities, such as holding conversations, analysing documents, writing memos, and advising on interpersonal problems, all in response to plain-language prompts from the user.

At the same time, though, the technology’s inner workings remain poorly understood. And when it fails, it often does so in bizarre and unpredictable ways. Those failures can disrupt workflows, frustrate users, and even introduce factual errors and “hallucinations” that could mar a company’s work product or reputation.

For example, in February, the generative AI tool ChatGPT started acting strangely. According to IT news and reviews website Ars Technica, when one person asked if dogs can safely eat cereal, ChatGPT replied, “Always consider the makings of any food nabs from the chug can to your hounds’ refigure, as even non-toxic or small fang to hound-mark bitsy can weave into skinspeaks, dance or merryl, as waters to wave to a listen and care from you.”

The incident was resolved within a matter of hours, and ChatGPT abandoned its bizarre ramblings and returned to normal. But it was a stark reminder that, for all its promise, generative AI is a work in progress.

“Generative AI is still not at the maturity level yet. It’s still evolving. It’s evolving at a very rapid scale, and it’s being adopted at a very rapid scale as well,” said Keheliya Amarasinghe, ACMA, CGMA, manager, IT business partnering, at Fortude, where he has helped to implement generative AI tools at the Brandix Group of apparel companies in Sri Lanka.

This contrast poses a conundrum for CFOs and other business leaders. Will they embrace generative AI’s potential for increasing efficiency and extending the cognitive power of workers? Or will they err on the side of caution, waiting for the technology to develop further?

In interviews with FM magazine, finance leaders around the world shared their insights about the risks of emergent AI technology and the specific mitigation strategies they’re using as they step into this new technological era.

To invest or not? Contrasting views

Research on AI has been underway since at least the 1950s, with scientists trying for decades to imbue computers with the ability to reason, solve problems, and communicate more naturally with humans. Businesses have used forms of the technology for some years, especially for automating processes and deriving deep insights from huge pools of data. But it’s the latest evolution, generative AI, that has delivered some of the fastest and most broadly accessible advances.

Text-based generative AI agents like ChatGPT are built on a type of machine-learning model known as the large language model (LLM). LLMs are trained on large data sets of text and can understand human prompts, then generate natural-language text, code, and translations in response.

LLMs enable tools such as ChatGPT to deliver custom-tailored advice on a financial question, generate a memo to the user’s specifications, or work through a wide range of other use cases, though the technology is still developing and can make mistakes.
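
To make that concrete: under the hood, using an LLM programmatically amounts to sending a plain-language prompt and reading back the generated text. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name, system message, and prompt are illustrative, not drawn from the article.

```python
# A minimal sketch of prompting an LLM, assuming the OpenAI Python SDK
# ("pip install openai") and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a corporate finance assistant."},
        {"role": "user", "content": "Draft a short memo announcing our new "
                                    "expense-reporting policy."},
    ],
)
print(response.choices[0].message.content)
```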

The technology “can read in between the lines … [And] that is extremely powerful,” said Haz Hubble, ACMA, CGMA, the co-founder and CEO of Pally, a UK-based startup that aims to use generative AI to improve people’s human relationships.

LLM-based technology is already accessible to the public at little or no cost through freestanding consumer tools like ChatGPT. But it is also increasingly integrated into existing ERP systems and other business software, as well as into the platforms of tech giants like Microsoft and Google.

The public rollout of generative AI began in earnest only in 2022, but it already seems to have reached all corners of the business world. The pressure on organisations to invest is growing by the day.

“It’s only a matter of time before every tool is an AI tool,” suggested Hubble, who is one of the youngest people ever to obtain the CGMA designation. “So it’s not a question of yes or no, it’s a question of which tools and how much you should invest.”

But the answer to that question will differ for each company, Hubble said. And, for some, there are still too many unknowns.

“I think if not today, within the next three to five years, it will be revolutionary,” said Salauddin Ahmed, ACMA, CGMA, the head of decision support and performance management at Bangladeshi telecom Banglalink. “It’s not that we are ignoring it; we are adopting it, but we are going slow. We are seeing what’s happening, seeing the risk factors.”

Whether they’re moving quickly or cautiously, finance leaders agree on one thing: It’s time to assess the specific risks and responses for their company’s AI strategy (see also the sidebar “CFOs’ checklist for AI projects”).

Data security and privacy risks: Is your information safe?

One of the most common fears amongst business users is that a generative AI tool will inadvertently compromise an organisation’s data security and privacy. In particular, they worry that an employee will upload internal or client data into the tool, only to see the platform share that information with other users, either inadvertently or as a result of a malicious attack.

“That is a massive concern,” said Mirenna Larisa Calimache, ACMA, CGMA, generative AI lead for Deloitte’s AI Institute. “This technology doesn’t necessarily keep guardrails or help you protect your data if you put it out there.” Deloitte is responding by redirecting employees away from off-the-shelf generative AI products.

“We have restricted our employees from using ChatGPT in their work,” she explained. Instead, Deloitte has created its own implementation of the technology.

“We wanted to democratise it: We wanted to give people the ability to leverage generative AI, but in a safe way. That’s why we’ve developed our own platform,” she said. According to Calimache, once Deloitte employees have completed training on it, they can use the proprietary in-house platform for many of the same purposes ChatGPT is used for. However, they are not permitted to input client data into it.

Companies have several options to control their data while using generative AI, including a custom-developed solution, perhaps based on open-source software, or an enterprise-scale product from a third-party provider, said Ryan Hittner, a New York-based principal and global AI specialist leader for Deloitte. Companies such as Microsoft and Google are developing generative AI products that promise to draw on corporate data while maintaining security and privacy.

No matter the specific solution, “the primary interest is in finding a way to create a virtual sandbox and make sure that employee prompts stay private, because obviously when we use it for our jobs, we likely utilise private and confidential information within those prompts,” Hittner said.
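
The details of such a sandbox vary by vendor and architecture, but one common layer is a filter that scrubs obviously sensitive strings from prompts before they leave the company’s environment. Here is a minimal sketch with hypothetical redaction patterns (a production system would rely on a vetted data-loss-prevention service rather than a couple of regular expressions):

```python
import re

# Hypothetical patterns for illustration only; real deployments would use
# a vetted data-loss-prevention (DLP) service, not hand-rolled regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "account number": re.compile(r"\b\d{8,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt leaves the sandbox."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Summarise the dispute with jane.doe@example.com over account 12345678."))
# Summarise the dispute with [REDACTED EMAIL] over account [REDACTED ACCOUNT NUMBER].
```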

Operational risks: Mistakes and errors

Generative AI can seem almost human in its ability to respond to a wide variety of challenges and conditions. Unfortunately, it also has another human tendency: It can err. It can even fabricate falsehoods.

For example, ChatGPT has grabbed attention by making astute medical diagnoses based on descriptions of symptoms. But a recent study of various LLMs’ responses to medical questions found they frequently fabricate sources of information and make statements that aren’t supported by their citations.

The technology companies behind these tools, including ChatGPT maker OpenAI, have raced to reduce these kinds of errors. But the very real concern remains that people will become over-reliant on AI and perhaps let a fatal error slip into an important document.
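
Human review remains the backstop, but some checks can be automated first. The sketch below, which assumes Python’s third-party requests library, simply confirms that the URLs a model cites actually resolve; it catches fabricated links but not misquoted sources or unsupported claims, which still need a human reader.

```python
# Flag model-cited URLs that do not resolve. Catches only fabricated
# links; plausible-looking but unsupported claims still need human review.
import requests  # third-party: pip install requests

def citation_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

for url in ("https://example.com", "https://example.com/nonexistent-paper"):
    status = "resolves" if citation_resolves(url) else "flag for manual review"
    print(f"{url}: {status}")
```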

It’s an issue that Tankiso Moloi, FCMA, CGMA, Ph.D., has been tackling as a professor and academic director at Johannesburg Business School in South Africa. Students have quickly adopted tools like ChatGPT and used them for papers and other work, often with little regard for the pitfalls.

“I don’t know what is inside that black box. I don’t know what is crunching these numbers. And it could be giving me the wrong numbers for all I know. But they look sensible to me because I trust this technology,” Moloi said of the students’ approach.

The school has responded by using tools to detect AI-generated content and warning students of disciplinary action for misuse of the tools. But Moloi also says it’s important to explain how to use AI responsibly: acknowledging its use, checking its work, and knowing its limits.

“It’s been quite an interesting journey,” he said.

There is a consensus among the experts who talked to FM: Whether or not they’re embracing AI, companies need to set rules and training policies about its use.

“How flexible you want to be with those policies depends on the culture of your company, but I think making expectations clear is important,” Hubble said.

Organisations “need to train their employees, so that even if they get some help from AI tools, they need to do a review. They need to put tick marks on a list,” Ahmed said.

Hittner and Calimache from Deloitte also stressed the importance of human oversight.

“No technology is going to do all of the work for you. You have to make decisions, you have to improve the content, you have to make sure it’s accurate and without bias,” Calimache said.

Deloitte is emphasising that generative AI is best used in the initial phases of drafting and organising work, Hittner said.

“I think some of the most effective controls right now are the human layer review,” he said.

Similarly, in Sri Lanka, Amarasinghe’s employer has required all employees to complete a LinkedIn course on the ethical use of generative AI.

The message, he said, is: “We are implementing this tool for you, but you need to make sure that you avoid these certain risks as well.”

Financial risks: Overinvestment for underperformance?

The costs of generative AI projects can vary widely. Companies dabbling with Google or Microsoft enterprise tools might pay a relatively modest $30 per user per month, while a more intensive custom software project can run to hundreds of thousands or even millions of dollars.
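
That per-seat figure translates into simple break-even arithmetic. The sketch below uses the article’s $30-per-user-per-month price; the headcount and custom-build cost are illustrative assumptions, not figures from the article.

```python
# Break-even arithmetic: per-seat licensing vs. a one-off custom build.
SEAT_PRICE_PER_MONTH = 30     # USD, the per-user figure cited in the article
USERS = 500                   # assumed headcount
CUSTOM_BUILD_COST = 750_000   # USD, assumed one-off custom project cost

annual_licensing = SEAT_PRICE_PER_MONTH * 12 * USERS
print(f"Annual licensing for {USERS} users: ${annual_licensing:,}")  # $180,000

breakeven_years = CUSTOM_BUILD_COST / annual_licensing
print(f"Custom build matches licensing spend after ~{breakeven_years:.1f} years")
# ~4.2 years, before counting the custom system's own running costs
```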

At the Brandix Group, Amarasinghe is leading the finance transformation team. Brandix was the only company in Sri Lanka to be part of the early access programme for Microsoft’s Copilot package of generative AI tools.

“We are going to go heavy and invest more in it and see how it can benefit us in the future. Right now, apart from Copilot, we have developed our own internal generative AI-driven chatbots and are in the process of integrating [them] with most of our internal systems,” he said. But even so, Amarasinghe cautions other finance leaders to think carefully before plugging into an intensive generative AI project.

“They shouldn’t just go with the trend,” he said. “You need to have your problem identified. Why are we doing this? Why are we investing in our digital AI, too? What is the expectation from this? Because sometimes, such a heavy investment might not make sense.”

Generative AI has such a wide range of applications that it can be difficult to predict exactly how a company will best use it. Amarasinghe suggested beginning with a limited investment and an experimental mindset, then expanding once results begin to show.

For example, Brandix is watching how a group of power users engages with Microsoft Copilot before deciding on a larger deployment. So far, those users have employed Copilot to create content such as automated meeting minutes, action items, and email summaries to help streamline strategic decision-making, Amarasinghe said. But more intensive tasks, such as tapping into the company’s sizable data lakes, could require a much greater investment, which the company is now moving towards.

Other risks: Employee resistance and reputational damage

The decision to embrace AI tools of any kind comes with the risk of controversy and negative reactions.

Internally, employees may fear that they’re being replaced. Or they may not understand the technology, leading them to avoid using it.

It calls for careful change management, starting with a clear explanation of how the new technology could help workers. “You need to demonstrate how that particular technology will add value into the processes, will help them to do things more efficiently, faster, and more economically,” said Moloi, who has researched technological change management at numerous organisations.

Brandix has tried to stir excitement about AI by hosting demonstrations and a hack-a-thon where employees collaborated with Google Cloud representatives on AI projects.

“They are curious to know more. So we will reward that curiosity so that they can learn more,” Amarasinghe said.

Moloi suggested that reverse mentorships may help to stir excitement, with younger and tech-savvier employees helping to teach new methods to their older colleagues.

Looking ahead

Internal resistance to implementing AI tools is only half the equation. The adoption of AI also can draw a company into broader societal questions, including around the potential replacement of human workers; the legal status of AI-generated material, including whether it can be copyrighted; the risk that biases in a model’s training data will undermine the quality of its output; and the technology’s high consumption of power and resources, among others.

While there may be no perfect answers, companies can best position themselves by thinking deeply about the various internal and external effects of their decision to embrace AI. For example, Deloitte uses a Trustworthy AI framework to guide its own decisions and conversations with clients. Having such a framework prepared beforehand can help leadership address questions that arise about AI, Calimache and Hittner said.

“Normally leaders think about one, two, or three risks,” Calimache said. “And then when we show the framework, they realise, ‘Wow, there’s so much more I haven’t even thought about.’”

Ultimately, companies have little choice but to start addressing these risks, Hubble said. Individual employees may already be using generative AI, even without authorisation, and competitors across industries are looking to the emergent technology for an edge.

“There are certain risks that you can play a wait-and-see game [with]. This isn’t one of them,” Hubble said. “You need to take proactive action within your business to understand how it is being used and set rules around how it should be used.”

As Moloi put it: “There is a new technology almost every day. There is no stability, there is no certainty [on] the horizon. So we keep on moving, and we are moving fast.”


CFOs’ checklist for AI projects

1. Data security risks: Use sandboxed or in-house AI solutions to safeguard sensitive data. Train employees on secure data handling practices and restrict the use of consumer-grade AI tools in work processes.

2. Personnel risks: Conduct change management programmes to address concerns amongst employees. Facilitate workshops, reverse mentorships, and hack-a-thons to increase AI literacy and foster a culture of innovation. Set clear policies for how people may use AI and which tools they may access. 

3. Operational risks: Ensure human oversight to catch AI errors. Controls can include specific procedures for reviewing AI work produced, as well as training on using generative AI. 

4. Reputational risks: Develop and adhere to an ethical AI framework. Publicly disclose AI use policies to build trust with consumers, clients, and the public. 

5. Legal and intellectual property risks: Consult with legal experts to navigate the evolving AI regulatory landscape. Use AI tools that ensure compliance with intellectual property laws. 

6. Financial risks: Start with cost-effective AI tools and gradually invest in custom solutions based on clear use cases. Evaluate the ROI of AI projects regularly. 

7. Competition risks: Stay informed about AI advancements in your industry to maintain a competitive edge. Make space for people to experiment with new technology.


Andrew Kenney is a freelance writer based in the US. To comment on this article or to suggest an idea for another article, contact Oliver Rowe at Oliver.Rowe@aicpa-cima.com.


LEARNING RESOURCES

Linking Risk Management to Strategy

Changes in the business landscape are accelerating in speed and complexity. Learn strategies to manage risks within your organisation.

COURSE

Ethics in the World of AI: An Accountantโ€™s Guide to Managing the Risks 

This course discusses the current uses of AI in business, examines nine risk areas, and provides practical suggestions to address these risks effectively.

COURSE


AICPA & CIMA RESOURCES

Articles 

โ€œHow Finance Can Start to Use AI Automationโ€, FM magazine, 7 March 2024 

โ€œExecutivesโ€™ Tech Appetite Strong Despite Regulatory, Ethics Questionsโ€, FM magazine, 4 March 2024
