
We are working in a significant period of technological advance, ripe with opportunities for those of us bold enough to take them. There is no doubt that the current digital revolution will radically alter how many organisations operate and are structured.
I am optimistic that it will be a catalyst for our profession to increase the scope and power of the analysis we offer. These changes will mostly be for the better and should raise the rate of productivity growth, which is something we all want to see.
That said, as finance professionals, we need to approach these digital innovations with our eyes open, acknowledging both their potential to enhance productivity and the new risks they could pose.
Gen AI
This is particularly true when we think about generative AI. To manage the risks inherent in the use of this technology, you have to understand how it works. In essence, it “learns” from existing data the most likely response to a given prompt. Many publicly available large language model (LLM) applications will save the data from your prompts and use it to refine their models further.
This raises the key issue of data protection. Remember the adage about social media: “If the software is free to use, you are the product.” This is relevant to all generative AI LLMs that learn from the data you input. You should not be inputting customer or client data or anything else you would not want in the public domain. You certainly need to understand the regulatory implications of this type of data use in the jurisdiction you are operating in.
One of the key applications accounting and finance professionals have found for generative AI is as a research tool. You can use it to summarise reports or look up accounting standards or formulae. The time that can be saved by doing this is a great example of how much of a productivity-enhancing innovation AI can be, if used properly.
Check outputs
If you are one of the many people who are researching in this way, you need to be very clear about the sources the software is learning from. It is imperative that you check the outputs using your own skills and knowledge.
Remember, the software does not “know” (or care) whether what it generates is objectively right or wrong. You need to establish that independently yourself. You will also want to keep an audit trail of all the data sources you are using, so you can refer back to them and answer queries.
At AICPA & CIMA, we are developing further resources to help you take advantage of this new technology in your own work. In the meantime, if you would like to explore these issues further, I would highly recommend this episode of The Finance Leadership Podcast, where Paul Parks, CPA, CGMA, AICPA & CIMA’s Americas director–Research & Development, discusses in detail the risks and opportunities of AI.
Andrew Harding, FCMA, CGMA, is chief executive–Management Accounting at AICPA & CIMA, together as the Association of International Certified Professional Accountants.