The rapid adoption of artificial intelligence (AI) is creating new risks for businesses. As companies hand over more autonomy to computer systems, they may inadvertently violate their ethical standards and the rights of customers and others.
In response, companies have scrambled to set new checks and balances on their AI use. Academics are debating the ethical and safe usage of AI, and consultancies increasingly offer advice on the topic. And major companies from Microsoft to Unilever have established AI ethics programmes.
“The kinds of situations that we see are legal, reputational, and ethical,” said Michael Brent, Ph.D., the Colorado-based director of responsible AI at Boston Consulting Group. “If you use AI in a way that harms people, it can damage your brand, it can damage how employees perceive themselves.”
Brent focuses on reviewing proposed uses of AI to identify and mitigate potential harms that can include violating people’s data privacy and following AI-generated decisions that are biased against disadvantaged and vulnerable groups.
Multiple models to address AI ethics
With the increasing pace of adoption, the question of AI ethics is no longer merely a philosophical one.
Until recently, “it was the … Wild West as they say”, said Mfon Akpan, CGMA, DBA, an assistant professor of accounting at Methodist University in the US, who recently co-wrote a paper on ethical accounting and AI.
Now, however, companies are deploying several strategies to ensure ethical AI usage.
“Over the last 12 to 24 months, the true insurgence of [large language model] capabilities into the marketplace has been this critical moment of companies taking [responsible AI] a lot more seriously,” said Sasha Pailet Koff, CPA, CGMA, consultant and former senior supply chain executive at Dell and Johnson & Johnson.
“You see dedicated AI teams that are responsible for reviewing AI projects. You may see ethical committees that are clearing different types of AI cases against legal, technical, and societal norms,” said Pailet Koff, who is based in New Jersey in the US and now leads the digital transformation consultancy So Help Me Understand.
Leading organisations are also enlisting third-party experts and crafting AI governance frameworks that provide comprehensive instructions for assessing questions of fairness, transparency, accountability, and privacy.
Organisations that are deploying AI on a large scale should even consider elevating AI ethics to the C-suite with a chief AI ethics officer, Brent advised. (See the sidebar “Rise of the Chief AI Ethics Officer”.)
Rajeev Chopra, FCMA, CGMA, a consultant who previously worked in the airline industry, said that responsible AI requires data scientists, legal expertise, and executive leadership.
“Absolutely encourage AI, machine learning, all the latest technologies, but create a good corporate governance structure and make sure that you are seen as a role model for implementing the new technologies,” said Chopra, who is based in India.
Anna Huskowska, ACMA, CGMA, is a divisional head of central planning at Etex, a construction materials company based in Belgium. She said companies should aim to diffuse AI ethics throughout the organisation.
“The idea is [also] to transfer the knowledge to different parts of the organisation,” she said. “You want to consider how it impacts the business and the client.”
What can go wrong?
The experts interviewed for this FM article identified several ethical risks that can come with AI – and shared strategies that companies may use to address them.
Bias
AI models can make decisions that reinforce damaging social biases.
For example, Sweden’s social insurance agency has come under fire over allegations that its machine-learning system disproportionately flagged applications from women, foreigners, low-income earners, and people without university degrees for further benefit fraud investigation.
Meanwhile, some AI recruiting tools have been accused of propagating bias and rejecting qualified candidates. A recent University of Washington study found that three large language models exhibited racial, gender, and intersectional bias in how they ranked CVs.
Combating algorithmic bias requires mathematical and technological expertise. Brent’s team at BCG, for example, uses a battery of statistical tests to determine whether AI products are exhibiting bias.
“You need the technical expertise. You need a reliable person who can tell you what … is going on,” Brent said.
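Brent did not detail the specific battery of tests his team runs, but a chi-squared test of independence between a protected attribute and a model’s decisions is one common starting point. The Python sketch below is purely illustrative; the groups, counts, and 5% significance threshold are hypothetical assumptions.

```python
# Illustrative only: a chi-squared test of independence between a protected
# attribute and a model's approve/reject decisions. The groups, counts, and
# threshold are hypothetical; real reviews run a broader battery of tests.
from scipy.stats import chi2_contingency

# Rows are demographic groups; columns are [approved, rejected] counts
# taken from the model's outputs.
decisions_by_group = [
    [480, 120],  # group A (hypothetical)
    [300, 200],  # group B (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(decisions_by_group)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# A small p-value means approval rates differ across groups by more than
# chance would explain; that is a signal to investigate, not proof of bias.
if p_value < 0.05:
    print("Decision rates differ significantly across groups; review for bias.")
else:
    print("No statistically significant difference detected at the 5% level.")
```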
Transparency and accountability
The use of “black box” algorithms can exacerbate ethical issues. Generative and predictive AI technologies may not explain why and how they reach conclusions, making it harder to assess whether results are biased, flawed, or false.
That lack of transparency creates the risk that accountants and others may violate their duty to do their work with care, transparency, and accountability.
“One way of managing that risk is to make sure that there is this transparency, [a] push on transparency, on the companies that are providing these AI models,” Huskowska said.
Pailet Koff agreed that transparency and accountability are key. “Who’s responsible for the AI? And when it does potentially make an incorrect and harmful decision, who’s responsible for owning up to that?” she said.
Data privacy
Data privacy and security are both practical and ethical concerns for those managing AI and other tech deployments.
Digital ethicists – and European lawmakers, amongst others – have recognised that people have a right to privacy. Collecting, sharing, and using data without explicit consent may be a legal and ethical breach.
“Are organisations inadvertently exposing sensitive customer information or potentially employee data?” Pailet Koff asked.
Leaked data may create security risks for users, allowing unwanted third parties to access their information or letting generative AI models use and learn from their data without their consent.
“The ethical concern obviously is your privacy,” Chopra said. Companies are increasingly using AI-powered tools to combat fraud and other risks. But those tools may require collecting and analysing large volumes of customer information – for example, analysing customers’ patterns can help to detect potentially fraudulent charges on their accounts. Companies must be vigilant to protect the data they’re using in these efforts, limiting access and ensuring it’s not leaked into public view. Additionally, they must be cautious when using third-party services to combat fraud and other risks; sharing customer data with those companies may raise ethical and security risks for customers.
Steps to address ethical risks
Brent and other experts identified four key steps for assessing and addressing AI ethics risks.
Categorise use cases according to a risk taxonomy
The EU’s Artificial Intelligence Act sets out several categories of AI risk, from “minimal” to “unacceptable”.
For the purposes of the law, “unacceptable” risk includes uses such as social scoring, certain forms of facial recognition, and manipulating people. The EU AI Act applies directly to businesses operating in the EU, and also to those outside it if they have a role in an AI value chain that touches the EU.
Brent said that risk categorisation is a wise first step, helping companies to apply a standardised metric and identify areas of concern.
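To make the categorisation step concrete, here is a minimal sketch of how a team might encode a first-pass triage against the EU AI Act’s broad tiers. The tier names come from the Act, but the use cases and their assignments below are simplified assumptions for illustration, not legal guidance.

```python
# Illustrative first-pass triage against the EU AI Act's broad risk tiers.
# The use cases and their tier assignments are simplified assumptions;
# real categorisation requires legal and technical review.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical internal use cases mapped to tiers during an initial triage.
USE_CASE_TIERS = {
    "spam_filtering": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,            # employment decisions are high risk
    "social_scoring": RiskTier.UNACCEPTABLE,  # prohibited practice
}

def triage(use_case: str) -> str:
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return f"{use_case}: not yet categorised; route to the AI review board."
    if tier is RiskTier.UNACCEPTABLE:
        return f"{use_case}: prohibited; do not proceed."
    if tier is RiskTier.HIGH:
        return f"{use_case}: high risk; full ethics and legal assessment required."
    return f"{use_case}: {tier.name.lower()} risk; standard review applies."

for case in ["cv_screening", "social_scoring", "demand_forecasting"]:
    print(triage(case))
```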
Assess use cases
Next, examine potential negative impacts by conducting brainstorming and planning exercises. Ask participants to identify the project’s potential effects on different groups of people, or ask them to map out worst-case scenarios. Additionally, conduct research into comparable uses by other companies, and consult with experts.
AI risk can also be assessed quantitatively with mathematical measures that can indicate whether algorithms are displaying demographic bias.
“Look for mathematical evidence of bias,” Brent advised. “You can take the qualitative and the quantitative measures and identify the performance of an AI system, and then try to build the mitigations.”
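One widely used quantitative indicator, offered here as an example rather than a description of BCG’s method, is the disparate impact ratio, sometimes called the four-fifths rule. The counts in the sketch below are hypothetical.

```python
# Illustrative only: the disparate impact ratio ("four-fifths rule"), a simple
# quantitative indicator of demographic bias. The counts are hypothetical and
# the 0.8 threshold is a rule of thumb, not a legal or statistical proof.
def selection_rate(selected: int, total: int) -> float:
    return selected / total

# Hypothetical outcomes of an AI screening tool for two demographic groups.
rate_group_a = selection_rate(selected=90, total=200)   # reference group
rate_group_b = selection_rate(selected=45, total=180)   # comparison group

disparate_impact = rate_group_b / rate_group_a
print(f"Disparate impact ratio: {disparate_impact:.2f}")

if disparate_impact < 0.8:
    print("Below 0.8: potential adverse impact; pair with qualitative review.")
else:
    print("At or above 0.8: no adverse impact flagged by this measure alone.")
```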
Design mitigations
Though not all ethical risks can be counteracted, some can be mitigated through technical adjustments, legal and contractual changes, transparency, and training.
For example, adjusting the model’s design and its supporting data can combat algorithmic bias. The model can also be required to document its decision-making and analysis more transparently, which may expose bias.
A company contracting with an AI provider can also write the contract in a way that requires the provider to protect against perceived risks.
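As a purely illustrative example of a data-level adjustment, and not a description of any firm’s practice, the sketch below reweights training examples so that each demographic group contributes equally during training. The group labels and tiny dataset are hypothetical.

```python
# Illustrative data-level mitigation: weight each training example inversely
# to its group's frequency so every group carries equal total weight in the
# loss. The group labels and tiny dataset below are hypothetical.
from collections import Counter

groups = ["A", "A", "A", "A", "B", "B"]  # protected-attribute value per example

counts = Counter(groups)
n_examples = len(groups)
n_groups = len(counts)

weights = [n_examples / (n_groups * counts[g]) for g in groups]
print(weights)  # group A examples get 0.75, group B examples get 1.5

# Most training libraries accept per-example weights; for instance, many
# scikit-learn estimators take them via the sample_weight argument of fit().
```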
Test, evaluate, and provide documentation
Ultimately, the AI deployment team must polish and prepare the project for its user.
Besides testing that the mitigations work and evaluating the system’s performance, this phase involves delivering documentation and training for the end user. Simply handing over the keys to an AI product may result in unethical and unwise uses. This final step can help ensure that others in the business know how, why, and when the tool should be used.
The power of culture
Companies are developing formal approaches and deploying technological solutions to address AI ethics. But new management structures and technological fixes only go so far. Ultimately, a company must prepare its people for AI.
With the rush to embrace AI, Pailet Koff emphasised the importance of vetting the background and expertise of anyone working with the technology.
“How are you thinking about the vetting of the individuals that you’re putting on the team and the independence of the code?” she asked.
Additionally, companies must watch out for casual misuse of the technology. Even if executives have placed limits on generative AI in the workplace, employees may still freely access consumer products like ChatGPT. This “ghost usage” opens up the possibility of a data breach or the use of an opaque AI model.
“It’s there, it’s free, and people want to be able to be more productive – even without that proper guidance or understanding,” Akpan said. “Understand what your employees are doing. I would assume they’re using it, so how do you talk to them about it?”
AI usage also can raise other cultural issues. For example, if an employee has found a way to cut their workload by several hours a day, how should management respond?
“Is that encouraged or discouraged?” Akpan asked. “Once you have the open dialogue, that information can flow freely across the organisation.”
Leaders should think carefully about the culture of their organisation and how that culture can be adapted to encourage beneficial use of AI, Pailet Koff said.
“Every family has their own rules,” she said. “How are you actually using your organisational norms to promote ethical use of these tools?”
The human impact and larger questions
The implications of AI’s growing usage go far beyond a single project.
Huskowska’s team already uses predictive analytics and is experimenting with generative AI for supply chain forecasting and optimisation. She’s excited by how AI can potentially expand a small team’s reach.
But she also worries about the next generation. Huskowska started her career with a job in cost analysis – a job that taught her a lot, but which is now a prime target for automation.
She wonders how new workers will learn fundamentals when AI has taken over basic tasks.
“It’s just hard to imagine, and I think it’s a risk that we don’t talk about much in terms of how we learn our job as financial managers,” Huskowska said.
Chopra and others raised similar concerns.
“People are losing their jobs, job displacement is happening,” Chopra said. “How are you going to address that?”
Companies should consider how to offer career opportunities for people from diverse backgrounds and how they’ll ensure that anyone in the organisation can develop skills related to the new technology.
“If you want to implement AI as an organisation, how do you make sure that you give equal opportunities for people to learn?” Huskowska asked.
In the bigger picture, countless questions about the ethics of AI remain unresolved in the courts and in public opinion, Brent said.
Who exactly is responsible for the output of an AI model? How autonomous should these systems become? How should people be taught to interact with AI? Do AI’s returns justify its current heavy usage of water and energy?
Those questions may go beyond the direct scope of a finance leader, but in the face of dramatic technology change, everyone needs to consider how people will benefit from – or be harmed by – AI. Ultimately, Chopra said, it’s about keeping people in the picture.
“There are so many areas where this has to be governed very, very carefully,” he said, “and the best way is you must pair human intelligence with AI.”
Rise of the chief AI ethics officer
Some companies are tasking new leadership positions with ensuring the responsible and ethical use of AI and other technology.
For example, IBM has an AI Ethics Board led by its chief privacy and trust officer. Salesforce has an Office of Ethical and Humane Use. Boston Consulting Group has a global team dedicated to reviewing and analysing proposed uses of AI through an ethical lens.
Any company that is establishing positions such as chief information officer, data privacy officer, or chief engineer should also consider creating a specific ethics leadership position, suggested Michael Brent, Ph.D., a director with the BCG team.
“My team helps BCG and our clients identify those risks and mitigate them to the extent that’s possible,” he said. “I’m in the business of avoiding ethical nightmares.”
AI ethics, Brent added, should not simply be lumped into related fields.
“A chief AI ethics officer should not be a risk and compliance officer, should not be a lawyer. It should be someone trained specifically,” he said. “They have to understand specifically what are the technical risks [and] the social, legal, and cultural risks.”
AI ethics teams can help establish standardised processes to guard against ethical risks. Teams should identify and categorise risks, develop mitigations, and ensure users are properly trained.
Of course, the delegation of AI ethics responsibilities will depend on a company’s size, the scale of its AI usage, and other factors. Dedicated AI teams are more common in organisations “further along in their maturity efforts”, said Sasha Pailet Koff, CPA, CGMA, consultant and former senior supply chain executive at Dell and Johnson & Johnson.
Companies may also rely on ethics committees or third-party experts for such responsibilities.
Overall, Pailet Koff said, organisations are increasingly embracing governance frameworks that establish processes, assign responsibilities, and define guidelines for the use of AI. Meanwhile, she said, individuals can educate themselves through guidance and training offered by groups like The Alan Turing Institute and the Partnership on AI.
Andrew Kenney is a freelance writer based in the US. To comment on this article or to suggest an idea for another article, contact Oliver Rowe at Oliver.Rowe@aicpa-cima.com.
LEARNING RESOURCES
Ethics in the World of AI: An Accountant’s Guide to Managing the Risks
This two-hour training session discusses the current uses of AI in business, including nine risk areas, and provides practical suggestions to address these risks effectively.
COURSE
Ethics Without Fear for Accounting and Finance Professionals
This fast-paced and interactive presentation will help you keep your ethical skills sharpened to reduce your fear and raise your courage as you make tough decisions in real time.
COURSE
MEMBER RESOURCES
Articles
“What Gen AI Means for Executive Decision-Making,” FM magazine, 9 October 2024
“What CFOs Need to Know About Gen AI Risk,” FM magazine, 19 August 2024