Building ethics into AI

Here are some steps that management accountants can take to ensure the ethical use of artificial intelligence.

The increasing use of artificial intelligence (AI) applications in business poses a number of questions for managers and financial professionals. The introduction of AI systems into a business raises not only the expected technical, legal, and compliance issues but also questions of ethics. For example, are we treating customers fairly by using AI, or might we discriminate against certain groups?

Whilst unlikely to be experts in actually writing algorithms and building AI, management accountants possess important skills and are well positioned in organisations to take a key role in ensuring AI is implemented in an ethical manner, to the benefit of the organisation.

Management accountants’ skills are crucial to AI solutions in the finance department, but those skills can also come into play in effectively evaluating the use of AI across the wider business.

Here are a few key actions finance professionals can take to ensure ethical principles are embedded in an organisation's use of AI.

Establish values. Microsoft CEO Satya Nadella said at a company keynote speech in 2018, “We need to ask not only what computers can do but what computers should do.” Ask why AI is being proposed as a solution to a particular challenge or problem and what the impacts will be compared to continuing with a non-AI solution. These could include financial costs or savings, as well as wider factors such as impacts on jobs, reputational risk, or improved end-user experience.

Alongside this, companies should consider the ethical principles they value, and how they will ensure that any AI mirrors these values. This might include committing to only use AI that benefits wider society, ensuring that AI is unbiased before implementation, or ensuring that those affected by the decisions made by AI are able to challenge the outcome.

Michael Hobbs, founder of a London-based startup which accredits the use of AI against a set of nine ethical principles, said, “The key is asking the right questions and defining the ethical principles early on. This can save issues down the road by making sure that ethics are built into the process from the beginning.”

Management accountants can bring a lot to the table when it comes to setting these ethical principles. Their understanding of company performance and strategy will be key to understanding the range of impacts of introducing AI solutions. Equally, their commitment to objectivity, one of the fundamental principles of the CIMA Code of Ethics, will help them ask probing questions which challenge those working in other areas without allowing pressures to affect their judgement.

Encourage transparency. A common area of concern for both stakeholders within companies and end users or consumers is the lack of understanding of how AI is being used.

Trust in an AI solution will quickly be lost if it is challenged and the company cannot quickly and clearly explain a particular output. The public outcry and investigations over the recent case of the Apple Card, which was branded as sexist, demonstrate the reputational damage that can occur when organisations cannot quickly explain the decisions made by their algorithms.

With their understanding of risk mitigation, management accountants can be key in ensuring that those involved in building and implementing AI have considered this during the build phase.

Companies must be able to explain clearly what data has been used and what assumptions have been applied in order to get to a decision. “It is key also that companies are able to tell people what they need to do differently to get a different outcome,” Hobbs argues.

Being able to explain decisions at a micro level is also important to the business. The reputation of a business can be affected by word of mouth and individuals on social media as much as by major press coverage. For example, it is better to inform a customer rejected for a loan or mortgage right away, with an explanation of why the decision went against them, so that they understand their options.

Management accountants should be proactive in asking questions of those designing and building AI, ensuring at all times that a clear understanding of inputs, decision processes, and outputs is recorded. When running tests or trials, ask developers to explain the process of decision-making the algorithms are following. If they are not able to explain in plain language the average consumer would be able to understand, this needs to be addressed before the process goes any further and the organisation is exposed to reputational risk.

Ensure accountability. A major failing point for companies using AI can be a lack of clear accountability. “The key is in ensuring that decisions made algorithmically are understood up the management chain within the organisation and there is a clear line of responsibility. The company must take ownership of any decisions made by algorithms,” Hobbs said.

The Institute of Business Ethics concludes in its report Corporate Ethics in a Digital Age that “ultimately, there must always be human accountability”. The IBE report specifically highlights board directors having enough of an understanding of how AI is being deployed to be able to decide whether it is right, and to be comfortable being accountable should things go wrong.

Management accountants are likely to be central to this accountability process if it is done correctly. CFOs and senior finance managers are ultimately responsible for all finance-related decisions including those made by AI. As such it is key that they have a level of understanding which allows them to make informed decisions as to what AI should be used for, what data is feeding the algorithms, and how the AI is making decisions.

A good test is asking yourself, “Could I stand in front of the board and justify the decisions made by algorithms?” If not, then getting up to speed on the basics of AI, questioning its use, and focusing on the steps outlined above to ensure explainability and accountability will help you ensure that AI is being used in an ethical way and that you have mitigated the risks where possible.


Bryony Clear Hill is the associate manager–Ethics Awareness for CIMA and is based in the UK. To comment on this article or to suggest an idea for another article, contact Drew Adamek, an FM magazine senior editor, at