Ethical implications of artificial intelligence
AI presents a new set of ethical challenges for business leaders, whose deployment of the technology may have profound effects on the workforce and society.
In the race to adopt rapidly developing technologies, organisations run the risk of overlooking potential ethical implications. And that could produce unwelcome results, especially in artificial intelligence (AI) systems that employ machine learning.
Machine learning is a subset of AI in which computer systems are taught to learn on their own. Algorithms allow the computer to analyse data to detect patterns and gain knowledge or abilities without having to be specifically programmed. It is this type of technology that empowers voice-enabled assistants such as Apple's Siri or the Google Assistant, among myriad other uses. In the accounting space, the many potential applications of AI include real-time auditing and analysis of company financials.
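To make the idea concrete, the following is a minimal, hypothetical sketch of supervised machine learning in Python using scikit-learn: instead of writing explicit rules, we hand the algorithm a handful of labelled examples and let it infer the pattern. The features, figures, and labels are invented for illustration and are not drawn from any product mentioned in this article.

```python
# Minimal illustration of machine learning: the model infers a decision
# rule from labelled examples rather than being explicitly programmed.
# Feature names and data are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [transaction amount, hour of day posted]
examples = [
    [120.0, 10],   # ordinary daytime purchase
    [80.0, 14],    # ordinary daytime purchase
    [9500.0, 3],   # large, posted in the middle of the night
    [7200.0, 2],   # large, posted in the middle of the night
]
labels = ["normal", "normal", "suspicious", "suspicious"]

model = DecisionTreeClassifier()
model.fit(examples, labels)          # learn a pattern from the examples

# The model now classifies a transaction it has never seen.
print(model.predict([[8800.0, 4]]))  # -> ['suspicious']
```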
Data is the fuel that powers machine learning. But what happens if the data fed to the machine is flawed, or the algorithm that guides the learning isn't properly configured to assess the data it receives? Things could go very wrong remarkably quickly.
Microsoft learned this lesson in 2016 when the company designed a chatbot called Tay to interact with Twitter users. A group of those users took advantage of a flaw in Tay's algorithm to corrupt it with racist and otherwise offensive ideas. Within 24 hours of launch, the chatbot had said the Holocaust was "made up", expressed support for genocide, and had to be taken offline.
With regulatory and legal frameworks struggling to keep up with the rapid pace of technological change, public demand is growing for greater transparency about how these tools and technologies are being used. The UK's Institute of Business Ethics (IBE) recently issued a briefing urging organisations to examine the risks, impacts, and side effects that AI might have for their business and their stakeholders, as well as wider society. Tackling the issues requires these diverse groups to work together. (See "10 Questions to Ask About Adopting or Using AI", at bottom of page, for key considerations listed in the IBE report.)
The research identifies a number of challenges facing business leaders. These include:
- What degree of control can we (as an organisation) retain over our machines' decision-making processes?
- How can we ensure that the systems act in line with the organisation's core values?
- Since biased algorithms can lead to a discriminatory impact, how can we ensure fairness and accuracy?
The report also encourages companies to "improve their communications around AI, so that people feel that they are part of its development and not its passive recipients or even victims". For this to be achieved, "[e]mployees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI, and they need to be provided with the skills to do so".
The report proposes a framework outlining ten core values and principles for the use of AI in business. These are intended to "minimise the risk of ethical lapses due to an improper use of AI technologies". The values are:
- Accuracy.
- Respect of privacy.
- Transparency.
- Interpretability.
- Fairness.
- Integrity.
- Control.
- Impact.
- Accountability.
- Learning.
Avoiding the 'black box' problem
Companies applying AI to the finance function face the challenge of designing algorithms that produce unbiased results yet remain simple enough for users to understand how they work and how they reach their decisions.
MindBridge Analytics, based in Ottawa, Canada, develops computer-aided audit technology powered by AI. The product uses a hybrid of advanced algorithmic techniques to enhance a human auditor's ability to detect and address unusual financial circumstances.
A key aspect of the MindBridge application is that it explains why certain transactions have been highlighted and then leaves final decision-making authority to a human, said chief technology officer Robin Grosset.
"The algorithms give weighted scores to features of transactions for subsequent human review in order to identify the risk of irregular circumstances," he said.
This transparency is essential to avoid the "black box" problem, in which a computer or other system produces results but provides little to no explanation for how those results were produced. In the case of machine learning, the greater the complexity of an algorithm, the more difficult it is for users to understand why the machine has made a certain decision.
"Almost all concerns that relate to 'improperly set up AI' can be solved by the AI explaining its thinking," Grosset said. "If the human counterpart of the AI can understand why something is flagged, then they can make better informed decisions. Human judgement is still a key component of a balanced AI system."
Avoiding bias in the data
Another challenge is to avoid bias in the algorithm and in the dataset the algorithm uses for learning.
One way of mitigating bias is to use combinations of learning types, including unsupervised learning, Grosset said. "Supervised learning is based on labelled data, and often the labels themselves create bias," he said. "Humans essentially bring their own biases to machine-learning scenarios. By contrast, unsupervised learning has no labels and essentially will find what is in the data without any bias.
"The key piece of advice here is to curate the data that is used as input to a system to ensure the signals in the data support the training objectives. For example, if you are creating an AI to automate driving a car, you want your AI to learn from good drivers and not from bad drivers," Grosset said.
MindBridge's testing process includes validation testing of algorithm intent as well as regression testing, using both synthetic and real data.
Amy Vetter, CPA/CITP, CGMA, author of the book Integrative Advisory Services: Expanding Your Accounting Services Beyond the Cloud and CEO of The B3 Method Institute, advises organisations to seek alternative perspectives from professionals who do the work today that may be automated in the future.
"It's important to include them in the discussion and decision-making about how to incorporate prospective uses of AI into future workflow of the firm and what skills the staff will need so the appropriate training and goal setting is incorporated into any implementation plan," she said.
At MindBridge, the chief information security officer and chief technology officer are responsible for key aspects of technical use of AI and related privacy issues. The staff also includes research scientists who focus on privacy-preserving algorithm design.
Implications for society
AI provides a difficult set of ethical questions for society as well. One question centres on the preservation of the workforce. In the accounting profession, for example, AI can extract data from thousands of lease contracts to enable faster implementation of new lease accounting standards.
This can enable the people who would have handled data extraction to perform more complicated accounting tasks and perhaps even contribute to strategy. This can be a positive development as those people perform more meaningful work.
But if the people whose tasks are replaced by AI lose their jobs rather than moving up to higher-level work, the implications for society are ominous. Should that pattern repeat across the many professions and industries built on repetitive tasks, the implementation of AI could leave large numbers of people without options for work and damage both their lives and the wider economy.
The trucking and haulage industry alone could experience enormous job losses if self-driving vehicles replace human drivers. In 2016, more than 3.3 million drivers were employed in trucking in the US and an additional 318,700 heavy goods vehicle drivers were working in the UK, according to the US Bureau of Labor Statistics and the UK Department for Transport.
"That creates an ethical problem without a shadow of a doubt, but also a pragmatic problem because these populations [people who perform tasks that can be automated] were in the heart of the economic system," said Jeremy Ghez, affiliate professor of economics and international affairs at HEC Paris — a management sciences teaching and research institute. "And if they're not there anymore, then the system becomes unstable. There isn't anyone to sell stuff to."
Ghez said it's understandable that businesses will look at AI as a way to cut costs. But he said it's also imperative for business leaders to be more imaginative with their human resources now, making more effective use of the things that people can do and machines can't. The human skills of intuition and relationship-building might be differentiators for businesses whose competitors take automation to an extreme, frustrating customers with chatbots and other applications that remove the personal touch from customer-facing roles.
While businesses attempt to solve their own AI-related ethical issues, the public sector also will have a role to play. Regulators obviously will have a say in whether self-driving vehicles will be permitted on the roads. That will be the easy part. It will be more challenging to consider other issues such as workforce preservation and how to protect segments of the population that may be disadvantaged by biases in algorithms.
The technology may be here now, but the ethical rules for managing AI will take time to develop.
"It opens up a whole wide range of questions that the private sector is not going to feel very comfortable to answer immediately altogether," Ghez said. "I think it's going to require a multidisciplinary effort and bridges to be built to figure out how to go to win-win situations."
10 questions to ask about adopting or using AI
1. What is the purpose of our job, and what AI do we need to achieve it?
2. Do we understand how these systems work? Are we in control of this technology?
3. What are the risks of its usage? Who benefits and who carries the risks related to the adoption of the new technology?
4. Who bears the costs for it? Would it be considered fair if it became widely known?
5. What are the ethical dimensions, and what values are at stake?
6. What might be the unexpected consequences?
7. Do we have other options that are less risky?
8. What is the governance process for introducing AI?
9. Who is responsible for AI? Because machines are not moral agents, who is responsible for the outcome of the decision-making process of an artificial agent?
10. How is the impact of AI to be monitored?
Source: IBE Business Ethics Briefing, "Business Ethics & Artificial Intelligence".
Jeff Drew is an FM magazine senior editor; Ken Tysiac is FM magazine's editorial director; and Samantha White is a writer and editor based in the UK. To comment on this article or to suggest an idea for another article, contact Jeff Drew at Jeff.Drew@aicpa-cima.com.