Why AI ethics help manage risk
Artificial intelligence and machine learning can increase efficiency and reduce costs, but without an AI ethics policy the technology can also put a business at risk.
In the absence of standards, businesses are largely on their own when it comes to defining and enforcing AI ethics. Still, experts advise companies to designate an AI ethics resource — such as an attorney, privacy officer, ethicist, or ethics advisory committee — to oversee the ethical implications of using machine learning and other types of AI. Some say an accountant should be on the ethics team.
"The ethical framework conversation is just catching up to the technology," said Maureen Mohlenkamp, a risk and financial advisory principal at Deloitte who specialises in ethics and compliance services.
Laying the groundwork for AI ethics starts before the algorithms are developed, with standards for data governance and data privacy. It also involves monitoring for racial, gender, and other types of bias after the algorithms are deployed and begin teaching themselves autonomously. Ethical AI is often described in terms of explainability, accountability, auditability, and transparency, as those terms relate to the corporate use of personal data and to the automated decisions that AI generates from that data.
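What post-deployment bias monitoring can look like in practice varies, but one commonly used check is the demographic parity gap: the difference in positive-outcome rates between groups. The sketch below illustrates the idea in Python; the data, field names, and alert threshold are synthetic assumptions, not details from any company mentioned in this article.

```python
# Sketch: a simple post-deployment bias check, the demographic parity
# gap. All data and the 0.1 threshold here are illustrative assumptions.
import numpy as np

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive-prediction rates across two groups."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(1)
preds = rng.integers(0, 2, size=1000)   # model decisions (e.g., loan approvals)
group = rng.integers(0, 2, size=1000)   # protected-group membership

gap = demographic_parity_gap(preds, group)
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.1:  # illustrative alert threshold
    print("Gap exceeds threshold - flag for human review.")
```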
According to a 2019 Deloitte survey of more than 500 executives at organisations that use AI, only 21.1% of respondents said their organisations have a framework in place for the ethical use of AI within risk management and compliance programmes. Another 26.2% said they plan to develop one in the next 12 months.
Mohlenkamp suggested companies address AI ethics at the executive and board levels to make sure top officers know the right questions to ask, understand the risks, can prevent problems, and have a process in place to address problems when they do occur. Outside the executive ranks, data scientists and other data professionals could develop an AI "code of conduct" with clear channels for escalating concerns and issues to the top of the organisation and the board.
Enza Iannopollo, a London-based senior analyst on the security and risk team at Forrester Research, noted a parallel data ethics lag in the EU. Companies can rely on anti-discrimination laws in areas such as housing, lending, and hiring, as well as on the EU's General Data Protection Regulation, for general guidance, she said. For specific AI use cases, however, businesses have to craft their own standards, whether they're in retail, customer service, marketing, manufacturing, or other areas.
"This is still seen as a voluntary effort, and as an organisation you decide how far you want to push the effort," Iannopollo said.
An accountant's expertise
Because AI ethics overlap with risk, compliance, and auditing, accountants should be at the forefront of developing AI ethics governance frameworks for their employers and for clients who use AI, said Cory Ng, CPA, CGMA, DBA, assistant professor of instruction in the accounting department of Temple University's Fox School of Business in Philadelphia.
"I believe accountants should take a leading role in the design of AI ethics standards, rather than leaving it up to the lawyers and IT officers," Ng said.
Accountants' professional ethics standards are high, and they have expertise in designing internal controls to mitigate risk and to remain in compliance with various rules and regulations, Ng said. "This combination positions us as the ideal professional to develop frameworks to help make sure that AI is used appropriately and fairly."
How to establish ethical policies for AI
Adopting policies on AI ethics does not require a large corporate workforce. Apex Parks Group, which owns 16 amusement and water parks and entertainment centres across the US, employs 25 people at its headquarters. The company handles AI ethics through its in-house centre of excellence, a leadership team that focuses on best practice and training, said Rich Fox, CPA, Apex's vice-president for data science and analytics.
Among Apex's corporate ethics policies: no selling of customer data to third parties.
Apex relies on machine learning for customer segmentation to market services to existing and prospective customers and for forecasting sales and demand. The algorithms analyse customer spending patterns to help the marketing team design personalised marketing campaigns based on customer preferences, Fox said.
To protect customer privacy, the algorithms cluster customers into segments, typically between five and 20 distinct groups comprising thousands of customers, and marketing materials are created for each group. Segmenting customers enables personalised marketing without drilling down to an individual level of granularity that would become "creepy", Fox said.
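As a rough illustration, segmentation of this kind is often done with a clustering algorithm such as k-means. The sketch below shows the general pattern; the feature names, cluster count, and data are hypothetical assumptions, not Apex's actual model.

```python
# Sketch: cluster customers into marketing segments with k-means.
# Feature names, cluster count, and data are illustrative assumptions,
# not details of Apex's actual model.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical per-customer spending features:
# annual visits, average spend per visit, days since last visit.
X = rng.normal(loc=[4, 60, 90], scale=[2, 25, 45], size=(5000, 3))

# Scale features so no single one dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# Group customers into a small number of segments (here, 8 of the
# "five to 20 distinct groups" described above).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=42)
segments = kmeans.fit_predict(X_scaled)

# Marketing works from segment-level profiles, not individual records.
for seg in range(kmeans.n_clusters):
    members = X[segments == seg]
    profile = members.mean(axis=0)
    print(f"Segment {seg}: {len(members)} customers, "
          f"avg visits {profile[0]:.1f}, avg spend ${profile[1]:.0f}")
```

Because campaigns are built from segment-level profiles rather than individual records, no single customer's behaviour is exposed in the marketing output.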
"We don't wing it," Fox said of AI ethics. "We take privacy very seriously."
Bias can creep in when algorithms are trained on data skewed by historical and social biases. Blinding the algorithms to race and gender in the data is one way to mitigate bias. But the strategy can fail when the algorithms find proxies for those attributes, such as postal codes standing in for ethnicity, and conclude that certain postal codes are associated with a higher risk of loan default or a lower probability of job success.
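One simple diagnostic for this failure mode is to test whether the "blinded" features can still predict the protected attribute. The sketch below uses synthetic data in which a postal-code feature correlates with a protected attribute; all names and numbers are illustrative assumptions.

```python
# Sketch: test whether "blinded" features still leak a protected
# attribute through a proxy. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 10_000

# Synthetic protected attribute and a postal-code feature that
# correlates with it (the proxy problem described above).
protected = rng.integers(0, 2, size=n)
postal_code = protected * 3 + rng.integers(0, 3, size=n)  # correlated proxy
income = rng.normal(50_000, 15_000, size=n)               # unrelated feature

# The model is "blinded": the protected attribute itself is excluded.
X_blinded = np.column_stack([postal_code, income])

# If these features predict the protected attribute well above the
# ~0.5 chance level, a proxy is present and blinding alone will not
# remove the bias risk.
leak = cross_val_score(LogisticRegression(), X_blinded, protected, cv=5)
print(f"Protected attribute recoverable with accuracy {leak.mean():.2f}")
```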
One expert who's in the business of preventing such scenarios is Reid Blackman, a former philosophy professor and founder and CEO of Virtue Consultants in New York City. His job is to help his clients mitigate AI risk, such as legal exposure or negative publicity, by introducing AI ethics into AI processes. He is also a senior adviser to EY and sits on EY's AI Advisory Board.
Ethical problems can start, Blackman said, when companies train algorithms on anonymised, aggregated datasets gathered without informed consent from the people whose data is being used; those people might not have consented had they understood how their data would be used. Because the companies don't know the identities of the individuals in the data, and the individuals don't know their data is being used, the dataset is presumed to pass the ethics test.
Blackman said businesses need to understand the risk involved in using such data under "mounting ethical and social pressure" to obtain meaningful and clear consent and provide full transparency.
— John Murawski is a freelance writer based in the US. To comment on this article or to suggest an idea for another article, contact Sabine Vollmer, an FM magazine senior editor, at Sabine.Vollmer@aicpa-cima.com.