8 insights boards should know about artificial intelligence


Many businesses are only beginning to experiment with artificial intelligence (AI). But once fully embraced, the technology — which combines powerful computers, algorithms, and big data — has the potential to quickly transform workplaces.

Early adopters, such as tech giants Google and Amazon, are investing in AI to optimise searches and personalise marketing. Automakers are using AI to develop self-driving vehicles. And the financial services industry is looking to improve customers’ experience with the help of AI.

The more companies use the technology, the more potential for revenue and market share growth they see in it, according to a 2017 survey by consulting company McKinsey. The survey also suggests that successful adoption of AI requires strong leadership support. Add to that changes in how work is being done, and AI becomes a topic boards of directors may want to track, according to PwC.

To keep up with AI developments and their effects on businesses, boards should consider the following:

AI will require organisational retooling. Effective AI systems are driven by multidisciplinary teams whose members come together to solve a problem. Organisational retooling should break down the silos that wall off data and separate employees into isolated units.

Workforce upskilling will support a more collaborative approach to working and teach employees new skills.

AI-savvy employees don’t just need to know how to choose the right algorithm and feed data into a model. They’ll also have to know how to interpret the results. They’ll need to know when to let the algorithm decide and when to step in.

“While AI will provide useful insights for decision-making, employees can’t assume that AI will always be right,” said Alex Lattner, ACMA, CGMA, head of finance at the Deutsche Cyber Sicherheitsorganisation in Berlin, an organisation founded in 2015 to improve cybersecurity in German industry. “Human talent will evaluate and, if needed, overwrite the results of AI for various reasons. The database AI relies on might be biased and lead to unrealistic or unwanted outcomes.”
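Lattner's point about knowing when to let the algorithm decide and when to step in is often implemented as a confidence threshold. The sketch below is illustrative only, not drawn from the article; the function name, routing labels, and threshold are hypothetical.

```python
# Illustrative sketch: a simple human-in-the-loop gate that lets a model's
# output stand only when its confidence clears a threshold; otherwise the
# case is escalated for human review. All names and values are hypothetical.

def route_decision(prediction: str, confidence: float,
                   threshold: float = 0.9) -> tuple:
    """Return the prediction plus a routing label: 'auto' when confidence
    is high enough, 'human_review' when a person should evaluate it."""
    if confidence >= threshold:
        return prediction, "auto"
    return prediction, "human_review"

# A low-confidence decision is escalated rather than acted on automatically:
decision, route = route_decision("approve", confidence=0.72)
print(route)  # prints "human_review"
```

The threshold itself becomes a governance lever: tightening it sends more cases to people, loosening it automates more, which is one concrete way a board can "challenge AI outcomes".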

Boards should likewise take charge of AI. “Boards should regularly challenge AI outcomes to align them with corporate goals, targets, and strategy,” Lattner said.

AI will amplify human potential. More than half of about 500 executives surveyed by PwC last year said AI tools implemented in their businesses have increased productivity.

Tools already available can automate complex processes, identify trends in historical data to create business value, and provide forward-looking intelligence to strengthen human decisions.

“For example, AI might discover relationships like the impact of certain customer groups on total sales,” Lattner said. “Boards need to understand, challenge, and benefit from these insights to increase profitability.”

AI’s most powerful benefits are often indirect, such as freeing employees from repetitive tasks or improving human decision-making, so businesses may need new kinds of metrics to measure returns on investment.

AI will help the business do a lot more with big data. To make use of AI tools that can process data quickly, such as supervised machine learning and deep learning, enormous amounts of data must be standardised, labelled, and cleaned of biases and anomalies.
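The standardising and cleaning step described above can be illustrated with a toy example. The figures, tolerance, and approach below are hypothetical, a minimal sketch of removing an obvious anomaly and rescaling a feature before it reaches a learning algorithm.

```python
# Illustrative sketch (hypothetical figures): clean a data-entry anomaly
# out of a numeric feature, then standardise it to z-scores.
from statistics import mean, median, stdev

raw = [102.0, 98.5, 101.2, 99.8, 5000.0]  # 5000.0 is a data-entry error

# 1. Clean: drop values far from the median (the tolerance is hypothetical;
#    the median is used because a single extreme value distorts the mean).
med = median(raw)
clean = [x for x in raw if abs(x - med) <= 50]

# 2. Standardise: rescale the cleaned values to zero mean, unit variance.
mu, sigma = mean(clean), stdev(clean)
z = [(x - mu) / sigma for x in clean]
```

Done across millions of rows and hundreds of features, this is the unglamorous work that consumes most of an AI project's budget, which is why the cost-benefit question Lattner raises comes before the algorithm choice.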

“Boards should therefore not be tempted to use AI regardless of the market they are in and the size of their business just because it is a major trend and they want to be part of it,” Lattner said. “AI makes no sense if you have, for example, just a handful of customers and act in a niche market.”

Only if the benefits exceed costs should a business move forward, because AI tools will most likely require a business to reform its data architecture and governance.

Functional specialists will guide AI customisation to ensure it supports the business use case. To customise an AI application for a specific business area, the computer scientists creating the application will need functional specialists, such as economists, financial analysts, or traders, to figure out the best design. Once the application is up and running, those specialists will have to keep tuning and tweaking the technology.

The business must apply the sound judgement of the specialists who will use and benefit from AI to ensure it fits the specific use case. The company must first quell specialists’ concerns over potential job loss due to AI to get them on board, Lattner suggested.

“We can’t ignore the fact that these same specialists are afraid of losing their jobs with the introduction of AI-based processes,” he said. “Management must address these fears. Specialists and finance staff must be certain about their careers.”

One way to do that is to provide functional specialists with some basic understanding of AI.

Cyberattacks and cyber defence will leverage AI. The more AI advances, the more its potential for cyberattacks grows.

A US Defense Advanced Research Projects Agency (DARPA) competition, the Cyber Grand Challenge, examined how to program AI systems to safeguard computer networks. Anti-virus companies are looking to AI to detect software anomalies. Hackers who create viruses, in turn, are studying how to keep virus scanners from detecting their malicious software.

“Boards need to be aware of this evolution and put their organisation in a position where they can protect sensitive data and react accordingly when an attack happens. From a finance position, this also implies the provision of corresponding budgets and headcount,” Lattner said.

To heighten cybersecurity, businesses should augment the data and compute platforms that support advanced analytics with privileged access monitoring, source code reviews, and expanded cybersecurity controls.

People will need to “look under the bonnet” of AI. The algorithms at the core of AI are often very complex or labelled trade secrets. Users who do not understand how AI arrives at a certain decision may not trust the technology.

Regulators will pass along pressure from consumers to companies that create and use AI, and those companies, in turn, will likely have to share some details of AI’s secret sauce. “This is true on the regulatory side in some parts of the world, where regulations require it,” said Pavan Udayagiri, ACMA, CGMA, manager, corporate finance and internal audit, at Grupo Kaybee, an international trading company, in Singapore.

Most AI algorithms can be understood, but if every step must be documented and explained, the process takes longer and becomes more expensive. A framework to assess business, performance, regulatory, and reputational concerns can balance the trade-off.
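One practical way to make automated decisions explainable, sketched below with entirely hypothetical names, rules, and figures, is to record the inputs and the rule that produced each outcome, so any single decision can be reconstructed later for a regulator or a customer.

```python
# Illustrative sketch: log the inputs and rule behind each automated
# decision so it can be explained after the fact. The scoring rule,
# threshold, and field names are hypothetical.
import json

audit_log = []

def score_applicant(income: float, debt: float) -> str:
    """Toy loan decision: approve when debt-to-income ratio is under 0.4,
    and record exactly why, in a machine-readable audit entry."""
    ratio = debt / income
    decision = "approve" if ratio < 0.4 else "decline"
    audit_log.append({
        "income": income,
        "debt": debt,
        "debt_to_income": round(ratio, 2),
        "rule": "debt_to_income < 0.4",
        "decision": decision,
    })
    return decision

print(score_applicant(60000, 18000))   # prints "approve" (ratio 0.3)
print(json.dumps(audit_log[-1]))       # the explanation, ready to share
```

Documenting every step this way does add cost, which is exactly the trade-off the framework above is meant to weigh against business, regulatory, and reputational concerns.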

Countries will compete for AI market share. World governments are increasingly interested in AI market share and the leverage it will grant their countries and economies. By 2030, AI is projected to contribute $15.7 trillion to the global economy.

“AI being one of the more powerful tools that humanity is going to experience (in this generation), there will be competition among countries for market share,” Udayagiri said.

World leaders in AI development include the UK, Canada, Japan, Germany, and, particularly, China. About one-quarter of AI’s economic contribution is projected to be realised in China.

“China has a huge AI player in [technology company] Baidu. As far as China is concerned, they use AI for surveillance. The combination of AI and totalitarian regimes is concerning,” Lattner said.

There will be pressure to use AI responsibly. As AI affects decisions that shape people’s lives and careers, such as loan approvals, the question of responsible use will become increasingly important, said Lattner. “Europe has adopted the General Data Protection Regulation (GDPR), effective May 25, 2018, dealing partly with the implications of AI on people and their right to get more information about their data and how organisations use it.”

Because AI will evolve faster than the regulations and public awareness surrounding it, companies need to be proactive.

“Regarding AI, their biggest asset is trust,” Lattner said. Customers need to be confident about the quality and results of decisions that emanate from AI insights. In communicating with stakeholders, boards need to strike a balance between maintaining their competitive advantage — that is, keeping the “secret” of AI methods — and satisfying the legitimate interest of stakeholders in understanding the implications of AI use.

David Geer is a freelance writer based in the US. To comment on this article or to suggest an idea for another article, contact Sabine Vollmer, an FM magazine senior editor, at