Senior decision-makers and executives say they don’t fully understand generative AI and how implementation will translate into measurable benefits for their companies.
These information gaps could pose compliance concerns for companies, according to a new report, as many organisations don’t currently possess the expertise or systems to accelerate innovation in this space.
The Generative AI Global Research Report from SAS, a global provider of business analytics software, found that the vast majority of senior tech decision-makers (93%) do not fully understand generative AI or its potential effect on business processes.
Because decision-makers have a poor grasp of generative AI in general, technology integration is another problem area, the report said. Forty-seven per cent report not having the appropriate tools to implement generative AI, and 41% experience compatibility issues when trying to combine generative AI with their current systems.
The report is based on a survey of 1,600 organisations across the globe, conducted from February through April. Respondents are decision-makers in generative AI strategy or data analytics at organisations across key sectors.
Executives also lack a clear view of how generative AI will affect their businesses. SAS found that fewer than half of chief information officers (45%) and only around a third of chief technology officers (36%) consider themselves extremely familiar with generative AI adoption in their organisations.
Increased investment without the tools and knowledge to implement generative AI effectively could lead companies to waste resources and deter customers, the report warned. Ambition continues to outweigh expertise: 47% of decision-makers are “encountering challenges in transitioning from concept to practical use of Gen AI”.
Training insufficiencies fuel compliance risks
While the risk of failed investment looms, a lack of in-house expertise also means generative AI could render companies legally noncompliant down the line.
“[Ninety-five per cent] of businesses lack a comprehensive governance framework for Gen AI,” the report said. Moreover, only one in ten organisations have undergone the preparation needed to comply with generative AI regulations.
Governance risks are rife, the report noted. Three-quarters of respondents are concerned about data privacy (76%) and security (75%) when generative AI is used in their organisation, and seven in ten organisations report problems monitoring generative AI systems.
“Around four in 10 respondents (39%) say they have found insufficient internal expertise to be an obstacle to implementing Gen AI,” the report said. “Our research shows that businesses are rushing into Gen AI before establishing adequate systems of governance, which could result in serious issues with quality and compliance later.”
Next steps for companies
The report recommends four approaches companies can take to improve their generative AI strategy:
- Accelerate innovation through “decisioning” practices: Companies can “seamlessly integrate Gen AI models into decisioning workflows, AI and machine learning applications, and existing business processes” by using flow tools such as intelligent decisioning.
- Intensify data protection: Ensure user privacy and security with robust data quality measures that safeguard sensitive information, the report said, including synthetic data generation, data minimisation, anonymisation, and encryption (a minimal illustration of the anonymisation idea follows this list).
- Apply natural language processing techniques: Data experts can do this “to preprocess data, explain the generated output in easily understandable terms, minimise hallucinations, and reduce token costs” to produce trustworthy and explainable results.
- Enhance data governance: Use built-in workflows that validate the entire life cycle of large language models, from regulatory compliance to model risk management, the report said.
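To make the data-minimisation and anonymisation recommendation above concrete, the sketch below shows one possible approach: redacting obvious personal identifiers from free text before it is sent to any generative AI service. This is a minimal illustration, not a method prescribed by SAS or the report; the function names, regular expressions, and salt value are assumptions for demonstration, and a production system would need far more thorough PII detection, encryption, and governance review.

```python
import hashlib
import re

# Illustrative patterns only (assumptions for this sketch); real PII
# detection needs much broader coverage than emails and phone numbers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def pseudonymise(value: str, salt: str = "example-salt") -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:8]
    return f"<redacted:{digest}>"


def minimise_prompt(text: str) -> str:
    """Strip direct identifiers before the text leaves the organisation."""
    text = EMAIL_RE.sub(lambda m: pseudonymise(m.group(0)), text)
    text = PHONE_RE.sub(lambda m: pseudonymise(m.group(0)), text)
    return text


if __name__ == "__main__":
    raw = ("Customer Jane Doe (jane.doe@example.com, +44 20 7946 0958) "
           "has disputed invoice 4471.")
    print(minimise_prompt(raw))
    # The business content survives; direct identifiers become tokens.
```

Pseudonymising with a salted hash, rather than deleting identifiers outright, keeps repeated references to the same value consistent within a prompt while remaining non-reversible. Names and other quasi-identifiers would still slip through a pattern-based filter like this one, which is where the natural language processing techniques the report recommends (such as named-entity recognition) would come into play.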
— To comment on this article or to suggest an idea for another article, contact Steph Brown at Stephanie.Brown@aicpa-cima.com.