Research uncovers ‘critical’ knowledge gaps in AI governance

MIT research reveals that companies, on average, overlook nearly two-thirds of the risk subdomains categorised in the newly released AI Risk Repository.

Researchers at the Massachusetts Institute of Technology (MIT) released the AI Risk Repository database to address “significant gaps” found in companies’ understanding of AI-related risks.

MIT’s Computer Science and Artificial Intelligence Lab and the MIT FutureTech Lab collaborated with the University of Queensland, Future of Life Institute, KU Leuven, and Harmony Intelligence to release a database featuring more than 700 identified risks.

“The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyse, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorised risk database,” Neil Thompson, Ph.D., head of the MIT FutureTech Lab, said in a news release. “It is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches.”

Researchers said urgent work was needed to help decision-makers across sectors develop a comprehensive understanding of the current risk landscape. The researchers’ review of more than 17,000 records drawn from several academic databases identified 23 risk subdomains for decision-makers to consider, but the average AI risk framework mentioned just 34% of those subdomains.

Even the single most robust framework reviewed mentioned just 70% of the risks documented by the researchers.

Researchers found that it was much more common for risks to surface after AI was deployed than during its development, making it all the more critical that companies are equipped to create a comprehensive risk framework upfront.

“Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots,” Peter Slattery, Ph.D., an incoming postdoc at the MIT FutureTech Lab, said in the release.

The database features seven risk domains that sit above the 23 risk subdomains and 700-plus risks. The risk domains most commonly included in risk frameworks were AI system safety, failures, and limitations (76%); socioeconomic and environmental harms (73%); and discrimination and toxicity (71%).

Risk subdomains covered in more than half of the researched frameworks included unfair discrimination and misrepresentation (63%); compromise of privacy (61%); and lack of capability or robustness (59%). At the other end of the spectrum, fewer than 15% of the frameworks covered AI welfare and rights (2%); pollution of the information ecosystem and loss of consensus reality (12%); and competitive dynamics (12%).

— To comment on this article or to suggest an idea for another article, contact Steph Brown at Stephanie.Brown@aicpa-cima.com.
