Internal investment decisions play a crucial role in management accounting practice: they implement strategic objectives, drive long-term value creation, and enhance organisational efficiency and competitiveness. Because of their substantial financial implications and often irreversible nature, these decisions can have a profound and lasting effect on corporate performance.
Research has identified a pervasive risk of investment failure, often due to challenges employees face when assessing key investment factors. Various biases, including overconfidence, emotional attachment, and overreliance on the initial information received, can prevent employees from identifying suitable investment options.
Recent advancements in algorithmic decision support systems can help organisations address these biases, along with other challenges, and improve decision-making. Companies such as IKEA and PepsiCo already use such systems to enhance their hiring processes by pre-selecting applicants. Others, such as Bosch and Siemens, have implemented them to validate investment options and optimise a range of business processes.
These systems rely on algorithms that can provide more objective advice across various cognitive tasks. Consequently, implementing these systems has great potential to enhance the quality and efficiency of internal investment decisions.
Nevertheless, organisations often struggle with employees’ reluctance to adopt or rely on algorithmic advice, a phenomenon referred to as “algorithm aversion”. The reluctance may be attributable to two factors: first, the perception that the task is not suited to algorithms; and second, the complexity of the underlying algorithms employed. This was the topic of our CIMA-sponsored research (see the sidebar, “Methodology of Algorithmic Utilisation Study,” at the end of this article). Understanding this topic is important, as reluctance can hinder companies from fully leveraging these systems, leading to less effective decision-making.
What is algorithm aversion?
Algorithm aversion is defined as employees’ tendency to refrain from relying on algorithmic advice, which results in less algorithm use. In practice, algorithm aversion occurs in various internal decision-making contexts, including investment decisions, forecasting, hiring decisions, and risk management. This aversion persists even though algorithms, when deployed correctly and populated with unbiased data, often outperform humans and provide valuable feedback.
Therefore, to fully leverage the benefits of these systems, organisations should aim to shift employees from algorithm “aversion” to algorithm “appreciation” across various tasks.
Different decision types
Analysing the domains where algorithms are commonly used in internal investment decision-making reveals two main types of decisions that affect how readily employees rely on them: non-human-related and human-related decisions.
Non-human-related decisions include various procurement decisions, such as the purchase of new machinery. Algorithms evaluating these decisions can be especially useful in improving accuracy and consistency.
Human-related decisions, such as hiring new employees, are also critical and common business processes where algorithms can provide decision support. Many large companies, including IKEA, PepsiCo, Siemens, Google, and JetBlue, use algorithmic technology for both hiring new employees and tracking the performance of current employees. However, these decisions typically involve emotional factors, which make it more challenging to rely on algorithmic advice.
Are emotions driving suboptimal decisions?
Previous research shows that the level of analytical reasoning in decision-making depends on the emotional intensity of the decision. When emotional intensity is low, decisions tend to be more analytical. Typically, these are classic investment decisions about machinery or buildings. In contrast, when emotional intensity is high, decisions are more likely to be spontaneous and less rational. These decisions range from hiring new employees to awarding bonuses or setting personal workloads.
Research indicates that employees often believe algorithms lack the emotional capabilities of humans. Consequently, algorithms are seen as better suited to mechanical and objective tasks than to subjective ones, and employees tend to view them as less fair and trustworthy for tasks that require subjective judgement.
Algorithm explanation to mitigate aversion
Emotional intensity is not the only factor that tends to influence employees’ reluctance to incorporate algorithmic advice. Employees can also be deterred by the perceived complexity of the algorithm. This is often compounded by a lack of understanding of the general benefits of these tools.
Therefore, providing a detailed explanation of the algorithm might help increase its use, especially when algorithmic systems are viewed with scepticism. Explanation or training can range from highlighting an algorithm’s general benefits to giving a more technical overview of how it works, including how it handles specific kinds of decisions, which may reduce perceived complexity. In this way, employees gain a better understanding of the technology and its advantages, which leads to greater acceptance.
Insights from the study
What did our study teach us?
- We found that employees exhibit a stronger algorithm aversion in decisions about humans than in decisions concerning non-human objects. This implies that a decision that evokes even some emotional response from employees produces greater algorithm aversion, even when the decision is based on objective criteria.
- Interestingly, and of particular importance, algorithm aversion in human-related decisions can be mitigated by providing an explanation of the algorithm. However, the explanation in non-human-related decisions does not influence algorithm aversion. In other words, providing an algorithm explanation results in nearly the same use of the algorithmic decision support systems for both decision types. This suggests that an algorithm explanation is particularly useful in human-related decisions. These results can be seen in the graphic, “Average Algorithm Use by Experiment Participants”.
- As these results might be driven by the specific explanation provided, we further analysed the impact of a more technical explanation on algorithm use. This explanation was more beneficial than no explanation, but it was less effective in mitigating algorithm aversion compared to explaining the general benefits.
- Additionally, we analysed whether participants who requested algorithmic advice actually used it to support their final decision. Remarkably, almost 80% of participants who requested advice considered it in some way in making their final decision. This suggests that persuading employees to request algorithmic advice in the first place is the key hurdle in integrating algorithmic decision support systems, as the level of consideration was high across both decision types and explanation settings.
Key takeaways for managers
Our findings offer valuable insights for management accounting practitioners implementing algorithmic technology.
The study emphasises the potential to increase algorithm use, especially in human-related decisions, through the implementation of an algorithm explanation. It also highlights the importance of informing employees about the general benefits of the algorithm in human-related decisions, such as hiring processes. Our findings offer a road map for organisations to effectively integrate algorithmic decision support systems, ensuring optimal performance in both decision types.
Algorithmic decision support systems are more commonly used for decisions involving non-human objects than for decisions involving humans. This trend indicates that organisations might face fewer challenges in implementing algorithms for tasks related to products or materials compared to decisions involving human factors.
For human-related decisions, providing clear explanations about the algorithm is crucial, as it fosters employee understanding and trust in the system’s output.
Among the different types of explanations, general explanations about the benefits of the algorithm are slightly more effective than highly technical ones. Moreover, when advice is explicitly requested, employees are generally willing to follow it. However, motivating employees to proactively seek algorithmic advice is more challenging. To address this issue, organisations should focus on creating an environment that encourages employees to seek advice from the system, building trust, and demonstrating the system’s value in improving decision-making.
Methodology of algorithmic utilisation study
In our empirical study, we examined the two aforementioned factors: (1) whether employees face a stronger algorithm aversion in human-related decisions compared to non-human-related decisions (non-human-related v human-related) and (2) whether providing an algorithm explanation mitigates algorithm aversion (without explanation v with explanation).
Specifically, the non-human-related decision pertains to the purchase of a new machine, while the human-related decision involves hiring a new employee.
During the study, participants assumed the role of a manager responsible for internal investment decisions in a fictitious organisation. They were presented with an investment decision in which they had to choose the most appropriate of three investment alternatives, basing their decision on five objective evaluation criteria. The criteria were comparable across both decisions: annual costs; expected remaining useful lifetime (or duration of employment) in years; scalability/potential; functions/competencies; and flexibility.
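To make the setup concrete, the sketch below shows one simple way a decision support algorithm might rank three alternatives on five criteria such as these. The weighted-scoring logic, weights, and scores are purely illustrative assumptions; the study does not disclose its algorithm at this level of detail.

```python
# Illustrative only: a minimal weighted-scoring sketch of how a decision
# support algorithm might rank three alternatives on five criteria.
# All weights and scores below are invented for illustration; they are
# not the study's actual algorithm or data.

CRITERIA_WEIGHTS = {
    "annual_costs": 0.30,   # scored so that lower costs earn a higher score
    "useful_life": 0.25,    # expected remaining lifetime/employment duration
    "scalability": 0.15,    # scalability/potential
    "functions": 0.15,      # functions/competencies
    "flexibility": 0.15,
}

# Hypothetical scores on a 0-1 scale (1 = best) for three alternatives.
ALTERNATIVES = {
    "Alternative 1": {"annual_costs": 0.8, "useful_life": 0.6,
                      "scalability": 0.7, "functions": 0.5, "flexibility": 0.6},
    "Alternative 2": {"annual_costs": 0.5, "useful_life": 0.9,
                      "scalability": 0.6, "functions": 0.8, "flexibility": 0.7},
    "Alternative 3": {"annual_costs": 0.7, "useful_life": 0.5,
                      "scalability": 0.8, "functions": 0.6, "flexibility": 0.5},
}

def weighted_score(scores: dict) -> float:
    """Combine the five criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Rank alternatives from best to worst and print the suggested choice.
ranking = sorted(ALTERNATIVES, key=lambda a: weighted_score(ALTERNATIVES[a]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(ALTERNATIVES[name]):.2f}")
print(f"Suggested alternative: {ranking[0]}")
```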
We also provided the names and pictures of the alternatives (machines and candidates) and a short introduction consisting of one sentence of comparable information. Participants were further informed that the organisation’s management considered it advantageous to request algorithmic advice for five reasons, which outlined the algorithm’s general benefits. This constituted the algorithm explanation. We also implemented a more technical explanation as a separate condition and tested it against both no explanation and an explanation focusing on the general benefits of algorithms.
In the main task, participants had five minutes to analyse the given information and decide whether to request algorithmic advice, which was provided by a real algorithm drawing on 500,000 lines of data. Requesting advice carried a small cost, to discourage requests made out of mere curiosity. Participants were also informed that the final decision remained their responsibility, meaning they did not have to follow the advice they requested. If they decided not to use the algorithm-based system, they received no advice and had to input their final decision unaided.
Conversely, when participants requested algorithmic advice, they were presented with the algorithm’s suggestion for the most suitable alternative before determining which alternative to select. Overall, participants were compensated based on the appropriateness of their decision.
Average algorithm use by experiment participants

The level of algorithm use was measured by how frequently the experiment participants requested advice from the algorithm-based system. The measure ranges from 0% (no participant requested advice) to 100% (every participant requested advice).
In cases requiring a human-related decision (ie, a hiring decision), algorithm use was lower than in cases needing a non-human-related decision (ie, purchase of a new machine).
For example, when participants were given no algorithm explanation (left-hand side of the graph), the level of algorithm use was about 20% in human-related decisions as compared to about 50% in non-human-related decisions. However, when participants were given an explanation (right-hand side of the graph), the level of algorithm use increased by more for human-related decisions than for non-human-related decisions.
Tom Gubini, Ph.D., is a financial controller at Thyssengas GmbH, and Svenja Marsula, Ph.D., is an assistant professor at Ruhr University Bochum, both in Germany. To comment on this article or to suggest an idea for another article, contact Oliver Rowe at Oliver.Rowe@aicpa-cima.com.
LEARNING RESOURCE
Investment Decisions Fundamentals
Real-world examples and exercises will give you the background on how to apply investment decisions while considering both quantitative and qualitative measures.
COURSE
MEMBER RESOURCES
Article
“Employees to Leaders: More Training Needed in AI”, FM magazine, 24 February 2025
Podcast episode
“AI’s Future: Figuring Out What It Means for Finance Teams”, FM magazine, 22 January 2025