Forecasts are supposed to represent an organisation's best guess about future performance. They are essential for steering businesses in dynamic environments and for managing relations with external stakeholders. However, they are often compromised by gaps in judgement and expertise, as well as by individuals' motivations and cognitive biases.
Forecasters may lack an overview of, or fail to adequately evaluate, the various factors that influence a forecast, and they may be tempted to manipulate forecast numbers to suit their needs. Early in the year, for example, managers may be reluctant to show favourable deviations from targets so as not to give up resources, or they may hide a potential revenue surplus to maintain a buffer in case problems arise at year end. When, later in the year, achievement of the target appears to be in jeopardy and the manager is optimistic that comparatively simple action will bring performance back in line, they may not want to show deviations from their targets, so as to avoid intervention from higher levels, and might therefore be tempted to inflate the forecast.
Despite mechanisms to improve the forecasting process, such as accuracy incentives or the use of management accountants, forecasts have often been considered biased, and the forecasting process, with all its checks and balances, cumbersome and inefficient. Algorithm-based calculations such as predictive analytics have been proposed as a promising alternative to forecasts that rely on human judgement. However, these forecasts are not perfect either. They tend to be perceived as black boxes that do not reveal what is driving the forecast, which reduces their acceptance and actionability. Furthermore, algorithms rely on historical data and might not incorporate recent or highly tacit (implied) information. In a context of structural changes and extraordinary events, human judgement might be better suited to producing a fairly accurate forecast.
Against that backdrop, our CIMA-sponsored research — The Impact of Predictive Forecasting on Corporate Control: A Comparison of Two Multinational Corporations — was a comparative case study with two multinational companies. Both are leading producers in their respective fields: the chemical and the information technology industries. We investigated how both companies use predictive analytics to produce more neutral and unbiased forecasts. More specifically, we explored the interplay of forecasting experts and algorithm-based calculations to analyse how algorithm-based forecasts come to be trusted.
In both companies, we observed that predictive forecasting was valued as providing a realistic outlook. However, it did not remove the need for:
- Traditional bottom-up forecasts and their potential to leverage local knowledge about extraordinary events and structural changes.
- The incorporation of recent and tacit information from the decentralised business units that is difficult to express and difficult to include in forecasting models.
Instead, predictive forecasting was used as an additional benchmark to debias decentral forecasts and to facilitate more productive conversations with the business units.
At the chemical company, the discussions between the corporate centre and the business units became more efficient and effective as predictive analytics shifted the burden of proof to the decentralised representatives and required them to justify deviations from the corporate forecast. In this way, issues were uncovered earlier so that the company could act in a more timely manner.
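The benchmarking logic described above can be illustrated with a minimal sketch: the algorithmic forecast serves as a reference, and business units whose bottom-up numbers deviate beyond a tolerance are flagged for justification. All names, figures, and the 5% tolerance below are hypothetical illustrations, not details of either company's actual system:

```python
# Hypothetical sketch: using an algorithmic forecast as a benchmark to
# flag decentral (bottom-up) forecasts for discussion. Units whose
# forecasts deviate beyond the tolerance must justify the difference.

def flag_deviations(bottom_up, algorithmic, tolerance=0.05):
    """Return units whose bottom-up forecast deviates from the
    algorithmic benchmark by more than `tolerance` (relative)."""
    flagged = {}
    for unit, bu_value in bottom_up.items():
        benchmark = algorithmic.get(unit)
        if not benchmark:  # skip units without a usable benchmark
            continue
        deviation = (bu_value - benchmark) / benchmark
        if abs(deviation) > tolerance:
            flagged[unit] = round(deviation, 3)
    return flagged

# Illustrative numbers: only APAC exceeds the +/-5% tolerance and
# would need to justify its deviation from the corporate forecast.
bottom_up = {"EMEA": 102.0, "APAC": 88.0, "Americas": 120.0}
algorithmic = {"EMEA": 100.0, "APAC": 95.0, "Americas": 118.0}
print(flag_deviations(bottom_up, algorithmic))  # {'APAC': -0.074}
```

In this sketch the burden of proof sits with the flagged units, mirroring the shift the chemical company observed: discussion time is spent only where bottom-up and corporate views materially diverge.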
Similarly, the predictive forecast at the information technology company provided an outlook that helped to assess whether full-year targets would be achieved. However, its corporate forecasters did not rely exclusively on the algorithm but extensively discussed, presented, or simply talked about the future with various decentral and other corporate actors, in both formal and informal meetings, and incorporated this tacit knowledge into their algorithmic calculations.
This dialogue proved essential to convincing central actors that the forecast numbers were unbiased and neutral. As a result, corporate forecasters could efficiently and effectively challenge business units on their bottom-up forecasts, hold them accountable for their targets, and understand any differences between bottom-up and corporate forecasts so that, given the company's rather centralised organisation, measures to ensure target achievement could be derived at the corporate level.
Overall, the combination of algorithmic calculations and intensive dialogue between management accountants and decentral managers increased the perception of the forecast as neutral and trustworthy and helped to change the decision-making culture at the interface of decentral business units and headquarters.
The approach was also considered helpful for driving digitalisation across the whole organisation. It showcased the potential of algorithmic calculations and gave people ideas about how data can be used and how efficiency gains can be realised with the help of predictive solutions. In both organisations, this paved the way for an increasing penetration of technology in forecasting practices across decentral business units.
Further information on the research can be found here.
— Lukas Löhlein, Ph.D., is an assistant professor at WHU–Otto Beisheim School of Management in Vallendar, Germany; Utz Schäffer, Ph.D., is a professor at WHU–Otto Beisheim School of Management in Vallendar, Germany, and director of the Institute of Management Accounting and Control; Leona Wiegmann, Ph.D., is a senior lecturer at Monash University, Melbourne, Australia. To comment on this article or to suggest an idea for another article, contact Oliver Rowe, an FM magazine senior editor, at Oliver.Rowe@aicpa-cima.com.