The financial technology – or fintech – industry presents itself as a new vanguard of digital financial services that will disrupt the old players. The reality, though, is that fintech companies often build their services on top of the existing financial industry. Indeed, the fintech startup scene functions like an outsourced R&D department for large banks and funds: as startups succeed, their innovations are acquired by, or incorporated into, the established players.
The best way to understand fintech, therefore, is to see it as the automation of traditional finance. Fintech initiatives either create systems that replace the jobs of financial professionals with algorithms – such as a “robo-adviser” – or they create platforms to replace customer-facing service staff with self-service interfaces. An artificial intelligence (AI) “chat-bot” is really just an interactive interface to allow customers to input requests to a financial institution. This leads to a vision of wholly digital financial institutions, in which both the internal decision-making processes of financial professionals and the means by which those professionals interact with the outside world are automated.
This process is often celebrated as a great step forward that can only benefit customers. In reality, while such automation lowers the costs of providing financial services – and may allow those services to reach a wider range of people – the fintech industry specialises in making the existing financial system faster and more convenient, rather than fundamentally altering it.
At present, digital financial services are framed as optional extras, but large financial institutions have a strong incentive to automate in order to cut the costs of serving large numbers of customers and to boost profits. We are therefore likely to see them all gradually converge on a common set of digital services whilst withdrawing non-digital ones, a process that will leave us dependent upon digital finance. We should be thinking carefully about the ethical implications of locking ourselves into this technology.
So what are the ethics of fintech? It’s a question that has mostly escaped attention. Since the financial crisis we have been fixated on the ethics of “high finance” – the large investment banks and traders – but not the everyday finance of commercial banks and personal financial advisers.
The key to understanding the negative potential of fintech lies in recognising what improving the ethics of finance requires. Firstly, we must push financial professionals to think more deeply about the ethics of their decisions, whilst ensuring they remain closely connected to the outcomes of those decisions.
Secondly, we must push retail customers to think more about how their money is used and to demand higher standards. In other words, a more ethical financial system is likely to be one that encourages moments of reflection on the broader consequences of investment decisions, beyond narrow monetary returns.
This is where fintech poses a problem. Its main aim is to reduce the direct role of financial professionals, whilst providing a “frictionless” experience for customers. Professionals may come to feel less responsible for decisions, and more able to point to a third-party algorithmic actor to justify controversial ones. Customers, meanwhile, are encouraged to pay even less attention to the behind-the-scenes processes of financial institutions.
Instantaneous financial services at the click of a button might sound convenient, but “frictionless” is another way of saying “lacking contact” or “lacking engagement”.
AI machine-learning systems pose special issues for accountability. Older credit-scoring algorithms were deterministic: a professional took past knowledge and encapsulated it in a step-by-step procedure that followed orders, taking in data and obediently outputting scores.
Newer machine-learning systems, however, can calibrate their operations in response to past data – learning from experience. This increasingly means the professionals themselves may not know the reason for the decision made by the system. People seeking accountability for being denied a loan may find themselves facing a Kafkaesque digital bureaucracy, trying to guess why they have been rejected.
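The difference can be sketched in a few lines of code. The rules, numbers, and data below are entirely made up for illustration – this is not any real scoring system – but they show why a hand-written scorer is answerable in a way a fitted model is not:

```python
import math

def rule_based_score(income, missed_payments):
    """Deterministic scoring: every step was written down by a professional,
    so any decision can be traced back to an explicit rule."""
    score = 500
    if income > 30_000:
        score += 150                   # rule chosen by a human
    score -= 100 * missed_payments     # rule chosen by a human
    return score

def fit_logistic(features, outcomes, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model to past lending outcomes by
    plain gradient descent. The resulting weights are numbers calibrated
    from data, not rules anyone wrote down."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, outcomes):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y                # prediction error on this applicant
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy history: (normalised income, missed payments) -> loan repaid? (1 = yes)
past = [(0.9, 0), (0.8, 1), (0.7, 0), (0.3, 3), (0.2, 4), (0.1, 5)]
repaid = [1, 1, 1, 0, 0, 0]
w, b = fit_logistic(past, repaid)

def learned_score(income, missed_payments):
    """Probability of repayment according to the fitted model. If asked
    *why* an applicant was rejected, all we can point to is w and b."""
    z = w[0] * income + w[1] * missed_payments + b
    return 1.0 / (1.0 + math.exp(-z))

print(rule_based_score(40_000, 1))   # 550: 500 + 150 - 100, fully traceable
print(learned_score(0.8, 1))         # a probability, traceable only to w and b
```

The rule-based scorer can justify a rejection by quoting the rule that caused it; the learned scorer can only report its fitted weights, which is exactly the accountability gap the professionals themselves face.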
Finally, the rise of digital financial services poses serious questions of data ethics. The digital format leads to ever-greater collection of personal financial data, which can be combined with other data sets to build fine-grained pictures of individuals. This immediately raises concerns about privacy – am I being spied upon? – but an even deeper issue is the ability to use the past data of customers to steer their future paths. Entire schools of machine learning are designed to know you better than you know yourself, and to then prompt you into action or predict your action before you’ve even done it. Will people increasingly feel like their autonomy is being eroded? Will they feel a creeping sense of a financial panopticon that records all their actions and uses the data to manipulate them?
Fintech has great positive potential – possibly allowing greater access for people traditionally denied financial services – but we must not let this blind us to its very real negative potential. If we are not careful, we could sleepwalk into a financial system that is even less ethically responsible, more invasive, and increasingly unfathomable to people.