While the internet of things has given companies more ways to collect growing volumes and types of data about their customers, it also poses a significant challenge: regulation is developing at a much slower pace than the technology. It therefore falls to companies themselves to decide how to harness the insights offered by data from mobile phones, travel passes, thermostats, and other devices while living up to their core ethical values.
Business Ethics and Big Data, a new briefing from the Institute of Business Ethics, urges companies to articulate their own approach, maintaining a consistent alignment between values and behaviour.
The vast quantities of data available to companies provide the opportunity to develop new strategies and to target messages, products, and services. But the more opaque collection methods, and the ways such information is analysed and used, become to consumers, the more their trust in companies declines.
Research conducted by Ipsos MORI and the Royal Statistical Society suggests that the level of trust the public have in companies to use data appropriately is lower than the public’s overall trust in businesses. Media, telecommunications, and insurance companies are particularly affected.
If a company’s conduct in dealing with Big Data is perceived as less than ethical, this can adversely affect its reputation, customer relationships, and, in the long run, its revenues.
Among the ethical issues thrown up by data collection, the right to privacy, which allows people to limit who has access to their personal information, is a key concern. Individuals should have meaningful control over how a corporation gathers data from them, and how it uses and shares that data, the briefing notes.
Government and corporate databases must allow everyone to verify the accuracy of personal information that is used to make important decisions about them.
When companies take personal information from a customer, either directly or through the use of a credit or debit card, the customer needs assurance that it is not going to be used in a way that will undermine his or her privacy, Simon Webley of the IBE said in a webinar to launch the report. How those data will be used needs to be made quite clear to the consumer, he added.
Automated decision-making in areas such as employment, health care, education, and lending must be judged by its impact on real people, operate fairly for all communities, and protect the interests of those who are disadvantaged, the briefing advises. An independent review may be necessary to ensure that a system works fairly.
The veracity of new datasets is another area of concern. The briefing notes that “[t]he reliability of Big Data and the ability of algorithms to generate valid conclusions are matters of debate.”
If the new datasets are not statistically accurate, they could produce flawed results, which could in turn generate inequalities in outcomes or opportunities by making certain individuals or groups more or less visible.
So how can organisations strike a balance between using data to improve performance and customer service, and honouring their commitment to protect the privacy of their stakeholders? The IBE briefing recommends that companies consider the following six questions:
- How does the company use Big Data, and to what extent is it integrated into strategic planning? Clearly identifying the purpose for which data will be used helps to identify the critical issues that may arise. Another important question is: How does that particular use benefit the customer or wider public? For data use to benefit your organisation and its stakeholders, it has to be accurate, reliable, and trustworthy. What do you do to ensure the quality and veracity of your data?
- Does the organisation send a privacy notice when personal data are collected? Is it written in clear and accessible language which allows users to give truly informed consent? For example, social media platforms do ask users to agree to terms and conditions when they register. However, research shows this does not necessarily amount to informed consent: many users do not read through lengthy, complicated documents but simply accept them in order to open their accounts.
- Does my organisation assess the risks linked to the specific types of data it uses? Identifying any potential negative impact the use of data might have on particular groups of people, and what might happen if the datasets became public, is one way of increasing awareness of the damage a data breach could cause. In some cases, a privacy impact assessment may be advisable. The risk that employees will misuse the company's information should not be underestimated.
- Does my organisation have safeguards in place to mitigate these risks? Communicating the preventive measures which are in place to bolster data security is an effective way to promote trust. These might include controls on data access and harsh penalties for its misuse.
- Do we make sure the tools to manage these risks are effective, and do we measure outcomes? Audit has a key role to play in helping companies deal with these issues.
- Do we conduct appropriate due diligence when sharing or acquiring data from third parties? When buying information from third parties, due-diligence procedures must apply as they would to other purchases. Do the suppliers uphold similar ethical standards and guarantee the accountability and transparency of these practices?
—Samantha White (email@example.com) is a CGMA Magazine senior editor.