Data privacy risks to consider when using AI

New technology carries unexpected perils that corporate leaders should guard against to keep consumer, employee, and client data safe.

Artificial intelligence (AI) has the potential to solve many routine business challenges — from quickly spotting a few questionable charges in thousands of invoices to predicting consumers' needs and wants.

But there may be a flipside to these advances. Privacy concerns are cropping up as companies feed more and more consumer and vendor data into advanced, AI-fuelled algorithms to create new bits of sensitive information, unbeknownst to affected consumers and employees.

This means that AI may create personal data. When it does, "it's data that has not been provided with [an individual's] consent or even with knowledge", said Chantal Bernier, who served as assistant and then interim privacy commissioner in the Office of the Privacy Commissioner of Canada from 2008 to 2014 and now consults in the privacy and cybersecurity practice of global law firm Dentons.

AI is an umbrella term used to describe advanced technologies such as machine learning and predictive analytics that essentially shift decisions once solely made by humans to computers.

While AI is still in its early stages — we may have robotic vacuums but nothing like the futuristic cartoon character Rosey, the robot maid from The Jetsons — industries are using the technology to expand revenue streams and reduce workforce costs by linking disparate bits of information.

Few corporate executives are focused on the privacy risks associated with the use of AI. Discussions in boardrooms and in C-suites are "more focused on the possibilities and the benefits of AI than the potential risks", said Imran Ahmad, a Toronto-based lawyer with Blake, Cassels & Graydon who specialises in technology and cybersecurity issues.

Customers want assurances

Consumers are paying more attention to their private information and becoming increasingly uneasy about how data about their interests, locations, credit histories, and more is used by entities they interact with.

Seventy-one per cent of respondents surveyed by global professional services firm Genpact in 2018 said they don't want companies to use AI if it infringes on their privacy, even if those technologies improve their customer experiences. The survey involved more than 5,000 people in Australia, the UK, and the US.

In addition, nearly two-thirds (63%) of the survey's respondents said they're worried AI will make decisions about their lives without their knowledge.

Europe setting the bar

The EU has been leading the charge to meet consumer demand for digital privacy protections.

The EU's General Data Protection Regulation, or GDPR, went into effect in 2018 and vaulted digital privacy expectations to a higher level worldwide by ushering in new standards on a person's right to his or her own information, Ahmad said.

The EU's privacy rules are taken seriously because of the potential fines: organisations found out of compliance can face penalties of up to the greater of €20 million ($22 million) or 4% of their annual global turnover. How data is stored, used, and protected is a focus of the GDPR, which requires companies to bring their data collection and use policies and practices in line with the standard. Any business that uses the personal data of people in the EU to provide services, to sell goods, or to monitor their behaviour must comply, even if it has no office in the EU. That requires tight control over how personal data is collected and processed.
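
To make the "greater of" fine cap concrete, here is a minimal sketch of the calculation in Python (the function name is ours; the €20 million floor and 4% figure come from the regulation as described above):

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    # Greater of a flat EUR 20 million or 4% of annual global turnover.
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# EUR 300m turnover: 4% is EUR 12m, so the EUR 20m floor applies.
print(gdpr_max_fine(300_000_000))    # 20000000.0
# EUR 2bn turnover: 4% is EUR 80m, which exceeds the floor.
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```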

Overseeing data privacy

Regardless of their size or scope, companies that are using these advanced technologies need to think through how their customer and client data is being protected and used, to ensure that people's privacy expectations aren't being violated unknowingly, Bernier said.

"While AI has been created post many privacy laws, the right to privacy and the way it has been defined and described and recognised by the courts does apply to AI," Bernier said.

Here are several ways to insert privacy concerns into management and corporate board discussions about AI:

Boards can and should lead the push for privacy protections

Boards can help hammer home the point that any new technology needs to take security and privacy risks into account, Bernier said.

Board members don't have to understand the ins and outs of every new piece of technology, but they can make sure company management follows best practice in keeping consumers' data safe, she said. "They don't have to be the subject-matter expert, but they do need to know enough about the area to ask the right questions," she said.

Audit committee members can stress that they would like to see that routine checks of security protocols are being conducted and that plans are in place for any possible security breaches. (See the sidebar, "Tips for Board Members Dealing With AI", for more advice.)

Limit yourself

Many companies will be better off collecting and storing fewer data points than stockpiling every bit of data available. That's because having large volumes of personal data, going back years, can lead to more problems if there's a security breach, Bernier said.

On top of that, large data sets with hundreds of categories of information can be unwieldy and can make it hard to explain to consumers which variable led to a decision to decline them for a loan, turn them down for a job, or target a particular product towards them.

"You avoid excessive collection, and you get a logic model on a database that is manageable," Bernier said.

Companies should have routine schedules in place to examine what data they have on hand, with timetables to discard or thin out the information.
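
As one illustration of such a timetable, here is a minimal retention sketch in Python, assuming a pandas DataFrame with a timezone-aware "collected_at" column; the two-year window, column name, and function name are our assumptions, not a recommendation from the article's sources:

```python
from datetime import datetime, timedelta, timezone

import pandas as pd

# Hypothetical policy: discard personal records older than two years.
RETENTION = timedelta(days=2 * 365)

def purge_expired(records: pd.DataFrame, collected_col: str = "collected_at") -> pd.DataFrame:
    # Keep only rows whose collection timestamp falls inside the retention window.
    cutoff = datetime.now(timezone.utc) - RETENTION
    return records[pd.to_datetime(records[collected_col], utc=True) >= cutoff]

now = datetime.now(timezone.utc)
df = pd.DataFrame({
    "customer_id": [101, 102],
    "collected_at": [now - timedelta(days=3 * 365), now - timedelta(days=30)],
})
print(purge_expired(df))  # only customer 102, collected 30 days ago, survives
```

A scheduled job running a check like this keeps the data set small enough to manage and limits how much personal information a breach could expose.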

Think about security from the get-go

Many companies, especially those just getting off the ground, focus on how to make their idea work and attract funding so they can get to the next point and scale up.

Protecting data and personal information is often not a primary concern in the first stages of a company's life, Ahmad said. Neglecting it puts businesses at a disadvantage.

By incorporating protections and best practice processes to routinely screen for issues, companies will be better off in the long term, he said. Corporate boards and top company officers should consider the data security piece whenever they are looking at AI for business solutions, Ahmad said.

"They really need to make sure that whatever solution or development is going to be used includes risk-based compliance," he said.

Inject risk awareness into technology discussions

Companies should continually assess the reputational, as well as monetary, risks of employing technologies that may create privacy concerns, said Atif Ansari, CPA (Canada), CGMA, the president and a founder of Canadian data analytics firm Piik Insights. That's not happening often enough, he said, and those in the C-suite need to ensure that AI and other advanced technologies, like any other company systems, are protected from cyberattacks or breaches.

"It is incumbent on boards and executive management to consider the risks that a breach could pose," he said. "It should be on the radar."

That includes having discussions about what information is collected and shared, and how it is used to inform other business decisions.

His data analytics firm purposely doesn't use personally identifiable information in order to protect clients from inadvertent privacy disclosures, he said. Piik Insights works primarily with clients in the retail and restaurant sectors in North and South America.

For example, the company collects and uses only four digits of a customer's credit card number to track purchases and offer insight into consumer trends to its clients. "We can draw some analysis from this, but it doesn't personally identify any particular individuals," Ansari said.
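
The article does not say which four digits Piik keeps or how they are stored. As one illustration of the general idea, here is a minimal masking sketch in Python; the function name and the choice of the last four digits are our assumptions:

```python
def mask_card_number(card_number: str) -> str:
    # Strip non-digits, then keep only the last four digits in the clear.
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```

The masked value is enough to group transactions for trend analysis while being useless to an attacker who steals the data set.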

Be precise about vendor use of data

Before signing up with any vendor, discuss how data provided by your company will be used and whether the vendor plans to use the data on other projects, Ahmad said.

Having those details spelled out in contracts, and making sure those contractual promises are kept, will go a long way towards making sure private information is protected.

"It's out of your control," Ahmad said about what happens once sensitive business information and customer data are handed off to a third party.

He also suggested considering worst-case scenarios and making sure those third parties have insurance policies that will cover the costs of any major cyberbreaches.

Make sure the analysis does not go too far

Data thefts by cybercriminals aren't the only concern, Bernier said. Analysis that goes too far, reaching conclusions an individual is uncomfortable with, can raise privacy concerns of its own. She pointed to the now well-known case of the retailer Target, in which an algorithm inferred from a teenager's purchase history that she was pregnant, and the company mailed coupons to her home before she was ready to talk to her family about it.

Bernier recommended looking at what is done with customers' data and whether new privacy information is being created by linking up data and coming to a conclusion that could speak to a person's health, education, or other personal information.

Bringing privacy to the forefront

Corporate leaders are having more conversations about ways that advanced technologies and AI can affect individuals' privacy rights, and what to do about it, Ansari said.

"There's much more awareness, but there's still a lot more educating we need to do," he said.


Tips for board members dealing with AI

  • Encourage management to assess AI separately from other technology risks, breaking out the personal information the technology creates and any risks that the data could be compromised.
  • Make sure security protocols are followed by vendors long after contracts for services are signed. Encourage management to keep regular schedules to make sure technology partners are keeping their promises to protect personal information.
  • Push management to comply with the most stringent set of privacy regulations, even if the company isn’t currently in the EU or other markets with far-reaching requirements. That way, if the company does expand into those areas, it won’t be an enormous burden to retrofit security protocols.
  • Follow up with technology contractors to make sure security protocols are being followed. If an AI tool developed by a vendor is supposed to delete extraneous information, ask for verification that those deletions happen. Privacy lawyer Imran Ahmad's rule of thumb is to "trust but verify" that agreed-upon security practices are being followed.

Sarah Ovaska is a freelance writer based in the US. To comment on this article or to suggest an idea for another article, contact Sabine Vollmer, an FM magazine senior editor, at Sabine.Vollmer@aicpa-cima.com.