Improving Customer Service and Security With Data Analytics

The advantages of analytics to customer service have already been shown. Now the question becomes: How can analytics be used to improve security?


Organizations are collecting more and more data. And while rich data allows personalized service, detailed data about real people (rightly) often raises concerns. Just as this data is increasingly valuable to organizations, it can be valuable to criminals as well, leading to an ever-escalating series of data breaches. Data analytics exacerbates the trade-off between security and service: because much of marketing analytics tries to learn as much as possible about potential customers, the analysis itself can, at a minimum, raise privacy concerns for individuals. And these analytics processes are becoming increasingly powerful at de-anonymizing people from their trace data.

Yet these same de-anonymization techniques are an example of how analytics can offer at least a partial solution to the problems it has exacerbated.

Consider, for example, placing a call to your bank for help after losing your debit card. The core problem is that, before providing customer service, the bank must authenticate that you are who you say you are. This authentication process must begin with the assumption that the caller is a malefactor impersonating the real customer — guilty until proven innocent. The bank will help the caller only after being convinced of the caller’s identity.

While this process is annoying when we’re customers seeking help, we actually want and need this level of security. It is in our best interests that the bank verify that we are who we say we are before continuing to assist us. After all, we don’t want the bank to hand out our money (or our new debit card) willy-nilly to just anyone.

Historically, this telephone authentication process has involved answering a set of questions. What is your account number? What is your personal identification number (PIN)? What is your Social Security number? Can you verify the last three transactions in the account? What was your prior address? The process continues, potentially escalating to security challenge questions based on shared secrets, until the bank is convinced of our identity.
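In pseudocode terms, the traditional flow amounts to a loop of shared-secret challenges. The sketch below is a simplified illustration only; the question list and pass threshold are assumptions, not any bank’s actual policy, and `ask` is a hypothetical callback that poses a question to the caller and returns the answer.

```python
def authenticate_by_questions(ask, customer_record,
                              questions=("account number", "PIN",
                                         "Social Security number",
                                         "last three transactions",
                                         "prior address"),
                              required_correct=3):
    """Knowledge-based authentication: distrust the caller until enough
    shared secrets check out. The question set and threshold are illustrative."""
    correct = 0
    for question in questions:
        if ask(question) == customer_record.get(question):
            correct += 1
        if correct >= required_correct:
            return True   # identity accepted; service can begin
    return False          # escalate further or decline to help
```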

This process is adversarial by design. Even the name “security challenge question” evokes a combative stance, a challenge. The initiator of the call is not trusted until they have passed through a gauntlet. For banks, it is unfortunate that so many initial interactions with a customer are adversarial in nature.

But data and machine learning, specifically speech processing, offer a great example of an invisible way that analytics can simultaneously improve security and service. The technology itself isn’t new, but speech processing has progressed to the point where financial services companies can match a caller’s voice to their prior calls, allowing the authentication process to occur behind the scenes as the customer service conversation progresses.
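To make the idea concrete, here is a minimal sketch of passive speaker verification, assuming an upstream speaker-embedding model has already reduced each call to a fixed-length voiceprint vector. The 0.75 similarity threshold and the 256-dimension embeddings are illustrative placeholders, not any vendor’s actual parameters.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fixed-length voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def caller_matches_voiceprint(live_embedding: np.ndarray,
                              enrolled_embedding: np.ndarray,
                              threshold: float = 0.75) -> bool:
    """Passive check: does the live caller's voice match the voiceprint on file?

    Both embeddings are assumed to come from a speaker-embedding model trained
    on prior calls; the threshold is illustrative, not a production setting.
    """
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold

# Illustrative usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                     # stored from prior calls
live = enrolled + rng.normal(scale=0.1, size=256)   # today's caller, slight variation
print(caller_matches_voiceprint(live, enrolled))    # True for a close match
```

In practice the threshold would be tuned to balance false accepts against false rejects, with a failed match simply falling back to the traditional question-based process.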

Fidelity Investments, for example, encourages the use of voiceprints to confirm identity within the first moments of a conversation. HSBC is beginning to do this not just for premier clients, but at scale for retail clients as well. And the change doesn’t just help customers avoid yet another password or secret question: Barclays notes a 20-second reduction in time to authenticate, and those 20 seconds add up quickly to considerable savings in employee time for the bank.

The convenience and savings may be the initial drivers of this change. However, perhaps a bigger effect, more elusive to quantify, is the change in orientation. Data and machine learning can ensure that the customer interaction begins by focusing on assistance rather than challenge. Customer service can work with, not against, a caller who (in all statistical likelihood) is a genuine customer, not a con artist: innocent until proven guilty, in other words. Customer service doesn’t have to assume at the outset that callers might be nefarious; identity validation can occur in parallel while the conversation gets started. The unlikely (but potentially damaging) scenario that a security threat exists doesn’t have to poison the majority of interactions with valid customers, yet it isn’t left unaddressed. Organizations can relegate the pesky security issues to behind the scenes, where they belong. The authentication process is passive, churning along in the background, and security becomes visible only if a problem is found. In this case, artificial intelligence is augmenting the human employee in ways that are not visible to customers.
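Building on the voiceprint sketch above, the hypothetical routine below shows that change in orientation in code: assistance begins immediately, verification runs in the background, and security surfaces only when the check fails. `start_conversation` and `escalate_to_security` are assumed callbacks supplied by a call-center application, not real APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_call(live_embedding, enrolled_embedding,
                start_conversation, escalate_to_security):
    """Serve the caller first; authenticate passively in parallel."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Passive verification (caller_matches_voiceprint from the earlier
        # sketch) runs while the agent begins helping the presumed-genuine customer.
        check = pool.submit(caller_matches_voiceprint,
                            live_embedding, enrolled_embedding)
        start_conversation()
        # Security becomes visible only if the background check fails.
        if not check.result():
            escalate_to_security()
```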

As a result, valuable and expensive training time for customer service employees can be spent more on banking and less on security. While the direct result is more effective customer-service training, the indirect result is scale. When a new security threat emerges, the bank can deploy countermeasures quickly to all customer service interactions.

And more can likely come from this initial application. For example, a customer may in fact be who they say they are but may be under coercion. Or they may be suffering from some impairment. Speech patterns that indicate these possibilities can be brought to the attention of the customer service agent for further assessment.
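A sketch of how such signals might be surfaced appears below; the feature names and cutoffs are purely illustrative assumptions (a real system would rely on models trained on labelled calls), and the output is advisory, flagged to the agent rather than acted on automatically.

```python
def speech_pattern_alerts(features: dict) -> list:
    """Return advisory flags for the agent based on per-call acoustic features.

    `features` is a hypothetical dictionary of measurements; the cutoffs are
    placeholders for illustration, not validated clinical or security thresholds.
    """
    alerts = []
    if (features.get("pitch_variance", 0.0) > 2.5
            and features.get("speech_rate_wpm", 0) > 190):
        alerts.append("possible distress or coercion: verify discreetly")
    if features.get("long_pause_ratio", 0.0) > 0.4:
        alerts.append("possible impairment or confusion: slow down and confirm understanding")
    return alerts
```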

Because the process is, by design, invisible, examples like this may get far less attention than humanoid robots or chatbots. But analytics can help mitigate some of the trade-offs between security and service that increased data collection exacerbates. These applications may have a far greater effect on organizations’ customer relationships than the ostentatious examples that may be more effective at marketing than managing.



Comments (2)
Chandra Pandey
Interesting article. In assessing how cross-cutting concerns interplay in the user experience, it is important to understand that risk and security are not the same. Naturally, when the stakes are high, the gravitational pull of the minimum viable window applies as a business concern, thereby locking everything up. The solution lies somewhere in having building blocks of federated security services, backed by integrated risk modelling that classifies transactions at run time for add-on, multi-factor security. Unfortunately, industry and standards bodies have not invested enough time, effort, and funding in risk modelling as a science to develop an ecosystem for automated but intelligent transaction processing. The business of security is preoccupied with new security vaccines and casualty reports, such that not enough is spent engineering the value chain as an industry ecosystem. Transactions being distributed by nature, "divided we fall" is the likely outcome, witnessed as the next breach headline. In that respect, the value of granular risk modelling across all the interacting components is highly undervalued as an investment and an approach.

Technologies such as AI and behavioural techniques provide necessary aids but are not ends in themselves. In a digital world, interaction points are ever increasing, and the traditional security perimeters of organizations must keep evolving to remain relevant. GRC in the modern era is not just about reinforcing best practices but also about rethinking security tools, techniques, and the ecosystem for a balanced trade-off between security and user experience.

Disclaimer: The views and opinions expressed are personal in nature and do not reflect the official policy or position of any organization.
Munyaradzi Mushato
Excellent article, and it does bring to the fore the core issues relating to ethics and big data. Background AI to complement the human interface is indeed a solution where security checks may interfere, or are seen to interfere, with customer service.

On another note, and from another perspective on the matter, it may be a worrying and serious security issue if an institution like a bank does not verify my identity before rendering a service. In other instances, the customer may want to be assured that a security check has indeed been carried out before the service. This might speak to a visible AI process.

Given the above not-so-complimentary perspectives on AI, big data, and security, I would suggest further discourse on how AI and big data can be developed to address the possibility of opposing security requirements and preferences in the client population.