Over the last few months, the financial sector, like many other industries, has had to adjust and shift to remote set-ups almost overnight due to COVID-19 restrictions. The transition has accelerated digital transformation; the sector's previous reliance on face-to-face, or 'high-touch', customer interactions has yielded to a completely digitalized experience.
Banks, in particular, have started to accommodate customers digitally in line with a steep increase (50-70%) in internet traffic during the pandemic. Many have turned to front-facing technology, or in other words, technology specialized to enhance the customer experience using automation, machine learning, and interactive engagement. This has led to the decline of traditional banking interactions such as visiting a branch or calling a service agent, as research reveals 72% of adults now prefer to communicate digitally.
In our new normal, the financial services sector is providing more agile, digital channels to its customers to improve communication and engagement. But with innovation often comes risk: as a result of the uptick in technology usage, the industry has become one of the most targeted by cybercrime. Generally speaking, over 70% of data breaches are financially motivated, and each channel used to access a bank account or apply for a mortgage is a potential access point for bad actors to fill up their piggy banks. On its current trajectory, cybercrime is predicted to cost victims $10.5 trillion, with threats such as identity theft, malware, hijacking, and IoT attacks set to grow by 15% each year.
Unsurprisingly, regulation such as Strong Customer Authentication (part of the Payment Services Directive 2, or 'PSD2') has stepped in, stipulating that companies must adopt multi-factor authentication (MFA) to increase security. This approach operates on the premise that a customer's password may already have been compromised, and requires them to confirm they are the account holder via a text message, phone call, or even an email, depending on the network. By requiring more than a password, MFA can also limit, detect, and block attempts at credential stuffing, a technique commonly used to automate password entries or guesses.
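The detection side of this can be illustrated with a minimal sketch: one common signal of credential stuffing is a burst of failed logins from a single source, which can be used to trigger an MFA challenge or a block. The class, threshold, and IP addresses below are illustrative assumptions, not part of any real product or the PSD2 text.

```python
from collections import defaultdict

# Illustrative cutoff: how many consecutive failures from one source
# are tolerated before we treat the traffic as automated guessing.
FAILURE_THRESHOLD = 5

class StuffingDetector:
    """Hypothetical sketch of per-source failed-login tracking."""

    def __init__(self, threshold=FAILURE_THRESHOLD):
        self.threshold = threshold
        self.failures = defaultdict(int)  # source IP -> consecutive failures

    def record_attempt(self, source_ip, success):
        """Return True if this source should be blocked pending MFA."""
        if success:
            self.failures[source_ip] = 0  # a real login resets the counter
            return False
        self.failures[source_ip] += 1
        return self.failures[source_ip] >= self.threshold

detector = StuffingDetector()
for _ in range(4):
    detector.record_attempt("203.0.113.7", success=False)
blocked = detector.record_attempt("203.0.113.7", success=False)
print(blocked)  # fifth straight failure crosses the threshold -> True
```

A production system would combine many more signals (device fingerprints, known-breached credential lists, velocity across accounts), but the principle is the same: anomalous attempt patterns escalate to a second factor rather than a silent allow.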
Naturally, some customers will continue to use familiar methods of contact with their banks, which can also open up channels traditionally prone to imposter activity, such as scam-related calls. In June alone, at the height of the COVID-19 pandemic, over 3.3 billion robocalls were made to Americans to collect and leverage data used to commit fraud by playing on customer fears. Fortunately, banks are becoming wise to this, and more are implementing cross-channel front-facing protection that can detect account holders using speaker recognition. With 38% of information conveyed through tone of voice, front-facing voice fingerprinting, speaker recognition, or even identifying a client by a number associated with the account could be vital in catching impersonation attempts.
Outside of front-facing technology, cognitive computing tools such as Artificial Intelligence (AI) are being used to detect, prevent, and provide a security assessment of weak touchpoints in the back-office. Through detailed mapping of all processes, systems, and resource interactions, AI can identify critical points prone to creating a negative impact or a potential breach. When integrated with the front-facing customer service channels commonly used within banking, AI's dynamic machine learning capabilities can enhance interactions and even flag high-risk users prone to threats. By analyzing customer traits, preferences, spending habits, and tendencies, AI can identify unusual activity using back-end tools to detect identity theft automatically. Once a threat is exposed, AI can flag, automatically block, and even inform customers through a notification on an authenticated device.
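The core idea of flagging unusual activity against a customer's own history can be sketched very simply. A plain z-score test on transaction amounts is a stand-in here for the machine learning models the article describes; the function name, cutoff, and transaction figures are illustrative assumptions.

```python
import statistics

def is_unusual(history, amount, z_cutoff=3.0):
    """Flag an amount that sits far outside a customer's own
    spending pattern (illustrative z-score test, not a real model)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean  # flat history: anything new stands out
    return abs(amount - mean) / stdev > z_cutoff

# Hypothetical recent card spend for one customer, in dollars.
history = [42.0, 38.5, 55.0, 47.2, 40.1, 51.3, 44.8]

print(is_unusual(history, 48.0))   # within the usual pattern -> False
print(is_unusual(history, 900.0))  # far outside the pattern -> True
```

Real fraud models score many features at once (merchant, geography, device, time of day) and are trained on labeled fraud outcomes, but the flag-block-notify flow described above sits on top of exactly this kind of per-customer baseline.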
There is, however, an elephant in the room: the use of consumer data to enhance digitalized experiences. The common misconception surrounding fraud is that attacks are always external, when, in reality, 90% of data breaches involve at least one touchpoint, leak, or bad actor associated with an organization.
For that reason, the sector is phasing out traditional authentication processes that would generally involve revealing sensitive account information for more sophisticated approaches. AI-powered chatbot software, for example, is helping to assist customers in real-time and can correctly identify the person they are talking to with precision at the keystroke level. If automated chatbots cannot help customers directly, they can flag, divert, and steer the conversation to human agents who are aware that they are speaking to the right account holder. In some cases, we’ve seen cross-industry AI implemented to keep names anonymous and allow the respective agents or handlers to continue what they do best by assisting customers.
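"Precision at the keystroke level" refers to keystroke dynamics: comparing the timing rhythm of a user's typing against an enrolled profile. A minimal sketch of that comparison is below; the function, tolerance, and timing vectors are illustrative assumptions, not a description of any particular vendor's chatbot.

```python
def matches_profile(enrolled, observed, tolerance=0.35):
    """Hypothetical keystroke-dynamics check: compare inter-key timing
    intervals (seconds) against an enrolled profile. Returns True when the
    mean absolute deviation, as a fraction of the enrolled mean interval,
    falls under an illustrative tolerance."""
    if len(enrolled) != len(observed):
        return False
    mean_interval = sum(enrolled) / len(enrolled)
    deviation = sum(abs(e - o) for e, o in zip(enrolled, observed)) / len(enrolled)
    return deviation / mean_interval <= tolerance

enrolled  = [0.12, 0.18, 0.15, 0.22, 0.14]  # account holder's typing rhythm
same_user = [0.13, 0.17, 0.16, 0.21, 0.15]  # close to the enrolled rhythm
imposter  = [0.30, 0.45, 0.40, 0.55, 0.38]  # noticeably slower cadence

print(matches_profile(enrolled, same_user))  # True
print(matches_profile(enrolled, imposter))   # False
```

Production systems model dwell and flight times over many sessions and update the profile continuously; the point of the sketch is that the identity signal comes from how the customer types, so the chatbot can hand over to a human agent already confident it has the right account holder.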
The COVID-19 pandemic has in many ways become a crisis that the financial sector has had to adapt and respond to simultaneously, through a period of extreme uncertainty. Service providers have had to increase their digital presence almost overnight, and maintaining robust security in an industry traditionally renowned for keeping money safe has been essential. With the right technology, service providers can mitigate the risks associated with cybercrime and ensure they keep not only their customers safe, but their businesses too.