In the Technology Age, You Can’t Always Trust What You Hear and See

Author: Sourya Biswas, CISSP, CISA, CISM, CCSP, CRISC, CGEIT, Technical Director, NCC Group
Date Published: 27 April 2021

It seems that any mention of technological advancement can also unearth a case of misuse. From airplanes that revolutionized global business being used for terrorist attacks to gene technology that can produce both targeted medicines and bioweapons, technology is often a double-edged sword.

The internet itself is a living example of the perils of technology misuse. Originally built to facilitate communication, its very architecture has opened it up to exploitation by malicious actors. Case in point: phishing. Before the internet and webmail, a con man had to physically mail letters to reach enough people to respond and ultimately defraud them, a venture that involved significant effort, expense and risk. Today, all it takes is a mailing list and mail merge to automate the generation of fraudulent messages. Similarly, a bank robber had to run the gauntlet of guards, guns and safes to get his hands on anything substantial. Today, a hacker can compromise a bank’s systems from a non-extradition country on the other side of the world and net a hundred times that amount.

Artificial intelligence (AI) is an emerging technology that may well define this century of human development, and it is yet another technology wide open to misuse.

The use of AI and deepfake technology in business email compromise attacks
While we’re still far away from Hollywood doomsday scenarios like Terminator’s Skynet, the ability of technology to mimic humans has already been leveraged by criminals in business email compromise (BEC) attacks. A BEC is a sophisticated scam targeting businesses and individuals who perform wire transfer payments, one that netted an astounding US$12 billion from 2013 to 2018, according to the Federal Bureau of Investigation (FBI).

Historically, a typical BEC attack worked like this: An attacker sends an email (purportedly from a senior executive) to a company’s accounting department, asking them to wire money to a fraudulent account. Accounting has no reason to suspect the email is illegitimate and therefore sends the wire. It happens that fast, as shown in the case of “Shark Tank” star Barbara Corcoran. The same scenario can be replicated for a real estate escrow firm. In this case, a hacker impersonates an escrow employee, sending fraudulent payment instructions to the property buyer. Since the buyer is expecting to wire a payment for the impending purchase, he or she may not confirm before transferring funds to a fraudster’s account. This attack has been successful time and time again. However, if the recipient of the email requesting payment spoke with the executive or escrow firm employee over the phone before the transaction was initiated, the fraud could be uncovered. With AI in a cyberattacker’s arsenal, that’s no longer the case.

In August 2019, the Wall Street Journal reported the case of a UK energy company’s CEO receiving an email (supposedly from his boss, the CEO of the German parent company) asking that €220,000 (US$243,000) be wired to a Hungarian supplier. The email was immediately followed by a call that reiterated the instructions in the boss’s voice. The voice was later found to have been mimicked using AI software that could “imitate the voice, and not only the voice: the tonality, the punctuation, the German accent.”

The Washington Post reported that this was not an isolated attack. According to Symantec, the same type of attack has happened at least three times in the recent past. Considering the general reluctance of many organizations to disclose cyberattacks, the actual number may be higher.

Some tips to protect against deepfake voice fraud
This is a developing situation, but there are steps that can be taken to combat such deepfake voice fraud:

  • The recipient should initiate the call rather than take a received call at face value. Unless the impersonated person’s phone has been compromised, a call to that person can uncover the truth. In fact, in the example above, a second fraudulent fund transfer request was thwarted when the UK CEO actually called his boss.
  • The recipient should insist on a video conversation. Note that this is not foolproof. While deepfake voice technology is currently more mature than its video counterpart, the latter is catching up.
  • Similar to multifactor authentication, mechanisms should be established to allow independent channels of verification. Some options include confirmation over internal chat (e.g., Slack, Skype) and predetermined code words or phrases that every fund transfer request must include; a minimal sketch of the latter idea appears after this list.
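
To illustrate the code word idea, here is a minimal sketch in Python of how a fund transfer request could be bound to a shared secret distributed over a separate channel. All names, the secret and the distribution mechanism are hypothetical assumptions for illustration, not a prescribed implementation:

    # Minimal sketch: bind a fund-transfer request to a shared secret so that
    # an attacker who only controls email or a spoofed voice cannot produce a
    # valid code. The secret and all identifiers below are hypothetical.
    import hmac
    import hashlib

    SHARED_SECRET = b"distributed-out-of-band-to-finance-staff"  # never emailed

    def transfer_code(request_id: str, amount: str, beneficiary: str) -> str:
        """Derive a short code that binds the request's key details together."""
        message = f"{request_id}|{amount}|{beneficiary}".encode()
        return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()[:8]

    def verify_request(request_id: str, amount: str, beneficiary: str, code: str) -> bool:
        """Recompute the code and compare in constant time before wiring funds."""
        expected = transfer_code(request_id, amount, beneficiary)
        return hmac.compare_digest(expected, code)

    # The requester relays the code over an independent channel (e.g., chat);
    # accounting recomputes it. Any tampering with the amount or beneficiary
    # changes the expected code, and the request is rejected.
    code = transfer_code("REQ-1042", "220000 EUR", "HU-supplier-account")
    assert verify_request("REQ-1042", "220000 EUR", "HU-supplier-account", code)
    assert not verify_request("REQ-1042", "999999 EUR", "attacker-account", code)

The design point is that the verification travels over a channel the attacker has not compromised and is tied to the specific request, so replaying an old code against altered payment details fails.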

As this threat becomes more mainstream, I expect the good guys to step up and devise effective countermeasures. Similar to anti-malware that can detect malicious code, specialized software should be able to detect deepfakes. In fact, several top tech players have already started collaborating in this area. However, one thing is certain: the human ear and eye can and will be fooled as AI becomes more advanced. Therefore, be aware that you cannot always trust what you hear or see.
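
As a rough illustration of the kind of analysis such detection software performs, the sketch below extracts spectral features (MFCCs) from voice clips and trains a toy classifier on clips labeled genuine versus synthetic. This is a minimal example under assumed file names and labels; production deepfake detectors use far richer features and models:

    # Toy sketch of audio deepfake detection: summarize each clip as mean MFCC
    # coefficients and fit a simple classifier. File names and labels below
    # are hypothetical placeholders.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression

    def extract_features(path: str) -> np.ndarray:
        """Load an audio clip and reduce it to a fixed-length MFCC summary."""
        audio, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)  # one vector per clip

    # Hypothetical labeled corpus: recordings known to be genuine or synthetic.
    genuine = ["ceo_call_1.wav", "ceo_call_2.wav"]
    synthetic = ["deepfake_1.wav", "deepfake_2.wav"]

    X = np.array([extract_features(p) for p in genuine + synthetic])
    y = np.array([0] * len(genuine) + [1] * len(synthetic))  # 1 = deepfake

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Score a new incoming call recording; values near 1 suggest a deepfake.
    prob = clf.predict_proba([extract_features("incoming_call.wav")])[0, 1]
    print(f"Probability the voice is synthetic: {prob:.2f}")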