Deepfake technology and its implications for the future of cyber-attacks

December 1, 2021  |  Zachary Curley

Introduction

Recently I received a call on my personal cellphone. The call started out as many do: with a slight pause after I answered. Initially I assumed this pause was caused by whatever auto-dialer software the spammer was using to initiate the call before their text-to-speech software started talking about my car's extended warranty. Once the pause ended, however, I was surprised by a very human voice. She opened the conversation with her name and a simple greeting, which was closely followed by the pitch she was trained to give.

It was during my response to her greeting (a "How are you doing?" type question) that I noticed the issue: another slight pause. As soon as I started speaking, the noise on the other side of the phone went dead, as if a recording had been switched off. This was my first sign that I wasn't dealing with a run-of-the-mill telemarketer. When the recording (for that is what it turned out to be) continued with the next line of its pre-programmed speech, with no acknowledgement of my response, I knew I was dealing with a robot powered by technology that simulated a real voice.

What is a ‘Deepfake’?

While my initial example does not match every element of a deepfake, I am certain many who read this will be familiar with the experience. The use of human-like voices combined with auto-dialers, while a recent development, is not all that unusual in the world of spam calls. Deepfakes, however, take this concept to a whole new level.

Imagine receiving a call from your CEO, someone you have never personally met but have heard speak at a variety of town halls and in emailed video correspondence. The caller says they really appreciate your work, and wonders if you would do them a small favor. After a slight pause, they ask you to purchase some gift cards for an upcoming raffle from whatever local retailer is close to you. They assure you the company will reimburse you, and apologize for the inconvenience.

After you hang up the phone you pause for a moment and think, "Hey, didn't IT just send out a warning about being asked to purchase gift cards?" Of course they did, but they warned about unknown callers and suspicious emails, not personal calls from the CEO. To assuage your concern, you quickly search for the most recent town hall video your company sent out and confirm the voice you heard on the phone matches the CEO's. Satisfied, you pick up your wallet and head out of the office to purchase the requested gift cards.

Unfortunately, it turns out that the call you received wasn't from your CEO. It was generated by a machine learning algorithm (MLA) trained to mimic their way of speaking. This, put simply, is all a 'deepfake' is: a falsified (though legitimate-looking) video, sound clip, or picture, built from existing content and designed to deceive the viewer into believing it is authentic. Deepfakes may take many forms, and be used for many purposes, but the core concept remains the same.

After purchasing the gift cards, or creating a new user account for an employee, or completing whatever task the attacker requested, you are left holding the bag. Money is lost (either yours or the company’s), access is granted (to the attacker, or to whomever they sell the account to), and reputation is lost (or gained in the case of an attacker demoing their new technology). Regardless, the enemy has won. Despite the best efforts of the company’s IT department, attackers found a new way to crack the weakest link – the human element.

Phishing evolved

The famous phrase "believe nothing of what you hear, and only half of what you see" comes to mind. The things we hear, even when spoken by a trusted voice, cannot be believed. What we see, whether shared on social media or by a friend, is suspect. Much like the attacks of the past, deepfake-supported attacks rely on the implicit trust that people share with one another, whether they are employees, friends, or even family.

This isn't unusual, unexpected, or even negative. Our entire society depends, to some degree, on our ability to trust other people to accomplish certain tasks or do certain jobs. That trust is a cost of doing business we must accept. Unfortunately, it also exposes all our businesses to the risk of nefarious actors exploiting these relationships for their own gain.

As we have seen concepts like 'ransomware-as-a-service' evolve and grow, it is safe to assume that the use of deepfakes will only continue to proliferate within the industry. Even today it is possible to create a convincing fake with an hour or less of audio (depending on which tool is used). Given how active many prominent business leaders are on social media platforms, at town halls, and at other speaking engagements, it is not unreasonable to expect attackers to harvest the necessary data from publicly available sources.

What you can do

As always, my first answer is to train, train, train, and then train some more. Employees are the weakest link in any chain, regardless of whether they work in IT, the mailroom, or the executive office. If an attacker can exploit human nature to gain access, that will likely be the easiest avenue available. It's important that training include more than just a series of videos and a test; organizations also have to leverage active-participation tools such as social engineering campaigns.

My second answer is to empower your employees to act on the training you give them. Many social engineering attacks rely on the presumed authority of the requester, or some form of threat of punishment to obtain compliance. It is critical that employees are empowered to say “no” or to question a request that seems unusual, even if it comes from the CEO.

Third, define what 'appropriate' business looks like. Strong documentation covering clear communication channels, employee expectations, and current operations can greatly reduce attackers' opportunity to exploit the human element so effectively. There should be defined processes for employees' duties, what they can expect to be asked to do, and what constitutes unusual or malicious behavior.
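
To make this concrete, here is a minimal, hypothetical sketch of what such a defined process might look like in code: a policy check that flags high-risk requests arriving over channels that cannot be verified. The action and channel names are illustrative assumptions, not drawn from any real product or the original article.

```python
# Hypothetical policy sketch: high-risk actions requested over an
# unverified channel should be confirmed out-of-band before acting.
# All category names below are illustrative assumptions.

HIGH_RISK_ACTIONS = {"gift_card_purchase", "wire_transfer", "account_creation"}
VERIFIED_CHANNELS = {"ticketing_system", "in_person", "signed_email"}

def needs_out_of_band_verification(action: str, channel: str) -> bool:
    """Return True if the request should be confirmed through a second,
    pre-agreed channel before the employee acts on it."""
    return action in HIGH_RISK_ACTIONS and channel not in VERIFIED_CHANNELS

# A phone call asking for gift cards gets flagged, even if the voice
# sounds exactly like the CEO; the same request through an approved
# channel does not.
print(needs_out_of_band_verification("gift_card_purchase", "phone_call"))       # True
print(needs_out_of_band_verification("gift_card_purchase", "ticketing_system")) # False
```

The point is not the code itself but the design choice it encodes: the decision to verify is based on the nature of the request and the channel it arrived on, never on how convincing the requester sounds.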

Conclusion

With every passing day attackers grow more intelligent, creative, and technologically advanced. These groups outpace even the most tech-friendly, innovative startups when it comes to adopting new technology and trying new strategies, and that is before considering groups that serve as government agents and have more advanced training or better funding. Competing against these forces is, therefore, no easy task.

Security teams and their companies have to stay abreast of the ever-changing landscape and always be on guard for new attacks. Even subjects the company and its employees are well versed in may become a source of breaches as hackers change how they execute their attacks. Taking a proactive and informed approach to managing cybersecurity risk, and building a flexible program that can meet the changing threat landscape, are critical to warding off attacks.
