Could You Spot a Deep Fake Video?

GUEST BLOGGER

Mason Wilder, CFE
ACFE Research Specialist

In the summer of 2017, researchers from the University of Washington presented technology they developed that combined realistic lip-synced video of former U.S. President Barack Obama with preexisting video and audio clips. In doing so, they helped usher in deep fakes, a phenomenon that has gone on to cause consternation among national security figures and, generally, creep a lot of people out.

The term deep fakes is a portmanteau of deep learning, the category of artificial intelligence (AI) technology used to create the videos, and a description of the end product — fake. By combining machine learning AI and facial-mapping software, it is now possible to manipulate sound, images and video to create convincing clips of people doing or saying things they never have and probably never would — certainly not on camera, anyway.
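
To make that mechanism a bit more concrete, the sketch below shows the shared-encoder, per-identity-decoder autoencoder design commonly described behind early face-swap deep fakes. It is illustrative only: the layer sizes, image dimensions and training details are assumptions made for the example, not the University of Washington researchers' method or any particular tool's code.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# commonly described behind early face-swap deep fakes. Illustrative only;
# shapes, sizes and training details are assumptions, not any tool's actual code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's face crops
decoder_b = Decoder()  # trained only on person B's face crops

# Training (not shown) minimizes reconstruction loss separately for each identity:
#   loss_a = MSE(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = MSE(decoder_b(encoder(faces_b)), faces_b)
# Because the encoder is shared, it learns identity-agnostic features such as
# pose, expression and lighting. The "swap" is then simple: encode a frame of
# person A, but decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)      # stand-in for an aligned face crop
swapped = decoder_b(encoder(frame_of_a))   # person B's face with A's expression
print(swapped.shape)                       # torch.Size([1, 3, 64, 64])
```

The key point for fraud examiners is the last two lines: once the two decoders are trained, producing a swapped frame is a single forward pass, which is why the technique scales so easily to full videos.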

Immediate reactions to the university’s research and early examples of the technology’s capabilities centered on the potential implications of deep fakes, from reputational damage to massive civil unrest. In 2019, most people have some experience with the effects of false narratives on large audiences through the all-too-ubiquitous phenomenon of “fake news.” It isn’t difficult to imagine how much more damaging deep fakes could be than fabricated articles or misleading headlines, which have already proven plenty effective.

Initial applications of deep-fake technology manipulated existing pornographic videos, superimposing celebrities’ faces to make the clips appear to be real footage of the celebrities. Such videos could provide the basis for blackmail or extortion schemes against public or private figures.

Perhaps the most viewed deep fake also starred Barack Obama, following in the footsteps of the original video produced by the University of Washington researchers. However, the video, produced by renowned actor/director/producer Jordan Peele, did not use audio captured from previous Obama speeches. Instead, Peele impersonated Obama’s voice, and the clip served as a public-service announcement to spread awareness of the potential for deep fakes to be used as disinformation tools.

Although most deep fakes feature celebrities or public figures, because of the amount of available video for the AI programs to learn from, the technology could eventually be turned on average citizens, creating more opportunity for fraud. Imagine the implications for fraud when novices can easily access increasingly sophisticated deep-fake programs.

Take, for example, the business email compromise (BEC) scheme. Currently, sophisticated cyber fraudsters research an executive’s mannerisms and tendencies in emails, then copy them in a phishing email targeting someone authorized to issue payments on behalf of the company. Consider how much more effective a phone call or voicemail could be if it recreated the executive’s voice with AI and issued urgent instructions to send a large wire transfer to a specific bank account before the close of business.

Beyond enhancing BEC scams, the same types of technology used to create deep fakes could be used to circumvent voice-recognition biometric authentication controls or to generate false confessions that divert attention away from the real perpetrator of a fraud, to name a few more possibilities.

Preventive solutions that use similar technology to detect deep fakes are in development, and legislation criminalizing deep fakes has been proposed. Unfortunately, the bad guys are ahead of the good guys in this realm so far.
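
For readers curious what that detection work often looks like in practice, one common approach in the research literature is to treat deep-fake detection as frame-level binary classification of face crops. The sketch below illustrates that idea using PyTorch and torchvision; the model choice, data shapes and threshold are assumptions for the example, not any specific product’s implementation.

```python
# Minimal sketch of frame-level deep-fake detection as binary classification.
# Assumes PyTorch/torchvision; model choice, shapes and threshold are illustrative.
import torch
import torch.nn as nn
from torchvision import models

# Reuse a standard image backbone and retrain its final layer to predict
# "real" vs. "fake" for an aligned face crop taken from a video frame.
model = models.resnet18()                      # randomly initialized backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: 0 = real, 1 = fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(face_crops, labels):
    """face_crops: (N, 3, 224, 224) tensor of face crops; labels: (N,) 0/1 tensor."""
    model.train()
    optimizer.zero_grad()
    logits = model(face_crops)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def video_is_fake(frame_crops, threshold=0.5):
    """Average per-frame fake probabilities over a clip and compare to a threshold."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frame_crops), dim=1)[:, 1]  # P(fake) per frame
    return probs.mean().item() > threshold

# Dummy batch to show the shapes involved; real training data would be labeled
# face crops extracted from genuine and manipulated videos.
dummy_crops = torch.rand(4, 3, 224, 224)
dummy_labels = torch.tensor([0, 1, 0, 1])
print(train_step(dummy_crops, dummy_labels))
print(video_is_fake(dummy_crops))
```

Averaging per-frame scores over a whole clip, as in the sketch, is one simple way to smooth out individual frames that the manipulation happened to render convincingly.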