Deepfakes, or synthetic media that uses AI to manipulate a person’s likeness so they appear to be someone else, can be entertaining. After all, watching Ross Marquand morph into 11 different actors in 3 minutes is impressive!
However, as most are aware, deepfakes have a dark side: videos that threaten democracy by manipulating the public for political gain, ruin reputations with revenge porn, or spread damaging misinformation. In general, malicious deepfakes pose two issues. First, they allow a person to pose as someone they are not. Second, they allow criminals to deny wrongdoing by claiming a genuine video is fake.
Unfortunately, the fintech industry is not insulated from deepfake headaches of its own.
The problem
There are two types of deepfakes: audio and video. Both can cause multiple problems within financial services. Here are the weak spots that deepfakes threaten:
- Fraudulent onboarding, such as a criminal posing as someone else or creating a new, synthetic identity
- Fraudulent payment authorizations and transfers
- Impersonation of business leaders for insider trading scams or tricking employees into taking nefarious actions
These examples aren’t just potential threats. Last March, a voice-based deepfake was used to impersonate the CEO of a U.K.-based energy firm. The fraudster called one of the CEO’s employees and convinced him to transfer $243,000 to a supplier based in Hungary. The money was then moved to a bank account in Mexico, and the thief has still not been caught.
Given the wide variety of fraud opportunities, identity verification company iProov recently surveyed 105 cybersecurity decision-makers at U.K.-based financial institutions. The company, which won Best of Show at FinovateEurope earlier this month, detailed the results in a report.
According to the findings, 13% of firms surveyed had never even heard of the term “deepfake.” And while 31% of respondents had no plans to combat deepfakes or were not sure if their organizations had protective measures in place, 28% had already implemented such measures. The survey also reported that 4% of organizations said that deepfakes pose no threat whatsoever to their company. However, the largest group of respondents, 40%, said that deepfakes pose a “slight threat.”
The solution
The fintech industry is rife with security firms, such as iProov, that use AI to combat both video and audio deepfakes with anti-spoofing technologies. Many security companies also offer liveness detection, which distinguishes a live person from an artificial representation of an actual client. Liveness detection plays a major role in catching identity spoofing during new client onboarding, when cybercriminals may attempt to use a stolen driver’s license along with a mask created from a photo of the license holder to set up a fraudulent account. Financial services organizations can also use liveness detection to thwart fraudulent login attempts on technology that uses facial recognition.
Fraudsters, by definition, show complete disregard for regulations. Nevertheless, lawmakers are making efforts to crack down on the technology. In June, New York congresswoman Yvette Clarke introduced the Deepfakes Accountability Act in the House, which would require video creators to disclose if a video was altered and would allow victims to sue. As TechCrunch points out, the act would be difficult to enforce, but at least it’s a start.