W2 Business Development Executive Lynsey Hoxha investigates the rise of deepfakes and the risks that they pose. How can businesses identify deepfakes and protect themselves from the fraud and security risks that they create?

Deepfakes are on the rise. They are yet another tool that fraudsters can use to infiltrate businesses and commit identity fraud, exploiting a new and unfamiliar technology. Global corporations such as Microsoft and Intel have invested in deepfake detection tools, but how can technology be used to counter this dangerous new fraud threat?

What is a Deepfake?

Deepfake videos and pictures allow an individual to manipulate certain facial expressions; the technology essentially lets someone place another person's face over their own. This is clearly a big concern, as fraudsters can capitalise on it for identity fraud, fake news, and hoaxes, with celebrities being the most vulnerable to the threat.

How do Deepfakes work?

Deepfakes work by utilising complex technology called artificial neural networks. These networks learn patterns from the thousands of images fed into them, which enables someone to change their appearance and use those learned patterns to create videos that appear to show someone else.
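To make the idea concrete, classic face-swap deepfakes are often described as one shared encoder with a separate decoder per identity: the shared encoder learns a common "face code", and decoding a code with the other person's decoder produces the swap. The following is a deliberately tiny, linear sketch of that idea, using synthetic 16-dimensional "faces" instead of real images; the sizes, learning rate, and data are purely illustrative.

```python
import numpy as np

# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# face-swap deepfakes. "Faces" here are just 16-dimensional vectors drawn
# around two different means, standing in for photos of persons A and B.

rng = np.random.default_rng(0)
DIM, CODE = 16, 4

def make_faces(mean, n=200):
    # Stand-in for a dataset of thousands of images of one person.
    return mean + 0.1 * rng.standard_normal((n, DIM))

faces_a = make_faces(rng.standard_normal(DIM))
faces_b = make_faces(rng.standard_normal(DIM))

# One shared encoder E, one decoder per identity (linear, for brevity).
E = 0.1 * rng.standard_normal((CODE, DIM))
D_a = 0.1 * rng.standard_normal((DIM, CODE))
D_b = 0.1 * rng.standard_normal((DIM, CODE))

def train_step(X, E, D, lr=0.02):
    # One gradient-descent step on the reconstruction loss ||D E x - x||^2.
    Z = X @ E.T                      # encode faces into the shared code
    err = Z @ D.T - X                # decode and compare to the originals
    grad_D = err.T @ Z / len(X)
    grad_E = D.T @ err.T @ X / len(X)
    D -= lr * grad_D                 # updates happen in place
    E -= lr * grad_E
    return float(np.mean(err ** 2))

first = last = None
for step in range(2000):
    la = train_step(faces_a, E, D_a)  # both identities share the encoder E
    lb = train_step(faces_b, E, D_b)
    if step == 0:
        first = la + lb
    last = la + lb

# The "swap": encode a face of person A, decode it with B's decoder.
swapped = faces_a[:1] @ E.T @ D_b.T
print(f"reconstruction loss: {first:.3f} -> {last:.3f}")
```

Real systems use deep convolutional networks and far more data, but the structure is the same: because the encoder is shared, a code extracted from one face can be rendered through the other identity's decoder.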

What are the benefits, if any?

Although deepfakes clearly pose significant risks to security and to individuals' reputations, they can be, and have been, used positively. Some retailers have used them to allow consumers to 'virtually' try on clothes and see what they would look like in them without any physical interaction. This was particularly beneficial during the COVID-19 pandemic, when most retailers globally had to close their brick-and-mortar stores.

Most recognisably, they have been utilised in the movie industry, de-aging movie stars and allowing actors who have since passed away to still appear in movies with stunning quality. Star Wars and The Irishman are just two examples where deepfake technology has been used to enhance the experience and quality of a film. Despite these positive uses, however, the question remains whether they outweigh all of the risks that come with the development of deepfakes.

What are the risks?

Despite deepfakes being used to improve experiences in retail and film, they pose multiple risks to society. Deepfakes have been used heavily to imitate celebrities, most likely because so many images and videos of them are available to feed into the artificial neural networks from which deepfakes gain their intelligence. Celebrity imitation is clearly a huge risk to celebs' reputations, as someone not versed in the technology would not be familiar with the tell-tale signs.

Reputation is one thing, but with identity fraud on the rise in general (costing Americans a total of about $56 billion last year, with about 49 million consumers falling victim), deepfakes are just another tool in the fraudster's already large arsenal for identity fraud and money laundering. As mentioned above, companies such as Microsoft and Intel have invested in several deepfake detection tools to help ensure the technology can't be used for fraudulent purposes, but the technology is still young and growing.

Deepfakes use powerful, ever-growing technology, and more research and focus are needed in the area: to understand how the technology works, to allow regulators to impose strict sanctions on those attempting to misuse it, and to improve services such as document verification. Combined with facial comparison, such checks dramatically decrease the likelihood that a deepfake will get through an onboarding journey and cause harm to a business.

How can Deepfakes be stopped?

At the moment, deepfake technology is still in its infancy, and its complexity means there is still a way to go in identifying how deepfakes can be stopped. Identity verification and liveness technology is, however, the best way that banks and fintechs can try to identify fraudsters attempting to infiltrate their business. Adding document verification alongside facial comparison provides an extra layer of authentication: even if complex deepfake technology is being used, a government-issued document must also be submitted to pass the onboarding check, which is likely to deter fraudsters from targeting that business.
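As an illustration only, the layered onboarding logic described above can be sketched as a simple decision function. The check names and thresholds below are hypothetical, not a real vendor API; the point is that every layer must pass independently.

```python
from dataclasses import dataclass

# Hypothetical sketch of a layered onboarding check: liveness, facial
# comparison against the document photo, and document verification must
# ALL pass. The thresholds are made up for illustration.

@dataclass
class OnboardingChecks:
    liveness_score: float    # 0-1: is this a live person, not a replayed video?
    face_match_score: float  # 0-1: selfie vs government-issued document photo
    document_valid: bool     # did the ID document pass verification?

def approve(checks: OnboardingChecks,
            liveness_threshold: float = 0.9,
            face_threshold: float = 0.85) -> bool:
    # Every layer must pass: a convincing deepfake video alone is not
    # enough without a matching, valid government-issued document.
    return (checks.liveness_score >= liveness_threshold
            and checks.face_match_score >= face_threshold
            and checks.document_valid)

print(approve(OnboardingChecks(0.97, 0.92, True)))    # genuine applicant: True
print(approve(OnboardingChecks(0.95, 0.91, False)))   # no valid ID: False
```

The design point is that the layers are conjunctive: defeating the facial comparison with a deepfake still leaves the liveness and document hurdles standing.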

It is now more important than ever that regulated businesses use the right measures to screen their customers and prevent both reputational and monetary loss, whether from fraudulent activity or from fines relating to ill-prepared compliance procedures. Technology such as deepfakes is advanced and can easily hurt businesses, but with the correct procedures in place to screen and monitor customers, deepfakes could become just another risk to monitor rather than a source of harm, because fraudsters are identified and stopped at the door.

For more information on how to effectively onboard customers and remain compliant, contact us here.
