Deepfakes: The Most Serious Threat in AI Crime

AI technology is now routinely used in autonomous vehicles, medical diagnosis, and proving mathematical theorems. The promises of artificial intelligence are great, but so is the potential for the technology to be used for criminal activity. Here, we explore the most serious threats in AI crime.

Artificial Intelligence

Artificial intelligence is defined as intelligence demonstrated by machines, mimicking the ‘cognitive’ functions humans associate with the human mind, such as learning and problem solving. AI powers everyday technologies like search engines, self-driving cars, and facial recognition apps.

Artificial intelligence technologies have become increasingly integral to the world we live in. A surge in AI development has been made possible by the sudden availability of large amounts of data and the corresponding development and wide availability of computer systems that can process data faster and more accurately than humans can. 

For all the positive potential of AI, there is also a risk of the technology being used for criminal activity. Last year, a University College London study identified a number of ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern, based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out, and how difficult they would be to stop. Deepfakes were ranked as the most worrying application of artificial intelligence for crime or terrorism; the top six threats are listed below:

  1. Deepfakes
  2. Driverless Vehicles as Weapons
  3. Tailored Phishing
  4. Disrupting AI-Controlled Systems
  5. Large-Scale Blackmail
  6. AI-Authored Fake News

The Most Serious Threat in AI Crime

Deepfake technology is no longer limited to dark corners of the internet. Apps that allow anyone to convincingly replace the faces of pop stars and celebrities with their own, even in videos, have become commonplace with the help of social media.

However, fake audio and video content has been ranked as a serious threat because of the growing number of ways it could be used in crime, from discrediting a public figure to impersonating someone to access their bank account. The rise of deepfake and other synthetic AI-enabled technology means it is becoming easier for fraudsters to generate realistic images or live video of people, attach them to synthetic identities, and commit serious fraud.

Synthetic identity fraud that uses deepfaked or synthetic images and videos is already growing; a recent study found that over three-quarters of cyber security decision makers are worried about the potential for deepfake technology to be used fraudulently, with online payments and personal banking services thought to be most at risk. Banks and FinTech groups have begun setting up partnerships to combat the use of doctored video and audio content as fraudsters turn to synthetic identities to open new accounts.

These identities can be completely fake or a unique amalgamation of stolen or modified information, whether hacked from a database, phished from an unsuspecting person, or bought on the dark web. Because the impact on those whose PII (personally identifiable information) has been used is limited, this kind of fraud often goes unnoticed for longer than traditional identity fraud.

How We Can Help

AI technology is also being leveraged for identity verification and fraud prevention as machine learning and deep learning make it possible to authenticate, verify and accurately process the identities of customers at scale. 

Acuant’s suite of global solutions includes our biometric solution, BioMatch, which uses advanced AI to provide facial recognition matching and combat the use of deepfakes and other presentation attacks at the point of sign-up.

Customers are asked to take and upload a photo of themselves and a photo of their ID document, which are analysed using forensic and biometric checks. AI algorithms then analyse and cross-reference the submissions, assessing them for potential forgery or alteration and confirming that the selfie shows a ‘live’ person. A facial recognition matching score and a pass/fail decision are returned to the client, alongside the liveness results.
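
To make the face-matching step concrete, here is a minimal sketch using the open-source face_recognition library. It illustrates the general technique only – it is not our BioMatch implementation, it omits the liveness check, and the file names and 0.6 distance threshold are assumptions.

```python
# A minimal sketch of a face-matching step like the one described above,
# built on the open-source face_recognition library. Illustration only:
# not BioMatch, and no liveness check is performed here.
import face_recognition

# Load the customer's selfie and the portrait from their ID document.
selfie = face_recognition.load_image_file("selfie.jpg")            # assumed file
id_portrait = face_recognition.load_image_file("id_portrait.jpg")  # assumed file

# Encode each face as a 128-dimensional embedding.
selfie_encodings = face_recognition.face_encodings(selfie)
id_encodings = face_recognition.face_encodings(id_portrait)

if not selfie_encodings or not id_encodings:
    print("fail: no face detected in one of the images")
else:
    # Distance between the two embeddings; lower means more similar.
    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    score = 1.0 - distance  # crude similarity score, for illustration only
    decision = "pass" if distance <= 0.6 else "fail"  # 0.6 is the library's default tolerance
    print(f"match score: {score:.2f}, decision: {decision}")
```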

Our global ID, KYC & AML platform, Sodium, serves our complete suite of solutions via a single API. This enables all data sources to be cross-referenced, delivering truly enhanced customer due diligence.
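
To give a feel for what a single-API integration looks like, here is a purely hypothetical sketch; the endpoint, field names, and response shape are illustrative assumptions rather than Sodium’s actual interface.

```python
# A purely hypothetical sketch of a single-API verification request.
# The endpoint, field names, and response shape are illustrative
# assumptions, not Sodium's actual interface.
import base64

import requests

API_URL = "https://api.example.com/v1/verify"  # hypothetical endpoint


def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


payload = {
    "document_image": encode_image("id_portrait.jpg"),  # assumed file
    "selfie_image": encode_image("selfie.jpg"),         # assumed file
    # One request can ask for several checks, whose results are cross-referenced.
    "checks": ["document_forensics", "face_match", "liveness", "aml_screening"],
}

response = requests.post(API_URL, json=payload, timeout=30)
response.raise_for_status()
result = response.json()

print(result.get("decision"), result.get("face_match_score"))
```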

Utilise a single element or multiple processes – it’s entirely up to you. Learn more about how we can automate and simplify your verification processes and help you better understand your customers. One simple integration; a flexible 360° solution that is scalable and secure.

Questions?

Let's Talk