
Safeguarding Digital Identity: The Growing Need for Enhanced Security

Identity theft is a pervasive issue that primarily affects consumers, but its potential risks and impact on businesses are often overlooked. Stolen identities can give cybercriminals a gateway to infiltrate corporate networks, access sensitive information and potentially unleash ransomware or extortion attacks. We sat down with Stuart Wells, CTO of Jumio, to discuss the tactics cybercriminals employ to steal identities, the financial gains they seek, and the growing role of generative AI tools in facilitating identity theft. We also explore the crucial role of robust identity verification and ongoing authentication in safeguarding businesses from this growing threat.

Stuart Wells, CTO, Jumio

Identity theft is something that usually gets associated with consumers, not businesses. What’s the risk of identity theft in the context of enterprises?

Identity theft remains one of the top threats to consumers, and for good reason, but the impact of stolen identities is often overlooked by businesses. When an individual’s identity gets compromised, that creates a potential avenue for an attacker to infiltrate the victim’s employer and the company’s corresponding networks. For example, a stolen likeness could be used in the recruiting process to gain employment and subsequent access to a corporate email account or database with sensitive company information. The value of business information — in addition to the potential for ransomware or extortion — makes enterprises an enticing target for cybercriminals. Without proper identity verification checks in place for employees, an unauthorized user could gain access to company data and go completely undetected, exposing the organization to breaches, malware and more.

What kinds of tactics are cybercriminals using to steal identities? What do they attempt to gain through these stolen IDs?

Traditional social engineering techniques like phishing still play a large role in identity theft and business email compromise, but one of the more noteworthy trends is the rise of sophisticated spoofing and deepfake technology. Fraudsters have grown increasingly creative in their attempts to beat facial recognition tools and have found ways to steal identities that don’t involve typical personally identifiable information (PII) like a Social Security, credit card or bank account number. A criminal may not even need a username or password if they can impersonate someone else using a deepfake image or video.

As with most cyberattacks, the end game for identity thieves usually revolves around financial gain. Stolen identities can be the gateway to breaching or extorting a corporation, which often proves far more lucrative than pursuing an individual’s bank or credit card account.

With the rise of generative AI tools, how are fraudsters leveraging this technology to commit identity theft?

When it comes to generative AI, one of the biggest concerns across the industry is its scaling capability. Deepfakes are not new to the scene, but generative AI is enabling the creation and deployment of deepfake and related spoofing technologies at a rate we’ve never seen before. Plus, the technology is expanding beyond images and videos into audio deepfakes. Phishing attacks that previously consisted of phony emails from supposed coworkers or family members are turning into legitimate-sounding phone calls and voicemails that imitate the people we interact with on a regular basis. As these attack vectors continue to evolve, fraudsters will increasingly turn to these automated tools to compromise identities.

What safeguards do enterprises need to have in place to combat identity theft?


A robust identity verification strategy is key for enterprises looking to combat identity theft. Verification is the only way for organizations to truly know that business users are who they claim to be. With the rise of more sophisticated identity theft tactics, the strongest verification methods use AI/ML and biometrics to automate and streamline the identification process. The goal for businesses should be to keep fraudsters out before they even have a chance to infiltrate their networks. By establishing a baseline biometric template for every onboarding user, organizations can confirm at each sign-in attempt whether or not the user is who they claim to be.

As fraudsters find more workarounds to biometric verification, it’s important for organizations’ security protocols to evolve along with them. Advanced liveness detection can help pinpoint sophisticated spoofing attacks; techniques based on eye motion, face motion and active illumination significantly curtail them. For deepfakes, custom ML models are needed to spot irregularities such as poor synchronization between face movement and voice, unnatural earlobe and lower-jaw movement, and fringing effects around clothing and hair, among other detection signals. It’s very difficult for cybercriminals to sync up synthetic faces and voices perfectly, or to hide the artifacts around hair and edges that are generated during deepfake creation, which makes detecting those components all the more important for businesses.
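To make the verification flow described above concrete, here is a minimal sketch, assuming a biometric SDK has already produced face-embedding vectors and a liveness verdict for each selfie. The function names, the 512-dimensional embeddings and the 0.80 similarity threshold are illustrative assumptions, not a description of Jumio’s implementation.

```python
import numpy as np

# Illustrative threshold; a real deployment tunes this against the
# embedding model's false-accept / false-reject trade-off.
SIMILARITY_THRESHOLD = 0.80


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_sign_in(baseline_template: np.ndarray,
                   candidate_embedding: np.ndarray,
                   liveness_passed: bool) -> bool:
    """Accept a sign-in only if the selfie passed liveness checks and its
    embedding matches the template captured at onboarding."""
    if not liveness_passed:
        # Reject presentation attacks (printed photos, screen replays,
        # deepfake streams) before any face comparison happens.
        return False
    return cosine_similarity(baseline_template, candidate_embedding) >= SIMILARITY_THRESHOLD


if __name__ == "__main__":
    # Toy usage: random vectors stand in for embeddings produced by a
    # face-recognition model at onboarding and at sign-in.
    rng = np.random.default_rng(0)
    baseline = rng.normal(size=512)
    same_person = baseline + rng.normal(scale=0.1, size=512)  # capture-to-capture drift
    impostor = rng.normal(size=512)                           # unrelated face
    print(verify_sign_in(baseline, same_person, liveness_passed=True))   # True
    print(verify_sign_in(baseline, impostor, liveness_passed=True))      # False
    print(verify_sign_in(baseline, same_person, liveness_passed=False))  # False: spoof blocked
```

The key point in this sketch is the order of operations: liveness is checked before any face comparison, so a replayed photo or deepfake stream is rejected even if its embedding would otherwise match the baseline template.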


How does the authentication of an identity differ from verification, and what role does authentication play in protecting identities?

The need to know someone’s identity doesn’t stop at onboarding, and this is where authentication comes into play.


Once an identity is verified, it must be authenticated each time someone attempts to access a system or resource. While authentication has historically relied on validating something a user knows (e.g., a security question or password) or something a user has (e.g., an ID badge or cryptographic key), breaches regularly make this private data publicly available. The most secure systems today also require something a user is, such as a high-resolution selfie that matches the verified identity. This process of ongoing authentication ensures that bad actors don’t slip through the cracks posing as legitimate business users.
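As a rough illustration of how ongoing authentication layers “something a user is” on top of the traditional factors, the sketch below gates access to sensitive resources on a fresh biometric match. The AccessAttempt fields and the authorize policy are hypothetical, not a specific product’s API.

```python
from dataclasses import dataclass


@dataclass
class AccessAttempt:
    password_ok: bool        # something the user knows
    hardware_token_ok: bool  # something the user has
    selfie_match_ok: bool    # something the user is (fresh live selfie vs. stored template)


def authorize(attempt: AccessAttempt, sensitive_resource: bool) -> bool:
    """Grant access only when the knowledge and possession factors hold and,
    for sensitive resources, a fresh biometric match also succeeds."""
    if not (attempt.password_ok and attempt.hardware_token_ok):
        return False
    if sensitive_resource and not attempt.selfie_match_ok:
        # Leaked passwords or cloned tokens alone are not enough:
        # the biometric factor must be re-proven at access time.
        return False
    return True


# Stolen credentials without a matching live selfie are refused.
print(authorize(AccessAttempt(True, True, False), sensitive_resource=True))  # False
print(authorize(AccessAttempt(True, True, True), sensitive_resource=True))   # True
```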
