
Jumio CEO: Identity Landscape Will Face a Seismic Upheaval in 2024

Updated: Jan 4

Jumio

Jumio's top executives share their visionary predictions for 2024, highlighting significant shifts in online safety, identity management, AI-enabled fraud detection, privacy, deepfake impact, and the rise of 'influencer bots' on social media.


Robert Prigge, CEO


Promoting online safety will be a top priority for global businesses in 2024


Seventy-three percent of global consumers believe stronger identity verification will help prevent underage access to social media, while 77% say it will help prevent minors from accessing online gambling and gaming. The recent passing of the UK’s Online Safety Bill is certainly a step in this direction, but in 2024, we will see a stronger push from governments and businesses around the world to promote the safety of minors online through better verification methods.

The focus has been on age verification, but as generative AI continues to increase in sophistication, more robust identity verification will be essential to ensure children are protected. Adding to the challenge will be the tricky balance of mitigating privacy risks and preserving a good user experience.

Beyond age verification, we can also expect to see more organizations deploying real-time content monitoring and filtering, enhancing their parental controls, and initiating education and awareness campaigns to foster a safer digital environment in the year ahead.

The identity landscape will face a seismic upheaval in 2024


The impending recession, budget cuts, business closures, increasing M&A activity and more caused sweeping changes to the identity space in 2023 that are still playing out as we near the holiday season. In 2024, we can expect a flight to stability and consolidation in this market as vendors are acquired, forced out of business or pushed to streamline operations. As a result, identity verification companies will be forced to innovate or risk losing their business.

Philipp Pointner, Chief of Digital Identity


To combat the surge of AI-enabled fraud, companies will turn to connected-data AI to detect more global fraud


Consumers reported fraud losses of nearly $8.8 billion in 2022 — a 30% increase from the year prior. With phishing, vishing, deepfakes and other scams becoming increasingly sophisticated, perpetrating fraud at scale is easier than ever. Jumio analysis also found that 25% of fraud is interconnected, either conducted by fraud rings or by people using the same credentials or information to open new online accounts.

As online fraud and cybercrime escalate, we will in turn see a surge in cybersecurity companies investing heavily in connected-data AI to look beyond individual identity transactions and company boundaries and spot trends across the globe. This approach will help stop identity-based attacks as well as contribute to mitigating ransomware, phishing and more.
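To make the connected-data idea concrete, here is a minimal illustrative sketch (not Jumio's actual system) of how linked fraud can be surfaced: accounts that reuse the same device fingerprint or payment credential are clustered together with a simple union-find, and any cluster of more than one account is flagged for review. All account names and attribute values below are hypothetical.

```python
from collections import defaultdict

def cluster_linked_accounts(signups):
    """Group accounts that share any onboarding attribute (e.g. a device
    fingerprint or payment credential) using union-find -- a toy stand-in
    for connected-data analysis across identity transactions."""
    parent = {acct: acct for acct in signups}

    def find(x):
        # Path-halving find: walk to the root, compressing as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Index accounts by each attribute value they presented.
    seen = defaultdict(list)
    for acct, attrs in signups.items():
        for value in attrs:
            seen[value].append(acct)

    # Any two accounts sharing a value end up in the same cluster.
    for accounts in seen.values():
        for other in accounts[1:]:
            union(accounts[0], other)

    clusters = defaultdict(set)
    for acct in signups:
        clusters[find(acct)].add(acct)

    # Clusters larger than one account suggest credential reuse / fraud rings.
    return [c for c in clusters.values() if len(c) > 1]

# Hypothetical signups: account -> attributes observed at onboarding.
signups = {
    "acct1": {"device:aaa", "card:1111"},
    "acct2": {"device:aaa", "card:2222"},   # shares a device with acct1
    "acct3": {"device:bbb", "card:2222"},   # shares a card with acct2
    "acct4": {"device:ccc", "card:3333"},   # unlinked
}
flagged = cluster_linked_accounts(signups)
```

Here `acct1`, `acct2` and `acct3` are flagged as one linked cluster even though no single attribute connects all three directly, which is the kind of cross-transaction pattern individual account checks would miss.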

Veronica Torres, Worldwide Privacy and Regulatory Counsel


2024 will be the year of AI transparency


The widespread adoption of generative AI in 2023 highlighted the need for more robust measures that can regulate the way organizations use these new technologies. The Biden Administration’s AI Executive Order and the EU AI Act paved the way for increased transparency and accountability in AI development and deployment. In 2024, we will see a more distinct move for companies to increase transparency and accountability around AI and automated decision-making. Organizations will be compelled to provide more clarity about their AI practices, empowering consumers to make informed decisions about their data sharing.

Stuart Wells, CTO


By the end of 2024, 95% of consumers in the U.S. will have been exposed to a deepfake


Every company and consumer is jumping on the AI bandwagon, and fraudsters are no exception. Cybercriminals have already found ways to cheat the system: earlier in 2023, they were caught bypassing ChatGPT's anti-abuse restrictions to generate and review malicious code. Now, ChatGPT is fully connected to the internet and can generate images, a recipe for the perfect deepfake.

In 2023, 52% of consumers believed they could detect a deepfake video, reflecting widespread overconfidence. Deepfakes have become highly sophisticated and are practically impossible to detect with the naked eye, and generative AI now makes their creation easier than ever. Misinformation is already spreading like wildfire, and the problem will only intensify with the upcoming elections. By the end of 2024, the vast majority of U.S. consumers will have been exposed to a deepfake, whether they recognized it as synthetic media or not.

Bala Kumar, CPO


The number of ‘influencer bots’ on social media will exceed the number of real human accounts


Nearly half of all internet traffic now comes from bots, putting it roughly on par with human-operated traffic. And while catfishing has been around for a while, bot-operated social media accounts give the word new meaning. Many social media users have already come across accounts that are entirely bot-operated, posing as influencers with seemingly realistic posts and comments, and it is becoming increasingly difficult to discern real accounts from fake ones. Meanwhile, businesses are paying these 'influencers' to promote their products without knowing the accounts are fraudulent. The onus will be on social media platforms to deploy identity verification tools with advanced liveness detection technologies to identify bot-operated accounts.


