
GetReal Security Taps Tom Cross to Lead Threat Research as AI Deepfakes Redefine the Cyber Battlefield

In the escalating arms race between synthetic deception and digital defense, GetReal Security is assembling a brain trust to stay ahead of malicious generative AI. The cybersecurity firm, known for its aggressive stance against AI-powered social engineering, has named veteran researcher Tom Cross as its new Head of Threat Research—fortifying a team already stacked with heavyweight talent in digital forensics and adversarial AI.


The move comes at a critical time. Enterprises and government agencies are increasingly under siege from convincing fake videos, audio clips, and impersonations, crafted not by expert hackers but with GenAI tools now accessible to almost anyone with an internet connection. In this new landscape, cyberattacks are no longer just about code; they’re about content.


“GenAI has given threat actors powerful new capabilities to create compelling social engineering attacks that target both enterprises and consumers,” said Cross. “AI is enabling threat actors with limited technical skills to ‘vibe code’ their way to fully automated attacks at scale, with highly personalized and targeted deceptions. This truly is a new frontier in cybersecurity, and we’ll need to combine new technical disciplines in order to combat these threats.”


Cross joins an elite multidisciplinary team that includes co-founder and forensic image analysis pioneer Dr. Hany Farid, as well as Chief Investigative Officer Emmanuelle Saliba, a former investigative reporter and one of the earliest experts in social media verification. Together, they’re spearheading a threat research division that marries machine learning with open-source intelligence (OSINT), digital forensics, and real-time incident response.


“These new threats are multi-faceted and constantly evolving, which is why each incident requires a multidisciplinary approach,” said Saliba. “We combine experts and skills from forensic research, machine learning, human investigation and open-source intelligence to each case that we take on.”


This convergence of disciplines underpins GetReal’s proactive strategy to counter synthetic media threats before they spiral. The company’s incident response service, GetReal Respond, now offers clients high-assurance attestation for digital content—a critical need as deepfakes infiltrate hiring processes, vendor communications, and even live video streams.


“In this new era of cyber threats, anything can be weaponized—name, image, likeness, static photos, real-time video streams,” said CEO Matt Moynahan. “All of which require a unified platform and the ability to reverse engineer synthetic digital content to best protect enterprise and government organizations.”


GetReal’s expanding research capability dovetails with its recent collaboration with Google’s SynthID initiative. The partnership integrates watermark detection into GetReal’s verification platform, allowing organizations to identify and act on AI-generated content across video, audio, and still imagery. It’s a signal that big tech and security players are starting to align on the urgent need for synthetic media safeguards.


As threat actors lean harder into scalable deception, GetReal’s hybrid defense model—part AI, part human, part investigative journalism—may offer organizations their best shot at discerning fact from fiction in real time.


Because in the age of synthetic everything, seeing isn’t believing.
