Why Enterprises Must Act Now on Multimodal Deepfake Threats — Sandy Kronenberg, CEO of Netarx
- Cyber Jill
As deepfake technology rapidly evolves, its impact has shifted from social media trickery to a full-fledged enterprise security threat. In this Q&A, Sandy Kronenberg, CEO of Netarx, discusses how organizations can defend against multimodal deepfake attacks that blend voice, video, and text — and why real-time verification is now essential for digital trust.

Deepfake attacks have been rising rapidly. Why do many leaders still underestimate the threat?
Most leaders still see deepfakes as a media or misinformation issue rather than an enterprise security problem. That is the blind spot, because these are no longer rare events. In the past year, deepfake-related cybercrime has grown more than 900 percent, and more than 70 percent of enterprises have faced at least one attempt.
Attackers do not need to breach a system when they can impersonate a trusted person. The next major fraud will not come from a stolen password but from a cloned voice asking finance to move funds. Awareness training cannot stop that, but real-time verification can.
What is driving the shift from single-channel to multimodal attacks?
It’s the natural evolution, given the democratization of AI tools, the strong incentives for cybercriminals, and the gaps in protection. AI-generated deepfakes can now power cross-channel social engineering with ease and at scale. Phishing, vishing, and smishing now converge into a single coordinated campaign: an email sets the stage, a fake voice adds urgency, and a spoofed video closes the loop.
Traditional anti-phishing tools analyze events in isolation, and deepfake detection for video and phone calls remains limited and rarely deployed. Because attackers work across multiple channels, social engineering protection must cover all communications. More importantly, the signals coming from each media channel are critical for a security tool to accurately distinguish a deepfake from a real person. Netarx correlates more than 50 metadata signals across voice, video, email, and SMS to identify patterns in real time. That shared awareness separates prevention from postmortem.
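The cross-channel correlation idea described above can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration only: the `Event` fields, the risk scores, and the escalation threshold are not Netarx's actual design.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch: grouping events from separate channels
# (email, voice, video, SMS) by the identity the sender claims,
# then escalating when the same identity shows elevated risk on
# more than one channel. All names and thresholds are illustrative.

@dataclass
class Event:
    channel: str           # "email", "voice", "video", or "sms"
    claimed_identity: str  # who the sender claims to be
    risk: float            # per-channel anomaly score in [0, 1]

def correlate(events, threshold=0.7):
    """Flag identities whose combined cross-channel risk is high."""
    by_identity = defaultdict(list)
    for e in events:
        by_identity[e.claimed_identity].append(e)
    flagged = {}
    for identity, evs in by_identity.items():
        channels = {e.channel for e in evs}
        # Bump the score for each extra channel the identity appears
        # on: the multimodal pattern the text describes.
        score = max(e.risk for e in evs) + 0.2 * (len(channels) - 1)
        if score >= threshold:
            flagged[identity] = round(min(score, 1.0), 2)
    return flagged

events = [
    Event("email", "cfo@example.com", 0.5),
    Event("voice", "cfo@example.com", 0.6),
    Event("sms",   "vendor@example.com", 0.3),
]
print(correlate(events))  # only the cross-channel "cfo" pattern is flagged
```

The point of the sketch is the design choice: no single channel here crosses the threshold on its own, but the same claimed identity appearing on two channels with elevated risk does.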
How does Netarx detect a deepfake during a live interaction?
Netarx operates inside the conversation, not after it. Our platform joins the call or meeting, parses content in real time, and fuses federated data including behavioral and environmental metadata.
Each participant’s signal is analyzed by an ensemble of AI models that examine elements such as rPPG (remote photoplethysmography), prosody, and speech cadence. The results appear as a simple visual cue known as a Flurp on your phone, desktop, email client, or meeting: green means verified, amber signals uncertainty, and red indicates likely impersonation. Everything runs silently in the background with sub-second latency and no workflow disruption.
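The traffic-light cue described above can be sketched as a simple score fusion. This is a minimal illustration, not Netarx's implementation: the weights, thresholds, and the idea of a single weighted average are all assumptions made for the example.

```python
# Hypothetical sketch: fusing per-model authenticity scores
# (e.g. rPPG, prosody, speech cadence) into a traffic-light cue.
# Weights and thresholds are illustrative assumptions.

def fuse_scores(scores, weights=None):
    """Weighted average of authenticity scores in [0, 1],
    where 1.0 means 'looks like a real person'."""
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total = sum(weights[n] for n in scores)
    return sum(scores[n] * weights[n] for n in scores) / total

def flurp_color(authenticity):
    """Map a fused authenticity score to a visual cue."""
    if authenticity >= 0.8:
        return "green"   # verified
    if authenticity >= 0.5:
        return "amber"   # uncertain
    return "red"         # likely impersonation

scores = {"rppg": 0.9, "prosody": 0.85, "cadence": 0.8}
print(flurp_color(fuse_scores(scores)))  # prints "green"
```

Keeping the fusion and the thresholding separate means an amber verdict can be surfaced whenever the models disagree, rather than forcing a binary real/fake call mid-conversation.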
Which sectors are facing the greatest impact from these attacks?
The sharpest rise is in financial services, government and title companies, where trust, timing and high-dollar transactions meet.
In finance, a deepfake voice can redirect a multimillion-dollar transfer. In government, it can bypass identity verification or access benefits systems. In title and real estate, it can reroute closing funds within seconds. The common weakness is human trust, and that is where Netarx provides defense.
What makes Netarx different from other detection tools?
Most solutions analyze content after it has been created. Netarx verifies authenticity in real time during the interaction.
Our differentiation is built on four core principles:
- Multimodal coverage across voice, video, email, and SMS under one platform
- Multi-signal AI combining more than 50 metadata features with federated validators and blockchain-backed proof
- Frictionless experience that requires no coding or training and integrates into existing collaboration tools
- Durable trust built through cryptographic validation that can withstand audit and regulation
We are not just detecting deepfakes. We are restoring confidence in human communication.
What advice do you have for leaders preparing for 2026 and beyond?
Acknowledge that deepfake-driven social engineering is likely already happening to you, and that employee training alone will not prevent it. Stop treating deepfakes as edge cases. They are the new attack surface and require the same security dedication as the other layers in your defense. Move from post-event reaction to a tool that verifies authenticity rather than just flagging anomalies. Embed verification inside the workflow so that security travels with the user.
Deepfakes exploit trust, and the only defense is verifiable authenticity at machine speed. That is the new standard for digital trust.