Deepfakes, Passkeys, and the Coming AI Fraud Reckoning

The future of fraud may not come with a crowbar and a getaway car—but with a cloned voice and a friendly FaceTime. At least, that’s the dire picture painted by OpenAI CEO Sam Altman, who says artificial intelligence is hurtling us toward a "significant, impending fraud crisis."


Speaking at the Federal Reserve this week in front of an audience of financial leaders and policymakers, Altman didn’t mince words: AI has broken authentication. “Apparently there are still some financial institutions that will accept a voice print as authentication… that is a crazy thing to still be doing,” he said. With deepfakes and synthetic voices now trivial to generate, traditional biometrics—once hailed as secure—have quietly become liabilities.


The warning comes as AI-generated impersonations have already begun to pierce real-world defenses. In recent months, AI-mimicked voices have duped parents into thinking their children had been kidnapped and have even been used to fake messages from public officials. U.S. authorities recently flagged a campaign that used AI to impersonate Secretary of State Marco Rubio in voice calls to foreign dignitaries.


Altman fears that the next step—real-time, lifelike video forgeries—will be indistinguishable from genuine human interaction. “Right now, it’s a voice call; soon it’s going to be a video or FaceTime,” he said, warning that neither financial systems nor everyday users are prepared.


The Washington AI Push


Altman’s appearance at the Fed comes amid OpenAI’s expanding political footprint. The company confirmed plans to open its first Washington, DC office next year, led by Chan Park and Joe Larson, the latter joining from defense tech firm Anduril. The 30-person team will focus on policy outreach, economic research, and AI literacy programs targeting lawmakers, teachers, and civil servants.


The move underscores the tightrope OpenAI is walking—warning of AI’s risks while lobbying against overly restrictive regulation. The White House is poised to release its long-anticipated “AI Action Plan,” a policy blueprint shaped in part by OpenAI’s recommendations. Earlier this month, lawmakers struck a Trump-era legislative clause that would have barred states from enacting AI-related laws for a decade—a clause OpenAI had quietly opposed.


Despite fears of misuse, Altman insists his company is steering clear of developing impersonation tools. Instead, he’s backing “The Orb,” a controversial biometric device from Tools for Humanity that aims to provide “proof of personhood” in an increasingly synthetic digital landscape.


From Existential Threats to Existential Boredom


While many in Silicon Valley sound the alarm on job loss, Altman adopts a more philosophical posture. “No one knows what happens next,” he said of AI’s economic impact. “This is too complex of a system… too new and impactful of a technology.”


Altman has speculated before that future workers may have “everything they could possibly need” and no real work to do—just status games and time-fillers in a post-labor world. But that’s cold comfort to workers watching AI write code, generate legal memos, and replace white-collar knowledge work in real time.


Still, OpenAI released a report alongside Altman’s remarks, compiled by its chief economist Ronnie Chatterji, comparing ChatGPT’s productivity impact to technologies like electricity and the transistor. With 500 million users globally, and with 20% of its U.S. users relying on it as a “personalized tutor,” the platform is already reshaping how people learn and work. The report hints that a deeper study on AI’s labor market effects is in the works, with Chatterji teaming up with economists Jason Furman and Michael Strain.


Enter the Passkey Era


The broader cybersecurity industry is not waiting for regulation to catch up. OneSpan CTO Ashish Jain says passkeys—cryptographic credentials stored securely on user devices—may become the gold standard for AI-resilient identity.


“Passwords have long been a point of vulnerability,” Jain said. “Passkeys represent a meaningful step toward improving both security and usability… especially valuable in securing high-risk interactions like financial transactions.”


Unlike voice prints or even SMS-based codes, passkeys resist phishing and AI manipulation by design. “As the adoption of passkeys grows,” Jain added, “I’m confident they will prove their resilience in protecting our most sensitive online interactions.”
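For readers curious what that looks like in practice, here is a minimal, illustrative sketch of browser-side passkey registration and sign-in using the standard WebAuthn API. The relying-party name, domain, and server-supplied challenge below are hypothetical placeholders, not details of OneSpan’s products or any particular bank’s implementation.

```typescript
// Minimal passkey (WebAuthn) sketch. Server endpoints, the relying-party
// name/domain, and the challenge format are illustrative assumptions.

// Decode a base64url-encoded challenge sent by a (hypothetical) server.
function base64urlToBytes(value: string): Uint8Array {
  const base64 = value.replace(/-/g, "+").replace(/_/g, "/");
  const padded = base64.padEnd(Math.ceil(base64.length / 4) * 4, "=");
  return Uint8Array.from(atob(padded), (c) => c.charCodeAt(0));
}

// Registration: the device creates a key pair; only the public key leaves
// the device, so there is no shared secret to phish, leak, or replay.
async function registerPasskey(userId: string, userName: string, challenge: string) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: base64urlToBytes(challenge),
      rp: { name: "Example Bank", id: "example-bank.com" },      // relying party (placeholder)
      user: {
        id: new TextEncoder().encode(userId),
        name: userName,
        displayName: userName,
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],        // ES256
      authenticatorSelection: { userVerification: "required" },   // on-device biometric or PIN
    },
  });
  return credential; // attestation is sent to the server for verification
}

// Sign-in: the device signs the server's fresh challenge; the signature is
// bound to the genuine site's origin, which is why phishing pages get nothing useful.
async function signInWithPasskey(challenge: string) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: base64urlToBytes(challenge),
      rpId: "example-bank.com",
      userVerification: "required",
    },
  });
  return assertion; // server verifies the signature against the stored public key
}
```

The design property doing the work here is that the private key never leaves the user’s device and every signature is bound to the legitimate site’s origin, so a cloned voice or a convincing fake login page has nothing to capture or replay.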


A Race Between Offense and Defense


Altman’s greatest fear? Not just fraud, but the strategic weaponization of AI by hostile actors before global defenses are ready. That could mean attacks on U.S. critical infrastructure or even AI-assisted bioengineering.


While OpenAI is one of several companies chasing superintelligence—a hypothetical AI that surpasses human capabilities—Altman acknowledges the dark side of reaching that milestone too soon: “The thing that keeps me up at night is… bad actors making and misusing it before the rest of the world has caught up.”


The irony? OpenAI may be building the very tools it fears most. But in this AI arms race, even the ones pulling the brakes are riding the rocket.