
AI Model Hunts for Hidden Hardware Trojans in Computer Chips—With 97% Accuracy

In the race to make computer chips smaller, faster, and more efficient, a shadowy threat has quietly evolved alongside innovation: hardware trojans. These malicious design alterations—sometimes just a few lines of rogue logic—can cripple systems, siphon sensitive data, or sabotage national defense infrastructure. Once a chip is fabricated, these trojans are virtually impossible to remove.


Now, researchers at the University of Missouri (MU College of Engineering) have unveiled an artificial intelligence breakthrough that promises to change the game. Led by doctoral candidate Ripan Kumar Kundu, the team has developed a large-language-model-powered tool that detects these hidden threats faster, cheaper, and more transparently than ever before.


Cracking the Code of Chip Security


Traditional methods for identifying hardware trojans rely on exhaustive manual code reviews or complex test environments that compare a suspect chip’s behavior to a known-good version. These methods are expensive, slow, and often miss stealthy manipulations buried in billions of transistors.


Mizzou’s new system, dubbed PEARL, uses the same kind of large language models that underpin popular AI chatbots—but instead of interpreting human language, it’s trained to understand the “language of chip design.” The AI scans code for anomalous or malicious logic, achieving an impressive 97% detection accuracy. More importantly, it explains why a given piece of code is dangerous.
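To make the workflow concrete, here is a minimal sketch of how an LLM-based detector of this kind might be driven: design code is wrapped in a review prompt, and the model's reply is parsed into a verdict plus the explanation engineers can audit. The Verilog fragment, prompt wording, and JSON reply below are all illustrative assumptions, not details from the PEARL paper; a canned string stands in for the actual model call.

```python
import json

# Hypothetical Verilog fragment with a counter-triggered trojan:
# the payload fires only on a rare internal count value, a classic
# stealth pattern (illustrative example, not from the PEARL study).
SUSPECT_RTL = """
always @(posedge clk) begin
    trigger_cnt <= trigger_cnt + 1;
    if (trigger_cnt == 32'hDEADBEEF)
        leak_out <= secret_key;   // payload: exfiltrates the key
end
"""

def build_prompt(rtl: str) -> str:
    """Wrap RTL in an instruction asking the model for a verdict
    plus a human-readable explanation, returned as JSON."""
    return (
        "You are a hardware-security reviewer. Inspect the Verilog below "
        "for trojan logic (rare triggers, leakage paths). Reply as JSON "
        'with keys "verdict" ("trojan" or "clean") and "explanation".\n\n'
        + rtl
    )

def parse_reply(model_reply: str) -> dict:
    """Parse the model's JSON reply into a verdict/explanation dict."""
    return json.loads(model_reply)

prompt = build_prompt(SUSPECT_RTL)

# Canned reply standing in for a real model call:
reply = (
    '{"verdict": "trojan", "explanation": "A counter compared against a '
    'magic constant gates a write of secret_key to an output port."}'
)
result = parse_reply(reply)
print(result["verdict"])       # trojan
```

The key design point the article highlights is the `explanation` field: rather than a bare pass/fail score, the reviewer gets a sentence pointing at the suspicious construct, so no one has to dig through thousands of lines by hand.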


“That explanation is critical because it saves developers from digging through thousands of lines of code,” said Kundu. “We’re making the process faster, clearer and more trustworthy.”


This transparency is key: hardware engineers can see the AI’s reasoning, which builds confidence and makes the system easier to integrate into commercial workflows. The model can run on local machines or in the cloud, offering flexibility for both open-source chip designers and major semiconductor manufacturers.


Securing a Global Supply Chain


Hardware trojans represent one of cybersecurity’s most insidious risks precisely because of the globalized chip manufacturing ecosystem. A malicious actor could insert a backdoor during any stage—design, verification, fabrication, or assembly—making attribution and detection notoriously difficult.


“These chips are the foundation of our digital world,” said Khurram Khalil, a fellow doctoral researcher and co-author of the study. “By combining the power of artificial intelligence with an understandable explanation, we’re building tools to protect that foundation at every step of the supply chain.”


Catching trojans before production could save companies millions by preventing catastrophic recalls or compromised devices. The stakes are particularly high for critical sectors like healthcare, finance, and defense, where one compromised chip could cascade into widespread system failures.


Beyond Detection: Toward Self-Healing Chips


The Mizzou team isn’t stopping at finding vulnerabilities—they’re working on ways for AI to automatically repair compromised designs in real time. The idea: before a chip ever reaches fabrication, the system could identify and rewrite malicious logic to neutralize the threat.


Such advances could ultimately extend to other infrastructure systems—think power grids, satellites, and autonomous vehicles—where embedded hardware security is non-negotiable.


The full study, “PEARL: An Adaptive and Explainable Hardware Trojan Detection Using Open Source and Enterprise Large Language Models,” appears in IEEE Access, co-authored by University of Missouri professors Prasad Calyam and Khaza Anuarul Hoque, along with students Eric Garcia (Columbia College) and Ethan Grassia (Loyola University).


Why It Matters


The semiconductor industry has long treated hardware security as a black box—difficult, specialized, and expensive. Mizzou’s AI-driven approach could democratize chip assurance, giving both startups and established companies a practical way to verify trust in their designs.


In a world where hardware is increasingly weaponized, the ability to explain and fix hidden digital sabotage may mark a new frontier for cybersecurity—one that starts not in code, but in silicon.
