
Navigating the Fallout: Delays in NVD Updates Challenge CIOs and CISOs in Cybersecurity Battle

The recent inconsistencies in updates from the National Vulnerability Database (NVD) since early February have raised significant concerns within the cybersecurity community. As the go-to source for timely vulnerability information, delays in the NVD can have far-reaching impacts on organizations that rely heavily on this data to protect their digital assets. We sat down with Scott Kuffer, co-founder and COO of Nucleus Security, to discuss this topic in more depth and what actions can be taken to help address these challenges.


Scott Kuffer, COO & co-founder, Nucleus Security

Given the recent inconsistencies in updates from the National Vulnerability Database (NVD) since early February, how do you foresee these delays impacting the overall cybersecurity landscape, particularly for organizations that rely heavily on timely vulnerability information?

This is an interesting question because the larger impact is largely unknown. However, it is likely to fall on the actual discovery of vulnerabilities in an organization's environment. When we talk about timely vulnerability information, we need to break it into multiple stages: there is the discovery of a vulnerability, and then there is the rating of vulnerabilities for prioritization. The big impact here is potentially on the vulnerability scanning vendors that handle the detection of vulnerabilities in an organization's environment. Vulnerability scanning vendors, especially less mature ones, rely heavily on the NVD to inform their signature roadmaps. Vulnerability scanners use signature libraries, much like anti-virus products, to map the software they detect during scanning to entries in the NVD. So, if they rely on NVD data to drive their identification systems, organizations downstream will be much slower at detecting vulnerabilities.
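To make that dependency concrete, here is a minimal, purely illustrative Python sketch of a scanner-style signature library. The package names and CVE mappings are hypothetical and do not reflect any vendor's actual format; the point is simply that a package whose CVE has not yet landed in the upstream feed never gets flagged.

```python
# Hypothetical sketch: how a scanner-style signature library maps detected
# software to CVE IDs, and why a stale NVD feed delays detection downstream.

# Signature library built from NVD data as of an earlier sync (illustrative entries).
signature_library = {
    "openssl 3.0.7": ["CVE-2023-0286"],
    "apache httpd 2.4.55": ["CVE-2023-25690"],
    # A CVE published after the last sync has no signature yet, so the
    # affected package below simply is not flagged.
}

def scan(installed_software):
    """Return the CVEs the library can match for each installed package."""
    findings = {}
    for package in installed_software:
        matches = signature_library.get(package, [])
        if matches:
            findings[package] = matches
    return findings

inventory = ["openssl 3.0.7", "apache httpd 2.4.55", "some-app 1.2.3"]
print(scan(inventory))
# "some-app 1.2.3" may well be vulnerable, but until the NVD entry (and the
# derived signature) exists, the scan result stays silent about it.
```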

You’ve emphasized the importance of standardization and information sharing around vulnerabilities. Could you elaborate on how the lack of a consistent and updated NVD affects standardization efforts and what measures can be taken to mitigate these challenges?

The purpose of standardizing vulnerability information sharing is to be able to communicate as an industry about vulnerabilities. Think of a situation where you are in a foreign country and don't speak the language. Let's say you are walking down the street and see a thief breaking into somebody's home. You go to the police and try to explain what is happening, but it is impossible to communicate complex thoughts because you don't share a common language. With vulnerabilities, we have the same problem. The central identification system designed in 2004 was built to solve it, so that every company can share information about a vulnerability using the same identifier. For example, Microsoft says you need to run a patch to fix a certain vulnerability. How are you supposed to know which vulnerability that patch fixes?

You can also have scenarios where there are multiple ways to fix a specific vulnerability. If someone other than the vendor discovers a workaround that resolves the vulnerability, then without a single identifier to reference, it is difficult to communicate that fix to others in the industry.

The standardization of the language is just as important as, if not more important than, the enrichment and rating that the NVD does. The CISA Vulnrichment program is a good initiative, but we must not lose sight of the fact that the main benefit of the system is foundational: the actual identification of each unique vulnerability. If we lose that, or if that part of the system suffers because we spend all our resources on rating vulnerabilities, we will lose the fundamental value. In other words, you can have identification without enrichment, but you cannot have enrichment without identification.
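A small sketch of that asymmetry, assuming nothing beyond a generic record shape (the field names are illustrative, not any official schema):

```python
# Hypothetical sketch of the dependency described above: an identifier is
# required before any enrichment can exist, while enrichment is optional.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VulnerabilityRecord:
    cve_id: str                         # identification: always required
    description: Optional[str] = None   # enrichment: may arrive later
    cvss_score: Optional[float] = None  # enrichment: may arrive later
    cwe: Optional[str] = None           # enrichment: may arrive later

# Identification without enrichment is still a usable record...
bare = VulnerabilityRecord(cve_id="CVE-2024-12345")  # placeholder ID

# ...but there is no way to express "a CVSS score with no CVE ID";
# the enrichment has nothing to attach to without the identifier.
```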

To mitigate these challenges, it is possible to build more automation into the system. We can give the CNAs more responsibility in the process, and we can make the system more efficient. Technically, MITRE already does a lot of the upstream CVE processing that feeds the NVD at cve.org. We could leverage CVE.org as the primary identification system and then use the NVD as the rating system. That would eliminate a lot of the challenges we face and minimize the impact of interruptions in NVD service. Ultimately, we need to make sure the CVE process is treated as a public service and maintains its funding. Otherwise the commercial sector will go elsewhere, and the vulnerability database concept will end up in the private sector, which, given market dynamics, would be a step backwards from where we are today.
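As a rough sketch of what that split could look like in practice, the snippet below pulls identification from the CVE Program's record service and rating/enrichment from the NVD. The endpoint paths and response fields follow the publicly documented CVE Services and NVD 2.0 APIs, but treat them as assumptions and verify against current documentation.

```python
# Minimal sketch: identification from the CVE record service, enrichment from
# the NVD. Endpoints and field names are assumptions based on public docs.
import requests

CVE_ORG_API = "https://cveawg.mitre.org/api/cve/{cve_id}"      # identification
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"   # rating/enrichment

def fetch_identification(cve_id: str) -> dict:
    """Fetch the canonical CVE record (the identifier and its basic metadata)."""
    resp = requests.get(CVE_ORG_API.format(cve_id=cve_id), timeout=30)
    resp.raise_for_status()
    return resp.json()

def fetch_rating(cve_id: str) -> dict:
    """Fetch NVD enrichment (CVSS scores, applicability data) for the same ID."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    cve = "CVE-2021-44228"  # Log4Shell, used purely as a well-known example
    record = fetch_identification(cve)   # usable even if enrichment lags
    rating = fetch_rating(cve)           # may arrive later without blocking identification
    print(record.get("cveMetadata", {}).get("cveId"), "identified")
    print(len(rating.get("vulnerabilities", [])), "NVD enrichment record(s) found")
```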

With various vendors pushing their own versions of CVE databases, what are the potential risks associated with this fragmentation? How can the industry ensure a unified approach to vulnerability management despite these competing sources?

The main risk here is lack of coverage and visibility. Different vendors are going to be better at covering different sets of vulnerabilities. Just as with streaming services, you would need to subscribe to the equivalent of Netflix for Microsoft vulnerabilities and Hulu for Oracle expertise. Right now, because it is a public service, a lot of that expertise is consolidated into a single pipeline and a single identifier.

Fragmentation will ultimately mean that something considered a core public service may end up being something you have to pay for in order to sustain the business models of these commercial vendors. And unless they partner with each other, there are likely to be different languages used for communicating vulnerabilities across vendors. The best example of this is the vulnerability scanning vendors themselves. Qualys and Tenable both maintain what they call "plugin libraries". These plugin libraries are maintained separately and are used to detect vulnerabilities unique to each vendor's scanning tool. Now, the tricky part is that they group different CVEs together based on which vulnerabilities they think are the same. This means there are plugins that Qualys has that Tenable does not, and vice versa. Now imagine that same situation applying to determining the existence of a vulnerability in the first place, not just its detection. The number of LinkedIn posts with arguments in the comments is going to be unreal.
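To illustrate the coverage-gap argument, here is a toy Python example with two made-up plugin libraries; the plugin IDs and CVE numbers are entirely fictional and stand in for any two vendors' separately maintained libraries.

```python
# Hypothetical sketch of the coverage-gap problem: two scanners maintain their
# own plugin-to-CVE groupings, so neither one alone sees everything.
scanner_a_plugins = {              # e.g. one vendor's library (made-up data)
    "plugin-1001": {"CVE-2024-1111", "CVE-2024-2222"},
    "plugin-1002": {"CVE-2024-3333"},
}
scanner_b_plugins = {              # e.g. another vendor's library (made-up data)
    "plugin-9001": {"CVE-2024-2222"},
    "plugin-9002": {"CVE-2024-4444"},
}

coverage_a = set().union(*scanner_a_plugins.values())
coverage_b = set().union(*scanner_b_plugins.values())

print("Only scanner A detects:", coverage_a - coverage_b)
print("Only scanner B detects:", coverage_b - coverage_a)
print("Both detect:", coverage_a & coverage_b)
# With a single public identifier you can at least compare the two libraries.
# If each vendor also minted its own identifiers, even this comparison
# would no longer be possible.
```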

Collaboration between private and public sectors is crucial for effective vulnerability management. In your view, what steps should be taken to enhance this collaboration, and what role can organizations like Nucleus Security play in bridging the gap between these sectors?

There are numerous reasons on both sides why collaboration is difficult. The federal government has many additional requirements for doing business in the federal space that do not exist in the commercial sector, for example the FedRAMP process. There is a long, security-intensive process just to sell a SaaS service to the federal government. I believe this makes many commercial companies hesitant to work with the public sector, and that attitude permeates many projects.

On the private-sector side, for the NVD specifically, helping to maintain the database is expensive and there is little in it for commercial companies (aside from avoiding a total collapse of the program), so there isn't much motivation to help. There is a sense that the federal government, and CISA specifically, have it under control, so we might as well use the services provided by the government. For the same reason that people don't think about the road they drive on when they go to work in the morning, much of our cyber infrastructure is thought of the same way by commercial companies.

Private-public partnerships are a good start to building this connection. Having a centralized service in the federal government where vulnerabilities are managed can also help, because then every agency is not managing vulnerabilities independently.

