Enrichment, Decentralized
NIST stepped back. The work moves closer to where it belongs.
TL;DR: On April 15, NIST announced that the NVD will stop enriching most CVEs. The industry is framing it as a crisis. It isn’t. Three of the four enrichment fields — CVSS, CWE, reference tags — were already duplicated by the CNAs and vendors closest to the vulnerability, or trivially automatable by anyone willing to do the work. Only CPE mattered, and CPE was already being replaced by PURL in every ecosystem with a package manager. What NIST actually did was end a subsidy that was distorting incentives: the NVD carried work that should have been done by whoever knew the product best. Now that work has to come from the CNAs, the vendors, and the consumers themselves. Each actor in the chain finally has skin in the game. That is a healthier pipeline than what we had.
On April 15, 2026, NIST announced that it was giving up on enriching most CVEs. My Slack lit up with the panic you’d expect from a community whose tooling sits on top of vulnerability data. The framing was apocalyptic everywhere: the backbone of vulnerability management is breaking, the CVE ecosystem is collapsing, the public good is on life support.
I’m a Research CNA. I’ve spent the last few years watching how vulnerability data actually gets produced, packaged, and consumed — upstream as a participant (assigning CVEs for vulnerabilities my team finds) and downstream as a vendor (our product ingests CVE data at scale). When the announcement hit, I didn’t open a press release. I opened our own ingestion pipeline and asked a blunt question: how much of what we actually use comes from NIST’s enrichment, and how much of it would we miss if it disappeared tomorrow?
The answer surprised me, and then it didn’t. Most of what people are mourning was already duplicated elsewhere by the actors who actually knew the vulnerability best. One thing was not. And that one thing is about to be obsolete for a reason that has nothing to do with the NVD.
This is a post about that audit. It is also a pushback against a narrative I think is wrong — or, more precisely, wrong in its weight. The NVD’s capitulation is not a catastrophe. It is the overdue retirement of a subsidy that was distorting incentives across the vulnerability supply chain. For two decades NIST was doing work that belonged, by any reasonable division of labor, to someone else. The panic is loudest among the vendors who happen to sell the replacement. That alone should make you suspicious.
What the Notary Actually Did
Think of NVD enrichment as notarization. When a CVE is submitted by a CNA — MITRE, a vendor, a research team like mine — what arrives in the database is a description, a list of reference URLs, and maybe a CVSS score if the CNA bothered to assign one. A NIST analyst then applies four stamps to convert that raw record into something operationally useful:
CVSS stamp — a severity score computed under uniform NIST criteria.
CWE stamp — the category of the underlying weakness (is this a SQLi, an XSS, a memory corruption, a misconfiguration?).
Reference-tag stamp — labels on the URLs: this one is a patch, this one is an exploit, this one is a vendor advisory.
CPE stamp — the machine-readable mapping that says this CVE affects product X, versions 3.0 through 3.4, running on platform Y.
That’s it. Four stamps. The entire industry has been treating them as a single sacred bundle. They are not. They are four very different things with four very different replacement costs, and the panic narrative conflates them. Let’s take them one at a time.
Three Stamps Nobody Will Miss
The CVSS stamp was already duplicated. When NIST says it will stop assigning its own score wherever the CNA has already provided one, it is quietly admitting what the industry has known for years: most major CNAs now publish CVSS at disclosure. Microsoft, Cisco, Red Hat, Oracle, the GitHub Advisory Database — all of them have been scoring their own vulnerabilities for over a decade. Is the quality uneven? Yes. Is the NIST analyst’s score systematically better than the CNA’s? Not really — it is often the same number rederived by someone with less context about the vulnerability than the engineer who wrote the patch. What NIST provided was the illusion of a neutral arbiter. That illusion was always thin. Losing it changes the discourse around disputed scores, but it does not create a data void. The data was never exclusively NIST’s to begin with.
The CWE stamp is inferable. CWE classification is a bounded categorization problem. There are a few hundred CWE entries, the vast majority of disclosed vulnerabilities map to a short list of common ones (CWE-79, CWE-89, CWE-287, CWE-416, a handful more), and the mapping can be derived from the vulnerability description with high accuracy. I know this because we do it. A modern LLM fed a CVE description can assign a plausible CWE with better-than-human consistency — not because the model is smarter than a NIST analyst, but because the problem has a small, well-defined output space and the input is structured prose. If CWE classification is your operational bottleneck in 2026, the problem is not that NIST stopped enriching; the problem is that you haven’t automated what has been automatable for two years.
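To make the "bounded categorization" point concrete, here is a deliberately minimal sketch of the fallback tier of such a pipeline: a keyword heuristic over the description, with anything unmatched falling through to an LLM or human triage. The keyword table and function name are illustrative, not our production system.

```python
# Illustrative sketch: keyword-heuristic CWE triage for CVE descriptions.
# The output space is small and the input is structured prose, which is
# exactly why the problem automates well. Keyword lists here are examples.

CWE_KEYWORDS = {
    "CWE-79":  ["cross-site scripting", "xss"],
    "CWE-89":  ["sql injection", "sqli"],
    "CWE-287": ["improper authentication", "authentication bypass"],
    "CWE-416": ["use-after-free", "use after free"],
    "CWE-22":  ["path traversal", "directory traversal"],
}

def infer_cwe(description: str):
    """Return a CWE id if an obvious keyword matches, else None."""
    text = description.lower()
    for cwe, keywords in CWE_KEYWORDS.items():
        if any(k in text for k in keywords):
            return cwe
    return None  # fall through to LLM classification or human review

infer_cwe("A SQL injection flaw in the login form allows ...")  # -> "CWE-89"
```

In practice the heuristic tier handles the head of the distribution cheaply and deterministically; the model only sees the ambiguous tail.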
The reference-tag stamp is nice, not critical. Labeling URLs as Patch versus Exploit versus Vendor Advisory is useful. It is not foundational. Most tooling that cares about reference tagging already does its own classification with URL patterns, domain reputation, and lightweight content inspection — because NIST’s tagging was always partial and inconsistent anyway. I will miss it at the margin. I will not rebuild my stack because it is gone.
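The URL-pattern approach most tooling already uses can be sketched in a few lines. The specific regexes and labels below are assumptions for illustration; real classifiers add domain reputation and content inspection on top.

```python
import re

# Illustrative sketch: tag reference URLs with pattern rules, the way most
# tooling already does instead of relying on NVD's manual labels.
# Rule order matters: more specific signals (exploit hosts) are checked first.

RULES = [
    ("Exploit",         re.compile(r"exploit-db\.com|/exploits?/|metasploit", re.I)),
    ("Patch",           re.compile(r"/commit/|/pull/|\.patch$|/patches?/", re.I)),
    ("Vendor Advisory", re.compile(r"advisor(y|ies)|/security/", re.I)),
]

def tag_reference(url: str) -> str:
    for label, pattern in RULES:
        if pattern.search(url):
            return label
    return "Other"

tag_reference("https://github.com/foo/bar/commit/abc123")  # -> "Patch"
```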
Three of the four stamps. Duplicated, inferable, and marginal, respectively. If these were everything the NVD did, the April announcement would be a non-event.
The One Stamp That Actually Mattered
The CPE stamp is different.
CPE — Common Platform Enumeration — is the layer that says this CVE applies to cpe:2.3:a:apache:log4j:2.14.0:*:*:*:*:*:*:*, to all versions between 2.0 and 2.14.1, excluding versions X and Y. It is the mapping that makes automated vulnerability scanning possible. A scanner walks your software inventory, builds a list of CPEs, matches against the NVD’s CPE expressions, and produces the list of CVEs you are actually exposed to.
Without CPE, a CVE is text. With CPE, a CVE is data.
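What a CPE-driven scanner does at match time can be sketched as follows. The field layout follows the CPE 2.3 formatted-string shape; the version comparison is simplified to numeric dotted versions, and the range bounds shown are illustrative rather than the exact Log4Shell advisory ranges.

```python
# Minimal sketch of CPE matching: parse a cpe:2.3 string from an inventory,
# then test it against an NVD-style applicability range for one CVE.

def parse_cpe(cpe: str) -> dict:
    # cpe:2.3:part:vendor:product:version:... (13 colon-separated fields)
    fields = cpe.split(":")
    return {"vendor": fields[3], "product": fields[4], "version": fields[5]}

def vtuple(version: str) -> tuple:
    # Simplification: numeric dotted versions only (no "2.0-beta9" handling).
    return tuple(int(x) for x in version.split("."))

def affected(inventory_cpe: str, vuln_product: str,
             start_incl: str, end_excl: str) -> bool:
    item = parse_cpe(inventory_cpe)
    return (item["product"] == vuln_product
            and vtuple(start_incl) <= vtuple(item["version"]) < vtuple(end_excl))

# Log4Shell-style check with illustrative bounds [2.0, 2.15.0):
affected("cpe:2.3:a:apache:log4j:2.14.0:*:*:*:*:*:*:*",
         "log4j", "2.0", "2.15.0")  # -> True
```

The fragile part is not this matching logic; it is producing the CPE strings and ranges in the first place, which is exactly the work NIST is walking away from.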
This is the stamp whose loss is real, and here the replacement cost is genuinely high, for three reasons.
First, CPE mapping requires product knowledge the CNA does not always have in structured form. A vendor advisory says “affects versions 3.0 through 3.4.” Converting that into a CPE expression with precise ranges, correct vendor and product normalization, and the right platform conditions is skilled, tedious work. The skill is unglamorous; it is lexicographic discipline.
Second, the CPE dictionary itself degrades. Without enrichment, new products do not get CPE entries, and the vocabulary shrinks relative to the ecosystem it is supposed to describe.
Third, the problem compounds. Scanners that match by CPE become blind to unenriched CVEs. Not wrong, not delayed — blind. The CVE exists in the NVD, but from the scanner’s point of view it does not apply to anything.
If the story ended here, the panic would be justified. It does not end here, because the world did not stop in 2010 when CPE was the only game in town.
The PURL Pivot
CPE was designed in the era when “software” meant commercial products with vendor names, fixed releases, and stable identifiers. Microsoft Windows 10 version 1903. Apache HTTP Server 2.4.41. Oracle Database 19c. The vocabulary assumed a small, identifiable set of entities, each one curated by a human analyst into a central dictionary.
That world died sometime between the rise of npm and the normalization of containerized deployments. Modern software is a graph of open-source dependencies, each one published to a package registry with its own naming conventions and release cadence. The correct identifier for a given piece of code is no longer “Apache Log4j 2.14.0” — it is pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0. That string is a PURL, a Package URL, and it has a property CPE never had: it is generated at the source by the package manager itself.
PURLs do not need an analyst to curate them. They are machine-readable by construction. They are embedded natively in every modern SBOM format — CycloneDX, SPDX — and they are already how the Open Source Vulnerabilities database (OSV) and the GitHub Advisory Database identify affected packages. When you run npm audit, pip-audit, or Trivy against a container, you are matching by PURL, not by CPE, whether or not you realize it.
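"Generated at the source" is worth seeing in code. A package manager or SBOM generator derives the PURL mechanically from coordinates it already has; no dictionary lookup is involved. This sketch follows the purl spec shape pkg:type/namespace/name@version, simplified to omit qualifiers and percent-encoding; the helper name is mine.

```python
# Sketch: construct a PURL from package-manager coordinates, by construction
# rather than by curated-dictionary lookup (simplified: no qualifiers,
# subpaths, or percent-encoding of special characters).

def make_purl(pkg_type: str, name: str, version: str, namespace: str = "") -> str:
    middle = f"{namespace}/{name}" if namespace else name
    return f"pkg:{pkg_type}/{middle}@{version}"

make_purl("maven", "log4j-core", "2.14.0",
          namespace="org.apache.logging.log4j")
# -> "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0"

make_purl("npm", "lodash", "4.17.21")
# -> "pkg:npm/lodash@4.17.21"
```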
The NVD’s CPE monopoly was not ending because NIST gave up. It was ending because the center of gravity of modern software moved to ecosystems where PURL is native and CPE is a retrofit. NIST’s April announcement is a trailing indicator, not a leading one. The funeral is for a dictionary that was already becoming a museum piece.
Where LLMs Close the Gap
A real gap remains, and it is the one PURL does not fix on its own: the long tail of commercial, proprietary, and firmware software where no package manager exists. Routers, industrial control systems, enterprise SaaS, medical devices, point-of-sale terminals. For that tail, CPE was never great — but it was something, and when NIST stops providing it, there is nothing obvious to replace it with.
This is where LLMs are not a silver bullet, but a very sharp tool. A language model fed a CVE description, the referenced advisory, and a product catalog can produce a plausible CPE or PURL-equivalent mapping with accuracy that is not yet at the level of a disciplined human analyst, but is already higher than the accuracy of an overworked analyst processing 50,000 CVEs a year. More importantly, the inference is reproducible, auditable, and scales with compute rather than with headcount.
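The "auditable" part is the half of LLM-assisted mapping that deserves code. A hypothetical sketch of the scaffolding around the model call: build the prompt from the CVE record, then validate the answer before trusting it, rejecting anything that is malformed or that names a product we never showed the model. The model call itself is stubbed out here; function names and the validation rule are assumptions, not a description of any particular product.

```python
import re

# Hypothetical sketch of LLM-assisted CPE inference for the commercial tail:
# prompt construction plus output validation. The LLM call itself is omitted.

def build_prompt(description: str, advisory: str, catalog: list) -> str:
    """Ask the model to pick one CPE from a closed catalog, never to invent one."""
    return (
        "Map this vulnerability to exactly one CPE 2.3 string from the catalog.\n"
        f"Description: {description}\n"
        f"Advisory: {advisory}\n"
        "Catalog:\n" + "\n".join(catalog)
    )

# Well-formedness check: "cpe:2.3" followed by 11 colon-separated fields.
CPE_RE = re.compile(r"^cpe:2\.3(:[^:]+){11}$")

def validate(candidate: str, catalog: list) -> bool:
    # Reject hallucinations: the answer must be syntactically valid AND drawn
    # from the catalog we supplied. Auditable by construction.
    return bool(CPE_RE.match(candidate)) and candidate in catalog
```

Constraining the model to a closed catalog turns an open-ended generation problem back into the bounded selection problem that, as with CWE above, is where this class of tooling is reliable.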
The honest framing is this. For the open-source half of the ecosystem, PURL replaces CPE natively and the NVD was never the real bottleneck. For the commercial long tail, LLM-assisted inference replaces what NIST analysts were doing with a process that is measurably imperfect but operationally viable. Between the two, most of what matters gets covered. The remaining gap is real, but it is shrinking — and it is shrinking in the direction where tooling improves faster than bureaucracies do.
Skin in the Game
The NVD enrichment collapse is being narrated as an ecosystem crisis. Read carefully, it is the opposite: a forcing function that is accelerating a transition the ecosystem was already halfway through, and a correction of incentives that had been misaligned for two decades. Three of the four NVD stamps were already being duplicated or were trivially automatable. The fourth — CPE — was the last residue of a centralized, dictionary-based, human-curated model that package ecosystems and SBOM standards had quietly been replacing for five years.
The deeper point is about who should be doing the work. For most of the NVD’s history, a federal agency with no commercial relationship to the software it catalogued was curating the metadata for the entire industry, for free. That is a subsidy, and subsidies distort. Vendors had no operational reason to publish structured CVSS, CWE, or CPE data at disclosure, because NIST would do it for them. CNAs could ship half-finished records because the NVD would fill in the gaps. Consumers treated the NVD as ground truth because it was the only source with a uniform format. None of these actors had any skin in the game. The work of making a CVE operationally useful was outsourced to an overworked public-sector team that, predictably, eventually broke.
What is emerging now is messier, but it is more honest. CNAs are under pressure to enrich their own disclosures, because there is no longer a backstop. Vendors have a reason to publish PURLs for their open-source components, because that is what SBOM pipelines consume. Consumers have to think harder about what sources they trust and why, because there is no single authoritative feed anymore. Each actor has to do the piece of the job they are actually best positioned to do. That is not a crisis of vulnerability management. That is vulnerability management growing up.
If you run a program and your only response to the April announcement is to panic and subscribe to a commercial enrichment feed, you are paying someone else to maintain the old subsidy on your behalf. That is fine as a stopgap. It is a bad long-term strategy. The better move is to rebuild your pipeline around PURL-native data for everything that has a package manager, LLM-assisted inference for everything that does not, and direct relationships with the CNAs whose products actually run in your environment.
The NVD was not the backbone of vulnerability management. It was the scaffolding — the thing that held the ecosystem up while the real infrastructure got built. The scaffolding is coming down, and what was underneath is a pipeline where everyone finally has to pay attention to their own work. That is not a crisis. That is progress.

