Qualitative Risk Correlation — The MITRE-Backed Way
Balachandran Sivakumar - 18 Apr, 2026
Every organization running vulnerability scans is sitting on a goldmine of risk intelligence. Most are panning for iron pyrite.
A vulnerability scan report is rarely treated as what it actually is — a structured, machine-readable dataset that, when cross-correlated with the right threat intelligence sources, can tell you exactly which of your assets a motivated threat actor is likely to hit next, and by what method. Instead, it gets handed to a sysadmin queue sorted by CVSS score and forgotten until the next scan cycle.
Let’s fix that. This post walks through a methodology we have been refining for a while — one that takes your standard VA scan output and transforms it into a prioritized, context-rich risk picture. We’ll lean heavily on MITRE’s ecosystem: CVE, ATT&CK, and their APT dataset. And we’ll layer in CISA’s Known Exploited Vulnerabilities (KEV) catalog, because theoretical exploitability is one thing; known-in-the-wild exploitation is another conversation entirely.
No black boxes. No vendor magic. Just structured thinking, documented inputs, and repeatable logic.
What You’re Actually Working With
Before diving into the process, let’s look at our inputs.
Primary inputs are the non-negotiables. You need an Asset Register with up-to-date IP mappings — without this, your VA scan is a list of problems attached to IP addresses nobody can route to a business owner. The VA scan CSV is the other anchor. Both inputs need to be current. Stale asset data is arguably worse than no asset data, because it creates false confidence or false alarms.
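The core correlation is just a join on IP address, and it doubles as a data-quality check: any scan finding that doesn't resolve to an asset record is the stale-data problem made visible. A minimal sketch — the column names (`ip`, `asset_name`, `criticality`, `cve`, `cvss`) are assumptions; match them to your scanner's export and your register's schema:

```python
import csv
import io

# Inline samples standing in for the two primary inputs.
# Field names are hypothetical -- adjust to your own schemas.
ASSET_CSV = """ip,asset_name,owner,criticality
10.0.0.5,payments-api,Finance IT,critical
10.0.0.9,dev-ws-12,Engineering,low
"""

SCAN_CSV = """ip,cve,cvss
10.0.0.5,CVE-2024-0001,9.8
10.0.0.9,CVE-2023-0002,7.5
10.0.0.77,CVE-2022-0003,6.1
"""

def correlate(asset_csv: str, scan_csv: str) -> list[dict]:
    """Join scan findings to asset records on IP address."""
    assets = {row["ip"]: row for row in csv.DictReader(io.StringIO(asset_csv))}
    findings = []
    for row in csv.DictReader(io.StringIO(scan_csv)):
        asset = assets.get(row["ip"])
        findings.append({
            **row,
            "asset_name": asset["asset_name"] if asset else "UNMAPPED",
            "criticality": asset["criticality"] if asset else "unknown",
        })
    return findings

findings = correlate(ASSET_CSV, SCAN_CSV)
# Unmapped IPs are exactly the stale-register gap described above.
unmapped = [f for f in findings if f["asset_name"] == "UNMAPPED"]
```

Every `UNMAPPED` row is a finding you cannot route to a business owner — track that count across scan cycles as a health metric for the register itself.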
Secondary inputs are where the magic lives. These are the enrichment sources that elevate your raw vulnerability list into a threat-contextualized risk model:
- MITRE CVE Database — canonical vulnerability descriptions, affected software, and weakness classification (CWE mappings)
- CISA KEV Catalog — the authoritative public list of CVEs that are actively being exploited in the wild, maintained and updated by CISA
- Exploit-DB — public exploit code repositories, useful as a secondary signal for exploit maturity
- MITRE ATT&CK — the adversary tactics and techniques framework, which lets you map a vulnerability to how an attacker would use it
- MITRE APT Group Data — profiles of known Advanced Persistent Threat groups, including their preferred TTPs (Tactics, Techniques, and Procedures)
The combination of these sources lets you answer the question that actually matters: “If I were a threat actor targeting this organization, which of these CVEs would I pick first, and against which of my assets?”
The Logic Chain
Step 1 — Filter to What Matters
Start by isolating critical assets from your Asset Register. “Critical” should be defined by your organization in advance — typically assets that are part of core business processes, handle sensitive data, or have regulatory significance. If you don’t have this classification already, this is the forcing function to build it.
For each critical asset, cross-reference against the VA scan output. If a critical asset shows no identified vulnerabilities — great, document it and move on. If vulnerabilities are present, you have work to do.
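Step 1 reduces to a partition over the critical subset: assets with no findings get documented, the rest carry forward. A sketch under assumed field names (`ip`, `name`, `criticality`, `cve` are illustrative, not a standard schema):

```python
# Sample records -- field names are assumptions; match your own schema.
ASSETS = [
    {"ip": "10.0.0.5", "name": "payments-api", "criticality": "critical"},
    {"ip": "10.0.0.6", "name": "hr-portal", "criticality": "critical"},
    {"ip": "10.0.0.9", "name": "dev-ws-12", "criticality": "low"},
]
FINDINGS = [
    {"ip": "10.0.0.5", "cve": "CVE-2024-0001"},
    {"ip": "10.0.0.9", "cve": "CVE-2023-0002"},
]

def triage_critical(assets: list[dict], findings: list[dict]):
    """Split critical assets into 'clean' (document and move on) and
    'needs work' (carry forward to Step 2 enrichment)."""
    cves_by_ip: dict[str, list[str]] = {}
    for f in findings:
        cves_by_ip.setdefault(f["ip"], []).append(f["cve"])
    clean, needs_work = [], []
    for a in assets:
        if a["criticality"] != "critical":
            continue  # non-critical assets are out of scope for this pass
        if cves_by_ip.get(a["ip"]):
            needs_work.append((a["name"], cves_by_ip[a["ip"]]))
        else:
            clean.append(a["name"])
    return clean, needs_work

clean, needs_work = triage_critical(ASSETS, FINDINGS)
```

The `clean` list is worth keeping, not discarding — "critical asset, no findings this cycle" is itself an auditable statement.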
Step 2 — CVE Enrichment
For every CVE flagged against a critical asset, pull the full record from MITRE’s CVE database. You want the description (which tells you what the vulnerability is), the CWE classification (which tells you what kind of weakness), the CVSS vector (which gives you technical severity dimensions beyond the score), and any referenced advisories.
At this point, also run each CVE against the CISA KEV Catalog. This is a binary check with significant weight: is this CVE in the KEV list? If yes, treat that as a hard signal that real attackers have weaponized this vulnerability in production environments. The theoretical exploitability debate is over for KEV entries.
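The KEV check is simple to automate: CISA publishes the catalog as JSON, with a top-level `vulnerabilities` array keyed by `cveID`. A sketch against a downloaded copy — the embedded fragment here is illustrative sample data, not real catalog content:

```python
import json

# Illustrative fragment mirroring the KEV catalog's JSON shape.
# In practice, load the full catalog file downloaded from cisa.gov.
KEV_JSON = """{
  "vulnerabilities": [
    {"cveID": "CVE-2024-0001", "knownRansomwareCampaignUse": "Known"},
    {"cveID": "CVE-2021-44228", "knownRansomwareCampaignUse": "Known"}
  ]
}"""

def load_kev_ids(raw: str) -> set[str]:
    """Build a membership set from a KEV catalog JSON document."""
    return {v["cveID"] for v in json.loads(raw)["vulnerabilities"]}

kev = load_kev_ids(KEV_JSON)

def kev_flag(cve_id: str) -> bool:
    """Binary check: is this CVE known-exploited in the wild?"""
    return cve_id in kev
```

Because the check is a set membership, it costs nothing to run on every finding, every cycle — which is exactly how a "hard signal" should behave.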
Step 3 — ATT&CK Mapping
This is the step most teams skip, and it’s the most intellectually rich part of the process.
From the CVE description and CWE classification, identify the most appropriate ATT&CK Tactic and Technique. The CVE description usually tells you how the vulnerability works — initial access via a web-facing application? That’s Initial Access, likely T1190 (Exploit Public-Facing Application). A privilege escalation bug in a kernel driver? That’s Privilege Escalation. A memory corruption vulnerability in a network service? Look at Execution and Lateral Movement.
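A first-pass version of this mapping can be automated as a keyword heuristic over the CVE description. The technique IDs below are real ATT&CK identifiers, but the keyword rules are illustrative and emphatically not a substitute for analyst review — treat anything the heuristic can't place as a manual-review item:

```python
# Crude keyword -> (Tactic, Technique) rules. Real ATT&CK IDs,
# illustrative trigger words; extend and tune for your environment.
RULES = [
    ("public-facing", ("Initial Access", "T1190")),
    ("privilege escalation", ("Privilege Escalation", "T1068")),
    ("network service", ("Lateral Movement", "T1210")),
]

def map_to_attack(cve_description: str) -> list[tuple[str, str]]:
    """Return candidate Tactic/Technique mappings for a CVE description."""
    d = cve_description.lower()
    hits = [mapping for keyword, mapping in RULES if keyword in d]
    # No keyword hit: surface it rather than silently dropping it.
    return hits or [("UNMAPPED", "needs manual review")]
```

The value of even a crude heuristic is triage: it lets human (or AI-assisted) review time concentrate on the descriptions that genuinely need interpretation.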
The ATT&CK mapping does something important: it shifts the conversation from “this thing has a bug” to “this thing enables an adversary to do X in the kill chain.” That framing resonates with executives and security committees in a way that CVSS 9.8 doesn’t.
Step 4 — APT Association
Once you have your ATT&CK Tactic/Technique mapping, cross-reference the MITRE ATT&CK Groups dataset. Which APT groups regularly employ these techniques? MITRE maintains detailed profiles linking groups to specific techniques, software they use, and historical campaigns.
This is where the correlation becomes genuinely powerful. If CVE-XXXX-YYYY maps to T1190 (Exploit Public-Facing Application) and T1078 (Valid Accounts), and those techniques are in the TTP profile of APT29, APT41, and two other groups known to target your industry — you have a very different conversation with your CISO than “CVSS 8.1, patch within 30 days.”
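The group cross-reference is an overlap query between your mapped techniques and each group's TTP profile. The group names below are real, but the technique sets are an illustrative subset — in practice, pull the actual group/technique relationships from MITRE's ATT&CK STIX data (e.g. via the `mitreattack-python` library) rather than hardcoding them:

```python
# Illustrative subset of group TTP profiles -- source the real
# relationships from the ATT&CK STIX bundle, not a hardcoded dict.
GROUP_TTPS = {
    "APT29": {"T1190", "T1078", "T1566"},
    "APT41": {"T1190", "T1133"},
    "FIN7":  {"T1566", "T1204"},
}

def groups_for(mapped_techniques: set[str]) -> dict[str, set[str]]:
    """Which groups' profiles overlap the techniques mapped in Step 3,
    and on exactly which techniques?"""
    return {
        group: ttps & mapped_techniques
        for group, ttps in GROUP_TTPS.items()
        if ttps & mapped_techniques
    }
```

Returning the overlapping techniques, not just the group names, matters for the audit trail: "APT29, via T1190 and T1078" is a defensible statement; "APT29" alone is not.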
Step 5 — Arrive at Likelihood and Impact
With the enrichment data in hand, you can now make defensible qualitative assessments:
Likelihood is a function of: exploit availability (is there public PoC code?), KEV status (is it actively exploited?), and APT relevance (do groups targeting your sector use this technique?). A CVE with all three signals is high likelihood. A CVE with none of them, even at CVSS 9.0, may be genuinely low likelihood for your environment.
Impact maps to the asset’s criticality and the nature of the vulnerability. A remote code execution on an internet-facing asset that processes payments is a very different event than an authenticated local privilege escalation on an internal developer workstation.
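The likelihood and impact logic above can be encoded directly. The thresholds here are a policy choice, not a standard — many teams reasonably treat a KEV listing alone as sufficient for "high" likelihood, which is what this sketch does:

```python
def likelihood(public_exploit: bool, in_kev: bool, apt_relevant: bool) -> str:
    """Qualitative likelihood from the three enrichment signals.
    Thresholds are illustrative policy, not a standard."""
    score = sum([public_exploit, in_kev, apt_relevant])
    if score >= 2 or in_kev:  # KEV alone is treated as decisive here
        return "high"
    return "medium" if score == 1 else "low"

def impact(asset_criticality: str, remote_code_exec: bool) -> str:
    """Qualitative impact from asset criticality and vuln nature."""
    if asset_criticality == "critical":
        return "high" if remote_code_exec else "medium"
    return "medium" if remote_code_exec else "low"
```

Note the deliberate asymmetry: a CVSS 9.0 with no exploit, no KEV entry, and no APT relevance lands at "low" — which is exactly the claim the methodology makes.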
Prioritization Factors — The Final Filter
Before you hand off a remediation list, apply four prioritization factors that provide environmental context:
Internet exposure — Is the asset reachable from the public internet, directly or through a DMZ? This is a force multiplier for likelihood. A vulnerability on an air-gapped system may sit at a very different priority level than the same CVE on an internet-facing API gateway.
Criticality of the system/process — Does this asset support a tier-1 business process? Is it in scope for PCI-DSS, HIPAA, or another compliance framework? Critical system classification should already be in your Asset Register (see Step 1).
TTPs associated with the CVE — You’ve already done this mapping. The question here is whether the mapped techniques are commonly associated with your threat landscape — sector-specific APT activity, regional threat groups, or attack patterns relevant to your business model.
Remote exploitability — Can this CVE be triggered from across the network, without local access? The Attack Vector (AV) dimension of the CVSS vector string gives you this directly: Network > Adjacent > Local > Physical. (The separate question of whether authentication is needed lives in the Privileges Required metric.) Prioritize Network-exploitable CVEs.
Together, these four factors let you build a simple prioritization matrix that your teams can action without a PhD in threat intelligence.
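One way to sketch that matrix: score the four factors and map the total to a priority tier. The weights and cutoffs below are illustrative policy choices, not a standard — the point is that the scheme is simple enough to write down and defend:

```python
# Weights and tier cutoffs are illustrative -- tune to your risk appetite.
AV_RANK = {"N": 3, "A": 2, "L": 1, "P": 0}  # Network > Adjacent > Local > Physical

def attack_vector(cvss_vector: str) -> str:
    """Pull the AV metric out of a CVSS v3 vector string."""
    for metric in cvss_vector.split("/"):
        if metric.startswith("AV:"):
            return metric.split(":", 1)[1]
    return "P"  # no AV metric found: rank as least exposed, flag for review

def priority(internet_exposed: bool, critical_system: bool,
             ttp_in_landscape: bool, cvss_vector: str) -> str:
    """Combine the four prioritization factors into a P1-P4 tier."""
    score = (2 * internet_exposed        # exposure is a force multiplier
             + 2 * critical_system       # tier-1 / compliance-scoped asset
             + int(ttp_in_landscape)     # technique used by relevant groups
             + int(AV_RANK[attack_vector(cvss_vector)] >= 3))  # network-exploitable
    if score >= 5:
        return "P1"
    if score >= 3:
        return "P2"
    return "P3" if score >= 1 else "P4"
```

An internet-exposed, critical, threat-relevant, network-exploitable finding lands at P1; the same CVE on an isolated low-value box falls to P4 — which is the whole argument of this post in four booleans.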
Why This Approach Holds Up
A few things make this methodology durable rather than fashionable.
It’s evidence-based. Every step in the correlation chain uses publicly verifiable sources — MITRE’s datasets, CISA’s catalog. There’s no proprietary black box making assertions you can’t validate or explain to an auditor.
It’s proportional. CVSS-only prioritization creates patch queues that no team can realistically execute. By filtering through business criticality and threat relevance, you end up with a shorter, more defensible list that reflects actual risk rather than theoretical severity.
It’s auditable. Because you can document the logic chain — “CVE-XXXX is prioritized at Critical because: asset is internet-exposed, KEV-listed, maps to techniques used by APT41 which actively targets financial services, and is remotely exploitable without authentication” — you can justify your risk decisions in a regulatory context.
It’s repeatable. The methodology doesn’t change between scan cycles. With enough automation, you can run this correlation on every scan output and produce a risk-contextualized report that updates with each new KEV addition or ATT&CK technique publication.
Final thoughts
This approach is solid — the underlying logic is well-founded and the data sources are exactly the right ones. But there are some gotchas to keep in mind if you plan to operationalize it.
The ATT&CK mapping step (Step 3) is the hardest, though it looks simple when explained. Mapping from a CVE description to a Tactic/Technique requires reading comprehension of technical prose and familiarity with the ATT&CK taxonomy. This is best done with AI agents purpose-built for the task: structured prompting works wonders compared to just "letting the AI handle it". But human review is still warranted for high-stakes assets before signing off on the prioritized list. This step is where AI agent-powered platforms like CISOGenie play a spectacular role.
The methodology assumes Asset Register quality. In practice, asset registers are almost always at least partially stale, and that staleness is what leads to inaccurately prioritized vulnerability patching. Discipline in maintaining the Asset Register is non-negotiable.
A question that often comes up when I discuss this approach with others is: "Why additional sources when you have CISA's KEV?" Because the KEV is necessary but not sufficient as the sole exploit signal. Some highly targeted vulnerabilities appear in Exploit-DB or threat actor toolkits before CISA has had time to update the catalog. Using multiple sources gets you the signal faster and lets you cross-validate it.
Finally, APT mapping requires industry context. The same TTPs may be associated with 15 different APT groups across different sectors. Filtering by sector relevance — which MITRE’s group profiles do support — sharpens the prioritization considerably. If your organization is a logistics company, prioritize APT groups known to target supply chain and logistics over groups that exclusively target governments.
The Bottom Line
Vulnerability management without threat context is just change management with extra steps. You’re processing tickets, not managing risk.
The methodology described here turns your existing scan output into something qualitatively more valuable — a risk picture that reflects who is actually likely to attack you, how they would do it, and which of your assets would feel it most. The inputs are free. The logic is transparent. The output is defensible.
The only thing it costs is the discipline to do it consistently.
To read about how CISOGenie’s Risk Profiling AI agents help with this, visit Risk Profiling Agents