Capability Maturity Model and Security Metrics

No business would survive for long if it failed to measure the results of its efforts. Without measurement, you are flying blind. The same applies to the security function. If we do not define our success criteria and we do not continually measure our work against these criteria, how do we know if we’re making progress or falling behind? 

We’re all familiar with the saying, “You can’t manage what you can’t measure.”

It’s particularly true in the security domain, where success criteria can be defined and measured. Our primary mission is stopping bad things from happening. When we succeed, nothing happens. We’re largely invisible to the rest of the corporation, and that’s how it should be. But our mission cannot succeed unless we carefully measure the effects of our work and continuously improve our responses to threats.

Without measurement, it would be difficult to tell whether we are moving the needle forward or simply marking time until the next zero-day exploit. The goal of this chapter is to discuss a range of techniques for measuring security effectiveness in a modern business environment.

Narrowing the Field

Security is a broad topic, so it’s important to narrow the field and decide what’s most important to your organization. What are your immediate goals? Are you trying to reduce the number of incidents? Or are you seeking ways to reduce friction from normal security processes? Every organization has a different mix of goals and objectives, depending on variables such as size, geography, domain and industry. To sort through the complexity, I find it useful to divide security metrics into five general categories:  

  1. Governance, Risk and Compliance (GRC)
  2. Security Engineering
  3. Security Operations
  4. Corporate IT Security
  5. Privacy Engineering

Each of these five areas will have its own set of metrics, but overall responsibility for making sure that you're measuring your security activities lies with the GRC function.

Here’s the good news: there’s no shortage of activities to measure. Practically every area in security can be accurately measured and documented. For example, almost all companies perform regular audits; large organizations perform numerous audits over the course of a typical year. How often did the security team respond to requests for evidence, and how quickly did it respond? Those are metrics you should be capturing.

When you’re doing penetration testing, you should be measuring and documenting the issues you find, since those issues may indicate problems with the quality of your static or dynamic application security testing capabilities.

Another helpful metric is mean time to remediate (MTTR), which measures how many days it took to resolve a threat or fix a vulnerability. For security teams, MTTR is a great metric: it shows, in concrete terms, that we’re doing our job. And in our line of work, threats and vulnerabilities are facts of life. There will always be gaps to close, servers to protect and software to patch. Measuring MTTR enables you to assess the overall effectiveness of your security program, so it’s an essential metric.
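As a rough illustration, here is a minimal sketch of how MTTR might be calculated from tickets exported out of a bug or incident tracker. The ticket IDs, field names and dates are hypothetical; the point is simply that once the open and resolve timestamps are captured, the arithmetic is straightforward.

```python
from datetime import datetime
from statistics import mean

# Hypothetical vulnerability tickets: when each was opened and when it was resolved.
tickets = [
    {"id": "VULN-101", "opened": "2024-03-01", "resolved": "2024-03-08"},
    {"id": "VULN-102", "opened": "2024-03-03", "resolved": "2024-03-20"},
    {"id": "VULN-103", "opened": "2024-03-10", "resolved": "2024-03-14"},
]

def days_to_resolve(ticket):
    """Number of days between opening and resolving a single ticket."""
    opened = datetime.fromisoformat(ticket["opened"])
    resolved = datetime.fromisoformat(ticket["resolved"])
    return (resolved - opened).days

# Mean time to remediate, in days, across all resolved tickets.
mttr_days = mean(days_to_resolve(t) for t in tickets)
print(f"MTTR: {mttr_days:.1f} days")  # -> MTTR: 9.3 days
```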

Let’s say the specific focus is assessing your company’s ability to build secure systems. Have you done threat modeling on the systems? Have you done static code scans? Have you done source composition analysis and dynamic application security testing? By asking these questions and documenting the answers, you are collecting valuable metrics, and gaining useful insights about your company’s security programs. And if you find issues, you can address them before they get out of control. It’s also critically important to keep track of the return on your security investment. Keeping a close eye on ROI will motivate you to find out precisely which security controls are working and which aren’t.

Going Deeper

Now let’s dig deeper into the metrics within each of the five categories. The metrics listed below do not represent the entire universe of possible measurements, but they are a reasonable sample, and a good starting point.

     1. GRC

               a. Security Project and Program Management

               b. Policy and Policy Exception Governance

               c. Security Assurance/Trust

               d. Security Awareness

               e. Security Sales Enablement

               f. Security Budget

               g. Third-Party Risk Management

     2. Security Engineering

               a. Application Security (SAST, DAST, SCA)

               b. Cloud Security

                         i. Patch and Vulnerability Management

     3. Security Operations (SecOps)

               a. Incident Management

     4. Corporate IT Security

               a. Identity and Access Management

               b. Corporate IT Security Incidents (e.g., lost badges, stolen laptops)

               c. Metrics from EDR/AV solutions

               d. Number of documents shared externally

               e. Patch Metrics - OS Level, Third-Party Apps

               f. Asset Management Metrics

               g. Network and Perimeter Security

               h. Trend Analysis

     5. Privacy Engineering

               a. Number of Data Subject Access Right (DSAR) requests

               b. Number of internal privacy incidents

               c. Number of Privacy by Design Reviews by current status

               d. Vendor Privacy Assessments for third-party vendors storing PII

You can drill down further within most of the sub-categories. For example, within Security Operations Incident Management you can measure the following (a brief calculation sketch appears after this list):

         ● Number of incidents investigated

         ● Mean time to detect

         ● Mean time to triage

         ● Mean time to resolution

         ● Performance against internal/external SLAs

         ● Number of incidents by type, e.g.:

                  ◦ Unauthorized access

                  ◦ Service disruption

                  ◦ Data breach

                  ◦ Credentials exposure

                  ◦ Privilege escalation

         ● Mean time to communication to partners/customers (external)
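To make these incident metrics concrete, here is a minimal sketch of how several of them could be derived from exported incident records. The record layout, field names and the 48-hour SLA are assumptions made for illustration; in practice the data would be pulled from your SIEM or ticketing system.

```python
from collections import Counter
from statistics import mean

# Hypothetical incident records; times are hours elapsed since the incident occurred.
incidents = [
    {"type": "Unauthorized access",  "detected_h": 2.0, "triaged_h": 3.5, "resolved_h": 30.0},
    {"type": "Credentials exposure", "detected_h": 0.5, "triaged_h": 1.0, "resolved_h": 12.0},
    {"type": "Service disruption",   "detected_h": 1.0, "triaged_h": 2.0, "resolved_h": 8.0},
    {"type": "Unauthorized access",  "detected_h": 6.0, "triaged_h": 9.0, "resolved_h": 72.0},
]

# Mean time to detect, triage and resolve, in hours.
mttd = mean(i["detected_h"] for i in incidents)
mttt = mean(i["triaged_h"] - i["detected_h"] for i in incidents)
mttr = mean(i["resolved_h"] - i["detected_h"] for i in incidents)

# Number of incidents investigated, and incidents broken down by type.
by_type = Counter(i["type"] for i in incidents)

# Performance against an assumed internal SLA of resolving within 48 hours.
sla_hours = 48
within_sla = sum(1 for i in incidents if i["resolved_h"] <= sla_hours) / len(incidents)

print(f"Incidents investigated: {len(incidents)}")
print(f"MTTD: {mttd:.1f} h, mean time to triage: {mttt:.1f} h, MTTR: {mttr:.1f} h")
print(f"Resolved within the {sla_hours}-hour SLA: {within_sla:.0%}")
print("Incidents by type:", dict(by_type))
```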

And within Application Security, you can measure the following (again, a short sketch follows the list):

         ● Security alerts from GitHub on vulnerabilities detected in code

         ● Number of security improvement stories in product (& story points)

         ● Penetration Test Remediation Metrics

         ● Percent or number of internal apps covered under security static code analysis (SAST)

         ● Percent or number of internal apps covered under dynamic application security testing (DAST)

         ● Number of vulnerabilities by type (e.g., SQL injection versus XSS)

         ● Top 5/Top 10 vulnerabilities by type

         ● Vulnerabilities by severity - Critical, High, Medium, Low, Informational

         ● Bugs per developer/team/project (can be pulled from the ticketing tool)

         ● LOC (Lines of Code) Scanned

         ● Number of files scanned

         ● Most vulnerable files

         ● Number of identified business flows

         ● Number of identified data flows

         ● Number of releases/updates covered per quarter

         ● Number of manual/automated unit tests run

         ● Vulnerabilities by state - New/To be verified, Verified, Not Exploitable, Fixed, Recurring

         ● Number of OWASP Top 10 vulnerabilities

         ● New issues, resolved issues, recurring issues per scan/month/quarter

         ● Time to remediate (can be tracked using the bug tracking tool)

         ● Trend analysis of vulnerabilities

         ● Risk Heat Map per application/business unit/department/team
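As with the incident metrics, many of these application security measurements reduce to simple aggregations over scanner findings. The sketch below assumes a hypothetical findings export and application inventory; it is not tied to any particular SAST or DAST product.

```python
from collections import Counter

# Hypothetical findings; in practice these would be exported from your SAST/DAST/SCA tools.
findings = [
    {"app": "billing",  "type": "SQL injection", "severity": "Critical", "state": "New"},
    {"app": "billing",  "type": "XSS",           "severity": "High",     "state": "Fixed"},
    {"app": "webstore", "type": "XSS",           "severity": "Medium",   "state": "New"},
    {"app": "webstore", "type": "SSRF",          "severity": "High",     "state": "Recurring"},
]

# Vulnerabilities by severity and by type (e.g., SQL injection versus XSS).
by_severity = Counter(f["severity"] for f in findings)
by_type = Counter(f["type"] for f in findings)

# Percent of internal apps covered by SAST; the inventory below is assumed for illustration.
all_internal_apps = {"billing", "webstore", "hr-portal", "reporting"}
apps_with_sast = {"billing", "webstore", "reporting"}
sast_coverage = len(apps_with_sast & all_internal_apps) / len(all_internal_apps)

print("Vulnerabilities by severity:", dict(by_severity))
print("Top vulnerability types:", by_type.most_common(5))
print(f"SAST coverage: {sast_coverage:.0%}")
```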

One Metric to Rule Them All

Security and trust are intertwined. Do we trust the overall security environment in our organization? Do we trust our data? Do we feel confident that all of our systems and processes are working as intended? Do we trust the security of our third-party vendors? Do we feel secure in an overall sense? 

Reflecting on this relationship between security and trust led me to develop a Security Confidence Score for products and applications being developed at an organization. The score aggregates multiple criteria and metrics, producing a single meta-metric that can be useful for judging the overall readiness of a service or application from a security perspective. In addition to producing a single-metric score, the process also creates opportunities to measure, learn and improve.

Here’s a quick overview of how the Security Confidence Score works (a minimal scoring sketch follows the rubric):

       1. Security Design Review (Max. Score: 2)

                             a. Score = 0 (Security Design Review not completed)

                             b. Score = 1 (Security Design Review completed with review comments open)

                             c. Score = 2 (Security Design Review completed with all review comments addressed)

       2. Static Application Security Testing: SAST (Max. Score: 3)

                             a. Score = 0 (SAST not completed)

                             b. Score = 1 (SAST completed with high/critical vulnerabilities open)

                             c. Score = 3 (SAST completed with no high/critical vulnerabilities open)

       3. Third-Party Dependency Analysis (Max. Score: 2)

                             a. Score = 0 (Third-Party Dependency Analysis not completed)

                             b. Score = 1 (Third-Party Dependency Analysis completed with high/critical vulnerabilities open)

                             c. Score = 2 (Third-Party Dependency Analysis completed with no high/critical vulnerabilities open)

       4. Dynamic Application Security Testing: DAST/Penetration Testing (Max. Score: 3)

                             a. Score = 0 (DAST not completed)

                             b. Score = 1 (DAST completed with high/critical vulnerabilities open)

                             c. Score = 3 (DAST completed with no high/critical vulnerabilities open)
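The rubric above translates directly into a small scoring function. Here is a minimal sketch that aggregates the four checks into the single score out of 10; the status names and data layout are my own choices, not part of the rubric itself.

```python
# Per-check scoring from the rubric: (score with open findings/comments, score when clean).
RUBRIC = {
    "security_design_review": (1, 2),
    "sast":                   (1, 3),
    "third_party_dependency": (1, 2),
    "dast_pentest":           (1, 3),
}

def security_confidence_score(statuses):
    """Aggregate per-check statuses ("not_done", "open_findings", "clean") into a score out of 10."""
    total = 0
    for check, (open_score, clean_score) in RUBRIC.items():
        status = statuses.get(check, "not_done")
        if status == "clean":
            total += clean_score
        elif status == "open_findings":
            total += open_score
        # "not_done" contributes 0 points.
    return total

# Example: design review and SAST are clean, dependency analysis has open critical findings,
# and DAST has not been run yet.
statuses = {
    "security_design_review": "clean",
    "sast": "clean",
    "third_party_dependency": "open_findings",
    "dast_pentest": "not_done",
}
print(security_confidence_score(statuses), "out of 10")  # -> 6 out of 10
```

A score of 10 means every check has been completed with nothing outstanding, which is exactly the condition the "Learn" guidance below looks for.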

Remember, the intent is to measure, learn and improve.

            ● Measure:

                         ◦ Ideal value: 10 (the maximum possible score is 2 + 3 + 2 + 3 = 10)

                         ◦ Frequency: Release/Feature dependent

                         ◦ Reporting: Can be done at the Project/Epic level

            ● Learn: Any value less than 10 indicates a gap in one of the parameters being used to measure the security confidence score.

            ● Improve: Remediate gaps to improve this metric 

I believe this type of meta-analysis can yield valuable insights, both inside and outside of the security function, and can contribute meaningfully to achieving the goal of building a healthy and holistic security posture.

Which Metrics Are Important to Your Organization?

As noted earlier, metrics can differ by industry and domain. Because of this variance, it’s essential to determine which metrics are most important for your company, and then to measure them carefully on an ongoing basis.

Even if your metrics are similar to another company’s, you may prioritize them differently. For example, the pharmaceutical and media/entertainment industries both value their intellectual property highly, but their approaches to security are different because they tend to face different kinds of threats. In pharma, insider threats, inadvertent transfer of data and phishing are major security concerns, while in media/entertainment, endpoint security (lost or misplaced laptops) and digital rights management are primary concerns.

As a result, the emphasis will be on different metrics. In pharma, the focus may be on employee security awareness training and intrusion testing. In media/entertainment, the focus may be on patching and endpoint security.  

When you’re talking about software supply chain security, the focus will be on third-party providers. Ironically, this is especially true for the security function, since almost all of the tools we use for vital activities, such as scanning, intrusion detection and incident management, are sourced from third parties.

No organization today, no matter what industry it’s part of, is immune from security threats. Banks, hospitals, gasoline retailers and meat processing plants have been shut down in response to malicious attacks. Metrics may not help you prevent an attack, but they will, over time, significantly contribute to your organization’s overall security health.

Security Transparency

In addition to generating actionable insights, metrics will also help you do a better job of “selling” the importance of establishing and maintaining a world-class security posture. When you share metrics, however, don’t just share the most obvious ones, such as how many systems you’ve patched or how many threats you’ve blocked.

You should also talk about how you are vetting your third-party service providers and which third-party providers you are using. Let people know, for example, that you’re using Amazon Web Services for cloud computing, SendGrid for email and Twilio for text messaging. Be transparent about your SaaS providers.

While some might argue that too much transparency can be counterproductive, it seems reasonable to share information about your service providers with your users and stakeholders. At the time of this writing, the security world is coming to terms with the Log4Shell vulnerability, and we’re only just beginning to assess the potential long-term harm that it’s likely to cause.[1] Being forthright about which providers you are using, and talking about whether those providers have been compromised, may seem risky. But your candor and transparency also may inspire higher levels of trust and confidence, which will prove beneficial in the long run.

Technical vs. Emotional 

Metrics tend to be technical. And yet they also evoke strong emotions. Metrics can inspire concern, anxiety and fear. Or they can inspire trust, engagement and confidence. As a security professional, it’s not enough for you to look at only the technical side of the metrics you’re gathering. You should also look at how they make people feel. People in different departments and different functional areas may respond differently to the same set of metrics. People in engineering may respond differently than people in operations or people in finance. Every stakeholder will feel slightly differently about the data you share. Part of your role as a security professional is putting the metrics into perspective and explaining what they mean in practical terms so that people can understand.

Lessons Learned

One of the most important lessons I have learned is that you should begin thinking about security metrics early on, during the development of a security program. Don’t let metrics become an afterthought. The biggest mistake is thinking, “I’ll build a security program and then I'll think about metrics later.” If you're taking a data-driven approach to security, you need to start collecting metrics from day one. Without metrics, you cannot measure progress and you cannot measure success.

If you're building a security program to manage risk, your program should lead to a measurable reduction in risk. The only credible way to prove your success is with metrics. That’s why we need to start thinking about metrics from the get-go, and not wait until the end of the project or the end of the quarter or the end of the year.