Foundational Security Metrics

Mean Time to Identify (MTTI) vulnerabilities in dependencies measures how quickly your SCA program discovers new threats. Calculate MTTI as the interval between a vulnerability's public disclosure and its detection by your scanners. Leading programs achieve MTTI under 24 hours through continuous scanning and real-time vulnerability feeds. Track MTTI trends to ensure your detection capabilities keep pace with the evolving threat landscape.
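The MTTI calculation can be sketched as follows. This is a minimal illustration, not a standard API: the record structure and the field names `disclosed_date` and `detected_date` are assumptions.

```python
from datetime import datetime

def calculate_mtti_hours(findings):
    """Average hours between public disclosure and scanner detection."""
    deltas = [
        (f['detected_date'] - f['disclosed_date']).total_seconds() / 3600
        for f in findings
    ]
    return sum(deltas) / len(deltas) if deltas else None

# Two hypothetical findings: detected 12 and 30 hours after disclosure
findings = [
    {'disclosed_date': datetime(2024, 3, 1, 9, 0),
     'detected_date': datetime(2024, 3, 1, 21, 0)},
    {'disclosed_date': datetime(2024, 3, 2, 0, 0),
     'detected_date': datetime(2024, 3, 3, 6, 0)},
]
print(calculate_mtti_hours(findings))  # 21.0 — above the 24-hour target? No: under it
```

In practice the disclosure timestamp would come from an advisory feed (NVD, GHSA) and the detection timestamp from your scanner's findings export.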

Mean Time to Remediate (MTTR) represents the average duration from vulnerability discovery to resolution. Segment MTTR by severity level—critical vulnerabilities should show MTTR measured in days, while low-severity issues might have MTTR of weeks or months. Compare MTTR across different teams and application types to identify where additional support or process improvements are needed. Declining MTTR trends indicate improving security responsiveness.

# Example: MTTR Calculation and Tracking
import pandas as pd

def calculate_mttr_metrics(vulnerability_data):
    """Compute MTTR statistics (in days) for each severity level."""
    mttr_by_severity = {}

    for severity in ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW']:
        severity_vulns = vulnerability_data[
            vulnerability_data['severity'] == severity
        ].dropna(subset=['remediated_date'])  # exclude still-open findings

        if not severity_vulns.empty:
            # Time from discovery to remediation, converted to days
            time_diffs = (
                severity_vulns['remediated_date'] -
                severity_vulns['discovered_date']
            ).dt.total_seconds() / 86400

            mttr_by_severity[severity] = {
                'mean': time_diffs.mean(),
                'median': time_diffs.median(),
                'p90': time_diffs.quantile(0.9),
                'count': len(time_diffs)
            }

    return mttr_by_severity

# Track trends over time
def track_mttr_trends(historical_data, time_period='monthly'):
    """Mean MTTR (days) per severity for each reporting period."""
    freq = {'weekly': 'W', 'monthly': 'M', 'quarterly': 'Q'}[time_period]
    trends = historical_data.groupby([
        pd.Grouper(key='discovered_date', freq=freq),
        'severity'
    ]).apply(lambda x: (
        x['remediated_date'] - x['discovered_date']
    ).dt.total_seconds().mean() / 86400)

    return trends

Vulnerability density metrics normalize security findings by application size or complexity. Calculate vulnerabilities per thousand lines of code (KLOC) or per hundred dependencies. This normalization enables fair comparison between small and large applications. Track density trends to measure whether code quality improves over time. Industry benchmarks suggest mature programs achieve vulnerability densities below 5 per 100 dependencies.
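The two normalizations described above reduce to simple ratios. The function names below are illustrative, and the example inputs (a 50 KLOC service with 120 dependencies and 4 findings) are hypothetical:

```python
def density_per_kloc(vuln_count, lines_of_code):
    """Vulnerabilities per thousand lines of code."""
    return vuln_count / (lines_of_code / 1000)

def density_per_100_deps(vuln_count, dependency_count):
    """Vulnerabilities per hundred dependencies."""
    return vuln_count / (dependency_count / 100)

# Hypothetical service: 4 findings, 50,000 lines, 120 dependencies
print(density_per_kloc(4, 50_000))    # 0.08 per KLOC
print(density_per_100_deps(4, 120))   # ~3.33 per 100 dependencies
```

At roughly 3.33 findings per 100 dependencies, this hypothetical service sits below the benchmark of 5 cited above; tracking the same ratios release over release shows whether that holds as the dependency tree grows.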