Tag: Information Security Strategy

  • Human Risk Surface Quantification in Cybersecurity

    Understanding and measuring the “human risk surface” – the risk introduced by human behavior, errors, or malicious insiders – has become a critical frontier in enterprise cybersecurity. Recent studies show that the majority of breaches involve a human element (e.g. 82% of breaches in 2021 involved people via stolen credentials, phishing, or errors). In response, a new wave of solutions and research is emerging to quantify human-centric cyber risk (often via user risk scores or profiles) and proactively reduce it. Below, we explore existing commercial solutions, relevant academic research, and emerging trends and gaps in this domain.

    Existing Commercial Solutions (Human-Centric Risk Scoring Tools)

    Many security vendors and startups now offer products to measure and manage human risk, often assigning risk scores to users based on their behaviors or likelihood of insider incidents. These solutions are typically aimed at mid-to-large enterprises and integrate with corporate systems (email, endpoints, identity, etc.) to gather behavioral signals. Below are notable examples, with brief descriptions and limitations:

    • Elevate Security – A pioneer in workforce risk quantification, Elevate creates an individual “Human Risk Score” for each employee, much like a credit score. It aggregates data on factors like phishing susceptibility, unsafe browsing, password habits, and past security incidents to continually update each user’s risk level (the first sketch after this list gives a toy version of this kind of weighted aggregation). These scores integrate with security controls so that high-risk users automatically get extra protections (adaptive authentication, stricter policies, etc.). Limitation: Elevate focuses primarily on unintentional insider risk (employees likely to make mistakes) rather than detecting already-malicious insiders. It requires integrating many data feeds for full visibility, which can be complex in practice.
    • Living Security – This security awareness company introduced a Human Risk Index (HRI) in 2022 that quantifies user risk levels. The HRI uses a Bayesian network model to combine “hundreds of predefined criteria” from integrated data sources and sorts users into five risk tiers. It serves as the quantification component of a broader Human Risk Management program, converging technical and behavioral data to identify users who pose higher risk and to measure improvements over time. Limitation: As with similar platforms, the accuracy of the risk scoring depends on the quality and breadth of integrated data. It’s primarily used to inform training and targeted interventions, so it may not directly prevent an insider incident unless paired with enforcement tools.
    • OutThink and CybSafe – OutThink (UK) and CybSafe are examples of “Cybersecurity Human Risk Management (HRM) platforms” that evolved out of security awareness training programs. They go beyond tracking training completion to measure attitudes, behaviors, and security habits of users. For instance, OutThink’s platform gathers metrics on people’s knowledge, behavior (e.g. phishing click rates, reporting rates), and cultural factors to quantify human risk exposure for the organization. This allows security teams to see how human risk levels change over time and to tailor training or controls accordingly. Limitation: These platforms mostly address negligent or unaware behavior (the “oops” factor). They rely on users interacting with training or simulated attacks, so malicious insiders who intentionally evade detection may not be captured. Also, HRM scoring lacks a universal standard – each vendor has its own model, making cross-organization comparisons difficult.
    • Proofpoint (People-Centric Security) – Proofpoint is a major enterprise security vendor that has championed a “people-centric” approach. Their tools (email security, DLP, insider threat management) correlate threats and user behavior to identify risky users. For example, Proofpoint identifies Very Attacked People (VAPs) – users who are highly targeted by external threats – and provides visibility into those individuals. Its Information Protection suite brings together context across content, user behavior, and threats to give people-centric insight into data loss and insider threats. In practice, this means Proofpoint’s Insider Threat Management (from its ObserveIT acquisition) monitors user activity (like file movements, anomalous data access) and ties it with threat intel (phishing attempts, malware targeting that user) to flag high-risk users. Limitation: Proofpoint’s approach is powerful in detecting ongoing risky behavior, but as a traditional security suite it may be reactive – e.g. alerting after a risky action occurs. It also focuses mainly on digital signals (email, files) and might not incorporate softer human factors (training, security awareness levels) into its risk scoring. Still, it’s widely adopted by large enterprises (over 75% of the Fortune 100 use Proofpoint’s people-centric security solutions), indicating strong real-world traction.
    • Microsoft Insider Risk Management (Purview) – Microsoft 365 includes an Insider Risk Management solution designed for large organizations. It aggregates signals from M365, Windows, and Azure and uses machine learning to detect anomalous user actions that could indicate insider incidents. For example, it looks for patterns like a user downloading hundreds of sensitive files, copying files to USB, emailing out confidential data, or disabling security controls. When thresholds are exceeded, it generates risk alerts (with options to anonymize user identities initially to avoid bias). Limitation: Microsoft’s tool is tuned to catch insider data theft or policy violations in progress; it may not explicitly produce a persistent “risk score” for each user, but rather case-by-case risk incident scores. It’s very useful for catching employees preparing to quit and take data or those snooping where they shouldn’t. However, it doesn’t measure broader security behaviors like phishing test performance or compliance with best practices – it’s focused on insider misuse. Also, as Microsoft emphasizes, there are privacy considerations – the system is built with privacy in mind (anonymized alerts, assuming positive intent), but organizations must balance monitoring with maintaining a healthy culture.
    • Code42 Incydr – Code42’s Incydr is an insider risk management SaaS used in many enterprises to detect and respond to data leaks (especially from remote or departing users). It gives organizations deep visibility into file movements by users and uses 120+ risk indicators (file metadata, user activity context, etc.) to prioritize risky events. Incydr essentially assigns higher risk scores to user actions that suggest data exfiltration – e.g. an engineer uploading source code to a personal cloud drive would be flagged with a high risk indicator. The platform’s dashboard highlights users with the most risk events, and it can automate response actions (like alerting security, blocking transfers). Limitation: Incydr is focused on data exposure risks; it excels at spotting file leaks but is less about other human risks like credential phishing or poor security habits. It’s an example of a “partial” human risk scoring tool – addressing insider data theft specifically. Also, like other endpoint monitoring tools, it requires deploying agents and can be resource-intensive, which some mid-size firms find challenging (though Code42 aims to balance security and employee productivity).
    • User Behavior Analytics (UEBA) Platforms – A number of SIEM/analytics vendors provide UEBA modules (e.g. Exabeam, Securonix, Splunk UBA, IBM QRadar UBA, Gurucul). These analyze logs and user activity patterns to detect anomalies, often outputting a risk score for each user or entity. For instance, a UEBA system will learn the normal behavior of each user over time (logon times, accessed resources, typical data volume) and then increase a user’s risk score when it sees deviations, like accessing an unusual server at 2am or a surge in file downloads (the second sketch after this list shows a toy version of this baseline-and-deviation scoring). If a user’s score crosses a threshold, an alert is generated for investigation. These tools are valuable for catching both malicious insiders and compromised accounts by looking for behavioral red flags. Limitation: False positives can be an issue – not every anomaly is malicious (e.g. a new project might explain unusual access). Tuning is required to make the risk scoring accurate. Additionally, UEBA products typically don’t incorporate human context like security training or user attitude; they focus on digital behavior. They are most often used by large enterprises with a mature SOC, since they require integrating many data sources and skilled analysts to interpret the results.
    • Adaptive Authentication & Identity Risk Engines – While not full HRM solutions, even identity and access management tools now include user risk scoring in a narrow sense. For example, Microsoft Entra (Azure AD) Identity Protection and Okta’s Adaptive MFA use ML to assign a risk level to each login or user account based on suspicious signals (impossible travel, unfamiliar device, known credential compromise, etc.). Users with high risk are forced to re-authenticate or have account access restricted. This helps prevent account takeover by quantifying authentication risk. Limitation: This only addresses account compromise risk, not the user’s own propensity to cause incidents. It’s a siloed score (low/medium/high risk sign-in) used to trigger security actions, and doesn’t reflect other behaviors. So while it’s a form of human risk quantification, it’s narrowly focused on logins rather than holistic security risk of a person.
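
    To make the scoring ideas above concrete, here is a minimal sketch of an Elevate- or Living-Security-style score: a weighted combination of behavioral signals, bucketed into five tiers. The signal names, weights, and tier cut-offs are illustrative assumptions, not any vendor’s actual model.

    ```python
    # Toy human risk score: weighted signals -> 0-100 score -> five tiers.
    # All weights and cut-offs are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class UserSignals:
        phishing_click_rate: float   # 0..1: share of simulated phish the user clicked
        unsafe_browsing_events: int  # blocked-site hits in the last 90 days
        weak_password_flags: int     # password reuse/compromise findings
        past_incidents: int          # confirmed incidents involving the user

    # Assumed weights; a real program would calibrate these against incident data.
    WEIGHTS = {
        "phishing_click_rate": 40.0,
        "unsafe_browsing_events": 2.0,
        "weak_password_flags": 5.0,
        "past_incidents": 10.0,
    }

    def risk_score(s: UserSignals) -> float:
        """Combine behavioral signals into a 0-100 score (higher = riskier)."""
        raw = (WEIGHTS["phishing_click_rate"] * s.phishing_click_rate
               + WEIGHTS["unsafe_browsing_events"] * s.unsafe_browsing_events
               + WEIGHTS["weak_password_flags"] * s.weak_password_flags
               + WEIGHTS["past_incidents"] * s.past_incidents)
        return min(100.0, raw)

    def risk_tier(score: float) -> str:
        """Bucket scores into five tiers, echoing the five-level HRI idea."""
        for upper, label in [(20, "very low"), (40, "low"), (60, "moderate"), (80, "high")]:
            if score < upper:
                return label
        return "very high"

    user = UserSignals(phishing_click_rate=0.3, unsafe_browsing_events=4,
                       weak_password_flags=1, past_incidents=1)
    score = risk_score(user)
    print(f"score={score:.1f}, tier={risk_tier(score)}")  # score=35.0, tier=low
    ```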
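
    And a companion sketch of the UEBA pattern: learn each user’s own baseline, then add risk points when today’s activity deviates sharply from it. The z-score threshold and point values are likewise assumptions, not any product’s logic.

    ```python
    # Toy UEBA-style anomaly scoring against a per-user baseline.
    import statistics

    def anomaly_points(history_mb: list[float], today_mb: float,
                       z_threshold: float = 3.0) -> int:
        """Risk points for today's download volume vs. the user's own baseline."""
        mean = statistics.fmean(history_mb)
        stdev = statistics.pstdev(history_mb) or 1.0  # guard against zero variance
        z = (today_mb - mean) / stdev                 # std-devs above normal
        if z < z_threshold:
            return 0                                  # within normal variation
        return int(min(50, 10 * (z - z_threshold + 1)))  # capped contribution

    # A user who normally moves ~100 MB/day suddenly pulls 2 GB:
    history = [90, 110, 95, 105, 100, 98, 102]
    print(anomaly_points(history, today_mb=2000))  # 50 (capped): worth an alert
    ```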

    Note: In practice, enterprises often use a combination of these solutions. For example, a company might use an HRM platform (like Elevate or Living Security) to guide security awareness efforts and identify “most at-risk” employees for extra training, plus a UEBA/insider threat tool (like Microsoft IRM or Code42) to catch active insider incidents. Each tool has scope limitations, and integration of outputs remains a challenge. Nonetheless, all these solutions indicate a growing industry trend to treat people as measurable risk factors that can be scored and managed, rather than just training recipients.

    Academic and Research Work on Human-Centric Risk

    The academic and cybersecurity research community has recognized the need for human-centric risk measurement frameworks and has been exploring models to quantify and reduce human cyber risk. Key contributions and studies since 2021 include:

    • Human-Centric Risk Management (HRM) Frameworks: Kioskli et al. (2025) proposed an enhanced risk assessment methodology explicitly incorporating human factors. They point out that mainstream standards like ISO 27001 or NIST SP 800-30 have “often overlooked the human element” in risk calculation. Their methodology introduces the concept of user profile maturity – essentially assessing how practiced and aware users are in security – as an input to risk calculations. Social controls (security awareness programs, training, behavioral interventions) are included in the risk treatment phase alongside technical controls. This work highlights that an organization’s risk is not just about system vulnerabilities, but also the security hygiene and resilience of its people. It’s particularly relevant as a framework for SMEs, but the concept applies broadly: quantify human vulnerabilities (like poor cyber hygiene, low training, susceptibility to social engineering) and factor that into overall risk assessments (a toy version of this adjustment appears in the sketch after this list).
    • Insider Threat Modeling and Profiling: Researchers have continued to study insider threats and how to predict or detect them. Modern approaches often leverage machine learning on user activity logs, but also emphasize psychological and behavioral cues. For instance, Pollini et al. (2022) took a holistic human-factors approach in a healthcare pilot and found that “a better cybersecurity culture does not always correspond with more rule-compliant behavior”. In other words, even if an organization has high security awareness in theory, practical conflicts and pressures can lead to unsafe actions. This aligns with the idea that we need direct measurement of behavior (e.g., who actually violates policies or ignores updates) rather than assumptions. Other academic efforts (e.g., by CERT at Carnegie Mellon) have produced best-practice frameworks rather than quantitative scores – CERT’s Common Sense Guide to Mitigating Insider Threats (7th Edition, 2022) compiles lessons learned to reduce insider risk. While not a scoring system, it provides a knowledge base that can inform what indicators or behaviors should be tracked in a human risk model (e.g. frequent policy violations, disgruntlement signs, etc.).
    • Behavioral Risk Metrics & Security Culture Research: There is emerging research on how to reliably measure security behavior and culture. The SANS Institute’s 2023 “Managing Human Risk” report emphasizes that humans are the primary attack vector and that security awareness teams need to move toward measuring risk reduction, not just training completion. It underlines a shift in mindset: instead of only asking “did employees take training?”, organizations ask “are employees’ risky behaviors decreasing?”. Academic studies have attempted to correlate training with risk reduction – for example, a 2021 doctoral study found a statistically significant (though modest) positive correlation between employees’ risk scores and the effectiveness of an AI-based security awareness training program. This suggests training can lower measured risk (e.g. fewer mistakes by trained users), but also shows the relationship is complex (r ≈ 0.15 in that study, indicating other factors at play). Researchers are also building databases of specific security behaviors (see CybSafe’s SebDB, a catalog of human security behaviors) to standardize what to measure. All of this academic work contributes to establishing metrics and models for human risk – a necessary foundation for tools that assign risk scores.
    • Public Standards and Frameworks: While no universal standard for a “human cybersecurity risk score” exists yet, there are efforts to codify insider risk management. In 2021, the U.S. government (CISA in partnership with CMU) released an Insider Risk Management Program Evaluation (IRMPE) framework mapping insider risk program best practices to the NIST Cybersecurity Framework. This provides a structured way for organizations to assess how well they’re addressing human/insider risk (governance, training, monitoring, response, etc.). It’s not a scoring tool, but a maturity model. Similarly, NIST IR 8286 (2020) and its companion documents have started bridging enterprise risk management with cybersecurity risk, hinting at integrating people risks into overall risk registers. We also see interest from the insurance sector and academia in quantifying the “cyber risk of people” for underwriting – e.g. how likely is a given role’s error to lead to a breach, and what would it cost? (Elevate Security even noted cyber insurers are looking to use human risk scores to inform policies.) In summary, academic and standards work is laying the groundwork for rigorous human risk quantification by defining relevant human risk indicators, frameworks to incorporate them, and preliminary evidence that managing these metrics can reduce incidents.
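
    As a rough illustration of the user-maturity idea (not Kioskli et al.’s actual formula), one can fold a maturity term into the classic likelihood × impact calculation. The scaling function and the human_weight constant below are assumptions:

    ```python
    # Toy "human-factor-adjusted" risk: low user maturity inflates likelihood.
    def adjusted_risk(likelihood: float, impact: float, user_maturity: float,
                      human_weight: float = 0.5) -> float:
        """All inputs in [0, 1]. Immature security habits inflate the effective
        likelihood of a human-mediated event; full maturity leaves it unchanged."""
        human_factor = 1.0 + human_weight * (1.0 - user_maturity)
        effective_likelihood = min(1.0, likelihood * human_factor)
        return effective_likelihood * impact

    # Same phishing scenario, two user populations:
    print(adjusted_risk(0.4, 0.8, user_maturity=0.9))  # practiced users -> ~0.34
    print(adjusted_risk(0.4, 0.8, user_maturity=0.2))  # low maturity    -> ~0.45
    ```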

    Emerging Trends and Gaps

    Trends in Human-Centric Risk Quantification:

    • Human Risk Management (HRM) as a Strategy: There is a clear trend towards treating human security risk as its own managed surface, often termed “Human Risk Management”. This goes beyond annual training to continuous monitoring, feedback, and intervention based on user risk levels. Gartner and industry analysts in recent years have highlighted HRM as the next evolution of security awareness. The approach is proactive: identify risky users and mitigate their risk before they cause an incident. Techniques like real-time “nudges” are being used – for example, popping up a warning or coaching message when an employee is about to do something risky (click a suspicious link, share data publicly). This targeted, behavior-driven intervention is a big shift from one-size-fits-all education. Early adopters (often forward-thinking CISOs) report that such programs help optimize security investments by focusing on the riskiest individuals.
    • Data-Driven Insights and AI: The use of AI/ML to quantify human risk is on the rise. Vendors are leveraging machine learning for anomaly detection (as in UEBA and Microsoft’s insider risk ML algorithms) and even Bayesian inference models (Living Security’s HRI). The massive amount of user activity data available – from email logs to badge entries – enables this. For instance, products analyze “an array of signals” from many sources to pinpoint risky behavior that a human analyst might miss. AI is also helping correlate disparate events into a meaningful risk story, e.g. combining HR data like an employee’s resignation notice with IT data like file downloads to raise an insider risk alert (a toy version of this correlation appears in the sketch after this list). On the flip side, concerns about AI ethics are noted: with one report finding that 68% of breaches in 2023 involved a non-malicious human element, solutions must be careful not to unfairly punish well-intentioned employees. We see vendors emphasizing privacy protections (e.g. anonymization, data masking) as they roll out AI-driven monitoring. The use of AI will continue to grow, especially to scale human risk analysis in large enterprises, but it must be paired with governance to avoid over-surveillance.
    • Enterprise Adoption and Executive Focus: Human cyber risk has jumped to the top of the agenda for security leaders and boards. Multiple surveys in the past two years underscore this. For example, Mimecast’s 2025 State of Human Risk report found that security leaders overwhelmingly cite human error – not technology gaps – as their biggest challenge. EY’s 2022 Human Risk Survey highlighted that younger employees (Gen Z, Millennials) are significantly more likely to ignore security protocols – e.g. ~58% of Gen Z admit to delaying security updates vs only 15% of Baby Boomers. This generational shift is driving companies to invest in continuous education and risk scoring to identify those risky behaviors. We also see integration across security departments: insider risk programs now involve not just security ops, but HR, legal, and compliance teams to handle the human element (as Microsoft did when developing its insider risk solution). The result is more budget and attention going into human-centric controls. Many large enterprises are building dedicated “human risk” dashboards for executive reporting – showing metrics like “phishing click rate trend” or “number of high-risk users this quarter” – something unheard of a few years ago. The fact that cyber insurers and auditors are asking about human risk metrics now further indicates this is a lasting trend.
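
    The cross-source correlation trend is easy to picture in code. Below is a toy rule that escalates when an HR signal (a recent resignation) and an IT anomaly (a download spike) coincide; the event shapes and thresholds are assumptions for illustration:

    ```python
    # Toy insider-risk correlation: HR event + IT anomaly -> escalated alert.
    from datetime import date, timedelta

    def insider_alert(resignation_date: date | None,
                      downloads_by_day: dict[date, int],
                      today: date,
                      spike_threshold: int = 500) -> str | None:
        """Escalate when a departing user also shows a recent download spike."""
        recent_spike = any(count >= spike_threshold
                           for day, count in downloads_by_day.items()
                           if today - day <= timedelta(days=14))
        departing = (resignation_date is not None
                     and resignation_date >= today - timedelta(days=30))
        if departing and recent_spike:
            return "HIGH: departing user with bulk downloads - open a case"
        if recent_spike:
            return "MEDIUM: download spike - review context"
        return None

    today = date(2024, 5, 20)
    downloads = {date(2024, 5, 18): 800, date(2024, 5, 10): 40}
    print(insider_alert(date(2024, 5, 15), downloads, today))  # HIGH alert
    ```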

    Gaps and Challenges:

    Despite progress, several gaps exist in human risk quantification:

    • Lack of Standard Metrics: There is currently no universally accepted scale or benchmark for human cyber risk. Each vendor and research group defines its own “risk score” formula. One company might rate users on a 0-100 scale, another with A-F grades, another as low/medium/high. This makes it hard to compare or validate scores externally (the normalization sketch after this list illustrates the ad-hoc translation this forces). Efforts like academic frameworks and the sharing of behavior datasets aim to move toward standard metrics, but we’re not there yet. For example, the concept of a human risk “credit score” is frequently mentioned, yet unlike financial credit scores, there’s no FICO-equivalent in security. This also means organizations often have to trust vendor black-box algorithms without clear calibration. A related issue is measuring the ROI – linking a change in user risk score to a reduction in actual incidents is challenging (though some early data is promising, as noted in training studies and vendor case studies).
    • Privacy and Ethical Concerns: Quantifying human risk inherently means monitoring people’s behaviors, which raises privacy and ethics questions. If done poorly, programs can erode trust or feel like “employee surveillance.” Enterprises struggle to find the balance between security and privacy. The Microsoft case study explicitly noted the need to “assume positive intent” and respect privacy when implementing insider risk monitoring. Some regions (EU, etc.) have stricter laws that may limit how far user monitoring can go. There’s also the risk of stigmatizing employees – if someone is labeled “high risk” due to clicking phishing emails, how is that information used? Leading practices suggest keeping individual risk scores confidential and using them constructively (training, support) rather than for punishment. This is an evolving area – frameworks for ethically using human risk scores lag behind the technology.
    • Focus on Behavior vs. Intent: Most current tools excel at observing what users do (click a link, download a file) and rating the risk of those actions. However, determining why – the intent or motivation – is much harder. An employee with a high number of risky actions might be clumsy or poorly trained, not malicious. The human context (stress, fatigue, job dissatisfaction) is not easily captured by technical sensors. This gap means risk scoring systems can sometimes misjudge insiders. Research like the DTEX approach tries to address this by gathering rich metadata for context, but it’s not foolproof. In short, human risk quantification is not deterministic – a high score is a warning sign, not absolute proof of a future breach, and security teams must investigate further. Bridging quantitative scores with qualitative investigation is an ongoing challenge.
    • Siloed Solutions and Integration: As seen above, there are point solutions focusing on different facets of human risk (phishing training vs. data exfiltration vs. credential theft). Enterprises lack a unified view. A truly comprehensive “human risk dashboard” that pulls in all these inputs is still mostly aspirational. Some platforms (e.g. Elevate) aim to be a one-stop hub by integrating with many other tools, but if an organization doesn’t use that platform, they might have disparate reports from training platforms, SIEM, DLP, etc. This fragmentation is a gap – it’s difficult to aggregate an organization-wide human risk surface quantification. We anticipate more consolidation and integration as the market matures (indeed, companies like Mimecast and Proofpoint are acquiring/partnering to cover both training and insider threat, etc.). For now, security teams often have to manually correlate human risk data from multiple sources.
    • Mid-Market Adoption and Resources: While our focus is on mid-to-large enterprises, it’s worth noting that smaller organizations (SMBs) are generally behind in this area. They often don’t have the dedicated staff or budget for fancy human risk platforms or UEBA systems. This creates a gap in the overall ecosystem – many breaches hit mid-sized companies that may still rely only on basic training and IT controls. Even in large enterprises, human risk quantification is a relatively new practice, often not staffed with dedicated analysts (unlike, say, vulnerability management, which is well-established). There’s a skills gap in understanding and acting on these new metrics. The security industry will need to develop more expertise and perhaps services to help organizations operationalize human risk scoring (similar to how managed security services help run SIEMs).
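
    To see why the lack of standard metrics (first bullet above) bites in practice, here is a toy normalization of three incompatible vendor scales onto one axis. The mappings are arbitrary assumptions, which is precisely the problem: without a standard, every organization invents its own.

    ```python
    # Toy normalization of heterogeneous vendor risk scales onto [0, 1].
    def to_unit_scale(source: str, value) -> float:
        """Map each tool's native scale onto [0, 1], where 1 = riskiest."""
        if source == "numeric_0_100":   # e.g. a 0-100 risk score
            return float(value) / 100.0
        if source == "letter_grade":    # A (best) .. F (worst)
            return {"A": 0.0, "B": 0.25, "C": 0.5, "D": 0.75, "F": 1.0}[value]
        if source == "tier":            # low / medium / high
            return {"low": 0.2, "medium": 0.5, "high": 0.9}[value]
        raise ValueError(f"unknown source scale: {source}")

    # One user as seen by three tools, combined pessimistically (take the max):
    readings = [("numeric_0_100", 62), ("letter_grade", "C"), ("tier", "high")]
    combined = max(to_unit_scale(s, v) for s, v in readings)
    print(f"combined risk: {combined:.2f}")  # 0.90, driven by the 'high' tier
    ```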

    In summary, human risk surface quantification is an emerging but rapidly growing field. Both vendors and researchers recognize that improving technology alone isn’t enough – we must manage the people aspect of security with the same rigor. Commercial solutions are now providing the tools to score and track human risk in enterprises, and academic work is giving those efforts a foundation in risk models and psychology. The trends point to more data-driven, individualized approaches to reduce human cyber risk, moving away from checkbox training to truly measuring and managing behavior change. Addressing the current gaps – standardization, privacy, holistic integration – will be key to making human-centric risk quantification a reliable pillar of cybersecurity programs in the years ahead.

    Sources:

    • Elevate Security press release on workforce risk scoring; Businesswire (2022)
    • Living Security whitepaper on Human Risk Index (2022)
    • OutThink blog – Metrics for Cybersecurity Human Risk Management (2023)
    • Proofpoint Protect 2022 conference news – people-centric innovations
    • Microsoft Insider Risk Management overview (2021)
    • Teramind blog on Code42 Incydr (2024)
    • ManageEngine explainer on UEBA risk scoring
    • Reddit summary of Azure AD risky users (2023)
    • Kioskli et al., Electronics 14(3):486 (2025) – Practical HRM Methodology
    • Pollini et al., Cognition, Technology & Work 24:371 (2022) – Human factors in cybersecurity
    • SANS Institute report “Managing Human Risk 2023”
    • Research study on training vs. risk scores (2021)
    • CISA/CERT Insider Risk Management Program Evaluation (IRMPE), NIST CSF crosswalk (Oct 2021)
    • Cisco Investments blog – “Elevate Security: User Risk” (2021)
    • Infosecurity Magazine (via Hipther) – Why HRM is Next Step (May 2024)
    • EY Human Risk in Cybersecurity Survey (2022)
    • Mimecast “State of Human Risk 2025” report
    • DTEX Systems blog – GigaOm 2021 UEBA report (behavior risk scoring)

    Note:
    This blog is powered by deep research conducted with ChatGPT’s advanced research capabilities. The findings pull together insights from real-world cybersecurity tools, academic papers, and industry trends, focused on Human Risk Surface Quantification.