
Compliance Alone Is Insufficient to Justify Security Investment

It is often much easier to justify security investments that are legally or contractually mandated than to base such investments on the overall value an adequately funded information risk management group brings to the company. Given this disparity between funding “must do” tasks and overlooking “should do” tasks, security teams can take the initiative and perform the “non-essential” actions that strengthen their organizations’ security while they meet compliance requirements. This approach also gives a security group the information it needs to justify requests for risk management resources.

The compliance standards currently applicable to application security include PCI-DSS, HIPAA, FFIEC, GLBA, ISO 27001/27002, and Sarbanes-Oxley. An organization’s failure to comply with these standards can lead to fines, legal action, and sometimes even business shutdown. When executive management is faced with these possibilities, a typical conversation with the company’s security staff usually results in the conclusion that, “The company must spend $A on compliance mandate X because non-compliance with regulation X carries an estimated cost of $B.”

As obvious as it may seem that compliance requirements should convince management to allocate funds for security, the necessity for compliance is often insufficient to get management to actually make the needed investments.

Government and industry-mandated regulations frequently differ in how they impact an organization, and they typically apply differently to each one. Some organizations may be able to change specific aspects of how they do business, and thus meet mandated requirements by changing the parameters of compliance. Others, after estimating that the penalty for non-compliance will cost less than complying, may decide simply to ignore the notification to comply.

At WhiteHat we think a more realistic way to look at compliance issues is that in some instances you MIGHT get hacked, but for ignoring certain regulations you WILL get audited. In either case it is essential to understand the “historical record” of how a particular compliance standard has been applied within an industry, and then to estimate the capital and operational expenditures required for your organization to comply. This way, when management asks you to justify your request for funding based on how NOT DOING SO would impact the business, you’ll have the information immediately at hand, which will make the decision-making process far easier.
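The $A-versus-$B reasoning above can be sketched as a simple expected-cost comparison. All figures and probabilities below are invented for illustration; in practice they would come from the “historical record” of fines and breach costs in your industry.

```python
# Hypothetical sketch: weighing the cost of complying with a mandate
# against the expected annual cost of ignoring it. Every number here
# is an assumption for illustration, not real data.

def expected_noncompliance_cost(audit_probability, fine,
                                breach_probability, breach_cost):
    """Expected annual cost of non-compliance: you WILL get audited
    with some probability, and you MIGHT get hacked."""
    return audit_probability * fine + breach_probability * breach_cost

compliance_cost = 250_000          # $A: capex + opex to comply (assumed)

risk_cost = expected_noncompliance_cost(
    audit_probability=0.8,         # historical audit rate (assumed)
    fine=200_000,                  # typical fine for this regulation (assumed)
    breach_probability=0.1,        # chance of a breach this year (assumed)
    breach_cost=1_500_000,         # estimated breach impact (assumed)
)                                  # $B: expected cost of non-compliance

print(f"Comply: ${compliance_cost:,}  vs.  Ignore: ${risk_cost:,.0f}")
```

With these assumed inputs, the expected cost of ignoring the mandate exceeds the cost of complying, which is exactly the kind of hard number that makes the funding conversation with management easier.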

 

If You Want to Improve Something, Measure It

During the past five years WhiteHat Sentinel has performed comprehensive vulnerability assessments on thousands of websites, and these assessments are ongoing: continuous assessment is integral to our core values at WhiteHat. With each assessment we analyze the results, discover new facts about Web application security, share that knowledge through reports, and receive feedback from companies about which information has done the best job of helping them secure their websites.

Now that we’ve gathered this large amount of data on website security, we believe that any security professional can use the seven metrics listed below, all measurable and mostly automatable, to track the performance of their application security program and the software development lifecycle that drives it.

The Seven Key Application Security Metrics

1. Discoverability – The Attacker Profile
What is the risk of a vulnerability being discovered, and what type of attack (Opportunistic, Directed Opportunistic, or Fully Targeted) will the attacker use? Does the attacker need to be authenticated in order to breach security? Is the damage the attacker can cause within the risk tolerance of the targeted organization?

2. Exploitability – The Difficulty of Attacking
Are the site’s vulnerabilities possible to exploit? Theoretically? Possibly? Probably? Easily? Are proof-of-concept exploits available for demonstration purposes? Is it becoming more difficult for attackers to exploit issues previously identified? That is, from day to day are there fewer trivial vulnerabilities per application, and are the remaining vulnerabilities less likely to be exploited?

3. Technical and Business Impact – The Severity of the Attack
What negative impacts can the vulnerabilities have on the business, both technically and financially? If a vulnerability is exploited, is the breach communicated within the organization, and is the possible damage estimated, quickly? Once an attack is discovered, will business stakeholders and development groups within the company prioritize the risk and take action to remediate the problem accordingly?

4. Vulnerabilities-per-Input – The Attack Surface
How widespread across the codebase are the exploitable vulnerabilities, and how deep do they run? Does the codebase become more secure, or less secure, as new code and acquired code are added, as legacy code requirements increase, and as new technologies are introduced?

5. Window of Exposure – The Exposure to Attacks
How often is the organization exposed to exploitable vulnerabilities: Always, Frequently, Regularly, Occasionally, or Rarely? How effective are the quarterly or annual remediation efforts? What is the best method for shortening the window of exposure, i.e., the time at risk of compromise? Is the risk diminished by reducing the number of vulnerabilities being introduced? By increasing remediation speeds? By fixing more of the system security issues?

6. Remediation Costs – The Opportunity for Greater Savings
What are the costs of fixing the vulnerabilities that are discovered? Over time, is the cost-per-defect of remediating vulnerabilities decreasing? Is it possible to decrease “lost opportunity costs,” i.e., to reduce the time that developers spend fixing security issues rather than working on new business projects? And how can the efficiency of developers’ remediation efforts be improved?

7. Defeating Vulnerabilities – The Focus on Identifying Their Source
What type of code or which business unit is introducing the vulnerabilities? Are the vulnerabilities internally developed, third-party developed, from acquired software, etc.? When multiple vulnerabilities are discovered, where should attention be focused first? Can the level of software security be improved through developer training, contractual obligations, software acceptance policies, etc.?
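Several of the metrics above can be computed directly from a vulnerability log. The sketch below derives Window of Exposure (metric 5), a remediation rate, and Vulnerabilities-per-Input (metric 4) from a few sample records; the record schema and the numbers are hypothetical assumptions, not a WhiteHat Sentinel format.

```python
# Minimal sketch: computing three of the seven metrics from an
# assumed vulnerability log (opened/closed dates, inputs tested).
from datetime import date

vulns = [
    {"opened": date(2011, 1, 3),  "closed": date(2011, 2, 1)},
    {"opened": date(2011, 1, 10), "closed": None},            # still open
    {"opened": date(2011, 3, 5),  "closed": date(2011, 3, 20)},
]
inputs_tested = 120   # e.g. form fields and URL parameters assessed

closed = [v for v in vulns if v["closed"]]

# Metric 5, Window of Exposure: average days a fixed vulnerability stayed open
avg_exposure = sum((v["closed"] - v["opened"]).days for v in closed) / len(closed)

# Remediation rate: share of discovered vulnerabilities fixed so far
remediation_rate = len(closed) / len(vulns)

# Metric 4, Vulnerabilities-per-Input: attack-surface density
vulns_per_input = len(vulns) / inputs_tested

print(avg_exposure, remediation_rate, vulns_per_input)
```

Tracked release over release, these numbers answer the questions the metrics pose: whether the window of exposure is shrinking, whether remediation is keeping pace with discovery, and whether new code is adding attack surface faster than it is being secured.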

“To Improve the Process, Measure the Process”

In summary, corporate security teams often tell us that their businesses need to become much more concerned about their Web application security. I’m sure many of you readers can relate to such comments. At WhiteHat we’ve learned that the three primary obstacles to establishing a successful website security program are: (1) compliance issues, (2) company policies, and (3) a lack of data to support the need for greater security.

While compliance issues and company policies may sometimes work to get management’s attention, I believe that data – hard numbers – work much more effectively.

When management understands how secure – or insecure – its websites are, how secure they have been in the past, and how the company’s current Web security compares to the security of its competitors, then almost certainly the resistance to devoting more resources to Web security will diminish. Furthermore, there will be new respect within management for both the Web security services provided and the teams providing them.

As the old saying goes, “If you want to improve something, measure it,” because almost anything that can be measured will improve. Which of the seven metrics discussed here are you measuring today? Which of these metrics are you currently unable to measure at all? Are any important metrics missing from this list? Or maybe you have new ones to add!

PROTIP: Publish Security Scoreboards Internally

Scoreboards have been around forever, used to show who’s winning, how competitors rank, and sometimes to track what has transpired. Scoreboards appear in sports, video games, stock markets, box office sales, traffic analytics, education, and on and on. As a fundamental concept, scoreboards also have the powerful ability to harness a basic human instinct: competitiveness. Leaders at the top of a scoreboard will naturally work to preserve their position, those further down are innately compelled to fight to move up, and collectively all participants are driven toward a common objective. Harnessing this influence, many organizations have found that scoreboards that measure and communicate “security” objectives can be amazingly effective at aligning business interests.

Achieving similar success requires first choosing a useful and collectable set of security metrics in areas where the organization would like to improve; anything measured tends to improve. These metrics may be the total number of vulnerabilities, remediation rates and speed, vulnerabilities-per-input, the percentage of developers passing awareness training, time exposed to serious issues, and so on. Next, start collecting data. When enough has been gathered, format the results properly, typically organized by subsidiary, business unit, or team, and publish the reports internally for all to see. Security scoreboard leaders will be proud to see their performance recognized as they set the standard for coworkers to follow. Laggards will feel pressure to do what is necessary to close the gap with their peers. Security teams will have to chase down the weakest links less and less; those needing the most help will begin seeking them out.
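The collect-format-publish process above can be sketched as a simple ranking. The business unit names, figures, and tie-breaking rule below are invented for illustration; any of the metrics listed earlier could serve as the score.

```python
# Hypothetical internal security scoreboard: rank business units by
# remediation rate, breaking ties with average days-to-fix. All unit
# names and figures are assumptions for illustration.

units = {
    "Payments":    {"found": 40, "fixed": 36, "avg_days_to_fix": 12},
    "Storefront":  {"found": 55, "fixed": 33, "avg_days_to_fix": 30},
    "Internal IT": {"found": 20, "fixed": 18, "avg_days_to_fix": 45},
}

def score(stats):
    # Higher remediation rate ranks first; faster fixes break ties.
    return (stats["fixed"] / stats["found"], -stats["avg_days_to_fix"])

scoreboard = sorted(units.items(), key=lambda kv: score(kv[1]), reverse=True)

for rank, (name, stats) in enumerate(scoreboard, start=1):
    rate = stats["fixed"] / stats["found"]
    print(f"{rank}. {name}: {rate:.0%} fixed, {stats['avg_days_to_fix']}d avg to fix")
```

Published regularly, even a table this simple gives leaders a position to defend and laggards a visible gap to close, which is the competitive instinct the scoreboard is meant to harness.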