Is It Really True That Application Security has “Best-Practices”?

Application Security professionals commonly advocate for “best-practices” with little regard for the operational environment. The implication of a “best-practice” is that it is essential for everyone, in every organization, at all times. Commonly cited “best-practices” include, but are not limited to, software security training, security testing during QA, threat modeling, code reviews, architecture analysis, Web Application Firewalls (WAFs), penetration testing, and a hundred other activities.

Sure, all these “best-practices” sound like good ideas. And any good-sounding “application security” idea must be a “best-practice,” right? Not so fast. Watch this, I’ll create a new “best-practice” right now: Website Bug Bounty programs. See how easy that was! Google does it. So does PayPal, Mozilla, Facebook, and Etsy. Now everyone must do it! Seriously though, more thoughtfulness is required.

After more than a decade working in the Application Security industry, I’m fairly confident there are few, if any, “best-practices”: that is, activities universally effective and investment-worthy no matter the environment. I’m convinced that different application security activities, including those listed above, are best suited to different scenarios. In fairness, I must admit I have only a limited amount of data backing up my assertion, but at the same time it’s probably more data than those blindly advocating “best-practices” have to offer.

I’d like to share my thought process and support my opinions by laying out a few example scenarios. Each describes the vulnerability statistics of a real-world website gathered over a period of time. These scenarios may be familiar to anyone with more than a few years of application security experience.

Website A
Produces a high rate of new vulnerabilities, about 20 each month, largely dominated by Cross-Site Scripting (XSS). When vulnerabilities are identified in production, the development team fixes nearly every one in under 24 hours. The net result is that Website A is an industry laggard in vulnerability volume, but a leader in time-to-fix and remediation rate.
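To make the two metrics concrete, here is a minimal sketch of how time-to-fix and remediation rate might be computed from vulnerability records. The data is entirely made up for illustration; a real program would pull these records from a scanner or ticketing system.

```python
from datetime import date

# Hypothetical vulnerability records: (opened, closed); closed is None if
# the vulnerability is still open. Illustrative data only.
vulns = [
    (date(2013, 1, 3), date(2013, 1, 4)),
    (date(2013, 1, 10), date(2013, 1, 10)),
    (date(2013, 2, 1), None),
    (date(2013, 2, 15), date(2013, 2, 16)),
]

fixed = [(o, c) for o, c in vulns if c is not None]

# Remediation rate: share of all identified vulnerabilities that got fixed.
remediation_rate = len(fixed) / len(vulns)

# Time-to-fix: average days between identification and fix, over fixed vulns.
avg_time_to_fix_days = sum((c - o).days for o, c in fixed) / len(fixed)

print(f"remediation rate: {remediation_rate:.0%}")            # 75%
print(f"average time-to-fix: {avg_time_to_fix_days:.1f} days")  # 0.7 days
```

Tracked on an ongoing basis, these two numbers are exactly what distinguish the scenarios that follow.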

Given this scenario, and if you knew nothing else about the organization responsible for the website, what application security “best-practices” would you recommend they adopt first to improve their security posture?

My Recommendations
• Look into the development framework in use. Perhaps the output encoding APIs are nonexistent, imperfect, not well advertised to developers internally, and/or their use is not strictly mandatory.
• Create, or improve upon, an internal security QA process to catch the XSS vulnerabilities prior to production release.
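On the first point, the missing piece is usually a contextual output-encoding API that is easy (or mandatory) to use. A minimal sketch using the Python standard library’s `html.escape`; the `render_comment` function is hypothetical, standing in for whatever templating layer the framework provides:

```python
import html

def render_comment(user_input: str) -> str:
    # Contextual output encoding: HTML-encode untrusted data before it
    # reaches the page. When the framework makes this the default, whole
    # classes of reflected and stored XSS never ship.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Modern template engines apply this kind of escaping automatically; a framework where developers must remember to call it by hand tends to produce exactly the steady drip of XSS described above.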

Addressing either one of these potential gaps could seriously curtail the XSS problem, quickly too! How close were your recommendations to mine? The same, way different, or somewhere in between? Let’s hear it in the comments.

Which “best-practices” would you not recommend, at least initially?

I’d recommend holding off on:
• Threat modeling. It’s great for spotting missing security controls during the application design phase, but given the type of vulnerabilities in this case (XSS), and the organization’s ability to patch them quickly, the problem appears to be on the implementation side rather than the design side.
• Software security training. The statistics show XSS vulnerabilities are fixed quickly when identified, indicating the development team knows what the issue is and how to fix it. Training might still help, but only if the programmers have been provided effective anti-XSS tools and for whatever reason aren’t using them consistently.
• WAFs. These can act as virtual patches for websites experiencing lengthy time-to-fix metrics and low remediation rates, but that’s not what the metrics are showing us here.

In the next statistics scenario, the situation is slightly varied…

Website B
Experiences roughly 100 serious vulnerabilities a year, a large portion of which are XSS, but a nontrivial number of SQL Injection, Insufficient Authentication, and Insufficient Authorization issues also exist. When vulnerabilities are brought to the attention of the development team, the Insufficient Authentication and Insufficient Authorization issues are fixed consistently and comprehensively within a week or two. On the other hand, XSS and SQL Injection issues remain exposed for several months and often do not get fixed at all.

Given these statistics, what “best-practices” would you recommend first? The same as Website A or different?

Recommendations
• Insufficient Authentication and Insufficient Authorization issues are low in volume and taken care of quickly, but the more esoteric vulnerability classes such as XSS and SQL Injection are not. This suggests developer education is lacking in the latter two classes. To address this, focus a software security training program on XSS and SQL Injection, placing emphasis on understanding the risks and proper defensive coding techniques.
• Deploy a Web Application Firewall, particularly a product capable of automatically integrating vulnerability results to create virtual-patch rules. This is important so that while the developers come up to speed on XSS and SQL Injection, the IT operations team can be engaged to help mitigate the risks they pose in a timely fashion.
• While Insufficient Authentication and Insufficient Authorization issues are fixed quickly, the mere fact that they exist suggests a Threat Modeling process could be helpful, along with some security QA to test for various abuse cases.
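For the defensive coding techniques such training would cover, the canonical SQL Injection fix is query parameterization. A brief sketch using Python’s `sqlite3` module, with an illustrative schema and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name: str):
    # Parameterized query: the driver binds `name` as data, never as SQL,
    # so input like "' OR '1'='1" cannot alter the statement's structure.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # [('alice', 'admin')]
print(find_user(conn, "' OR '1'='1"))  # [] — the injection attempt matches nothing
```

Contrast this with string concatenation into the SQL text, which is precisely the habit training needs to break; the same placeholder discipline applies to any database driver.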

Comment below if you agree or disagree.

Website C
Very few vulnerabilities, maybe only one XSS per month, make it through the software development life-cycle (SDLC) to production. When XSS issues do get fixed, they are fixed extremely quickly, taking no more than a day. But most vulnerabilities don’t get remediated at all. In fact, Website C’s annual remediation rate is under 25%.

With these statistics in mind, what “best-practices” should be prioritized? The same as Website A and Website B, or different?

My initial assumption would be that the organization has a vulnerability prioritization problem and/or a lack of development resources, where investment in revenue-generating features is placed ahead of security fixes. And for them, perhaps that’s the right decision. If so…

Recommendations
• In the near-term, as with Website B, deploy a Web Application Firewall with virtual-patch integration capability. If the vulnerabilities are to remain publicly exposed in the code for an extended period of time, as a business decision, the bar to exploitation should be pushed higher.
• For a longer-term plan, engage an experienced consulting firm that specializes in code remediation. That way the organization’s core development team can stay focused on driving features while the current pool of vulnerabilities is simultaneously taken care of.
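To give a feel for what a virtual patch is, here is a toy WSGI middleware that blocks requests matching a known-vulnerable parameter until the real code fix ships. This is a sketch only, nothing like how commercial WAFs are built, and the `/report` endpoint and `id` parameter rule are hypothetical:

```python
import re
from urllib.parse import parse_qs

class VirtualPatch:
    """Toy middleware illustrating a virtual patch: reject exploit-shaped
    input to a known-vulnerable endpoint without touching application code."""

    # Hypothetical rule: a scan found SQL Injection in the `id` parameter
    # of /report, so only purely numeric values are allowed through.
    RULE = ("/report", "id", re.compile(r"^\d+$"))

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        path, param, allowed = self.RULE
        if environ.get("PATH_INFO") == path:
            query = parse_qs(environ.get("QUERY_STRING", ""))
            for value in query.get(param, []):
                if not allowed.match(value):
                    start_response("403 Forbidden",
                                   [("Content-Type", "text/plain")])
                    return [b"Blocked by virtual patch"]
        # Clean request: hand off to the unmodified application.
        return self.app(environ, start_response)
```

The operational appeal is that IT operations can deploy and later remove such a rule on its own schedule, decoupled from the development team’s release cycle, which is exactly the gap Website C’s metrics expose.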

Hold-Off
• The metrics show that the development team generates very few vulnerabilities, probably because they are well-versed in XSS and other attack classes, have the proper tools, and/or have an effective QA process. If so, it also means investing in software security training for developers or QA process improvement probably isn’t going to make a big impact on the poor remediation rate.

Agree or disagree with the guidance? Sound off in the comments.

 

We could continue forever discussing all the possible statistical scenarios one might encounter in application security. I maintain that application security defense must be adaptive, situationally aware, and in line with the interests of the business. Therefore, while a “best-practice” might sound like a good idea, it must be applied at the right time and place. This is a big reason why customers of WhiteHat Security measurably improve their security posture over time. As the saying goes, anything measured tends to improve.

The ability to capture vulnerability statistics on an ongoing basis provides invaluable insight into how to prioritize the organizational actions with the best chance of improving outcomes. Without knowing ahead of time where the gaps are in an application security program, what else can be done except guess which “best-practice” might help? Unfortunately, that is exactly what most do. When you don’t have to guess, an organization can continue investing in activities that are measurably returning value, divest from those that aren’t, and better apply its scarce application security resources.


About Jeremiah Grossman

Jeremiah Grossman is the Founder and interim CEO of WhiteHat Security, where he is responsible for Web security R&D and industry outreach. Over the last decade, Mr. Grossman has written dozens of articles and white papers, and is a published author. His work has been featured in the Wall Street Journal, Forbes, NY Times, and hundreds of other media outlets around the world. As a well-known security expert and industry veteran, Mr. Grossman has been a guest speaker on six continents at hundreds of events, including TED, BlackHat Briefings, RSA, SANS, and others. He has been invited to guest lecture at top universities such as UC Berkeley, Stanford, Harvard, UW Madison, and UCLA. Mr. Grossman is also a co-founder of the Web Application Security Consortium (WASC) and was previously named one of InfoWorld's Top 25 CTOs. He serves on the advisory board of two hot start-ups, Risk I/O and SD Elements, and is a Brazilian Jiu-Jitsu Black Belt. Before founding WhiteHat, Mr. Grossman was an information security officer at Yahoo!