The Ghost of Information Disclosure

Information disclosure is a funny thing. It can be almost completely innocuous or, as in the case of Heartbleed, devastating. A new website called un1c0rn.net aims to make hacking a lot easier by letting attackers search through Heartbleed data that has been amassed in one place.

The business model is simple – 0.01 Bitcoin (around $5) for data. Searching leaves no traces on the remote server, because the data isn’t pulled from that server anymore; it lives on un1c0rn’s server. So let’s play out a sample attack.

1) Heartbleed comes out;

2) Some time in the future un1c0rn scans a site that is vulnerable and logs it;

3) A would-be attacker searches through un1c0rn and finds a site of interest;

4) Attacker leverages the information to successfully mount an attack against the target server.

In this model, the attacker’s first packets to the server in question could be the ones that compromise it. But it’s actually more interesting than that. As I was looking through the data, I found this query.

[Screenshot: un1c0rn search query]

For those of you who don’t live and breathe HTTP, this is an authorization request with a base64-encoded string (trivial to decode, as the short sketch below shows) that contains the username and password for the site in question. This one request found 400 sites with this simple flaw. So let’s play out another attack scenario.

1) Heartbleed comes out;

2) Some time in the future un1c0rn scans a site that is vulnerable and logs it;

3) The site is diligent and finds that it is vulnerable, patching immediately and switching out its SSL certificate for a new one;

4) A would-be attacker searches through un1c0rn and finds a site of interest;

5) Using the information they found, the attacker still compromises the site with the username/password, even though the site is no longer vulnerable to the original attack.
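
Step 5 is as easy as it sounds. As a minimal sketch (using a fabricated header value, not anything from un1c0rn), here is all it takes to turn a leaked Basic Authorization header back into a working username and password:

```typescript
// Decode an HTTP Basic Authorization header (Node.js/TypeScript).
// The header value below is a made-up example for illustration only.
const leakedHeader = "Authorization: Basic YWxpY2U6czNjcjN0LXAhc3N3MHJk";

// Strip the scheme, leaving just the base64 payload.
const b64 = leakedHeader.replace(/^Authorization:\s*Basic\s*/i, "");

// Base64 is an encoding, not encryption: one call reverses it.
const decoded = Buffer.from(b64, "base64").toString("utf8");
const [username, password] = decoded.split(":");

console.log(username, password); // alice s3cr3t-p!ssw0rd
```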

This is the problem with Information Disclosure – it can still be useful long after the hole that was used to gather the data has been closed. That’s why, in the case of Heartbleed and similar attacks, you not only have to fix the hole, you also have to expire all of the passwords and invalidate all of the cookies or any other token that a user could use to gain access to the system.
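
What that cleanup might look like, as a rough sketch (the store interfaces here are hypothetical stand-ins, not any particular framework’s API):

```typescript
// Hypothetical post-Heartbleed cleanup: assume an application with some kind of
// user store and session store. The interfaces below are illustrative only.
interface UserStore {
  listUserIds(): Promise<string[]>;
  forcePasswordReset(userId: string): Promise<void>;
}
interface SessionStore {
  destroyAllSessions(): Promise<void>;  // stolen session cookies stop working
  rotateSigningKeys(): Promise<void>;   // old signed/encrypted cookies can't be replayed
}

async function expireEverything(users: UserStore, sessions: SessionStore): Promise<void> {
  await sessions.destroyAllSessions();
  await sessions.rotateSigningKeys();
  for (const id of await users.listUserIds()) {
    await users.forcePasswordReset(id);
  }
}
```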

The moral of the story is that in a scenario like this you may find yourself compromised almost as if by magic. How can someone guess a cookie correctly on the first attempt? Or guess a username/password on the first try? Or exploit a hole without ever having looked at your proprietary source code or even having visited your site before? Or find a hidden path to a directory that isn’t linked to from anywhere? Well, it may not be magic – it may be the ghost of Information Disclosure coming back to haunt you.

Spooky!

SSL/TLS MITM vulnerability CVE-2014-0224

We are aware of the OpenSSL advisory posted at https://www.openssl.org/news/secadv_20140605.txt. OpenSSL is vulnerable to a ChangeCipherSpec (CCS) Injection Vulnerability.

An attacker using a carefully crafted handshake can force the use of weak keying material in OpenSSL SSL/TLS clients and servers.

The attack can only be performed between a vulnerable client *and* a vulnerable server. Desktop web browser clients (e.g. Firefox, Chrome, Internet Explorer) and most mobile browsers (e.g. Safari Mobile, Firefox Mobile) are not vulnerable, because they do not use OpenSSL. Chrome on Android does use OpenSSL and may be vulnerable. All other OpenSSL clients are vulnerable in all versions of OpenSSL.

Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.

OpenSSL 0.9.8 SSL/TLS users (client and/or server) should upgrade to 0.9.8za.
OpenSSL 1.0.0 SSL/TLS users (client and/or server) should upgrade to 1.0.0m.
OpenSSL 1.0.1 SSL/TLS users (client and/or server) should upgrade to 1.0.1h.

WhiteHat is actively working on implementing a check for sites under service. We will update this blog with additional information as it is available.

Editor’s note:
June 6, 2014
WhiteHat has added testing to identify websites currently running affected versions of OpenSSL across all of our DAST service lines. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. WhiteHat recommends that all assets, including non-web application servers and sites that are currently not under service with WhiteHat, be tested and patched.

If you have any questions regarding the new CCS Injection SSL vulnerability, please email support@whitehatsec.com and a representative will be happy to assist.

Easy Things Are Often the Hardest to Get Right: Security Advice from a Development Manager

I’m not a security guy. I haven’t done much hands-on software development for a while either. I’m a development manager, project manager, and CTO, having spent much of my career building technology for stock exchanges and central banks. About six years ago I helped to launch an online institutional trading platform in the US, where I serve as the CTO today. The reliability and integrity of our technology and operations are critically important, so we worked with some very smart people in the info sec community to make sure that we designed and built security into our systems from the start.

Through this work I got to know about OWASP, SAFECode and WASC, and met many of the people involved in trying to make the software world secure. Over the last couple of years I’ve helped out on some OWASP projects, most recently the OWASP Top 10 Proactive Controls (a Top 10 list for developers instead of security auditors) and at the SANS Institute in their SANS Analyst program. My special areas of interest right now are secure development with Agile and Lean methods, and integrating security into DevOps.

I’ve learned that preaching about security principles, “economy of mechanism” and “complete mediation” and “least common mechanism” (and whatever else Saltzer and Schroeder talked about 40 years ago) and publishing manifestos are a waste of time. There’s nothing actionable; nothing that developers can see or use or test.

I’ve also learned that you can’t legislate “security-in”. Regulatory compliance constraints and formal security policies don’t make software more secure. The best that they can do is act as leverage to get security on the table.

I am only interested in things that work, especially things that work for small teams. Because a lot of software is built by small teams and small companies like mine, and because what works for small teams can be scaled up: look at the success of Agile as it makes its way from teams to the enterprise. I want things that developers can do on their own, without the help of a security team – because small companies don’t have security teams (or even a “security guy”), and because the only way that appsec can succeed at scale is if developers make it part of their jobs.

Here’s what I’ve found works so far:

In design: trust, but verify

When designing, or changing the design, make sure that you understand your dependencies and system contracts, the systems and services and data you depend on – or that depend on you. What happens if a service or system fails or is slow or occasionally times out? Can you trust the quality of the data that you get from them? Can you trust that they will take good care of your data? How can you be sure?
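
As a small illustration of the kind of verification this implies, here is a sketch of a dependency call that bounds how long it will wait and checks what it gets back before trusting it (the pricing URL and response shape are invented for the example):

```typescript
// "Trust, but verify" for a downstream service call (Node 18+ / browser fetch).
// The endpoint and response shape are made up for illustration.
async function getQuote(symbol: string): Promise<number> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 2000); // don't wait forever on a slow dependency

  try {
    const res = await fetch(
      `https://pricing.example.com/quotes/${encodeURIComponent(symbol)}`,
      { signal: controller.signal }
    );
    if (!res.ok) throw new Error(`pricing service returned ${res.status}`);

    const body = await res.json();
    // Verify the data before letting it flow downstream.
    if (typeof body.price !== "number" || body.price <= 0) {
      throw new Error("pricing service returned malformed data");
    }
    return body.price;
  } finally {
    clearTimeout(timer);
  }
}
```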

All input is evil

Michael Howard at Microsoft has “one rule” for developers: if there’s one thing that developers can do to make applications more secure, it’s to recognize that

All input is evil unless proven otherwise

and it’s up to developers to defend against evil through careful data checking. Get to know – and love – type-checking and regex.

This is simple to explain and easy to understand. It’s something that developers can think about in design and in coding. There are testing tools that can help check on it (through taint analysis or fuzzing). It will pay back in making the system more reliable and more resilient, catching and preventing run-time errors and failures, as well as making the system more secure.
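
A minimal sketch of what that looks like in practice, using a type check plus a whitelist regex (the field names and format are invented for the example):

```typescript
// "All input is evil unless proven otherwise": validate shape and content
// before the data goes anywhere else. Field names here are illustrative.
const ACCOUNT_ID = /^[A-Z]{2}\d{8}$/; // whitelist the exact format you expect

function parseTransferRequest(input: unknown): { accountId: string; amount: number } {
  if (typeof input !== "object" || input === null) {
    throw new Error("rejected: request body is not an object");
  }
  const { accountId, amount } = input as Record<string, unknown>;

  if (typeof accountId !== "string" || !ACCOUNT_ID.test(accountId)) {
    throw new Error("rejected: accountId does not match the expected format");
  }
  const parsedAmount = Number(amount);
  if (!Number.isFinite(parsedAmount) || parsedAmount <= 0) {
    throw new Error("rejected: amount must be a positive number");
  }
  return { accountId, amount: parsedAmount };
}
```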

Write good code

Security builds on top of code quality. Bad code, code with serious bugs, code that is too hard to understand and unsafe to change, will never be secure. Look at some of the most serious recent security problems: the Apple iOS SSL bug and the Heartbleed OpenSSL bug. These weren’t failures in understanding appsec principles or gaps in design – they were caused by sloppy coding.

Identify your high-risk code: security plumbing and other plumbing, network-facing APIs, code that deals with money or control functions or sensitive data, core algorithms. Make sure that your best developers work on this code, that they have extra time to review it carefully to make sure that the code does what it is supposed to do, and that it “won’t boink” if it hits bad data or some other exception occurs. Step through the logic, check API contracts, data validation, error handling and exception handling. Spend extra time testing it.

Never stop testing

Agile and XP and TDD/ATDD and Continuous Integration have made developers responsible for testing their own software to verify that it works by writing automated tests as much as possible. This needs to be extended to security. Developers can write security unit tests and run them in Continuous Integration.
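
A security unit test is just a unit test whose input comes from an attacker’s playbook. Here is a small sketch using Node’s built-in assert module against a hypothetical escapeHtml helper:

```typescript
import assert from "node:assert";

// Hypothetical output-encoding helper under test.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Security unit tests: feed it a classic XSS payload and assert it is neutralized.
const payload = "<script>alert('xss')</script>";
assert.ok(!escapeHtml(payload).includes("<script>"), "script tags must not survive escaping");
assert.strictEqual(escapeHtml('"quoted"'), "&quot;quoted&quot;");
console.log("security unit tests passed");
```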

Find a good security testing tool – a dynamic scanner or a static analysis tool – that is affordable and fast and generates minimal noise, and make it part of your automated testing pipeline. Run it as often as possible. Bugs that are caught faster will be fixed faster and cost less to fix. Make security testing something that developers don’t have to think about – it’s always “just there.”

Automated testing is an important part of a testing program, but it isn’t enough to build good software. Like some other shops, we also do a lot of manual exploratory testing. A skilled tester working through key features of the app, pushing the boundaries, trying to understand and break things will find important bugs and tell you if the software really is ready. A good pen test is exploratory testing on steroids. For every bug they find, ask: Why is it broken? How did we miss it? What else could be wrong? Pen tests as a security gate are too little too late. But as a health check on your SDLC? Priceless.

Vulnerabilities are bugs – so fix them

Money spent on training and tools and testing is wasted if you don’t fix the bugs that you find. Security vulnerabilities are bugs – you may have to track them separately for compliance reasons, but to developers they should just be bugs, handled and prioritized and fixed like any other bug. If you have a development culture where bugs are taken seriously (Zero Bug Tolerance), and security vulnerabilities are bugs, then security vulnerabilities will get fixed. If developers aren’t fixing bugs, then you have a serious management problem that you need to take care of.

Training works – to a point

I agree that every developer and tester (and their managers) should get some training in secure development, so that everyone understands the fundamentals. But not everybody will “get it” when it comes to secure development – and not everybody has to. Adobe’s karate belt approach to appsec training makes good sense. Everybody should get white belt level awareness training. Every team should have somebody the rest of the team looks to for technical leadership and mentoring. These people are the ones who need in-depth black belt appsec training, and who can take best advantage of it. They can do most of the heavy-lifting required, help the rest of the team on security questions, and keep the team honest.

Simple is not so simple

These are the things that I’ve found that work: simple rules, simple tools, simple good practices. Make sure that at least your best developers are trained in application security. Understand trust and layering-in design. Write solid, defensive code. Test your code for security like you test it for correctness and usability. If you find security bugs, fix them.

Sounds easy. But what I’ve learned from working in software development for a long time is that the easy things are often the hardest to get right.

About Jim Bird
Jim is a development manager and CTO who has worked in software engineering for more than 25 years. He is currently the co-founder and CTO of a major US-based institutional trading service, where he manages the company’s technology organization and information security programs. Jim has worked as a consultant to IBM and to stock exchanges and banks globally. He was also the CTO of a technology firm (now part of NASDAQ OMX) that built custom IT solutions for stock exchanges and central banks in more than 30 countries. You can follow him on his blog at Building Real Software.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

A Tale of Responsible Disclosure

In the past few months I’ve taken on cycling as a new hobby, and one of the services I have been using to track routes and mileage is a popular webapp known as MapMyRide. They actually have several services that are split up per activity including MapMyFitness, MapMyHike, MapMyWalk, and MapMyRun. Several colleagues have also joined me in recreational cycling, so one day I decided to plan out a route for us to use repeatedly. After the route was created using MapMyRide, I decided to give it the quirky name Tour de <'hackers">. It would get a laugh out of any XSS hunters because most of us in application security would recognize it as a benign HTML tag that also checks for single and double quote attribute breaks.

Image1
Take a close look at the “Share via Tweet/Facebook/Email” section

Image2

Oh my…
Whelp, that’s awesome. I’ve now inadvertently done the number one thing an ethical pentester should never do, and that’s play “outside of the sandbox.” You see, companies that don’t have an extremely active security team don’t really distinguish between a security professional who may be using the application legitimately and an actual intruder who’s digging for exploits. Those of us who are paid to ethically hack into web applications and services are normally protected by various legal documents as per the contract of said penetration tests. XSS isn’t really hard to find – and truthfully I’m quite annoyed with all of its hype – but to a company it really doesn’t matter how cool or complex the exploit is so long as it’s damaging in some fashion to the business or brand. This particular one is persistent and easily wormable. If anyone remembers using MySpace (I admit that I did and accept the shame for it), they might be familiar with the MySpace Samy Worm that infected around a million profiles in less than 24 hours. Samy Kamkar’s XSS injection made you friend request him and added a little text to your profile page that said “but most of all, samy is my hero.” Silly, right? Well, what’s not silly is that his home was raided and he ended up entering a felony plea agreement that involved three years’ probation without computer use.

Let’s look at the facts. I don’t know if the makers of MapMyRide have an intrusion detection system (IDS) in place or if anything has flagged internally on this input. A quick Google search doesn’t reveal anything about them participating in Bug Bounties or Responsible Disclosure. I also did this on an account that is quite visibly linked to my personal Facebook account, so there’s a first and last name right there. All I can do now is approach them with what I’ve found, state my case and my intent, and hope that they see me as a “do-good” individual who wants to ethically disclose the issue.

Through LinkedIn I hunted down a MapMyRide co-owner’s Twitter handle; he then referred me to their VP of Engineering, Jesse Demmel. Now I can’t just email him saying that I was able to break the HTML document, because that might not warrant enough attention. I’m going to have to send them an actual proof of concept and cause more injections to go through. Thankfully (open to interpretation) there wasn’t any input validation in place at all, and we could pull up the cookies easily with <svg/onload=alert(document.cookie)>. Voilà.

Image3

Then things started getting a bit more interesting. I realized that the other domains shared this exact same template. This injection isn’t only persistent on mapmyride.com, but on the other four as well!

Image4

Image5

Image6

Image7

So, if I were to actually exploit this across all five domains from one entry point, how many people would I affect?

Here are some WolframAlpha statistics:

Image8

Based on those numbers, total page views across all sites are approximately 1,353,300 per day with 431,800 daily visitors. That’s not too shabby of a victim pool. With this injection I could make anyone who viewed any of these “routes” deactivate their account, change their profile information, change their profile pictures, steal their addresses, flip things they’ve set as private to public, have them each create support tickets to overload customer service, and so on – all while causing each victim to become a carrier of the injection, which would cause an exponential increase in infections. Basically, anything within the context of the application is now accessible to me through JavaScript execution. I then carefully worded an email to Mr. Demmel explaining the severity and impact of the vulnerability, and he responded in less than 24 hours:

“Hey Jonathan,

Sorry I didn’t get back to you sooner as I was traveling. Thanks for the details. We take this very seriously and are actively working on a fix, which we hope to push live shortly.

We have escaping turned on in Django and are using the latest version. This will require a patch and we’re working through that today (and the weekend if necessary).

I really appreciate you bringing this to our attention. We have gone through security audits in the past and have another scheduled soon.

Thanks”

The moral of this story: If you’re going to responsibly disclose, I suggest the following things:

  1. Start by contacting the highest person in the organizational chain that you can reach. If you start by contacting someone like a customer support representative, they may have no idea what you’re talking about. I’ve reported a bug once before and my first contact thought I was talking about spelling/grammar mistakes. This time I went straight to the co-owner.
  2. Don’t threaten with blog posts or say “I’ve reached out to Gawker, HuffingtonPost, or Threatpost and they’re ready to run this story, whatcha gonna do to fix it?” You’ll look like the enemy really quick and won’t be doing yourself any favors.
  3. Don’t ask for a reward. There’s a simple word for this: extortion.
  4. Simply make your intentions known. Know how to spot XSS, SQLi, or RCE by casually browsing an application that you use personally? You’re one-in-a-million (quite literally). Use your knowledge not only to report the problem, but also to tell them how they can fix it in case they are unaware. Good intentions and thoughtfulness will take you the furthest here.
  5. If you’re going to blog about it, like I’m doing here, wait until the issue is fixed and no longer exploitable (at least by the same attack vector). Asking for permission is wise as well, so that they can coordinate a release statement at the same time. Again, work with them, not against them.

Happy hacking.

Podcast: An Anthropologist’s Approach to Application Security

In this edition of the WhiteHat Security podcast, I talk with Walt Williams, Senior Security and Compliance Manager at Lattice Engines. Hear how Walt went from Anthropologist to Security Engineer, his approach to building relationships with developers, and why his biggest security surprise involved broken windshields.

Want to do a podcast with us? Sign up to become an “Unsung Hero”.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Teaching Code in 2014

Guest post by JD Glaser

“Wounds from a friend can be trusted, but an enemy multiplies kisses” – Proverbs 27:6

This proverb, over 2,000 years old, is directly applicable to all authors of programming material today. By avoiding security coverage, by explicitly teaching insecure examples, authors do the world at large a huge disservice by multiplying both the effect of incorrect knowledge and the development of insecure habits in developers. The teacher is the enemy when their teachings encourage poor practices through example, which ultimately bites the student in the end at no cost to the teacher. In fact, if you are skilled enough to be an effective teacher, the problem is worse than for poor teachers. Good teachers by virtue of their teaching skill greatly multiply the ‘kisses’ of poor example code that eventually becomes ‘acceptable production code’ ad infinitum.

So, to the authors of programming material everywhere, whether you write books or blogs, this article is targeted at you. By choosing to write, you have chosen to teach. Therefore you have a responsibility.

No Excuse for Demonstrating Insecure Code Examples

This is the year 2014. There is simply no excuse for not demonstrating and explaining secure code within the examples of your chosen language.

Examples are critical. First, examples teach. Second, examples, one way or the other, find their way into production code. This is a simple fact with huge ramifications. Once a technique is learned, once that initial impact is made upon the mind, that newly learned technique becomes the way it is done. When you teach an insecure example, you perpetuate that poor knowledge in the world and it repeats itself again and again.

Change is difficult for everyone. Pepsi learned this the expensive way. Once people accepted that Coke was It, hundreds of millions of dollars spent on really cool advertising could not change that mindset. The mind has only one slot for number one, and Pepsi has remained second ever since. Don’t keep reinforcing the idea that security comes second.

Security Should Be Ingrained, Not Added

When teaching, even if you are ‘just’ calculating points on a sample graph for those new to chart programming, if any of that data comes from user-supplied input via that simple AJAX call used to get the user going, your sample code should filter that data. When you save it, your sample code should escape it for the chosen database; when you display it to the user, your sample code should escape it for the intended display context. When your sample data needs to be encrypted, your sample code should apply modern cryptography. There are no innocent examples anymore.
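
For instance, even a toy chart tutorial can ingrain the habit. In the sketch below (the request, database call, and label element are stand-ins invented for illustration), the user-supplied points are filtered on the way in, parameterized on the way to the database, and encoded on the way out:

```typescript
// Filter user-supplied chart data before it is stored or displayed.
// Only well-formed numeric points survive; everything else is dropped.
function sanitizePoints(raw: unknown): Array<{ x: number; y: number }> {
  if (!Array.isArray(raw)) return [];
  return raw
    .map((p) => ({ x: Number(p?.x), y: Number(p?.y) }))
    .filter((p) => Number.isFinite(p.x) && Number.isFinite(p.y));
}

// Illustrative usage (request, db, and label are hypothetical stand-ins):
// const points = sanitizePoints(await request.json());                           // filter on the way in
// await db.query("INSERT INTO points (x, y) VALUES (?, ?)", [point.x, point.y]); // parameterize on the way to the database
// label.textContent = `${point.x}, ${point.y}`;                                  // let the DOM encode on the way out
```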

Security should be ingrained, not added. Programmers need to be trained to see security measures in line with normal functionality. In this way, they are trained to incorporate security measures initially as code is written, and also to identify when security measures are missing.

When security measures are removed for the sake of ‘clarity’, the student is unconsciously being taught that security makes things less clear. The student is also being trained, directly by example, to add security ‘later’.

Learning Node.js in 2014

Most of the latest Node.js books have some great material, but only one out of several took the time to teach properly, via integrated code examples, how to escape data in Node.js code that uses MySQL. Kudos to Pedro Teixeira, who wrote Professional Node.js from Wrox Press, for teaching proper security measures as an integral part of the lesson to those adapting to Node.js.

Contrast this with Node.js for PHP Developers from O’Reilly Press, where the examples explicitly demonstrate how to write Node.js code with SQL injection holes. The code in this book actually teaches the next new wave how to code wide-open vulnerabilities into servers. Considering the rapid adoption of Node.js for server applications, and the millions who will use it, this is a real problem.
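
The difference is tiny. As a sketch (using the mysql driver that most Node.js tutorials of this era rely on; the table and credentials are invented), the insecure and secure versions differ by one line:

```typescript
import mysql from "mysql"; // the driver commonly used in Node.js tutorials of this era

const connection = mysql.createConnection({
  host: "localhost",
  user: "app",
  password: "secret",   // illustrative credentials
  database: "store",
});

const userSuppliedName = "Robert'); DROP TABLE products;--";

// What the book teaches: string concatenation, wide open to SQL injection.
// connection.query("SELECT * FROM products WHERE name = '" + userSuppliedName + "'", ...);

// What it should teach: placeholders, so the driver escapes the value for you.
connection.query(
  "SELECT * FROM products WHERE name = ?",
  [userSuppliedName],
  (err, rows) => {
    if (err) throw err;
    console.log(rows);
  }
);
```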

The fact that the retired legacy method of building SQL statements through insecure string concatenation was chosen by the author as the instruction method for this book really demonstrates the power of ingrained learning. Once something is learned, for good or bad, a person keeps repeating what they know for years. Again, change is difficult.

The developers taught by these code examples will build apps that we may all use at some point. The security vulnerabilities they contain will affect everyone. It makes one wonder if you are the author whose book or blog taught the person who coded the latest Home Depot penetration. This will be code that has to be retrofitted later at great time and great expense. It is not a theoretical problem. It does have a large dollar figure attached to its cost.

For some reason, authors of otherwise good material choose to avoid teaching security in an integral way. Either they are not knowledgeable about security, in which case it is time to up their game, or they have fallen into the ‘clarity omission’ trap. This is a widespread practice, adopted by almost everyone, of declaring that since this ‘example code’ is not production code, insecure code is excusable, and therefore critical facts are not presented. This is a mistake with far-reaching implications.

I recently watched a popular video on PHP training from an instructor who is in all other respects a good instructor. Apparently, this video has educated at least 15,000 developers so far. In it, the author briefly states that output escaping should be done. However, in his very next statement, he declares that for the sake of brevity the technique won’t be demonstrated, and the examples given out as part of the training do not incorporate the technique. The opportunity to ingrain the idea that output escaping is necessary, and that it should be an automated part of a developer’s toolkit, has just been lost on the majority of those 15,000 students, because the code to which they will later turn for reference is lacking. Most, if not all, will ignore the practice until mandated by an external force.

Stop the Madness

In the real world, when security is not incorporated at the beginning, it costs additional time and money to retrofit it later. This cost is never paid until a breach is announced on the front page of a news service and someone is sued for negligence. Teachers have a responsibility to teach this.

Coders code as they are taught. Coders are primarily taught through books and blog articles. Blog articles are especially critical for learning as they are the fastest way to learn the latest language technique. Therefore, bloggers are equally at fault when integrated security is absent from their examples.

The fact is that if you are a writer, you are a trainer. You are in fact training a developer how to do something over and over again. Security should be integral. Security books on their own, as secondary topics, should not be needed. Think about that.

The past decade has been spent railing against insecure programming practices, but the question needs to be asked, who is doing the teaching? And what is being taught?

You, the writer, are at the root of this widespread security problem. Again, this is 2014 and these issues have been in the spotlight for 10+ years. This is not about mistakes, or about being a security expert. This is about the complete avoidance of the basics. What habit is a young programmer learning now that will affect the code going into service next year? And the years after?

The Truth Hurts

If you are still inadvertently training coders how to write blatantly insecure code, either through ignorance or omission, you have no business training others, especially if you are successful. If you want to teach, if you make money teaching programming, you need to stop the madness, educate yourself, and reverse the trend.

Write those three extra lines of filtering code and add that extra paragraph explaining the purpose. Teach developers how to see security ingrained in code.

Stop multiplying the sweet kiss of simplicity and avoidance. Stop making yourself the enemy of those who like to keep their credit cards private.

About JD
In his own words, JD “did a brief stint of 15 years in security. Headed the development of TripWire for Windows and Foundscan. Founded NT OBJECTives, a company to scan web stuff. BlackHat speaker on Windows forensic issues. Built the Windows Forensic Toolkit and FPort, which the Department of Justice has used to help convict child pornographers.”

Details on Internet Explorer Zero-Day Exploit

A new zero-day exploit for Internet Explorer was released on Saturday by FireEye Research Labs. At its core, the new exploit takes advantage of a known Flash technique that can be used to access memory. Memory is then corrupted in a way that completely bypasses the built-in Microsoft Windows protections. This gives the attacker full control, allowing them to run their own maliciously crafted code on the victim’s machine. Internet Explorer versions 6-11 are all currently vulnerable to attack. Details of the exploit can be found here: http://www.fireeye.com/blog/uncategorized/2014/04/new-zero-day-exploit-targeting-internet-explorer-versions-9-through-11-identified-in-targeted-attacks.html.

Since the vulnerability relies on corrupting memory through Flash, an easy mitigation technique is to simply disable Flash. In addition, if you are using a different browser, such as Firefox or WhiteHat’s Aviator, you will not be affected. There have already been known attacks exploiting the new IE vulnerability, so users are encouraged to take immediate action to mitigate their risk.

For users interested in an alternative browser to Internet Explorer, WhiteHat Aviator is now available for Windows users and can be downloaded here: https://www.whitehatsec.com/aviator/.

Apache Struts ClassLoader Vulnerability

A patch issued in March for a previously known vulnerability in Apache Struts versions 2.0.0 – 2.3.16 has been bypassed. The vulnerability allows attackers to manipulate the ClassLoader, leading to possible remote code execution and denial of service. Struts versions 2.0.0-2.3.16.1 are all currently vulnerable to attack. As of today no patch is available; however, Apache has a detailed write-up on how to mitigate the vulnerability while they work on a security patch. Details can be found at http://struts.apache.org/announce.html#a20140424

WhiteHat has added detection for the Struts ClassLoader vulnerability across all service lines. Both dynamic and static assessments have been updated and will begin testing as soon as the next scan begins.

Our Customer Success team would be happy to answer any questions you may have regarding this issue. They can be reached by emailing support@whitehatsec.com.

Editor’s Note: Apache released a patch on Saturday 4/26 that should fix the ClassLoader issue in Struts. Users are encouraged to update to Struts 2.3.16.2 immediately. Details can be found at http://struts.apache.org/announce.html#a20140424

A Security Expert’s Thoughts on WhiteHat Security’s 2014 Web Stats Report

Editor’s note: Ari Elias-Bachrach is the sole proprietor of Defensium LLC. Ari is an application security expert. Having spent significant time breaking into web and mobile applications of all sorts as a penetration tester, he now works to improve application security. As a former developer with experience in both static and dynamic analysis, he can work closely with developers to remediate vulnerabilities. He has also developed and taught secure development classes, and can help make security part of the SDLC. He is a regular conference speaker on application security. He can be found on Twitter @angelofsecurity. Given his experience and expertise, we asked Ari to review our 2014 Website Security Statistics Report, which was announced yesterday, and share his thoughts in this guest blog post.

The most interesting and telling chart, in my opinion, is the “Vulnerability class by language” chart. I decided to start by asking myself a simple question: can vulnerabilities be dependent on the language used, and if so, which vulnerabilities? I computed the standard deviation of each vulnerability class to see which ones had a high degree of variance across the different languages. XSS (13.2) and information leakage (16.4) were the two highest. In other words, those are the two vulnerabilities that are most affected by the choice of programming language. In retrospect info disclosure isn’t surprising at all, but XSS is a little interesting. The third one is SQLi, which had a standard deviation of 3.8; everything else is lower than that.
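
For readers who want to reproduce the comparison, the calculation is simply the standard deviation of a vulnerability class’s occurrence rate across languages. A sketch (the numbers below are placeholders, not figures from the report):

```typescript
// Standard deviation of a vulnerability class's rate across languages.
// The values here are placeholders for illustration, not data from the report.
const xssRateByLanguage: Record<string, number> = {
  ASP: 50, ColdFusion: 40, ".NET": 55, Java: 60, Perl: 65, PHP: 70,
};

function stdDev(values: number[]): number {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance);
}

console.log(stdDev(Object.values(xssRateByLanguage)).toFixed(1));
// A large value means the language chosen strongly influences that vulnerability class.
```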

Conclusion 1: The presence or absence of Cross-site scripting and information disclosure vulnerabilities is very dependent on the environment used, and SQLi is a little bit dependent on the environment. Everything else isn’t affected that much.

Now while it seems that frameworks can do great things with respect to security, if you live by the framework, then you die by the framework. Looking at the “Days vulnerability open by language” chart, you can see some clear outliers where it looks like certain vulnerabilities simply cannot be fixed. If the developer can’t fix a problem in code, and you have to wait for an update to the framework, then you end up with those few really high mean times to fix. This brings us to the negative consequences of relying on the framework to take care of security for us – it can limit our ability to make security fixes as well. In this case, the HTTP response splitting issue with ASP is a problem that cannot be fixed in the code; it requires waiting for the vendor to make a change, which they may or may not judge necessary.

Conclusion 2: Live by the framework, die by the framework.

Also interesting is that XSS, which has the highest variance in occurrence, has the least variance in terms of time to fix. I guess once it occurs, fixing an XSS issue is always about the same level of effort regardless of language. Honestly I have no idea why this would be, I just find it very interesting.

Conclusion 3: Once it occurs, fixing an XSS issue is always about the same level of effort regardless of language. I can’t fathom the reason why, but my gut tells me it might be important.

I found the “Remediation rate by vulnerability class” chart to be perhaps the most surprising (at least to me). I would have assumed that the remediation rates per vulnerability would be more closely correlated to the risk posed by each vulnerability; however, that does not appear to be the case. Even more surprisingly, the remediation rates do not seem to be correlated to the ease of fixing the vulnerability, as measured by the previous chart on the number of days each vulnerability stayed open. Looking at SQLi for example, the remediation rate is high in ASP, ColdFusion, .NET, and Java, and incredibly low in PHP and Perl. However, PHP and Perl were the two languages where SQLi vulnerabilities were fixed the fastest! Why would they be getting fixed less often than in other environments? XSS likewise seems to be easiest to fix in PHP, yet that’s the least likely place for it to be fixed. Perhaps some of this can be explained by a single phenomenon – in some environments, it’s not worth fixing a vulnerability unless it can be done quickly and cheaply. If it’s a complex fix, it is simply not a priority. This would lead to low remediation rates and low days-to-patch at the same time. In my personal (and purely empirical, non-scientific) experience, Perl and PHP websites tend to be put up by smaller organizations, with less mature processes, a lesser emphasis on security, and a greater focus on continuing to create new features. That may explain why many Perl and PHP vulnerabilities are either fixed fast or not at all. Without knowing more, my best guess is that many of the relationships here, while correlated, are not causal. In other words, some other force, like organizational culture, is driving both the choice of language and the remediation rate.

Conclusion 4: Remediation rates do vary across language, but the reasons seem to be unclear.

Final Conclusions
I started off with a very basic question, “does the choice of programming language matter?”, and the answer does seem to be yes. While we all know that in theory there is no vulnerability that can’t exist in a given environment, and there’s no vulnerability that can’t be fixed in any given environment, the real world rarely works as neatly as it should “in theory”. Certain vulnerabilities are more likely in certain environments, and fixes may be easier or harder to apply, which impacts their likelihood of ever being applied. There has been a lot of talk lately about moving security into the framework, and this does provide evidence that this approach can be very successful. However, it also shows the risks of this approach if the framework does not implement the right security controls in the right way.

Relative Security of Programming Languages: Putting Assumptions to the Test

“In theory there is no difference between theory and practice. In practice there is.” – Yogi Berra

I like this quote because I think it sums up the way we as an industry all too often approach application security. We have our “best practices” and our conventional wisdom of how we think things operate, what we think is “secure” and standards that we think will constitute true security, in theory. However, in practice — in reality — all too often we find that what we think is wrong. We found this to be true when examining the relative security of popular programming languages, which is the topic of the WhiteHat Security 2014 Website Statistics Report that we launched today. The data we collected from the field defies the conventional wisdom we carry and pass down about the security of .Net, Java, ASP, Perl, and others.

The data that we derived in this report puts our beliefs around application security to the test by measuring how various web programming languages and development frameworks actually perform in the field. To which classes of attack are they most prone, how often and for how long? How do they fare against popular alternatives? Is it really true that the most popular modern languages and frameworks yield similar results in production websites?

By examining these questions and approaching their answers not with assumptions, but with hard evidence, our goal is to elevate conversations around how to “build-in” security from the start of the development process by picking a language and framework that not only solves business requirements, but security requirements as well.

For example, whereas one might assume that newer programming languages such as .Net or Java would be less prone to vulnerabilities, what we found was that there was not a huge difference between old languages and newer frameworks in terms of the average number of vulnerabilities. And when it comes to remediating vulnerabilities, contrary to what one might expect, legacy frameworks tended to have a higher rate of remediation – in fact, ColdFusion bested the whole field with an average remediation rate of almost 75% despite having been in existence for more than 20 years.

Similarly, many companies assume that secure coding is challenging because they have a ‘little bit of everything’ when it comes to the underlying languages used in building their applications. In our research, however, we found that not to be completely accurate. In most cases, organizations have a significant investment in one or two languages and very minimal investment in any others.

Our recommendations based on our findings? Don’t be content with assumptions. Remember, all your adversary needs is one vulnerability that they can exploit. Security and development teams must continue to measure their programs on an ongoing basis. Determine how many vulnerabilities you have and then how fast you should fix them. Don’t assume that your software development lifecycle is working just because you are doing a lot of things; anything measured tends to improve over time. This report can help serve as a real-world baseline to measure against your own.

To view the complete report, click here. I would also invite you to join the conversation on Twitter at #2014WebStats @whitehatsec.