Category Archives: Technical Insight

A Security Expert’s Thoughts on WhiteHat Security’s 2014 Web Stats Report

Editor’s note: Ari Elias-Bachrach is the sole proprietor of Defensium LLC. Ari is an application security expert. Having spent significant time breaking into web and mobile applications of all sorts as a penetration tester, he now works to improve application security. As a former developer with experience in both static and dynamic analysis, he works closely with developers to remediate vulnerabilities. He has also developed and taught secure development classes, and can help make security part of the SDLC. He is a regular speaker in the field of application security at conferences, and can be found on Twitter @angelofsecurity. Given his experience and expertise, we asked Ari to review our 2014 Website Security Statistics Report, which was announced yesterday, and share his thoughts as a guest blog post.

The most interesting and telling chart, in my opinion, is the “Vulnerability class by language” chart. I decided to start by asking myself a simple question: can vulnerabilities be dependent on the language used, and if so, which vulnerabilities? I calculated the standard deviation of each vulnerability class across the different languages to see which classes had a high degree of variance. XSS (13.2) and information leakage (16.4) were the two highest. In other words, those are the two vulnerabilities most affected by the choice of programming language. In retrospect, info disclosure isn’t surprising at all, but XSS is a little interesting. The third is SQLi, with a standard deviation of 3.8; everything else is lower than that.
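The spread calculation is easy to reproduce. Here is a minimal sketch; the per-language occurrence rates below are made-up illustrative numbers, not the report's actual data:

```python
from statistics import pstdev

# Hypothetical occurrence rates (% of sites affected) per language for each
# vulnerability class -- illustrative numbers only, not the report's data.
occurrence_by_class = {
    "XSS":                 [60, 48, 35, 70, 55, 42],
    "Information Leakage": [65, 30, 55, 72, 40, 58],
    "SQLi":                [11, 7, 9, 15, 8, 10],
}

# A high standard deviation across languages means the choice of language
# strongly influences how often that vulnerability class shows up.
spread = {vuln: round(pstdev(rates), 1)
          for vuln, rates in occurrence_by_class.items()}

for vuln, sd in sorted(spread.items(), key=lambda kv: -kv[1]):
    print(f"{vuln}: stddev {sd}")
```

With the toy numbers above, XSS and information leakage come out with far larger spreads than SQLi, mirroring the pattern described in the chart.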

Conclusion 1: The presence or absence of Cross-site scripting and information disclosure vulnerabilities is very dependent on the environment used, and SQLi is a little bit dependent on the environment. Everything else isn’t affected that much.

Now while it seems that frameworks can do great things with respect to security, if you live by the framework, then you die by the framework. Looking at the “Days vulnerability open by language” chart, you can see some clear outliers where it looks like certain vulnerabilities simply cannot be fixed. If the developer can’t fix a problem in code, and you have to wait for an update to the framework, then you end up with those few really high mean times to fix. This brings us to the negative consequence of relying on the framework to take care of security for us: it can limit our ability to make security fixes as well. In this case, the HTTP response splitting issue with ASP is a problem that cannot be fixed in the code, but requires waiting for the vendor to make a change, which they may or may not judge necessary.

Conclusion 2: Live by the framework, die by the framework.

Also interesting is that XSS, which has the highest variance in occurrence, has the least variance in terms of time to fix. I guess once it occurs, fixing an XSS issue is always about the same level of effort regardless of language. Honestly, I have no idea why this would be; I just find it very interesting.

Conclusion 3: Once it occurs, fixing an XSS issue is always about the same level of effort regardless of language. I can’t fathom the reason why, but my gut tells me it might be important.

I found the “Remediation rate by vulnerability class” chart to be perhaps the most surprising (at least to me). I would have assumed that the remediation rates per vulnerability would be closely correlated to the risk posed by each vulnerability; however, that does not appear to be the case. Even more surprisingly, the remediation rates do not seem to be correlated to the ease of fixing the vulnerability, as measured by the previous chart on the number of days each vulnerability stayed open. Looking at SQLi for example, the remediation rate is high in ASP, ColdFusion, .NET, and Java, and incredibly low in PHP and Perl. However, PHP and Perl were the two languages where SQLi vulnerabilities were fixed the fastest! Why would they be getting fixed less often than in other environments? XSS likewise seems to be easiest to fix in PHP, yet that’s the least likely place for it to be fixed. Perhaps some of this can be explained by a single phenomenon: in some environments, it’s not worth fixing a vulnerability unless it can be done quickly and cheaply. If it’s a complex fix, it is simply not a priority. This would lead to low remediation rates and low days-to-patch at the same time. In my personal (and purely empirical, non-scientific) experience, Perl and PHP websites tend to be put up by smaller organizations with less mature processes, a lesser emphasis on security, and a greater focus on continuing to create new features. That may explain why many Perl and PHP vulnerabilities are either fixed fast or not at all. Without knowing more, my best guess is that many of the relationships here, while correlated, are not causal. In other words, some other force, like organizational culture, is driving both the choice of language and the remediation rate.

Conclusion 4: Remediation rates do vary across language, but the reasons seem to be unclear.

Final Conclusions
I started off with a very basic question: “does choice of programming language matter?” The answer does seem to be yes. While we all know that in theory there is no vulnerability that can’t exist in a given environment, and no vulnerability that can’t be fixed in any given environment, the real world rarely works as neatly as it should “in theory”. Certain vulnerabilities are more likely in certain environments, and fixes may be easier or harder to apply, which impacts their likelihood of ever being applied. There has been a lot of talk lately about moving security into the framework, and this does provide evidence that that approach can be very successful. However, it also shows the risks of this approach if the framework does not implement the right security controls in the right way.

Relative Security of Programming Languages: Putting Assumptions to the Test

“In theory there is no difference between theory and practice. In practice there is.” – Yogi Berra

I like this quote because I think it sums up the way we as an industry all too often approach application security. We have our “best practices” and our conventional wisdom of how we think things operate, what we think is “secure” and standards that we think will constitute true security, in theory. However, in practice — in reality — all too often we find that what we think is wrong. We found this to be true when examining the relative security of popular programming languages, which is the topic of the WhiteHat Security 2014 Website Statistics Report that we launched today. The data we collected from the field defies the conventional wisdom we carry and pass down about the security of .Net, Java, ASP, Perl, and others.

The data that we derived in this report puts our beliefs around application security to the test by measuring how various web programming languages and development frameworks actually perform in the field. To which classes of attack are they most prone, how often and for how long? How do they fare against popular alternatives? Is it really true that the most popular modern languages and frameworks yield similar results in production websites?

By examining these questions and approaching their answers not with assumptions, but with hard evidence, our goal is to elevate conversations around how to “build-in” security from the start of the development process by picking a language and framework that not only solves business requirements, but security requirements as well.

For example, whereas one might assume that newer programming languages such as .Net or Java would be less prone to vulnerabilities, what we found was that there was not a huge difference between old languages and newer frameworks in terms of the average number of vulnerabilities. And when it comes to remediating vulnerabilities, contrary to what one might expect, legacy frameworks tended to have a higher rate of remediation – in fact, ColdFusion bested the whole field with an average remediation rate of almost 75% despite having been in existence for more than 20 years.

Similarly, many companies assume that secure coding is challenging because they have a ‘little bit of everything’ when it comes to the underlying languages used in building their applications. In our research, however, we found that not to be completely accurate. In most cases, organizations have a significant investment in one or two languages and very minimal investment in any others.

Our recommendations based on our findings? Don’t be content with assumptions. Remember, all your adversary needs is one vulnerability that they can exploit. Security and development teams must continue to measure their programs on an ongoing basis. Determine how many vulnerabilities you have and then how fast you should fix them. Don’t assume that your software development lifecycle is working just because you are doing a lot of things; anything measured tends to improve over time. This report can help serve as a real-world baseline to measure against your own.

To view the complete report, click here. I would also invite you to join the conversation on Twitter at #2014WebStats @whitehatsec.

WhiteHat Security Observations and Advice about the Heartbleed OpenSSL Exploit

The Heartbleed SSL attack is one of the most significant, and media-covered, vulnerabilities affecting the Internet in recent years. According to Netcraft, 17.5% of SSL-enabled sites on the Internet were vulnerable to the Heartbleed SSL attack just prior to its disclosure.

The vulnerable versions of OpenSSL were first released to the public 2 years ago. The implications of 2 years of exposure to this vulnerability are significant, and we will explore that more at the end of this article. First – immediate details:

This attack does not require any MitM (Man in the Middle) or other complex setups that SSL exploits usually require. This attack can be executed directly and anonymously against any webserver/device running a vulnerable version of OpenSSL, and yield a wealth of sensitive information.

WhiteHat Security’s Threat Research Center (TRC) began testing customers for vulnerability to this attack immediately, using a custom SSL-testing tool from our TRC R&D labs. Our initial conclusion was that the frequency with which sites were vulnerable to this attack was low on production applications being monitored by the WhiteHat Sentinel security service. In our first test sample, covering 18,000 applications, we found a vulnerability rate of about 2% – much lower than Netcraft’s 17.5% vulnerability rate for all SSL-enabled websites. This may be the result of a biased sample, since we only evaluated applications under Sentinel service: application owners who use Sentinel are likely already more security-conscious than most.

While the frequency of vulnerability to the Heartbleed SSL attack is low, those that are vulnerable are severely vulnerable. Everything in memory on the webserver/device running a vulnerable version of OpenSSL is exposed. This includes:

  • UserIDs
  • Passwords
  • PII (Personally Identifiable Information)
  • SSL certificate private keys (if compromised, SSL using this cert will never be private again)
  • Any private crypto keys
  • SSL-based Chat
  • SSL-based VPN
  • SSL-based anything
  • Connection Strings (Database userIDs/Passwords) to back-end databases, mainframes, LDAP, partner systems, etc. loaded in memory
  • Additional vulnerabilities in source code that are loaded into memory, with precise locations (e.g., blind SQL injection)
  • Basically any code or data in memory on the server running a vulnerable version of OpenSSL is rendered in clear-text to all attackers

The most important thing you need to know regarding the above: this attack leaves no traces. No logs. Nothing suspect. It is a nice, friendly, read-only anonymous dump of memory. This is the digital equivalent of accidentally leaving a copy of your corporate diary, with all sorts of private secrets, on the seat of the mass-transit bus for everyone to read – including strangers you will never know. Your only recourse for secrets like passwords now is the delete key.

The list of vulnerable applications published to date includes mainstream web and mobile applications, security devices with web interfaces, and critical business applications handling sensitive and federally-regulated data.

This is a seriously sub-optimal situation.

Timeline of the Heartbleed SSL attack and WhiteHat Security’s response:

April 7th: Heartbleed 0day attack published. Websites vulnerable to the attack included major websites/email services that the majority of the Internet uses daily, and many mobile applications that use OpenSSL.

April 8th: WhiteHat TRC begins testing customers’ sites for vulnerability to Heartbleed SSL exploitation. In parallel, WhiteHat R&D begins QAing new automated tests to enable Sentinel services to identify this vulnerability at scale.

April 9th: WhiteHat TRC identifies roughly 350 websites vulnerable to the Heartbleed SSL attack, out of an initial sample of 18,000 production applications. We find an initial 1.9% average vulnerability rate, but this percentage drops rapidly over the next 48 hours.

April 10th: WhiteHat R&D releases new Sentinel Heartbleed SSL vulnerability tests, enabling Sentinel to automatically identify if any applications under Sentinel service are vulnerable to Heartbleed SSL attacks with every scan. This brings test coverage to over 35,000 applications. Average vulnerability rate drops to below 1% by EOD April 10th.

Analysis: the more applications we scan using the new Heartbleed SSL attack tests, the fewer sites (by percent) we find vulnerable to this attack. We suspect this is because most customers have moved quickly to patch this vulnerability, due to the extreme severity of the vulnerability and the intense media coverage of the issue.

Speculation: we suspect that this issue will quickly disappear for most important SSL-enabled applications on the Internet – especially applications under some type of active DAST or SAST scanning service. It will likely linger on with small sites hosted by providers that do not offer (or pay attention to) any form of security patching service.

We also expect this issue to persist with internal (non-Internet-facing) applications and devices that use SSL, but which are commonly not tested or monitored by tools or services capable of detecting this vulnerability, and that are less frequently upgraded.

While the attack surface of internal network applications & devices may appear to be much smaller than Internet-facing applications, simple one-click exploits are already available on the Internet, usable by anyone on your network with access to a web browser.

This means that any internal user on your network who downloads one of these exploits is capable of extracting everything in memory from any device or application on your internal network that is vulnerable. This includes:

  • Internal routers and switches
  • Firewalls and IDS systems
  • Human Resources applications
  • Finance and payroll applications
  • Pretty much any application or device running a vulnerable version of OpenSSL
  • Agents exploiting internal devices will see all network traffic or application data in memory on the affected device/application

Because most of these internal applications and devices lack the type of logging and alerting that would notify Information Security teams of active abuse of this vulnerability, our concern is that in coming months these internal devices and applications may provide rich grounds for exploitation that may never be discovered.

Conclusion & Recommendations:
The scope, impact, and ease-of-exploitation of this vulnerability make it one of the worst in Internet history. However, patches are readily available for most systems, and the impact risk of patching seems minimal. It appears most of our customers have already patched the majority of their Internet-facing systems against this exploit.

However, this vulnerability has existed for up to 2 years in OpenSSL implementations. We will likely never know if anyone has been actively exploiting this vulnerability, due to the difficulty in logging/tracking attacks. If your organization is concerned about this, we recommend that — in addition to patching this vulnerability — SSL certificates of suspect systems be revoked and re-issued. We also recommend that all end-user passwords and system passwords be changed on suspect systems.

Is this vulnerability a “boy who cried wolf” hype situation? Bruce Schneier has another interesting perspective on this.

WhiteHat will continue to share more information about this vulnerability as further information becomes available.


Heartbleed OpenSSL Vulnerability

Monday afternoon a flaw in the way OpenSSL handles the TLS heartbeat extension was revealed and nicknamed “The Heartbleed Bug.” According to the OpenSSL Security Advisory, “a missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” The flaw creates an opening in SSL/TLS which an attacker could use to obtain private keys, usernames/passwords and content.

OpenSSL versions 1.0.1 through 1.0.1f, as well as 1.0.2-beta1, are affected. The recommended fix is to upgrade to 1.0.1g and to reissue certificates for any sites that were using compromised OpenSSL versions.
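The affected-version rule is simple enough to sketch in a few lines. The ranges come from the advisory above; the string parsing is my own simplification and ignores exotic version strings:

```python
def is_vulnerable(version: str) -> bool:
    """Rough check of an OpenSSL version string against the advisory:
    1.0.1 through 1.0.1f and 1.0.2-beta1 are affected; 1.0.1g is fixed,
    and the 0.9.8 / 1.0.0 branches never had the heartbeat bug."""
    if version == "1.0.2-beta1":
        return True
    if version.startswith("1.0.1"):
        suffix = version[len("1.0.1"):]
        # "1.0.1" (no letter) and letters a-f are vulnerable; g+ are patched.
        return suffix == "" or suffix in set("abcdef")
    return False

print(is_vulnerable("1.0.1f"), is_vulnerable("1.0.1g"))
```

A check like this is only a triage aid; distribution vendors also backport the fix without bumping the letter, so a banner-based version test can produce false positives.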

WhiteHat has added testing to identify websites currently running affected versions. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. These tests are currently being run across all of our clients’ applications, and we expect to reach full coverage of all applications under service within the next two days. WhiteHat also recommends that all assets, including non-web application servers and sites that are currently not under service with WhiteHat, be tested. Several tools have been made available to test for open issues, including online testing tools, tools on GitHub, and standalone scripts.

If you have any questions regarding the Heartbleed Bug, please contact us and a representative will be happy to assist. The OpenSSL Security Advisory provides further reference for the Heartbeat extension.

Raising the CSRF Bar

For years, we at WhiteHat have been recommending tokenization as the number one protection from Cross Site Request Forgery (CSRF). Just having a token is not enough, of course, as it must be cryptographically strong, significantly random, and properly validated on the server. I can’t stress that last point enough: the first thing I try when I see a CSRF token is to empty the value out and see if the form submit is accepted as valid.

Historically the bar for “good enough” when it comes to CSRF tokens was if the token was changed at least per session.

For example: A user logs in and a token is generated & attached to their authenticated session. That token is used for that user for all sensitive/administrative forms that are to be protected against CSRF. As long as when the user is logged out and logged back in they received a new token and the old one was invalidated, that met the bar for “good enough.”

Not anymore.

Thanks to some fellow security researchers with far stronger crypto chops than mine (Angelo Prado, Neal Harris, and Yoel Gluck) and their latest attack on SSL, which they dubbed BREACH, that no longer suffices.

BREACH is an attack on HTTP Response compression algorithms, like gzip, which are very popular. Make your giant HTTP responses smaller, which makes them faster for your user? No brainer. Well now, thanks to the BREACH attack, within around 1000 requests an attacker can pull sensitive SSL-encrypted information from HTTP Responses — sensitive information such as… CSRF Tokens!
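The intuition behind BREACH can be shown in a few lines: DEFLATE compresses repeated substrings, so when an attacker-controlled value is reflected into the same compressed response that contains a secret, a guess that matches the secret makes the response shorter. The token value and page body below are of course invented:

```python
import zlib

SECRET = "csrf=9f8e7d6c"  # hypothetical token embedded in every response

def compressed_length(reflected_guess: str) -> int:
    # The attacker controls `reflected_guess` (e.g. via a query parameter);
    # the response body also contains the secret CSRF token.
    body = f"<a href='?q={reflected_guess}'>next</a> <input value='{SECRET}'>"
    return len(zlib.compress(body.encode(), 9))

# A guess sharing a longer prefix with the secret compresses better,
# letting the attacker recover the token byte by byte over many requests.
print(compressed_length("csrf=9f8e"), compressed_length("csrf=abcd"))
```

Real BREACH must also contend with Huffman-coding noise and block boundaries, but the length side channel above is the core of the attack.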

The bar must be raised. We no longer allow tokens to stay the same for an entire session if response compression is enabled. To combat this attack, CSRF tokens need to be generated for every request/response; this way they cannot be stolen by making 1,000 requests against a consistent token.


  • Old “good enough” – CSRF Tokens unique per session
  • BREACH (BlackHat 2013)
  • New “good enough” – CSRF Tokens must be unique per request if HTTP Response compression is being used.
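A minimal per-request token scheme might look like the sketch below. This is illustrative only; real frameworks wire this into form rendering and middleware, and the dict-based session stands in for whatever session store you use:

```python
import hmac
import secrets

def issue_token(session: dict) -> str:
    """Mint a fresh, cryptographically random token for a single
    request/response cycle and remember it server-side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_token(session: dict, submitted: str) -> bool:
    """Reject missing/empty tokens outright, compare in constant time,
    and consume the stored token so it can never be replayed."""
    expected = session.pop("csrf_token", None)
    if not submitted or not expected:
        return False
    return hmac.compare_digest(expected, submitted)
```

Note that the empty-string check also defeats the blank-token trick mentioned earlier, and popping the stored value means each token is good for exactly one submission.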

Adding Open Source Framework Hardening to Your SDLC – Podcast

I talk with G.S. McNamara, Federal Information Security Senior Consultant, about fixing open source framework vulnerabilities, what to consider when pushing open source, how to implement a system around patches without impacting performance, and security considerations on framework selections.

Want to do a podcast with us? Signup to be part of our Unsung Hero program.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Top 10 Web Hacking Techniques 2013

Every year the security community produces a stunning number of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack. Now in its eighth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent work. Past Top 10s and the number of new attack techniques discovered in each year:
2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51) and 2012 (56).

Phase 1: Open community voting for the final 15 [Jan 23-Feb 3]
Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked in each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to ascertain the top 15 overall. Comment with your vote!
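The scoring rule is straightforward to implement; here is a sketch with two hypothetical two-entry ballots (entry names chosen only for illustration):

```python
from collections import Counter

def tally(ballots, ballot_size=15):
    """Position 1 on a ballot earns 15 points, position 2 earns 14,
    and so on down to 1 point for position 15."""
    scores = Counter()
    for ballot in ballots:
        for position, entry in enumerate(ballot, start=1):
            scores[entry] += ballot_size - position + 1
    return scores.most_common()

# Two toy ballots: "BREACH" takes #1 on both (15 + 15 = 30 points).
print(tally([["BREACH", "Mutation XSS"], ["BREACH", "Lucky 13"]]))
```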

Phase 2: Panel of Security Experts Voting [Feb 4-Feb 11]
From the results of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as phase 1, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2013!

Complete 2013 List (in no particular order):

  1. Tor Hidden-Service Passive De-Cloaking
  2. Top 3 Proxy Issues That No One Ever Told You
  3. Gravatar Email Enumeration in JavaScript
  4. Pixel Perfect Timing Attacks with HTML5
  5. Million Browser Botnet Video Briefing
  6. Auto-Complete Hack by Hiding Filled in Input Fields with CSS
  7. Site Plagiarizes Blog Posts, Then Files DMCA Takedown on Originals
  8. The Case of the Unconventional CSRF Attack in Firefox
  9. Ruby on Rails Session Termination Design Flaw
  10. HTML5 Hard Disk Filler™ API
  11. Aaron Patterson – Serialized YAML Remote Code Execution
  12. Fireeye – Arbitrary reading and writing of the JVM process
  13. Timothy Morgan – What You Didn’t Know About XML External Entity Attacks
  14. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  15. James Bennett – Django DOS
  16. Phil Purviance – Don’t Use Linksys Routers
  17. Mario Heiderich – Mutation XSS
  18. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  19. Carlos Munoz – Bypassing Internet Explorer’s Anti-XSS Filter
  20. Zach Cutlip – Remote Code Execution in Netgear routers
  21. Cody Collier – Exposing Verizon Wireless SMS History
  22. Compromising an unreachable Solr Server
  23. Finding Weak Rails Security Tokens
  24. Ashar Javad – Attack against Facebook’s password reset process
  25. Father/Daughter Team Finds Valuable Facebook Bug
  26. Hacker scans the internet
  27. Eradicating DNS Rebinding with the Extended Same-Origin Policy
  28. Large Scale Detection of DOM based XSS
  29. Struts 2 OGNL Double Evaluation RCE
  30. Lucky 13 Attack
  31. Weaknesses in RC4

Leave a comment if you know of some techniques that we’ve missed, and we’ll add them in up until the submission deadline.

Final 15 (in no particular order):

  1. Million Browser Botnet Video Briefing
  2. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  3. Hacker scans the internet
  4. HTML5 Hard Disk Filler™ API
  5. Eradicating DNS Rebinding with the Extended Same-Origin Policy
  6. Aaron Patterson – Serialized YAML Remote Code Execution
  7. Mario Heiderich – Mutation XSS
  8. Timothy Morgan – What You Didn’t Know About XML External Entity Attacks
  9. Tor Hidden-Service Passive De-Cloaking
  10. Auto-Complete Hack by Hiding Filled in Input Fields with CSS
  11. Pixel Perfect Timing Attacks with HTML5
  12. Large Scale Detection of DOM based XSS
  13. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  14. Weaknesses in RC4
  15. Lucky 13 Attack

Prizes [to be announced]

  1. The winner of this year’s top 10 will receive a prize!
  2. After the open community voting process, two survey respondents will be chosen at random to receive a prize.

The Top 10

  1. Mario Heiderich – Mutation XSS
  2. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  3. Pixel Perfect Timing Attacks with HTML5
  4. Lucky 13 Attack
  5. Weaknesses in RC4
  6. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  7. Million Browser Botnet Video Briefing
  8. Large Scale Detection of DOM based XSS
  9. Tor Hidden-Service Passive De-Cloaking
  10. HTML5 Hard Disk Filler™ API

Honorable Mention

  1. Aaron Patterson – Serialized YAML Remote Code Execution

List of HTTP Response Headers

Every few months I find myself looking up the syntax of a relatively obscure HTTP header, and regularly I find myself wondering why there isn’t a good definitive list of common HTTP response headers anywhere. Usually the lists on the Internet are missing half a dozen HTTP headers. So I’ve taken care to gather a list of all the HTTP response headers I could find. Hopefully this is useful to you, and removes some of the mystique behind how HTTP works if you’ve never seen headers before.

Note: this does not include things like IncapIP or other proxy/service-specific headers that aren’t standard, nor does it include request headers.

Header Example Value Notes
Access-Control-Allow-Credentials true
Access-Control-Allow-Headers X-PINGOTHER
Access-Control-Allow-Methods PUT, DELETE, XMODIFY
Access-Control-Expose-Headers X-My-Custom-Header, X-Another-Custom-Header
Access-Control-Max-Age 2520
Accept-Ranges bytes
Age 12
Allow GET, HEAD, POST, OPTIONS Commonly includes other things, like PROPFIND etc…
Alternate-Protocol 443:npn-spdy/2,443:npn-spdy/2
Cache-Control private, no-cache, must-revalidate
Client-Date Tue, 27 Jan 2009 18:17:30 GMT
Client-Response-Num 1
Connection Keep-Alive
Content-Disposition attachment; filename=”example.exe”
Content-Encoding gzip
Content-Language en
Content-Length 1329
Content-Location /index.htm
Content-MD5 Q2hlY2sgSW50ZWdyaXR5IQ==
Content-Range bytes 21010-47021/47022
Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP default-src ‘self’ Different header needed to control different browsers
Content-Security-Policy-Report-Only default-src ‘self’; …; report-uri /csp_report_parser;
Content-Type text/html Can also include charset information (E.g.: text/html;charset=ISO-8859-1)
Date Fri, 22 Jan 2010 04:00:00 GMT
ETag “737060cd8c284d8af7ad3082f209582d”
Expires Mon, 26 Jul 1997 05:00:00 GMT
HTTP/1.1 401 Unauthorized Special header (the status line), no colon-space delimiter
Keep-Alive timeout=3, max=87
Last-Modified Tue, 15 Nov 1994 12:45:26 +0000
Link <>; rel=”canonical” Can also specify rel=”alternate”
P3P policyref=””, CP=”NOI DSP COR ADMa OUR NOR STA”
Pragma no-cache
Proxy-Authenticate Basic
Proxy-Connection Keep-Alive
Refresh 5; url=
Retry-After 120
Server Apache
Set-Cookie test=1; path=/; expires=Tue, 01-Oct-2013 19:16:48 GMT Can also include the domain, secure, and HTTPOnly flags
Status 200 OK
Strict-Transport-Security max-age=16070400; includeSubDomains
Trailer Max-Forwards
Transfer-Encoding chunked Other possible values: compress, deflate, gzip, identity
Upgrade HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11
Vary *
Via 1.0 fred, 1.1 (Apache/1.1)
Warning 199 Miscellaneous warning
WWW-Authenticate Basic
X-Aspnet-Version 2.0.50727
X-Content-Type-Options nosniff
X-Frame-Options deny
X-Permitted-Cross-Domain-Policies master-only Used by Adobe Flash
X-Powered-By PHP/5.4.0
X-Robots-Tag noindex,nofollow
X-UA-Compatible Chrome=1
X-XSS-Protection 1; mode=block
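Several of the entries above (X-Content-Type-Options, X-Frame-Options, X-XSS-Protection, Strict-Transport-Security) are defensive headers worth sending by default. A small sketch of merging them into a response, with header names taken from the table and values illustrative:

```python
# Defensive response headers and example values (from the table above).
SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "deny",
    "X-XSS-Protection": "1; mode=block",
    "Strict-Transport-Security": "max-age=16070400; includeSubDomains",
}

def apply_security_headers(headers: dict) -> dict:
    """Add each security header unless the application already set it."""
    merged = dict(headers)
    for name, value in SECURITY_HEADERS.items():
        merged.setdefault(name, value)
    return merged

print(apply_security_headers({"Content-Type": "text/html"}))
```

Using `setdefault` keeps any value the application set deliberately (e.g. `X-Frame-Options: sameorigin`) while filling in safe defaults everywhere else.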

If I’ve missed any response headers, please let us know by leaving a comment and I’ll add it into the list. Perhaps at some point I’ll create a similar list for Request headers, if people find this helpful enough.

Challenging Security Assumptions in Open Source and Why it Matters

Guest post by G.S. McNamara


Open source is great. Startups, non-profits, enterprises, and the government have adopted open source software. One high-profile federal website runs on a modified instance of Drupal, the free and open source content management framework, with security modifications. Another open source project, Apache Solr, powers its search. And finally, all of it runs on the LAMP solution stack. This is just one example of open source adoption by the federal government. Open source software is attractive in part because it can help the bottom line by avoiding vendor lock-in and leveraging the global community’s effort in creating new features quickly.

But open source code that has been scrutinized by many must be secure, right? Sometimes, no. Do not skip the security assessment of a Web application simply because it is based on open source code. I am not talking about the introduction of insidious backdoors in the code, but about design decisions made in open source projects that have real security implications. Some of these design decisions can support really attractive features, but come with a hidden cost. Security concerns are not always highlighted in the documentation, and busy developers using open source code likely will not know about them. While application developers might be versed in secure development practices, they may make the assumption that so too are the developers contributing to the open source projects they build upon. Security weaknesses that make it past both sets of developers may later be discovered by the users of the application.

Thankfully there are tools and services that can be used to automatically scan your Web application and the libraries it uses for security vulnerabilities. An additional set of tools and services can be leveraged after the application is deemed functionally complete. I would suggest that open source developers clearly mark the security tradeoffs involved in design decisions, and match them up with a common terminology such as the Web Application Security Consortium’s (WASC) Threat Classification to help developers using your work understand the risk tradeoffs. If you are building on open source, please evaluate your Web applications using a combination of static and dynamic analysis tools. Some tools can be used while still in development to avoid future, more costly surprises that require additional engineering to remediate. Some security tools are created for specific open source projects and frameworks, and are designed to be aware of security shortcomings within those projects and frameworks. Developers can use these tools during development to be alerted to security issues as they are introduced into the application.

We are close, but we are not completely speaking the same language when it comes to security. For instance, in my recent work on a pre-deployment security evaluation of a Ruby on Rails Web application, I found what can be categorized under WASC's Threat Classification as an Insufficient Session Expiration (WASC-41) weakness provided by the Rails framework itself. The weakness resides in the cookie-based session storage mechanism, CookieStore, which is both enabled by default and attractive for its performance benefits.
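The risk is easier to see in miniature. Below is a plain-Python sketch (not Rails code) of the model a purely cookie-based session store uses: the server signs the session state and keeps no record of it, so "logging out" merely deletes the client's copy of the cookie. A previously captured cookie still has a valid signature and is still accepted.

```python
import hashlib
import hmac

SECRET = b"server-side signing key"  # stands in for the framework's secret key

def make_cookie(payload: str) -> str:
    """Sign the session payload; the server stores nothing about the session."""
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}--{sig}"

def accept_cookie(cookie: str) -> bool:
    """With no server-side session table, any cookie with a valid signature is live."""
    payload, _, sig = cookie.rpartition("--")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

stolen = make_cookie("user_id=42")   # an attacker captures this cookie
# The user "logs out": the browser discards its cookie, but the server has no
# session table to invalidate, so the stolen copy still verifies and works.
print(accept_cookie(stolen))  # True
```

Server-side session stores avoid this by letting logout delete the authoritative record; the tradeoff the documentation should spell out is exactly this one.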

In discussing this issue, which also exists in the popular Django framework, I realized there is also a semantic problem, which became evident during conversations with contributors to these open source projects. This session termination weakness does not fall under the examples given in the projects' security guides of a replay attack or session hijacking. When replay attacks are mentioned in the documentation and elsewhere, the examples given are actions that can be taken (again) within an existing session on behalf of a user, such as re-purchasing a product from a shopping website. When session hijacking is discussed, the discussion concerns stealing a user's session while it is still active. The documentation does not talk about how a terminated session itself can be exhumed, which is the root issue I have worked to point out. Even a developer who has read the security guides for these projects would be blindsided by this nuance. More work is needed to better define vendor-neutral security weaknesses so their risks can be highlighted and managed appropriately regardless of the particular technology.

The bottom line is that you cannot trust that all eyes have vetted all security issues in the open source software your application is built on. Both applications and their components need to be tested by the developers for security flaws. Open source software design choices may have been made that are not secure enough for your environment, and the security tradeoffs of these choices may not even be mentioned in the project’s documentation. A huge benefit of open source software is that you can review the security of the code itself; take advantage of that opportunity to ensure you understand the risks that are present in your code.

About G.S.
G.S. McNamara is a senior technology and security consultant, and an intelligence technologies postgraduate based in the Washington, D.C. area. Formerly, he worked in a research and development lab on DARPA malware analysis and Web security programs. He began his career as an IT consultant to a K Street medical practice. He works with startups as well as on federal contracts building Web and mobile applications.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Why com.com Should Scare You

A long time ago I began to compile a list of lesser-known but still very scary choke points on the Internet. This was a short list of providers (like the DNS servers that most routers use, for instance) that could have a major impact if they were ever compromised or owned by a nefarious third party. One of those was com.com, the single best typo-squatting domain on the planet. Let's say, for instance, you accidentally type an email address ending in gmail.com.com (notice the extra .com on the end there). Yes, that message would go to com.com's email servers if they were set up to accept incoming mail. If you typo'd a URL the same way, you would likewise end up at the typo domain. Or how about phishing sites for your bank, or a "latest patch" download, hosted on the matching com.com subdomain? Given that .com is the largest TLD, com.com is naturally the best typo domain.
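To see why the typo is so valuable, remember how hostnames are owned: everything to the left of the registrable domain is just a subdomain under that domain owner's control. A toy illustration (this naive helper takes the last two labels and ignores the Public Suffix List, so it is for demonstration only):

```python
def registrable_domain(hostname: str) -> str:
    """Naively treat the last two labels as the registrable domain."""
    return ".".join(hostname.split(".")[-2:])

# A trailing ".com" typo hands the whole hostname to the owner of com.com,
# who can serve anything they like under gmail.com.com, paypal.com.com, etc.
print(registrable_domain("gmail.com.com"))        # com.com
print(registrable_domain("www.example.com.com"))  # com.com
```

In other words, every brand on the Internet has a plausible-looking doppelganger one typo away, and all of them resolve to whoever controls com.com.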

A few months ago, and without much fanfare, com.com was silently sold by CNET to DSparking – one of the most notorious domain squatters ever. Sources close to the deal told me it was sold for around $1.5 million. Yes, the fox owns the hen-house. At $1.5 million it was a steal, though – it's easily worth that just in typo traffic and the huge volume of accidental inbound links.

However, things are not as bad as they could be. For instance, port 80 (web) is open, yes, but port 25 (mail) is not. If and when DSparking decides to open port 25, they will start receiving tons of potentially extremely sensitive email. Even if that mail appears to be rejected by their servers, they could still be logging it.
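Verifying which services a host exposes is simple to sketch. The function below attempts a TCP connection with a short timeout (only probe hosts you are authorized to test):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage: port_open("example.org", 80) checks the web port;
# port_open("example.org", 25) checks whether SMTP is reachable.
```

Note that a closed port today says nothing about tomorrow; a single configuration change on the remote end flips the result.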

For now mail appears to be closed on com.com and has been since the deal went through. They could easily be opening port 25 only to selective IP address ranges, or opening and closing it periodically to get a snapshot of traffic. There's no easy way to know for sure. I doubt it's open at all, but should that change – and it easily could – this should be considered an extremely dangerous domain.

As @spazef0rze pointed out, although port 25 is closed on the web host itself, mail is still being handled through a separate DNS entry:

$ host -t MX com.com
com.com mail is handled by 10

Also, as @mubix noted, there are many other ports/protocols that can be subject to interception if typo'd in the same way. So yes, it's probably time to block this domain. No good can come of it.