Podcast: An Anthropologist’s Approach to Application Security

In this edition of the WhiteHat Security podcast, I talk with Walt Williams, Senior Security and Compliance Manager at Lattice Engines. Hear how Walt went from anthropologist to security engineer, his approach to building relationships with developers, and why his biggest security surprise involved broken windshields.

Want to do a podcast with us? Sign up to become an “Unsung Hero”.

About our “Unsung Hero Program”
Every day, application security professionals tirelessly protect the Web, and we recognize that this protection is largely built on a series of small victories. These are untold stories, and we want to help share yours. To learn more, click here.

Teaching Code in 2014

Guest post by – JD Glaser

“Wounds from a friend can be trusted, but an enemy multiplies kisses” – Proverbs 27:6

This proverb, over 2,000 years old, applies directly to everyone writing programming material today. By avoiding security coverage, or by explicitly teaching insecure examples, authors do the world at large a huge disservice: they multiply both the spread of incorrect knowledge and the development of insecure habits in developers. The teacher becomes the enemy when their teaching encourages poor practices through example, which ultimately bites the student in the end at no cost to the teacher. In fact, the more effective you are as a teacher, the worse the problem becomes. Good teachers, by virtue of their teaching skill, greatly multiply the ‘kisses’ of poor example code that eventually becomes ‘acceptable production code’ ad infinitum.

So, to the authors of programming material everywhere, whether you write books or blogs, this article is targeted at you. By choosing to write, you have chosen to teach. Therefore you have a responsibility.

No Excuse for Demonstrating Insecure Code Examples

It is the year 2014. There is simply no excuse for not demonstrating and explaining secure code within the examples of your chosen language.

Examples are critical. First, examples teach. Second, examples, one way or the other, find their way into production code. This is a simple fact with huge ramifications. Once a technique is learned, once that initial impact is made upon the mind, that newly learned technique becomes the way it is done. When you teach an insecure example, you perpetuate that poor knowledge in the world and it repeats itself again and again.

Change is difficult for everyone. Pepsi learned this the expensive way. Once people accepted that Coke was It, hundreds of millions of dollars spent on really cool advertising could not change that opinion. The mind has only one slot for number one, and Pepsi has remained second ever since. Don’t let security be ingrained in the mind as second.

Security Should Be Ingrained, Not Added

When teaching, even if you are ‘just’ calculating points on a sample graph for those new to chart programming, if any of that data comes from user-supplied input via that simple AJAX call that gets the user going, your sample code should filter that sample data. When you save it, your sample code should escape it for the chosen database; when you display it to the user, your sample code should escape it for the intended display context. When your sample data needs to be encrypted, your sample code should apply modern cryptography. There are no innocent examples anymore.
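To make that concrete, here is a minimal Node.js sketch of such a chart endpoint with the security left in, not bolted on. It assumes the npm “mysql” package; the connection settings, table, and field names are all illustrative.

    // A "just a sample graph" endpoint that still filters and escapes.
    // Assumes the npm "mysql" package; names and settings are illustrative.
    var mysql = require('mysql');
    var connection = mysql.createConnection({
      host: 'localhost', user: 'app', password: 'secret', database: 'charts'
    });

    function saveDataPoint(userInput, callback) {
      // Filter: the AJAX call hands us a string; accept only a number.
      var value = parseFloat(userInput);
      if (isNaN(value)) { return callback(new Error('invalid data point')); }
      // Escape for the database: a parameterized placeholder,
      // never string concatenation.
      connection.query('INSERT INTO points (value) VALUES (?)', [value], callback);
    }

The extra lines cost almost nothing, and the student now sees filtering and parameterization as part of what “save a data point” means.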

Security should be ingrained, not added. Programmers need to be trained to see security measures in line with normal functionality. In this way, they are trained to incorporate security measures initially as code is written, and also to identify when security measures are missing.

When security measures are removed for the sake of ‘clarity’, the student is unconsciously taught that security makes things less clear. The student is also being trained, directly by example, to add security ‘later’.

Learning Node.js In 2014

Most of the latest Node.js books have some great material, but only one out of several took the time to teach, via integrated code examples, how to properly escape Node.js code for use with MySQL. Kudos to Pedro Teixeira, author of Professional Node.js from Wrox Press, for teaching proper security measures as an integral part of the lesson to those adapting to Node.js.

Contrast this with Node.js for PHP Developers from O’Reilly Press, where the examples explicitly demonstrate how to write insecure Node.js code with SQL injection holes. The code in this book actually teaches the next wave of developers how to code wide-open vulnerabilities into servers. Considering the rapid adoption of Node.js for server applications, and the millions who will use it, this is a real problem.

The fact that the author chose the retired legacy method of building SQL statements through insecure string concatenation as the instruction method for this book really demonstrates the power of ingrained learning. Once something is learned, for good or bad, a person keeps repeating what they know for years. Again, change is difficult.
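For readers who have not seen the difference, a minimal sketch using the npm “mysql” module (names are illustrative):

    var mysql = require('mysql');
    var connection = mysql.createConnection({
      host: 'localhost', user: 'app', password: 'secret', database: 'shop'
    });

    // Insecure: the legacy concatenation style still being taught.
    // An id of "1 OR 1=1" returns every row in the table.
    function findUserInsecure(id, callback) {
      connection.query("SELECT * FROM users WHERE id = " + id, callback);
    }

    // Secure: the same query with a placeholder; the driver escapes the value.
    function findUserSecure(id, callback) {
      connection.query('SELECT * FROM users WHERE id = ?', [id], callback);
    }

The secure version is no longer and no less clear, which is exactly why teaching the insecure one is indefensible.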

The developers taught by these code examples will build apps that we may all use at some point. The security vulnerabilities those apps contain will affect everyone. It makes one wonder: are you the author whose book or blog taught the person who coded the latest Home Depot penetration? This is code that will have to be retrofitted later at great cost in time and money. It is not a theoretical problem. It has a large dollar figure attached to it.

For some reason, authors of otherwise good material choose not to teach security in an integral way. Either they are not knowledgeable about security, in which case it is time to up their game, or they have fallen into the ‘clarity omission’ trap. This is a widespread practice, adopted by almost everyone, of declaring that since ‘example code’ is not production code, insecure code is excusable, and therefore critical facts are not presented. This is a mistake with far-reaching implications.

I recently watched a popular video on PHP training from an instructor who is in all other respects a good instructor. Apparently, this video has educated at least 15,000 developers so far. In it, the author briefly states that output escaping should be done. However, in his very next statement, he declares that for the sake of brevity the technique won’t be demonstrated, and the examples given out as part of the training do not incorporate it. The opportunity to ingrain the idea that output escaping is necessary, and that it should be an automatic part of a developer’s toolkit, has just been lost on the majority of those 15,000 students, because the code to which they will later turn for reference is lacking. Most, if not all, will ignore the practice until mandated by an external force.
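The omitted technique really does take only a few lines. Here is a minimal Node.js sketch of escaping at the point of output; the helper below is illustrative, and production code would also need context-specific escaping for attributes, URLs, and JavaScript:

    var http = require('http');
    var url = require('url');

    // Minimal HTML-escaping helper for element content.
    function escapeHtml(s) {
      return String(s)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
    }

    http.createServer(function (req, res) {
      var name = url.parse(req.url, true).query.name || 'world';
      res.writeHead(200, { 'Content-Type': 'text/html' });
      // Ingrained, not added: escape at the moment of output, every time.
      res.end('<p>Hello, ' + escapeHtml(name) + '</p>');
    }).listen(3000);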

Stop the Madness

In the real world, when security is not incorporated at the beginning, it costs additional time and money to retrofit it later. This cost is never paid until a breach is announced on the front page of a news service and someone is sued for negligence. Teachers have a responsibility to teach this.

Coders code as they are taught. Coders are primarily taught through books and blog articles. Blog articles are especially critical for learning as they are the fastest way to learn the latest language technique. Therefore, bloggers are equally at fault when integrated security is absent from their examples.

The fact is that if you are a writer, you are a trainer. You are in fact training a developer how to do something over and over again. Security should be integral. Security books on their own, as secondary topics, should not be needed. Think about that.

The past decade has been spent railing against insecure programming practices, but the question needs to be asked, who is doing the teaching? And what is being taught?

You, the writer, are at the root of this widespread security problem. Again, this is 2014, and these issues have been in the spotlight for more than ten years. This is not about mistakes, or about being a security expert. This is about the complete avoidance of the basics. What habit is a young programmer learning now that will affect the code going into service next year, and the years after?

The Truth Hurts

If you are still inadvertently training coders how to write blatantly insecure code, either through ignorance or omission, you have no business training others, especially if you are successful. If you want to teach, if you make money teaching programming, you need to stop the madness, educate yourself, and reverse the trend.

Write those three extra lines of filtering code and add that extra paragraph explaining the purpose. Teach developers how to see security ingrained in code.

Stop multiplying the sweet kiss of simplicity and avoidance. Stop making yourself the enemy of those who like to keep their credit cards private.

About JD
In his own words, JD “did a brief stint of 15 years in security. Headed the development of TripWire for Windows and Foundscan. Founded NT OBJECTives, a company to scan web stuff. BlackHat speaker on Windows forensic issues. Built the Windows Forensic Toolkit and FPort, which the Department of Justice has used to help convict child pornographers.”

Details on Internet Explorer Zero-Day Exploit

A new zero-day exploit for Internet Explorer was disclosed on Saturday by FireEye Research Labs. At its core, the exploit takes advantage of a known Flash technique to access memory, which is then corrupted in a way that completely bypasses built-in Microsoft Windows protections. This gives the attacker full control, allowing him to run his own maliciously crafted code on the victim’s machine. Internet Explorer versions 6-11 are all currently vulnerable to attack. Details of the exploit can be found here: http://www.fireeye.com/blog/uncategorized/2014/04/new-zero-day-exploit-targeting-internet-explorer-versions-9-through-11-identified-in-targeted-attacks.html.

Since the vulnerability relies on corrupting memory through Flash, an easy mitigation is to simply disable Flash. In addition, if you are using a different browser, such as Firefox or WhiteHat’s Aviator, you will not be affected. Attacks exploiting the new IE vulnerability have already been observed, so users are encouraged to take immediate action to mitigate their risk.

For users interested in an alternative browser to Internet Explorer, WhiteHat Aviator is now available for Windows users and can be downloaded here: https://www.whitehatsec.com/aviator/.

Apache Struts ClassLoader Vulnerability

A patch issued in March for a previously known vulnerability in Apache Struts versions 2.0.0 – 2.3.16 has been bypassed. The vulnerability allows attackers to manipulate the ClassLoader, leading to possible remote code execution and denial of service. Struts versions 2.0.0 – 2.3.16.1 are all currently vulnerable to attack. As of today no patch is available; however, Apache has published a detailed write-up on how to mitigate the vulnerability while it works on a security patch. Details can be found at http://struts.apache.org/announce.html#a20140424
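The write-up centers on tightening the exclude patterns on the Struts params interceptor so that “class”-prefixed parameters never reach the ClassLoader. A sketch of the shape of that configuration follows; the pattern shown is illustrative only, so take the exact regular expression from the Apache announcement linked above.

    <!-- Illustrative stopgap: exclude ClassLoader-manipulating parameters.
         Use the exact pattern from the Apache announcement. -->
    <interceptor-ref name="params">
      <param name="excludeParams">(.*\.|^|.*|\[('|"))(c|C)lass(\.|('|")]|\[).*,^dojo\..*,^struts\..*</param>
    </interceptor-ref>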

WhiteHat has added detection for the Struts ClassLoader vulnerability across all service lines. Both dynamic and static assessments have been updated and will begin testing as soon as the next scan begins.

Our Customer Success team would be happy to answer any questions you may have regarding this issue. They can be reached by emailing support@whitehatsec.com.

Editor’s Note: Apache released a patch on Saturday, 4/26, which should fix the ClassLoader issue in Struts. Users are encouraged to update to Struts 2.3.16.2 immediately. Details can be found at http://struts.apache.org/announce.html#a20140424

A Security Expert’s Thoughts on WhiteHat Security’s 2014 Web Stats Report

Editor’s note: Ari Elias-Bachrach is the sole proprietor of Defensium LLC and an application security expert. Having spent significant time breaking into web and mobile applications of all sorts as a penetration tester, he now works to improve application security. As a former developer with experience in both static and dynamic analysis, he works closely with developers to remediate vulnerabilities. He has also developed and taught secure development classes, and can help make security part of the SDLC. He is a regular conference speaker on application security and can be found on Twitter @angelofsecurity. Given his experience and expertise, we asked Ari to review our 2014 Website Security Statistics Report, which was announced yesterday, and to share his thoughts in this guest blog post.

The most interesting and telling chart, in my opinion, is the “Vulnerability class by language” chart. I started by asking myself a simple question: can vulnerabilities be dependent on the language used, and if so, which vulnerabilities? I computed the standard deviation of each vulnerability class across the different languages to see which ones had a high degree of variance. XSS (13.2) and information leakage (16.4) were the two highest. In other words, those are the two vulnerabilities most affected by the choice of programming language. In retrospect, info disclosure isn’t surprising at all, but XSS is a little interesting. The third highest is SQLi, with a standard deviation of 3.8; everything else falls below that.
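For the curious, the calculation is simple. A sketch in JavaScript, with placeholder numbers rather than figures from the report:

    // Standard deviation of each vulnerability class's occurrence rate
    // across languages. High values mean language choice matters.
    function stdDev(values) {
      var mean = values.reduce(function (a, b) { return a + b; }, 0) / values.length;
      var variance = values.reduce(function (a, v) {
        return a + Math.pow(v - mean, 2);
      }, 0) / values.length;
      return Math.sqrt(variance);
    }

    // One occurrence rate per language (placeholder data, not the report's).
    var ratesByClass = {
      'XSS':                 [10, 40, 25, 55, 30],
      'Information Leakage': [15, 60, 20, 50, 35],
      'SQL Injection':       [5, 11, 7, 12, 9]
    };

    Object.keys(ratesByClass).forEach(function (cls) {
      console.log(cls + ': ' + stdDev(ratesByClass[cls]).toFixed(1));
    });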

Conclusion 1: The presence or absence of Cross-site scripting and information disclosure vulnerabilities is very dependent on the environment used, and SQLi is a little bit dependent on the environment. Everything else isn’t affected that much.

Now, while it seems that frameworks can do great things for security, if you live by the framework, then you die by the framework. Looking at the “Days vulnerability open by language” chart, you can see some clear outliers where it looks like certain vulnerabilities simply cannot be fixed. If the developer can’t fix a problem in code and has to wait for an update to the framework, you end up with those few really high mean times to fix. This brings us to the negative consequence of relying on the framework to take care of security for us: it can limit our ability to make security fixes as well. In this case, the HTTP response splitting issue with ASP is a problem that cannot be fixed in the code, but requires waiting for the vendor to make a change, which the vendor may or may not judge necessary.

Conclusion 2: Live by the framework, die by the framework.

Also interesting is that XSS, which has the highest variance in occurrence, has the least variance in time to fix. I guess once it occurs, fixing an XSS issue takes about the same level of effort regardless of language. Honestly, I have no idea why this would be; I just find it very interesting.

Conclusion 3: Once it occurs, fixing an XSS issue is always about the same level of effort regardless of language. I can’t fathom the reason why, but my gut tells me it might be important.

I found the “Remediation rate by vulnerability class” chart to be perhaps the most surprising (at least to me). I would have assumed that the remediation rates per vulnerability would correlate closely with the risk posed by each vulnerability; however, that does not appear to be the case. Even more surprisingly, the remediation rates do not seem to be correlated with the ease of fixing the vulnerability, as measured by the previous chart on the number of days each vulnerability stayed open. Looking at SQLi, for example, the remediation rate is high in ASP, ColdFusion, .NET, and Java, and incredibly low in PHP and Perl. Yet PHP and Perl were the two languages where SQLi vulnerabilities were fixed the fastest! Why would they be fixed less often than in other environments? XSS likewise seems to be easiest to fix in PHP, yet that’s the least likely place for it to be fixed. Perhaps some of this can be explained by a single phenomenon: in some environments, it’s not worth fixing a vulnerability unless it can be done quickly and cheaply. If it’s a complex fix, it is simply not a priority. This would produce low remediation rates and low days-to-patch at the same time. In my personal (and purely empirical, non-scientific) experience, Perl and PHP websites tend to be put up by smaller organizations with less mature processes, a lesser emphasis on security, and a greater focus on shipping new features. That may explain why many Perl and PHP vulnerabilities are either fixed fast or not at all. Without knowing more, my best guess is that many of the relationships here, while correlated, are not causal. In other words, some other force, like organizational culture, is driving both the choice of language and the remediation rate.

Conclusion 4: Remediation rates do vary across language, but the reasons seem to be unclear.

Final Conclusions
I started off with a very basic question: does the choice of programming language matter? The answer does seem to be yes. While we all know that in theory there is no vulnerability that can’t exist in a given environment, and no vulnerability that can’t be fixed in any given environment, the real world rarely works as neatly as it should “in theory”. Certain vulnerabilities are more likely in certain environments, and fixes may be easier or harder to apply, which affects their likelihood of ever being applied. There has been a lot of talk lately about moving security into the framework, and this report provides evidence that the approach can be very successful. However, it also shows the risks of this approach if the framework does not implement the right security controls in the right way.

Relative Security of Programming Languages: Putting Assumptions to the Test

“In theory there is no difference between theory and practice. In practice there is.” – Yogi Berra

I like this quote because I think it sums up the way we as an industry all too often approach application security. We have our “best practices”, our conventional wisdom about how we think things operate, what we think is “secure”, and standards that we think will constitute true security, in theory. However, in practice — in reality — all too often we find that what we think is wrong. We found this to be true when examining the relative security of popular programming languages, the topic of the WhiteHat Security 2014 Website Statistics Report that we launched today. The data we collected from the field defies the conventional wisdom we carry and pass down about the security of .Net, Java, ASP, Perl, and others.

The data that we derived in this report puts our beliefs around application security to the test by measuring how various web programming languages and development frameworks actually perform in the field. To which classes of attack are they most prone, how often and for how long? How do they fare against popular alternatives? Is it really true that the most popular modern languages and frameworks yield similar results in production websites?

By examining these questions and approaching their answers not with assumptions, but with hard evidence, our goal is to elevate conversations around how to “build-in” security from the start of the development process by picking a language and framework that not only solves business requirements, but security requirements as well.

For example, whereas one might assume that newer programming languages such as .Net or Java would be less prone to vulnerabilities, what we found was that there was not a huge difference between old languages and newer frameworks in terms of the average number of vulnerabilities. And when it comes to remediating vulnerabilities, contrary to what one might expect, legacy frameworks tended to have a higher rate of remediation – in fact, ColdFusion bested the whole field with an average remediation rate of almost 75% despite having been in existence for more than 20 years.

Similarly, many companies assume that secure coding is challenging because they have a ‘little bit of everything’ when it comes to the underlying languages used in building their applications. In our research, however, we found that not to be completely accurate. In most cases, organizations have a significant investment in one or two languages and very minimal investment in any others.

Our recommendations based on our findings? Don’t be content with assumptions. Remember, all your adversary needs is one vulnerability they can exploit. Security and development teams must continue to measure their programs on an ongoing basis. Determine how many vulnerabilities you have, and then how fast you should fix them. Don’t assume that your software development lifecycle is working just because you are doing a lot of things; anything measured tends to improve over time. This report can serve as a real-world baseline against which to measure your own program.

To view the complete report, click here. I would also invite you to join the conversation on Twitter at #2014WebStats @whitehatsec.

WhiteHat Security Observations and Advice about the Heartbleed OpenSSL Exploit

The Heartbleed SSL attack is one of the most significant, and most heavily media-covered, vulnerabilities to affect the Internet in recent years. According to Netcraft, 17.5% of SSL-enabled sites on the Internet were vulnerable to the Heartbleed SSL attack just prior to its disclosure.

The vulnerable versions of OpenSSL were first released to the public two years ago. The implications of two years of exposure to this vulnerability are significant, and we will explore them at the end of this article. First, the immediate details:

This attack does not require a MitM (Man in the Middle) position or the other complex setups that SSL exploits usually require. It can be executed directly and anonymously against any webserver/device running a vulnerable version of OpenSSL, and yield a wealth of sensitive information.

WhiteHat Security’s Threat Research Center (TRC) began testing customers for vulnerability to this attack immediately, using a custom SSL-testing tool from our TRC R&D labs. Our initial conclusion was that the frequency of vulnerability to this attack was low among production applications monitored by the WhiteHat Sentinel security service. In our first test sample, covering 18,000 applications, we found a vulnerability rate of about 2% – much lower than Netcraft’s 17.5% rate for all SSL-enabled websites. This may be the result of a biased sample, since we only evaluated applications under Sentinel service, and application owners who use Sentinel may already be more security-conscious than most.

While the frequency of vulnerability to the Heartbleed SSL attack is low, those sites that are vulnerable are severely vulnerable: everything in memory on the webserver/device running a vulnerable version of OpenSSL is exposed. This includes:

  • UserIDs
  • Passwords
  • PII (Personally Identifiable Information)
  • SSL certificate private keys (if compromised, SSL using that cert will never be private again)
  • Any private crypto keys
  • SSL-based chat
  • SSL-based VPN
  • SSL-based anything
  • Connection strings (database userIDs/passwords) to back-end databases, mainframes, LDAP, partner systems, etc. loaded in memory
  • Additional vulnerabilities in source code loaded into memory, with precise locations (e.g., blind SQL injection)
  • Basically, any code or data in memory on the server running a vulnerable version of OpenSSL is rendered in clear text to all attackers

The most important thing you need to know regarding the above: this attack leaves no traces. No logs. Nothing suspect. It is a nice, friendly, read-only anonymous dump of memory. This is the digital equivalent of accidentally leaving a copy of your corporate diary, with all sorts of private secrets, on the seat of the mass-transit bus for everyone to read – including strangers you will never know. Your only recourse for secrets like passwords now is the delete key.

The list of vulnerable applications published to date includes mainstream web and mobile applications, security devices with web interfaces, and critical business applications handling sensitive and federally-regulated data.

This is a seriously sub-optimal situation.

Timeline of the Heartbleed SSL attack and WhiteHat Security’s response:

April 7th: Heartbleed 0day attack published. Websites vulnerable to the attack included major websites/email services that the majority of the Internet uses daily, and many mobile applications that use OpenSSL.

April 8th: WhiteHat TRC begins testing customers’ sites for vulnerability to Heartbleed SSL exploitation. In parallel, WhiteHat R&D begins QAing new automated tests to enable Sentinel services to identify this vulnerability at scale.

April 9th: WhiteHat TRC identifies roughly 350 websites vulnerable to the Heartbleed SSL attack, out of an initial sample of 18,000 production applications. We find an initial average exploitability rate of 1.9%, but the percentage drops rapidly over the following 48 hours.

April 10th: WhiteHat R&D releases new Sentinel Heartbleed SSL vulnerability tests, enabling Sentinel to automatically identify, with every scan, whether any application under Sentinel service is vulnerable to the Heartbleed SSL attack. This brings test coverage to over 35,000 applications. The average vulnerability rate drops to below 1% by EOD April 10th.

Analysis: the more applications we scan using the new Heartbleed SSL attack tests, the fewer sites (by percent) we find vulnerable to this attack. We suspect this is because most customers have moved quickly to patch this vulnerability, due to the extreme severity of the vulnerability and the intense media coverage of the issue.

Speculation: we suspect that this issue will quickly disappear for most important SSL-enabled applications on the Internet – especially applications under some type of active DAST or SAST scanning service. It will likely linger on with small sites hosted by providers that do not offer (or pay attention to) any form of security patching service.

We also expect this issue to persist with internal (non-Internet-facing) applications and devices that use SSL, but which are commonly not tested or monitored by tools or services capable of detecting this vulnerability, and that are less frequently upgraded.

While the attack surface of internal network applications and devices may appear to be much smaller than that of Internet-facing applications, simple one-click exploits are already available on the Internet, usable by anyone on your network with access to a web browser (link to exploit code: http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html).

This means that any internal user on your network who downloads one of these exploits is capable of extracting everything in memory from any device or application on your internal network that is vulnerable. This includes:

  • Internal routers and switches
  • Firewalls and IDS systems
  • Human Resources applications
  • Finance and payroll applications
  • Pretty much any application or device running a vulnerable version of OpenSSL
  • Agents exploiting internal devices will see all network traffic or application data in memory on the affected device/application

Because most of these internal applications and devices lack the type of logging and alerting that would notify information security teams of active abuse of this vulnerability, our concern is that in the coming months these internal devices and applications may provide rich grounds for exploitation that may never be discovered.

Conclusion & Recommendations:
The scope, impact, and ease of exploitation of this vulnerability make it one of the worst in Internet history. However, patches are readily available for most systems, and the risk involved in patching seems minimal. It appears most of our customers have already patched the majority of their Internet-facing systems against this exploit.

However, this vulnerability has existed for up to two years in OpenSSL implementations. We will likely never know whether anyone has been actively exploiting it, due to the difficulty of logging and tracking these attacks. If your organization is concerned about this, we recommend that — in addition to patching this vulnerability — the SSL certificates of suspect systems be revoked and re-issued. We also recommend that all end-user passwords and system passwords be changed on suspect systems.

Is this vulnerability a “boy who cried wolf” hype situation? Bruce Schneier has another interesting perspective on this: https://www.schneier.com/blog/archives/2014/04/heartbleed.html.

WhiteHat will continue to share more information about this vulnerability as further information becomes available.

General reference material:
https://blog.whitehatsec.com/heartbleed-openssl-vulnerability/
https://www.schneier.com/blog/archives/2014/04/heartbleed.html

Vulnerability details:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Exploit details:
http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html

Summaries:
http://business.kaspersky.com/the-heart-is-bleeding-out-a-new-critical-bug-found-in-openssl/
http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted-websites-vulnerable-to-heartbleed-bug.html

Heartbleed OpenSSL Vulnerability

Monday afternoon, a flaw in the way OpenSSL handles the TLS heartbeat extension was revealed and nicknamed “The Heartbleed Bug.” According to the OpenSSL Security Advisory, “a missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” The flaw creates an opening in SSL/TLS that an attacker could use to obtain private keys, usernames/passwords, and content.
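Conceptually, the bug is a textbook missing bounds check: the server copies as many bytes as the client claims to have sent, rather than as many as it actually sent. A sketch of the logic in JavaScript (a conceptual analogue, not OpenSSL’s actual C code):

    // The heartbeat message carries a payload and a claimed payload length.
    function buildHeartbeatResponse(payload, claimedLength) {
      // The missing check: vulnerable OpenSSL trusted claimedLength, so up to
      // 64k of adjacent process memory was copied into the reply.
      if (claimedLength > payload.length) {
        return null; // patched behavior: silently discard the malformed request
      }
      return payload.slice(0, claimedLength); // echo back only real payload bytes
    }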

OpenSSL versions 1.0.1 through 1.0.1f, as well as 1.0.2-beta1, are affected. The recommended fix is to upgrade to 1.0.1g and to reissue certificates for any sites that were using compromised OpenSSL versions.

WhiteHat has added testing to identify websites currently running affected versions. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. The tests are currently being run across all of our clients’ applications, and we expect full coverage of all applications under service within the next two days. WhiteHat also recommends testing all assets, including non-web application servers and sites not currently under service with WhiteHat. Several tools have been made available to test for open issues: an online testing tool at http://filippo.io/Heartbleed/, another tool on GitHub at https://github.com/titanous/heartbleeder, and a new Nmap script at http://seclists.org/nmap-dev/2014/q2/36
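For example, once the Nmap script from that last link is installed, a scan against a host you are authorized to test looks something like this (host and port are placeholders):

    nmap -p 443 --script ssl-heartbleed example.com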

If you have any questions regarding the Heartbleed Bug, please email support@whitehatsec.com and a representative will be happy to assist. The OpenSSL Security Advisory is available at https://www.openssl.org/news/secadv_20140407.txt and the Heartbeat extension is specified at https://tools.ietf.org/html/rfc6520

Aviator Status – 100k Downloads and Growing!

I realize it’s only been a handful of days since we launched the Windows version of Aviator, but it’s been an exciting ride. If you’ve never had to support a piece of software, it feels a bit like riding an unending roller coaster: you’re no longer in control once you put your software out there. People will use it however they use it, and as the developer you simply have to adapt and keep iterating to make the user experience better and better. You can never get off the ride, which is a wonderful feeling – if you happen to like roller coasters! Okay, enough with that analogy.

When we released Aviator for Mac in October, we felt we were onto something when people started – almost immediately – emailing us asking for features. We were sure we were on the right track when the media started writing articles. And when the number of downloads climbed from the thousands to the tens of thousands, reaching close to 45,000 Mac OS X downloads in just five months, we thought we were getting pretty incredible traction. But none of that prepared us for the response we received in just the handful of days since we launched Aviator for Windows. In the five days since the Windows launch, we have already hit a total of 100,000 Aviator users – and that is without spending a single dime on advertising!

We were also pleasantly surprised that a huge chunk of our users came from other regions – as much as 30% of our new Windows user base was from Asia. This means that Aviator is already making a difference in every corner of the world. We’re extremely excited by this progress, and are very encouraged to continue to iterate and deliver new features. I think this really shows how visceral people’s reaction to security and privacy is. It’s no wonder – we’ve never given this kind of control to users before. Either that or our users got wind of how much faster surfing without ads and third-party tracking can be. :) Ever tried to surf the Internet on in-flight wireless? With Aviator you will find that websites are actually usable — give it a try!

We may never know why so many people chose Aviator, but I do hope more people share their user stories with us. We want to know our successes as well as the challenges that remain before us as we continue on this unending roller coaster ride. We really do appreciate all of your feedback and we thank you for helping to make Aviator such a huge success, right out of the gate. We’re just getting started!

Download WhiteHat Aviator for Windows or Mac here: http://www.whitehatsec.com/aviator

Re: Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Mary Ann Davidson wasn’t shy about making her feelings known about scanning third-party software in a lively blog post entitled “Mandated Third Party Static Analysis: Bad Public Policy, Bad Security”. If you haven’t read it and you are in security, I recommend giving it a read. Opinionated and outspoken, Mary Ann lays out her case against scanning third-party software. Although I don’t agree with every individual point, I do agree with the overall conclusion.

Mary Ann delineates many reasons organizations should not scan third-party COTS (Commercial Off The Shelf) software: it is not standard practice, vendors already scan, there is little to no ROI, it can harm the product’s overall security, it increases risk to other clients, it creates uneven access to security information, and it poses risks to IP protection. I think the case can actually be greatly simplified: scanning COTS software is simply a waste of time because that is not where most organizations are going to find and reduce risk.

Take web applications, which sit at the top of every CISO’s usual list of suspects for risk. Should every organization on the web perform a complete security review of every single layer in the technology stack? Or how about mobile? Should an organization perform a complete review of iOS and Android before writing a mobile app or allowing employees to use mobile phones? I’m sure the consulting industry would love this, but it is simply not feasible for organizations of any size.

So what are we to do? In my opinion, a security team should strive to measure and mitigate the greatest amount of risk to the organization within its budgetary and time limitations, enabling the business to innovate with a reasonable amount of security assurance. For the vast majority of organizations, that formula leads directly to their own custom-written or custom-outsourced software; specifically, their web applications.

Most organizations have a large number of web apps, a percentage of which will have horrific vulnerabilities that put the entire organization at risk. These vulnerabilities are well known, very prevalent, and usually straightforward to remediate. A security program that provides continuous assessment of all the code written and/or commissioned by your organization, both during development and in deployment, should be the front line of security for nearly every organization with a presence on the web, as it will normally find a trove of preventable risk that would otherwise be exploitable by attackers.

So what is the problem with scanning both our custom code and third-party COTS software? It is a misallocation of resources. Unless you have unlimited budget and time, you are much better off focusing on evaluating your custom-written source code for vulnerabilities, which can be ranked by criticality and mitigated by your development team.

Again, that is not to say there are no risks in using COTS software. Of course there are. All software has vulnerabilities. Risks are present in every level of a technology stack. For example, a web app may depend on a BIOS/UEFI, an OS, a web server, a database server, an application server, multiple server-side frameworks, multiple client-side frameworks, and many other auxiliary programs.

But performing yet another evaluation of software that has most likely already gone through a security regimen is far less effective at managing risk than focusing more of your security resources on building a more robust security program around your own in-house custom software.

Well, what should we do to mitigate third-party risk? The most overlooked and basic security precaution is to maintain a full manifest of all third-party COTS and open-source code running in your environment. Few, if any, organizations have a full listing of all the third-party apps and libraries they use. Keeping an account of this information, and then checking frequently for security updates and patches to that third-party code, is obvious and elementary, yet almost universally overlooked.
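As a starting point for Node.js projects, even a few lines run against package.json beat having no manifest at all. This sketch prints each declared dependency with its version range; a real inventory would also walk node_modules for transitive dependencies:

    var fs = require('fs');

    // Read the project's declared dependencies from package.json.
    var pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
    var deps = pkg.dependencies || {};
    var devDeps = pkg.devDependencies || {};

    Object.keys(deps).forEach(function (name) {
      console.log(name + ' ' + deps[name]);
    });
    Object.keys(devDeps).forEach(function (name) {
      console.log(name + ' ' + devDeps[name] + ' (dev)');
    });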

This basic security formula of continuously scanning custom applications, checking third-party libraries for security updates, and using that information to find and mitigate the most severe risks to your organization would have prevented most of the successful web attacks that make daily headlines.