WhiteHat Website Security Statistics Report: From Detection to Correction

While web security used to be a reactive afterthought, it has evolved into a necessity for organizations that wish to conduct online business safely. Companies have switched from playing defense to playing offense in a game that is still difficult to win. In an effort to change the game, WhiteHat Security has been publishing its Website Security Statistics Report since 2006 in the hope of helping organizations improve web security before they fall victim to an attack.

After several editions, this is by far the most data rich, educational, insightful and useful application security report I have ever read. I may be biased, but I believe this report is unique: something special and different that is an essential read for application security professionals. In creating this report, I have learned more about what works and what doesn’t work than I have learned doing anything else in my many years of working in application security. I am extremely confident that our readers will appreciate what we have created for them.

In this year’s report, we examine the activities of real-world application security programs along with the most prevalent vulnerabilities based on data collected from more than 30,000 websites under WhiteHat Sentinel management. From there, we can then determine how many vulnerabilities get fixed, the average time it takes to fix them, and how every application security program can measurably improve. Our research provides insights into how organizations can better determine which security metric to improve upon.

We’ve learned that vulnerabilities are plentiful, that they stay open for weeks or months, and that typically only half get fixed. We have become adept at finding vulnerabilities. The next phase is to improve the remediation process. In order to keep up with the increase in vulnerabilities, we need to make the remediation process faster and easier. The amount of time companies are vulnerable to web attacks is much too long – an average of 193 days from the first notification. Increasing the rate at which these vulnerabilities are remediated is the only way to protect users.

The best way to lower the average number of vulnerabilities, speed up time-to-fix, and increase remediation rates is to feed vulnerability results back to development through established bug tracking or mitigation channels. This places application security at the forefront of development and minimizes the need for remediation further down the road. The goal is more secure software, not more security software.

For security to improve, organizations need to set aside the idea of ‘best practices’ and not stop at compliance controls. Multiple parts of the organization must determine which teams should be held accountable for their specific job function. Organizations that don’t hold specific teams accountable have an average remediation rate of 24% versus 33% for companies that do. When you empower those who are also accountable, the organization has a higher likelihood of being effective.

In this year’s edition, the WhiteHat Website Security Statistics Report drives home the point that we now have a very clear understanding of what vulnerabilities are out there. Based on that information, we must create a solid, measurable remediation program to remove those vulnerabilities and increase the safety and security of the web.

To view the full report, click here. I would also invite you to join the conversation on Twitter at #WHStats @whitehatsec.

Logjam: Web Encryption Vulnerability

A team of researchers has released details of a new attack called “Logjam.” This attack, like FREAK, enables a man-in-the-middle attacker to downgrade the connection between the client and the server to an easier-to-break cipher. Many servers support these weaker ciphers, though there is no practical reason to support them. The solution is to simply not support any ciphers that are easy to break. In fact, the browser makers are doing that right now.

The offending ciphers, export-grade Diffie-Hellman ciphers, can be found in HTTPS, SSH, VPN, mail, and many other servers. This does not, however, mean that you are vulnerable, or that you need to panic. Exploiting this vulnerability requires a man-in-the-middle position and a high level of sophistication. The real risk here is relatively low compared to POODLE or Heartbleed. You should simply test your TLS endpoints to ensure that they do not support any weak ciphers. If you took this step back when FREAK came out, you are likely already okay.

The specific ciphers to disable for this attack are DHE_EXPORT ciphers (or “EXP-EDH-” ciphers). But go ahead and disable all weak ciphers, while you’re at it.

All WhiteHat Sentinel dynamic testing services (BE, SE, PE, PL, Elite) now report the use of export ciphers as part of reporting on weak ciphers, and specifically call out the ciphers that are a concern for Logjam.

The research team that released the report has also set up a page to test your servers here: https://weakdh.org/sysadmin.html.

Remember that when you test a hostname, you are really testing the TLS endpoint for that connection, which may be a load balancer or firewall, and not your application server.

#Hackerkast 35: Airplane hacking, United bug bounty, and SEA hacks Washington Post

Hey Everyone! It was just Jeremiah Grossman and me today, as Matt Johansen is overseas this week attending various security conferences. So we braved on and did a short one with just three major articles.

First we covered airplane hacking and a bit of drama that has been unfolding in the mainstream press related to hacking an airplane while on board. Jeremiah made the point that it’s not just illegal, it’s also dangerous from a personal-safety perspective. Rule number 1 of hacking – don’t hack the airplane while you’re still on it.

Then we discussed the United bug bounty program that was just announced. Although it’s interesting, it still doesn’t cover the major thing the public is worried about. Learning who is flying is bad, but doing something bad to an airplane is much, much worse. And it raises the question: if there are no flaws in airplanes, why doesn’t the bounty program cover the airplane?

Lastly we covered the latest SEA hack of Washington Post by way of their CDN provider, InstartLogic. Jer made the point that hacking InstartLogic is just the canary in the coal mine: it’s the other hacks that you don’t see until a year or two down the road that are really troubling. In some ways, the SEA is doing us a huge favor by letting us know about the issues without causing any real harm in the process.

Resources:
Airplane Hacking?!?!
United Rewards Bug Bounties with United Miles
SEA hacked Washington Post’s CDN InstartLogic

Notable stories this week that didn’t make the cut:
Firefox is going to Deprecate HTTP
Anti-gay demonstrators advertise gay porn site after their domain expires
Adblockers are immoral vs. Priority of Constituencies
Why a Law Firm is Baiting Cops With A Tor Server
VENOM Exploit Against QEMU and Xen Floppy Disks
Safari address-spoofing bug could be used in phishing, malware attacks

#HackerKast 34: SOHO Routers hacked, 3d printed ammo, Nazis & child porn, PayPal Remote Code Execution, Dubsmash 2, Twitter CSRF

Hey Everybody! We’re back from our 1 week break due to crazy schedules and even now we are without Jeremiah. Coconuts don’t make great WiFi antennae or something.

We started this episode talking about some vendors who did some weird, bad stuff this past week. In both stories security vendors were caught being naughty, starting with Tiversa: a security firm that decided it’d be a good idea to extort its own clients by finding a fake vulnerability and asking for money to fix it. Then Tencent and Qihoo, two different Chinese AV vendors, were both caught cheating on a certification test of how good their products were.

Moving away from shady vendors and on to shady home wireless routers. Not news to anybody, really: wifi routers you buy off the shelf aren’t quite state of the art when it comes to security. Hence, we see some sort of router hacking story pop up all the time. This time SOHO routers were targeted by the hacking group Anonymous, as per a report from Incapsula. It seems Anonymous saw a good opportunity to exploit these home routers and use them as a botnet, running their DDoS tool for fun and profit. The extremely 1337 H@x0r methodology being used here, which takes many years of cyber security experience and probably a CISSP to exploit, is a default username and password. Try to keep up here, the DEFAULT USERNAME AND PASSWORD out of the box was used to compromise MILLIONS of home routers and turn them into DDoS bots. I’ll just leave that there.

Next, Robert talked about some of the most ridiculous topics we’ve covered on the podcast, somehow relating 3D-printed ammunition to a story about Nazis and child pornography. You see, a court ruled that a computer file that can be used to 3D-print bullets is now legally considered munitions. In related(?) news, a Nazi war camp website was hacked and had child pornography uploaded to it. When child porn is involved, the government immediately must confiscate the computers as evidence, which essentially takes the website offline. Robert related the two by noting that you could also upload a 3D-printer file to the same effect, now that such a file can constitute illegal munitions.

In vulnerability disclosure news, PayPal was vulnerable to remote code execution via a 3rd-party component. The Java Debug Wire Protocol (JDWP) was left listening on port 8000 of some PayPal servers, which allowed an attacker (using the “shellifier” exploit technique) to connect remotely, without authenticating, and execute commands. The part we don’t know yet is whether, or how much, PayPal paid the researcher who disclosed this to them. They’ve been known to pay big bounties in the past.

Robert then covered a fake mobile app called Dubsmash 2 that was uploaded to the Google Play store this week and became wildly popular. Apparently, Dubsmash is a popular app which allows you to lip sync to some songs — but the fraudulent sequel app wouldn’t be nearly as fun. What it did was immediately remove the “Dubsmash” part of the app and replace its icon with a mimic “Settings” icon. The moment a user clicked this icon, the app would generate thousands of pop-unders of porn sites and click on ads. The thinking here is that this was a pay-per-click fraud scheme to generate earnings for the developer. To date, 500,000 users have downloaded the fake app.

Lastly, we talked about a CSRF vulnerability reported to Twitter via HackerOne about 11 months ago and recently made public. This CSRF protection bypass was *very* creative and abused a behavior in certain frameworks that treats commas as semicolons. It allowed an attacker to exploit a user by sending them a malicious link, which let the attacker replay a stolen CSRF token on mobile.twitter.com. Really cool research that I’m glad eventually became public.

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Resources:
Tiversa May Have Hacked Its Own Clients To Extort Them
2nd (Tencent and Qihoo) Chinese AV-Vendor Caught Cheating
3-D Printed Gun Lawsuit Starts the War Between Arms Control and Free Speech
Nazi camp website hacked with child porn on anniversary
MySQL Out of Band (2nd Order) Exploitation
Twitter CSRF Bug
PayPal Remote Code Execution (Java Debug Wire Protocol using Shellifier)
Your Smartphone Might Be Watching Porn Behind Your Back
Anonymous accused of running a botnet using thousands of hacked home routers

Notable stories this week that didn’t make the cut:
PHP == Operator Issue
Hack Google Password
Researchers Hijack Teleoperated Surgical Robot
Google PageSpeed Service End of Life
Windows to Kill off Patch Tuesday
PortSwigger Web Security Blog: Burp Suite now reports blind XXE injection
Practical Cache Attacks in JavaScript
25 Members of $15M Carding Gang Arrested
Apple ‘test’ iPad stolen from a Cupertino home: Report
Irate Congressman Gives Cops Easy Rule – Follow The Damned Constitution

Magic Hashes

For more than a decade, PHP programmers have been wrestling with the equals-equals (“==”) operator. It has caused a lot of issues, and it has a particular implication for password hashes. Password hashes in PHP are base16 encoded and can come in the form of “0e812389…”. The problem is that in a “==” comparison, a leading “0e” means that if the following characters are all digits, the whole string gets treated as a float. This was pointed out five years ago by Gregor Kopf, two years ago by Tyler Borland and Raz0r, and again a year ago by Michal Spacek and Jos Wetzels, but the technique is making more waves this past week.

Below is a list of hash types for which we found inputs whose hashes match ^0+e\d*$, and which therefore equate to zero in PHP when the magic typing of the “==” operator is applied. That means, for example, that any password hash starting with “0e…” and followed only by digits will appear to match the strings below, regardless of the actual values. The implication is that when these magic numbers are hashed and compared against other such hashes, the comparison will evaluate to true. Think of “0e…” as scientific notation for “0 times 10 to some power,” which is always zero: PHP interprets the whole string as the float 0.

<?php
// Both conditions are true: under "==", each "0e<digits>" string is
// interpreted as a float (0 x 10^n, i.e. zero) rather than a string.
if (hash('md5','240610708',false) == '0') {
  print "Matched.\n";
}
if ('0e462097431906509019562988736854' == '0') {
  print "Matched.\n";
}
?>

What this practically means is that the “magic” strings below are substantially more likely than chance to evaluate as equal to a completely random hash (e.g. of a randomly assigned password, nonce, file hash, or credential). Likewise, if a straight guess of a hash is required, the associated hashes below are coerced to the float “0” by PHP’s “==” comparison operator, so if another hash in a database also starts with “0e…” followed only by digits, the comparison will evaluate to true even though the hashes don’t actually match. Many cookies, for example, are simply hashes, and finding a collision becomes much more likely depending on how many valid credentials are in use at the time of the test (see: birthday paradox).

Use Case 1: Use the “Magic” Number below as a password, or as any string that you expect to be hashed. When it is compared against the hash of the actual value, and both sides are treated as “0”, the comparison evaluates to true and you can log into the account without the valid password. This could be forced to happen in environments where automatic passwords are chosen for users during a forgot-password flow, by attempting to log in immediately afterwards, as an example.

https://example.com/login.php?user=bob&pass=240610708

Use Case 2: The attacker can simply take a value from the Magic Hash column in the table below and submit it directly. In some cases these values are compared as a look-up against known values (in memory, or perhaps dumped from a database). By simply submitting the hash value, the magic hash may collide with other hashes that are likewise treated as “0”, and the comparison evaluates to true.

https://example.com/login.php?user=bob&token=0e462097431906509019562988736854

Hash Type  Hash Length  “Magic” Number / String  Magic Hash  Found By
md2 32 505144726 0e015339760548602306096794382326 WhiteHat Security, Inc.
md4 32 48291204 0e266546927425668450445617970135 WhiteHat Security, Inc.
md5 32 240610708 0e462097431906509019562988736854 Michal Spacek
sha1 40 10932435112 0e07766915004133176347055865026311692244 Independently found by Michael A. Cleverly & Michele Spagnuolo & Rogdham
sha224 56
sha256 64
sha384 96
sha512 128
ripemd128 32 315655854 0e251331818775808475952406672980 WhiteHat Security, Inc.
ripemd160 40 20583002034 00e1839085851394356611454660337505469745 Michael A Cleverly
ripemd256 64
ripemd320 80
whirlpool 128
tiger128,3 32 265022640 0e908730200858058999593322639865 WhiteHat Security, Inc.
tiger160,3 40 13181623570 00e4706040169225543861400227305532507173 Michele Spagnuolo
tiger192,3 48
tiger128,4 32 479763000 00e05651056780370631793326323796 WhiteHat Security, Inc.
tiger160,4 40
tiger192,4 48
snefru 64
snefru256 64
gost 64
adler32 8 FR 00e00099 WhiteHat Security, Inc.
crc32 8 2332 0e684322 WhiteHat Security, Inc.
crc32b 8 6586 0e817678 WhiteHat Security, Inc.
fnv132 8 2186 0e591528 WhiteHat Security, Inc.
fnv164 16 8338000 0e73845709713699 WhiteHat Security, Inc.
joaat 8 8409 0e074025 WhiteHat Security, Inc.
haval128,3 32 809793630 00e38549671092424173928143648452 WhiteHat Security, Inc.
haval160,3 40 18159983163 0e01697014920826425936632356870426876167 Independently found by Michael Cleverly & Michele Spagnuolo
haval192,3 48 48892056947 0e4868841162506296635201967091461310754872302741 Michael A. Cleverly
haval224,3 56
haval256,3 64
haval128,4 32 71437579 0e316321729023182394301371028665 WhiteHat Security, Inc.
haval160,4 40 12368878794 0e34042599806027333661050958199580964722 Michele Spagnuolo
haval192,4 48
haval224,4 56
haval256,4 64
haval128,5 32 115528287 0e495317064156922585933029613272 WhiteHat Security, Inc.
haval160,5 40
haval192,5 48
haval224,5 56
haval256,5 64

To find the above, I iterated over a billion hashed integers for each hash type, attempting to find a comparison that evaluates to true against “0”. If I couldn’t find a match within a billion attempts, I moved on to the next hashing algorithm. This technique was inefficient, but on a single core it was reasonably effective at finding a “Magic” Number/String for most hash algorithms with a length of 32 hex characters or less. The one exception was “adler32” (used in zlib compression, for example), which required a slightly different tactic. The moral of the story: for the most part, the more bits of entropy in a hash, the better your defense. Here is the code I used (adler32 required special treatment to find a valid input that didn’t contain special characters):

<?php
// Brute-force search for inputs whose hash matches "0" under PHP's
// loose "==" comparison (i.e. hashes of the form "0e<digits>").
function hex_decode($string) {
  $decoded = '';
  for ($i = 0; $i < strlen($string); $i += 2) {
    $decoded .= chr(hexdec(substr($string, $i, 2)));
  }
  return $decoded;
}
foreach (hash_algos() as $v) {
  $a = 0;
  print "Trying $v\n";
  while (true) {
    $a++;
    if ($a > 1000000000) {
      break; // give up after a billion attempts
    }
    // adler32 needs raw bytes rather than an integer string to
    // produce a candidate hash, so hex-decode the counter for it
    if ($v === 'adler32') {
      $b = hex_decode($a);
    } else {
      $b = $a;
    }
    $r = hash($v, $b, false);
    if ($r == '0') { // loose comparison: true for "0e<digits>" hashes
      if (preg_match('/^[\x21-\x7e]*$/', $b)) { // printable inputs only
        printf("%-12s %s %s\n", $v, $b, $r);
        break;
      }
    }
  }
}
?>

I didn’t have to limit the search to integers, as found in most of the results, but they were slightly easier to code for. In hindsight, integers are also slightly more robust: people sometimes force passwords to upper or lower case, and numbers are unaffected by that, so using integers is slightly safer. However, in a practical attack, an attacker might have to find a password that conforms to password requirements (at least one upper-case letter, one lower-case letter, one number, and one special character) and also evaluates to zero when hashed. For example, after 147 million brute-force attempts, I found that “Password147186970!” converts to “0e153958235710973524115407854157” in md5, which would meet that stringent password requirement and still evaluate to zero.
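A quick way to vet any candidate value, without relying on loose comparison inside the search loop itself, is to test whether its hash matches the ^0+e\d+$ pattern directly. A minimal sketch (`is_magic_hash` is a hypothetical helper name; the test values are the md5 example from the table above and an ordinary password):

```php
<?php
// Returns true if $candidate's hash is a "magic" hash, i.e. it
// matches ^0+e<digits>$ and is therefore coerced to float 0 by "==".
function is_magic_hash(string $candidate, string $algo): bool {
    return preg_match('/^0+e\d+$/', hash($algo, $candidate)) === 1;
}

var_dump(is_magic_hash('240610708', 'md5'));   // true: 0e462097431906509019562988736854
var_dump(is_magic_hash('password123', 'md5')); // false: ordinary hash
?>
```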

To round this out, we’ve found in testing that a 32-character hash collides this way in about 1 in 200,000,000 random hash tests. That’s thankfully not that often, but it’s often enough that it might be worth trying on a high-volume website or one that generates lots of valid credentials. Practically, this is rather difficult to pull off without sending a massive number of attempts. Note: there are similar issues with “0x” (hex) and “0o” (octal) prefixes as well, but those characters do not appear in hashes, so they are probably less interesting in most cases. It’s also worth mentioning that “==” and “!=” both suffer from the same issue.

Are websites really vulnerable to this attack? Yes, yes, they are. This will surely cause issues across many, many different types of code repositories. Similar confusion can be found in Perl with “==” versus “eq”, and in loosely typed languages like JavaScript. (Thanks to Jeremi M Gosney for help thinking this through.) I wouldn’t be surprised to see a lot of CVEs related to this.

Patch: Thankfully the patch is very simple. If you write PHP you’ve probably heard people mention that you should be using triple equals “===”. This is why. All you need to do is change “==” to “===” and “!=” to “!==” respectively to prevent PHP from attempting to guess the variable type (float vs string). Some people have also recommended using the “hash_equals” function.
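As a sketch of the fix, note how strict comparison and hash_equals both reject a collision that loose comparison accepts. The first value is the md5 example from the table above; “QNKCDZO” is another well-known magic-hash input:

```php
<?php
$stored   = '0e462097431906509019562988736854'; // md5 of "240610708"
$supplied = '0e830400451993494058024219903391'; // md5 of "QNKCDZO"

var_dump($stored == $supplied);            // true:  both coerced to float 0
var_dump($stored === $supplied);           // false: strict string comparison
var_dump(hash_equals($stored, $supplied)); // false: timing-safe comparison
?>
```

hash_equals (available since PHP 5.6) has the added benefit of comparing in constant time, which also defends against timing attacks on hash comparisons.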

WhiteHat will now be testing for this with both our dynamic scanner and static code analysis for WhiteHat customers. If you want a free check, please go here. This is rather easily found using static code analysis that looks for comparisons of hashes in PHP. Lastly, if you have some computing horsepower and any interest in this attack, please consider contributing value/hash pairs for the entries we haven’t found samples for yet, or for hash algorithms we haven’t yet listed.

#HackerKast 32: WordPress Core XSS, Spoof Email Tanks Stock, Tesla Defacement via DNS Hack, 451 Status Code, MS15-034 Microsoft Vulnerability

Hey All! Thanks for checking out this week’s HackerKast! We’re all back and recovering from RSA and my feet still hurt.

Starting off with This Week In WordPress Sucks™, we’ve got a vulnerability in WordPress core this time. That is usually not the case, as core has been gone over several times with a fine-toothed comb, but some persistent XSS popped up in core comment functionality anyway. Also, as per usual, a few hundred plugins were vulnerable to an XSS found in two different frequently used, poorly documented functions. The core issue was patched already, but it is up to administrators of WordPress installs to race to get the patch installed.

Next, in silly-things-that-affect-the-stock-market news, Italy’s 2nd-largest bank had a hoax email go out pretending to be the CEO resigning. Within moments, the stock took a huge dive before coming back up once everyone realized it was a hoax. We’ve seen this before a few times, notably when the Associated Press Twitter account was hacked and tweeted about a bomb at the White House, which caused the entire stock market to dip for a few minutes. This all points to the fact that there are automated stock-trading systems out there making decisions based on social media and news information.

We had a little chat about the recent problem over at Tesla, where their homepage was “defaced”. This wasn’t actually a defacement of any servers on their end; the attackers went after the recently popular low-hanging fruit of DNS providers. Once the DNS provider was owned, the homepage was redirected along with any MX records, allowing emails to be rerouted to the attackers. With this email rerouting in place, they then sent out some Twitter password-reset emails, which allowed them to take over the social media accounts. What Robert and I touched on at the end is that Tesla was lucky this was all for the lulz, because that email rerouting, done quietly, could have been silently MiTMing the company’s emails for some time before anybody noticed. Scary stuff, given how severe a DNS-provider compromise can be.

A new status code has been proposed for the HTTP standard for the purpose of indicating a legally required block. Instead of just a 404, the browser would receive a 451, meaning the content is restricted for any number of legal reasons. Most popularly, this would show up for the geolocation-based content blocks that tons of Netflix users are very aware of.

Lastly, MS15-034 came out, a buffer overflow vulnerability in Microsoft IIS servers. Of course Robert couldn’t help himself and wrote a snippet of exploit code. Then, in This Week In RSnake Puts Something Dangerous On Social Media™, he posted this code to Twitter for people to play with. We’re toying with a possible demo we could do of this for you all, but it might take some tinkering to make it interesting.

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay

Resources:

XSS 0day in WordPress Core
Many WordPress Plugins Found Vulnerable to XSS
Fake Email Regarding CEO Resignation Tanks Stock
Tesla’s DNS and Twitter Account Hacked
New HTTP “Legally Restricted” Status Code Proposed
MS15-034 Buffer Overflow in Microsoft HTTP pt 1.
MS15-034 Buffer Overflow in Microsoft HTTP pt 2.
MS15-034 Buffer Overflow in Microsoft HTTP pt 3.

Notable stories this week that didn’t make the cut:

Thirty Meter Telescope Gets DDoS’d
Google’s April Fools Joke Actually Made Users Less Secure
Extremely Hackable eVoting Machine
Security Expert Pulled Off Flight by FBI After Exposing Airline Security Flaws
Senate Proposes Re-classifying Certain Uses of Software/Hardware as “Fair Use” and Exempt from DMCA
Navy Announces It Will Stop Buying Manned Aircraft
“Better Presentation of URLs in Search” Should Read “Removal of URLs In Search”
“The Real Deal” DarkNet 0Day Auction

#HackerKast 31: RSA San Francisco

We have a special and rare treat this week on HackerKast: Jeremiah, Matt and Robert all together in San Francisco for RSAC. They give a brief overview of some of the interesting conversations and topics they’ve come across.

A recurring topic in conversations with Robert is about how DevOps can improve security and help find vulnerabilities faster. Matt mentions Gauntlt, a cool new project that he contributes to. Gauntlt is a tool that puts security tools into your build-pipeline to test for vulnerabilities before the code goes to production.

Matt also mentions that his buddies at Verizon came out with data showing that people aren’t getting hacked by mobile apps. We haven’t seen large data breaches via mobile apps lead to any financial loss. With the recent surge in mobile use for sensitive data, are these types of data breaches something we should worry about?

On a more pleasant note, Jer was happy to hear that people and companies are realizing the importance of security. Industry leaders are now showing interest in doing application security the right way through a holistic approach.

Also at RSA, Jer talks security guarantees while Matt/Kuskos dive into our Top 10 Web Hacks.

Speaking of Government Backdoors

After Alex Stamos’ standoff with Admiral Mike Rogers, I got to thinking about what the Admiral must have meant when he insisted that government “front doors” were technically possible to create in a way that didn’t give the government ultimate access. Then a story came out about a split-key approach that is being studied. Let me explain why that is a bad idea, and propose a technically less dangerous one.

Barring any conversations about the ethics, the legal conundrums, the loss of trust, the weakening of freedoms, the chilling effect, or the future where we have to provide similar access to any government that asks, there are some legitimate reasons this design is bad. First a brief primer on how split-keys work.

Let’s take a simple encryption algorithm that just uses the password “Will Wheaton” to decrypt the plaintext. Now let’s say government agency A (the FBI/NSA or some similar organization) has access to the first half of the password “Will”. “Will Wheaton” is a very weak password, but it’s made significantly weaker when one party knows at least half of the secret. But it gets worse. Let’s say government agency B (the FISA court) has the second half of the password “Wheaton”. Eventually they need to combine the password somewhere. That physical place is a place where both halves of the password have to be typed in at the same time. Let’s call it a SCIF for argument’s sake.

In this example the SCIF is now the one place where all secrets go, and makes it a prime target to attack. Now both parties can see the data, instead of it just being one party. There may be situations where truly only one party should see the data. If the password is always the same for every piece of encrypted information for all conversations, it practically guarantees abuse once both halves are known. Not only is it significantly easier to break the original encryption since both parties have half the key material, but it has also created a single place where two parties now have to combine their two halves and it is far more likely to be abused.

What happens when access to that user’s data is no longer deemed useful? Does the key simply stop being useful? What if they find out they were mistaken and the data they were looking at is benign? Is there a way to disable their password? No – that’s not how passwords or keys work when they have to work everywhere, all the time. All they can do is ask Apple, or Google, or whoever created the backdoor, to change the user’s keys and/or create a different backdoor password. That’s one of the major drawbacks of this model. It could also inadvertently tip off the suspect in the process, if they notice a new key being issued, depending on how it was implemented.

Now let’s take a slightly different scenario where Apple/Google had a rolling window where passwords changed every day, say. One day it was “Will Wheaton” the next it was “Darth Vader” and so on. That way the FBI/NSA and the FISA courts could subpoena any piece of information but it had to be marked with a certain time period (say ten days and they would use their corresponding 10 keys split into two parts each for a grand total of 20 key-halves). That way, they only had access to certain pieces of information and only for that one conversation, and nothing after that time period. That has a better chance of being successful, but still relies on the parties to come together at some point and allows them both to see the resultant classified material.

A more useful approach would be to have four sets of keys for each time-slice of one day. Key 1 and 2 belonged to the FBI/NSA and Key 3 and 4 belonged to the FISA court. Key 1 would decrypt to a blob of further encrypted material that would only be decrypted fully by Key 3 (think of Key 1 as the outer slice of an onion and Key 3 the inner slice to get to the center). Also Key 4 would decrypt a blob of decrypted material that could only be fully decrypted by key 2. That way you could guarantee that neither individual key could be subverted to fully decrypt without the other’s involvement. It would also allow either or both to see the resultant material should they need it but not without each other’s approval. It would also guarantee that the key material wouldn’t be abused beyond the time slice for the conversation in question.
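To make the onion layering concrete, here is a toy sketch in PHP (not production crypto: the keys are hypothetical placeholders and the IV is fixed purely for illustration). It shows that the outer key alone only peels back to another ciphertext, and the plaintext emerges only after both keys are applied in order:

```php
<?php
// Toy illustration of the two-layer "onion" scheme: Key 3 (inner,
// FISA) wraps the plaintext, then Key 1 (outer, FBI/NSA) wraps that.
$key1 = hash('sha256', 'outer key 1', true); // hypothetical outer key (32 raw bytes)
$key3 = hash('sha256', 'inner key 3', true); // hypothetical inner key (32 raw bytes)
$iv   = str_repeat("\0", 16);                // fixed IV, for illustration only

$plaintext = 'conversation 1234, Tuesday';
$inner = openssl_encrypt($plaintext, 'aes-256-cbc', $key3, 0, $iv);
$outer = openssl_encrypt($inner,     'aes-256-cbc', $key1, 0, $iv);

// Key 1 alone only recovers the inner ciphertext, not the plaintext:
$peeled = openssl_decrypt($outer,  'aes-256-cbc', $key1, 0, $iv);
// Only applying Key 3 to that recovers the conversation:
$result = openssl_decrypt($peeled, 'aes-256-cbc', $key3, 0, $iv);
?>
```

In the real scheme the two layers would belong to two different agencies, so neither can decrypt alone; this sketch just demonstrates the ordering property.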

So here is how it would break down. The FBI/NSA ask FISA for approval to decrypt User A’s conversation with User B. FISA agrees, and the FBI/NSA request that Apple/Google give them the time slices for Tuesday and Wednesday. Apple/Google respond with corresponding conversation numbers 1234 and 1235 and the corresponding blobs of encrypted text (if the FBI/NSA don’t already have them). The FBI/NSA request that FISA decrypt the blobs with Key 4 (Tuesday) and Key 4 (Wednesday) corresponding to conversations 1234 and 1235. FISA returns two encrypted blobs that won’t be useful until the FBI/NSA apply their Key 2 (Tuesday) and Key 2 (Wednesday) for the same time slices and conversations. The FBI/NSA decrypt the final encrypted blobs and are able to read the conversation. At this point Apple/Google know nothing about the data, only that it was subpoenaed. The FISA court was aware of and complicit in the decryption but never saw the data, and the FBI/NSA got only the data they requested and nothing more. If the court also needs to see the data, the corresponding Keys 1 and 3 are used for the same time slices against the corresponding blobs of data.

Of course this is a huge burden, because now each user needs four keys created for each day. Assuming there are 3 billion people in the world, each using perhaps 3 different chat systems per day, that would require something like 36 billion keys to be shared between the two government bodies (18 billion each) per day. That’s a lot. Not to mention that the keys wouldn’t just be short passwords, but presumably something like X.509 or GPG certificates, which can be quite large. And all of that assumes the parties can somehow get access to those keys in a way that the other (or malicious third parties) can’t intercept or see. The devil lies deep in those details.
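The key-volume arithmetic above can be checked in a few lines. The population and per-user figures are the assumptions stated in the text, not measured values:

```python
users = 3_000_000_000      # assumed user base from the text
services = 3               # assumed chat systems per user per day
keys_per_slice = 4         # Keys 1-4 for each daily time slice

total_keys_per_day = users * services * keys_per_slice
per_agency = total_keys_per_day // 2  # split between FBI/NSA and FISA

print(total_keys_per_day)  # 36000000000, i.e. 36 billion per day
print(per_agency)          # 18000000000, i.e. 18 billion each
```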

Ultimately though, I think Alex Stamos is right to press the government. Our industry thrives on trust, and if people believe that the government is spying on them, they are significantly less likely to transact or act normally – as themselves. Even if we can solve the technical problems, we have to be extremely thoughtful about how – or even whether – we deploy such a system at all. Even when one’s only crime is one of thought or ideas, this kind of system dramatically increases the likelihood that freedom of expression will be lost in the annals of time. We all have to decide: would we rather have security in the form of Big Brother, or would we rather have privacy? We can’t have both, so we had better make up our minds now, before the decisions are made on our behalf.

Please check out a similar and wonderfully written post by Matthew Green as well.

Web Security for the Tech-Impaired: The Importance of the ’S’

There’s one little letter that has huge importance when you’re logging into sites or buying your favorite items: it’s the letter ’S’. The ’S’ I’m referring to is the ’S’ in HTTPS. You may never have noticed the ’S’ in your web browser, or you may have seen it and never realized its importance. You may know it as that thing that gets added before the website address you type in. What does it mean? Why is it important? You shall find the answers in this post!

HTTP and HTTPS are referred to as ‘protocols’. In essence, these protocols define how your computer will talk to another computer. As you browse the web, you may notice that some sites use HTTP, while others use HTTPS. If you bring up CNN’s home page, edition.cnn.com, you’ll notice that the URL bar shows either http in front of the address or just edition.cnn.com. This shows that the site is using the HTTP protocol to communicate. HTTP is a non-secure way of transmitting data from your computer to the website. Data sent over the HTTP protocol can be intercepted and read at any point between you and the website’s computer. This is what’s known as a ‘man-in-the-middle’ (MITM) attack. A person listening in on the virtual conversation between your computer and the website’s computer can see all the data that’s being sent. This isn’t a big deal if you’re reading articles on CNN or searching for content on Wikipedia, but what if you log in to a site or buy something from an online store? You certainly don’t want the bad guys to know your username and password or your credit card number, so how do you protect yourself?

This is where the mighty ’S’ comes to the rescue. The HTTPS protocol is a way of securely sending data between your computer and the website you are interacting with. If a site is using HTTPS, you’ll notice the https in front of the URL. As an example, go to www.facebook.com. In modern, up-to-date browsers, you’ll likely see the https colored either green or red, along with a lock icon. Green text with the lock icon indicates that you’re communicating securely with the website and everything looks to be in order.

If the https is red, there is probably some type of issue with the site security. It may be that the site’s certificate is out of date or invalid, or it may be that the site includes insecure third-party content, or there may be other issues. In any case, it is always safest not to proceed with a transaction that involves information you would like to keep secure if the HTTPS and lock icon are not green.

HTTPS uses a sophisticated system to encrypt the data you send to the website and vice versa. A bad guy performing an MITM attack will still see the conversation between you and the website, but it will be completely incoherent – like listening to a conversation in a language that the two people talking made up between themselves. Any time you are doing anything that requires a login, a credit card number, a Social Security number, or ANY private data, you want to make sure that you see the HTTPS protocol and, if you have the benefit of a modern browser, that the green lock icon is present. NEVER log in or give any sensitive information to a site that does not communicate over HTTPS.
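The rule of thumb above – only hand over sensitive data when the address starts with https – can be expressed in a few lines of Python. This is a simplified sketch (a browser also verifies the site’s certificate, which a scheme check alone cannot do); the function name is just for illustration:

```python
from urllib.parse import urlparse

def safe_for_sensitive_data(url: str) -> bool:
    """Return True only when the URL uses the HTTPS scheme."""
    return urlparse(url).scheme == "https"

print(safe_for_sensitive_data("https://www.facebook.com"))  # True
print(safe_for_sensitive_data("http://edition.cnn.com"))    # False
```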

Protecting your Intellectual Property: Are Binaries Safe?

Organizations have been steadily maturing their application testing strategies and in the next several weeks we will be releasing the WhiteHat Website Security Statistics report that explores the outcomes of that maturation.

As part of that research, we explored some of the activities being undertaken as part of application security programs, and we were impressed to see that 87% of respondents perform static analysis: 32% of them perform it with each major release, and 13% perform it daily.

This adoption of testing earlier in the software lifecycle is a welcome move. It is not a simple task for many companies to build out the policies that are essential for driving the maturity of an application security program.

We wanted to explore a policy that seems to have been conflated with the need to gain visibility into third-party software service providers and commercial off-the-shelf software (COTS) vendors’ products.

There seems to be a significant amount of confusion, and perhaps intentional fear, uncertainty and doubt (FUD), in this area. The way you go about testing third-party software should mirror the way you test your own software. Where the question of measurable security really lies is in binary analysis performed on the assumption that sharing binaries does not expose your Intellectual Property (IP).

Binaries can easily be decompiled, revealing nearly 100% of the source code. If your organization is distributing the binaries that make up your web application to a third party, you have effectively given them all the source code as well. This conflation of testing policies leads to a false sense of Intellectual Property protection.

Reverse engineering, while requiring some effort, is no real obstacle: tools such as ILSpy and Show My Code are freely and widely available. Sharing your binaries in an attempt to protect your Intellectual Property actually ends up exposing 100% of your IP.
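Python’s own bytecode makes a convenient stand-in for the .NET and Java cases those tools target: compiled artifacts keep far more structure than people expect. In this sketch (the function and its “secret” are invented for illustration), the standard-library disassembler recovers a hard-coded credential straight from the compiled function:

```python
import dis
import io

def check_license(key: str) -> bool:
    # Pretend-proprietary logic, as it might ship inside a compiled artifact.
    return key == "SECRET-1234"

# Disassemble the compiled function, exactly as a decompiler would begin.
buf = io.StringIO()
dis.dis(check_license, file=buf)
print("SECRET-1234" in buf.getvalue())  # True: the constant is plainly visible
```

The same dynamic plays out with ILSpy against a .NET assembly, where the output is not just constants but readable source-level code.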

Source and Binary

This video illustrates this point.

Educational Series: How lost or stolen binary applications expose all your intellectual property. from WhiteHat Security on Vimeo.

While customers are often required by policy to protect their source code, the only way to do that is to protect your binaries as well. That means being careful never to enable the compilation options – which some vendors require for binary review – that make decompilation straightforward. At a very minimum, it means ensuring those same binaries never get uploaded to production, where they may be exposed via vulnerabilities. Either way, if your requirement is to protect your IP, you need to make certain your binaries don’t fall into the wrong hands, because inside those binaries could be the keys to the castle.

For more information, click here to see the infographic on the two testing methodologies.