
Teaching Code in 2014

Guest post by JD Glaser

“Wounds from a friend can be trusted, but an enemy multiplies kisses” – Proverbs 27:6

This proverb, over 2,000 years old, applies directly to every author of programming material today. By avoiding security coverage, or by explicitly teaching insecure examples, authors do the world at large a huge disservice, multiplying both the spread of incorrect knowledge and the development of insecure habits in developers. The teacher becomes the enemy when their teachings encourage poor practices through example; those practices ultimately bite the student, at no cost to the teacher. In fact, the more skilled and effective a teacher you are, the worse the problem becomes. Good teachers, by virtue of their teaching skill, greatly multiply the 'kisses' of poor example code that eventually becomes 'acceptable production code' ad infinitum.

So, to the authors of programming material everywhere, whether you write books or blogs, this article is targeted at you. By choosing to write, you have chosen to teach. Therefore you have a responsibility.

No Excuse for Demonstrating Insecure Code Examples

This is year 2014. There is simply no excuse for not demonstrating and explaining secure code within the examples of your chosen language.

Examples are critical. First, examples teach. Second, examples, one way or the other, find their way into production code. This is a simple fact with huge ramifications. Once a technique is learned, once that initial impact is made upon the mind, that newly learned technique becomes the way it is done. When you teach an insecure example, you perpetuate that poor knowledge in the world and it repeats itself again and again.

Change is difficult for everyone. Pepsi learned this the expensive way. Once people accepted that Coke was It, hundreds of millions of dollars spent on really cool advertising were not able to change that mindset. The mind has only one slot for number one, and Pepsi has remained second ever since. Don't continue to reinforce security in the mind as second.

Security Should Be Ingrained, Not Added

When teaching, even if you are ‘just’ calculating points on a sample graph for those new to chart programming, if any of that data comes from user supplied input via that simple AJAX call to get the user going, your sample code should filter that sample data. When you save it, your sample code should escape it for the chosen database; when you display it to the user, the sample code of your chosen language should escape it for the intended display context. When your sample data needs to be encrypted, your sample code should apply modern cryptography. There are no innocent examples anymore.
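To make the point concrete, here is a minimal sketch of the kind of output escaping that belongs in even the simplest chart example before user-supplied data is written into an HTML context. The function name is illustrative; in real code you would normally reach for your framework's built-in escaping.

```javascript
// Minimal sketch: escape user-supplied data for an HTML context
// before echoing it back into the page.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')   // ampersand first, so we don't double-escape
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```

Five lines of escaping, and the "innocent" AJAX-driven chart example now demonstrates the habit instead of undermining it.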

Security should be ingrained, not added. Programmers need to be trained to see security measures in line with normal functionality. In this way, they are trained to incorporate security measures initially as code is written, and also to identify when security measures are missing.

When security measures are removed for the sake of 'clarity,' the student is taught, unconsciously, that security makes things less clear. The student is also trained, directly by example, to add security 'later.'

Learning Node.js In 2014

Most of the latest Node.js books have some great material, but only one out of several took the time to teach, via integrated code examples, how to properly escape data in Node.js code destined for MySQL. Kudos to Pedro Teixeira, who wrote Professional Node.js from Wrox Press, for teaching proper security measures as an integral part of the lesson to those adapting to Node.js.

Contrast this with Node.js for PHP Developers from O’Reilly Press, where the examples explicitly demonstrate how to insecurely code Node.js with SQL Injection holes. The code in this book actually teaches the next new wave how to code wide open vulnerabilities to servers. Considering the rapid adoption of Node.js for server applications, and the millions who will use it, this is a real problem.

The fact that the retired legacy method of building SQL statements through insecure string concatenation was chosen by the author as the instruction method for this book really demonstrates the power of ingrained learning. Once something is learned, for good or bad, a person keeps repeating what they know for years. Again, change is difficult.
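The contrast is easy to show side by side. In the sketch below, `escapeSql()` is a simplified, illustrative stand-in for a real driver's parameter binding (for instance, the `?` placeholders offered by Node.js MySQL drivers); in real code, always use the driver's own mechanism rather than rolling your own.

```javascript
// The retired, insecure style vs. an escaped style, in Node.js-flavored
// JavaScript. escapeSql() below is an illustrative stand-in only.
const userId = "1' OR '1'='1";

// Insecure: attacker input becomes part of the SQL grammar.
const insecure = "SELECT * FROM users WHERE id = '" + userId + "'";

// Safer sketch: the value is escaped and quoted so it stays data.
function escapeSql(value) {
  return "'" + String(value).replace(/'/g, "''") + "'";
}
const safer = "SELECT * FROM users WHERE id = " + escapeSql(userId);

console.log(insecure); // the OR '1'='1' clause is live SQL
console.log(safer);    // the whole input is one quoted string literal
```

The insecure version hands the attacker control of the WHERE clause; the escaped version keeps the entire input inside a string literal. That difference is exactly what an integrated example can ingrain.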

The developers taught by these code examples will build apps that we may all use at some point. The security vulnerabilities those apps contain will affect everyone. It makes one wonder: are you the author whose book or blog taught the person who coded the latest Home Depot breach? This is code that will have to be retrofitted later at great time and expense. It is not a theoretical problem. It has a large dollar figure attached to its cost.

For some reason, authors of otherwise good material choose to avoid teaching security in an integral way. Either they are not knowledgeable about security, in which case it is time to up their game, or they have fallen into the 'clarity omission' trap. This is a widespread practice, adopted by almost everyone, of declaring that since 'example code' is not production code, insecure code is excusable, and therefore critical facts are not presented. This is a mistake with far-reaching implications.

I recently watched a popular video on PHP training from an instructor who is in all other respects a good instructor. Apparently, this video has educated at least 15,000 developers so far. In it, the author briefly states that output escaping should be done. However, in his very next statement, he declares that for the sake of brevity the technique won't be demonstrated, and the examples given out as part of the training do not incorporate it. The opportunity to ingrain the idea that output escaping is necessary, and that it should be an automatic part of a developer's toolkit, has just been lost on the majority of those 15,000 students, because the code to which they will later turn for reference is lacking. Most, if not all, will ignore the practice until mandated by an external force.

Stop the Madness

In the real world, when security is not incorporated at the beginning, it costs additional time and money to retrofit it later. This cost is never paid until a breach is announced on the front page of a news service and someone is sued for negligence. Teachers have a responsibility to teach this.

Coders code as they are taught. Coders are primarily taught through books and blog articles. Blog articles are especially critical for learning as they are the fastest way to learn the latest language technique. Therefore, bloggers are equally at fault when integrated security is absent from their examples.

The fact is that if you are a writer, you are a trainer. You are in fact training a developer how to do something over and over again. Security should be integral. Security books on their own, as secondary topics, should not be needed. Think about that.

The past decade has been spent railing against insecure programming practices, but the question needs to be asked, who is doing the teaching? And what is being taught?

You, the writer, are at the root of this widespread security problem. Again, this is 2014, and these issues have been in the spotlight for 10+ years. This is not about mistakes, or about being a security expert. This is about the complete avoidance of the basics. What habit is a young programmer learning now that will affect the code going into service next year, and in the years after?

The Truth Hurts

If you are still inadvertently training coders how to write blatantly insecure code, either through ignorance or omission, you have no business training others, especially if you are successful. If you want to teach, if you make money teaching programming, you need to stop the madness, educate yourself, and reverse the trend.

Write those three extra lines of filtering code and add that extra paragraph explaining the purpose. Teach developers how to see security ingrained in code.

Stop multiplying the sweet kiss of simplicity and avoidance. Stop making yourself the enemy of those who like to keep their credit cards private.

About JD
In his own words, JD "did a brief stint of 15 years in security. Headed the development of TripWire for Windows and Foundscan. Founded NT OBJECTives, a company to scan web stuff. BlackHat speaker on Windows forensic issues. Built the Windows Forensic Toolkit and FPort, which the Department of Justice has used to help convict child pornographers."

Raising the CSRF Bar

For years, we at WhiteHat have been recommending tokenization as the number one protection from Cross Site Request Forgery (CSRF). Just having a token is not enough, of course: it must be cryptographically strong, sufficiently random, and properly validated on the server. I can't stress that last point enough, as the first thing I try when I see a CSRF token is to empty out the value and see if the form submission is still accepted as valid.

Historically, the bar for "good enough" when it comes to CSRF tokens was whether the token changed at least once per session.

For example: a user logs in and a token is generated and attached to their authenticated session. That token is used, for that user, on all sensitive/administrative forms that are to be protected against CSRF. As long as the user received a new token when they logged out and back in, and the old one was invalidated, that met the bar for "good enough."

Not anymore.

Thanks to some fellow security researchers who are much smarter than I am when it comes to crypto chops (Angelo Prado, Neal Harris, and Yoel Gluck), and their latest attack on SSL (which they dubbed BREACH), that no longer suffices.

BREACH is an attack on HTTP response compression algorithms, like gzip, which are very popular. Make your giant HTTP responses smaller, and thus faster for your users? No-brainer. Well, thanks to the BREACH attack, within around 1,000 requests an attacker can pull sensitive SSL-encrypted information from HTTP responses. Sensitive information such as… CSRF tokens!

The bar must be raised. We no longer allow tokens to stay the same for an entire session if response compression is enabled. To combat this attack, CSRF tokens need to be generated for every request/response pair. That way they cannot be stolen by making ~1,000 requests against a consistent token.

TL;DR

  • Old “good enough” – CSRF Tokens unique per session
  • BREACH (BlackHat 2013)
  • New “good enough” – CSRF Tokens must be unique per request if HTTP Response compression is being used.

The Insecurity of Security Through Obscurity

The topic of SQL Injection (SQLi) is well known to the security industry by now. From time to time, though, researchers come across a vector so unique that it must be shared. In my case, I had this gibberish — &#MU4<4+0 — turn into an exploitable vector due to some unique coding done on the developers' end. So, as all stories go, let's start at the beginning.

When we come across a login form in a web application, there are a few go-to testing strategies. Of course, the first thing I did in this case was submit a login request with a username of admin and a password of ' OR 1=1--. To no one's surprise, the application responded with a message about incorrect information. However, the application did respond in an unusual way: the password field had ( QO!.>./* in it. My first thought was, "what is the slim chance they just returned the real admin password to me? It looks like it was randomly generated, at least." So, of course, I submitted another login attempt, this time using the newly obtained value as the password. This time I got a new value returned on the failed login attempt: ) SL"+?+1'. Notably, they did not properly encode the HTML markup characters in this new response, so if we can figure out what is going on, we can certainly get reflected Cross Site Scripting (XSS) on this login page. At this point it's time to take my research to a smaller scale and attempt to understand what is going on here on a character-by-character level.

The next few login attempts submitted were intended to scope out what was going on. By submitting three requests with a, b, and c, it may be possible to see a bigger picture starting to emerge. The response for each appears to be the next letter in the alphabet — we got b, c, and d back in return. So as a next step, I tried to add on a bit to this knowledge. If we submit aa we should expect to get bb back. Unfortunately things are never just that easy. The application responded with b^ instead. So let’s see what happens on a much larger string composed of the same letter; this time I submitted aaaaaaaaaaaa (that’s 12 ‘a’ characters in a row), and to my surprise got this back b^c^b^b^c^b^. Now we have something to work with. Clearly there seems to be some sort of pattern emerging of how these characters are transforming — it looks like a repeating series. The first 6 characters are the same as the last 6 characters in the response.

So far we have only discussed these characters in their human readable format that you see on your keyboard. In the world of computers, they all have multiple secondary identities. One of the more common translations is their ASCII numerical equivalent. When a computer sees the letter ‘a’ the computer can also convert that character into a number, in this case 97. By giving these characters a number we may have an easier time determining what pattern is going on. These ASCII charts can be found all over the Internet and to give credit to the one I used, www.theasciicode.com.ar helped out big time here.

Since we have determined that the pattern repeats every 6 characters, let's figure out what this shift actually looks like numerically. We start off by injecting aaaaaa and, as expected, get back b^c^b^. But what does this look like using the ASCII numerical equivalents instead? In the computer world we are injecting 97,97,97,97,97,97 and we get back 98,94,99,94,98,94. From this view it looks like each position has its own unique shift being applied to it. Surely everyone loved matrices as much as I did in math class, so let's bust out some of that old matrix subtraction: [97,97,97,97,97,97] - [98,94,99,94,98,94] = [-1,3,-2,3,-1,3]. Now we have a series that we can apply to the ASCII numerical equivalent of whatever we want to inject in order to get it to reflect how we want.
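That subtraction is easy to check mechanically. A few lines of JavaScript (any scripting language would do) recover the repeating series straight from the observed exchange:

```javascript
// Recover the per-position shifts from the observed exchange:
// injecting 'aaaaaaaaaaaa' came back as 'b^c^b^b^c^b^'.
const sent = 'aaaaaaaaaaaa';
const received = 'b^c^b^b^c^b^';

const diffs = [...sent].map(
  (ch, i) => ch.charCodeAt(0) - received.charCodeAt(i)
);

console.log(diffs.slice(0, 6)); // → [ -1, 3, -2, 3, -1, 3 ]
```

The full 12-element result is the 6-element series repeated twice, confirming the cycle length.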

Finally, it's time to start forming a Proof of Concept (PoC) injection to exploit this potential XSS issue. So far we have found that a repeating pattern of 6 character shifts is being applied to our input, and we have determined the exact shift occurring at each position. If we apply the correct character shifts to an exploitable injection, such as "><img/src="h"onerror=alert(2)//, we find it needs to be submitted as !A:llj.vpf<%g%mqduqrp@`odur+1,.2. Of course, seeing that alert box pop up is the visual verification we needed that we had reverse engineered what was going on.
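Rather than shifting characters by hand, the discovered series can be applied automatically. This small helper (illustrative, in JavaScript) reproduces the exact submission string above:

```javascript
// Apply the discovered repeating shift series to a payload, so that
// the application's own transform turns it back into the original.
const shifts = [-1, 3, -2, 3, -1, 3];

function encodePayload(payload) {
  return [...payload]
    .map((ch, i) => String.fromCharCode(ch.charCodeAt(0) + shifts[i % 6]))
    .join('');
}

console.log(encodePayload('"><img/src="h"onerror=alert(2)//'));
// → !A:llj.vpf<%g%mqduqrp@`odur+1,.2
```

The same function works for any payload of any length, since the series simply wraps around every six characters.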

Since we have a working PoC for XSS, let's revisit our initial testing for SQL injection. When we apply the same character shifts to ' OR 1=1--, we find that it needs to be submitted as &#MU4<4+0. One of the space characters in our injection is shifted by -1, which results in a non-printable character between the 'U' and the '4'. With the proper URL encoding applied, it appears as %26%23MU%1F4%3C4%2B0 when sent to the application. It was a glorious moment when I saw this application authenticate me as the admin user.

Back in the early days of the Internet, well before I even started learning about information security, this type of attack was quite popular. These days, developers commonly use parameterized queries properly on login forms, so finding this type of issue has become quite rare. Unfortunately, this particular app was not created for the US market and was probably developed by very green coders in a newly developing country. This was their attempt at encoding users' passwords, when we all know passwords should be hashed. Had this application not returned any content in the password field on failed login attempts, this SQL injection vulnerability would have remained perfectly hidden from any black-box testing via this method. This highlights one of the many ways a vulnerability may exist yet be obscured from those testing the application.

For those still following along, I have provided my interpretation of what the backend code may look like for this example. By flipping the + and - around, I could also use this same code to properly encode my injections so that they reflect the way I want:


<!DOCTYPE html>
<html>
<body>
<form name="login" action="#" method="post">
<?php
$strinput = $_POST['password'];
$strarray = str_split($strinput);
// Repeating six-character shift series, matching the observed behavior:
// 'a' (97) becomes 'b' (98) at position 0, '^' (94) at position 1, etc.
$shifts = array(1, -3, 2, -3, 1, -3);
for ($i = 0; $i < strlen($strinput); $i++) {
    $strarray[$i] = chr(ord($strarray[$i]) + $shifts[$i % 6]);
}
$password = implode($strarray);
echo "Login:<input type=\"text\" name=\"username\" value=\"" . htmlspecialchars($_POST['username']) . "\"><br>\n";
// Vulnerable: $password is echoed without htmlspecialchars(), enabling the reflected XSS.
echo "Password:<input type=\"password\" name=\"password\" value=\"" . $password . "\"><br>\n";
// --- CODE SNIP ---
// Vulnerable: $password is concatenated into the query unescaped, enabling the SQL injection.
$examplesqlquery = "SELECT id FROM users WHERE username='" . addslashes($_POST['username']) . "' AND password='$password'";
// --- CODE SNIP ---
?>
<input type="submit" value="submit">
</form>
</body>
</html>

Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don't you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We've long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what would be required when Aviator was made publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen using aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD or BSD-like licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won't in the future, just that we're discussing it internally first. Doing so is a big decision that shouldn't be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we'll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are determining where to focus our resources and what should come next, but there is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both myself and Robert "RSnake" Hansen (Director, Product Management) for years. Both of us have discussed the risks of browser security around the world. A big part of the Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat, many people ask us what browser we use. Individually, our answer has been "mine." Now we can be more specific: that browser is Aviator. It is a browser we feel confident using, not only for our own security and privacy, but one we can confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Popup blockers used to work wonders, but advertisers switched to sourcing in JavaScript and putting content directly on the page. They no longer have to physically create a new window, because they can take over the entire page. With Aviator, the user's browser doesn't even make the connection to an advertising network's servers, so obnoxious or potentially dangerous ads simply don't load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a Beta through the Mac Store. Browsers attempt to take advantage of the fastest rendering methods they can. The APIs involved are sometimes unsupported by the OS and are called "private APIs." Apple does not support these APIs because they may change, and Apple doesn't want to be held accountable when things break. As a result, while Apple allows people to use these undocumented and very fast private APIs, it doesn't allow people to distribute applications that use them. We can speculate that the reason is that users are likely to blame Apple, rather than the program, when things break. So, after about a month of wrestling with it, we decided that for now we'd avoid the Mac Store. In the shuffle we didn't continue signing the binaries as we had been. It was simply an oversight.

Why is Aviator’s application directory world-writable?

During the development process all of our developers were on dedicated computers, not shared computers. So this was an oversight brought on by the fact that there was no need to hide data from one another and therefore chmod permissions were too lax as source files were being copied and edited. This wouldn’t have been an issue if the permissions had been changed back to their less permissive state, but it was missed. We will get it fixed in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly-permissive chmod settings that Aviator 1.1 shipped with. We agreed it was overly permissive and have fixed the issue to be more in line with existing browsers to protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute, first you say, "if you aren't paying you are the product," then you offer a free browser?

Fair point. Like we've said, Aviator started off as an internal project simply to protect WhiteHat employees and is not an official company "product." Those outside the company asked if they could use the same browser that we do. Aviator is our answer to that. Since we're not in the advertising and tracking business, how could we say no? At some point in the future we'll figure out a way to generate revenue from Aviator, but in the meantime, we're mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though various employees of those vendors have shared informal feedback over Twitter. Some of it has been positive, some negative. In either case, we're most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.

What’s the Difference between Aviator and Chromium / Google Chrome?

Context:

It's a fundamental rule of Web security: a Web browser must be able to defend itself against a hostile website. Presently, in our opinion, the market-share-leading browsers cannot do this adequately. This is an everyday threat to the personal security and privacy of the more than one billion people online, which includes us. We've long held and shared this point of view at WhiteHat Security. Like any sufficiently large company, we have many internal staff members who aren't as tech savvy as WhiteHat's Threat Research Center, so we had the same kind of security problem that the rest of the industry had: we had to rely on educating our users, because no browser on the market was suitable for our security needs. But education is a flawed approach; there are always new users and new security guidelines. So instead of engaging in a lengthy educational campaign, we began designing an internal browser that would be secure and privacy-protecting enough for our own users by default. Over the years a great many people, including friends, family members, and colleagues, have asked us what browser we recommend, or even what browser their children should use. Aviator became our answer.

Why Aviator:

The attacks a website can generate against a visiting browser are diverse and complex, but can be broadly categorized into two types. The first type of attack is designed to escape the confines of the browser walls and infect the desktop with malware. Today's top-tier browser defenses include software security in the browser core, an accompanying sandbox, URL blacklists, silent updates, and plug-in click-to-play. Well-known browser vendors have done a great job in this regard and should be commended. No one wins when users' desktops become part of a botnet.

Unfortunately, the second type of browser attack has been left largely undefended. These attacks are pernicious and carry out their exploits within the browser walls. They typically don’t implant malware, but they are indeed hazardous to online security and privacy. I’ve previously written up a lengthy 8-part blog post series on the subject documenting the problems. For a variety of reasons, these issues have not been addressed by the leading browser vendors. Rather than continue asking for updates that would likely never come, we decided we could do it ourselves.

To create Aviator we leveraged open source Chromium, the same browser core used by Google Chrome. Then, because the BSD license of Chromium allows it, we made many very particular changes to the code and configuration to enhance security and privacy. We named our product Aviator. Many people are eager to learn exactly what the differences are, so let's go over them.

Differences:

  1. Protected Mode (Incognito Mode) / Not Protected Mode:
    TL;DR All Web history, cache, cookies, auto-complete, and local storage data is deleted after restart.
    Most people are unaware that there are 12 or more locations where websites may store cookie and cookie-like data in a browser. Cookies are typically used to track your surfing habits from one website to the next, but they also expose your online activity to nosy people with access to your computer. Protected Mode purges these storage areas automatically with each browser restart. While other browsers have this feature or something similar, it is not enabled by default, which can make it a chore to use. Aviator launches directly into Protected Mode by default and clearly indicates the mode of the current window. The security/privacy side effect of Protected Mode also helps protect against browser auto-complete hacking, login detection, and deanonymization via clickjacking, by reducing the number of session states you have open; this is due to an intentional lack of persistence in the browser across sessions.
  2. Connection Control: 
    TL;DR Rules for controlling the connections made by Aviator. By default, Aviator blocks Intranet IP-addresses (RFC1918).
    When you visit a website, it can instruct your browser to make potentially dangerous connections to internal IP addresses on your network — IP addresses that could not otherwise be connected to from the outside (NAT). Exploitation may lead to simple reconnaissance of internal networks, or it may permanently compromise your network by overwriting the firmware on the router. Without installing special third-party software, it's impossible to block any bit of Web code from carrying out browser-based intranet hacking. If Aviator happens to be blocking something you want to be able to get to, Connection Control allows the user to create custom rules — or temporarily use another browser.
  3. Disconnect bundled (Disconnect.me): 
    TL;DR Blocks ads and 3rd-party trackers.

    Essentially every ad on every website your browser encounters is tracking you, storing bits of information about where you go and what you do. These ads, along with invisible 3rd-party trackers, also often carry malware designed to exploit your browser when you load a page, or to try to trick you into installing something should you choose to click on it. Since ads can be authored by anyone, including attackers, both ads and trackers may also harness your browser to hack other systems, hack your intranet, incriminate you, etc. Then of course the visuals in the ads themselves are often distasteful, offensive, and inappropriate, especially for children. To help protect against tracking, login detection and deanonymization, auto cross-site scripting, drive-by-downloads, and evil cross-site request forgery delivered through malicious ads, we bundled in the Disconnect extension, which is specifically designed to block ads and trackers. According to the Chrome web store, over 400,000 people are already using Disconnect to protect their privacy. Whether you use Aviator or not, we recommend that you use Disconnect too (Chrome / Firefox supported). We understand many publishers depend on advertising to fund the content. They also must understand that many who use ad blocking software aren’t necessarily anti-advertising, but more pro security and privacy. Ads are dangerous. Publishers should simply ask visitors to enable ads on the website to support the content they want to see, which Disconnect’s icon makes it easy to do with a couple of mouse-clicks. This puts the power and the choice into the hands of the user, which is where we believe it should be.
  4. Block 3rd-party Cookies: 
    TL;DR Default configuration update. 

    While it’s very nice that cookies, including 3rd-party cookies, are deleted when the browser is closed, it’s even better when 3rd-party cookies are not allowed in the first place. Blocking 3rd-party cookies helps protect against tracking, login detection, and deanonymization during the current browser session.
  5. DuckDuckGo replaces Google search: 
    TL;DR Privacy enhanced replacement for the default search engine. 

    It is well-known that Google search makes the company billions of dollars annually via advertising and user tracking / profiling. DuckDuckGo promises exactly the opposite, “Search anonymously. Find instantly.” We felt that was a much better default option. Of course if you prefer another search engine (including Google), you are free to change the setting.
  6. Limit Referer Leaks: 
    TL;DR Referers no longer leak cross-domain, but are only sent same-domain by default. 

    When you click from one page to the next, browsers tell the destination website where the click came from via the Referer header (intentionally misspelled in the HTTP specification). Doing so can leak sensitive information contained in the referring URL, such as search keywords, internal IPs/hostnames, session tokens, etc., while offering little, if any, benefit to the user. Aviator therefore only sends these headers within the same domain.
  7. Plug-Ins Click-to-Play: 
    TL;DR Click-to-play enabled by default.

    Plug-ins (e.g. Flash and Java) are a source of tracking, malware exploitation, and general annoyance. Plug-ins often keep their own storage for cookie-like data, which isn’t easy to delete, especially from within the browser. They are also a huge attack vector for malware infection: your browser might be secure, but plug-ins often are not, and they must be updated constantly. Then of course there are all those annoying sounds and visuals made by plug-ins, which are difficult to identify and block once they load. So, we blocked them all by default. When you want to run a plug-in, say on YouTube, just one click on the puzzle piece loads it. If you want a website to always load plug-ins, that’s a configuration change as well: “Always allow plug-ins on…”
  8. Limit data leakage to Google: 
    TL;DR Default configuration update.

    In Aviator we’ve disabled “Use a web service to help resolve navigation errors” and “Use a prediction service to help complete searches and URLs typed in the address bar” by default. We also removed all options to sync / login to Google, and the tracking traffic sent to Google upon Chromium installation. For many of the same reasons that we have defaulted to DuckDuckGo as a search engine, we have limited what the browser sends to Google to protect your privacy. If you choose to use Google services, that is your choice. If you choose not to, though, it can be difficult in some browsers. Again, our mantra is choice, and this gives you the choice.
  9. Do Not Track: 
    TL;DR Default configuration update.

    Enabled by default. While we prefer “Can-Not-Track” to “Do-Not-Track”, we figured it was safe enough to enable the “Do Not Track” signal by default in the event it gains traction.

So far we have appreciated the response to WhiteHat Aviator, and we welcome additional questions and feedback. Our goal is to continue to make this a better and more secure browser option for consumers. Please continue to spread the word and share your thoughts with us. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

Introducing WhiteHat Aviator – A Safer Web Browser

Jeremiah Grossman and I have been publicly discussing browser security and privacy, or the lack thereof, for many years. We’ve shared the issues hundreds of times at conferences, in blog posts, on Twitter, in white papers, and in the press. As the adage goes, “If you’re not paying for something, you’re not the customer; you’re the product being sold.” Browsers are no different, and the major vendors (Google, Mozilla, Microsoft) simply don’t want to make the changes necessary to offer a satisfactorily secure and private browser.

Before I go any further, it’s important to understand that it’s NOT that the browser vendors (Google, Mozilla, and Microsoft) don’t grasp or appreciate what plagues their software. They understand the issues quite well. Most of the time they actually nod their heads and even agree with us! This naturally invites the question: “why aren’t the necessary changes made to fix things and protect people?”

The answer is simple. Browser vendors (Google, Mozilla, and Microsoft) choose not to make these changes because doing so would run the risk of hurting their market share and their ability to make money. You see, offering what we believe is a reasonably secure and privacy-protecting browser requires breaking the Web, even though it’s just a little and in ways few people would notice. As just one example of many, let’s discuss the removal of ads.

The online advertising industry is promoted as a means of helping businesses reach an interested target audience. But tens of millions of people find these ads to be annoying at best, and many find them highly objectionable. The targeting and the assumptions behind them are often at fault: children may be exposed to ads for adult sites, and the targeting is often based on bias and stereotypes that can cause offense. Moreover, these ads can be used to track you across the web, are often laden with malware, and can lead those who click on them to scams.

One would think that people who don’t want to click on ads are not the kind of people the ad industry wants anyway. So if browser vendors offered a feature capable of blocking ads by default, it would increase user satisfaction for millions, provide a more secure and privacy-protecting online experience, and ensure that advertisements were seen by people who would react positively, rather than negatively, to the ads. And yet not a single browser vendor offers ad blocking, instead relying on optional third-party plugins, because doing so would break their business model and how they make money. Current incentives between the user and the browser vendor are misaligned. People simply aren’t safe online when their browser vendor profits from ads.

I could go on and give a dozen more examples like this, but rather than continuing to beat a drum that no one with the power to make the change is willing to listen to – we decided it was time to draw a line in the sand, and to start making the Web work the way we think it should: a way that protects people. That said, I want to share publicly for the first time some details about WhiteHat Aviator, our own full-featured web browser, which was until now a top-secret internal project from our WhiteHat Security Labs team. Originally, Aviator started out as an experiment by our Labs team to test our many Web security and privacy theories, but today Aviator is the browser given to all WhiteHat employees. Jeremiah and I, along with many others at WhiteHat, use Aviator daily as our primary browser. We’re often asked by those outside the company what browser we use, to which we have answered, “our own.” After years of research, development, and testing we’ve finally arrived at a version that’s mature enough for public consumption (OS X). Now you can use the same browser that we do.

WhiteHat Security has no interest or stake in the online advertising industry, so we can offer a browser free of ulterior motives. What you see is what you get. We aren’t interested in tracking you or your browsing history, or in letting anyone else have that information either.

Aviator is designed for the everyday person who really values their online security and privacy:

  • We bundled Aviator with Disconnect to remove ads and tracking
  • Aviator is always in private mode
  • Each tab is sandboxed (a sandbox provides controls to help prevent one program from making changes to others, or to your environment)
  • We strip out referring URLs across domains to protect your privacy
  • Flash and Java are click-to-play – greatly reducing the risk of drive-by downloads
  • We block access to websites behind your firewall to prevent Intranet hacking

Default settings in Aviator are set to protect your security and your privacy.

We hope you enjoy using Aviator as much as we’ve enjoyed building it. If people like it, we will create a Windows version as well and we’ll add additional privacy and security features. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

20,000

20,000. That’s the number of websites we’ve assessed for vulnerabilities with WhiteHat Sentinel. Just saying that number alone really doesn’t do it justice, though. The milestone doesn’t capture the gravity and importance of the accomplishment, nor does it fully articulate everything that goes into that number, and what it took to get here. As I reflect on 20,000 websites, I think back to the very early days when so many people told us our model could never work, that we’d never see 1,000 sites, let alone 20x that number. (By the way, I remember their names distinctly ;).) In fairness, what they couldn’t fully appreciate then is what it really takes to scale Web security, which means they didn’t truly understand Web security.

When WhiteHat Security first started back in late 2001, consultants dominated the vulnerability assessment space. If a website was [legally] tested for vulnerabilities, it was done by an independent third party. A consultant would spend roughly a week per website, scanning, prodding around, modifying cookies, URLs and hidden form fields, and then finally deliver a stylized PDF report documenting their findings (aka “the annual assessment”). A fully billed consultant might be able to comprehensively test 40 individual websites per year, and the largest firms might have as many as 50 consultants. So collectively, an entire company could only get to about 2,000 websites annually. This is FAR shy of just the 1.8 million SSL-serving sites on the Web. This exposed an unacceptable limitation of their business model.

WhiteHat, at the time of this writing, handles 10x the workload of any consulting firm we’re aware of, and we’re nowhere near capacity. Not only that, WhiteHat Sentinel is assessing these 20,000 websites on a roughly weekly basis, not just once a year! That’s orders of magnitude more security value delivered than one-time assessments can possibly provide. Remember, the Web is a REALLY big place, like 700 million websites big in total. And that right there is what Web security is all about, scale. If any solution is unable to scale, it’s not a Web security solution. It’s a one-off. It might be a perfectly acceptable one-off, but a one-off nonetheless.

Achieving scale in Web security requires taking into account the holy trinity, a symbiotic combination of People, Process, and Technology – in that order. No [scalable] Web security solution I’m aware of can exist without all three. Not developer training, not threat modeling, not security in QA, not Web application firewalls, not centralized security controls, and certainly not vulnerability assessment. Nothing. No technological innovation can replace the need for the other two factors. The best we can expect of technology is to increase the efficiency of people and processes. We’ve understood this at WhiteHat Security since day one, and it’s one of the biggest reasons WhiteHat Security continues to grow and be successful where many others have gone by the wayside.

Over the years, while the vulnerabilities themselves have not really changed much, Web security culture definitely has. As the industry matures and grows, and awareness builds, we see the average level of Web security competency decrease! This is something to be expected. The industry is no longer dominated by a small circle of “elites.” Today, most in this field are beginners, with 0 – 3 years of work experience, and this is a very good sign.

That said, there is still a huge skill and talent debt everyone must be mindful of. So the question is: in the labor force ecosystem, who is in the best position to hire, train, and retain Web security talent – particularly the Breaker (vulnerability finders) variety – security vendors or enterprises? Since vulnerability assessment is not and should not be in most enterprises’ core competency, AND the market is highly competitive for talent, we believe the clear answer is the former. This is why we’ve invested so greatly in our Threat Research Center (TRC) – our very own professional Web hacker army.

We started building our TRC more than a decade ago, recruiting and training some of the best and brightest minds, many of whom have now joined the ranks of the Web security elite. We pride ourselves on offering our customers not only a very powerful and scalable solution, but also an “army of hackers” – more than 100 strong and growing – that is at the ready, 24×7, to hack them first. “Hack Yourself First” is a motto that we share proudly, so our customers can be made aware of the vulnerabilities that exist on their sites and can fix them before the bad guys exploit them.

That is why crossing the threshold of 20,000 websites under management is so impressive. We have the opportunity to assess all these websites in production – as they are constantly updating and changing – on a continuous basis. This arms our team of security researchers with the latest vulnerability data for testing and measuring and ultimately protecting our customers.

Other vendors could spend millions of dollars building the next great technology over the next 18 months, but they cannot build an army of hackers in 18 months; it just cannot be done. Our research and development department is constantly working on ways to improve our methods of finding vulnerabilities, whether with our scanner or by understanding business logic vulnerabilities. They’re also constantly updated with new 0-days and other vulnerabilities that we try to incorporate into our testing. These are skills that take time to cultivate and strengthen and we have taken years to do just that.

So, I have to wonder: what will 200,000 websites under management look like? It’s hard to know, really. We had no idea 10+ years ago what getting to 20,000 would look like, and we certainly never would have guessed that it would mean we would be processing more than 7TB of data per week across millions of dollars of infrastructure. That said, given the speed at which the Internet is growing and the speed at which we are growing with it, we could reach 200,000 sites in the next 18 months. And that is a very exciting possibility.

WhiteHat to Host Webinar—Beyond the Breach: Insights Into the Hidden Cost of a Web Security Breach

On Wednesday, September 12, at 11:00 AM PT, join WhiteHat Security’s Gillis Jones as he shares his research into the real financial implications of a website hack. On his journey through the business-side of Web security, Gillis uncovered bleak realities including discrepancies in accounting practices; a lack of consistent disclosure policies to address Web hacks; and gross under-reporting of financial losses once a breach is disclosed.

Gillis will also share cost-analysis formulas he developed while researching website breaches, as well as the specific costs of a breach in each of these areas:

  1. Personnel
  2. Technical, Infrastructure & Immediate Losses of Income
  3. Compliance
  4. Damaged, Destroyed or Stolen Hardware
  5. Customer Turnover & Loss of Customer Trust
  6. “Customer Management” After the Breach
  7. Fines & Lawsuits
  8. Public Relations “Damage Control”

To register for the webinar, please visit this link: https://reg.whitehatsec.com/forms/WEBINARbreach0912

Session Cookie Secure Flag Java

What is it and why should I care?
Session cookies (or, to Java folks, the cookie containing the JSESSIONID) are the cookies used to perform session management for Web applications. These cookies hold the reference to the session identifier for a given user, and the same identifier is maintained server-side along with any session-scoped data related to that session id. Because cookies are transmitted on every request, they are the most common mechanism used for session management in Web applications.

The secure flag is an additional flag that you can set on a cookie to instruct the browser to send this cookie ONLY when on encrypted HTTPS transmissions (i.e. NEVER send the cookie on unencrypted HTTP transmissions). This ensures that your session cookie is not visible to an attacker in, for instance, a man-in-the-middle (MITM) attack. While a secure flag is not the complete solution to secure session management, it is an important step in providing the security required.

What should I do about it?
The resolution here is quite simple. You must add the secure flag to your session cookie (and preferably to all cookies, because all requests to your site should be made over HTTPS, if possible).

Here’s an example of how the server’s Set-Cookie response header might look without the secure flag (note that the flag is set by the server in the Set-Cookie header; the browser never echoes it back in the Cookie request header):

Set-Cookie: JSESSIONID=AS348AF929FK219CKA9FK3B79870H;

And now, how the same session cookie looks with the secure flag:

Set-Cookie: JSESSIONID=AS348AF929FK219CKA9FK3B79870H; Secure;

Not much to it. Obviously, you can do this manually, but if you’re working in a Java Servlet 3.0 or newer environment, a simple configuration setting in the web.xml will take care of this for you. You should add the snippet below to your web.xml.

<session-config>
    <cookie-config>
        <secure>true</secure>
    </cookie-config>
</session-config>
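If you prefer to configure this in code rather than in web.xml, the Servlet 3.0 API exposes the same setting through SessionCookieConfig. A minimal sketch (the listener class name is our own invention; it requires the Servlet 3.0 API on the classpath):

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical listener name; equivalent to <secure>true</secure> in web.xml
@WebListener
public class SecureSessionCookieListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Mark the session cookie Secure so it is only sent over HTTPS
        sce.getServletContext().getSessionCookieConfig().setSecure(true);
        // HttpOnly is a worthwhile companion: hides the cookie from JavaScript
        sce.getServletContext().getSessionCookieConfig().setHttpOnly(true);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}
```

Either approach, web.xml or listener, has the same effect; pick whichever matches how the rest of your application is configured.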

As you can see, resolving this issue is quite simple. It should be on everyone’s //TODO list.

References
———–
http://blog.mozilla.com/webappsec/2011/03/31/enabling-browser-security-in-web-applications/

http://michael-coates.blogspot.com/2011/03/enabling-browser-security-in-web.html

https://www.owasp.org/index.php/SecureFlag

Session Fixation Prevention in Java

What is it and why should I care?
Session fixation, by most definitions, is a subclass of session hijacking. The most common basic flow is:

Step 1. Attacker gets a valid session ID from an application
Step 2. Attacker forces the victim to use that same session ID
Step 3. Attacker now knows the session ID that the victim is using and can gain access to the victim’s account

Step 2, which requires forcing the session ID on the victim, is the only real work the attacker needs to do. And even this action on the attacker’s part is often performed by simply sending the victim a link to a website with the session ID attached to the URL.

Obviously, one user being able to take over another user’s account is a serious issue, so…

What should I do about it?
Fortunately, resolving session fixation is usually fairly simple. The basic advice is:

Invalidate the user session once a successful login has occurred.

The usual basic flow to handle session fixation prevention looks like:

1. User enters correct credentials
2. System successfully authenticates user
3. Any existing session information that needs to be retained is moved to a temporary location
4. Session is invalidated (HttpSession#invalidate())
5. New session is created (new session ID)
6. Any temporary data is restored to new session
7. User goes to successful login landing page using new session ID
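Steps 3 through 7 above can be sketched in code. HttpSession isn’t available outside a servlet container, so this sketch models a session as a plain attribute Map and the container’s session registry as a Map of ID to session; in a real servlet you would call request.getSession(), session.invalidate(), and request.getSession(true) at the corresponding steps. All names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Container-independent sketch of post-login session migration.
// A "session" here is just its attribute map, keyed by session ID.
public class SessionMigrationSketch {

    static final Map<String, Map<String, Object>> SESSIONS = new HashMap<>();

    // Returns a new session ID that the old (possibly fixated) ID cannot reach.
    static String migrateSessionAfterLogin(String oldSessionId, String username) {
        // Step 3: copy any attributes worth keeping to a temporary location
        Map<String, Object> retained =
                new HashMap<>(SESSIONS.getOrDefault(oldSessionId, new HashMap<>()));

        // Step 4: invalidate the old session (HttpSession#invalidate())
        SESSIONS.remove(oldSessionId);

        // Step 5: create a new session with a fresh, unguessable ID
        String newSessionId = UUID.randomUUID().toString();

        // Step 6: restore the retained attributes into the new session
        retained.put("user", username);
        SESSIONS.put(newSessionId, retained);

        // Step 7: the login landing page is served under the new session ID
        return newSessionId;
    }
}
```

The key point is that nothing the attacker knew before authentication (the old session ID) remains valid afterward.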

A useful snippet of code is available from the ESAPI project that shows how to change the session identifier:

http://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/main/java/org/owasp/esapi/reference/DefaultHTTPUtilities.java (look at the changeSessionIdentifier method)

There are other activities that you also can perform to provide additional assurance against session fixation. A number are listed below:

1. Check for session fixation if a user tries to login using a session ID that has been specifically invalidated (requires maintaining this list in some type of LRU cache)

2. Check for session fixation if a user tries to use an existing session ID already in use from another IP address (requires maintaining this data in some type of map)

3. If you notice these types of obvious malicious behavior, consider using something like AppSensor to protect your app, and to be aware of the attack
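For item 1 above, a bounded LRU cache of recently invalidated session IDs is easy to build from java.util.LinkedHashMap; the class name and cache size below are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Remembers recently invalidated session IDs so that a login attempt
// presenting one of them can be flagged as possible session fixation.
public class InvalidatedSessionCache {

    private static final int MAX_ENTRIES = 10_000; // illustrative bound

    // accessOrder=true turns LinkedHashMap into an LRU structure;
    // removeEldestEntry evicts the least recently used ID past the bound.
    private final Map<String, Boolean> cache =
            new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    public synchronized void recordInvalidated(String sessionId) {
        cache.put(sessionId, Boolean.TRUE);
    }

    // True if this session ID was invalidated recently: treat with suspicion
    public synchronized boolean wasInvalidated(String sessionId) {
        return cache.containsKey(sessionId);
    }
}
```

Call recordInvalidated() whenever you invalidate a session at login, and check wasInvalidated() on incoming session IDs; a hit is a strong signal worth logging or feeding to something like AppSensor.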

As you can see, session fixation is a serious issue, but it has a pretty simple solution. Your best bet, if possible, is to include an appropriate solution in an “enterprise” framework (like ESAPI) so that it applies evenly to all your applications.

References
———–
https://www.owasp.org/index.php/Session_fixation

http://www.acros.si/papers/session_fixation.pdf

http://cwe.mitre.org/data/definitions/384.html

http://projects.webappsec.org/w/page/13246960/Session%20Fixation