Author Archives: Jeremiah Grossman

About Jeremiah Grossman

Jeremiah Grossman is the Founder and interim CEO of WhiteHat Security, where he is responsible for Web security R&D and industry outreach. Over the last decade, Mr. Grossman has written dozens of articles and white papers, and is a published author. His work has been featured in the Wall Street Journal, Forbes, the NY Times, and hundreds of other media outlets around the world. A well-known security expert and industry veteran, Mr. Grossman has been a guest speaker on six continents at hundreds of events, including TED, BlackHat Briefings, RSA, SANS, and others. He has been invited to guest lecture at top universities such as UC Berkeley, Stanford, Harvard, UW-Madison, and UCLA. Mr. Grossman is also a co-founder of the Web Application Security Consortium (WASC) and was previously named one of InfoWorld's Top 25 CTOs. He serves on the advisory board of two hot start-ups, Risk I/O and SD Elements, and is a Brazilian Jiu-Jitsu Black Belt. Before founding WhiteHat, Mr. Grossman was an information security officer at Yahoo!

Relative Security of Programming Languages: Putting Assumptions to the Test

“In theory there is no difference between theory and practice. In practice there is.” – Yogi Berra

I like this quote because I think it sums up the way we as an industry all too often approach application security. We have our “best practices,” our conventional wisdom about how things operate, our notions of what is “secure,” and standards that we think constitute true security, in theory. However, in practice — in reality — all too often we find that what we think is wrong. We found this to be true when examining the relative security of popular programming languages, which is the topic of the WhiteHat Security 2014 Website Statistics Report that we launched today. The data we collected from the field defies the conventional wisdom we carry and pass down about the security of .Net, Java, ASP, Perl, and others.

The data that we derived in this report puts our beliefs around application security to the test by measuring how various web programming languages and development frameworks actually perform in the field. To which classes of attack are they most prone, how often and for how long? How do they fare against popular alternatives? Is it really true that the most popular modern languages and frameworks yield similar results in production websites?

By examining these questions and approaching their answers not with assumptions, but with hard evidence, our goal is to elevate conversations around how to “build-in” security from the start of the development process by picking a language and framework that not only solves business requirements, but security requirements as well.

For example, whereas one might assume that newer programming languages such as .Net or Java would be less prone to vulnerabilities, what we found was that there was not a huge difference between old languages and newer frameworks in terms of the average number of vulnerabilities. And when it comes to remediating vulnerabilities, contrary to what one might expect, legacy frameworks tended to have a higher rate of remediation – in fact, ColdFusion bested the whole field with an average remediation rate of almost 75% despite having been in existence for more than 20 years.

Similarly, many companies assume that secure coding is challenging because they have a ‘little bit of everything’ when it comes to the underlying languages used in building their applications. In our research, however, we found that not to be completely accurate. In most cases, organizations have a significant investment in one or two languages and very minimal investment in any others.

Our recommendations based on our findings? Don’t be content with assumptions. Remember, all your adversary needs is one vulnerability that they can exploit. Security and development teams must continue to measure their programs on an ongoing basis. Determine how many vulnerabilities you have and then how fast you should fix them. Don’t assume that your software development lifecycle is working just because you are doing a lot of things; measure it, because anything that is measured tends to improve over time. This report can serve as a real-world baseline to measure your own program against.

To view the complete report, click here. I would also invite you to join the conversation on Twitter at #2014WebStats @whitehatsec.

Adding Open Source Framework Hardening to Your SDLC – Podcast

I talk with G.S. McNamara, Federal Information Security Senior Consultant, about fixing open source framework vulnerabilities, what to consider when pushing open source, how to implement a system around patches without impacting performance, and security considerations on framework selections.





Want to do a podcast with us? Signup to be part of our Unsung Hero program.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Top 10 Web Hacking Techniques 2013

Every year the security community produces a stunning number of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack. Now in its eighth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent work. Past Top 10s and the number of new attack techniques discovered in each year:
2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51) and 2012 (56).

Phase 1: Open community voting for the final 15 [Jan 23-Feb 3]
Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked in each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end all points from all ballots will be tabulated to ascertain the top 15 overall. Comment with your vote!
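
To make the tabulation concrete, here is a minimal sketch of how such a ranked-ballot tally could be computed; the technique names and ballots below are placeholders for illustration, not real submissions:

    from collections import Counter

    def tally(ballots, ballot_size=15):
        """Score ranked ballots: position #1 earns 15 points, #2 earns 14, down to 1 point."""
        scores = Counter()
        for ballot in ballots:
            for position, entry in enumerate(ballot[:ballot_size], start=1):
                scores[entry] += ballot_size + 1 - position
        return scores.most_common()

    # Hypothetical ballots, for illustration only.
    ballots = [
        ["Mutation XSS", "BREACH", "Lucky 13 Attack"],
        ["BREACH", "Mutation XSS", "Weaknesses in RC4"],
    ]
    for entry, points in tally(ballots):
        print(f"{points:3d}  {entry}")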

Phase 2: Panel of Security Experts Voting [Feb 4-Feb 11]
From the result of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as phase 1, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2013!

Complete 2013 List (in no particular order):

  1. Tor Hidden-Service Passive De-Cloaking
  2. Top 3 Proxy Issues That No One Ever Told You
  3. Gravatar Email Enumeration in JavaScript
  4. Pixel Perfect Timing Attacks with HTML5
  5. Million Browser Botnet (Video Briefing / Slideshare)
  6. Auto-Complete Hack by Hiding Filled in Input Fields with CSS
  7. Site Plagiarizes Blog Posts, Then Files DMCA Takedown on Originals
  8. The Case of the Unconventional CSRF Attack in Firefox
  9. Ruby on Rails Session Termination Design Flaw
  10. HTML5 Hard Disk Filler™ API
  11. Aaron Patterson – Serialized YAML Remote Code Execution
  12. Fireeye – Arbitrary reading and writing of the JVM process
  13. Timothy Morgan – What You Didn’t Know About XML External Entity Attacks
  14. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  15. James Bennett – Django DOS
  16. Phil Purviance – Don’t Use Linksys Routers
  17. Mario Heiderich – Mutation XSS
  18. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  19. Carlos Munoz – Bypassing Internet Explorer’s Anti-XSS Filter
  20. Zach Cutlip – Remote Code Execution in Netgear routers
  21. Cody Collier – Exposing Verizon Wireless SMS History
  22. Compromising an unreachable Solr Server
  23. Finding Weak Rails Security Tokens
  24. Ashar Javad – Attack against Facebook’s password reset process
  25. Father/Daughter Team Finds Valuable Facebook Bug
  26. Hacker scans the internet
  27. Eradicating DNS Rebinding with the Extended Same-Origin Policy
  28. Large Scale Detection of DOM based XSS
  29. Struts 2 OGNL Double Evaluation RCE
  30. Lucky 13 Attack
  31. Weaknesses in RC4

Leave a comment if you know of some techniques that we’ve missed, and we’ll add them in up until the submission deadline.

Final 15 (in no particular order):

  1. Million Browser Botnet (Video Briefing / Slideshare)
  2. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  3. Hacker scans the internet
  4. HTML5 Hard Disk Filler™ API
  5. Eradicating DNS Rebinding with the Extended Same-Origin Policy
  6. Aaron Patterson – Serialized YAML Remote Code Execution
  7. Mario Heiderich – Mutation XSS
  8. Timothy Morgan – What You Didn’t Know About XML External Entity Attacks
  9. Tor Hidden-Service Passive De-Cloaking
  10. Auto-Complete Hack by Hiding Filled in Input Fields with CSS
  11. Pixel Perfect Timing Attacks with HTML5
  12. Large Scale Detection of DOM based XSS
  13. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  14. Weaknesses in RC4
  15. Lucky 13 Attack

Prizes [to be announced]

  1. The winner of this year’s top 10 will receive a prize!
  2. After the open community voting process, two survey respondents will be chosen at random to receive a prize.

The Top 10

  1. Mario Heiderich – Mutation XSS
  2. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  3. Pixel Perfect Timing Attacks with HTML5
  4. Lucky 13 Attack
  5. Weaknesses in RC4
  6. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  7. Million Browser Botnet (Video Briefing / Slideshare)
  8. Large Scale Detection of DOM based XSS
  9. Tor Hidden-Service Passive De-Cloaking
  10. HTML5 Hard Disk Filler™ API

Honorable Mention

  1. Aaron Patterson – Serialized YAML Remote Code Execution

iCEO

As many of you know, I started WhiteHat Security more than 10 years ago with the mission to help secure the Web, company by company. It has been an incredible roller coaster ride over the past decade, both at WhiteHat Security, which has experienced phenomenal growth, and within the industry itself. By that I mean that we have come a long way: organizations and companies are now more aware of how critical their presence is on the web, and therefore how crucial it is that they ensure that presence is secure. I am proud to say that we manage more than 30,000 websites today, but with more than 900 million websites on the Internet, we clearly still have a long way to go. And with that, I am pleased to announce that I have accepted the WhiteHat Security Board of Directors’ invitation to step into the role of interim CEO of WhiteHat Security.

I am very proud of what this company has achieved since 2001: we have grown from a team of one to more than 350 passionate contributors in all areas of the business, including the world’s largest army of hackers; we have received some of the highest industry accolades and awards out there; WhiteHat Sentinel is now a leading web application security solution for nearly 600 of the world’s best-known brands, and we continue to experience exponential growth. All of this success is due in large part to the leadership and direction of our former CEO Stephanie Fohn, who took a leap of faith with us more than 8 years ago and who will remain one of the company’s biggest supporters. I would be remiss if I did not acknowledge the work and dedication that she has put into this company, and I wish her the best as she takes time to be with family.

The Internet is a continuously evolving space. New websites launch every day and new threats emerge constantly. As such, web security is complex even for some of the most seasoned security practitioners. My focus will remain on ensuring that our customers – both current and future – have the best product and technology experience and that they get the highest levels of support that such a fast-paced market dictates. We will continue to lead on the fronts of innovation and customer success, and we will do that with a team of some of the most talented application security technologists, business executives, practitioners and contributors that this industry has to offer.

I look forward to leading WhiteHat Security into this next chapter.

Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don’t you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We’ve long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what would be required once Aviator became publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen by visiting aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD or BSD-like licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won’t in the future, just that we’re discussing it internally first. Doing so is a big decision that shouldn’t be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we’ll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are definitely determining where to focus our resources and what should come next, but there is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both myself and Robert “RSnake” Hansen (Director, Product Management) for years. Both of us have discussed the risks of browser security around the world. A big part of the Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat, many people ask us what browser we use. Individually, our answer has been, “mine.” Now we can be more specific: that browser is Aviator, a browser we feel confident using not only for our own security and privacy, but one we can confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Popup blockers used to work wonders, but advertisers switched to sourcing in JavaScript and actually putting content on the page. They no longer have to physically create a new window because they can take over the entire page. Using Aviator, the user’s browser doesn’t even make the connection to an advertising network’s servers, so obnoxious or potentially dangerous ads simply don’t load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a Beta through the Mac App Store. Browsers attempt to take advantage of the fastest method of rendering they can, and the APIs involved are sometimes unsupported, undocumented “private APIs.” Apple does not support these APIs because they may change, and Apple doesn’t want to be held accountable when things break. As a result, while Apple allows developers to use these undocumented, high-speed private APIs, it doesn’t allow them to distribute applications that use private APIs through the store. We can only speculate that the reason is that users are likely to blame Apple, rather than the application, when things break. So after about a month of wrestling with it, we decided that for now we’d avoid the Mac App Store. In the shuffle we didn’t continue signing the binaries as we had been. It was simply an oversight.


Why is Aviator’s application directory world-writable?

During the development process all of our developers were on dedicated computers, not shared computers. So this was an oversight brought on by the fact that there was no need to hide data from one another and therefore chmod permissions were too lax as source files were being copied and edited. This wouldn’t have been an issue if the permissions had been changed back to their less permissive state, but it was missed. We will get it fixed in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly-permissive chmod settings that Aviator 1.1 shipped with. We agreed it was overly permissive and have fixed the issue to be more in line with existing browsers to protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.
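
For readers curious what “world-writable” means in practice, here is a minimal sketch in Python, purely for illustration and not WhiteHat’s actual fix, of how one might find and clear the world-writable bit across an application directory on a Unix-like system; the example path is hypothetical:

    import os
    import stat
    import sys

    def tighten_permissions(root):
        """Walk a directory tree and clear the world-writable bit (o+w) wherever it is set."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                st = os.lstat(path)
                if stat.S_ISLNK(st.st_mode):
                    continue  # leave symlinks alone
                perms = stat.S_IMODE(st.st_mode)
                if perms & stat.S_IWOTH:
                    os.chmod(path, perms & ~stat.S_IWOTH)
                    print(f"fixed: {path}")

    if __name__ == "__main__":
        # e.g. python tighten_permissions.py /Applications/Aviator.app
        tighten_permissions(sys.argv[1])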

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute, first you say, “if you aren’t paying you are the product,” then you offer a free browser?

Fair point. Like we’ve said, Aviator started off as an internal project simply to protect WhiteHat employees and is not an official company “product.” Those outside the company asked if they could use the same browser that we do. Aviator is our answer to that. Since we’re not in the advertising and tracking business, how could we say no? At some point in the future we’ll figure out a way to generate revenue from Aviator, but in the meantime, we’re mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though there has been some feedback from various employees of those vendors, shared informally over Twitter. Some of the feedback has been positive, some negative. In either case, we’re most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.

What’s the Difference between Aviator and Chromium / Google Chrome?

Context:

It’s a fundamental rule of Web security: a Web browser must be able to defend itself against a hostile website. Presently, in our opinion, the market-share-leading browsers cannot do this adequately. This is an everyday threat to personal security and privacy for the more than one billion people online, which includes us. We’ve long held and shared this point of view at WhiteHat Security. Like any sufficiently large company, we have many internal staff members who aren’t as tech savvy as WhiteHat’s Threat Research Center, so we had the same kind of security problem that the rest of the industry had: we had to rely on educating our users, because no browser on the market was suitable for our security needs. But education is a flawed approach – there are always new users and new security guidelines. So instead of engaging in a lengthy educational campaign, we began designing an internal browser that would be secure and privacy-protecting enough for our own users — by default. Over the years a great many people — friends, family members, and colleagues alike — have asked us what browser we recommend, even asked us what browser their children should use. Aviator became our answer.

Why Aviator:

The attacks a website can generate against a visiting browser are diverse and complex, but they can be broadly categorized into two types. The first type of attack is designed to escape the confines of the browser walls and infect the desktop with malware. Today’s top-tier browser defenses include software security in the browser core, an accompanying sandbox, URL blacklists, silent updates, and plug-in click-to-play. Well-known browser vendors have done a great job in this regard and should be commended. No one wins when users’ desktops become part of a botnet.

Unfortunately, the second type of browser attack has been left largely undefended. These attacks are pernicious and carry out their exploits within the browser walls. They typically don’t implant malware, but they are indeed hazardous to online security and privacy. I’ve previously written up a lengthy 8-part blog post series on the subject documenting the problems. For a variety of reasons, these issues have not been addressed by the leading browser vendors. Rather than continue asking for updates that would likely never come, we decided we could do it ourselves.

To create Aviator we leveraged open source Chromium, the same browser core used by Google Chrome. Then, because Chromium’s BSD license allows it, we made many very particular changes to the code and configuration to enhance security and privacy. We named our product Aviator. Many people are eager to learn what exactly the differences are, so let’s go over them.

Differences:

  1. Protected Mode (Incognito Mode) / Not Protected Mode:
    TL;DR All Web history, cache, cookies, auto-complete, and local storage data is deleted after restart.
    Most people are unaware that there are 12 or more locations that websites may store cookie and cookie-like data in a browser. Cookies are typically used to track your surfing habits from one website to the next, but they also expose your online activity to nosy people with access to your computer. Protected Mode purges these storage areas automatically with each browser restart. While other browsers have this feature or something similar, it is not enabled by default, which can make it a chore to use. Aviator launches directly into Protected Mode by default and clearly indicates the mode of the current window. The security / privacy side effect of Protected Mode also helps protect against browser auto-complete hacking, login detection, and deanonymization via clickjacking by reducing the amount of session states you have open – due to an intentional lack of persistence in the browser over different sessions.
  2. Connection Control: 
    TL;DR Rules for controlling the connections made by Aviator. By default, Aviator blocks Intranet IP-addresses (RFC1918).
    When you visit a website, it can instruct your browser to make potentially dangerous connections to internal IP addresses on your network — IP addresses that could not otherwise be connected to from the outside (NAT). Exploitation may lead to simple reconnaissance of internal networks, or it may permanently compromise your network by overwriting the firmware on the router. Without installing special third-party software, it’s impossible to block any bit of Web code from carrying out browser-based intranet hacking (a rough sketch of this kind of check appears after this list). If Aviator happens to be blocking something you want to be able to get to, Connection Control allows the user to create custom rules — or temporarily use another browser.
  3. Disconnect bundled (Disconnect.me): 
    TL;DR Blocks ads and 3rd-party trackers.

    Essentially every ad on every website your browser encounters is tracking you, storing bits of information about where you go and what you do. These ads, along with invisible 3rd-party trackers, also often carry malware designed to exploit your browser when you load a page, or to try to trick you into installing something should you choose to click on it. Since ads can be authored by anyone, including attackers, both ads and trackers may also harness your browser to hack other systems, hack your intranet, incriminate you, etc. Then of course the visuals in the ads themselves are often distasteful, offensive, and inappropriate, especially for children. To help protect against tracking, login detection and deanonymization, auto cross-site scripting, drive-by-downloads, and evil cross-site request forgery delivered through malicious ads, we bundled in the Disconnect extension, which is specifically designed to block ads and trackers. According to the Chrome web store, over 400,000 people are already using Disconnect to protect their privacy. Whether you use Aviator or not, we recommend that you use Disconnect too (Chrome / Firefox supported). We understand many publishers depend on advertising to fund the content. They also must understand that many who use ad blocking software aren’t necessarily anti-advertising, but more pro security and privacy. Ads are dangerous. Publishers should simply ask visitors to enable ads on the website to support the content they want to see, which Disconnect’s icon makes it easy to do with a couple of mouse-clicks. This puts the power and the choice into the hands of the user, which is where we believe it should be.
  4. Block 3rd-party Cookies: 
    TL;DR Default configuration update. 

    While it’s very nice that cookies, including 3rd-party cookies, are deleted when the browser is closed, it’s even better when 3rd-party cookies are not allowed in the first place. Blocking 3rd-party cookies helps protect against tracking, login detection, and deanonymization during the current browser session.
  5. DuckDuckGo replaces Google search: 
    TL;DR Privacy enhanced replacement for the default search engine. 

    It is well-known that Google search makes the company billions of dollars annually via user advertising and user tracking / profiling. DuckDuckGo promises exactly the opposite, “Search anonymously. Find instantly.” We felt that that was a much better default option. Of course if you prefer another search engine (including Google), you are free to change the setting.
  6. Limit Referer Leaks: 
    TL;DR Referers no longer leak cross-domain, but are only sent same-domain by default. 

    When clicking from one link to the next, browsers will tell the destination website where the click came from via the Referer header (intentionally misspelled). Doing so can leak sensitive information such as the search keywords used, internal IPs/hostnames, session tokens, etc., because that data often appears in the referring URL, and it offers little, if any, benefit to the user. Aviator therefore only sends these headers within the same domain.
  7. Plug-Ins Click-to-Play: 
    TL;DR Default configuration update: plug-ins are click-to-play by default.

    Plug-ins (e.g. Flash and Java) are a source of tracking, malware exploitation, and general annoyance. Plug-ins often keep their own storage for cookie-like data, which isn’t easy to delete, especially from within the browser. Plug-ins are also a huge attack vector for malware infection: your browser might be secure, but the plug-ins are not, and they must be updated constantly. Then of course there are all the annoying sounds and visuals made by plug-ins, which are difficult to identify and block once they load. So, we blocked them all by default. When you want to run a plug-in, say on YouTube, just click on the puzzle piece. If you want a website to always load its plug-ins, that’s a configuration change as well: “Always allow plug-ins on…”
  8. Limit data leakage to Google: 
    TL;DR Default configuration update.

    In Aviator we’ve disabled “Use a web service to help resolve navigation errors” and “Use a prediction service to help complete searches and URLs typed in the address bar” by default. We also removed all options to sync / login to Google, as well as the tracking traffic sent to Google upon Chromium installation. For many of the same reasons that we have defaulted to DuckDuckGo as a search engine, we have limited what the browser sends to Google in order to protect your privacy. If you choose to use Google services, that is your choice. If you choose not to, though, that can be difficult in some browsers. Again, our mantra is choice – and this gives you the choice.
  9. Do Not Track: 
    TL;DR Default configuration update.

    Enabled by default. While we prefer “Can-Not-Track” to “Do-Not-Track,” we figured it was safe enough to enable the “Do Not Track” signal by default in the event it gains traction.
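
To illustrate the idea behind Connection Control (item 2 above), here is a minimal sketch in Python, not Aviator’s actual implementation, of how a request filter might decide whether a hostname resolves to a private (RFC 1918), loopback, or link-local address; the hostnames below are placeholders:

    import ipaddress
    import socket

    def is_internal_destination(hostname):
        """Return True if any address the hostname resolves to is private, loopback, or link-local."""
        try:
            infos = socket.getaddrinfo(hostname, None)
        except socket.gaierror:
            return False  # unresolvable; let the normal error path handle it
        for *_rest, sockaddr in infos:
            ip = ipaddress.ip_address(sockaddr[0].split("%")[0])  # strip any IPv6 zone ID
            if ip.is_private or ip.is_loopback or ip.is_link_local:
                return True
        return False

    # A browser-side filter would block (or prompt on) requests to internal destinations.
    for host in ["192.168.1.1", "example.com"]:
        print(host, "blocked" if is_internal_destination(host) else "allowed")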

So far, we have appreciated the response to WhiteHat Aviator and we welcome additional questions and feedback. Our goal is to continue to make this a better and more secure browser option for consumers. Please continue to spread the word and share your thoughts with us. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

20,000

20,000. That’s the number of websites we’ve assessed for vulnerabilities with WhiteHat Sentinel. Just saying that number alone really doesn’t do it justice though. The milestone doesn’t capture the gravity and importance of the accomplishment, nor does it fully articulate everything that goes into that number and what it took to get here. As I reflect on 20,000 websites, I think back to the very early days when so many people told us our model could never work, that we’d never see 1,000 sites, let alone 20x that number. (By the way, I remember their names distinctly ;).) In fairness, what they couldn’t fully appreciate then was what it really takes to scale “Web security,” which means they truly didn’t understand “Web security.”

When WhiteHat Security first started back in late 2001, consultants dominated the vulnerability assessment space. If a website was [legally] tested for vulnerabilities, it was done by an independent third party. A consultant would spend roughly a week per website, scanning, prodding around, modifying cookies, URLs and hidden form fields, and then finally deliver a stylized PDF report documenting their findings (aka “the annual assessment”). A fully billed consultant might be able to comprehensively test 40 individual websites per year, and the largest firms would have maybe as many as 50 consultants. So collectively, even the largest firm could only get to about 2,000 websites annually. This is FAR shy of just the 1.8 million SSL-serving sites on the Web. This exposed an unacceptable limitation of their business model.

WhiteHat, at the time of this writing, handles 10x the workload of any consulting firm we’re aware of, and we’re nowhere near capacity. Not only that, WhiteHat Sentinel is assessing these 20,000 websites on a roughly weekly basis, not just once a year! That’s orders of magnitude more security value delivered than what one-time assessments can possibly provide. Remember, the Web is a REALLY big place, like 700 million websites big in total. And that right there is what Web security is all about: scale. If any solution is unable to scale, it’s not a Web security solution. It’s a one-off. It might be a perfectly acceptable one-off, but a one-off nonetheless.

Achieving scalability in Web security must take into account the holy trinity, a symbiotic combination of People, Process, and Technology – in that order. No [scalable] Web security solution I’m aware of can exist without all three. Not developer training, not threat modeling, not security in QA, not Web application firewalls, not centralized security controls, and certainly not vulnerability assessment. Nothing. No technological innovation can replace the need for the other two factors. The best we can expect of technology is to increase the efficiency of people and processes. We’ve understood this at WhiteHat Security since day one, and it’s one of the biggest reasons WhiteHat Security continues to grow and be successful where many others have gone by the wayside.

Over the years, while the vulnerabilities themselves have not really changed much, Web security culture definitely has. As the industry matures and grows, and awareness builds, we see the average level of Web security competency decrease! This is something to be expected. The industry is no longer dominated by a small circle of “elites.” Today, most in this field are beginners, with 0 – 3 years of work experience, and this is a very good sign.

That said, there is still a huge skill and talent debt everyone must be mindful of. So the question is: in the labor force ecosystem, who is in the best position to hire, train, and retain Web security talent – particularly the Breaker (vulnerability finders) variety – security vendors or enterprises? Since vulnerability assessment is not and should not be in most enterprises’ core competency, AND the market is highly competitive for talent, we believe the clear answer is the former. This is why we’ve invested so greatly in our Threat Research Center (TRC) – our very own professional Web hacker army.

We started building our TRC more than a decade ago, recruiting and training some of the best and brightest minds, many of whom have now joined the ranks of the Web security elite. We pride ourselves on offering our customers not only a very powerful and scalable solution, but also an “army of hackers” – more than 100 strong and growing – that is at the ready, 24×7, to hack them first. “Hack Yourself First” is a motto that we share proudly, so our customers can be made aware of the vulnerabilities that exist on their sites and can fix them before the bad guys exploit them.

That is why crossing the threshold of 20,000 websites under management is so impressive. We have the opportunity to assess all these websites in production – as they are constantly updating and changing – on a continuous basis. This arms our team of security researchers with the latest vulnerability data for testing and measuring and ultimately protecting our customers.

Other vendors could spend millions of dollars building the next great technology over the next 18 months, but they cannot build an army of hackers in 18 months; it just cannot be done. Our research and development department is constantly working on ways to improve our methods of finding vulnerabilities, whether with our scanner or by understanding business logic vulnerabilities. They’re also constantly updated with new 0-days and other vulnerabilities that we try to incorporate into our testing. These are skills that take time to cultivate and strengthen and we have taken years to do just that.

So, I have to wonder: what will 200,000 websites under management look like? It’s hard to know, really. We had no idea 10+ years ago what getting to 20,000 would look like, and we certainly never would have guessed that it would mean we would be processing more than 7TB of data per week over millions of dollars of infrastructure. That said, given the speed at which the Internet is growing and the speed at which we are growing with it, we could reach 200,000 sites in the next 18 months. And that is a very exciting possibility.

Upcoming SANS Webcast: Convincing Management to Fund Application Security

Many security departments struggle tirelessly to obtain adequate budget for security, especially application security. It’s also no secret that security spending priorities are often grossly misaligned with respect to how businesses invest in IT. This is something I’ve discussed on my blog many times in the past.

The sheer lack of resources is a key reason why Web applications have been wide open to exploitation for as long as they’ve existed, and why companies are constantly getting hacked. While many in the industry understand the problem, they struggle justifying the level of funding necessary to protect the software their organizations build or license.

In December 2012, the SANS Institute conducted a survey of 700 organizations on app security programs and practices. That survey revealed that the primary barriers to implementing secure app management programs were “lack of management funding/buy-in,” followed by lack of resources and skills. Those two are pretty closely aligned, don’t you think?

A 2013 Microsoft survey obtained similar results. In it, more than 2,200 IT professionals and 490 developers worldwide were asked about secure development life cycle processes. The top barriers they cited were lack of management approval, lack of training and support, and cost. It’s time we start developing tools and strategies to begin solving this problem.

In a recent CSO article, SANS’ John Pescatore made some excellent points about how security people need to start approaching their relationships with management. Instead of sounding the alarm, they need to focus more on providing solutions. Let’s say that again: bring management BUSINESS SOLUTIONS and not just the problems. John correctly states that a CEO thinks in terms of opportunity costs, so security people need to use a similar mindset when strategizing a budget conversation with a CEO. Doing so does wonders.

Obviously, that’s not nearly enough of an answer to get a productive conversation started. We security people need more examples, business models, cost models, ROI models, real-world examples, and so on. This will be the topic of a webcast I’m co-presenting with John Pescatore, hosted by SANS, on September 19 at 1pm EDT (10am PDT). If you’d like to come and hear us go over the material, we’d love to have you there! Or, skip the webcast and just read this whitepaper on the topic.

Government Surveillance: Why it doesn’t matter if you delete your email

Every three months I have a task alert reminding me to “delete cloud data.” This kicks off an hour or two spent clicking checkboxes and trashcan icons, getting rid of as much data as I can that is stored on someone else’s servers (Gmail, other Webmail, Twitter Direct Messages, LinkedIn messages, Facebook messages, and so on). The reason I do this is simple: to protect against the eventuality that someone will hijack one of my online accounts.

Even as a security pro, I’m not so arrogant as to think that I can’t be hacked, and my online accounts are especially vulnerable since I am not in total control of them. I figure that getting hacked is only a matter of time, either through a social engineering trick or exploitation of a website vulnerability. We’ve seen a number of celebrities and security pros alike suffer this already. For me, when the day comes, I want to limit the data loss exposure to no more than three months. It’s not that any of my data kept in “the cloud” is super sensitive, but I still don’t want it dumped on Pastebin.

While this ritual has served me well, there’s one glaring problem: the National Security Agency (NSA). Well, specifically PRISM and any other surveillance programs that they and other governments have. According to published reports, government agencies have what can only be described as wholesale access to end-user data located at Google, Facebook, and many other companies storing email and other interpersonal communication. Through various “transparency” reports released, we’re talking tens of thousands of requests without much, if any, governmental oversight or people having the power to legally object. Protecting my data against this sort of compromise is very different and renders my aforementioned data deletion useless. I’ll explain why.

Let’s say you use Gmail, or any Webmail provider for that matter. Using a browser, you craft an email, send it to another Gmail user, then subsequently delete that message from your Sent folder. Let’s say that recipient then responds to your email. You read it, and then promptly delete it. From your perspective, in your account the data is gone and anyone directly hijacking your account can’t see that anything was ever sent or received. This is exactly the outcome we were looking for. BUT, this is not necessarily true from the service provider’s perspective, or for government surveillance.

You see, the Gmail user you’ve been emailing still has a perfect transactional record of all of your sent/received email, which is sitting somewhere in their account, probably in their Inbox or Sent folders. Now, scale this out to all the email you send to any Webmail provider, and you start to get the idea. You might have deleted your email in your account, but no one else has deleted your email in their account. When Google (et al) receives a governmental order to hand over all email to/from “@gmail.com”, they can do the search system-wide. To be fair, I have no idea if they actually perform the search this way, but the fact is that they technically can.

At this point it’s also important to appreciate that when you delete email on a Webmail service, there is zero guarantee that your email has in fact been deleted. At least, nothing like the assurance you get with your own system(s).

When explaining this situation, a common reaction is suggesting Google should simply encrypt your email/data, so that not even they can read it. Before getting to that, let’s understand why Google, Yahoo, Facebook, and hundreds of companies offer you free online services. They do so because in exchange they get access to your data – however sensitive – and personal interests, no matter how private. They sell aggregated access to this data to advertisers who wish to promote their brand or influence your buying habits. That’s essentially how they make their tens of billions of dollars annually.

This relationship is not necessarily a bad deal and, so far, it isn’t even controversial. What is controversial is that Google, or any other “free” Webmail provider that needs to read your data to make money, obviously cannot encrypt that data from itself to protect against government surveillance. It would be contrary to their business model. On this point even Vint Cerf, one of the fathers of the Internet and Google’s Chief Internet Evangelist, agrees. In the wake of the PRISM headlines, a main concern of theirs is that users will freak out and withdraw their data, decrease their use of the service out of fear, and the companies will then lose money. I think their concerns are well-founded.

That’s why, in response, companies like Google, Yahoo, Twitter, Facebook and others are eager to reassure their users and consumers that they are going to resist surveillance to the extent they legally can, and continue to be “transparent” with them by disclosing the number of times the government has made data requests. They’ll even go so far as to challenge a government gag order to make sure they can disclose to users as much detail as possible. The truth of the matter is, “transparency” is probably the best these companies can do, but it’s just not good enough – nor will it ever be unless these companies change their business models, which they can’t, so they won’t, so we’re stuck.

What’s the answer then? On any individual user level, my quick advice has always been: if it’s something that you can’t afford to lose, or something that is truly personal to you, don’t put it on the Internet. In the same vein, if you’re going to be browsing NSFW sites while at work, then do so using a search engine that does not track your data. DuckDuckGo and a few other sites like it can be a good option for this. And then, of course, you could use PGP or other tools to encrypt your email content before pasting it into Gmail. Unfortunately, personal email encryption software hasn’t proven itself easy or attractive enough for mainstream use. And admittedly, PGP itself does not completely safeguard your email from the government or other prying eyes: the email envelope itself, which includes valuable info such as sender, recipient, subject, time sent, mail servers, etc., is still visible.
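
For those who do want to go the PGP route, here is a minimal sketch using the third-party python-gnupg wrapper (my choice purely for illustration; any OpenPGP tool works), with a placeholder recipient whose public key is presumed to already be in the local keyring. Note again that only the message body is protected, not the envelope:

    import gnupg  # third-party python-gnupg wrapper around a locally installed GnuPG

    gpg = gnupg.GPG()  # uses the default local keyring

    message = "Meeting moved to 3pm. Same place."
    # Encrypt to the recipient's public key; the address below is a placeholder
    # and its key must already be imported into the keyring.
    encrypted = gpg.encrypt(message, recipients=["recipient@example.com"])

    if encrypted.ok:
        # ASCII-armored ciphertext, safe to paste into a webmail compose window.
        print(str(encrypted))
    else:
        print("encryption failed:", encrypted.status)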

For companies like Google, Yahoo, and Facebook, the only real solution they can offer to their users is to redesign their business models so that they are not reliant on the ability to store and read user data to succeed. Yes, far easier said than done, but let’s just consider that for a moment.

If, for example, Facebook charged its more than one billion users just $5.00 USD for an entire year’s worth of the service, it would more than make up for the 2012 revenue it received from advertisers. Without complete reliance upon advertisers for revenue, Facebook would no longer have any real reason to keep your data, or any reason not to encrypt it. A similar model could be applied to Webmail as well. In fact, Google already offers paid-for corporate email hosting via Google Apps. So why isn’t that email encrypted from Google itself? (Or maybe it is?)

For Google, Yahoo and Microsoft, advertising based on search terms just does not need to be targeted at the individual – this eliminates the need to retain search analytics information. It doesn’t eliminate advertising completely; it just means that advertising tied to individual search queries is no longer tied to your personal information – which means they don’t have to store that data, or, if they do, they don’t have to keep it in a form they can read.

Perhaps in all of this I’m just being naive, even a little bit idealistic. The more likely reality is that there are simply too many business conflicts of interest for these companies right now, under their current models, to charge for their services directly and encrypt the data, so the only thing they can do is offer “transparency.” For me, that’s just not good enough.

HackerKombat II: Capturing Flags, Pursuing the Trophy

Years ago, a small group of 5-6 of us at WhiteHat held impromptu hacking contests – usually over lunch or during breaks in the day – in which we would race each other to be the quickest to discover vulnerabilities in real live customer websites (banks, retailers, social networks, whatever). No website survived longer than maybe 20 minutes. These contests were a nice break in the day and they allowed us to share (or perhaps show off) our ability to break into things quickly. The activity usually provided comic relief and moments of humility, :) and most importantly it opened opportunities to learn from each other.

We have scores of extremely talented and creative minds working at WhiteHat and these activities were some of the earliest testaments to that. Our corporate culture is eager to break what was previously thought of as “secure,” often just for the fun and challenge. Today, WhiteHat has more than 100 application security specialists in our Threat Research Center (TRC) alone – essentially our own Web hacker army. With so many people now, our contests were forced to evolve, to grow and to mature. We now organize a formal internal activity called HackerKombat.

HackerKombat is a WhiteHat employee only event, a game we hold every couple of months, a late-night battle between some of the best “breakers” in the business. HackerKombat is our version of a “Hackathon,” which companies like Facebook and others host as a means to challenge their engineers to build cool new apps, new features, etc.

HackerKombat challenges our team to break things — to break websites and web applications, to test our hacker skills in a pizza and alcohol infused environment. The goals are to have some fun in a way that only hackers could appreciate, but also to encourage teamwork and thinking outside the box, and to expose areas of knowledge where we are weak.

Unlike years past, the websites and applications we target are staged – no more hacking live customer sites! We have learned that while the average business-driving website might withstand the malicious traffic of a few hackers targeting it, a dozen or more could easily cause downtime. We certainly can’t have that and you’ll see how easy that can be later in this post.

The HackerKombat challenges are designed by Kyle Osborn (@theKos), a WhiteHat TRC alumnus, accomplished security researcher, and frequent conference speaker, who is currently employed by Tesla Motors. Challenges are also developed by current TRC members, but doing so disqualifies them from actually playing — gotta keep things as fair as we can. There isn’t much in the way of rules for HackerKombat. I mean, are hackers expected to follow them anyway? ;)

Today, finding a single vulnerability is nowhere near enough to claim victory. HackerKombat is a series of challenges that are very difficult and require a wide variety of technical ability. Defeating every challenge requires a great team, and great teamwork. No way can a single person, even the best and brightest among us, get through every challenge and expect to have any chance of winning. Past events have shown there is strength in numbers – so we also had to cap the team size at 5-6 to keep things even.

A few weeks ago we hosted the second formal event – HackerKombat II. Teams were decided by draft, for a total of six teams with five combatants each, spanning our Santa Clara headquarters as well as our TRC location in Houston. In the hours leading up to HK II the trash talking was constant and searing. There was even an office pool posted, and people were placing bets on the winning team! The biggest prize of all: our custom trophy.

[Photo: the HackerKombat trophy]

The exact moment the game began the trash-talking ceased, poker faces were set – chatter became eerily quiet. If you wanted to win, and everyone did, every second and key press mattered. If someone was active on Jabber (chat client), you knew they were stuck. ;)

Each team’s approach to the 10 challenges was probably different. For my team – “Zerg” – we assessed each challenge by triaging it first: determining what skill sets it would take and assigning those tasks to the right team members to tackle. The first 4 challenges or so were completed fairly easily within the first hour. For the next 2-3 challenges we had to pair up to defeat them. Writing some code was necessary. Another hour gone. Then things got hard, really hard, and every team’s progress slowed way down.

Some of the challenges posed interesting hurdles that the designers did not anticipate. For instance, one challenge required teams to run DirBuster, which brute-forces web requests looking for a hidden web directory. The problem, however, is that a single Apache web server is not used to handling a dozen people all doing the same thing and sending thousands of requests per second. The challenge server died. Remember how I mentioned downtime? Apparently, speed in capturing that particular flag was the winning skill because no other team could get in to tackle it! Argh!
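
For context, the technique itself is simple; the sketch below in Python (using the third-party requests library, with a hypothetical target and a toy wordlist) shows the basic idea, along with the throttling that would have kept that poor challenge server alive:

    import time
    import requests  # third-party HTTP library

    TARGET = "http://challenge.example.com"           # hypothetical challenge host
    WORDLIST = ["admin", "backup", "flag", "secret"]  # DirBuster ships far larger lists

    def brute_force_dirs(base_url, words, delay=0.5):
        """Probe candidate directories, pausing between requests so the server isn't flooded."""
        found = []
        for word in words:
            url = f"{base_url}/{word}/"
            resp = requests.get(url, allow_redirects=False, timeout=5)
            if resp.status_code != 404:
                found.append((url, resp.status_code))
            time.sleep(delay)  # a dozen players skipping this pause is roughly how the server died
        return found

    for url, status in brute_force_dirs(TARGET, WORDLIST):
        print(status, url)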

For the most difficult challenges, 9 and 10, Zerg had to gel together as a team to figure out the best approach and make incremental gains. I’m clearly very weak in my steganography skills, which was terribly frustrating at a time when we were so close to victory but couldn’t seal the deal. An hour of study beforehand would have been enough.

In the end, the winning team –  “Terrans” from Santa Clara – prevailed by completing all 10 challenges and capturing all 12 flags in a time of 4h and 46min, barely edging out the team in Houston – “PurpleStuff” – which came in second at 4h and 49min. Yes, when it was all said and done, 3 minutes separated the leaders. Imagine that!

In another moment of humility, Robert Hansen (@RSnake), another “great” in the industry, can at least claim he beat me and came in second. :) I’m not exactly certain even now where my team placed, probably around 4th, as every team managed to capture at least 10 flags before the Terrans claimed ultimate victory.  I congratulate Rob, Nick, Dustin, Jon Paul and Ron for their win.

All in all, HK II was fun for all involved and everyone learned a great deal. We learned new techniques that the bad guys can use in the wild, and we learned where each of us individually needs to brush up on our studies. HK II’s success makes a founder very proud. I’m sure there are few, if any, companies that can pull off such an event.

 I look forward to HK III. I want that trophy!

[Check out photos from JerCon and HK II here.]