WhiteHat Security Observations and Advice about the Heartbleed OpenSSL Exploit

The Heartbleed SSL vulnerability is one of the most significant, and most heavily publicized, vulnerabilities to affect the Internet in recent years. According to Netcraft, 17.5% of SSL-enabled sites on the Internet were vulnerable to the Heartbleed SSL attack just prior to its disclosure.

The vulnerable versions of OpenSSL were first released to the public 2 years ago. The implications of 2 years of exposure to this vulnerability are significant, and we will explore that more at the end of this article. First – immediate details:

This attack does not require a MitM (Man in the Middle) position or the other complex setups that SSL exploits usually require. It can be executed directly and anonymously against any webserver/device running a vulnerable version of OpenSSL, and yield a wealth of sensitive information.

WhiteHat Security’s Threat Research Center (TRC) began testing customers for vulnerability to this attack immediately, using a custom SSL-testing tool from our TRC R&D labs. Our initial conclusion was that the frequency with which sites were vulnerable to this attack was low among production applications monitored by the WhiteHat Sentinel security service. In our first test sample, covering 18,000 applications, we found a vulnerability rate of about 2% – much lower than Netcraft’s 17.5% rate for all SSL-enabled websites. This may be the result of a biased sample, since we only evaluated applications under Sentinel service; application owners who use Sentinel may already be more security-conscious than most.

While the frequency of vulnerability to the Heartbleed SSL attack is low, those sites that are vulnerable are vulnerable in the worst way: everything in memory on the webserver/device running a vulnerable version of OpenSSL is exposed. This includes:

  • UserIDs
  • Passwords
  • PII (Personally Identifiable Information)
  • SSL certificate private keys (if compromised, SSL traffic using this certificate will never be private again)
  • Any private crypto keys
  • SSL-based Chat
  • SSL-based VPN
  • SSL-based anything
  • Connection Strings (Database userIDs/Passwords) to back-end databases, mainframes, LDAP, partner systems, etc. loaded in memory
  • Additional vulnerabilities in source code loaded into memory, with precise locations (e.g., blind SQL injection)
  • Basically any code or data in memory on the server running a vulnerable version of OpenSSL is rendered in clear-text to all attackers

The most important thing you need to know regarding the above: this attack leaves no traces. No logs. Nothing suspect. It is a nice, friendly, read-only anonymous dump of memory. This is the digital equivalent of accidentally leaving a copy of your corporate diary, with all sorts of private secrets, on the seat of the mass-transit bus for everyone to read – including strangers you will never know. Your only recourse for secrets like passwords now is the delete key.

The list of vulnerable applications published to date includes mainstream web and mobile applications, security devices with web interfaces, and critical business applications handling sensitive and federally-regulated data.

This is a seriously sub-optimal situation.

Timeline of the Heartbleed SSL attack and WhiteHat Security’s response:

April 7th: Heartbleed 0day attack published. Websites vulnerable to the attack included major websites/email services that the majority of the Internet uses daily, and many mobile applications that use OpenSSL.

April 8th: WhiteHat TRC begins testing customers’ sites for vulnerability to Heartbleed SSL exploitation. In parallel, WhiteHat R&D begins QAing new automated tests to enable Sentinel services to identify this vulnerability at scale.

April 9th: WhiteHat TRC identifies roughly 350 websites vulnerable to the Heartbleed SSL attack, out of an initial sample of 18,000 production applications. We find an initial average exploitability rate of 1.9%, though this percentage drops rapidly over the next 48 hours.

April 10th: WhiteHat R&D releases new Sentinel Heartbleed SSL vulnerability tests, enabling Sentinel to automatically identify if any applications under Sentinel service are vulnerable to Heartbleed SSL attacks with every scan. This brings test coverage to over 35,000 applications. Average vulnerability rate drops to below 1% by EOD April 10th.

Analysis: the more applications we scan using the new Heartbleed SSL attack tests, the fewer sites (by percent) we find vulnerable to this attack. We suspect this is because most customers have moved quickly to patch, due to the extreme severity of the vulnerability and the intense media coverage of the issue.

Speculation: we suspect that this issue will quickly disappear for most important SSL-enabled applications on the Internet – especially applications under some type of active DAST or SAST scanning service. It will likely linger on with small sites hosted by providers that do not offer (or pay attention to) any form of security patching service.

We also expect this issue to persist with internal (non-Internet-facing) applications and devices that use SSL, but which are commonly not tested or monitored by tools or services capable of detecting this vulnerability, and that are less frequently upgraded.

While the attack surface of internal network applications & devices may appear to be much smaller than that of Internet-facing applications, simple one-click exploits are already available on the Internet, usable by anyone on your network with access to a web browser. (Link to exploit code: http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html)

This means that any internal user on your network who downloads one of these exploits is capable of extracting everything in memory from any device or application on your internal network that is vulnerable. This includes:

  • Internal routers and switches
  • Firewalls and IDS systems
  • Human Resources applications
  • Finance and payroll applications
  • Pretty much any application or device running a vulnerable version of OpenSSL
  • Agents exploiting internal devices will see all network traffic or application data in memory on the affected device/application

Because most of these internal applications and devices lack the type of logging and alerting that would notify Information Security teams of active abuse of this vulnerability, our concern is that in the coming months these internal devices & applications may provide rich grounds for exploitation that may never be discovered.

Conclusion & Recommendations:
The scope, impact, and ease of exploitation of this vulnerability make it one of the worst in Internet history. However, patches are readily available for most systems, and the operational risk of applying them seems minimal. It appears most of our customers have already patched the majority of their Internet-facing systems against this exploit.

However, this vulnerability existed for up to 2 years in OpenSSL implementations, and due to the difficulty of logging/tracking these attacks we will likely never know whether anyone was actively exploiting it. If your organization is concerned about this, we recommend that, in addition to patching this vulnerability, the SSL certificates of suspect systems be revoked and re-issued. We also recommend that all end-user passwords and system passwords on suspect systems be changed.

Is this vulnerability a “boy that cried wolf” hype situation? Bruce Schneier has another interesting perspective on this: https://www.schneier.com/blog/archives/2014/04/heartbleed.html.

WhiteHat will continue to share information about this vulnerability as it becomes available.

General reference material:
https://blog.whitehatsec.com/heartbleed-openssl-vulnerability/
https://www.schneier.com/blog/archives/2014/04/heartbleed.html

Vulnerability details:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160
https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Exploit details:
http://samiux.blogspot.com/2014/04/exploit-dev-heartbleed-cve-2014-0160.html

Summaries:
http://business.kaspersky.com/the-heart-is-bleeding-out-a-new-critical-bug-found-in-openssl/
http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted-websites-vulnerable-to-heartbleed-bug.html

Heartbleed OpenSSL Vulnerability

Monday afternoon a flaw in the way OpenSSL handles the TLS heartbeat extension was revealed and nicknamed “The Heartbleed Bug.” According to the OpenSSL Security Advisory, ‘a missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.’ The flaw creates an opening in SSL/TLS that an attacker could use to obtain private keys, usernames/passwords, and content.
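
For the curious, here is an illustrative PHP sketch (deliberately incomplete, and not a working exploit) of the shape of the malicious heartbeat message described in RFC 6520 and the advisory: the sender claims a large payload length while sending no payload at all.

<?php
// Illustrative sketch only -- not a working exploit. Per RFC 6520, a
// heartbeat message is: type (1 byte), payload_length (2 bytes), then the
// payload and padding. Heartbleed boils down to claiming a payload that
// was never sent.
$heartbeat = pack('C', 0x01)    // HeartbeatMessageType: heartbeat_request
           . pack('n', 0xFFFF)  // claimed payload_length: 65,535 bytes (the "64k")
           . '';                // actual payload: zero bytes -- the lie
// Sent inside a TLS record of content type 24 (heartbeat) after the
// handshake (framing omitted here), an unpatched server echoes back up to
// 64k of whatever happens to be adjacent in its memory.
?>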

OpenSSL versions 1.0.1 through 1.0.1f, as well as 1.0.2-beta1, are affected. The recommended fix is to upgrade to 1.0.1g and to reissue certificates for any sites that were using compromised OpenSSL versions.
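
As a quick first check of your own stack, you can ask the runtime which OpenSSL it is linked against. A minimal sketch in PHP, assuming the openssl extension is loaded (the version-matching pattern here is our own illustration, not from the advisory):

<?php
// Minimal sketch: flag builds linked against the vulnerable range
// (1.0.1 through 1.0.1f, or 1.0.2-beta1). Requires the openssl extension.
echo OPENSSL_VERSION_TEXT . "\n"; // e.g. "OpenSSL 1.0.1e 11 Feb 2013"
if (preg_match('/OpenSSL 1\.0\.1[a-f]?( |$)/', OPENSSL_VERSION_TEXT)
    || strpos(OPENSSL_VERSION_TEXT, '1.0.2-beta1') !== false) {
    echo "Potentially vulnerable to CVE-2014-0160 -- upgrade to 1.0.1g\n";
}
// Caveat: some distributions backport the fix without changing the version
// string, so treat a match as a prompt to verify, not a confirmed finding.
?>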

WhiteHat has added testing to identify websites currently running affected versions. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. The tests are currently being run across all of our clients’ applications, and we expect full coverage of all applications under service within the next two days. WhiteHat also recommends testing all assets, including non-web application servers and sites not currently under service with WhiteHat. Several tools have been made available to test for open issues: an online testing tool at http://filippo.io/Heartbleed/, another tool on GitHub at https://github.com/titanous/heartbleeder, and a new nmap script at http://seclists.org/nmap-dev/2014/q2/36

If you have any questions regarding the Heartbleed Bug, please email support@whitehatsec.com and a representative will be happy to assist. The OpenSSL Security Advisory is available at https://www.openssl.org/news/secadv_20140407.txt, and the Heartbeat extension itself is documented in RFC 6520: https://tools.ietf.org/html/rfc6520

Aviator Status – 100k Downloads and Growing!

I realize it’s only been a handful of days since we launched the Windows version of Aviator, but it’s been an exciting ride. If you’ve never had to support a piece of software, it feels a bit like riding an unending roller coaster: you’re no longer in control once you put your software out there. People will use it however they use it, and as the developer you simply have to adapt and keep iterating to make the user experience better and better. You can never get off the ride, which is a wonderful feeling – if you happen to like roller coasters! Okay, enough with that analogy.

When we released Aviator for Mac in October, we felt we were onto something when people started – almost immediately – emailing us asking for features. We were sure we were on the right track when the media started writing articles. And when the number of downloads climbed from the thousands to tens of thousands to close to 45,000 Mac OSX downloads in just five months, we thought we were getting pretty incredible traction. But none of that prepared us for the response we received in just the handful of days since we launched Aviator for Windows. In just 5 days since the Windows launch, we have already reached a total of 100,000 Aviator users – and that is without spending a single dime on advertising!

We were also pleasantly surprised that a huge chunk of our users came from other regions – as much as 30% of our new Windows user base was from Asia. This means that Aviator is already making a difference in every corner of the world. We’re extremely excited by this progress, and are very encouraged to continue to iterate and deliver new features. I think this really shows how visceral people’s reaction to security and privacy is. It’s no wonder – we’ve never given this kind of control to users before. Either that or our users got wind of how much faster surfing without ads and third-party tracking can be. :) Ever tried to surf the Internet on in-flight wireless? With Aviator you will find that websites are actually usable — give it a try!

We may never know why so many people chose Aviator, but I do hope more people share their user stories with us. We want to know our successes as well as the challenges that remain before us as we continue on this unending roller coaster ride. We really do appreciate all of your feedback and we thank you for helping to make Aviator such a huge success, right out of the gate. We’re just getting started!

Download WhiteHat Aviator for Windows or Mac here: http://www.whitehatsec.com/aviator

Re: Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Mary Ann Davidson wasn’t shy about making her feelings known on scanning third-party software in a lively blog post entitled “Mandated Third Party Static Analysis: Bad Public Policy, Bad Security.” If you haven’t read it and you are in security, I do recommend giving it a read. Opinionated and outspoken, Mary Ann lays out her case against scanning third-party software. Although I don’t totally agree with each individual point, I do agree with the overall conclusion.

Mary Ann delineates many individual reasons organizations should not scan third-party COTS (Commercial Off The Shelf) software: it is not standard practice; vendors already scan their own code; there is little to no ROI; it can harm the product’s overall security; it increases risk to other clients; it creates uneven access to security information; and it poses risks to IP protection. I think the case can actually be greatly simplified: scanning COTS software is simply a waste of time, because that is not where most organizations are going to find and reduce risk.

Take web applications, which sit at the top of every CISO’s usual list of suspects for risk. Should every organization on the web perform a complete security review of every single layer in that technology stack? Or how about mobile? Should an organization perform a complete review of iOS and Android before writing a mobile app or allowing employees to use mobile phones? I’m sure the consulting industry would love this, but it is simply not feasible for organizations of any size.

So what are we to do? In my opinion, a security team should strive to measure and mitigate the greatest amount of risk to an organization within its budgetary and time limitations, enabling the business to innovate with a reasonable amount of security assurance. For the vast majority of applications, that formula is going to lead directly to their own custom-written or custom-outsourced software; specifically, their web applications.

Most organizations have a large number of web apps, a percentage of which will have horrific vulnerabilities that put the entire organization at risk. These vulnerabilities are well known, very prevalent, and usually straightforward to remediate. A security program that provides continuous assessment of all the code written and/or commissioned by your organization, both during development and in deployment, should be the front line of security for nearly every organization with a presence on the web, as it normally finds a trove of preventable risk that would otherwise be exploitable by attackers.

So what is the problem with scanning third-party COTS software alongside our custom code? It is a misallocation of resources. Unless you have unlimited budget and time, you are much better off focusing on evaluating your custom-written source code for vulnerabilities, which can be ranked by criticality and mitigated by your development team.

Again, that is not to say there are no risks in using COTS software. Of course there are. All software has vulnerabilities. Risks are present in every level of a technology stack. For example, a web app may depend on a BIOS/UEFI, an OS, a web server, a database server, an application server, multiple server-side frameworks, multiple client-side frameworks, and many other auxiliary programs.

But performing yet another evaluation of software that has most likely already gone through a security regimen is far less effective at managing risk than focusing more of your security resources on building a more robust security program around your own in-house custom software.

Well, what should we do to mitigate third-party risk? The most overlooked and basic security precaution is to keep a full manifest of all third-party COTS and open-source code running in your environment. Few, if any, organizations have a full listing of all the third-party apps and libraries they use. Keeping an account of this information, and then frequently checking for security updates and patches for this third-party code, is obvious and elementary, but almost universally overlooked.
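
Even a few lines of code can bootstrap such a manifest from a dependency lockfile. A hypothetical sketch for a PHP project using Composer (field names follow the standard composer.lock format):

<?php
// Hypothetical sketch: print a name/version manifest of the third-party PHP
// libraries recorded in a project's composer.lock, as a starting point for
// tracking security updates. Covers production packages only.
$lock = json_decode(file_get_contents('composer.lock'), true);
foreach ($lock['packages'] as $pkg) {
    echo $pkg['name'] . ' ' . $pkg['version'] . "\n";
}
?>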

This basic security formula, in which you continuously scan your custom applications, check third-party libraries for security updates, and use that information to identify and mitigate the most severe risks to your organization, would have prevented most of the successful web attacks that make daily headlines.

WhiteHat Aviator Beta for Windows

Since we launched the Mac version of WhiteHat Aviator in October, the number-one most-requested feature has been a Windows version of the browser. Today we hit a major milestone: our Labs team is excited to announce that we are launching the Windows beta. If you want to try it, please download Aviator for Windows here.

Outside of keeping our blog and Twitter followers up-to-date since its release in October, we have done little to nothing to get attention for Aviator. No marketing or sales resources have been invested in it. Despite this, we’ve seen tens of thousands of downloads of our Mac OSX version, and that number has been growing rapidly as the world takes notice.

Now the obvious next question everyone will ask is: “when do I get a version for XYZ operating system?” While we know this is highly important to a lot of our users, we have to balance that with a number of other features — which leads us to perhaps the second most-asked question: “how are you making money on Aviator?” The answer is, right now we aren’t. Therefore, some of our efforts will also be directed towards determining how to sell this in a way that does not involve profiting from our users’ information as many other browsers are in the unfortunate business of doing. As the saying goes, “if you aren’t paying for it, you’re the product.”

That said, we want to make sure that all of our existing users of WhiteHat Aviator know that they will continue to get the browser for free, forever. That’s right! Once we have determined how to monetize it, only new users will need to pay for a license. So, by all means encourage your friends to download it now, so they can enjoy Aviator for free, forever. This is our small way of thanking early Aviator adopters: if you’re one of them, you will never have to pay. A safer browser with free lifetime technical support? It’s unheard of, I know!

Don’t worry, we have a lot of exciting features on the horizon, and we do plan on supporting a number of additional operating systems. One thing at a time! We are thrilled with the hundreds of people who have written encouraging emails, made suggestions, offered feedback and sent us bug reports. We know we’ve hit a nerve and we’re excited by the prospect of a better, faster browser that works for the masses.

Lastly, a special thanks to all of our Windows Alpha testers and Mac Beta testers, without whom we surely wouldn’t have had such a well thought-out product. Please keep your feedback coming! Your input is critical for improving future Aviator versions.

Raising the CSRF Bar

For years, we at WhiteHat have been recommending tokenization as the number one protection from Cross-Site Request Forgery (CSRF). Just having a token is not enough, of course: it must be cryptographically strong, significantly random, and properly validated on the server. I can’t stress that last point enough, as the first thing I try when I see a CSRF token is to empty out the value and see if the form submission is still accepted as valid.

Historically, the bar for “good enough” when it came to CSRF tokens was that the token changed at least once per session.

For example: a user logs in, and a token is generated & attached to their authenticated session. That token is then used, for that user, on all sensitive/administrative forms that are to be protected against CSRF. As long as the user received a new token upon logging out and back in, and the old one was invalidated, that met the bar for “good enough.”

Not anymore.

Thanks to some fellow security researchers who are much smarter than I am when it comes to crypto chops (Angelo Prado, Neal Harris, and Yoel Gluck), and their latest attack on SSL (which they dubbed BREACH), that no longer suffices.

BREACH is an attack on HTTP Response compression algorithms, like gzip, which are very popular. Make your giant HTTP responses smaller, which makes them faster for your user? No brainer. Well now, thanks to the BREACH attack, within around 1000 requests an attacker can pull sensitive SSL-encrypted information from HTTP Responses — sensitive information such as… CSRF Tokens!

The bar must be raised. We no longer allow tokens to stay the same for an entire session if response compression is enabled. To combat BREACH, CSRF tokens need to be generated for every request/response; this way they cannot be stolen by making 1,000 requests against a consistent token.
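
To make that concrete, here is a minimal per-request token scheme sketched in PHP. The function names are our own illustration, not a drop-in library:

<?php
session_start();

// Issue a fresh token with every response. Per-request tokens defeat BREACH,
// which needs the secret to stay constant across ~1000 probing requests.
function issue_csrf_token() {
    $token = bin2hex(openssl_random_pseudo_bytes(32)); // 256 bits of randomness
    $_SESSION['csrf_token'] = $token;
    return $token; // embed in a hidden form field
}

// Validate on the server; reject empty or mismatched tokens, and invalidate
// the token after one use so every form round-trip gets a new one.
function validate_csrf_token($submitted) {
    $expected = isset($_SESSION['csrf_token']) ? $_SESSION['csrf_token'] : '';
    unset($_SESSION['csrf_token']);
    return $expected !== '' && is_string($submitted) && $submitted !== ''
        && hash_equals($expected, $submitted); // constant-time compare (PHP >= 5.6)
}
?>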

TL;DR

  • Old “good enough” – CSRF Tokens unique per session
  • BREACH (BlackHat 2013)
  • New “good enough” – CSRF Tokens must be unique per request if HTTP Response compression is being used.

Adding Open Source Framework Hardening to Your SDLC – Podcast

I talk with G.S. McNamara, Federal Information Security Senior Consultant, about fixing open source framework vulnerabilities, what to consider when pushing open source, how to implement a system around patches without impacting performance, and security considerations on framework selections.





Want to do a podcast with us? Sign up to be part of our Unsung Hero program.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

Top 10 Web Hacking Techniques 2013

Every year the security community produces a stunning number of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack. Now in its eighth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent work. Past Top 10s and the number of new attack techniques discovered in each year:
2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51) and 2012 (56).

Phase 1: Open community voting for the final 15 [Jan 23-Feb 3]
Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked on each ballot. For example, an entry in position #1 is given 15 points, position #2 gets 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots are tabulated to ascertain the top 15 overall (a sketch of the tabulation follows below). Comment with your vote!
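
For the curious, the tabulation itself is simple enough to sketch in a few lines of PHP (a hypothetical illustration of the scoring described above):

<?php
// Hypothetical sketch of the ballot tabulation: an entry ranked #1 on a
// ballot earns 15 points, #2 earns 14, and so on down to 1 point for #15.
function tally_ballots(array $ballots) {
    $points = array();
    foreach ($ballots as $ballot) {            // each ballot: entries in ranked order
        foreach ($ballot as $rank => $entry) { // $rank is 0-based
            if (!isset($points[$entry])) { $points[$entry] = 0; }
            $points[$entry] += 15 - $rank;
        }
    }
    arsort($points); // highest total first
    return array_slice(array_keys($points), 0, 15); // the final 15
}
?>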

Phase 2: Panel of Security Experts Voting [Feb 4-Feb 11]
From the results of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as Phase 1, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2013!

Complete 2013 List (in no particular order):

  1. Tor Hidden-Service Passive De-Cloaking
  2. Top 3 Proxy Issues That No One Ever Told You
  3. Gravatar Email Enumeration in JavaScript
  4. Pixel Perfect Timing Attacks with HTML5
  5. Million Browser Botnet Video Briefing
    Slideshare
  6. Auto-Complete Hack by Hiding Filled in Input Fields with CSS
  7. Site Plagiarizes Blog Posts, Then Files DMCA Takedown on Originals
  8. The Case of the Unconventional CSRF Attack in Firefox
  9. Ruby on Rails Session Termination Design Flaw
  10. HTML5 Hard Disk Filler™ API
  11. Aaron Patterson – Serialized YAML Remote Code Execution
  12. Fireeye – Arbitrary reading and writing of the JVM process
  13. Timothy Morgan – What You Didn’t Know About XML External Entity Attacks
  14. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  15. James Bennett – Django DOS
  16. Phil Purviance – Don’t Use Linksys Routers
  17. Mario Heiderich – Mutation XSS
  18. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  19. Carlos Munoz – Bypassing Internet Explorer’s Anti-XSS Filter
  20. Zach Cutlip – Remote Code Execution in Netgear routers
  21. Cody Collier – Exposing Verizon Wireless SMS History
  22. Compromising an unreachable Solr Server
  23. Finding Weak Rails Security Tokens
  24. Ashar Javad – Attack against Facebook’s password reset process
  25. Father/Daughter Team Finds Valuable Facebook Bug
  26. Hacker scans the internet
  27. Eradicating DNS Rebinding with the Extended Same-Origin Policy
  28. Large Scale Detection of DOM based XSS
  29. Struts 2 OGNL Double Evaluation RCE
  30. Lucky 13 Attack
  31. Weaknesses in RC4

Leave a comment if you know of some techniques that we’ve missed, and we’ll add them in up until the submission deadline.

Final 15 (in no particular order):

  1. Million Browser Botnet Video Briefing
    Slideshare
  2. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  3. Hacker scans the internet
  4. HTML5 Hard Disk Filler™ API
  5. Eradicating DNS Rebinding with the Extended Same-Origin Policy
  6. Aaron Patterson – Serialized YAML Remote Code Execution
  7. Mario Heiderich – Mutation XSS
  8. Timothy Morgan – What You Didn’t Know About XML External Entity Attacks
  9. Tor Hidden-Service Passive De-Cloaking
  10. Auto-Complete Hack by Hiding Filled in Input Fields with CSS
  11. Pixel Perfect Timing Attacks with HTML5
  12. Large Scale Detection of DOM based XSS
  13. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  14. Weaknesses in RC4
  15. Lucky 13 Attack

Prizes [to be announced]

  1. The winner of this year’s top 10 will receive a prize!
  2. After the open community voting process, two survey respondents will be chosen at random to receive a prize.

The Top 10

  1. Mario Heiderich – Mutation XSS
  2. Angelo Prado, Neal Harris, Yoel Gluck – BREACH
  3. Pixel Perfect Timing Attacks with HTML5
  4. Lucky 13 Attack
  5. Weaknesses in RC4
  6. Timur Yunusov and Alexey Osipov – XML Out of Band Data Retrieval
  7. Million Browser Botnet Video Briefing
    Slideshare
  8. Large Scale Detection of DOM based XSS
  9. Tor Hidden-Service Passive De-Cloaking
  10. HTML5 Hard Disk Filler™ API

Honorable Mention

  1. Aaron Patterson – Serialized YAML Remote Code Execution

The Insecurity of Security Through Obscurity

The topic of SQL Injection (SQLi) is well known to the security industry by now. From time to time, researchers will come across a vector so unique that it must be shared. In my case I had this gibberish — &#MU4<4+0 — turn into an exploitable vector due to some unique coding done on the developers’ end. So, as all stories go, let’s start at the beginning.

When we come across a login form in a web application, there are a few go-to testing strategies. Of course the first thing I did in this case was submit a login request with a username of admin and a password of ' OR 1=1--. To no one’s surprise, the application responded with a message about incorrect information. However, the application did respond in an unusual way: the password field had ( QO!.>./* in it. My first thought was: what is the slim chance they just returned the real admin password to me? At least it looked randomly generated. So, of course, I submitted another login attempt, this time using the newly obtained value as the password. This time a new value was returned to me on the failed login attempt: ) SL"+?+1'. Not surprisingly, the HTML markup characters in this new response were not properly encoded, so if we can figure out what is going on, we can certainly get reflected Cross-Site Scripting (XSS) on this login page. At this point it was time to take my research to a smaller scale and attempt to understand what was going on at a character-by-character level.

The next few login attempts submitted were intended to scope out what was going on. By submitting three requests with a, b, and c, it may be possible to see a bigger picture starting to emerge. The response for each appears to be the next letter in the alphabet — we got b, c, and d back in return. So as a next step, I tried to add on a bit to this knowledge. If we submit aa we should expect to get bb back. Unfortunately things are never just that easy. The application responded with b^ instead. So let’s see what happens on a much larger string composed of the same letter; this time I submitted aaaaaaaaaaaa (that’s 12 ‘a’ characters in a row), and to my surprise got this back b^c^b^b^c^b^. Now we have something to work with. Clearly there seems to be some sort of pattern emerging of how these characters are transforming — it looks like a repeating series. The first 6 characters are the same as the last 6 characters in the response.

So far we have only discussed these characters in their human readable format that you see on your keyboard. In the world of computers, they all have multiple secondary identities. One of the more common translations is their ASCII numerical equivalent. When a computer sees the letter ‘a’ the computer can also convert that character into a number, in this case 97. By giving these characters a number we may have an easier time determining what pattern is going on. These ASCII charts can be found all over the Internet and to give credit to the one I used, www.theasciicode.com.ar helped out big time here.

Since we have determined that there is a pattern repeating every 6 characters, let’s figure out what this shift actually looks like numerically. We start off by injecting aaaaaa and, as expected, get back b^c^b^. But what does this look like using the ASCII numerical equivalents instead? Now, in the computer world, we are injecting 97,97,97,97,97,97 and getting back 98,94,99,94,98,94. From this view it looks like each position has its own unique shift being applied to it. Surely everyone loved matrices as much as I did in math class, so let’s bust out some of that old matrix subtraction: [97,97,97,97,97,97] – [98,94,99,94,98,94] = [-1,3,-2,3,-1,3]. Now we have a series that we can apply to the ASCII numerical equivalent of what we want to inject in order to get it to reflect the way we want.

Finally, it’s time to start forming a Proof of Concept (PoC) injection to exploit this potential XSS issue. So far we have found that a repeating pattern of 6 character shifts is being applied to our input, and we have determined the exact shift occurring at each position. If we apply the correct character shifts to an exploitable injection, such as "><img/src="h"onerror=alert(2)//, we would need to submit it as !A:llj.vpf<%g%mqduqrp@`odur+1,.2. Of course, seeing that alert box pop up is the visual verification needed that we have reverse-engineered what is going on.

Since we have a working PoC for XSS, let’s revisit our initial testing for SQL injection. When we apply the same character shifts that we discovered to ' OR 1=1--, we find that it needs to be submitted as &#MU4<4+0. One of the space characters in our injection is shifted by -1, which results in a non-printable character between the 'U' and the '4'. With the proper URL encoding applied, it would appear as %26%23MU%1F4%3C4%2B0 when sent to the application. It was a glorious moment when I saw this application authenticate me as the admin user.
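
Putting the derived series to work, a small PHP helper (my own illustration of the technique, not code from the target) produces the string that must be submitted so that any desired payload reflects back intact:

<?php
// Apply the inverse of the server's shifts so the payload reflects as intended.
function encode_payload($payload) {
    $shifts = array(-1, 3, -2, 3, -1, 3); // inverse of the backend's series
    $out = '';
    for ($i = 0; $i < strlen($payload); $i++) {
        $out .= chr(ord($payload[$i]) + $shifts[$i % 6]);
    }
    return $out;
}
// Reproduces the SQL injection payload from above, URL encoding included:
echo rawurlencode(encode_payload("' OR 1=1--")) . "\n"; // %26%23MU%1F4%3C4%2B0
?>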

Back in the early days of the Internet, and well before I even started learning about information security, this type of attack was quite popular. These days developers commonly use parameterized queries properly on login forms, so finding this type of issue has become quite rare. Unfortunately this particular app was not created for the US market, and was probably developed by very green coders in a newly developing country. This was their attempt at encoding users’ passwords, when we all know passwords should be hashed. Had this application not returned any content in the password field on failed login attempts, this SQL injection vulnerability would have remained perfectly hidden from any black-box testing via this method. This highlights one of the many ways a vulnerability may exist but be obscured from those testing the application.

For those still following along, I have provided my interpretation of what the backend code may look like for this example. By flipping the + and – around, I could also use this same code to properly encode my injection so that it reflects the way I wanted it to:


<!DOCTYPE html>
<html>
<body>
<form name="login" action="#" method="post">
<?php
// The repeating series of per-position character shifts observed above:
// +1, -3, +2, -3, +1, -3, applied modulo 6 across the submitted password.
// (Flip the signs and the same loop becomes the encoder for injections.)
$shifts = array(1, -3, 2, -3, 1, -3);
$strinput = $_POST['password'];
$strarray = str_split($strinput);
for ($i = 0; $i < strlen($strinput); $i++) {
    $strarray[$i] = chr(ord($strarray[$i]) + $shifts[$i % 6]);
}
$password = implode($strarray);
// The username is escaped, but the shifted password is echoed and queried
// raw -- which is exactly what exposed both the XSS and the SQL injection.
echo "Login:<input type=\"text\" name=\"username\" value=\"" . htmlspecialchars($_POST['username']) . "\"><br>\n";
echo "Password:<input type=\"password\" name=\"password\" value=\"" . $password . "\"><br>\n";
// --- CODE SNIP ---
$examplesqlquery = "SELECT id FROM users WHERE username='" . addslashes($_POST['username']) . "' AND password='$password'";
// --- CODE SNIP ---
?>
<input type="submit" value="submit">
</form>
</body>
</html>

List of HTTP Response Headers

Every few months I find myself looking up the syntax of a relatively obscure HTTP header, and wondering why there isn’t a good, definitive list of common HTTP response headers anywhere. The lists on the Internet are usually missing half a dozen headers. So I’ve taken care to gather a list of all the HTTP response headers I could find. Hopefully this is useful to you, and removes some of the mystique behind how HTTP works if you’ve never seen headers before.

Note: this does not include things like IncapIP or other proxy/service-specific headers that aren’t standard, nor does it include request headers.

Header | Example Value | Notes
Access-Control-Allow-Credentials | true |
Access-Control-Allow-Headers | X-PINGOTHER |
Access-Control-Allow-Methods | PUT, DELETE, XMODIFY |
Access-Control-Allow-Origin | http://example.org |
Access-Control-Expose-Headers | X-My-Custom-Header, X-Another-Custom-Header |
Access-Control-Max-Age | 2520 |
Accept-Ranges | bytes |
Age | 12 |
Allow | GET, HEAD, POST, OPTIONS | Commonly includes other methods, like PROPFIND, etc.
Alternate-Protocol | 443:npn-spdy/2,443:npn-spdy/2 |
Cache-Control | private, no-cache, must-revalidate |
Client-Date | Tue, 27 Jan 2009 18:17:30 GMT |
Client-Peer | 123.123.123.123:80 |
Client-Response-Num | 1 |
Connection | Keep-Alive |
Content-Disposition | attachment; filename="example.exe" |
Content-Encoding | gzip |
Content-Language | en |
Content-Length | 1329 |
Content-Location | /index.htm |
Content-MD5 | Q2hlY2sgSW50ZWdyaXR5IQ== |
Content-Range | bytes 21010-47021/47022 |
Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP | default-src 'self' | Different header names are needed to control different browsers
Content-Security-Policy-Report-Only | default-src 'self'; ...; report-uri /csp_report_parser; |
Content-Type | text/html | Can also include charset information (e.g. text/html;charset=ISO-8859-1)
Date | Fri, 22 Jan 2010 04:00:00 GMT |
ETag | "737060cd8c284d8af7ad3082f209582d" |
Expires | Mon, 26 Jul 1997 05:00:00 GMT |
HTTP | /1.1 401 Unauthorized | Special header, no colon-space delimiter (the status line)
Keep-Alive | timeout=3, max=87 |
Last-Modified | Tue, 15 Nov 1994 12:45:26 GMT |
Link | <http://www.example.com/>; rel="canonical" | Other relation types include rel="alternate"
Location | http://www.example.com/ |
P3P | policyref="http://www.example.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA" |
Pragma | no-cache |
Proxy-Authenticate | Basic |
Proxy-Connection | Keep-Alive |
Refresh | 5; url=http://www.example.com/ |
Retry-After | 120 |
Server | Apache |
Set-Cookie | test=1; domain=example.com; path=/; expires=Tue, 01-Oct-2013 19:16:48 GMT | Can also include the Secure and HttpOnly flags
Status | 200 OK |
Strict-Transport-Security | max-age=16070400; includeSubDomains |
Timing-Allow-Origin | www.example.com |
Trailer | Max-Forwards |
Transfer-Encoding | chunked | Other values: compress, deflate, gzip, identity
Upgrade | HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11 |
Vary | * |
Via | 1.0 fred, 1.1 example.com (Apache/1.1) |
Warning | 199 Miscellaneous warning |
WWW-Authenticate | Basic |
X-Aspnet-Version | 2.0.50727 |
X-Content-Type-Options | nosniff |
X-Frame-Options | deny |
X-Permitted-Cross-Domain-Policies | master-only | Used by Adobe Flash
X-Pingback | http://www.example.com/pingback/xmlrpc |
X-Powered-By | PHP/5.4.0 |
X-Robots-Tag | noindex,nofollow |
X-UA-Compatible | Chrome=1 |
X-XSS-Protection | 1; mode=block |
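
Since many of the security-relevant headers above are opt-in, here is a minimal PHP sketch of emitting a few of them, using the illustrative values from the table (not advice tuned to any particular application):

<?php
// Minimal sketch: send a handful of the security-related response headers
// listed above. header() must be called before any body output is sent.
header('X-Frame-Options: deny');
header('X-Content-Type-Options: nosniff');
header('X-XSS-Protection: 1; mode=block');
header('Strict-Transport-Security: max-age=16070400; includeSubDomains');
header("Content-Security-Policy: default-src 'self'");
?>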

If I’ve missed any response headers, please let me know by leaving a comment and I’ll add them to the list. Perhaps at some point I’ll create a similar list for request headers, if people find this helpful enough.