Aviator (Default) Search Change

In an effort to find ways to work with a search provider, we spent a lot of time researching various models that would enable us to stay on the side of our users AND allow us to generate revenue to help us pay for Aviator development. Naturally we attempted to work with DuckDuckGo since they were already our search provider of choice. Unfortunately, the only way they were willing to work with us was to monetize ads, and we just aren’t willing to do that. Browsers monetizing ads is at the root of what’s causing issues for users, stifling security and eliminating privacy.

After months of work we decided that Disconnect Search was the best and most exciting path forward. We have a long-standing relationship with the Disconnect team because of their popular browser plugin, and their privacy record is spotless — and Disconnect was comfortable working out a deal with us that didn’t rely on selling ads. You can’t beat that! We were thrilled to find a partner who cares enough about their users and ours to forgo the typical death cycle of mandatory partnerships that revolve around advertising, in favor of one that revolves simply around being the default search.

This is just another way we want to be clear that we are on our customers’ side, even in matters of business. Our transparency with our business model is the crux of why our users can trust our decisions to be in their best interest. So, in the coming update you will notice that the browser politely asks you if you want to switch from DuckDuckGo to Disconnect. The option is yours, of course, but this will help us continue to evolve the browser, and, to boot, we believe Disconnect is the most private search engine we could find. Two birds with one stone, right?!

As always, questions and comments are welcome!

Sentinel Elite: Adding $250,000 Worth of Breach Protection

A week ago WhiteHat launched Sentinel Elite where we made a bold statement, perhaps one of the boldest statements any security vendor can make. We’re offering a financially backed security guarantee: if a website covered by Sentinel Elite gets hacked, specifically using a vulnerability we didn’t identify and should have, the customer will be refunded in full.

Since the announcement, the feedback we’ve received has been both incredible and incredibly interesting. It’s clear to us the concept of a ‘security guarantee’ strikes a nerve and we are finding that others in the industry have called for similar action. In fact, a recent report by ChangeWave (a subsidiary of 451 Research), entitled ‘Corporate Cloud Computing Trends’, says the following:

“We also asked about the importance of being offered a ‘security guarantee’ by cloud service providers. Three-quarters of respondents (74%) say it’s ‘Very Important’ that cloud providers offer a guarantee, and another 22% say ‘Somewhat Important.’ Companies not using cloud place a greater importance on security guarantees than current users. As such, security guarantees give cloud service providers an opportunity to attract new customers.”

Even Dan Geer (CISO, In-Q-Tel), in his Black Hat keynote, called for software liability: “the only two products not covered by product liability are religion and software, and software shall not escape much longer.”

Clearly, this is an idea whose time has come!

While many have been commending us for putting our money where our mouth is, which we appreciate, we’ve also been asked to do more. We heard multiple times that in the long run, a product refund is not substantive enough when compared to customer breach costs in the event of an incident — which can easily run to six figures and beyond. And you know what? They are absolutely right! WhiteHat should have more skin in the game. So, we’re taking this feedback to heart and we are upping the ante:

Now, not only will Sentinel Elite customers receive a full refund in the event that their site is breached as a result of a vulnerability that we should have discovered but missed, we will also cover up to $250,000 in damages to the affected company.

Like we’ve said before, WhiteHat is serious about web security. We’re serious when we say a security vendor’s interests should be in line with their customers’. We encourage other vendors to follow suit and we encourage their customers to settle for nothing less. This is the best way to achieve better security outcomes, more secure software, and a more secure Web. Other industries have already done this. InfoSec can too!

For more information about Sentinel Elite, please click here.

DHS and Cyberterrorism

The DHS was recently polled on what groups and attacks they are personally most concerned about. This comes from a pretty wide range of intelligence officers at various levels of the military industrial complex. This underscores how the military is thinking and what they are currently most focused on. The tidbits I found interesting are on pages 7 and 8:


The DHS seems to be most concerned about Sovereign Citizens and Islamic Extremists/Jihadists (in that order). The rationale isn’t well explained, but I would presume that the physical proximity and radical nature of Sovereign Citizen groups trump the extremist nature of Jihadists. I’m speculating, but that would seem to make sense. It could also be a reaction to FUD, but it’s hard to say.

More interestingly, the threat they find most viable is Cyberterrorism. That makes a lot of sense, because Cyberterrorism is cheap, can be done instantaneously, can be done remotely, and can be done with minimal skills and at minimal risk. It’s really hard to tell what’s Cyberterrorism versus what is just a normal for-profit attack, and attribution is largely an unsolvable problem if the attacker knows what they’re doing. Also, even if you can identify the correct adversary, extradition/rendition are tough problems.

There’s not a lot of substance here, because it’s all polls, but it’s interesting to see that our industry is at the top of the US intelligence community’s mind.

Security Guaranteed: Customers Deserve Nothing Less

WhiteHat Security Sentinel Elite

Ever notice how everything in the information security industry is sold “as is”? No guarantees, no warranties, no return policies. This provides little peace of mind that any of the billions that are spent every year on security products and services will deliver as advertised. In other words, there is no way of ensuring that what customers purchase truly protects them from getting hacked, breached, or defrauded. And when these security products fail – and I do mean when – customers are left to deal with the mess on their own, letting the vendors completely off the hook. This does not seem fair to me, so I can only imagine how a customer might feel in such a case. What’s worse, any time someone mentions the idea of a security guarantee or warranty, the standard retort is “perfect security is impossible,” “we provide defense-in-depth,” or some other dismissive and ultimately unaccountable response.

Still, the naysayers have a valid point. Given enough time and energy, everything can be hacked, including security products, but this admission does not inspire much confidence in those who buy our warez and whose only fear is getting hacked. We, as an industry, are not doing anything to alleviate that fear. Given how important information security is today, I personally think customers deserve more assurance. I believe customers should demand accountability from their vendors in particular. I believe the “as is” culture in security is something the industry must move away from. Why? Because if it were incumbent upon vendors to stand by their product(s), we would start to see more pushback against the status quo and, perhaps, even renewed innovation.

At the core of the issue is bridging the gap between the “nothing-is-perfect” mindset and the business requirements for providing security guarantees.

If you think about it, many other industries already offer guarantees, warranties, or 100% return policies for less than perfect products. Examples include electronics, clothing, cars, lawn care equipment, and basically anything you buy on Amazon. As we know, all these items have defect rates, yet it doesn’t appear to prevent those sellers from standing behind their products. Perhaps the difference is, unlike most security vendors, these merchants know their product failure rates and replacement costs. This business insight is precisely why they’re willing to reimburse their customers accordingly. Security vendors, by contrast, tend NOT to know their failure rates, and if they do, they’re likely horrible (anti-virus is a perfect example of this). As such, vendors are unwilling to put their money where their mouth is, the “as is” culture remains, and interests between security vendor and customer are misaligned.

The key then, is knowing the security performance metrics and failure rates (i.e. having enough data on how the bad guys broke in and why the security controls failed) of the products. With this information in hand, offering a security guarantee is not only possible, but essential!

WhiteHat Security is in a unique position to lead the charge away from selling “as is” and towards security guarantees. We can do this, because we have the data and metrics to prove our performance. Other Software-as-a-Service vendors could theoretically do the same, and we encourage them to consider doing so.

For example, at WhiteHat we help our customers protect their websites from getting hacked by identifying vulnerabilities and helping to get them fixed before they’re exploited. If the bad guys are then unable to find and exploit a vulnerability we missed, or if they decide to move on to easier targets, that’s success! Failure, on the other hand, is missing a vulnerability we should have found which results in the website getting hacked. This metric – the product failure rate – is something any self-respecting vulnerability assessment vendor should track very closely. We do, and here’s how we bring it all together:

  1. WhiteHat’s Sentinel scanning platform and the 100+ person army of Web security experts behind it in our Threat Research Center (TRC) tests tens of thousands of websites on a 24x7x365 basis. We’ve been doing this for more than a decade and we have a larger and more accurate website vulnerability data set than anyone else. We know with a fine degree of accuracy what vulnerabilities we are able to identify – and which ones we are not.
  2. We also have data sharing relationships with Verizon (and others) on the incident side of the equation. This is to say we have good visibility into what attack techniques the bad guys are trying and what they’re likely to successfully exploit. This insight helps us focus R&D resources towards the vulnerabilities that matter most.
  3. We also have great working relationships with our customers so that when something unfortunate does occur – which can be anything from something as simple as a ‘missed’ vulnerability, to a site that was no longer being scanned by our solution that contained a vulnerability, all the way to a real breach – we’re in the loop. This is how we can determine whether something we missed and should have found actually results in a breach.

Bottom line: in the past 10+ years of performing countless assessments and identifying millions of vulnerabilities, there have been only a small number of instances in which we missed a vulnerability that we should have found that we know was likely used to cause material harm to our customers. All told, our failure rate is below 0.01%, an impressive track record and one that we are quite proud of. I am not familiar with any other software scanning vendor who even claims to know what their failure rate metric is, let alone has the confidence to publicly talk about it. And it is for this reason that we can confidently stand behind our own security guarantee for customers with the new Sentinel Elite.

Introducing: Sentinel Elite

Sentinel Elite is a brand new service line from WhiteHat in which we deploy our best and most comprehensive website vulnerability assessment processes. Sentinel Elite builds on the proven security of WhiteHat Sentinel, which offers the lowest false-positive rate of any web application security solution available as well as more than 10 years of website vulnerability assessment experience. This service, combined with a one-of-a-kind security guarantee from WhiteHat, gives customers confidence in both their purchase decisions and the integrity of their websites and data.

Sentinel Elite customers will have access to a dedicated subject matter expert (SME) who expedites communication and response times, as well as coordinates the internal and external activities supporting your application security program. The SME will also supply prioritized guidance support, so customers know which vulnerabilities to fix first… or not! Customers also receive access to the WhiteHat Limited Platinum Support program, which includes a one-hour SLA, quarterly summaries and exploit reviews, as well as a direct line to our TRC. Sentinel Elite customers must in turn provide us with what we need to do our work, such as giving us valid website credentials and taking action to remediate identified vulnerabilities. Provided everyone does what they are responsible for, our customers can rest assured that their website and critical applications will not be breached. And we are prepared to stand behind that claim.

If it happens that a website covered by Sentinel Elite gets hacked, specifically using a vulnerability we missed and should have found, the customer will be refunded in full. It’s that simple.

We know there will be those in the community who will be skeptical. That’s the nature of our industry and we understand the skepticism. In the past, other security vendors have offered half-hearted or gimmicky guarantees, but that’s not what we’re doing here. We’re serious about web security, we always have been. We envision an industry where outcomes and results matter, a future where all security products come with security guarantees, and most importantly, a future where the vendors’ best interests are in line with their customers’ best interests. How amazing would that be not only for customers but also for the Internet and the world we live, work and do business in? Sentinel Elite is the first of many steps we are taking to make this a reality.

For more information about Sentinel Elite, please click here.

Better Single Sign-On for WhiteHat Security Customers

WhiteHat has just integrated PingFederate into Sentinel to provide better single sign-on support to our customers. With single sign-on, your own portal can require exactly what you want from someone logging in – username and password, or an RSA token code, or a text-back number, or a thumb scan – whatever you prefer to uniquely identify your users. This allows you to make your own security as tight as you like.

Once a user has logged in, their authentication can be “federated” – exchanged securely – with other portals, either locally or at other sites. This is where WhiteHat’s PingFederate integration comes in. That login – secured however you want to secure it – can now be transmitted to WhiteHat’s PingFederate instance, which validates it and then passes it on to Sentinel to actually do the login. Only the fact that the user is valid, along with their email address, is exchanged; passwords, internal user IDs, and any other identifying information remain on your server and never leave it. This means you can now authenticate your Sentinel users as stringently as you do those for your own applications.
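To illustrate the principle at work (this is a hedged sketch of the federation idea only, NOT PingFederate's actual protocol, which is built on standards like SAML): after your portal authenticates the user however it likes, only a signed assertion of identity crosses the boundary. The secret, function names, and token format below are invented for illustration.

```python
import hashlib
import hmac
import json

# Illustrative only: real federation uses SAML/OIDC, not this format.
SHARED_SECRET = b"example-secret"  # established out of band between the two parties

def make_assertion(email: str) -> str:
    """Your portal emits a signed claim: 'this user is valid, here is their email'.
    No password or internal user ID is included."""
    payload = json.dumps({"email": email, "valid": True})
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_assertion(token: str) -> dict:
    """The remote service checks the signature before trusting the claim."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(payload)

token = make_assertion("user@example.com")
print(verify_assertion(token)["email"])  # user@example.com
```

The design point is that the credential check happens entirely on your side; the relying party only ever sees the signed result.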

That sounds pretty nice, but we also support single sign-on for deep links into Sentinel. Those links actually link to our PingFederate instance, which uses the link to bounce you back to your own single sign-on portal and then back into Sentinel with the federated authentication. If you’re already logged in, then the process just takes you into Sentinel after automatically picking up the authentication.

Beyond these basic features, it’s now possible for us to extend single sign-on to provide more, like automatic provisioning of Sentinel users: adding and removing users in Sentinel simply by doing the same with them locally on your sign-on portal. This makes it much easier for your admins to manage Sentinel users, since they’re just like any other users on your system. We’re also looking into integrating our Customer Success Center into PingFederate as well, to make accessing both it and Sentinel easier.

Remember, you need your own single sign-on solution to connect with us – if you have one already, give us a call and ask us about integrating Sentinel. One less ID to remember, and one less password to forget!

The Ghost of Information Disclosure

Information disclosure is a funny thing. Information disclosure can be almost completely innocuous or — as in the case of Heartbleed — it can be devastating. There is a new website called un1c0rn.net that aims to make hacking a lot easier by letting attackers utilize Heartbleed data that has been amassed into one place.

The business model is simple – 0.01 Bitcoins (around $5) for data. It leaves no traces on the remote server, because the data isn’t stored there anymore; it’s on un1c0rn’s server. So let’s play a sample attack out.

1) Heartbleed comes out;

2) Some time in the future un1c0rn scans a site that is vulnerable and logs it;

3) A would-be attacker searches through un1c0rn and finds a site of interest;

4) Attacker leverages the information to successfully mount an attack against the target server leveraging the data.

In this model, the attacker’s first packets to the server in question could be the ones that compromise it. But it’s actually more interesting than that. As I was looking through the data I found this query.


For those of you who don’t live and breathe HTTP, this is an authorization request with a base64-encoded string (which is trivial to reverse) containing the username and password for the site in question. This one query turned up 400 sites with the same simple flaw. So let’s play out another attack scenario.

1) Heartbleed comes out;

2) Some time in the future un1c0rn scans a site that is vulnerable and logs it;

3) The site is diligent and finds that it is vulnerable, patching immediately and switching out its SSL certificate for a new one;

4) A would-be attacker searches through un1c0rn and finds a site of interest;

5) Using the information they found, the attacker still compromises the site with the username/password, even though the site is no longer vulnerable to the attack in question.
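The “trivial to reverse” point about that base64-encoded authorization request is literal: a Basic Authorization header carries `username:password` base64-encoded, which is encoding, not encryption. A minimal sketch (the credentials below are made up for illustration):

```python
import base64

# An HTTP Basic Authorization header carries "username:password"
# base64-encoded. Anyone who captures the header -- via Heartbleed
# leakage or otherwise -- can recover the credentials in one call.
header = "Authorization: Basic YWxpY2U6aHVudGVyMg=="

encoded = header.split("Basic ")[1]
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # alice:hunter2
```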

This is the problem with Information Disclosure – the data can still be useful long after the hole that was used to gather it has been closed. That’s why in the case of Heartbleed and similar attacks, not only do you have to fix the hole, you also have to expire all of the passwords and invalidate all of the cookies or any other tokens that could grant a user access to the system.
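That cleanup step can be sketched in a few lines. This is a minimal illustration of the idea, not anyone's production code; the table and column names are hypothetical, and a real system would also rotate API keys, re-issue certificates, and so on.

```python
import sqlite3

# Hypothetical schema for illustrating post-incident cleanup:
# patching the hole is step one, but every credential or token the
# leak may have exposed also has to be rotated.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, must_reset INTEGER DEFAULT 0);
    CREATE TABLE sessions (token TEXT, user_id INTEGER);
    INSERT INTO users (id) VALUES (1), (2);
    INSERT INTO sessions VALUES ('abc', 1), ('def', 2);
""")

# 1) Kill every live session, so stolen cookies stop working...
conn.execute("DELETE FROM sessions")
# 2) ...and force a password reset on next login for every account.
conn.execute("UPDATE users SET must_reset = 1")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM sessions").fetchone()[0])  # 0
```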

The moral of the story is that you may find yourself being compromised seemingly almost magically in a scenario like this. How can someone guess a cookie correctly on the first attempt? Or guess a username/password on the first try? Or exploit a hole without ever having looked at your proprietary source code or even having visited your site before? Or find a hidden path to a directory that isn’t linked to from anywhere? Well, it may not be magic – it may be the ghost of Information Disclosure coming back to haunt you.


SSL/TLS MITM vulnerability CVE-2014-0224

We are aware of the OpenSSL advisory posted at https://www.openssl.org/news/secadv_20140605.txt. OpenSSL is vulnerable to a ChangeCipherSpec (CCS) Injection Vulnerability.

An attacker using a carefully crafted handshake can force the use of weak keying material in OpenSSL SSL/TLS clients and servers.

The attack can only be performed between a vulnerable client *and* a vulnerable server. Desktop Web browser clients (e.g. Firefox, Chrome, Internet Explorer) and most mobile browsers (e.g. Safari Mobile, Firefox Mobile) are not vulnerable, because they do not use OpenSSL. Chrome on Android does use OpenSSL, and may be vulnerable. All other OpenSSL clients are vulnerable, in all versions of OpenSSL.

Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.

OpenSSL 0.9.8 SSL/TLS users (client and/or server) should upgrade to 0.9.8za.
OpenSSL 1.0.0 SSL/TLS users (client and/or server) should upgrade to 1.0.0m.
OpenSSL 1.0.1 SSL/TLS users (client and/or server) should upgrade to 1.0.1h.

WhiteHat is actively working on implementing a check for sites under service. We will update this blog with additional information as it is available.

Editor’s note:
June 6, 2014
WhiteHat has added testing to identify websites currently running affected versions of OpenSSL across all of our DAST service lines. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. WhiteHat recommends that all assets, including non-web application servers and sites that are not currently under service with WhiteHat, be tested and patched.

If you have any questions regarding the new CCS Injection SSL Vulnerability, please email support@whitehatsec.com and a representative will be happy to assist.

Easy Things Are Often the Hardest to Get Right: Security Advice from a Development Manager

I’m not a security guy. I haven’t done much hands-on software development for a while either. I’m a development manager, project manager, and CTO, having spent much of my career building technology for stock exchanges and central banks. About six years ago I helped to launch an online institutional trading platform in the US, where I serve as the CTO today. The reliability and integrity of our technology and operations are critically important, so we worked with some very smart people in the info sec community to make sure that we designed and built security into our systems from the start.

Through this work I got to know about OWASP, SAFECode and WASC, and met many of the people involved in trying to make the software world secure. Over the last couple of years I’ve helped out on some OWASP projects, most recently the OWASP Top 10 Proactive Controls (a Top 10 list for developers instead of security auditors) and at the SANS Institute in their SANS Analyst program. My special areas of interest right now are secure development with Agile and Lean methods, and integrating security into DevOps.

I’ve learned that preaching about security principles, “economy of mechanism” and “complete mediation” and “least common mechanism” (and whatever else Saltzer and Schroeder talked about 40 years ago) and publishing manifestos are a waste of time. There’s nothing actionable; nothing that developers can see or use or test.

I’ve also learned that you can’t legislate “security-in”. Regulatory compliance constraints and formal security policies don’t make software more secure. The best that they can do is act as leverage to get security on the table.

I am only interested in things that work, especially things that work for small teams, because a lot of software is built by small teams and small companies like mine, and because what works for small teams can be scaled up: look at the success of Agile as it makes its way from teams to the enterprise. I want things that developers can do on their own, without the help of a security team – because small companies don’t have security teams (or even a “security guy”), and because the only way that appsec can succeed at scale is if developers make it part of their jobs.

Here’s what I’ve found works so far:

In design: trust, but verify

When designing, or changing the design, make sure that you understand your dependencies and system contracts, the systems and services and data you depend on – or that depend on you. What happens if a service or system fails or is slow or occasionally times out? Can you trust the quality of the data that you get from them? Can you trust that they will take good care of your data? How can you be sure?

All input is evil

Michael Howard at Microsoft has “one rule” for developers: if there’s one thing that developers can do to make applications more secure, it’s to recognize that

All input is evil unless proven otherwise

and it’s up to developers to defend against evil through careful data checking. Get to know – and love – type-checking and regex.

This is simple to explain and easy to understand. It’s something that developers can think about in design and in coding. There are testing tools that can help check on it (through taint analysis or fuzzing). It will pay back in making the system more reliable and more resilient, catching and preventing run-time errors and failures, as well as making the system more secure.
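A small sketch of what “defend against evil through careful data checking” can look like in practice. The field names and rules here are illustrative, not taken from the article; the pattern is to whitelist what is allowed rather than trying to enumerate what is evil.

```python
import re

# Whitelist pattern: letters, digits, underscore, sane length.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    """Regex whitelist: reject anything outside the allowed alphabet."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def parse_quantity(raw: str) -> int:
    """Type check plus range check: accept only a small positive integer."""
    qty = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= qty <= 1000:
        raise ValueError(f"quantity out of range: {qty}")
    return qty

print(validate_username("alice_42"))  # alice_42
print(parse_quantity("7"))            # 7
```

Both checks fail loudly on bad input instead of passing it along, which is what makes the same code both a security control and a reliability win.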

Write good code

Security builds on top of code quality. Bad code, code with serious bugs, code that is too hard to understand and unsafe to change, will never be secure. Look at some of the most serious recent security problems: the Apple iOS SSL bug and the Heartbleed OpenSSL bug. These weren’t failures in understanding appsec principles or gaps in design – they were caused by sloppy coding.

Identify your high-risk code: security plumbing and other plumbing, network-facing APIs, code that deals with money or control functions or sensitive data, core algorithms. Make sure that your best developers work on this code, that they have extra time to review it carefully to make sure that the code does what it is supposed to do, and that it “won’t boink” if it hits bad data or some other exception occurs. Step through the logic, check API contracts, data validation, error handling and exception handling. Spend extra time testing it.

Never stop testing

Agile and XP and TDD/ATDD and Continuous Integration have made developers responsible for testing their own software to verify that it works by writing automated tests as much as possible. This needs to be extended to security. Developers can write security unit tests and run them in Continuous Integration.
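What a security unit test might look like, as a hedged sketch: the `render_comment` helper and its escaping rule are hypothetical, but the shape (an ordinary test class that fails the build if an encoding regression slips in) is the point.

```python
import html
import unittest

def render_comment(text: str) -> str:
    """Hypothetical view helper: user-supplied text must be HTML-escaped
    on output so it cannot inject markup."""
    return "<p>" + html.escape(text, quote=True) + "</p>"

class SecurityRegressionTests(unittest.TestCase):
    """Security checks written as plain unit tests and run in CI,
    so a regression fails the build like any other bug."""

    def test_script_tag_is_neutralized(self):
        out = render_comment("<script>alert(1)</script>")
        self.assertNotIn("<script>", out)

    def test_attribute_breakout_is_escaped(self):
        out = render_comment('" onmouseover="alert(1)')
        self.assertIn("&quot;", out)
        self.assertNotIn('onmouseover="', out)

if __name__ == "__main__":
    unittest.main(exit=False)
```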

Find a good security testing tool – a dynamic scanner or a static analysis tool – that is affordable and fast and generates minimal noise, and make it part of your automated testing pipeline. Run it as often as possible. Bugs that are caught faster will be fixed faster and cost less to fix. Make security testing something that developers don’t have to think about – it’s always “just there.”

Automated testing is an important part of a testing program, but it isn’t enough to build good software. Like some other shops, we also do a lot of manual exploratory testing. A skilled tester working through key features of the app, pushing the boundaries, trying to understand and break things will find important bugs and tell you if the software really is ready. A good pen test is exploratory testing on steroids. For every bug they find, ask: Why is it broken? How did we miss it? What else could be wrong? Pen tests as a security gate are too little too late. But as a health check on your SDLC? Priceless.

Vulnerabilities are bugs – so fix them

Money spent on training and tools and testing is wasted if you don’t fix the bugs that you find. Security vulnerabilities are bugs – you may have to track them separately for compliance reasons, but to developers they should just be bugs, handled and prioritized and fixed like any other bug. If you have a development culture where bugs are taken seriously (Zero Bug Tolerance), and security vulnerabilities are bugs, then security vulnerabilities will get fixed. If developers aren’t fixing bugs, then you have a serious management problem that you need to take care of.

Training works – to a point

I agree that every developer and tester (and their managers) should get some training in secure development, so that everyone understands the fundamentals. But not everybody will “get it” when it comes to secure development – and not everybody has to. Adobe’s karate belt approach to appsec training makes good sense. Everybody should get white belt level awareness training. Every team should have somebody the rest of the team looks to for technical leadership and mentoring. These people are the ones who need in-depth black belt appsec training, and who can take best advantage of it. They can do most of the heavy-lifting required, help the rest of the team on security questions, and keep the team honest.

Simple is not so simple

These are the things that I’ve found that work: simple rules, simple tools, simple good practices. Make sure that at least your best developers are trained in application security. Understand trust and layering-in design. Write solid, defensive code. Test your code for security like you test it for correctness and usability. If you find security bugs, fix them.

Sounds easy. But what I’ve learned from working in software development for a long time is that the easy things are often the hardest to get right.

About Jim Bird
Jim is a development manager and CTO who has worked in software engineering for more than 25 years. He is currently the co-founder and CTO of a major US-based institutional trading service, where he manages the company’s technology organization and information security programs. Jim has worked as a consultant to IBM and to stock exchanges and banks globally. He was also the CTO of a technology firm (now part of NASDAQ OMX) that built custom IT solutions for stock exchanges and central banks in more than 30 countries. You can follow him on his blog at Building Real Software.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

A Tale of Responsible Disclosure

In the past few months I’ve taken on cycling as a new hobby, and one of the services I have been using to track routes and mileage is a popular webapp known as MapMyRide. They actually have several services that are split up per activity including MapMyFitness, MapMyHike, MapMyWalk, and MapMyRun. Several colleagues have also joined me in recreational cycling, so one day I decided to plan out a route for us to use repeatedly. After the route was created using MapMyRide, I decided to give it the quirky name Tour de <'hackers">. It would get a laugh out of any XSS hunters because most of us in application security would recognize it as a benign HTML tag that also checks for single and double quote attribute breaks.

Take a close look at the “Share via Tweet/Facebook/Email” section


Oh my…
Whelp, that’s awesome, I’ve now inadvertently done the number one thing an ethical pentester should never do, and that’s play “outside of the sandbox.” You see, companies that don’t have an extremely active security team don’t really distinguish between a security professional that may be using the application legitimately and an actual intruder that’s digging for exploits. Those of us that are paid to ethically hack into web applications and services are normally protected by various legal documents as per the contract of said penetration tests. XSS isn’t really hard to find – and truthfully I’m quite annoyed with all of its hype – but to a company it really doesn’t matter how cool or complex the exploit is so long as it’s damaging in some fashion to the business or brand. This particular one is persistent and easily wormable. If anyone remembers using MySpace (I admit that I did and accept the shame for it) they might be familiar with the MySpace Samy Worm that infected around a million profiles in less than 24 hours. Samy Kamkar’s XSS injection made you friend-request him and added a little text to your profile page that said “but most of all, samy is my hero.” Silly, right? Well, what’s not silly is that his home was raided and he ended up entering a felony plea agreement that involved three years probation without computer use.

Let’s look at the facts. I don’t know if the makers of MapMyRide have an intrusion detection system (IDS) in place, or if anything has been flagged internally on this input. A quick Google search doesn’t reveal anything about them participating in bug bounties or responsible disclosure. I also did this on an account that is quite visibly linked to my personal Facebook account, so there’s a first and last name right there. All I can do now is approach them with what I’ve found, state my case and my intent, and hope that they see me as a “do-good” individual who wants to ethically disclose the issue.

Through LinkedIn I hunted down a MapMyRide co-owner’s Twitter handle, and he referred me to their VP of Engineering, Jesse Demmel. Now I can’t just email him saying that I was able to break the HTML document, because that might not warrant enough attention. I’m going to have to send them an actual proof of concept and cause more injections to go through. Thankfully (open to interpretation) there wasn’t any input validation in place at all, and we could read the cookies easily with <svg/onload=alert(document.cookie)>. Voilà.
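As an aside on why this particular vector is so popular: it carries no `script` tag and needs no user interaction, so it sails past naive blacklist filters. A hypothetical sketch of such a filter (not MapMyRide’s actual code, which apparently had no validation at all):

```python
import re

def naive_blacklist(value: str) -> str:
    # Hypothetical ad-hoc defense: strip <script> tags and nothing else.
    return re.sub(r"</?script[^>]*>", "", value, flags=re.IGNORECASE)

payload = "<svg/onload=alert(document.cookie)>"

# No script tag, no click required: the svg element's onload handler
# fires as soon as it is parsed, and the blacklist never matches.
assert naive_blacklist(payload) == payload
```

The reliable fix is contextual output encoding at render time, not pattern-matching the input.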


Then things started getting a bit more interesting: I realized that the other domains shared this exact same template. The injection isn’t persistent only on mapmyride.com, but on the other four sites as well!





So, if I were to actually exploit this across all five domains from one entry point, how many people would I affect?

Here are some WolframAlpha statistics:


Based on those numbers, total page views across all five sites are approximately 1,353,300 per day, with 431,800 daily visitors. That’s not too shabby of a victim pool. With this injection I could make anyone who viewed any of these “routes” deactivate their account, change their profile information, change their profile pictures, steal their addresses, flip things they’ve set as private to public, or have them each create support tickets to overload customer service – all while causing each victim to become a carrier of the injection, leading to an exponential increase in infection. Basically, anything within the context of the application is now accessible to me through JavaScript execution. I then carefully worded an email to Mr. Demmel explaining the severity and impact of the vulnerability, and he responded in less than 24 hours:

“Hey Jonathan,

Sorry I didn’t get back to you sooner as I was traveling. Thanks for the details. We take this very seriously and are actively working on a fix, which we hope to push live shortly.

We have escaping turned on in Django and are using the latest version. This will require a patch and we’re working through that today (and the weekend if necessary).

I really appreciate you bringing this to our attention. We have gone through security audits in the past and have another scheduled soon.”
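Mr. Demmel’s mention of Django escaping is worth a note. Django’s template autoescaping applies roughly the transformation below to every variable it renders (this is a stdlib approximation of `django.utils.html.escape`, not their actual patch), which is why a template that leaves autoescaping enabled and never routes user data through `|safe` or `mark_safe()` neutralizes this payload:

```python
import html

payload = "<svg/onload=alert(document.cookie)>"

# With autoescaping on, the angle brackets reach the browser as
# entities, so the payload renders as inert text instead of markup.
rendered = html.escape(payload, quote=True)
print(rendered)  # &lt;svg/onload=alert(document.cookie)&gt;
```

That the bug persisted despite “escaping turned on” suggests at least one sink – quite possibly that share widget – was emitting the route name outside the autoescaped path.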


The moral of this story: if you’re going to responsibly disclose, I suggest the following:

  1. Contact the highest person in the organization chain that you can. If you start by contacting someone like a customer support representative, they may have no idea what you’re talking about. I’ve reported a bug once before where my first contact thought I was talking about spelling/grammar mistakes. This time I went straight to the co-owner.
  2. Don’t threaten with blog posts or say “I’ve reached out to Gawker, HuffingtonPost, or Threatpost and they’re ready to run this story, whatcha gonna do to fix it?” You’ll look like the enemy really quick and won’t be doing yourself any favors.
  3. Don’t ask for a reward. There’s a simple word for this: extortion.
  4. Simply make your intentions known. Know how to spot XSS, SQLi, or RCE by casually browsing an application that you use personally? You’re one in a million (quite literally). Use your knowledge not only to report the problem, but also to tell them how they can fix it in case they are unaware. Good intentions and thoughtfulness will take you the furthest here.
  5. If you’re going to blog about it, like I’m doing here, wait until the issue is fixed and no longer exploitable (at least by the same attack vector). Asking for permission is wise as well, so that they can coordinate a release statement at the same time. Again, work with them, not against them.

Happy hacking.