DHS and Cyberterrorism

The DHS was recently polled on which groups and attacks its personnel are most concerned about. The responses come from a fairly wide range of intelligence officers at various levels of the military-industrial complex, and they underscore how the military is thinking and what it is currently most focused on. The tidbits I found interesting are on pages 7 and 8:


The DHS seems to be most concerned about Sovereign Citizens and Islamic Extremists/Jihadists (in that order). The rationale isn’t well explained, but I would presume that the physical proximity and radical nature of Sovereign Citizen groups trump the extremist nature of Jihadists. I’m speculating, but that would seem to make sense. It could also be a reaction to FUD; it’s hard to say.

More interestingly, the threat they find most viable is Cyberterrorism. That makes a lot of sense, because Cyberterrorism is cheap, can be done instantaneously, can be done remotely, and can be done with minimal skills and at minimal risk. It’s really hard to tell what’s Cyberterrorism versus what is just a normal for-profit attack, and attribution is largely an unsolvable problem if the attacker knows what they’re doing. Also, even if you can identify the correct adversary, extradition/rendition are tough problems.

There’s not a lot of substance here, because it’s all polls, but it’s interesting to see that our industry is at the top of the US intelligence community’s mind.

Security Guaranteed: Customers Deserve Nothing Less

WhiteHat Security Sentinel Elite

Ever notice how everything in the information security industry is sold “as is”? No guarantees, no warranties, no return policies. This provides little peace of mind that any of the billions that are spent every year on security products and services will deliver as advertised. In other words, there is no way of ensuring that what customers purchase truly protects them from getting hacked, breached, or defrauded. And when these security products fail – and I do mean when – customers are left to deal with the mess on their own, letting the vendors completely off the hook. This does not seem fair to me, so I can only imagine how a customer might feel in such a case. What’s worse, any time someone mentions the idea of a security guarantee or warranty, the standard retort is “perfect security is impossible,” “we provide defense-in-depth,” or some other dismissive and ultimately unaccountable response.

Still, the naysayers have a valid point. Given enough time and energy, everything can be hacked, including security products, but this admission does not inspire much confidence in those who buy our warez and whose only fear is getting hacked. We, as an industry, are not doing anything to alleviate that fear. With something as important as information security, I personally think customers deserve more assurance. I believe customers should demand accountability from their vendors in particular. I believe the “as is” culture in security is something the industry must move away from. Why? Because if it were incumbent upon vendors to stand by their product(s), we would start to see more pushback against the status quo and, perhaps, even renewed innovation.

At the core of the issue is bridging the gap between the “nothing-is-perfect” mindset and the business requirements for providing security guarantees.

If you think about it, many other industries already offer guarantees, warranties, or 100% return policies for less than perfect products. Examples include electronics, clothing, cars, lawn care equipment, and basically anything you buy on Amazon. As we know, all these items have defect rates, yet it doesn’t appear to prevent those sellers from standing behind their products. Perhaps the difference is, unlike most security vendors, these merchants know their product failure rates and replacement costs. This business insight is precisely why they’re willing to reimburse their customers accordingly. Security vendors by contrast tend NOT to know their failure rates, and if they do, the rates are likely horrible (anti-virus is a perfect example of this). As such, vendors are unwilling to put their money where their mouth is, the “as is” culture remains, and interests between security vendor and customer are misaligned.

The key then, is knowing the security performance metrics and failure rates (i.e. having enough data on how the bad guys broke in and why the security controls failed) of the products. With this information in hand, offering a security guarantee is not only possible, but essential!

WhiteHat Security is in a unique position to lead the charge away from selling “as is” and towards security guarantees. We can do this, because we have the data and metrics to prove our performance. Other Software-as-a-Service vendors could theoretically do the same, and we encourage them to consider doing so.

For example, at WhiteHat we help our customers protect their websites from getting hacked by identifying vulnerabilities and helping to get them fixed before they’re exploited. If the bad guys are then unable to find and exploit a vulnerability we missed, or if they decide to move on to easier targets, that’s success! Failure, on the other hand, is missing a vulnerability we should have found which results in the website getting hacked. This metric – the product failure rate – is something any self-respecting vulnerability assessment vendor should track very closely. We do, and here’s how we bring it all together:

  1. WhiteHat’s Sentinel scanning platform and the 100+ person army of Web security experts behind it in our Threat Research Center (TRC) tests tens of thousands of websites on a 24x7x365 basis. We’ve been doing this for more than a decade and we have a larger and more accurate website vulnerability data set than anyone else. We know with a fine degree of accuracy what vulnerabilities we are able to identify – and which ones we are not.
  2. We also have data sharing relationships with Verizon (and others) on the incident side of the equation. This is to say we have good visibility into what attack techniques the bad guys are trying and what they’re likely to successfully exploit. This insight helps us focus R&D resources towards the vulnerabilities that matter most.
  3. We also have great working relationships with our customers so that when something unfortunate does occur – which can be anything from something as simple as a ‘missed’ vulnerability, to a site that was no longer being scanned by our solution that contained a vulnerability, all the way to a real breach – we’re in the loop. This is how we can determine whether something we missed and should have found actually results in a breach.

Bottom line: in the past 10+ years of performing countless assessments and identifying millions of vulnerabilities, there have been only a small number of instances in which we missed a vulnerability that we should have found and that we know was likely used to cause material harm to our customers. All told, our failure rate is well below one hundredth of one percent (<0.01%), an impressive track record and one that we are quite proud of. I am not familiar with any other software scanning vendor who even claims to know their failure rate metric, let alone has the confidence to publicly talk about it. And it is for this reason that we can confidently stand behind our own security guarantee for customers with the new Sentinel Elite.

Introducing: Sentinel Elite

Sentinel Elite is a brand-new service line from WhiteHat in which we deploy our best and most comprehensive website vulnerability assessment processes. Sentinel Elite builds on the proven security of WhiteHat Sentinel, which offers the lowest false-positive rate of any web application security solution available, as well as more than 10 years of website vulnerability assessment experience. This service, combined with a one-of-a-kind security guarantee from WhiteHat, gives customers confidence in both their purchase decisions and the integrity of their websites and data.

Sentinel Elite customers will have access to a dedicated subject matter expert (SME) who expedites communication and response times and coordinates the internal and external activities supporting your application security program. The SME will also supply prioritized guidance, so customers know which vulnerabilities to fix first… or not! Customers also receive access to the WhiteHat Limited Platinum Support program, which includes a one-hour SLA, quarterly summaries and exploit reviews, and a direct line to our TRC. Sentinel Elite customers must in turn provide us with what we need to do our work, such as valid website credentials, and take action to remediate identified vulnerabilities. Provided everyone does what they are responsible for, our customers can rest assured that their websites and critical applications will not be breached. And we are prepared to stand behind that claim.

If it happens that a website covered by Sentinel Elite gets hacked, specifically using a vulnerability we missed and should have found, the customer will be refunded in full. It’s that simple.

We know there will be those in the community who will be skeptical. That’s the nature of our industry and we understand the skepticism. In the past, other security vendors have offered half-hearted or gimmicky guarantees, but that’s not what we’re doing here. We’re serious about web security, we always have been. We envision an industry where outcomes and results matter, a future where all security products come with security guarantees, and most importantly, a future where the vendors’ best interests are in line with their customers’ best interests. How amazing would that be not only for customers but also for the Internet and the world we live, work and do business in? Sentinel Elite is the first of many steps we are taking to make this a reality.

For more information about Sentinel Elite, please click here.

Better Single Sign-On for WhiteHat Security Customers

WhiteHat has just integrated PingFederate into Sentinel to provide better single sign-on support to our customers. With single sign-on, your own portal can require exactly what you want from someone logging in: a username and password, an RSA token code, a text-back number, a thumb scan – whatever you prefer to uniquely identify your users. This allows you to make your own security as tight as you like.

Once a user has logged in, their authentication can be “federated” – exchanged securely – with other portals, either locally or at other sites. This is where WhiteHat’s PingFederate integration comes in. That login – secured however you want to secure it – can now be transmitted to WhiteHat’s PingFederate instance, which validates it and then passes it on to Sentinel to actually do the login. Only the fact that the user is valid, plus their email address, is exchanged; passwords, internal user IDs, and any other identifying information remain on your server and never leave it. This means you can now authenticate your Sentinel users as stringently as you do those for your own applications.

That alone is pretty nice, but we also support single sign-on for deep links into Sentinel. Those links actually point to our PingFederate instance, which uses the link to bounce you back to your own single sign-on portal and then back into Sentinel with the federated authentication. If you’re already logged in, the process simply picks up the authentication automatically and takes you straight into Sentinel.

Beyond these basic features, it’s now possible for us to extend single sign-on to provide more, such as automatic provisioning of Sentinel users: adding and removing users in Sentinel simply by doing the same with them locally on your sign-on portal. This makes it much easier for your admins to manage Sentinel users, since they’re just like any other users on your system. We’re also looking into integrating our Customer Success Center with PingFederate as well, to make accessing Sentinel and the Customer Success Center easier still.

Remember, you need your own single sign-on solution to connect up with us – if you have one already, give us a call and ask us about integrating Sentinel. One less ID to remember, and one less password to forget!

The Ghost of Information Disclosure

Information disclosure is a funny thing. Information disclosure can be almost completely innocuous or — as in the case of Heartbleed — it can be devastating. There is a new website called un1c0rn.net that aims to make hacking a lot easier by letting attackers utilize Heartbleed data that has been amassed into one place.

The business model is simple: 0.01 Bitcoins (around $5) for data. It leaves no traces on the remote server because the data isn’t stored there anymore; it’s on un1c0rn’s server. So let’s play a sample attack out.

1) Heartbleed comes out;

2) Some time in the future un1c0rn scans a site that is vulnerable and logs it;

3) A would-be attacker searches through un1c0rn and finds a site of interest;

4) Attacker leverages the information to successfully mount an attack against the target server leveraging the data.

In this model, the attacker’s first packet to the server in question could be the one that compromises it. But it’s actually more interesting than that. As I was looking through the data, I found this query.


For those of you who don’t live and breathe HTTP, this is an authorization request with a base64-encoded string (which is trivial to reverse) that contains the username and password for the site in question. This one request turned up 400 sites with the same flaw. So let’s play out another attack scenario.
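To make “trivial to reverse” concrete, here is a minimal sketch; the header value and credentials are invented for illustration:

```python
import base64

# A captured header of the kind un1c0rn indexes. The credentials here
# are made up for illustration.
header = "Authorization: Basic YWxpY2U6czNjcjN0"

# "Decoding" is just base64 -- no key, no secret, no effort required.
encoded = header.split("Basic ")[1]
username, _, password = base64.b64decode(encoded).decode("utf-8").partition(":")

print(username, password)  # alice s3cr3t
```

Basic auth only obscures credentials; without TLS (or with TLS bled dry by Heartbleed), they are effectively plaintext.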

1) Heartbleed comes out;

2) Some time in the future un1c0rn scans a site that is vulnerable and logs it;

3) The site is diligent, finds that it is vulnerable, patches immediately, and switches out its SSL certificate for a new one;

4) A would-be attacker searches through un1c0rn and finds a site of interest;

5) Using the information they found, they compromise the site with the username/password anyway, even though the site is no longer vulnerable to the attack in question.

This is the problem with Information Disclosure: it can still be useful long after the hole that was used to gather the data has been closed. That’s why, in the case of Heartbleed and similar attacks, not only do you have to fix the hole, you also have to expire all of the passwords and remove all of the cookies or any other way that a user could gain access to the system.
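As a sketch of that cleanup (the data structures and function name here are hypothetical, not any particular site’s code), a post-patch response has to rotate everything a leak could have exposed, not just the library:

```python
import secrets

def invalidate_leaked_state(users, sessions):
    """Hypothetical post-Heartbleed cleanup: leaked credentials and
    cookies outlive the hole itself, so they all have to be rotated."""
    for user in users:
        user["must_reset_password"] = True  # force a password change
    sessions.clear()                        # kill every live session id
    return secrets.token_hex(32)            # fresh cookie-signing secret

users = [{"name": "alice"}, {"name": "bob"}]
sessions = {"8f3a1c": "alice"}
new_secret = invalidate_leaked_state(users, sessions)
```

Patching plus a new certificate closes the hole going forward; this step is what closes it backward.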

The moral of the story is that you may find yourself being compromised seemingly by magic in a scenario like this. How can someone guess a cookie correctly on the first attempt? Or guess a username/password on the first try? Or exploit a hole without ever having looked at your proprietary source code or even having visited your site before? Or find a hidden path to a directory that isn’t linked to from anywhere? Well, it may not be magic – it may be the ghost of Information Disclosure coming back to haunt you.


SSL/TLS MITM vulnerability CVE-2014-0224

We are aware of the OpenSSL advisory posted at https://www.openssl.org/news/secadv_20140605.txt. OpenSSL is vulnerable to a ChangeCipherSpec (CCS) Injection Vulnerability.

An attacker using a carefully crafted handshake can force the use of weak keying material in OpenSSL SSL/TLS clients and servers.

The attack can only be performed between a vulnerable client *and* a vulnerable server. Desktop web browser clients (e.g. Firefox, Chrome, Internet Explorer) and most mobile browsers (e.g. Safari Mobile, Firefox Mobile) are not vulnerable because they do not use OpenSSL. Chrome on Android does use OpenSSL and may be vulnerable. All other OpenSSL clients are vulnerable in all versions of OpenSSL.

Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.

OpenSSL 0.9.8 SSL/TLS users (client and/or server) should upgrade to 0.9.8za.
OpenSSL 1.0.0 SSL/TLS users (client and/or server) should upgrade to 1.0.0m.
OpenSSL 1.0.1 SSL/TLS users (client and/or server) should upgrade to 1.0.1h.
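One quick way to check a given host: Python’s ssl module reports the OpenSSL build it is linked against, which tells you whether that runtime needs one of the upgrades above (a server’s system-wide OpenSSL may differ and should be checked separately):

```python
import ssl

# The version string of the OpenSSL library this Python is linked
# against, e.g. "OpenSSL 1.0.1h 5 Jun 2014" on a patched host.
print(ssl.OPENSSL_VERSION)

# A numeric tuple, usable for programmatic comparisons.
print(ssl.OPENSSL_VERSION_INFO)
```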

WhiteHat is actively working on implementing a check for sites under service. We will update this blog with additional information as it is available.

Editor’s note:
June 6, 2014
WhiteHat has added testing to identify websites currently running affected versions of OpenSSL across all of our DAST service lines. These vulnerabilities will open as “Insufficient Transport Layer Protection” in the Sentinel interface. WhiteHat recommends that all assets, including non-web application servers and sites not currently under service with WhiteHat, be tested and patched.

If you have any questions regarding the new CCS Injection SSL vulnerability, please email support@whitehatsec.com and a representative will be happy to assist.

Easy Things Are Often the Hardest to Get Right: Security Advice from a Development Manager

I’m not a security guy. I haven’t done much hands-on software development for a while either. I’m a development manager, project manager, and CTO, having spent much of my career building technology for stock exchanges and central banks. About six years ago I helped to launch an online institutional trading platform in the US, where I serve as the CTO today. The reliability and integrity of our technology and operations are critically important, so we worked with some very smart people in the info sec community to make sure that we designed and built security into our systems from the start.

Through this work I got to know about OWASP, SAFECode and WASC, and met many of the people involved in trying to make the software world secure. Over the last couple of years I’ve helped out on some OWASP projects, most recently the OWASP Top 10 Proactive Controls (a Top 10 list for developers instead of security auditors) and at the SANS Institute in their SANS Analyst program. My special areas of interest right now are secure development with Agile and Lean methods, and integrating security into DevOps.

I’ve learned that preaching about security principles (“economy of mechanism” and “complete mediation” and “least common mechanism” and whatever else Saltzer and Schroeder talked about 40 years ago) and publishing manifestos is a waste of time. There’s nothing actionable; nothing that developers can see or use or test.

I’ve also learned that you can’t legislate “security-in”. Regulatory compliance constraints and formal security policies don’t make software more secure. The best that they can do is act as leverage to get security on the table.

I am only interested in things that work, especially things that work for small teams. Because a lot of software is built by small teams and small companies like mine, and because what works for small teams can be scaled up: look at the success of Agile as it makes its way from teams to the enterprise. I want things that developers can do on their own, without the help of a security team – because small companies don’t have security teams (or even a “security guy”), and because the only way that appsec can succeed at scale is if developers make it part of their jobs.

Here’s what I’ve found works so far:

In design: trust, but verify

When designing, or changing the design, make sure that you understand your dependencies and system contracts, the systems and services and data you depend on – or that depend on you. What happens if a service or system fails or is slow or occasionally times out? Can you trust the quality of the data that you get from them? Can you trust that they will take good care of your data? How can you be sure?

All input is evil

Michael Howard at Microsoft has “one rule” for developers: if there’s one thing that developers can do to make applications more secure, it’s to recognize that

All input is evil unless proven otherwise

and it’s up to developers to defend against evil through careful data checking. Get to know – and love – type-checking and regex.

This is simple to explain and easy to understand. It’s something that developers can think about in design and in coding. There are testing tools that can help check on it (through taint analysis or fuzzing). It will pay back in making the system more reliable and more resilient, catching and preventing run-time errors and failures, as well as making the system more secure.
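As a small illustration of the type-checking-plus-regex habit (the identifier format here is invented), whitelist what valid input looks like and reject everything else:

```python
import re

# Whitelist pattern for a hypothetical account-id format like "US-123456".
ACCOUNT_ID = re.compile(r"^[A-Z]{2}-\d{6}$")

def parse_account_id(raw):
    """All input is evil until it matches the whitelist."""
    if not isinstance(raw, str) or not ACCOUNT_ID.match(raw):
        raise ValueError("invalid account id")
    return raw

parse_account_id("US-123456")            # accepted
# parse_account_id("US-1' OR '1'='1")    # raises ValueError
```

Whitelisting valid shapes scales better than trying to enumerate bad values, and the same check doubles as a reliability guard against malformed data.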

Write good code

Security builds on top of code quality. Bad code, code with serious bugs, code that is too hard to understand and unsafe to change, will never be secure. Look at some of the most serious recent security problems: the Apple iOS SSL bug and the Heartbleed OpenSSL bug. These weren’t failures in understanding appsec principles or gaps in design – they were caused by sloppy coding.

Identify your high-risk code: security plumbing and other plumbing, network-facing APIs, code that deals with money or control functions or sensitive data, core algorithms. Make sure that your best developers work on this code, that they have extra time to review it carefully to make sure that the code does what it is supposed to do, and that it “won’t boink” if it hits bad data or some other exception occurs. Step through the logic, check API contracts, data validation, error handling and exception handling. Spend extra time testing it.

Never stop testing

Agile and XP and TDD/ATDD and Continuous Integration have made developers responsible for testing their own software to verify that it works by writing automated tests as much as possible. This needs to be extended to security. Developers can write security unit tests and run them in Continuous Integration.
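A security unit test can look just like any other unit test. This sketch (render_comment is a stand-in function, not any particular framework’s API) pins down an XSS regression so CI catches it:

```python
import unittest
from html import escape

def render_comment(comment):
    # Stand-in application code under test: contextually encodes
    # user input before it reaches the HTML response.
    return "<p>" + escape(comment, quote=True) + "</p>"

class SecurityRegressionTests(unittest.TestCase):
    def test_script_tag_is_neutralized(self):
        self.assertNotIn("<script>", render_comment("<script>alert(1)</script>"))

    def test_attribute_breakout_is_neutralized(self):
        self.assertNotIn('"', render_comment('" onmouseover="evil()'))

# In CI this runs with the rest of the suite, e.g. `python -m unittest`.
```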

Find a good security testing tool – a dynamic scanner or a static analysis tool – that is affordable and fast and generates minimal noise, and make it part of your automated testing pipeline. Run it as often as possible. Bugs that are caught faster will be fixed faster and cost less to fix. Make security testing something that developers don’t have to think about – it’s always “just there.”

Automated testing is an important part of a testing program, but it isn’t enough to build good software. Like some other shops, we also do a lot of manual exploratory testing. A skilled tester working through key features of the app, pushing the boundaries, trying to understand and break things will find important bugs and tell you if the software really is ready. A good pen test is exploratory testing on steroids. For every bug they find, ask: Why is it broken? How did we miss it? What else could be wrong? Pen tests as a security gate are too little too late. But as a health check on your SDLC? Priceless.

Vulnerabilities are bugs – so fix them

Money spent on training and tools and testing is wasted if you don’t fix the bugs that you find. Security vulnerabilities are bugs – you may have to track them separately for compliance reasons, but to developers they should just be bugs, handled and prioritized and fixed like any other bug. If you have a development culture where bugs are taken seriously (Zero Bug Tolerance), and security vulnerabilities are bugs, then security vulnerabilities will get fixed. If developers aren’t fixing bugs, then you have a serious management problem that you need to take care of.

Training works – to a point

I agree that every developer and tester (and their managers) should get some training in secure development, so that everyone understands the fundamentals. But not everybody will “get it” when it comes to secure development – and not everybody has to. Adobe’s karate belt approach to appsec training makes good sense. Everybody should get white belt level awareness training. Every team should have somebody the rest of the team looks to for technical leadership and mentoring. These people are the ones who need in-depth black belt appsec training, and who can take best advantage of it. They can do most of the heavy-lifting required, help the rest of the team on security questions, and keep the team honest.

Simple is not so simple

These are the things that I’ve found that work: simple rules, simple tools, simple good practices. Make sure that at least your best developers are trained in application security. Understand trust and layering-in design. Write solid, defensive code. Test your code for security like you test it for correctness and usability. If you find security bugs, fix them.

Sounds easy. But what I’ve learned from working in software development for a long time is that the easy things are often the hardest to get right.

About Jim Bird
Jim is a development manager and CTO who has worked in software engineering for more than 25 years. He is currently the co-founder and CTO of a major US-based institutional trading service, where he manages the company’s technology organization and information security programs. Jim has worked as a consultant to IBM and to stock exchanges and banks globally. He was also the CTO of a technology firm (now part of NASDAQ OMX) that built custom IT solutions for stock exchanges and central banks in more than 30 countries. You can follow him on his blog at Building Real Software.

About our “Unsung Hero Program”
Every day app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more click here.

A Tale of Responsible Disclosure

In the past few months I’ve taken on cycling as a new hobby, and one of the services I have been using to track routes and mileage is a popular webapp known as MapMyRide. They actually have several services that are split up per activity, including MapMyFitness, MapMyHike, MapMyWalk, and MapMyRun. Several colleagues have also joined me in recreational cycling, so one day I decided to plan out a route for us to use repeatedly. After the route was created using MapMyRide, I decided to give it the quirky name Tour de <'hackers">. It would get a laugh out of any XSS hunter, because most of us in application security would recognize it as a benign HTML tag that also checks for single- and double-quote attribute breaks.

Take a close look at the “Share via Tweet/Facebook/Email” section


Oh my…
Whelp, that’s awesome, I’ve now inadvertently done the number one thing an ethical pentester should never do, and that’s play “outside of the sandbox.” You see, companies that don’t have an extremely active security team don’t really distinguish between a security professional who may be using the application legitimately and an actual intruder who’s digging for exploits. Those of us who are paid to ethically hack into web applications and services are normally protected by various legal documents as per the contract of said penetration tests. XSS isn’t really hard to find – and truthfully I’m quite annoyed with all of its hype – but to a company it really doesn’t matter how cool or complex the exploit is so long as it’s damaging in some fashion to the business or brand. This particular one is persistent and easily wormable. If anyone remembers using MySpace (I admit that I did and accept the shame for it) they might be familiar with the MySpace Samy Worm that infected around a million profiles in less than 24 hours. Samy Kamkar’s XSS injection made you friend-request him and added a little text to your profile page that said “but most of all, samy is my hero.” Silly, right? Well, what’s not silly is that his home was raided and he ended up entering a felony plea agreement that involved three years’ probation without computer use.

Let’s look at the facts. I don’t know if the makers of MapMyRide have an intrusion detection system (IDS) in place or if anything has flagged internally on this input. A quick Google search doesn’t reveal anything about them participating in bug bounties or responsible disclosure. I also did this on an account that is quite visibly linked to my personal Facebook account, so there’s a first and last name right there. All I can do now is approach them with what I’ve found, state my case and my intent, and hope that they see me as a “do-good” individual who wants to ethically disclose the issue.

I hunted down a MapMyRide co-owner’s Twitter handle through LinkedIn, and he referred me to their VP of Engineering, Jesse Demmel. Now I can’t just email him saying that I was able to break the HTML document, because that might not warrant enough attention. I’m going to have to send them an actual proof of concept and cause more injections to go through. Thankfully (open to interpretation) there wasn’t any input validation in place at all, and we could pull up the cookies easily with <svg/onload=alert(document.cookie)>. Voilà.
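For comparison, this is all it takes to render that payload inert: contextual output encoding, sketched here with Python’s stdlib escape (MapMyRide itself ran Django, where template auto-escaping serves the same role):

```python
from html import escape

payload = "<svg/onload=alert(document.cookie)>"

# Encoded for an HTML context, the markup can no longer execute.
print(escape(payload, quote=True))
# &lt;svg/onload=alert(document.cookie)&gt;
```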


Then things started getting a bit more interesting. I realized that the other domains shared this exact same template. This injection isn’t only persistent on mapmyride.com, but on the other four as well!





So, if I were to actually exploit this across all five domains from one entry point, how many people would I affect?

Here are some WolframAlpha statistics:


Based on those numbers, total page views across all sites are approximately 1,353,300 per day with 431,800 daily visitors. That’s not too shabby of a victim pool. With this injection I could make anyone who viewed any of these “routes” deactivate their account, change their profile information, change their profile pictures, steal their addresses, make things they’ve set as private turn public, have them each create support tickets to overload customer service, and so on, all while causing each victim to become a carrier of the injection, which would cause an exponential increase in infection. Basically, anything within the context of the application is now accessible to me through JavaScript execution. I then carefully worded an email to Mr. Demmel explaining the severity and impact of the vulnerability, and he responded in less than 24 hours:

“Hey Jonathan,

Sorry I didn’t get back to you sooner as I was traveling. Thanks for the details. We take this very seriously and are actively working on a fix, which we hope to push live shortly.

We have escaping turned on in Django and are using the latest version. This will require a patch and we’re working through that today (and the weekend if necessary).

I really appreciate you bringing this to our attention. We have gone through security audits in the past and have another scheduled soon.


The moral of this story: If you’re going to responsibly disclose I suggest the following things:

  1. Contact the highest person in the organizational chain that you can, right from the start. If you start by contacting someone like a customer support representative, they may have no idea what you’re talking about. I’ve reported a bug before, and my first contact thought I was talking about spelling/grammar mistakes. This time I went straight to the co-owner.
  2. Don’t threaten with blog posts or say “I’ve reached out to Gawker, HuffingtonPost, or Threatpost and they’re ready to run this story, whatcha gonna do to fix it?” You’ll look like the enemy really quick and won’t be doing yourself any favors.
  3. Don’t ask for a reward. There’s a simple word for this: extortion.
  4. Simply make your intentions known. Know how to spot XSS, SQLi, or RCE by casually browsing an application that you use personally? You’re one-in-a-million (quite literally). Use your knowledge not only to report the problem, but also to tell them how they can fix it in case they are unaware. Intentions and thoughtfulness will go the longest for you in these regards.
  5. If you’re going to blog about it, like I’m doing here, wait until the issue is fixed and no longer exploitable (at least by the same attack vector). Asking for permission is wise as well, so that they can coordinate a release statement at the same time. Again, work with them, not against them.

Happy hacking.

Podcast: An Anthropologist’s Approach to Application Security

In this edition of the WhiteHat Security podcast, I talk with Walt Williams, Senior Security and Compliance Manager at Lattice Engines. Hear how Walt went from anthropologist to security engineer, his approach to building relationships with developers, and why his biggest security surprise involved broken windshields.

Want to do a podcast with us? Sign up to become an “Unsung Hero”.

About our “Unsung Hero Program”
Every day, app sec professionals tirelessly protect the Web, and we recognize that this is largely owed to a series of small victories. These represent untold stories. We want to help share your story. To learn more, click here.

Teaching Code in 2014

Guest post by – JD Glaser

“Wounds from a friend can be trusted, but an enemy multiplies kisses” – Proverbs 27:6

This proverb, over 2,000 years old, applies directly to everyone writing programming material today. By avoiding security coverage, or by explicitly teaching insecure examples, authors do the world at large a huge disservice, multiplying both the effect of incorrect knowledge and the development of insecure habits in developers. The teacher becomes the enemy when his teachings encourage poor practices by example, which ultimately bites the student in the end at no cost to the teacher. In fact, if you are skilled enough to be an effective teacher, the problem is worse than it is for poor teachers: good teachers, by virtue of their teaching skill, greatly multiply the ‘kisses’ of poor example code that eventually becomes ‘acceptable production code’ ad infinitum.

So, to the authors of programming material everywhere, whether you write books or blogs, this article is targeted at you. By choosing to write, you have chosen to teach. Therefore you have a responsibility.

No Excuse for Demonstrating Insecure Code Examples

It is 2014. There is simply no excuse for not demonstrating and explaining secure code within the examples of your chosen language.

Examples are critical. First, examples teach. Second, examples, one way or the other, find their way into production code. This is a simple fact with huge ramifications. Once a technique is learned, once that initial impact is made upon the mind, that newly learned technique becomes the way it is done. When you teach an insecure example, you perpetuate that poor knowledge in the world and it repeats itself again and again.

Change is difficult for everyone. Pepsi learned this the expensive way: once people accepted that Coke was It, hundreds of millions of dollars spent on really cool advertising could not change that mindset. The mind has only one slot for number one, and Pepsi has remained second ever since. Don’t continue to reinforce security in the mind as second.

Security Should Be Ingrained, Not Added

When teaching, even if you are ‘just’ calculating points on a sample graph for those new to chart programming, if any of that data comes from user-supplied input via that simple AJAX call that gets the user going, your sample code should filter that data. When you save it, your sample code should escape it for the chosen database; when you display it to the user, your sample code should escape it for the intended display context. When your sample data needs to be encrypted, your sample code should apply modern cryptography. There are no innocent examples anymore.

Security should be ingrained, not added. Programmers need to be trained to see security measures in line with normal functionality. In this way, they are trained to incorporate security measures initially as code is written, and also to identify when security measures are missing.

When security measures are removed for the sake of ‘clarity’, the student is unconsciously taught that security makes things less clear. The student is also trained, directly by example, to add security ‘later’.
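To make the point concrete, here is a minimal sketch (the function and sample data are hypothetical, not from any of the books discussed) of what ingraining display-context escaping looks like in a JavaScript example. In production you would lean on your template engine’s auto-escaping rather than a hand-rolled helper, but even a toy chart-label example can demonstrate the habit:

```javascript
// Minimal HTML-escaping helper for the display context.
// Illustrative only; real code should prefer a template
// engine's built-in auto-escaping.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A chart label that arrived from user-supplied input:
const label = '<script>alert(1)</script>';

// Escaped at the moment it is placed into the page:
console.log('<span class="label">' + escapeHtml(label) + '</span>');
```

Those few extra lines cost the author almost nothing, and the student now sees escaping as part of normal rendering rather than an optional afterthought.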

Learning Node.js In 2014

Most of the latest Node.js books have some great material, but only one out of several took the time to properly teach, via integrated code examples, how to properly escape Node.js code for use with MySQL. Kudos to Pedro Teixeira, who wrote Professional Node.js from Wrox Press, for teaching proper security measures as an integral part of the lesson to those adapting to Node.js.

Contrast this with Node.js for PHP Developers from O’Reilly Press, where the examples explicitly demonstrate how to write insecure Node.js code riddled with SQL injection holes. The code in this book actually teaches the next new wave how to build servers with wide-open vulnerabilities. Considering the rapid adoption of Node.js for server applications, and the millions who will use it, this is a real problem.

The fact that the author chose the retired legacy method of building SQL statements through insecure string concatenation as the instructional method for this book really demonstrates the power of ingrained learning. Once something is learned, for good or for bad, a person keeps repeating what they know for years. Again, change is difficult.
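The contrast is easy to show side by side. Below is a hedged sketch: the first function is the legacy concatenation pattern criticized above, and the second is a toy placeholder formatter that exists only to illustrate the principle of binding escaped values. Real Node.js code should use the driver’s own parameterized queries, e.g. the mysql package’s `connection.query('SELECT * FROM users WHERE name = ?', [name], callback)`; the names here are invented for illustration:

```javascript
// INSECURE: legacy string concatenation, the pattern the book teaches.
function findUserInsecure(name) {
  return "SELECT * FROM users WHERE name = '" + name + "'";
}

// SAFER PRINCIPLE: values are escaped and bound via placeholders.
// Toy formatter for illustration only; use the driver's
// parameterized queries in real code.
function format(sql, values) {
  let i = 0;
  return sql.replace(/\?/g, () => {
    const v = String(values[i++]).replace(/'/g, "''"); // double up quotes
    return "'" + v + "'";
  });
}

const hostile = "x' OR '1'='1";
console.log(findUserInsecure(hostile)); // injected: WHERE clause is always true
console.log(format('SELECT * FROM users WHERE name = ?', [hostile])); // quotes neutralized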

The developers taught by these code examples will build apps that we may all use at some point, and the security vulnerabilities they contain will affect everyone. It makes one wonder whether you are the author whose book or blog taught the person who coded the latest Home Depot penetration. This is code that will have to be retrofitted later at great time and great expense. It is not a theoretical problem, and it has a large dollar figure attached to its cost.

For some reason, authors of otherwise good material choose to avoid teaching security in an integral way. Either they are not knowledgeable about security, in which case it is time to up their game, or they have fallen into the ‘clarity omission’ trap: the widespread practice, adopted by almost everyone, of declaring that since ‘example code’ is not production code, insecure code is excusable, and therefore critical facts are not presented. This is a mistake with far-reaching implications.

I recently watched a popular video on PHP training from an instructor who is, in all other respects, a good instructor. Apparently, this video has educated at least 15,000 developers so far. In it, the author briefly states that output escaping should be done. In his very next statement, however, he declares that for the sake of brevity the technique won’t be demonstrated, and the examples distributed as part of the training do not incorporate it. The opportunity to ingrain the idea that output escaping is necessary, and that it should be an automatic part of a developer’s toolkit, has just been lost on the majority of those 15,000 students, because the code they will later turn to for reference is lacking. Most, if not all, will ignore the practice until mandated by an external force.

Stop the Madness

In the real world, when security is not incorporated at the beginning, it costs additional time and money to retrofit it later. This cost is never paid until a breach is announced on the front page of a news service and someone is sued for negligence. Teachers have a responsibility to teach this.

Coders code as they are taught. Coders are primarily taught through books and blog articles. Blog articles are especially critical for learning as they are the fastest way to learn the latest language technique. Therefore, bloggers are equally at fault when integrated security is absent from their examples.

The fact is that if you are a writer, you are a trainer. You are in fact training a developer how to do something over and over again. Security should be integral. Security books on their own, as secondary topics, should not be needed. Think about that.

The past decade has been spent railing against insecure programming practices, but the question needs to be asked, who is doing the teaching? And what is being taught?

You, the writer, are at the root of this widespread security problem. Again, this is 2014, and these issues have been in the spotlight for more than ten years. This is not about mistakes, or about being a security expert; it is about the complete avoidance of the basics. What habit is a young programmer learning now that will affect the code going into service next year, and the years after?

The Truth Hurts

If you are still inadvertently training coders how to write blatantly insecure code, either through ignorance or omission, you have no business training others, especially if you are successful. If you want to teach, if you make money teaching programming, you need to stop the madness, educate yourself, and reverse the trend.

Write those three extra lines of filtering code and add that extra paragraph explaining the purpose. Teach developers how to see security ingrained in code.
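Those “three extra lines” can really be that short. As a hedged illustration (the endpoint and sample data are hypothetical), here is what filtering user-supplied chart input adds to a JavaScript example:

```javascript
// A hypothetical chart endpoint receives data points as strings
// from an AJAX call; one entry is hostile.
const rawPoints = ['3', '7', '42; DROP TABLE charts', '9'];

// The "three extra lines": filter to clean numeric values before use.
const points = rawPoints
  .map(Number)                      // coerce each string to a number
  .filter(n => Number.isFinite(n)); // drop anything that isn't a clean number

console.log(points); // logs [ 3, 7, 9 ]; the hostile entry is rejected
```

Two method calls and a comment: that is the entire cost of teaching input filtering as a habit instead of an afterthought.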

Stop multiplying the sweet kiss of simplicity and avoidance. Stop making yourself the enemy of those who like to keep their credit cards private.

About JD
In his own words, JD “did a brief stint of 15 years in security. Headed the development of TripWire for Windows and Foundscan. Founded NT OBJECTives, a company to scan web stuff. BlackHat speaker on Windows forensic issues. Built the Windows Forensic Toolkit and FPort, which the Department of Justice has used to help convict child pornographers.”

Details on Internet Explorer Zero-Day Exploit

A new zero-day exploit for Internet Explorer was disclosed on Saturday by FireEye Research Labs. At its core, the exploit takes advantage of a known Flash technique to access memory. Memory is then corrupted in a way that completely bypasses Microsoft Windows’ built-in protections, giving the attacker full control and allowing him to run his own maliciously crafted code on the victim’s machine. Internet Explorer versions 6-11 are all currently vulnerable. Details of the exploit can be found here: http://www.fireeye.com/blog/uncategorized/2014/04/new-zero-day-exploit-targeting-internet-explorer-versions-9-through-11-identified-in-targeted-attacks.html.

Since the vulnerability relies on corrupting memory through Flash, an easy mitigation is simply to disable Flash. In addition, if you are using a different browser, such as Firefox or WhiteHat’s Aviator, you will not be affected. There have already been known attacks exploiting the new IE vulnerability, so users are encouraged to take immediate action to mitigate their risk.

For users interested in an alternative browser to Internet Explorer, WhiteHat Aviator is now available for Windows users and can be downloaded here: https://www.whitehatsec.com/aviator/.