URLs are content

Justifications for the federal government’s controversial mass surveillance programs have involved the distinction between the contents of communications and associated “meta-data” about those communications. Finding out that two people spoke on the phone requires less red tape than listening to the conversations themselves. While “meta-data” doesn’t sound especially ominous, analysts can use graph theory to draw surprisingly powerful inferences from it. A funny illustration of that can be found in Kieran Healy’s blog post, Using Metadata to find Paul Revere.

On November 10, the Third Circuit Court of Appeals ruled that web browsing histories are “content” under the Wiretap Act, which implies that the government will need a warrant before collecting them. Wired summarized the point the court was making:

A visit to “webmd.com,” for instance, might count as metadata, as Cato Institute senior fellow Julian Sanchez explains. But a visit to “www.webmd.com/family-pregnancy” clearly reveals something about the visitor’s communications with WebMD, not just the fact of the visit. “It’s not a hard call,” says Sanchez. “The specific URL I visit at nytimes.com or cato.org or webmd.com tells you very specifically what the meaning or purport of my communications are.”

Interestingly, the party accused of violating the Wiretap Act in this case wasn’t the federal government. It was Google. The court ruled that Google had collected content in the sense of the Wiretap Act, but that’s okay because you can’t eavesdrop on your own conversation. I’m not an attorney, but the legal technicalities were well explained in the Washington Post.

The technical technicalities are also interesting.

Basically, a cookie is a secret between your browser and an individual web server. The secret takes the form of a key-value pair, like id=12345. Once a cookie is “set,” it will accompany every request the browser sends to the server that set it. If the server makes sure that each browser it interacts with gets a different cookie, it can distinguish individual visitors. That’s what it means to be “logged in” to a website: after you prove your identity with a username and password, the server assigns you a “session cookie.” When you visit https://www.example.com/my-profile, you see your own profile because the server read your cookie, and that cookie was tied to your account when you logged in.

Cookies can be set in two ways. The browser might request something from a server (HTML, JavaScript, CSS, image, etc.). The server sends back the requested file, and the response contains “Set-Cookie” headers. Alternatively, JavaScript on the page might set cookies using document.cookie. That is, cookies can be set server-side or client-side.
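As a rough sketch of the server-side path, here is a hypothetical Node server that recognizes a returning browser by the cookie it set on the first visit. The cookie name, value, and port are invented for illustration; this is not any particular site’s code.

```ts
import http from 'http';
import crypto from 'crypto';

// Server-side: any HTTP response can carry a Set-Cookie header.
http.createServer((req, res) => {
  const cookie = req.headers.cookie;                // whatever this browser sent back
  if (!cookie) {
    const id = crypto.randomBytes(16).toString('hex');
    res.setHeader('Set-Cookie', `id=${id}`);        // "set" the secret on the first visit
    res.end('First visit: cookie set');
  } else {
    res.end(`Welcome back, bearer of ${cookie}`);   // same browser, recognized
  }
}).listen(3000);

// Client-side alternative: JavaScript running in the page can write a cookie itself:
//   document.cookie = 'id=12345';
```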

A cookie is nothing more than a place for application developers to store short strings of data. These are some of the common security considerations with cookies:

  • Is inappropriate data being stored in cookies?
  • Can an attacker guess the values of other people’s cookies?
  • Are cookies being sent across unencrypted connections?
  • Should the cookies get a special “HttpOnly” flag that makes them JavaScript-inaccessible, to protect them from potential cross-site scripting attacks?

OWASP has a more detailed discussion of cookie security here.
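To tie that list to code, here is a minimal sketch of the relevant attributes, assuming an Express application (the /login route and cookie name are hypothetical). The considerations above mostly come down to what goes into the cookie and which flags it is set with.

```ts
import express from 'express';
import crypto from 'crypto';

const app = express();

app.post('/login', (req, res) => {
  // ...verify the username and password first...

  // Unguessable value: 128 bits of randomness, not a counter like id=12345.
  const sessionId = crypto.randomBytes(16).toString('hex');

  res.cookie('session', sessionId, {
    httpOnly: true,   // not readable via document.cookie, so XSS can't steal it
    secure: true,     // only ever sent over encrypted (HTTPS) connections
    sameSite: 'lax',  // not attached to most cross-site requests
  });
  res.send('Logged in');
});

app.listen(3000);
```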

When a user requests a web page and receives an HTML document, that document can instruct their browser to communicate with many different third parties. Should all of those third parties be able to track the user, possibly across multiple websites?

Enough people feel uncomfortable with third-party cookies that browsers include options for disabling them. The case before the Third Circuit Court of Appeals was about Google’s practices in 2012, which involved exploiting a browser bug to set cookies in Apple’s Safari browser, even when users had explicitly disabled third-party cookies. Consequently, Google was able to track individual browsers across multiple websites. At issue was whether the list of URLs the browser visited consisted of “content.” The court ruled that it did.

The technical details of what Google was doing are described here.

Data is often submitted to websites through HTML forms. It’s natural to assume that submitting a form is always intentional, but forms can also be submitted by JavaScript, without any user interaction. It’s just as easy to assume that if a user submits a form to a server, they’ve “consented” to communicating with that server. That assumption led to the bug that Google exploited.

Safari prevented third-party servers from setting cookies unless a form had been submitted to the third party. Google supplied code that made browsers submit forms to its servers without any user interaction, and in response to those form submissions, tracking cookies were set. This circumvented the user’s instruction to the browser not to honor third-party Set-Cookie headers. Incidentally, automatically submitting a form through JavaScript is also how an attacker carries out a cross-site request forgery attack, and it is a common payload in cross-site scripting attacks.
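To make the mechanism concrete, here is a generic sketch of an automatically submitted form. It is an illustration of the technique, not Google’s actual code; the tracker.example endpoint and field names are made up.

```ts
// Runs in the page, with no visible UI and no click required.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://tracker.example/collect';  // hypothetical third-party endpoint

const field = document.createElement('input');
field.type = 'hidden';
field.name = 'ping';
field.value = '1';
form.appendChild(field);

document.body.appendChild(form);
form.submit();  // the browser now sends a request to tracker.example and,
                // under Safari's old rule, treats any Set-Cookie header in the
                // response as the result of a form the user chose to submit
```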

To recap: Apple and Google had a technical arms race about tracking cookies. There was a lawsuit, and now we’re clear that the government needs a warrant to look at browser histories, because URL paths and query strings are very revealing.

The court suggested that there’s a distinction to be made between the domain and the rest of the URL, but that suggestion was not legally binding.

Buyer beware: Don’t get more than you bargained for this Cyber Monday

Cyber Monday is just a few days away now, and no doubt this year will set new records for online spending. Online sales in the US alone are expected to reach $3 billion on Cyber Monday, November 30th, which will be one of the largest single days for online sales in history. Unfortunately, we’ve found that over a quarter of UK and US-based consumers will be shopping for bargains and making purchases without first checking whether the website of the retailer they are buying from is secure.


A survey1 conducted by Opinion Matters on behalf of WhiteHat Security uncovered this disturbing fact. It also found that shoppers in the US are more likely to put themselves at risk than those in the UK: more than a third of US-based respondents admitted that they wouldn’t check a website’s security before purchasing. This is particularly worrying given that more than half of shoppers expect to use a credit or debit card to purchase goods this Black Friday weekend.


The consumer survey also found that a third of UK and US-based shoppers are either unsure how, or simply do not know how, to tell whether a website is secure.


Of course, the retailers themselves have a big part to play in website security. Researchers from our Threat Research Center (TRC) analyzed retail websites between July and September 20152 and found that retail sites are more likely to exhibit serious vulnerabilities than sites in other industries. The most commonly occurring critical vulnerability classes for the retail industry were:


  • Insufficient Transport Layer Protection (with 64% likelihood): When applications do not take measures to authenticate, encrypt, and protect sensitive network traffic, data such as payment card details and personal information can be left exposed and attackers may intercept and view the information.
  • Cross Site Scripting (with 57% likelihood): Attackers can use a vulnerable website as a vehicle to deliver malicious instructions to a victim’s browser. This can lead to further attacks such as keylogging, impersonating the user, phishing and identity theft.
  • Information Leakage (with 54% likelihood): Insecure applications may reveal sensitive data that can be used by an attacker to exploit the target web application, its hosting network, or its users.
  • Brute Force (with 38% likelihood): Most commonly targeting log-in credentials, brute force attacks can also be used to retrieve the session identifier of another user, enabling the attacker to retrieve personal information and perform actions on behalf of the user.
  • Cross Site Request Forgery (with 29% likelihood): Using social engineering (such as sending a link via email or chat), attackers can trick users into submitting a request, such as transferring funds or changing their email address or password.
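As an illustration of that last class, here is a minimal sketch of the standard defense: tie an unguessable token to the session and require it on every state-changing request. This assumes a hypothetical Express app using the cookie-parser middleware and an existing session cookie; the routes and field names are invented.

```ts
import crypto from 'crypto';
import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false }));

// Hypothetical in-memory map of session id -> expected CSRF token.
const csrfTokens = new Map<string, string>();

app.get('/transfer', (req, res) => {
  const token = crypto.randomBytes(32).toString('hex');
  csrfTokens.set(req.cookies.session, token);       // remember the token for this session
  res.send(`<form method="POST" action="/transfer">
              <input type="hidden" name="csrf" value="${token}">
              <input name="amount"><button>Send</button>
            </form>`);
});

app.post('/transfer', (req, res) => {
  // A forged cross-site form rides on the victim's cookie automatically,
  // but it has no way to know the token tied to that session.
  if (req.body.csrf !== csrfTokens.get(req.cookies.session)) {
    res.status(403).send('Rejected: missing or wrong CSRF token');
    return;
  }
  res.send('Transfer accepted');
});

app.listen(3000);
```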


In response to the survey’s findings, my colleague and WhiteHat founder, Jeremiah Grossman, said, “This research suggests that when it comes to website security awareness, not only is there still some way to go on the part of the consumer, but the retailers themselves could benefit from re-assessing their security measures, particularly when considering the volume and nature of customer information that will pass through their websites this Cyber Monday.”


WhiteHat is in the business of helping organizations in the retail and other sectors secure their applications and websites. But for consumers, Grossman offers up a few simple tricks that can help shoppers stay safe online over this holiday shopping season:


  • Look out for ‘HTTPS’ when browsing: HTTP – the letters that show up in front of the URL when browsing online – indicates that the web page is using a non-secure way of transmitting data, which can be intercepted and read at any point between the computer and the website. HTTPS, on the other hand, means that all the data being transmitted is encrypted. Look for the padlock icon and for HTTPS shown in green; if the browser shows it in red or crossed out, something is wrong with the connection.
  • Install a modern web browser and keep it up to date: Most people are already using one of the well known web browsers, but it is also very important that they are kept up to date with the latest security patches.
  • Be wary of public WiFi: While connecting to free WiFi networks seems like a good idea, it can be extremely dangerous as it has become relatively easy for attackers to set up WiFi hotspots to spy on traffic going back and forth between users and websites. Never trust a WiFi network and avoid banking, purchasing or sensitive transactions while connected to public WiFi.
  • Go direct to the website: There will be plenty of ‘big discount’ emails around over the next few days that will entice shoppers to websites for bargain purchases. Shoppers should make sure that they go direct to the site from their web browser, rather than clicking through the email.
  • Make your passwords hard to guess: Most people wouldn’t have the same key for their car, home, office etc., and for the same reason, it makes sense to have hard-to-guess, unique passwords for online accounts.
  • Install ad-blocking extensions: Malicious software often infects computers through viewing or clicking on online advertisements, so it is not a bad idea to install an ad-blocking extension that either allows users to surf the web without ads or completely blocks the invisible trackers that ads use to build profiles of online habits.
  • Stick to the apps you trust: When making purchases on a mobile phone, shoppers are much better off sticking to apps from companies they know and trust, rather than relying on mobile browsers and email.


If you’re a retailer interested in learning more about the security posture of your applications and websites, sign up for a free website security risk assessment. And if you’re a consumer… well, buyer beware. Follow the tips provided here for a safer holiday shopping experience.



1The WhiteHat Security survey of 4,244 online shoppers in the UK and US was conducted between 13 November 2015 and 19 November 2015.


2WhiteHat Security threat researchers conducted likelihood analysis of critical vulnerabilities in retail websites using data collected between 1 July 2015 and 30 September 2015.




Saving Systems from SQLi

There is absolutely nothing special about the TalkTalk breach — and that is the problem. If you didn’t already see the news about TalkTalk, a UK-based provider of telephone and broadband services, their customer database was hacked and reportedly 4 million records were pilfered. A major organization’s website is hacked, millions of records containing PII are taken, and the data is held for ransom. Oh, and the alleged perpetrator(s) were teenagers, not professional cyber-criminals. This is the type of story that has been told for years now in every geographic region and industry.

In this particular case, while many important technical details are still coming to light, it appears – according to some reputable media sources – the breach was carried out through SQL Injection (SQLi). SQLi gives a remote attacker the ability to run commands against the backend database, including potentially stealing all the data contained in it. This sounds bad because it is.
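The vulnerability and its fix are both small. Here is a minimal sketch, assuming a Node application using the node-postgres driver and a hypothetical users table; it illustrates the general pattern, not TalkTalk’s code.

```ts
import { Pool } from 'pg';

const pool = new Pool();  // connection settings come from environment variables

// Vulnerable: the user-supplied email is pasted into the SQL text, so input like
//   ' OR '1'='1
// changes the meaning of the query and can dump rows the user should never see.
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT id, name FROM users WHERE email = '${email}'`);
}

// Resilient: the query text is fixed; the driver sends the email separately as a
// parameter, so it can only ever be treated as data, never as SQL.
async function findUser(email: string) {
  return pool.query('SELECT id, name FROM users WHERE email = $1', [email]);
}
```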

Just this year, the Verizon Data Breach Investigations Report found that SQLi was used in 19 percent of web application attacks. And WhiteHat’s own research reveals that 6 percent of websites tested with Sentinel have at least one SQLi vulnerability exposed. So SQLi is very common, and what’s more, it’s been around a long time. In fact, this Christmas marks its 17th birthday.

The more we learn about incidents like TalkTalk, the more we see that these breaches are preventable. We know how to write code that’s resilient to SQLi. We have several ways to identify SQLi in vulnerable code. We know multiple methods for fixing SQLi vulnerabilities and defending against incoming attacks. We, the InfoSec industry, know basically everything about SQLi. Yet for some reason the breaches keep happening, the headlines keep appearing, and millions of people continue to have their personal information exposed. The question then becomes: Why? Why, when we know so much about these attacks, do they keep happening?

One answer is that those who are best positioned to solve the problem are not motivated to take care of the issue – or perhaps they are just ignorant of things like SQLi and the danger it presents. Certainly the companies and organizations being attacked this way have a reason to protect themselves, since they lose money whenever an attack occurs. The Verizon report estimates that one million records stolen could cost a company nearly $1.2m. For the TalkTalk hack, with potentially four million records stolen (though some reports are now indicating much lower numbers), there could be nearly $2m in damages.

Imagine, millions of dollars in damages and millions of angry customers based on an issue that could have been found and fixed in mere days – if that. It’s time to get serious about Web security, like really serious, and I’m not just talking about corporations, but InfoSec vendors as well.

Like many other vendors, WhiteHat’s vulnerability scanning service can help customers find vulnerabilities such as SQLi before the bad guys exploit them. This lets companies proactively protect their information, since hacking into a website will be significantly more challenging. But even more importantly, organizations need to know that security vendors truly have their back and that their vendor’s interests are aligned with their own. Sentinel Elite’s security guarantee is designed to do exactly that.

If Sentinel Elite fails to find a vulnerability such as SQLi, and exploitation results in a breach like TalkTalk’s, WhiteHat will not only refund the cost of the service, but also cover up to $500k in financial damages. This means that WhiteHat customers can be confident that WhiteHat shares their commitment to not just detecting vulnerabilities, but actively working to prevent breaches.

Will security guarantees prevent all breaches? Probably not, as perfect security is impossible, but security guarantees will make a HUGE difference in making sure vulnerabilities are remediated. All it takes to stop these breaches from happening is doing the things we already know how to do. Doing the things we already know work. Security guarantees motivate all parties involved to do what’s necessary to prevent breaches like TalkTalk.

University Networks

The Atlantic Monthly just published a piece about the computer security challenges facing universities. Those challenges are serious:

“Universities are extremely attractive targets,” explained Richard Bejtlich, the Chief Security Strategist at FireEye, which acquired Mandiant, the firm that investigated the hacking incident at the [New York] Times. “The sort of information they have can be very valuable — including very rich personal information that criminal groups want, and R&D data related to either basic science or grant-related research of great interest to nation state groups. Then, on the infrastructure side they also provide some of the best platforms for attacking other parties—high bandwidth, great servers, some of the best computing infrastructure in the world and a corresponding lack of interest in security.”

The issue is framed in terms of “corporate lockdown” vs. “bring your own device,” with an emphasis on network security:

There are two schools of thought on computer security at institutions of higher education. One thought is that universities are lagging behind companies in their security efforts and need to embrace a more locked-down, corporate approach to security. The other thought holds that companies are, in fact, coming around to the academic institutions’ perspective on security—with employees bringing their own devices to work, and an increasing emphasis on monitoring network activity rather than enforcing security by trying to keep out the outside world.

There’s a nod to application security, and it’s actually a great example of setting policies to incentivize users to set stronger passwords (this is not easy!):

A company, for instance, may mandate a new security system or patch for everyone on its network, but a crucial element of implementing security measures in an academic setting is often providing users with options that will meet their needs, rather than forcing them to acquiesce to changes. For example, Parks said that at the University of Idaho, users are given the choice to set passwords that are at least 15 characters in length and, if they do so, their passwords last 400 days before expiring, whereas shorter passwords must be changed every 90 days (more than 70 percent of users have chosen to create passwords that are at least 15 characters, he added).

Getting hacked is about losing control of one’s data, and the worries in the last two passages have to do with things the university can’t directly control: the security of devices that users connect to the network, and the strength of passwords chosen by users. Things beyond one’s control are generally anxiety-provoking.

Taking a step back, the data that’s of interest to hackers is found in databases, which are frequently queried by web applications. Those web applications might have vulnerabilities, and a university’s application code is under the university’s control. The application code matters in practice. Over the summer, Team GhostShell dumped data stolen from a large number of universities. According to Symantec:

In keeping with its previous modus operandi, it is likely that the group compromised the databases by way of SQL injection attacks and poorly configured PHP scripts; however, this has not been confirmed. Previous data dumps from the 2012 hacks revealed that the team used SQLmap, a popular SQL injection tool used by hackers.

Preventing SQL injection is a solved problem, but bored teenagers exploit it on university websites:

The attack, which has implications for the integrity of USyd’s security infrastructure, compromised the personal details of approximately 5,000 students and did not come to the University’s attention until February 6.

The hacker, who goes by the online alias Abdilo, told Honi that the attack had yielded email addresses and ‘pass combo lists’, though he has no intention of using the information for malicious ends.

“99% of my targets are just shit i decide to mess with because of southpark or other tv shows,” he wrote.

As for Sydney’s breach on February 2, Abdilo claimed that he had very little trouble in accessing the information, rating the university’s database security with a “0” out of 10.

“I was taunting them for awhile, they finally figured it out,” he said.

That’s a nightmare, but solving SQL injection is a lot more straightforward than working on machine learning algorithms to improve intrusion detection systems. Less academic, if you will. The challenges in the Atlantic article are real, but effective measures aren’t always the most exciting. Fixing legacy applications that students use to look at their grades or schedule doctor’s appointments doesn’t have the drama of finding currently-invisible intruders. Faculty have little incentive, and universities often have no process, for improving the software development life cycle when the priorities are research productivity and administrative service. In these cases, partnering with a third-party SaaS application security provider is the most practical solution for these urgent needs.

In a sense, it’s good news that some of the most urgently needed fixes are already within our abilities.

When departments work at cross-purposes

Back in August, we wrote about how self-discipline can be one of the hardest parts of security, as illustrated by Snowden and the NSA. Just recently, Salon published an article about similar issues that plagued the CIA during the Cold War: How to explain the KGB’s amazing success identifying CIA agents in the field?

So many CIA agents were being uncovered by the Soviets that the agency assumed there must’ve been double agents…somewhere. Apparently there was a lot of inward-focused paranoia. The truth was a lot more mundane, but also very actionable — not only for the CIA, but for business in general: two departments (in this case, two agencies) in one organization with incompatible, uncoordinated policies:

So how, exactly, did Totrov reconstitute CIA personnel listings without access to the files themselves or those who put them together?

His approach required a clever combination of clear insight into human behavior, root common sense and strict logic.

In the world of secret intelligence the first rule is that of the ancient Chinese philosopher of war Sun Tzu: To defeat the enemy, you have above all to know yourself. The KGB was a huge bureaucracy within a bureaucracy — the Soviet Union. Any Soviet citizen had an intimate acquaintance with how bureaucracies function. They are fundamentally creatures of habit and, as any cryptanalyst knows, the key to breaking the adversary’s cipher is to find repetitions. The same applies to the parallel universe of human counterintelligence.

The difference between Totrov and his fellow citizens was that whereas others at home and abroad would assume the Soviet Union was somehow unique, he applied his understanding of his own society to a society that on the surface seemed unique, but which, in respect of how government worked, was not in fact that much different: the United States.

From an organizational point of view, what’s fascinating is that the problem came from two agencies with different missions setting incompatible, uncoordinated policies. The rules for Foreign Service Officers and for CIA officers differed enough that the Soviet Union could identify individuals who were nominally Foreign Service Officers but who were not treated like the real ones. Pay and policy differentials made it easy to separate actual Foreign Service Officers from CIA agents.

Thus one productive line of inquiry quickly yielded evidence: the differences in the way agency officers undercover as diplomats were treated from genuine foreign service officers (FSOs). The pay scale at entry was much higher for a CIA officer; after three to four years abroad a genuine FSO could return home, whereas an agency employee could not; real FSOs had to be recruited between the ages of 21 and 31, whereas this did not apply to an agency officer; only real FSOs had to attend the Institute of Foreign Service for three months before entering the service; naturalized Americans could not become FSOs for at least nine years but they could become agency employees; when agency officers returned home, they did not normally appear in State Department listings; should they appear they were classified as research and planning, research and intelligence, consular or chancery for security affairs; unlike FSOs, agency officers could change their place of work for no apparent reason; their published biographies contained obvious gaps; agency officers could be relocated within the country to which they were posted, FSOs were not; agency officers usually had more than one working foreign language; their cover was usually as a “political” or “consular” official (often vice-consul); internal embassy reorganizations usually left agency personnel untouched, whether their rank, their office space or their telephones; their offices were located in restricted zones within the embassy; they would appear on the streets during the working day using public telephone boxes; they would arrange meetings for the evening, out of town, usually around 7.30 p.m. or 8.00 p.m.; and whereas FSOs had to observe strict rules about attending dinner, agency officers could come and go as they pleased.

You don’t need to infiltrate the CIA if the CIA and the State Department can’t agree on how to treat their staff and what rules to apply!

One way of looking at the problem was that the diplomats had their own goals, and they set policies appropriate to those goals. By necessity, they didn’t actually know the overall goals of their own embassies. It’s not unusual for different subdivisions of an organization to have conflicting goals. The question is how to manage those tensions. What was the point of requiring a 9 year wait after naturalization before someone could work as a foreign service officer? A different executive agency, with higher needs for the integrity of its agents, didn’t consider the wait necessary. Eliminating the wait would’ve eliminated an obvious difference between agents and normal diplomats.

But are we sure the wait wasn’t necessary? It creates a large obstacle for our adversaries: they need to think 9 years ahead if they want to supply their own mole instead of turning one of our diplomats. On the other hand, it created too large an obstacle for ourselves.

Is it more important to defend against foreign agents or to create high-quality cover for our own agents? Two agencies disagreed and pursued their own interests without resolving the disagreement. Either policy could have been effective; having both policies was an information give-away.

How can this sort of issue arise for private businesses? Too often, individual departments set policies that come into conflict with one another. For instance, an IT department may, with perfectly reasonable justification, decide to standardize on a single browser. A second department decides to develop internal tools that rely on browser add-ons like ActiveX or Java applets. When a vulnerability is discovered in those add-ons to the standard browser, the organization finds it is now dependent on an inherently insecure tool. Neither department is responsible for the situation; both acted in good faith within their own arena. The problem was caused by the lack of anyone responsible for determining how to set policies for the good of the organization as a whole.

Security policies need to be set to take all the organization’s goals into consideration; to do that, someone has to be looking at the whole picture.

PGP: Still hard to use after 16 years

Earlier this month, SC magazine ran an article about this tweet from Joseph Bonneau at the Electronic Frontier Foundation:

Email from Phil Zimmerman: “Sorry, but I cannot decrypt this message. I don’t have a version of PGP that runs on any of my devices”

PGP, short for Pretty Good Privacy, is an email encryption system invented by Phil Zimmermann in 1991. So why isn’t Zimmermann eating his own dog food?

“The irony is not lost on me,” he says in this article at Motherboard, which is about PGP’s usability problems. Jon Callas, former Chief Scientist at PGP, Inc., tweeted that “We have done a good job of teaching people that crypto is hard, but cryptographers think that UX is easy.” For a cryptographer, it would be easy to forget that 25% of people have math anxiety. Cryptographers are used to creating timing attack-resistant implementations of AES. You can get a great sense of what’s involved from this explanation of AES, which is illustrated with stick figures. The steps are very complicated. In the general population, surprisingly few people can follow complex written instructions.

All the way back in 1999, there was a paper presented at USENIX called “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.” It included a small study with a dozen reasonably intelligent people:

The user test was run with twelve different participants, all of whom were experienced users of email, and none of whom could describe the difference between public and private key cryptography prior to the test sessions. The participants all had attended at least some college, and some had graduate degrees. Their ages ranged from 20 to 49, and their professions were diversely distributed, including graphic artists, programmers, a medical student, administrators and a writer.

The participants were given 90 minutes to learn and use PGP, under realistic conditions:

Our test scenario was that the participant had volunteered to help with a political campaign and had been given the job of campaign coordinator (the party affiliation and campaign issues were left to the participant’s imagination, so as not to offend anyone). The participant’s task was to send out campaign plan updates to the other members of the campaign team by email, using PGP for privacy and authentication. Since presumably volunteering for a political campaign implies a personal investment in the campaign’s success, we hoped that the participants would be appropriately motivated to protect the secrecy of their messages…

After briefing the participants on the test scenario and tutoring them on the use of Eudora, they were given an initial task description which provided them with a secret message (a proposed itinerary for the candidate), the names and email addresses of the campaign manager and four other campaign team members, and a request to please send the secret message to the five team members in a signed and encrypted email. In order to complete this task, a participant had to generate a key pair, get the team members’ public keys, make their own public key available to the team members, type the (short) secret message into an email, sign the email using their private key, encrypt the email using the five team members’ public keys, and send the result. In addition, we designed the test so that one of the team members had an RSA key while the others all had Diffie-Hellman/DSS keys, so that if a participant encrypted one copy of the message for all five team members (which was the expected interpretation of the task), they would encounter the mixed key types warning message. Participants were told that after accomplishing that initial task, they should wait to receive email from the campaign team members and follow any instructions they gave.

If that’s mystifying, that’s the point. You can get a very good sense of how you would’ve done, and of how much has changed in 16 years, by reading the Electronic Frontier Foundation’s guide to using PGP on a Mac.

Things went wrong immediately:

Three of the twelve test participants (P4, P9, and P11) accidentally emailed the secret to the team members without encryption. Two of the three (P9 and P11) realized immediately that they had done so, but P4 appeared to believe that the security was supposed to be transparent to him and that the encryption had taken place. In all three cases the error occurred while the participants were trying to figure out the system by exploring.

Cryptographers are absolutely right that cryptography is hard:

Among the eleven participants who figured out how to encrypt, failure to understand the public key model was widespread. Seven participants (P1, P2, P7, P8, P9, P10 and P11) used only their own public keys to encrypt email to the team members. Of those seven, only P8 and P10 eventually succeeded in sending correctly encrypted email to the team members before the end of the 90 minute test session (P9 figured out that she needed to use the campaign manager’s public key, but then sent email to the entire team encrypted only with that key), and they did so only after they had received fairly explicit email prompting from the test monitor posing as the team members. P1, P7 and P11 appeared to develop an understanding that they needed the team members’ public keys (for P1 and P11, this was also after they had received prompting email), but still did not succeed at correctly encrypting email. P2 never appeared to understand what was wrong, even after twice receiving feedback that the team members could not decrypt his email.
Another of the eleven (P5) so completely misunderstood the model that he generated key pairs for each team member rather than for himself, and then attempted to send the secret in an email encrypted with the five public keys he had generated. Even after receiving feedback that the team members were unable to decrypt his email, he did not manage to recover from this error.
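The mental model the participants were missing is actually small. Here is a sketch of it using Node’s built-in crypto module rather than PGP itself; the names come from the study’s scenario, and the algorithm and key size are illustrative, not a recommendation.

```ts
import crypto from 'crypto';

// Each person generates their own key pair and publishes only the public half.
const campaignManager = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });
const volunteer = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

const secret = Buffer.from('Proposed itinerary for the candidate');

// To send the manager a secret, the volunteer encrypts with the MANAGER'S public
// key -- not with the volunteer's own key, which was the most common mistake in
// the study. Only the manager's private key can decrypt the result.
const ciphertext = crypto.publicEncrypt(campaignManager.publicKey, secret);

// To prove who sent it, the volunteer signs with their OWN private key; anyone
// holding the volunteer's public key can verify the signature.
const signature = crypto.sign('sha256', secret, volunteer.privateKey);

const plaintext = crypto.privateDecrypt(campaignManager.privateKey, ciphertext);
const authentic = crypto.verify('sha256', plaintext, volunteer.publicKey, signature);
console.log(plaintext.toString(), authentic); // the itinerary, true
```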

The user interface can make it even harder:

P7 gave up on using the key server after one failed attempt in which she tried to retrieve the campaign manager’s public key but got nothing back (perhaps due to mis-typing the name). P1 spent 25 minutes trying and failing to import a key from an email message; he copied the key to the clipboard but then kept trying to decrypt it rather than import it. P12 also had difficulty trying to import a key from an email message: the key was one she already had in her key ring, and when her copy and paste of the key failed to have any effect on the PGPKeys display, she assumed that her attempt had failed and kept trying. Eventually she became so confused that she began trying to decrypt the key instead.

This is all so frustrating and unpleasant for people that they simply won’t use PGP. We actually have a way of encrypting email that’s not known to be breakable by the NSA. In practice, the human factors are even more difficult than stumping the NSA’s cryptographers!

Complexity and Storage Slow Attackers Down

Back in 2013, WhiteHat founder Jeremiah Grossman forgot an important password, and Jeremi Gosney of Stricture Consulting Group helped him crack it. Gosney knows password cracking, and he’s up for a challenge, but he knew it’d be futile trying to crack the leaked Ashley Madison passwords. Dean Pierce gave it a shot, and Ars Technica provides some context.

Ashley Madison made mistakes, but password storage wasn’t one of them. This is what came of Pierce’s efforts:

After five days, he was able to crack only 4,007 of the weakest passwords, which comes to just 0.0668 percent of the six million passwords in his pool.

It’s like Jeremiah said after his difficult experience:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

Imagine the average person that doesn’t even work in IT! Logging in to a website feels simpler than it is. It feels like, “The website checked my password, and now I’m logged in.”

Actually, “being logged in” means that the server gave your browser a secret number, AND your browser includes that number every time it makes a request, AND the server has a table of which number goes with which person, AND the server sends you the right stuff based on who you are. Usernames and passwords have to do with whether the server gives your browser the secret number in the first place.

It’s natural to assume that “checking your password” means that the server knows your password and compares it to what you typed in the login form. By now, everyone has heard that they’re supposed to have an impossible-to-remember password, but the reasons aren’t usually explained – people have their own problems besides the finer points of PBKDF2 vs. bcrypt.

If you’ve never had to think about it, it’s also natural to assume that hackers guessing your password are literally trying to log in as you. Even professional programmers can make that assumption when password storage is outside their area of expertise. Our clients’ developers sometimes object to findings about password complexity or other brute-force issues because they throttle login attempts, lock accounts after 3 incorrect guesses, etc. If hackers really did have to log in through the front door, they would be greatly limited by how long it takes to make each request over the network. Account lockouts are probably enough to discourage a person’s acquaintances, but they aren’t a protection against offline password cracking.

Password complexity requirements (include mixed case, include numbers, include symbols) are there to protect you once an organization has already been compromised (like Ashley Madison). In that scenario, password complexity is what you can do to help yourself. Proper password storage is what the organization can do. The key to that is in what exactly “checking your password” means.

When the server receives your login attempt, it runs your password through something called a hash function. When you set your password, the server ran your password through the hash function and stored the result, not your password. The server should only keep the password long enough to run it through the hash function. The difference between secure and insecure password storage is in the choice of hash function.
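In code, “checking your password” looks something like the following sketch. It assumes the widely used bcrypt npm package; the function names are made up for illustration.

```ts
import bcrypt from 'bcrypt';

// When the user sets a password: hash it and store only the hash.
function setPassword(plaintext: string): string {
  const storedHash = bcrypt.hashSync(plaintext, 12);  // cost factor 12
  return storedHash;  // e.g. "$2b$12$..." -- the plaintext itself is never stored
}

// When the user logs in: hash the attempt the same way and compare the results.
function checkPassword(attempt: string, storedHash: string): boolean {
  return bcrypt.compareSync(attempt, storedHash);
}
```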

If your enemy is using brute force against you and trying every single thing, your best bet is to slow them down. That’s the thinking behind account lockouts and the design of functions like bcrypt. Running data through a hash function might be fast or slow, depending on the hash function. Hash functions have many applications. You can use them to confirm that large files haven’t been corrupted, and for that purpose it’s good for them to be fast. SHA256 would be a hash function suitable for that.

A common mistake is using a deliberately fast hash function, when a deliberately slow one is appropriate. Password storage is an unusual situation where we want the computation to be as slow and inefficient as practicable.

In the case of hackers who’ve compromised an account database, they have a table of usernames and strings like “$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy”. Cracking the password means that they make a guess, run it through the hash function, and get the same string. If you use a complicated password, they have to try more passwords. That’s what you can do to slow them down. What organizations can do is to choose a hash function that makes each individual check very time-consuming. It’s cryptography, so big numbers are involved. The “only” thing protecting the passwords of Ashley Madison users is that trying a fraction of the possible passwords is too time-consuming to be practical.
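The attacker’s side of that arithmetic looks roughly like this sketch (again assuming the bcrypt npm package; the wordlist is whatever the attacker can compile):

```ts
import bcrypt from 'bcrypt';

// What an attacker does with a stolen hash: guess, hash, compare, repeat.
// The victim's server is not involved at all, so lockouts and throttling don't
// apply; the only brake is the cost of each individual comparison.
function crack(stolenHash: string, wordlist: string[]): string | null {
  for (const guess of wordlist) {
    if (bcrypt.compareSync(guess, stolenHash)) {
      return guess;  // with bcrypt, each of these checks is deliberately slow
    }
  }
  return null;       // a strong password simply isn't in any feasible wordlist
}
```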

Consumers have all the necessary information to read about password storage best practices and pressure companies to use those practices. At least one website is devoted to the cause. It’s interesting that computers are forcing ordinary people to think about these things, and not just spies.

The Death of the Full Stack Developer

When I got started in computer security, back in 1995, there wasn’t much to it — but there wasn’t much to web applications themselves. If you wanted to be a web application developer, you had to know a few basic skills. These are the kinds of things a developer would need to build a somewhat complex website back in the day:

  • ISP/Service Provider
  • Switching and routing with ACLs
  • DNS
  • Telnet
  • *NIX
  • Apache
  • vi/Emacs
  • HTML
  • CGI/Perl/SSI
  • Berkeley Database
  • Images

It was a pretty long list of things to get started, but if you were determined and persevered, you could learn them in relatively short order. Ideally you might have someone who was good at networking and host security to help you out if you wanted to focus on the web side, but doing it all yourself wasn’t unheard of. It was even possible to be an expert in a few of these, though it was rare to find anyone who knew all of them.

Things have changed dramatically over the 20 years that I’ve been working in security. Now this is what a fairly common stack and provisioning technology might consist of:

  • Eclipse
  • Github
  • Docker
  • Jenkins
  • Cucumber
  • Gauntlt
  • Amazon EC2
  • Amazon AMI
  • SSH keys
  • Duosec 2FA
  • WAF
  • IDS
  • Anti-virus
  • DNS
  • DKIM
  • SPF
  • Apache
  • Relational Database
  • Amazon Glacier
  • PHP
  • Apparmor
  • Suhosin
  • WordPress CMS
  • WordPress Plugins
  • API to WordPress.org for updates
  • API to anti-spam filter ruleset
  • API to merchant processor
  • Varnish
  • Stunnel
  • SSL/TLS Certificates
  • Certificate Authority
  • CDN
  • S3
  • JavaScript
  • jQuery
  • Google Analytics
  • Conversion tracking
  • Optimizely
  • CSS
  • Images
  • Sprites

Unlike before, there is literally no one on earth who could claim to understand every aspect of each of those things. They may be familiar with the concepts, but they can’t know all of these things all at once, especially given how quickly technologies change. Since there is no one who can understand all of those things at once, we have seen the gradual death of the full-stack developer.

It stands to reason, then, that there has been a similar decline in the numbers of full-stack security experts. People may know quite a bit about a lot of these technologies, but when it comes down to it, there’s a very real chance that any single security person will become more and more specialized over time — it’s simply hard to avoid specialization, given the growing complexity of modern apps. We may eventually see the death of the full-stack security person as well as a result.

If that is indeed the case, where does this leave enterprises that need to build secure and operationally functional applications? It means that there will be more and more silos, where people handle an ever-shrinking set of features and functionality in progressively greater depth. It means that companies that can augment security or operations in one or more areas will be adopted, because there will be literally no other choice; failing to draw on diverse, and often external, expertise in security and operations will ensure sec-ops failure.

At its heart, this is a result of economic forces – more code needs to be delivered, and there are fewer people who understand what it’s actually doing. So outsource what you can’t know, since there is too much for any one person to know about their own stack. This leads us back to the Internet Services Supply Chain problem as well – can you really trust your service providers when they have to trust other service providers, and so on? All of this highlights the need for better visibility into what is really being tested, as well as the need to find security that scales and to implement operational hardware and software that is secure by default.

Developers and Security Tools

A recent study from NC State found that “the two things that were most strongly associated with using security tools were peer influence and corporate culture.” As a former developer, and as someone who has reviewed the source code of countless web applications, I can say these tools are almost impossible for the average developer to use. Security tools are invariably written for security experts and consultants. Tools produce a huge percentage of false alarms – if you are lucky, you will comb through 100 false alarms to find one legitimate issue.

The false assumption here is that running a tool will result in better security. Peer pressure to do security doesn’t quite make sense because most developers are not trained in security. And since the average tool produces a large number of false alarms, and since the pressure to ship new features as quickly as possible is so high, there will never be enough time, training or background for the average developer to effectively evaluate their own security.

The “evangelist” model the study mentions does seem to work well among some of WhiteHat Security’s clients. Anecdotally, I know many organizations that will accept one volunteer per development group as a security “marshal” (akin to a floor “fire marshal”). That volunteer receives special security training, but ultimately acts as a bridge between his or her individual development team and the security team.

Placing the burden of security entirely on developers is unfair, as is making them choose between fixing vulnerabilities and shipping new code. One of the dirty secrets of technology is this: even though developers are often shouldered with the responsibility of security, risk and security are really business decisions. Security is also a process that goes far beyond development. Developers cannot be solely responsible for application security. Business analysts, architects, developers, quality assurance, security teams, and operations all play a critical role in managing technology risk. Above all, managers (including the C-level team) must provide direction, set levels of tolerable risk, and take responsibility for the decision to ship new code or fix vulnerabilities. That is ultimately a business decision.

It Can Happen to Anyone

Earlier this summer, The Intercept published some details about the NSA’s XKEYSCORE program. Those details included some security issues around logging and authorization:

As hard as software developers may try, it’s nearly impossible to write bug-free source code. To compensate for this, developers often rely on multiple layers of security; if attackers can get through one layer, they may still be thwarted by other layers. XKEYSCORE appears to do a bad job of this.

When systems administrators log into XKEYSCORE servers to configure them, they appear to use a shared account, under the name “oper.” Adams notes, “That means that changes made by an administrator cannot be logged.” If one administrator does something malicious on an XKEYSCORE server using the “oper” user, it’s possible that the digital trail of what was done wouldn’t lead back to the administrator, since multiple operators use the account.

There appears to be another way an ill-intentioned systems administrator may be able to cover their tracks. Analysts wishing to query XKEYSCORE sign in via a web browser, and their searches are logged. This creates an audit trail, on which the system relies to ensure that users aren’t doing overly broad searches that would pull up U.S. citizens’ web traffic. Systems administrators, however, are able to run MySQL queries. The documents indicate that administrators have the ability to directly query the MySQL databases, where the collected data is stored, apparently bypassing the audit trail.

These are exactly the same kinds of problems that led to the Snowden leaks:

As a system administrator, Snowden was allowed to look at any file he wanted, and his actions were largely unaudited. “At certain levels, you are the audit,” said an intelligence official.

He was also able to access NSAnet, the agency’s intranet, without leaving any signature, said a person briefed on the postmortem of Snowden’s theft. He was essentially a “ghost user,” said the source, making it difficult to trace when he signed on or what files he accessed.

If he wanted, he would even have been able to pose as any other user with access to NSAnet, said the source.

The NSA obviously had the in-house expertise to design the system differently. Surely they know the importance of logging from their own experience hacking things and responding to breaches. How could this happen?

The most cynical explanation is that the point is not to log everything. Plausible deniability is a goal of intelligence agencies. Nobody can subpoena records that don’t exist. The simple fact of tracking an individual can be highly sensitive (e.g., foreign heads of state).

There’s also a simpler explanation: basic organizational problems. From the same NBC News article:

“It’s 2013 and the NSA is stuck in 2003 technology,” said an intelligence official.

Jason Healey, a former cyber-security official in the Bush Administration, said the Defense Department and the NSA have “frittered away years” trying to catch up to the security technology and practices used in private industry. “The DoD and especially NSA are known for awesome cyber security, but this seems somewhat misplaced,” said Healey, now a cyber expert at the Atlantic Council. “They are great at some sophisticated tasks but oddly bad at many of the simplest.”

In other words, lack of upgrades, “not invented here syndrome,” outsourcing with inadequate follow-up and accountability, and other familiar issues affect even the NSA. Very smart groups of people have these issues, even when they understand them intellectually.

Each individual department of the NSA probably faces challenges very similar to those of a lot of software companies: things need to be done yesterday, and done cheaper. New features are prioritized above technical debt. “It’s just an internal system, and we can trust our own people, anyway…”

Security has costs. Those costs include money, time, and convenience — making a system secure creates obstacles, so there’s always a temptation to ignore security “just this once,” “just for this purpose,” “just for now.” Security is a game in which the adversary tests your intelligence and creativity, yes; but most of all, the adversary tests your thoroughness and your discipline.