Category Archives: Technical Insight

Saving Systems from SQLi

There is absolutely nothing special about the TalkTalk breach — and that is the problem. In case you missed the news: TalkTalk, a UK-based provider of telephone and broadband services, had its customer database hacked, and reportedly 4 million records were pilfered. A major organization’s website is hacked, millions of records containing PII are taken, and the data is held for ransom. Oh, and the alleged perpetrator(s) were teenagers, not professional cyber-criminals. This is the type of story that has been told for years now in every geographic region and industry.

In this particular case, while many important technical details are still coming to light, it appears – according to some reputable media sources – the breach was carried out through SQL Injection (SQLi). SQLi gives a remote attacker the ability to run commands against the backend database, including potentially stealing all the data contained in it. This sounds bad because it is.
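To make the mechanics concrete, here is a minimal sketch of a vulnerable query next to its parameterized fix, written in TypeScript and assuming a node-postgres-style client (the table and function names are purely illustrative):

```typescript
import { Pool } from "pg"; // assumes the node-postgres client

const pool = new Pool();

// Vulnerable: user input is spliced into the SQL text, so input like
// "x' OR '1'='1" changes the meaning of the query itself.
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Resilient: the query text is fixed and the input travels separately as a
// bound parameter, so the database never interprets it as SQL.
async function findUserSafe(email: string) {
  return pool.query("SELECT * FROM users WHERE email = $1", [email]);
}
```

Most mainstream database libraries offer some equivalent of the second form; the entire class of attack persists because the first form is still so easy to write.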

Just this year, the Verizon Data Breach Investigations Report found that SQLi was used in 19 percent of web application attacks. And WhiteHat’s own research reveals that 6 percent of websites tested with Sentinel have at least one SQLi vulnerability exposed. So SQLi is very common, and what’s more, it’s been around a long time. In fact, this Christmas marks its 17th birthday.

The more we learn about incidents like TalkTalk, the more we see that these breaches are preventable. We know how to write code that’s resilient to SQLi. We have several ways to identify SQLi in vulnerable code. We know multiple methods for fixing SQLi vulnerabilities and defending against incoming attacks. We, the InfoSec industry, know basically everything about SQLi. Yet for some reason the breaches keep happening, the headlines keep appearing, and millions of people continue to have their personal information exposed. The question then becomes: Why? Why, when we know so much about these attacks, do they keep happening?

One answer is that those who are best positioned to solve the problem are not motivated to take care of the issue – or perhaps they are just ignorant of things like SQLi and the danger it presents. Certainly the companies and organizations being attacked this way have a reason to protect themselves, since they lose money whenever an attack occurs. The Verizon report estimates that one million records stolen could cost a company nearly $1.2m. For the TalkTalk hack, with potentially four million records stolen (though some reports are now indicating much lower numbers), there could be nearly $2m in damages.

Imagine, millions of dollars in damages and millions of angry customers based on an issue that could have been found and fixed in mere days – if that. It’s time to get serious about Web security, like really serious, and I’m not just talking about corporations, but InfoSec vendors as well.

Like many other vendors, WhiteHat offers a vulnerability scanning service that can help customers find vulnerabilities such as SQLi before the bad guys exploit them. This lets companies protect their information proactively, since hacking into their websites becomes significantly more challenging. But even more importantly, organizations need to know that security vendors truly have their back and that their vendor’s interests are aligned with their own. Sentinel Elite’s security guarantee is designed to do exactly that.

If Sentinel Elite fails to find a vulnerability such as SQLi, and exploitation results in a breach like TalkTalk’s, WhiteHat will not only refund the cost of the service, but also cover up to $500k in financial damages. This means that WhiteHat customers can be confident that WhiteHat shares their commitment to not just detecting vulnerabilities, but actively working to prevent breaches.

Will security guarantees prevent all breaches? Probably not, as perfect security is impossible, but security guarantees will make a HUGE difference in making sure vulnerabilities are remediated. All it takes to stop these breaches from happening is doing the things we already know how to do. Doing the things we already know work. Security guarantees motivate all parties involved to do what’s necessary to prevent breaches like TalkTalk.

University Networks

The Atlantic Monthly just published a piece about the computer security challenges facing universities. Those challenges are serious:

“Universities are extremely attractive targets,” explained Richard Bejtlich, the Chief Security Strategist at FireEye, which acquired Mandiant, the firm that investigated the hacking incident at the [New York] Times. “The sort of information they have can be very valuable — including very rich personal information that criminal groups want, and R&D data related to either basic science or grant-related research of great interest to nation state groups. Then, on the infrastructure side they also provide some of the best platforms for attacking other parties—high bandwidth, great servers, some of the best computing infrastructure in the world and a corresponding lack of interest in security.”

The issue is framed in terms of “corporate lockdown” vs. “bring your own device,” with an emphasis on network security:

There are two schools of thought on computer security at institutions of higher education. One thought is that universities are lagging behind companies in their security efforts and need to embrace a more locked-down, corporate approach to security. The other thought holds that companies are, in fact, coming around to the academic institutions’ perspective on security—with employees bringing their own devices to work, and an increasing emphasis on monitoring network activity rather than enforcing security by trying to keep out the outside world.

There’s a nod to application security, and it’s actually a great example of setting policies to incentivize users to set stronger passwords (this is not easy!):

A company, for instance, may mandate a new security system or patch for everyone on its network, but a crucial element of implementing security measures in an academic setting is often providing users with options that will meet their needs, rather than forcing them to acquiesce to changes. For example, Parks said that at the University of Idaho, users are given the choice to set passwords that are at least 15 characters in length and, if they do so, their passwords last 400 days before expiring, whereas shorter passwords must be changed every 90 days (more than 70 percent of users have chosen to create passwords that are at least 15 characters, he added).

Getting hacked is about losing control of one’s data, and the worries in the last two passages have to do with things the university can’t directly control: the security of devices that users connect to the network, and the strength of passwords chosen by users. Things beyond one’s control are generally anxiety-provoking.

Taking a step back, the data that’s of interest to hackers is found in databases, which are frequently queried by web applications. Those web applications might have vulnerabilities, and a university’s application code is under the university’s control. The application code matters in practice. Over the summer, Team GhostShell dumped data stolen from a large number of universities. According to Symantec:

In keeping with its previous modus operandi, it is likely that the group compromised the databases by way of SQL injection attacks and poorly configured PHP scripts; however, this has not been confirmed. Previous data dumps from the 2012 hacks revealed that the team used SQLmap, a popular SQL injection tool used by hackers.

Preventing SQL injection is a solved problem, but bored teenagers exploit it on university websites:

The attack, which has implications for the integrity of USyd’s security infrastructure, compromised the personal details of approximately 5,000 students and did not come to the University’s attention until February 6.

The hacker, who goes by the online alias Abdilo, told Honi that the attack had yielded email addresses and ‘pass combo lists’, though he has no intention of using the information for malicious ends.

“99% of my targets are just shit i decide to mess with because of southpark or other tv shows,” he wrote.

As for Sydney’s breach on February 2, Abdilo claimed that he had very little trouble in accessing the information, rating the university’s database security with a “0” out of 10.

“I was taunting them for awhile, they finally figured it out,” he said.

That’s a nightmare, but solving SQL injection is a lot more straightforward than working on machine learning algorithms to improve intrusion detection systems. Less academic, if you will. The challenges in the Atlantic article are real, but effective measures aren’t always the most exciting. Fixing legacy applications that students use to look at their grades or schedule doctor’s appointments doesn’t have the drama of finding currently-invisible intruders. There is no incentive or process for faculty to work on improving the software development life cycle when their priorities are often research productivity and serving on administrative committees. In these circumstances, partnering with a third-party SaaS application security provider is the best way to address such urgent needs.

In a sense, it’s good news that some of the most urgently needed fixes are already within our abilities.

When departments work at cross-purposes

Back in August, we wrote about how self-discipline can be one of the hardest parts of security, as illustrated by Snowden and the NSA. Just recently, Salon published an article about similar issues that plagued the CIA during the Cold War: How to explain the KGB’s amazing success identifying CIA agents in the field?

So many of their agents were being uncovered by the Soviets that they assumed there must’ve been double agents…somewhere. Apparently there was a lot of inward-focused paranoia. The truth was a lot more mundane, but also very actionable — not only for the CIA, but for business in general: two departments (in this case, two agencies) in one organization with incompatible, non-coordinated policies:

So how, exactly, did Totrov reconstitute CIA personnel listings without access to the files themselves or those who put them together?

His approach required a clever combination of clear insight into human behavior, root common sense and strict logic.

In the world of secret intelligence the first rule is that of the ancient Chinese philosopher of war Sun Tzu: To defeat the enemy, you have above all to know yourself. The KGB was a huge bureaucracy within a bureaucracy — the Soviet Union. Any Soviet citizen had an intimate acquaintance with how bureaucracies function. They are fundamentally creatures of habit and, as any cryptanalyst knows, the key to breaking the adversary’s cipher is to find repetitions. The same applies to the parallel universe of human counterintelligence.

The difference between Totrov and his fellow citizens was that whereas others at home and abroad would assume the Soviet Union was somehow unique, he applied his understanding of his own society to a society that on the surface seemed unique, but which, in respect of how government worked, was not in fact that much different: the United States.

From an organizational point of view, what’s fascinating is that the problem came from two agencies with different missions setting incompatible, uncoordinated policies: the rules governing genuine Foreign Service Officers and those governing CIA officers posing as FSOs differed enough that the Soviets could pick out the individuals who were FSOs on paper but were not treated like the real thing. Pay and policy differentials made it easy to separate actual diplomats from CIA officers.

Thus one productive line of inquiry quickly yielded evidence: the differences in the way agency officers undercover as diplomats were treated from genuine foreign service officers (FSOs). The pay scale at entry was much higher for a CIA officer; after three to four years abroad a genuine FSO could return home, whereas an agency employee could not; real FSOs had to be recruited between the ages of 21 and 31, whereas this did not apply to an agency officer; only real FSOs had to attend the Institute of Foreign Service for three months before entering the service; naturalized Americans could not become FSOs for at least nine years but they could become agency employees; when agency officers returned home, they did not normally appear in State Department listings; should they appear they were classified as research and planning, research and intelligence, consular or chancery for security affairs; unlike FSOs, agency officers could change their place of work for no apparent reason; their published biographies contained obvious gaps; agency officers could be relocated within the country to which they were posted, FSOs were not; agency officers usually had more than one working foreign language; their cover was usually as a “political” or “consular” official (often vice-consul); internal embassy reorganizations usually left agency personnel untouched, whether their rank, their office space or their telephones; their offices were located in restricted zones within the embassy; they would appear on the streets during the working day using public telephone boxes; they would arrange meetings for the evening, out of town, usually around 7.30 p.m. or 8.00 p.m.; and whereas FSOs had to observe strict rules about attending dinner, agency officers could come and go as they pleased.

You don’t need to infiltrate the CIA if the CIA and the State Department can’t agree on how to treat their staff and what rules to apply!

One way of looking at the problem was that the diplomats had their own goals, and they set policies appropriate to those goals. By necessity, they didn’t actually know the overall goals of their own embassies. It’s not unusual for different subdivisions of an organization to have conflicting goals. The question is how to manage those tensions. What was the point of requiring a 9-year wait after naturalization before someone could work as a foreign service officer? A different executive agency, with a greater need to assure the integrity of its agents, didn’t consider the wait necessary. Eliminating the wait would’ve eliminated an obvious difference between agents and normal diplomats.

But are we sure the wait wasn’t necessary? It created a large obstacle for our adversaries: they needed to think 9 years ahead if they wanted to supply their own mole instead of turning one of our diplomats. On the other hand, it also created too large an obstacle for our own side.

Is it more important to defend against foreign agents or to create high-quality cover for our own agents? Two agencies disagreed and pursued their own interests without resolving the disagreement. Either policy could have been effective; having both policies was an information give-away.

How can this sort of issue arise for private businesses? Too often individual departments can set policies that come into conflict with one another. For instance, an IT department may, with perfectly reasonable justification, decide to standardize on a single browser. A second department decides to develop internal tools that rely on browser add-ons like ActiveX or Java applets. When a vulnerability is discovered in those add-ons to the standard browser, the organization finds it is now dependent on an inherently insecure tool. Neither department is responsible for the situation; both acted in good faith within their arena. The problem was caused by the lack of anyone responsible for determining how to set policies for the good of the organization as a whole.

Security policies need to be set to take all the organization’s goals into consideration; to do that, someone has to be looking at the whole picture.

PGP: Still hard to use after 16 years

Earlier this month, SC magazine ran an article about this tweet from Joseph Bonneau at the Electronic Frontier Foundation:

Email from Phil Zimmerman: “Sorry, but I cannot decrypt this message. I don’t have a version of PGP that runs on any of my devices”

PGP, short for Pretty Good Privacy, is an email encryption system invented by Phil Zimmermann in 1991. So why isn’t Zimmermann eating his own dog food?

“The irony is not lost on me,” he says in this article at Motherboard, which is about PGP’s usability problems. Jon Callas, former Chief Scientist at PGP, Inc., tweeted that “We have done a good job of teaching people that crypto is hard, but cryptographers think that UX is easy.” A cryptographer could easily forget that 25% of people have math anxiety. Cryptographers are used to creating timing attack-resistant implementations of AES. You can get a great sense of what’s involved from this explanation of AES, which is illustrated with stick figures. The steps are very complicated. In the general population, surprisingly few people can follow complex written instructions.

All the way back in 1999, there was a paper presented at USENIX called “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.” It included a small study with a dozen reasonably intelligent people:

The user test was run with twelve different participants, all of whom were experienced users of email, and none of whom could describe the difference between public and private key cryptography prior to the test sessions. The participants all had attended at least some college, and some had graduate degrees. Their ages ranged from 20 to 49, and their professions were diversely distributed, including graphic artists, programmers, a medical student, administrators and a writer.

The participants were given 90 minutes to learn and use PGP, under realistic conditions:

Our test scenario was that the participant had volunteered to help with a political campaign and had been given the job of campaign coordinator (the party affiliation and campaign issues were left to the participant’s imagination, so as not to offend anyone). The participant’s task was to send out campaign plan updates to the other members of the campaign team by email, using PGP for privacy and authentication. Since presumably volunteering for a political campaign implies a personal investment in the campaign’s success, we hoped that the participants would be appropriately motivated to protect the secrecy of their messages…

After briefing the participants on the test scenario and tutoring them on the use of Eudora, they were given an initial task description which provided them with a secret message (a proposed itinerary for the candidate), the names and email addresses of the campaign manager and four other campaign team members, and a request to please send the secret message to the five team members in a signed and encrypted email. In order to complete this task, a participant had to generate a key pair, get the team members’ public keys, make their own public key available to the team members, type the (short) secret message into an email, sign the email using their private key, encrypt the email using the five team members’ public keys, and send the result. In addition, we designed the test so that one of the team members had an RSA key while the others all had Diffie-Hellman/DSS keys, so that if a participant encrypted one copy of the message for all five team members (which was the expected interpretation of the task), they would encounter the mixed key types warning message. Participants were told that after accomplishing that initial task, they should wait to receive email from the campaign team members and follow any instructions they gave.

If that’s mystifying, that’s the point. You can get a very good sense of how you would’ve done, and how much has changed in 16 years, by reading the Electronic Frontier Foundation’s guide to using PGP on a Mac.

Things went wrong immediately:

Three of the twelve test participants (P4, P9, and P11) accidentally emailed the secret to the team members without encryption. Two of the three (P9 and P11) realized immediately that they had done so, but P4 appeared to believe that the security was supposed to be transparent to him and that the encryption had taken place. In all three cases the error occurred while the participants were trying to figure out the system by exploring.

Cryptographers are absolutely right that cryptography is hard:

Among the eleven participants who figured out how to encrypt, failure to understand the public key model was widespread. Seven participants (P1, P2, P7, P8, P9, P10 and P11) used only their own public keys to encrypt email to the team members. Of those seven, only P8 and P10 eventually succeeded in sending correctly encrypted email to the team members before the end of the 90 minute test session (P9 figured out that she needed to use the campaign manager’s public key, but then sent email to the entire team encrypted only with that key), and they did so only after they had received fairly explicit email prompting from the test monitor posing as the team members. P1, P7 and P11 appeared to develop an understanding that they needed the team members’ public keys (for P1 and P11, this was also after they had received prompting email), but still did not succeed at correctly encrypting email. P2 never appeared to understand what was wrong, even after twice receiving feedback that the team members could not decrypt his email.
Another of the eleven (P5) so completely misunderstood the model that he generated key pairs for each team member rather than for himself, and then attempted to send the secret in an email encrypted with the five public keys he had generated. Even after receiving feedback that the team members were unable to decrypt his email, he did not manage to recover from this error.

The user interface can make it even harder:

P7 gave up on using the key server after one failed attempt in which she tried to retrieve the campaign manager’s public key but got nothing back (perhaps due to mis-typing the name). P1 spent 25 minutes trying and failing to import a key from an email message; he copied the key to the clipboard but then kept trying to decrypt it rather than import it. P12 also had difficulty trying to import a key from an email message: the key was one she already had in her key ring, and when her copy and paste of the key failed to have any effect on the PGPKeys display, she assumed that her attempt had failed and kept trying. Eventually she became so confused that she began trying to decrypt the key instead.

This is all so frustrating and unpleasant for people that they simply won’t use PGP. We actually have a way of encrypting email that’s not known to be breakable by the NSA. In practice, the human factors are even more difficult than stumping the NSA’s cryptographers!

Complexity and Storage Slow Attackers Down

Back in 2013, WhiteHat founder Jeremiah Grossman forgot an important password, and Jeremi Gosney of Stricture Consulting Group helped him crack it. Gosney knows password cracking, and he’s up for a challenge, but he knew it’d be futile trying to crack the leaked Ashley Madison passwords. Dean Pierce gave it a shot, and Ars Technica provides some context.

Ashley Madison made mistakes, but password storage wasn’t one of them. This is what came of Pierce’s efforts:

After five days, he was able to crack only 4,007 of the weakest passwords, which comes to just 0.0668 percent of the six million passwords in his pool.

It’s like Jeremiah said after his difficult experience:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

Imagine the average person who doesn’t even work in IT! Logging in to a website feels simpler than it is. It feels like, “The website checked my password, and now I’m logged in.”

Actually, “being logged in” means that the server gave your browser a secret number, AND your browser includes that number every time it makes a request, AND the server has a table of which number goes with which person, AND the server sends you the right stuff based on who you are. Usernames and passwords have to do with whether the server gives your browser the secret number in the first place.
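A toy sketch of that bookkeeping, with invented names and no real framework, might look like this:

```typescript
import { randomBytes } from "crypto";

// The server's table of which secret number goes with which person.
const sessions = new Map<string, string>(); // session ID -> username

// Issued once, after a successful password check; the browser stores the ID
// (usually in a cookie) and sends it back with every request.
function createSession(username: string): string {
  const id = randomBytes(32).toString("hex"); // the "secret number"
  sessions.set(id, username);
  return id;
}

// Run on every later request: whoever presents a valid ID is treated as that
// user, with no password involved.
function whoIsLoggedIn(sessionId: string): string | undefined {
  return sessions.get(sessionId);
}
```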

It’s natural to assume that “checking your password” means that the server knows your password, and it compares it to what you typed in the login form. By now, everyone has heard that they’re supposed to have an impossible-to-remember password, but the reasons aren’t usually explained – people have their own problems besides the finer points of PBKDF2 vs. bcrypt.

If you’ve never had to think about it, it’s also natural to assume that hackers guessing your password are literally trying to log in as you. Even professional programmers can make that assumption when password storage is outside their area of expertise. Our clients’ developers sometimes object to findings about password complexity or other brute-force issues because they throttle login attempts, lock accounts after 3 incorrect guesses, etc. If attackers really were guessing passwords through the login form, they would be greatly limited by how long it takes to make each request over the network. Account lockouts are probably enough to discourage a person’s acquaintances, but they aren’t a protection against offline password cracking.

Password complexity requirements (include mixed case, include numbers, include symbols) are there to protect you once an organization has already been compromised (like Ashley Madison). In that scenario, password complexity is what you can do to help yourself. Proper password storage is what the organization can do. The key to that is in what exactly “checking your password” means.

When you set your password, the server ran it through something called a hash function and stored the result, not the password itself. When the server receives a login attempt, it runs the submitted password through the same hash function and compares the result to what it stored. The server should only keep the password long enough to run it through the hash function. The difference between secure and insecure password storage is in the choice of hash function.
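As a rough sketch, assuming the npm bcrypt package, the two halves of that process look something like this (only the returned hash would ever be written to the database):

```typescript
import bcrypt from "bcrypt"; // assumes the npm "bcrypt" package

// When the password is set: store only the hash function's output.
async function hashForStorage(password: string): Promise<string> {
  return bcrypt.hash(password, 12); // 12 is the cost factor; higher = slower
}

// When a login attempt arrives: hash the attempt the same way and compare.
// The plaintext can be discarded as soon as this returns.
async function checkPassword(attempt: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(attempt, storedHash);
}
```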

If your enemy is using brute force against you and trying every single thing, your best bet is to slow them down. That’s the thinking behind account lockouts and the design of functions like bcrypt. Running data through a hash function might be fast or slow, depending on the hash function. Hash functions have many applications. You can use them to confirm that large files haven’t been corrupted, and for that purpose it’s good for them to be fast. SHA256 would be a hash function suitable for that.
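Verifying a download is the kind of job where a fast hash is exactly right; a quick sketch using Node’s built-in crypto module:

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Fast is a feature here: hashing a multi-gigabyte file should not take long.
function fileChecksum(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}
```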

A common mistake is using a deliberately fast hash function, when a deliberately slow one is appropriate. Password storage is an unusual situation where we want the computation to be as slow and inefficient as practicable.

In the case of hackers who’ve compromised an account database, they have a table of usernames and strings like “$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy”. Cracking a password means making a guess, running it through the hash function, and getting the same string. If you use a complicated password, they have to try more passwords. That’s what you can do to slow them down. What organizations can do is choose a hash function that makes each individual check very time-consuming. It’s cryptography, so big numbers are involved. The “only” thing protecting the passwords of Ashley Madison users is that trying even a small fraction of the possible passwords is too time-consuming to be practical.
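Reduced to a sketch (again assuming the npm bcrypt package), offline cracking makes the division of labor plain: the size of the guess list is what the user controls, and the cost of each check is what the site controls:

```typescript
import bcrypt from "bcrypt"; // assumes the npm "bcrypt" package

// The attacker already has the stored hash, so lockouts and network delays
// are irrelevant. The only brakes left are the size of the guess list
// (your password's complexity) and the cost of each check
// (the site's choice of hash function).
function crack(storedHash: string, guesses: string[]): string | null {
  for (const guess of guesses) {
    if (bcrypt.compareSync(guess, storedHash)) {
      return guess; // with bcrypt, every one of these checks is deliberately slow
    }
  }
  return null;
}
```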

Consumers have all the necessary information to read about password storage best practices and pressure companies to use those practices. At least one website is devoted to the cause. It’s interesting that computers are forcing ordinary people to think about these things, and not just spies.

The Death of the Full Stack Developer

When I got started in computer security, back in 1995, there wasn’t much to it — but there wasn’t much to web applications themselves. If you wanted to be a web application developer, you had to know a few basic skills. These are the kinds of things a developer would need to build a somewhat complex website back in the day:

  • ISP/Service Provider
  • Switching and routing with ACLs
  • DNS
  • Telnet
  • *NIX
  • Apache
  • vi/Emacs
  • HTML
  • CGI/Perl/SSI
  • Berkeley Database
  • Images

It was a pretty long list of things to get started, but if you were determined and persevered, you could learn them in relatively short order. Ideally you might have someone who was good at networking and host security to help you out if you wanted to focus on the web side, but doing it all yourself wasn’t unheard of. It was even possible to be an expert in a few of these, though it was rare to find anyone who knew all of them.

Things have changed dramatically over the 20 years that I’ve been working in security. Now this is what a fairly common stack and provisioning technology might consist of:

  • Eclipse
  • Github
  • Docker
  • Jenkins
  • Cucumber
  • Gauntlt
  • Amazon EC2
  • Amazon AMI
  • SSH keys
  • Duosec 2FA
  • WAF
  • IDS
  • Anti-virus
  • DNS
  • DKIM
  • SPF
  • Apache
  • Relational Database
  • Amazon Glacier
  • PHP
  • Apparmor
  • Suhosin
  • WordPress CMS
  • WordPress Plugins
  • API for updates
  • API to anti-spam filter ruleset
  • API to merchant processor
  • Varnish
  • Stunnel
  • SSL/TLS Certificates
  • Certificate Authority
  • CDN
  • S3
  • JavaScript
  • jQuery
  • Google Analytics
  • Conversion tracking
  • Optimizely
  • CSS
  • Images
  • Sprites

Unlike before, there is literally no one on earth who could claim to understand every aspect of each of those things. A developer may be familiar with the concepts, but can’t master every one of them at once, especially given how quickly technologies change. And since no one can understand the whole list at once, we have seen the gradual death of the full-stack developer.

It stands to reason, then, that there has been a similar decline in the numbers of full-stack security experts. People may know quite a bit about a lot of these technologies, but when it comes down to it, there’s a very real chance that any single security person will become more and more specialized over time — it’s simply hard to avoid specialization, given the growing complexity of modern apps. We may, as a result, eventually see the death of the full-stack security person as well.

If that is indeed the case, where does this leave enterprises that need to build secure and operationally functional applications? It means that there will be more and more silos where people handle an ever-shrinking set of features and functionality in progressively greater depth. It means that companies that can augment security or operations in one or more areas will be adopted, because there will be literally no other choice; failing to draw on diverse, and often external, expertise in security and operations will ensure sec-ops failure.

At its heart, this is a result of economic forces – more code needs to be delivered, and there are fewer people who understand what it’s actually doing. So organizations outsource what they can’t know themselves, since there is too much for any one person to know about their own stack. This leads us back to the Internet Services Supply Chain problem as well – can you really trust your service providers when they have to trust other service providers, and so on? All of this highlights the need for better visibility into what is really being tested, as well as the need to find security that scales and to implement operational hardware and software that is secure by default.

Developers and Security Tools

A recent study from NC State found that “the two things that were most strongly associated with using security tools were peer influence and corporate culture.” As a former developer, and as someone who has reviewed the source code of countless web applications, I can say these tools are almost impossible for the average developer to use. Security tools are invariably written for security experts and consultants. Tools produce a huge percentage of false alarms – if you are lucky, you will comb through 100 false alarms to find one legitimate issue.

The false assumption here is that running a tool will result in better security. Peer pressure to do security doesn’t quite make sense because most developers are not trained in security. And since the average tool produces a large number of false alarms, and since the pressure to ship new features as quickly as possible is so high, there will never be enough time, training or background for the average developer to effectively evaluate their own security.

The “evangelist” model the study mentions does seem to work well among some of WhiteHat Security’s clients. Anecdotally, I know many organizations that will accept one volunteer per development group as a security “marshal” (something akin to a floor “fire marshal”). That volunteer receives special security training, but ultimately acts as a bridge between his or her individual development team and the security team.

Placing the burden of security entirely on the developers is unfair, as is making them choose between fixing vulnerabilities and shipping new code. One of the dirty secrets of technology is this: even though developers are often shouldered with the responsibility of security, risk and security are really business decisions. Security is also a process that goes far beyond development. Developers cannot be solely responsible for application security. Business analysts, architects, developers, quality assurance, security teams, and operations all play a critical role in managing technology risk. And above all, managers (including the C-level team) must provide direction, set levels of tolerable risk, and take ultimate responsibility for the decision to ship new code or fix vulnerabilities. That is, in the end, a business decision.

It Can Happen to Anyone

Earlier this summer, The Intercept published some details about the NSA’s XKEYSCORE program. Those details included some security issues around logging and authorization:

As hard as software developers may try, it’s nearly impossible to write bug-free source code. To compensate for this, developers often rely on multiple layers of security; if attackers can get through one layer, they may still be thwarted by other layers. XKEYSCORE appears to do a bad job of this.

When systems administrators log into XKEYSCORE servers to configure them, they appear to use a shared account, under the name “oper.” Adams notes, “That means that changes made by an administrator cannot be logged.” If one administrator does something malicious on an XKEYSCORE server using the “oper” user, it’s possible that the digital trail of what was done wouldn’t lead back to the administrator, since multiple operators use the account.

There appears to be another way an ill-intentioned systems administrator may be able to cover their tracks. Analysts wishing to query XKEYSCORE sign in via a web browser, and their searches are logged. This creates an audit trail, on which the system relies to ensure that users aren’t doing overly broad searches that would pull up U.S. citizens’ web traffic. Systems administrators, however, are able to run MySQL queries. The documents indicate that administrators have the ability to directly query the MySQL databases, where the collected data is stored, apparently bypassing the audit trail.

These are exactly the same kinds of problems that led to the Snowden leaks:

As a system administrator, Snowden was allowed to look at any file he wanted, and his actions were largely unaudited. “At certain levels, you are the audit,” said an intelligence official.

He was also able to access NSAnet, the agency’s intranet, without leaving any signature, said a person briefed on the postmortem of Snowden’s theft. He was essentially a “ghost user,” said the source, making it difficult to trace when he signed on or what files he accessed.

If he wanted, he would even have been able to pose as any other user with access to NSAnet, said the source.

The NSA obviously had the in-house expertise to design the system differently. Surely they know the importance of logging from their own experience hacking things and responding to breaches. How could this happen?

The most cynical explanation is that the point is not to log everything. Plausible deniability is a goal of intelligence agencies. Nobody can subpoena records that don’t exist. The simple fact of tracking an individual can be highly sensitive (e.g., foreign heads of state).

There’s also a simpler explanation: basic organizational problems. From the same NBC News article:

“It’s 2013 and the NSA is stuck in 2003 technology,” said an intelligence official.

Jason Healey, a former cyber-security official in the Bush Administration, said the Defense Department and the NSA have “frittered away years” trying to catch up to the security technology and practices used in private industry. “The DoD and especially NSA are known for awesome cyber security, but this seems somewhat misplaced,” said Healey, now a cyber expert at the Atlantic Council. “They are great at some sophisticated tasks but oddly bad at many of the simplest.”

In other words, lack of upgrades, “not invented here syndrome,” outsourcing with inadequate follow-up and accountability, and other familiar issues affect even the NSA. Very smart groups of people have these issues, even when they understand them intellectually.

Each individual department of the NSA probably faces challenges very similar to those of a lot of software companies: things need to be done yesterday, and done cheaper. New features are prioritized above technical debt. “It’s just an internal system, and we can trust our own people, anyway…”

Security has costs. Those costs include money, time, and convenience — making a system secure creates obstacles, so there’s always a temptation to ignore security “just this once,” “just for this purpose,” “just for now.” Security is a game in which the adversary tests your intelligence and creativity, yes; but most of all, the adversary tests your thoroughness and your discipline.

Conspiracy Theory and the Internet of Things

I came across this article about smart devices on Alternet, which tells us that “we are far from a digital Orwellian nightmare.” We’re told that worrying about smart televisions, smart phones, and smart meters is for “conspiracy theorists.”

It’s a great case study in not having a security mindset.

This is what David Petraeus said about the Internet of Things at the In-Q-Tel CEO summit in 2012, while he was head of the CIA:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as radio-frequency identification, sensor networks, tiny embedded servers, and energy harvesters—all connected to the next-generation Internet using abundant, low cost, and high-power computing—the latter now going to cloud computing, in many areas greater and greater supercomputing, and, ultimately, heading to quantum computing.

In practice, these technologies could lead to rapid integration of data from closed societies and provide near-continuous, persistent monitoring of virtually anywhere we choose. “Transformational” is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft. Taken together, these developments change our notions of secrecy and create innumerable challenges—as well as opportunities.

In-Q-Tel is a venture capital firm that invests “with the sole purpose of delivering these cutting-edge technologies to IC [intelligence community] end users quickly and efficiently.” For their purposes, “quickly” means three years.

It’s been more than 3 years since Petraeus made those remarks. “Is the CIA meeting its stated goals?” is a fair question. Evil space lizards are an absurd conspiracy theory; asking whether an agency is doing what its director publicly said it would do is not.

Smart Televisions

The concerns are confidently dismissed:

Digital Trends points out that smart televisions aren’t “always listening” as they are being portrayed in the media. In fact, such televisions are asleep most of the time, and are only awaken [sic] when they hear a pre-programmed phrase like “Hi, TV.” So, any conversation you may be having before waking the television is not captured or reported. In fact, when the television is listening, it informs the user it is in this mode by beeping and displaying a microphone icon. And when the television enters into listening mode, it doesn’t comprehend anything except a catalog of pre-programmed, executable commands.

Mistaken assumption: gadgets work as intended.

Here’s a Washington Post story from 2013:

The FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years, and has used that technique mainly in terrorism cases or the most serious criminal investigations, said Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, now on the advisory board of Subsentio, a firm that helps telecommunications carriers comply with federal wiretap statutes.

Logically speaking, how does the smart TV know it’s heard a pre-programmed phrase? The microphone must be on so that ambient sounds and the pre-programmed phrases can be compared. We already know the device can transmit data over the internet. The issue is whether or not data can be transmitted at the wrong time, to the wrong people. What if there were a simple bug that kept the microphone from shutting off once it’s turned on? That would be analogous to insufficient session expiration in a web app, which is pretty common.
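A deliberately contrived sketch of that failure mode, which does not reflect any vendor’s actual firmware, might look like this:

```typescript
// The wake word is supposed to open a short listening window; a bug that
// keeps that window from closing is the microphone equivalent of a web
// session that never expires.
const LISTEN_WINDOW_MS = 10_000; // intended: ten seconds

let listeningUntil = 0;

function onAudioFrame(frame: ArrayBuffer, now: number, heardWakeWord: boolean): void {
  if (heardWakeWord) {
    // A small mistake here -- say, milliseconds confused with seconds --
    // would extend the window by hours instead of seconds.
    listeningUntil = now + LISTEN_WINDOW_MS;
  }
  if (now < listeningUntil) {
    uploadToVoiceServer(frame); // only meant to happen right after the wake word
  }
}

function uploadToVoiceServer(frame: ArrayBuffer): void {
  // placeholder for the vendor's speech-recognition upload
}
```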

The author admits that voice data is being sent to servers advanced enough to detect regional dialects. A low-profile third party contractor has the ability to know whether someone with a different accent is in your living room:

With smart televisions, some information, like IP address and other stored data may be transmitted as well. According to Samsung, its speech recognition technology can also be used to better recognize regional dialects and accents and other things to enhance the user experience. To do all these things, smart television makers like Samsung must employ third-party applications and servers to help them decipher the information it takes in, but this information is encrypted during transmission and not retained or for sale, at least according to the company’s privacy policy.

Can we trust that the encryption is done correctly, and nobody’s stolen the keys? Can we trust that the third parties doing natural language processing haven’t been compromised?

Smart Phones

The Alternet piece has an anecdote of someone telling the author to “Never plug your phone in at a public place; they’ll steal all your information.” Someone can be technically unsophisticated but have the right intuitions. The man doesn’t understand that his phone broadcasts radio waves into the environment, so he has an inaccurate mental model of the threat. He knows that there is a threat.

Then this passage:

A few months back, a series of videos were posted to YouTube and Facebook claiming that the stickers affixed to cellphone batteries are transmitters used for data collection and spying. The initial video showed a man peeling a Near Field Communication transmitter off the wrapper on his Samsung Galaxy S4 battery. The person speaking on the video claims this “chip” allows personal information, such as photographs, text messages, videos and emails to be shared with nearby devices and “the company.” He recommended that the sticker be removed from the phone’s battery….

And that sticker isn’t some nefarious implant the phone manufacturer uses to spy on you; it’s nothing more than a coil antenna to facilitate NFC transmission. If you peel this sticker from your battery, it will compromise your smartphone and likely render it useless for apps that use NFC, like Apple Pay and Google Wallet.

As Ars Technica put it in 2012:

By exploiting multiple security weakness in the industry standard known as Near Field Communication, smartphone hacker Charlie Miller can take control of handsets made by Samsung and Nokia. The attack works by putting the phone a few centimeters away from a quarter-sized chip, or touching it to another NFC-enabled phone. Code on the attacker-controlled chip or handset is beamed to the target phone over the air, then opens malicious files or webpages that exploit known vulnerabilities in a document reader or browser, or in some cases in the operating system itself.

Here, the author didn’t imagine a scenario where someone might get a malicious device within a few centimeters of his phone. “Can I borrow your phone?” “Place all items from your pockets in the tray before stepping through the security checkpoint.” “Scan this barcode for free stuff!”

Smart Meters

Finally, the Alternet piece has this to say about smart meters:

In recent years, privacy activists have targeted smart meters, saying they collect detailed data about energy consumption. These conspiracy theorists are known to flood public utility commission meetings, claiming that smart meters can do a lot of sneaky things like transmit the television shows they watch, the appliances they use, the music they listen to, the websites they visit and their electronic banking use. They believe smart meters are the ultimate spying tool, making the electrical grid and the utilities that run it the ultimate spies.

Again, people can have the right intuitions about things without being technical specialists. That doesn’t mean their concerns are absurd:

The SmartMeters feature digital displays, rather than the spinning-usage wheels seen on older electromagnetic models. They track how much energy is used and when, and transmit that data directly to PG&E. This eliminates the need for paid meter readers, since the utility can immediately access customers’ usage records remotely and, theoretically, find out whether they are consuming, say, exactly 2,000 watts for exactly 12 hours a day.

That’s a problem, because usage patterns like that are telltale signs of indoor marijuana grow operations, which will often run air or water filtration systems round the clock, but leave grow lights turned on for half the day to simulate the sun, according to the Silicon Valley Americans for Safe Access, a cannabis users’ advocacy group.

What’s to stop PG&E from sharing this sensitive information with law enforcement? SmartMeters “pose a direct privacy threat to patients who … grow their own medicine,” says Lauren Vasquez, Silicon Valley ASA’s interim director. “The power company may report suspected pot growers to police, or the police may demand that PG&E turn over customer records.”

Even if you’re not doing anything of ambiguous legality, the first thing you do when you get home is probably turning on the lights. Different appliances use different amounts of power. By reporting power consumption at much more frequent intervals, smart meters can give away a lot about what’s going on in a house.
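As a toy illustration with invented thresholds (real load-disaggregation research goes much further), even crude rules start to narrate a household once the readings are frequent enough:

```typescript
// One reading per month reveals almost nothing; one reading every few
// seconds starts to describe the household. The thresholds are invented.
type Reading = { timestamp: number; watts: number };

function narrate(readings: Reading[]): string[] {
  return readings.map((r) => {
    if (r.watts < 200) return "baseline load only -- probably nobody home";
    if (r.watts > 2000) return "a large appliance is running (oven, dryer, grow lights?)";
    return "someone is home: lights and electronics on";
  });
}
```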


That’s not the same as what you were watching on TV, but the content of phone conversations isn’t all that’s interesting about them, either.

It’s hard to trust things.

Hammering at speed limits

Slate has a well-written article explaining an interesting new vulnerability called “Rowhammer.” The white paper is here, and the code repository is here. Here’s the abstract describing the basic idea:

As DRAM has been scaling to increase in density, the cells are less isolated from each other. Recent studies have found that repeated accesses to DRAM rows can cause random bit flips in an adjacent row, resulting in the so called Rowhammer bug. This bug has already been exploited to gain root privileges and to evade a sandbox, showing the severity of faulting single bits for security. However, these exploits are written in native code and use special instructions to flush data from the cache.

In this paper we present Rowhammer.js, a JavaScript-based implementation of the Rowhammer attack. Our attack uses an eviction strategy found by a generic algorithm that improves the eviction rate compared to existing eviction strategies from 95.2% to 99.99%. Rowhammer.js is the first remote software-induced hardware-fault attack. In contrast to other fault attacks it does not require physical access to the machine, or the execution of native code or access to special instructions. As JavaScript-based fault attacks can be performed on millions of users stealthily and simultaneously, we propose countermeasures that can be implemented immediately.

What’s interesting to me is how much computers had to advance to make this situation possible. First, RAM had to be miniaturized enough that adjacent memory cells can interfere with one another. And the RAM’s refresh rate had to be kept as low as possible, because refreshing memory uses up time and power.

Lots more had to happen before this was remotely exploitable via JavaScript. JavaScript wasn’t designed to quickly process binary data, or really even deal with binary data. It’s a high-level language without pointers or direct memory access. Semantically, JavaScript arrays aren’t contiguous blocks of memory like C arrays. They’re special objects whose keys happen to be integers. The third and fourth entries of an array aren’t necessarily next to each other in memory, and each item in an array might have a different type.

Typed Arrays were introduced for performance reasons, to facilitate working with binary data. They were especially necessary for allowing fast 3D graphics with WebGL. Typed Arrays do occupy adjacent memory cells, unlike normal arrays.
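A few lines of TypeScript show the contrast the attack relies on:

```typescript
// A regular array can hold anything, so the engine treats it as a general
// object; its elements are not guaranteed to sit side by side in memory.
const mixed: unknown[] = [3, "four", { five: 5 }];

// A typed array is a view over one contiguous ArrayBuffer: element i lives
// exactly i * 4 bytes from the start of the buffer. That predictability is
// what makes typed arrays fast, and it is part of what makes an attack like
// Rowhammer.js feasible from script.
const buffer = new ArrayBuffer(1024);   // 1 KB of raw, contiguous memory
const words = new Uint32Array(buffer);  // 256 four-byte slots over that buffer
words[0] = 0xdeadbeef;
words[1] = words[0] + 1;
```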

In other words, computers had to get faster, websites had to get fancier, and performance expectations had to go up.

Preventing the attack would kill performance at a hardware level: the RAM would have to spend up to 35% of its time refreshing itself. Short of hardware changes, the authors suggest that browser vendors implement tests for the vulnerability, then throttle JavaScript performance if the system is found to be vulnerable. JavaScript should have to be actively enabled by the user when they visit a website.

It seems unlikely that browser vendors are going to voluntarily undo all the work they’ve done, but they could. It would be difficult to explain to the “average person” that their computer needs to slow down because of some “Rowhammer” business. Rowhammer exists because both hardware and software vendors listened to consumer demands for higher performance. We could slow down, but it’s psychologically intolerable. The known technical solution is emotionally unacceptable to real people, so more advanced technical solutions will be attempted.

In a very similar way, we could make a huge dent in pollution and greenhouse gas emissions if we slowed down our cars and ships by half. For the vast majority of human history, we got by without anyone traveling anywhere close to 40 mph, let alone 80 mph.

When you really have information that people want, the safest thing is probably just to use typewriters. Russia already does, and Germany has considered it.

The solutions to security problems have to be technically correct and psychologically acceptable, and it’s the second part that’s hard.