Category Archives: Web Application Security

Complexity and Storage Slow Attackers Down

Back in 2013, WhiteHat founder Jeremiah Grossman forgot an important password, and Jeremi Gosney of Stricture Consulting Group helped him crack it. Gosney knows password cracking, and he’s up for a challenge, but he knew it’d be futile trying to crack the leaked Ashley Madison passwords. Dean Pierce gave it a shot, and Ars Technica provides some context.

Ashley Madison made mistakes, but password storage wasn’t one of them. This is what came of Pierce’s efforts:

After five days, he was able to crack only 4,007 of the weakest passwords, which comes to just 0.0668 percent of the six million passwords in his pool.

It’s like Jeremiah said after his difficult experience:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

Now imagine the average person who doesn’t even work in IT! Logging in to a website feels simpler than it is. It feels like, “The website checked my password, and now I’m logged in.”

Actually, “being logged in” means that the server gave your browser a secret number, AND your browser includes that number every time it makes a request, AND the server has a table of which number goes with which person, AND the server sends you the right stuff based on who you are. Usernames and passwords have to do with whether the server gives your browser the secret number in the first place.

It’s natural to assume that “checking your password” means that the server knows your password and compares it to what you typed in the login form. By now, everyone has heard that they’re supposed to have an impossible-to-remember password, but the reasons aren’t usually explained – people have their own problems besides the finer points of PBKDF2 vs. bcrypt.

If you’ve never had to think about it, it’s also natural to assume that hackers guessing your password are literally trying to log in as you. Even professional programmers can make that assumption when password storage is outside their area of expertise. Our clients’ developers sometimes object to findings about password complexity or other brute force issues because they throttle login attempts, lock accounts after three incorrect guesses, and so on. If attackers really were guessing through the login form, those defenses would matter: hackers would be greatly limited by how long it takes to make each request over the network. Account lockouts are probably enough to discourage a person’s acquaintances, but they are no protection against offline password cracking.

Password complexity requirements (include mixed case, include numbers, include symbols) are there to protect you once an organization has already been compromised (like Ashley Madison). In that scenario, password complexity is what you can do to help yourself. Proper password storage is what the organization can do. The key to that is in what exactly “checking your password” means.

When you set your password, the server ran it through something called a hash function and stored the result, not your password. When the server receives a login attempt, it runs what you typed through the same hash function and compares the result to what it stored. The server should only keep the password in memory long enough to run it through the hash function. The difference between secure and insecure password storage is in the choice of hash function.

If your enemy is using brute force against you and trying every single thing, your best bet is to slow them down. That’s the thinking behind account lockouts, and behind the design of functions like bcrypt. Hash functions have many applications, and running data through one might be fast or slow, depending on the function. You can use a hash to confirm that large files haven’t been corrupted, and for that purpose it’s good for the function to be fast. SHA256 is a hash function suitable for that.

A common mistake is using a deliberately fast hash function, when a deliberately slow one is appropriate. Password storage is an unusual situation where we want the computation to be as slow and inefficient as practicable.

In the case of hackers who’ve compromised an account database, they have a table of usernames and strings like “$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy”. Cracking the password means that they make a guess, run it through the hash function, and get the same string. If you use a complicated password, they have to try more passwords. That’s what you can do to slow them down. What organizations can do is to choose a hash function that makes each individual check very time-consuming. It’s cryptography, so big numbers are involved. The “only” thing protecting the passwords of Ashley Madison users is that trying a fraction of the possible passwords is too time-consuming to be practical.

Consumers have all the necessary information to read about password storage best practices and pressure companies to use those practices. At least one website is devoted to the cause. It’s interesting that computers are forcing ordinary people to think about these things, and not just spies.

The Death of the Full Stack Developer

When I got started in computer security, back in 1995, there wasn’t much to it — but there wasn’t much to web applications themselves. If you wanted to be a web application developer, you had to know a few basic skills. These are the kinds of things a developer would need to build a somewhat complex website back in the day:

  • ISP/Service Provider
  • Switching and routing with ACLs
  • DNS
  • Telnet
  • *NIX
  • Apache
  • vi/Emacs
  • HTML
  • CGI/Perl/SSI
  • Berkeley Database
  • Images

It was a pretty long list of things to get started, but if you were determined and persevered, you could learn them in relatively short order. Ideally you might have someone who was good at networking and host security to help you out if you wanted to focus on the web side, but doing it all yourself wasn’t unheard of. It was even possible to be an expert in a few of these, though it was rare to find anyone who knew all of them.

Things have changed dramatically over the 20 years that I’ve been working in security. Now this is what a fairly common stack and provisioning technology might consist of:

  • Eclipse
  • Github
  • Docker
  • Jenkins
  • Cucumber
  • Gauntlt
  • Amazon EC2
  • Amazon AMI
  • SSH keys
  • Duosec 2FA
  • WAF
  • IDS
  • Anti-virus
  • DNS
  • DKIM
  • SPF
  • Apache
  • Relational Database
  • Amazon Glacier
  • PHP
  • Apparmor
  • Suhosin
  • WordPress CMS
  • WordPress Plugins
  • API for updates
  • API to anti-spam filter ruleset
  • API to merchant processor
  • Varnish
  • Stunnel
  • SSL/TLS Certificates
  • Certificate Authority
  • CDN
  • S3
  • JavaScript
  • jQuery
  • Google Analytics
  • Conversion tracking
  • Optimizely
  • CSS
  • Images
  • Sprites

Unlike before, there is literally no one on earth who could claim to understand every aspect of each of those things. They may be familiar with the concepts, but they can’t know all of these things at once, especially given how quickly technologies change. And so we have seen the gradual death of the full-stack developer.

It stands to reason, then, that there has been a similar decline in the numbers of full-stack security experts. People may know quite a bit about a lot of these technologies, but when it comes down to it, there’s a very real chance that any single security person will become more and more specialized over time — it’s simply hard to avoid specialization, given the growing complexity of modern apps. We may eventually see the death of the full-stack security person as well as a result.

If that is indeed the case, where does this leave enterprises that need to build secure and operationally functional applications? It means there will be more and more silos, where people handle an ever-shrinking set of features and functionality in progressively greater depth. It means that companies that can augment security or operations in one or more areas will be adopted because there will be literally no other choice; failing to draw on diverse, and often external, security and operations expertise will ensure sec-ops failure.

At its heart, this is a result of economic forces: more code needs to be delivered, and there are fewer people who understand what it’s actually doing. So you outsource what you can’t know, since there is too much for any one person to know about their own stack. This leads us back to the Internet Services Supply Chain problem as well – can you really trust your service providers when they have to trust other service providers, and so on? All of this highlights the need for better visibility into what is really being tested, the need to find security that scales, and the need to implement operational hardware and software that is secure by default.

Developers and Security Tools

A recent study from NC State states that “the two things that were most strongly associated with using security tools were peer influence and corporate culture.” As a former developer, and as someone who has reviewed the source code of countless web applications, I can say these tools are almost impossible for the average developer to use. Security tools are invariably written for security experts and consultants. Tools produce a huge percentage of false alarms – if you are lucky, you will comb through 100 false alarms to find one legitimate issue.

The false assumption here is that running a tool will result in better security. Peer pressure to do security doesn’t quite make sense because most developers are not trained in security. And since the average tool produces a large number of false alarms, and since the pressure to ship new features as quickly as possible is so high, there will never be enough time, training or background for the average developer to effectively evaluate their own security.

The “evangelist” model the study mentions does seem to work well among some of the WhiteHat Security clients. Anecdotally, I know many organizations that will accept one volunteer per group as a security “marshal” (akin to a floor “fire marshal”). That volunteer receives special security training, but ultimately acts as a bridge between his or her individual development team and the security team.

Placing the burden of security entirely on developers is unfair, as is making them choose between fixing vulnerabilities and shipping new code. One of the dirty secrets of technology is this: even though developers are often shouldered with the responsibility of security, risk and security are really business decisions. Security is also a process that goes far beyond development. Developers cannot be solely responsible for application security. Business analysts, architects, developers, quality assurance, security teams, and operations all play a critical role in managing technology risk. Above all, managers (including the C-level team) must provide direction, set levels of tolerable risk, and take ultimate responsibility for the decision to ship new code or fix vulnerabilities. That is, ultimately, a business decision.

It Can Happen to Anyone

Earlier this summer, The Intercept published some details about the NSA’s XKEYSCORE program. Those details included some security issues around logging and authorization:

As hard as software developers may try, it’s nearly impossible to write bug-free source code. To compensate for this, developers often rely on multiple layers of security; if attackers can get through one layer, they may still be thwarted by other layers. XKEYSCORE appears to do a bad job of this.

When systems administrators log into XKEYSCORE servers to configure them, they appear to use a shared account, under the name “oper.” Adams notes, “That means that changes made by an administrator cannot be logged.” If one administrator does something malicious on an XKEYSCORE server using the “oper” user, it’s possible that the digital trail of what was done wouldn’t lead back to the administrator, since multiple operators use the account.

There appears to be another way an ill-intentioned systems administrator may be able to cover their tracks. Analysts wishing to query XKEYSCORE sign in via a web browser, and their searches are logged. This creates an audit trail, on which the system relies to ensure that users aren’t doing overly broad searches that would pull up U.S. citizens’ web traffic. Systems administrators, however, are able to run MySQL queries. The documents indicate that administrators have the ability to directly query the MySQL databases, where the collected data is stored, apparently bypassing the audit trail.

These are exactly the same kinds of problems that led to the Snowden leaks:

As a system administrator, Snowden was allowed to look at any file he wanted, and his actions were largely unaudited. “At certain levels, you are the audit,” said an intelligence official.

He was also able to access NSAnet, the agency’s intranet, without leaving any signature, said a person briefed on the postmortem of Snowden’s theft. He was essentially a “ghost user,” said the source, making it difficult to trace when he signed on or what files he accessed.

If he wanted, he would even have been able to pose as any other user with access to NSAnet, said the source.

The NSA obviously had the in-house expertise to design the system differently. Surely they know the importance of logging from their own experience hacking things and responding to breaches. How could this happen?

The most cynical explanation is that the point is not to log everything. Plausible deniability is a goal of intelligence agencies. Nobody can subpoena records that don’t exist. The simple fact of tracking an individual can be highly sensitive (e.g., foreign heads of state).

There’s also a simpler explanation: basic organizational problems. From the same NBC News article:

“It’s 2013 and the NSA is stuck in 2003 technology,” said an intelligence official.

Jason Healey, a former cyber-security official in the Bush Administration, said the Defense Department and the NSA have “frittered away years” trying to catch up to the security technology and practices used in private industry. “The DoD and especially NSA are known for awesome cyber security, but this seems somewhat misplaced,” said Healey, now a cyber expert at the Atlantic Council. “They are great at some sophisticated tasks but oddly bad at many of the simplest.”

In other words, lack of upgrades, “not invented here syndrome,” outsourcing with inadequate follow-up and accountability, and other familiar issues affect even the NSA. Very smart groups of people have these issues, even when they understand them intellectually.

Each individual department of the NSA probably faces challenges very similar to those of a lot of software companies: things need to be done yesterday, and done cheaper. New features are prioritized above technical debt. “It’s just an internal system, and we can trust our own people, anyway…”

Security has costs. Those costs include money, time, and convenience — making a system secure creates obstacles, so there’s always a temptation to ignore security “just this once,” “just for this purpose,” “just for now.” Security is a game in which the adversary tests your intelligence and creativity, yes; but most of all, the adversary tests your thoroughness and your discipline.

Conspiracy Theory and the Internet of Things

I came across this article about smart devices on Alternet, which tells us that “we are far from a digital Orwellian nightmare.” We’re told that worrying about smart televisions, smart phones, and smart meters is for “conspiracy theorists.”

It’s a great case study in not having a security mindset.

This is what David Petraeus said about the Internet of Things at the In-Q-Tel CEO summit in 2012, while he was head of the CIA:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as radio-frequency identification, sensor networks, tiny embedded servers, and energy harvesters—all connected to the next-generation Internet using abundant, low cost, and high-power computing—the latter now going to cloud computing, in many areas greater and greater supercomputing, and, ultimately, heading to quantum computing.

In practice, these technologies could lead to rapid integration of data from closed societies and provide near-continuous, persistent monitoring of virtually anywhere we choose. “Transformational” is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft. Taken together, these developments change our notions of secrecy and create innumerable challenges—as well as opportunities.

In-Q-Tel is a venture capital firm that invests “with the sole purpose of delivering these cutting-edge technologies to IC [intelligence community] end users quickly and efficiently.” Quickly means 3 years, for their purposes.

It’s been more than 3 years since Petraeus made those remarks. “Is the CIA meeting its stated goals?” is a fair question. Evil space lizards are an absurd conspiracy theory, for comparison.

Smart Televisions

The concerns are confidently dismissed:

Digital Trends points out that smart televisions aren’t “always listening” as they are being portrayed in the media. In fact, such televisions are asleep most of the time, and are only awaken [sic] when they hear a pre-programmed phrase like “Hi, TV.” So, any conversation you may be having before waking the television is not captured or reported. In fact, when the television is listening, it informs the user it is in this mode by beeping and displaying a microphone icon. And when the television enters into listening mode, it doesn’t comprehend anything except a catalog of pre-programmed, executable commands.

Mistaken assumption: gadgets work as intended.

Here’s a Washington Post story from 2013:

The FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years, and has used that technique mainly in terrorism cases or the most serious criminal investigations, said Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, now on the advisory board of Subsentio, a firm that helps telecommunications carriers comply with federal wiretap statutes.

Logically speaking, how does the smart TV know it’s heard a pre-programmed phrase? The microphone must be on, so that ambient sounds can be compared against the pre-programmed phrases. We already know the device can transmit data over the internet. The issue is whether data can be transmitted at the wrong time, to the wrong people. What if there were a simple bug that kept the microphone from shutting off once it’s turned on? That would be analogous to insufficient session expiration in a web app, which is pretty common.

The author admits that voice data is being sent to servers advanced enough to detect regional dialects. A low-profile third party contractor has the ability to know whether someone with a different accent is in your living room:

With smart televisions, some information, like IP address and other stored data may be transmitted as well. According to Samsung, its speech recognition technology can also be used to better recognize regional dialects and accents and other things to enhance the user experience. To do all these things, smart television makers like Samsung must employ third-party applications and servers to help them decipher the information it takes in, but this information is encrypted during transmission and not retained or for sale, at least according to the company’s privacy policy.

Can we trust that the encryption is done correctly, and nobody’s stolen the keys? Can we trust that the third parties doing natural language processing haven’t been compromised?

Smart Phones

The Alternet piece has an anecdote of someone telling the author to “Never plug your phone in at a public place; they’ll steal all your information.” Someone can be technically unsophisticated but have the right intuitions. The man doesn’t understand that his phone broadcasts radio waves into the environment, so he has an inaccurate mental model of the threat. He knows that there is a threat.

Then this passage:

A few months back, a series of videos were posted to YouTube and Facebook claiming that the stickers affixed to cellphone batteries are transmitters used for data collection and spying. The initial video showed a man peeling a Near Field Communication transmitter off the wrapper on his Samsung Galaxy S4 battery. The person speaking on the video claims this “chip” allows personal information, such as photographs, text messages, videos and emails to be shared with nearby devices and “the company.” He recommended that the sticker be removed from the phone’s battery….

And that sticker isn’t some nefarious implant the phone manufacturer uses to spy on you; it’s nothing more than a coil antenna to facilitate NFC transmission. If you peel this sticker from your battery, it will compromise your smartphone and likely render it useless for apps that use NFC, like Apple Pay and Google Wallet.

As Ars Technica put it in 2012:

By exploiting multiple security weakness in the industry standard known as Near Field Communication, smartphone hacker Charlie Miller can take control of handsets made by Samsung and Nokia. The attack works by putting the phone a few centimeters away from a quarter-sized chip, or touching it to another NFC-enabled phone. Code on the attacker-controlled chip or handset is beamed to the target phone over the air, then opens malicious files or webpages that exploit known vulnerabilities in a document reader or browser, or in some cases in the operating system itself.

Here, the author didn’t imagine a scenario where someone might get a malicious device within a few centimeters of his phone. “Can I borrow your phone?” “Place all items from your pockets in the tray before stepping through the security checkpoint.” “Scan this barcode for free stuff!”

Smart Meters

Finally, the Alternet piece has this to say about smart meters:

In recent years, privacy activists have targeted smart meters, saying they collect detailed data about energy consumption. These conspiracy theorists are known to flood public utility commission meetings, claiming that smart meters can do a lot of sneaky things like transmit the television shows they watch, the appliances they use, the music they listen to, the websites they visit and their electronic banking use. They believe smart meters are the ultimate spying tool, making the electrical grid and the utilities that run it the ultimate spies.

Again, people can have the right intuitions about things without being technical specialists. That doesn’t mean their concerns are absurd:

The SmartMeters feature digital displays, rather than the spinning-usage wheels seen on older electromagnetic models. They track how much energy is used and when, and transmit that data directly to PG&E. This eliminates the need for paid meter readers, since the utility can immediately access customers’ usage records remotely and, theoretically, find out whether they are consuming, say, exactly 2,000 watts for exactly 12 hours a day.

That’s a problem, because usage patterns like that are telltale signs of indoor marijuana grow operations, which will often run air or water filtration systems round the clock, but leave grow lights turned on for half the day to simulate the sun, according to the Silicon Valley Americans for Safe Access, a cannabis users’ advocacy group.

What’s to stop PG&E from sharing this sensitive information with law enforcement? SmartMeters “pose a direct privacy threat to patients who … grow their own medicine,” says Lauren Vasquez, Silicon Valley ASA’s interim director. “The power company may report suspected pot growers to police, or the police may demand that PG&E turn over customer records.”

Even if you’re not doing anything legally ambiguous, the first thing you do when you get home is probably to turn the lights on. Different appliances use different amounts of power. By reporting power consumption at frequent intervals, smart meters can give away a lot about what’s going on inside a house.
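
A toy sketch shows why the reporting interval matters. The appliance “signatures” and the detection threshold here are invented for illustration, not taken from any real meter:

```javascript
// Hypothetical appliance signatures, in watts.
const signatures = { kettle: 2000, fridge: 150, tv: 100 };

// With one reading per month, only the total is visible. With one
// reading per minute, step changes in consumption give appliances away.
function detectEvents(perMinuteWatts) {
  const events = [];
  for (let i = 1; i < perMinuteWatts.length; i++) {
    const jump = perMinuteWatts[i] - perMinuteWatts[i - 1];
    for (const [name, watts] of Object.entries(signatures)) {
      // A jump close to a known signature suggests that appliance
      // just switched on.
      if (Math.abs(jump - watts) < 25) events.push({ minute: i, name });
    }
  }
  return events;
}
```

The finer-grained the readings, the more of this kind of inference becomes possible, which is exactly the privacy activists’ point.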


That’s not the same as what you were watching on TV, but the content of phone conversations isn’t all that’s interesting about them, either.

It’s hard to trust things.

Hammering at speed limits

Slate has a well-written article explaining an interesting new vulnerability called “Rowhammer.” The white paper is here, and the code repository is here. Here’s the abstract describing the basic idea:

As DRAM has been scaling to increase in density, the cells are less isolated from each other. Recent studies have found that repeated accesses to DRAM rows can cause random bit flips in an adjacent row, resulting in the so called Rowhammer bug. This bug has already been exploited to gain root privileges and to evade a sandbox, showing the severity of faulting single bits for security. However, these exploits are written in native code and use special instructions to flush data from the cache.

In this paper we present Rowhammer.js, a JavaScript-based implementation of the Rowhammer attack. Our attack uses an eviction strategy found by a generic algorithm that improves the eviction rate compared to existing eviction strategies from 95.2% to 99.99%. Rowhammer.js is the first remote software-induced hardware-fault attack. In contrast to other fault attacks it does not require physical access to the machine, or the execution of native code or access to special instructions. As JavaScript-based fault attacks can be performed on millions of users stealthily and simultaneously, we propose countermeasures that can be implemented immediately.

What’s interesting to me is how much computers had to advance to make this situation possible. First, RAM had to be miniaturized enough that adjacent memory cells can interfere with one another. The RAM’s refresh rate also had to be kept as low as possible, because refreshing memory uses up time and power.

Lots more had to happen before this was remotely exploitable via JavaScript. JavaScript wasn’t designed to quickly process binary data, or really even deal with binary data. It’s a high-level language without pointers or direct memory access. Semantically, JavaScript arrays aren’t contiguous blocks of memory like C arrays. They’re special objects whose keys happen to be integers. The third and fourth entries of an array aren’t necessarily next to each other in memory, and each item in an array might have a different type.

Typed Arrays were introduced for performance reasons, to facilitate working with binary data. They were especially necessary for allowing fast 3D graphics with WebGL. Typed Arrays do occupy adjacent memory cells, unlike normal arrays.
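
The contrast is easy to see in a few lines. Nothing here is specific to Rowhammer.js; it just shows why typed arrays give predictable, contiguous memory:

```javascript
// A normal array is a dictionary in disguise: mixed types, holes,
// no guarantee of adjacent storage.
const normal = [];
normal[0] = 3.14;
normal[1] = 'a string';
normal[1000] = { anything: true }; // sparse: entries 2..999 don't exist

// A typed array is a view over one contiguous buffer of raw bytes,
// which is what makes it fast.
const buffer = new ArrayBuffer(8 * 4); // room for 4 doubles, back to back
const typed = new Float64Array(buffer);
typed[0] = 3.14;
// Assigning a string to typed[1] would be coerced to NaN;
// only 64-bit floats fit in this view.
```

That guarantee of adjacency is exactly what a Rowhammer-style attack needs: a way to repeatedly touch memory at known, neighboring addresses.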

In other words, computers had to get faster, websites had to get fancier, and performance expectations had to go up.

Preventing the attack would kill performance at the hardware level: the RAM would have to spend up to 35% of its time refreshing itself. Short of hardware changes, the authors suggest that browser vendors implement tests for the vulnerability and throttle JavaScript performance if the system is found to be vulnerable, or that JavaScript have to be actively enabled by the user when they visit a website.

It seems unlikely that browser vendors are going to voluntarily undo all the work they’ve done, but they could. It would be difficult to explain to the “average person” that their computer needs to slow down because of some “Rowhammer” business. Rowhammer exists because both hardware and software vendors listened to consumer demands for higher performance. We could slow down, but it’s psychologically intolerable. The known technical solution is emotionally unacceptable to real people, so more advanced technical solutions will be attempted.

In a very similar way, we could make a huge dent in pollution and greenhouse gas emissions if we slowed our cars and ships down by half. For the vast majority of human history, nobody could travel anywhere close to 40 mph, let alone 80 mph, and we got by.

When you really have information that people want, the safest thing is probably just to use typewriters. Russia already does, and Germany has considered it.

The solutions to security problems have to be technically correct and psychologically acceptable, and it’s the second part that’s hard.

Security Pictures

Security pictures are used in a multitude of web applications to add an extra step to the login process. But are these security pictures being used properly? Could they actually aid attackers? Such questions passed through my mind while testing an application whose login process relied on security pictures to provide an extra layer of security.

I was performing a business logic assessment for an insurance application that used security pictures as part of its login process, and something seemed off. The first step was to enter your username; if the username was found in the database, you would be presented with your security picture (e.g., a dog, cat, or iguana). If the username was not in the database, a message was displayed saying that you hadn’t set up your security picture yet. Besides the clear potential for a brute force attack on usernames, another vulnerability was hiding here – you could view other users’ security pictures just by guessing usernames in the first step.
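
The flaw boils down to a first login step whose response depends on whether the username exists. This is a simplified sketch of the logic, not the client’s actual code:

```javascript
// A stand-in for the application's user database.
const users = new Map([
  ['alice', { picture: 'iguana.png' }],
  ['bob',   { picture: 'dog.png' }],
]);

// Flawed: the response tells an attacker two things at once --
// whether the username exists, and that user's security picture.
function firstLoginStep(username) {
  const user = users.get(username);
  if (!user) {
    return { message: "You haven't set up your security picture yet." };
  }
  return { picture: user.picture };
}
```

An attacker scripting this step against the live site harvests both valid usernames and the pictures needed for a convincing phishing page, which is exactly the problem described below.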

Before deciding how to classify the possible vulnerability in my assessment, I had to do some quick research on a couple of topics: what are security pictures used for, and how do other applications use them effectively?

I had always wondered what extra security the picture added. How could a picture of an iguana I chose protect me from danger, or add an extra layer of security when I log in? Security pictures are mainly used to protect users from phishing attacks. For example, if an attacker tries to reproduce a banking login screen, a target user who is accustomed to seeing an iguana picture before entering his or her password should pause for a moment and notice that something is not right, since Iggy isn’t there anymore. The absence of the security picture produces that mental pause, and in most cases the user will not enter their password.

After finding out the true purpose of security pictures, I had to see how other applications use them in a less broken way. So I visited my bank’s website and entered my username, but instead of having my security picture displayed right away, I was asked to answer my security question. Only once the secret answer was entered was my security picture displayed above the password input field. This approach to using a security picture was secure.

What had seemed off at the beginning was this: because attackers can harvest users’ security pictures with a brute force attack, they can go a step further and use the security pictures of target users to create an even stronger phishing attack. This enhanced phishing attack would reassure victims that they are on the right website, because their security picture is there as usual.

Now that it was clear the finding was indeed a vulnerability, I had to think about how to classify it and what score to assign. I classified it as Abuse of Functionality, since WhiteHat Security defines Abuse of Functionality as:

“Abuse of Functionality is an attack technique that uses a web site’s own features and functionality to attack itself or others. Abuse of Functionality can be described as the abuse of an application’s intended functionality to perform an undesirable outcome. These attacks have varied results such as consuming resources, circumventing access controls, or leaking information. The potential and level of abuse will vary from web site to web site and application to application. Abuse of functionality attacks are often a combination of other attack types and/or utilize other attack vectors.”

In this case, an attacker could use the application’s own authentication functionality to attack other users, combining the results of a brute-force attack with the harvested security pictures to create a powerful phishing attack. For scoring, I used Impact and Likelihood, each given a low, medium, or high value. Impact measures the potential damage a vulnerability inflicts; Likelihood estimates how likely the vulnerability is to be exploited. In terms of Likelihood, I rated this a medium, because setting up the phishing attack is time-consuming: you first have to brute-force valid usernames, then pick specific victims from among them. As for Impact, I rated it high, because once the phishing attack is sent, the victim would most likely lose his or her credentials.

Security pictures can indeed add an extra layer of security to your application’s login process. However, put on your black hat for a moment and ask: how could a hacker use your own security measures against the application? As shown here, sometimes the medicine can be worse than the disease.

Why is Passive Mixed Content so serious?

One of the most important tools in web security is Transport Layer Security (TLS). It not only protects sensitive information in transit, but also verifies that the content has not been modified. The user can be confident that content delivered via HTTPS is exactly what the website sent, and can exchange sensitive information with the website secure in the knowledge that it won’t be altered or intercepted. However, this increase in security comes with increased overhead. It is tempting to protect only the content that is obviously sensitive, but every resource loaded on a secure page should be similarly protected, not just the ones containing secrets.

Most web security professionals agree that active content — JavaScript, Flash, etc. — should only be sourced in via HTTPS. After all, an attacker can use a Man-in-the-Middle attack to replace non-secure content on the fly. This is clearly a security risk. Active content has access to the content of the Document Object Model (DOM), and the means to exfiltrate that data. Any attack that is possible with Cross-Site Scripting is also achievable using active mixed content.

The controversy begins when the discussion turns to passive content — images, videos, etc. It may be difficult to imagine how an attacker could inflict anything worse than mild annoyance by replacing such content. There are two attack scenarios which are commonly cited.

An unsophisticated attacker could, perhaps, damage the reputation of a company by including offensive or illegal content. However, the attack would only be effective while the attacker maintains a privileged position on the network. If the user moves to a different Wi-Fi network, the attacker is out of the loop. It would be easy to demonstrate to the press or law enforcement that the company is not responsible, so the impact would be negligible.
If a particular browser’s image parsing process is vulnerable, a highly sophisticated attacker can deliver a specially crafted, malformed file using a passive mixed content vulnerability. In this case, the delivery method is incidental, and the vulnerability lies with the client, rather than the server. This attack requires advanced intelligence about the specific target’s browser, and an un-patched or unreported vulnerability in that specific browser, so the threat is negligible.
However, there is an attack scenario that requires little technical sophistication, yet may result in a complete account takeover. First, assume the attacker has established a privileged position in the network by spoofing a public Wi-Fi access point. The attacker can now return any response to non-encrypted requests coming over the air. From this position, the attacker answers a non-encrypted request for the passive content on the target site with a 302 “Found” temporary redirect. The Location header of this response points to a resource under the attacker’s control, configured to respond with a 401 “Unauthorized” response containing a WWW-Authenticate header with a value of Basic realm="Please confirm your credentials." The user’s browser will halt loading the page and display an authentication prompt. Some percentage of users will inevitably enter their credentials, which are submitted directly to the attacker. Even worse, this attack can be automated and generalized to such a degree that an attacker could use commodity hardware to set up a fake Wi-Fi hotspot in a public place and harvest passwords for any number of sites.
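The two responses involved can be sketched as plain HTTP. This is an illustration of the message flow just described, not working attack tooling; the attacker-controlled URL is hypothetical.

```python
def redirect_response(attacker_url: str) -> str:
    # Step 1: answer the intercepted plain-HTTP request for the passive
    # resource with a temporary redirect to a server the attacker controls.
    return ("HTTP/1.1 302 Found\r\n"
            f"Location: {attacker_url}\r\n"
            "Content-Length: 0\r\n"
            "\r\n")

def auth_prompt_response() -> str:
    # Step 2: the attacker's server replies with a 401 whose
    # WWW-Authenticate header makes the victim's browser display a
    # native credential prompt over the page being loaded.
    return ("HTTP/1.1 401 Unauthorized\r\n"
            'WWW-Authenticate: Basic realm="Please confirm your credentials."\r\n'
            "Content-Length: 0\r\n"
            "\r\n")
```

Note that nothing here touches the HTTPS page itself; the attacker only controls the one resource that was requested over plain HTTP.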

Protecting against this attack is relatively simple. For users: be very suspicious of any unexpected login prompt, especially if it doesn’t look like part of the website. For developers: source in all resources using HTTPS on every secure page.
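In addition to sourcing everything via HTTPS, developers can send response headers that close off mixed content at the policy level. The header values below are standard directives; the helper function and its merge behavior are just an illustrative sketch.

```python
# upgrade-insecure-requests asks the browser to rewrite http:// subresource
# URLs to https:// before fetching; Strict-Transport-Security (HSTS) pins
# the whole site to HTTPS for future visits (here, one year).
SECURITY_HEADERS = {
    "Content-Security-Policy": "upgrade-insecure-requests",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def with_security_headers(headers):
    # Merge the policy headers into an existing response-header dict,
    # letting the security headers win on any name collision.
    merged = dict(headers)
    merged.update(SECURITY_HEADERS)
    return merged
```

With these in place, a plain-HTTP image URL left in a template is upgraded by the browser rather than fetched insecurely, so the attacker never sees a request to intercept.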

Web Security for the Tech Impaired: What is two factor authentication?

You may have heard the terms ‘two-factor’ or ‘multi-factor’ authentication. Even if you haven’t, chances are you’ve experienced it without knowing. The interesting thing is that two-factor authentication is one of the best ways to protect your accounts from being hacked.

So what exactly is it? Well, traditional authentication asks for your username and password. This is an example of a system that relies on one factor — something you KNOW — as the sole authentication method for your account. If another person knows your username and password, they can also log in to your account. This is how many account compromises happen: a hacker simply runs through possible passwords for an account they want to hack and eventually guesses the correct one, in what is known as a ‘brute force’ attack.

In two-factor authentication, we take the concept of security a step further. Instead of relying only on something that you KNOW, we also rely on something that you HAVE in your possession. You may have already been doing this without realizing it — have you logged into your bank or credit card site only to see a message like ‘This is the first time you have logged in from this machine; we have sent an authentication code to the cell phone number on file for your account — please enter that code along with your password’, or words to that effect? That is an example of a site using two-factor authentication. By texting a code to the cell phone number they have on file, they confirm that you are who you say you are, relying not only on something you KNOW but also on something you HAVE. If an attacker were to steal or guess your username and password, they still could not log in to your account without that code; instead, you would receive a text out of the blue for a login you didn’t attempt, at which point you would know someone is probably trying to break into your account.

This system works with anything you have. Text messages are the primary means of two-factor authentication, since most people have easy access to a cell phone and it’s easy to read the code and enter it on the site. The system works just as well with a phone call that provides a code, or with an email. Anything that you HAVE will work for two-factor authentication. You may notice that most sites only ask you for this information once; typically a site asks the very first time you log in from a given device (be it mobile, desktop, or tablet). After that, the site remembers which devices you’ve signed on with and allows those devices to log in without requiring the second factor, the auth code. If you typically log in from your home computer and then remember you need to check your balance at work, the site will ask you to log in with two-factor authentication because it does not recognize that device. The thinking is that a hacker is unlikely to break into your account by breaking into your house and using your own computer to log in.
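Those texted codes are usually short-lived one-time passwords. Authenticator apps generate the same kind of code locally using the TOTP algorithm (RFC 6238): a shared secret plus the current time, run through HMAC. A minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # How many 30-second windows have elapsed since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and your phone share the secret once (the QR code you scan when enrolling); after that, both sides can compute the same six-digit code for the current 30-second window without any network traffic at all.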

Now you may be saying, ‘That sounds great! Where do I sign up?’ Unfortunately, not all systems support two-factor authentication, though the industry is slowly moving that way. Sometimes it isn’t enabled by default but is an option in a ‘settings’ or ‘account’ menu on the site. There are online resources that track which popular sites support two-factor auth. I highly recommend turning it on for any account that supports it. Typically it’s quick and easy to do, and it will make your accounts far more secure than ever before.

#HackerKast 43: Ashley Madison Hacked, Firefox Tracking Services and Cookies, HTML5 Malware Evasion Techniques, Miami Cops Use Waze

Hey everybody! Welcome to another HackerKast. Let’s get right to it!

We had to start off with the big story of the week, which was that Ashley Madison got hacked. For those of you fortunate enough to not know what Ashley Madison is, it is a dating website dedicated to members who are in relationships and looking to have affairs. This breach was a twist on most others, as the hacker is threatening to release all of the stolen data unless the website shuts its doors for good. Ashley Madison’s upcoming IPO could also be derailed now that millions of users’ data are no longer private. Our friend Troy Hunt also posted about a business logic flaw that allowed harvesting of registered email addresses via the forgot-password functionality, without relying on the leaked breach data at all.

Next, in browser news, Robert was looking at an about:config setting in Mozilla Firefox that can turn off tracking services and cookies. Studies that looked into this measured that, with tracking blocked, load time went down by 44% and bandwidth usage by 30%. The flag is a small win for privacy: it still leaks user info to Google, but not to many other sites. It’s not a perfect option, since various browser add-ons do a better job, but this one is baked into Firefox. The fact that bandwidth and load time improved so drastically says a lot about how much tracking content pages carry.

In related news, an Apple iAd executive left Apple and made some noise on his way out. He seemed frustrated that Apple has tons of user data but, because it respects some level of privacy, is not living up to its full advertising potential. That is good news for those of us who care about privacy. Where it gets worse is that he left for a company called Drawbridge, which is focused on deanonymizing users based on data such as shared Wi-Fi networks and unique machine IDs.

I liked this next story, since it’s a creative business logic issue, and those are always my favorite. The issue involves the mobile GPS directions app Waze. Waze uses crowdsourcing to provide real-time traffic data, helping reroute users around jams more accurately and quickly. The other major use of Waze is reporting cops and speed traps on the road. It turns out the cops have caught wind of this, and I’m assuming it has hit their fine-based bottom line, because hundreds of cops in Miami downloaded the app and started submitting fake cop reports. By doing this, the information becomes much less reliable for users, and the cops can probably catch more people. We discussed the ethics here and whether Google (which owns Waze) would want to go toe-to-toe on this issue.

Next up, we touched on this year’s State of Application Security Report, put out annually by the SANS Institute. We didn’t go through the whole thing due to time constraints, but it is full of interesting data as usual. The report is broken into two major sections, Builders and Defenders. Some of the defenders’ major pain points were asset management — such as finding every Internet-facing web application, which is always a challenge — and the risk of breaking a production app while trying to fix a security issue. The builders’ concerns were basically the inverse, focused on delivering features and time to market. Builders also feel they lack security knowledge, which has been a known issue for a long time.

Last up was some straight-up web app research, which is always a lot of fun. Research recently came out, expanding on earlier work, showing that drive-by download malware can avoid detection by using some common HTML5 APIs. One popular technique for delivering malware to a user’s machine is to chunk up the payload during download and then reassemble it locally. Much malware detection has caught up with this approach and now flags it. Yet the same malware that would be detected using traditional methods went undetected when delivered using a combination of HTML5 techniques such as localStorage and Web Workers. Great research! Looking forward to follow-ups on this.
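The core split-and-reassemble idea can be illustrated in a few lines. (The actual research used JavaScript APIs such as localStorage and Web Workers to hold and rebuild the pieces in the browser; this Python sketch only shows the chunking step that signature-based scanners key on.)

```python
def chunk(payload: bytes, size: int):
    # Split the payload into pieces small enough that no single
    # downloaded fragment matches a detection signature.
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def reassemble(pieces) -> bytes:
    # On the client, the fragments are stitched back together
    # (in the research, out of localStorage, inside a Web Worker).
    return b"".join(pieces)
```

Scanners that inspect each network response in isolation see only meaningless fragments; the complete payload exists only after client-side reassembly.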

Thanks for listening! Check us out on iTunes if you want an audio-only version on your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay


Ashley Madison Hacked
Your affairs were never discreet – Ashley Madison always disclosed customer identities
Firefox’s tracking cookie blacklist reduces website load time by 44%
Former iAd exec leaves Apple, suggests company platform is held back by user data privacy policy
Miami Cops Actively Working to Sabotage Waze with Fake Submissions
2015 State of Application Security: Closing the Gap
Researchers prove HTML5 can be used to hide malware

Notable stories this week that didn’t make the cut:

Self Driving Cars Could Destroy Fine Based Economy
Hackers Remotely Kill a Jeep on the Highway—With Me in It
Redstar OS Watermarking
The Death of the SIM card is Nigh
How I got XSS’d by my ad network
FTC Takes Action Against LifeLock for Alleged Violations of 2010 Order
OpenSSH Keyboard Interactive Authentication Brute Force Vuln
NSA releases new security tool