Category Archives: Vulnerabilities

BYOD Makes Application Security a Matter of National Security

Several publications have commented on a new study from Harvard’s Berkman Center for Internet and Society. The study was called “Don’t Panic: Making Progress on the ‘Going Dark’ Debate.”

Apple and others have designed products with so-called “end-to-end encryption,” meaning that a message between two users can only be decrypted by those users. In comparison, text messages are unencrypted by default, making them available to federal law enforcement or intelligence agencies that request them. The idea of end-to-end encryption is that companies would only be able to provide metadata and ciphertext. They’d be physically unable to provide the requested plaintext.

In response, government officials invoked terrorism. The talking point is that losing access to all communications at all times means the secret investigations keeping us safe would “go dark,” and the terrorists could plot in secret. Who knows what they’d do?

Unfortunately, the law hasn’t kept pace with technology, and this disconnect has created a significant public safety problem. We call it “Going Dark,” and what it means is this: Those charged with protecting our people aren’t always able to access the evidence we need to prosecute crime and prevent terrorism even with lawful authority. We have the legal authority to intercept and access communications and information pursuant to court order, but we often lack the technical ability to do so…And both real-time communication and stored data are increasingly encrypted.

The Harvard study sought to refute the talking point. It argues that “going dark” misrepresents the overall situation. As discussed in an earlier post, the government has literally invested in using the “Internet of Things” for surveillance. That effort is likely to open up so many opportunities for remotely activating microphones, cameras, and other sensors that a few encrypted texts won’t make much difference. Furthermore, using cryptography is difficult, and the business models of many companies rely on access to the contents of users’ communications. For example, Google scans the contents of emails in order to provide targeted advertising; encrypting all emails would be self-defeating for it, as a business.

Those are the main conclusions of the report, which is the result of a series of discussions with many participants. The signatories of the report endorse its “general viewpoints and judgments,” without agreeing on every detail. The more interesting viewpoints couldn’t be endorsed by everybody, and those appear in a set of three appendices. The appendices have gotten less attention than the main findings, and they come closer to arguing that things should go dark.

The first is by Susan Landau, who made an important point about BYOD (Bring Your Own Device):

Each terrorist attack grabs headlines, but the insidious theft of U.S. intellectual property – software, business plans, designs for airplanes, automobiles, pharmaceuticals, etc. – by other nations does not. The latter is the real national-security threat and a strong reason for national policy to favor ubiquitous use of encryption.

There was an era when Blackberrys were the communication device of choice for the corporate world; these devices, unlike the recent iPhones and Androids, can provide cleartext of the communications to the phone’s owner (the corporation for whom the user works). Thus businesses favored Blackberrys.

But apps drive the phone business. With the introduction of iPhones and Androids, consumers voted with their hands. People don’t like to carry two devices, and users choose to use a single consumer device for all communications. We have moved to a world of BYOD. In some instances, e.g., jobs in certain government agencies, finance, and the Defense Industrial Base, the workplace can require that work communications occur only over approved devices. But such control is largely ineffective in most work situations. So instead of Research in Motion developing a large consumer user base, the company lost market share as employees forced businesses to accept their use of personal devices for corporate communications. Thus access to U.S. intellectual property lies not only on corporate servers – which may or may not be well protected – but on millions of private communication devices.

In other words, national security may depend on the security of communications channels also used by terrorists. Math works the same for everyone, for better or worse.

Landau goes on to imply that corporate application security is, in essence, national security:

There are, after all, other ways of going after communications content than providing law enforcement with “exceptional access” to encrypted communications. These include using the existing vulnerabilities present in the apps and systems of the devices themselves. While such an approach makes investigations more expensive, this approach is a tradeoff enabling the vast majority of communications to be far more secure.

In his appendix to the Harvard paper, Bruce Schneier makes a complementary point:

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants in to your computer, they’re in. If they’re not, it’s because you’re not high enough on their priority list to bother with. Widespread encryption forces the listener – whether a foreign government, criminal, or terrorist – to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

As always, as the NSA understands very well, the issue is defense in depth. Even with the best encryption, it’s still possible for an attacker to guess your key. Risk management is understanding the difference between what’s possible and what’s probable, and taking steps to make problems less probable. Attackers prefer scenarios where success is probable.

NSA Directorates

An earlier post made the point that security problems can come from subdivisions of an organization pursuing incompatible goals. In the Cold War, for example, lack of coordination between the CIA and the State Department allowed the KGB to identify undercover agents.

The Guardian reports that the NSA is reorganizing to address this issue. Previously, its offensive and defensive functions were carried out by two “directorates”: the Signals Intelligence Directorate and the Information Assurance Directorate, respectively. Now, the two directorates will merge.

It seems to be a controversial decision:

Merging the two departments goes against the recommendation of some computer security experts, technology executives and the Obama administration’s surveillance reform commission, all of which have argued that those two missions are inherently contradictory and need to be further separated.

The NSA could decide not to tell a tech company to patch a security flaw, they argue, if it knows the flaw could be used to hack into a targeted machine. This could leave consumers at risk.

It’s doubtful that the NSA considers consumer protection part of its main objectives. This is how the Information Assurance Directorate describes its own purpose:

IAD delivers mission enhancing cyber security technologies, products, and services that enable customers and clients to secure their networks; trusted engineering solutions that provide customers with flexible, timely and risk sensitive security solutions; as well as, traditional IA engineering and fielded solutions support.

As explained here, “customer” is NSA jargon for “the White House, the State Department, the CIA, the US mission to the UN, the Defense Intelligence Agency and others.” It doesn’t refer to “customers” in the sense of citizens doing business with companies.

Simultaneously patching and exploiting the same vulnerabilities seems like an inefficient use of agency resources, unless it’s important to keep up appearances. After the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG) backdoor and the Snowden revelations, the NSA is no longer credible as a source of assurance (except for its customers). Now, the agency can make a single decision about each vulnerability it finds: patch or exploit?

Other former officials said the restructuring at Fort Meade just formalizes what was already happening there. After all, NSA’s hackers and defenders work side by side in the agency’s Threat Operations Center in southern Maryland.

“Sometimes you got to just own it,” said Dave Aitel, a former NSA researcher and now chief executive at the security company Immunity. “Actually, come to think of it, that’s a great new motto for them too.”

Even President Obama’s surveillance reform commission from 2013, which recommended that the Information Assurance Directorate should become its own agency, acknowledged the following (page 194 of the PDF):

There are, of course, strong technical reasons for information-sharing between the offense and defense for cyber security. Individual experts learn by having experience both in penetrating systems and in seeking to block penetration. Such collaboration could and must occur even if IAD is organizationally separate.

As David Graeber puts it in The Utopia of Rules, “All bureaucracies are to a certain degree utopian, in the sense that they propose an abstract ideal that real human beings can never live up to.” When something needs to be done, with significant costs and benefits at stake, it doesn’t help to use an organizational chart to hide the ways the work actually gets done in practice.

Top 10 Web Hacking Techniques of 2015

With 2015 coming to a close, the time has come for us to pay homage to top tier security researchers from the past year and properly acknowledge all of the hard work that has been given back to the Infosec community. We do this through a nifty yearly process known as The Top 10 Web Hacking Techniques list. Every year the security community produces a stunning number of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack. Now in its tenth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent research. Previous Top 10’s and the number of new attack techniques discovered in each year are as follows:
2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51), 2012 (56), 2013 (31), and 2014 (46).

The vulnerabilities and hacks that make this list are chosen by the collective insight of the infosec community. We rely 100% on nominations, either your own or another researcher’s, for an entry to make this list!

Phase 1: Open community submissions [Jan 11-Feb 1]

Comment on this post or email us at top10Webhacks[/at/]whitehatsec[\dot\]com with your submissions from now until Feb 1st. The submissions will be reviewed and verified.

Phase 2: Open community voting for the final 15 [Feb 1-Feb 8]
Each verified attack technique will be added to a survey, which will be linked below on Feb 1st. The survey will remain open until Feb 8th. Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked in each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to ascertain the top 15 overall.
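To make the scoring concrete, here is a minimal sketch of that positional tabulation (the ballots and truncated entry lists below are hypothetical, and this is not the actual code used to tally votes):

```typescript
// Minimal sketch of the positional scoring described above.
// Assumption: each ballot is an array of entry names, ranked best-first.
function tabulate(ballots: string[][], totalEntries = 15): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ballot of ballots) {
    ballot.forEach((entry, index) => {
      // Position #1 earns 15 points, #2 earns 14, ..., #15 earns 1.
      const points = totalEntries - index;
      scores.set(entry, (scores.get(entry) ?? 0) + points);
    });
  }
  return scores;
}

// Two hypothetical ballots, truncated to three entries each.
const scores = tabulate([
  ["LogJam", "Magic Hashes", "Relative Path Overwrite"],
  ["Magic Hashes", "LogJam", "FileCry"],
]);
console.log([...scores.entries()].sort((a, b) => b[1] - a[1]));
```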

Phase 3: Panel of Security Experts Voting [Feb 8-Feb 15]

From the result of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as Phase 2, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2015!

Prizes [to be announced]

The winner of this year’s top 10 will receive a prize!

Current List of 2015 Submissions (in no particular order)
– LogJam
– Abusing XSLT for Practical Attacks
– Java Deserialization w/ Apache Commons Collections in WebLogic, WebSphere, JBoss, Jenkins, and OpenNMS
– Breaking HTTPS with BGP Hijacking
– Pawn Storm (CVE-2015-7645)
– Superfish SSL MitM
– Bypass Surgery – Abusing CDNs with SSRF Flash and DNS 
– Google Drive SSO Phishing
– Dom Flow – Untangling The DOM For More Easy-Juicy Bugs
– Password mining from AWS/Parse Tokens
– St. Louis Federal Reserve DNS Redirect
– Exploiting XXE in File Upload Functionality
– Expansions on FREAK attack
– eDellRoot
– WordPress Core RCE
– FileCry – The New Age of XXE
– Server-Side Template Injection: RCE for the Modern Web App
– IE11 RCE
– Understanding and Managing Entropy Usage
– Attack Surface for Project Spartan’s EdgeHTML Rendering Engine
– Web Timing Attacks Made Practical
– Winning the Online Banking War
– CNNINC SSL MitM
– New Methods in Automated XSS Detection: Dynamic XSS Testing Without Using Static Payloads
– Practical Timing Attacks using Mathematical Amplification of Time Difference in == Operator
– The old is new, again. CVE-2011-2461 is back!
– illusoryTLS
– Hunting ASynchronous Vulnerabilities
– New Evasions for Web Application Firewalls
– Magic Hashes
– Formaction Scriptless attack updates
– The Unexpected Dangers of Dynamic JavaScript
– Who Are You? A Statistical Approach to Protecting LinkedIn Logins (CSS UI Redressing Issue)
– Evading All Web Application filters
– Multiple Facebook Messenger CSRF’s
– Relative Path Overwrite
– SMTP Injection via Recipient Email Address
– Serverside Template Injection

Edit 3: Nominations have now ended and voting has begun! https://www.surveymonkey.co.uk/r/RXJF3QW ***CLOSED***

Edit 2: Submissions have been extended to February 1st! Keep sending in those submissions! Currently we have 32 entries!

Edit: We will be updating this post with nominations as they are received and vetted for relevance.  Please email them to Top10Webhacks[/at/]whitehatsec[\dot\]com.

Final 15:
– Abusing CDN’s with SSRF Flash and DNS
– Abusing XSLT for Practical Attacks
– Breaking HTTPS With BGP Hijacking
– Evading All* WAF XSS Filters
– Exploiting XXE in File Parsing Functionality
– FileCry – The New Age of XXE
– FREAK(Factoring attack on RSA-Export Keys)
– Hunting ASynchronous Vulnerabilities
– IllusoryTLS
– LogJam
– Magic Hashes
– Pawnstorm
– Relative Path Overwrite
– Server Side Template Injection
– Web Timing Attacks Made Practical


HTTP Methods

Much of the internet operates on HTTP, the Hypertext Transfer Protocol. With HTTP, the user sends a request and the server replies with its response. These requests are like the pneumatic tubes at the bank — a delivery system for the ultimate content. A user clicks a link; a request is sent to the server; the server replies with a response; the response carries the content; the content is displayed for the user.

Request Methods
Different kinds of requests (methods) exist for different types of actions, though some actions can be requested in more than one way (using more than one method). Here are some of the more common methods; a short sketch in code follows the list:

  • POST requests write to the server.
  • GET requests read from the server.
  • HEAD is similar to GET, but retrieves only headers. (Headers contain meta-information, while the rest of the content is in the response body.)
  • PUT requests allow for the creation and replacement of resources on the server.
  • DELETE requests delete resources.
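Here is that sketch, exercising each method with the standard fetch API (the example.com endpoints are hypothetical):

```typescript
// GET: read a resource (safe, idempotent).
await fetch("https://example.com/articles/42");

// HEAD: like GET, but the response carries headers only, no body.
await fetch("https://example.com/articles/42", { method: "HEAD" });

// POST: write to the server (neither safe nor idempotent in general).
await fetch("https://example.com/articles", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Hello" }),
});

// PUT: create or replace the resource at this exact URL.
await fetch("https://example.com/articles/42", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Hello, revised" }),
});

// DELETE: remove the resource.
await fetch("https://example.com/articles/42", { method: "DELETE" });
```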

Browsers and Crawlers
Browsers and most web crawlers (search engine crawlers, WhiteHat’s scanner, and other production-safe crawlers) treat method types differently. Production-safe crawlers will send some requests and refrain from sending others based on idempotency (see next section) and safety. Browsers also treat the methods differently; for instance, browsers will cache or store some methods in the history, but not others.

Idempotency and Safety
Idempotency and safety are important attributes of HTTP methods. An idempotent request can be called repeatedly with the same results as if it only had been executed once. If a user clicks a thumbnail of a cat picture and every click of the picture returns the same big cat picture, that HTTP request is idempotent. Non-idempotent requests can change each time they are called. So if a user clicks to post a comment, and each click produces a new comment, that is a non-idempotent request.

Safe requests are requests that don’t alter a resource; non-safe requests have the ability to change a resource. For example, a user posting a comment is using a non-safe request, because the user is changing some resource on the web page; however, the user clicking the cat thumbnail is a safe request, because clicking the cat picture does not change the resource on the server.

Production-safe crawlers treat certain methods, e.g. GET requests, as always safe and idempotent. Consequently, crawlers will send GET requests arbitrarily, without worrying about the effect of repeated requests or the possibility that a request might change a resource. However, safe crawlers will recognize other methods, e.g. POST requests, as non-idempotent and unsafe. So, good web crawlers won’t send POST requests.
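That crawler policy boils down to a few lines. Here is a hedged sketch (real crawlers are more nuanced, and the function name is ours):

```typescript
// Methods the HTTP spec defines as safe: they should not alter server
// state, so a production-safe crawler may repeat them freely.
const SAFE_METHODS = new Set(["GET", "HEAD", "OPTIONS"]);

function crawlerMayAutoSend(method: string): boolean {
  // Anything else (POST, PUT, DELETE, ...) is treated as unsafe and
  // non-idempotent, and is never sent automatically.
  return SAFE_METHODS.has(method.toUpperCase());
}

console.log(crawlerMayAutoSend("GET"));  // true
console.log(crawlerMayAutoSend("POST")); // false
```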

Why This Matters
While crawlers deem certain methods safe or unsafe, a specific request is not safe or idempotent just because it uses a certain method. For example, GET requests should always be both idempotent and safe, while POST requests are not required to be either. It is possible, however, for an unsafe, non-idempotent request to be sent as a GET request. A web site that uses a GET request where a POST should be required can run into problems. For instance:

  • When an unsafe, non-idempotent request is sent as a GET request, crawlers will not recognize the request as dangerous and may call the method repeatedly. If a web site’s “Contact Us” functionality uses GET requests, a web crawler could inadvertently end up spamming the server or someone’s email. If the functionality is accessed by POST requests, the web crawler would recognize the non-idempotent nature of POST requests and avoid it.
  • When an unsafe or non-idempotent GET request is used to transmit sensitive data, that data will be stored in the browser’s history as part of the recorded URL. On a public computer, a malicious user could steal a password or credit card information merely by looking at the history. The body of a POST request will not be stored in the browser history, and consequently, the sensitive information stays hidden.

It comes down to using the right HTTP method for the right job. If you don’t want a web crawler arbitrarily executing the request, or you don’t want its parameters stored in the browser history, use a POST request. But if the request is harmless no matter how often it’s sent, and does not contain sensitive data, a GET request will work just fine.
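To make the sensitive-data point concrete, here is a hedged sketch of the same login request sent both ways (the endpoint and credentials are hypothetical):

```typescript
// BAD: the credentials ride in the URL, so they end up in the browser
// history, server logs, and proxy logs.
await fetch("https://example.com/login?user=alice&password=hunter2");

// BETTER: the credentials travel in the body of a POST, which the
// browser does not record in its history.
await fetch("https://example.com/login", {
  method: "POST",
  body: new URLSearchParams({ user: "alice", password: "hunter2" }),
});
```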

“Insufficient Authorization – The Basics” Webinar Questions – Part I

Recently we offered a webinar on a really interesting Insufficient Authorization vulnerability. A site that allows users to live chat with a customer service representative updated the chat transcript using a request parameter that an attacker could have manipulated in order to view a different transcript. Combining an “email me this conversation” request with various chatID parameters could have allowed an attacker to collect sensitive information from a wide variety of customer conversations, potentially exposing a great deal of confidential information.

To view the webinar, please click here.

So many excellent questions were raised that we thought it would be valuable to share them in a pair of blog posts — here is the first set of questions and answers:

Did you complete this exploit within a network or from the outside?
Here at WhiteHat, we do what is called black box testing. We test apps from outside of their network, knowing nothing of the internal workings of the application or its data mapping. This makes testing more authentic, because in most cases the attacker isn’t inside the network, either.

What is the standard way to remediate these vulnerabilities? Via safer coding?
The best way to remediate this vulnerability is to implement a granular access control policy, and ensure that the application’s sensitive data and functionalities are only available to the users/admins who have the appropriate permissions.

Can you please elaborate on Generic Framework Solution and Custom Code Solution?

Most frameworks have options for access control. The best thing to do is take advantage of these, and restrict the appropriate resources/functionalities so that only people who actually require the access are allowed access. The best approach to custom coding a solution is to apply the least-privilege principle across all data access: allow each role access only to the data that is actually required to perform the related tasks. In addition, data should never be stored in the application’s root directory; this minimizes the possibility that those files can be found by an unauthorized user who simply knows where to look.

Can you talk about the tools you used to capture and manage the cookies and parameters as you attempted the exploit?
During testing, we have a plethora of tools available. For this particular test, I only used a standard proxy suite. This allows for capturing requests directly from your internet browser, editing and sending the requests, and viewing the responses. Usually, this is all that is needed to exploit an application.

What resources do you recommend for a person that is interested in learning how to perform Pen Testing?
Books, the internet, and more books! A few books that I recommend are The Hacker Playbook, The Web Application Hacker’s Handbook, and Ethical Hacking and Penetration Testing Guide. Take a look at the OWASP top ten, and dig further into each vulnerability.

How did you select this target?
Here at WhiteHat, the team that is responsible for the majority of penetration testing has a list of clients that need business logic assessments. (For a business logic assessment, we test every identifiable functionality of a site.) Each team member independently chooses a site to perform an assessment on. This particular application just happened to be the one I chose that day.

Does the use of SSL/TLS affect the exploitability of this vulnerability?
The use of SSL/TLS does not affect the exploitability. SSL/TLS simply prevents man-in-the-middle attacks, meaning that an attacker can’t relay and possibly alter the communication between a user’s browser and the web server. The proxy that I use breaks the SSL connection between my browser and the server, so that encrypted data can be viewed and modified within the proxy. Requests are then sent over SSL from the proxy.

We hope you found this interesting; more questions and answers will be coming soon!

An idea to help secure U.S. cybersecurity…

… and looking for the right person to show us how to do so.

A few years back I was watching a presentation given by General Keith B. Alexander, who was at the time Commander, U.S. Cyber Command, and previously Director of the National Security Agency (NSA). Gen. Alexander’s remarks focused on the cybersecurity climate from his perspective and the impact on U.S. national and economic security. One comment he made caught my attention: that the Department of Defense has 15,000 networks to protect. As an application security person, I can only imagine how many total websites, a favorite target among hackers, that equates to. I’d bet that, as a percentage, very few of the DoD’s websites get professionally assessed for vulnerabilities. From this it became clear that the General understands big-picture cybersecurity problems in terms of scale.

At about 1:05:00 into the video, the General opened the floor to questions, and the most interesting one came from a veteran. He said there are a lot of veterans who would like to help with the country’s cybersecurity efforts, and asked if there were any programs enabling them to do so. The General answered that he didn’t know for sure, but he didn’t think so. I did some research, and according to a Bureau of Labor Statistics report from September 2015, there are roughly 449,000 unemployed veterans. This was fascinating to me: as I see it, this is a ready-and-willing labor force, at least a small percentage of which could apply their skills to cybersecurity.

This got me thinking and an idea hit me, but before sharing it, I need to explain a bit how WhiteHat works internally for it to make sense.

WhiteHat assesses websites for vulnerabilities. If customers fix those issues, they are far less likely to get hacked. Simple. What makes WhiteHat different is that we’re able to perform these assessments at scale. And I’m not talking just basic scanning, but true quality assessments with business logic tests carried out by real experts, a strict requirement. The challenge is that AppSec skills are extremely scarce and sought after. Ask any hiring manager. Recognizing the severe skill shortage more than a decade ago, WhiteHat created its Threat Research Center — our Web hacker army. The TRC is specifically equipped, complete with a training program and an unparalleled playground of permission-to-hack websites, to hire eager entry-level talent and turn them into experienced professionals quickly. The age and background of applicants don’t matter. Today, WhiteHat has proven itself to be the best – and only – place for newcomers to get into the industry.

President Obama addressed the nation’s military on September 11, 2015 and mentioned the increasingly challenging state of cyber warfare: “What we’ve seen by both state and non-state actors is the increasing sophistication of hacking, the ability to penetrate systems that we previously thought would be secure. And it is moving fast.” The same website vulnerability issues that we’ve addressed in the private sector are felt in the defense realm.

This is where the idea comes in…

Let’s say the DoD launched a cybersecurity program to assess all of its websites for vulnerabilities. The result would be fewer breaches that are much harder to carry out. To do this the DoD would obviously need a scalable vulnerability scanning technology, but more importantly, the necessary AppSec manpower. This is where WhiteHat would come in as we have all the pieces. Financial issues aside, WhiteHat would be able to conduct all these assessments, continuously, and could do so using veteran labor — exclusively. We have the tech, the hiring process, the training program, pretty close to everything the program would require. All we need is a DoD program to partner up with.

If such a plan and program existed, everyone would win.

  • The DoD would be able to increase their cybersecurity defenses at scale and better protect the nation.
  • A large number of U.S. military veterans could be put to work towards a common cause, protecting the country’s cybersecurity, while acquiring the InfoSec skills in highest demand, which is something the President said he wanted to do.
  • WhiteHat continues to grow its Web hacker army. Indeed, we already employ several veterans in the TRC, and they are among our best and brightest.

Of course, there are details that would need to be addressed. The DoD’s website vulnerability data would have to be safeguarded, and the security of WhiteHat’s infrastructure would have to be closely audited (though considering who we already count as customers, I’m confident we’d be able to satisfy any reasonable standard). Our platform could even be installed onto one of the DoD’s own networks, which would be fine too. As for those doing the work: veterans’ backgrounds are already vetted, making them more trusted than the average “Johnny pen-tester.”

So, the question is … now what?

Over the past 3 years I’ve discussed this idea with dozens of people, both inside and outside the government, and while everyone agrees it’s a good idea, getting traction has been difficult to say the least. Some cybersecurity training programs exist for veterans, but they tend to be either small, dormant, or not something that really protects U.S. cybersecurity.

Referring to emerging cyberthreats in a lecture at Stanford in June 2015, Secretary of Defense Ashton Carter said, “We find the alignment in open partnership, by working together. Indeed, history shows that we’ve succeeded in finding solutions to these kinds of tough questions when our commercial, civil, and government sectors work together as partners.” It would seem that even the highest levels of leadership in the DoD agree that this is the only path forward that makes sense for securing the nation’s digital assets.

At this point, the best path forward is to simply put the idea out there for open discussion, in the hope that the “right person” will see it: someone in the government who can help us carry it forward and who will contact us. If you are such a person, or know who is, we welcome the opportunity to talk — leaders within the VA, the DoD, or other parts of government. And hey, if you think the idea is crazy, stupid, or not viable for some reason… I am also interested in hearing why you think so (twitter: @jeremiahg).

The Ad Blocking Wars: Ad Blockers vs. Ad-Tech

More and more people find online ads to be annoying, invasive, dangerous, insulting, distracting, and expensive, and have decided to install an ad blocker. In fact, the number of people using ad blockers is skyrocketing. According to PageFair’s 2015 Ad Blocking Report, there are now 198 million active adblock users around the world, with a global growth rate of 41% in the last 12 months. Publishers are visibly feeling the pain and fighting back against ad blockers.

Key to the conflict between ads and ad blockers is the Document Object Model, or DOM. Whenever you view a web page, your browser creates a DOM – a model of the page. This is a programmatic representation of the page that lets JavaScript convert static content into something more dynamic. Whatever is in control of the DOM will control what you see – including whether or not you see ads. Ad blockers are designed to prevent the DOM from including advertisements, while the page is designed to display them. This inherent conflict, this fight for control over the DOM, is where the Ad Blockers vs. Ad-Tech war is waged.
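To make that concrete, cosmetic ad blocking is, at its core, a handful of DOM operations. Here is a toy sketch (the selectors are hypothetical; real blockers ship large, crowd-sourced filter lists):

```typescript
// Toy element-hiding rule: remove DOM nodes matching known ad selectors.
// Real ad blockers apply thousands of crowd-sourced filter rules.
const AD_SELECTORS = ["iframe[src*='ads.']", ".ad-banner", "#sponsored"];

for (const selector of AD_SELECTORS) {
  document.querySelectorAll(selector).forEach((node) => node.remove());
}
```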

A recent high-profile example of this conflict is Yahoo Mail’s recently reported attempt to prevent ad-blocking users from accessing their email, which upset a lot of people. This is just one battle in an inevitable war over who controls what you see in your browser’s DOM: Ad Blockers vs. Ad-Tech (ad networks, advertisers, publishers, etc.).

Robert Hansen and I recently performed a thought experiment to see how this technological escalation plays out, and who eventually wins. I played the part of the Ad Blocker and he played Ad-Tech, each of us responding to the action of the other.

Here is what we came up with…

  1. Ad-Tech: Deliver ads to user’s browser.
  2. User: Decides to install an ad blocker.
  3. Ad Blocker: Creates a black list of fully qualified domain names / URLs that are known to serve ads. Blocks the browser from making connections to those locations.
  4. Ad-Tech: Create new fully qualified domain names / URLs that are not on black lists so their ads are not blocked (i.e. Fast Flux).
  5. Ad Blocker: Crowd-source the black list to keep it up to date and continue blocking effectively. Allow certain ‘safe’ ads through (i.e. the Acceptable Ads Initiative).
  6. Ad-Tech: Load third-party JavaScript onto the web page that detects whether ads have been blocked. If ads are blocked, deny the user the content or service they wanted.

*** Current stage of the Ad Blocking Wars ***

  1. Ad Blocker: Maintain a black list of fully qualified domain names / URL of where ad blocking detection code is hosted and block the browser from making connections to those locations.
  2. Ad-Tech: Relocate the ad or ad blocking detection code to a first-party website location. Ad blockers cannot block this code without also blocking the web page the user wanted to use (e.g. sponsored ads, like those found on Google SERPs and Facebook).
  3. Ad Blocker: Detect the presence of ads, but do not block them. Instead, make the ads invisible (i.e. visibility: hidden;). Do not send tracking cookies back to the hosting server, to help preserve privacy.
  4. Ad-Tech: Detect when ads are hidden in the DOM. If ads are hidden, deny the user the content or service they wanted.
  5. Ad Blocker: Allow ads to be visible, but move them WAY out of the way where they cannot be seen. Do not send tracking cookies back to hosting server to help preserve privacy.
  6. Ad-Tech: Deliver JavaScript code that detects any unauthorized modification to browser DOM where the ad is to be displayed. If the ad’s DOM is modified, deny the user the content or service they wanted.
  7. Ad Blocker: Detect the presence of first-party ad blocking detection code. Block the browser from loading that code.
  8. Ad-Tech: Move ad blocking detection code to a location that cannot be safely blocked without negatively impacting the user experience (e.g. Amazon AWS).
  9. Ad Blocker: Crawl the DOM looking for ad blocking detection code, on all domains, first and third-party. Remove the JavaScript code or do not let it execute in the browser.
  10. Ad-Tech: Implement minification and polymorphism techniques designed to hinder isolation and removal of ad blocking detection code.
  11. Ad Blocker: Crawl the DOM looking for ad blocking detection code, reverse code obfuscation techniques on all domains, first and third-party. Remove the offending JavaScript code or do not let it execute in the browser.
  12. Ad-Tech: Integrate ad blocking detection code inside of core website JavaScript functionality. If the JavaScript code fails to run, the web page is designed to be unusable.

GAME OVER. Ad-Tech Wins.

The steps above will not necessarily play out in exactly this order as the war escalates. What matters more is how the war always ends. No matter how Robert and I sliced it, Ad-Tech eventually wins. Its control of, and access to, the DOM appears dominant.
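For a flavor of the ad-blocking detection in step 6 of the first round, here is a hedged sketch of the common “bait element” trick (class names and the fallback behavior are assumptions, not any vendor’s actual code):

```typescript
// Create a bait element that filter lists typically hide, then check
// whether the browser actually rendered it.
function adsAreBlocked(): boolean {
  const bait = document.createElement("div");
  bait.className = "adsbox ad-banner"; // names most filter lists match
  bait.style.cssText = "position:absolute; height:10px; width:10px;";
  document.body.appendChild(bait);

  const style = getComputedStyle(bait);
  const blocked =
    bait.offsetHeight === 0 ||
    style.display === "none" ||
    style.visibility === "hidden";

  bait.remove();
  return blocked;
}

if (adsAreBlocked()) {
  // Step 6: deny the user the content or service they wanted.
  console.log("Please disable your ad blocker to continue.");
}
```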

If you look at it closely, the Ad-Tech industry behaves quite similarly to the malware industry. The techniques and delivery are consistent: Ad-Tech wants to deliver and execute code users don’t want, and it will bypass the user’s security controls to do exactly that! So it really should come as no surprise that malware purveyors heavily utilize online advertising channels to infect millions of users. And if this is the way history plays out, where eventually users and their ad blockers lose, antivirus tools are the only option left – and antivirus is basically a coin flip.

The only recourse left is not technical… the courts.


Resources:
http://digiday.com/brands/yahoo-mail-blocking-ad-block-users-accessing-email/
https://blog.pagefair.com/2015/ad-blocking-report/

“Crash Course – PCI DSS 3.1 is here. Are you ready?” Part II

Thanks to all who attended our recent webinar, “Crash Course – PCI DSS 3.1 is here. Are you ready?”. During the stream, attendees asked a number of great questions that went unanswered due to limited time. This blog post answers many of those questions.

Still have questions? Want to know more about PCI DSS 3.1? Want a copy of our PCI DSS Compliance for Dummies eBook? Visit here to learn more.

Is an onsite pen test a requirement in order to meet the PCI-DSS 3.1 requirement?
PCI-DSS 3.1 does require a pen test to meet requirement 11.3. However, it does not explicitly state that the test needs to be onsite. Allowing access to internal web applications and systems via secure VPN or other means can be leveraged to allow outside access.

Requirements for devices such as iPads are not currently specified (although they could be implied throughout the requirements)… if using a third party secure app to process payments, what else must be done to harden the iPad, if anything?
When using 3rd party applications or services, you must maintain the same level of security with the partner in question. Sections 12.8.2 and 12.9 speak to keeping written documentation to ensure that there is agreement between both parties about what security measures are in place.

In regards to the 2nd part of the question, no additional measures need to be taken on the devices themselves. The application itself must adapt to be secure on whatever device it is running on, even if it is a bug/flaw in the device itself. It is a matter of what control you have to protect your card data. Unless you have a contract with the device vendor, a flaw on the device would fall out of that realm of control.

What about default system accounts (i.e. root) that cannot be removed – is changing the default password no longer enough?
The removal of test data mentioned in section 6.4 focuses on test data and accounts that are used during development. This is separate from section 2.1 that deals with vendor supplied system and default passwords. For things such as root that cannot be removed, changing the defaults is sufficient.

For a test account in production, we implement an agency with a Generic account, which becomes the base of the users underneath it. Is this considered a test account or do they mean ‘backdoor’ accounts for testing?
This is a pretty specific example, but it sounds like this “generic account” is core to your architecture. This does not seem like a test account. However, if it is possible to log into this “generic account,” then it is exposed to the same risk as any user would be.

Does the cardholder name or the expiration date need to be encrypted if you do not store the strip or the actual card number?
PCI DSS in general does not apply if the PAN is not stored, processed, or transmitted. If you are processing this data but not storing the PAN with the Cardholder Name and Expiration date, then you are not required to protect the latter two. PAN should always be protected.

For EMV compliance, is Chip + PIN the only PCI-compliant method, or is the commonly used Chip only compliant?
Either Chip+Signature or Chip+PIN is currently PCI compliant, so long as the chip data is protected to the same PCI standards as full magnetic strip data. PCI DSS has not taken a stance on Chip+Signature vs. Chip+PIN yet, likely because the latter has not been widely adopted. The US is a laggard in this regard, but is moving in that direction.

If “Company X” accepts responsibility for PCI Compliance, and they use a WiFi that is secured by a password, are they fully compliant?
Using WiFi secured by a password for card transactions does not inherently violate PCI. However, there are several sections of PCI that have strict requirements around implementing and maintaining a wireless network in conjunction with cardholder data. This is discussed more thoroughly in PCI’s section on “wireless” (page 11).

Is social engineering a requirement of a pen test or is the control specific to a network-based attack?
Social engineering is not a requirement of the pen test or of network-based attack testing. However, social engineering is mentioned in section 8.2.2 in regards to verifying identities before modifying user credentials. Social engineering tests would undoubtedly help in this area, but they aren’t a hard requirement.

Does having a third party host your data and process your cards take away the risk?
We enter credit card information into a 3rd party website and do not keep this information.  Are we still under PCI?
If we use an API with a 3rd party to enter the credit card information, what is it that we need to consider for PCI?
Yes, in this situation you would still need to comply with PCI standards. When using 3rd party applications or services, you must maintain the same level of security with the partner in question. Sections 12.8.2 and 12.9 speak to keeping written agreements to ensure that there is agreement between both parties about what security measures are in place. Using an API with a 3rd party would be considered part of the “processing” of the cardholder data, so any systems that leverage that API or transmit any cardholder data would need to conform to PCI standards, even if no storage is occurring on your end.

How do you see PCI compliance changing, if at all, in the next few years as chip and PIN cards become ubiquitous in the U.S.?
PCI compliance will continue to change based on industry standards. Chip+PIN cards are not yet widely adopted in the U.S., which may explain why Chip+PIN is not a requirement in PCI DSS 3.1. As the U.S. and other geographic regions adopt Chip+PIN in larger numbers, we expect PCI to adopt it as a requirement to push even harder for full adoption.

Regarding change 6 (Requirements 6.5.1 – 6.5.10 applies to all internal and external applications), does PCI have hard standards regarding when zero day vulns or urgent/critical vulns need to be remediated?
PCI DSS does not have any specific requirements around patching zero-day vulnerabilities. However, it does recommend installing patches within 30 days for “critical or at-risk systems.”

Some vulns take longer than 60/90 days to be remediated. How does this impact a PCI review?
PCI DSS does not specify any particular remediation timelines. However, there must be a plan to remediate at a minimum. If you can show that you have a timeline to remediate vulnerabilities that have been open for a longer period of time, you should still meet compliance. If you give a plan to an assessor, and do not deliver on that plan, then there will likely be an issue.

Are V-LAN setups a legitimate way to segment PCI devices from the rest of the network?
Yes: network segmentation, including properly configured VLANs, is a legitimate way to isolate the systems that handle cardholder data from the rest of the network, and it can reduce the scope of a PCI assessment. The key is that the segmentation must actually restrict access. A VLAN alone, without enforced access controls between segments, does not count, and the segmentation must be verified as part of the assessment.

Does the new PCI requirement state that we need to encrypt, rather than tokenize, stored PAN and CVV data?
CVV storage is not permitted at all. Tokenization of PANs is considered to be the best practice, but is not a requirement. The requirement for PAN storage simply reads “PCI DSS requires PAN to be rendered unreadable anywhere it is stored.” Several methods can satisfy this: one-way hashing, truncation, tokenization, strong encryption, etc. A hedged sketch of the encryption option follows.
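For illustration only, here is a sketch of the strong-encryption option using Node’s built-in crypto module. This is not official PCI guidance, and real deployments hinge on key management (HSMs, rotation, access control), which is glossed over here:

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// Stand-in key; in practice this comes from a key manager or HSM,
// never from code or from the same datastore as the ciphertext.
const key = randomBytes(32);

// Render a PAN unreadable with AES-256-GCM before storage.
function encryptPan(pan: string) {
  const iv = randomBytes(12); // fresh IV for every encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(pan, "utf8"), cipher.final()]);
  return { ciphertext, iv, tag: cipher.getAuthTag() };
}

// Only the ciphertext, IV, and auth tag are stored. The PAN itself is
// not, and the CVV is never stored in any form.
const stored = encryptPan("4111111111111111"); // test PAN
```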

For those of us using a third party processor for payments, does the requirement that all elements of payment pages delivered to the customer’s browser originate only and directly from a PCI DSS-validated 3rd party have much impact on many companies?
Yes, the requirement to have an agreement with 3rd party services can have a disruptive impact on many companies.  What happens many times is that no agreement is put in place in regards to security testing ahead of time. Then either the company or their service providers are audited, which leads to a rush to get everything assessed in time. Identifying services and partners that deal with cardholder data ahead of time and putting agreements in place can alleviate a lot of problems.

Any specific requirements for Smartcards, CHIP and signature, CHIP and PIN, or use of mobile phones for payments?
Chip cards do not have any specific requirements in PCI as of today. The data they contain must be treated the same as magnetic strip data. Use of mobile phones for payments must be validated on a per-app basis, and of course policies should be enforced that do not allow these devices to be used on untrusted networks.

Do PCI rules apply to debit cards?
Yes, PCI applies to credit and debit card data.

How would you track all systems if you are scaling based on demand?
If these are identical systems scaling based on demand, then an inventory of each individual appliance would not be necessary as long as there is a documented process on how the system scales up, scales down, and maintains PCI security standards while doing so.

How do we remain PCI compliant if we use an open source solution as part of our application that has a known vulnerability and (at a given time) has not yet been remediated?
Remaining PCI compliant is an ongoing process. If you take a snapshot of any organization’s open vulnerabilities, you are likely going to find issues. Being able to show that you have processes in place for remediating vulnerabilities, including those introduced by 3rd party solutions, is part of meeting compliance.

How can we secure/protect sensitive data in memory in Java and mainframe COBOL environments?
Sensitive data existing in volatile memory in clear text is unavoidable.  The application must understand what it is storing before it can store it.  However there are several attacks that can be avoided to expose these values in memory such as Buffer Overflow attacks.  Preventing these types of attacks will eliminate the exposure of sensitive data in memory.

Are the OWASP Top 10 vulnerabilities the only modules to be discussed with developers to be compliant with PCI DSS requirement 6.5?
PCI DSS recommends that developers be able to identify and resolve the vulnerabilities in sections 6.5.1-6.5.10. The OWASP Top 10 covers many vulnerabilities from these sections, but not all of them. Additional training would be required to fully cover those sections.

URLs are content

Justifications for the federal government’s controversial mass surveillance programs have involved the distinction between the contents of communications and associated “meta-data” about those communications. Finding out that two people spoke on the phone requires less red tape than listening to the conversations themselves. While “meta-data” doesn’t sound especially ominous, analysts can use graph theory to draw surprisingly powerful inferences from it. A funny illustration of that can be found in Kieran Healy’s blog post, Using Metadata to find Paul Revere.

On November 10, the Third Circuit Court of Appeals ruled that web browsing histories are “content” under the Wiretap Act. This implies that the government will need a warrant before collecting such browsing histories. Wired summarized the point the court was making:

A visit to “webmd.com,” for instance, might count as metadata, as Cato Institute senior fellow Julian Sanchez explains. But a visit to “www.webmd.com/family-pregnancy” clearly reveals something about the visitor’s communications with WebMD, not just the fact of the visit. “It’s not a hard call,” says Sanchez. “The specific URL I visit at nytimes.com or cato.org or webmd.com tells you very specifically what the meaning or purport of my communications are.”

Interestingly, the party accused of violating the Wiretap Act in this case wasn’t the federal government. It was Google. The court ruled that Google had collected content in the sense of the Wiretap Act, but that this was okay, because you can’t eavesdrop on your own conversation. I’m not an attorney, but the legal technicalities were well explained in the Washington Post.

The technical technicalities are also interesting.

Basically, a cookie is a secret between your browser and an individual web server. The secret is in the form of a key-value pair, like id=12345. Once a cookie is “set,” it will accompany every request the browser sends to the server that set the cookie. If the server makes sure that each browser it interacts with has a different cookie, it can distinguish individual visitors. That’s what it means to be “logged in” to a website: after you prove your identity with a username and password, the server assigns you a “session cookie.” When you visit https://www.example.com/my-profile, you see your own profile because the server read your cookie, and your cookie was tied to your account when you logged in.

Cookies can be set in two ways. The browser might request something from a server (HTML, JavaScript, CSS, image, etc.). The server sends back the requested file, and the response contains “Set-Cookie” headers. Alternatively, JavaScript on the page might set cookies using document.cookie. That is, cookies can be set server-side or client-side.
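Both paths look roughly like this minimal sketch (the Node server and cookie values are assumptions for illustration):

```typescript
import { createServer } from "node:http";

// Server-side: the HTTP response carries a Set-Cookie header, and the
// browser will attach "id=12345" to every subsequent request it sends us.
createServer((req, res) => {
  res.setHeader("Set-Cookie", "id=12345; Path=/");
  res.end("<p>cookie set</p>");
}).listen(8080);

// Client-side: JavaScript running in the page writes document.cookie.
// (This line belongs in browser code, not in the Node server above.)
// document.cookie = "theme=dark; path=/";
```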

A cookie is nothing more than a place for application developers to store short strings of data. These are some of the common security considerations with cookies:

  • Is inappropriate data being stored in cookies?
  • Can an attacker guess the values of other people’s cookies?
  • Are cookies being sent across unencrypted connections?
  • Should the cookies get a special “HttpOnly” flag that makes them inaccessible to JavaScript, to protect them from potential cross-site scripting attacks? (A short sketch follows the list.)
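The HttpOnly flag from the last bullet comes down to one attribute in the Set-Cookie header. A minimal sketch (cookie names and values are hypothetical):

```typescript
import { createServer } from "node:http";

// The session cookie is marked HttpOnly, so script injected via
// cross-site scripting cannot read it through document.cookie; the
// theme cookie carries no flags and stays visible to page JavaScript.
// (Production cookies should also carry Secure, which requires HTTPS.)
createServer((req, res) => {
  res.setHeader("Set-Cookie", [
    "session=secret123; HttpOnly; Path=/",
    "theme=dark; Path=/",
  ]);
  res.end("<p>in this browser, document.cookie now shows only theme=dark</p>");
}).listen(8081);
```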

OWASP has a more detailed discussion of cookie security here.

When a user requests a web page and receives an HTML document, that document can instruct their browser to communicate with many different third parties. Should all of those third parties be able to track the user, possibly across multiple websites?

Enough people feel uncomfortable with third-party cookies that browsers include options for disabling them. The case before the Third Circuit Court of Appeals was about Google’s practices in 2012, which involved exploiting a browser bug to set cookies in Apple’s Safari browser, even when users had explicitly disabled third-party cookies. Consequently, Google was able to track individual browsers across multiple websites. At issue was whether the list of URLs the browser visited consisted of “content.” The court ruled that it did.

The technical details of what Google was doing are described here.

Data is often submitted to websites through HTML forms. It’s natural to assume that submitting a form is always intentional, but forms can also be submitted by JavaScript, without any user interaction. It’s easy to assume that, if a user is submitting a form to a server, they’ve “consented” to communication with that server. That assumption led to the bug that was exploited by Google.

Safari prevented third-party servers from setting cookies unless a form was submitted to the third party. Google supplied code that made browsers submit forms to its servers without user interaction. In response to those form submissions, tracking cookies were set. This circumvented the user’s explicit choice to reject third-party Set-Cookie headers. Incidentally, automatically submitting a form through JavaScript is also the way an attacker would carry out a cross-site request forgery attack.
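The auto-submission pattern itself is only a few lines. Here is a hedged sketch of the general technique (the endpoint is hypothetical, and this is not Google’s actual 2012 code):

```typescript
// Build a form aimed at a third-party endpoint and submit it with no
// user interaction. Safari then treated the third party as one the user
// had interacted with, so its Set-Cookie headers were honored.
const form = document.createElement("form");
form.method = "POST";
form.action = "https://tracker.example/collect"; // hypothetical third party

const field = document.createElement("input");
field.type = "hidden";
field.name = "id";
field.value = "12345";
form.appendChild(field);

document.body.appendChild(form);
form.submit(); // fires immediately, without a click
```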

To recap: Apple and Google had a technical arms race about tracking cookies. There was a lawsuit, and now we’re clear that the government needs a warrant to look at browser histories, because URL paths and query strings are very revealing.

The court suggested that there’s a distinction to be made between the domain and the rest of the URL, but that suggestion was not legally binding.

Buyer beware: Don’t get more than you bargained for this Cyber Monday

Cyber Monday is just a few days away now, and no doubt this year will set new records for online spending. Online sales in the US alone are expected to reach $3 billion on Cyber Monday, November 30th, which would make it one of the largest single days for online sales in history. Unfortunately, we’ve found that over a quarter of UK and US-based consumers will be shopping for bargains and making purchases without first checking to see if the website of the retailer they are buying from is secure.


A survey1 conducted by Opinion Matters on behalf of WhiteHat Security discovered this disturbing fact, as well as the fact that shoppers in the US are more likely to put themselves at risk than those in the UK, with more than a third of US-based respondents admitting that they wouldn’t check the website’s security before purchasing. This is particularly worrying given that more than half of shoppers are expecting to use their credit or debit card to purchase goods this Black Friday weekend.


The consumer survey also found that a third of UK and US-based shoppers are not sure, or definitely do not know, how to identify whether a website is secure.


Of course, the retailers themselves have a big part to play in website security. Researchers from our Threat Research Center (TRC) analyzed retail websites between July and September 20152 and found that they are more likely to exhibit serious vulnerabilities than websites in other industries. The most commonly occurring critical vulnerability classes for the retail industry were:


  • Insufficient Transport Layer Protection (with 64% likelihood): When applications do not take measures to authenticate, encrypt, and protect sensitive network traffic, data such as payment card details and personal information can be left exposed and attackers may intercept and view the information.
  • Cross Site Scripting (with 57% likelihood): Attackers can use a vulnerable website as a vehicle to deliver malicious instructions to a victim’s browser. This can lead to further attacks such as keylogging, impersonating the user, phishing and identity theft.
  • Information Leakage (with 54% likelihood): Insecure applications may reveal sensitive data that can be used by an attacker to exploit the target web application, its hosting network, or its users.
  • Brute Force (with 38% likelihood): Most commonly targeting log-in credentials, brute force attacks can also be used to retrieve the session identifier of another user, enabling the attacker to retrieve personal information and perform actions on behalf of the user.
  • Cross Site Request Forgery (with 29% likelihood): Using social engineering (such as sending a link via email or chat), attackers can trick users into submitting a request, such as transferring funds or changing their email address or password.


In response to the survey’s findings, my colleague and WhiteHat founder, Jeremiah Grossman, said, “This research suggests that when it comes to website security awareness, not only is there still some way to go on the part of the consumer, but the retailers themselves could benefit from re-assessing their security measures, particularly when considering the volume and nature of customer information that will pass through their websites this Cyber Monday.”


WhiteHat is in the business of helping organizations in the retail and other sectors secure their applications and websites. But for consumers, Grossman offers up a few simple tricks that can help shoppers stay safe online over this holiday shopping season:


  • Look out for ‘HTTPS’ when browsing: HTTP – the letters that show up in front of the URL when browsing online – indicates that the web page is using a non-secure way of transmitting data. Data can be intercepted and read at any point between the computer and the website. HTTPS, on the other hand, means that all the data being transmitted is encrypted. Look for HTTPS and a lock icon in the address bar; browsers typically show these in green, or in red when there is a certificate problem.
  • Install a modern web browser and keep it up to date: Most people are already using one of the well known web browsers, but it is also very important that they are kept up to date with the latest security patches.
  • Be wary of public WiFi: While connecting to free WiFi networks seems like a good idea, it can be extremely dangerous as it has become relatively easy for attackers to set up WiFi hotspots to spy on traffic going back and forth between users and websites. Never trust a WiFi network and avoid banking, purchasing or sensitive transactions while connected to public WiFi.
  • Go direct to the website: There will be plenty of ‘big discount’ emails around over the next few days that will entice shoppers to websites for bargain purchases. Shoppers should make sure that they go direct to the site from their web browser, rather than clicking through the email.
  • Make your passwords hard to guess: Most people wouldn’t have the same key for their car, home, office etc., and for the same reason, it makes sense to have hard-to-guess, unique passwords for online accounts.
  • Install ad blocking extensions: Malicious software often infects computers through viewing or clicking on online advertisements, so it is not a bad idea to install an ad blocking extension that either allows users to surf the web without ads, or completely block the invisible trackers that ads use to build profiles of online habits.
  • Stick to the apps you trust: When making purchases on a mobile phone, shoppers are much better off sticking to apps from companies they know and trust, rather than relying on mobile browsers and email.


If you’re a retailer interested in learning more about the security posture of your applications and websites, sign up for a free website security risk assessment. And if you’re a consumer… well, buyer beware. Follow the tips provided here for a safer holiday shopping experience.


1The WhiteHat Security survey of 4,244 online shoppers in the UK and US was conducted between 13 November 2015 and 19 November 2015.


2WhiteHat Security threat researchers conducted likelihood analysis of critical vulnerabilities in retail websites using data collected between 1 July 2015 and 30 September 2015.