Category Archives: Tools and Applications

NSA Directorates

An earlier post made the point that security problems can come from subdivisions of an organization pursuing incompatible goals. In the Cold War, for example, lack of coordination between the CIA and the State Department allowed the KGB to identify undercover agents.

The Guardian reports that the NSA is reorganizing to address this issue. Previously, its offensive and defensive functions were carried out by two “directorates”: the Signals Intelligence Directorate and the Information Assurance Directorate, respectively. Now, the two directorates will merge.

It seems to be a controversial decision:

Merging the two departments goes against the recommendation of some computer security experts, technology executives and the Obama administration’s surveillance reform commission, all of which have argued that those two missions are inherently contradictory and need to be further separated.

The NSA could decide not to tell a tech company to patch a security flaw, they argue, if it knows the flaw could be used to hack into a targeted machine. This could leave consumers at risk.

It’s doubtful that the NSA considers consumer protection part of its main objectives. This is how the Information Assurance Directorate describes its own purpose:

IAD delivers mission enhancing cyber security technologies, products, and services that enable customers and clients to secure their networks; trusted engineering solutions that provide customers with flexible, timely and risk sensitive security solutions; as well as, traditional IA engineering and fielded solutions support.

As explained here, “customer” is NSA jargon for “the White House, the State Department, the CIA, the US mission to the UN, the Defense Intelligence Agency and others.” It doesn’t refer to “customers” in the sense of citizens doing business with companies.

Simultaneously patching and exploiting the same vulnerabilities seems like an inefficient use of agency resources, unless it’s important to keep up appearances. After the Dual Elliptic Curve Deterministic Random Bit Generator backdoor and the Snowden revelations, the NSA is no longer credible as a source of assurance (except for its customers). Now, the agency can make a single decision about each vulnerability it finds: patch or exploit?

Other former officials said the restructuring at Fort Meade just formalizes what was already happening there. After all, NSA’s hackers and defenders work side by side in the agency’s Threat Operations Center in southern Maryland.

“Sometimes you got to just own it,” said Dave Aitel, a former NSA researcher and now chief executive at the security company Immunity. “Actually, come to think of it, that’s a great new motto for them too.”

Even President Obama’s surveillance reform commission from 2013, which recommended that the Information Assurance Directorate should become its own agency, acknowledged the following (page 194 of PDF):

There are, of course, strong technical reasons for information-sharing between the offense and defense for cyber security. Individual experts learn by having experience both in penetrating systems and in seeking to block penetration. Such collaboration could and must occur even if IAD is organizationally separate.

As David Graeber puts it in The Utopia of Rules, “All bureaucracies are to a certain degree utopian, in the sense that they propose an abstract ideal that real human beings can never live up to.” When work with serious costs and benefits needs to get done, it doesn’t help to use an organizational chart to hide the ways that work actually gets done in practice.

Top 10 Web Hacking Techniques of 2015

With 2015 coming to a close, the time has come for us to pay homage to top-tier security researchers from the past year and properly acknowledge all of the hard work that has been given back to the infosec community. We do this through a nifty yearly process known as the Top 10 Web Hacking Techniques list.

Every year the security community produces a stunning number of new Web hacking techniques, published in white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within those thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack.

Now in its tenth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent research. Previous Top 10s and the number of new attack techniques discovered in each year are as follows:
2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51), 2012 (56), 2013 (31), and 2014 (46).

The vulnerabilities and hacks that make this list are chosen by the collective insight of the infosec community. We rely 100% on nominations, either your own or another researcher’s, for an entry to make this list!

Phase 1: Open community submissions [Jan 11-Feb 1]

Comment on this post or email us at top10Webhacks[/at/]whitehatsec[\dot\]com with your submissions from now until Feb 1st. The submissions will be reviewed and verified.

Phase 2: Open community voting for the final 15 [Feb 1-Feb 8]

Each verified attack technique will be added to a survey, which will be linked below on Feb 1st. The survey will remain open until Feb 8th. Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked in each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to ascertain the top 15 overall.
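As a sanity check on the tally, the Phase 2 scoring rule can be sketched in a few lines of Python (the ballots here are shortened and the entry names reused from the nominations list purely for illustration; real ballots rank all 15 entries):

```python
from collections import Counter

# Rank r on a 15-entry ballot earns 16 - r points (#1 -> 15 ... #15 -> 1).
ballots = [
    ["LogJam", "Magic Hashes", "FREAK"],   # this voter's #1, #2, #3
    ["LogJam", "FREAK", "Magic Hashes"],
]

scores = Counter()
for ballot in ballots:
    for rank, entry in enumerate(ballot, start=1):
        scores[entry] += 16 - rank

# Points from all ballots are summed; the highest totals make the cut.
print(scores.most_common())  # LogJam leads with 30 points
```

The same accumulation, run over the expert panel’s ballots in Phase 3, produces the final ranking.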

Phase 3: Panel of Security Experts Voting [Feb 8-Feb 15]

From the result of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as Phase 2, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2015!

Prizes [to be announced]

The winner of this year’s top 10 will receive a prize!

Current List of 2015 Submissions (in no particular order)
– LogJam
– Abusing XSLT for Practical Attacks
– Java Deserialization w/ Apache Commons Collections in WebLogic, WebSphere, JBoss, Jenkins, and OpenNMS
– Breaking HTTPS with BGP Hijacking
– Pawn Storm (CVE-2015-7645)
– Superfish SSL MitM
– Bypass Surgery – Abusing CDNs with SSRF, Flash, and DNS
– Google Drive SSO Phishing
– Dom Flow – Untangling The DOM For More Easy-Juicy Bugs
– Password mining from AWS/Parse Tokens
– St. Louis Federal Reserve DNS Redirect
– Exploiting XXE in File Upload Functionality
– Expansions on FREAK attack
– eDellRoot
– WordPress Core RCE
– FileCry – The New Age of XXE
– Server-Side Template Injection: RCE for the Modern Web App
– IE11 RCE
– Understanding and Managing Entropy Usage
– Attack Surface for Project Spartan’s EdgeHTML Rendering Engine
– Web Timing Attacks Made Practical
– Winning the Online Banking War
– New Methods in Automated XSS Detection: Dynamic XSS Testing Without Using Static Payloads
– Practical Timing Attacks using Mathematical Amplification of Time Difference in == Operator
– The old is new, again. CVE-2011-2461 is back!
– illusoryTLS
– Hunting ASynchronous Vulnerabilities
– New Evasions for Web Application Firewalls
– Magic Hashes
– Formaction Scriptless attack updates
– The Unexpected Dangers of Dynamic JavaScript
– Who Are You? A Statistical Approach to Protecting LinkedIn Logins (CSS UI Redressing Issue)
– Evading All Web Application filters
– Multiple Facebook Messenger CSRF’s
– Relative Path Overwrite
– SMTP Injection via Recipient Email Address
– Serverside Template Injection
– Hunting Asynchronous Vulnerabilities

Edit 3: Nominations have now ended and voting has begun! ***CLOSED***

Edit 2: Submissions have been extended to February 1st! Keep sending in those submissions! Currently we have 32 entries!

Edit: We will be updating this post with nominations as they are received and vetted for relevance.  Please email them to Top10Webhacks[/at/]whitehatsec[\dot\]com.

Final 15:
– Abusing CDNs with SSRF, Flash, and DNS
– Abusing XSLT for Practical Attacks
– Breaking HTTPS With BGP Hijacking
– Evading All* WAF XSS Filters
– Exploiting XXE in File Parsing Functionality
– FileCry – The New Age of XXE
– FREAK (Factoring attack on RSA-Export Keys)
– Hunting ASynchronous Vulnerabilities
– IllusoryTLS
– LogJam
– Magic Hashes
– Pawnstorm
– Relative Path Overwrite
– Server Side Template Injection
– Web Timing Attacks Made Practical


HTTP Methods

Much of the internet operates on HTTP, the Hypertext Transfer Protocol. With HTTP, the user sends a request and the server replies with its response. These requests are like the pneumatic tubes at the bank — a delivery system for the ultimate content. A user clicks a link; a request is sent to the server; the server replies with a response; the response has the content; the content is displayed for the user.

Request Methods
Different kinds of requests (methods) exist for different types of actions, though some types of actions can be requested in more than one way (using more than one method).  Here are some of the more common methods:

  • POST requests write to the server.
  • GET requests read from the server.
  • HEAD is similar to GET, but retrieves only headers. (Headers contain meta-information, while the rest of the content is in the response body.)
  • PUT requests allow for the creation and replacement of resources on the server.
  • DELETE requests delete resources.

Browsers and Crawlers
Browsers and most web crawlers (search engine crawlers, WhiteHat’s scanner, and other production-safe crawlers) treat method types differently. Production-safe crawlers will send some requests and refrain from sending others based on idempotency (see next section) and safety. Browsers also treat the methods differently; for instance, browsers will cache or store some methods in the history, but not others.

Idempotency and Safety
Idempotency and safety are important attributes of HTTP methods. An idempotent request can be called repeatedly with the same results as if it only had been executed once. If a user clicks a thumbnail of a cat picture and every click of the picture returns the same big cat picture, that HTTP request is idempotent. Non-idempotent requests can change each time they are called. So if a user clicks to post a comment, and each click produces a new comment, that is a non-idempotent request.

Safe requests are requests that don’t alter a resource; non-safe requests have the ability to change a resource. For example, a user posting a comment is using a non-safe request, because the user is changing some resource on the web page; however, the user clicking the cat thumbnail is a safe request, because clicking the cat picture does not change the resource on the server.

Production-safe crawlers consider certain methods, e.g. GET requests, as always safe and idempotent. Consequently, crawlers will send GET requests arbitrarily, without worrying about the effect of repeated requests or that a request might change a resource. However, safe crawlers recognize other methods, e.g. POST requests, as non-idempotent and unsafe. So, good web crawlers won’t send POST requests.
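As a rough sketch, the per-method semantics a production-safe crawler relies on (as defined by the HTTP specification, RFC 7231) can be tabulated like this:

```python
# (safe, idempotent) per RFC 7231: safe methods are read-only by contract,
# and idempotent methods can be repeated without changing the outcome.
METHODS = {
    "GET":    (True,  True),
    "HEAD":   (True,  True),
    "POST":   (False, False),
    "PUT":    (False, True),   # replaying the same PUT leaves the same state
    "DELETE": (False, True),   # deleting twice deletes once
}

def crawler_may_send(method: str) -> bool:
    """A production-safe crawler only issues safe (and hence idempotent) requests."""
    safe, _idempotent = METHODS.get(method.upper(), (False, False))
    return safe
```

Note that PUT and DELETE are idempotent but not safe, so a careful crawler still leaves them alone.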

Why This Matters
While crawlers deem certain methods safe or unsafe, a specific request is not safe or idempotent just because it uses a certain method. For example, GET requests should always be both idempotent and safe, while POST requests are not required to be either. It is possible, however, for an unsafe, non-idempotent request to be sent as a GET request. A web site that uses a GET request where a POST should be required invites problems. For instance:

  • When an unsafe, non-idempotent request is sent as a GET request, crawlers will not recognize the request as dangerous and may call the method repeatedly. If a web site’s “Contact Us” functionality uses GET requests, a web crawler could inadvertently end up spamming the server or someone’s email. If the functionality is accessed by POST requests, the web crawler would recognize the non-idempotent nature of POST requests and avoid it.
  • When an unsafe or non-idempotent GET request is used to transmit sensitive data, that data will be stored in the browser’s history as part of the GET request history. On a public computer, a malicious user could steal a password or credit card information merely by looking at the history if that data is sent via GET. The body of a POST request will not be stored in the browser history, and consequently, the sensitive information stays hidden.
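The history-leak point can be illustrated with Python’s standard library (the URL and field name here are invented):

```python
from urllib.parse import urlencode
from urllib.request import Request

payload = {"card_number": "4111111111111111"}

# Risky: sensitive data in a GET query string lands in browser history
# and server logs along with the URL.
leaky = Request("https://shop.example/pay?" + urlencode(payload))

# Better: the same data carried in a POST body stays out of the URL entirely.
safer = Request("https://shop.example/pay",
                data=urlencode(payload).encode(), method="POST")

assert leaky.get_method() == "GET" and "4111" in leaky.full_url
assert safer.get_method() == "POST" and "4111" not in safer.full_url
```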

It comes down to using the right HTTP method for the right job. If you don’t want a web crawler arbitrarily executing the request or you don’t want the body of the request stored in the browser history, use a POST request. But if the request is harmless no matter how often it’s sent, and does not contain sensitive data, a GET request will work just fine.

“Insufficient Authorization – The Basics” Webinar Questions – Part I

Recently we offered a webinar on a really interesting Insufficient Authorization vulnerability: a site that allows the user to live chat with a customer service representative updated the transcript using a request parameter that an attacker could have manipulated to view a different transcript, potentially giving access to a great deal of confidential information. By combining an “email me this conversation” request with various chatID parameters, an attacker could have collected sensitive information from a wide variety of customer conversations.

To view the webinar, please click here.

So many excellent questions were raised that we thought it would be valuable to share them in a pair of blog posts — here is the first set of questions and answers:

Did you complete this exploit within a network or from the outside?
Here at WhiteHat, we do what is called black box testing. We test apps from outside of their network, knowing nothing of the internal workings of the application or its data mapping. This makes testing more authentic, because we can probably assume the attacker isn’t inside of the network, either.

What is the standard way to remediate these vulnerabilities? Via safer coding?
The best way to remediate this vulnerability is to implement a granular access control policy, and ensure that the application’s sensitive data and functionalities are only available to the users/admins who have the appropriate permissions.
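A minimal sketch of such a check, using the chat-transcript example from the webinar (the data store and names here are invented for illustration; in production the lookup would hit a database tied to the authenticated session):

```python
# Toy transcript store keyed by chatID.
TRANSCRIPTS = {
    "chat-1001": {"owner": "alice", "text": "order #5150 details"},
    "chat-1002": {"owner": "bob", "text": "billing address change"},
}

def get_transcript(chat_id: str, current_user: str) -> str:
    record = TRANSCRIPTS.get(chat_id)
    # Deny uniformly: don't reveal whether the chatID even exists.
    if record is None or record["owner"] != current_user:
        raise PermissionError("not authorized")
    return record["text"]
```

The point is that the server, not the attacker-controlled request parameter, decides what the user may see.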

Can you please elaborate on Generic Framework Solution and Custom Code Solution?

Most frameworks have options for access control. The best thing to do is take advantage of these, and restrict the appropriate resources/functionalities so that only people who actually require the access are allowed access. The best approach to custom coding a solution is to apply the least-privilege principle across all data access: allow each role access only to the data that is actually required to perform the related tasks. In addition, data should never be stored in the application’s root directory; this minimizes the possibility that those files can be found by an unauthorized user who simply knows where to look.

Can you talk about the tools you used to capture and manage the cookies and parameters as you attempted the exploit?
During testing, we have a plethora of tools available. For this particular test, I only used a standard proxy suite. This allows for capturing requests directly from your internet browser, editing and sending the requests, and viewing the responses. Usually, this is all that is needed to exploit an application.

What resources do you recommend for a person that is interested in learning how to perform Pen Testing?
Books, the internet, and more books! A few books that I recommend are The Hacker Playbook, The Web Application Hacker’s Handbook, and Ethical Hacking and Penetration Testing Guide. Take a look at the OWASP top ten, and dig further into each vulnerability.

How did you select this target?
Here at WhiteHat, the team that is responsible for the majority of penetration testing has a list of clients that need business logic assessments. (For a business logic assessment, we test every identifiable functionality of a site.) Each team member independently chooses a site to perform an assessment on. This particular application just happened to be the one I chose that day.

Does the use of SSL/TLS affect the exploitability of this vulnerability?
The use of SSL/TLS does not affect the exploitability. SSL/TLS simply prevents man-in-the-middle attacks, meaning that an attacker can’t relay and possibly alter the communication between a user’s browser and the web server. The proxy that I use breaks the SSL connection between my browser and the server, so that encrypted data can be viewed and modified within the proxy. Requests are then sent over SSL from the proxy.

We hope you found this interesting; more questions and answers will be coming soon — !

The Ad Blocking Wars: Ad Blockers vs. Ad-Tech

More and more people find online ads to be annoying, invasive, dangerous, insulting, distracting, and expensive, and have understandably decided to install an ad blocker. In fact, the number of people using ad blockers is skyrocketing. According to PageFair’s 2015 Ad Blocking Report, there are now 198 million active adblock users around the world with a global growth rate of 41% in the last 12 months. Publishers are visibly feeling the pain and fighting back against ad blockers.

Key to the conflict between ads and ad blockers is the Document Object Model, or DOM. Whenever you view a web page, your browser creates a DOM – a model of the page. This is a programmatic representation of the page that lets JavaScript convert static content into something more dynamic. Whatever is in control of the DOM will control what you see – including whether or not you see ads. Ad blockers are designed to prevent the DOM from including advertisements, while the page is designed to display them. This inherent conflict, this fight for control over the DOM, is where the Ad Blockers vs. Ad-Tech war is waged.

A recent high-profile example of this conflict is Yahoo Mail’s reported attempt to prevent ad-blocking users from accessing their email, which upset a lot of people. This is just one battle in an inevitable war over who controls what you see in your browser’s DOM – Ad Blockers vs. Ad-Tech (ad networks, advertisers, publishers, etc.).

Robert Hansen and I recently performed a thought experiment to see how this technological escalation plays out, and who eventually wins. I played the part of the Ad Blocker and he played Ad-Tech, each of us responding to the action of the other.

Here is what we came up with…

  1. Ad-Tech: Deliver ads to user’s browser.
  2. User: Decides to install an ad blocker.
  3. Ad Blocker: Creates a black list of fully qualified domain names / URLs that are known to serve ads. Blocks the browser from making connections to those locations.
  4. Ad-Tech: Create new fully qualified domain names / URLs that are not on black lists so their ads are not blocked. (i.e. Fast Flux)
  5. Ad Blocker: Crowd-source black list to keep it up-to-date and continue effectively blocking. Allow certain ‘safe’ ads through (i.e. Acceptable Ads Initiative)
  6. Ad-Tech: Load third-party JavaScript onto the web page, which detects whether ads have been blocked. If ads are blocked, deny the user the content or service they wanted.
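Step 3’s blacklist matching can be sketched in a few lines (the domain names here are invented):

```python
# Crowd-sourced blacklist of ad-serving hosts (illustrative entries only).
BLACKLIST = {"ads.example.net", "tracker.example.org"}

def is_blocked(host: str) -> bool:
    """Block an exact blacklisted host or any subdomain of one."""
    return any(host == d or host.endswith("." + d) for d in BLACKLIST)
```

Step 4’s counter, fast-flux-style domain rotation, works precisely because a check like this only knows the names it has already seen.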

*** Current stage of the Ad Blocking Wars ***

  1. Ad Blocker: Maintain a black list of fully qualified domain names / URL of where ad blocking detection code is hosted and block the browser from making connections to those locations.
  2. Ad-Tech: Relocate ad or ad blocking detection code to a first-party website location. Ad blockers cannot block this code without also blocking the web page the user wanted to use. (e.g. sponsored ads, like those found on Google SERPs and Facebook)
  3. Ad Blocker: Detect the presence of ads, but do not block them. Instead, make the ads invisible (e.g. visibility: hidden;). Do not send tracking cookies back to the hosting server, to help preserve privacy.
  4. Ad-Tech: Detect when ads are hidden in the DOM. If ads are hidden, deny the user the content or service they wanted.
  5. Ad Blocker: Allow ads to be visible, but move them WAY out of the way where they cannot be seen. Do not send tracking cookies back to the hosting server, to help preserve privacy.
  6. Ad-Tech: Deliver JavaScript code that detects any unauthorized modification to browser DOM where the ad is to be displayed. If the ad’s DOM is modified, deny the user the content or service they wanted.
  7. Ad Blocker: Detect the presence of first-party ad blocking detection code. Block the browser from loading that code.
  8. Ad-Tech: Move ad blocking detection code to a location that cannot be safely blocked without negatively impacting the user experience. (e.g. Amazon AWS)
  9. Ad Blocker: Crawl the DOM looking for ad blocking detection code, on all domains, first and third-party. Remove the JavaScript code or do not let it execute in the browser.
  10. Ad-Tech: Implement minification and polymorphism techniques designed to hinder isolation and removal of ad blocking detection code.
  11. Ad Blocker: Crawl the DOM looking for ad blocking detection code, reverse code obfuscation techniques on all domains, first and third-party. Remove the offending JavaScript code or do not let it execute in the browser.
  12. Ad-Tech: Integrate ad blocking detection code inside of core website JavaScript functionality. If the JavaScript code fails to run, the web page is designed to be unusable.

GAME OVER. Ad-Tech Wins.

The steps above will not necessarily play out exactly in this order as the war escalates. What matters more is how the war always ends. No matter how Robert and I sliced it, Ad-Tech eventually wins. Their control and access over the DOM appears dominant.

If you look at it closely, the Ad-Tech industry behaves quite similarly to the malware industry. The techniques and delivery are consistent. Ad-Tech wants to deliver and execute code users don’t want, and it will bypass the user’s security controls to do exactly that! So it really should come as no surprise that malware purveyors heavily utilize online advertising channels to infect millions of users. And if this is the way history plays out, where eventually users and their ad blockers lose, antivirus tools are the only option left – and antivirus is basically a coin flip.

The only recourse left is not technical… the courts.



“Crash Course – PCI DSS 3.1 is here. Are you ready?” Part II

Thanks to all who attended our recent webinar, “Crash Course – PCI DSS 3.1 is here. Are you ready?”. During the stream, there were a number of great questions asked by attendees that didn’t get answered due to the limited time. This blog post is a means to answer many of those questions.

Still have questions? Want to know more about PCI DSS 3.1? Want a copy of our PCI DSS Compliance for Dummies eBook? Visit here to learn more.

Is an onsite pen test a requirement in order to meet the PCI-DSS 3.1 requirement?
PCI-DSS 3.1 does require a pen test to meet requirement 11.3. However, it does not explicitly state that the test needs to be onsite. Allowing access to internal web applications and systems via secure VPN or other means can be leveraged to allow outside access.

Requirements for devices such as iPads are not currently specified (although could be implied throughout requirements)…if using a third party secure app to process payments, what else must be done to harden the iPad, if anything?
When using 3rd party applications or services, you must maintain the same level of security with the partner in question. Sections 12.8.2 and 12.9 speak to keeping written documentation to ensure that there is agreement between both parties about what security measures are in place.

In regards to the 2nd part of the question, no additional measures need to be taken on the devices themselves. The application itself must adapt to be secure on whatever device it is running on, even if it is a bug/flaw in the device itself. It is a matter of what control you have to protect your card data. Unless you have a contract with the device vendor, a flaw on the device would fall out of that realm of control.

What about default system accounts (i.e. root) that cannot be removed – is changing the default password no longer enough?
The removal of test data mentioned in section 6.4 focuses on test data and accounts that are used during development. This is separate from section 2.1 that deals with vendor supplied system and default passwords. For things such as root that cannot be removed, changing the defaults is sufficient.

For a test account in production, we implement an agency with a Generic account, which becomes the base of the users underneath it. Is this considered a test account or do they mean ‘backdoor’ accounts for testing?
This is a pretty specific example, but it sounds like this “generic account” is core to your architecture. This does not seem like a test account. However, if it is possible to log into this “generic account,” then it is exposed to the same risk as any user would be.

Does the Cardholder Name or the expiration date need to be encrypted if you do not store the stripe or the actual card number?
PCI DSS in general does not apply if the PAN is not stored, processed, or transmitted. If you are processing this data but not storing the PAN with the Cardholder Name and expiration date, then you are not required to protect the latter two. The PAN should always be protected.

For EMV compliance, is Chip + PIN the only PCI-compliant method, or is the commonly used Chip only compliant?
Either Chip+Signature or Chip+PIN is currently PCI compliant, so long as the chip data is protected to the same PCI standards as full magnetic stripe data. PCI DSS has not taken a stance on Chip+Signature vs. Chip+PIN yet, likely because the latter has not been widely adopted. The US is a laggard in this regard, but is moving in that direction.

If “Company X” accepts responsibility for PCI Compliance, and they use a WiFi that is secured by a password, are they fully compliant?
Using WiFi secured by a password for card transactions does not inherently violate PCI. However, there are several sections of PCI that have strict requirements around implementing and maintaining a wireless network in conjunction with cardholder data. They discuss this more thoroughly in PCI’s section on “wireless” (page 11).

Is social engineering a requirement of a pen test or is the control specific to a network-based attack?
Social engineering is not a requirement of the pen test or network-based attacks. However, social engineering is mentioned in section 8.2.2 in regards to verifying identities before modifying user credentials. Social engineering tests would undoubtedly help in this area, but they aren’t a hard requirement.

Does having a third party host your data and process your cards take away the risk?
We enter credit card information into a 3rd party website and do not keep this information.  Are we still under PCI?
If we use an API with a 3rd party to enter the credit card information, what is it that we need to consider for PCI?
Yes, in this situation you would still need to comply with PCI standards. When using 3rd party applications or services, you must maintain the same level of security with the partner in question. Sections 12.8.2 and 12.9 speak to keeping written agreements to ensure that there is agreement between both parties about what security measures are in place. Using an API with a 3rd party would be considered part of the “processing” of the cardholder data, so any systems that leverage that API or transmit any cardholder data would need to conform to PCI standards, even if no storage is occurring on your end.

How do you see PCI compliance changing, if at all, in the next few years as chip and PIN cards become ubiquitous in the U.S.?
PCI compliance will continue to change based on industry standards. Chip+PIN cards are not yet widely adopted in the U.S., which may have had an impact on it not being a requirement in PCI DSS 3.1. As the U.S. and other geographic regions adopt Chip+PIN in larger numbers, we expect PCI to adopt it as a requirement to push even harder for full adoption.

Regarding change 6 (Requirements 6.5.1 – 6.5.10 applies to all internal and external applications), does PCI have hard standards regarding when zero day vulns or urgent/critical vulns need to be remediated?
PCI DSS does not have any specific requirements around patching zero-day vulnerabilities. However, it does recommend 30-day patch installation for “critical or at-risk systems.”

Some vulns take longer than 60/90 days to be remediated. How does this impact a PCI review?
PCI DSS does not specify any particular remediation timelines. However, there must be a plan to remediate at a minimum. If you can show that you have a timeline to remediate  vulnerabilities that have been open for a longer period of time, you should still meet compliance.  If you give a plan to an assessor, and do not deliver on that plan, then there will likely be an issue.

Are V-LAN setups a legitimate way to segment PCI devices from the rest of the network?
Yes, network segmentation, including properly configured VLANs with supporting access controls, is a legitimate way to isolate PCI devices and reduce the scope of the cardholder data environment. However, the segmentation must be verified as effective: PCI DSS 3.1 requires penetration testing of segmentation controls (requirement 11.3.4).

Does the new PCI requirement state that we need to encrypt, not tokenize, storing PAN and CVV?
CVV storage is not permitted at all. Tokenization of PANs is considered the best practice, but is not a requirement. The requirement for PAN storage simply reads: “PCI DSS requires PAN to be rendered unreadable anywhere it is stored.” Several methods of storage qualify: one-way hashing, encryption, tokenization, etc.
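As one illustrative way to render a PAN unreadable, a keyed one-way hash lets you match stored values without being able to recover the card number. This is only a sketch: the key below is a placeholder, and in practice it would be managed in an HSM or key-management service.

```python
import hashlib
import hmac

# Assumption: the real key lives in a KMS/HSM, never in source code.
SECRET_KEY = b"placeholder-key-managed-elsewhere"

def pan_fingerprint(pan: str) -> str:
    """Keyed one-way hash of a PAN: comparable across records, not reversible."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()
```

The same input always yields the same fingerprint, so duplicate-detection still works, while the stored value alone reveals nothing about the PAN.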

For those of us using a third-party processor for payments, does the requirement that all elements of all payment pages delivered to the customer’s browser originate only and directly from a PCI DSS-validated 3rd party have much impact on many companies?
Yes, the requirement to have an agreement with 3rd party services can have a disruptive impact on many companies.  What happens many times is that no agreement is put in place in regards to security testing ahead of time. Then either the company or their service providers are audited, which leads to a rush to get everything assessed in time. Identifying services and partners that deal with cardholder data ahead of time and putting agreements in place can alleviate a lot of problems.

Any specific requirements for Smartcards, CHIP and signature, CHIP and PIN, or use of mobile phones for payments?
Chip cards do not have any specific requirements in PCI as of today. The data they contain must be treated the same as magnetic stripe data. Use of mobile phones for payments must be validated on a per-app basis, and policies should of course be enforced that prevent these devices from being used on untrusted networks.

Do PCI rules apply to debit cards?
Yes, PCI applies to credit and debit card data.

How would you track all systems if you are scaling based on demand?
If these are identical systems scaling based on demand, then an inventory of each individual appliance would not be necessary as long as there is a documented process on how the system scales up, scales down, and maintains PCI security standards while doing so.

How do we remain PCI compliant if we use an open source solution as part of our application that has a known vulnerability and (at a given time) has not yet been remediated?
Remaining PCI compliant is an ongoing process. If you take a snapshot of any organization's open vulnerabilities, you are likely to find issues. Being able to show that you have processes in place for remediating vulnerabilities, including those introduced by third-party solutions, is part of meeting compliance.

How can we secure/protect sensitive data in memory in Java and mainframe COBOL environments?
Sensitive data existing in volatile memory in clear text is unavoidable: the application must understand what it is storing before it can store it. However, several classes of attack that expose these values in memory, such as buffer overflows, can be prevented. Preventing them greatly reduces the risk of sensitive data being exposed from memory.

Are the OWASP Top 10 vulnerabilities the only topics that need to be covered with developers to comply with PCI DSS requirement 6.5?
PCI DSS recommends that developers be able to identify and resolve the vulnerabilities in sections 6.5.1–6.5.10. The OWASP Top 10 covers many vulnerabilities from these sections, but not all of them; additional training would be required to fully cover those sections.

URLs are content

Justifications for the federal government’s controversial mass surveillance programs have involved the distinction between the contents of communications and associated “meta-data” about those communications. Finding out that two people spoke on the phone requires less red tape than listening to the conversations themselves. While “meta-data” doesn’t sound especially ominous, analysts can use graph theory to draw surprisingly powerful inferences from it. A funny illustration of that can be found in Kieran Healy’s blog post, Using Metadata to find Paul Revere.

On November 10, the Third Circuit Court of Appeals ruled that web browsing histories are “content” under the Wiretap Act, which implies that the government will need a warrant before collecting such histories. Wired summarized the point the court was making:

A visit to “,” for instance, might count as metadata, as Cato Institute senior fellow Julian Sanchez explains. But a visit to “” clearly reveals something about the visitor’s communications with WebMD, not just the fact of the visit. “It’s not a hard call,” says Sanchez. “The specific URL I visit at or or tells you very specifically what the meaning or purport of my communications are.”

Interestingly, the party accused of violating the Wiretap Act in this case wasn't the federal government. It was Google. The court ruled that Google had collected content in the sense of the Wiretap Act, but that this was permissible, because you can't eavesdrop on your own conversation. I'm not an attorney, but the legal technicalities were well explained in the Washington Post.

The technical technicalities are also interesting.

Basically, a cookie is a secret between your browser and an individual web server. The secret takes the form of a key-value pair, like id=12345. Once a cookie is “set,” it accompanies every request the browser sends to the server that set it. If the server makes sure that each browser it interacts with gets a different cookie, it can distinguish individual visitors. That's what it means to be “logged in” to a website: after you prove your identity with a username and password, the server assigns you a “session cookie.” When you visit, you see your own profile because the server read your cookie, and your cookie was tied to your account when you logged in.

Cookies can be set in two ways. The browser might request something from a server (HTML, JavaScript, CSS, image, etc.). The server sends back the requested file, and the response contains “Set-Cookie” headers. Alternatively, JavaScript on the page might set cookies using document.cookie. That is, cookies can be set server-side or client-side.
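A minimal sketch of the server-side path, using Python's standard library to build the header (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Server-side: build the Set-Cookie header that accompanies an HTTP response.
cookie = SimpleCookie()
cookie["id"] = "12345"
cookie["id"]["path"] = "/"
header = cookie.output()  # e.g. 'Set-Cookie: id=12345; Path=/'

# The client-side equivalent, in the page's own JavaScript, would be:
#   document.cookie = "id=12345; path=/";
```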

A cookie is nothing more than a place for application developers to store short strings of data. These are some of the common security considerations with cookies:

  • Is inappropriate data being stored in cookies?
  • Can an attacker guess the values of other people’s cookies?
  • Are cookies being sent across unencrypted connections?
  • Should the cookies get a special “HttpOnly” flag that makes them JavaScript-inaccessible, to protect them from potential cross-site scripting attacks?

OWASP has a more detailed discussion of cookie security here.
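For illustration, the “HttpOnly” flag (and its companion “Secure” flag) can be set when the cookie is issued. A sketch using Python's standard library, with a placeholder session value:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "d41d8cd98f00b204"
cookie["session"]["httponly"] = True  # hidden from document.cookie (XSS mitigation)
cookie["session"]["secure"] = True    # only sent over encrypted connections
header = cookie.output()
```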

When a user requests a web page and receives an HTML document, that document can instruct their browser to communicate with many different third parties. Should all of those third parties be able to track the user, possibly across multiple websites?

Enough people feel uncomfortable with third-party cookies that browsers include options for disabling them. The case before the Third Circuit Court of Appeals was about Google’s practices in 2012, which involved exploiting a browser bug to set cookies in Apple’s Safari browser, even when users had explicitly disabled third-party cookies. Consequently, Google was able to track individual browsers across multiple websites. At issue was whether the list of URLs the browser visited constituted “content.” The court ruled that it did.

The technical details of what Google was doing are described here.

Data is often submitted to websites through HTML forms. It's natural to assume that submitting a form is always intentional, but forms can also be submitted by JavaScript, without any user interaction. The assumption that a form submission means the user has “consented” to communication with the receiving server is what led to the bug Google exploited.

Safari prevented third-party servers from setting cookies unless a form was submitted to the third party. Google supplied code that made browsers submit forms to its servers without user interaction, and in response to those form submissions, tracking cookies were set. This circumvented the user's decision not to accept third-party cookies. Incidentally, automatically submitting a form through JavaScript is also how an attacker would carry out certain cross-site scripting or cross-site request forgery attacks.

To recap: Apple and Google had a technical arms race about tracking cookies. There was a lawsuit, and now we’re clear that the government needs a warrant to look at browser histories, because URL paths and query strings are very revealing.

The court suggested that there’s a distinction to be made between the domain and the rest of the URL, but that suggestion was not legally binding.

Complexity and Storage Slow Attackers Down

Back in 2013, WhiteHat founder Jeremiah Grossman forgot an important password, and Jeremi Gosney of Stricture Consulting Group helped him crack it. Gosney knows password cracking, and he’s up for a challenge, but he knew it’d be futile trying to crack the leaked Ashley Madison passwords. Dean Pierce gave it a shot, and Ars Technica provides some context.

Ashley Madison made mistakes, but password storage wasn’t one of them. This is what came of Pierce’s efforts:

After five days, he was able to crack only 4,007 of the weakest passwords, which comes to just 0.0668 percent of the six million passwords in his pool.

It’s like Jeremiah said after his difficult experience:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

Imagine the average person who doesn't even work in IT! Logging in to a website feels simpler than it is. It feels like, “The website checked my password, and now I'm logged in.”

Actually, “being logged in” means that the server gave your browser a secret number, AND your browser includes that number every time it makes a request, AND the server has a table of which number goes with which person, AND the server sends you the right stuff based on who you are. Usernames and passwords have to do with whether the server gives your browser the secret number in the first place.

It's natural to assume that “checking your password” means the server knows your password and compares it to what you typed in the login form. By now, everyone has heard that they're supposed to have an impossible-to-remember password, but the reasons aren't usually explained (people have their own problems besides the finer points of PBKDF2 vs. bcrypt).

If you've never had to think about it, it's also natural to assume that hackers guessing your password are literally trying to log in as you. Even professional programmers can make that assumption when password storage is outside their area of expertise. Our clients' developers sometimes object to findings about password complexity or other brute force issues because they throttle login attempts, lock accounts after three incorrect guesses, etc. If hackers really had to go through the login form, they would be greatly limited by how long each request takes over the network. Account lockouts are probably enough to discourage a person's acquaintances, but they aren't a protection against offline password cracking.

Password complexity requirements (include mixed case, include numbers, include symbols) are there to protect you once an organization has already been compromised (like Ashley Madison). In that scenario, password complexity is what you can do to help yourself. Proper password storage is what the organization can do. The key to that is in what exactly “checking your password” means.

When the server receives your login attempt, it runs your password through something called a hash function. When you set your password, the server ran it through the hash function and stored the result, not the password itself. The server should only keep the password long enough to run it through the hash function. The difference between secure and insecure password storage lies in the choice of hash function.
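A minimal sketch of that flow, assuming PBKDF2 from Python's standard library as the hash function (the iteration count here is illustrative):

```python
import hashlib
import hmac
import os

def set_password(password: str) -> tuple[bytes, bytes]:
    # Store only the salt and the derived hash; the plaintext is discarded.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(candidate: str, salt: bytes, stored: bytes) -> bool:
    # "Checking your password" means re-running the same slow derivation
    # and comparing results, not comparing plaintexts.
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 200_000)
    return hmac.compare_digest(digest, stored)

salt, stored = set_password("correct horse battery staple")
```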

If your enemy is using brute force against you and trying every single thing, your best bet is to slow them down. That's the thinking behind both account lockouts and the design of functions like bcrypt. Running data through a hash function might be fast or slow, depending on the hash function. Hash functions have many applications: you can use them to confirm that large files haven't been corrupted, and for that purpose it's good for them to be fast. SHA-256 is a hash function suitable for that.
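A sketch of that use case, checksumming a file with SHA-256:

```python
import hashlib

def file_checksum(path: str) -> str:
    # SHA-256 is deliberately fast, which is exactly what you want for
    # verifying large downloads, and exactly what you don't want for
    # password storage.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```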

A common mistake is using a deliberately fast hash function where a deliberately slow one is appropriate. Password storage is an unusual situation in which we want the computation to be as slow and inefficient as practicable.

Hackers who've compromised an account database have a table of usernames and strings like “$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy”. Cracking a password means making a guess, running it through the hash function, and getting the same string. If you use a complicated password, they have to try more guesses; that's what you can do to slow them down. What organizations can do is choose a hash function that makes each individual check time-consuming. It's cryptography, so big numbers are involved. The “only” thing protecting the passwords of Ashley Madison users is that trying even a fraction of the possible passwords is too time-consuming to be practical.

Consumers have all the necessary information to read about password storage best practices and pressure companies to use those practices. At least one website is devoted to the cause. It’s interesting that computers are forcing ordinary people to think about these things, and not just spies.

Conspiracy Theory and the Internet of Things

I came across this article about smart devices on Alternet, which tells us that “we are far from a digital Orwellian nightmare.” We’re told that worrying about smart televisions, smart phones, and smart meters is for “conspiracy theorists.”

It’s a great case study in not having a security mindset.

This is what David Petraeus said about the Internet of Things at the In-Q-Tel CEO summit in 2012, while he was head of the CIA:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as radio-frequency identification, sensor networks, tiny embedded servers, and energy harvesters—all connected to the next-generation Internet using abundant, low cost, and high-power computing—the latter now going to cloud computing, in many areas greater and greater supercomputing, and, ultimately, heading to quantum computing.

In practice, these technologies could lead to rapid integration of data from closed societies and provide near-continuous, persistent monitoring of virtually anywhere we choose. “Transformational” is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft. Taken together, these developments change our notions of secrecy and create innumerable challenges—as well as opportunities.

In-Q-Tel is a venture capital firm that invests “with the sole purpose of delivering these cutting-edge technologies to IC [intelligence community] end users quickly and efficiently.” Quickly means 3 years, for their purposes.

It’s been more than 3 years since Petraeus made those remarks. “Is the CIA meeting its stated goals?” is a fair question. Evil space lizards are an absurd conspiracy theory, for comparison.

Smart Televisions

The concerns are confidently dismissed:

Digital Trends points out that smart televisions aren’t “always listening” as they are being portrayed in the media. In fact, such televisions are asleep most of the time, and are only awaken [sic] when they hear a pre-programmed phrase like “Hi, TV.” So, any conversation you may be having before waking the television is not captured or reported. In fact, when the television is listening, it informs the user it is in this mode by beeping and displaying a microphone icon. And when the television enters into listening mode, it doesn’t comprehend anything except a catalog of pre-programmed, executable commands.

Mistaken assumption: gadgets work as intended.

Here’s a Washington Post story from 2013:

The FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years, and has used that technique mainly in terrorism cases or the most serious criminal investigations, said Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, now on the advisory board of Subsentio, a firm that helps telecommunications carriers comply with federal wiretap statutes.

Logically speaking, how does the smart TV know it's heard a pre-programmed phrase? The microphone must be on, so that ambient sounds can be compared against the pre-programmed phrases. We already know the device can transmit data over the internet. The issue is whether data can be transmitted at the wrong time, to the wrong people. What if there were a simple bug that kept the microphone from shutting off once it's turned on? That would be analogous to insufficient session expiration in a web app, which is pretty common.

The author admits that voice data is sent to servers advanced enough to detect regional dialects. A low-profile third-party contractor is in a position to know whether someone with a different accent is in your living room:

With smart televisions, some information, like IP address and other stored data may be transmitted as well. According to Samsung, its speech recognition technology can also be used to better recognize regional dialects and accents and other things to enhance the user experience. To do all these things, smart television makers like Samsung must employ third-party applications and servers to help them decipher the information it takes in, but this information is encrypted during transmission and not retained or for sale, at least according to the company’s privacy policy.

Can we trust that the encryption is done correctly, and nobody’s stolen the keys? Can we trust that the third parties doing natural language processing haven’t been compromised?

Smart Phones

The Alternet piece has an anecdote of someone telling the author to “Never plug your phone in at a public place; they'll steal all your information.” Someone can be technically unsophisticated and still have the right intuitions. The man doesn't understand that his phone broadcasts radio waves into the environment, so he has an inaccurate mental model of the threat. But he knows that there is a threat.

Then this passage:

A few months back, a series of videos were posted to YouTube and Facebook claiming that the stickers affixed to cellphone batteries are transmitters used for data collection and spying. The initial video showed a man peeling a Near Field Communication transmitter off the wrapper on his Samsung Galaxy S4 battery. The person speaking on the video claims this “chip” allows personal information, such as photographs, text messages, videos and emails to be shared with nearby devices and “the company.” He recommended that the sticker be removed from the phone’s battery….

And that sticker isn’t some nefarious implant the phone manufacturer uses to spy on you; it’s nothing more than a coil antenna to facilitate NFC transmission. If you peel this sticker from your battery, it will compromise your smartphone and likely render it useless for apps that use NFC, like Apple Pay and Google Wallet.

As Ars Technica put it in 2012:

By exploiting multiple security weakness in the industry standard known as Near Field Communication, smartphone hacker Charlie Miller can take control of handsets made by Samsung and Nokia. The attack works by putting the phone a few centimeters away from a quarter-sized chip, or touching it to another NFC-enabled phone. Code on the attacker-controlled chip or handset is beamed to the target phone over the air, then opens malicious files or webpages that exploit known vulnerabilities in a document reader or browser, or in some cases in the operating system itself.

Here, the author didn't imagine a scenario where someone might get a malicious device within a few centimeters of his phone. “Can I borrow your phone?” “Place all items from your pockets in the tray before stepping through the security checkpoint.” “Scan this barcode for free stuff!”

Smart Meters

Finally, the Alternet piece has this to say about smart meters:

In recent years, privacy activists have targeted smart meters, saying they collect detailed data about energy consumption. These conspiracy theorists are known to flood public utility commission meetings, claiming that smart meters can do a lot of sneaky things like transmit the television shows they watch, the appliances they use, the music they listen to, the websites they visit and their electronic banking use. They believe smart meters are the ultimate spying tool, making the electrical grid and the utilities that run it the ultimate spies.

Again, people can have the right intuitions about things without being technical specialists. That doesn’t mean their concerns are absurd:

The SmartMeters feature digital displays, rather than the spinning-usage wheels seen on older electromagnetic models. They track how much energy is used and when, and transmit that data directly to PG&E. This eliminates the need for paid meter readers, since the utility can immediately access customers’ usage records remotely and, theoretically, find out whether they are consuming, say, exactly 2,000 watts for exactly 12 hours a day.

That’s a problem, because usage patterns like that are telltale signs of indoor marijuana grow operations, which will often run air or water filtration systems round the clock, but leave grow lights turned on for half the day to simulate the sun, according to the Silicon Valley Americans for Safe Access, a cannabis users’ advocacy group.

What’s to stop PG&E from sharing this sensitive information with law enforcement? SmartMeters “pose a direct privacy threat to patients who … grow their own medicine,” says Lauren Vasquez, Silicon Valley ASA’s interim director. “The power company may report suspected pot growers to police, or the police may demand that PG&E turn over customer records.”

Even if you're not doing anything ambiguously legal, the first thing you do when you get home is probably turning on the lights. Different appliances use different amounts of power. By reporting power consumption at shorter intervals, smart meters can give away a lot about what's going on inside a house.


That’s not the same as what you were watching on TV, but the content of phone conversations isn’t all that’s interesting about them, either.

It’s hard to trust things.

Security Pictures

Security pictures are used in many web applications to add an extra step to the login process. But are they being used properly? Could they actually aid attackers? These questions crossed my mind while testing an application whose login process relied on security pictures to provide an extra layer of security.

I was performing a business logic assessment for an insurance application that used security pictures as part of its login process, but something seemed off. The first step was to enter your username; if the username was found in the database, you would be presented with your security picture (e.g., a dog, cat, or iguana). If the username was not in the database, a message appeared saying you hadn't set up your security picture yet. Besides the clear potential for a brute force attack on usernames, there was another vulnerability hiding here: you could view other users' security pictures just by guessing usernames in the first step.
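The flawed first step can be sketched abstractly (the usernames, pictures, and messages here are hypothetical):

```python
# The response differs depending on whether the username exists, and the
# security picture is revealed before any authentication has taken place.
REGISTERED = {"alice": "iguana.png", "bob": "dog.png"}

def step_one(username: str) -> str:
    if username in REGISTERED:
        return f"Your security picture: {REGISTERED[username]}"
    return "You haven't set up your security picture yet."

# An attacker iterating a username list both enumerates valid accounts
# and harvests phishing material in a single pass.
```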

Before deciding how to classify the possible vulnerability in my assessment, I had to do some quick research on two topics: what are security pictures used for, and how do other applications use them effectively?

I had always wondered what extra security the picture added. How could a picture of an iguana I chose protect me from danger at all, or add an extra layer of security when I log in? Security pictures are mainly used to protect users from phishing attacks. For example, if an attacker reproduces a banking login screen, a user who is accustomed to seeing an iguana picture before entering his or her password will pause for a moment and notice that something is not right, since Iggy isn't there anymore. The absence of the security picture produces that mental pause, and in most cases the user will not enter their password.

After finding out the true purpose of security pictures, I wanted to see how other applications use them in a less broken way. So I visited my bank's website and entered my username, but instead of having my security picture displayed right away, I was asked to answer my security question. Only once the secret answer was entered would my security picture be displayed above the password input field. This approach to using a security picture was secure.

What seemed off in the beginning was this: because attackers can harvest users' security pictures with a brute force attack, they can go a step further and use the security pictures of target users to create an even stronger phishing attack. The picture reassures victims that they are on the right website, because their security picture is there as usual.

Now that it was clear the finding was indeed a vulnerability, I had to think about how to classify it and what score to award. I classified it as Abuse of Functionality, since WhiteHat Security defines Abuse of Functionality as:

“Abuse of Functionality is an attack technique that uses a web site’s own features and functionality to attack itself or others. Abuse of Functionality can be described as the abuse of an application’s intended functionality to perform an undesirable outcome. These attacks have varied results such as consuming resources, circumventing access controls, or leaking information. The potential and level of abuse will vary from web site to web site and application to application. Abuse of functionality attacks are often a combination of other attack types and/or utilize other attack vectors.”

In this case an attacker could use the application's own authentication functionality to attack other users, by combining the results of a brute force attack with the harvested security pictures to create a powerful phishing attack. For the scores I chose to use Impact and Likelihood, each given a low, medium, or high value: Impact measures the potential damage a vulnerability inflicts, and Likelihood estimates how likely it is to be exploited. In terms of Likelihood, I rated this a medium, because setting up the phishing attack is time-consuming: you first have to perform a brute force attack to obtain valid usernames, then pick specific victims from among them. As for Impact, I rated it high, because once the phishing attack is sent, the victim would most likely lose his or her credentials.

Security pictures can indeed add an extra layer of security to your application's login process. However, put on your black hat for a moment and ask: how could a hacker use your own security measures against the application? As shown here, sometimes the medicine can be worse than the disease.