Category Archives: Technical Insight

NSA Directorates

An earlier post made the point that security problems can come from subdivisions of an organization pursuing incompatible goals. In the Cold War, for example, lack of coordination between the CIA and the State Department allowed the KGB to identify undercover agents.

The Guardian reports that the NSA is reorganizing to address this issue. Previously, its offensive and defensive functions were carried out by two “directorates”: the Signals Intelligence Directorate and the Information Assurance Directorate, respectively. Now, the two directorates will merge.

It seems to be a controversial decision:

Merging the two departments goes against the recommendation of some computer security experts, technology executives and the Obama administration’s surveillance reform commission, all of which have argued that those two missions are inherently contradictory and need to be further separated.

The NSA could decide not to tell a tech company to patch a security flaw, they argue, if it knows the flaw could be used to hack into a targeted machine. This could leave consumers at risk.

It’s doubtful that the NSA considers consumer protection part of its main objectives. This is how the Information Assurance Directorate describes its own purpose:

IAD delivers mission enhancing cyber security technologies, products, and services that enable customers and clients to secure their networks; trusted engineering solutions that provide customers with flexible, timely and risk sensitive security solutions; as well as, traditional IA engineering and fielded solutions support.

As explained here, “customer” is NSA jargon for “the White House, the State Department, the CIA, the US mission to the UN, the Defense Intelligence Agency and others.” It doesn’t refer to “customers” in the sense of citizens doing business with companies.

Simultaneously patching and exploiting the same vulnerabilities seems like an inefficient use of agency resources, unless it’s important to keep up appearances. After the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG) backdoor and the Snowden revelations, the NSA is no longer credible as a source of assurance (except for its customers). Now, the agency can make a single decision about each vulnerability it finds: patch or exploit?

Other former officials said the restructuring at Fort Meade just formalizes what was already happening there. After all, NSA’s hackers and defenders work side by side in the agency’s Threat Operations Center in southern Maryland.

“Sometimes you got to just own it,” said Dave Aitel, a former NSA researcher and now chief executive at the security company Immunity. “Actually, come to think of it, that’s a great new motto for them too.”

Even President Obama’s surveillance reform commission from 2013, which recommended that the Information Assurance Directorate should become its own agency, acknowledged the following (page 194 of PDF):

There are, of course, strong technical reasons for information-sharing between the offense and defense for cyber security. Individual experts learn by having experience both in penetrating systems and in seeking to block penetration. Such collaboration could and must occur even if IAD is organizationally separate.

As David Graeber puts it in The Utopia of Rules, “All bureaucracies are to a certain degree utopian, in the sense that they propose an abstract ideal that real human beings can never live up to.” When work with significant potential costs and benefits needs to get done, it doesn’t help to use an organizational chart to hide the ways that work gets done in practice.

Top 10 Web Hacking Techniques of 2015

Edit 3: Nominations have now ended and voting has begun!

Edit 2: Submissions have been extended to February 1st! Keep sending in those submissions! Currently we have 32 entries!

Edit: We will be updating this post with nominations as they are received and vetted for relevance.  Please email them to Top10Webhacks[/at/]whitehatsec[\dot\]com.


With 2015 coming to a close, the time has come for us to pay homage to the top tier security researchers of the past year and properly acknowledge all of the hard work that has been given back to the infosec community. We do this through a nifty yearly process known as the Top 10 Web Hacking Techniques list.

Every year the security community produces a stunning number of new Web hacking techniques, published in white papers, blog posts, magazine articles, mailing list emails, conference presentations, and more. Within those thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, we are solely focused on new and creative methods of Web-based attack.

Now in its tenth year, the Top 10 Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent research. Previous Top 10s and the number of new attack techniques discovered in each year are as follows:
2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51), 2012 (56), 2013 (31), and 2014 (46).

The vulnerabilities and hacks that make this list are chosen by the collective insight of the infosec community. We rely 100% on nominations, either your own or another researcher’s, for an entry to make this list!

Phase 1: Open community submissions [Jan 11-Feb 1]

Comment on this post or email us at top10Webhacks[/at/]whitehatsec[\dot\]com with your submissions between now and Feb 1st. The submissions will be reviewed and verified.

Phase 2: Open community voting for the final 15 [Feb 1-Feb 8]

Each verified attack technique will be added to a survey, which will be linked below on Feb 1st. The survey will remain open until Feb 8th. Each attack technique (listed alphabetically) receives points depending on how high the entry is ranked in each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to ascertain the top 15 overall.
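The ballot scoring described above can be sketched in a few lines of Python. The entry names and ballots below are made up for illustration; the real tabulation runs over the survey results:

```python
# Sketch of the Phase 2 tabulation: position #1 on a ballot earns 15 points,
# #2 earns 14, and so on down to 1 point for position #15.
def tabulate(ballots, num_positions=15):
    scores = {}
    for ballot in ballots:
        for position, entry in enumerate(ballot[:num_positions]):
            scores[entry] = scores.get(entry, 0) + (num_positions - position)
    return sorted(scores.items(), key=lambda item: -item[1])  # highest total first

ballots = [
    ["LogJam", "Magic Hashes", "illusoryTLS"],  # one voter's ranking
    ["Magic Hashes", "LogJam"],                 # another voter's ranking
]
print(tabulate(ballots))  # [('LogJam', 29), ('Magic Hashes', 29), ('illusoryTLS', 13)]
```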

Phase 3: Panel of Security Experts Voting [Feb 8-Feb 15]

From the result of the open community voting, the final 15 Web Hacking Techniques will be ranked based on votes by a panel of security experts. (Panel to be announced soon!) Using the exact same voting process as Phase 2, the judges will rank the final 15 based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top 10 Web Hacking Techniques of 2015!

Prizes [to be announced]

The winner of this year’s top 10 will receive a prize!

Current List of 2015 Submissions (in no particular order)
– LogJam
– Abusing XSLT for Practical Attacks
– Java Deserialization w/ Apache Commons Collections in WebLogic, WebSphere, JBoss, Jenkins, and OpenNMS
– Breaking HTTPS with BGP Hijacking
– Pawn Storm (CVE-2015-7645)
– Superfish SSL MitM
– Bypass Surgery – Abusing CDNs with SSRF, Flash, and DNS
– Google Drive SSO Phishing
– Dom Flow – Untangling The DOM For More Easy-Juicy Bugs
– Password mining from AWS/Parse Tokens
– St. Louis Federal Reserve DNS Redirect
– Exploiting XXE in File Upload Functionality
– Expansions on FREAK attack
– eDellRoot
– WordPress Core RCE
– FileCry – The New Age of XXE
– Server-Side Template Injection: RCE for the Modern Web App
– IE11 RCE
– Understanding and Managing Entropy Usage
– Attack Surface for Project Spartan’s EdgeHTML Rendering Engine
– Web Timing Attacks Made Practical
– Winning the Online Banking War
– New Methods in Automated XSS Detection: Dynamic XSS Testing Without Using Static Payloads
– Practical Timing Attacks using Mathematical Amplification of Time Difference in == Operator
– The old is new, again. CVE-2011-2461 is back!
– illusoryTLS
– Hunting Asynchronous Vulnerabilities
– New Evasions for Web Application Firewalls
– Magic Hashes
– Formaction Scriptless attack updates
– The Unexpected Dangers of Dynamic JavaScript
– Who Are You? A Statistical Approach to Protecting LinkedIn Logins (CSS UI Redressing Issue)
– Evading All Web Application filters
– Multiple Facebook Messenger CSRF’s
– Relative Path Overwrite
– SMTP Injection via Recipient Email Address
– Serverside Template Injection


HTTP Methods

Much of the internet operates on HTTP, the Hypertext Transfer Protocol. With HTTP, the user sends a request and the server replies with a response. These requests are like the pneumatic tubes at a bank: a delivery system for the ultimate content. A user clicks a link; a request is sent to the server; the server replies with a response; the response carries the content; the content is displayed for the user.

Request Methods
Different kinds of requests (methods) exist for different types of actions, though some types of actions can be requested in more than one way (using more than one method).  Here are some of the more common methods:

  • POST requests write to the server.
  • GET requests read from the server.
  • HEAD is similar to GET, but retrieves only headers (headers contain meta-information, while the rest of the content is in the response body).
  • PUT requests allow for the creation and replacement of resources on the server.
  • DELETE requests delete resources.
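As a runnable illustration of the request/response cycle and of how these methods differ, here is a sketch using only Python’s standard library against a throwaway local server. The routes and payloads are invented for the example:

```python
# A toy server and client: GET reads, HEAD fetches headers only, POST writes.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

comments = []  # a server-side "resource" that POST requests will modify

class Handler(BaseHTTPRequestHandler):
    def _respond(self, body=b""):
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):   # read from the server: same response every time
        self._respond(b"big cat picture")

    def do_HEAD(self):  # like GET, but headers only, no body
        self._respond()

    def do_POST(self):  # write to the server: changes state on each call
        length = int(self.headers.get("Content-Length", 0))
        comments.append(self.rfile.read(length))
        self._respond(b"comment #%d stored" % len(comments))

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/cat.jpg")
get_body = conn.getresponse().read()   # b'big cat picture'
conn.request("HEAD", "/cat.jpg")
head_body = conn.getresponse().read()  # b'' -- headers only
conn.request("POST", "/comments", body=b"nice cat")
post_body = conn.getresponse().read()  # b'comment #1 stored'
print(get_body, head_body, post_body)
server.shutdown()
```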

Browsers and Crawlers
Browsers and most web crawlers (search engine crawlers, WhiteHat’s scanner, or other production safe crawlers) treat method types differently. Production safe crawlers will send some requests and refrain from sending others based on idempotency (see next section) and safety. Browsers also treat the methods differently; for instance, browsers will cache some requests or store them in the history, but not others.

Idempotency and Safety
Idempotency and safety are important attributes of HTTP methods. An idempotent request can be called repeatedly with the same results as if it only had been executed once. If a user clicks a thumbnail of a cat picture and every click of the picture returns the same big cat picture, that HTTP request is idempotent. Non-idempotent requests can change each time they are called. So if a user clicks to post a comment, and each click produces a new comment, that is a non-idempotent request.

Safe requests are requests that don’t alter a resource; non-safe requests have the ability to change a resource. For example, a user posting a comment is using a non-safe request, because the user is changing some resource on the web page; however, the user clicking the cat thumbnail is a safe request, because clicking the cat picture does not change the resource on the server.

Production safe crawlers consider certain methods as always safe and idempotent, e.g. GET requests. Consequently, crawlers will send GET requests arbitrarily without worrying about the effect of repeated requests or that the request might change the resource. However, safe crawlers will recognize other methods, e.g. POST requests, as non-idempotent and unsafe. So, good web crawlers won’t send POST requests.
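The classification that production safe crawlers rely on can be captured in a few lines. This is a sketch of the standard HTTP method classification; the helper name is ours:

```python
# Per the HTTP specification: GET and HEAD must be safe (no side effects),
# and all safe methods plus PUT and DELETE are idempotent (repeating a
# request has the same effect as sending it once).
SAFE = {"GET", "HEAD"}
IDEMPOTENT = SAFE | {"PUT", "DELETE"}

def crawler_may_send(method):
    """A production safe crawler only sends safe, idempotent requests."""
    return method.upper() in SAFE

print(crawler_may_send("GET"))   # True
print(crawler_may_send("POST"))  # False: might create a comment, send mail, etc.
```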

Why This Matters
While crawlers deem certain methods safe or unsafe, a specific request is not safe or idempotent merely because it uses a certain method. For example, GET requests should always be both idempotent and safe, while POST requests are not required to be either. It is possible, however, for an unsafe, non-idempotent request to be sent as a GET request. A web site that uses a GET request where a POST should be required can run into problems. For instance:

  • When an unsafe, non-idempotent request is sent as a GET request, crawlers will not recognize the request as dangerous and may call the method repeatedly. If a web site’s “Contact Us” functionality uses GET requests, a web crawler could inadvertently end up spamming the server or someone’s email. If the functionality is accessed by POST requests, the web crawler would recognize the non-idempotent nature of POST requests and avoid it.
  • When an unsafe or non-idempotent GET request is used to transmit sensitive data, that data will be stored in the browser’s history as part of the requested URL. On a public computer, a malicious user could steal a password or credit card information merely by looking at the history if that data is sent via GET. The body of a POST request will not be stored in the browser history, and consequently, the sensitive information stays hidden.

It comes down to using the right HTTP method for the right job. If you don’t want a web crawler arbitrarily executing the request or you don’t want the body of the request stored in the browser history, use a POST request. But if the request is harmless no matter how often it’s sent, and does not contain sensitive data, a GET request will work just fine.
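A small Python sketch of the history-leak point above, showing how the same fields end up in the URL for a GET but in the body for a POST. The URL and field names are illustrative:

```python
# Why sensitive data in a GET request matters: the query string becomes part
# of the URL itself, which browsers record in history (and servers in logs).
from urllib.parse import parse_qs, urlencode, urlparse

params = {"card_number": "4111111111111111", "cvv": "123"}

# Via GET: the secrets ride inside the URL that gets recorded.
get_url = "https://example.com/pay?" + urlencode(params)
print(get_url)

# Via POST: the same bytes travel in the request body instead,
# and the recorded URL stays clean.
post_url = "https://example.com/pay"
post_body = urlencode(params)
print(post_url)
```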

“Insufficient Authorization – The Basics” Webinar Questions – Part I

Recently we offered a webinar on a really interesting Insufficient Authorization vulnerability. A site that allows the user to live chat with a customer service representative updated the transcript using a request parameter that an attacker could have manipulated in order to view a different transcript, potentially gaining access to a great deal of confidential information. Using an “email me this conversation” request in combination with various chatID parameters could have allowed an attacker to collect sensitive information from a wide variety of customer conversations.

To view the webinar, please click here.

So many excellent questions were raised that we thought it would be valuable to share them in a pair of blog posts — here is the first set of questions and answers:

Did you complete this exploit within a network or from the outside?
Here at WhiteHat, we do what is called black box testing. We test apps from outside their network, knowing nothing of the internal workings of the application or its data mapping. This makes testing more authentic, because we can generally assume the attacker isn’t inside the network, either.

What is the standard way to remediate these vulnerabilities? Via safer coding?
The best way to remediate this vulnerability is to implement a granular access control policy, and ensure that the application’s sensitive data and functionalities are only available to the users/admins who have the appropriate permissions.

Can you please elaborate on Generic Framework Solution and Custom Code Solution?

Most frameworks have options for access control. The best thing to do is take advantage of these, and restrict the appropriate resources/functionalities so that only people who actually require the access are allowed access. The best approach to custom coding a solution is to apply the least-privilege principle across all data access: allow each role access only to the data that is actually required to perform the related tasks. In addition, data should never be stored in the application’s root directory; this minimizes the possibility that those files can be found by an unauthorized user who simply knows where to look.
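As an illustration only, a granular, least-privilege check of the kind described above might look like the following sketch. The roles, permission table, and helper are hypothetical, not WhiteHat code; the ownership comparison is what defeats the chatID-swapping attack from the webinar:

```python
# Hypothetical role/permission table for the chat-transcript example.
PERMISSIONS = {
    "customer": {"read:own_transcripts"},
    "agent":    {"read:own_transcripts"},
    "admin":    {"read:own_transcripts", "read:all_transcripts"},
}

def can_read_transcript(role, requester_id, transcript_owner_id):
    perms = PERMISSIONS.get(role, set())
    if "read:all_transcripts" in perms:
        return True
    # The ownership check: a chatID supplied in the request is never trusted
    # on its own; the server compares the transcript's owner to the session user.
    return "read:own_transcripts" in perms and requester_id == transcript_owner_id

print(can_read_transcript("customer", "u1", "u1"))  # True: own transcript
print(can_read_transcript("customer", "u1", "u2"))  # False: swapped chatID denied
```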

Can you talk about the tools you used to capture and manage the cookies and parameters as you attempted the exploit?
During testing, we have a plethora of tools available. For this particular test, I only used a standard proxy suite. This allows for capturing requests directly from your internet browser, editing and sending the requests, and viewing the responses. Usually, this is all that is needed to exploit an application.

What resources do you recommend for a person that is interested in learning how to perform Pen Testing?
Books, the internet, and more books! A few books that I recommend are The Hacker Playbook, The Web Application Hacker’s Handbook, and Ethical Hacking and Penetration Testing Guide. Take a look at the OWASP top ten, and dig further into each vulnerability.

How did you select this target?
Here at WhiteHat, the team that is responsible for the majority of penetration testing has a list of clients that need business logic assessments. (For a business logic assessment, we test every identifiable functionality of a site.) Each team member independently chooses a site to perform an assessment on. This particular application just happened to be the one I chose that day.

Does the use of SSL/TLS affect the exploitability of this vulnerability?
The use of SSL/TLS does not affect the exploitability. SSL/TLS simply prevents man-in-the-middle attacks, meaning that an attacker can’t relay and possibly alter the communication between a user’s browser and the web server. The proxy that I use breaks the SSL connection between my browser and the server, so that encrypted data can be viewed and modified within the proxy. Requests are then sent over SSL from the proxy.

We hope you found this interesting; more questions and answers will be coming soon!

The Ad Blocking Wars: Ad Blockers vs. Ad-Tech

More and more people find online ads to be annoying, invasive, dangerous, insulting, distracting, and expensive, and have understandably decided to install an ad blocker. In fact, the number of people using ad blockers is skyrocketing: according to PageFair’s 2015 Ad Blocking Report, there are now 198 million active adblock users around the world, with a global growth rate of 41% in the last 12 months. Publishers are visibly feeling the pain and fighting back against ad blockers.

Key to the conflict between ads and ad blockers is the Document Object Model, or DOM. Whenever you view a web page, your browser creates a DOM – a model of the page. This is a programmatic representation of the page that lets JavaScript convert static content into something more dynamic. Whatever is in control of the DOM will control what you see – including whether or not you see ads. Ad blockers are designed to prevent the DOM from including advertisements, while the page is designed to display them. This inherent conflict, this fight for control over the DOM, is where the Ad Blockers vs. Ad-Tech war is waged.

A recent high-profile example of this conflict is Yahoo Mail’s reported attempt to prevent ad-blocking users from accessing their email, which upset a lot of people. This is just one conflict in an inevitable war over who is in control of what you see in your browser DOM: Ad Blockers vs. Ad-Tech (ad networks, advertisers, publishers, etc.).

Robert Hansen and I recently performed a thought experiment to see how this technological escalation plays out, and who eventually wins. I played the part of the Ad Blocker and he played Ad-Tech, each of us responding to the action of the other.

Here is what we came up with…

  1. Ad-Tech: Deliver ads to user’s browser.
  2. User: Decides to install an ad blocker.
  3. Ad Blocker: Creates a black list of fully qualified domain names / URLs that are known to serve ads. Blocks the browser from making connections to those locations.
  4. Ad-Tech: Create new fully qualified domain names / URLs that are not on black lists so their ads are not blocked (e.g. fast flux).
  5. Ad Blocker: Crowd-source the black list to keep it up to date and continue blocking effectively. Allow certain ‘safe’ ads through (e.g. the Acceptable Ads Initiative).
  6. Ad-Tech: Load third-party JavaScript on to the web page, which detects when, and if, ads have been blocked. If ads are blocked, deny the user the content or service they wanted.

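Steps 3 and 4 of the escalation can be sketched as follows: matching a request’s hostname against a crowd-sourced blocklist, including subdomains, and showing why a freshly registered domain slips through. All domain names here are made up, and real blockers use far richer filter rules than exact-name matching:

```python
# Hypothetical blocklist of known ad-serving domains.
BLOCKLIST = {"ads.example.net", "tracker.example.org"}

def is_blocked(hostname):
    # Block the exact name or any subdomain of a listed domain, so
    # "cdn.ads.example.net" is caught by the "ads.example.net" entry.
    labels = hostname.lower().split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

print(is_blocked("cdn.ads.example.net"))    # True: subdomain of a listed name
print(is_blocked("ads-fresh.example.com"))  # False: a fresh domain evades the list
```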
*** Current stage of the Ad Blocking Wars ***

  1. Ad Blocker: Maintain a black list of fully qualified domain names / URL of where ad blocking detection code is hosted and block the browser from making connections to those locations.
  2. Ad-Tech: Relocate the ad or the ad blocking detection code to a first-party website location. Ad blockers cannot block this code without also blocking the web page the user wanted to use (e.g. sponsored ads, like those found on Google SERPs and Facebook).
  3. Ad Blocker: Detect the presence of ads, but do not block them. Instead, make the ads invisible (e.g. visibility: hidden;). Do not send tracking cookies back to the hosting server, to help preserve privacy.
  4. Ad-Tech: Detect when ads are hidden in the DOM. If ads are hidden, deny the user the content or service they wanted.
  5. Ad Blocker: Allow ads to be visible, but move them WAY out of the way where they cannot be seen. Do not send tracking cookies back to hosting server to help preserve privacy.
  6. Ad-Tech: Deliver JavaScript code that detects any unauthorized modification to browser DOM where the ad is to be displayed. If the ad’s DOM is modified, deny the user the content or service they wanted.
  7. Ad Blocker: Detect the presence of first-party ad blocking detection code. Block the browser from loading that code.
  8. Ad-Tech: Move ad blocking detection code to a location that cannot be safely blocked without negatively impacting the user experience (e.g. Amazon AWS).
  9. Ad Blocker: Crawl the DOM looking for ad blocking detection code, on all domains, first and third-party. Remove the JavaScript code or do not let it execute in the browser.
  10. Ad-Tech: Implement minification and polymorphism techniques designed to hinder isolation and removal of ad blocking detection code.
  11. Ad Blocker: Crawl the DOM looking for ad blocking detection code, reverse code obfuscation techniques on all domains, first and third-party. Remove the offending JavaScript code or do not let it execute in the browser.
  12. Ad-Tech: Integrate ad blocking detection code inside of core website JavaScript functionality. If the JavaScript code fails to run, the web page is designed to be unusable.

GAME OVER. Ad-Tech Wins.

The steps above will not necessarily play out exactly in this order as the war escalates. What matters more is how the war always ends. No matter how Robert and I sliced it, Ad-Tech eventually wins. Their control and access over the DOM appears dominant.
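As a stand-in for the detection code in step 4 of the current-stage list, here is a Python sketch that walks markup and flags ad containers that have been hidden. In the real war this logic runs as JavaScript inside the page, and the markup below is invented:

```python
# Flag "ad" elements whose inline style hides them from the user.
from html.parser import HTMLParser

class HiddenAdDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hidden_ads = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        is_ad = "ad" in attrs.get("class", "").split()
        style = attrs.get("style", "").replace(" ", "").lower()
        if is_ad and ("visibility:hidden" in style or "display:none" in style):
            self.hidden_ads.append(attrs.get("id"))

page = ('<div id="ad1" class="ad banner" style="visibility: hidden;"></div>'
        '<div id="ad2" class="ad"></div>')
detector = HiddenAdDetector()
detector.feed(page)
print(detector.hidden_ads)  # ['ad1'] -- an ad was suppressed; deny the content
```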

If you look at it closely, the Ad-Tech industry behaves quite similarly to the malware industry: the techniques and delivery are consistent. Ad-Tech wants to deliver and execute code users don’t want, and it will bypass the user’s security controls to do exactly that! So it really should come as no surprise that malware purveyors heavily utilize online advertising channels to infect millions of users. And if this is the way history plays out, where eventually users and their ad blockers lose, antivirus tools are the only option left, and antivirus is basically a coin flip.

The only recourse left is not technical… the courts.



“Crash Course – PCI DSS 3.1 is here. Are you ready?” Part II

Thanks to all who attended our recent webinar, “Crash Course – PCI DSS 3.1 is here. Are you ready?”. During the stream, there were a number of great questions asked by attendees that didn’t get answered due to the limited time. This blog post is a means to answer many of those questions.

Still have questions? Want to know more about PCI DSS 3.1? Want a copy of our PCI DSS Compliance for Dummies eBook? Visit here to learn more.

Is an onsite pen test a requirement in order to meet the PCI-DSS 3.1 requirement?
PCI-DSS 3.1 does require a pen test to meet requirement 11.3. However, it does not explicitly state that the test needs to be onsite. Allowing access to internal web applications and systems via secure VPN or other means can be leveraged to allow outside access.

Requirements for devices such as iPads are not currently specified (although could be implied throughout requirements)…if using a third party secure app to process payments, what else must be done to harden the iPad, if anything?
When using 3rd party applications or services, you must maintain the same level of security with the partner in question. Sections 12.8.2 and 12.9 speak to keeping written documentation to ensure that there is agreement between both parties about what security measures are in place.

In regards to the 2nd part of the question, no additional measures need to be taken on the devices themselves. The application itself must adapt to be secure on whatever device it is running on, even if it is a bug/flaw in the device itself. It is a matter of what control you have to protect your card data. Unless you have a contract with the device vendor, a flaw on the device would fall out of that realm of control.

What about default system accounts (i.e. root) that cannot be removed – is changing the default password no longer enough?
The removal of test data mentioned in section 6.4 focuses on test data and accounts that are used during development. This is separate from section 2.1 that deals with vendor supplied system and default passwords. For things such as root that cannot be removed, changing the defaults is sufficient.

For a test account in production, we implement an agency with a Generic account, which becomes the base of the users underneath it. Is this considered a test account or do they mean ‘backdoor’ accounts for testing?
This is a pretty specific example, but it sounds like this “generic account” is core to your architecture. This does not seem like a test account. However, if it is possible to log into this “generic account,” then it is exposed to the same risk as any user would be.

Does the Cardholder name or the expire date need to be encrypted if you do not store the strip or the actual card number?
PCI DSS in general does not apply if the PAN is not stored, processed, or transmitted. If you are processing this data but not storing the PAN with the Cardholder Name and Expiration date, then you are not required to protect the latter two. PAN should always be protected.

For EMV compliance, is Chip + PIN the only PCI-compliant method, or is the commonly used Chip only compliant?
Either Chip+Signature or Chip+PIN is currently PCI compliant, so long as the same PCI standards are followed as for the full magnetic stripe. PCI DSS has not taken a stance on Chip+Signature vs. Chip+PIN yet, likely because the latter has not been widely adopted. The US is a laggard in this regard, but is moving in that direction.

If “Company X” accepts responsibility for PCI Compliance, and they use a WiFi that is secured by a password, are they fully compliant?
Using WiFi secured by a password for card transactions does not inherently violate PCI. However, there are several sections of PCI that have strict requirements around implementing and maintaining a wireless network in conjunction with cardholder data. These are discussed more thoroughly in PCI’s section on “wireless” (page 11).

Is social engineering a requirement of a pen test or is the control specific to a network-based attack?
Social engineering is not a requirement of the pen test; the control is specific to network-based attacks. However, social engineering is mentioned in section 8.2.2 in regard to verifying identities before modifying user credentials. Social engineering tests would undoubtedly help in this area, but they aren’t a hard requirement.

Does having a third party host your data and process your cards take away the risk?
We enter credit card information into a 3rd party website and do not keep this information.  Are we still under PCI?
If we use an API with a 3rd party to enter the credit card information, what is it that we need to consider for PCI?
Yes, in this situation you would still need to comply with PCI standards. When using 3rd party applications or services, you must maintain the same level of security with the partner in question. Sections 12.8.2 and 12.9 speak to keeping written agreements to ensure that there is agreement between both parties about what security measures are in place. Using an API with a 3rd party would be considered part of the “processing” of the cardholder data, so any systems that leverage that API or transmit any cardholder data would need to conform to PCI standards even if no storage is occurring on your end.

How do you see PCI compliance changing, if at all, in the next few years as chip and PIN cards become ubiquitous in the U.S.?
PCI compliance will continue to change based on industry standards. Chip+PIN cards are not yet widely adopted in the U.S., which may be why they are not a requirement in PCI DSS 3.1. As the U.S. and other geographic regions adopt Chip+PIN in larger numbers, we expect PCI to adopt it as a requirement to push even harder for full adoption.

Regarding change 6 (Requirements 6.5.1 – 6.5.10 applies to all internal and external applications), does PCI have hard standards regarding when zero day vulns or urgent/critical vulns need to be remediated?
PCI DSS does not have any specific requirements around patching zero day vulnerabilities. However, it does recommend installing patches within 30 days for “critical or at-risk systems.”

Some vulns take longer than 60/90 days to be remediated. How does this impact a PCI review?
PCI DSS does not specify any particular remediation timelines. However, there must at a minimum be a plan to remediate. If you can show that you have a timeline for remediating vulnerabilities that have been open for a longer period of time, you should still meet compliance. If you give a plan to an assessor and do not deliver on that plan, then there will likely be an issue.

Are V-LAN setups a legitimate way to segment PCI devices from the rest of the network?
Yes, network segmentation using properly configured VLANs is a legitimate way to reduce the scope of the cardholder data environment. However, VLANs alone are not automatically sufficient: the segmentation must actually isolate systems that store, process, or transmit cardholder data (for example, with firewall rules and access control lists), and PCI DSS 3.1 requires penetration testing to verify that the segmentation is operational and effective (requirement 11.3.4).

Does the new PCI requirement state that we need to encrypt, not tokenize, storing PAN and CVV?
CVV storage is not permitted at all. Tokenization of PANs is considered to be the best practice, but is not a requirement. The requirement for PAN storage simply reads “PCI DSS requires PAN to be rendered unreadable anywhere it is stored.” This includes several methods of storage: hashing, encryption, tokenization, etc.
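Two of the “rendered unreadable” methods mentioned above, sketched in Python for illustration. The PAN is a standard test number, and in practice PCI DSS expects hashed PANs to use strong cryptography with a salt or key:

```python
# Illustrative only: truncation and one-way hashing of a test PAN.
import hashlib

pan = "4111111111111111"  # standard test number, not real card data

# Truncation: keep at most the first six and last four digits.
truncated = pan[:6] + "*" * (len(pan) - 10) + pan[-4:]
print(truncated)  # 411111******1111

# One-way hash (a real system would add a salt or secret key).
hashed = hashlib.sha256(pan.encode()).hexdigest()
print(hashed)
```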

For those of us using a third party processor for payments, does the requirement to have all elements of all payment pages delivered to the customer’s browser originate only and directly from a PCI DSS validated 3rd party have much impact on many companies?
Yes, the requirement to have an agreement with 3rd party services can have a disruptive impact on many companies.  What happens many times is that no agreement is put in place in regards to security testing ahead of time. Then either the company or their service providers are audited, which leads to a rush to get everything assessed in time. Identifying services and partners that deal with cardholder data ahead of time and putting agreements in place can alleviate a lot of problems.

Any specific requirements for Smartcards, CHIP and signature, CHIP and PIN, or use of mobile phones for payments?
Chip cards do not have any specific requirements in PCI as of today. The data they contain must be treated the same as magnetic stripe data. Use of mobile phones for payments must be validated on a per-app basis, and of course policies should be enforced that do not allow these devices to be used on untrusted networks.

Do PCI rules apply to debit cards?
Yes, PCI applies to credit and debit card data.

How would you track all systems if you are scaling based on demand?
If these are identical systems scaling based on demand, then an inventory of each individual appliance would not be necessary as long as there is a documented process on how the system scales up, scales down, and maintains PCI security standards while doing so.

How do we remain PCI compliant if we use an open source solution as part of our application that has a known vulnerability and (at a given time) has not yet been remediated?
Remaining PCI compliant is an ongoing process. If you take a snapshot of any organization’s open vulnerabilities, you are likely going to find issues. Being able to show that you have processes in place for remediating vulnerabilities, including those introduced by 3rd party solutions, is part of meeting compliance.

How can we secure/protect sensitive data in memory in Java and Mainframe Cobol environments?
Sensitive data existing in volatile memory in clear text is unavoidable: the application must understand what it is storing before it can store it. However, there are several attacks that can expose these values in memory, such as buffer overflow attacks, and those attacks can be prevented. Preventing them eliminates the exposure of sensitive data in memory.

Are the OWASP Top 10 vulnerabilities the only modules that need to be discussed with developers to comply with PCI DSS requirement 6.5?
PCI DSS recommends that developers be able to identify and resolve the vulnerabilities in sections 6.5.1-6.5.10. The OWASP Top 10 covers many vulnerabilities from these sections, but not all of them; additional training would be required to fully cover those sections.

URLs are content

Justifications for the federal government’s controversial mass surveillance programs have involved the distinction between the contents of communications and associated “meta-data” about those communications. Finding out that two people spoke on the phone requires less red tape than listening to the conversations themselves. While “meta-data” doesn’t sound especially ominous, analysts can use graph theory to draw surprisingly powerful inferences from it. A funny illustration of that can be found in Kieran Healy’s blog post, Using Metadata to find Paul Revere.

On November 10, the Third Circuit Court of Appeals made a ruling that web browsing histories are “content” under the Wiretap Act. This implies that the government will need a warrant before collecting such browsing histories. Wired summarized the point the court was making:

A visit to “,” for instance, might count as metadata, as Cato Institute senior fellow Julian Sanchez explains. But a visit to “” clearly reveals something about the visitor’s communications with WebMD, not just the fact of the visit. “It’s not a hard call,” says Sanchez. “The specific URL I visit at or or tells you very specifically what the meaning or purport of my communications are.”

Interestingly, the party accused of violating the Wiretap Act in this case wasn't the federal government. It was Google. The court ruled that Google had collected content in the sense of the Wiretap Act, but that this was permissible, because you can't eavesdrop on your own conversation. I'm not an attorney, but the legal technicalities were well explained in the Washington Post.

The technical technicalities are also interesting.

Basically, a cookie is a secret between your browser and an individual web server. The secret is in the form of a key-value pair, like id=12345. Once a cookie is “set,” it will accompany every request the browser sends to the server that set the cookie. If the server makes sure that each browser it interacts with has a different cookie, it can distinguish individual visitors. That’s what it means to be “logged in” to a website: after proving your identity with a username and password, the server assigns you a “session cookie.” When you return to the site, you see your own profile because the server read your cookie, and your cookie was tied to your account when you logged in.

Cookies can be set in two ways. The browser might request something from a server (HTML, JavaScript, CSS, image, etc.). The server sends back the requested file, and the response contains “Set-Cookie” headers. Alternatively, JavaScript on the page might set cookies using document.cookie. That is, cookies can be set server-side or client-side.
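Server-side, “setting” a cookie is nothing more than adding a header to the response. Here is a minimal sketch using Python's standard http.cookies module; the id=12345 pair echoes the example above:

```python
# Sketch: building a server-side Set-Cookie header with Python's
# standard library. The cookie name and value are illustrative.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["id"] = "12345"      # the key-value pair the browser will store
cookie["id"]["path"] = "/"  # send it back with every request to this site

# output() renders the header line a server would add to its response
print(cookie.output())      # Set-Cookie: id=12345; Path=/
```

Every subsequent request from that browser will then carry a `Cookie: id=12345` header, which is all the server needs to recognize a returning visitor.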

A cookie is nothing more than a place for application developers to store short strings of data. These are some of the common security considerations with cookies:

  • Is inappropriate data being stored in cookies?
  • Can an attacker guess the values of other people’s cookies?
  • Are cookies being sent across unencrypted connections?
  • Should the cookies get a special “HttpOnly” flag that makes them JavaScript-inaccessible, to protect them from potential cross-site scripting attacks?
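The last consideration can be addressed at the moment the cookie is set. A minimal sketch, again using Python's standard http.cookies module, with a made-up session value:

```python
# Sketch: marking a session cookie HttpOnly (hidden from page JavaScript)
# and Secure (only sent over HTTPS). The value "a3f9c2" is illustrative.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "a3f9c2"
cookie["session"]["httponly"] = True  # document.cookie cannot read it
cookie["session"]["secure"] = True    # never sent over plain HTTP

header = cookie.output()
print(header)  # a Set-Cookie line carrying the HttpOnly and Secure flags
```

With HttpOnly set, even a successful cross-site scripting payload cannot read the session value out of `document.cookie`.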

OWASP has a more detailed discussion of cookie security here.

When a user requests a web page and receives an HTML document, that document can instruct their browser to communicate with many different third parties. Should all of those third parties be able to track the user, possibly across multiple websites?

Enough people feel uncomfortable with third-party cookies that browsers include options for disabling them. The case before the Third Circuit Court of Appeals was about Google’s practices in 2012, which involved exploiting a browser bug to set cookies in Apple’s Safari browser, even when users had explicitly disabled third-party cookies. Consequently, Google was able to track individual browsers across multiple websites. At issue was whether the list of URLs the browser visited consisted of “content.” The court ruled that it did.

The technical details of what Google was doing are described here.

Data is often submitted to websites through HTML forms. It’s natural to assume that submitting a form is always intentional, and that a user who submits a form to a server has therefore “consented” to communication with that server. But forms can also be submitted by JavaScript, without any user interaction. That assumption led to the bug that Google exploited.

Safari prevented third-party servers from setting cookies unless a form was submitted to the third party. Google supplied code that made browsers submit forms to its servers without user interaction, and in response to those form submissions, tracking cookies were set. This circumvented the user’s decision to block third-party cookies. Incidentally, automatically submitting a form through JavaScript is also how an attacker would carry out a cross-site request forgery attack, and it is a common trick in cross-site scripting payloads.
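The mechanism behind such a silent submission is easy to sketch. This is not Google's actual code: it is a minimal illustration, generated here as a Python string so the moving parts can be commented, and the domain tracker.example is made up:

```python
# Illustrative only: a page that submits a form to a third party with no
# user interaction. "tracker.example" is a hypothetical tracking domain.
def tracking_page() -> str:
    return """
<html><body>
  <form id="f" method="POST" action="https://tracker.example/submit">
    <input type="hidden" name="v" value="1">
  </form>
  <script>
    // No click required: the form is sent as soon as the page loads,
    // and the third party's response can carry Set-Cookie headers.
    document.getElementById("f").submit();
  </script>
</body></html>
"""

page = tracking_page()
```

To the browser's cookie policy, that scripted `submit()` looked the same as a user deliberately interacting with the third party.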

To recap: Apple and Google had a technical arms race about tracking cookies. There was a lawsuit, and now we’re clear that the government needs a warrant to look at browser histories, because URL paths and query strings are very revealing.

The court suggested that there’s a distinction to be made between the domain and the rest of the URL, but that suggestion was not legally binding.

Saving Systems from SQLi

There is absolutely nothing special about the TalkTalk breach — and that is the problem. If you didn’t already see the news about TalkTalk, a UK-based provider of telephone and broadband services, their customer database was hacked and reportedly 4 million records were pilfered. A major organization’s website is hacked, millions of records containing PII are taken, and the data is held for ransom. Oh, and the alleged perpetrator(s) were teenagers, not professional cyber-criminals. This is the type of story that has been told for years now in every geographic region and industry.

In this particular case, while many important technical details are still coming to light, it appears – according to some reputable media sources – the breach was carried out through SQL Injection (SQLi). SQLi gives a remote attacker the ability to run commands against the backend database, including potentially stealing all the data contained in it. This sounds bad because it is.

Just this year, the Verizon Data Breach Investigations Report found that SQLi was used in 19 percent of web application attacks. And WhiteHat’s own research reveals that 6 percent of websites tested with Sentinel have at least one SQLi vulnerability exposed. So SQLi is very common, and what’s more, it’s been around a long time. In fact, this Christmas marks its 17th birthday.

The more we learn about incidents like TalkTalk, the more we see that these breaches are preventable. We know how to write code that’s resilient to SQLi. We have several ways to identify SQLi in vulnerable code. We know multiple methods for fixing SQLi vulnerabilities and defending against incoming attacks. We, the InfoSec industry, know basically everything about SQLi. Yet for some reason the breaches keep happening, the headlines keep appearing, and millions of people continue to have their personal information exposed. The question then becomes: Why? Why, when we know so much about these attacks, do they keep happening?
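The fix has been known for as long as the attack: keep SQL code and user data separate with parameterized queries. A minimal sketch using Python's built-in sqlite3 module, where the table, column, and input values are all illustrative:

```python
# Sketch of the fix we "already know": parameterized queries.
# Uses an in-memory SQLite database purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every row comes back.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: the driver binds the input strictly as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The vulnerable query hands over the secret; the parameterized one correctly matches no user named `nobody' OR '1'='1`.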

One answer is that those who are best positioned to solve the problem are not motivated to take care of the issue – or perhaps they are just ignorant of things like SQLi and the danger it presents. Certainly the companies and organizations being attacked this way have a reason to protect themselves, since they lose money whenever an attack occurs. The Verizon report estimates that one million records stolen could cost a company nearly $1.2m. For the TalkTalk hack, with potentially four million records stolen (though some reports are now indicating much lower numbers), there could be nearly $2m in damages.

Imagine, millions of dollars in damages and millions of angry customers based on an issue that could have been found and fixed in mere days – if that. It’s time to get serious about Web security, like really serious, and I’m not just talking about corporations, but InfoSec vendors as well.

Like many other vendors, WhiteHat’s vulnerability scanning service can help customers find vulnerabilities such as SQLi before the bad guys exploit them. This lets companies proactively protect their information, since hacking into a website will be significantly more challenging. But even more importantly, organizations need to know that security vendors truly have their back and that their vendor’s interests are aligned with their own. Sentinel Elite’s security guarantee is designed to do exactly that.

If Sentinel Elite fails to find a vulnerability such as SQLi, and exploitation results in a breach like TalkTalk’s, WhiteHat will not only refund the cost of the service, but also cover up to $500k in financial damages. This means that WhiteHat customers can be confident that WhiteHat shares their commitment to not just detecting vulnerabilities, but actively working to prevent breaches.

Will security guarantees prevent all breaches? Probably not, as perfect security is impossible, but security guarantees will make a HUGE difference in making sure vulnerabilities are remediated. All it takes to stop these breaches from happening is doing the things we already know how to do. Doing the things we already know work. Security guarantees motivate all parties involved to do what’s necessary to prevent breaches like TalkTalk.

University Networks

The Atlantic Monthly just published a piece about the computer security challenges facing universities. Those challenges are serious:

“Universities are extremely attractive targets,” explained Richard Bejtlich, the Chief Security Strategist at FireEye, which acquired Mandiant, the firm that investigated the hacking incident at the [New York] Times. “The sort of information they have can be very valuable — including very rich personal information that criminal groups want, and R&D data related to either basic science or grant-related research of great interest to nation state groups. Then, on the infrastructure side they also provide some of the best platforms for attacking other parties—high bandwidth, great servers, some of the best computing infrastructure in the world and a corresponding lack of interest in security.”

The issue is framed in terms of “corporate lockdown” vs. “bring your own device,” with an emphasis on network security:

There are two schools of thought on computer security at institutions of higher education. One thought is that universities are lagging behind companies in their security efforts and need to embrace a more locked-down, corporate approach to security. The other thought holds that companies are, in fact, coming around to the academic institutions’ perspective on security—with employees bringing their own devices to work, and an increasing emphasis on monitoring network activity rather than enforcing security by trying to keep out the outside world.

There’s a nod to application security, and it’s actually a great example of setting policies to incentivize users to set stronger passwords (this is not easy!):

A company, for instance, may mandate a new security system or patch for everyone on its network, but a crucial element of implementing security measures in an academic setting is often providing users with options that will meet their needs, rather than forcing them to acquiesce to changes. For example, Parks said that at the University of Idaho, users are given the choice to set passwords that are at least 15 characters in length and, if they do so, their passwords last 400 days before expiring, whereas shorter passwords must be changed every 90 days (more than 70 percent of users have chosen to create passwords that are at least 15 characters, he added).
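The quoted policy is easy to state as code. A minimal sketch, assuming the only rule is the one described (15 or more characters earns a 400-day lifetime, anything shorter expires in 90 days):

```python
# Sketch of the University of Idaho-style trade-off described above.
# The thresholds (15 characters, 400 vs. 90 days) come from the quoted
# policy; the function itself is our illustration, not their system.
def password_lifetime_days(password: str) -> int:
    """Longer passwords earn a longer lifetime before forced rotation."""
    return 400 if len(password) >= 15 else 90

print(password_lifetime_days("correct horse battery staple"))  # 400
print(password_lifetime_days("hunter2"))                       # 90
```

The design insight is that the incentive lives in the policy itself: users choose longer passwords because it saves them rotation hassle, not because they were forced to.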

Getting hacked is about losing control of one’s data, and the worries in the last two passages have to do with things the university can’t directly control: the security of devices that users connect to the network, and the strength of passwords chosen by users. Things beyond one’s control are generally anxiety-provoking.

Taking a step back, the data that’s of interest to hackers is found in databases, which are frequently queried by web applications. Those web applications might have vulnerabilities, and a university’s application code is under the university’s control. The application code matters in practice. Over the summer, Team GhostShell dumped data stolen from a large number of universities. According to Symantec:

In keeping with its previous modus operandi, it is likely that the group compromised the databases by way of SQL injection attacks and poorly configured PHP scripts; however, this has not been confirmed. Previous data dumps from the 2012 hacks revealed that the team used SQLmap, a popular SQL injection tool used by hackers.

Preventing SQL injection is a solved problem, but bored teenagers exploit it on university websites:

The attack, which has implications for the integrity of USyd’s security infrastructure, compromised the personal details of approximately 5,000 students and did not come to the University’s attention until February 6.

The hacker, who goes by the online alias Abdilo, told Honi that the attack had yielded email addresses and ‘pass combo lists’, though he has no intention of using the information for malicious ends.

“99% of my targets are just shit i decide to mess with because of southpark or other tv shows,” he wrote.

As for Sydney’s breach on February 2, Abdilo claimed that he had very little trouble in accessing the information, rating the university’s database security with a “0” out of 10.

“I was taunting them for awhile, they finally figured it out,” he said.

That’s a nightmare, but solving SQL injection is a lot more straightforward than working on machine learning algorithms to improve intrusion detection systems. Less academic, if you will. The challenges in the Atlantic article are real, but effective measures aren’t always the most exciting. Fixing legacy applications that students use to look at their grades or schedule doctor’s appointments doesn’t have the drama of finding currently-invisible intruders. Faculty have little incentive or process for improving the software development life cycle when their priorities are research productivity and administrative service. In such cases, partnering with a third-party SaaS application security provider can be the most practical way to address these urgent needs.

In a sense, it’s good news that some of the most urgently needed fixes are already within our abilities.

When departments work at cross-purposes

Back in August, we wrote about how self-discipline can be one of the hardest parts of security, as illustrated by Snowden and the NSA. Just recently, Salon published an article about similar issues that plagued the CIA during the Cold War: How to explain the KGB’s amazing success identifying CIA agents in the field?

So many of their agents were being uncovered by the Soviets that they assumed there must’ve been double agents…somewhere. Apparently there was a lot of inward-focused paranoia. The truth was a lot more mundane, but also very actionable — not only for the CIA, but for business in general: two departments (in this case, two agencies) in one organization with incompatible, non-coordinated policies:

So how, exactly, did Totrov reconstitute CIA personnel listings without access to the files themselves or those who put them together?

His approach required a clever combination of clear insight into human behavior, root common sense and strict logic.

In the world of secret intelligence the first rule is that of the ancient Chinese philosopher of war Sun Tzu: To defeat the enemy, you have above all to know yourself. The KGB was a huge bureaucracy within a bureaucracy — the Soviet Union. Any Soviet citizen had an intimate acquaintance with how bureaucracies function. They are fundamentally creatures of habit and, as any cryptanalyst knows, the key to breaking the adversary’s cipher is to find repetitions. The same applies to the parallel universe of human counterintelligence.

The difference between Totrov and his fellow citizens was that whereas others at home and abroad would assume the Soviet Union was somehow unique, he applied his understanding of his own society to a society that on the surface seemed unique, but which, in respect of how government worked, was not in fact that much different: the United States.

From an organizational point of view, what’s fascinating is that the problem came from two different agencies with different missions having incompatible, uncoordinated policies: policies for Foreign Service Officers and policies for CIA officers were different enough to allow the Soviet Union to identify individuals who were theoretically Foreign Service Officers but who did not receive the same treatment as actual Foreign Service Officers. Pay and policy differentials made it easy to separate actual Foreign Service Officers from CIA agents.

Thus one productive line of inquiry quickly yielded evidence: the differences in the way agency officers undercover as diplomats were treated from genuine foreign service officers (FSOs). The pay scale at entry was much higher for a CIA officer; after three to four years abroad a genuine FSO could return home, whereas an agency employee could not; real FSOs had to be recruited between the ages of 21 and 31, whereas this did not apply to an agency officer; only real FSOs had to attend the Institute of Foreign Service for three months before entering the service; naturalized Americans could not become FSOs for at least nine years but they could become agency employees; when agency officers returned home, they did not normally appear in State Department listings; should they appear they were classified as research and planning, research and intelligence, consular or chancery for security affairs; unlike FSOs, agency officers could change their place of work for no apparent reason; their published biographies contained obvious gaps; agency officers could be relocated within the country to which they were posted, FSOs were not; agency officers usually had more than one working foreign language; their cover was usually as a “political” or “consular” official (often vice-consul); internal embassy reorganizations usually left agency personnel untouched, whether their rank, their office space or their telephones; their offices were located in restricted zones within the embassy; they would appear on the streets during the working day using public telephone boxes; they would arrange meetings for the evening, out of town, usually around 7.30 p.m. or 8.00 p.m.; and whereas FSOs had to observe strict rules about attending dinner, agency officers could come and go as they pleased.

You don’t need to infiltrate the CIA if the CIA and the State Department can’t agree on how to treat their staff and what rules to apply!

One way of looking at the problem was that the diplomats had their own goals, and they set policies appropriate to those goals. By necessity, they didn’t actually know the overall goals of their own embassies. It’s not unusual for different subdivisions of an organization to have conflicting goals. The question is how to manage those tensions. What was the point of requiring a nine-year wait after naturalization before someone could work as a foreign service officer? A different executive agency, with higher needs for the integrity of its agents, didn’t consider the wait necessary. Eliminating the wait would’ve eliminated an obvious difference between agents and normal diplomats.

But are we sure the wait wasn’t necessary? It creates a large obstacle for our adversaries: they need to think 9 years ahead if they want to supply their own mole instead of turning one of our diplomats. On the other hand, it created too large an obstacle for ourselves.

Is it more important to defend against foreign agents or to create high-quality cover for our own agents? Two agencies disagreed and pursued their own interests without resolving the disagreement. Either policy could have been effective; having both policies was an information give-away.

How can this sort of issue arise for private businesses? Too often, individual departments can set policies that come into conflict with one another. For instance, an IT department may, with perfectly reasonable justification, decide to standardize on a single browser. A second department decides to develop internal tools that rely on browser add-ons like ActiveX or Java applets. When a vulnerability is discovered in those add-ons to the standard browser, the organization finds it is now dependent on an inherently insecure tool. Neither department is responsible for the situation; both acted in good faith within their own arena. The problem was that no one was responsible for determining how to set policies for the good of the organization as a whole.

Security policies need to be set to take all the organization’s goals into consideration; to do that, someone has to be looking at the whole picture.