Hammering at speed limits

Slate has a well-written article explaining an interesting new vulnerability called “Rowhammer.” The white paper is here, and the code repository is here. Here’s the abstract describing the basic idea:

As DRAM has been scaling to increase in density, the cells are less isolated from each other. Recent studies have found that repeated accesses to DRAM rows can cause random bit flips in an adjacent row, resulting in the so called Rowhammer bug. This bug has already been exploited to gain root privileges and to evade a sandbox, showing the severity of faulting single bits for security. However, these exploits are written in native code and use special instructions to flush data from the cache.

In this paper we present Rowhammer.js, a JavaScript-based implementation of the Rowhammer attack. Our attack uses an eviction strategy found by a generic algorithm that improves the eviction rate compared to existing eviction strategies from 95.2% to 99.99%. Rowhammer.js is the first remote software-induced hardware-fault attack. In contrast to other fault attacks it does not require physical access to the machine, or the execution of native code or access to special instructions. As JavaScript-based fault attacks can be performed on millions of users stealthily and simultaneously, we propose countermeasures that can be implemented immediately.

What’s interesting to me is how much computers had to advance to make this situation possible. First, RAM had to be miniaturized enough that adjacent memory cells could interfere with one another. And the RAM’s refresh rate had to be kept low, because refreshing memory uses up time and power.

Lots more had to happen before this was remotely exploitable via JavaScript. JavaScript wasn’t designed to process binary data quickly, or really to deal with binary data at all. It’s a high-level language without pointers or direct memory access. Semantically, JavaScript arrays aren’t contiguous blocks of memory like C arrays. They’re special objects whose keys happen to be integers. The third and fourth entries of an array aren’t necessarily next to each other in memory, and each item in an array might have a different type.

Typed Arrays were introduced for performance reasons, to facilitate working with binary data. They were especially necessary for allowing fast 3D graphics with WebGL. Typed Arrays do occupy adjacent memory cells, unlike normal arrays.
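
To make the distinction concrete, here is a small TypeScript sketch (a general illustration of JavaScript semantics, not code from the paper). A plain Array can hold anything and makes no promises about layout; a typed array is a fixed-type view over one contiguous buffer of raw memory:

```typescript
// A plain Array may hold mixed types and is not guaranteed to be a
// contiguous block of memory:
const mixed: (number | string | object)[] = [1, "two", { three: 3 }];

// A typed array is a view over a contiguous ArrayBuffer of raw bytes,
// which makes fast, predictable, repeated memory accesses possible:
const buffer = new ArrayBuffer(1024 * 1024); // 1 MiB of raw memory
const view = new Uint32Array(buffer);        // fixed-type, contiguous

// Sequential writes touch adjacent memory locations, unlike a plain Array:
for (let i = 0; i < view.length; i++) {
  view[i] = 0xffffffff;
}
```

That contiguity and speed is exactly what a Rowhammer-style attack needs: the ability to hit specific memory rows over and over, quickly.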

In other words, computers had to get faster, websites had to get fancier, and performance expectations had to go up.

Preventing the attack would kill performance at a hardware level: the RAM would have to spend up to 35% of its time refreshing itself. Short of hardware changes, the authors suggest that browser vendors implement tests for the vulnerability, then throttle JavaScript performance if the system is found to be vulnerable. Or JavaScript could simply require the user to actively enable it on each website they visit.

It seems unlikely that browser vendors are going to voluntarily undo all the work they’ve done, but they could. It would be difficult to explain to the “average person” that their computer needs to slow down because of some “Rowhammer” business. Rowhammer exists because both hardware and software vendors listened to consumer demands for higher performance. We could slow down, but it’s psychologically intolerable. The known technical solution is emotionally unacceptable to real people, so more advanced technical solutions will be attempted.

In a very similar way, we could make a huge dent in pollution and greenhouse gas emissions if we slowed down our cars and ships by half. For the vast majority of human history, we got by while nobody could travel anywhere close to 40 mph, let alone 80 mph.

When you really have information that people want, the safest thing is probably just to use typewriters. Russia already does, and Germany has considered it.

The solutions to security problems have to be technically correct and psychologically acceptable, and it’s the second part that’s hard.

Security Pictures

Security pictures are used in a multitude of web applications to add an extra step to the login process. But are these security pictures being used properly? Could they actually aid hackers? Such questions passed through my mind when testing an application’s login process that relied on security pictures to provide an extra layer of security.

I was performing a business logic assessment for an insurance application that used security pictures as part of its login process, but something seemed off. The first step was to enter your username; if the username was found in the database, you would be presented with your security picture (e.g. a dog, cat, or iguana). If the username was not in the database, a message saying that you haven’t set up your security picture yet was displayed. Besides the clear potential for a brute force attack on usernames, there was another vulnerability hiding: you could view other users’ security pictures just by guessing usernames in the first step.
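
To make the flaw concrete, here is a minimal sketch of the logic (TypeScript, with a hypothetical in-memory user store) showing why the first step doubles as a username oracle:

```typescript
interface User {
  securityPictureUrl: string;
}

// Hypothetical user store; a real application would query a database.
const users = new Map<string, User>([
  ["alice", { securityPictureUrl: "/pictures/iguana.png" }],
]);

// Step 1 of the vulnerable flow: the response itself leaks information.
function securityPictureStep(username: string): string {
  const user = users.get(username);
  if (user === undefined) {
    // Tells an attacker this username does NOT exist...
    return "You haven't set up your security picture yet.";
  }
  // ...while this confirms it DOES exist, and hands over the picture too.
  return user.securityPictureUrl;
}
```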

Before I could delve into how I should classify the possible vulnerability in my assessment, I had to do some quick research on a couple of topics: what are security pictures used for? And how do other applications use them effectively?

I always wondered what extra security the picture added. How could a picture of an iguana I chose protect me from danger at all, or add an extra layer of security when I log in? Security pictures are mainly used to protect users from phishing attacks. For example, if an attacker tries to reproduce a banking login screen, a target user who is accustomed to seeing an iguana picture before entering his or her password would pause for a moment, then notice that something is not right since Iggy isn’t there anymore. The absence of the security picture produces that mental pause, and in most cases the user will not enter their password.

After finding out the true purpose of security pictures, I had to see how other applications use them in a less broken way. So I visited my bank’s website and entered my username, but instead of having my security picture displayed right away, I was asked to answer my security question. Only once the secret answer was entered would my security picture be displayed above the password input field. This way of using a security picture was secure.

What seemed off in the beginning was the fact that because attackers can harvest users’ security pictures with a brute force attack, they can go a step further and use the security pictures of targeted users to create an even stronger phishing attack. This enhanced phishing page would reassure victims that they are on the right website, because their security picture is there as usual.

Now that it was clear the finding was indeed a vulnerability, I had to think about how to classify it and what score to award. I classified it as Abuse of Functionality, since WhiteHat Security defines Abuse of Functionality as:

“Abuse of Functionality is an attack technique that uses a web site’s own features and functionality to attack itself or others. Abuse of Functionality can be described as the abuse of an application’s intended functionality to perform an undesirable outcome. These attacks have varied results such as consuming resources, circumventing access controls, or leaking information. The potential and level of abuse will vary from web site to web site and application to application. Abuse of functionality attacks are often a combination of other attack types and/or utilize other attack vectors.”

In this case an attacker could use the application’s own authentication functionality to attack other users, by combining the results of a brute force attack with the security pictures to create a powerful phishing attack. For scoring, I chose to use Impact and Likelihood, each rated low, medium, or high. Impact measures the potential damage a vulnerability inflicts; Likelihood estimates how likely the vulnerability is to be exploited. In terms of Likelihood, I would rate this a medium, because setting up a phishing attack is very time consuming: you have to perform a brute force attack first to obtain valid usernames, then pick specific victims from among them. As for Impact, I would rate this high, because once the phishing attack is sent, the victim would most likely lose his or her credentials.

Security pictures can indeed add an extra layer of security to your application’s login process. However, put on your black hat for a moment and ask: how could a hacker use your own security measures against the application? As shown here, sometimes the medicine can be worse than the disease.

Why is Passive Mixed Content so serious?

One of the most important tools in web security is Transport Layer Security (TLS). It not only protects sensitive information during transit, but also verifies that the content has not been modified. The user can be confident that content delivered via HTTPS is exactly what the website sent. The user can exchange sensitive information with the website, secure in the knowledge that it won’t be altered or intercepted. However, this increase in security comes with an increased overhead cost. It is tempting to be concerned only about encryption and ignore the necessity to validate on both ends, but any resources that are called on a secure page should be similarly protected, not just the ones containing secret content.

Most web security professionals agree that active content — JavaScript, Flash, etc. — should only be sourced in via HTTPS. After all, an attacker can use a Man-in-the-Middle attack to replace non-secure content on the fly. This is clearly a security risk. Active content has access to the content of the Document Object Model (DOM), and the means to exfiltrate that data. Any attack that is possible with Cross-Site Scripting is also achievable using active mixed content.

The controversy begins when the discussion turns to passive content — images, videos, etc. It may be difficult to imagine how an attacker could inflict anything worse than mild annoyance by replacing such content. Two attack scenarios are commonly cited:

* An unsophisticated attacker could, perhaps, damage the reputation of a company by including offensive or illegal content. However, the attack would only be effective while the attacker maintains a privileged position on the network. If the user moves to a different Wi-Fi network, the attacker is out of the loop. It would be easy to demonstrate to the press or law enforcement that the company is not responsible, so the impact would be negligible.
* If a particular browser’s image-parsing process is vulnerable, a highly sophisticated attacker could deliver a specially crafted, malformed file using a passive mixed content vulnerability. In this case, the delivery method is incidental, and the vulnerability lies with the client rather than the server. This attack requires advanced intelligence about the specific target’s browser, and an unpatched or unreported vulnerability in that specific browser, so the threat is negligible.
However, there is an attack scenario that requires little technical sophistication, yet may result in a complete account takeover. First, assume the attacker has established a privileged position in the network by spoofing a public Wi-Fi access point. The attacker can now return any response to non-encrypted requests coming over the air. From this position, the attacker can return a 302 “Found” temporary redirect to a non-encrypted request for the passive content on the target site. The Location header of this redirect points to a resource under the attacker’s control, configured to respond with a 401 “Unauthorized” response containing a WWW-Authenticate header with a value of Basic realm="Please confirm your credentials." The user’s browser will halt loading the page and display an authentication prompt. Some percentage of users will inevitably enter their credentials, which will be submitted directly to the attacker. Even worse, this attack can be automated and generalized to such a degree that an attacker could use commodity hardware to set up a fake Wi-Fi hotspot in a public place and harvest passwords for any number of sites.
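
To make the mechanics concrete, here is a minimal sketch of the attacker-controlled endpoint in TypeScript on Node (the hostname, port, and the MITM redirect itself are all assumed to exist elsewhere; this only shows the credential-harvesting half):

```typescript
import { createServer } from "http";

createServer((req, res) => {
  if (!req.headers.authorization) {
    // No credentials yet: send the Basic-auth challenge. The browser
    // renders a native login prompt in response to this.
    res.writeHead(401, {
      "WWW-Authenticate": 'Basic realm="Please confirm your credentials."',
    });
    res.end();
    return;
  }
  // Credentials arrive base64-encoded in the Authorization header.
  const encoded = req.headers.authorization.replace(/^Basic /, "");
  console.log(Buffer.from(encoded, "base64").toString()); // "user:password"
  res.writeHead(200);
  res.end();
}).listen(8080);
```

Because the prompt is drawn by the browser itself rather than by the page, it can look deceptively official, even though it was triggered by a plain image request.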

Protecting against this attack is relatively simple. Users should be very suspicious of any unexpected login prompt, especially one that doesn’t look like part of the website. Developers should source in all resources using HTTPS on every secure page.

Web Security for the Tech Impaired: What is two factor authentication?

You may have heard the terms ‘two-factor’ or ‘multi-factor’ authentication. Even if you haven’t, chances are you’ve experienced it without knowing. The interesting thing is that two-factor authentication is one of the best ways to protect your accounts from being hacked.

So what exactly is it? Well, traditional authentication asks you for your username and password. This is an example of a system that relies on one factor — something you KNOW — as the sole authentication method for your account. If another person knows your username and password, they can also log in to your account. This is how many account compromises happen: a hacker simply runs through possible passwords for the accounts they want to hack, and eventually guesses the correct password through what is known as a ‘brute force’ attack.

In two-factor authentication, we take the concept of security a step further. Instead of relying only on something that you KNOW, we also rely on something that you HAVE in your possession. You may have already been doing this without realizing it — have you logged into your bank or credit card site only to see a message like ‘This is the first time you have logged in from this machine; we have sent an authentication code to the cell phone number on file for your account — please enter that code along with your password’, or words to that effect? That is an example of a site using two-factor authentication. By sending a text to the cell phone number they have on file to confirm that you are who you say you are, they are relying on not only something you KNOW but also something you HAVE. If an attacker were to steal or guess your username and password, they still could not successfully log in to your account, because the code goes to your phone. Instead, you would receive a text out of the blue for a login you didn’t attempt, and at that moment you would know someone is probably trying to get into your account.

This system works with anything you have. Text messages are the primary means of two-factor authentication, since most people have easy access to a cell phone and it’s easy to read the code and enter it on the site. The system works just as well with a phone call that provides you with a code, or with an email. Anything that you HAVE will work for two-factor authentication. You may notice that most sites will only ask you for this information once; typically a site will ask the very first time you log in from a given device (be it mobile, desktop, or tablet). After that, the site will remember which devices you’ve signed in with and allow those devices to log in without requiring the second factor, the auth code. If you typically log in from your home computer, and then remember you need to check your balance at work, the site will ask you to log in with two-factor authentication because it does not recognize that device. The thinking is that a hacker is unlikely to hack into your account by breaking into your house and using your own computer to log in.
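
As an aside for the technically curious: authenticator apps generate these codes with an algorithm called TOTP (RFC 6238), where the site and your phone share a secret and independently compute the same short-lived code. Here is a minimal, illustrative sketch in TypeScript for Node; the shared secret shown is hypothetical:

```typescript
import { createHmac } from "crypto";

// HOTP (RFC 4226): HMAC the counter, then dynamically truncate to 6 digits.
function hotp(secret: Buffer, counter: bigint): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation
  const code =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return (code % 1_000_000).toString().padStart(6, "0");
}

// TOTP (RFC 6238): the counter is just the current 30-second time step.
function totp(secret: Buffer, stepSeconds = 30): string {
  return hotp(secret, BigInt(Math.floor(Date.now() / 1000 / stepSeconds)));
}

const sharedSecret = Buffer.from("12345678901234567890"); // hypothetical
console.log(totp(sharedSecret)); // six digits that change every 30 seconds
```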

Now you may be saying ‘That sounds great! Where do I sign up?’. Unfortunately, not all systems support two-factor authentication, though the industry is slowly progressing that way. Sometimes it isn’t enabled by default but is an option in a ‘settings’ or ‘account’ menu on the site. To see which common sites support two-factor auth, https://twofactorauth.org/ is a great resource. I highly recommend turning this on for any account that supports it. It’s typically extremely quick and easy to do, and it will make your accounts far more secure than ever before.

#HackerKast 43: Ashley Madison Hacked, Firefox Tracking Services and Cookies, HTML5 Malware Evasion Techniques, Miami Cops Use Waze

Hey Everybody! Welcome to another HackerKast. Let’s get right to it!

We had to start off with the big story of the week, which was that Ashley Madison got hacked. For those of you fortunate enough not to know what Ashley Madison is, it is a dating website dedicated to members who are in relationships and looking to have affairs. This breach was a twist on most other breaches, as the hacker is threatening to release all of the stolen data unless the website shuts its doors for good. Ashley Madison’s upcoming IPO could also be messed up now that the data of 7 million users has been stolen and is no longer private. Our friend Troy Hunt also posted about a business logic flaw that allowed you to harvest registered email addresses via the forgot-password functionality, without relying on the leaked breach data at all.

Next, in browser news, Robert was looking at an about:config setting in Mozilla Firefox that turns off tracking services and cookies. Studies that looked into this measured that, with the flag enabled, page load time went down by 44% and bandwidth usage was down 30%. The flag is a small win for privacy, though it still leaks user info to Google, if not to a lot of other sites. It isn’t a perfect option, since plenty of browser add-ons do a better job, but this one is baked into Firefox. It says a lot that people’s bandwidth and load times improved so drastically.

In related news, an Apple iAd executive left Apple and made some noise on his way out. He seemed frustrated that Apple sits on tons of user data but, because it respects some level of privacy, is not living up to its full advertising potential. That part is good news for those of us who care about privacy. Where it gets worse is that he left for a company called Drawbridge, which is focused on deanonymizing users based on piles of data: shared Wi-Fi networks, unique machine IDs, etc.

I liked this next story since it is a creative business logic issue, and those are always my favorites. This one involved the mobile GPS directions app Waze. Waze uses crowdsourcing to provide real-time traffic data, helping reroute users around jams more accurately and quickly. The other major use of Waze is reporting cops and speed traps on the road. It turns out that cops have caught wind of this, and I’m assuming it has hit their fine-based bottom line, because hundreds of cops in Miami downloaded the app and started submitting fake cop reports. This makes the information much less reliable for users, and the cops can probably catch more people. We discussed the ethics here and whether Google (which owns Waze) would want to go toe-to-toe on this issue.

Next up, we touched on this year’s State of Application Security Report, which the SANS Institute puts out every year. We didn’t go through the whole thing due to time constraints, but it is full of interesting data as usual. The report is broken into the two major groups that were studied: Builders and Defenders. Among the defenders’ major pain points were asset management (such as finding every Internet-facing web application, which is always a challenge) and the risk of modifying production code, and potentially breaking the app, while trying to fix a security issue. The builders, on the other hand, were basically the inverse: focused on delivering features and time to market. Builders also feel they are lacking knowledge in security, which has been a known issue for a long time.

Last up was some straight-up web app research, which is always a lot of fun. Research recently came out (and has since been expanded on) proving that drive-by download malware can avoid detection by using some common HTML5 APIs. One popular technique for getting malware onto a user’s machine is to chunk up the malware during download and reassemble it all locally later. A lot of malware detection has caught up to this, and it gets detected. The same malware that would be caught using traditional methods went undetected when using some combination of HTML5 techniques such as localStorage, Web Workers, etc. Great research! Looking forward to more follow-ups on this.
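
To give a flavor of the technique (a hedged sketch of the general idea, not the researchers’ actual code), splitting a payload across localStorage and only reassembling it at the last moment looks something like this in browser-side TypeScript:

```typescript
// Store each innocuous-looking piece separately; no single chunk
// resembles a complete, recognizable payload.
function storeChunk(index: number, chunk: string): void {
  localStorage.setItem(`chunk-${index}`, chunk);
}

// Only at reassembly time does the full payload exist in one place,
// after any network-level or download-level inspection has finished.
function reassemble(totalChunks: number): string {
  let payload = "";
  for (let i = 0; i < totalChunks; i++) {
    payload += localStorage.getItem(`chunk-${i}`) ?? "";
  }
  return payload;
}
```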

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay


Ashley Madison Hacked
Your affairs were never discreet – Ashley Madison always disclosed customer identities
Firefox’s tracking cookie blacklist reduces website load time by 44%
Former iAd exec leaves Apple, suggests company platform is held back by user data privacy policy
Miami Cops Actively Working to Sabotage Waze with Fake Submissions
2015 State of Application Security: Closing the Gap
Researchers prove HTML5 can be used to hide malware

Notable stories this week that didn’t make the cut:

Self Driving Cars Could Destroy Fine Based Economy
Hackers Remotely Kill a Jeep on the Highway—With Me in It
Redstar OS Watermarking
The Death of the SIM card is Nigh
How I got XSS’d by my ad network
FTC Takes Action Against LifeLock for Alleged Violations of 2010 Order
OpenSSH Keyboard Interactive Authentication Brute Force Vuln
NSA releases new security tool

Bayes’ Theorem and What We Do

Back in 2012, The Atlantic Monthly published a behind-the-scenes article about Google Maps. This is the passage that struck me:

The best way to figure out if you can make a left turn at a particular intersection is still to have a person look at a sign — whether that’s a human driving or a human looking at an image generated by a Street View car.

There is an analogy to be made to one of Google’s other impressive projects: Google Translate. What looks like machine intelligence is actually only a recombination of human intelligence. Translate relies on massive bodies of text that have been translated into different languages by humans; it then is able to extract words and phrases that match up. The algorithms are not actually that complex, but they work because of the massive amounts of data (i.e. human intelligence) that go into the task on the front end.

Google Maps has executed a similar operation. Humans are coding every bit of the logic of the road onto a representation of the world so that computers can simply duplicate (infinitely, instantly) the judgments that a person already made…

…I came away convinced that the geographic data Google has assembled is not likely to be matched by any other company. The secret to this success isn’t, as you might expect, Google’s facility with data, but rather its willingness to commit humans to combining and cleaning data about the physical world. Google’s map offerings build in the human intelligence on the front end, and that’s what allows its computers to tell you the best route from San Francisco to Boston.

Even for Google, massive and sophisticated automation is only a first step. Human judgment is also an unavoidable part of documenting web application vulnerabilities. The reason isn’t necessarily obvious: Bayes’ theorem.

\[
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
\]

“P(A|B)” means “the probability of A, given B.”

Wikipedia explains the concept in terms of drug testing:

Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability he or she is a user?

\[
P(\text{user} \mid +) = \frac{P(+ \mid \text{user})\,P(\text{user})}{P(+ \mid \text{user})\,P(\text{user}) + P(+ \mid \text{non-user})\,P(\text{non-user})} = \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \approx 0.332
\]

The reason the correct answer of 33% is counter-intuitive is called base rate neglect. If you have a very accurate test for something that happens infrequently, that test will usually report false positives. That’s worth repeating: if you’re looking for a needle in a haystack, the best possible tests will usually report false positives.
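
A quick sanity check of that arithmetic, as a throwaway sketch in TypeScript:

```typescript
const sensitivity = 0.99;   // P(positive | user)
const falsePositive = 0.01; // P(positive | non-user) = 1 - specificity
const baseRate = 0.005;     // P(user)

// Total probability of a positive test, then Bayes' theorem:
const pPositive = sensitivity * baseRate + falsePositive * (1 - baseRate);
const pUserGivenPositive = (sensitivity * baseRate) / pPositive;

console.log(pUserGivenPositive.toFixed(3)); // "0.332", i.e. about 33%
```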

Filtering out false positives is an important part of our service, over and above the scanning technology itself. Because most URLs aren’t vulnerable to most things, we see a lot of false positives. They’re the price of automated scanning.

We also see a lot of duplicates. A website might have a search box on every page that’s vulnerable to cross-site scripting, but it’s not helpful to get a security report that’s more or less a list of the pages on your website. It is helpful to be told there’s a problem with the search box. Once.

Machine learning is getting better every day, but we don’t have time to wait for computers to read and understand websites as well as humans. Here and now, we need to find vulnerabilities, and scanners can cover a site more efficiently than a human. Unfortunately, false positives are an unavoidable part of that.

Someone has to sort them out. When everything is working right, that part of our service is invisible, just like the people hand-correcting Google Maps.

#HackerKast 42: Hacking Team, LastPass Clickjacking, Cowboy Adventure Game Distributes Malware, Droopescan, WhiteHat Acceleration Services

Welcome to the Episode in which we describe the answer to the Ultimate Question of Life, the Universe, and Everything. Maybe we’ll just stick to security but we’ve now done 42 of these things.

Kicking off this week with a gigantic combined story about Hacking Team, the story that keeps on giving. We touched on this breach last week, but as people have been plowing through the 400GB of leaked data, more and more 0-days are being discovered. It seems no operating system or browser is safe, and Flash/Java felt the love in full force. At least 3 Flash 0-days have made their way into popular exploit kits, so this is fully weaponized and being used in the wild. This, along with Facebook CISO Alex Stamos’ public statement against Flash, has proved to be a catalyst for both Firefox and Chrome blocking Flash BY DEFAULT. This is amazing. Huge step in the right direction, and we are very interested to see where it goes.

Another crazy revelation from combing through the breach data: the guys over at Hacking Team were joking around about assassinating ACLU technologist Chris Soghoian. Chris does a lot of work and public speaking against foreign governments weaponizing exploits, which was apparently causing Hacking Team pain. It is a crazy world we live in when our industry is costing enough people enough money that this kind of conversation about assassination is bound to happen.

Next, some pure awesome web app hacking beauty. This week we saw an attack against the LastPass password management browser plugin that utilized clickjacking to steal stored passwords. We love clickjacking and browser security, so this story had us all drooling. Before we dove in, props to the LastPass security team for being super responsive any time a security issue is brought to their attention. The PoC in this case involved Tumblr in an iframe. The attacker fools the user into clicking through the different LastPass prompts, which causes the user’s password to be auto-filled into a textbox and then sent to the attacker. Video of the PoC below:
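
Worth noting: the PoC only works because the framed site allows itself to be embedded at all. A standard clickjacking mitigation is for a site to refuse framing outright; here is a minimal Node/TypeScript sketch (hypothetical server) of the relevant response headers:

```typescript
import { createServer } from "http";

createServer((req, res) => {
  res.writeHead(200, {
    // Both headers tell browsers never to render this page in a frame:
    "X-Frame-Options": "DENY",                            // legacy header
    "Content-Security-Policy": "frame-ancestors 'none'",  // modern equivalent
    "Content-Type": "text/html",
  });
  res.end("<p>This page refuses to be framed.</p>");
}).listen(8080);
```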

Now if I had a dime for every time I downloaded a Cowboy Adventure game and it caused me problems… Well, at least a million Android users would have 10 cents. This super popular game, distributed via the Google Play store, decided to become malicious and start installing malware onto its users’ phones. These mobile apps and devices have tons of permissions, which makes this type of malware particularly dangerous as a behind-the-firewall launching point for bigger attacks. Usually we see this sort of thing just used to generate ad-fraud money for the attacker.

Next, we touched on a new CMS scanning tool called Droopescan, which is geared toward Drupal sites. Think WPScan or CMSmap-type tools, but for Drupal. It’s a wildly important tool to have because, if you’re a regular listener to HackerKast, you’ll know that CMS plugins and old versions are full of holes and have a huge target on their backs. These things are also very easy to find by scanning the entire Internet.

Lastly, we did some shameless self-promotion of a project I’ve been working on under my rock for the past few months: WhiteHat Acceleration Services. When we look at our stats report year after year, the time to fix vulnerabilities is astronomical and isn’t getting much better. This year our customers averaged 193 days to fix any given vulnerability that we identified. We’ve now set out to help with that problem. WhiteHat has been finding vulnerabilities in websites for over 10 years. Today we start helping you FIX them too.

This is the first of 6 new “Acceleration Services” offerings I’ve been tasked with launching this year. Check it out.

Thanks for listening! Check us out on iTunes if you want an audio only version to your phone. Subscribe Here
Join the conversation over on Twitter at #HackerKast
or write us directly @jeremiahg, @rsnake, @mattjay


Adobe Flash Zero Day CVE 2015
Third Hacking Team Zero Day Found
Pawn Storm uses Exploit for Unpatched Java Flaw
Mozilla Blocks All Versions of Adobe Flash Until Publicly Known Security Vulns are Fixed
Google and Mozilla Pull Adobe Flash
Hacking Team Employee Jokes about Assassinating ACLU Technologist Chris Soghoian
Stealing Lastpass Passwords With Clickjacking
Cowboy Adventure Game Malware Affecting 1MM Android Users
WhiteHat Acceleration Services

Notable stories this week that didn’t make the cut:

Google accidentally reveals data on ‘right to be forgotten’ requests
Michael DeKort’s Jumbawumba
University Rolls Out Adblock Plus, Saves 40 Percent Network Bandwidth
XSSYA 2.0 Released
OPM Hack of Fingerprints breaks Biometrics
Federal Judge overturns Arizona’s Nude Photo Law
Top Five Takeaways from Today’s Hearings on Encryption
XKeyscore Exposé Reaffirms the Need to Rid the Web of Tracking Cookies
Land Rover recalls 65,000 cars because of software bug that could lead to theft

Lowering Defenses to Increase Security

Starting at WhiteHat was a career change for me. I wasn’t sure exactly what to expect, but I knew there was a lot of unfamiliar terminology: “MD5 signature”, “base64”, “cross-site request forgery”, “‘Referer’ header”, to name a few.

When I started testing real websites, I was surprised that a lot of what I was doing looked like this:


Everything was definitely not that simple…but a lot of things were. How could I be correcting the work of people who knew so much more about computers than I did? I’d talk to customers on the phone, and they already knew how to fix the vulnerabilities. In some cases, they were already aware of them! Periodically, WhiteHat publishes statistics about how long it takes vulnerabilities to get fixed in the real world, and how many known vulnerabilities are ever fixed. The most recent report is available here, with an introduction by Jeremiah Grossman here.

SQL injection was first publicly described in 1998, and we’re still seeing it after 17 years. Somehow, the social aspects of the problem are more difficult than the technical aspects. This has been true since the very beginning of modern computing:

Apart from some less-than-ideal inherent characteristics of the Enigma, in practice the system’s greatest weakness was the way that it was used. The basic principle of this sort of enciphering machine is that it should deliver a very long stream of transformations that are difficult for a cryptanalyst to predict. Some of the instructions to operators, however, and their sloppy habits, had the opposite effect. Without these operating shortcomings, Enigma would, almost certainly, not have been broken.

Speaking of the beginning of computing, The Psychology of Computer Programming (1971) has the following passage about John von Neumann:

John von Neumann himself was perhaps the first programmer to recognize his inadequacies with respect to examination of his own work. Those who knew him have said that he was constantly asserting what a lousy programmer he was, and that he incessantly pushed his programs on other people to read for errors and clumsiness. Yet the common image today of von Neumann is of the unparalleled computer genius: flawless in his every action. And indeed, there can be no doubt of von Neumann’s genius. His very ability to realize his human limitations put him head and shoulders above the average programmer today.

Average people can be trained to accept their humanity – their inability to function like a machine – to value it and work with others so as to keep it under the kind of control needed if programming is to be successful.

The passage above is from a section of the book called “Egoless Programming.” It goes on to describe an anecdote in which a programmer named Bill is having a bad day, and calls Marilyn over to look at his code. After she finds 17 bugs in 13 statements, he responds by seeing the humor in the situation and telling everyone about it. In turn, Marilyn reasons that if she could spot 17 bugs, there must be more, and asks others to review the code; they spot 3 more. The code was put into production and ran without problems for 9 years.

The author of the book, Gerald Weinberg, made another interesting observation:

Now, what cognitive dissonance has to do with our programming conflict should be vividly clear. A programmer who truly sees his program as an extension of his own ego is not going to be trying to find all the errors in that program. On the contrary, he is going to be trying to prove that the program is correct, even if this means the oversight of errors which are monstrous to another eye. All programmers are familiar with the symptoms of this dissonance resolution — in others, of course…And let there be no mistake about it: the human eye has an almost infinite capacity for not seeing what it does not want to see. People who have specialized in debugging other people’s programs can verify this assertion with literally thousands of cases. Programmers, if left to their own devices, will ignore the most glaring errors in their output—errors that anyone else can see in an instant. Thus, if we are going to attack the problem of making good programs, and if we are going to start at the fundamental level of meeting specifications, we are going to have to do something about the perfectly normal human tendency to believe that one’s “own” program is correct in the face of hard physical evidence to the contrary.

What is to be done about the problem of the ego in programming? A typical text on management would say that the manager should exhort all his programmers to redouble their efforts to find their errors. Perhaps he would go around asking them to show him their errors each day. This method, however, would fail by going precisely in the opposite direction to what our knowledge of psychology would dictate, for the average person is going to view such an investigation as a personal trial. Besides, not all programmers have managers — or managers who would know an error even if they saw one outlined in red.

No, the solution to this problem lies not in a direct attack — for attack can only lead to defense, and defense is what we are trying to eliminate. Instead, the problem of the ego must be overcome by a restructuring of the social environment and, through this means, a restructuring of the value system of the programmers in that environment.

By the nature of what we do, WhiteHat does try to find mistakes in other people’s work. It’s not personal, and those mistakes are rarely unique! In the big picture, what brought us computers was the scientific method, that is, the willingness to learn from mistakes.

#HackerKast 41: HackingTeam, Adobe Flash Bug, UK Government’s Possible Encryption Ban

Hello everyone! Welcome to Week 41! Hope everyone enjoyed the holiday last week. Let’s get right to it:

First off, we talked about HackingTeam, an Italian surveillance firm that sells its tools to governments to spy on citizens. We don’t know much about the breach itself in terms of technical details, but the fact that this is a security company that builds malware makes it super interesting. One of the things revealed in the breached malware source code was weaponized child pornography: functionality to plant this nasty stuff on victims’ computers. Also in the mix were some 0-days, most notably a previously unknown Flash bug.

We covered a bit about the Flash bug, for which Adobe has already released a patch and which is now available in exploit kits and Metasploit. HD Moore’s Law is in full effect here, as we are seeing how fast these things get picked up and weaponized. We quickly rehashed some advice from the past: enable click-to-play or uninstall this stuff completely, as these things pop up constantly. It is also super telling that the only reason we know about this bug is that it was leaked from an already existing exploit kit being hoarded by a private firm; there are likely tons of these floating around. Another behavior of some of these Flash exploits: once you are compromised, they patch the hole they used, to make sure other hackers can’t get in.

Another story that keeps rearing its head is the UK government trying to ban encryption entirely. They’ve been talking about this for a while now, but it keeps bubbling up in political news stories. Governments want the ability to spy on their own citizens as a whole, and encryption is not allowing them to. We touched on the same conversation going on in the USA, where the FBI wants a “golden key” scenario: there would still be encryption, but they’d have a backdoor to decrypt everything. This is inherently insecure and an awful idea, but lots of people keep bringing it up. It is closest to becoming a reality in the UK, where it would make even things like iMessage illegal and unusable.

We’re all looking forward to Vegas for BlackHat in a few weeks. Be sure to hunt us down to say hi!

Thanks for listening! Check us out on iTunes if you want an audio only version for your phone. Subscribe Here

Join the conversation over on Twitter at #HackerKast or write us directly @jeremiahg, @rsnake, @mattjay

OpenSSL CVE-2015-1793

OpenSSL released a security advisory regarding CVE-2015-1793, a bug in the implementation of the certificate verification process:

… OpenSSL (starting from version 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and “issue” an invalid certificate.

This largely impacts clients that verify certificates, and servers leveraging client authentication. Additionally, the major browsers (IE, Firefox, and Chrome) do not use OpenSSL for their TLS client connections. Thus, while this is a high-severity vulnerability, it carries a low impact in practice. Due to the nature of this particular issue, implementing a test in Sentinel is unnecessary.

If you have any questions please contact WhiteHat Customer Support at support@whitehatsec.com.

The following OpenSSL versions are affected:

* 1.0.2c, 1.0.2b
* 1.0.1n, 1.0.1o

The recommended solution is to update the affected version of OpenSSL:

* OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d
* OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p