The Ubiquity of Web Apps and the Browsers Who Love Them

Web apps are everywhere. But what are they really? How do they differ from a website? Or do they? I imagine that if we were to ask a hundred different people this question, we would get a hundred different answers.

So I’ll throw in my own definition for the purposes of this post. It makes sense to me, but feel free to disagree and share your thoughts in the comments.

A website exists to relay information. A website may have some dynamic functionality, but it exists mostly to relay information to users. That information can be useful or not (think joke websites, for example :P), but no website in itself provides an immersive experience. On the other hand, web apps, or web applications, do. They involve the user. They have logic to behave interactively. They provide functionality beyond simply delivering information. Users create information, modify information, share information.

Examples of these powerful applications include Facebook, Google’s Gmail and Calendar, Google Plus, and bulletin boards. Even embedded devices now provide web applications for configuring and monitoring them. Firewalls, routers, printers, switches, even VoIP phones are configurable through a browser and have been for years. More and more devices now include network stacks that, in some way or another, speak HTTP.

Mmmmm, the memories. I had such fun abusing these embedded devices back in the good old days. Routers would let me reconfigure them without needing to authenticate. Phones would give me their secrets if I just pointed a browser at them. In 2006, I even found and reported a vulnerability in a popular embedded device. Read more about the vulnerability at the Zero Day Initiative.

A long time ago, a few others and I built a rather popular anti-spam service. It utilized HTTP. Why not? HTTP is cheap. We didn’t have to build custom protocols and all the pain that goes with that. I thought that as powerful and popular and awesome as the service was then, we were abusing HTTP. We were using it for things it was never meant to handle. Today, I see what people are building with HTTP, and I think they are abusing it more than ever. But why shouldn’t they? Everything useful lives on the web anyway.

Development is cheap (relatively), because the infrastructure is already there. Custom protocols don’t have to be developed, though some people do build on top of HTTP to provide additional functionality. It’s easy to get users to point a web browser at a pretty web interface. It’s not easy to get users to download and install some untrusted piece of client-based software (well, sometimes. heh). Overall, though, users are generally more willing to browse to a website than they are to load a dedicated program each time.

Plus, with the marketing buzz surrounding “the cloud” (as if it didn’t exist before), both users and developers are more willing to deploy a web app that has all the functionality of a desktop app, while also allowing users to interact with one another, to share data, and to keep all of the data generated off of their hard drives. Though data storage is getting cheaper, it’s still fairly expensive for the average consumer. And as data needs increase for consumers, so do their hard drive needs. Therefore, it’s a win/win situation to store data on the Internet (which is, and has always been, the cloud).

I would be remiss if I failed to mention the increasing usage of web browsers. Not only do computers and cell phones have web browsers, but so do more and more embedded devices. My Wii has a web browser (yes, I’m a proud owner of a Wii :D); the PlayStation and Xbox have web browsers; TVs do; and even microwaves, washer/dryers, and refrigerators will soon be available with web browsers, and therefore be “internet-enabled.” To be honest, this both excites and frightens me.

My fear is that these embedded devices will become the central source of information for the digital home. As a society, we are already on information overload. It is one thing for the washer/dryer to send an email when it’s time to change loads, and recipe management or grocery reminders on the refrigerator are pretty nice. But to put video phones, email, and web browsers on these devices? Shouldn’t that be left to the computer? This introduces many exciting things for a member of the offensive security community such as myself, and should instill trepidation in the average consumer.

Obviously, my focus for this post is on embedded devices. The tech industry has been building and breaking web apps for years now, but to me, embedded devices with web apps and browsers are largely uncharted territory. This brings to mind the SCADA talk that has been going on for years. Sure, there have been successful attacks on a few pieces of SCADA infrastructure over the years, but until fairly recently there wasn’t much focus on SCADA security, from the good guys or the bad guys.

I want to stop that from happening with consumer devices, and to open eyes to the problems that could follow if we continue down this road unchecked. How bad would it be if a microwave could be controlled by a virus because its browser had a vulnerability? How bad would it be if your water heater succumbed to a rootkit because the web app on it was vulnerable to file upload manipulation? Or how bad would it be if, instead of cooling and freezing your food, your refrigerator cooked it because your next-door neighbor wanted to nose around your unprotected wireless network and thought it’d be funny to break your refrigerator remotely? You might say I’m simply trying to induce a panic. I like to think I’m getting people to consider the potential risks of allowing such device functionality into their homes before they go out and purchase these appliances and devices.

Are you afraid yet? I don’t mean to cause nightmares, but a serious reality check is in order here.

I get it. Devices with web apps and browsers on them are going to make our lives easier in many ways. I think it is wonderful to have a refrigerator that can remind me via email to pick up milk on the way home, because it is connected to the Internet and knows (whether through the clever detection technology that’s on the way or because I told it to remind me) that the task needs to be done. The refrigerator is also a sensible, and very useful, place to keep recipe information. Personally, I’d prefer a centralized server in the home that I could access through any of my devices, but it’s something.

But if we don’t do something as consumers, this is going to get out of control. More than it is now. I am genuinely afraid of what might happen if someone from a Country of Ill-Repute™ got hold of my toaster over the internet. Maybe you should be too.

The 3:00 A.M. Incident Response Phone Call: A Success Story

It’s 3:00 A.M., and you receive the dreaded IR phone call. Your CSO is demanding an immediate response to an attack on your company’s resources. Bleary-eyed and lethargic, you stumble out of bed and VPN into your network. You pull up your centralized log management console and see that there have been literally thousands of requests to your website in a span of time that typically sees between 50 and 100. You feel your heart rate pick up, your palms get damp….

You’re under attack.

You begin rummaging through your network changelogs for the past twenty-four hours, attempting to see if there have been any major changes to the infrastructure or major software roll-outs across the network. But you find there have been no network changes, and no previously unvetted software updates have been pushed. “Damn,” you mutter to yourself, “if only the problem were that easy to identify….”

Your fingers flash across the keyboard in a rush as your Chief of the Network Operations Center floods your instant messenger with requests for updates.

C-NOC: “I guess since you’re up at this ungodly hour, CSO has you running IR for the breach?”

Me@3: “Yeah, any word from the network side? Hopefully we’re not seeing any data exfiltration from internal, right?”

C-NOC: “No, just a metric ton of SMTP traffic coming from the log management system…. What alert controls did you have in place in case of an attack?”

Me@3: “Crap, sorry John, guess I forgot to put the alert mail cap in place… wait a second, I have to go, John. I totally forgot to check one of the most obvious things!”

C-NOC: “Ha, you forgot to check the WAF? Noob :P”

{C-NOC John has disconnected}

You have to love an environment where even the most severe problems result in good-hearted ribbing between colleagues.
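
As an aside, the “alert mail cap” John ribs me about is nothing exotic: it’s simply a rate limit on outbound alert email, so an attack generating thousands of log events can’t bury the on-call inbox. Here is a minimal sketch of the idea in Python (the cap, window, and function name are all hypothetical, not any particular product’s feature):

    import time

    ALERT_CAP = 10        # hypothetical: max alert emails per window
    WINDOW_SECONDS = 300  # hypothetical: five-minute window
    sent_timestamps = []

    def should_send_alert(now=None):
        """Return True if an alert email may be sent right now."""
        now = time.time() if now is None else now
        # Forget sends that have fallen out of the current window.
        while sent_timestamps and now - sent_timestamps[0] > WINDOW_SECONDS:
            sent_timestamps.pop(0)
        if len(sent_timestamps) >= ALERT_CAP:
            return False  # cap reached: drop or batch the alert
        sent_timestamps.append(now)
        return True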

You quickly surf to the URL where your WAF typically resides, and find the elegant interface filled with thousands of requests, which appear to be the result of someone running a fuzzer against the account information pages. It seems as if someone is using sqlmap to iterate through all possible injections.

You laugh maniacally to yourself and lean back in your office chair, thoroughly satisfied with your department’s preparations for this very problem. Just three weeks ago, you completed the transition from building SQL queries out of raw user input to more secure parameterized queries. As you pour yourself a bowl of cereal, you begin mentally drafting the incident report to your boss.
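
For the curious, the fix that saved the day looks something like this minimal Python sketch using the built-in sqlite3 module (the table and column names are hypothetical; a generic illustration, not any real system):

    import sqlite3

    conn = sqlite3.connect(":memory:")  # hypothetical example database
    conn.execute("CREATE TABLE accounts (username TEXT, email TEXT)")

    def get_account_unsafe(username):
        # Vulnerable: raw user input is spliced into the SQL string, so
        # input such as "' OR '1'='1" rewrites the query itself.
        query = "SELECT * FROM accounts WHERE username = '%s'" % username
        return conn.execute(query).fetchall()

    def get_account_safe(username):
        # Parameterized: the value travels separately from the SQL text,
        # so injected quotes are treated as data, never as syntax.
        query = "SELECT * FROM accounts WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()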

It’s going to be a good day.

Our Process — How We Do What We Do and Why

A while back I published what became an extremely popular post looking behind the scenes at WhiteHat Sentinel’s backend infrastructure. On display were massive database storage clusters, high-end virtualization chassis, super-fast Ethernet backplanes, fat pipes to the internet, near complete system redundancy, round-the-clock physical security, and so on. Seriously cool stuff that, at the time, supported the 2,000 websites under WhiteHat Sentinel subscription, on which we performed weekly vulnerability assessments.

Today, only seven months later, that number has nearly doubled to 4,000. That’s a level of success we’re very proud of. I guess we’re doing something right, because no one else, consultancy or SaaS provider, comes anywhere close. This is not said to brag or show off, but to underscore that scalability is critical to solving one of the many Web security challenges companies face, and it’s an area we focus on daily at WhiteHat.

To meet the demand we scaled up basically everything. Sentinel now peaks at over 800 concurrent scans and sends roughly 300 million HTTP requests per month, a subset of which are the 3.85 million security checks sent each week. Those checks surface around 185 thousand potential vulnerabilities that our Threat Research Center (TRC) processes each day (Verified, False-Positives, and Duplicates), and collectively they generate 6 TB of data per week. This system of epic proportions has taken millions in R&D and years of effort by many of the top minds in Web security to build.

Clearly, Sentinel is not some off-the-shelf toy of a commercial desktop scanner, nor is it a consultant body shop hiding behind a curtain. Sentinel is a true enterprise-class vulnerability assessment platform, leveraging a vast knowledge base of Web security intelligence.

This is important because a large number of corporations have hundreds, even thousands, of websites each, and all of them need to be protected. Achieving the aforementioned figures without sacrificing assessment quality requires not only seriously advanced automation technology, but a completely new process for performing website vulnerability assessments. As a security pro and vendor who values transparency, I believe this process, our secret sauce, something radically different from anything else out there, deserves to be better explained.

As a basis for comparison, the typical one-off consultant assessment/pen-test is conducted by a single person using an ad hoc methodology, with one vulnerability scan, one website at a time. Generally, high-end consultants are capable of thoroughly assessing roughly twenty websites in a year, each a single time. That’s an annual ratio of 20:1 (assessments to people).

To start off, our highly acclaimed and fast-growing Threat Research Center is the department responsible for service delivery. At over 40 people strong, the entire team is located at WhiteHat headquarters in Santa Clara, California. All daily TRC workload is coordinated via a special software-based workflow management system, named “Console,” which we purpose-built to shuttle the millions of discrete tasks, across hundreds or thousands of websites, that need to be completed.

Work units include initial scan set-ups, configuring the ideal assessment schedule, URL rule creation, form training, security check customization, business logic flaw testing, vulnerability verification, findings review meetings, customer support, etc. Each work unit can be handled by any available TRC expert, or team of experts, specializing in the specific area of Web security it requires, and the units may take place during different stages of the assessment process. Once everything is finished, every follow-on assessment becomes automated.
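
As a toy illustration of the work-unit idea (a sketch of the general pattern only, not WhiteHat’s actual Console code; the task names and data structures are hypothetical):

    from collections import deque

    # Tasks are tagged with the specialty they require; any available
    # expert with a matching specialty can claim the next one.
    tasks = deque([
        ("scan-setup", "site-001"),
        ("form-training", "site-002"),
        ("logic-testing", "site-001"),
        ("verification", "site-003"),
    ])

    def claim_task(expert_specialties):
        """Return the first queued task this expert can handle, if any."""
        for task in list(tasks):
            if task[0] in expert_specialties:
                tasks.remove(task)
                return task
        return None

    print(claim_task({"verification", "logic-testing"}))
    # -> ('logic-testing', 'site-001')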

That is the real paradigm buster: a technology-driven website vulnerability assessment process capable of overcoming the arcane one-person-one-assessment-at-a-time model that stifles scalability. It’s as if the efficiency of Henry Ford’s assembly line met the speed of a NASCAR pit crew. This model dramatically decreases the man-hours necessary per assessment, leverages the available skills of the TRC, and delivers consistently over time. No other technology can do this.

For a long-time Web security pro, seeing such a symphony of innovation come together is really a sight to behold. And if there is any question about quality: we expect Sentinel PE testing coverage to meet or exceed that of any consultancy anywhere in the world. That is, no vulnerability that exposes the website or its users to a real risk of compromise should be missed.

Let’s get down to brass tacks. If all tasks were combined, a single member of the TRC could effectively perform ongoing vulnerability assessments on 100 websites a year. At 100:1, Sentinel PE is 5x more efficient than the traditional consulting model. Certainly impressive, but this is an apples-to-oranges comparison: the “100” in the 100:1 ratio counts websites, NOT assessments like the earlier 20:1 consultant ratio. The vast majority of Sentinel customer websites receive weekly assessments, not annual one-offs, so the more accurate calculation works out to 5200:1 (100 websites × 52 weeks). Sentinel also comes in varied flavors of coverage: SE and BE measure in at 220:1 and 400:1 websites to TRC members, respectively.
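
For anyone who wants to check the math, here is the back-of-envelope arithmetic behind those ratios, using only the figures already cited above:

    # Websites-to-people: a consultant covers ~20 sites a year, once each.
    consultant_sites_per_year = 20
    sentinel_pe_sites_per_trc = 100   # ongoing sites per TRC member

    print(sentinel_pe_sites_per_trc / consultant_sites_per_year)  # 5.0x

    # Assessments-to-people: weekly assessments mean 52 per site per year.
    print(sentinel_pe_sites_per_trc * 52)                         # 5200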

The customer experience perspective

Whenever a new customer website is added to WhiteHat Sentinel, a series of assessment tasks is generated by the system and automatically delegated via “Console,” our proprietary backend workflow management system. Each task is picked up and completed either by a scanner technology component or by a member of our Threat Research Center (TRC), the team of Web security experts responsible for all service delivery.

Scanner tasks include logging in to acquire session cookies, site crawling, locating forms that need valid data, customizing attack injections, vulnerability identification, etc. Tasks requiring some amount of hands-on work include scan tuning, vulnerability verification, custom test creation, filling out forms with valid data, business logic testing, etc. After every task has been completed and instrumented into Sentinel, a comprehensive assessment can be performed each week in a fully automated fashion, or at whatever frequency the customer prefers. No additional manual labor is necessary unless a particular website change raises a flag for someone in the TRC.
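
To make the “logging in to acquire session cookies” step concrete, here is a minimal sketch of the general technique using Python’s requests library (the URL, form fields, and credentials are hypothetical; this illustrates the pattern, not Sentinel’s implementation):

    import requests

    BASE = "https://app.example.com"  # hypothetical target application

    session = requests.Session()

    # Authenticate once; the Session object captures the server's
    # session cookie and replays it on every subsequent request.
    session.post(BASE + "/login",
                 data={"username": "scanner", "password": "secret"})

    # Any crawl or security check issued through this session now runs
    # in an authenticated context.
    resp = session.get(BASE + "/account/settings")
    print(resp.status_code, len(resp.text))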

This entire collection of tasks, all of which must be completed when a new website is added to Sentinel, is a process we call “on-boarding.” From start to finish, the full upfront on-boarding process normally takes 1 to 3 weeks and 2 to 3 scans.

From there, some people in the TRC are dedicated purely to monitoring the hundreds of running scans and troubleshooting anything that looks out of place on an ongoing basis. Another team is tasked simply with verifying the hundreds of thousands of potential vulnerabilities the scanner flags each week, such as Cross-Site Scripting, SQL Injection, Information Leakage, and dozens of others. Verified results, also known as false-positive removal, are one of the things our customers say they like best about Sentinel, because they mean many thousands of findings customers don’t have to waste their time on.

Yet another team’s job is to configure forms with valid data and mark which are safe for testing. All this diversification of labor frees up time for those who are proficient in business logic flaw testing, allowing them to focus on issues such as Insufficient Authentication, Insufficient Authorization, Abuse of Functionality, and so on. Contrast everything you’ve read so far with a consultant engagement that amounts to a Word or PDF report.

At this point you may be wondering whether website size and client-side technology complexity cause us any headaches. The answer: not so much anymore. Over the last seven years we’ve seen, and had to adapt to, just about every crazy, confusing, and just plain silly website technology implementation the Web has to offer, of which there are painfully many. Then of course we’ve had to add support for Flash, Ajax, Silverlight, JavaScript, Applets, ActiveX, (broken) HTML(5), CAPTCHAs, etc.

The three most important points here are:

1) Sentinel has been successfully deployed on about 99% of the websites we’ve seen.
2) Multi-million-page sites are handled regularly without much fanfare.
3) Most boutique consultancies assess maybe a few dozen websites each year. We call this Monday through Friday.

Any questions?

An Incident Is a Terrible Thing to Waste (even someone else’s)

Hacks happen. The data captured by Verizon’s Data Breach Investigations Report, DataLossDB, and WASC’s Web Hacking Incident Database makes this reality painfully obvious. The summary: most incidents, and the bulk of the data lost, are a direct result of vulnerable Web applications being exploited. As further evidence, Forrester’s 2009 research reported that “62% of organizations surveyed experienced breaches in critical applications in a 12 month period.” Dasient, a firm specializing in web-based malware, said, “[In 2011] The probability that an average Internet user will hit an infected page after three months of Web browsing is 95 percent.”

These resources and the compromises of Apache.org, Comodo, Gawker, HBGary Federal, MySQL.com, NYSE, Sun.com, Zynga, and countless others are a good excuse to have a conversation with management about your organization’s potential risks.

Despite the facts, the idea of getting hacked is not often a conscious thought in the minds of executives, so of course it’s only a matter of time before the business becomes another statistic. When this happens and the business is suddenly awakened from a culture of security complacency, all eyes will become focused on understanding exactly what happened, why it happened, and how much worse it could have been. In the aftermath of a breach, employee dismissals and business collapses are rare; more often than not, security budgets are expanded. Few things free up security dollars faster than a compromise, except for maybe an auditor. The security department will have the full attention of management, the board, and customers, all of whom want to know what steps are being taken to ensure this never happens again. Post breach is an excellent time to put a truly effective security program in place, not one just built around point products, but one designed around outcomes and built to have a lasting impact.

PROTIP: Security as a Differentiator

Update: For the counterpoint, see “Security is rarely a differentiator,” via Mike Rothman (@securityincite)

Every company needs a competitive advantage, the more the better, and “security” can be a powerful differentiator. This is because security is very important to people and is becoming more so with each passing day of computer-hacking, privacy-invading headlines and virus-infected PCs. People need and want to be able to trust those who ask for their names, contact details, email, social security numbers, health information, payment data, and so on. When security is made visible (i.e., when you help customers be and feel safe), customers may be more inclined to do business with those who clearly take the matter seriously than with those who don’t.

“Making security visible” can be achieved by offering strong and flexible security controls, documentation about a comprehensive infosec program, recent audit reports conducted by an independent third party, contractual SLAs, etc. As an information security professional, imagine being able to ask for budget to do these things and more by saying to a CxO, “If we invest $A on B (security thing), our sales & marketing department’s research estimates an increase in new customers and a financial upside of $C.” As long as C is greater than A, you have a strong business case.
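
Plugging made-up numbers into that pitch shows how simple the decision rule is (the figures below are invented purely for illustration):

    investment_a = 250_000        # $A: cost of the security investment
    estimated_upside_c = 900_000  # $C: projected revenue it unlocks

    if estimated_upside_c > investment_a:
        surplus = estimated_upside_c - investment_a
        print("Strong business case: $%d projected net upside" % surplus)
    else:
        print("Upside doesn't cover the cost; rework the pitch")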

All of a sudden, security budgets go from being perceived as an unavoidable cost of doing business, where the goal is to spend the least amount of dollars possible, to a vehicle that drives revenue. Nothing beats that! To get there, security pros must engage with sales and marketing personnel, and of course customers and prospects, to see how often “security” is a buying criterion. Understand what customers want and the premium value potentially applied to the sale. Successful efforts result in an excellent opportunity to align with the business objectives, and everybody wins.