ModSecurity Breach

ModSecurity Blog


What's the Score of the Game?

We, as the webappsec community, should move away from "holy wars" arguing that there is only one right way to address web application vulnerabilities - source code reviews, vulnerability scanning, or web application firewalls - and instead focus on end results. Specifically, instead of obsessing over inputs (using application X to scan) we should turn our attention to outputs (how hackable the web application actually is). This concept has been skillfully promoted by Richard Bejtlich of TaoSecurity, who calls it Control-Compliant vs. Field-Assessed security. Here is a short introduction:

In brief, too many organizations, regulators, and government agencies waste precious time and resources devising and auditing "controls," regardless of the effect these controls have or do not have on security. They are far too input-centric; they should become more output-aware. They obsess over recording conditions they believe may be helpful while remaining ignorant of the "score of the game." They practice management by belief and disregard management by fact.

While the impetus for Richard's soapbox rant was government auditing mindsets, the same "input-centric" critique applies to the current state of webappsec. Due to regulations such as PCI, we are unfortunately framing web security through an input-centric lens and forcing users to check a box stating that they are using process X, rather than formulating a strategy for field assessments that yield proper metrics on how difficult it is to hack into the site. We focus too much on whether a web application's code was manually or automatically reviewed, or whether it was scanned with vendor X's scanner, rather than on what is really important - did these activities actually prevent someone from breaking into the web application? If the answer is no, then who really cares what process you followed? More to the point, the fact that your site was PCI compliant at the time of the hack will be of little consequence.

Let's take a look at each of these input-centric processes through another great analogy by Richard:

Imagine a football (American-style) team that wants to measure their success during a particular season. Team management decides to measure the height and weight of each player. They time how fast the player runs the 40 yard dash. They note the college from which each player graduated. They collect many other statistics as well, then spend time debating which ones best indicate how successful the football team is. Should the center weigh over 300 pounds? Should the wide receivers have a shoe size of 11 or greater? Should players from the north-west be on the starting line-up? All of this seems perfectly rational to this team. An outsider looks at the situation and says: "Check the scoreboard! You're down 42-7 and you have a 1-6 record. You guys are losers!"

This is an analogy I have been using more and more recently when discussing source code reviews, because they are somewhat like the NFL Scouting Combine. Does measuring each player's physical abilities guarantee a player's or a team's success? Of course not. Does it play a factor in the outcome of an actual game? Usually, but a team's draft grade does not always project to actual wins the following season. Similarly, is using an input validation security framework a good idea? Absolutely, but the important point is to look at the web application holistically in a "real game environment" - meaning in production - to see how it performs.

Sticking with the analogy, vulnerability scanning in dev environments is akin to running an intra-squad scrimmage. It is much closer to actual game conditions - there is an offense and a defense, players are wearing pads, there is a game clock, and so on. But one key element is missing - the opponent. Vulnerability scanners do not act the way real attackers do. Attackers are unpredictable. This is why, even though a team reviews film of its opponent to identify tendencies and devise a game plan, it is absolutely critical that the team can make adjustments on the fly during the game. For the same reason, running vulnerability scans in production is critical: you need to test the live system.

Running actual zero-knowledge penetration tests is like playing pre-season games in the NFL. The opponent in this case acts much more like a real attacker would, actually exploiting vulnerabilities rather than probing and making inferences about them. It is as close as you can get to the real thing, except that the outcome of the game doesn't matter :)

Web application firewalls running in detection-only mode are like trying to play a real football game with two-hand touch rules. If you don't actually tackle your opponent to the ground (that is, implement blocking), you will never truly prevent an attack. And as most of you have seen with premier running backs in the NFL, they have tremendous "evasion" capabilities - spin moves and stiff-arms - that make it difficult for defenders to tackle them. The same is true of web application layer attacks: WAFs need to run in blocking mode and have proper anti-evasion normalization features to properly prevent attacks.
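As a rough illustration of blocking mode plus normalization, a ModSecurity rule might look like the sketch below. The rule ID and regex pattern are hypothetical examples, not rules from any shipped rule set; the transformation actions normalize the input before the pattern is matched, which is what defeats common evasion tricks such as URL-encoding or mixed case:

```apache
# Run the rule engine in blocking mode, not DetectionOnly
SecRuleEngine On

# Hypothetical rule: deny requests whose arguments contain a basic
# SQL injection pattern. The t: transformations decode and normalize
# the input first, so encoded or mixed-case evasion attempts are
# matched in their canonical form.
SecRule ARGS "@rx union\s+select" \
    "id:900001,phase:2,deny,status:403,log,\
    t:urlDecodeUni,t:compressWhitespace,t:lowercase,\
    msg:'SQL injection attempt blocked after normalization'"
```

Without the `t:urlDecodeUni` and `t:lowercase` transformations, a trivially encoded payload such as `UNION%20SELECT` would sail past the same pattern.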

It is on the live production network where all of your security preparations will pay off - or, on the other hand, where your web application security will crash and burn. I very seldom see development and staging areas that adequately mimic the production environment, which means you will not truly know how your web application security will fare until it is exposed to untrusted clients. When your web application goes live, it is critical that your entire "team" (development, assessment, and operations) is focused and able to respond quickly to unexpected client behavior. The problem is that these groups do not always communicate effectively and coordinate their efforts. Specifically, each of these three groups should be sharing its output with the other two:

Conduct code reviews on all web applications and fix the identified issues. Code reviews should be conducted when applications are initially developed and placed into production, and again whenever the code changes. Any issues that cannot be fixed immediately should be identified and passed on to the vulnerability scanning and WAF teams for monitoring and remediation.

Conduct vulnerability scans and penetration tests on all web applications. Scans should be run before an application goes online, at regularly scheduled intervals thereafter, and on demand when code changes are made. Any issues identified should be passed on to the development and WAF teams for remediation.

Deploy a web application firewall in front of all web servers. A WAF provides protection in production. When the WAF identifies issues with the web application, it can report back to both the development and vulnerability scanning teams for remediation and monitoring. It is this on-the-fly, in-game agility where a WAF shines.
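The reporting loop described above can be fed by ModSecurity's audit engine, which records full transaction details for the development and scanning teams to review. A minimal sketch, assuming a serial audit log (the log path is a placeholder for your environment):

```apache
# Record full transaction details only for requests that triggered a
# rule or returned an error status, so the audit log contains real
# attack traffic for the other teams to review rather than all traffic.
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^(?:5|4(?!04))"
SecAuditLog /var/log/modsec_audit.log

# Log the request headers (B), request body (I), final response
# headers (H), and the trailer (Z) along with the mandatory header (A).
SecAuditLogParts ABIFHZ
```

Each audit log entry ties the triggering rule to the exact request that fired it, which is precisely the evidence a developer needs to reproduce and fix the root cause.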

Are game-time adjustments always the absolute best option when reviewed in film sessions the following day, or on Monday by armchair quarterbacks? Nope, but that is OK - they can be adjusted. This film review also allows for the identification of root-cause issues so that they can be fixed (source code changes) in preparation for the next game.


That's a nice post. And a tough struggle. Organizations are stronger at writing policies than they are at getting real-world data.

Penetration testing has inherent weaknesses too. I prefer it over heavy printed policies, but there is a tendency toward enumerating badness rather than securing. Dave Wichers presented an idea he dubbed Agile Security at the OWASP Europe conference (http://www.owasp.org/images/b/b8/AppSecEU08-Agile_and_Secure.ppt).
This could get us beyond that stage.

I believe the key problem - and that's a flaw in Jeremiah's post too - is the belief that you can indeed get valid metrics on security. Gary McGraw put it this way: the results of a code scan range from "very bad" to "we do not know." You have no proof of the absence of flaws; you can only conclude that you have not found any. And on top of that, what you find depends on the skills of the pen testers.

So I would say you cannot get to the point where you know the score of an individual game.

This is where statistics comes into play, which makes the approach more feasible for enterprises with large numbers of web applications. Assessing dozens or hundreds of applications per year may indeed give you valid metrics. But that takes us more into the risk management field.

No comment on the deployment of a WAF, though. Way to go.

regs,

Christian


