
ModSecurity Blog: February 2008

ModSecurity User Survey

With the release of ModSecurity 2.5 yesterday, this seemed like the perfect time to get feedback from the user community. The 2.5 release is important because it includes many features identified by the user community, which highlights the need for us (Breach) to fully understand how people are using ModSecurity and what challenges you are facing.

With this in mind, we have put together the first ModSecurity User Survey.

I urge everyone to please take about 5 minutes and fill out the survey. With this information, we will be able to map out areas where we need to focus research and development, not only on the ModSecurity code itself but also on the rule sets and supporting tools.

We will leave the survey open until the end of March.

Thanks for your time everyone.

ModSecurity 2.5 Released

The final version of ModSecurity 2.5.0, the long awaited next stable version of ModSecurity, is now available.  This release offers quite a few new features: set-based matching, a wider variety of string matching operators, transformation caching, support for writing rules as Lua scripts, credit card number validation, enhanced means for maintaining and customizing third-party rule sets, and more.  Take a look at the main website for a summary of the new features.
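To give a flavor of what some of these features look like in practice, here is a minimal sketch of a few 2.5-style rules. It is not taken from the release itself; the patterns, paths and messages are illustrative only.

    # Set-based (parallel) matching: flag a request whose User-Agent contains
    # any phrase from a list, without writing one large regular expression.
    SecRule REQUEST_HEADERS:User-Agent "@pm nikto nessus sqlmap" \
        "phase:1,t:lowercase,log,deny,msg:'Known scanner detected'"

    # One of the new string-matching operators: match an exact path without a regex.
    SecRule REQUEST_FILENAME "@streq /admin/backup.php" \
        "phase:1,log,deny,msg:'Direct request for backup script'"

    # Credit card number validation: @verifyCC Luhn-checks anything that
    # matches the supplied pattern.
    SecRule ARGS "@verifyCC \d{13,16}" \
        "phase:2,log,deny,msg:'Credit card number detected in request'"

    # Rules as Lua scripts: the script decides whether the rule matches
    # (the script path is illustrative).
    SecRuleScript /etc/modsecurity/checks/custom-check.lua "phase:2,log,deny"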

Getting ModSecurity

As always, send questions/comments to the community support mailing list.  You can download the latest releases, view the documentation and subscribe to the mailing list at www.modsecurity.org.

Building ModSecurity 2.5

The documentation has been updated with a new build process for 2.5.  The new process uses the typical 'configure', 'make' and 'make install' approach instead of requiring you to hand-edit a Makefile as in previous releases.  This makes the build straightforward for systems with libraries in standard locations, while still offering flexibility for unusual setups.  More details are available in the installation section of the documentation.
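For reference, the new flow looks roughly like the sketch below, assuming a 2.5.0 source tarball and an existing Apache 2.x installation; the apxs path is illustrative and only needed if apxs is not detected automatically.

    # Unpack the source and enter the Apache 2 module directory
    tar xzf modsecurity-apache_2.5.0.tar.gz
    cd modsecurity-apache_2.5.0/apache2

    # Configure, pointing at apxs explicitly if it is not in a standard location
    ./configure --with-apxs=/usr/local/apache2/bin/apxs

    # Build and install the module
    make
    make install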

Web Hacking Incidents Database Annual Report for 2007

Breach Labs, which sponsors the Web Hacking Incidents Database (WHID), has issued an analysis of the web hacking landscape in 2007 based on the incidents recorded in WHID. It took some time, as we first had to apply the newly introduced attributes to all 2007 incidents and then mine the data to find the juicy stuff:

  • The drivers, business or other, behind Web hacking.
  • The vulnerabilities hackers exploit.
  • The types of organizations attacked most often.

To be able to answer those questions, WHID tracks the following key attributes for each incident:

  • Attack Method - the technical vulnerability exploited by the attacker to perform the hack.
  • Outcome - the real-world result of the attack.
  • Country - the country in which the attacked web site (or owning organization) resides.
  • Origin - the country from which the attack was launched.
  • Vertical - the field of operation of the organization that was attacked.

Key findings were: 

  • 67% of the attacks in 2007 were "for profit" motivated. Ideological hacking came second.
  • At 20%, good old SQL injection dominated as the most common technique used in the attacks. XSS finished 4th with 12%, while the young and promising CSRF is still only seldom exploited in the wild and was included in the "others" group.
  • Over 44% of incidents were tied to non-commercial sites such as government and education. We assume this is partly because incidents are more common in these organizations and partly because these organizations are more inclined to report attacks.
  • On the commercial side, internet-related organizations top the list. This group includes retail shops (mostly e-commerce sites), media companies and pure internet services such as search engines and service providers. It seems that these companies do not compensate for the higher exposure they incur with proper security procedures.
  • In incidents where records were leaked or stolen, the average number of records affected was 6,000.

The full report can be found at Breach Security Network.

Tangible ROI of a Web Application Firewall (WAF)

One of the challenges facing organizations that need to increase the security of their web applications is concretely demonstrating an appropriate "Return On Investment" (ROI) to justify procurement. Organizations can only allocate a finite amount of budget to security efforts, so security managers need to be able to justify any commercial services, tools and appliances they want to deploy. As most people who have worked in information security for an extended period of time know, producing tangible ROI for security efforts that address business drivers is both quite challenging and critically important.

The challenge for security managers is to not focus on the technical intricacies of the latest complex web application vulnerability or attack. C-level Executives do not have the time, and in most instances the desire, to know the nuances of an HTTP Request Smuggling attack. That is what they are paying you for! Security managers need to function as a type of liaison where they can take data from the Subject Matter Experts (SMEs) and then translate that into a business value that is important to the C-level Executive.

One almost guaranteed pain point for most executives is the vulnerability scan report presented by auditors. The auditors are usually brought in by, and report to, a higher-level third party (be it OMB in government or PCI in retail). Executives like to see "clean vulnerability scan reports." While this will certainly not guarantee that your web application is 100% secure, it can certainly help to prove the counter-argument. To make matters worse, nothing is more frustrating to upper management than auditor reports that list repeat vulnerabilities that either never go away or pull the "Houdini" trick (they disappear for a while only to suddenly reappear). Sidebar - see Jeremiah Grossman's blog post for examples of this phenomenon. These situations are usually attributable to breakdowns in the Software Development Life Cycle (SDLC), where code updates are too time consuming or the change control processes are poor.

This is one of the best examples of where a Web Application Firewall can prove its ROI.

At Breach Security, we receive many inbound calls from prospects who are interested in WAF technology but lack that "Big Stick" that helps convince upper management to actually make the purchase. The best scenario we have found is to suggest a before-and-after comparison of vulnerability scan reports while they are testing the WAF on their network. The idea is to deploy the WAF in blocking mode and then initiate a rescan of a protected site. The reduction in findings is an immediate, quantitative ROI.
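On the ModSecurity side, the configuration change for such a test is small. The following is a minimal sketch, assuming the generic detection rules are already available on the system; the include path is illustrative.

    # Switch ModSecurity from monitoring to blocking for the before/after test.
    # Run in DetectionOnly mode first to confirm legitimate traffic is not flagged.
    SecRuleEngine On            # previously: SecRuleEngine DetectionOnly

    # Load the generic detection rule set (path illustrative)
    Include /etc/modsecurity/rules/*.conf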

Here is a real example. One of our current customers followed this exact roadmap and this is a summary (slightly edited to remove sensitive information) of the information they sent back to us:

Our WAF is installed and running. I have tested its impact on www.example.com and it is operating very admirably. This morning I had the vulnerability scanning team run an on-demand scan to test the efficacy of the appliance, and I was very impressed with the results. Our previous metrics for www.example.com in the last scan were 64 vulnerabilities, across all outside IP addresses (including www.example.com, example2.com, example3.com, etc.) and with the Breach appliance in place, the metric from today's scan was 5 vulnerabilities, with details:

- High vulnerabilities dropped from 38 to 0

- Medium vulnerabilities dropped from 12 to 0

- 1 low vulnerability remains due to simply running a web server (we will eliminate this via exception)

- 1 low vulnerability due to a file/folder naming convention that is typical and attracts unwanted attacks (will be eliminated via rule addition)

Bear in mind that I have applied the appliance with a basic (almost strictly out-of-the-box) configuration and ruleset to protect only www.example.com (192.168.1.100 in the report), and the 35 warnings that remain are for the other websites, and would similarly disappear when protected by the appliance. In my opinion, this was a very successful test that indicates the effectiveness of the appliance.

So, looking at the report after the WAF was in place, the scan of the www.example.com web site dropped 38 high and 12 medium vulnerabilities and left only 2 low ones (which are really just informational notices). That is pretty darn good, and that was just with the default, generic detection ModSecurity rule set! Hopefully this information has helped to provide a possible use-case testing scenario to show the tangible ROI of a WAF.

In a future post, I will discuss how custom WAF rule sets can be implemented to address more complex vulnerability issues identified not by a scanner but by actual people who have performed a web assessment/pentest.
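As a preview, such a custom rule often takes the form of a "virtual patch": a narrow rule that enforces strict input on the one parameter a pentester found to be vulnerable. The sketch below is purely illustrative; the script name, parameter and whitelist pattern are hypothetical.

    # Hypothetical virtual patch: allow only a strict whitelist in the vulnerable
    # parameter of one specific script, and block anything else.
    SecRule REQUEST_FILENAME "@streq /app/orders.php" \
        "chain,phase:2,t:none,deny,log,msg:'Virtual patch: SQL injection in orders.php id parameter'"
        SecRule ARGS:id "!@rx ^[0-9]{1,10}$"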
