
ModSecurity Blog: December 2007

OWASP London Chapter December 6th Presentations Now Online

We had a couple of very interesting presentations at the OWASP London Chapter meeting on December 6th. They are now available for download from the Chapter page or directly from here:

Getting good turnout at OWASP London meetings is traditionally difficult. There were 13 people attending the meeting in September (the first meeting I organised), with 18 people showing up in December. (Is this a positive trend? Probably too early to tell.) Some other Chapters don't seem to have problems with low attendance (for example those in Belgium, Israel and New York, to name a few), but having 18 people turn up -- in spite of the horrible weather we had on the day -- is a success in London.

I am sure there is no shortage of people in London interested in computer security but finding a way to get them to attend OWASP meetings still eludes us. Adrian, one of the speakers at the most recent meeting, shares the sentiment in his blog post. So does Daniel, who had started the Chapter and organised many of its meetings:

Having started the OWASP London chapter and sorted a few London meetings, I can tell you it's a damn hard job. The problem with London is that everyone is busy, and after a long day you often feel like going home and relaxing with the family/wife/girlfriend/boyfriend etc. It sucks, I wish it was more like the US side of things, but it's a price you pay for being in such an aggressive market.

I am sure of a few things, though: hard work and persistence are the essential ingredients. Our biggest problem, however, may be with the lack of proper marketing. I am guessing most of the people who would be interested in attending have no idea about the meetings at all. Oh, well. One more thing to add to my to-do list!

Initial Release Candidate for ModSecurity 2.5.0 (2.5.0-rc1)

The first release candidate for the ModSecurity 2.5 release is now available.  It has been a while since the last development release, so I wanted to go over the new features and enhancements that ModSecurity 2.5 brings.  For the full documentation, go to www.modsecurity.org/documentation

New Features

Numerous features have been added to ModSecurity 2.5.

Experimental Lua Scripting Support

You can now write ModSecurity rules as Lua scripts!  Lua can also be used as an @exec target as well as with @inspectFile.  This feature should be considered experimental and the interface to it may change as we get more feedback.  Go to www.lua.org for more information.

ModSecurity:
SecRuleScript /path/to/script.lua [ACTIONS]
Lua Script:
function main()
    -- Retrieve script parameters.
    local d = m.getvars("ARGS", { "lowercase", "htmlEntityDecode" } );
    -- Loop through the parameters.
    for i = 1, #d do
        -- Examine parameter value.
        if (string.find(d[i].value, "<script")) then
            -- Always specify the name of the variable where the
            -- problem is located in the error message.
            return ("Suspected XSS in variable " .. d[i].name .. ".");
        end
    end
    -- Nothing wrong found.
    return nil;
end

Efficient Phrase Matching

Large lists of spam keywords can be a performance bottleneck and tough to manage.  Efficient phrase matching operators are now supported to make this faster and easier (@pm and @pmFromFile).  See my last development release blog entry for more details: www.modsecurity.org/blog/archives/2007/06/another_modsecu.html
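
For example, a hypothetical rule along these lines (the keyword list and file name are purely illustrative) matches any phrase from a set against the User-Agent header without building a huge regular expression alternation:

SecRule REQUEST_HEADERS:User-Agent "@pm nikto nessus paros dirbuster" \
        "phase:1,t:lowercase,deny,log,msg:'Scanner User-Agent detected'"

The same list can be maintained in an external file, one phrase per line, and referenced with @pmFromFile:

SecRule REQUEST_HEADERS:User-Agent "@pmFromFile scanners.txt" \
        "phase:1,t:lowercase,deny,log,msg:'Scanner User-Agent detected'"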

String Matching Operators

Several string matching operators are now supported for cases where regular expression matching is not required (@contains, @containsWord, @streq, @beginsWith, @endsWith and @within).  These operators also support variable expansion, so you can accomplish more complex matching such as "@streq %{REMOTE_ADDR}".
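
A quick sketch of variable expansion with these operators (the client_ip parameter name is made up for illustration): flag requests where a submitted parameter does not match the client's own address.

SecRule ARGS:client_ip "!@streq %{REMOTE_ADDR}" \
        "phase:2,t:none,log,pass,msg:'Submitted client_ip does not match REMOTE_ADDR'"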

Geographical Lookups

You can now look up and act on geographical information derived from an IP address.  The GEO collection will extract the country, region, city, postal code and coordinates, as well as DMA and area codes in the US.

SecRule REMOTE_ADDR "@geoLookup" "chain,drop,msg:'Non-UK IP address'"
SecRule GEO:COUNTRY_CODE "!@streq GB"

Transformations

More transformations are now supported (t:trimLeft, t:trimRight, t:trim, t:jsDecode).  These transformations are now cached so that they do not have to be reapplied for each rule, reducing overhead.
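
As a small, hypothetical illustration, the new transformations are requested like any others; because results are cached, a second rule using the same transformation chain on the same variable reuses the already transformed value:

SecRule ARGS "@contains <script" \
        "phase:2,t:none,t:jsDecode,t:lowercase,t:trim,log,pass,msg:'Script tag in request arguments'"
SecRule ARGS "@contains onerror" \
        "phase:2,t:none,t:jsDecode,t:lowercase,t:trim,log,pass,msg:'Event handler in request arguments'"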

Variables

New variables were added.  You can now easily differentiate between a GET and POST argument (ARGS_GET, ARGS_POST, ARGS_GET_NAMES, ARGS_POST_NAMES) as well as determine what was previously matched (MATCHED_VAR_NAME, MATCHED_VAR, TX_SEVERITY).
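
For example (a hypothetical rule), the new variables make it easy to restrict an inspection to POST parameters only and to report which variable actually matched:

SecRule ARGS_POST "@contains <script" \
        "phase:2,t:lowercase,log,pass,msg:'Possible XSS in POST parameter %{MATCHED_VAR_NAME}'"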

Actions

New actions allow for easier logging of raw data (logdata), easier rule flow by skipping to a given rule/marker instead of by a rule count (skipAfter and SecMarker), and more flexible rule exceptions based on any ModSecurity variable (ctl:ruleRemoveById).  Additionally, the "allow" action has been made more flexible: you can now allow the request for only the current phase (the old default), for only the request portion, or for both the request and response portions (the new default).
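
Here is a rough sketch of skipAfter with SecMarker (the marker name, address and phrases are made up): requests from a trusted address jump over a block of rules to a named marker instead of skipping by rule count, while logdata records the exact value that matched.

SecRule REMOTE_ADDR "^192\.168\.1\.100$" \
        "phase:2,pass,nolog,skipAfter:END_HOST_CHECKS"
SecRule ARGS "@pm badphrase1 badphrase2" \
        "phase:2,deny,log,logdata:'%{MATCHED_VAR}',msg:'Blacklisted phrase'"
SecMarker END_HOST_CHECKS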

Enhancements

Along with all the new ModSecurity 2.5 features, many existing features have been enhanced.

Processing Partial Bodies

In previous releases, ModSecurity would deny a request if the response body was over the limit.  This is now configurable, allowing processing of the partial body instead (SecResponseBodyLimitAction).  Additionally, request body sizes can now be controlled without including the size of uploaded files (SecRequestBodyNoFilesLimit).
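
In configuration terms this might look like the following (the limits are arbitrary examples): scan only the first part of an oversized response instead of rejecting the transaction, and cap request bodies without counting uploaded files.

# Process the first 512 KB of a response body rather than denying the request
SecResponseBodyLimit 524288
SecResponseBodyLimitAction ProcessPartial
# Limit request bodies to 128 KB, excluding the size of uploaded files
SecRequestBodyNoFilesLimit 131072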

Better support for 64-bit OSes

ModSecurity now compiles cleanly on Solaris 10 and other 64-bit operating systems.  As Apache (and thus ModSecurity) runs on such a wide variety of OSes, please help by reporting any portability issues that may arise.

Logging

There have been numerous enhancements to both auditing and debug logging.

Matched Rules Audited

A new audit log part, K, is now available.  If enabled, every rule that matched is logged to this section of the audit log (one rule per line).  This enhances auditing, helps determine why an alert was generated and makes it easier to track down any false positives that may occur.

Component Signatures Audited

ModSecurity is becoming more modular.  To better manage external components (rulesets, operators, etc.), each component can add to the signature line logged in the audit log (SecComponentSignature).  This allows for better auditing of components and their versions.
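
For example, a rule set package could announce itself like this (the component string is purely illustrative); the string is then appended to the ModSecurity signature recorded in the audit log:

SecComponentSignature "Example_Ruleset/1.2.3"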

Redundant Audit Logs

To add redundancy, you can now send audit logs to two locations simultaneously (SecAuditLog2).
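
A possible configuration sketch, assuming concurrent audit logging is in use (the paths are illustrative):

SecAuditLogType Concurrent
SecAuditLogStorageDir /var/log/modsec/audit
SecAuditLog /var/log/modsec/audit_index_1.log
SecAuditLog2 /var/log/modsec/audit_index_2.log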

Enhanced Debugging

The debug log now includes more information on an executing rule.  The ruleset filename, line number and the full rule itself are now logged to the debug log.

Migration

To help support migration from ModSecurity 2.1 to 2.5, you can now use the Apache <IfDefine> directive to exclude 2.5 specific rules and directives.

<IfDefine MODSEC_2.5>
    SecAuditLogParts ABCDEFGHIKZ
</IfDefine>
<IfDefine !MODSEC_2.5>
    SecAuditLogParts ABCDEFGHIZ
</IfDefine>
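
The MODSEC_2.5 name in this example is simply an <IfDefine> parameter of your choosing; it is typically defined by passing it on the Apache command line when the server is started, for example (the exact invocation depends on your platform):

httpd -D MODSEC_2.5 -k start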

Feedback

As you can see there are a lot of new features and enhancements in ModSecurity 2.5.  I hope to see some good feedback from the release candidates so that we can get ModSecurity 2.5 polished up and the stable 2.5.0 available as soon as possible.

As always, send questions/comments to the community support mailing list.  You can download the latest releases, view the documentation and subscribe to the mailing list at www.modsecurity.org.

PCI Requirement 6.6 Is About Remediation

There have been many heated debates in web security circles around the wording of PCI DSS 1.1 Requirement 6.6, and they all center on the semantics of the word "either". Requirement 6.6 states the following:

"6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:

  • Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
  • Installing an application layer firewall in front of web-facing applications.

Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement."

The word "either" in this context indirectly implies that these two options are functionally equivalent and that the user could therefore choose either one and receive the same level of security benefit. Well, as you might guess, this sparked a vast amount of debate among webappsec folks as to the amount of protection gained from a source code review vs. using a web application firewall. To make the waters even murkier, web vulnerability scanner/service vendors were asking the PCI Council whether users could run these tools instead of conducting an actual source code review. So users who were attempting to become compliant with Requirement 6.6 weren't sure what they needed to do or which tool or process was the "best" approach...

I believe that an important issue being forgotten when people think of "source code review" is that the actual review is only one portion of the overall process. Most people do not factor in the remediation portion. I could probably be convinced that a manual source code review vs. a review with a source code analysis tool vs. running a web vulnerability scanner could yield roughly similar results - they identify what the problems are. What about fixing them?

This is the core issue in 6.6 - to implement some sort of remediation to prevent successful web attacks. Why do I believe this? Well, if you refer to the PCI DSS Security Audit Procedures document, it outlines how a PCI auditor will evaluate each requirement to confirm whether or not the organization is in compliance. Section 6.6 - Testing Procedures - states the following:

"6.6 For web-based applications, ensure that one of the following methods are in place as follows:

  • Verify that custom application code is periodically reviewed by an organization that specializes in application security; that all coding vulnerabilities were corrected; and that the application was re-evaluated after the corrections"

As you can see, the goal of this section is to show not only that vulnerabilities were identified but also that they were fixed. So whether the vulnerabilities were identified by source code review or by a scanner does not seem to be the main issue for PCI; the question is whether the vulnerability was actually fixed. It is the process of actually remediating the vulnerabilities that is taking entirely too long for organizations, if it happens at all. I mean, how many times does an Approved Scanning Vendor (ASV) find the exact same vulns showing up in scan after scan? They can quickly show the customer what and where the problems are, but the customer just can't fix them for a variety of reasons:

  • Regression Testing Time: Any source code change requires extensive regression testing in numerous environments, which may delay deployment to production by many weeks or even months.
  • Fixing Custom Code is Cost Prohibitive: An in-house web assessment identifies vulnerabilities in your custom-coded web application, but it is too expensive to recode the application.
  • Legacy Code/Breaking Functionality: Due to support or business requirements, legacy application code cannot be patched because prior fixes broke functionality. There may even be licensing issues where the vendor will not allow changes to the code.
  • Outsourced Code: Outsourced applications would require a new project to fix.
  • Certification and Accreditation is a Pain: For government organizations, the C&A process is very time consuming, and any change to the source code would require a new one.

Whatever the reason, current SDLC processes for quickly fixing vulnerabilities found in production are lacking. This brings us to the web application firewall. If you look at the 2nd part of the 6.6 testing procedures, it states this for WAFs -

"Verify that an application-layer firewall is in place in front of web-facing applications to detect and prevent web-based attacks."

Notice that the WAF has to be in blocking mode! This, again, supports the idea of remediation. Simply deploying a WAF is not enough to comply with Requirement 6.6; you need to be blocking attacks (mainly SQL Injection and XSS, as they are the only two considered HIGH severity). It is for these reasons that I believe the "intent" of 6.6 is geared towards remediation efforts and not just identification tasks.

Using Transactional Variables Instead of SecRuleRemoveById

Using SecRuleRemoveById to handle false positives

The SecRuleRemoveById directive is most often used when ModSecurity users are trying to deal with a false positive. Used on its own, it is a global directive that disables a previously defined rule based on its rule ID number. While users can technically take this approach and just use SecRuleRemoveById on its own, we caution against this. Just because a rule triggered a false positive does not mean that the only recourse is to disable the rule entirely! Remember, the rule was created to address a specific security issue, so every effort should be made to disable a rule, or make an exception to it, only in specific cases.

Limitations of SecRuleRemoveById

The problem is that SecRuleRemoveById is somewhat limited in its capabilities for selectively disabling rules. One common method of attempting to selectively disable a ModSecurity rule is to nest the SecRuleRemoveById directive inside an Apache scope container (such as <Location>) like this -

<Location /path/to/foo.php>
    SecRuleRemoveById 950009
</Location>

There currently aren't many other options for using SecRuleRemoveById to disable a rule other than keying on the URI location as shown above. A similar issue was identified with other global directives and was addressed in ModSecurity 2.0 by making it possible to update those settings on a per-rule basis using the "ctl:" action. In a future version of ModSecurity we will implement a "ctl:ruleRemoveById" action to handle this. In the meantime, however, what else can a user do to selectively disable rules based on arbitrary request data?

Using Transactional Variables (TX)

The approach that I am going to discuss is meant as an example only, and its usage should be fully considered prior to implementation. I believe that the TX variable is not currently widely used by ModSecurity users. This may be for two main reasons: 1) the Core Rules don't use them, and 2) we don't have proper "use-case" documentation showing how you might use them more effectively. It is with the latter issue that I hope this post will help.

Transaction variables are really cool and Ivan explained their general usage in a SecurityFocus interview. Here are the relevant sections -

The addition of custom variables in ModSecurity v2.0 (along with a number of related improvements) marks a shift toward providing a generic tool that you can use in almost any way you like. Variables can be created using variable expansion or regular expression sub-expressions. Special supports exists for counters, which can be created, incremented, and decremented. They can also be configured to expire or decrease in value over time. With all these changes ModSecurity essentially now provides a very simple programming language designed to deal with HTTP. The ModSecurity Rule Language simply grew organically over time within the constraints of the Apache configuration file.

In practical terms, the addition of variables allows you to move from the "all-or-nothing" type of rules (where a rule can only issue warnings or reject transactions) to a more sensible anomaly-based approach. This increases your options substantially. The all-or-nothing approach works well when you want to prevent exploitation of known problems or enforce positive security, but it does not work equally well for negative security style detection. For the latter it is much better to establish a per-transaction anomaly score and have a multitude of rules that will contribute to it. Then, at the end of your rule set, you can simply test the anomaly score and decide what to do with the transaction: reject it if the score is too large or just issue a warning for a significant but not too large value.

What I am about to show is an implementation of this concept.

Enabling/Disabling rules using TX variables

The first step in this process is to update your modsecurity_crs_15_customrules.conf file to specify which rules will be active. If you aren't familiar with the modsecurity_crs_15_customrules.conf file and its usage, please see this prior blog post. The following entry uses the SecAction directive to set two different TX variables -

# Set the enforce variable to 0 to disable and 1 to enable
# Rule ID 950002 is for "System Command Access"
SecAction "phase:1,pass,nolog,setvar:tx.ruleid_950002_enforced=1, \
setvar:tx.ruleid_950002_matched=0"

As the comment text indicates, you can quickly toggle whether or not this rule is active by changing the tx.ruleid_950002_enforced variable to 0. With this directive, every request will have these two TX variables initially set. If you have ever seen one of those nature shows on television where the researchers capture an animal, tag it and then release it back into the wild, we are essentially doing the same thing. We are just "tagging" the current request with some data that will be updated and/or evaluated by later rules.

Altering the Core Rules

The next step in this process is to update the individual Core Rules files so that instead of applying a disruptive action (such as deny), the rules only set a new TX variable upon a match. The idea is to decouple the detection of the attack pattern in the transaction from the application of the disruptive action (which will happen in the next step). Here is an example from the modsecurity_crs_40_generic_attacks.conf file for the command access rule -

#
# Command access
#
SecRule REQUEST_FILENAME "\b(?:n(?:map|et|c)|w(?:guest|sh)|cmd(?:32)?|telnet|rcmd|ftp)\.exe\b" \
        "capture,t:htmlEntityDecode,t:lowercase,log,pass,id:'12345',msg:'System Command Access. \
Matched signature <%{TX.0}>',setvar:tx.ruleid_950002_matched=1"

Now, if an inbound request matches this rule, the TX variable called "ruleid_950002_matched" will be set to "1". This updates the original setting of the variable from the SecAction in the modsecurity_crs_15_customrules.conf file. The rule will also log the match to the error_log file.

Evaluating the TX variables for blocking

The next step is to add a new rule to your modsecurity_crs_60_customrules.conf file to actually implement the blocking aspect of this process -

SecRule TX:RULEID_950002_ENFORCED "@eq 1" "chain,t:none,ctl:auditLogParts=+E,deny,log, \
auditlog,status:501,msg:'System Command Access. Matched signature <%{TX.0}>',id:'950002',severity:'2'"
SecRule TX:RULEID_950002_MATCHED "@eq 1"

The above example is a chained rule set where the first line checks whether this rule should even be evaluated. If the TX value is set to 1 (meaning yes, we are enforcing this rule), ModSecurity moves on to the 2nd part of the chained rule and checks whether the matched TX value is 1 (meaning that the inbound request matched the RegEx check from the modsecurity_crs_40_generic_attacks.conf file). If both of these TX values evaluate to true, the entire chained rule matches, the actions on the 1st line are triggered and the request is denied. Here is what the short error_log message would look like -

[Sat Jun 23 18:04:54 2007] [error] [client 192.168.1.103] ModSecurity: Access denied with code 501 (phase 2). \
Operator EQ match: 1. [id "950002"] [msg "System Command Access. Matched signature "] [severity "CRITICAL"] \
[hostname "www.example.com"] [uri "/bin/ftp.exe"] [unique_id "@D6NJMCoD4QAABSNAoMAAAAA"]

What does this approach do for you?

At this point, you may be asking "OK, how are these rules any different from the Core Rules? Didn't you just make the rules more complex?" It is true that, functionally speaking, these new rules work exactly the same as the current Core Rule ID 950002. If a client sent a request with one of those OS commands in it, it would be blocked by either rule set.

Advantages of this approach

The advantage of using this approach is that you now have extended flexibility to decide under what circumstances a rule will be evaluated and when an exception to a rule can be made.

1) You can disable rules in phase:1. With the current approach of SecRuleRemoveById being used inside Apache scope directives, the exclusion can only take effect in phase:2 or later. With this approach, you can easily create a rule that runs in phase:1, evaluates some variable (perhaps a remote IP address or something similar) and then simply sets "setvar:tx.ruleid_950002_enforced=0" to disable that rule.
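
For example (the address is made up), a phase:1 rule like the following could switch off enforcement of rule 950002 for requests coming from a trusted host:

SecRule REMOTE_ADDR "^10\.1\.1\.50$" \
        "phase:1,pass,nolog,setvar:tx.ruleid_950002_enforced=0"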

2) Besides deciding whether or not the rule itself will be evaluated, you can also selectively decide whether an inbound request counts as a match. Let's say that you keep getting a false positive on rule ID 950002 when a client uses a specific web client (User-Agent string). You can then easily add a rule to your modsecurity_crs_60_customrules.conf file that checks for this User-Agent value and uses "setvar:tx.ruleid_950002_matched=0" to set the TX variable back to 0 even if the rule had matched in the modsecurity_crs_40_generic_attacks.conf file :) Here is an example rule you would place in the *60* file before the blocking rule -

SecRule REQUEST_HEADERS:User-Agent "^Browser_1234$" \
"phase:2,log,t:none,id:'123456',setvar:tx.ruleid_950002_matched=0"

As you can see, using this approach you have much more flexibility to determine when and where you want to implement an exception to a rule and you can then use "setvar" to easily change the TX variables. This provides you with many more options than using a global directive.

Disadvantages of this approach

1) This approach pretty much goes against the recommendations that I have been promoting previously about trying to limit editing of the Core Rules themselves.

2) This approach also introduces more directives than would normally be present in your configuration. As we have stated in many previous posts, the more rules you have, the higher the impact on performance, so users who are concerned with performance may not want to use this approach.

Remember, however, that I said that the purpose of this post is simply to present an alternative approach to evaluating requests and to show a use-case example of using TX variables.
