
ModSecurity Blog: February 2007

ModSecurity Status Report

I enjoyed talking about ModSecurity (and web application firewalls) in front of the London OWASP Chapter last night. It's been a while since I talked about ModSecurity specifically. Most of my talks last year were generic in nature, discussing web application firewalls with ModSecurity only mentioned here or there. That was a conscious effort on my part to help users make up their own minds. But I think I've done enough of that. It's time to go back to banging my own drum, so to speak.

My talk, now available from the ModSecurity web site, is a good overview of the current state of ModSecurity. There's a bit of everything in it: why web application firewalls (with use cases), current and future ModSecurity features, and a mention of the related projects. There are only 13 slides in the presentation, but they cover a lot of ground.

Handling False Positives and Creating Custom Rules

It is inevitable: you will run into some false positives when using web application firewalls. This is not unique to ModSecurity; all web application firewalls will generate false positives from time to time. The following information will guide you through the process of identifying, fixing, implementing and testing new custom rules to address false positives.

Every rule set can produce false positives in new environments
False positives with ModSecurity + the Core Rules are mainly a by-product of the fact that the rules are generic in nature: there is no way to know in advance which web application is going to run behind them. That is why the Core Rules are geared towards blocking known bad input and enforcing HTTP compliance. This catches the vast majority of attacks.

Use DetectionOnly mode
Any new installation should initially use the log-only version of the rule set or, if no such version is available, set ModSecurity to detection-only mode with the SecRuleEngine DetectionOnly directive. After running ModSecurity in detection-only mode for a while, review the events generated and decide whether any modifications to the rule set should be made before moving to protection mode.
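For instance, a minimal detection-only configuration might look like the following sketch (the log file paths are examples; adjust them to your layout):

```apache
# Detect but do not block while tuning a new installation
SecRuleEngine DetectionOnly

# Record the events you will review before switching to protection mode
SecAuditEngine RelevantOnly
SecAuditLog logs/modsec_audit.log
```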

Don't be too hasty to remove a rule
Just because a particular rule is generating a false positive on your site does not mean that you should remove the rule entirely. Remember, these rules were created for a reason. They are intended to block a known attack. By removing this rule completely, you might expose your website to the very attack that the rule was created for. This would be the dreaded False Negative.

ModSecurity rules are open source
Thankfully, since ModSecurity's rules are open source, you can see exactly what each rule matches on, and you can create your own rules. With closed-source rules you cannot verify what a rule is looking for, so you really have no option other than removing the offending rule.

The logs are your friend
In order to verify that you indeed have a false positive, you need to review your logs. Look in the audit_log file first to see what the ModSecurity message states; it will tell you which rule triggered. The same information is also available in the error_log file. The last place to look, and actually the best source of information, is the modsec_debug.log file. This file can show everything that ModSecurity is doing, especially if you turn up SecDebugLogLevel to 9. Keep in mind, however, that increasing the verbosity of the debug log does impact performance. While increasing the verbosity for all traffic is usually not feasible, you can create a new rule that uses the "ctl" action to raise the debug log level selectively. For instance, if you identify a false positive triggered by only one specific user, you could add a rule such as this:

SecRule REMOTE_ADDR "^192\.168\.10\.69$" "phase:1,log,pass,ctl:debugLogLevel=9"

This will set the debugLogLevel to 9 only for requests coming from that specific source IP address. Perhaps that still generates a bit too much traffic. You could tighten this down a bit to increase the logging only for the specific file or argument that is causing the false positive:

SecRule REQUEST_URI "^/path/to/script.pl$" "phase:1,log,pass,ctl:debugLogLevel=9"

or

SecRule ARGS:variablename "something" "phase:1,pass,ctl:debugLogLevel=9"

Now that you have verbose information in the debug log file, you can review it to ensure that you understand which portion of the request was being inspected when the specific rule triggered, and you can also view the payload after all of the transformation functions have been applied.

Try to avoid altering the Core Rules
In general, it is recommended that you limit your alterations of the Core Rules as much as possible. The more you alter the rule files, the less likely you will be to upgrade to newer releases, since you would have to recreate your customizations. What we recommend is that you contain your changes in your own custom rules file(s) that are particular to your site. This is where you would add new signatures and also create rules to exclude false positives from the normal Core Rules files. There are two main ways to integrate your custom rules so that they work with the Core Rules.

1. Adding new white-listing rules

If you need to add new white-listing rules, for instance to allow a specific client IP address to pass through all of the ModSecurity rules, you should place this type of rule after the modsecurity_crs_10_config.conf file but BEFORE the other Core Rules. This is accomplished by creating a new rule file called modsecurity_crs_15_customrules.conf and placing it in the same directory as the other Core Rules. This assumes you are using the Apache Include directive to call up the Core Rules like this –

<IfModule security2_module>

Include conf/rules/*.conf

</IfModule>

By naming your file with the "_15_" string in it, it will be loaded just after the config file. This ensures that your new white-list rule is executed early, and you can then use actions such as allow and ctl:ruleEngine=Off to allow the request through the remainder of the rules.
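As a sketch, such a white-listing rule in modsecurity_crs_15_customrules.conf could look like this (the IP address is a placeholder; substitute your own trusted host):

```apache
# Example only: let a trusted administrative IP bypass the remaining rules
SecRule REMOTE_ADDR "^192\.168\.1\.100$" "phase:1,log,allow,ctl:ruleEngine=Off"
```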

2. Adding new negative policy rules

If you need to add new negative policy rules, such as when you need to replace a Core Rule that is causing a false positive, you should add them to a new rule file that comes AFTER all of the other Core Rules. Call this new file something like modsecurity_crs_60_customrules.conf; just make sure the number in the filename is higher than in any other rules file so it is read last. The rationale for placing these rules after the others is that you can then match your replacement rules with corresponding SecRuleRemoveById directives that disable the specific Core Rule(s) causing the false positives. It is important to note that SecRuleRemoveById must be used AFTER ModSecurity has knowledge of the rule ID you are removing. If you were to place this directive in the modsecurity_crs_15_customrules.conf file, it would not work correctly, as the rule ID you are specifying would not exist yet. That is why this directive should appear in the custom rules file that comes at the end. Using this method allows you to turn off rules without having to go into the Core Rules files and comment out or update specific rules.
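Putting this together, the tail-end custom rules file might look like the following sketch (the target, pattern and message are placeholders, and the id is taken from the custom range):

```apache
# modsecurity_crs_60_customrules.conf -- loaded after all of the Core Rules files

# First, the replacement rule (placeholder pattern), using an id from the custom range
SecRule ARGS "some-attack-pattern" "log,deny,id:2,severity:2,msg:'Custom replacement rule'"

# Then disable the original Core Rule, whose id is known to ModSecurity by this point
SecRuleRemoveById 950004
```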

Fixing the false positive
OK, so now you have identified the specific Core Rule that is causing the false positive. Let’s say that the rule that is causing a false positive is the following one in the modsecurity_crs_40_generic_attacks.conf file –

# XSS
SecRule REQUEST_FILENAME|ARGS|ARGS_NAMES|REQUEST_HEADERS \
"(?:\b(?:on(?:(?:mo(?:use(?:o(?:ver|ut)|down|move|up)|ve)|key(?:press|down|up)|c(?:hange|lick)|s(?:elec|ubmi)t|(?:un)?load|dragdrop|resize|focus|blur)\b\W*?=|abort\b)|(?:l(?:owsrc\b\W*?\b(?:(?:java|vb)script|shell)|ivescript)|(?:href|url)\b\W*?\b(?:(?:java|vb)script|shell)|mocha):|type\b\W*?\b(?:text\b(?:\W*?\b(?:j(?:ava)?|ecma)script\b| [vbscript])|application\b\W*?\bx-(?:java|vb)script\b)|s(?:(?:tyle\b\W*=.*\bexpression\b\W*|ettimeout\b\W*?)\(|rc\b\W*?\b(?:(?:java|vb)script|shell|http):)|(?:c(?:opyparentfolder|reatetextrange)|get(?:special|parent)folder|background-image:|@import)\b|a(?:ctivexobject\b|lert\b\W*?\())|<(?:(?:body\b.*?\b(?:backgroun|onloa)d|input\b.*?\\btype\b\W*?\bimage)\b|!\[CDATA\[|script|meta)|.(?:(?:execscrip|addimpor)t|(?:fromcharcod|cooki)e|innerhtml)\b)" \
"log,id:950004,severity:2,msg:'Cross-site Scripting (XSS) Attack'"

Your next step is to copy and paste it into the new modsecurity_crs_60_customrules.conf file. Let's assume that the false positive with this rule occurs when it inspects a specific portion of your Cookie header called Foo. The Cookie data is included within the REQUEST_HEADERS variable. You now need to make a few edits to the rule to remove the false hit. The relevant updates are in the variable list and in the "id" action -

# XSS
SecRule REQUEST_FILENAME|ARGS|ARGS_NAMES|REQUEST_HEADERS|!REQUEST_HEADERS:Cookie|REQUEST_COOKIES|REQUEST_COOKIES_NAMES|!REQUEST_COOKIES:/^Foo$/ \
"(?:\b(?:on(?:(?:mo(?:use(?:o(?:ver|ut)|down|move|up)|ve)|key(?:press|down|up)|c(?:hange|lick)|s(?:elec|ubmi)t|(?:un)?load|dragdrop|resize|focus|blur)\b\W*?=|abort\b)|(?:l(?:owsrc\b\W*?\b(?:(?:java|vb)script|shell)|ivescript)|(?:href|url)\b\W*?\b(?:(?:java|vb)script|shell)|mocha):|type\b\W*?\b(?:text\b(?:\W*?\b(?:j(?:ava)?|ecma)script\b| [vbscript])|application\b\W*?\bx-(?:java|vb)script\b)|s(?:(?:tyle\b\W*=.*\bexpression\b\W*|ettimeout\b\W*?)\(|rc\b\W*?\b(?:(?:java|vb)script|shell|http):)|(?:c(?:opyparentfolder|reatetextrange)|get(?:special|parent)folder|background-image:|@import)\b|a(?:ctivexobject\b|lert\b\W*?\())|<(?:(?:body\b.*?\b(?:backgroun|onloa)d|input\b.*?\\btype\b\W*?\bimage)\b|!\[CDATA\[|script|meta)|.(?:(?:execscrip|addimpor)t|(?:fromcharcod|cooki)e|innerhtml)\b)" \
"log,id:1,severity:2,msg:'Cross-site Scripting (XSS) Attack'"

This updated rule is doing three things –

1. We are using the exclamation point character to create an inverted rule, meaning do NOT inspect the REQUEST_HEADERS variable whose name is Cookie. The problem here is that this location on its own is too generic/broad: we are only interested in excluding one specific cookie from this check, not the entire Cookie value. We don't want to allow other possible XSS attack vectors within the Cookie value.

2. Since we still want to inspect the cookie values, we have added cookie variables that were not present before: both REQUEST_COOKIES and REQUEST_COOKIES_NAMES are now included in the check. We then use another inverted rule to exclude from the check any cookie whose name is exactly "Foo". This is accomplished by using a regular expression argument to the REQUEST_COOKIES variable.

3. Finally, we are also updating the "id" meta-data action, changing it to a new number from the custom rule range. The range 1-99999 is reserved for your internal use.

The last thing to do is to use SecRuleRemoveById to disable the Core Rule that was causing the problem –

SecRuleRemoveById 950004

Testing the new rules
The final step is to test your new configuration and verify that the old rule no longer executes and that the new rule does not trigger a false positive. The easiest method is to resend the previously offending request to the web server and then monitor the audit_log file to see whether the request is blocked or the ModSecurity message is generated.

Easy Implementation of new Core Rules
With this methodology, you can create custom exclusions and fix false positives, and it also allows for easy updating of the Core Rules themselves. What we don't want is for users to alter the Core Rules files so extensively for their environment that they do not want to upgrade when new Core Rules releases are available, for fear of having to re-implement all of their custom configurations. With this approach, you can download new Core Rules versions as they are released, copy over your ModSecurity custom rules files, and you are ready to go!

ModSecurity Gets Another Team Member!

Some of you may recall that I posted a job advertisement for a C programmer last year. Although this position was filled many weeks ago, Brian Rectanus, our newest team member, started working just this week. Brian's main task will be to work on ModSecurity, making his arrival a significant milestone for the project. Needless to say, this makes me very happy. I have a very long list of TODO items, and some of them have been waiting in the queue for years!

Dealing with Impedance Mismatch

In my previous post I described a potential problem with web application firewalls protecting web applications. After getting your attention it is only fair to follow up with a solution.

Firstly, the problem is not as serious as it may appear at first glance. Secondly, the solution, in the cases where rules might be affected, is pretty straightforward: it's a simple matter of rewriting the rules to avoid the slippery path (or not writing them to be vulnerable to evasion in the first place). Keep in mind, however, that the most important step in dealing with impedance mismatch is to understand that the problem exists. Everything else is just a technicality.

It turns out that the impedance mismatch problem typically affects only those rules that are designed to focus on named parameters and cookies, a feature that is not used frequently. For example, the Core Rules project does not need to do this because it is generic in nature: it does not care what parameters are called. What it really cares about is the payload (either parameter names or values), and this is where inspection takes place, irrespective of impedance mismatch.

If you do need to address problems that manifest only with certain named parameters, then it's probably because you want to deal with a particular problem in the application you are using. That means you have a context in which you work, and you can take simple precautions. You should always try to avoid the named-variable approach (e.g. ARGS:parameter) and address all variables instead (i.e. ARGS). My favourite has always been a rule that warns about parameters with strange characters in their names (a space would be a strange character in this context).

SecRule ARGS_NAMES "!^[][a-zA-Z0-9_.]+$"

This rule will not work equally well in all environments; you should customise it to suit your circumstances. Strictly speaking, it is not designed to deal with impedance mismatch but to unearth unusual requests. If unusual characters appear in parameter names (and this is not by design in the application), you will want to know about it so you can investigate.
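To turn the fragment above into a complete rule, you would add a phase and actions; the following is one possible form (the actions and message text are illustrative, not prescribed):

```apache
# Warn (but do not block) when a parameter name contains unexpected characters
SecRule ARGS_NAMES "!^[][a-zA-Z0-9_.]+$" "phase:2,log,pass,msg:'Unusual characters in parameter name'"
```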

As far as ModSecurity is concerned, we will pro-actively research other web application environments and document any issues we discover.

Testing Core Rules Protection For An Example SQL Injection Vulnerability

SANS released the 6th edition of their @RISK weekly newsletter. In it, a total of 44 new web application vulnerabilities were identified. Keep in mind that almost all of these vulnerabilities (I didn't get a chance to verify every one of them) can be mitigated with the use of the Core Rules. For example, take this specific vulnerability:

07.6.37 CVE: Not Available
Platform: Web Application - SQL Injection
Title: ExoPHPDesk FAQ.PHP SQL Injection
Description: ExoPHPDesk is a web-based help desk application. It is
prone to an SQL injection issue because it fails to sufficiently
sanitize user-supplied data to the "id" parameter of the "faq.php"
script before using it in an SQL query. ExoPHPDesk versions 1.2.1 and
earlier are affected.
Ref: http://www.securityfocus.com/bid/22338

If you go to the SecurityFocus page and click on the "exploit" link you will see this example URL attack:

http://www.example.com/faq.php?action=&type=view&s=&id=-1'%20union%20select%200,concat(char(85),char(115),
char(101),char(114),char(110),char(97),char(109),char(101),char(58),name,char(32),char(124),char(124),char(32),
char(80),char(97),char(115),char(115),char(119),char(111),char(114),char(100),char(58)
,pass),0,0,0,0,0%20from%20phpdesk_admin/*

If you were to send this request to a host that is protected by ModSecurity + the most recent release of the Core Rules, it would be identified by the following rule -

# SQL injection
SecRule REQUEST_FILENAME|ARGS|ARGS_NAMES|REQUEST_HEADERS|!REQUEST_HEADERS:Referer \
"(?:\b(?:(?:s(?:elect\b(?:.{1,100}?\b(?:(?:length|count|top)\b.{1,100}?\bfrom|from\b.{1,100}?\bwhere)|.*?\b(?:d(?:ump\b.*\bfrom|ata_type)|(?:to_(?:numbe|cha)|inst)r))|p_(?:(?:addextendedpro|sqlexe)c|(?:oacreat|prepar)e|execute(?:sql)?|makewebtask)|ql_(?:longvarchar|variant))|xp_(?:reg(?:re(?:movemultistring|ad)|delete(?:value|key)|enum(?:value|key)s|addmultistring|write)|e(?:xecresultset|numdsn)|(?:terminat|dirtre)e|availablemedia|loginconfig|cmdshell|filelist|makecab|ntsec)|u(?:nion\b.{1,100}?\bselect|tl_(?:file|http))|group\b.*\bby\b.{1,100}?\bhaving|load\b\W*?\bdata\b.*\binfile|(?:n?varcha|tbcreato)r|autonomous_transaction|open(?:rowset|query)|dbms_java)\b|i(?:n(?:to\b\W*?\b(?:dump|out)file|sert\b\W*?\binto|ner\b\W*?\bjoin)\b|(?:f(?:\b\W*?\(\W*?\bbenchmark|null\b)|snull\b)\W*?\()|(?:having|or|and)\b\s+?(?:\d{1,10}|'[^=]{1,10}')\s*?[=<>]+|(?:print\]\b\W*?\@|root)\@|c(?:ast\b\W*?\(|oalesce\b))|(?:;\W*?\b(?:shutdown|drop)|\@\@version)\b|'(?:s(?:qloledb|a)|msdasql|dbo)')" \
"capture,t:replaceComments,ctl:auditLogParts=+E,log,auditlog,msg:'SQL Injection Attack. Matched signature <%{TX.0}>',id:'950001',severity:'2'"

The resulting alert message would look like this:

[Wed Jan 17 11:01:16 2007] [error] [client 192.168.10.10] ModSecurity: Warning. Pattern match
"(?:\\\\b(?:(?:s(?:elect\\\\b(?:.{1,100}?\\\\b(?:(?:length|count|top)\\\\b.{1,100}?\\\\bfrom|
from\\\\b.{1,100}?\\\\bwhere)|.*?\\\\b(?:d(?:ump\\\\b.*\\\\bfrom|ata_type)|(?:to_(?:numbe|cha)|
inst)r))|p_(?:(?:addextendedpro|sqlexe)c|(?:oacreat|prepar)e|execute(?:sql)?|makewebt ..." at
ARGS:id. [id "950001"] [msg "SQL Injection Attack. Matched signature <union select>"] [severity "CRITICAL"]
[hostname "www.example.com"] [uri "/faq.php?action=&type=view&s=&id=-1'%20union%20select%200,concat(char(85),
char(115),char(101),
char(114),char(110),char(97),char(109),char(101),char(58),name,char(32),char(124),char(124),
char(32),char(80),char(97),char(115),char(115),char(119),char(111),char(114),char(100),char(58)
,pass),0,0,0,0,0%20from%20phpdesk_admin/*"] [unique_id "lqn99sCoChsAAHpfWokAAAAA"]

One very important note here:
By default, this SQL injection rule inherits the following SecDefaultAction directive in the modsecurity_crs_40_generic_attacks.conf file -

SecDefaultAction "log,pass,phase:2,status:500,t:urlDecodeUni,t:htmlEntityDecode,t:lowercase"

This means that while it did identify the attack, it did not block it. Your best course of action when implementing the Core Rules is to run with SecRuleEngine DetectionOnly until you have verified that there are no false positives in your environment. After that, you should change the SecDefaultAction settings within the rules files to use the "deny" action in order to actually prevent the attacks.
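Such a blocking variant of the default action could read as follows (the status code of 403 is one common choice, not a requirement):

```apache
# Blocking variant of the Core Rules default action -- switch to this only after tuning
SecDefaultAction "log,deny,phase:2,status:403,t:urlDecodeUni,t:htmlEntityDecode,t:lowercase"
```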

It is a good idea to periodically test out these types of exploit requests to ensure that your ModSecurity installation is functioning properly.

HTTPrint vs. ModSecurity

There was a great email posted to the ModSecurity users mailing list today asking about ModSecurity's ability (or inability) to trick web server fingerprinting tools such as HTTPrint. The short answer is YES, ModSecurity 2.x can be used to effectively ruin the accuracy of HTTPrint. The most important point here is that ModSecurity 2.x now hooks into the Apache PostReadRequest portion of the request cycle (phase:1), where previously it would run much later, in the Fixup phase (phase:2). In order to understand how HTTPrint works, I suggest that you read this supporting information.

There are many different possibilities for mitigating the effectiveness of these types of fingerprinting scanners. For complete information, I suggest you read the http fingerprinting Appendix section I wrote as part of the WASC Threat Classification document.
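One of the simplest countermeasures, for example, is altering the Server response banner with ModSecurity's SecServerSignature directive (the fake banner below is just an example):

```apache
# ServerTokens must be Full so there is a complete banner for ModSecurity to overwrite
ServerTokens Full
SecServerSignature "Microsoft-IIS/5.0"
```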

HTTPrint is not a typical "banner grabbing" application; it has more logic to it. Its main fingerprinting technique relies on the semantic differences in how web servers/applications respond to various stimuli. Let's take a look.

If I run HTTPrint v0.301 (the most recent version) against a default Apache 2.2.3 web server, it reports the following:

$ ./httprint -h 192.168.10.27 -s signatures.txt
httprint v0.301 (beta) - web server fingerprinting tool
(c) 2003-2005 net-square solutions pvt. ltd. - see readme.txt
http://net-square.com/httprint/
[email protected]

Finger Printing on http://192.168.10.27:80/
Finger Printing Completed on http://192.168.10.27:80/
--------------------------------------------------
Host: 192.168.10.27
Derived Signature:
Apache/2.2.0 (Fedora)
9E431BC86ED3C295811C9DC5811C9DC5050C5D32505FCFE84276E4BB811C9DC5
0D7645B5811C9DC5811C9DC5CD37187C11DDC7D7811C9DC5811C9DC58A91CF57
FCCC535B6ED3C295FCCC535B811C9DC5E2CE6927050C5D336ED3C2959E431BC8
6ED3C295E2CE69262A200B4C6ED3C2956ED3C2956ED3C2956ED3C295E2CE6923
E2CE69236ED3C295811C9DC5E2CE6927E2CE6923

Banner Reported: Apache/2.2.0 (Fedora)
Banner Deduced: Apache/2.0.x
Score: 140
Confidence: 84.34
------------------------
Scores:
Apache/2.0.x: 140 84.34
Apache/1.3.[4-24]: 132 68.91
Apache/1.3.27: 131 67.12
Apache/1.3.26: 130 65.36
Apache/1.3.[1-3]: 127 60.28
--CUT--

As you can see, it correctly fingerprinted my server as Apache 2.x. The only reason it wasn't more accurate is that it didn't have a signature for 2.2.3 yet (but that is easily fixed by pasting the fingerprint above into the signatures.txt file with the proper label). Anyway, after running this scan, my Apache logs show this info:

192.168.10.69 - - [15/Jan/2007:12:26:20 -0500] "\x16\x03" 501 214
192.168.10.69 - - [15/Jan/2007:12:26:20 -0500] "GET / HTTP/1.0" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/1.0" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "OPTIONS * HTTP/1.0" 200 -
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "OPTIONS / HTTP/1.0" 200 -
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET /antidisestablishmentarianism HTTP/1.0" 404 226
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "PUT / HTTP/1.0" 405 231
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "JUNKMETHOD / HTTP/1.0" 501 222
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / JUNK/1.0" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "get / http/1.0" 501 215
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "POST / HTTP/1.0" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET /cgi-bin/ HTTP/1.0" 403 210
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET /scripts/ HTTP/1.0" 404 206
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/0.8" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/0.9" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/1.1" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/1.2" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/1.1" 400 226
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/1.2" 400 226
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET / HTTP/3.0" 200 44
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET /.asmx HTTP/1.1" 404 203
192.168.10.69 - - [15/Jan/2007:12:26:21 -0500] "GET /../../ HTTP/1.0" 400 226

What is important to notice are the various HTTP response codes that Apache returned for the different malformed requests. There were 501s, 400s and some 200s.

Now, if we want to use ModSecurity to combat HTTPrint, we need signatures that enforce HTTP compliance, as this is the core of HTTPrint's semantic tests. Fortunately, the ModSecurity Core Rules come with many rules that help to enforce HTTP compliance. After implementing ModSecurity + the Core Rules, if I re-run HTTPrint you will see that it isn't even able to complete its tests and it errors out:

$ ./httprint -h 192.168.10.27 -s signatures.txt
httprint v0.301 (beta) - web server fingerprinting tool
(c) 2003-2005 net-square solutions pvt. ltd. - see readme.txt
http://net-square.com/httprint/
[email protected]

Finger Printing on http://192.168.10.27:80/
Finger Printing Completed on http://192.168.10.27:80/
--------------------------------------------------
Host: 192.168.10.27
Fingerprinting Error: Invalid response from server, check configuration...
--------------------------------------------------

Now, back in the Apache access log file you will see that HTTPrint actually sent only 2 requests and ModSecurity responded with status codes of 400 for both:

# cat access_log
192.168.10.69 - - [15/Jan/2007:12:57:27 -0500] "\x16\x03" 400 226
192.168.10.69 - - [15/Jan/2007:12:57:27 -0500] "GET / HTTP/1.0" 400 226

The error_log shows why ModSecurity blocked the requests:

[Mon Jan 15 12:57:27 2007] [error] [client 192.168.10.69] ModSecurity: Access denied with code 400 (phase 1). Operator EQ match: 0. [id "960008"] [msg "Request Missing a Host Header"] [severity "WARNING"] [uri ""] [unique_id "@om-EcCoChsAABmtbhgAAAAA"]
[Mon Jan 15 12:57:27 2007] [error] [client 192.168.10.69] ModSecurity: Access denied with code 400 (phase 1). Operator EQ match: 0. [id "960015"] [msg "Request Missing an Accept Header"] [hostname "192.168.10.27"] [uri "/"] [unique_id "@onadMCoChsAABmubtAAAAAB"]

As you can see, HTTPrint did not send certain mandatory request headers (Host and Accept), so ModSecurity blocked the requests. Because the first few requests sent by HTTPrint are supposed to baseline the normal response, the 400 status codes were not expected and it therefore errored out.

Even if I removed these two Core Rules signatures, there are other rules that would still block the requests. These include the check for a User-Agent request header and the check verifying that the Host header is an actual name and not an IP address (as the latter is normally indicative of worm activity). There are also signatures that enforce only the GET|HEAD|POST request methods and ensure that the request line ends in HTTP and one of the legitimate protocol versions (0.9, 1.0 or 1.1). With this layered approach, ModSecurity can provide an effective defense against the HTTPrint scanner.
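As a rough sketch (not the literal Core Rules text), such method and protocol checks could be written like this:

```apache
# Illustrative equivalents of the protocol-enforcement rules described above
SecRule REQUEST_METHOD "!^(?:GET|HEAD|POST)$" "phase:1,log,deny,status:501,msg:'Method is not allowed'"
SecRule REQUEST_PROTOCOL "!^HTTP/(?:0\.9|1\.0|1\.1)$" "phase:1,log,deny,status:400,msg:'Invalid HTTP protocol version'"
```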

PHP Peculiarities for ModSecurity Users

As I was reviewing the ModSecurity 2.1.0-rc7 Reference Manual I realised it did not contain some very important sections we had in the previous (ModSecurity 1.9.x) manual - those on web application firewall impedance mismatch and PHP peculiarities. Impedance mismatch is a well known problem where, because web application firewalls interpret input data independently from the systems they are protecting, there is a danger of information slipping through because of different interpretations. PHP is especially vulnerable to this issue because the engine was designed to be error friendly and "helpful".

For many years now my policy has been to document the evasion possibilities and to lay the responsibility on the ModSecurity user to read the manual. I have now decided to make the issue more visible. To start with, here's the part of the 1.9.x manual that is relevant to PHP:

  1. When "register_globals" is set to "On" request parameters are automatically converted to script variables. In some PHP versions it is even possible to override the $GLOBALS array.

  2. Whitespace at the beginning of parameter names is ignored. (This is very dangerous if you are writing rules to target specific named variables.)

  3. The remaining whitespace (in parameter names) is converted to underscores. The same applies to dots and to a "[" if the variable name does not contain a matching closing bracket. (Meaning that if you want to exploit a script through a variable that contains an underscore in the name you can send a parameter with a whitespace or a dot instead.)

  4. Cookies can be treated as request parameters.

  5. The discussion about variable names applies equally to the cookie names.

  6. The order in which parameters are taken from the request and the environment is EGPCS (environment, GET, POST, Cookies, built-in variables). This means that a POST parameter will overwrite the parameters transported on the request line (in QUERY_STRING).

  7. When "magic_quotes_gpc" is set to "On" PHP will use backslash to escape the following characters: single quote, double quote, backslash, and the null byte.

  8. If "magic_quotes_sybase" is set to "On" only the single quote will be escaped, using another single quote. In this case the "magic_quotes_gpc" setting becomes irrelevant: "magic_quotes_sybase" completely overrides the "magic_quotes_gpc" behaviour, but "magic_quotes_gpc" must still be set to "On" for the Sybase-specific quoting to work.

  9. PHP will also automatically create nested arrays for you. For example "p[x][y]=1" results in a total of three variables.
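For those who must target named parameters despite the name-mangling behaviour above, a defensive rule could flag parameter names containing the characters PHP rewrites (a sketch only; the actions and character class should be tuned to your application):

```apache
# Flag parameter names containing a space, dot or "[" -- characters PHP folds to underscores
SecRule ARGS_NAMES "[ .\[]" "phase:2,log,pass,msg:'Parameter name contains characters PHP rewrites'"
```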

(Update Feb 7) Jakub Vrána wrote to clarify that "magic_quotes_gpc" must be "On" for "magic_quotes_sybase" to work. Thanks! This is now fixed. Jakub has a post in Czech on a similar topic.

ModSecurity 2.1.0 Improvements

I have just packaged and released ModSecurity for Apache v2.1.0-rc7, in preparation for the first stable release in the 2.1.x branch. I am very fond of having many release candidates over a period of time; they play an important role in demonstrating that the process of adding new features has ended and the product is now being polished for release.

A lot of work has been done in v2.1.0, with quality being the main focus. Ryan Barnett - a well-known member of the ModSecurity community and an employee of Breach Security since last year (and thus a member of the ModSecurity project) - contributed by creating a set of regression tests and updating the documentation. Ofer (whom you already know by now as the person in charge of the Core Rules project) helped by thoroughly testing both ModSecurity and the Core Rules, all as part of our parallel effort, the ModSecurity appliance - ModSecurity Pro M1000. Their combined efforts resulted in the discovery of a number of small issues that were promptly fixed.

But even if you are not affected by any of the problems now fixed in v2.1.0, there are good reasons to upgrade: this new version is almost twice as fast for real-life traffic and uses significantly less memory.

We will officially declare v2.1.0 stable in a week or so but I urge you to take the release candidate for a spin to make sure it works for you. It's time to move on and start implementing the next batch of changes. We have some very interesting features on our TODO list!
