
ModSecurity Blog: March 2005

External Web Application Protection: Impedance Mismatch

Web application firewalls have a difficult job: they try to make sense of the data that passes by without any knowledge of the application and its business logic. The protection they provide comes from having an independent layer of security on the outside. Because data validation is done twice, security can be increased without having to touch the application. In some cases, however, the fact that everything is done twice brings problems. Problems can arise in areas where the communication protocols are not well specified, or where either the device or the application does things that are not in the specification.

The worst offender is the cookie specification. (Actually all four of them: http://wp.netscape.com/newsref/std/cookie_spec.html, http://www.ietf.org/rfc/rfc2109.txt, http://www.ietf.org/rfc/rfc2964.txt, http://www.ietf.org/rfc/rfc2965.txt.) Many of the cases that occur in real life are simply not mentioned in the specifications, leaving programmers to do what they think is appropriate. For the most part this is not a problem, because cookies are usually well formed. The problem also rarely shows itself, because most applications only parse the cookies they themselves send. It becomes a problem when you look at it from the point of view of a web application firewall, with a determined adversary trying to get past it. I’ll explain with an example.

In the 1.8.x branch, up to and including 1.8.6 (I made improvements in 1.8.7), ModSecurity used a v1 cookie parser. When I wrote the parser I thought it was really good because it could handle both v0 and v1 cookies. However, I made the mistake of not thinking like an attacker. As Stefan Esser pointed out to me recently, the differences between the v0 and v1 formats can be exploited to make a v1 parser see one cookie where a v0 parser would see more than one. Here it is:

Cookie: innocent="; nasty=payload; third="

You see, a v0 parser does not understand double quotes. It typically only looks for semi-colons and splits the header accordingly. Such a parser sees cookies “innocent”, “nasty”, and “third”. A v1 parser, on the other hand, sees only one cookie - “innocent”.
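To make the mismatch concrete, here is a minimal sketch in Python (an illustration only, not ModSecurity’s actual parser) of how a semi-colon-splitting v0-style parser and a quote-aware v1-style parser can disagree about that same header:

    # Illustration only: a naive v0-style parser and a quote-aware
    # v1-style parser applied to the same Cookie header.
    header = 'innocent="; nasty=payload; third="'

    def parse_v0(value):
        # Netscape-style: split on semi-colons, quotes have no meaning.
        cookies = {}
        for pair in value.split(";"):
            if "=" in pair:
                name, _, val = pair.partition("=")
                cookies[name.strip()] = val.strip()
        return cookies

    def parse_v1(value):
        # RFC 2109-style: a quoted value may itself contain semi-colons.
        cookies = {}
        i = 0
        while i < len(value):
            eq = value.find("=", i)
            if eq == -1:
                break
            name = value[i:eq].strip()
            j = eq + 1
            if j < len(value) and value[j] == '"':
                end = value.find('"', j + 1)
                val = value[j + 1:end] if end != -1 else value[j + 1:]
                i = end + 1 if end != -1 else len(value)
            else:
                end = value.find(";", j)
                val = value[j:end].strip() if end != -1 else value[j:].strip()
                i = end if end != -1 else len(value)
            cookies[name] = val
            while i < len(value) and value[i] in "; ":
                i += 1
        return cookies

    print(parse_v0(header))  # {'innocent': '"', 'nasty': 'payload', 'third': '"'}
    print(parse_v1(header))  # {'innocent': '; nasty=payload; third='}

The application, behaving like the v0 parser, happily accepts the “nasty” cookie, while a firewall behaving like the v1 parser never even knows it exists.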

How does the impedance mismatch affect web application firewall users and developers? It certainly makes our lives more difficult, but that’s all right - it’s a part of the game. Developers (of web application firewalls) will have to work to incorporate better and smarter parsing routines. For example, there are two cookie parsers in ModSecurity 1.8.7, and the user can choose which one to use. (A v0 format parser is now used by default.) But such improvements, since they cannot be automated, only make using the firewall more difficult - one more thing for the users to think about and configure.

On the other hand, users who don’t want to think about cookie parsers can always fall back to those parts of HTTP that are much better defined. Headers, for example. Instead of using COOKIE_innocent to target an individual cookie, they can use HTTP_Cookie to target the whole cookie header. Other variables, such as ARGS, look at all request parameters at once, no matter how hard adversaries try to mask them.
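As a rough sketch of why this helps (again plain Python rather than ModSecurity rule syntax, and with a made-up signature), a check applied to the raw header is immune to the parsing differences:

    import re

    # Hypothetical signature for a payload we want to block (illustration only).
    signature = re.compile(r"payload")

    header = 'innocent="; nasty=payload; third="'

    # What the firewall's v1 parser reports: a single cookie.
    v1_cookies = {"innocent": '; nasty=payload; third='}

    # A rule targeting the individual cookie "nasty" never fires here,
    # because the v1 parser never produces a cookie by that name.
    print(bool(signature.search(v1_cookies.get("nasty", ""))))  # False

    # A rule targeting the whole header still sees the payload,
    # regardless of how the cookies are later split.
    print(bool(signature.search(header)))  # True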

Where Do Web Application Firewalls Fit in the Overall Defense Strategy?

Some people seem to think that, because I develop a web application firewall, I believe web application firewalls are the best thing since sliced bread and the solution to all web application security problems. It does not happen often, but when it does it’s really annoying. Since I don’t believe blindly in web application firewalls, I find it really boring to explain my opinion on this subject over and over again. So I thought it would be a good idea to write about it here, so I can simply point these people to my blog and be done with it. So here it is.

In theory, web application security is easy. By now we can say the subject is well researched and documented, so the “only” thing we need to do is work with people who understand it. In real life, however, there will be many obstacles. (These obstacles are not specific to web application security, but to security in general. You could even expand the scope to include software quality to some extent. But I digress.) Some of the problems are:

  • People don’t understand security. It’s particularly bad if the problem is with the management or the project manager. When that happens there won’t be any mention of security in the requirements (and consequently in the schedule). Even if everyone else cares about security they won’t be able to do much about it.
  • When the project manager understands security, it is then up to her to make sure the other people involved in the project (developers, architects, administrators, etc.) understand security too. This variant, although better, is still not easy to pull off, because one is always faced with constraints of various kinds: time, money, limited resources (e.g. inexperienced developers).
  • The cost of development is an important factor on almost all projects. In our current economy, security and software quality are not valued much. It is a reality of life that if you were to spend your money making a really secure product, you would probably be beaten by a competitor with an insecure product that has more features. That's why many companies choose features over security.
  • Even when you assume the best possible circumstances the development team will still fail because… we are all human and we make mistakes. With our current approach to software development it is simply not possible to produce 100% bug free software. It so happens that some of the bugs have security consequences.

Life becomes much easier once you accept that you will fail. To deal with the problem (in this case “deal” means minimize the chance of total failure) people invented an approach called defense in depth. By now, defense in depth is a well-known and widely accepted security principle. The basic idea is that you don’t want to put all your eggs in one basket. Instead, assuming any part of the system can fail, you look for ways to configure other parts, or introduce new parts, to limit the effect of the failure. The principle is easier to understand with an example. A good defense strategy would include the following elements:

  • Network firewall to protect the network (only one in this example but some highly sensitive projects require multiple firewalls)
  • Host firewalls on all servers
  • Regular monitoring of security mailing lists
  • Regular patching
  • Use of adequate logging to record relevant events
  • Active system monitoring
  • Use of host and network-based intrusion detection
  • Use of web application firewalls (web-based intrusion detection)
  • Regular independent security assessment, possibly penetration testing

The above list is just an example. I could go on adding more and more security elements. But even a short list such as this one is sufficient to demonstrate how the defense in depth principle dictates the use of multiple redundant protection systems.

As we can now see, web application firewalls are just one of the elements in the bigger picture. The way I see it, their major advantages are:

  • They allow you to perform full audit logging (yes, including the request body) and store the request information for later.
  • They can monitor the traffic to detect unusual behaviour and allow you to know when you are being attacked.
  • In some cases, they can even be configured to prevent attacks.

This is just the stuff intrusion detection and prevention systems have been doing for many years now. The only difference is that web application firewalls understand HTTP better.

Finally, there is an important truth to understand. Generic web application firewalls, same as intrusion detection systems, are only as good as the people managing them. Out of the box they don’t do much (although you will be hard pressed to get many of the vendors to agree). They must be configured properly by skilled people in order to become effective.
