ModSecurity Blog: PCI

PCI Council clarifies Requirement 6.6, ends ambiguities

If you care about the PCI standard, you should head over to my personal blog, where I have published a summary of the clarifications made by the PCI Council regarding Requirement 6.6 (code reviews and application firewalls).

PCI Requirement 6.6 Is About Remediation

There have been many heated debates in web security circles around the wording of PCI DSS 1.1 Requirement 6.6, and they all center on the semantics of the word "either". Requirement 6.6 states the following:

"6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:

  • Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
  • Installing an application layer firewall in front of web-facing applications.

Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement."

The word "either" in this context implies that the two options are functionally equivalent, and that the user could therefore choose either one and receive the same level of security benefit. Well, as you might guess, this sparked a vast amount of debate among webappsec folks about how much protection is gained from a source code review vs. a web application firewall. To make the waters even murkier, web vulnerability scanner/service vendors asked the PCI Council whether users could run their tools instead of conducting an actual source code review. So users attempting to become compliant with Requirement 6.6 weren't sure what they needed to do, or which tool or process was the "best" approach...

I believe an important issue that gets forgotten when people think of "source code review" is that the review itself is only one portion of the overall process. Most people do not factor in the remediation portion. I could probably be convinced that a manual source code review, a review with a source code analysis tool, and a run of a web vulnerability scanner would yield roughly similar results - they identify what the problems are. But what about fixing them?

This is the core issue in 6.6 - implementing some sort of remediation to prevent successful web attacks. Why do I believe this? Well, if you refer to the PCI DSS Security Audit Procedures document, it outlines how a PCI auditor will evaluate each requirement to confirm whether or not the organization is in compliance. The Section 6.6 testing procedures state the following:

"6.6 For web-based applications, ensure that one of the following methods are in place as follows:

  • Verify that custom application code is periodically reviewed by an organization that specializes in application security; that all coding vulnerabilities were corrected; and that the application was re-evaluated after the corrections"

As you can see, the goal of this section is to show not only that vulnerabilities were identified but that they were also fixed. So whether the vulnerabilities were identified by source code review or by a scanner does not seem to be PCI's main concern; the question is whether the vulnerability was actually fixed. It is the process of actually remediating vulnerabilities that takes organizations entirely too long, if it happens at all. I mean, how many times does an Approved Scanning Vendor (ASV) find the exact same vulnerabilities showing up in scan after scan? The scans quickly show the customer what and where the problems are, but the customer just can't fix them, for a variety of reasons:

  • Regression Testing Time: Any source code change requires extensive regression testing in numerous environments, which may delay production deployment by many weeks or even months.
  • Fixing Custom Code is Cost Prohibitive: An in-house web assessment identifies vulnerabilities in your custom-coded web application, but it is too expensive to recode the application.
  • Legacy Code/Breaking Functionality: Due to support or business requirements, legacy application code cannot be patched because prior fixes broke functionality. There may even be licensing issues where the vendor will not allow changes to the code.
  • Outsourced Code: Outsourced applications would require a new project to fix.
  • Certification and Accreditation is a Pain: For government organizations, the C&A process is very time-consuming, and any change to the source code would require a new one.

Whatever the reason, current SDLC processes for quickly fixing vulnerabilities found in production are lacking. This brings us to the web application firewall. The second part of the 6.6 testing procedures states this for WAFs:

"Verify that an application-layer firewall is in place in front of web-facing applications to detect and prevent web-based attacks."

Notice that the WAF has to be in blocking mode! This again supports the idea of remediation. Merely deploying a WAF is not enough to comply with Requirement 6.6 - you need to be blocking attacks (mainly SQL Injection and XSS, as they are the only two considered HIGH severity). It is for these reasons that I believe the "intent" of 6.6 is geared toward remediation efforts and not just identification tasks.
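To make the detection-vs.-prevention distinction concrete, here is a minimal ModSecurity configuration sketch. The engine mode and rule id shown are illustrative only (the SQL injection pattern is deliberately simplistic - real deployments would use a full rule set such as the Core Rules rather than a one-off regex):

```apache
# Detection-only mode: rules match and log, but disruptive actions
# (deny, drop, redirect) never fire. This identifies attacks but does
# not prevent them - arguably not enough for 6.6's "detect and prevent".
# SecRuleEngine DetectionOnly

# Blocking mode: disruptive actions take effect.
SecRuleEngine On

# Hypothetical rule (id 900101 chosen for illustration): deny requests
# whose arguments contain a naive SQL injection signature, returning 403.
SecRule ARGS "@rx (?i:union[\s/*]+select)" \
    "id:900101,phase:2,deny,status:403,log,msg:'SQL Injection attempt'"
```

The only change between an identification-oriented deployment and a remediation-oriented one is that single engine directive plus the use of disruptive actions - which is exactly the line the testing procedure's "detect and prevent" wording draws.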