PCI Surprises

By Patrick Harbauer

Whenever we perform a PCI assessment for a new client, we invariably have the Gomer Pyle “Surprise, surprise, surprise!” conversation with IT management. The outcome of that conversation is that IT security controls are more closely monitored and the overall security posture of the organization improves. Here are a few examples:

Swiss Cheese Firewalls – When we perform a PCI assessment, we take firewall rule set reviews very seriously. Besides finding the obvious “ANY ANY” rule infractions, we find rules that were meant to be “temporary for testing” or rules that are no longer needed because entire environments have been decommissioned. It isn’t uncommon to see 10-20% of the firewall rules removed or tightened so that they allow only the protocols, IPs, and ports that are actually required for the cardholder environment to function properly.
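
To make that kind of review more repeatable, a first automated pass can flag the obvious offenders before the manual analysis starts. Here is a minimal sketch, assuming a hypothetical CSV export with name, source, destination, service, and comment columns (real firewall exports vary by vendor, so the field names here are illustrative only):

```python
import csv

def flag_risky_rules(path):
    """Flag rules that use 'any' for source, destination, or service,
    plus rules whose comment suggests they were meant to be temporary."""
    findings = []
    with open(path, newline="") as handle:
        for rule in csv.DictReader(handle):
            broad = [field for field in ("source", "destination", "service")
                     if rule.get(field, "").strip().lower() == "any"]
            temporary = "temp" in rule.get("comment", "").lower()
            if broad or temporary:
                findings.append((rule, broad, temporary))
    return findings

if __name__ == "__main__":
    for rule, broad, temporary in flag_risky_rules("firewall_rules.csv"):
        reasons = []
        if broad:
            reasons.append("uses 'any' for " + ", ".join(broad))
        if temporary:
            reasons.append("comment suggests a temporary rule")
        print(f"{rule.get('name', '<unnamed>')}: " + "; ".join(reasons))
```

A pass like this only narrows the list; deciding which rules are truly unnecessary still requires the manual review described above.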

Missing Patches – Time and again we find in-scope systems that are not properly patched. This is usually due to overtaxed IT staff who can’t find the time to patch systems, or to a malfunctioning patch-management solution. In some cases, administrators have been burned by a patch that brought down an application and vow never to patch again. What we usually find is that “practice makes perfect” with patching: organizations that are up to date on patches have well-defined processes and documented procedures for performing patching. And that leads us to our next issue…

One Environment – In many cases, organizations that are not up to date with their patches do not have isolated test/development and production environments. Besides the fact that having test/development and production systems on the same network and/or servers is a PCI violation, if you do not have a test environment that mirrors production, you are more likely to break production applications when you patch. You will be much more successful staying current with patches if you have a test environment that mirrors production, where you can address issues before applying patches to production systems.

These are just a few examples of what we see when performing PCI assessments for new clients, and they illustrate some of the benefits that come out of a PCI assessment.

The value of life and acceptable risk

Is it ever okay to accept the loss of life as an acceptable risk to doing business?

First off, is this even reasonable? I believe it is. Though not the best approach to calculating the cost vs. benefit of a given security measure, it can be enlightening to look at past and present choices and see what the money spent trying to protect life indicates about the value placed on it.

But life is invaluable…

By value, I don’t simply mean money, though for simplicity’s sake I will use it for the rest of this post. Most people would say that life is invaluable. But that notion, though admirable and on some level true, is not an accurate statement. Some people would trade their own or someone else’s life for cash. Others would trade their own life for another’s. In both cases they have, perhaps unknowingly, attached a value to life…or at least to a particular person’s life.

It’s either valued or it’s not…

By definition, if something does not have a defined value, it is worth either nothing or everything. Given that society won’t (usually) allow life to be valued at nothing, without a defined value life is invaluable…or in other words, when it is placed in your hands, there is zero tolerance for failure to protect it.

So? It’s always job number 1…

If you’ve been in the security industry for any measurable time, you will recognize the following priority list from somewhere. It shows up as the default when I do Incident Response program development.

1. Preserve life
2. Prevent physical damage to personnel, facilities, or systems
3. Prevent financial loss
…etc…

This presumes that life is the most valuable item a business/government/entity must protect. Very few, if any, security professionals will face the protection of life as priority one in their careers. Typically we give lip service to the priority of life, since that firewall we just bought doesn’t protect life…at least not directly. But what happens when your entire reason for existing is making something safer?

Safer vs Zero tolerance

I chose the term “safer” very carefully and on purpose. It means some risk, or level of failure, is acceptable and the effort can still be considered a success. If your entire reason to exist is to protect life, you cannot calculate the value of a security measure unless you know the value of life. As has been shown time and time again throughout history, nothing can be made perfectly safe…i.e., zero tolerance is unattainable. But when tasked with zero tolerance, or zero breaches, and you have no understanding of the value of life, your only alternative is to spend an unlimited amount of money in the quest for zero tolerance. You’ll still fail…typically spectacularly, since any crackpot with an idea is given an opportunity to try it. Remember, you didn’t start with a value of life, so there’s no way to say the crackpot’s idea is crazy. If it saves just one child…

Can you think of an entity in this exact situation? No one willing to put a value on life, and an effectively unlimited budget?

When zero tolerance bites you in the butt…the TSA

The federal government will never publish how much your life is worth to them, assuming they even wanted to calculate it. They can’t; it would be a political disaster. So how can we figure out the presumed value to see whether the government’s expenditures are insane or not?

The TSA has a budget of roughly $7 billion per year and a mandate of zero tolerance for loss of life. Let’s assume that the worst-case scenario of doing nothing is a 9/11-style attack every year (3,000 dead). So what’s the value of life for an agency tasked with zero tolerance? It’s a simple calculation, really: last year the Federal government valued the life of the flying public at $7 billion / 3,000, or roughly $2.3 million per life.

Just as a point of comparison, roughly 42,000 people die in car crashes every year, and the budget for the NHTSA is on the order of $900 million. So the Federal government values the life of the driving public at $900 million / 42,000, or roughly $21,000 per life.
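
The back-of-the-envelope arithmetic behind those two figures can be written down explicitly (a sketch only; the budget and fatality numbers are the rough figures used above, not precise statistics):

```python
def implied_value_per_life(annual_budget, annual_deaths_addressed):
    """Dollars spent per life the budget is presumed to protect."""
    return annual_budget / annual_deaths_addressed

# TSA: ~$7 billion against a presumed 3,000-death worst case per year.
print(implied_value_per_life(7_000_000_000, 3_000))   # ~2.3 million per life

# NHTSA: ~$900 million against ~42,000 road deaths per year.
print(implied_value_per_life(900_000_000, 42_000))    # ~21,000 per life
```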

Think somebody’s priorities are out of kilter a wee bit? Or is it that airline deaths get more media attention because they are more spectacular, and thus there is more political pressure to “do something, anything”?

When does acceptable risk come into play?

It can’t…until you put a value on life.

Are you willing to put a value on life? It’s not as easy as it sounds. But if you don’t, you’ll end up like the TSA or Medicare: in an unwinnable situation where everyone hates you.

Size is a Factor

How do you protect sensitive data and networks?  The approach you take tends to depend a lot on “size.”  For most organizations, their “size” is simply measured by sales and revenue.  For organizations processing credit cards, the “size” can be defined by the number of credit card transactions they process.  No matter what measuring stick one uses, the larger the “size” of a corporation, the more information and assets it has to protect.

The size of an entity makes decisions about what to protect more complicated.  Most small organizations know the assets and information they need to protect, but often aren’t aware of – or simply disregard – regulations specifying minimum security controls that must be in place (e.g., PCI requirements).  Large corporations tend to have the opposite problem.  They are most often well aware of the regulations that apply to them, but don’t have a clue how to strategically plan for complying with such regulations or requirements.

In a large and/or complex environment, the information and assets of an entity become exponentially more difficult to protect.  New regulations and requirements stipulate compliance with specific security controls.  For large organizations and corporations that “grew up” without these stipulations, bringing themselves into compliance can be a very daunting task.  One good example is identity management.  Compliance with Section 404 of Sarbanes-Oxley in the context of user provisioning, authentication, and access control can be extremely difficult for large organizations.  Legacy systems, the lack of a standardized password policy, customized provisioning systems, and inefficient (yet heavily utilized) manual processes are only a few of the major challenges that an enterprise may face.

Another good example is PCI compliance.  Large organizations are charged with implementation of very specific controls to protect credit card data.  For a large enterprise that evolved without these requirements, this can be a major challenge.   Even determining where all the credit card data resides is often a herculean task, let alone architecting and implementing controls to bring the corporation into compliance.

Smaller companies face a different set of challenges.  In the case of PCI requirements, many smaller corporations are out of PCI compliance and don’t even know it until they get a threatening letter in the mail from one of the major payment card brands.  Once they get such a letter, it turns into a scramble to a) figure out what PCI compliance actually is, b) figure out how to comply, and c) implement the controls to ensure compliance.  The smaller corporation may have an easier time determining PCI in-scope systems and environments, but it often faces issues that large enterprises aren’t as concerned with.  These issues almost always involve budget and resource constraints.

Bottom line: the challenges an entity faces when attempting to protect its data and networks depend a lot on size – regardless of how it’s measured.  Larger corporations have more resources to direct at securing their data, but may have a much more difficult time implementing solutions.  Smaller organizations are much more agile, but often simply don’t have the knowledge or resources to get the job done.

Security Organization Redesign

Historically, security organizations have grown up organically, starting 15-20 years ago with a single security-conscious person who ended up getting tagged with doing the job.  Over the years, that manager asked for new positions, filling tactical needs as issues were presented and creating departments/teams as it made sense. There was no particular plan in place or long-term strategy. Eventually, you end up with more than a handful of employees and a dysfunctional team. Don’t get me wrong, the team is usually very good at putting out fires and “getting the work done,” but it is by no means robust or optimized.  These large security groups typically do not work on the issues that are most important to the company; rather, they play whack-a-mole and deal with fires as they are presented.  There is no opportunity to get ahead of the game, and when the fire is in your area you deal with it the best way you can.  Of course, this can cause interpersonal issues amongst the team members and duplication of effort, driving even more dysfunction.

As a consultant, it’s easy to say that lack of planning created these problems, but I don’t know many info sec managers who could claim they have a growth plan that goes out 15 years and involves hiring 30-50 new employees.  Most security professionals, for the majority of their careers, are fighting fires.

What lessons has Neohapsis learned working with our clients to reorganize their security departments?

Don’t underestimate the angst that will be voiced by the team leaders/managers within the department if they are not included in the decision-making process, even if you already know the right decision.

When it comes down to it, there are only so many ways you can design a security organization.  Certain jobs and tasks make sense together.  Certain others require similar skill sets. Technically, you don’t really need to involve many people in the decision if you have someone who knows the culture of the company and has done this before. You could very easily take a CISO and a consultant, develop a new organizational structure, and announce it to the CISO’s management team.  Try that, and you’ll be surprised at the uproar about not understanding the nuances of each department and the needs and issues of the individuals.  Though it will take longer, a CISO will find better acceptance from his own management team if they are allowed to go off and work together to propose an org design of their own. It will probably take 30 days and, in the end, it will probably look almost identical to what the CISO and consultant would have wanted anyway.  But the managers’ attitudes will be different, and they will have buy-in.  It still doesn’t hurt to get a consultant’s opinion on the org design; just don’t let your management team think you outsourced their career path.  Even though you could have started your organizational change 30 days earlier, sometimes it is more about buy-in than being right. That’s a very hard lesson for many security professionals.

Titles are a big deal to security people

Probably the most contentious and politically painful experience, and frankly the source of the biggest complaints from the security team leads and managers, will be coming up with proper titles for the new departments.  As is generally the case in large organizations, there are far more Directors, VPs, and Senior VPs than can honestly be justified by organizational design.  You look over the fence and wonder how everyone in the sales department can be a Senior VP.

What makes this particularly difficult within a security organization is that security professionals by nature view themselves as different from, or more special than, everyone else in the organization.  Inevitably, that means corporate HR policy is perceived to be inapplicable to them. That presumption of non-applicability is exactly what security complains about when co-workers ignore security policy. So when company policy dictates that a Director title requires X number of direct reports, what do you do with your architecture group of 5 people, each with 20 years of security experience and no direct reports?  If you don’t title that team as Directors or better, nobody from the outside will apply for the positions. But if you do, others in the organization will ask why there is a department of 5 people, all with Director titles.

In the same vein, titles are routinely viewed by security professionals as a way of bullying co-workers into complying with a particular security policy or decision.  Any perceived lost opportunity for a title promotion is met with severe angst, no, check that, open revolt, even when no salary increase comes with it.

In the end, most titles will end up being a mix of corporate policy and the levels in the organization that a particular person has to interact with (i.e., the need for presumed power). Yes, many feelings will be hurt.

Salaries are all over the map

In similar alignment with titles, salaries are a difficult thing to pin down in the security industry.  Sure, you can go to any number of surveys and pull an average salary, but often they are for a generic title like “security architect” or “security analyst,” or something very specific like “IDS specialist.”  Is your security analyst the same as my security analyst? I can’t tell.  Should a firewall guru get paid the same as a policy guru? Why? Why not?   Eventually you will have to look at existing salaries within the team (obviously), a third-party perspective on market conditions, and the caliber of talent you want applying.  At some level it becomes a matter of throwing a number out there and seeing if you get a nibble.

Sounds like too much work…

Are the basic issues outlined above insurmountable?  Of course not.  But they seem so minor that many security managers will ignore them and focus on the so-called “big picture.”  Little do they know, the big picture was never really in doubt.  It is the little things that will give them the biggest headaches and threaten to derail the path to the big picture.

Has this happened in your organization? Do you have a re-org experience to share? We would love to hear your comments.

Virtualization: When and where?

We often field questions from our clients regarding the risks associated with hypervisor / virtualization technology.  Ultimately the technology is still software, and still faces many of the same challenges any commercial software package faces, but there are definitely some areas worth noting.

The following thoughts are by no means a comprehensive overview of all issues, but they should provide the reader with a general foundation for thinking about virtualization-specific risks.

Generally speaking, virtual environments are not that different than physical environments.  They require much of the same care and feeding, but that’s the rub; most companies don’t do a good job of managing their physical environments, either.  Virtualization can simply make existing issues worse.

For example, if an organization doesn’t have a vulnerability management program that is effective at activities like asset identification, timely patching, maintaining the installed security technologies, change control, and system hardening, then the adoption of virtualization technology usually compounds the problem via increased “server sprawl.”  Systems become even easier to deploy, which leads to more systems not being properly managed.
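
One simple way to put numbers on that sprawl is to compare the hypervisor’s guest inventory against what the patch-management tool actually knows about. A minimal sketch, assuming two hypothetical one-name-per-line exports (the file names are illustrative):

```python
def load_names(path):
    """Load a set of lower-cased host names from a one-name-per-line export."""
    with open(path) as handle:
        return {line.strip().lower() for line in handle if line.strip()}

def unmanaged_guests(vm_inventory_path, patched_assets_path):
    """Guests that exist on the hypervisor but are unknown to patch management --
    the 'server sprawl' that virtualization makes so easy to create."""
    return sorted(load_names(vm_inventory_path) - load_names(patched_assets_path))

if __name__ == "__main__":
    for name in unmanaged_guests("vm_inventory.txt", "patched_assets.txt"):
        print("Not in patch management:", name)
```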

We often see these challenges crop up in a few scenarios:

Testing environments – Teams can get a system up and running very quickly using existing hardware.  Easy and fast…but also dirty. They often don’t take the time to harden the system, bring it up to current patch levels, or install the required security software.

Even in scenarios where templates are used, with major OS vendors like Microsoft and Red Hat releasing security fixes on a monthly basis, a template even two months old is out of date.

Rapid deployment of “utility” servers – Systems that run back-office services like mail servers, print servers, file servers, DNS servers, etc.  Oftentimes nobody really does much custom work on them, and because they can no longer be physically seen or “tripped over” in the data center, they sometimes fly under the radar.

Development environments – We often see virtualization technology making inroads into companies with developers that need to spin up and spin down environments quickly to save time and money.  The same challenges apply; if the systems aren’t maintained (and they often aren’t – developers aren’t usually known for their attention to system administration tasks), they present great targets for the would-be attacker.  It’s even worse if the developers use sensitive data for testing purposes.  If properly isolated, there is less risk from what we’ve described above, but that isolation has to be pretty well enforced and monitored to really mitigate these risks.

There are also risks associated with vulnerabilities in the technology itself.  The often-feared “guest breakout” scenario, where a virtual machine or “guest” is able to break out of its jail and take over the host (and therefore access data in any of the other guests), is a common concern, although we haven’t heard of any real-world exploitations of these defects…yet (though the vulnerabilities are starting to become better understood).

There are also concerns about hopping between security “zones” when it comes to compliance or data segregation requirements.  For example, a physical environment typically has a firewall and other security controls between a web server and a database server.  In a virtual environment, if they are sharing the same host hardware, you typically cannot put a firewall, intrusion detection device, or data leakage control between them.  This could violate control mandates found in standards such as PCI in a credit card environment.

Even assuming there are no vulnerabilities in the hypervisor technology that allow for evil network games between hosts, when you house two virtual machines/guests on the same hypervisor/host you often lose visibility into the network traffic between them.  So if your security relies on restricting or monitoring at the network level, you no longer have that ability.  Some vendors are working on solutions for intra-host communication security, but these are by no means mature.
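
If nothing else, it is worth checking whether guests that belong to different security zones have ended up on the same physical host. A minimal sketch, assuming a hypothetical guest inventory CSV with guest, host, and zone columns:

```python
import csv
from collections import defaultdict

def zone_mixing(inventory_path):
    """Return hosts carrying guests from more than one security zone."""
    zones_per_host = defaultdict(set)
    with open(inventory_path, newline="") as handle:
        for row in csv.DictReader(handle):
            zones_per_host[row["host"]].add(row["zone"].strip())
    return {host: zones for host, zones in zones_per_host.items() if len(zones) > 1}

if __name__ == "__main__":
    for host, zones in zone_mixing("guest_inventory.csv").items():
        print(f"{host} mixes zones: {', '.join(sorted(zones))}")
```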

Finally, the “many eggs in one basket” concern is still a factor; when you have 10, 20, 40 or more guest machines on a single piece of hardware, that’s a lot of systems potentially going down should there be a problem.  While the virtualization software vendors certainly offer high-availability options with technology such as VMware’s VMotion, redundant hardware, the use of SANs, etc., the cost and complexity add up fairly fast.  (And as we have seen from some rather nasty SAN failures over the past two months, SANs aren’t always as failsafe as we have been led to believe. You still have backups, right?)

While in some situations the benefits of virtualization technology far outweigh the risks, there are certainly situations where existing non-virtualized architectures are better. The trick is finding that line in the midst of the pell-mell rush toward virtualization.

–Tyler

Enterprise, or Opportunity Risk Management?

The Enterprise Risk Management (ERM) market seems to have been driven by the Finance & Banking (F&B) sector’s interpretation of what ERM means. F&B institutions have taken their traditional risk methodology in the areas of credit, market, and operational risk management, extended it out to other areas of their businesses, and called that ERM. But for true ERM, the F&B methodology for managing risk places too much emphasis on backward-looking analysis of loss, as opposed to more forward-looking speculation about potential loss (or risk) in the future. Historical analysis of actual loss is of course a significant indicator of further loss in the future, but only where defined losses are to be expected, such as in the areas of insurance, the provision of credit, or speculation in markets.

However, such a methodology doesn’t provide a sound analytical basis for those less frequent and possibly more drastic events, or those events where historical loss data doesn’t exist, which applies to the general operations of most businesses today.

So, in F&B, risk management has become very much a science, but for areas of risk outside the realms of credit, market and banking operations, and without the benefit of hindsight and a loss history, it is very much an art today.

Perhaps analyzing some of the more accepted definitions of risk will help us to figure out where and how we should be focusing our risk assessment efforts, acronyms aside:

Wikipedia by its very nature takes a broad view, not specifically in the context of business, and states simply that “risk is a concept that denotes the precise probability of specific eventualities.” Interestingly, Wikipedia’s definition of risk continues by stating that risk can be defined as “the threat or probability that an action or event will adversely or beneficially affect an organization’s ability to achieve its objectives.” Hmmm, so immediately Wikipedia is recognizing that we have to tie our speculation of loss to our desire to achieve stated objectives; in other words, to realizing our opportunities.

Corporate Integrity, a leading global advisory firm on Governance, Risk and Compliance, succinctly defines risk as “the effect of uncertainty on business objectives.” Again, the focus is on what a company is endeavouring to achieve through the realization of its opportunities.

So, how about replacing the word ENTERPRISE with OPPORTUNITY? After all, businesses are in business to profit from opportunities, and given that the above definitions of risk relate its management to the achievement of those objectives, it would seem that this has to be the basis of risk analysis.

However, this would give us ORM instead of ERM as an acronym for the management of risks in our business, and unfortunately ORM is already generally accepted as meaning Operational Risk Management, a term well understood and accepted in the F&B world because it is a component part of the Basel II Capital Accord. This might explain the root cause of the problem!

F&B understands Operational Risk Management in Basel II terms and, by extending that across the enterprise, assumes it to be Enterprise Risk Management. But, as stated previously, the methodology used is driven by the analysis of losses, not the analysis of risks to the achievement of objectives, goals or opportunities.

It is correct that loss analysis is an excellent way of predicting likely loss in the future, but, as noted earlier, only if an extensive loss history exists. This is the key point. In most businesses the loss history does not exist or is very limited, and even in the F&B industry it is limited to the scope of Basel II, which does not cover business risks such as supply-chain risk, internal operations such as HR, reputational risk, and many other risk areas.

So, whilst extensive loss history can help us add some science to the art of risk management across the enterprise, the fact is that an extensive history of losses does not exist for most businesses. The only viable methodology is therefore to start with understanding a business’s strategy, its objectives, and its opportunities, and to try to quantify what will prevent the company from achieving them, i.e., what the risks to the realization of those opportunities are.
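
As a rough illustration of what an objective-centric register might look like (an illustrative sketch only, with a coarse likelihood-times-impact score standing in for the more granular assessment a real program would need):

```python
from dataclasses import dataclass

@dataclass
class OpportunityRisk:
    objective: str      # the business objective / opportunity at stake
    risk: str           # what could prevent us achieving it
    likelihood: int     # 1 (rare) .. 5 (almost certain) -- judgment, not loss history
    impact: int         # 1 (minor) .. 5 (severe) effect on the objective

    @property
    def score(self):
        return self.likelihood * self.impact

register = [
    OpportunityRisk("Launch product in new region", "Supply-chain disruption", 3, 4),
    OpportunityRisk("Launch product in new region", "Key-staff attrition", 2, 3),
    OpportunityRisk("Grow online sales 20%", "Cardholder data breach", 2, 5),
]

for item in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{item.score:>2}] {item.objective}: {item.risk}")
```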

Opportunity Risk Management (ORM).

COSO II Event Identification will be a significant challenge for companies

COSO’s improvement of its original framework into COSO II, sometimes referred to as COSO ERM, added requirements for objective setting, risk identification, management and reporting, as well as risk treatment and event identification. These would be regarded as the basic elements of a good ERM (Enterprise Risk Management) program, and tying them into COSO integrates the already established control framework with those ERM practices.

Sounds good! It is good, or at least a good starting point, I believe. A far more granular assessment to derive a risk level is needed if this is to become truly scientific, but that’s a topic for another blog post!

The issue now is that whilst most companies will probably be able to implement just about all the elements of COSO II, there is one that I believe will be a significant challenge: the ‘Event Identification’ element.

Under COSO II, Event Identification encompasses incidents (internal or external) that can have a negative (or positive – so perhaps opportunities?) effect. Much as under Basel II operational risk analysis, the benefit of hindsight assists the prediction of future risk. So, effectively, to adhere to COSO II a company must identify and quantify incidents within the ERM framework, such that predictions of risk can be informed by knowledge of actual loss in the past.

This represents a whole new set of business processes and responsibilities that certain individuals must accept as part of their regular job descriptions. But it is more complicated than that, because internal systems and processes will need to be developed to help those individuals obtain the correct data to support event identification.

Take a simple example in IT. Let’s assume we are a pharmaceutical company, and a system falls victim to a security breach. On that system is 20 years of clinical trial information for a product, and we know that an outside organization has potentially accessed that IP. Whose responsibility is it to recognize that the incident has occurred? Who decides what the cost to the organization is? Whose responsibility is it to capture that information? Whose responsibility is it to identify the business and technical risks associated with the incident? Whose responsibility is it to decide what actions should be taken as a result of the incident to prevent it from happening again?
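
Answering those questions consistently implies that each event gets captured in a structured record with named owners for the cost estimate, the risks, and the corrective actions. A minimal sketch of what such a record might hold (the field names and figures are hypothetical, not a COSO-prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class IdentifiedEvent:
    """One record in a hypothetical event-identification log, mirroring the
    questions in the pharmaceutical example: who recognized the incident,
    what it cost, which risks it exposed, and what will prevent a recurrence."""
    description: str
    identified_by: str          # who recognized that the incident occurred
    identified_on: date
    estimated_cost: float       # deciding who owns this estimate is the hard part
    business_risks: List[str] = field(default_factory=list)
    technical_risks: List[str] = field(default_factory=list)
    corrective_actions: List[str] = field(default_factory=list)

event = IdentifiedEvent(
    description="Unauthorized access to 20 years of clinical trial data",
    identified_by="IT Security Operations",
    identified_on=date(2009, 3, 1),
    estimated_cost=5_000_000.0,  # placeholder figure for illustration only
    business_risks=["Loss of competitive advantage", "Regulatory exposure"],
    technical_risks=["Compromised system not yet isolated"],
    corrective_actions=["Isolate and rebuild host", "Review access controls on research systems"],
)
```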

Even for this fairly tangible event, there is a whole set of new processes, policies, documentation and responsibilities that needs to be in place to properly implement Event Identification.

So much so that I contend most companies should not declare COSO II compliance just yet.

The case for extending XBRL to encompass a Risk and Control taxonomy

Through the SEC’s ‘21st Century Disclosure Initiative’ announced in January 2009, and its demand that Fortune 500 companies start XBRL tagging of financial statements and footnotes this year, it’s clear that greater transparency around financial reporting and transactions is seen as one of the steps towards improving the ability of investors and lenders to analyse and compare reports of financial performance and strategic declarations. By adopting such a standard, the SEC is seeking to give investors and lenders greater confidence in the results of their analysis, because there is a defined taxonomy that ensures they are analysing and comparing apples to apples in all aspects of the relevant financial statements.

That’s good and helpful, and the derived confidence will be further enhanced through the involvement of an Assurance Working Group (AWG) that is co-operating with the International Auditing and Assurance Standards Board (IAASB) to develop standards around how XBRL information can be audited.

Whilst XBRL was initially designed to allow standard tagging of financial reporting, it can also be used for financial statements around transaction information, discrete projects and initiatives, etc. It would seem, therefore, that if XBRL tagging could be extended to encompass risk and control information through an extended taxonomy, then a far more meaningful value could be associated with those financial statements, and their validity could be better trusted.

When Credit Default Swaps (CDS) were sold on, and on, and on, imagine if along with the financial details of each transaction there had been a clear statement about the associated risks, along with details of what mitigation measures were in place and how effective they were likely to be. Surely that would have helped prevent them from being significantly overvalued, or at least allowed recognition that they were being overvalued despite their associated risks.
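
To make the idea concrete, here is a minimal sketch of what attaching risk and control information to a reported fact could look like. The rc namespace, element names, and the CDS fact below are entirely hypothetical and are not part of any actual XBRL taxonomy:

```python
import xml.etree.ElementTree as ET

# Hypothetical "risk and control" extension namespace -- illustrative only.
RC = "http://example.com/riskcontrol/2009"
ET.register_namespace("rc", RC)

def risk_annotated_fact(fact_name, value, risk_level, mitigation, mitigation_effectiveness):
    """Wrap a reported value with a hypothetical risk/control annotation."""
    fact = ET.Element(fact_name)
    fact.text = str(value)
    assessment = ET.SubElement(fact, f"{{{RC}}}riskAssessment")
    ET.SubElement(assessment, f"{{{RC}}}riskLevel").text = risk_level
    ET.SubElement(assessment, f"{{{RC}}}mitigation").text = mitigation
    ET.SubElement(assessment, f"{{{RC}}}mitigationEffectiveness").text = mitigation_effectiveness
    return fact

element = risk_annotated_fact(
    "CreditDefaultSwapNotional", 250_000_000,
    risk_level="High",
    mitigation="Counterparty collateral agreement",
    mitigation_effectiveness="Untested under stressed market conditions",
)
print(ET.tostring(element, encoding="unicode"))
```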

Ultimately, the whole issue of trust is at the hub of the financial crisis we find ourselves in and, interestingly, it parallels an observation that the American economist John Kenneth Galbraith made in 1954. He observed that fraud, or more precisely the inventory of undiscovered embezzlement, which he called the ‘bezzle’, can be easily hidden in the good times yet gets revealed in the bad times. With reference to the great crash of 1929 he wrote, “In good times people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always many people who need more. Under these circumstances the rate of embezzlement grows, the rate of discovery falls off, and the bezzle increases rapidly. In depression all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks.” He also observed that “the bezzle is harder to hide during a tougher economic climate” because of the demand for increased scrutiny.

Applying a similar theory to our CDS example: in the good times the bezzle was large and there were high levels of trust between the banks and asset management companies, so nobody really worried about the increasing risks. But now that the bezzle has been revealed, trust has all but disappeared and the market has stagnated.

Hence, it is my belief that additional assurance will be required around financial reporting, particularly for specific transactions, so that a high level of trust can be regained. This will not occur through a large bezzle that exists due to positive market conditions. Rather, it will occur through qualified assurance and tangible evidence of the levels of associated risk and how effectively it is being mitigated. Taking the CDS situation as an example, if the level of associated risk and the efficacy of the control strategy accompany the transaction, the buyer will be better informed and the information will be more trusted.

In my view, therefore, the XBRL taxonomy must be extended to include a taxonomy for risk and control information.

Response to Visa’s Chief Enterprise Risk Officer comments on PCI DSS

Visa’s Chief Enterprise Risk Officer, Ellen Richey, recently presented at the Visa Security Summit on March 19th. One of the valuable points made in her presentation was a defense of the value of implementing PCI DSS to protect against data theft. In addition, Ellen Richey spoke about the challenge organizations face not only in becoming compliant, but in proactively maintaining compliance, defending against attacks, and protecting sensitive information.

Recent compromises of payment processors and merchants that were stated to be PCI compliant have brought criticism to the PCI program. Our views are strongly aligned with those presented by Ellen Richey. While the current PCI program requires an annual audit, this audit is simply an annual health check. Consider viewing the PCI audit like a state vehicle inspection: even though everything on your car checks out at the time of the inspection, that does not prevent your brake lights from going out days later. You would still have a valid inspection sticker, but you would no longer be in compliance with safety requirements. It is the owner’s responsibility to ensure the car is maintained appropriately. Similarly, in PCI it is the company’s responsibility to ensure the effectiveness and maintenance of controls to protect its data in an ongoing manner.

Ellen Richey also mentioned increased collaboration among the payment card industry, merchants, and consumers. Collaboration is a key step to implementing the technology and processes necessary to continue reducing fraud and data theft. From a merchant, service provider, and payment processor perspective, new technologies and programs will continue to reduce transaction risk, but there are areas where these organizations need to proactively improve today. The PCI DSS standard provides guidance around the implementation of controls to protect data. In addition to protecting data, though, merchants, service providers, and processors need to proactively address their ability to detect attacks and be prepared to respond effectively in the event of a compromise. These are two areas that are not currently adequately addressed by the PCI DSS and are areas where we continue to see organizations lacking.

See the following link to the Remarks by Ellen Richey, Chief Enterprise Risk Officer, Visa Inc. at the Visa Security Summit, March 19, 2009:

http://www.corporate.visa.com/md/dl/documents/downloads/EllenRichey09SummitRemarks.pdf

My $0.02 on PCI DSS 6.6.

On April 15, the PCI Security Standards Council released a clarification of DSS requirement 6.6.

Requirement 6.6 states that all web-facing applications must be protected against known attacks, either by having a code review performed or by installing an application-layer firewall (a web application firewall, or WAF) in front of the web application.

I am of the opinion that the clarification document for requirement 6.6 still does not address the issue adequately, and leaves room for misinterpretation about code review and WAFs.

Let’s start off with observation #1.

From the Information Supplement: Requirement 6.6 Code Reviews and Application Firewalls Clarified document:

“Manual reviews/assessments may be performed by a qualified internal resource or a qualified third party. In all cases, the individual(s) must have the proper skills and experience to understand the source code and/or web application, know how to evaluate each for vulnerabilities, and understand the findings. Similarly, individuals using automated tools must have the skills and knowledge to properly configure the tool and test environment, use the tool, and evaluate the results. If internal resources are being used, they should be organizationally separate from the management of the application being tested. For example, the team writing the software should not perform the final review or assessment and verify the code is secure.”

What qualifies a “qualified internal resource”? Does the QSA make that determination?

There is currently no standard certification in our industry for code review, and in my experience very few organizations have staff who could perform an adequate code review when the focus is on identifying security-relevant issues.

Scenario 1: Development team uses someone from the IT/QA group to run a web application vulnerability scanner against their web application.

Does this meet Requirement 6.6?
Absolutely not.

Web application vulnerability scanners do not find all vulnerabilities; in addition, they throw out a lot of false positives. Here at Neohapsis we have seen this, and so have others (see the Rolling Review: Web Application Scanners).

“Ultimately, you can’t automate your way to secure software – any combination of analysis tools is only as effective as the security knowledge of the user. As Michael Howard, co-author of Writing Secure Code (Microsoft Press, 2002) and one of the lead architects of Microsoft’s current security posture, put it in his blog: ‘Not-so-knowledgeable developer + great tools = marginally more secure code.’

We recommend taking advantage of documented processes, including Microsoft’s SDL (Security Development Lifecycle), The Open Web Application Security Project’s CLASP (Comprehensive Lightweight Application Security Process), and general techniques available at the National Cyber Security Division’s ‘Build Security In’ portal.” (“Automated Code Scanners,” Network Computing Magazine, April 16, 2006)

Web application vulnerability scanners should be used as a tool in conjunction with a full code review.

Observation #2: Option #2 in Requirement 6.6.

I find this to be the band-aid approach to passing 6.6. Web application firewalls (WAFs) should be used as an additional layer of security, not as a band-aid for avoiding code reviews. Would we consider an IDS or a firewall to be the resolution for running unnecessary services on a system, or a substitute for hardening and configuring systems to best-practice or vendor-recommended guidelines? Similarly, WAFs are another step in the principle of Defense in Depth (DiD), but they are by no means a complete solution for securing an application.

There needs to be a balance found with Requirement 6.6 that includes source code review, the use of a web application vulnerability scanner as a supporting tool, and a WAF, for it to be taken seriously. There are ways to do this. If an organization implements a solid SDL process, you only need to perform sample source code reviews during the development phase, since many of the initial threats were identified during the threat analysis phase. You also have secure coding practices and modules that already address many issues, such as poor input validation. If you want to spend less time finding security issues after a production environment has been deployed, you have to look at security right from the start. This holds true when you are building out a secure network or DMZ, and it also holds true when it comes to the design and development of an application.
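
As a concrete example of the kind of issue secure coding practices address at the source (and that a reviewer should catch on sight), compare string-built SQL with a parameterized query. This is a self-contained sketch using SQLite with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card_last4 TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4242')")

def lookup_unsafe(name):
    # String concatenation: the kind of flaw a code review should catch,
    # and one a scanner may or may not reach depending on its coverage.
    return conn.execute("SELECT card_last4 FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: input is bound as data, not executable SQL.
    return conn.execute("SELECT card_last4 FROM users WHERE name = ?", (name,)).fetchall()

attacker_input = "nobody' OR '1'='1"
print(lookup_unsafe(attacker_input))  # returns every row -- injection succeeds
print(lookup_safe(attacker_input))    # returns nothing -- treated as a literal name
```

A scanner may or may not flag the first version depending on how it exercises the application; a reviewer who knows what to look for will.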

The bottom line is that the clarification does not really help out overall.

PCI or not, a proper SDLC implementation and process, developers going through secure coding training, and a proper set of tools will lead to a more secure application.