PCI DSS 3.0

The Payment Card Industry Security Standards Council (PCI SSC) has released a draft version of the Payment Card Industry Data Security Standard (PCI DSS) Version 3.0 to Qualified Security Assessor (QSA) companies.  This release was not made available to the general public, as PCI SSC is still reviewing feedback from QSA companies and other participating organizations before the final version is released in November 2013.  The new PCI DSS 3.0 compliance requirements will go into effect for all companies on January 1, 2015, but a few requirements defined in PCI DSS 3.0 will not be required for compliance until July 1, 2015.

Sign: Changes Ahead

PCI SSC has categorized the changes to the standard into three types: clarifications, additional guidance, and evolving requirements.  PCI SSC defines a clarification as a means to make the intent of a particular requirement clear to QSAs and to the entities that must comply with PCI DSS.  Additional guidance provides an explanation, definition, and/or instructions to increase understanding or provide further information on a particular topic being assessed.  Evolving requirements, which are for the most part new requirements, are changes that keep the standard up to date with emerging threats and changes in the market.

In total, 99 changes are proposed across the three types, which are nicely defined in the document “PCI DSS 3.0 Summary of Changes” provided by the council.  The majority of the changes, 71 to be exact, are clarifications, followed by 23 evolving requirements and 5 changes that fall under the “additional guidance” category.  This blog post provides a high-level overview of the proposed changes in the draft PCI DSS V3.0; it is not specific to particular requirements.  Neohapsis will be releasing a series of blog posts dedicated to PCI DSS V3.0 that will explore individual requirement changes in greater detail.

So, enough setup.  Here are some key takeaways from the high-level changes in PCI DSS Version 3.0.

Scope of PCI DSS Requirements

PCI SSC added some clarification around the responsibilities for defining scope, for both the entity seeking compliance and the QSA performing the validation.  PCI SSC did a great job of clarifying the scoping process by providing examples of system components that should be included as part of the scope of a ROC assessment.  In addition to system components, the scoping guidance section contains requirements for capturing all purchased or custom-developed applications that are used within the cardholder data environment (CDE).  Compared to PCI DSS Version 2.0, this section of PCI DSS Version 3.0 has been broken out to be more descriptive and to present the details more clearly, helping entities better understand the scoping process.

Use of Third-Party Service Providers / Outsourcing

The use of third-party service providers to support an entity’s cardholder data environment (CDE), and the related PCI DSS validation requirements, have not changed drastically.  All entities that fall under PCI DSS are still required to validate that each third-party service provider is PCI DSS compliant, either by obtaining a copy of the provider’s attestation of compliance (AOC) or by having their QSA assess each third-party service provider’s compliance with the relevant PCI DSS requirements.  However, the draft of PCI DSS Version 3.0 provides examples, such as advising entities seeking PCI compliance to validate the IP addresses used for quarterly scans if one or more shared hosting providers are in scope for their CDE.  Furthermore, entities seeking compliance are advised to work with service providers to ensure that contractual language between the two parties clearly states PCI DSS compliance responsibilities down to the requirement level.
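To make the ongoing service-provider validation concrete, here is a minimal sketch of how an entity might track whether each third party's AOC is still current. This is not an official PCI tool; the provider names, dates, and the simple 365-day currency rule are illustrative assumptions.

```python
from datetime import date, timedelta

# Hypothetical inventory of third-party service providers and the date of
# the most recent Attestation of Compliance (AOC) obtained from each.
providers = {
    "Acme Hosting": date(2013, 2, 1),
    "PayGateway Inc.": date(2012, 5, 15),
}

def aoc_is_current(aoc_date, as_of, max_age_days=365):
    # Treat an AOC as current if it is no more than a year old.
    return (as_of - aoc_date) <= timedelta(days=max_age_days)

as_of = date(2013, 9, 1)
for name, aoc_date in sorted(providers.items()):
    status = "current" if aoc_is_current(aoc_date, as_of) else "EXPIRED - follow up"
    print(f"{name}: AOC dated {aoc_date} is {status}")
```

Even a simple record like this makes it obvious which providers need a fresh AOC request before the next ROC assessment.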

Business-As-Usual (BAU)

The biggest change I see in PCI DSS 3.0 is the emphasis on making PCI DSS controls part of “business-as-usual” (BAU) rather than a point-in-time assessment. To get this message across, the draft version of PCI DSS 3.0 provides several best-practice examples for making PCI a part of BAU. These best practices are not requirements, but the council wants to encourage organizations to make PCI a part of BAU. The goal of this change in thinking is to get businesses to implement PCI DSS as part of their overall security strategy and daily operations.  The council is stressing that its compliance standard should not be approached with a set-it-and-forget-it mentality.

Some important areas in this new mentality focus on security processes.  For example, entities should be validating on their own that controls are implemented effectively for applicable business processes and related technologies.  Specific examples include antivirus definitions, logging, vulnerability signatures, and ensuring that only appropriate services are enabled on systems.  With this new mentality, entities should take corrective action when compliance gaps are identified, so that PCI DSS compliance is maintained at all times, rather than waiting until their QSA arrives to validate compliance. When gaps are identified, implementation of compensating or mitigating controls may be necessary and should not be left open until the start of a QSA assessment.

Lastly, with regard to BAU, the company seeking PCI DSS compliance should continue to have an open dialogue with its QSA throughout the year.  Good QSA companies will work with their clients to help ensure the right choices are being made between ROC assessments.

Until Next Post

This is just one of multiple blog posts about upcoming changes in PCI DSS Version 3.0.  Neohapsis will be releasing additional posts on specific PCI DSS 3.0 requirements in the near future.  If you are required to be PCI DSS compliant, I recommend talking with your QSA company to start planning and preparing for PCI DSS 3.0.  You are also welcome to reach out to Neohapsis, as we have multiple seasoned QSAs who can address your PCI DSS and PA-DSS compliance questions.

Set and Don’t Forget

By Patrick Harbauer, Neohapsis Senior Security Consultant and PCI Technical Lead

There are several PCI DSS requirements related to tasks that must be performed on a regular basis. The frequency of these tasks varies from daily to annual. There are also a few requirements that make it important to have PCI DSS-compliant data retention policies and procedures in place. An example of a requirement that calls for a task to be performed periodically is requirement 11.2.2: perform quarterly external vulnerability scans via an Approved Scanning Vendor (ASV). An example of a requirement that calls for compliant data retention policies and procedures is requirement 9.4: use a visitor log to maintain a physical audit trail of visitor activity, and retain this log for a minimum of three months, unless otherwise restricted by law. If processes or checklists are not in place to track your compliance with these recurring tasks, you may be in for an unpleasant surprise during your next annual ROC assessment.

Are You Certifiable?

11.2.2 is one of the classic requirements where we see this happen all too often. When we ask a customer if we can review the certified, passing ASV scans from the last four quarters and we get a response such as, “Oops, Susie was responsible for that and she was reassigned to a different department…”, we stick our fingers in our ears and say “la la la la,” but that has never made the problem go away. Unfortunately, when this happens, instead of a 10-minute conversation reviewing four certified, passing ASV scans, we have to buy a few pizzas, cross our fingers, and review several external vulnerability scan reports in hopes that the customer can demonstrate they are scanning and remediating to meet the spirit and intent of requirement 11.2.2.

A Ruleset Only a Mother Could Love

We have seen some very ugly firewall rule sets. We understand that the business must be able to function and exists to make as large a profit as possible – not to sing the praises of PCI. But as QSAs, we do need to see six-month firewall and router rule set reviews and evidence that the rule sets are being maintained with good hygiene. Maintaining clean and healthy firewall rule sets is similar to a good exercise regimen. If your doctor gives you a daily exercise program to maintain your health and you follow it in a haphazard fashion, your doctor is not going to be able to give you a good health report at your next visit. Similarly, you need a solid program in place to make sure that your firewall rule sets remain healthy and only allow the outbound and inbound network traffic that is actually needed and authorized. And let’s face it: automation is needed for most organizations to manage their firewall and router rule sets effectively. Fortunately, there are several excellent solutions on the market for managing firewall and router rule sets. For example, these solutions can analyze your rule sets to find overlapping and redundant rules, rules that have not been used over the last X days, or rules that allow “any” access – a big PCI no-no. They can also provide the change control mechanisms needed to make sure that changes to firewall rule sets are reviewed and approved by authorized individuals and are properly documented, so that rule sets are closely and properly managed.
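As a toy illustration of the kind of analysis these commercial tools perform, the sketch below flags “any” rules and rules with no recent traffic hits. The rule records, field names, and 90-day threshold are hypothetical assumptions; real products parse vendor configurations and per-rule hit counters.

```python
from datetime import date

# Hypothetical, simplified rule records; real firewall-analysis products
# work from actual device configurations and logs instead.
rules = [
    {"name": "web-in",    "src": "0.0.0.0/0", "dst": "10.1.1.10", "port": "443", "last_hit": date(2013, 8, 30)},
    {"name": "temp-test", "src": "any",       "dst": "any",       "port": "any", "last_hit": date(2013, 1, 2)},
]

def review(rules, as_of, stale_days=90):
    findings = []
    for rule in rules:
        # Flag overly permissive rules - "any" access is a big PCI no-no.
        if "any" in (rule["src"], rule["dst"], rule["port"]):
            findings.append((rule["name"], "allows 'any' access"))
        # Flag rules with no traffic hits in the review window.
        if (as_of - rule["last_hit"]).days > stale_days:
            findings.append((rule["name"], f"unused for over {stale_days} days"))
    return findings

for name, issue in review(rules, date(2013, 9, 1)):
    print(f"{name}: {issue}")
```

A report like this is exactly the sort of evidence a QSA hopes to see attached to a six-month rule set review.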

“The Matrix”

To assist you with making sure that your security program is giving proper attention to specific PCI requirements, we are providing the following two lists. These can be used to create a matrix, review your security operations and to correct any gaps that you may uncover. List 1 covers the frequency with which tasks must be performed related to specific PCI DSS requirements. List 2 shows data retention periods tied to specific requirements. With a little planning, you can keep your PCI compliance on track at all times and avoid unpleasant surprises when your friendly QSA shows up for your next ROC assessment!

List 1 – Recurring PCI Compliance Tasks

1.1.6 – Review firewall and router rule sets (Every 6 Months)

3.1.1 – Automatic or manual process for identifying and securely deleting stored cardholder data (Quarterly)

6.1 – All system components and software are protected from known vulnerabilities (Monthly)

6.6 – Address new threats and vulnerabilities for public-facing web applications (At least annually and after any changes)

8.5.5 – Remove/disable inactive user accounts (Quarterly)

9.5 – Review security of backup media storage location (Annually)

9.9.1 – Properly maintain inventory logs of all media and conduct media inventories (Annually)

10.6 – Review logs for all system components (Daily)

11.1 – Test for the presence of wireless access points and detect unauthorized wireless access points (Quarterly)

11.2.1 – Perform internal vulnerability scans (Quarterly)

11.2.2 – Perform external vulnerability scans via an Approved Scanning Vendor (Quarterly)

11.2.3 – Perform internal and external scans (After any significant change)

11.3 – Perform external and internal penetration testing (At least once a year and after any significant infrastructure or application upgrade or modification)

11.5 – Deploy file-integrity monitoring tools and perform critical file comparisons (Weekly)

12.1.2 – Perform and document a formal risk assessment (Annually)

12.1.3 – Review security policy and update when the environment changes (Annually)

12.2 – Develop daily operational security procedures (Daily)

12.6.1 – Educate personnel (Upon hire and at least annually)

12.6.2 – Require personnel to acknowledge that they have read and understand the security policy and procedures (Annually)

12.8.4 – Maintain a program to monitor service providers’ PCI DSS compliance status (Annually)

List 2 – Data Retention Periods

9.1.1 – Store video camera and/or access control mechanism logs (3 months)

9.4 – Retain visitor logs (3 months)

10.7 – Retain audit trail history (1 year)
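One simple way to turn the two lists above into a working matrix is to record each task's frequency and last completion date, then compute what is overdue. The sketch below is illustrative only; the slice of tasks, the frequencies in days, and the dates are assumptions.

```python
from datetime import date, timedelta

# An illustrative slice of List 1 as a tracking matrix: requirement,
# frequency in days, and the date the task was last completed.
tasks = [
    ("1.1.6 firewall rule set review", 182, date(2013, 1, 10)),
    ("11.2.2 external ASV scan",        91, date(2013, 7, 1)),
    ("10.6 daily log review",            1, date(2013, 8, 31)),
]

def overdue(tasks, as_of):
    # A task is overdue once its last completion plus its frequency has passed.
    return [req for req, freq_days, last_done in tasks
            if last_done + timedelta(days=freq_days) < as_of]

print(overdue(tasks, date(2013, 9, 1)))  # → ['1.1.6 firewall rule set review']
```

Reviewing such a matrix in a weekly operations meeting is one way to keep recurring PCI tasks from falling through the cracks between ROC assessments.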

Service Provider Scoping Angst

By Patrick Harbauer

Over the past several months we have had many service providers come to us wringing their hands, wondering if they should go through the ROC process. They may offer cloud services or a managed security service, for example, and they keep running up against QSAs peppering them with questions while assessing one of the service provider’s customers. Or the sales cycle repeatedly hits roadblocks when a potential customer’s InfoSec department raises a red flag and wants to know what security controls the service provider has in place to protect its customers from a security breach.

I equate the dilemma that service providers face to going to the hardware store to buy an extension cord. If the hardware store carries extension cords from four manufacturers, and three of the manufacturers have “UL Listed” on their packaging while one has “Not UL Listed,” would anyone buy the “Not UL Listed” extension cord? Absolutely not!

So, if the service provider decides to engage us to pursue PCI compliance, they begin to wring their hands again when trying to determine what is in scope. Client employees, and sometimes even the individuals who hired us, ask, “Why are we going through this again? We don’t process, store, or transmit payment card information?!”

Again, I turn to the UL analogy. The extension cord manufacturer may wonder if they should certify the male end of the extension cord, the female end, or just the wire. They could spin their wheels for a long time trying to determine the scope. This is where service providers need to leverage their QSA’s expertise. Again, using my extension cord analogy (OK, so maybe it’s a little goofy, but it works for me!), UL would most likely say, “Mr. Customer, if we are going to put our logo on this product, we have to use our best judgment and examine all aspects of this extension cord to make sure that it is safe to use and won’t cause a fire that results in one or more deaths. If you are not willing to let us perform what, in our best judgment, is a thorough set of testing procedures on any aspect of your product that we deem necessary, we cannot certify your product.”

As QSAs assessing a service provider’s products and services for PCI compliance, we must be allowed to use our best judgment to determine which components of a service provider’s environment are in scope and which PCI requirements and testing procedures are applicable. It is our responsibility as QSAs to make sure that a merchant doesn’t “burn their business” by making the service provider’s solution part of their CDE, only to find out that the solution is not properly secured in the spirit of the PCI DSS.

PCI Surprises

By Patrick Harbauer

Whenever we perform a PCI assessment for a new client, we invariably have the Gomer Pyle “Surprise, surprise!” conversation with IT management. The outcome of the conversation is that IT security controls become more closely monitored and the overall security posture of the organization improves. Here are a few examples:

Swiss Cheese Firewalls – When we perform a PCI assessment, we take firewall rule set reviews very seriously. Besides finding the obvious “ANY ANY” rule infractions, we find rules that were meant to be “temporary for testing” or rules that are no longer needed because entire environments have been decommissioned. It isn’t uncommon to see 10-20% of the firewall rules removed or tightened to allow only the protocols, IPs, and ports that are actually required for the cardholder environment to function properly.

Missing Patches – Time and again we find in-scope systems that are not properly patched. This is usually due to overtaxed IT staff who can’t find the time to patch systems, or to a malfunctioning patching software solution. In some cases, administrators have been burned by a patch that brought down an application and vow never to patch again. What we usually find is that “practice makes perfect” with patching: organizations that are up to date on patches have well-defined processes and documented procedures for patching. And that leads us to our next issue…

One Environment – In many cases, organizations that are not up to date with their patches do not have isolated test/development and production environments. Besides it being a PCI violation to have test/development and production systems on the same network and/or servers, if you do not have a test environment that mirrors production, you are more likely to break production applications when you patch. You will be much more successful remaining current with patches if you have a test environment that mirrors production, where you can address issues before applying the patches to production systems.

These are just a few examples of what we see when performing PCI assessments for new clients, and they illustrate some of the benefits that come out of a PCI assessment.

Virtualization: When and where?

We often field questions from our clients regarding the risks associated with hypervisor / virtualization technology.  Ultimately the technology is still software, and still faces many of the same challenges any commercial software package faces, but there are definitely some areas worth noting.

The following thoughts are by no means a comprehensive overview of all issues, but they should provide the reader with a general foundation for thinking about virtualization-specific risks.

Generally speaking, virtual environments are not that different from physical environments.  They require much of the same care and feeding, but that’s the rub; most companies don’t do a good job of managing their physical environments, either.  Virtualization can simply make existing issues worse.

For example, if an organization doesn’t have a vulnerability management program that is effective at activities like asset identification, timely patching, maintaining the installed security technologies, change control, and system hardening, then the adoption of virtualization technology usually compounds the problem via increased “server sprawl.”  Systems become even easier to deploy, which leads to more systems not being properly managed.

We often see these challenges creep up in a few scenarios:

Testing environments – Teams can get a system up and running very quickly using existing hardware.  Easy and fast…but also dirty. They often don’t take the time to harden the system, bring it up to current patch levels, or install required security software.

Even in the scenarios where templates are used, major OS vendors like Microsoft and Red Hat release security fixes on a monthly basis, so a template even two months old is out of date.

Rapid deployment of “utility” servers – Systems that run back-office services like mail servers, print servers, file servers, DNS servers, etc.  Often, nobody really does much custom work on them, and because they can no longer be physically seen or “tripped over” in the data center, they sometimes fly under the radar.

Development environments – We often see virtualization technology making inroads into companies with developers that need to spin up and spin down environments quickly to save time and money.  The same challenges apply; if the systems aren’t maintained (and they often aren’t; developers are not usually known for their attention to system administration tasks), they present great targets for the would-be attacker.  It is even worse if the developers use sensitive data for testing purposes.  If properly isolated, there is less risk from what we’ve described above, but that isolation has to be well enforced and monitored to really mitigate these risks.

There are also risks associated with vulnerabilities in the technology itself.  The often-feared “guest breakout” scenario, where a virtual machine or “guest” is able to “break out” of its jail and take over the host (and therefore access data in any of the other guests), is a common one, although we haven’t heard of any real-world exploitations of these defects…yet.  (The vulnerabilities are, however, starting to become better understood.)

There are also concerns about hopping between security “zones” when it comes to compliance or data-segregation requirements.  For example, a physical environment typically has a firewall and other security controls between a web server and a database server.  In a virtual environment, if they share the same host hardware, you typically cannot put a firewall, intrusion detection device, or data leakage control between them.  This could violate control mandates found in standards such as PCI in a credit card environment.

Even assuming there are no vulnerabilities in the hypervisor technology that allow for evil network games between hosts, when you house two virtual machines/guests on the same hypervisor/host, you often lose visibility into the network traffic between them.  So if your security relies on restricting or monitoring at the network level, you no longer have that ability.  Some vendors are working on solutions for intra-host communication security, but they are not mature by any means.

Finally, the “many eggs in one basket” concern is still a factor; when you have 10, 20, 40, or more guest machines on a single piece of hardware, that’s a lot of potential systems going down should there be a problem.  While the virtualization software vendors certainly offer high-availability options such as VMware’s VMotion, redundant hardware, and the use of SANs, the cost and complexity add up fairly fast.  (And as we have seen from some rather nasty SAN failures over the past two months, SANs aren’t always as failsafe as we have been led to believe. You still have backups, right?)

While in some situations the benefits of virtualization technology far outweigh the risks, there are certainly situations where existing non-virtualized architectures are better. The trick is finding that line in the midst of the pell-mell rush toward virtualization.

–Tyler

Response to Visa’s Chief Enterprise Risk Officer comments on PCI DSS

Visa’s Chief Enterprise Risk Officer, Ellen Richey, presented at the Visa Security Summit on March 19th. One of the valuable points in her presentation was a defense of the value of implementing PCI DSS to protect against data theft. In addition, Ellen Richey spoke about the challenge organizations face in not only becoming compliant, but also proactively maintaining compliance, defending against attacks, and protecting sensitive information.

Recent compromises of payment processors and merchants that were stated to be PCI compliant have brought criticism to the PCI program. Our views are strongly aligned with those presented by Ellen Richey. While the current PCI program requires an annual audit, this audit is simply an annual health check. Consider the PCI audit like a state vehicle inspection: even though everything on your car checks out at the time of the inspection, this does not prevent your brake lights from going out days later. You would still have a valid inspection sticker, but you would no longer be in compliance with safety requirements. It is the owner’s responsibility to ensure the car is maintained appropriately. Similarly, in PCI, it is the company’s responsibility to ensure the effectiveness and maintenance of controls to protect its data in an ongoing manner.

Ellen Richey also mentioned increased collaboration among the payment card industry, merchants, and consumers. Collaboration is a key step to implementing the technology and processes necessary to continue reducing fraud and data theft. From a merchant, service provider, and payment processor perspective, new technologies and programs will continue to reduce transaction risk, but today there are areas where these organizations need to improve proactively. The PCI DSS standard provides guidance around the implementation of controls to protect data. In addition to protecting data, however, merchants, service providers, and processors need to proactively address their ability to detect attacks and be prepared to respond effectively in the event of a compromise. These are two areas that are not currently adequately addressed by the PCI DSS, and they are areas where we continue to see organizations lacking.

See the following link to the Remarks by Ellen Richey, Chief Enterprise Risk Officer, Visa Inc. at the Visa Security Summit, March 19, 2009:

http://www.corporate.visa.com/md/dl/documents/downloads/EllenRichey09SummitRemarks.pdf