The Payment Card Industry Security Standards Council (PCI SSC) has released a draft version of the Payment Card Industry Data Security Standard (PCI DSS) Version 3.0 to Qualified Security Assessor (QSA) companies.  The release was not made available to the general public, as PCI SSC is still reviewing feedback from QSA companies and other participating organizations before the final version is released in November 2013.  The new PCI DSS 3.0 compliance requirements will go into effect for all companies on January 1, 2015, although a few requirements defined in PCI DSS 3.0 will not be required for compliance until July 1, 2015.

Sign: Changes Ahead

PCI SSC has categorized the changes to the standard into three types: clarifications, additional guidance and evolving requirements.  PCI SSC defines a clarification as a means of making the intent of a particular requirement clear to QSAs and to entities that must comply with PCI DSS.  Additional guidance provides an explanation, definition and/or instructions to increase understanding or provide further information on a particular topic being assessed.  Evolving requirements, which are for the most part new requirements, are changes to ensure that the standard stays current with emerging threats and changes in the market.

In total, 99 changes are proposed across the three types, and they are nicely laid out in the document “PCI DSS 3.0 Summary of Changes” provided by the council.  The majority of changes, 71 to be exact, are clarifications, followed by 23 evolving requirements and 5 changes that fall under the “additional guidance” category.  This blog post provides a high-level overview of the proposed changes in the draft PCI DSS V3.0 and is not specific to particular requirements.  Neohapsis will be releasing a series of blog posts dedicated to PCI DSS V3.0 that will explore individual requirement changes in greater detail.

So, enough setup; here are some key takeaways from the high-level changes in PCI DSS Version 3.0.

Scope of PCI DSS Requirements

PCI SSC added some clarification around the responsibilities for defining scope, for both the entity seeking compliance and the QSA performing the validation.  PCI SSC did a great job of clarifying the scoping process by providing examples of system components that should be included as part of the scope for a ROC assessment.  In addition to system components, the scoping guidance section contains requirements for capturing all purchased or custom-developed applications that are used within the cardholder data environment (CDE).  Compared to PCI DSS Version 2.0, this section has been broken out to be more descriptive and to present the details more clearly, helping entities better understand the scoping process.

Use of Third-Party Service Providers / Outsourcing

The use of third-party service providers to support an entity’s cardholder data environment (CDE), and the related PCI DSS validation requirements, have not changed drastically.  All entities that fall under PCI DSS are still required to validate that each third-party service provider is PCI DSS compliant, either by obtaining a copy of the provider’s attestation of compliance (AOC) or by having their QSA assess the provider’s compliance status against the relevant PCI DSS requirements.  However, the draft of PCI DSS Version 3.0 provides examples, such as advising entities seeking PCI compliance to validate the IP addresses used for quarterly scans if one or more shared hosting providers are in scope for their CDE.  Furthermore, entities seeking compliance are advised to work with service providers to ensure that contractual language between the two parties clearly states PCI DSS compliance responsibilities down to the requirement level.

Business-As-Usual (BAU)

The biggest change I see in PCI DSS 3.0 is the emphasis on making PCI DSS controls part of “business-as-usual” (BAU), as opposed to a point-in-time assessment. To get this message across, the draft version of PCI DSS 3.0 provides several best-practice examples for making PCI a part of BAU. These best practices are not requirements, but the council wants to encourage organizations to adopt them. The goal of this change in thinking is to get businesses to implement PCI DSS as part of their overall security strategy and daily operations.  The council is stressing that its compliance standard is not a set-it-and-forget-it exercise.

Some important areas in this new mentality focus on security processes.  For example, entities should validate on their own that controls are implemented effectively for applicable business processes and related technologies.  Specific examples include antivirus definitions, logging, vulnerability signatures and ensuring that only appropriate services are enabled on systems.  With this new mentality, entities should take corrective action when compliance gaps are identified, so that PCI DSS compliance is maintained at all times, rather than waiting until their QSA comes to validate compliance. When gaps are identified, implementation of compensating or mitigating controls may be necessary and should not be left open until the start of a QSA assessment.

Lastly, with regard to BAU, the company seeking PCI DSS compliance should continue to have an open dialog with their QSA throughout the year.  Good QSA companies will work with their clients to help ensure the right choices are being made between ROC assessments.

Until Next Post

This is just one of multiple blog posts about upcoming changes in PCI DSS Version 3.0.  Neohapsis will be releasing additional posts on more specific PCI DSS 3.0 requirements in the near future.  If you are required to be PCI DSS compliant, I recommend talking with your QSA company to start planning and preparing for PCI DSS 3.0.  You are also welcome to reach out to Neohapsis, as we have multiple QSAs, seasoned professionals who can address your PCI DSS and PCI PA-DSS compliance questions.

Set and Don’t Forget

By Patrick Harbauer, Neohapsis Senior Security Consultant and PCI Technical Lead

There are several PCI DSS requirements related to tasks that must be performed on a regular basis. The frequency of these tasks varies from daily to annual. There are also a few requirements that make it important to have PCI DSS compliant data retention policies and procedures in place. An example of a requirement that calls for a task to be performed periodically is requirement 11.2.2: Perform quarterly external vulnerability scans via an Approved Scanning Vendor (ASV). An example of a requirement that calls for compliant data retention policies and procedures is requirement 9.4: Use a visitor log to maintain a physical audit trail of visitor activity. Retain this log for a minimum of three months, unless otherwise restricted by law. If processes or checklists are not in place to track your compliance with these recurring tasks, you may be in for an unpleasant surprise during your next annual ROC assessment.

Are You Certifiable?

11.2.2 is one of the classic requirements where we see this happen all too often. When we ask a customer if we can review the certified, passing ASV scans from the last four quarters and we get a response such as, “Oops, Susie was responsible for that and she was reassigned to a different department…” we stick our fingers in our ears and say “la la la la,” but that hasn’t ever made the problem go away. Unfortunately, when this happens, instead of a 10-minute conversation reviewing four certified, passing ASV scans, we have to buy a few pizzas, cross our fingers and review several external vulnerability scan reports in hopes that the customer can demonstrate they are scanning and remediating to meet the spirit and intent of requirement 11.2.2.

A Ruleset Only a Mother Could Love

We have seen some very ugly firewall rule sets. We do understand that the business must be able to function and exists to make as large a profit as possible – not to sing the praises of PCI. But as QSAs, we do need to see six-month firewall and router rule set reviews and evidence that the rule sets are being maintained with good hygiene. Maintaining clean and healthy firewall rule sets is similar to a good exercise regimen. If your doctor gives you a daily exercise program to maintain your health and you follow it in a haphazard fashion, your doctor is not going to be able to give you a good health report at your next visit. Similarly, you need a solid program in place to make sure that your firewall rule sets remain healthy and only allow the outbound and inbound network traffic that is actually needed and authorized. And let’s face it, most organizations need automation to manage their firewall and router rule sets effectively. Fortunately, there are several excellent solutions on the market for managing firewall and router rule sets. For example, these solutions can analyze your rule sets to find overlapping and redundant rules, rules that have not been used over the last X days, or rules that allow “any” access – a big PCI no-no. They can also provide the change control mechanisms needed to make sure that changes to firewall rule sets are reviewed and approved by authorized individuals and are properly documented, so that rule sets are closely and properly managed.
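To make the automated-analysis point concrete, here is a minimal sketch of the kind of checks such tools perform. The rule format, field names and 90-day threshold are hypothetical – real firewalls export vendor-specific configurations – but the idea of flagging “any” rules and long-unused rules is the same:

```python
# Hypothetical, simplified rule records; real firewall exports are
# vendor-specific, so treat this purely as a sketch of the analysis idea.
RULES = [
    {"id": 1, "src": "10.1.1.0/24", "dst": "10.2.2.10", "port": "443", "last_hit_days": 3},
    {"id": 2, "src": "any", "dst": "any", "port": "any", "last_hit_days": 0},
    {"id": 3, "src": "10.1.1.0/24", "dst": "10.9.9.9", "port": "22", "last_hit_days": 400},
]

def flag_rules(rules, stale_after_days=90):
    """Flag rules that allow 'any' access (a big PCI no-no) or appear unused."""
    findings = []
    for rule in rules:
        if "any" in (rule["src"], rule["dst"], rule["port"]):
            findings.append((rule["id"], "allows 'any' access"))
        if rule["last_hit_days"] > stale_after_days:
            findings.append((rule["id"], "unused for over %d days" % stale_after_days))
    return findings

for rule_id, reason in flag_rules(RULES):
    print("Rule %d: %s" % (rule_id, reason))
```

Commercial tools do far more (redundancy analysis, change-control workflow), but even a check this simple would catch the two most common findings we write up.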

“The Matrix”

To assist you with making sure that your security program is giving proper attention to specific PCI requirements, we are providing the following two lists. These can be used to create a matrix, review your security operations and to correct any gaps that you may uncover. List 1 covers the frequency with which tasks must be performed related to specific PCI DSS requirements. List 2 shows data retention periods tied to specific requirements. With a little planning, you can keep your PCI compliance on track at all times and avoid unpleasant surprises when your friendly QSA shows up for your next ROC assessment!

List 1 – Recurring PCI Compliance Tasks

1.1.6 – Review firewall and router rule sets (Every 6 Months)

3.1.1 – Automatic or manual process for identifying and securely deleting stored cardholder data (Quarterly)

6.1 – All system components and software are protected from known vulnerabilities (Monthly)

6.6 – Address new threats and vulnerabilities for public-facing web applications (At least annually and after any changes)

8.5.5 – Remove/disable inactive user accounts (Quarterly)

9.5 – Review security of backup media storage location (Annually)

9.9.1 – Properly maintain inventory logs of all media and conduct media inventories (Annually)

10.6 – Review logs for all system components (Daily)

11.1 – Test for the presence of wireless access points and detect unauthorized wireless access points (Quarterly)

11.2.1 – Perform internal vulnerability scans (Quarterly)

11.2.2 – Perform external vulnerability scans via an Approved Scanning Vendor (Quarterly)

11.2.3 – Perform internal and external scans (After any significant change)

11.3 – Perform external and internal penetration testing (At least once a year and after any significant infrastructure or application upgrade or modification)

11.5 – Deploy file-integrity monitoring tools and perform critical file comparisons (Weekly)

12.1.2 – Perform and document a formal risk assessment (Annually)

12.1.3 – Review security policy and update when the environment changes (Annually)

12.2 – Develop daily operational security procedures (Daily)

12.6.1 – Educate personnel (Upon hire and at least annually)

12.6.2 – Require personnel to acknowledge that they have read and understand the security policy and procedures (Annually)

12.8.4 – Maintain a program to monitor service providers’ PCI DSS compliance status (Annually)

List 2 – Data Retention Periods

9.1.1 – Store video camera and/or controls mechanism log (3 months)

9.4 – Retain visitor logs (3 months)

10.7 – Retain audit trail history (1 year)
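One way to put these lists to work is a simple tracking matrix that records when each recurring task was last completed and flags anything overdue. The sketch below uses an illustrative subset of List 1, with approximate day counts standing in for each frequency; it is not an official mapping:

```python
from datetime import date

# Illustrative subset of List 1: requirement -> maximum days between completions.
TASKS = {
    "1.1.6 firewall/router rule set review": 182,  # every 6 months
    "10.6 log review": 1,                          # daily
    "11.2.1 internal vulnerability scan": 91,      # quarterly
    "11.2.2 external ASV scan": 91,                # quarterly
    "12.1.2 formal risk assessment": 365,          # annually
}

def overdue_tasks(last_completed, today):
    """Return tasks that were never performed or were performed too long ago."""
    late = []
    for task, max_days in TASKS.items():
        done = last_completed.get(task)
        if done is None or (today - done).days > max_days:
            late.append(task)
    return sorted(late)

# Hypothetical completion history for a demo run.
history = {
    "11.2.2 external ASV scan": date(2013, 6, 1),
    "1.1.6 firewall/router rule set review": date(2012, 11, 1),
}
for task in overdue_tasks(history, date(2013, 7, 1)):
    print("OVERDUE:", task)
```

Anything flagged here is exactly the kind of gap that turns into an unpleasant surprise at the next ROC assessment; reviewing the matrix monthly keeps the list empty.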

Who owns and regulates MY Facebook data?

My previous post briefly described the data that makes up a user’s Facebook data; this post will try to shed light on who owns and regulates that data.

I am probably not going out on a limb to say that the majority of Facebook’s registered users have not read the privacy statement. I was like most users myself, in that I did not fully read Facebook’s privacy statement upon signing up for the service. Facebook created a social media network online at a time when few requirements had been defined for this type of business, in America or anywhere else. A lack of rules, combined with users constantly uploading more data, has allowed Facebook to maximize the use of your data and build a behemoth of a social media networking business.

Over time, Facebook has added features that allow users to self-regulate their data by limiting others (whether Facebook users or the general Internet public) from viewing certain data that one might want to share with only family or specific friends. This gave users a sense of ownership and privacy, as the creator of the data could block or restrict friends and search providers from viewing it. Zuckerberg is even quoted by the WSJ as saying, “The power here is that people have information they don’t want to share with everyone. If you give people very tight control over what information they are sharing or who they are sharing with they will actually share more. One example is that one third of our users share their cell phone number on the site.”

In addition to privacy controls, Facebook gave users more insight into their data through a feature that lets a user download ‘all’ their data via a button in the account settings. I placed ‘all’ in quotes because, while you could download your Facebook profile data, this did not include wall comments, links, information tagged by other Facebook users or any other data you created during your Facebook experience. Together, privacy controls and data export have been the main forms of control that Facebook has given its users over profile, picture, note, link, tag and comment data since Facebook went live in 2004.

So now you might be thinking the problem is solved: restricting your privacy settings and downloading ‘all’ your information fixes everything. Well, I wish that were the case with Facebook’s business operations. An open letter from ten security professionals to the US Congress highlighted that this was simply not how Facebook and third-party Facebook developers operated. Facebook has reserved the right to change its privacy statement at any time with no notice to the user, and has done so a few times, to an uproar from its user base. As Facebook has grown in popularity and company footprint, security professionals along with media outlets have started publishing security studies painting Facebook in a darker light.

As highlighted by the US Federal Trade Commission (FTC) in late 2011, Facebook was not respecting users’ privacy when sharing information with advertisers or when automatically enabling contradictory privacy settings on new services.  Facebook settled with the FTC over charges of deceiving users by telling them they could keep their data private.  From my perspective, it appears that Facebook is willing to compromise its users’ privacy when doing so suits the interests of shareholders and business revenue.

In another privacy mishap, an Austrian student found that Facebook was storing user details even after a user deactivated the service. This started the “Europe versus Facebook” initiative, which put heat on Facebook to give more details on how long data was retained for current and deactivated users.  Holding on to user data is lucrative for Facebook, as it allows the company to claim more users when selling to advertising subscribers, as well as to promote the total user base for private investors’ bottom lines.

So the next question one might ask is, “who regulates my data held by social media companies?” Summed up quickly: today, no one outside Facebook is regulating your data, and users are given little insight into the process. The US government, along with the European Union, is looking at means of regulating Facebook’s operations through mechanisms such as data privacy regulations and the US-EU Safe Harbor framework.  With Facebook announcing an initial public offering of five billion USD, more regulation, at least financially, is likely to hit Facebook in the future.

As an outcome of the December 2011 investigation, Facebook has agreed to independent audits by third parties, presumably of its choosing. I have not been able to identify details regarding the subject of these audits or the ramifications of any findings. Facebook has also updated its public statement and its communication to developers, and now states that deactivated users will have their accounts deleted after 30 days. I have yet to see a change in Facebook’s operations with respect to honoring users’ privacy settings where third parties and other outside entities are concerned – in fairness, Facebook insists data is not directly shared for advertising, although some British folks may disagree with Facebook’s claims of advertising privacy.

From an information security perspective, my ‘free’ advice to businesses, developers and end users: do not access or collect more data than is necessary for the user experience, as this only brings trouble in the long run. While I would like to give Facebook the benefit of the doubt, I personally only share data that I am comfortable sharing with the world, even when it is limited to friends.  In global business, data privacy regulations vary significantly between countries; with regulations come requirements, and failing requirements results in fines, so businesses need to think about accessing only appropriate information and restricting access accordingly.  For the end user – or rather, Facebook’s product – remember that Facebook can change its privacy statement at its leisure, and Facebook is ultimately a business with stakeholders who are eager to see quarter-after-quarter growth.

I hope this post has been insightful to you; please check back soon for my future post on how your Facebook data is being used and the different entities that want to access your data.

Service Provider Scoping Angst

By Patrick Harbauer

Over the past several months, we have had many service providers come to us wringing their hands, wondering if they should go through the ROC process. They may offer cloud services or a managed security service, for example, and they keep running up against QSAs peppering them with questions when the QSA is assessing one of the service provider’s customers. Or the sales cycle repeatedly hits roadblocks when a potential customer’s InfoSec department raises a red flag and wants to know what security controls the service provider has in place to protect its customers from a security breach.

I equate the dilemma that service providers face to going to the hardware store to buy an extension cord. If the hardware store carries extension cords from four manufacturers, three of the manufacturers have “UL Listed” on their packaging and one has “Not UL Listed,” would anyone buy the “Not UL Listed” extension cord? Absolutely not!

So, if the service provider decides to engage us to pursue PCI compliance, they begin to wring their hands again when trying to determine what is in scope. Client employees, and sometimes even the individuals who hired us, ask, “Why are we going through this again? We don’t process, store or transmit payment card information!”

Again, I turn to the UL analogy. The extension cord manufacturer may wonder if they should certify the male end of the extension cord, the female end or just the wire. They could spin their wheels for a long time trying to determine what the scope should be. This is where service providers need to leverage their QSA’s expertise. Again, using my extension cord analogy (OK, so maybe it’s a little goofy, but it works for me!), UL would most likely say, “Mr. Customer, if we are going to put our logo on this product, we have to use our best judgment and examine all aspects of this extension cord to make sure that it is safe to use and won’t cause a fire that results in one or more deaths. If you are not willing to let us perform what, in our best judgment, is a thorough set of testing procedures on any aspect of your product that we deem necessary, we cannot certify your product.”

As QSAs assessing a service provider’s products and services for PCI compliance, we must be allowed to use our best judgment to determine which components of a service provider’s environment are in scope and which PCI requirements and testing procedures are applicable. It is our responsibility as QSAs to make sure that a merchant doesn’t “burn their business” by making the service provider’s solution a part of their CDE, only to find out that the solution is not properly secured in the spirit of the PCI DSS.

PCI Surprises

By Patrick Harbauer

Whenever we perform a PCI assessment for a new client, we invariably have the Gomer Pyle “Surprise, surprise!” conversation with IT management. And the outcome of the conversation is that IT security controls are more closely monitored and the overall security posture of the organization improves. Here are a few examples:

Swiss Cheese Firewalls – When we perform a PCI assessment, we take firewall rule set reviews very seriously. Besides finding the obvious “ANY ANY” rule infractions, we find rules that were meant to be “temporary for testing” or rules that are no longer needed because entire environments have been decommissioned. It isn’t uncommon to see 10-20% of the firewall rules removed or tightened to allow only the protocols, IPs and ports that are actually required for the cardholder environment to function properly.

Missing Patches – Time and again we find in-scope systems that are not properly patched. This is usually due to overtaxed IT staff who can’t find the time to patch systems, or to a malfunctioning patching solution. And in some cases, administrators have been burned by a patch that brought down an application and vow never to patch again. What we usually find is that “practice makes perfect” with patching. Organizations that are up to date on patches have well-defined processes and documented procedures for patching. And that leads us to our next issue…

One Environment – In many cases, organizations that are not up-to-date with their patches do not have isolated test/development and production environments. Besides being a PCI violation to have test/development and production systems on the same network and/or servers, if you do not have a test environment that mirrors production, you are more likely to break production applications when you patch. You will be much more successful remaining current with patches if you have a test environment that mirrors production and where you can address issues before applying the patches to production systems.

These are just a few examples of what we see when performing PCI assessments for new clients, and they illustrate some of the benefits that come out of a PCI assessment.

Size is a Factor

How do you protect sensitive data and networks?  The approach you take tends to depend a lot on “size.”  For most organizations, their “size” is simply measured by sales and revenue.  For organizations processing credit cards, the “size” can be defined by the number of credit card transactions they process.  No matter what measuring stick one uses, the larger the “size” of a corporation, the more information and assets it has to protect.

The size of an entity makes decisions about what to protect more complicated.  Most small organizations know the assets and information they need to protect, but often simply aren’t aware of – or simply disregard – regulations specifying minimum security controls that must be in place (e.g., PCI requirements).  Large corporations tend to have the opposite problem.  They are most often well aware of the regulations that apply to them, but don’t have a clue how to plan strategically for complying with those regulations or requirements.

In a large and/or complex environment, the information and assets of an entity become exponentially more difficult to protect.  New regulations and/or requirements stipulate compliance with specific security requirements.  For large organizations and corporations that “grew up” without these stipulations, bringing themselves into compliance can be a very daunting task.  One good example is identity management.  Compliance with section 404 of Sarbanes-Oxley in the context of user provisioning, authentication, and access control can be extremely difficult for large organizations.  Legacy systems, lack of a standardized password policy, customized provisioning systems, and inefficient (yet heavily utilized) manual processes are only a few of the major challenges an enterprise may face.

Another good example is PCI compliance.  Large organizations are charged with implementing very specific controls to protect credit card data.  For a large enterprise that evolved without these requirements, this can be a major challenge.  Even determining where all the credit card data resides is often a herculean task, let alone architecting and implementing controls to bring the corporation into compliance.

Smaller companies face different sets of challenges.  In the case of PCI requirements, many smaller corporations are out of PCI compliance and don’t even know it until they get a threatening letter in the mail from one of the major payment card brands.  Once they get such a letter, it turns into a scramble to a) figure out what PCI compliance actually is, b) figure out how to comply and c) implement the controls to ensure compliance.  A smaller corporation may have an easier time determining PCI in-scope systems and environments, but often faces issues that large enterprises aren’t as concerned with.  These issues almost always involve budget and resource constraints.

Bottom line: the challenges an entity faces when attempting to protect its data and networks depend a lot on size – regardless of how it’s measured.  Larger corporations have more resources to direct at securing their data, but may have a much more difficult time implementing solutions.  Smaller organizations are much more agile, but often simply don’t have the knowledge or resources to get the job done.


Virtualization: When and where?

We often field questions from our clients regarding the risks associated with hypervisor / virtualization technology.  Ultimately the technology is still software, and still faces many of the same challenges any commercial software package faces, but there are definitely some areas worth noting.

The following thoughts are by no means a comprehensive overview of all issues, but they should provide the reader with a general foundation for thinking about virtualization-specific risks.

Generally speaking, virtual environments are not that different from physical environments.  They require much of the same care and feeding, but that’s the rub: most companies don’t do a good job of managing their physical environments, either.  Virtualization can simply make existing issues worse.

For example, if an organization doesn’t have a vulnerability management program that is effective at activities like asset identification, timely patching, maintaining the installed security technologies, change control, and system hardening, then the adoption of virtualization technology usually compounds the problem via increased “server sprawl.”  Systems become even easier to deploy, which leads to more systems not being properly managed.

We often see these challenges creep up in a few scenarios:

Testing environments – Teams can get a system up and running very quickly using existing hardware.  Easy and fast…but also dirty. They often don’t take the time to harden the system, bring it up to current patch levels or install required security software.

Even in scenarios where templates are used, the templates go stale quickly: with major OS vendors like Microsoft and Red Hat releasing security fixes on a monthly basis, even a template two months old is out of date.

Rapid deployment of “utility” servers – Systems that run back-office services like mail, print, file and DNS servers.  Often, nobody does much custom work on them, and because they can no longer be physically seen or “tripped over” in the data center, they sometimes fly under the radar.

Development environments – We often see virtualization technology making inroads into companies with developers who need to spin up and spin down environments quickly to save time and money.  The same challenges apply; if the systems aren’t maintained (and they often aren’t – developers aren’t usually known for their attention to system administration tasks), they present great targets for a would-be attacker.  Even worse if the developers use sensitive data for testing purposes.  If properly isolated, there is less risk from what we’ve described above, but that isolation has to be well enforced and monitored to really mitigate these risks.
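The template-staleness point above is easy to check mechanically. Here is a minimal sketch; the template names, rebuild dates and 30-day threshold (mirroring the monthly vendor fix cycle) are all illustrative:

```python
from datetime import date

PATCH_CYCLE_DAYS = 30  # vendors ship security fixes roughly monthly

def stale_templates(templates, today):
    """Return names of VM templates last rebuilt more than one patch cycle ago."""
    return sorted(name for name, rebuilt in templates.items()
                  if (today - rebuilt).days > PATCH_CYCLE_DAYS)

# Hypothetical template inventory: name -> date of last rebuild.
inventory = {
    "win2008-base": date(2013, 3, 1),
    "rhel6-web": date(2013, 6, 20),
}
print(stale_templates(inventory, date(2013, 7, 1)))
```

Running a check like this on a schedule turns “our templates are probably current” into something the team can actually verify.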

There are also risks associated with vulnerabilities in the technology itself.  The often-feared “guest breakout” scenario, where a virtual machine or “guest” is able to break out of its jail and take over the host (and therefore access data in any of the other guests), is a common concern, although we haven’t heard of any real-world exploitation of these defects…yet.  (The vulnerabilities are, however, starting to become better understood.)

There are also concerns about hopping between security “zones” when it comes to compliance or data segregation requirements.  For example, a physical environment typically has a firewall and other security controls between a web server and a database server.  In a virtual environment, if the two servers share the same host hardware, you typically cannot put a firewall, intrusion detection device or data leakage control between them.  This could violate control mandates found in standards such as PCI in a credit card environment.

Even assuming there are no vulnerabilities in the hypervisor technology that allow for evil network games between hosts, when you house two virtual machines/guests on the same hypervisor/host, you often lose visibility into the network traffic between them.  So if your security relies on restricting or monitoring at the network level, you no longer have that ability.  Some vendors are working on solutions for intra-host communication security, but these are not mature by any means.

Finally, the “many eggs in one basket” concern is still a factor; when you have 10, 20, 40 or more guest machines on a single piece of hardware, that’s a lot of systems going down should there be a problem.  While the virtualization software vendors certainly offer high-availability options with technology such as VMware’s VMotion, redundant hardware, the use of SANs, etc., the cost and complexity add up fairly fast.  (And as we have seen from some rather nasty SAN failures over the past two months, SANs aren’t always as failsafe as we have been led to believe. You still have backups, right?)

While in some situations the benefits of virtualization technology far outweigh the risks, there are certainly situations where existing non-virtualized architectures are better. The trick is finding that line in the midst of the pell-mell rush toward virtualization.