Putting Windows XP Out To Pasture

Farewell, Windows XP! We hated you, then loved you, and soon we’ll hate you again.

 (This post is a resource for home and small-business users with questions about the impending end-of-life for Windows XP. Larger enterprise users have some different options available to them; contact us to discuss your situation and options.)

For those who haven’t seen it in the news yet: Microsoft will be ending support for its hugely successful operating system, Windows XP, on April 8th. This means that users of the 12-year-old operating system will no longer be able to get updates, and in particular will not be able to get security updates. Users of more modern versions of Windows, such as Windows Vista or Windows 7, will remain supported for several more years.

Once support ends, computers still on Windows XP will become a very juicy target for Internet criminals and attackers. Internet crime is big business, so every day there are criminals looking for new weaknesses in computer systems (called vulnerabilities), and developing attacks to take advantage of them (these attacks are called exploits). Normally, the software vendor (Microsoft in this case) quickly finds out about these weaknesses and releases updates to fix them. When an exploit is developed, some number of people fall victim shortly after the exploit is first used, but people who get the update in a relatively timely manner are protected.

But what happens when a vendor stops updating the software? All of a sudden, the bad guys can use these same attacks, the same exploits, indefinitely. As a product nears end of life, attackers have an incentive to hold off on using critical vulnerabilities until the deadline passes. The value of an exploit goes up significantly once attackers have confidence that the vendor will never patch the underlying vulnerability. Based on that, we can expect a period of relative quiet in terms of announced vulnerabilities affecting XP from now until shortly after the deadline, when we will likely see stockpiled critical vulnerabilities begin circulating. From then on, the risk to these legacy XP systems will continue to increase, so migrating away from XP or dramatically isolating the systems should be a priority for people or organizations that still use them.

How do I know if I’m running Windows XP?

  • If your computer is more than 5 years old, odds are it is running Windows XP
  • Simplest way: “Win+Break”: Press and hold down the Windows key on your keyboard, then find the “Pause” or “Break” key and press it. Let both keys go. That will show the System Properties window. You may have to hunt around for your “Pause/Break” key, but hey, it finally has a use.
  • Alternate way: Click the Start Menu -> Right-click “My Computer” -> In the menu that appears, click Properties

 

[Image: Click the Start Menu, then right-click My Computer, then click Properties.]

 

[Image: Your version of Windows will be the first thing on the System Properties window.]

 

How do I stay safe?

Really, you should think about buying a new computer. You can think of it as a once-a-decade spring cleaning. If your computer is old enough to have Windows XP, the unsupported OS is likely just one of several problems. It is possible to upgrade your old computer to a newer operating system such as Windows 7, or convert it to a free Linux-based operating system, but this may be a more complicated undertaking than many users want to tackle.

Any computer you buy these days will be a huge step up from a 7-year-old (at least!) machine running XP, so you can comfortably shop the cheapest lines of computers. New computers can be found for $300, and it’s also possible to buy reputable refurbished ones with a modern operating system for $100-$200.

For those who really don’t want to or can’t upgrade, the situation isn’t pretty. Your computer will continue to work as it always has, but the security of your system and your data is entirely in your hands. These systems have been low-hanging fruit for attackers for a long time, but after April 8th they will have a giant neon bull’s-eye on them.

There are a few things you can do to reduce your risks, but there really is no substitute for timely vendor patches.

  1. Only use the system for tasks that can’t be done elsewhere. If the reason for keeping an XP machine is to run some specific program or piece of hardware, then use it only for that. In particular, avoid web browsing and email on the unsupported machine: both activities expose the vulnerable system to lots of untrusted input.
  2. Keep all of your other software up to date. Install and use the latest version of the Firefox or Chrome web browser; these browsers won’t be affected by Microsoft’s end of life.
  3. Back up your computer. There are many online backup services available for less than $5 a month. If something goes wrong, you want to make sure that your data is safe. Good online backup services provide a “set it and forget it” peace of mind. This is probably the single most important thing you can do, and should be a priority even for folks using a supported operating system. Backblaze, CrashPlan, and SpiderOak are all reasonable choices for home users.
  4. Run antivirus software, and keep it up to date. AVAST, AVG, and Bitdefender are all reasonable free options, but be aware that antivirus is only a layer of protection: it’s not perfect.

 

What Kickstarter Did Right

Only a few details have emerged about the recent breach at Kickstarter, but it appears that this one will be a case study in doing things right both before and after the breach.

What Kickstarter has done right:

  • Timely notification
  • Clear messaging
  • Limited sensitive data retention
  • Proper password handling

Timely notification

The hours and days after a breach is discovered are incredibly hectic, and there will be powerful voices both attempting to delay public announcement and attempting to rush it. When users’ information may be at risk beyond the immediate breach, organizations should strive to make an announcement as soon as it will do more good than harm. An initial public announcement doesn’t have to have all the answers; it just needs to give users an idea of how they are affected, and what they can do about it. While it may be tempting to wait for full details, an organization that shows transparency in the early stages of a developing story is going to have more credibility as the story goes on.

Clear messaging

Kickstarter explained in clear terms what was and was not affected, and gave straightforward actions for users to follow as a result. The logging and access control groundwork for making these strong, clear statements at the time of a breach needs to be laid far in advance and thoroughly tested. Live penetration testing exercises with detailed post-mortems can help companies decide whether their systems will be able to capture this critical data.

Limited sensitive data retention

One of the first questions in any breach is “what did they get?”, and the data handling policies in place before a breach are going to have a huge impact on the answer. Thinking far in advance about how we would like to be able to answer that question can be a driver for getting those policies in place. Kickstarter reported that they do not store full credit card numbers, a choice that is certainly saving them some headaches right now. Not all businesses have quite that luxury, but thinking in general about how to reduce the retention of sensitive data that’s not actively used can reduce both the cost of protecting it and the chance of exposure over the long term.

Proper password handling (mostly)

Kickstarter appears to have done a pretty good job in handling user passwords, though not perfect. Password reuse across different websites continues to be one of the most significant threats to users, and a breach like this can often lead to ripple effects against users if attackers are able to obtain account passwords.

In order to protect against this, user passwords should always be stored in a hashed form, a representation that allows a server to verify that a correct password has been provided without ever actually storing the plaintext password. Kickstarter reported that their “passwords were uniquely salted and digested with SHA-1 multiple times. More recent passwords are hashed with bcrypt.” When reading breach reports, the level of detail shared by the organization is often telling, and these details show that Kickstarter did their homework beforehand.

A strong password hashing scheme must protect against the two main approaches that attackers can use: hash cracking and rainbow tables. The details of these approaches have been well-covered elsewhere, so we can focus on what Kickstarter used to make their users’ hashes more resistant to these attacks.

To resist hash cracking, defenders want to massively increase the amount of work an attacker has to do to check each possible password. The problem with hash algorithms like SHA-1 and MD5 is that they are too efficient; they were designed to complete in as few CPU cycles as possible. We want the opposite from a password hash function, so that it is reasonable to check a few possible passwords in normal use but computationally ridiculous to try out large numbers of possible passwords during cracking. Kickstarter indicated that they used “multiple” iterations of the SHA-1 hash, which multiplies the attacker effort required for each guess (so 5 iterations of hashing means 5 times more effort). Ideally, we like to see a hashing attempt take at least 100 ms, which is a trivial delay during a legitimate login but makes large-scale hash cracking essentially infeasible. Unfortunately, SHA-1 is so efficient that it would take more than 100,000 iterations to raise the effort to that level. While Kickstarter probably didn’t get to that level (it’s safe to assume they would have said so if they did), their use of multiple iterations of SHA-1 is an improvement over many practices we see.
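
As a rough illustration of the work-factor idea, here is a minimal sketch of salted, iterated hashing (illustrative only; this is not Kickstarter’s actual code, which hasn’t been published beyond the quote above):

    import hashlib
    import os
    import time

    def hash_password(password: bytes, salt: bytes, iterations: int) -> bytes:
        """Salted, iterated SHA-1; illustrative only, prefer bcrypt/scrypt/PBKDF2."""
        digest = salt + password
        for _ in range(iterations):
            digest = hashlib.sha1(digest).digest()
        return digest

    salt = os.urandom(16)  # long, random, and unique per password
    start = time.time()
    hash_password(b"correct horse battery staple", salt, 100_000)
    print(f"100,000 iterations took {time.time() - start:.3f}s per guess")

Every extra iteration costs the attacker’s guessing loop exactly as much as it costs a legitimate login, which is precisely the trade we want.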

To resist rainbow tables, it is important to use a long, random, unique salt for each password. Salting passwords removes the ability of attackers to simply look up hashes in precomputed rainbow tables. Using a random, unique salt on each password also means that an attacker has to perform cracking on each password individually; even if two users have an identical password, it would be impossible to tell from the hashes. There’s no word yet on the length of the salt, but Kickstarter appears to have gotten the random and unique parts right.
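
Continuing the sketch above (reusing its imports and hash_password), unique salts mean that even identical passwords leave no trace of their equality in the stored hashes, so precomputed tables are useless and each hash must be cracked separately:

    # Two users pick the same password, but their stored hashes differ
    salt_a, salt_b = os.urandom(16), os.urandom(16)
    hash_a = hash_password(b"hunter2", salt_a, 1000)
    hash_b = hash_password(b"hunter2", salt_b, 1000)
    assert hash_a != hash_b  # an attacker can't even tell the passwords match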

Finally, Kickstarter’s move to bcrypt for more recent passwords is particularly encouraging. Bcrypt is a modern key derivation function specifically designed for storing password representations. It builds in the idea of strong unique salts and a scalable work factor, so that defenders can easily dial up the amount of computation required to try out a hash as computers get faster. Bcrypt and similar functions such as PBKDF2 and the newer scrypt (which adds memory requirements) are purpose-built to make it easy to get password handling right; they should be the go-to approach for all new development, and a high-priority change for any codebases still using MD5 or SHA-1.
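
For example, with the widely used bcrypt package for Python (a sketch; bindings in other languages look much the same):

    import bcrypt

    # The cost parameter scales the work factor: each increment doubles the computation
    hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

    # Verification re-derives the hash using the salt and cost embedded in the stored value
    if bcrypt.checkpw(b"correct horse battery staple", hashed):
        print("password accepted")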

On NTP distributed denial of service attacks

NTP, the Network Time Protocol, is a time synchronization protocol implemented on top of UDP. UDP is designed for simplicity and speed at the cost of delivery guarantees, which suits the inherent time-sensitivity (or more specifically, jitter sensitivity) of NTP. Time is an interesting case in computer security. Time isn’t exactly secret; it has relatively minor confidentiality considerations, but in certain uses it’s exceedingly important that multiple parties agree on the time: engineering, space technology, financial transactions, and the like.

At the bottom is a simple equation:

denial of service amplification = bytes out / bytes in

When you get to a ratio > 1, a protocol like NTP becomes attractive as a magnifier for denial of service traffic.
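
To make that ratio concrete, here is a back-of-the-envelope sketch using NTP’s infamous monlist command; the byte counts are rough, illustrative figures:

    # monlist: a tiny query asks an NTP server for its last ~600 clients
    request_bytes = 234      # approximate size of a monlist request
    response_bytes = 48_000  # approximate size of a full monlist response

    amplification = response_bytes / request_bytes
    print(f"amplification = {amplification:.0f}x")  # on the order of 200x

An attacker who spoofs a victim’s address on small requests like these can direct hundreds of times more response traffic at the victim than the attacker sends.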

UDP’s simplicity makes it susceptible to spoofing. An NTP server can’t always decide whether a request is spoofed or not; in many cases it’s up to the network to decide. For a long time, operating system designers, system implementers, and ISPs did not pay much attention to managing or preventing spoofed traffic. It was and is up to millions of Internet participants to harden their networking configurations to limit the potential for denial of service amplification. Economically, there’s frequently little incentive to do so – most denial of service attacks target someone else, and the impact of being involved as a drone is relatively minor. As a result, you get systemic susceptibility.

My advice is for enterprises and individuals to research and implement network hardening techniques on the systems and networks they own. This often means tweaking system settings, or in certain cases may require tinkering with routers and switches. Product-specific hardening guides can be found online at reputable sites. As with all technology, the devil is in the details, and effective management is important in getting it right.
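
As one concrete example, a commonly recommended hardening sketch for a stock ntpd server looks something like the following (directives vary by version, so check your distribution’s documentation before applying):

    # /etc/ntp.conf -- limit what anonymous clients can request
    disable monitor                                   # turn off the monlist feature abused for amplification
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1                                # localhost keeps full access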

Gutting a Phish

In the news lately there have been countless examples of phishing attacks becoming more sophisticated, but it’s important to remember that the entire “industry” is a bell curve: the most dedicated attackers are upping their game, but advancements in tooling and automation are also letting many less sophisticated players get started even more easily. Put another way, spamming and phishing are coexisting happily as both massive multinational business organizations and smaller cottage-industry efforts.

One such enterprising but misguided individual made the mistake of sending a typically blatant phishing email to one of our Neohapsis mailing lists, and someone forwarded it along to me for a laugh.

[Image: The phishing email, as it appeared in a mailbox]

As silly and evident as this is, one thing I’m constantly astounded by is how the proportion of people who will click never quite drops to zero. Our work on social engineering assessments bears out this real world example: with a large enough sample set, you’ll always hook at least one. In fact, a paper out of Microsoft Research suggests that, for scammers, this sort of painfully blatant opening is actually an intentional tool: it acts as a filter that only the most gullible will pass.

Given the weak effort put into the email, I was curious to see if the scam got any better if someone actually clicked through. To be honest, I was pleasantly surprised.

[Image: The phishing site, a combination of legitimate Apple code and images and a form added by the attacker]

The site is dressed up as a reasonable approximation of an official Apple site. In fact, a look at the source shows that there are two things going on here: some HTML/CSS set dressing and template code that is copied directly from the legitimate Apple site, and the phishing form itself which is a reusable template form created by one of the phishers.

Naturally, I was curious where data went once the form was submitted. I filled in some bogus data and submitted it (the phishing form helpfully pointed out any missing data; there is a certain audacity in being asked to check the format of the credit card number that’s about to be stolen). The data POST went back to another page on the same server, then quickly forwarded me on to the legitimate iTunes site.

[Image: The form submission and redirect, as captured in Burp]

This is another standard technique: if a “login” appears to work because the victim was already logged in, the victim will often simply proceed with what they were doing without questioning why the login was prompted in the first place. During social engineering exercises at Neohapsis, we have seen participants repeatedly log into a cloned attack site, with mounting frustration, as they wonder why the legitimate site isn’t showing them the bait they logged in for.

Back to this phishing site: my application security tester spider senses were tingling, so I felt that I had to see what our phisher was doing with the data being submitted. To find out, I replayed the submit request with various types of invalid data, strings that should cause errors depending on how the data was being parsed or stored. Not a single test string produced any errors or different behavior. This could be an indication that any parsing and processing is being done carefully and correctly, but the far more likely case is that they’re simply doing no processing and dumping it all straight out as plain text.
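
A minimal sketch of that kind of replay (the hostname and form field names here are hypothetical; only the Snd/Snd.php harvester path, which appears below, comes from the actual site):

    import requests

    # Strings that tend to provoke errors from SQL, PHP, and template engines
    probes = ["'", '"', "<script>", "%00", "{{7*7}}", "A" * 5000]

    for probe in probes:
        resp = requests.post(
            "http://compromised-host.example/Snd/Snd.php",   # hypothetical host
            data={"apple_id": probe, "card_number": probe},  # hypothetical field names
            allow_redirects=False,
        )
        # Identical status codes and response sizes across probes suggest no real processing
        print(probe[:10], resp.status_code, len(resp.content))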

Interesting… if harvested data is simply being dumped to disk, where exactly is it going? Burp indicates that the data is being POSTed to a harvester script at Snd/Snd.php. I wonder what else is in that directory?

[Image: Under the hood of the phishing site, the loot stash is clearly visible in a directory listing]

That result.txt file looks mighty promising… and it is.

[Image: The format of the result.txt file]

These are the raw results dumped from victims by the harvester script (Snd.php). The top entry is dummy data that I submitted, and when I checked, the file was filled entirely with my own bogus submissions from earlier testing. It’s pretty clear from the results that I was the first person to actually click through and submit data to the phish site; that’s actually pretty fortunate, because if a victim did enter legitimate information, the attacker would have to sort it out from a few hundred bogus submissions. Any day that we can make life harder for the bad guys is a good day.

So, the data collection is dead simple, but I’d still like to know a bit more about the scam and the phishers if possible. There’s not a lot to go on, but the tag at the top of each entry seems unique. It’s the sort of thing we’re used to seeing when hackers deface a website and leave a tag to publicize the work:

------------+| $ o H a B  Dz and a m i r TN |+------------

Googling some variations turned up a Google cache of a forum post that’s definitely related to the phishing site above; it’s either the same guy, or someone else using the same tool.

[Image: A post in a carder forum, offering to sell data in the same format as generated by the phishing site above]

A criminal using the name AppleFullz is selling complete information dumps of login details and credit card numbers plus CVV numbers (called “fullz” in carder forums) captured in the exact format that the Apple phish used, and even provides a sample of his wares (insult to injury for the victim: not only was his information stolen, but it’s being given away as the credit card fraud equivalent of the taster trays at the grocery store). This carder is asking $10 for one person’s information, but is willing to give bulk discounts: $30 for 5 accounts (this is actually a discount compared to the prices normally seen on carder forums; Krebs recently reported that Target cards were selling for $20-$100 per card. I read this as an implicit acknowledgement by our seller that this data is much “dirtier” and that he expects buyers to mine it for legitimate records). The tools being used here are a combination of pre-existing scraps of PHP code widely used in other spam and scam campaigns (the section labeled “|INFO|VBV|”), and a separate section added specifically to target Apple IDs.

Of particular interest is that the carder provided a Bitcoin address. For criminals, Bitcoin has the advantage of anonymity but the disadvantage that transactions are public. This means that we can actually look up how much money has flowed into that particular Bitcoin address.

[Image: Ill-gotten gains – the Bitcoin blockchain records transfers into the account used for selling stolen Apple IDs and credit card numbers]

From November 17, when the forum posting went up, until December 4, when I investigated this phishing attempt, he had received Bitcoin transfers totaling 0.81815987 BTC, which is around $744.53 (based on the BTC value on 12/4). According to his price sheet, that translates to a sale of between 74 records (at the $10 single rate) and 124 records (at the $6-per-record bulk rate): not bad for a month of terribly unsophisticated phishing.
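
The arithmetic behind that range, for the curious (the exchange rate is the approximate 12/4 figure implied by the $744.53 above):

    btc_received = 0.81815987
    usd_per_btc = 910.0               # approximate BTC/USD rate on 12/4
    usd = btc_received * usd_per_btc  # ≈ $744.53

    print(round(usd / 10.0))  # ≈ 74 records at the $10 single-record price
    print(round(usd / 6.0))   # ≈ 124 records at the $30-for-5 bulk rate ($6 each)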

Within a few hours of my poking around, the phishing site had been removed. The server hosting the phish was a legitimate domain that had been compromised; perhaps the phisher noticed the volume of bogus traffic and decided that the jig was up for that particular phish, or the system administrator got tipped off by the unusual traffic and investigated. Either way, the phish site is offline, so that’s another small victory.

5 Tips for Safer Online Shopping

Use good password practices

No surprise here – it seems to be on the top of every list of this kind, but people still don’t listen. Passwords are still (and will continue to be) the weakest form of authentication. In a perfect security utopia passwords would not exist, but since we’re not there (yet), everyone relies on them. The two main rules on passwords are: make them complex, and make them unique. Complex doesn’t necessarily mean you need thirty-character random monstrosities that only a savant could remember, but avoid dictionary words and don’t think that you’re safe by just appending numbers or special characters. One of the first things an attacker will do is take every English word in the dictionary and append common numbers and symbols to the end of it. Yep, “password1989!” is just as (in)secure as “password”. Lastly, passwords should be unique to each site. This is an even bigger sin, and one that most people (myself included) have been guilty of. We have one good password, so we use it for everything. The problem with this is obvious: if it gets compromised, an attacker has access to everything. When LinkedIn’s passwords were compromised last year, I realized I was using the same password for all my social media accounts, leaving all of those vulnerable too. Don’t make an attacker’s job easier by reusing passwords. Make them work for each one they need to crack.

Store sensitive data in secure locations

Hopefully, you’ve followed the first rule and have unique, complex passwords for every site you visit. Now, how to remember them all? This is where I love to recommend password managers. Password managers securely store all your login information in an easily accessible location. I emphasize “securely” here, because I see far too many people with Word documents called “My Passwords” or the like sitting on their desktops. This is a goldmine for any attacker who has access to it. I’ve even seen these “password” files being shared unencrypted in the cloud, so people can pull them up on their phones or tablets to remember their passwords on the go. Please don’t do this. Now if you lose your phone, you also lose every password to every secure site you have.

Instead, use a password manager like 1Password, LastPass, or KeePass, to name a few popular ones. These encrypt and store your sensitive information (not just passwords, but also SSNs, CC numbers, etc.) in an easy-to-access format. You encrypt your “wallet” of passwords with one very secure password (the only one you ever need to remember), and can even additionally protect it with a key file. A key file works just like a physical key – you need a copy of it to access the wallet. Keep it on a USB stick on your keychain and a backup in a fireproof safe.

Watch out for HTTP(S)

Ever notice how some sites start with https:// as opposed to http:// ? That little ‘s’ at the end makes a whole world of difference. When it’s present it means that you have established a trusted and encrypted connection with the website. Its security purpose is twofold: all data between you and the site is encrypted and cannot be eavesdropped on, and you have established through a chain of trust that the website you are visiting is, in fact, who it says it is.
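
For the curious, your browser performs something like the following check on every HTTPS connection (a minimal Python sketch of the same verification, not something you need to run to shop safely):

    import socket
    import ssl

    hostname = "www.bankofamerica.com"
    context = ssl.create_default_context()  # trusted root CAs plus hostname checking

    with socket.create_connection((hostname, 443)) as sock:
        # wrap_socket raises an error if the certificate chain or name fails to verify
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            subject = dict(item[0] for item in tls.getpeercert()["subject"])
            print("Connected securely to:", subject.get("commonName"))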

For example, this is what the address bar on Firefox looks like when I have a secure connection to Bank of America:

[Image: The Firefox address bar showing ‘https’ and a padlock icon for a secure connection to Bank of America]

Notice the ‘https’ and the padlock icon. If you are ever on a webpage that is asking you to enter sensitive information (like a password) and you don’t see something similar, don’t enter it! There could be any number of reasons why you are not connected via HTTPS, including benign ones, but it’s better to be safe than sorry. Likewise, if you ever receive a warning from your browser like this:

[Image: A browser warning that the site’s security certificate cannot be verified]

It means that the browser cannot verify the website is actually who it says it is. Phishing sites can imitate legitimate logins down to the smallest detail, but they cannot forge a valid SSL certificate for the site they are impersonating. If you see this type of warning when trying to access a well-known site, get out immediately! There could be a legitimate problem with the website or your browser, but more likely somebody is impersonating it and trying to fool you!

Install those nagging updates

Microsoft actually does an excellent job of patching vulnerabilities when they arise; the problem is most people don’t install the patches. On the second Tuesday of every month, new patches and updates are released to the public. Microsoft will also release patches out-of-band (OOB), meaning as needed rather than waiting for the next patch cycle, for serious vulnerabilities. These patches are a great way to fix security holes, but they also carry a nasty catch: attackers use them to see where the holes were.

Every “Patch Tuesday” attackers will reverse engineer the Windows updates to discover new vulnerabilities and then attempt to target machines that have not applied the update yet. It’s akin to a car manufacturer releasing a statement saying “this year and model car can be unlocked with a toothpick, so apply this fix.” Now every car thief in the world knows to look out for that year and model, and if the fix hasn’t been applied they know to try a toothpick.

This is why it’s imperative to keep your computer up to date. The “Conficker” worm that ran rampant in 2009 exploited a security vulnerability that was patched by Microsoft almost immediately. Part of the reason it spread so successfully was people’s reluctance to install new Windows updates. It preyed on out-of-date systems.

Likewise, many online exploits will use common vulnerabilities found in different software, like Flash, Java, or even the browsers. When software that you use online prompts you to install an update – do it!

So the next time your computer asks you to restart to install updates, go grab a cup of coffee and let it do its thing. It’ll save you in the long run.

(note: Mac users are not exempt! Install those updates from Apple as well!)

It’s okay to be a little paranoid

My last tip is more of a paradigm shift than a tip for when you are conducting business online: it’s okay to be a little paranoid. The old mantra “if it seems too good to be true, it probably is” has never been more applicable than when it comes to common phishing schemes. I’m sure most people know by now not to trust a pop-up that says “You’ve won an iPad – click here!”, but modern phishing techniques are much more subtle – and much more dangerous.

One of the only times I’ve ever fallen victim to a phishing scheme was when “PayPal” emailed me asking me to confirm a large purchase because it was suspicious. Since I didn’t make the order, I immediately thought I had been compromised. I went into panic mode, clicked the link, entered my password… and, wait, I just entered my PayPal password into a site I don’t even recognize. They got me.

It’s okay to mistrust emails and links. If something seems phishy (pun intended), then exit out. Services like PayPal and online banks will never ask for personal information over email, chat, or any avenue besides their main website. If you have an issue, go to their website, ensure that ‘s’ is in your address bar, and do your business from there. If you’re still not convinced, find their 800 number and call them. The point is, if I had stayed calm for a second and thought about how strange it was that PayPal was urgently asking me to log in via an email message, I would have gathered myself, gone to the official site to log in, and then looked for any alerts or suspicious activity. I could have even called them.

Trying not to sound too misanthropic here, but when it comes to dealing with sensitive information online, it’s better not to trust someone initially than it is to trust them implicitly. Your bank account information won’t be deleted and nothing bad will happen if you don’t immediately update your password, so take a second to make sure what you’re doing is actually legit.

Configuration Assurance: Evolving Security Beyond the Basics

Perhaps it is more appropriate for us to approach configuration management as an assurance process meant to ensure system integrity is maintained over time. By evolving our view of how to establish and control the integrity of the different devices and technologies in use, the concept of “configuration management” evolves to become more about “configuration assurance.”

The Need to Manage Configuration 

When considering the different aspects of information security program management, few topics are of as much importance to an organization’s overall security posture as “configuration management.” This is due, in part, to the number of different standards and processes that typically comprise or govern a configuration management program. And it is usually the lack of governance or enforcement of configuration management practices that leads to system and information compromises.

When we look at configuration management, it is important for us to keep in mind that what we’re really addressing is the “I” of InfoSec’s “Confidentiality, Integrity, and Availability,” or “C.I.A.” Because of this, we should understand each of the different parts that make up a configuration management program or process, and further understand them as part of an overall process for ensuring the integrity of any given device or system. Ultimately, the basis for establishing and verifying the integrity of a device or system needs to be consistent with the information security standards defined by the organization, industry best practices, industry or governmental regulation, and relevant legislative requirements.

The Basics of Configuration Management 

The objective of any meaningful configuration management program is a security-minded framework within which all information systems can be tracked, classified, reviewed, analyzed, and maintained according to a consistent set of practices and standards. Configuration management programs usually incorporate several different standards and processes to address the diverse aspects of information security, such as standard build/configuration documentation and processes, antivirus monitoring, patch management, vulnerability management, asset management, etc. Essentially, it comes down to having a lot of eggs that ultimately wind up in the same basket, with the objective being that none of the eggshells get broken.

At a high level, the functional and security requirements for most of these programs and services are fairly well understood, and it is common for organizations to treat each of the different aspects of configuration management as a stand-alone program or process. Reality, however, is quite different. In addition to ensuring that a configuration management program addresses all of the relevant security requirements, it is equally necessary to understand how each individual security process or program relates to the others. Why? Because each of the processes associated with configuration management impacts the others. The manner in which these interrelationships are addressed (or not addressed) may expose significant risks in critical or sensitive information systems.

Regardless, many organizations still tend to approach delivery of these programs and services as individual and somewhat isolated or unrelated processes. This is especially true for organizations that heavily focus on meeting compliance requirements without embracing the larger concept of “information security.” This is also true in organizations where information security programs are less mature, or if there is an over-reliance on technology in the absence of formal documentation.

Where the Gaps May Lie

Following are a couple of examples where gaps might typically occur in the configuration management process. After each example, I’ve put together a few follow-up questions to help explore each issue a little more in-depth.

A. Auditing and Log Monitoring – Most security policies and system configuration standards tend to address audit and logging requirements at the operating system level. However, operating system audit log services are not always capable of capturing detailed audit log data generated by some applications or services. As a result, it may be necessary to combine and correlate multiple audit log data sources (perhaps from multiple devices) to reconstruct a specific chain of events; a minimal sketch of that kind of correlation follows the questions below. All business processes should be reviewed to ensure that the full complement of required audit log data is being collected and reviewed.

  1. Do your policies, standards, and processes ensure that all required security audit log data is collected for any and all firewalls/routers, workstations, critical/sensitive applications, databases, monitoring technologies, and other relevant security devices or technologies used in the environment?
  2. Do policies or standards require audit log data collection to include audit log data from all antivirus endpoints, file integrity monitoring endpoints, IDS/IPS alerts and events, security devices or applications, and file or database access?
  3. Is all audit log data, of all types, collected into a single centralized source (or sources)?
  4. Is all audit log data backed up regularly (at least daily) and protected against unauthorized access or modification?
  5. Is audit log data from one source combined and correlated with audit log data from other devices or services to reconstruct specific activities, identify complex attacks, and/or raise appropriate alerts?
  6. Has your organization performed any testing or forensic activities to verify that audit log information currently being collected is sufficient to raise appropriate alerts and reconstruct the events related to any suspicious activity?
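
As referenced above, a minimal sketch of cross-source correlation (the events and field layout here are hypothetical; a real deployment would normalize formats from syslog, application logs, and security appliances):

    from datetime import datetime

    # Hypothetical normalized events from two separate sources
    os_log = [("2014-01-07 09:12:01", "sshd", "failed login for admin from 203.0.113.5")]
    app_log = [("2014-01-07 09:12:03", "webapp", "password reset requested for admin")]

    # Merging on timestamp turns per-device logs into a single timeline of activity
    timeline = sorted(os_log + app_log,
                      key=lambda event: datetime.strptime(event[0], "%Y-%m-%d %H:%M:%S"))
    for ts, source, message in timeline:
        print(ts, source, message)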

B. Standard Build/Configuration – It is commonplace for organizations to have standards documentation describing how to install and configure the different kinds of operating systems (and sometimes databases) used in the environment. However, it is not quite so common to have similar documentation (or a similar level of detail) when it comes to more specialized technologies or functions. As we are all aware, a secure technical environment relies on more than just securing the operating systems, and extends to all devices in use. Policies, standards, and processes should exist to address all technologies used in the environment and should define how to establish, maintain, and verify the integrity of any device or application intended for use within the environment.

  1. Do documentation and processes currently exist to define the secure initial configuration of all technology device types and applications in use in the environment? This includes technologies or devices such as firewalls, routers, servers, databases, mainframe/mid-range, wireless technologies and devices, mobile computing devices (laptops and smartphones), workstations, point-of-interaction devices, IVR systems, and any other technologies related to establishing, enforcing, or monitoring security posture or controls.
  2. Are configuration standards cross-checked to ensure that all relevant information security subject areas are addressed or appropriately cross-referenced? For example, do OS configuration standards include details for installing antivirus or other critical software (FIM, patch management, etc.)? If not, is a reference provided to supporting documentation that details how to install antivirus or other critical software for each specific operating system type?
  3. Do documentation and processes currently exist to define not just the secure configuration of the base operating system, but also to define a minimum patch level or version a system must meet (e.g., “Win7 SP2” or “Apache version X.X.Y”) before being permitted to connect to the network environment?

These are clearly not all of the possible intersections or gaps that might occur in how an organization approaches configuration management. In developing an information security program, each organization will need to identify the relevant services, processes, and programs that represent how configuration management is achieved. As part of a process of constant improvement, the next logical step would then be to take a closer look at the internal process interrelationships and try to identify any gaps that might exist.

Where to from here?

By evolving our view of how to establish and control the integrity of the different devices and technologies, the concept of “configuration management” evolves to become more about “configuration assurance.” Instead of approaching configuration management as a somewhat unregulated process kept in check by periodic review (audit), perhaps it is more appropriate for us to approach configuration management as an assurance process meant to ensure system integrity is maintained over time.

In the end, one of the biggest enemies of information security is time: even if you have bullet-proof security controls in place today, they will probably not offer much protection against the vulnerability or exploit that a hacker will identify tonight or a vendor will announce tomorrow (or Tuesday).

PCI DSS 3.0

The Payment Card Industry Security Standards Council (PCI SSC) has released a draft version of the Payment Card Industry Data Security Standard (PCI DSS) Version 3.0 to Qualified Security Assessor (QSA) companies. This release was not made available to the general public, as PCI SSC is still reviewing feedback from QSA companies and other participating organizations before the final version is released in November 2013. The new PCI DSS 3.0 compliance requirements will go into effect for all companies on January 1, 2015, but a few requirements defined in PCI DSS 3.0 will not be required for compliance until July 1, 2015.


PCI SSC has categorized the changes to the standard into three types: clarifications, additional guidance, and evolving requirements. PCI SSC defines a clarification as a means to make the intent of a particular requirement clear to QSAs and entities that must comply with PCI DSS. Additional guidance provides an explanation, definition, and/or instructions to increase understanding or provide further information on a particular topic being assessed. Evolving requirements are changes to ensure that the standards are up to date with emerging threats and changes in the market; for the most part, these are new requirements.

In total, 99 changes are proposed across the three types, which are nicely defined in the document “PCI DSS 3.0 Summary of Changes” provided by the council. The majority of changes, 71 to be exact, are clarifications, followed by 23 evolving requirements and 5 changes that fall under the “additional guidance” category. This blog post provides a high-level overview of the proposed changes in the draft PCI DSS V3.0; the overview is not specific to particular requirements. Neohapsis will be releasing a series of blog posts dedicated to PCI DSS V3.0 that will explore individual requirement changes in greater detail.

So, enough setup; here are some key takeaways from the high-level changes in PCI DSS Version 3.0.

Scope of PCI DSS Requirements

PCI SSC added some clarification around the responsibilities for defining scope, for both the entity seeking compliance and the QSA performing the validation. PCI SSC did a great job of clarifying the scoping process by providing examples of system components that should be included as part of the scope for a ROC assessment. In addition to system components, the scoping guidance section contains requirements for capturing all purchased or custom-developed applications that are being used within the cardholder data environment (CDE). Compared to PCI DSS Version 2.0, this section of Version 3.0 has been broken out to be more descriptive and to present the details more clearly, helping entities better understand the scoping process.

Use of Third-Party Service Providers / Outsourcing

The use of third-party service providers to support an entity’s cardholder data environment (CDE), and the related PCI DSS validation requirements, have not changed that drastically. All entities that fall under PCI DSS are still required to validate that each third-party service provider is PCI DSS compliant, either by obtaining a copy of the provider’s attestation of compliance (AOC) or by having their QSA assess the provider’s compliance status for the relevant PCI DSS requirements. However, the draft of PCI DSS Version 3.0 provides examples, such as advising entities seeking PCI compliance to validate the IP addresses for quarterly scans if one or more shared hosting providers are in scope for their CDE. Furthermore, entities seeking compliance are advised to work with service providers to ensure that contractual language between the two parties clearly states PCI DSS compliance responsibilities down to the requirement level.

Business-As-Usual (BAU)

The biggest change that I see in PCI DSS 3.0 is the emphasis on making PCI DSS controls part of “business-as-usual” (BAU) rather than a moment-in-time assessment. To get this message across, the draft version of PCI DSS 3.0 provides several best-practice examples for making PCI a part of BAU. These best practices are not requirements, but the council wants to encourage organizations to adopt them. The goal of this change in thinking is to get businesses to implement PCI DSS as part of their overall security strategy and daily operations. PCI SSC is stressing that its compliance standard is not a set-it-and-forget-it exercise.

Some important areas of this new mentality focus on security processes. For example, entities should validate on their own that controls are implemented effectively for applicable business processes and related technologies. Specific examples include antivirus definitions, logging, vulnerability signatures, and ensuring that only appropriate services are enabled on systems. With this new mentality, entities should take corrective action when compliance gaps are identified, so that PCI DSS compliance is maintained at all times rather than waiting for the QSA to come validate. When gaps are identified, implementation of compensating or mitigating controls may be necessary and should not be left open until the start of a QSA assessment.

Lastly, in regard to BAU, the company seeking PCI DSS compliance should continue to have an open dialog with its QSA throughout the year. Good QSA companies will work with their clients to help ensure the right choices are being made in between ROC assessments.

Until Next Post

This is just the first of multiple blog posts about upcoming changes in PCI DSS Version 3.0. Neohapsis will be releasing additional posts on more specific PCI DSS 3.0 requirements in the near future. If you are required to be PCI DSS compliant, I would recommend talking with your QSA company to start planning and preparing for PCI DSS 3.0. You are also welcome to reach out to Neohapsis, as we have multiple QSAs who are seasoned professionals and can address your PCI DSS and PA-DSS compliance questions.