Putting Windows XP Out To Pasture

Farewell, Windows XP! We hated you, then loved you, and soon we’ll hate you again.

(This post is a resource for home and small-business users with questions about the impending end-of-life for Windows XP. Larger enterprise users have some different options available to them; contact us to discuss your situation and options.)

For those who haven’t seen it in the news yet: Microsoft will be ending support for its hugely successful operating system, Windows XP, on April 8th. This means that users of the 12-year-old operating system will no longer be able to get updates, and in particular will not be able to get security updates. Users of more modern versions of Windows, such as Windows Vista or Windows 7, will remain supported for several more years.

Once support ends, computers still on Windows XP will become a very juicy target for Internet criminals and attackers. Internet crime is big business, so every day there are criminals looking for new weaknesses in computer systems (called vulnerabilities), and developing attacks to take advantage of them (these attacks are called exploits). Normally, the software vendor (Microsoft in this case) quickly finds out about these weaknesses and releases updates to fix them. When an exploit is developed, some number of people fall victim shortly after the exploit is first used, but people who get the update in a relatively timely manner are protected.

But what happens when a vendor stops updating the software? All of a sudden, the bad guys can use these same attacks, the same exploits, indefinitely. As a product nears end of life, attackers have an incentive to hold off on using critical vulnerabilities until the deadline passes. The value of their exploits goes up significantly once they have confidence that the vendor will never patch the underlying vulnerabilities. Based on that, we can expect a period of relative quiet in terms of announced vulnerabilities affecting XP from now until shortly after the deadline, when we will likely see stockpiled critical vulnerabilities begin circulating. From then on, the risk to these legacy XP systems will continue to increase, so migrating away from XP or dramatically isolating the systems should be a priority for people or organizations that still use them.

How do I know if I’m running Windows XP?

  • If your computer is more than 5 years old, odds are it is running Windows XP
  • Simplest way: “Win+Break”: Press and hold down the Windows key on your keyboard, then find the “Pause” or “Break” key and press it. Let both keys go. That will show the System Properties window. You may have to hunt around for your “Pause/Break” key, but hey, it finally has a use.
  • Alternate way: Click the Start Menu -> Right-click on “My Computer” -> On the menu that comes out, click on Properties
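  • Another quick check: click Start -> Run, type winver, and press Enter. The “About Windows” box that opens lists your Windows version.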

 


Click the Start Menu, then right-click My Computer, then click Properties.

 


Your version of Windows will be the first thing on the System Properties window.

 

How do I stay safe?

Really, you should think about buying a new computer. You can think of it as a once-a-decade spring cleaning. If your computer is old enough to have Windows XP, having an unsupported OS is likely just one of several problems. It is possible to upgrade your old computer to a newer operating system such as Windows 7, or convert to a free Linux-based operating system, but this may be a more complicated undertaking than many users want to tackle.

Any computer you buy these days will be a huge step up from a 7-year-old (at least!) machine running XP, so you can comfortably shop the cheapest lines of computers. New computers can be found for $300, and it’s also possible to buy reputable refurbished ones with a modern operating system for $100-$200.

For those who really don’t want to or can’t upgrade, the situation isn’t pretty. Your computer will continue to work as it always has, but the security of your system and your data is entirely in your hands. These systems have been low-hanging fruit for attackers for a long time, but after April 8th they will have a giant neon bull’s-eye on them.

There are a few things you can do to reduce your risks, but there really is no substitute for timely vendor patches.

  1. Only use the system for tasks that can’t be done elsewhere. If the reason for keeping an XP machine is to run some specific program or piece of hardware, then use it only for that. In particular, avoid web browsing and email on the unsupported machine: both activities expose the vulnerable system to lots of untrusted input.
  2. Keep all of your other software up to date. Install and use the latest version of the Firefox or Chrome web browser, which won’t be affected by Microsoft’s end of life.
  3. Back up your computer. There are many online backup services available for less than $5 a month. If something goes wrong, you want to make sure that your data is safe. Good online backup services provide a “set it and forget it” peace of mind. This is probably the single most important thing you can do, and should be a priority even for folks using a supported operating system. Backblaze, CrashPlan, and SpiderOak are all reasonable choices for home users.
  4. Run antivirus software, and keep it up to date. AVAST, AVG, and Bitdefender are all reasonable free options, but be aware that antivirus is only a layer of protection: it’s not perfect.

 

What Kickstarter Did Right

Only a few details have emerged about the recent breach at Kickstarter, but it appears that this one will be a case study in doing things right both before and after the breach.

What Kickstarter has done right:

  • Timely notification
  • Clear messaging
  • Limited sensitive data retention
  • Proper password handling

Timely notification

The hours and days after a breach is discovered are incredibly hectic, and there will be powerful voices both attempting to delay public announcement and attempting to rush it. When users’ information may be at risk beyond the immediate breach, organizations should strive to make an announcement as soon as it will do more good than harm. An initial public announcement doesn’t have to have all the answers; it just needs to give users an idea of how they are affected, and what they can do about it. While it may be tempting to wait for full details, an organization that shows transparency in the early stages of a developing story is going to have more credibility as it goes on.

Clear messaging

Kickstarter explained in clear terms what was and was not affected, and gave straightforward actions for users to follow as a result. The logging and access control groundwork for making these strong, clear statements at the time of a breach needs to be laid far in advance and thoroughly tested. Live penetration testing exercises with detailed post mortems can help companies decide if their systems will be able to capture this critical data.

Limited sensitive data retention

One of the first questions in any breach is “what did they get?”, and data handling policies in place before a breach are going to have a huge impact on the answer. Thinking far in advance about how we would like to be able to answer that question can be a driver for getting those policies in place. Kickstarter reported that they do not store full credit card numbers, a choice that is certainly saving them some headaches right now. Not all businesses have quite that luxury, but thinking in general about how to reduce the retention of sensitive data that’s not actively used can reduce both the cost of protecting it and the chances of exposure over the long term.

Proper password handling (mostly)

Kickstarter appears to have done a pretty good job in handling user passwords, though not perfect. Password reuse across different websites continues to be one of the most significant threats to users, and a breach like this can often lead to ripple effects against users if attackers are able to obtain account passwords.

In order to protect against this, user passwords should always be stored in a hashed form, a representation that allows a server to verify that a correct password has been provided without ever actually storing the plaintext password. Kickstarter reported that their “passwords were uniquely salted and digested with SHA-1 multiple times. More recent passwords are hashed with bcrypt.” When reading breach reports, the level of detail shared by the organization is often telling and these details show that Kickstarter did their homework beforehand.

A strong password hashing scheme must protect against the two main approaches that attackers can use: hash cracking and rainbow tables. The details of these approaches have been well-covered elsewhere, so we can focus on what Kickstarter used to make their users’ hashes more resistant to these attacks.

To resist hash cracking, defenders want to massively increase the amount of work an attacker has to do to check each possible password. The problem with hash algorithms like SHA-1 and MD5 is that they are too efficient; they were designed to be completed in as few CPU cycles as possible. We want the opposite from a password hash function, so that it is reasonable to check a few possible passwords in normal use but computationally ridiculous to try out large numbers of possible passwords during cracking. Kickstarter indicated that they used “multiple” iterations of the SHA-1 hash, which multiplies the attacker effort required for each guess (so 5 iterations of hashing means 5 times more effort). Ideally, we like to see a hashing attempt take at least 100 ms, which is a trivial delay during a legitimate login but makes large-scale hash cracking essentially infeasible. Unfortunately, SHA-1 is so efficient that it would take more than 100,000 iterations to raise the effort to that level. While Kickstarter probably didn’t get to that level (it’s safe to assume they would have said so if they did), their use of multiple iterations of SHA-1 is an improvement over many practices we see.
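To make that concrete, here is a minimal Python sketch of an iterated, salted SHA-1 scheme with a rough timing check against the 100 ms target. Kickstarter hasn’t published their exact construction, so the chaining order below is an assumption for illustration; real code should use a vetted construction like PBKDF2 rather than a hand-rolled loop.

    import hashlib
    import os
    import time

    def iterated_sha1(password: bytes, salt: bytes, iterations: int) -> bytes:
        # Chain SHA-1 over salt+password; each extra iteration multiplies cracking effort.
        digest = salt + password
        for _ in range(iterations):
            digest = hashlib.sha1(digest).digest()
        return digest

    # Rough calibration: how close does 100,000 iterations get to the 100 ms target?
    salt = os.urandom(16)
    start = time.perf_counter()
    iterated_sha1(b"hunter2", salt, 100_000)
    print(f"100,000 iterations took {time.perf_counter() - start:.3f} s")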

To resist rainbow tables, it is important to use a long, random, unique salt for each password. Salting passwords removes the ability of attackers to simply look up hashes in precomputed rainbow tables. Using a random, unique salt on each password also means that an attacker has to perform cracking on each password individually; even if two users have an identical password, it would be impossible to tell from the hashes. There’s no word yet on the length of the salt, but Kickstarter appears to have gotten the random and unique parts right.
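A quick sketch of that property in Python (with os.urandom standing in for whatever salt generator Kickstarter actually uses): two identical passwords produce different stored hashes, so precomputed tables are useless and each hash must be attacked individually.

    import hashlib
    import os

    def salted_hash(password: bytes) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # long, random, and unique for every password
        return salt, hashlib.sha1(salt + password).digest()

    # Two users pick the same password...
    salt_a, hash_a = salted_hash(b"password123")
    salt_b, hash_b = salted_hash(b"password123")
    assert hash_a != hash_b  # ...but their stored hashes differ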

Finally, Kickstarter’s move to bcrypt for more recent passwords is particularly encouraging. Bcrypt is a modern key derivation function specifically designed for storing password representations. It builds in the idea of strong unique salts and a scalable work factor, so that defenders can easily dial up the amount of computation required to try out a hash as computers get faster. Bcrypt and similar functions such as PBKDF2 and the newer scrypt (which adds memory requirements) are purpose-built to make it easy to get password handling right; they should be the go-to approach for all new development, and a high-priority change for any codebases still using MD5 or SHA-1.
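For comparison, here is roughly what getting it right by default looks like, assuming Python and the pyca bcrypt package (any maintained bcrypt binding exposes the same ideas):

    import bcrypt

    # Registration: gensalt() embeds a random salt and a tunable work factor (2^12 rounds here).
    stored = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

    # Login: checkpw() re-derives the hash using the salt and cost embedded in `stored`.
    assert bcrypt.checkpw(b"correct horse battery staple", stored)
    assert not bcrypt.checkpw(b"wrong guess", stored)

Raising the rounds parameter by one doubles the work per guess, which is exactly the scalable work factor described above.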

Gutting a Phish

In the news lately there have been countless examples of phishing attacks becoming more sophisticated, but it’s important to remember that the entire “industry” is a bell curve: the most dedicated attackers are upping their game, but advancements in tooling and automation are also letting many less sophisticated players get started even more easily. Put another way, spamming and phishing are coexisting happily as both massive multinational business organizations and smaller cottage-industry efforts.

One such enterprising but misguided individual made the mistake of sending a typically blatant phishing email to one of our Neohapsis mailing lists, and someone forwarded it along to me for a laugh.


The phishing email, as it appeared in a mailbox

As silly and evident as this is, one thing I’m constantly astounded by is how the proportion of people who will click never quite drops to zero. Our work on social engineering assessments bears out this real world example: with a large enough sample set, you’ll always hook at least one. In fact, a paper out of Microsoft Research suggests that, for scammers, this sort of painfully blatant opening is actually an intentional tool: it acts as a filter that only the most gullible will pass.

Given the weak effort put into the email, I was curious to see if the scam got any better if someone actually clicked through. To be honest, I was pleasantly surprised.


The phishing site: a combination of legitimate Apple code and images and a form added by the attacker

The site is dressed up as a reasonable approximation of an official Apple site. In fact, a look at the source shows that there are two things going on here: some HTML/CSS set dressing and template code that is copied directly from the legitimate Apple site, and the phishing form itself which is a reusable template form created by one of the phishers.

Naturally, I was curious where data went once the form was submitted. I filled in some bogus data and submitted it (the phishing form helpfully pointed out any missing data; there is certainly an audacity in being asked to check the format of the credit card number that’s about to be stolen). The data POST went back to another page on the same server, then quickly forwarded me on to the legitimate iTunes site.

The Burp capture of the form submission and the subsequent forward

This is another standard technique: if a “login” appears to work because the victim was already logged in, the victim will often simply proceed with what they were doing without questioning why the login was prompted in the first place. During social engineering exercises at Neohapsis, we have seen participants repeatedly log into a cloned attack site, with mounting frustration, as they wonder why the legitimate site isn’t showing them the bait they logged in for.

Back to this phishing site: my application security tester spider senses were tingling, so I felt that I had to see what our phisher was doing with the data being submitted. To find out, I replayed the submit request with various types of invalid data, strings that should cause errors depending on how the data was being parsed or stored. Not a single test string produced any errors or different behavior. This could be an indication that any parsing and processing is being done carefully and correctly, but the far more likely case is that they’re simply doing no processing and dumping it all straight out as plain text.
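The probing itself was nothing fancy; it amounts to replaying the captured POST with hostile strings and comparing the responses, as in this Python sketch (the URL and field names are placeholders, not the real form’s):

    import requests

    URL = "http://compromised-host.example/Snd/Snd.php"  # placeholder for the real harvester
    probes = ["'", '"', "<script>alert(1)</script>", "%00", "A" * 5000]

    for payload in probes:
        r = requests.post(URL, data={"email": payload, "card": payload})
        # Identical status codes and response sizes across payloads suggest no parsing at all.
        print(repr(payload[:20]), r.status_code, len(r.text))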

Interesting… if harvested data is simply being dumped to disk, where exactly is it going? Burp indicates that the data is being POSTed to a harvester script at Snd/Snd.php. I wonder what else is in that directory?


Under the hood of the phishing site, the loot stash is clearly visible

That result.txt file looks mighty promising… and it is.


The format of the result.txt file

These are the raw results dumped from victims by the harvester script (Snd.php). The top entry is dummy data that I submitted, and when I checked it, the file was entirely filled with the various dummy submissions I had done before. It’s pretty clear from the results that I was the first person to actually click through and submit data to the phish site; that’s fortunate, because if a victim did enter legitimate information, the attacker would have to sort it out from a few hundred bogus submissions. Any day that we can make life harder for the bad guys is a good day.

So, the data collection is dead simple, but I’d still like to know a bit more about the scam and the phishers if possible. There’s not a lot to go on, but the tag at the top of each entry seems unique. It’s the sort of thing we’re used to seeing when hackers deface a website and leave a tag to publicize the work:

------------+| $ o H a B  Dz and a m i r TN |+------------

Googling some variations turned up a Google cache of a forum post that’s definitely related to the phishing site above; it’s either the same guy, or someone else using the same tool.


A post in a carder forum, offering to sell data in the same format as generated by the phishing site above

A criminal using the name AppleFullz is selling complete information dumps of login details and credit card numbers plus CVV numbers (called “fulls” in carder forums) captured in the exact format that the Apple phish used, and even provides a sample of his wares (insult to injury for the victim: not only was his information stolen, but it’s being given away as the credit card fraud equivalent of the taster trays at the grocery store). This carder is asking for $10 for one person’s information, but is willing to give bulk discounts: $30 for 5 accounts. (This is actually a discount relative to the sorts of prices normally seen on carder forums; Krebs recently reported that Target cards were selling for $20-$100 per card. I read this as an implicit acknowledgement by our seller that this data is much “dirtier” and that the seller is expecting buyers to mine it for legitimate data.) The tools being used here are a combination of some pre-existing scraps of PHP code widely used in other spam and scam campaigns (the section labeled “|INFO|VBV|”), and a separate section added specifically to target Apple IDs.

Of particular interest is that the carder provided a Bitcoin address. For criminals, Bitcoin has the advantage of anonymity but the disadvantage that transactions are public. This means that we can actually look up how much money has flowed into that particular Bitcoin address.
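Tallying the take requires no special access. As one example, blockchain.info’s simple query API returns the lifetime total received by an address (a sketch; the address below is a placeholder for the one in the forum post):

    import urllib.request

    ADDRESS = "1ExamplePlaceholderAddress"  # placeholder; the real address is elided here
    url = "https://blockchain.info/q/getreceivedbyaddress/" + ADDRESS

    satoshis = int(urllib.request.urlopen(url).read())  # total ever received, in satoshis
    print(f"{satoshis / 1e8:.8f} BTC received")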


Ill-gotten gains: the Bitcoin blockchain records transfers into the account used for selling stolen Apple IDs and credit card numbers.

From November 17th, when the forum posting went up, until December 4th, when I investigated this phishing attempt, he received Bitcoin transfers totaling 0.81815987 BTC, which is around $744.53 (based on the BTC value on 12/4). According to his price sheet, that translates to sales of between 74 records (at the $10 single-record price) and 124 records (at the bulk rate of $6 per record): not bad for a month of terribly unsophisticated phishing.

Within a few hours of my investigating the initial phishing site, it had been removed. The server where the phish site was hosted was a legitimate domain that had been compromised; perhaps the phisher noticed the volume of bogus traffic and decided that the jig was up for that particular phish, or the system administrator got tipped off by the unusual traffic and investigated. Either way, the phish site is offline, so that’s another small victory.

CyanogenMod 9, An Android ROM Without Root

By Jon Janego

As a follow-up to my blog post in December about custom Android ROMs, I’d like to comment on the news released by the CyanogenMod team last month about their removal of default root access in their upcoming CM9 release.

In a post on their blog a few weeks ago, the CyanogenMod team announced that they were changing the way that they handle root access on devices using their ROM.  Previous releases of their ROM had root access enabled by default, as is common in most custom ROMs.  That had the result that any application that requested root access on the device would be granted it.  This is great for some of the power-user applications that are common among the Android modding scene – Titanium Backup is one that comes to mind – but it comes with a significant security risk, since a malicious application installed on the device could have full root access without the user being aware of what it was doing.  The CyanogenMod team acknowledged this in their post, saying, “Shipping root enabled by default to 1,000,000+ devices was a gaping hole.”

What the team is planning to do instead is to implement root access in a selective, user-configurable manner.  A device using the ROM has root access disabled by default, but can be configured to enable it only for ADB console access, only for applications, or across the board.  This type of control leaves it in the hands of the users to choose the level of risk that they are willing to accept.  Obviously, many of the tech-savvy enthusiasts will immediately enable unfettered root access. However, for the large part of the Android community that is only interested in custom ROMs for the customizable interfaces they offer, this will be a welcome and overdue security protection.  Already, it is clear in the comments to the CyanogenMod post that not everyone understands what the risk of root-level access is – someone asks the community to “explain this for the liberal arts majors.”

Just so it’s clear, the removal of root-level access is strictly at the operating system layer.  Installing a custom ROM onto an Android phone still requires unlocking the bootloader, which on most devices requires running a “jailbreaking” exploit of some sort.  There are a few exceptions to this; the Google Nexus line of phones lets you unlock the bootloader with only some console commands (fastboot oem unlock), and HTC and Motorola have also been providing bootloader unlocks for their devices.  Unless it’s coming from the manufacturer, there is always the possibility of some risk when executing unknown code on your device.  But once you’ve gotten to the point of installing the custom ROM, there was the further risk of having root-level access to the operating system easily available, which is the gap that CyanogenMod has closed here.

To me, this indicates that the CyanogenMod team recognizes their influence in the community and is using it to educate users on good security measures.  Baking a “secure by default” configuration into the most popular ROM will be good for everyone.  Kudos to them, and let’s hope that it leads to a more secure Android ecosystem for everyone!

CyanogenMod Logo Used Under a Creative Commons Attribution License

We’re All Consultants

Clients hire Neohapsis for many reasons: our expertise, our perspective as impartial outsiders, and our commitment to executing projects efficiently and expertly are just a few.  But while working with clients, an important subtask that I try to accomplish is to help them change the way they interact with the rest of their business – to get security departments to think and act like consultants.  It’s easy for people working in IT, and those in Security in particular, to get caught up in their day-to-day activities.  There’s always a new fire to be contained or technical hurdle to overcome.  But while doing so, it’s important to understand how these activities are helping enable the business to continue to meet its overall goals.  The most effective consultants understand their role: to be the trusted advisor.  Internal security professionals can take on this same role within the company.  Their departments have the responsibility of ensuring that risks are appropriately mitigated and that the business can continue to function smoothly in the face of constant external and internal threats.  The core business can be viewed as a client of the security team, one who is engaging security for assistance and reassurance that their day-to-day activities aren’t putting the business at risk.

I’ve been working with one of our clients recently to help one of their business units engage more effectively with the internal security organization.  In the past, the business unit handled many IT activities themselves, acting as a de facto independent IT department.  While they are effective at running their own business, they did not have a security team focusing on their organization, so security concerns were often overlooked.  When I began working with the team, I found out that one of their main complaints with asking the security organization for assistance was lack of responsiveness.  I’ve helped set this organization up for future success by serving as a liaison between these two parts of the business, facilitating better communication on both sides.  The business unit has a central point of contact for security concerns, who can funnel them to the right people in the security organization; and the security organization has someone aware of most of the business unit’s projects and activities, which helps them cut through the confusion that can happen with disparate teams.

Security professionals must be both advisor and enforcer at the same time.  It’s tempting to get caught up in enforcing security for security’s sake – but it is important to remember that the ultimate goal of a security professional must be to help the core business be successful.

Security Organization Redesign

Historically, security organizations have grown up organically, starting 15-20 years ago with a single security-conscious person who ended up getting tagged with doing the job.  Over the years, that manager asked for new positions, filling a tactical need when issues were presented and creating departments/teams as it made sense. There was no particular plan in place or long-term strategy. Eventually, you end up with more than a handful of employees and a dysfunctional team. Don’t get me wrong, the team is usually very good at putting out fires and “getting the work done,” but by no means is it robust or optimized.  These large security groups typically do not work on the issues that are most important to the company; rather, they play whack-a-mole, dealing with fires as they are presented.  There is no opportunity to get ahead of the game, and when the fire is in your area, you deal with it the best way you can.  Of course, this can cause interpersonal issues amongst the team members and duplication of efforts, driving even more dysfunction.

As a consultant, it’s easy to say that lack of planning created these problems, but I don’t know many info sec managers who could claim they have a growth plan that goes out 15 years and involves hiring 30-50 new employees.  Most security professionals, for the majority of their careers, are fighting fires.

What lessons has Neohapsis learned working with our clients to reorganize their security departments?

Don’t underestimate the angst that will be voiced by the team leaders/managers within the department if they are not included in the decision-making process, even if you already know the right decision.

When it comes down to it, there are only so many ways you can design a security organization.  Certain jobs and tasks make sense together.  Certain others require similar skill sets. Technically, you don’t really need to involve many people in the decision if you have someone who knows the culture of the company and has done this before. You could very easily take a CISO and a consultant, develop a new organizational structure, and announce it to the CISO’s management team.  Try that, though, and you’ll be surprised at the uproar over not understanding the nuances of each department and the needs and issues of the individuals.  Though it will take longer, a CISO will find better acceptance with his own management team if they are allowed to go off and work together to propose an org design of their own. It will probably take 30 days, and in the end, it will probably look almost identical to what the CISO and consultant would have wanted anyway.  But the managers’ attitudes will be different, and they will have buy-in.  It still doesn’t hurt to get a consultant’s opinion on the org design; just don’t let your management team think you outsourced their career path.  Even though you could have started your organization change 30 days ago, sometimes it is more about buy-in than being right. That’s a very hard lesson for many security professionals.

Titles are a big deal to security people

Probably the most contentious and politically painful experience, and frankly the source of the biggest complaints from the security team leads and managers, will be coming up with proper titles for the new departments.  As is generally the case in large organizations, there are more Directors, VPs, and Senior VPs than can honestly be justified by organizational design.  You look over the fence and wonder how everyone in the sales department can be a Senior VP.

What makes this particularly difficult within a security organization is that security professionals by nature view themselves as different from, or special compared to, everyone else in the organization.  Inevitably, that means corporate HR policy is perceived to be inapplicable to them. The presumption of non-applicability is exactly what security complains about when co-workers ignore security policy. So when company policy dictates that a Director title requires X number of direct reports, what do you do with your architecture group that has 5 people with 20 years of security experience each and no direct reports?  If you don’t title that team as Directors or better, nobody from the outside will apply for the positions. But if you do, others in the organization will ask why there is a department of 5 people, all with Director titles.

In the same vein, titles are routinely viewed by security professionals as a way of bullying co-workers into complying with a particular security policy or decision.  Any perceived lost opportunity to get a title promotion is met with severe angst (no, check that: open revolt), even when no salary increase comes with it.

In the end, most titles will be a mix of corporate policy and the levels of the organization that particular person has to interact with (e.g., the need for presumed power). Yes, many feelings will be hurt.

Salaries are all over the map

In similar alignment with titles, salaries are a difficult thing to pin down in the security industry.  Sure, you can go to any of a number of surveys and pull an average salary…but often they are for a generic title like “security architect” or “security analyst” or something very specific like “IDS specialist”.  Is your security analyst the same as my security analyst? I can’t tell.  Should a firewall guru get paid the same as a policy guru? Why? Why not?  Eventually you will have to look at existing salaries within the team (obviously), a third-party perspective on market conditions, and the caliber of talent you want applying.  At some level, it becomes a throw-a-number-out-there-and-see-if-you-get-a-nibble approach.

Sounds like too much work…

Are the basic issues outlined above insurmountable?  Of course not.  But they seem so minor that many security managers will ignore them and focus on the so-called “big picture”.  Little do they know, the big picture was never really in doubt.  It was the little things that were going to give them the biggest headaches and threaten to derail the path to the big picture.

Has this happened in your organization? Do you have a re-org experience to share? We would love comments.

ThotCon 0x01

For those who haven’t heard, Greg Ose and I will be presenting at the first annual ThotCon on April 23rd in Chicago. If you haven’t gotten your ticket yet, you will need to hurry, as they are almost gone. Our talk is called “Forensic Fail: Malware Kombat” and will cover some of the failings of digital forensics. We also have a surprise lined up for the end, so if you are in the area, you won’t want to miss it.

You can register for the conference at http://www.thotcon.org/registration.html. We hope to see you there.

Virtualization: When and where?

We often field questions from our clients regarding the risks associated with hypervisor / virtualization technology.  Ultimately the technology is still software, and still faces many of the same challenges any commercial software package faces, but there are definitely some areas worth noting.

The following thoughts are by no means a comprehensive overview of all issues, but they should provide the reader with a general foundation for thinking about virtualization-specific risks.

Generally speaking, virtual environments are not that different from physical environments.  They require much of the same care and feeding, but that’s the rub: most companies don’t do a good job of managing their physical environments, either.  Virtualization can simply make existing issues worse.

For example, if an organization doesn’t have a vulnerability management program that is effective at activities like asset identification, timely patching, maintaining the installed security technologies, change control, and system hardening, then the adoption of virtualization technology usually compounds the problem via increased “server sprawl.”  Systems become even easier to deploy, which leads to more systems not being properly managed.

We often see these challenges creep up in a few scenarios:

Testing environments – Teams can get a system up and running very quickly using existing hardware.  Easy and fast…but also dirty. They often don’t take the time to harden the system, bring it up to current patch levels, or install required security software.

Even in the scenarios where templates are used, with major OS vendors like Microsoft and Red Hat coming out with security fixes on a monthly basis, a template even 2 months old is out of date.

Rapid deployment of “utility” servers – Systems that run back-office services like mail servers, print servers, file servers, DNS servers, etc.  Oftentimes nobody really does much custom work on them, and because they can no longer be physically seen or “tripped over” in the data center, they sometimes fly under the radar.

Development environments – We often see virtualization technology making inroads into companies with developers that need to spin up and spin down environments quickly to save time and money.  The same challenges apply; if the systems aren’t maintained (and they often aren’t – developers aren’t usually known for their attention to system administration tasks), they present great targets for the would-be attacker.  Even worse if the developers use sensitive data for testing purposes.  If properly isolated, there is less risk from what we’ve described above, but that isolation has to be pretty well enforced and monitored to really mitigate these risks.

There are also risks associated with vulnerabilities in the technology itself.  The often-feared “guest break out” scenario, where a virtual machine or “guest” is able to “break out” of its jail and take over the host (and therefore access data in any of the other guests), is a common one, although we haven’t heard of any real-world exploitations of these defects…yet (although the vulnerabilities are starting to become better understood).

There are also concerns about hopping between security “zones” when it comes to compliance or data segregation requirements.  For example, a physical environment typically has a firewall and other security controls between a webserver and a database server.  In a virtual environment, if they are sharing the same host hardware, you typically cannot put a firewall, intrusion detection device, or data leakage control between them.  This could violate control mandates found in standards such as PCI in a credit card environment.

Even assuming there are no vulnerabilities in the hypervisor technology that allow for evil network games between hosts, when you house two virtual machines/guests on the same hypervisor/host, you often lose visibility into the network traffic between them.  So if your security relies on restricting or monitoring at the network level, you no longer have that ability.  Some vendors are working on solutions for intra-host communication security, but nothing mature exists yet.

Finally, the “many eggs in one basket” concern is still a factor; when you have 10, 20, 40 or more guest machines on a single piece of hardware, that’s a lot of potential systems going down should there be a problem.  While the virtualization software vendors will certainly offer high availability scenarios with technology such as VMware’s “VMotion”, redundant hardware, the use of SANs, etc., the cost and complexity add up fairly fast.  (And as we have seen from some rather nasty SAN failures the past two months, SANs aren’t always as failsafe as we have been led to believe. You still have backups, right?)

While in some situations the benefits of virtualization technology far outweigh the risks, there are certainly situations where existing non-virtualized architectures are better. The trick is finding that line in the midst of the pell-mell rush toward virtualization.

–Tyler

Response to Visa’s Chief Enterprise Risk Officer comments on PCI DSS

Visa’s Chief Enterprise Risk Officer, Ellen Richey, recently presented at the Visa Security Summit on March 19th. One of the key points in her presentation was a defense of the value of implementing PCI DSS to protect against data theft. In addition, Ellen Richey spoke about the challenge organizations face in not only becoming compliant, but proactively maintaining compliance, defending against attacks and protecting sensitive information.

Recent compromises of payment processors and merchants that were stated to be PCI compliant have brought criticism to the PCI program. Our views are strongly aligned with the views presented by Ellen Richey. While the current PCI program requires an annual audit, this audit is simply an annual health check. Think of the PCI audit like a state vehicle inspection: even though everything on your car checks out at the time of the inspection, that does not prevent your brake lights from going out days later. You would still have a valid inspection sticker, but you would no longer be in compliance with safety requirements. It is the owner’s responsibility to ensure the car is maintained appropriately. Similarly, in PCI, it is the company’s responsibility to ensure the effectiveness and maintenance of controls to protect their data in an ongoing manner.

Ellen Richey also mentioned increased collaboration with the payment card industry, merchants and consumers. Collaboration is a key step toward implementing the technology and processes necessary to continue reducing fraud and data theft. From a merchant, service provider and payment processor perspective, new technologies and programs will continue to reduce transaction risk, but, today, there are areas where these organizations need to proactively improve. The PCI DSS standard provides guidance around the implementation of controls to protect data. But in addition to protecting data, merchants, service providers and processors need to proactively address their ability to detect attacks and be prepared to respond effectively in the event of a compromise. These are two areas that are not currently adequately addressed by the PCI DSS and are areas where we continue to see organizations lacking.

See the following link to the Remarks by Ellen Richey, Chief Enterprise Risk Officer, Visa Inc. at the Visa Security Summit, March 19, 2009:

http://www.corporate.visa.com/md/dl/documents/downloads/EllenRichey09SummitRemarks.pdf

Easiest Way into a Company

One web page and one email is all you need to gain access to a major corporation’s internal network. Catchy, I know, but this is not an exaggeration of what an attacker can do. Combined with exploiting a few systems on the internal network, they can have free rein. Securing your network infrastructure begins with your employees. I don’t think you will be able to extract any new techniques or concepts from this post; however, it should shed some light on the importance of safe end-user practices, as well as of securing internal networks and resources.

Much of the governance and regulatory focus is on securing your external networks, but what if they get in? We have seen a rise in external vulnerability scans and a decrease in internal/external penetration tests. Did we forget security awareness, defense in depth, network architecture, or even the most basic administrative practices? Not surprisingly, it seems corporations are searching for that check mark on their audit and are not concerned with actual security.

So what, right?

Even the most security-aware corporations are still falling victim to social engineering exercises. Valuable resources which an attacker can use are found in the most trivial places, such as social networking sites. Anyone can acquire an adequate employee list in minutes from social networking sites such as LinkedIn, Facebook, MySpace, etc. From the vast amount of information that can be collected from social networking sites, message boards, and online groups, you can realistically create an organization chart (which helps with addressing employees and providing focus for your phishing attack).

Scenario:

Currently, much of the workforce has logged into a VPN or OWA at least once. Corporations are offering many services remotely to keep their workers adequately connected. These basic infrastructure items are among the most exposed and widespread systems for an attacker to prey on. The first step an attacker takes is basic recon and choosing their targets. Often employees in administrative or sales roles are selected because they tend to log in to resources remotely. Next, an attacker will search for an external-facing login prompt and clone it to a dummy system with basic logging to record IPs and user credentials. After that, well-crafted emails direct unsuspecting users to the dummy login…Done. Simple as that, login credentials obtained within minutes.

How do we protect ourselves from here?

There are three fronts that could dramatically improve the outcome of these scenarios. First, end-user training and policies geared towards making employees more aware of possible attacks and best practices. I am not talking about handing a policy to the employee and having them read it, either. Second, internal penetration tests are still viable and will cover a number of areas that protect against employee-targeted attacks as well as minimize the impact of more sophisticated ones. This may include additional tasks such as hardening hosts, segregating networks/assets, and adjusting the appropriate policies. Third, static passwords on critical externally facing systems should be replaced with a more secure method such as token authentication. The truth is there is no magic bullet to prevent phishing or social attacks; we will always be combating the human tendency to trust.