Neohapsis Announcement

As our clients and friends in the industry know, Neohapsis has been a key player in the security, risk and compliance market. Today, we are excited to announce plans to join Cisco, who we believe will be the perfect strategic match for us, given our services and research mission.

We share with Cisco a global enterprise customer base, and a commitment to help our customers address their most challenging threats, especially in the rapidly evolving mobile and cloud arenas. Because of Neohapsis’ and Cisco’s shared focus on the Internet of Everything, the opportunity to do groundbreaking work together is enormous. Together, what we bring to enterprise customers, IoT device manufacturers, and associated service providers will be unique in the market.

Please read Hilton Romanski’s blog which outlines more about the strategy driving this acquisition: http://blogs.cisco.com/news/cisco-announces-intent-to-acquire-neohapsis.

 

James Mobley,

President & CEO,

Neohapsis

Putting Windows XP Out To Pasture

Farewell, Windows XP! We hated you, then loved you, and soon we’ll hate you again.

 (This post is a resource for home and small-business users with questions about the impending end-of-life for Windows XP. Larger enterprise users have some different options available to them; contact us to discuss your situation and options.)

For those who haven’t seen it in the news yet: Microsoft will be ending support for its hugely successful operating system, Windows XP, on April 8th. This means that users of the 12-year-old operating system will no longer be able to get updates, and in particular will not be able to get security updates. Users of more modern versions of Windows, such as Windows Vista or Windows 7, will remain supported for several more years.

Once support ends, computers still on Windows XP will become a very juicy target for Internet criminals and attackers. Internet crime is big business, so every day there are criminals looking for new weaknesses in computer systems (called vulnerabilities), and developing attacks to take advantage of them (these attacks are called exploits). Normally, the software vendor (Microsoft in this case) quickly finds out about these weaknesses and releases updates to fix them. When an exploit is developed, some number of people fall victim shortly after the exploit is first used, but people who get the update in a relatively timely manner are protected.

But what happens when a vendor stops updating the software? All of a sudden, the bad guys can use these same attacks, the same exploits, indefinitely. As a product nears end of life, attackers have an incentive to hold off on using critical vulnerabilities until the deadline passes. The value of their exploits goes up significantly once they have confidence that the vendor will never patch them. Based on that, we can expect a period of relative quiet in terms of announced vulnerabilities affecting XP from now until shortly after the deadline, when we will likely see stockpiled critical vulnerabilities begin circulating. From then on, the risk to these legacy XP systems will continue to increase, so migrating away from XP or dramatically isolating the systems should be a priority for people or organizations that still use them.

How do I know if I’m running Windows XP?

  • If your computer is more than 5 years old, odds are it is running Windows XP
  • Simplest way: “Win+Break”: Press and hold down the Windows key on your keyboard, then find the “Pause” or “Break” key and press it. Let both keys go. That will show the System Properties window. You may have to hunt around for your “Pause/Break” key, but hey, it finally has a use.
  • Alternate way: Click the Start Menu -> Right click on “My Computer” -> On the menu that comes out, click on Properties

 


Click the Start Menu, then right-click My Computer, then click Properties.

 


Your version of Windows will be the first thing on the System Properties window.
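For readers who happen to have Python installed, the same check can be scripted; a minimal sketch (the `platform` module reports the Windows release name):

```python
import platform

def is_windows_xp():
    # On Windows XP, platform.system() reports "Windows" and
    # platform.release() reports "XP"; on anything else this is False.
    return platform.system() == "Windows" and platform.release() == "XP"

print(is_windows_xp())
```
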

 

How do I stay safe?

Really, you should think about buying a new computer. You can think of it as a once-a-decade spring cleaning. If your computer is old enough to have Windows XP, having an unsupported OS is likely just one of several problems. It is possible to upgrade your old computer to a newer operating system such as Windows 7, or convert to a free Linux-based operating system, but this may be a more complicated undertaking than many users want to tackle.

Any computer you buy these days will be a huge step up from a 7-year-old (at least!) machine running XP, so you can comfortably shop the cheapest lines of computers. New computers can be found for $300, and it’s also possible to buy reputable refurbished ones with a modern operating system for $100-$200.

For those who really don’t want to or can’t upgrade, the situation isn’t pretty. Your computer will continue to work as it always has, but the security of your system and your data is entirely in your hands. These systems have been low-hanging fruit for attackers for a long time, but after April 8th they will have a giant neon bull’s-eye on them.

There are a few things you can do to reduce your risks, but there really is no substitute for timely vendor patches.

  1. Only use the system for tasks that can’t be done elsewhere. If the reason for keeping an XP machine is to run some specific program or piece of hardware, then use it only for that. In particular, avoid web browsing and email on the unsupported machine: both activities expose the vulnerable system to lots of untrusted input.
  2. Keep all of your other software up to date. Install and use the latest version of Firefox or Chrome web browsers, which won’t be affected by Microsoft’s end of life.
  3. Back up your computer. There are many online backup services available for less than $5 a month. If something goes wrong, you want to make sure that your data is safe. Good online backup services provide a “set it and forget it” peace of mind. This is probably the single most important thing you can do, and should be a priority even for folks using a supported operating system. Backblaze, CrashPlan, and SpiderOak are all reasonable choices for home users.
  4. Run antivirus software, and keep it up to date. AVAST, AVG, and Bitdefender are all reasonable free options, but be aware that antivirus is only a layer of protection: it’s not perfect.

 

Gutting a Phish

In the news lately there have been countless examples of phishing attacks becoming more sophisticated, but it’s important to remember that the entire “industry” is a bell curve: the most dedicated attackers are upping their game, but advancements in tooling and automation are also letting many less sophisticated players get started even more easily. Put another way, spamming and phishing are coexisting happily as both massive multinational business organizations and smaller cottage-industry efforts.

One such enterprising but misguided individual made the mistake of sending a typically blatant phishing email to one of our Neohapsis mailing lists, and someone forwarded it along to me for a laugh.

Initial Phish Email

The phishing email, as it appeared in a mailbox

As silly and evident as this is, one thing I’m constantly astounded by is how the proportion of people who will click never quite drops to zero. Our work on social engineering assessments bears out this real world example: with a large enough sample set, you’ll always hook at least one. In fact, a paper out of Microsoft Research suggests that, for scammers, this sort of painfully blatant opening is actually an intentional tool: it acts as a filter that only the most gullible will pass.

Given the weak effort put into the email, I was curious to see if the scam got any better if someone actually clicked through. To be honest, I was pleasantly surprised.

Phish Site

The phishing site: a combination of legitimate Apple code and images and a form added by the attacker

The site is dressed up as a reasonable approximation of an official Apple site. In fact, a look at the source shows that there are two things going on here: some HTML/CSS set dressing and template code that is copied directly from the legitimate Apple site, and the phishing form itself which is a reusable template form created by one of the phishers.

Naturally, I was curious where data went once the form was submitted. I filled in some bogus data and submitted it (the phishing form helpfully pointed out any missing data; there is certainly an audacity in being asked to check the format of the credit card number that’s about to be stolen). The data POST went back to another page on the same server, then quickly forwarded me on to the legitimate iTunes site.

Burp capture of the submit request and the subsequent forward

This is another standard technique: if a “login” appears to work because the victim was already logged in, the victim will often simply proceed with what they were doing without questioning why the login was prompted in the first place. During social engineering exercises at Neohapsis, we have seen participants repeatedly log into a cloned attack site, with mounting frustration, as they wonder why the legitimate site isn’t showing them the bait they logged in for.

Back to this phishing site: my application security tester spider senses were tingling, so I felt that I had to see what our phisher was doing with the data being submitted. To find out, I replayed the submit request with various types of invalid data, strings that should cause errors depending on how the data was being parsed or stored. Not a single test string produced any errors or different behavior. This could be an indication that any parsing and processing is being done carefully and correctly, but the far more likely case is that they’re simply doing no processing and dumping it all straight out as plain text.
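The replay step looks roughly like this; a sketch in Python (the field name and probe strings here are illustrative, not the phishing form’s actual fields):

```python
import urllib.parse

# Probe strings chosen to upset common parsers and stores: SQL quoting,
# HTML/script injection, null bytes, and oversized input.
PROBES = ["'", '"><script>alert(1)</script>', "%00", "A" * 5000]

def build_probe_body(field, value):
    # Encode one probe as the form-urlencoded POST body a tool like
    # Burp Repeater would replay against the harvester script.
    return urllib.parse.urlencode({field: value})

for p in PROBES:
    body = build_probe_body("card_number", p)
    # each body would be POSTed and the response compared against baseline

print(build_probe_body("card_number", "'"))  # card_number=%27
```

Identical responses across all probes, as seen here, is the tell that the input is never parsed at all, just dumped.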

Interesting… if harvested data is just being simply dumped to disk, where exactly is it going? Burp indicates that the data is being POSTed to a harvester script at Snd/Snd.php. I wonder what else is in that directory?

directory listing

Under the hood of the phishing site, the loot stash is clearly visible

That result.txt file looks mighty promising… and it is.

result.txt

The format of the result.txt file

These are the raw results dumped from victims by the harvester script (Snd.php). The top entry is dummy data that I submitted, and when I checked it, the file was entirely filled with the various dummy submissions I had done before. It’s pretty clear from the results that I was the first person to actually click through and submit data to the phish site, which is actually pretty fortunate: if a victim did enter legitimate information, the attacker would have to sort it out from a few hundred bogus submissions. Any day that we can make life harder for the bad guys is a good day.

So, the data collection is dead simple, but I’d still like to know a bit more about the scam and the phishers if possible. There’s not a lot to go on, but the tag at the top of each entry seems unique. It’s the sort of thing we’re used to seeing when hackers deface a website and leave a tag to publicize the work:

------------+| $ o H a B  Dz and a m i r TN |+------------

Googling some variations turned up a Google cache of a forum post that’s definitely related to the phishing site above; it’s either the same guy, or someone else using the same tool.

AppleFullz Forum post

A post in a carder forum, offering to sell data in the same format as generated by the phishing site above

A criminal using the name AppleFullz is selling complete information dumps of login details and credit card numbers plus CVV numbers (called “fulls” in carder forums) captured in the exact format that the Apple phish used, and even provides a sample of his wares (insult to injury for the victim: not only was his information stolen, but it’s being given away as the credit card fraud equivalent of the taster trays at the grocery store). This carder is asking for $10 for one person’s information, but is willing to give bulk discounts: $30 for 5 accounts (this is actually a discount over the sorts of prices normally seen on carder forums; Krebs recently reported that Target cards were selling for $20-$100 per card. I read this as an implicit acknowledgement by our seller that this data is much “dirtier” and that the seller is expecting buyers to mine it for legitimate data). The tools being used here are a combination of some pre-existing scraps of PHP code widely used in other spam and scam campaigns (the section labeled “|INFO|VBV|”), and a separate section added specifically to target Apple IDs.

Of particular interest is that the carder provided a Bitcoin address. For criminals, Bitcoin has the advantage of anonymity but the disadvantage that transactions are public. This means that we can actually look up how much money has flowed into that particular Bitcoin address.

blockchain

Ill-gotten gains: the Bitcoin blockchain records transfers into the account used for selling stolen Apple IDs and credit card numbers.

From November 17, when the forum posting went up, until December 4th, when I investigated this phishing attempt, he has received Bitcoin transfers totaling 0.81815987 BTC, which is around $744.53 (based on the BTC value on 12/4). According to his price sheet, that translates to a sale of between 74 and 124 records: not bad for a month of terribly unsophisticated phishing.
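The arithmetic behind that range is straightforward; a quick sanity check using only the numbers above:

```python
# All figures taken from the post itself.
btc_received = 0.81815987
usd_total = 744.53                       # USD value of that BTC on 12/4
implied_rate = usd_total / btc_received  # ~ $910 per BTC that day

# Price sheet: $10 per single record, or $30 per 5 records ($6 each in bulk).
min_records = int(usd_total // 10)       # everyone paid full price -> 74
max_records = int(usd_total // 6)        # everyone bought in bulk  -> 124

print(min_records, max_records)  # 74 124
```
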

Within a few hours of investigating the initial phishing site, it had been removed. The actual server where the phish site was hosted was a legitimate domain that had been compromised; perhaps the phisher noticed the volume of bogus traffic and decided that the jig was up for that particular phish, or the system administrator got tipped off by the unusual traffic and investigated. Either way, the phish site is offline, so that’s another small victory.

Defeating Cross-site Scripting with Content Security Policy

If you have been following Neohapsis’ @helloarbit or @coffeetocode on Twitter, you have probably seen us tweeting quite a bit about Content Security Policy.  Content Security Policy is an HTTP header that allows you, the developer or security engineer, to define where web applications can and cannot load content from.  By defining a strict Content Security Policy, the developers of web applications can almost completely mitigate cross-site scripting and other attacks.

CSP functions by allowing a web application to declare the sources from which it expects to load scripts, allowing the client to detect and block malicious scripts injected into the application by an attacker.

Another way to think about content security policy is as a source whitelist. Typically when an end user makes a request for a web page, the browser trusts output that the server is delivering. CSP, however, limits this trust model by sending a Content-Security-Policy header that allows the application to specify a whitelist of trusted (expected) sources. When the browser receives this header, it will only render or execute resources from those sources.

In the event that an attacker does have the ability to inject malicious content that is reflected back against the user, the script will not match the source whitelist, and the script will not be executed.
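As a concrete illustration (the policy and CDN below are hypothetical examples, not a recommendation for any particular site), a whitelist policy might allow scripts only from the site’s own origin and one trusted CDN:

```python
# Hypothetical policy: scripts may load only from the page's own origin and
# one trusted CDN. An injected inline <script> matches neither source, so a
# CSP-aware browser refuses to execute it.
CSP_POLICY = "default-src 'self'; script-src 'self' https://cdn.example.com"

def add_csp_header(headers):
    # Attach the policy to an outgoing response's headers.
    headers = dict(headers)
    headers["Content-Security-Policy"] = CSP_POLICY
    return headers

response_headers = add_csp_header({"Content-Type": "text/html"})
print(response_headers["Content-Security-Policy"])
```

Note that under CSP, inline scripts are blocked by default unless the policy explicitly opts back in with 'unsafe-inline', which is exactly the property that defeats reflected injection.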

Traditional mechanisms to mitigate cross-site scripting are to HTML encode or escape output that is reflected back to a user as well as perform rigorous input validation. However due to the complexity of encoding and validation, cross-site scripting may crop up in your website.  Think of CSP as your insurance policy in the event something malicious sneaks through your input validation and output encoding strategies.

Although we at Neohapsis Labs have been researching Content Security Policy for a while, we found that it’s a complicated technology that has plenty of intricacies.  After realizing that many users may run into the same issues and confusion we did, we decided to launch cspplayground.com to help educate developers and security engineers on how to use CSP as well as a way to validate your own complex policies.

CSP Playground allows you to see example code and practices that are likely to break when applying CSP policies.  You can toggle custom policies and watch in real-time how they affect the playground web application.  We also provide some sample approaches on how to modify your code so that it will play nicely with CSP.  After you have had time to tinker around, you can use the CSP Validator to build out your own CSP policy and ensure that it is in compliance with the CSP 1.1 specification.

The best way to learn CSP is by experimenting with it.  We at Neohapsis Labs put together the cspplayground.com site for exactly that reason, and we suggest you check it out to learn more.

Picking Up The SLAAC With Sudden Six

By Brent Bandelgar and Scott Behrens

The people that run The Internet have been clamoring for years for increased adoption of IPv6, the next generation Internet Protocol. Modern operating systems, such as Windows 8 and Mac OS X, come out of the box ready and willing to use IPv6, but most networks still have only IPv4. This is a problem because the administrators of those networks may not be expecting any IPv6 activity and only have IPv4 monitoring and defenses in place.

In 2011, Alec Waters wrote a guide on how to take advantage of the fact that Windows Vista and Windows 7 were ‘out of the box’ configured to support IPv6. Dubbed the “SLAAC Attack”, his guide described how to set up a host that advertised itself as an IPv6 router, so that Windows clients would prefer to send their requests to this IPv6 host router first, which would then resend the requests along to the legitimate IPv4 router on their behalf.

This past winter, we at Neohapsis Labs tried to recreate the SLAAC Attack to test it against Windows 8 and make it easy to deploy during our own penetration tests.

We came up with a set of standard packages and accompanying configuration files that worked, then created a script to automate this process, which we call “Sudden Six.” It can quickly create an IPv6 overlay network and the intermediate translation to IPv4 with little more than a base Ubuntu Linux or Kali Linux installation, an available IPv4 address on the target network, and about a minute or so to download and install the packages.

Windows 8 on Sudden Six

As with the SLAAC Attack described by Waters, this works against networks that only have IPv4 connectivity and do not have IPv6 infrastructure and defenses deployed. The attack establishes a transparent IPv6 network on top of the IPv4 infrastructure. Attackers may take advantage of operating systems that prefer IPv6 traffic to force those hosts to route their traffic over our IPv6 infrastructure so they can intercept and modify that communication.

To boil it down, attackers can conceivably (and fairly easily) weaponize an attack on our systems simply by leveraging this vulnerability. They could pretend to be an IPv6 router on your network and see all your web traffic, including data being sent to and from your machine. Even more lethal, the attacker could modify web pages to launch client-side attacks, meaning they could create fake websites that look like the ones you are trying to access, but send all data you enter back to the attacker (such as your username and password or credit card number).

As an example, we can imagine this type of attack being used to snoop on web traffic from employees browsing web sites.

The most extreme way to mitigate the attack is to disable IPv6 on client machines. In Windows, this can be accomplished manually in each Network Adapter Properties panel or with GPO. Unfortunately, this would hinder IPv6 adoption. Instead, we would like to see more IPv6 networks being deployed, along with the defenses described in RFC 6105 and the Cisco First Hop Security Implementation Guide. This includes using features such as RA Guard, which allows administrators to configure a trusted switch port that will accept IPv6 Router Advertisement packets, indicating the legitimate IPv6 router.
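For reference, the per-machine version of that mitigation boils down to a single registry value, Microsoft’s documented DisabledComponents setting (0xFF disables all IPv6 components except the loopback interface). A sketch that simply composes the reg.exe command an admin or a GPO startup script would run:

```python
# DisabledComponents lives under the Tcpip6 service parameters; 0xff disables
# all IPv6 components except loopback (per Microsoft's documented guidance).
TCPIP6_KEY = r"HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters"

def disable_ipv6_command(value=0xFF):
    # Build (not execute) the reg.exe command line for the given bitmask.
    return (f'reg add "{TCPIP6_KEY}" /v DisabledComponents '
            f'/t REG_DWORD /d {value:#x} /f')

print(disable_ipv6_command())
```

A reboot is required for the change to take effect, and again: we would rather see RA Guard deployed than IPv6 switched off.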

At DEF CON 21, Brent Bandelgar and Scott Behrens will be presenting this attack as well as recommendations on how to protect your environment. You can find a more detailed abstract of our talk here. The talk will be held during Track 2 on Friday at 2 pm. In addition, on Friday we will be releasing the tool on the Neohapsis Github page.

Collecting Cookies with PhantomJS

TL;DR: Automate WebKit with PhantomJS to get specific Web site data.

This is the first post in a series about gathering Web site reconnaissance with PhantomJS.

My first major engagement with Neohapsis involved compiling a Web site survey for a global legal services firm. The client was preparing for a compliance assessment against Article 29 of the EU Data Protection Directive, which details disclosure requirements for user privacy and usage of cookies. The scope of the engagement involved working with their provided list of IP addresses and domain names to validate their active and inactive Web sites and redirects, count how many first party and third party cookies each site placed, identify any login forms, and determine the presence of links to site privacy policy and cookie policy.

The list was extensive and the team had a hard deadline. We had a number of tools at our disposal to scrape Web sites, but as we had a specific set of attributes to look for, we determined that our best bet was to use a modern browser engine to capture fully rendered pages and try to automate the analysis. My colleague, Ben Toews, contributed a script towards this effort that used PhantomJS to visit a text file full of URLs and capture the cookies into another file. PhantomJS is a distribution of WebKit that is intended to run in a “headless” fashion, meaning that it renders Web pages and scripts like Apple Safari or Google Chrome, but without an interactive user interface. Instead, it runs on the command line and exposes a JavaScript API for controlling the browser.  I was able to build on this script to compile a list of active and inactive URLs by checking the status callback from page.open, and to capture the cookies from every active URL as stored in the page.cookies property.

Remember how I said that PhantomJS would render a Web page like Safari or Chrome? This was very important to the project as I needed to capture the Web site attributes in the same way a typical user would encounter the site. We needed to account for redirects from either the Web server or from JavaScript, and any first or third party cookies along the way. As it turns out, PhantomJS provides a way to capture URL changes with the page.onUrlChanged callback function, which I used to log the redirects and final destination URL. The page.cookies attribute includes all first and third party cookies without any additional work as PhantomJS makes all of the needed requests and script executions already. Check out my version of the script in chs2-basic.coffee.

This is the command invocation. It takes two arguments: a text file with one URL per line and a file name prefix for the output files.


phantomjs chs2-basic.coffee [in.txt] [prefix]

This snippet writes out the cookies into a JSON string and appends it to an output file.

if status is 'success'
  # output JSON of cookies from page, one JSON string per line
  # format: url: (requested URL from input) pageURL: (resolved location from the PhantomJS "address bar") cookie: object containing cookies set on the page
  fs.write system.args[2] + ".jsoncookies", JSON.stringify({url: url, pageURL: page.url, cookie: page.cookies}) + "\n", 'a'
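Once the `.jsoncookies` file exists, the first/third party tally the client needed falls out of a few lines of downstream analysis. A sketch in Python, assuming the one-JSON-object-per-line format written above (a cookie is treated as third party when its domain doesn’t match the final page host):

```python
import json
from urllib.parse import urlparse

def tally_cookies(lines):
    # Count first- vs third-party cookies across all captured records.
    first = third = 0
    for line in lines:
        record = json.loads(line)
        host = urlparse(record["pageURL"]).hostname or ""
        for cookie in record["cookie"]:
            domain = cookie.get("domain", "").lstrip(".")
            if host == domain or host.endswith("." + domain):
                first += 1
            else:
                third += 1
    return first, third

sample = ['{"url": "http://example.com", "pageURL": "http://example.com/", '
          '"cookie": [{"name": "s", "domain": ".example.com"}, '
          '{"name": "t", "domain": ".tracker.net"}]}']
print(tally_cookies(sample))  # (1, 1)
```
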

In a followup post, I’ll discuss how to capture page headers and detect some common platform stacks.

Facebook Applications Have Nagging Vulnerabilities

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook’s application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, now Facebook is being utilized as a platform by major companies to foster interaction with their customers in a variety of forms such as sweepstakes, promotions, shopping, and more.

And why not?  We’ve all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1)  Additionally, Facebook is seen as a treasure trove of valuable data accessible to anyone who can get enough “Likes” on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been requested to help clients assess these applications and help determine what type of risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small in size (2-4 dynamic pages).  The functionality contained in these applications ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that on average the applications assessed had vulnerabilities in 2.5 vulnerability classes (e.g., Cross-Site Scripting or SQL Injection), and none of the applications were completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group of applications included Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerability classes will be discussed below, along with how the social networking aspect of the applications affects their potential impact.

Facebook applications suffer the most from Cross-Site Scripting. This type of vulnerability was identified on 46% of the applications sampled.  This is not surprising, since this age-old problem still creeps into many corporate and personal applications today.  An application discovered to be vulnerable to XSS could be used to attempt browser based exploits or to steal session cookies (but only in the context of the application’s domain).

These types of applications are generally framed inline [inline framing, or iframing, is a common HTML technique for framing media content] on a Facebook page from the developer’s own servers/domain. This alleviates some of the risk to the user’s Facebook account since the JavaScript can’t access Facebook’s session cookies.  And even if it could, Facebook does use HttpOnly flags to prevent JavaScript from accessing session cookie values.  But, we have found that companies have a tendency to utilize the same domain name repeatedly for these applications since generally the real URL is never really visible to the end user. This means that if one application has a XSS vulnerability, it could present a risk to any other applications hosted at the same domain.

When third-party developers enter the picture all this becomes even more of a concern, since two clients’ applications may be sharing the same domain and thus be in some ways reliant on the security of the other client’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is prohibitively difficult, the truth is it’s relatively simple.  Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies.  About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application that was being assessed that only used SSL when submitting login information.  On an unprotected Wi-Fi network, as soon as the application sent any information over HTTP, we had valid session cookies, which were easily replayed to compromise that victim’s session.
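The root cause is usually a session cookie issued without the Secure attribute, so the browser happily resends it over plain HTTP where a tool like Firesheep can capture it. A quick sketch of that check against a Set-Cookie header (the header values here are made up):

```python
def cookie_has_secure_flag(set_cookie_header):
    # Split the Set-Cookie header into its attributes and look for "Secure",
    # which tells the browser to send the cookie over HTTPS only.
    attrs = [a.strip().lower() for a in set_cookie_header.split(";")]
    return "secure" in attrs

print(cookie_has_secure_flag("session=abc123; Path=/; Secure; HttpOnly"))  # True
print(cookie_has_secure_flag("session=abc123; Path=/"))                    # False
```

If the second case describes your application’s session cookie, logging in over SSL buys you very little.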

Once again, the impact of this finding really depends on the functionality of the application, but the wide variety of applications on Facebook does provide an interesting and varied landscape for the attacker to choose from.  We only flagged this vulnerability under specific circumstances where either the application cookies were somehow important (for example, being used to identify a logged-in session) or the application included functionality where sensitive data (such as PII or credit card data) was transmitted.

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it’s generally not considered to be one of the most commonly identified vulnerabilities across all web applications. Nevertheless 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.)

We found that many of the applications we assessed have their file upload functionality implemented in an insecure way.  Most of the applications did not check content type headers or even file extensions.  Although none of the vulnerabilities discovered led to command injection flaws, almost every vulnerability exploited allowed the attacker to upload JavaScript, HTML or other potentially malicious files such as PDF and executables.  Depending on the domain name affected by this vulnerability, this flaw would aid in the attacker’s social engineering effort as the attacker now has malicious files on a trusted domain.
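The missing checks are cheap to add. A minimal sketch of the idea (whitelist the extension and require the declared content type to agree with it, rather than trusting either alone):

```python
import os

# Whitelist of acceptable upload types: extension -> expected content type.
ALLOWED_UPLOADS = {
    ".jpg": "image/jpeg",
    ".png": "image/png",
    ".gif": "image/gif",
}

def upload_allowed(filename, content_type):
    # Reject anything whose extension is unknown, or whose declared
    # content type disagrees with that extension.
    ext = os.path.splitext(filename.lower())[1]
    return ALLOWED_UPLOADS.get(ext) == content_type

print(upload_allowed("avatar.png", "image/png"))  # True
print(upload_allowed("evil.html", "text/html"))   # False
```

Robust validation goes further still, inspecting the file’s actual magic bytes and serving uploads from a separate, cookieless domain so that a sneaky file can’t run in a trusted context.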

Our assessment also identified a wide range of other types of vulnerabilities. For example, we found several of these applications to be utilizing publicly available admin interfaces with guessable credentials. Furthermore, at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configurations were also a frequent problem, with unnecessarily exposed services and insecure configurations being repeatedly identified.

Finally, we also found that many of these web applications had some interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest. If it is possible for a malicious user to game the contest (for example by cheating at a social game and placing a fake high score) this could reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social network sites is increasing, we’ve found companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present a risk and should be thoroughly examined just like traditional standalone web applications.