Are You Prepared for Certificate Authority Breaches?

By Nate Couper

In the last few years, security breaches involving signed SSL certificates, as well as compromises of a number of certificate authorities (CAs) themselves, have exposed gaps in the foundations of online security.

  • DigiNotar
  • Comodo
  • VeriSign
  • and others

It is no longer safe to assume that CAs, large or small, have sufficient stake in their reputation to invest in security that is 100% effective.  In other words, it’s time to start assuming that CAs can and will be breached again.

Fortunately for the white hats out there, NIST has just released a bulletin on responding to CA breaches.  Find it on NIST’s website at http://csrc.nist.gov/publications/nistbul/july-2012_itl-bulletin.pdf.

The NIST document has great recommendations for responding to CA breaches, including:

  • Document the certificates and CAs your organization uses.
  • Document logistics and information required to respond to CA compromises.
  • Review and understand the CAs in active use in your organization.
  • Understand “trust anchors” in your organization.
  • Develop policies for application development and procurement, and implement them.
  • Understand and react appropriately to CA breaches.

Let’s dive into these:

1. Document the certificates and CAs that your organization uses

Any compliance wonk will tell you that inventory is your first and best control.  Does your organization have an inventory?

Let’s count certificates.  There’s http://www.example.com, www2.example.com, admin.example.com, backend.example.com, and there’s mail.example.com.  There may also be vpn.example.com, ftps.example.com, and ssh.example.com.  These are the obvious ones.

Practically every embedded device, from the cheapest Wi-Fi router to the lights-out management interface on your big-iron systems, ships with an SSL interface these days.  Count each of those: every router, switch, and firewall, every blade-server enclosure, every SAN array.  Take a closer look at your desktops: Windows has a certificate database, Firefox carries its own, Java has its own, and multiple instances of Java on a single system can have multiple CA databases.  Now your servers: every major OS ships with SSL capabilities, whether Windows, Linux (OpenSSL), or Unix.  Look at your applications: chances are every piece of J2EE and .NET middleware has a CA database associated with it, as does every application your organization bought or wrote that uses SSL.  Every database, every load balancer, every IDS/IPS.  Every temperature sensor, scanner, printer, and badging system that supports SSL probably has a list of CAs somewhere.

All your mobile devices.  All your cloud providers, and all the services they back-end to.
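
For a quick sense of scale on a single Linux host, here is a minimal sketch that counts the CAs in the system-wide PEM bundle and prints each one’s subject.  The bundle path is an assumption (Debian-family systems ship /etc/ssl/certs/ca-certificates.crt; adjust for your platform), and it assumes the openssl CLI is on the PATH.

    #!/usr/bin/env python3
    """Rough inventory of the CAs trusted via one PEM bundle."""
    import re
    import subprocess

    BUNDLE = "/etc/ssl/certs/ca-certificates.crt"  # assumed location

    with open(BUNDLE) as f:
        pem = f.read()

    # Split the bundle into individual PEM certificates.
    certs = re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        pem, re.DOTALL)
    print(f"{len(certs)} trusted CAs in {BUNDLE}")

    for cert in certs:
        # Ask the openssl CLI for each certificate's subject.
        subject = subprocess.run(
            ["openssl", "x509", "-noout", "-subject"],
            input=cert, capture_output=True, text=True).stdout.strip()
        print(subject)

And that is just one trust store on one host.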

If your organization is like most, you probably have an Excel spreadsheet with a list of AD servers, or maybe you query a domain controller when you need a list of systems.  Forget about software and component inventory.  Don’t even think about printers, switches, or cameras.

If you’re lucky enough to have a configuration management database (CMDB), what is its scope?  When was the last time you checked it for accuracy?  In-scope accuracy rates of 75% count as “good,” if some of my clients are any measure, and CMDB scope rarely extends beyond production servers.

Each one of these devices may have several SSL certificates, and may trust hundreds of CAs for no reason other than that it shipped that way.

Using my laptop as an example, I’ve got several hundred “trusted” CAs loaded by default into Java, Firefox, IE, and OpenSSL.  Multiply by five or so to account for the virtual machines I frequent.  Of those thousands of CA entries, my system probably uses a dozen or so per day.
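
The Java side is easy to check for yourself.  A sketch, with the caveats that the keystore path varies by JDK version (older JDKs keep it under $JAVA_HOME/jre/lib/security/cacerts) and that the default store password is the well-known “changeit”:

    #!/usr/bin/env python3
    """Count the trusted CAs in a Java cacerts keystore."""
    import os
    import subprocess

    # Path assumes a modern JDK layout; older JDKs use jre/lib/security.
    cacerts = os.path.join(os.environ["JAVA_HOME"],
                           "lib", "security", "cacerts")

    listing = subprocess.run(
        ["keytool", "-list", "-keystore", cacerts, "-storepass", "changeit"],
        capture_output=True, text=True).stdout

    # keytool marks each CA entry as a "trustedCertEntry".
    trusted = [line for line in listing.splitlines()
               if "trustedCertEntry" in line]
    print(f"{len(trusted)} trusted CAs in {cacerts}")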

2. Document logistics and information required to respond to CA breaches

How exactly do you manage the list of trusted CAs on your iPad anyway?  Your load balancer?  Who is responsible for these devices, and who depends on them?  If you found out that Thawte was compromised tomorrow, would you be able to marshal all the people who manage these systems in less than a day?  In a week?

What would it take to replace certificates, or to tweak the list of CAs across the enterprise?  It will definitely take longer if you’re trying to figure it out as you go.
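
One way to avoid figuring it out as you go is to keep a machine-readable record per trust store.  A sketch of what such a record might capture; the field names and values are illustrative, not a standard schema:

    # One entry per trust store; fields and values are illustrative.
    trust_store_inventory = [
        {
            "system": "lb01.example.com",
            "store": "/etc/pki/tls/certs/ca-bundle.crt",
            "store_type": "PEM bundle",
            "owner": "netops@example.com",
            "update_runbook": "https://wiki.example.com/runbooks/lb-ca-update",
            "target_response_time": "4 hours",
        },
        # ... one record for every store found in step 1
    ]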

3. Review and understand the CAs in active use in your organization

Of the hundreds of CAs on my laptop, I actually use no more than a dozen or so each day.  In fact, it would be noteworthy if more than a handful got used at all.  I could disable hundreds of them and never notice.  After all, I don’t spend a lot of time on Romanian or Singaporean sites, and CAs from those regions probably don’t see a lot of foreign use.

Most organizations are savvy enough to source their certificates from at most a handful of trusted CAs, and a given server might need only one trusted CA.  Ask your network and application administrators: which CAs do we trust, and which do we actually need to trust?  It might make sense to preemptively remove some or all of the CAs you’re not using, if only in the name of reducing attack surface.
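
One way to ground that conversation in data is to survey your own endpoints and record who actually issued the certificates they serve.  A rough sketch; it reports the issuing CA (often an intermediate rather than the root), so treat it as a starting point, and the host list is illustrative:

    #!/usr/bin/env python3
    """Survey which CAs issued the certificates our hosts serve."""
    import socket
    import ssl

    HOSTS = ["www.example.com", "mail.example.com", "vpn.example.com"]

    for host in HOSTS:
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    # getpeercert() returns the leaf; record its issuer.
                    issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
                    print(f"{host}: issued by {issuer.get('organizationName')}")
        except OSError as exc:
            print(f"{host}: connection failed ({exc})")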

4. Understand “trust anchors” within your organization

Trust anchors are the major agents in a PKI: the CAs.  They provide the rules and services that govern the roles of the other parties, such as intermediates, registrars, and the users of certificates.  Go back through your inventory (you made one of those, right?) and document the configuration.  What do the trust anchors allow and disallow with your certificates?  Will revoked certificates be handled correctly, and how is that configured?
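
You can answer the revocation question empirically rather than by reading configuration files.  A minimal sketch shelling out to the openssl CLI; the file names are placeholders for your own CA bundle, CRL, and certificate under test, and flag support varies somewhat across OpenSSL versions:

    #!/usr/bin/env python3
    """Verify a certificate chain with CRL checking enabled."""
    import subprocess

    result = subprocess.run(
        ["openssl", "verify",
         "-crl_check",                 # fail if the cert appears on the CRL
         "-CAfile", "ca-bundle.pem",   # placeholder trust anchors
         "-CRLfile", "current.crl",    # placeholder CRL
         "cert.pem"],                  # placeholder cert under test
        capture_output=True, text=True)

    print(result.stdout or result.stderr)
    print("chain OK" if result.returncode == 0 else "verification FAILED")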

Does your organization deploy internal CAs?  Which parts of the organization control the internal CAs, and what other parts of the business depend on them?  What internal SLAs/SLOs are afforded, and what metrics measure them?

5. Develop policies for application development and procurement

How many RSA SecurID customers really understood that RSA was holding on to secret information that could contribute to attacks against RSA’s own customers?  Did your organization ask RIM whether the trusted CAs on your BlackBerry devices could be replaced?  Do you use external CAs for purely internal applications, knowing full well the potential implications of an external breach?

Does your purchase and service contract language oblige your vendor even to tell you if they have a breach, or will you have to wait until it turns up on CNN?  Do they make claims about their security, and are those claims verifiable?  Or do they coast on vague marketing language and ride on the coattails of once-hip internet celebrities and gobbled-up startups?

6. Understand CA breaches and react appropriately

Does your incident response program understand CA breaches?  Can you mobilize your organization to do what it needs to when the time comes, and within operational parameters?
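
When the time does come, the mechanics are mundane but worth rehearsing.  As one example, here is a sketch of distrusting a single CA on a Debian-family host; it relies on the documented convention that a “!” prefix in /etc/ca-certificates.conf deselects an entry, the CA name below is hypothetical, and it must run as root:

    #!/usr/bin/env python3
    """Distrust one CA on a Debian/Ubuntu host and rebuild the bundle."""
    import subprocess

    CONF = "/etc/ca-certificates.conf"
    TARGET = "mozilla/Compromised_CA.crt"  # hypothetical entry name

    with open(CONF) as f:
        lines = f.read().splitlines()

    with open(CONF, "w") as f:
        for line in lines:
            if line == TARGET:
                line = "!" + line      # "!" marks the entry as deselected
            f.write(line + "\n")

    # Rebuild /etc/ssl/certs from the updated selection.
    subprocess.run(["update-ca-certificates"], check=True)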

CA breaches have happened before and will happen again.  NIST has again delivered a world-class roadmap for achieving enterprise security objectives.  Is your organization equipped?

Facebook Applications Have Nagging Vulnerabilities

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook’s application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, major companies are now utilizing Facebook as a platform to foster interaction with their customers in a variety of forms, such as sweepstakes, promotions, shopping, and more.

And why not?  We’ve all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1)  Additionally, Facebook is seen as a treasure trove of valuable data, accessible to anyone who can get enough “Likes” on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been asked to help clients assess these applications and determine what type of risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small (2-4 dynamic pages). The functionality contained in these applications ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that the applications assessed had, on average, vulnerabilities in 2.5 vulnerability classes (e.g., Cross-Site Scripting or SQL Injection), and none of the applications was completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group of applications were Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerability classes is discussed below, along with how the social networking aspect of the applications affects its potential impact.

Facebook applications suffer the most from Cross-Site Scripting. This type of vulnerability was identified in 46% of the applications sampled. This is not surprising, since this age-old problem still creeps into many corporate and personal applications today. An application vulnerable to XSS could be used to attempt browser-based exploits or to steal session cookies (but only in the context of the application’s domain).

These applications are generally framed inline (inline framing, or iframing, is a common HTML technique for embedding media content) on a Facebook page from the developer’s own servers/domain. This alleviates some of the risk to the user’s Facebook account, since the JavaScript can’t access Facebook’s session cookies. And even if it could, Facebook uses HttpOnly flags to prevent JavaScript from accessing session cookie values. But we have found that companies tend to reuse the same domain name for these applications, since the real URL is generally never visible to the end user. This means that if one application has an XSS vulnerability, it can present a risk to any other application hosted on the same domain.
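
The defenses here are well known even if they keep getting skipped. A standard-library-only sketch of the two just discussed: encode user input for the HTML context before reflecting it, and mark session cookies HttpOnly (and Secure) so script can’t read them. A real application would lean on its framework’s templating and cookie APIs instead:

    #!/usr/bin/env python3
    """Output encoding and cookie flags, the two XSS-related basics."""
    import html
    from http import cookies

    def render_greeting(user_supplied_name: str) -> str:
        # Encode for the HTML context before reflecting user input.
        return "<p>Hello, %s!</p>" % html.escape(user_supplied_name)

    def session_cookie(value: str) -> str:
        c = cookies.SimpleCookie()
        c["session"] = value
        c["session"]["httponly"] = True   # keep JavaScript away from it
        c["session"]["secure"] = True     # never send it over plain HTTP
        return c.output()

    print(render_greeting("<script>alert(1)</script>"))
    print(session_cookie("deadbeef"))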

When third-party developers enter the picture, all this becomes even more of a concern, since two clients’ applications may share the same domain and thus each be, in some ways, reliant on the security of the other’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is impossibly difficult, the truth is that it’s relatively simple.  Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies.  About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application under assessment that used SSL only when submitting login information.  On an unprotected Wi-Fi network, as soon as the application sent any information over HTTP we had valid session cookies, which were easily replayed to compromise the victim’s session.

Once again, the impact of this finding really depends on the functionality of the application, but the wide variety of applications on Facebook does provide an interesting and varied landscape for the attacker to choose from.  We only flagged this vulnerability under specific circumstances: where the application cookies were somehow important (for example, being used to identify a logged-in session), or where the application included functionality in which sensitive data (such as PII or credit card data) was transmitted.
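
The fix is equally unglamorous: serve the whole application over TLS, not just the login form, and never let session cookies travel over HTTP (the Secure flag in the earlier sketch covers the cookie half). A framework-agnostic WSGI sketch of the redirect half; the names are illustrative, and it ignores query strings for brevity:

    def enforce_https(app):
        """WSGI middleware: bounce any plain-HTTP request to HTTPS."""
        def wrapper(environ, start_response):
            if environ.get("wsgi.url_scheme") != "https":
                host = environ.get("HTTP_HOST", "example.com")
                location = "https://" + host + environ.get("PATH_INFO", "/")
                start_response("301 Moved Permanently",
                               [("Location", location)])
                return [b""]
            return app(environ, start_response)
        return wrapper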

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it’s generally not considered one of the most common vulnerabilities across web applications as a whole. Nevertheless, 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.).

We found that many of the applications we assessed implemented their file upload functionality insecurely.  Most did not check content-type headers or even file extensions.  Although none of the vulnerabilities discovered led to command injection flaws, almost every one exploited allowed the attacker to upload JavaScript, HTML, or other potentially malicious files such as PDFs and executables.  Depending on the domain name affected, this flaw aids an attacker’s social engineering efforts, since the attacker now has malicious files on a trusted domain.
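
A sketch of the checks whose absence we kept finding: whitelist extensions, ignore the client-supplied name and Content-Type, and store under a server-generated name. The directory, size limit, and allowed types are assumptions, and a hardened implementation would also sniff the file’s magic bytes:

    #!/usr/bin/env python3
    """Minimal server-side validation for a file upload handler."""
    import os
    import secrets

    ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}
    UPLOAD_DIR = "/srv/uploads"          # assumed storage location
    MAX_BYTES = 5 * 1024 * 1024          # assumed size limit

    def save_upload(original_name: str, data: bytes) -> str:
        ext = os.path.splitext(original_name)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            raise ValueError("disallowed file type")
        if len(data) > MAX_BYTES:
            raise ValueError("file too large")
        # Never reuse the client-supplied name; generate our own.
        safe_name = secrets.token_hex(16) + ext
        path = os.path.join(UPLOAD_DIR, safe_name)
        with open(path, "wb") as f:
            f.write(data)
        return safe_name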

Our assessments also identified a wide range of other vulnerability types. For example, we found several of these applications utilizing publicly available admin interfaces with guessable credentials, and at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configurations were also a frequent problem, with unnecessarily exposed services and insecure configurations repeatedly identified.

Finally, we also found that many of these web applications had interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest: if a malicious user can game the contest (for example, by cheating at a social game and posting a fake high score), this could reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social networking sites is increasing, we’ve found that companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present real risk and should be examined as thoroughly as traditional standalone web applications.

The Security Implications of Custom Android ROMs

By Jon Janego

As most smartphone geeks like me are undoubtedly aware, the latest phone in Google’s Nexus line, the Samsung Galaxy Nexus, was released for Verizon Wireless last week.  Mine just arrived last night, and it’s fantastic (although huge!).

The Nexus line of devices is unique among Android phones in that it is essentially a commercial version of the internal development phones used at Google.  As such, the phones are designed to allow easy installation of custom firmware.  This makes them especially popular with the robust Android “modding” community, which develops customized firmware that extends device functionality beyond what the handset manufacturer initially provides.

Made for Modders

The Galaxy Nexus appears to be the biggest Nexus phone launch in Google’s history, initially offered by the largest carrier in the US, Verizon, and the second-largest carrier in the UK, O2.  The popularity of the Galaxy Nexus will likely draw a large number of people into the Android modding community because of the low barrier to entry it presents.  Already, on the popular Android blog Droid Life, an article about unlocking the Galaxy Nexus’ bootloader, which allows for installation of custom firmware, has over 400 comments, and another post on the same blog about “rooting” the phone has well over 300.

The Android customization community is a large and robust one.  However, like many open-source and community-based development projects, the majority of users just want the project to “work” and have little to no interest in viewing the source code or gaining a deep understanding of how it functions.  Despite user bases sometimes numbering in the millions, a relatively small group of developers does most of the creation and distribution of the software.

The problem of software authenticity has been encountered before by the Linux community, and over the last decade that community has developed distribution methods, such as centralized Debian APT repositories, that provide some degree of certainty about what the end user is actually installing.  Additionally, many Linux users still download the source code for projects and compile it locally.  Within the Android modding community, neither of these options has been implemented with the level of maturity that Linux has seen.  The more popular aftermarket firmware images, or “ROMs”, such as CyanogenMod, are distributed by more accountable means, providing MD5 checksums for the files and a clear distribution network.  Often, however, Android customization software is provided through links to anonymous file-sharing sites such as MediaFire and Megaupload.  This creates the opportunity to trick a user into installing malicious files.

A Ripe Target

There has already been at least one documented case of malware targeting custom Android ROMs: a trojan that affected devices in the Chinese market.  With the popularity of the Galaxy Nexus and the continued interest in the Android customization community, this could become an attack vector that is more and more appealing to malware authors.  And because the majority of customized Android software is distributed outside the Android Market, users lack the additional protection the Market provides, namely the “kill switch” for apps installed via the Market that Google has flagged as malicious.

So how can end users protect themselves while still participating in the Android customization community?  The most important thing any technology user can do is become educated about the software they install.  Security researcher Dan Rosenberg wrote an excellent blog post summarizing exactly how “rooting” works, a capability that many users want but few truly understand.  Too often, users are tempted to just install the attachment that someone on a forum or blog says “works”, without question.  Users should also avoid downloading and installing software from sources that are anonymous or unaccountable; instead, download from the software developer’s primary distribution point, and validate the MD5 checksum of the file before installing it.  And often, many of the files shared on forums or anonymous upload sites are those provided directly by Google or the phone manufacturers themselves; instead of downloading from the anonymous links, download these files from Google directly.
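
Checksum validation takes one short script (or one shell command).  A sketch; the file name and expected digest below are placeholders, and note that a matching MD5 proves the download wasn’t corrupted or swapped in transit, not that the developer’s copy was trustworthy:

    #!/usr/bin/env python3
    """Verify a downloaded ROM against its published MD5 before flashing."""
    import hashlib
    import sys

    rom_path = "custom-rom.zip"                        # placeholder name
    expected_md5 = "d41d8cd98f00b204e9800998ecf8427e"  # placeholder digest

    md5 = hashlib.md5()
    with open(rom_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)

    if md5.hexdigest() != expected_md5:
        sys.exit("MD5 mismatch -- do not flash this file")
    print("MD5 matches the published checksum")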

The Android community is already beginning to tackle these problems.  The popular ROM manager ClockworkMod is attempting to become an authoritative source for aftermarket software.  They coordinate with the developers of custom software and allow users to install files directly from the developers’ Git repositories.  However, this still relies on the goodwill and trustworthiness of the developers.  ClockworkMod does not perform any code review of the ROMs it aggregates, and while it may be able to de-list one that has a security vulnerability, there is still no way to automatically remove the software from installed devices.

Looking Forward

In the future, I hope that popular Android blogs such as Droid-Life and forums such as XDA-Developers will begin linking to central, trusted software repositories rather than the anonymous file-sharing sites or forum post attachments that are commonly used today.  In the long term, I would not be surprised to see the Android community implement an even more formalized system, similar to the Debian APT repositories, but there is still some way to go.  Until then, Android users interested in customizing their devices should try to stay educated about their technology and be very skeptical of any software they install.  While the mobile carriers have recently had some credibility issues with the CarrierIQ fiasco, they are for the most part held far more accountable than any custom ROM developer will ever be.

Given the sensitivity of the data stored on mobile devices, a user should think very carefully about what they are willing to install.  As for me, I have already unlocked the bootloader on my Galaxy Nexus, but I will probably refrain from installing any custom third-party ROMs until I repurpose it as a research device, at least another year or two down the road.  The draw of the enhanced features and controls that custom ROMs provide will likely lead many users down that path much sooner.  The integrity of your data ultimately resides with you, so I hope that everyone carefully weighs the decision to install new firmware onto their most sensitive and personal piece of technology.  Be careful out there!

Risk and Understanding All the Variables

One of the things that drives me most insane is when data is presented as information without all of the variables being properly considered. Over dinner with Martin a few weeks ago, I got off on a rant about an example of this. My target that night was the Dow Jones Industrial Average, which we hear about every time we turn on the financial news.

The Dow is a single point of data within the large sea of the economy, and, unfortunately, it presents a relatively skewed picture of the actual state of economic progress. I have been most exasperated by it over the past couple of years, with the 2006-2007 pseudo-bull market that had every financial analyst talking about economic prosperity. (For example, see articles here, here, here, etc.)

Unfortunately, it’s pretty easy (about 2 hours of research and Excel mojo) to show the illusory nature of the “bull market” we have seen in the past 4 years. The US government has pursued an aggressive strategy of currency devaluation since 2002, and that plays heavily into the value of any asset priced in USD (e.g., the Dow). To understand the true state of the Dow, one must take into account all of the variables, in this case the value of the currency in which the market’s price is denominated.

[Chart: the Dow, raw and adjusted against the euro]

Martin would point out that this is a somewhat naive analysis, as I have adjusted for only one currency. A more robust analysis would average the currency impact across world regions, including the Chinese yuan, the Canadian dollar, and the British pound. However, even this simple analysis shows a staggering change: while the DJIA enjoyed a raw increase of approximately 75% between 2003 and 2008, adjusted for changes in currency value against the euro that increase drops to 35%.

This is significant. Assuming you got in at the absolute low (February 2003) and out at the top in each case, your actual annual rate of return on the investment drops from 11.7% when looking only at the DJIA to 6.3% when adjusting for currency.
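
Those annualized figures fall out of a one-line compound-growth calculation; a quick sanity check, assuming a roughly five-year window (exact results shift a little with the dates you pick):

    #!/usr/bin/env python3
    """Convert the total gains above into compound annual growth rates."""
    YEARS = 5.0

    def cagr(total_gain: float) -> float:
        """E.g. 0.75 (a +75% total gain) -> annual rate over YEARS."""
        return (1.0 + total_gain) ** (1.0 / YEARS) - 1.0

    print(f"raw DJIA gain of 75%:      {cagr(0.75):.1%} per year")  # ~11.8%
    print(f"euro-adjusted gain of 35%: {cagr(0.35):.1%} per year")  # ~6.2%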

And I didn’t even factor in goods-and-services inflation or taxes. If I had, you would see that the currency loss here is the difference between making a profit and just barely breaking even on the investment.

Why am I talking about all this random financial stuff on a blog dedicated to risk and security (especially when I’m not an accountant)? Because this IS risk. This is the same kind of calculation that we make every day, and the same sort of mistake that I see risk management professionals make all the time. When calculating risk, we have a tendency to look only at the simple numbers; humans just aren’t good at multivariate analysis, especially in our heads. So we look for simple answers, often resorting to the Tarzan method:

Dow up, good. Dow down, bad.

Unfortunately, it’s never quite that simple. If you don’t take into account all of the variables when considering the risk of your investments (whether financial, information security, or otherwise), you’re likely to significantly misread the potential return on those investments.

Weak Application Security = Non-Compliance

I had to post about this one: our general counsel and compliance specialist Dave Stampley recently wrote an article at InformationWeek about the importance of ensuring application security as part of your regulatory compliance efforts. From the article:

Web-application security vulnerabilities pose a unique compliance risk for companies. Unlike compliance failures that take place in the background–for example, an unencrypted business-to-business transmission of sensitive consumer data–application weaknesses are open to discovery by any skilled Web surfer and even consumers themselves.

“The FTC appears to be taking a strict liability approach to E-commerce security flaws,” says Mary Ellen Callahan, an attorney at Hogan & Hartson in Washington, D.C., who has represented clients facing government privacy compliance investigations. “White-hat hackers and tipsters have prompted a number of enforcement actions by reporting Web-site flaws they discovered.”

Read the full article here.

Whose Risk?

I often get frustrated when we talk about risk, measurement, metrics, and (my new least favorite buzzword) “key performance indicators,” because we as an industry have a tendency to drop the audience from the statement of risk.

That may sound confusing, so I’ll illustrate by example. This is a real sentence that I hear far too often:

Doing that presents too much risk.

Unfortunately, that sentence is linguistically incomplete. The concept of “risk” requires specification of the audience: risk to whom or what? This is a problem similar to the one Lakoff presents in Whose Freedom?: certain concepts require a reference to the audience in order to make sense of them. Leaving the audience unspecified is productive in marketing (or politics), but it creates massive confusion when you’re actually trying to have real, productive discourse.

A recent post at Security Retentive illustrates the kind of confusion that ensues when the audience for risk metrics and measurements isn’t specified. (I have also previously talked (ranted?) about this type of confusion here and here.)

This confusion fundamentally arises from a lack of perspective: risk is always relative to an audience. When that audience goes unstated, each person in the discourse applies the “risk” to their own perspective and comes up with a radically different meaning.

So when we’re talking about risk, and attempting to measure and specify it, we need to always present the data and information relative to a relevant audience. Asking “risk to what/whom?” is an important way of ensuring that we don’t remain mired in the kind of confusion that Security Retentive described.