What Kickstarter Did Right

Only a few details have emerged about the recent breach at Kickstarter, but it appears that this one will be a case study in doing things right both before and after the breach.

What Kickstarter has done right:

  • Timely notification
  • Clear messaging
  • Limited sensitive data retention
  • Proper password handling

Timely notification

The hours and days after a breach is discovered are incredibly hectic, and there will be powerful voices both attempting to delay the public announcement and attempting to rush it. When users’ information may be at risk beyond the immediate breach, organizations should strive to make an announcement as soon as it will do more good than harm. An initial public announcement doesn’t have to have all the answers; it just needs to give users an idea of how they are affected and what they can do about it. While it may be tempting to wait for full details, an organization that shows transparency in the early stages of a developing story will have more credibility as the story goes on.

Clear messaging

Kickstarter explained in clear terms what was and was not affected, and gave straightforward actions for users to follow as a result. The logging and access control groundwork for making these strong, clear statements at the time of a breach needs to be laid far in advance and thoroughly tested. Live penetration testing exercises with detailed post mortems can help companies decide if their systems will be able to capture this critical data.

Limited sensitive data retention

One of the first questions in any breach is “what did they get?”, and data handling policies in place before a breach are going to have a huge impact on the answer. Thinking far in advance about how we would like to be able to answer that question can be a driver for getting those policies in place. Kickstarter reported that they do not store full credit card numbers, a choice that is certainly saving them some headaches right now. Not all businesses have quite that luxury, but thinking in general about how to reduce the retention of sensitive data that’s not actively used can reduce costs in protecting it and chances of exposure over the long term.

Proper password handling (mostly)

Kickstarter appears to have done a pretty good job in handling user passwords, though not perfect. Password reuse across different websites continues to be one of the most significant threats to users, and a breach like this can often lead to ripple effects against users if attackers are able to obtain account passwords.

In order to protect against this, user passwords should always be stored in a hashed form, a representation that allows a server to verify that a correct password has been provided without ever actually storing the plaintext password. Kickstarter reported that their “passwords were uniquely salted and digested with SHA-1 multiple times. More recent passwords are hashed with bcrypt.” When reading breach reports, the level of detail shared by the organization is often telling and these details show that Kickstarter did their homework beforehand.

A strong password hashing scheme must protect against the two main approaches that attackers can use: hash cracking and rainbow tables. The details of these approaches have been well covered elsewhere, so we can focus on what Kickstarter used to make their users’ hashes more resistant to these attacks.

To resist hash cracking, defenders want to massively increase the amount of work an attacker has to do to check each possible password. The problem with hash algorithms like SHA1 and MD5 is that they are too efficient; they were designed to complete in as few CPU cycles as possible. We want the opposite from a password hash function, so that it is reasonable to check a few possible passwords in normal use but computationally ridiculous to try out large numbers of possible passwords during cracking. Kickstarter indicated that they used “multiple” iterations of the SHA1 hash, which multiplies the attacker effort required for each guess (so 5 iterations of hashing means 5 times more effort). Ideally, we like to see a hashing attempt take at least 100 ms, which is a trivial delay during a legitimate login but makes large-scale hash cracking essentially infeasible. Unfortunately, SHA1 is so efficient that it would take more than 100,000 iterations to raise the effort to that level. While Kickstarter probably didn’t get to that level (it’s safe to assume they would have said so if they did), their use of multiple iterations of SHA1 is an improvement over many practices we see.
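As a rough illustration of the idea (not Kickstarter’s actual scheme; the function name, salt size, and iteration count here are all illustrative), iterated hashing with a per-user salt looks something like this:

```python
import hashlib
import hmac
import os

def iterated_sha1(password: bytes, salt: bytes, iterations: int) -> bytes:
    """Hash a salted password repeatedly to multiply attacker effort per guess."""
    digest = salt + password
    for _ in range(iterations):
        digest = hashlib.sha1(digest).digest()
    return digest

# Each user gets a unique random salt, stored alongside the resulting hash.
salt = os.urandom(16)
stored = iterated_sha1(b"correct horse battery staple", salt, 100_000)

def verify(candidate: bytes, salt: bytes, stored: bytes, iterations: int = 100_000) -> bool:
    """Recompute the hash and compare in constant time."""
    return hmac.compare_digest(iterated_sha1(candidate, salt, iterations), stored)
```

Note that because each user’s salt is unique, two users with the same password still end up with different stored hashes, which is exactly what defeats the rainbow-table attacks discussed below.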

To resist rainbow tables, it is important to use a long, random, unique salt for each password. Salting passwords removes the ability of attackers to simply look up hashes in a precomputed rainbow table. Using a random, unique salt on each password also means that an attacker has to perform cracking on each password individually; even if two users have an identical password, it would be impossible to tell from the hashes. There’s no word yet on the length of the salt, but Kickstarter appears to have gotten the random and unique parts right.

Finally, Kickstarter’s move to bcrypt for more recent passwords is particularly encouraging. Bcrypt is a modern key derivation function specifically designed for storing password representations. It builds in strong unique salts and a scalable work factor, so that defenders can easily dial up the amount of computation required to try out a hash as computers get faster. Bcrypt and similar functions such as PBKDF2 and the newer scrypt (which adds memory requirements) are purpose-built to make it easy to get password handling right; they should be the go-to approach for all new development, and a high-priority change for any codebase still using MD5 or SHA1.
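Bcrypt typically requires a third-party library, but Python’s standard library ships PBKDF2, one of the purpose-built functions mentioned above. A minimal sketch (the iteration count is illustrative; tune it so a single hash takes on the order of 100 ms on your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store both the salt and the digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes,
                   iterations: int = 200_000) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Raising `iterations` is the scalable work factor in action: doubling it doubles the attacker’s cost per guess while adding only a fraction of a second to a legitimate login.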

PCI DSS 3.0

The Payment Card Industry Security Standards Council (PCI SSC) has released a draft version of the Payment Card Industry Data Security Standard (PCI DSS) Version 3.0 to Qualified Security Assessor (QSA) companies.  This release was not made available to the general public, as PCI SSC is still reviewing feedback from QSA companies and other participating organizations before the final version is released in November 2013.  The new PCI DSS 3.0 compliance requirements will go into effect for all companies on January 1, 2015, but a few requirements defined in PCI DSS 3.0 will not be required for compliance until July 1, 2015.

Sign: Changes Ahead

PCI SSC has categorized the changes to the standard into three types: clarifications, additional guidance, and evolving requirements.  PCI SSC defines a clarification as a means to make the intent of a particular requirement clear to QSAs and entities that must comply with PCI DSS.  Additional guidance provides an explanation, definition, and/or instructions to increase understanding or provide further information on a particular topic being assessed.  Evolving requirements are changes to ensure that the standards stay up to date with emerging threats and changes in the market; for the most part, they are new requirements.

In total there are 99 proposed changes across the three types, nicely defined in the document “PCI DSS 3.0 Summary of Changes” provided by the council.  The majority of changes, 71 to be exact, are clarifications, followed by 23 evolving requirements and 5 changes that fall under the “additional guidance” category.  This blog post provides a high-level overview of the proposed changes in the draft PCI DSS V3.0 and is not specific to particular requirements.  Neohapsis will be releasing a series of blog posts dedicated to PCI DSS V3.0 that will explore individual requirement changes in greater detail.

So enough setup, here are some key takeaways of high-level changes to PCI DSS Version 3.0.

Scope of PCI DSS Requirements

PCI SSC added some clarification around the responsibilities for defining scope, for both the entity seeking compliance and the QSA performing the validation.  PCI SSC did a great job of clarifying the scoping process by providing examples of system components that should be included as part of the scope of a ROC assessment.  In addition to system components, the scoping guidance section contains requirements for capturing all purchased or custom-developed applications that are being used within the cardholder data environment (CDE).  Compared to PCI DSS Version 2.0, this section of PCI DSS Version 3.0 has been broken out to be more descriptive and to present the details more clearly, helping entities better understand the scoping process.

Use of Third-Party Service Providers / Outsourcing

The use of third-party service providers to support an entity’s cardholder data environment (CDE), and the related PCI DSS validation requirements, have not changed drastically.  All entities that fall under PCI DSS are still required to validate that each third-party service provider is PCI DSS compliant, either by obtaining a copy of the provider’s attestation of compliance (AOC) or by having their QSA assess the provider’s compliance status against the relevant PCI DSS requirements.  However, the draft of PCI DSS Version 3.0 provides examples, such as advising entities seeking PCI compliance to validate the IP addresses for quarterly scans if one or more shared hosting providers are in scope for their CDE.  Furthermore, entities seeking compliance are advised to work with service providers to ensure that contractual language between the two parties clearly states PCI DSS compliance responsibilities down to the requirement level.

Business-As-Usual (BAU)

The biggest change that I see in PCI DSS 3.0 is the emphasis on making PCI DSS controls part of “business-as-usual” (BAU) rather than a point-in-time assessment. To get this message across, the draft version of PCI DSS 3.0 provides several best-practice examples for making PCI a part of BAU. These best practices are not requirements, but the council wants to encourage organizations to adopt them.  The goal of this change in thinking is to get businesses to implement PCI DSS as part of their overall security strategy and daily operations.  PCI SSC is stressing that its compliance standard is not a set-it-and-forget-it exercise.

Some important areas in this new mentality focus on security processes.  For example, entities should be validating on their own that controls are implemented effectively for applicable business processes and related technologies.  Specific examples include antivirus definitions, logging, vulnerability signatures, and ensuring that only appropriate services are enabled on systems.  With this new mentality, entities should take corrective action as soon as compliance gaps are identified, so that PCI DSS compliance is maintained at all times rather than patched up just before the QSA arrives to validate it.  When gaps are identified, implementation of compensating or mitigating controls may be necessary and should not be left open until the start of a QSA assessment.

Lastly, with regard to BAU, the company seeking PCI DSS compliance should continue to have an open dialog with its QSA throughout the year.  Good QSA companies will work with their clients to help ensure the right choices are being made in between ROC assessments.

Until Next Post

This is just one of multiple blog posts about upcoming changes in PCI DSS version 3.0.  Neohapsis will be releasing additional posts on more specific PCI DSS 3.0 requirements in the near future.  If you are required to be PCI DSS compliant, I would recommend talking with your QSA company to start planning and preparing for PCI DSS 3.0.  You are also welcome to reach out to Neohapsis, as we have multiple QSAs who are seasoned professionals and can address your PCI DSS and PA-DSS compliance questions.

Are You Prepared for Certificate Authority Breaches?

By Nate Couper

In the last few years, security breaches of signed SSL certificates, as well as of a number of certificate authorities (CAs) themselves, have illustrated gaps in the foundations of online security.

  • DigiNotar
  • Comodo
  • VeriSign
  • others

It is no longer safe to assume that CAs, large or small, have sufficient stake in their reputation to invest in security that is 100% effective.  In other words, it’s time to start assuming that CAs can and will be breached again.

Fortunately for the white hats out there, NIST has just released a bulletin on responding to CA breaches.  Find it on NIST’s website at http://csrc.nist.gov/publications/nistbul/july-2012_itl-bulletin.pdf.

The NIST document has great recommendations for responding to CA breaches, including:

  • Document what certificates and CAs your organization uses.
  • Document logistics and information required to respond to CA compromises.
  • Review and understand CAs in active use in your organization.
  • Understand “trust anchors” in your organization.
  • Develop policies for application development and procurement, and implement them.
  • Understand and react appropriately to CA breaches.

Let’s dive into these:

1. Document the certificates and CAs that your organization uses

Any compliance wonk will tell you that inventory is your first and best control.  Does your organization have an inventory?

Let’s count certificates.  There’s http://www.example.com, www2.example.com, admin.example.com, backend.example.com, and there’s mail.example.com.  There may also be vpn.example.com, ftps.example.com, and ssh.example.com.  These are the obvious ones.

Practically every embedded device, from the cheapest Wi-Fi router to the lights-out management interface on your big iron systems, comes with an SSL interface these days.  Count each of those.  Every router, switch, firewall, every blade server enclosure, every SAN array.  Take a closer look at your desktops.  Windows has a certificate database, Firefox carries its own, Java has its own, and multiple instances of Java on a single system can have multiple CA databases.  Now your servers: every major OS ships with SSL capabilities — Windows, Linux (OpenSSL), Unix.  Look at your applications: chances are every piece of J2EE and .NET middleware has a CA database associated with it.  Every application your organization bought or wrote that uses SSL probably has a CA database.  Every database, every load balancer, every IDS/IPS.  Every temperature sensor, scanner, printer, and badging system that supports SSL probably has a list of CAs somewhere.

All your mobile devices.  All your cloud providers and all the services they backend to.

If your organization is like most, you probably have an Excel spreadsheet with a list of AD servers, or maybe you query a domain controller when you need a list of systems.  Forget about software and component inventory.  Don’t even think about printers, switches, or cameras.

If you’re lucky enough to have a configuration management database (CMDB), what is its scope?  When was the last time you checked it for accuracy?  In-scope accuracy rates of 75% are “good,” if some of my clients are any measure.  And CMDB scope rarely extends beyond production servers.

Each one of these devices may have several SSL certificates, and may trust hundreds of CAs for no reason other than that it shipped that way.

Using my laptop as an example, I’ve got several hundred “trusted” CAs loaded by default into Java, Firefox, IE and OpenSSL.  Multiply by five or so to account for the virtual machines I frequent.  Of those thousands of CAs, my system probably uses a dozen or so per day.
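On systems with an OpenSSL-backed Python, you can get a quick count of one of those CA databases yourself. A small sketch (the output varies by platform and which trust store Python picks up):

```python
import ssl

# Load the platform's default CA bundle, exactly as a TLS client would.
ctx = ssl.create_default_context()
cas = ctx.get_ca_certs()  # one dict per loaded CA certificate

print(f"{len(cas)} trusted CAs loaded by default")

# Peek at a few subject names to see who you are implicitly trusting.
for cert in cas[:5]:
    # 'subject' is a tuple of relative distinguished names (RDNs)
    subject = {k: v for rdn in cert.get("subject", ()) for k, v in rdn}
    print(subject.get("organizationName", subject.get("commonName", "?")))
```

Run the same exercise against a Java keystore or a browser’s certificate manager and the numbers add up quickly.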

2. Document logistics and information required to respond to CA breaches

How exactly do you manage the list of trusted CAs on your iPad anyway?  Your load balancer?  Who is responsible for these devices, and who depends on them? If you found out that Thawte was compromised tomorrow, would you be able to marshal all the people who manage these systems in less than a day?  In a week?

What would it take to replace certificates, to tweak the list of CA’s across the enterprise?  It will definitely take longer if you’re trying to figure it out as you go.

3. Review and understand CAs in active use in your organization

Of all the hundreds of CAs on my laptop, I actually use no more than a dozen or so each day.  In fact, it would be noteworthy if more than a handful got used at all.  I could disable hundreds of them and never notice.  After all, I don’t spend a lot of time on Romanian or Singaporean sites, and CAs from those regions probably don’t see a lot of foreign use.

Most organizations are savvy enough to source their certificates from at most a handful of trusted CAs.  A server might only need one trusted CA.  Ask your network and application administrators: which CAs do we trust, and which do we need to trust?  It might make sense to preemptively strike some or all of the CAs you’re not actually using, if only in the name of reducing attack surface.
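As a starting point for that conversation, you can check which CA actually issued the certificate a given server presents, using only the standard library. A sketch (the host name is illustrative):

```python
import socket
import ssl

def flatten_name(rdns) -> dict:
    """Collapse getpeercert()'s tuple-of-RDNs format into a flat dict."""
    return {k: v for rdn in rdns for k, v in rdn}

def issuer_of(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Return the issuer (CA) name fields of the certificate `host` presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return flatten_name(cert["issuer"])

# Example (requires network access):
# print(issuer_of("www.example.com").get("organizationName"))
```

Loop that over your server inventory and you have the beginnings of the “which CAs do we actually need?” answer.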

4. Understand “trust anchors” within your organization.

Trust anchors are the major agents in a PKI: the CAs.  Trust anchors provide rules and services to govern the roles of others, such as the intermediates, the registrars, and the users of certificates.  Go back through your inventory (you made one of those, right?) and document the configuration.  What do the trust anchors allow and disallow with your certificates?  Will revoked certificates be handled correctly?  How is that configured?

Does your organization deploy internal CAs?  Which parts of the organization control the internal CAs, and what other parts of the business depend on them?  What internal SLAs/SLOs are afforded?  What metrics measure them?

5. Develop policies for application development and procurement.

How many RSA SecurID customers really understood that RSA was holding on to secret information that could contribute to attacks against RSA’s own customers?  Did your organization ask RIM whether the trusted CAs on your BlackBerrys could be replaced?  Do you use external CAs for purely internal applications, knowing full well the potential implications of an external breach?

Does your purchase and service contract language oblige your vendor even to tell you if they have a breach, or will you have to wait until it turns up on CNN?  Do they make claims about their security, and are those claims verifiable?  Do they coast on vague marketing language, or ride on the coattails of once-hip internet celebrities and gobbled-up startups?

6. Understand CA breaches and react appropriately.

Does your incident response program understand CA breaches?  Can you mobilize your organization to do what it needs to when the time comes, and within operational parameters?

CA breaches have happened before and will happen again.  NIST has again delivered a world-class roadmap for achieving enterprise security objectives.  Is your organization equipped?

Anonymous Tactics (from the attacks reported on by Imperva)

by J. Schumacher

Security professionals have been following the collective of Internet users calling themselves Anonymous for a few years now, trying to understand the tactics behind the cyber mayhem it causes.  Two well-written publications in recent weeks caught my eye: The New York Times’ “In Attack on Vatican Web Site, a Glimpse of Hackers’ Tactics” and Imperva’s “Hacker Intelligence Summary Report, The Anatomy of an Anonymous Attack.”  These articles shed light on how Anonymous takes a call to arms, recruits members, and searches for action.  After reading them, I kept thinking about the current state of the Internet and wondering about the future of Anonymous and the cyber pandemonium it creates.

Taking the Imperva report as factual, Anonymous has an approximate 10:1 ratio of laypeople to skilled hackers, which I believe limits the sophistication of its attacks. I say “collective” because targets are not often handed down from above; they must be approved or agreed upon by the masses before an attack is launched.  One very interesting note in Imperva’s report was that the attacks Imperva monitored in 2011 did not rely on bots, malware, or phishing, but on end users actively running tools or visiting special web sites to aid in the attack.  There was a high level of public recruitment through the social media channels of Twitter and Facebook, which can also tip off the victim before the attack properly hits.

The New York Times article mentions that the attack on the Vatican took 18 days to recruit enough participants, and automated scanning tools were used for reconnaissance on the Vatican’s virtual front during this time.  In this attack Anonymous was seeking to interrupt International Youth Day by a certain date, but when that failed, Anonymous changed tactics to widespread distribution of software for Distributed Denial of Service (DDoS) attacks so it could hit the Vatican with a thousand-person attack.  There were mixed statements from Anonymous and Imperva (which was a contractor for Internet security monitoring) regarding whether any sites across the globe were truly taken offline for any amount of time.

I think Rob Rachwald, Imperva’s director of security, was best quoted by The New York Times as saying, “Who is Anonymous? Anyone can use the Anonymous umbrella to hack anyone at any time.”  However, I believe Anonymous has reached its collective peak and will never be the same as in its early 4chan days, or even 2008.  By no means has the world heard the last of Anonymous, though: people will be claiming affiliation with the collective “group” for a very long time to come, and I believe it will continue to evolve.  How this change takes place is going to be exciting to see, as Anonymous claims an “ideas without leaders” mentality and relies on the general public for consensus on missions.

Recently, an interesting report from Symantec came out about how Anonymous affiliates were tricked into installing the Zeus Trojan by a Pastebin tutorial covering how to install and use one of the attack tools, the Low Orbit Ion Cannon (LOIC), to support DDoS attacks.  Established Twitter handles for Anonymous contributors (YourAnonNews, AnonymousIRC, AnonOps) have tweeted that this was not done by Anonymous. But with no leadership accountable (due to the collective nature of Anonymous), there is nothing to say whether that denial is true, whether another entity is sabotaging Anonymous’ public fanfare, or whether someone was simply taking advantage of free publicity to trick users into installing malware.  Since what many call the start of Anonymous in 2008 (the Scientology attacks), there have not been any other large-scale compromises of those supporting attacks through infected tools, but this new activity could hurt the future of Anonymous recruitment and public support.

Assuming this recent instance of infected tools was a fluke, I see the future of Anonymous evolving: the skilled hackers will hone their talents in a Wild West collaborative, while the true base of Anonymous remains largely unskilled.  The skilled will, at times, directly and indirectly work for entities (such as large-scale crime syndicates as well as private entities) that lure them with big pay for work that will never be reported in any newspaper.  The skilled hackers will still participate in Anonymous causes, and they will enable other Anonymous members (by writing attack tools, scripts, or apps), while keeping knowledge of their well-paid exploits limited to smaller private offshoot groups.  These offshoots will put dedication into advanced exploits that require some financial backing to set up (such as servers for social engineering, injection data repositories, proxies, and bots), but these exploits will most likely never be communicated to the larger Anonymous collective or used for the social causes of the masses; they will serve private gains instead.

At the same time, the unskilled hackers who make up the majority of the group are essential to Anonymous at large: they bring attention and support to causes, identify weaknesses in networks, perform DDoS attacks, and act as an overall distraction and a crowd to hide in. It seems bots will be unnecessary, replaced by humans wherever that is simpler.  A large army that is not connected (outside of the odd one-off message to a public forum or social media) provides a large pool that the authorities must sift through to find the dedicated Anon.  The collective group of Anonymous has shown support for many social causes, like the Occupy movement and the free-speech outcries over proposed Internet legislation.  At the same time, Anonymous seems to have very publicly promoted every hack and breach reported since 2010, whether the data exposed belonged to government, private industry, or public citizens.

I like to think of myself as a practical, but at times wishful, person.  As I see it, the core ideology of the Anonymous movement is not going away, as its cause is not so much new as is the platform for its disobedience.  There are some basic controls that organizations can implement to protect themselves from a virtual protest, whether the risk is from DDoS attacks or exploits of unpatched public devices.  In the near term, I do not see a high probability of Anonymous becoming a super group of hackers that performs sophisticated attacks on the order of Stuxnet, nor do I see the possibility of a large-scale takedown of critical infrastructure.  There will always be risks, and sometimes credible threats, to critical infrastructure through technology, but these can be largely mitigated through proper assessment and mitigating controls.

Side note –

If this recent instance of infected tools is repeated in other causes, then I believe we have seen the end of wide support for Anonymous.  Distrust has always been a concern for involved members, with very recent arrests across the globe for LulzSec. Anonymous will need to do internal damage control to prevent the collapse of the collective group and a public distrust of causes brought up by the Anons.  Even if the hacking group Anonymous goes in a different direction, the damage has been done, and Internet society can never psychologically reverse the damage of the last five years.

As I write this post, news is coming out that a prominent member of Anonymous, Sabu, along with five others, has been arrested by the FBI.  We will have more details once the dust settles a bit and all news sources can be processed; stay tuned.

Set and Don’t Forget

By Patrick Harbauer, Neohapsis Senior Security Consultant and PCI Technical Lead

There are several PCI DSS requirements related to tasks that must be performed on a regular basis. The frequency of these tasks varies from daily to annual. There are also a few requirements that make it important to have PCI DSS compliant data retention policies and procedures in place. An example of a requirement that calls for a task to be performed periodically is requirement 11.2.2: perform quarterly external vulnerability scans via an Approved Scanning Vendor (ASV). An example of a requirement that calls for compliant data retention policies and procedures is requirement 9.4: use a visitor log to maintain a physical audit trail of visitor activity, and retain this log for a minimum of three months, unless otherwise restricted by law. If processes or checklists are not in place to track your compliance with these recurring tasks, you may be in for an unpleasant surprise during your next annual ROC assessment.

Are You Certifiable?

11.2.2 is one of the classic requirements where we see this happen all too often. When we ask a customer if we can review the certified, passing ASV scans from the last four quarters and we get a response such as, “Oops, Susie was responsible for that and she was reassigned to a different department…” we stick our fingers in our ears and say “la la la la,” but that has never made the problem go away. Unfortunately, when this happens, instead of a 10-minute conversation reviewing four certified, passing ASV scans, we have to buy a few pizzas, cross our fingers, and review several external vulnerability scan reports in hopes that the customer can demonstrate they are scanning and remediating to meet the spirit and intent of requirement 11.2.2.

A Ruleset Only a Mother Could Love

We have seen some very ugly firewall rule sets. We do understand that the business must be able to function and exists to make as large a profit as possible, not to sing the praises of PCI. But as QSAs, we do need to see six-month firewall and router rule set reviews and evidence that the rule sets are being maintained with good hygiene. Maintaining clean and healthy firewall rule sets is similar to a good exercise regimen: if your doctor gives you a daily exercise program to maintain your health and you follow it in a haphazard fashion, your doctor is not going to be able to give you a good health report at your next visit. Similarly, you need a solid program in place to make sure that your firewall rule sets remain healthy and only allow the outbound and inbound network traffic that is actually needed and authorized. And let’s face it, automation is needed for most organizations to manage their firewall and router rule sets effectively. Fortunately, there are several excellent solutions available on the market that give you the ability to manage your firewall and router rule sets. For example, these solutions can analyze your rule sets to find overlapping and redundant rules, rules that have not been used over the last X days, or rules that allow “any” access – a big PCI no-no. They can also provide the change control mechanisms needed to make sure that changes to firewall rule sets are reviewed and approved by authorized individuals and are properly documented, so that rule sets are closely and properly managed.
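The kinds of checks those commercial tools run can be approximated in a few lines. A toy sketch (the rule representation is hypothetical, not any vendor’s format) that flags “any” rules and exact duplicates:

```python
from collections import Counter
from typing import NamedTuple

class Rule(NamedTuple):
    src: str
    dst: str
    port: str
    action: str

def any_rules(rules: list) -> list:
    """Flag allow rules with 'any' source, destination, or port -- a PCI no-no."""
    return [r for r in rules if r.action == "allow" and "any" in (r.src, r.dst, r.port)]

def duplicate_rules(rules: list) -> list:
    """Flag exact duplicates, a common sign of rule set rot."""
    counts = Counter(rules)
    return [r for r, n in counts.items() if n > 1]

ruleset = [
    Rule("10.0.0.5", "10.0.1.10", "443", "allow"),
    Rule("any", "10.0.1.10", "any", "allow"),       # too broad
    Rule("10.0.0.5", "10.0.1.10", "443", "allow"),  # duplicate
]
```

Real products go much further (shadowed rules, hit counts, change workflow), but even this level of automation beats eyeballing a thousand-line rule set twice a year.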

“The Matrix”

To assist you with making sure that your security program is giving proper attention to specific PCI requirements, we are providing the following two lists. These can be used to create a matrix, review your security operations and to correct any gaps that you may uncover. List 1 covers the frequency with which tasks must be performed related to specific PCI DSS requirements. List 2 shows data retention periods tied to specific requirements. With a little planning, you can keep your PCI compliance on track at all times and avoid unpleasant surprises when your friendly QSA shows up for your next ROC assessment!

List 1 – Recurring PCI Compliance Tasks

1.1.6 – Review firewall and router rule sets (Every 6 Months)

3.1.1 – Automatic or manual process for identifying and securely deleting stored cardholder data (Quarterly)

6.1 – All system components and software are protected from known vulnerabilities (Monthly)

6.6 – Address new threats and vulnerabilities for public-facing web applications (At least annually and after any changes)

8.5.5 – Remove/disable inactive user accounts (Quarterly)

9.5 – Review security of backup media storage location (Annually)

9.9.1 – Properly maintain inventory logs of all media and conduct media inventories (Annually)

10.6 – Review logs for all system components (Daily)

11.1 – Test for the presence of wireless access points and detect unauthorized wireless access points (Quarterly)

11.2.1 – Perform internal vulnerability scans (Quarterly)

11.2.2 – Perform external vulnerability scans via an Approved Scanning Vendor (Quarterly)

11.2.3 – Perform internal and external scans (After any significant change)

11.3 – Perform external and internal penetration testing (At least once a year and after any significant infrastructure or application upgrade or modification)

11.5 – Deploy file-integrity monitoring tools and perform critical file comparisons (Weekly)

12.1.2 – Perform and document a formal risk assessment (Annually)

12.1.3 – Review security policy and update when the environment changes (Annually)

12.2 – Develop daily operational security procedures (Daily)

12.6.1 – Educate personnel (Upon hire and at least annually)

12.6.2 – Require personnel to acknowledge that they have read and understand the security policy and procedures (Annually)

12.8.4 – Maintain a program to monitor service providers’ PCI DSS compliance status (Annually)

List 2 – Data Retention Periods

9.1.1 – Store video camera and/or controls mechanism log (3 months)

9.4 – Retain visitor logs (3 months)

10.7 – Retain audit trail history (1 year)
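The matrix built from these two lists lends itself to simple automation. The sketch below is one hypothetical way to track recurring tasks against their required frequencies; the task names, frequencies in days, and dates are illustrative, and you would populate them from your own compliance calendar:

```python
from datetime import date, timedelta

# Illustrative frequencies (in days) drawn from List 1.
TASK_FREQUENCY_DAYS = {
    "1.1.6 firewall rule set review": 182,       # every 6 months
    "11.2.1 internal vulnerability scan": 91,    # quarterly
    "10.6 log review": 1,                        # daily
    "12.1.2 risk assessment": 365,               # annually
}

def next_due(last_done, freq_days):
    """Compute the next due date for a task from when it was last completed."""
    return last_done + timedelta(days=freq_days)

def overdue_tasks(last_done_dates, today):
    """Return the tasks whose next due date has already passed."""
    return sorted(
        task for task, last in last_done_dates.items()
        if next_due(last, TASK_FREQUENCY_DAYS[task]) < today
    )

# Hypothetical completion dates for each task.
last_done = {
    "1.1.6 firewall rule set review": date(2023, 10, 1),
    "11.2.1 internal vulnerability scan": date(2024, 5, 1),
    "10.6 log review": date(2024, 6, 14),
    "12.1.2 risk assessment": date(2023, 9, 1),
}
print(overdue_tasks(last_done, today=date(2024, 6, 15)))
```

Even a lightweight tracker like this surfaces the overdue six-month firewall review well before your QSA does.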

Who owns and regulates MY Facebook data?

My previous post briefly described what makes up a user’s Facebook data, and this post will try to shed light on who owns and regulates that data.

I am probably not going out on a limb here to say that the majority of Facebook’s registered users have not read the privacy statement. I was like the majority of users myself, in that I did not fully read Facebook’s privacy statement upon signing up for the service. Facebook created a social media network online at a time when there were few rules defined for this type of business in America or anywhere else. A lack of rules, combined with users constantly uploading more data, has allowed Facebook to maximize the use of your data and grow into a behemoth of a social media networking business.

Over time, Facebook has added features to allow users to self-regulate their data by limiting others (whether Facebook users or the general Internet public) from viewing certain data that one might want to share with only family or specific friends. This gave users a sense of ownership and privacy, as the creator of the data could block or restrict friends and search providers from viewing it. Zuckerberg is even quoted by the WSJ as saying, “The power here is that people have information they don’t want to share with everyone. If you give people very tight control over what information they are sharing or who they are sharing with they will actually share more. One example is that one third of our users share their cell phone number on the site”.

In addition to privacy controls, Facebook gave users more insight into their data through a feature that allowed a user to download ‘all’ their data through a button in the account settings. I placed ‘all’ in quotes because, while you could download your Facebook profile data, the download did not include wall comments, links, information tagged by other Facebook users, or any other data that you created during your Facebook experience. Combined, privacy controls and data export have been the main forms of control that Facebook has given its users over profile, picture, note, link, tag and comment data since Facebook went live in 2004.

So now you might be thinking the problem is solved: restricting your privacy settings and downloading ‘all’ your information fixes everything. Well, I wish that were the case with Facebook’s business operations. An open letter from ten security professionals to the US Congress highlighted that this is not simply how Facebook and third-party Facebook developers operate. Facebook has reserved the right to change its privacy statement at any time with no notice to the user, and has done so a few times, to an uproar from its user base. As Facebook has grown in popularity and company footprint, security professionals along with media outlets have started publishing security studies painting Facebook in a darker light.

As highlighted by the US Congress in December 2011, Facebook was not respecting users’ privacy when sharing information with advertisers or when automatically enabling contradictory privacy settings on new services. Facebook settled with the US Congress on seven charges of deceiving users by telling them they could keep their data private. From my perspective, it appears that Facebook is willing to contradict its users’ privacy to suit the interests of shareholders and business revenue.

In another privacy mishap, Facebook was found by an Austrian student to be storing user details even after a user deactivated the service. This started an EU-versus-Facebook initiative over the Internet that put heat on Facebook to give more details on how long data is retained for current and deactivated users. Holding on to user data is lucrative for Facebook, as it allows them to claim more users when selling to advertising subscribers as well as when promoting the total user base for private investors’ bottom lines.

So the next question one might ask is “who regulates my data held by social media companies?” Summed up quickly: today, no one outside Facebook is regulating your data, and little insight is given to users into this process. The governments of the US, along with the European Union, are looking at means of regulating Facebook’s operations using things such as data privacy regulations and the US/EU Safe Harbor framework. With Facebook announcing an initial public offering of five billion USD, there are sure to be more regulations, at least financial ones, hitting Facebook in the future.

As an outcome of the December 2011 investigation by the United States Congress, Facebook has agreed to independent audits by third parties, presumably of their choosing. I have not been able to identify details regarding the scope of these audits or the ramifications of their findings. Facebook has also updated its public statement and communication to developers, and now states that deactivated users will have their accounts deleted after 30 days. I have yet to see a change in Facebook’s operations with respect to honoring users’ privacy settings where third parties and other outside entities are concerned – in fairness, Facebook insists data is not directly shared for advertising, although some British folks may disagree with Facebook’s claims of advertising privacy.

From an information security perspective, my ‘free’ advice to businesses, developers and end users: do not access or give more data than necessary for your user experience, as this only brings trouble in the long run. While I would like to give Facebook the benefit of the doubt in their operations, I personally only give data that I am comfortable sharing with the world, even though it is limited to friends. In global business, data privacy regulations vary significantly between countries; with regulations come requirements, and failing requirements results in fines, so businesses need to think about accessing only appropriate information and restricting access accordingly. For the end user – or Facebook’s product – remember that Facebook can change its privacy statement at its leisure, and Facebook is ultimately a business with stakeholders that are eager to see quarter after quarter growth.

I hope this post has been insightful to you; please check back soon for my future post on how your Facebook data is being used and the different entities that want to access your data.

Groundhog Day in the Application Security World

By Kate Pearce, a Security Consultant and Researcher at Neohapsis

Throughout the US on Groundhog Day, an inordinate amount of media attention will be given to small furry creatures and whether or not they emerge into bright sunlight or cloudy skies. In a tradition that may seem rather topsy-turvy to those not familiar with it, the story says that if the groundhog sees his shadow (indicating the sun is shining), he returns to his hole to sleep for six more weeks and avoid the winter weather that is to come.

Similarly, when a company comes into the world of security and begins to endure the glare of security testing, the shadow of what they find can be enough to send them back into hiding. However, with the right preparation and mindset, businesses can not only withstand the sight of insecurity, they can begin to make meaningful and incremental improvements to ensure that the next time they face the sun the shadow is far less intimidating.

Hundreds or thousands of issues – Why?

It is not uncommon for a Neohapsis consultant to find hundreds of potential issues to sort through when assessing a legacy application or website for the first time. This can be due to a number of reasons, but the most prominent are:

  1. Security tools that are paranoid/badly tuned/misunderstood
  2. Lack of developer security awareness
  3. Threats and technologies have evolved since the application was designed/deployed/developed

Security Tools that are Paranoid/Badly Tuned/Misunderstood

Security testing and auditing tools, by their nature, have to be flexible and able to work in most environments and at various levels of paranoia. Because of this, if they are not configured and interpreted with the specifics of your application in mind, they will often find a large number of issues, of which the majority are noise that should be ignored until the more important issues are fixed. If you have a serious, unauthenticated SQL injection that exposes plain-text credit card and payment details, you probably shouldn’t spend a moment’s thought stressing about whether your website allows 4 or 5 failed logins before locking an account.

Lack of Developer Security Awareness

Developers are human (at least in my experience!), and have all the usual foibles of humanity. They are affected by business pressures to release first and fix bugs later, with the result that security bugs may be de-prioritized as “no-one will find that”, and so “later” never comes. Developers are also often taught about security as an addition rather than a core concept. For instance, when I was learning programming, I was first taught to construct SQL strings and verbatim webpage output, and only much later to use parameterized queries and HTML encoding. As a result, even though I know better, I sometimes find myself falling into bad practices that could introduce SQL injection or cross-site scripting, as the practices that introduce these threats come more naturally to me than the secure equivalents.
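The two habits can be put side by side in a few lines. This sketch uses Python with an in-memory SQLite database for illustration (the table, data, and attacker input are hypothetical); the same contrast applies in any language:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# A classic injection payload supplied as "user input".
user_input = "alice' OR '1'='1"

# The habit taught first: string concatenation lets the input rewrite the query.
insecure = "SELECT email FROM users WHERE name = '" + user_input + "'"
print(conn.execute(insecure).fetchall())   # the OR clause matches every row

# The secure habit: a parameterized query treats the input as data, not SQL.
secure = "SELECT email FROM users WHERE name = ?"
print(conn.execute(secure, (user_input,)).fetchall())  # no user by that name
```

The concatenated version returns data it never should have; the parameterized version simply finds no matching user.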

Threats and Technologies have Evolved Since the Application was Designed/Deployed/Developed

To make it even harder to manage security, many legacy applications were developed in old technologies which are either unaware of security issues, have no way of dealing with them, or both. And threats keep appearing: while SQL injection has been known about for around 15 years, and cross-site scripting a little less than that, some threats are far more recent, such as clickjacking and CSS history stealing.

When an application was developed without awareness of a threat, it is often more vulnerable to it, and when it was built on a technology that was less mature in approaching the threat, remediating the issues can be far more difficult. For instance, try remediating SQL injection in a legacy ASP application by changing queries from string concatenation to parameterized queries (ADODB objects aren’t exactly elegant to use!).

Dealing with issues

Once you have found issues, then comes the daunting task of prioritizing, managing, and preventing their reoccurrence. This is the part that can bring the shock, and the part that can require the most care, as this is a task in managing complexity.

The response to issues requires not only looking at what you have found previously, but also what you have to do, and where you want to go. Breaking this down:

  1. Understand the Past – Deal with existing issues
  2. Manage the Present – Remedy old issues, prevent introduction of new issues where possible
  3. Prepare for the Future – Expect new threats to arise

Understand the Past – Deal with Existing Issues

When dealing with security reports, it is important to always be psychologically and organizationally prepared for what you find. As already discussed, this is often unpleasant and the first reactions can lead to dangerous behaviors such as overreaction (“fire the person responsible”) or disillusionment (“we couldn’t possibly fix all that!”). The initial results may be frightening, but flight is not an option, so you need to fight.

To understand what you have in front of you, and to react appropriately, it is imperative that the person interpreting the results understands the tools used to develop the application; the threats surrounding the application; and the security tool and its results. If your organization is not confident in this ability, consider getting outside help or consultants (such as Neohapsis) in to explain the background and context of your findings.

Manage the Present – Remedy Old Issues, Prevent Introduction of New Issues Where Possible

Much like any software bug or defect, once you have an idea of what your overall results mean you should start making sense of them. This can be greatly aided through the use of a system (such as Neohapsis Security Manager) which can take vulnerability data from a large number of sources and track issues across time in a similar way to a bug tracker.

Issues found should then be dealt with in order of the threat they present to your application and organization. We have often observed a tendency to go for the vulnerabilities labeled as “critical” by a tool, irrespective of their meaning in the context of your business and application. A SQL injection bug in your administration interface that is only accessible by trusted users is probably a lot less serious than a logic flaw that allows users to order items and modify the price communicated and charged to zero.
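One way to avoid triaging on a tool’s raw label is to weight it by business context. The scoring model below is a hypothetical sketch (the weights, categories, and findings are invented for illustration); the point is that the medium-severity logic flaw outranks the “critical” admin-only bug once exposure and impact are factored in:

```python
# Hypothetical weights: tool severity adjusted by who can reach the flaw
# and how much business damage exploiting it would cause.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}
EXPOSURE = {"internet": 3, "authenticated": 2, "trusted-admins": 1}

def contextual_risk(finding):
    """Combine tool severity, reachability, and business impact into one score."""
    return (SEVERITY[finding["severity"]]
            * EXPOSURE[finding["exposure"]]
            * finding["impact"])

findings = [
    {"title": "SQL injection in admin console", "severity": "critical",
     "exposure": "trusted-admins", "impact": 2},
    {"title": "Price tampering in checkout", "severity": "medium",
     "exposure": "internet", "impact": 5},
]
for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f["title"], contextual_risk(f))
```

Any real program would tune these weights to its own threat model; the habit of multiplying in context, rather than sorting by the tool’s label, is what matters.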

Also, if required, your organization should rapidly institute training and awareness programs so that no more avoidable issues are introduced. This can be aided by integrating security testing into your QA and pre-production testing.

Prepare for the Future – Expect New Threats to Arise

Nevertheless, even if you do everything right, and even if your developers do not introduce any avoidable vulnerabilities, new issues will probably be found as the threats evolve. To detect these, you need to regularly have security tests performed (both human and automated), keep up with the security state of the technologies in use, and have plans in place to deal with any new issues that are found.

Closing

It is not unusual to find a frightening degree of insecurity when you first bring your applications into the world of security testing, but diving back to hide is not prudent. Utilizing the right experience and tools can turn being afraid of your own shadow into being prepared for the changes to come. After all, if the cloud isn’t on the horizon for your company then you are probably already immersed in it.

Articulating the Value of Security (or, Security is not the point)

It’s an uphill battle to convince the decision-makers in any business that they need to invest in security. Why? Because deep down, most people think security is an annoying layer of cost and inconvenience.  If you walk in and tell them, “We need more security,” they hear, “We need a more annoying layer of cost and inconvenience.”

Getting Buy-In

Getting executive buy-in for security products and services today means understanding what drives your company’s security purchase decisions. Fear, uncertainty and doubt are not the cleverest tools to use anymore. Businesses want something that sometimes seems like a foreign concept to the security profession: value.  If you don’t adapt and start answering the questions your business is really interested in, you’ll never get the green light on new projects and upgrades.  Remember, nobody wants security; they want the benefits of security. Your family members don’t want the finest deadbolt on the front door because of the excellence of its engineering or its impact resistance. They want a comfortable, happy place to live.

Achieve Objectives

Businesses also want something other than security. If a bank manager has a mandate to reduce expenses related to bank tellers, she has a couple of options. She could fire all the tellers and lock up all the bank branches, but then the bank would have no interface with its customers. Or she could take all the money, put it in piles on the street corner under a clipboard that says, “Take what you want, but write it down so we may balance your account.” That wouldn’t work either.  The best solution for reducing teller expenses is to take the money, put in on the street corner locked in a box with a computer attached, and give customers a low-cost plastic card for authentication and auditing.  Security was never the point of creating the automated teller machine. The bank had a business objective and achieved it by using some security.

A Tool in Your Toolbox

That is precisely how we all should think of security: as a way of helping companies achieve the goals or value they seek.  Business managers, especially executives at the highest levels of an organization, have a very simple, indirect view of security. They don’t think of it as security, exactly. They think of it as a tool in the corporate toolbox for enabling business. For example, the manager responsible for a critical business application wants a few things: He wants to know who is using his website; he wants to ensure that everyone can do everything on that site they need to do; he has a lot of users doing a lot of things, so he needs an easy way to manage it; and at the end of the day or the end of the quarter, he needs a report telling him what has happened so that he can improve customer satisfaction, reduce errors and increase profits.  In that example we have all four fundamental categories of security—authentication, authorization, administration and audit—but the manager doesn’t think of security once! That’s because security is not the point.

Focus on Value 

Whenever possible, security professionals should purge the word “security” from their vocabulary. Instead, answer the questions inside your boss’s or your customer’s head, and don’t simply spout the ways security keeps bad things from happening.  Upper management thinks in terms of money, not security. What people will be needed? What headcount can we reduce? How much will it cost? How much will we save? What new revenue can we earn as a result of this investment? And they think not in terms of security risks, but in terms of credit risk, market risks and operational risks. That’s where security professionals can shine.  For any business problem, you should be prepared to help your management identify the ways that the authentication, authorization, administration or audit solutions you’re proposing will solve their problem or help customers.  Remember, it is not our job to secure the network. It’s our job to secure the business.

- Steve Hunt

Service Provider Scoping Angst

By Patrick Harbauer

Over the past several months we have had many service providers come to us wringing their hands, wondering if they should go through the ROC process. They may offer cloud services or a managed security service, for example, and they keep running up against QSAs peppering them with questions when the QSA is assessing one of the service provider’s customers. Or the sales cycle repeatedly hits roadblocks when a potential customer’s InfoSec department raises a red flag and wants to know what security controls the service provider has in place to protect their customers from a security breach.

I equate this dilemma that service providers face to going to the hardware store to buy an extension cord. If the hardware store carries extension cords from four manufacturers, three of the manufacturers have “UL Listed” on their packaging and one has “Not UL Listed” on their packaging, would anyone buy the “Not UL Listed” extension cord? Absolutely not!

So, if the service provider decides to engage us to pursue PCI compliance, they begin to wring their hands again when trying to determine what is in scope. Client employees, and sometimes even the individuals who hired us, ask “Why are we going through this again? We don’t process, store or transmit payment card information?!?!?!??”

Again, I turn to the UL analogy. The extension cord manufacturer may wonder whether they should certify the male end of the extension cord, the female end or just the wire. They could spin their wheels for a long time trying to determine what the scope should be. This is where service providers need to leverage their QSA’s expertise. Again, using my extension cord analogy (OK, so maybe it’s a little goofy but it works for me!!) UL will most likely say, “Mr. Customer, if we are going to put our logo on this product, we have to use our best judgment and examine all aspects of this extension cord to make sure that it is safe to use and won’t cause a fire that results in one or more deaths. If you are not willing to let us perform what, in our best judgment, is a thorough set of testing procedures on any aspect of your product that we deem necessary, we cannot certify your product.”

As QSAs assessing a service provider’s products and services for PCI compliance, we must be allowed to use our best judgment to determine which components of a service provider’s environment are in scope and which PCI requirements and testing procedures are applicable. It is our responsibility as QSAs to make sure that a merchant doesn’t “burn their business” as the result of making the service provider’s solution a part of their CDE, only to find out that the solution is not properly secured in the spirit of the PCI DSS.

What Security Teams Can Learn From The Department of Streets & Sanitation

As I sit here typing in beautiful Chicago, a massive blizzard is just starting to hit outside.  This storm is expected to drop between 12 and 20 inches over the next 24 hours, which could be the most snowfall in over 40 years.  Yet in spite of the weather, the citizens and city workers remain calm, confident in the fact that they know how to handle an incident like this.  A joke has been going around on Twitter about the weather today – “Other cities call this a snowpocalypse.  Chicago calls it ‘Tuesday’”.

Northern cities like Chicago have been dealing with snowstorms and snow management for decades, and have gotten pretty good at it.  Yet when cities fail at it, there can be dramatic consequences – in 1979, the incumbent mayor of Chicago, Michael Bilandic, lost to a challenger, with his poor response to a blizzard cited as one of the main reasons for his defeat.

The same crisis management practices, as well as the same negative consequences attached to failure, apply to information security organizations today.  Security teams should pay attention to what their better established, less glamorous counterparts in the Department of Streets and Sanitation do to handle a crisis like two feet of snow.

So, how do cities adequately prepare for blizzards?  The preparations can be summarized into four main points:

  • Prepare the cleanup plan in advance
  • Get as much early warning as possible
  • Communicate with the public about how to best protect themselves
  • Handle the incident as it unfolds to reduce loss of continuity

These steps are all also hallmarks of a mature security program:

  • Have an incident response plan in place in advance of a crisis
  • Utilize real-time detection methods to understand the situation
  • Ensure end users are aware of the role that they play in information security
  • Remediate security incidents with as little impact to the business as possible

An information security program that is proactive in preparing for security incidents is one that is going to be the most successful in the long run.  It will gain the appreciation of both end users and business owners, by reducing the overall impact of a security event.  Because as winter in Chicago shows us – you can’t stop a snowstorm from coming.  Security incidents are going to happen.  The test of a truly mature organization is how they react to the problem once it arrives.