Defeating Cross-site Scripting with Content Security Policy

If you have been following Neohapsis’ @helloarbit or @coffeetocode on Twitter, you have probably seen us tweeting quite a bit about Content Security Policy.  Content Security Policy (CSP) is an HTTP header that allows you, the developer or security engineer, to define where web applications can or cannot load content from.  By defining a strict Content Security Policy, the developers of web applications can almost completely mitigate cross-site scripting and other attacks.

CSP works by allowing a web application to declare the sources from which it expects to load scripts, allowing the client to detect and block malicious scripts injected into the application by an attacker.

Another way to think about Content Security Policy is as a source whitelist. Typically, when an end user requests a web page, the browser trusts whatever output the server delivers. CSP limits this trust model by sending a Content-Security-Policy header that allows the application to specify a whitelist of trusted (expected) sources. When the browser receives this header, it will only render or execute resources from those sources.
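For instance, a policy that trusts the application's own origin for most content plus one additional script host might look like the sketch below, written in Python. The directive names are real CSP directives; the hostname cdn.example.com is a made-up example, not a recommendation:

```python
# Assemble a Content-Security-Policy header value from a directive map.
# The sources below (including cdn.example.com) are illustrative only.
def build_csp(directives):
    return "; ".join(
        "{} {}".format(name, " ".join(sources))
        for name, sources in directives.items()
    )

policy = build_csp({
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://cdn.example.com"],
    "object-src": ["'none'"],
})
# The server would then send:  Content-Security-Policy: <policy>
print(policy)
```

A browser that honors this policy would refuse to execute a script loaded from (or injected by) any origin not on the whitelist.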

In the event that an attacker does have the ability to inject malicious content that is reflected back to the user, the injected script will not match the source whitelist and will not be executed.

The traditional mechanisms to mitigate cross-site scripting are to HTML-encode or escape output that is reflected back to a user and to perform rigorous input validation. However, due to the complexity of encoding and validation, cross-site scripting may still crop up in your website.  Think of CSP as your insurance policy in the event something malicious sneaks through your input validation and output encoding strategies.
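As a quick illustration of the output-encoding half of that traditional defense, Python's standard library provides html.escape:

```python
import html

# HTML-encode untrusted input before reflecting it back to the user, so
# the browser renders it as inert text instead of executing it as markup.
user_input = '<script>alert("xss")</script>'
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```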

Although we at Neohapsis Labs have been researching Content Security Policy for a while, we found that it’s a complicated technology that has plenty of intricacies.  After realizing that many users may run into the same issues and confusion we did, we decided to launch cspplayground.com to help educate developers and security engineers on how to use CSP as well as a way to validate your own complex policies.

CSP Playground allows you to see example code and practices that are likely to break when applying CSP policies.  You can toggle custom policies and watch in real-time how they affect the playground web application.  We also provide some sample approaches on how to modify your code so that it will play nicely with CSP.  After you have had time to tinker around, you can use the CSP Validator to build out your own CSP policy and ensure that it is in compliance with the CSP 1.1 specification.

The best way to learn CSP is by experimenting with it.  We at Neohapsis Labs have put together the cspplayground.com site for just those reasons and suggest you check it out to learn more.

Advances in Web Application and Browser Security: A Few Cool Features

Ben Toews (Head of Application Security at GitHub) and I have been doing a round of talks discussing the state of the union in web application and browser security.  There is a whole slew of new technology and standards coming out that really do make the web more secure.  I wanted to take a second and dive into a few of the proposed and accepted standards that are helping make things better:

Forcing browsers to use SSL

Companies like Google, Facebook, and Twitter are using a technology called HTTP Strict Transport Security (HSTS) to force browsers over SSL. For example, when a user types www.google.com into a browser, the request automatically goes over SSL, making the browsing user experience more secure at the transport layer and helping to mitigate man-in-the-middle attacks.  This is an awesome technology that is very easy to implement.  Simply add the following header on the web server:

Strict-Transport-Security: max-age=600000000

This instructs the browser to access the site only over SSL for the period specified by max-age (in seconds).

Mitigating cross-site scripting

Companies like Facebook and GitHub are using Content Security Policy (CSP), which severely cripples cross-site scripting (XSS). By blocking JavaScript features such as inline scripts and dynamic evals, CSP makes it much harder for attackers to perform XSS attacks. In addition, forcing developers to avoid inline scripts encourages better coding practices. Facebook has started to send CSP headers on certain requests, but they seem to be very permissive, potentially suggesting they are still testing this technology.

I’m working with Patrick Thomas on collecting a bunch of information on Content-Security Policy and we will be presenting some metrics and tools on the technology at a Chicago Security Meetup in July.  More details to follow!

Rendering browser content only as explicitly stated

Facebook, Live.com, and Gmail utilize X-Content-Type-Options: nosniff, which finally fixes an age-old problem where content type is ‘sniffed’ by the browser and rendered even when it conflicts with the explicitly stated Content-Type. For example, when a document’s content type is specified as a plain text document but contains HTML, the browser may (depending on the specific browser) ‘sniff’ the content and decide to actually render it as HTML. Think about an attack where you have a file upload that only allows text documents. You upload a text document containing HTML and JavaScript, and when a user goes to retrieve that text document the browser ‘sniffs’ the document and renders that content, executing your script in the user’s browser. The nosniff header instructs the browser not to perform any content sniffing, forcing the browser to render content only as explicitly stated.

Preventing clickjacking attacks

Clickjacking attacks take advantage of embedded content in iframes by placing elements over those frames and tricking users into clicking actions that seem innocuous, but may be performing malicious actions on the attacker’s behalf. Companies like Google, Facebook, Twitter, Paypal, Ebay, and Live.com are using the X-Frame-Options HTTP response header, which either permits or denies the rendering of a page in a frame or iframe element. Since this can block content from being embedded into other sites, it can prevent clickjacking attacks.
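The HSTS, nosniff, and X-Frame-Options headers discussed above can all be added in one place. Here is a minimal sketch as a Python WSGI middleware; the specific values (the max-age, DENY) are illustrative choices to be tuned per application, not requirements:

```python
# Minimal WSGI middleware sketch that adds the three security headers
# discussed above. The values (max-age, DENY) are examples; tune them
# for your own application.
SECURITY_HEADERS = [
    ("Strict-Transport-Security", "max-age=600000000"),
    ("X-Content-Type-Options", "nosniff"),
    ("X-Frame-Options", "DENY"),
]

def add_security_headers(app):
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            return start_response(status, list(headers) + SECURITY_HEADERS, exc_info)
        return app(environ, start_with_headers)
    return wrapped

# Demo: wrap a trivial app and inspect the headers it would send.
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"ok"]

sent = []
add_security_headers(demo_app)({}, lambda status, headers, exc_info=None: sent.extend(headers))
print(sent)
```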

This is just a subset of the technologies Ben and I will be covering at our talk at http://shakacon.org/.

Can’t Run Nessus off of Backtrack Live…No Problem!

By Scott Behrens (arbit)

We have all been there.  You boot up into Backtrack live, pull down and install Nessus and try to run a scan after installing plugins.  Your scan runs way too quickly and your report is nowhere to be found.  Being the Tux penguin that you are, you realize you have run out of ‘memory’ aka virtual hard drive space.  Your / partition shows to be 100% full and you frantically start deleting forensic software by the megabyte, but still haven’t created enough free space.  Maybe you should have picked a host that had more than 2 gigs of memory or just installed it to the desktop.  But you are on a client deadline, and you don’t have the time to get a new host or overwrite the base OS.

I have a very quick and simple fix.  This is by no means the most effective or slick way to alleviate this problem, but it takes only two commands and is very easy.


DEF CON 20 – Neohapsis New Tool BBQSQL to Make its Debut!

By Scott Behrens and Ben Toews

Ben and I have been grinding away on slides and code in preparation for our talk at DEF CON 20.  Without letting all of the cats out of the bag, I wanted to take a second to provide a little more context on our talk and research before we present our new tools at the conference.

BBQSQL is a SQL injection framework specifically designed to be hyper fast, database agnostic, easy to setup, and easy to modify.  The tool is extremely effective at exploiting a particular type of SQL injection flaw known as blind/semi-blind SQL injection.  When doing application security assessments we often uncover SQL vulnerabilities that are difficult to exploit. While current tools have an enormous amount of capability, when you can’t seem to get them to work you are out of luck.  We frequently end up writing custom scripts to help aid in the tricky data extraction, but a lot of time is invested in developing, testing and debugging these scripts.

BBQSQL helps automate the process of exploiting tricky blind SQL injection.  We developed a very easy UI to help you setup all the requirements for your particular vulnerability and provide real time configuration checking to make sure your data looks right.  On top of being easy to use, it was designed using the event driven concurrency provided by Python’s gevent.  This allows BBQSQL to run much faster than existing single/multithreaded applications.

We will be going into greater detail on the benefits of this kind of concurrency during the talk. We will also talk a bit about character frequency analysis and some ways BBQSQL uses it to extract data faster.  We will be doing a demo too, to show you how to use the UI as well as how to import and export attack configs.  Here are a few screenshots to get you excited!
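To give a feel for the character-frequency idea, here is a toy sketch in Python. A boolean "oracle" function stands in for the one-HTTP-request-per-guess check a real blind SQL injection performs, and candidate characters are tried in an assumed frequency order so common letters are found in fewer requests. This illustrates the concept only; it is not BBQSQL's actual implementation:

```python
import string

# Toy sketch of boolean-based blind extraction: guess each character of a
# secret through a true/false oracle, trying characters in an assumed
# frequency order first so common letters cost fewer "requests".
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz" + string.ascii_uppercase + string.digits + "_"

def extract(oracle, length):
    result = ""
    for pos in range(length):
        for ch in FREQ_ORDER:
            # In a real attack, this check would be one HTTP request whose
            # response differs depending on whether the condition is true.
            if oracle(pos, ch):
                result += ch
                break
    return result

secret = "users_table"
oracle = lambda pos, ch: secret[pos] == ch
print(extract(oracle, len(secret)))  # users_table
```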

BBQSQL User Interface

BBQSQL Performing Blind SQL Injection

If you come see the talk, we would love to hear your thoughts!

“The Noob Within” Good Sites with Bad Plugins

By Scott Behrens

I was recently on a blackbox assessment of a pretty solid application.  One thing that can get glazed over when developing a web application is the security of third-party plugins or frameworks.  During the assessment I identified a plugin that seemed to be installed but not really enabled.  It appeared to be SQL injectable but had nothing in the database.  No problem!  I found a method that allowed me to enter data into the database and then used another function to do Boolean-based SQL injection against it.  This issue was easy to identify because the plugin developer stated in a comment that the code was vulnerable.  I just did a Google search for the plugin name and read through the source code.  Although slightly redacted (to protect the plugin developer while we disclose the finding), the comment basically stated that “request variables have not been escaped and may be vulnerable to SQL injection”.

What’s the takeaway (outside of a few aspirin)?  Don’t tell an attacker how to attack your application, security-review third-party plugins that may never have been assessed (especially small GitHub projects like the one above), and use prepared statements!

Facebook Applications Have Nagging Vulnerabilities

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook’s application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, now Facebook is being utilized as a platform by major companies to foster interaction with their customers in a variety of forms such as sweepstakes, promotions, shopping, and more.

And why not?  We’ve all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1)  Additionally, Facebook is seen as a treasure trove of valuable data accessible to anyone who can get enough “Likes” on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been requested to help clients assess these applications and help determine what type of risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small in size (2-4 dynamic pages).  The functionality contained in these applications ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that on average the applications assessed had vulnerabilities in 2.5 vulnerability classes (e.g., Cross-Site Scripting or SQL Injection), and none of the applications were completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group of applications included Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerabilities classes will be discussed below, along with how the social networking aspect of the applications affects their potential impact.

Facebook applications suffer the most from Cross-Site Scripting. This type of vulnerability was identified in 46% of the applications sampled.  This is not surprising, since this age-old problem still creeps into many corporate and personal applications today.  An application discovered to be vulnerable to XSS could be used to attempt browser-based exploits or to steal session cookies (but only in the context of the application’s domain).

These types of applications are generally framed inline [inline framing, or iframing, is a common HTML technique for framing media content] on a Facebook page from the developer’s own servers/domain. This alleviates some of the risk to the user’s Facebook account, since the JavaScript can’t access Facebook’s session cookies.  And even if it could, Facebook uses HttpOnly flags to prevent JavaScript from accessing session cookie values.  But we have found that companies have a tendency to reuse the same domain name repeatedly for these applications, since the real URL is generally never visible to the end user. This means that if one application has an XSS vulnerability, it could present a risk to any other applications hosted at the same domain.

When third-party developers enter the picture all this becomes even more of a concern, since two clients’ applications may be sharing the same domain and thus be in some ways reliant on the security of the other client’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is prohibitively difficult, the truth is it’s relatively simple.  Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies.  About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application under assessment that only used SSL when submitting login information.  On an unprotected Wi-Fi network, as soon as the application sent any information over HTTP we had valid session cookies, which were easily replayed to compromise the victim’s session.

Once again, the impact of this finding really depends on the functionality of the application, but the wide variety of applications on Facebook does provide an interesting and varied landscape for the attacker to choose from.  We only flagged this vulnerability under specific circumstances where either the application cookies were somehow important (for example, being used to identify a logged-in session) or the application included functionality where sensitive data (such as PII or credit card data) was transmitted.

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it’s generally not considered one of the most commonly identified vulnerabilities across all web applications. Nevertheless, 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.).

We found that many of the applications we assessed have their file upload functionality implemented in an insecure way.  Most of the applications did not check content type headers or even file extensions.  Although none of the vulnerabilities discovered led to command injection flaws, almost every vulnerability exploited allowed the attacker to upload JavaScript, HTML or other potentially malicious files such as PDF and executables.  Depending on the domain name affected by this vulnerability, this flaw would aid in the attacker’s social engineering effort as the attacker now has malicious files on a trusted domain.
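For contrast, even a minimal server-side allow-list would have blocked most of these uploads. Here is a sketch in Python, with illustrative extension and content-type lists of my own choosing; real code should also inspect the file's actual bytes, not just the headers the client sends:

```python
import os

# Minimal sketch of the checks the applications above were missing: an
# extension allow-list plus a content-type allow-list. The lists are
# illustrative. Note that both values are attacker-controlled, so a
# robust implementation should also verify the file's magic bytes.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}
ALLOWED_CONTENT_TYPES = {"image/jpeg", "image/png", "image/gif"}

def upload_allowed(filename, content_type):
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS and content_type in ALLOWED_CONTENT_TYPES

print(upload_allowed("avatar.png", "image/png"))  # True
print(upload_allowed("shell.php", "text/html"))   # False
print(upload_allowed("evil.html", "image/png"))   # False
```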

Our assessment also identified a wide range of other types of vulnerabilities. For example, we found several of these applications to be utilizing publicly available admin interfaces with guessable credentials. Furthermore, at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configurations were also a frequent problem, with unnecessarily exposed services and insecure configurations being repeatedly identified.

Finally, we also found that many of these web applications had some interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest. If it is possible for a malicious user to game the contest (for example by cheating at a social game and placing a fake high score) this could reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social networking sites is increasing, we’ve found companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present a risk and should be thoroughly examined just like traditional standalone web applications.

NeoPI in the Wild

By Scott Behrens

NeoPI was a project developed by Ben Hagen and me to aid in the detection of obfuscated and encrypted webshells.  I recently came across an article about the Webacoo shell and a rewrite of this PHP backdoor to avoid detection by NeoPI.

Webacoo’s developer used a few interesting techniques to avoid detection.  The first technique was to avoid signature based detection by using the function ‘strrev’:

$b = strrev("edoced_4"."6esab"); // $b is now "base64_decode"

This bypasses our traditional signature-based detection and also lends itself to a few other techniques for bypassing signature-based detection.  Another webshell surfaced after NeoPI’s release that uses similar techniques.  Another example could be the following (as seen in https://elrincondeseth.Wordpress.com/2011/08/17/bypasseando-neopi/):

$b = 'bas'.'e64'.'_de'.'code'; // again, $b is "base64_decode"

We can see that just by breaking up the word, the risk of detection is highly mitigated.  As I suggested at B-Sides, signature-based detection is complementary to the tests in NeoPI but, by itself, ineffective.  The methods described above completely thwart this test’s effectiveness.

But one thing these techniques must do at some point is actually eval the code.  Webacoo for example uses the following:

eval($b(str_replace(" ","","a W Y o a X …SNIP

By developing a regex that looks for eval and a variable holding the global function, we can flag this file as malicious.  After running this test against a WordPress server with Webacoo’s shell, I observed the following:
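A sketch of such a check in Python; the exact pattern NeoPI uses may differ, but this is the general shape of "eval called on a function held in a variable":

```python
import re

# Flag PHP source where eval() is applied to the result of a function
# stored in a variable, e.g. eval($b(...)). Legitimate code rarely does
# this; obfuscated webshells like Webacoo do.
SUSPICIOUS_EVAL = re.compile(r"eval\s*\(\s*\$\w+\s*\(")

sample = 'eval($b(str_replace(" ","","aWYoaX...")));'
print(bool(SUSPICIOUS_EVAL.search(sample)))  # True
```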

Figure 1 – Webacoo identified as suspicious in eval test

NeoPI was able to detect the function and flagged the file as malicious.  This particular use of eval isn’t very common and I have really only seen it used in malware.  That being said, functions.php was also flagged, so I imagine this test can still have many false positives and should be used to help aid in manual investigation of the files identified.

Another tweak Webacoo’s developer made was to insert spaces between each character of a base64-encoded string.  The function str_replace() is called to remove each space before the code is base64-decoded and eval’d.

In order to thwart this particular obfuscation technique, I went ahead and modified the entropy function to strip spaces from the data being analyzed. The screenshot below shows a scan against 1752 PHP files in WordPress; the entropy test now flags webacoo.php as potentially malicious. This increased NeoPI’s effectiveness at detecting webacoo.php but is more of a stopgap solution, as the attacker can craft other junk characters to lower the shell’s entropy and index of coincidence. Some additional thought and research is needed on potentially looking for these complicated search-and-replace functions to determine whether the data being obfuscated is malicious.
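A simplified sketch of that entropy test in Python; NeoPI's real scoring is more involved, but the whitespace-stripping idea is the same:

```python
import math

# Shannon entropy of a string, stripping whitespace first so that
# space-padded base64 (as used by Webacoo) can't hide its high entropy.
# A simplified sketch of the approach, not NeoPI's exact scoring.
def entropy(data):
    data = "".join(data.split())  # drop spaces, tabs, newlines
    if not data:
        return 0.0
    counts = {c: data.count(c) for c in set(data)}
    return -sum((n / len(data)) * math.log2(n / len(data)) for n in counts.values())

padded = " ".join("aWYoaXNzZXQoJF9QT1NU")  # base64 with spaces between characters
print(entropy(padded) > entropy("aaaaaaaaaaaaaaaaaaaa"))  # True
```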

Figure 2 – Test results after modifying entropy code results in higher entropy of webacoo.php

The latest version of the code can be checked out at http://github.com/Neohapsis/NeoPI which includes these enhancements.

As for improving the other tests’ effectiveness, I am looking into the possibility of identifying base64 encodings without looking for the function name.  This technique may be helpful by building a ratio of upper- and lowercase characters and seeing if there is a trend in files that use obfuscation.
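One possible shape for that heuristic, sketched in Python. The scoring here is my own illustrative guess, not an implemented NeoPI test: base64 blobs tend to mix upper- and lowercase letters far more evenly than natural-language text or ordinary source code.

```python
# Score how evenly upper- and lowercase letters are mixed:
# 0.0 = all one case (typical prose/code), 1.0 = perfectly even mix
# (more typical of base64-encoded data). Thresholds would need tuning.
def case_mix_score(text):
    upper = sum(1 for c in text if c.isupper())
    lower = sum(1 for c in text if c.islower())
    total = upper + lower
    if total == 0:
        return 0.0
    return 1.0 - abs(upper - lower) / total

print(case_mix_score("aWYoaXNzZXQoJF9QT1NUWydjbWQnXSkp"))  # high (even mix)
print(case_mix_score("this is ordinary english text"))     # 0.0
```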

If anyone has interesting techniques for defeating NeoPI please respond to this post and I’ll look at adding more detection mechanisms to this tool!

Bad Plugins, Vulnerabilities, and Facebook for a Triage of Social Engineering Win

Over the course of the last few months I have worked with clients that are tightly intertwined with Facebook through the use of third party plugins.  These plugins are used by the clients’ customer base for sharing links on their walls, entering promotions, and extending the functionality of the Facebook experience.   These third party applications are just as vulnerable as any other web application, but they have a different platform than a traditional web application, as they are tied directly into Facebook.

This leads to a few interesting opportunities for the attacker who may discover a flaw in one of these applications.  For one, there is an inherent trust in the applications on a Facebook page.  When a user on Facebook adds an application to their profile and it is a company that they view favorably, the thought of security might not cross their mind.  An attacker can use this to their advantage, especially in the context of social engineering, to potentially exploit a weakness in this plugin and have a higher success rate of exploitation.   In addition, many websites make use of Facebook’s share.php function, which parses a website and allows a user to share a link on their wall to the material.   This sharing function can also uniquely be exploited in the event that the third party plugin or site has an open redirect vulnerability.

Open redirection is a vulnerability in which a user is redirected from a seemingly trusted site to an untrusted site.  As a client-side attack vector, Facebook is an excellent medium for conducting this type of attack.  On a recent engagement, I came across a mobile website that was tightly integrated into Facebook.  Products and services offered by this company could be shared to Facebook; the non-mobile site had 50,000 or so “likes” on Facebook and even had an associated plugin.  The mobile site was vulnerable to open redirection on every single request: the website took a base64-encoded URL of the non-mobile site and redirected every request to that URL (reformatted with some JavaScript).  I pulled the HTML and stylesheets for the product I was interested in sharing, base64-encoded the URL of my malicious webserver hosting this content, and let the Facebook share.php function parse the site.  The link pasted to Facebook looked identical to the original and also contained the URL of the trusted site.  After a user clicks the product link shared on Facebook, they are redirected to my site, and client-side JavaScript exploits are run.
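The redirect pattern can be sketched in Python; the parameter name u and the hostnames below are made up for illustration, not taken from the client's site:

```python
import base64

# Sketch of the pattern described above: the mobile site took a
# base64-encoded URL in a request parameter and redirected every
# request to it, with no check that the target was its own domain.
def build_redirect_link(mobile_site, target_url):
    token = base64.b64encode(target_url.encode()).decode()
    return "{}?u={}".format(mobile_site, token)

def decode_redirect(token):
    # What the vulnerable site effectively did before redirecting.
    return base64.b64decode(token).decode()

link = build_redirect_link("https://m.example.com/view",
                           "https://attacker.example.net/clone")
print(link)  # looks like a trusted m.example.com URL
```

The fix is equally simple to state: validate the decoded URL against an allow-list of your own domains before redirecting.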

Why is this particularly interesting?  I could send the same link to users over email.  The interesting part is that the link looks legitimate and Facebook even parses the link’s content to add to its legitimacy.  In addition, if I have already built rapport with people who like this particular client’s products and services, this also assists in the potential effectiveness of this attack.

So to help mitigate this risk, ensure your third-party plugins are assessed to the same degree as your web applications.  If you are a company that allows employees to use Facebook, ensure users are educated on the risks of using Facebook and the plugins that tie into the site.

Facebook in the last few months has really ramped up its security efforts by offering a Bug Bounty program.  The program adheres to the principle of responsible disclosure and has been relatively successful, as numerous bugs have been reported and fixed.  Unfortunately, third-party plugins are excluded from this program, and since tens of thousands of third-party applications integrate with Facebook, this presents many opportunities for the curious attacker.  It would be nice to see Facebook allow third-party developers to opt into this bug bounty program, but in the meantime it is a step in the right direction.

Webshell Recap

After Ben Hagen and I gave our talk at B-Sides Chicago, I wanted to circle back and recap some of the trends in webshells, new ideas, and reflections on this class of malware.  We presented a tool called NeoPI and demoed its effectiveness in detecting webshells.

NeoPI is a Python script that uses a variety of statistical methods to detect obfuscated and encrypted content within text and script files. The intended purpose of NeoPI is to aid in the identification of hidden web shell code. The development focus of NeoPI was creating a tool that could be used in conjunction with other established detection methods such as Linux Malware Detect or traditional signature/keyword based searches.  NeoPI is platform independent and can be run on any system with Python 2.6 installed.   NeoPI recursively scans through the file system from a base directory and will rank files based on the results of a number of tests. The ranking helps identify, with a higher probability, which files may be encrypted web shells. It also presents a “general” score derived from file rankings within the individual tests.  You can pull a copy of the code from https://github.com/Neohapsis/NeoPI.

There hasn’t been much of a change in the strange obfuscation techniques that some malware developers are using, but more security folks are aware of the problem.  One tool that has since been released that aids in the detection of these malicious files is called PHP Shell Detector.  This app’s primary focus is on the detection of PHP files, looking for suspicious calls such as eval/system/etc.  It then compares against a collection of fingerprints of common webshells.  This shares a lot of the same functionality as Linux Malware Detect, but has a pretty nice UI.  Unfortunately custom shells using other languages may be able to bypass the detection method.

Elena Kropochkina and Joffrey Czarny presented WebShells: A Framework for Penetration Testing at Hack in the Box Amsterdam earlier this year.  They presented a new platform that is language independent, resistant to unauthorized third-party access, and not detectable by AV/IPS/WAF.  I haven’t been able to download the slides for the talk (a web server issue on the conference’s materials page), but I am interested to see how NeoPI will hold up against some of the new techniques they are claiming in their proof of concept.

HT shells is an interesting project that contains webshells and other attacks against .htaccess files.  The shell first makes the .htaccess file accessible over the web, and then another configuration setting makes the .htaccess file interpreted as PHP.  As NeoPI currently does not check .htaccess files when using auto-regex, we will have to make a change to start looking at .htaccess files for potential malware.

Another interesting article I came across was from Rahul Sasi on the effectiveness of antivirus in detecting webshells.  He went through the arduous effort of testing a variety of webshells against a variety of antivirus products.  It is no surprise that antivirus fails to detect most webshells, especially ones that have been customized.  This again shows the importance of alternative mechanisms for detecting webshells.

I still strongly feel that a combination of tools and techniques is the most effective way to detect webshells, with a focus on analyzing the specific malware for known oddities and less of an emphasis on signatures.  This is mainly due to how easy it is to write a shell from scratch or modify an existing shell.  Even more important is actually assessing the applications that are being compromised with webshells, as webshell detection is a reactive control, and preventing the compromise altogether is obviously preferable.

The Small, Strategic, and Utterly Transparent Attack

What is the attacker looking for?

Maybe the attacker is looking for one folder, one credit card number, or one social security number.  Maybe the attacker finds a SQL injection vulnerability on a website and steals a few things slowly so as not to trip an IPS or throw too many queries at the database system.  The attacker is not in a rush, but just wants to buy some TVs and some pants from Old Navy.  Stealing 40,000 credit card numbers would surely have a larger system impact and a higher chance of detection than stealing 10, or stealing the 40,000 over the course of a year.  Maybe the attacker is looking for a few small files in the legal folder on a file share.  I like to think of this mentality as the slow and patient attacker: the attacker who is interested in the whole pie, but just eats a small piece of the crust and works his way to the center… slowly.

As security experts, we can imagine the thought process behind an attacker and the methodologies used to effectively conduct targeted attacks.  GSM-based ATM skimmers are a good example of a slow and strategic attack.  An attacker places the skimmer on the ATM, and 10 or 15 cards may be swiped in the course of a few days.  Even if the device is then detected, the attacker can cash out 20,000 to 25,000 Euros if the device is in Europe.  This isn’t an extremely large-scale attack, but it is targeted and effective.  Brian Krebs has a fantastic post on this subject, as part of his ongoing series of ATM skimmer research.

Start by imagining an attacker looking to use malware to reach an objective.  We imagine that an attacker writes malicious code and continuously tweaks it until it bypasses antivirus.  But an attacker doesn’t need to write code that bypasses all antivirus software, assuming the attack is targeted at a specific company.  The attacker only needs to footprint the company’s antivirus infrastructure (perhaps by some crafty social engineering).  The attacker determines that the company uses Antivirus Software A, works with VirusTotal to get past the possibility of detection, and then attacks the company.

A good example of a “low and slow” attack is Albert Gonzalez and the Heartland breach.  Gonzalez and his team slowly penetrated each defense in the TJX network (including Wi-Fi, SQL injection, etc.) and used custom malware on several corporate systems to carry out attacks that allowed him to steal information.  He did this slowly over the course of a year. Depending on the motivations of the attacker, compromising one PC may be enough to fulfill his or her goals, or in Gonzalez’s case, a few servers.  Or maybe there is a bit more sophistication and the attack needs to spread between machines or perform command-and-control-like operations.  Regardless, if the attacker is smart, he or she will know that there is a limited amount of time before the virus is detected, if it’s noisy, for example by means of anomaly detection such as network traffic patterns.

Web applications are a common avenue for attackers to gain access into a system.  Targeting web application vulnerabilities seems to be significantly more successful than DDoS attacks such as those the folks from Anonymous are conducting.  Disrupting a service is great, but to an attacker, stealing something of value is greater.  An attacker performing a targeted attack can go in and quickly and strategically steal some data, maybe just a few credit card numbers or a confidential folder share, and leave a backdoor to slowly elicit more information.  Detecting his presence and determining what he did are more difficult, since the attack was strategically targeted.  In addition, if an attacker is trying to leave a backdoor to get back in, webshells make the job easier, since all the command and control will go through common ports (443/80).  A smart attacker will even encrypt or hide his or her shell in some ASP page on a web server: they gain access to a web server, do some damage, and leave a small shell behind, obfuscated and encrypted, in some directory masked by the presence of thousands of other web files.  And the attacker can write a backdoor from scratch to ensure that the payload isn’t picked up by an IPS or a virus scanner.

What do all these things have in common?  These attackers have targeted goals and the patience to do stuff right.   What can we do to protect our assets?

Similar Controls Slightly Adjusted for the Strategic Bad Guys

So how do we detect a smarter, more patient attacker?  Custom malware and webshells are inevitable.  Any savvy PHP or C developer can write malicious code that won’t be detected.  The question is, as security experts, what controls do we put in place to prevent the inevitable?  The solution is a combination of prevention, limitation of exposure, and security education and training.  If Joe in HR is tricked into downloading my malicious PDF, and I am able to copy the HR folder to my external server and then get out, how will I know this happened?  The attacker may have left an audit trail, but didn’t spend a whole lot of time doing bad stuff.  If my web applications are not going through security assessments, how can an organization be confident that an attacker won’t find an exploit and sit patiently on the webserver collecting information over the course of a year?  Assessing web applications and remediating any identified security issues will make an attacker’s job more difficult.  In addition, limiting the exposure of data by keeping authorization under control will help mitigate data leakage to unauthorized users.  And security training for employees is a must, as oftentimes they are the ones who open the malicious PDF.

Proactive planning, segmentation, and better forensics are also necessary to help reduce the damage of this type of attack.  By planning out where the data is and how important it is, segmentation and controls can be put into place to limit the exposure of a data compromise.  We need to take security controls more seriously, come to realize that an attacker may already be in our organization, and plan for what we will do to mitigate that risk.