Shellshock without the Shellac

A post by our exploit-herder in residence, Jason Royes

The Problem

Have you heard about Shellshock? If not, you may be living under a rock. To summarize:

If an application sets an environment variable whose name or value is derived from user input and subsequently executes bash (and possibly other shells), an attacker may be able to execute arbitrary code.


When I read the post from Robert Graham, my first thought was: “when did we begin storing function definitions in environment variables?” I scanned the section of the bash manual dedicated to environment variables and could not find anything on the topic.

I knew I was not alone after googling and finding this on Stack Overflow. Luckily, I had an old VM handy that I never update.

Here’s bash:

$ bash --version
GNU bash, version 4.2.24(1)-release (i686-pc-linux-gnu)

So, according to the Stack Overflow post, what’s actually going on is that bash stores exported functions in the environment.

$ f1
f1: command not found

Let us create a file that will define a function and export it:

$ cat
#! /bin/bash

f1() {
    echo "in f1"
}

export -f f1

Now to include it:

$ source

Voila, f1 is now defined within the shell environment.

$ env|grep -A1 f1
f1=() {  echo "in f1"
}

If you’ve already read about the Shellshock attack, the value of f1 above should look familiar.

Bash 4.2 and Exported Functions

Bash 4.2 (vulnerable) processes environment variables in initialize_shell_variables (see variables.c). What happens when an environment variable has a value that begins with “() {“? A new buffer is allocated and the variable name is concatenated with the variable’s value. This basically creates a normal bash function declaration. The concatenated string is then evaluated with parse_and_execute:

temp_string = (char *)xmalloc (3 + string_length + char_index);

strcpy (temp_string, name);
temp_string[char_index] = ' ';
strcpy (temp_string + char_index + 1, string);

parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);

Imagine an exported function named f1 with a value resembling “() { ls -l; }”. The code above combines the name and value into temp_string, resulting in “f1 () { ls -l; }”. This string is then evaluated, and a function definition now lives in shell memory.

The vulnerability arises because user input is evaluated directly with the same function used to evaluate all other bash commands. If commands are appended to the end of the function definition, e.g. “() { ls -l; }; ps”, they are executed: they fall outside the bounds of the function declaration and so are treated just as they would be in a regular bash script. Note that anything inside the function declaration should not be executed unless the function is invoked.
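The behavior above is easy to try for yourself. The following sketch uses the widely circulated Shellshock test string (not something from the original advisory) and runs bash through Python’s subprocess module; on a vulnerable bash, importing the environment also runs the trailing command.

```python
import subprocess

# Classic Shellshock probe: the environment value is a function definition
# with an extra command appended after the closing brace. A vulnerable bash
# prints "vulnerable" while importing the environment; a patched bash only
# runs the command it was asked to run.
env = {"x": "() { :;}; echo vulnerable", "PATH": "/usr/bin:/bin"}
result = subprocess.run(["bash", "-c", "echo this is a test"],
                        env=env, capture_output=True, text=True)
print(result.stdout)
```

On a patched system only “this is a test” appears on stdout (possibly with a warning on stderr about the ignored function definition).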

The construction of temp_string also means an attacker can inject through the environment variable name. For example:

$ ./
total 6868
drwxrwxr-x 12 user1 user1    4096 Feb 13 17:28 bash-4.2
-rw-rw-r--  1 user1 user1 7009201 Feb 13  2011 bash-4.2.tar.gz
-rw-rw-r--  1 user1 user1      52 Feb 13 16:19
-rw-rw-r--  1 user1 user1      49 Feb 13 16:47
-rwxrwxr-x  1 user1 user1     101 Feb 13 17:30
-rwxrwxr-x  1 user1 user1      96 Feb 13 16:58
Segmentation fault

Whoops! Bonus segfault. Here’s the script that triggered it:

#! /usr/bin/python
import os

# The *name* contains a command ("ls -l;"); bash 4.2 concatenates name
# and value before evaluating, so the injected command runs.
os.putenv('ls -l;a', '() { echo "in f2"; };')
os.system('bash -c f2')

Bash 4.3 and Exported Functions

The bash patch seems fairly concise. It adds a check to ensure the variable name contains only legal characters (thwarting injection through the name). There’s also a new flag called SEVAL_FUNCDEF. If parse_and_execute parses a command that is not a function definition while this flag is set, an error condition results.
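For intuition, here is a rough Python analogue of that name check (my sketch, not bash’s actual C code): a legal identifier is letters, digits, and underscores, not starting with a digit, which rules out names like the “ls -l;a” injection above.

```python
import re

# Sketch of a legal-identifier check in the spirit of the bash patch:
# reject any variable name that isn't a plain identifier before treating
# its value as an exported function definition.
def legal_identifier(name):
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None

print(legal_identifier("f1"))       # a normal function name
print(legal_identifier("ls -l;a"))  # the injected name from the PoC
```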

This seems to correct the issue; however, relying on the function-parsing code still feels dicey.

Perhaps there are other ways around these new defenses yet to be revealed.

MS-SQL Post-exploitation In-depth Workshop @ ToorCon 2014!

Come join Noelle Murata and me (Rob Beck) for a hands-on workshop at ToorCon 2014 in San Diego this October. It’s been a while in the making, but we’re looking forward to delivering two days of Microsoft SQL Server shenanigans, code samples, workshops, and general database nerdery in the MS-SQL environment. Registration is open, and the workshop is scheduled for October 22nd and 23rd at the San Diego Westin Emerald Plaza!

Workshop Overview:

The MS-SQL Post-exploitation In-depth workshop demonstrates the tactics an attacker can employ to maintain persistence in a Microsoft SQL Server database while harnessing the available facilities to expand their influence in an environment. Plenty of resources exist today showing methods for compromising SQL Server and SQL-dependent applications to gain access to an environment; very few cover maintaining control of a SQL instance or performing attacks against the host and environment from within the SQL service.

This course will offer attendees an understanding of the various facilities that are available for executing system level commands, scripting, and compiling code… all from inside the SQL environment, once privileged access has been acquired. Students will walk away from this two-day course with a greater understanding of:

  • MS-SQL specific functionality
  • Stored procedures
  • Extended stored procedures
  • SQL assemblies
  • SQL agent
  • SQL internals
  • Conducting attacks and assessments from inside the SQL environment
  • Methods employed for stealth inside of SQL

Upon the completion of this workshop, attendees will:

  • Be familiar with multiple facilities in the SQL Server environment for executing system commands.
  • Understand ways to execute arbitrary code and scripts from within the database.
  • Understand methods for operating with stealth in the SQL service.
  • Know ways an attacker can rootkit or backdoor the SQL service for persistence.
  • Be familiar with hooking internal SQL functions for data manipulation.
  • Harvest credentials and password hashes of the SQL server.
  • Have familiarity with the extended stored procedure API.
  • Be able to create and deploy SQL assemblies.
  • Have the ability to impersonate system and domain level users for performing escalation in the environment.

Attendee requirements for this workshop:

  • Modern laptop with wired or wireless networking capabilities.
  • Ability to use Microsoft remote desktop from their system.
  • Basic understanding of the T-SQL language and syntax.
  • Ability to follow along with coding/scripting concepts (coding experience a plus, but not required – languages include: C, C++, C#, vbscript, jscript, and powershell)
  • Ability to navigate Visual Studio and OllyDBG (previous experience a plus, but not required.)

Attendees will be provided with:

  • Hosted VMs for testing and workshop labs.
  • Training materials – presentation materials and lab examples.

Who should attend this workshop?

  • SQL administrators and security personnel.
  • Professional pen-testers and corporate security team members.
  • Incident response analysts for new methods of attack detection.
  • Forensic team members unfamiliar with SQL related attack patterns.
  • Anyone interested in furthering their understanding of SQL Server.

A Tale of Two Professionals

Phones Aren’t the Only Things That Can Get Burned

On a recent engagement I was tasked with reviewing a mobile application that provides users with disposable phone numbers. The application I was testing provided phone numbers to mobile users, permitting users to make VoIP phone calls as well as receive SMS and picture messages; I will omit the actual application name, since it has no impact on the information being disclosed and I’m not out to shame a specific developer.  As part of their service offering, you could acquire phone numbers in various countries, as well as various regions and cities in those countries, allowing for “local” phone calls and reducing costs for inbound and outbound regional calls.

During the initial setup of the application, new users are provided a free phone number to use during a three-day trial period.  The trial period provides all features of the service, including making and receiving phone calls, sending and receiving SMS messages, sending and receiving picture messages, caller identification (“Caller ID”), and voicemail.  At the conclusion of the three-day trial period, users are asked to renew the phone number or acquire a different number, using extremely short-term or long-term subscriptions.  One of the selling points of the service is permitting users to create “burner”, or disposable, phone numbers that they can use for a specific period of time and for specific purposes.

Another feature of the service is the ability to have multiple phone numbers on a single mobile device.  This allows users to have multiple phone numbers, for specific purposes, in a variety of regions around the world.  As part of my testing, I navigated the various menus, going through the process of acquiring a phone number in another country.  The most important thing to note here is that after selecting my country and region/city of choice, I was presented with a list of possible phone numbers I could acquire for that area.  Not only did this allow me to enumerate possible phone numbers for this service (at least the ones not currently in use), it indicated that the service had a finite number of available phone numbers in any area; this “feature” might be indication enough that privacy and anonymity weren’t at the forefront of the developer’s mind.  It was only after I had selected a number that I was prompted with the various pricing models available to procure the phone number for personal use.

Because of the various pricing options, as well as the trial period, I opted to put the project on hold and move on to another application so that I could allow the trial period to expire.  This would let me see the pricing models following the trial period, as well as anything else the application might want to charge me for.  I put the application in the background and went about conducting my testing on additional applications.

Fast-forward 48 hours.  I decided to check up on the previous VoIP application to determine whether there were any additional notifications for payment or warnings of trial expiration, and to begin wrapping up my testing for documentation.  I was surprised to see that the application had logged in excess of 20 missed calls and had a backlog of SMS messages from 10 or more random people.  If you’ll recall, I established that this service had a finite number of available phone numbers and that I was provided a free phone number to test during the trial period; this means the number I was given had previously been used by another user of the system.

Based solely on the contents of the SMS messages received, as well as some of the voicemails left on my trial number’s messaging service, the previous owner was also a specialized professional who was used to charging an hourly rate; let’s just say that her chosen profession was of a much more discreet and intimate nature.  I was presented with text message upon text message asking if she was available and what her hourly rate was, as well as a few much more graphic explanations of specific requests the potential clients would like performed.  What was more surprising, and traumatizing, was that some of these individuals had chosen to send naughty-gram picture messages of their previous work with this professional, personal pictures in admiration of this person, and… well, you have an imagination.

None of the individuals contacting this number had any indication that the person they were trying to contact (no pun intended) had been using a burnable phone number.  The problem was made worse for them because of the features provided by this service: as previously mentioned, the VoIP service offers Caller ID.  I was not only receiving the correspondence from this lengthy list of previous contacts, but now I had the phone numbers they were using to reach me.

A sample of the least explicit messages received.

This situation posed a risk not only to the previous owner of this phone number, by giving me access to the contacts who had reached out to her, but also to her clients and potential clients, now exposed to an unknown individual in possession of their information.  While it would be nice to assume that the individuals attempting to correspond with the previous owner of the number were also using temporary phone numbers, this isn’t a perfect world, and people rarely take the steps needed to ensure their privacy if they don’t feel that they’re at risk; after all, some amount of this sort of business is based on a level of trust and unwritten understanding between the professionals and their clients.

I’m not here to provide commentary on the nature of the previous individual’s chosen profession or hobby (to each their own), but this situation was an extreme introduction to some of the dangers of the burner phone culture some of us have come to accept.  While many of us can see the value of having a disposable phone number and messaging, easily hopping between numbers for both legitimate and illegitimate purposes, I don’t think many people have considered the repercussions of being the recipient of a disposable resource.  Even in the age of services such as Google Voice, we assume that the numbers we’re corresponding with have a reasonable time to live with the person that provided them to us.

With a minimal amount of social engineering, much more information could have been captured from these individuals.  Because their phone numbers were disclosed, and given the power of Google and other search engines, the potential for extortion by a random individual now in possession of compromising photos is also a reality.  The next time we make a phone call or send an SMS with questionable content, we have to ask ourselves: do we really know who is receiving this, or have we also been burned?

Blackhat USA Multipath TCP Tool Release & Audience Challenge

We hope everyone found something interesting in our talk today on Multipath TCP.

We’ve posted the tools and documents mentioned in the talk at:

Update: We’ve now also added the slides from the talk.

At the end we invited participants to explore MPTCP in a little more depth via a PCAP challenge.

Without further ado, here’s the PCAP: neohapsis_mptcp_challenge.pcapng

It’s a simple scenario: one MPTCP-capable machine sending data to another. The challenge is “simply” to reassemble and recover the original data. The data itself is not complex so you should be able to tell if you’re on the right track, but getting it exactly right will require some understanding of how MPTCP works.

If you think you have it, tweet us and follow us (@secvalve and @coffeetocode) and we’ll PM you to check your solution. You can also ask for questions/clarifications on twitter; use #BHMPTCP so others can follow along. Winner snags a $100 Amazon gift card!

Hints #0:

  • The latest version of Wireshark supports decoding MPTCP options (see “tcp.options.mptcp”).
  • The scapy version in the git repo is based on Nicolas Maitre’s work and supports decoding MPTCP options. It will help, although you don’t strictly need it.
  • There is an MPTCP option field that tells the receiver how a TCP packet fits into the overall logical MPTCP data flow (what it is and how it works is an exercise for the reader 🙂 )
  • It’s possible to get close with techniques that don’t fully understand MPTCP (you’ll know you’re close). However, the full solution should match exactly (we’ll use md5sum).

Depending on how people do and questions we get, we’ll update here with a few more hints tonight or tomorrow. Once we’ve got a winner, we’ll post the solution and code examples.

Update: Winners and Solution

We have some winners! Late last night @cozinuzo contacted us with a correct answer, and early this morning @darkfiberiru got it too.

The challenge was created using our fragmenter PoC tool, pushing to a netcat opened socket on an MPTCP-aware destination host:

python -n 9 --file=MPTCP.jpg --first_src_port 46548 -p 3000

The key to this exercise was to look at the mechanism that MPTCP uses to tell how a particular packet fits into the overall data flow. You can see that field in Wireshark as tcp.options.mptcp.dataseqno, or in mptcp-capable scapy as packet[TCPOption_MP].mptcp.dsn.


The mptcp-capable scapy in our mptcp-abuse git repo can easily do the reassembly across all the streams using this field.

Here’s the code (or as a Gist):

# Uses Nicolas Maitre's MPTCP-capable scapy impl, so that should be
# on the python path, or run this from a directory containing that "scapy" dir
from scapy.all import *

packets = rdpcap("pcaps/neohapsis_mptcp_challenge.pcap")
payload_packets = [p for p in packets if TCP in p
                   and p[IP].src in ("", "")
                   and TCPOption_MP in p
                   and p[TCPOption_MP].mptcp.subtype == 2
                   and Raw in p]

f = open("out.jpg", "wb")
for p in sorted(payload_packets, key=lambda p: p[TCPOption_MP].mptcp.dsn):
    f.write(p[Raw].load)
f.close()

These reassemble to create this image:


The md5sum for the image is 4aacab314ee1a7dc5d73a030067ae0f0, so you’ll know you’ve correctly put the stream back together if your file matches that.
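If you would rather check from Python than shell out to md5sum, a small helper like this (the output path is whatever you wrote your reassembled file to) does the same comparison:

```python
import hashlib

# Compute the MD5 of a file in chunks so large files don't need to fit
# in memory; compare the hex digest against the published checksum.
def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```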

Thanks to everyone who took a crack at it, discussed, and asked questions!

Rob Beck’s MS-SQL Rootkit Framework Presentation @ DefCon Skytalks 2014

SQL Gestalt: A MS-SQL Rootkit Framework will be presented by Rob “whitey” Beck (@damnit_whitey) at the DefCon Skytalks 2014 in Las Vegas, NV this year.  This talk will provide an overview of a basic framework for the creation, deployment, operation, and persistence of a MS-SQL rootkit for all versions of Microsoft SQL Server 2005 and above.


This talk illustrates the various facilities in the MS-SQL database environment for performing code execution.  Using these facilities, attendees are presented with the basis of the SQL Gestalt – A rootkit framework, utilizing various aspects of the SQL core facilities, working in conjunction to provide persistence in the database.


This talk benefits pen testers, forensic analysts, and database administrators by exposing methods and tactics that may not be commonly known or widely employed in traditional database compromises. Examples will be provided in a variety of languages including T-SQL, C#, C++, VBscript, and Powershell utilizing SQL facilities such as SQL Assemblies, the Extended Stored Procedure API, SQL Agent, and OLE Automation.  At the conclusion of this presentation a basic framework will be released with sample code to illustrate all of the functionality discussed in this talk.

Talk Agenda

The following topics will be discussed in the presentation:

  • Concept of the SQL Gestalt rootkit
  • Facilities for executable code in SQL
    • Overview
    • Advantages
    • Disadvantages
    • Examples
  • Module installation
    • Deployment
    • Execution considerations
  • Securing a native code execution point
  • Persistence in SQL
  • Advanced rootkit operations


Multipath TCP – BlackHat Briefings Teaser

Multipath TCP: Breaking Today’s Networks with Tomorrow’s Protocols is being presented at Black Hat USA this year by me (Catherine Pearce, @secvalve) and Patrick Thomas (@coffeetocode). Here is a bit of a teaser; it’s a couple of weeks out yet, but we’re really looking forward to it.

Come see us at Black Hat Briefings in South Seas AB, on Wednesday at 3:30pm.

(UPDATE 8/14: A followup post and the talk slides are now online.)

What is multipath TCP?

Multipath TCP is a backwards-compatible modification that allows a core networking protocol, TCP, to talk over multiple paths at the same time. In short, Multipath TCP decouples TCP from a specific IP address, and it also allows you to add and remove network addresses on the fly.

Multipath TCP in brief

Multipath TCP splits connection data across N different TCP subflows



Why do I care?

MPTCP changes things for security in a few key ways:

  • Breaks traffic inspection – If you’re inspecting traffic, you need to be able to correlate and reassemble it. We haven’t found a single security technology that currently does so.
  • Changes network trust models – Multipath TCP allows you to spread traffic around and reduce the inherent trust you place in any single network provider. With MPTCP it becomes much harder for a single network provider to undetectably alter or sniff your traffic unless they collaborate with the other providers you are using for that connection.
  • Creates ambiguity about incoming and outgoing connections – The protocol allows a client to tell a server that it has another address which the server may connect back to. To a firewall that doesn’t understand MPTCP, that looks like an outgoing connection.
MPTCP and Reverse connections

MPTCP can have outbound incoming connections!?



Backwards compatible

Did I mention that MPTCP is designed to be backwards compatible and runs on >= 85% of existing network infrastructure? [How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP]

Like IPv6, this is a technology that will slowly appear in network devices and can cause serious security side effects if not understood and properly managed. MPTCP affects far more than addressing though, it also fundamentally changes how TCP traffic flows over networks.

MPTCP confuses your existing approaches and tools

If you don’t understand MPTCP, things get really confusing. Take this Wireshark “Follow TCP Stream” where I follow an HTTP connection. Why does the server reply to an invalid request this way?

MPTCP Fragmentation confuses wireshark

Why does the web server reply to this garbled message? – MPTCP Confuses even tools that support it


Network flows can also become a lot more complicated. Why talk over a single network path when you can talk through all possible paths?


That’s what your non MPTCP-aware flows look like.

But, if we are able to understand it then it makes a lot more sense:


What are the implications?

Technologies are changing, and multipath technologies look like a sure thing in a decade or two. But, security isn’t keeping up with the new challenges, let alone the new technologies.

  1. I can use MPTCP to break your IDS, DLP, and many application-layer security devices today.
  2. There are security implications in multipath communications that we cannot patch our existing tools to cope with; we need to change how we do things. Right now tools can correlate flows from different points on the network, but they are incapable of handling data when part of it flows down one path and part of it flows down another.

To illustrate point 2:

What if you saw this across two subflows… Can you work out what they should be?

  • Thquicown fox jps ov the az og
  • E k brumerlyd.

Highlight the text below to see what that reassembles to

[The quick brown fox jumps over the lazy dog.]
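The puzzle above is exactly the MPTCP reassembly problem: each fragment carries a data-level sequence number saying where it sits in the logical stream, regardless of which subflow delivered it. A toy model (made-up fragments and offsets, not real MPTCP packets) makes the idea concrete:

```python
# Each tuple is (offset into the logical data stream, payload bytes).
# The fragments arrive out of order, as if on different subflows;
# sorting by offset restores the original byte stream.
fragments = [(6, b"world"), (0, b"hello "), (11, b"!")]
data = b"".join(payload for offset, payload in sorted(fragments))
print(data.decode())
```

The real challenge solution does the same thing, only the offsets come from the MPTCP data sequence number option instead of a tuple.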

Follow up with our Black Hat session as we discuss MPTCP and its effect on security in yet more detail. We may not be ready for the future, but it is fast approaching; just ask Siri.

How does your security decide what to do with a random fragment of a communication?



Cached Domain Credentials in Vista/7 (aka why full drive encryption is important)

Recently, I was conducting a security policy audit of a mid-size tech company and asked if they were using any form of disk encryption on their employees’ workstations. They were not; however, they pointed me to a policy document that required all “sensitive” files to be stored in an encrypted folder on the user’s desktop. They assumed that this was adequate protection against the files being recovered should the laptop be lost or stolen.

Unfortunately, this is not the case. Without full disk encryption (like BitLocker), sensitive system files will always be available to an attacker, and credentials can be compromised. Since Windows file encryption is based on user credentials (either local or AD), once these creds are compromised, an attacker would have full access to all “encrypted” files on the system. I will outline an attack scenario below to stress the importance of full drive encryption.



If you are not familiar, Windows has a built-in file encryption feature called Encrypting File System (EFS) that has been around since Windows 2000. If you right-click on a file or folder and go to Properties->Advanced, you can check a box called “Encrypt contents to secure data”. When this box is checked, Windows will encrypt the folder and its contents using EFS, and the folder or file will appear green in Explorer to indicate that it is protected:

Encrypted Directory


Now only that user will be able to open the file; even Administrators will be denied. Here a Domain Admin (‘God’) is attempting to open the encrypted file that was created by a normal user (‘nharpsis’):




According to Microsoft’s TechNet article on EFS, “When files are encrypted, their data is protected even if an attacker has full access to the computer’s data storage.” Unfortunately, this is not quite true. The encrypted file above (“secret.txt”) will be decrypted automatically and viewable whenever ‘nharpsis’ logs in to the machine. Therefore to view the files, an attacker only needs to compromise the ‘nharpsis’ account.



In this attack scenario, we will assume that a laptop has been lost or stolen and is powered off. There are plenty of ways to mount an online attack against Windows or extract credentials and secret keys straight from memory. Tools like mimikatz or the Volatility Framework excel at these attacks.

For a purely offline attack, we will boot from a live Kali Linux image and mount the Windows hard drive. As you can see, even though we have mounted the Windows partition and have read/write access to it, we are unable to view files encrypted with EFS:

Permission Denied - Kali

Yes you read that right. We are root and we are seeing a “Permission denied”.

Commercial forensic tools like EnCase have functionality to decrypt EFS, but even they require the username and password of the user who encrypted it. So the first step will be to recover Ned Harpsis’s credentials.


Dumping Credentials

There are numerous ways to recover or bypass local accounts on a Windows machine. SAMDUMP2 and ‘chntpw’ are included with Kali Linux and do a nice job of dumping NTLM hashes and resetting account passwords, respectively. However, in this instance, and in the instance of the company I was auditing, the machines are part of a domain, and AD credentials are used to log in.

Windows caches domain credentials locally to facilitate logging in when the Domain Controller is unreachable. This is how you can log in to your company laptop when traveling or on a different network. If any domain user, including an admin, has logged in to the machine, that user’s username and a hash of their password will be stored in one of the registry hives.

Kali Linux includes the tool ‘cachedump’ which is intended to be used just for this purpose. Cachedump is part of a larger suite of awesome Python tools called ‘creddump’ that is available in a public svn repo:

Unfortunately, creddump has not been updated in several years, and you will quickly realize when you try to run it that it does not work on Windows 7:

Cachedump Fail

This is a known issue and is discussed on the official Google Code project.

As a user pointed out, the issue persisted over to the Volatility project and an issue was raised there as well. A helpful user released a patch file for the cachedump program to work with Windows 7 and Vista.

After applying the patches and fixes I found online, as well as some minor adjustments for my own sanity, I got creddump working on my local Kali machine.

For convenience’s sake, I have forked the original Google Code project and applied the patches and adjustments. You can find the updated and working version of creddump on the Neohapsis Github:


Now that I had a working version of the program, it was just a matter of getting it on to my booted Kali instance and running it against the mounted Windows partition:

Creddump in action

Bingo! We have recovered two hashed passwords: one for ‘nharpsis’, the user who encrypted the initial file, and ‘god’, a Domain Admin who had previously logged in to the system.


Cracking the Hashes

Unlike locally stored credentials, these are not NT hashes. Instead, they are in a format known as ‘Domain Cache Credentials 2’ or ‘mscash2’, which uses PBKDF2 to derive the hashes. Unfortunately, PBKDF2 is a computation heavy function, which significantly slows down the cracking process.
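To see why PBKDF2 slows cracking, consider what each password guess costs. This is an illustration only, not a full mscash2 implementation (real mscash2 keys PBKDF2 with an MD4-based hash and salts it with the lowercased username; the inputs below are stand-ins), but the iteration count is the one mscash2 uses:

```python
import hashlib

# Each candidate password costs 10,240 HMAC-SHA1 rounds under PBKDF2,
# versus a single hash for plain NT hashes. That factor is what makes
# bruteforcing cached domain credentials so much slower.
guess = hashlib.pbkdf2_hmac("sha1", b"Welcome1!", b"username", 10240, 16)
print(guess.hex())
```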

Both John and oclHashcat support the ‘mscash2’ format. When using John, I recommend sticking to a relatively short wordlist rather than attempting a pure bruteforce.

If you want to attempt to use a large wordlist with some transformative rules or run pure bruteforce, use a GPU cracker with oclHashcat and still be prepared to wait a while.

To prove that cracking works, I used a wordlist I knew contained the plaintext passwords. Here’s John cracking the domain hashes:

Cracked with John

Note the format is “mscash2”. The Domain Admin’s password is “g0d”, and nharpsis’s password is “Welcome1!”

I also extracted the hashes and ran them on our powerful GPU cracking box here at Neohapsis. For oclHashcat, each line must be in the format ‘hash:username’, and the code for mscash2 is ‘-m 2100’:



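Massaging dump output into that ‘hash:username’ shape is a one-liner. The usernames and hash values below are invented for illustration:

```python
# Hypothetical (username, hash) pairs as you might have after dumping;
# oclHashcat's mscash2 mode (-m 2100) wants one "hash:username" per line.
dumped = [("nharpsis", "0123456789abcdef0123456789abcdef"),
          ("god", "fedcba9876543210fedcba9876543210")]
lines = ["%s:%s" % (h, u) for u, h in dumped]
print("\n".join(lines))
```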

Accessing the encrypted files

Now that we have the password for the user ‘nharpsis’, the simplest way to retrieve the encrypted file is just to boot the laptop back into Windows and log in as ‘nharpsis’. Once you are logged in, Windows kindly decrypts the files for you, and we can just open them up:




As you can see, if an attacker has physical access to the hard drive, EFS is only as strong as the user’s login password. Given this is a purely offline attack, an attacker has unlimited time to crack the password and then access the sensitive information.

So what can you do? Enforce full drive encryption. When BitLocker is enabled, everything in the drive is encrypted, including the location of the cached credentials. Yes, there are attacks against BitLocker encryption, but they are much more difficult than attacking a user’s password.

In the end, I outlined the above attack scenario to my client and recommended they amend their policy to include mandatory full drive encryption. Hopefully this straightforward scenario shows that solely relying on EFS to protect sensitive files from unauthorized access in the event of a lost or stolen device is an inadequate control.